Sat 02 May 2009 12:12:23 AM UTC, original submission:
This appears to be related to bug #26402. I'm writing a 1GB file full of random data to a stripe of 4 idle machines, reading it back, and seeing how they perform. I've discovered that I really have to pump up the block size to get the performance I'd expect without any of the performance translators:
server.vol:
volume posix
  type storage/posix
  option directory /tmp/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume io-threads
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.io-threads.allow *
  subvolumes io-threads
end-volume
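Each of the four servers loads the same volfile; roughly, the brick process gets started with something along these lines (the volfile path is just where I happen to keep it):

# start the brick process with the server volfile above
glusterfsd -f /etc/glusterfs/server.vol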
client.vol:
volume machine01
  type protocol/client
  option transport-type tcp
  option remote-host machine01
  option remote-subvolume io-threads
end-volume

volume machine02
  type protocol/client
  option transport-type tcp
  option remote-host machine02
  option remote-subvolume io-threads
end-volume

volume machine03
  type protocol/client
  option transport-type tcp
  option remote-host machine03
  option remote-subvolume io-threads
end-volume

volume machine04
  type protocol/client
  option transport-type tcp
  option remote-host machine04
  option remote-subvolume io-threads
end-volume

volume stripe
  type cluster/stripe
  option block-size *:512KB
  subvolumes machine01 machine02 machine03 machine04
end-volume
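The client volfile then gets mounted in the usual way, something like this (paths are just my local choices):

# mount the striped volume via FUSE using the client volfile above
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs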
My test is just to stream the 1GB file to and from gluster with this:
rm /mnt/glusterfs/giant-1gb
cat /tmp/gluster-data/giant-1gb | pv > /mnt/glusterfs/giant-1gb
sleep 5
cat /mnt/glusterfs/giant-1gb | pv > /dev/null
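For reference, the 1GB source file is just random data; something along these lines would reproduce it (the exact dd parameters are illustrative, not copied from my run):

# build a ~1GB file of random data to stream through the stripe
mkdir -p /tmp/gluster-data
dd if=/dev/urandom of=/tmp/gluster-data/giant-1gb bs=1M count=1024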
Here are the results I had without the performance translators:
| block-size | write | read |
| 64MB | 64MB/s | 113MB/s |
| 32MB | 64MB/s | 113MB/s |
| 24MB | 65MB/s | 113MB/s |
| 20MB | 64MB/s | 113MB/s |
| 16MB | 64MB/s | 105MB/s |
| 8MB | 62MB/s | 73MB/s |
| 4MB | 62MB/s | 55MB/s |
| 1MB | 49MB/s | 32MB/s |
| 512KB | 54MB/s | 38MB/s |
| 256KB | 51MB/s | 45MB/s |
| 128KB | 48MB/s | 40MB/s |
While there's some noise in the results, I was able to hit this range of bandwidth repeatably. So read performance drops off somewhere around the 8-16MB block size, and the writes drop off later (at smaller block sizes), since they're already held up by the slow disks.
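(The disk-bound guess for the writes is easy enough to sanity-check on one of the servers with a direct-I/O dd; the path below is only a placeholder and should point at whatever filesystem actually backs the brick directory:)

# rough raw sequential-write baseline for one brick, bypassing the page cache
dd if=/dev/zero of=/path/to/brick-fs/ddtest bs=1M count=1024 oflag=direct
rm /path/to/brick-fs/ddtest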
With read-ahead and write-behind (default options) in the client.vol file:
volume read-ahead
  type performance/read-ahead
  subvolumes stripe
end-volume

volume write-behind
  type performance/write-behind
  subvolumes read-ahead
end-volume
I'm getting:
| block-size | write | read |
| 64MB | 108MB/s | 113MB/s |
| 32MB | 106MB/s | 105MB/s |
| 16MB | 77MB/s | 87MB/s |
| 8MB | 89MB/s | 88MB/s |
| 4MB | 74MB/s | 105MB/s |
| 1MB | 63MB/s | 68MB/s |
| 512KB | 67MB/s | 55MB/s |
| 256KB | 61MB/s | 42MB/s |
| 128KB | 48MB/s | 56MB/s |
The overall performance at the smaller block sizes is better, but it appeared to be a bit noisier. The test results weren't as repeatable as before; I'm guessing the caches are not being used consistently.
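For anyone repeating this, flushing the page cache between runs should cut down on that noise, e.g.:

# make sure reads actually hit the network and disks rather than local cache
sync
echo 3 > /proc/sys/vm/drop_caches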
Is this expected behavior?