Fri 01 May 2009 11:19:00 PM UTC, comment #2:
I believe it was with just the default arguments, which would be a stripe-size of 128KB. How can I check what the write-chunk-size is? (See the note after my client.vol below.) Here's my server.vol file:
volume posix
  type storage/posix
  option directory /tmp/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume io-threads
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.io-threads.allow *
  subvolumes io-threads
end-volume
and my client.vol file:
volume machine01
  type protocol/client
  option transport-type tcp
  option remote-host machine01
  option remote-subvolume io-threads
end-volume

volume stripe
  type cluster/stripe
  subvolumes machine01
end-volume
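On the stripe-size question above: as far as I know the chunk size can be pinned explicitly on the cluster/stripe translator instead of relying on the default. A minimal sketch, assuming the block-size option (the exact value syntax may differ between releases; 128KB here just mirrors the presumed default):

volume stripe
  type cluster/stripe
  # assumed option name; 128KB mirrors the presumed default stripe chunk size
  option block-size 128KB
  subvolumes machine01
end-volume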
If I add a write-behind, the writes go up to 113MB/s:
volume write-behind
  type performance/write-behind
  option cache-size 1MB
  # subvolumes stripe
  subvolumes read-ahead
end-volume
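For context, a minimal sketch of how that snippet chains onto the stripe volume, assuming a plain performance/read-ahead translator sits between the stripe and the write-behind (read-ahead options left at their defaults):

volume read-ahead
  type performance/read-ahead
  subvolumes stripe
end-volume

volume write-behind
  type performance/write-behind
  option cache-size 1MB
  subvolumes read-ahead
end-volume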
I've also compared different block sizes and their read/write performance, without any of the client-side performance translators:
| block size | write  | read    |
| 1GB        | 62MB/s | 113MB/s |
| 128MB      | 62MB/s | 113MB/s |
| 64MB       | 64MB/s | 113MB/s |
| 32MB       | 60MB/s | 113MB/s |
| 16MB       | 55MB/s | 113MB/s |
| 8MB        | 50MB/s | 113MB/s |
| 4MB        | 45MB/s | 113MB/s |
| 1MB        | 23MB/s | 113MB/s |
| 512KB      | 12MB/s | 113MB/s |
| 256KB      | 8MB/s  | 113MB/s |
| 128KB      | 4MB/s  | 113MB/s |
| 64KB       | 2MB/s  | 113MB/s |
| default    | 4MB/s  | 113MB/s |
So it appears that the write performance really starts to fall off below a block size of 16-32MB. I'm guessing that's still much larger than the write-chunk-size, and it would correlate with some other performance problems I've seen with a stripe of 4 machines. There, the overhead of striping limits the maximum read throughput a client can achieve unless the stripe-size is greater than 16MB. I'll run some numbers and file another bug about that.
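For reference, the 4-machine stripe I'll be testing would look roughly like this. This is a sketch only, with hypothetical hostnames, assuming the same block-size option as above and a value chosen to sit above the 16-32MB knee in the table:

# machine02..machine04 are protocol/client volumes defined like machine01 above
volume stripe
  type cluster/stripe
  # hypothetical value, above the 16-32MB knee seen in the table
  option block-size 32MB
  subvolumes machine01 machine02 machine03 machine04
end-volume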