Tue 19 Aug 2008 07:14:47 PM UTC, original submission:
I'm not certain whether this is a bug or intended behavior, but I consider it "improper behavior."
Steps:
1) Create an AFR setup. Mine is server-side, using 3 copies (a rough sketch of the volfiles is included below the steps).
2) Mount the AFR volume using a client; works as expected.
3) Copy a file ("test.txt") onto the GlusterFS mount point on the client. The file is created 3 times, as expected.
4) Shut down glusterfsd on Server0. Failover using RR DNS works as expected on the client. SSH to Server0 and remove test.txt from the underlying filesystem (in my case, ext3).
5) File still exists on Servers 1 & 2, and is viewable by the client
6) Restart glusterfsd on Server0; it rejoins the cluster.
7) test.txt is not recreated on Server0
8) Go back to the client and do a `cat test.txt`; it works as expected. test.txt is still not recreated on Server0.
9) Server0 shows the following:
2008-08-19 13:06:21 D [inode.c:577:__create_inode] localMountLocks/inode: create inode(2)
2008-08-19 13:06:21 D [inode.c:367:__active_inode] localMountLocks/inode: activating inode(2), lru=0/1024
2008-08-19 13:06:21 D [inode.c:367:__active_inode] localMountLocks/inode: activating inode(2), lru=0/1024
2008-08-19 13:06:23 D [inode.c:367:__active_inode] localMountLocks/inode: activating inode(2), lru=0/1024
10) Force the client to connect to Server0 directly instead of via the RR DNS entry; the client does not show test.txt.
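For reference, the configuration is roughly along the following lines. This is only a sketch from memory, not my exact volfiles: the volume names, hostnames, export path, and RR DNS name are placeholders, and the exact option names (e.g. auth.addr vs. auth.ip, tcp/client vs. tcp) should be checked against the GlusterFS version in use.

# --- Server0 (glusterfsd.vol), sketch; Servers 1 and 2 are analogous ---
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/posix-locks
  subvolumes posix
end-volume

# protocol/client volumes pointing at the plain (non-AFR) bricks
# exported by the other two servers
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1.example.com
  option remote-subvolume locks
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2.example.com
  option remote-subvolume locks
end-volume

# server-side replication: the local brick plus the two remote bricks
volume afr
  type cluster/afr
  subvolumes locks remote1 remote2
end-volume

# export both the AFR volume (for clients) and the plain brick
# (for the other servers' AFR translators)
volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.afr.allow *
  option auth.addr.locks.allow *
  subvolumes afr locks
end-volume

# --- Client (glusterfs.vol), sketch ---
# the client just mounts the AFR volume exported by whichever server
# the RR DNS name resolves to
volume mount
  type protocol/client
  option transport-type tcp/client
  option remote-host gluster.example.com
  option remote-subvolume afr
end-volume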
The result is that test.txt is unavailable to roughly 1/3 of the clients that connect. I view this as a bug because, in the event of a crash, a file like test.txt could end up cleared or unlinked on one of the backend filesystems; we would expect the gluster backend to re-replicate the file correctly, so we would not have to worry about that case.
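For what it's worth, my understanding of the AFR documentation is that self-heal is supposed to be triggered when a file is looked up or opened through the mount point, so the usual way to force a re-sync is to walk the mount and read a byte of every file, something like the following (the mount point is just an example path):

find /mnt/glusterfs -type f -exec head -c1 '{}' \; > /dev/null

As step 8 above shows, though, even reading test.txt directly through the mount did not recreate the missing copy on Server0.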
Thanks!
Wes Deviers/aka "Yhetti"