bug #23257: "EOF from peer" between 32bit client and 64bit server

Submitted by:  None
Submitted on:  Thu 15 May 2008 09:31:20 AM UTC  
Votes:  100  
 
Category: GlusterFS
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Improper behaviour
Status: Fixed
Privacy: Public
Assigned to: None
Originator Name: Amar Tumballi
Originator Email: -unavailable-
Open/Closed: Closed
Release:
Operating System: GNU/Linux
Reproducibility: None


Tue 23 Jun 2009 01:32:31 PM UTC, comment #7:

The same behaviour is not seen on glusterfs-2.0.

--
Gowda

Basavanagowda Kanur <gowda>
Project Member
Mon 28 Jul 2008 05:29:34 PM UTC, comment #6:

Amar,
Thanks for looking into this. I feel like the server acting as both brick and client is the crux of the problem. Anyway, I noticed that gluster-1.3.10 came out and tried to update one of my clients, but received a notice that the server had to be updated too. I am about to upgrade our whole system and would like to know if there are any gotchas to watch out for.

We are currently using client-side AFR with posix-locks. We are not using Unify yet, as we only have two bricks, which are mirrored with AFR.
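
A minimal sketch of what such a two-brick, client-side AFR setup with posix-locks typically looks like in 1.3.x-style spec files; the hostnames, paths and volume names below are placeholders rather than the actual configuration from this report, and the features/posix-locks translator name follows the 1.3 examples:

# Sketch only. On each server: posix-locks stacked on the posix brick,
# then exported over TCP.
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume brick
  type features/posix-locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow *
end-volume

# Sketch only. On each client: plain AFR over the two exported bricks,
# with no unify layer on top.
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host [server1]
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host [server2]
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes remote1 remote2
end-volume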

Thanks,
Mike

Michael Taggart <mikeytag>
Fri 25 Jul 2008 02:15:26 AM UTC, comment #5:

Hi Mike,
Thanks for the observation. Actually, I suspect it is something to do with the notify behaviour of glusterfs. I recently saw this behaviour with the single_AFR example spec (which acts as both server and client): two nodes were not connecting, and changing the order of the volume definitions made it work.
I will get back to you after more testing. (I think that if you run the client on the 64-bit machine as a separate glusterfs process, it should work fine.)
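
For illustration, a rough sketch of that split on the 64-bit machine: keep the export spec and the mount spec in separate files and start them as two independent processes. The file names and mount point are placeholders, and the invocation assumes the usual 1.3.x -f/--spec-file option:

# /etc/glusterfs/server.vol  -- only storage/posix + protocol/server,
#                               loaded by its own glusterfsd process
# /etc/glusterfs/client.vol  -- only protocol/client + cluster/afr,
#                               loaded by a second, separate process

glusterfsd -f /etc/glusterfs/server.vol
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs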

Regards,
Amar

Amar Tumballi <amarts>
Project Member
Thu 24 Jul 2008 10:12:01 PM UTC, comment #4:

I would like to add some more clues as to how this was happening for me. It just so happens that the two 64-bit machines we run were being used as storage bricks. The problems occurred when I tried to make those machines act as both bricks AND clients at the same time.
I bet this is not a recommended way of running gluster, seeing as I have excellent stability when I keep bricks as bricks only and clients as clients only.
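
For clarity, a sketch of the kind of combined spec being described here, where a single glusterfs process both exports a local brick and mounts the cluster as an AFR client; the names are placeholders, not the actual files from this setup:

volume brick
  type storage/posix
  option directory /data/export
end-volume

### Export the local brick to the other machines.
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow *
end-volume

### In the same process, also act as a client: mirror the local
### brick with a remote one via AFR.
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host [other-server]
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes brick remote
end-volume

Splitting the protocol/server half and the protocol/client half into separate spec files run by separate processes (as suggested in comment #5 above) avoids this arrangement.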

Michael Taggart <mikeytag>
Wed 11 Jun 2008 06:12:14 PM UTC, comment #3:

Hi Anand,
I would be open to that. Do you have my email address from my user profile? Basically, send me an email with the IP that you will be logging in from, and I can take the appropriate measures to get you a login from that IP.
Mike

Anonymous
Tue 10 Jun 2008 04:57:54 AM UTC, comment #2:

We have tried to reproduce this issue locally but have been unsuccessful. Is it possible to let us inspect your setup while the issue is being reproduced via remote login?

avati

Anand Avati <avati>
Project Member
Fri 06 Jun 2008 09:39:40 PM UTC, comment #1:

I would like to report that I am seeing the same problem.

Funny, because in my situation my two servers (using AFR) are 64-bit machines and all my clients are 32-bit. Whenever I run the 64-bit servers as clients as well, the whole cluster eventually stops; sometimes after 5 minutes, sometimes after a few days.

Running the servers solely as servers and all the other 32-bit machines solely as clients seems to fix the issue.

Michael Taggart <mikeytag>
Thu 15 May 2008 09:31:20 AM UTC, original submission:

Hi,

I am seeing strange behaviour between a 32-bit client and a 64-bit server.

I have a test configuration with:
- 1 32-bit client:
  Linux 2.6.18-6-686 i686 GNU/Linux
  glusterfs 1.3.8pre1 built on Feb 26 2008 18:21:47
  Repository revision: glusterfs--mainline--2.5--patch-676
- 4 64-bit servers:
  Linux 2.6.18-6-amd64 x86_64 GNU/Linux
  glusterfs 1.3.8pre1 built on Feb 26 2008 18:12:58
  Repository revision: glusterfs--mainline--2.5--patch-676
- 1 64-bit client:
  Linux 2.6.18-6-amd64 x86_64 GNU/Linux
  glusterfs 1.3.8pre1 built on Feb 26 2008 18:12:58
  Repository revision: glusterfs--mainline--2.5--patch-676

The test:
- I modify a file on the 32-bit client.
- I open the same file on the 64-bit client.
- The file is empty or completely corrupted with binary data.
- I close the file and reopen it.
- The file is now correct, with the modified lines.

The log files:

- Nothing on the 32-bit client.

- On the 64-bit server:

2008-05-15 11:22:45 E [protocol.c:259:gf_block_unserialize_transport] server: EOF from peer ([32bit_Client_IP]:52317)
2008-05-15 11:22:45 C [tcp.c:87:tcp_disconnect] server: connection disconnected

- On the 64-bit client:

2008-05-15 11:25:31 E [unify.c:261:unify_lookup_cbk] unify: Revalidate failed for [path_to_file]
2008-05-15 11:25:31 E [fuse-bridge.c:436:fuse_entry_cbk] glusterfs-fuse: 24682: [path_to_file] => -1 (2)

The server config file:

volume brick
  type storage/posix
  option directory /data/export/
end-volume

volume brick-ns
  type storage/posix
  option directory /data/export-ns/
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server   # For TCP/IP transport
  subvolumes brick brick-ns
  option auth.ip.brick.allow *       # Allow access to "brick" volume
  option auth.ip.brick-ns.allow *    # Allow access to "brick-ns" volume
end-volume

The client config file:

volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host [server1]
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host [server2]
  option remote-subvolume brick
end-volume

volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host [server3]
  option remote-subvolume brick
end-volume

volume brick4
  type protocol/client
  option transport-type tcp/client
  option remote-host [server4]
  option remote-subvolume brick
end-volume

volume brick-ns1
  type protocol/client
  option transport-type tcp/client
  option remote-host [server1]
  option remote-subvolume brick-ns
end-volume

volume brick-ns2
  type protocol/client
  option transport-type tcp/client
  option remote-host [server2]
  option remote-subvolume brick-ns
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick3
end-volume

volume afr2
  type cluster/afr
  subvolumes brick2 brick4
end-volume

volume afr-ns
  type cluster/afr
  subvolumes brick-ns1 brick-ns2
end-volume

volume unify
  type cluster/unify
  option namespace afr-ns
  option scheduler rr
  subvolumes afr1 afr2
end-volume

Anonymous

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by gowda (Posted a comment)
  • -unavailable- added by amarts (Posted a comment)
  • -unavailable- added by avati (Posted a comment)
  • -unavailable- added by mikeytag (Posted a comment)
  • -unavailable- added by mikeytag (Voted in favor of this item)


Follow 5 latest changes.

Date                              Changed By   Updated Field      Previous Value => Replaced By
Tue 23 Jun 2009 01:32:31 PM UTC   gowda        Status             None => Fixed
                                               Open/Closed        Open => Closed
Fri 25 Jul 2008 02:15:26 AM UTC   amarts       Originator Name    => Amar Tumballi
                                               Originator Email   => -unavailable-
Fri 06 Jun 2008 09:39:40 PM UTC   mikeytag     Carbon-Copy        - => Added mikeytag
