GlusterFS - Bugs: bug #26058, GlusterFS 2.0rc7 always crashes when trying to mount (always when mixing servers with 2.0rc4 and 2.0rc7)

 
 

bug #26058: GlusterFS 2.0rc7 always crashes when trying to mount (always when mixing servers with 2.0rc4 and 2.0rc7)

Submitted by:  Fernando Arconada <farconada>
Submitted on:  Tue 31 Mar 2009 02:09:28 PM UTC  
 
Category: GlusterFS
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Crash
Status: Fixed
Privacy: Public
Assigned to: Basavanagowda Kanur <gowda>
Originator Name: Fernando
Open/Closed: Closed
Release: CentOS 5.2 2.6.18-92.1.22.el5PAE i686
Operating System: GNU/Linux
Reproducibility: Every Time


Thu 25 Jun 2009 04:29:11 AM UTC, comment #12:

Fernando,
To mount GlusterFS with the mount command, use:
mount -t glusterfs <volume-spec-file-path> <mount-point>

Alternatively, you can mount it by running the glusterfs binary manually, as sketched below.
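For example, a minimal sketch of both forms (the volfile path and mount point are assumptions, not values taken from this report):

mount -t glusterfs /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
# or, running the binary directly:
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs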

Closing the bug, as the issues reported here have been addressed and fixed.
--
Gowda

Basavanagowda Kanur <gowda>
Project Member. In charge of this item.
Wed 01 Apr 2009 03:32:21 PM UTC, comment #11:

Hi Fernando,

Please check your client-side volfile again. I suspect that the "remote-host" option in the protocol/client section is followed by "/mnt/glusterfs".
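As a hedged illustration of that kind of mistake (hypothetical values, not Fernando's actual file):

volume server1.example.org
type protocol/client
option transport-type tcp/client
option remote-host /mnt/glusterfs # wrong: a mount point where a hostname or IP belongs
option remote-subvolume brick
end-volume

The remote-host value must be a resolvable hostname or an IP address, which is consistent with the "DNS resolution failed on host /mnt/glusterfs" errors in comment #10.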

Shehjar Tikoo <shehjart>
Project Member
Wed 01 Apr 2009 02:42:27 PM UTC, comment #10:

It is worse than before:
Now when I try to mount, the glusterfs command is not found, although glusterfs is in the PATH. When I run "mount.glusterfs /mnt/glusterfs", glusterfs.log shows:
2009-04-01 16:31:14 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:14 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs
2009-04-01 16:31:14 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:14 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs
2009-04-01 16:31:24 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:24 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs
2009-04-01 16:31:24 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:24 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs
2009-04-01 16:31:34 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:34 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs
2009-04-01 16:31:34 E [common-utils.c:102:gf_resolve_ip6] resolver: getaddrinfo failed (Name or service not known)
2009-04-01 16:31:34 E [name.c:238:af_inet_client_get_remote_sockaddr] trans: DNS resolution failed on host /mnt/glusterfs

This error did not occur previously, and I have not changed the configuration.
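The errors above indicate that "/mnt/glusterfs" was passed where a hostname was expected. A minimal sketch of an invocation that supplies both a volume spec and a mount point (the volfile path is an assumption):

mount.glusterfs /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs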

Fernando Arconada <farconada>
Wed 01 Apr 2009 11:11:19 AM UTC, comment #9:

I cannot reproduce this particular segfault, but a colleague thinks the second attached patch might fix it.

Again, please test with this one and let us know the result.

Thanks

Shehjar Tikoo <shehjart>
Project Member
Wed 01 Apr 2009 09:58:49 AM UTC, comment #8:

The patch works. I mean, I am able to mount and use the volume... but when I run an rsync (rsync -aP /var/nfs-shared/v* /mnt/glusterfs/) on a client, the client crashes (100% reproducible).

Client log:
2009-04-01 11:48:45 E [afr-self-heal-metadata.c:551:afr_sh_metadata_fix] afr1: Unable to resolve conflicting metadata of /var/www/t3pueblos/typo3temp/tmb_a0efcf0be1.jpg. Please resolve manually by fixing the permissions/ownership of /var/www/t3pueblos/typo3temp/tmb_a0efcf0be1.jpg on your subvolumes. You can also consider 'option favorite-child <>'
2009-04-01 11:48:45 W [afr-self-heal-metadata.c:77:afr_sh_metadata_done] afr1: aborting selfheal of /var/www/t3pueblos/typo3temp/tmb_a0efcf0be1.jpg
2009-04-01 11:48:45 E [afr-self-heal-metadata.c:551:afr_sh_metadata_fix] afr1: Unable to resolve conflicting metadata of /var/www/t3pueblos/typo3temp/tmb_a0f80d7119.jpg. Please resolve manually by fixing the permissions/ownership of /var/www/t3pueblos/typo3temp/tmb_a0f80d7119.jpg on your subvolumes. You can also consider 'option favorite-child <>'
2009-04-01 11:48:45 W [afr-self-heal-metadata.c:77:afr_sh_metadata_done] afr1: aborting selfheal of /var/www/t3pueblos/typo3temp/tmb_a0f80d7119.jpg
2009-04-01 11:48:45 E [afr-self-heal-metadata.c:551:afr_sh_metadata_fix] afr1: Unable to resolve conflicting metadata of /var/www/t3pueblos/typo3temp/tmb_a17c668625.jpg. Please resolve manually by fixing the permissions/ownership of /var/www/t3pueblos/typo3temp/tmb_a17c668625.jpg on your subvolumes. You can also consider 'option favorite-child <>'
2009-04-01 11:48:45 W [afr-self-heal-metadata.c:77:afr_sh_metadata_done] afr1: aborting selfheal of /var/www/t3pueblos/typo3temp/tmb_a17c668625.jpg
2009-04-01 11:48:45 E [afr-self-heal-metadata.c:551:afr_sh_metadata_fix] afr1: Unable to resolve conflicting metadata of /var/www/t3pueblos/typo3temp/tmb_a259f91da0.jpg. Please resolve manually by fixing the permissions/ownership of /var/www/t3pueblos/typo3temp/tmb_a259f91da0.jpg on your subvolumes. You can also consider 'option favorite-child <>'
2009-04-01 11:48:45 W [afr-self-heal-metadata.c:77:afr_sh_metadata_done] afr1: aborting selfheal of /var/www/t3pueblos/typo3temp/tmb_a259f91da0.jpg
2009-04-01 11:48:45 E [afr-self-heal-metadata.c:551:afr_sh_metadata_fix] afr1: Unable to resolve conflicting metadata of /var/www/t3pueblos/typo3temp/tmb_a2ac7ad772.gif. Please resolve manually by fixing the permissions/ownership of /var/www/t3pueblos/typo3temp/tmb_a2ac7ad772.gif on your subvolumes. You can also consider 'option favorite-child <>'
2009-04-01 11:48:45 W [afr-self-heal-metadata.c:77:afr_sh_metadata_done] afr1: aborting selfheal of /var/www/t3pueblos/typo3temp/tmb_a2ac7ad772.gif
2009-04-01 11:48:45 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:49 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:52 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:54 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:55 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:57 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:48:59 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:01 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:03 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:05 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:06 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:08 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:12 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:15 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:17 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:20 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:22 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:24 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:26 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:28 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:30 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:32 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:33 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:36 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:37 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:39 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
2009-04-01 11:49:41 W [unify-self-heal.c:593:unify_sh_checksum_cbk] unify: Self-heal triggered on directory /var/www/t3pueblos/typo3temp
pending frames:
frame : type(1) op(LOOKUP)

patchset: 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
signal received: 11
configuration details:argp 1
backtrace 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.0rc7
[0x6a7420]
/lib/libpthread.so.0(pthread_spin_lock+0x6)[0x601b46]
/usr/lib/libglusterfs.so.0(dict_unref+0x2f)[0xdb430f]
/usr/lib/glusterfs/2.0.0rc7/xlator/cluster/unify.so(unify_sh_setdents_cbk+0x25e)[0x8f691e]
/usr/lib/glusterfs/2.0.0rc7/xlator/cluster/afr.so(afr_setdents_done+0x58)[0x7c2da8]
/usr/lib/glusterfs/2.0.0rc7/xlator/cluster/afr.so(afr_unlock_common_cbk+0x5d)[0x7d0fcd]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/client.so(client_fentrylk_cbk+0x69)[0x119159]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/client.so(protocol_client_interpret+0x1fa)[0x1151ea]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/client.so(protocol_client_pollin+0xd2)[0x1153e2]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/client.so(notify+0x186)[0x11c7b6]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_poll_in+0x3b)[0x1316fb]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_handler+0xae)[0x1324ce]
/usr/lib/libglusterfs.so.0[0xdd18ca]
/usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xdd0791]
/usr/sbin/glusterfs(main+0xe13)[0x804b193]
/lib/libc.so.6(__libc_start_main+0xdc)[0x499dec]
/usr/sbin/glusterfs[0x8049911]
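The trace above comes from glusterfs's built-in SIGSEGV handler, so the symbols are approximate. One way to get a fuller, symbol-resolved backtrace (a sketch, assuming core dumps are enabled and the binaries are unstripped; the core path is a placeholder):

ulimit -c unlimited            # enable core dumps before reproducing the crash
gdb /usr/sbin/glusterfs /path/to/core
(gdb) bt full                  # full backtrace with local variables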

Fernando Arconada <farconada>
Wed 01 Apr 2009 09:06:50 AM UTC, comment #7:

Fernando,

Please test again with the attached patch and let us know if it works for you. It does for me.

Regards

Shehjar Tikoo <shehjart>
Project Member
Wed 01 Apr 2009 08:12:33 AM UTC, comment #6:

I can reproduce it on my system and am working on it.

Thanks

Shehjar Tikoo <shehjart>
Project Member
Wed 01 Apr 2009 07:41:27 AM UTC, comment #5:

My client config

### Add client feature and attach to remote subvolume of server1
volume server1.example.org
type protocol/client
option transport-type tcp/client
option remote-host server1.example.org # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume

volume server2.example.org
type protocol/client
option transport-type tcp/client
option remote-host server2.example.org # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume

volume server3.example.org
type protocol/client
option transport-type tcp/client
option remote-host server3.example.org # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume

volume server4.example.org
type protocol/client
option transport-type tcp/client
option remote-host server4.example.org # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume

### The file index on server1

volume server1.example.org-ns
type protocol/client
option transport-type tcp/client
option remote-host server1.example.org # IP address of the remote brick
option remote-subvolume brick-ns # name of the remote volume
end-volume

volume server2.example.org-ns
type protocol/client
option transport-type tcp/client
option remote-host server2.example.org # IP address of the remote brick
option remote-subvolume brick-ns # name of the remote volume
end-volume

volume server3.example.org-ns
type protocol/client
option transport-type tcp/client
option remote-host server3.example.org # IP address of the remote brick
option remote-subvolume brick-ns # name of the remote volume
end-volume

volume server4.example.org-ns
type protocol/client
option transport-type tcp/client
option remote-host server4.example.org # IP address of the remote brick
option remote-subvolume brick-ns # name of the remote volume
end-volume

#The replicated volume with data
volume afr1
type cluster/afr
subvolumes server1.example.org server2.example.org server3.example.org server4.example.org
end-volume

#The replicated volume with indexes
volume afr-ns
type cluster/afr
subvolumes server1.example.org-ns server2.example.org-ns server3.example.org-ns server4.example.org-ns
end-volume

###volume nufa
### type cluster/nufa
### option local-volume-name `hostname` # note the backquote, so 'hostname' output will be used as the option.
### subvolumes afr1
###end-volume
volume unify
type cluster/unify
option namespace afr-ns
option scheduler rr
### option scheduler nufa
### option nufa.local-volume-name `hostname`
subvolumes afr1
end-volume

Fernando Arconada <farconada>
Wed 01 Apr 2009 07:28:13 AM UTC, comment #4:

Fernando,

Could you please post the client side vol file also?

Shehjar Tikoo <shehjart>
Project Member
Wed 01 Apr 2009 06:12:01 AM UTC, comment #3:

Even with rc4 and rc7?
But it always crashes between rc7 servers and rc7 clients, without version mixing.

Fernando Arconada <farconada>
Tue 31 Mar 2009 06:24:09 PM UTC, comment #2:

Fernando,
'always if mixes servers with 2.0rc4 and 2.0rc7' - does that mean you are using the client from one version and the server from another?
As of now, GlusterFS does not support backward compatibility. Please make sure that all the servers and clients on the network come from the same GlusterFS release.
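A quick way to verify that every node runs the same release (a sketch; the host list is a placeholder):

for h in server1.example.org server2.example.org server3.example.org server4.example.org; do
  ssh $h 'glusterfs --version | head -n1'
done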

--
Gowda

Basavanagowda Kanur <gowda>
Project Member. In charge of this item.
Tue 31 Mar 2009 02:26:44 PM UTC, comment #1:

It always happens between 2.0rc7 servers.
This is a more detailed log:
2009-03-31 16:15:54 D [glusterfsd.c:335:_get_specfp] glusterfs: loading volume file /etc/glusterfs/glusterfs-server.vol
================================================================================
Version : glusterfs 2.0.0rc7 built on Mar 31 2009 14:51:32
TLA Revision : 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
Starting Time: 2009-03-31 16:15:54
Command line : /usr/sbin/glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l /var/log/glusterfs/glusterfsd.log -L DEBUG
PID : 8810
System name : Linux
Nodename : xxx.xxx.org
Kernel Release : 2.6.18-92.1.22.el5PAE
Hardware Identifier: i686

Given volfile:
+------------------------------------------------------------------------------+
1: volume posix
2: type storage/posix
3: option directory /var/datos-glusterfs
4: end-volume
5:
6: volume locks
7: type features/locks
8: subvolumes posix
9: end-volume
10:
11: volume brick
12: type performance/io-threads
13: option thread-count 8
14: subvolumes locks
15: end-volume
16:
17: volume posix-ns
18: type storage/posix
19: option directory /var/datos-glusterfs-ns
20: end-volume
21:
22: volume locks-ns
23: type features/locks
24: subvolumes posix-ns
25: end-volume
26:
27: volume brick-ns
28: type performance/io-threads
29: option thread-count 8
30: subvolumes locks-ns
31: end-volume
32:
33: volume server
34: type protocol/server
35: option transport-type tcp
36: option auth.addr.brick.allow 127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
37: option auth.addr.brick-ns.allow 127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
38: subvolumes brick brick-ns
39: end-volume
40:

+------------------------------------------------------------------------------+
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'posix'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/storage/posix.so
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:posix:storage/posix
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:posix:directory:/var/datos-glusterfs
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:posix
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'locks'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/features/locks.so
2009-03-31 16:15:54 D [xlator.c:509:xlator_set_type] xlator: dlsym(notify) on /usr/lib/glusterfs/2.0.0rc7/xlator/features/locks.so: undefined symbol: notify -- neglecting
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:locks:features/locks
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:locks->posix
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:locks
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'brick'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so
2009-03-31 16:15:54 D [xlator.c:509:xlator_set_type] xlator: dlsym(notify) on /usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so: undefined symbol: notify -- neglecting
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:brick:performance/io-threads
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:brick:thread-count:8
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:brick->locks
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:brick
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'posix-ns'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/storage/posix.so
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:posix-ns:storage/posix
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:posix-ns:directory:/var/datos-glusterfs-ns
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:posix-ns
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'locks-ns'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/features/locks.so
2009-03-31 16:15:54 D [xlator.c:509:xlator_set_type] xlator: dlsym(notify) on /usr/lib/glusterfs/2.0.0rc7/xlator/features/locks.so: undefined symbol: notify -- neglecting
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:locks-ns:features/locks
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:locks-ns->posix-ns
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:locks-ns
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'brick-ns'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so
2009-03-31 16:15:54 D [xlator.c:509:xlator_set_type] xlator: dlsym(notify) on /usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so: undefined symbol: notify -- neglecting
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:brick-ns:performance/io-threads
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:brick-ns:thread-count:8
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:brick-ns->locks-ns
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:brick-ns
2009-03-31 16:15:54 D [spec.y:188:new_section] parser: New node for 'server'
2009-03-31 16:15:54 D [xlator.c:469:xlator_set_type] xlator: attempt to load file /usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so
2009-03-31 16:15:54 D [spec.y:214:section_type] parser: Type:server:protocol/server
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:server:transport-type:tcp
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:server:auth.addr.brick.allow:127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
2009-03-31 16:15:54 D [spec.y:243:section_option] parser: Option:server:auth.addr.brick-ns.allow:127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:server->brick
2009-03-31 16:15:54 D [spec.y:312:section_sub] parser: child:server->brick-ns
2009-03-31 16:15:54 D [spec.y:327:section_end] parser: end:server
2009-03-31 16:15:54 D [glusterfsd.c:1115:main] glusterfs: running in pid 8811
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] posix: Initialization done
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] locks: Initialization done
2009-03-31 16:15:54 D [io-threads.c:926:init] io-threads: Using conf->thread_count = 8
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] brick: Initialization done
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] posix-ns: Initialization done
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] locks-ns: Initialization done
2009-03-31 16:15:54 D [io-threads.c:926:init] io-threads: Using conf->thread_count = 8
2009-03-31 16:15:54 D [xlator.c:599:xlator_init_rec] brick-ns: Initialization done
2009-03-31 16:15:54 D [transport.c:141:transport_load] transport: attempt to load file /usr/lib/glusterfs/2.0.0rc7/transport/socket.so
2009-03-31 16:15:54 D [server-protocol.c:8160:init] server: defaulting limits.transaction-size to 4194304
2009-03-31 16:15:54 N [glusterfsd.c:1134:main] glusterfs: Successfully started
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "127.0.0.1", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.153", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.155", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.158", received addr = "172.16.1.158"
2009-03-31 16:15:59 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.158:1015
2009-03-31 16:15:59 D [server-protocol.c:7558:mop_setvolume] server: creating inode table with lru_limit=1024, xlator=brick
2009-03-31 16:15:59 D [inode.c:1010:inode_table_new] brick: creating new inode table with lru_limit=1024
2009-03-31 16:15:59 D [inode.c:471:__inode_create] brick/inode: create inode(0)
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "127.0.0.1", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.153", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.155", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.158", received addr = "172.16.1.158"
2009-03-31 16:15:59 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.158:1014
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "127.0.0.1", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.153", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.155", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.158", received addr = "172.16.1.158"
2009-03-31 16:15:59 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.158:1013
2009-03-31 16:15:59 D [server-protocol.c:7558:mop_setvolume] server: creating inode table with lru_limit=1024, xlator=brick-ns
2009-03-31 16:15:59 D [inode.c:1010:inode_table_new] brick-ns: creating new inode table with lru_limit=1024
2009-03-31 16:15:59 D [inode.c:471:__inode_create] brick-ns/inode: create inode(0)
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "127.0.0.1", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.153", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.155", received addr = "172.16.1.158"
2009-03-31 16:15:59 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.158", received addr = "172.16.1.158"
2009-03-31 16:15:59 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.158:1012
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick: allowed = "127.0.0.1", received addr = "172.16.1.153"
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.153", received addr = "172.16.1.153"
2009-03-31 16:16:33 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.153:1006
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick: allowed = "127.0.0.1", received addr = "172.16.1.153"
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick: allowed = "172.16.1.153", received addr = "172.16.1.153"
2009-03-31 16:16:33 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.153:1007
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick-ns: allowed = "127.0.0.1", received addr = "172.16.1.153"
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.153", received addr = "172.16.1.153"
2009-03-31 16:16:33 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.153:1015
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick-ns: allowed = "127.0.0.1", received addr = "172.16.1.153"
2009-03-31 16:16:33 D [addr.c:174:gf_auth] brick-ns: allowed = "172.16.1.153", received addr = "172.16.1.153"
2009-03-31 16:16:33 N [server-protocol.c:7513:mop_setvolume] server: accepted client from 172.16.1.153:1014
2009-03-31 16:16:33 D [inode.c:293:__inode_activate] brick/inode: activating inode(1), lru=0/1024 active=1 purge=0
2009-03-31 16:16:33 D [server-protocol.c:3629:server_lookup_resume] brick: 3: LOOKUP '0/(null)'
2009-03-31 16:16:33 D [inode.c:293:__inode_activate] brick-ns/inode: activating inode(1), lru=0/1024 active=1 purge=0
2009-03-31 16:16:33 D [server-protocol.c:3629:server_lookup_resume] brick-ns: 3: LOOKUP '0/(null)'
2009-03-31 16:16:33 D [server-protocol.c:7283:server_checksum] brick: 3: CHECKSUM '/ (1)'
pending frames:
frame : type(1) op(CHECKSUM)

patchset: 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
signal received: 11
configuration details:argp 1
backtrace 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.0rc7
[0x2b2420]
/usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so[0x16a971]
/usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so(iot_checksum+0x51)[0x16b601]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(server_checksum+0x116)[0x118076]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(protocol_server_interpret+0x15e)[0x11615e]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(protocol_server_pollin+0xac)[0x11639c]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(notify+0x9f)[0x11643f]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_poll_in+0x3b)[0xaa76fb]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_handler+0xae)[0xaa84ce]
/usr/lib/libglusterfs.so.0[0xc338ca]
/usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xc32791]
/usr/sbin/glusterfsd(main+0xe13)[0x804b193]
/lib/libc.so.6(__libc_start_main+0xdc)[0x499dec]
/usr/sbin/glusterfsd[0x8049911]
---------

Fernando Arconada <farconada>
Tue 31 Mar 2009 02:09:28 PM UTC, original submission:

================================================================================
Version : glusterfs 2.0.0rc7 built on Mar 31 2009 14:51:32
TLA Revision : 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
Starting Time: 2009-03-31 13:51:22
Command line : /usr/sbin/glusterfsd -f /etc/glusterfs/glusterfs-server.vol -l /var/log/glusterfs/glusterfsd.log -L WARNING
PID : 16649
System name : Linux
Nodename : XXX.XXX
Kernel Release : 2.6.18-92.1.22.el5PAE
Hardware Identifier: i686

Given volfile:
+------------------------------------------------------------------------------+
1: volume posix
2: type storage/posix
3: option directory /var/datos-glusterfs
4: end-volume
5:
6: volume locks
7: type features/locks
8: subvolumes posix
9: end-volume
10:
11: volume brick
12: type performance/io-threads
13: option thread-count 8
14: subvolumes locks
15: end-volume
16:
17: volume posix-ns
18: type storage/posix
19: option directory /var/datos-glusterfs-ns
20: end-volume
21:
22: volume locks-ns
23: type features/locks
24: subvolumes posix-ns
25: end-volume
26:
27: volume brick-ns
28: type performance/io-threads
29: option thread-count 8
30: subvolumes locks-ns
31: end-volume
32:
33: volume server
34: type protocol/server
35: option transport-type tcp
36: option auth.addr.brick.allow 127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
37: option auth.addr.brick-ns.allow 127.0.0.1,172.16.1.153,172.16.1.155,172.16.1.158,172.16.1.159
38: subvolumes brick brick-ns
39: end-volume
40:

+------------------------------------------------------------------------------+
pending frames:
frame : type(1) op(CHECKSUM)

patchset: 4e5c297d7c3480d0d3ab1c0c2a184c6a4fb801ef
signal received: 11
configuration details:argp 1
backtrace 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.0rc7
[0xfec420]
/usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so[0xa4a971]
/usr/lib/glusterfs/2.0.0rc7/xlator/performance/io-threads.so(iot_checksum+0x51)[0xa4b601]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(server_checksum+0x116)[0xebd076]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(protocol_server_interpret+0x15e)[0xebb15e]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(protocol_server_pollin+0xac)[0xebb39c]
/usr/lib/glusterfs/2.0.0rc7/xlator/protocol/server.so(notify+0x9f)[0xebb43f]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_poll_in+0x3b)[0x8b06fb]
/usr/lib/glusterfs/2.0.0rc7/transport/socket.so(socket_event_handler+0xae)[0x8b14ce]
/usr/lib/libglusterfs.so.0[0xe348ca]
/usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xe33791]
/usr/sbin/glusterfsd(main+0xe13)[0x804b193]
/lib/libc.so.6(__libc_start_main+0xdc)[0x499dec]
/usr/sbin/glusterfsd[0x8049911]

Fernando Arconada <farconada>

 


Attached Files
file #17847:  0001-Temporary-fix-for-self-heal.patch added by shehjart
file #17846:  iothreads-chksum.patch added by shehjart (2KiB - text/x-patch)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by shehjart (Posted a comment)
  • -unavailable- added by shehjart
  • -unavailable- added by gowda (Posted a comment)
  • -unavailable- added by farconada (Submitted the item)


    Follow 7 latest changes.

    Date                             Changed By  Updated Field    Previous Value => Replaced By
    Thu 25 Jun 2009 04:29:35 AM UTC  gowda       Status           None => Fixed
                                                 Open/Closed      Open => Closed
    Wed 01 Apr 2009 11:11:59 AM UTC  shehjart    Attached File    - => Added 0001-Temporary-fix-for-self-heal.patch, #17847
    Wed 01 Apr 2009 09:07:56 AM UTC  shehjart    Attached File    - => Added iothreads-chksum.patch, #17846
    Wed 01 Apr 2009 07:06:43 AM UTC  shehjart    Carbon-Copy      - => Added -unavailable-
    Tue 31 Mar 2009 06:24:09 PM UTC  gowda       Assigned to      None => gowda
                                                 Originator Name  => Fernando
