Gluster - Bugs: bug #25501, client memory leak

 
 

bug #25501: client memory leak

Submitted by:  None
Submitted on:  Thu 05 Feb 2009 08:17:41 PM UTC  
 
Category: GlusterFS
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: None
Status: Postponed
Privacy: Public
Assigned to: Raghavendra <raghavendra>
Originator Name: -
Originator Email: -unavailable-
Open/Closed: Closed
Release: glusterfs-2.0.0rc1
Operating System: GNU/Linux
Reproducibility: Every Time


Wed 24 Jun 2009 10:22:20 AM UTC, comment #6:

This bug has been moved to http://bugs.gluster.com.

Please track the progress of the bug at http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=43

--
Gowda

Basavanagowda Kanur <gowda>
Project Member
Wed 22 Apr 2009 06:16:49 AM UTC, comment #5:

Memory still leaks with the 2.0.0rc8 client. While running "du" on a freshly mounted, fairly big volume (/home, with about 14 GB of files), the memory consumption increases steadily from 15 to 130 MB. My setup is as follows: 6 afr (2x) volumes, with unify over these 6 volumes. The leak is probably not setup- or even translator-dependent, as "stripe" leaks memory just like "unify over afr" (I tried that out). The servers are all OK.
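
A rough way to watch this while du runs (a sketch only; /mnt/gluster and a single glusterfs client process are assumptions, not details from this report):

----------------------------------------------
# Sample the glusterfs client's resident memory once a minute while du
# walks the freshly mounted volume (/mnt/gluster is a hypothetical path).
du -s /mnt/gluster &
DU_PID=$!
while kill -0 "$DU_PID" 2>/dev/null; do
  ps -o rss= -C glusterfs | head -n 1   # RSS in kB of the glusterfs client
  sleep 60
done
----------------------------------------------

If the sampled RSS climbs steadily while du runs and stays high afterwards, that matches the growth from 15 to 130 MB described above.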

Krzysztof Strasburger <strasbur>
Thu 09 Apr 2009 11:14:06 AM UTC, comment #4:

I observe a similar client memory leak with the older 1.3.12 release.
It is sufficient to run du on a big directory tree.
My setup is quite complicated (6 pairs of replicated data; the volume is distributed over these 6 pairs).

Krzysztof Strasburger <strasbur>
Sat 21 Mar 2009 10:21:20 PM UTC, comment #3:

The same error happens with 2.0.0rc4 as well.

Even without afr, the simplest setup also leaks:

----------------------------------------------
# SERVER:
volume posix
type storage/posix
option directory /home2
end-volume

volume brick
type features/posix-locks
subvolumes posix
end-volume

volume server
type protocol/server
option transport-type tcp/server
option auth.addr.brick.allow x
subvolumes brick
end-volume

-----------------------------------------
# CLIENT

volume home2
type protocol/client
option transport-type tcp/client
option remote-host 64.88.252.51
option remote-subvolume brick
option transport-timeout 10
end-volume

This is a busy server doing other things as well, but here is the situation after running for 90 minutes, copying files the whole time:

root 12501 1.9 0.5 72892 42364 ? Ssl 16:36 2:00 glusterfs -f client-afr.vol /mnt/gluster

Hrvoje <hrvoje>
Mon 09 Feb 2009 10:09:57 AM UTC, comment #2:

Copying files with the cp command: a big number of small jpg files in a tree that runs 3 levels deep, e.g.

/xxx/yyy/zzz/some.jpg

There are thousands of them, copied from the hard drive to the glusterfs mount point.
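
A minimal reproduction sketch along these lines (the tree layout is made up to match the description; /tmp/jpgtree and /mnt/gluster are hypothetical paths, not taken from this report):

----------------------------------------------
# Create a 3-level tree of a few thousand small "jpg" files and copy it
# onto the glusterfs mount, then look at the client's memory afterwards.
mkdir -p /tmp/jpgtree
for a in $(seq 1 10); do
  for b in $(seq 1 10); do
    for c in $(seq 1 10); do
      d=/tmp/jpgtree/$a/$b/$c
      mkdir -p "$d"
      for f in $(seq 1 5); do
        dd if=/dev/urandom of="$d/$f.jpg" bs=4k count=8 2>/dev/null
      done
    done
  done
done
cp -r /tmp/jpgtree /mnt/gluster/
ps -o vsz=,rss= -C glusterfs   # client memory after the copy
----------------------------------------------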

Hrvoje

Hrvoje <hrvoje>
Mon 09 Feb 2009 06:36:15 AM UTC, comment #1:

What were the operations being run on glusterfs mount point when the client was leaking memory?

Raghavendra <raghavendra>
Project Member, in charge of this item.
Thu 05 Feb 2009 08:17:41 PM UTC, original submission:

I see a memory increase with the newest glusterfs client, glusterfs-2.0.0rc1:

The leak is about 1 MB per minute during usage. For example:
# ps aux|grep glusterfs
root 16674 1.8 2.6 246676 216344 ? Ssl 10:25 5:06 glusterfs -f client.vol /mnt/gluster/
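
To put a number on the per-minute growth, something like the following could be left running (sketch only; 16674 is the pid from the ps output above, and /tmp/glusterfs-rss.log is a hypothetical path):

----------------------------------------------
# Log a timestamped RSS sample of the client once a minute; successive
# lines should differ by roughly 1024 kB if the leak is ~1 MB per minute.
while true; do
  echo "$(date '+%F %T') $(ps -o rss= -p 16674)" >> /tmp/glusterfs-rss.log
  sleep 60
done
----------------------------------------------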

-------------------------------------------------------------
client config:
cat client.vol

volume home1
type protocol/client
option transport-type tcp/client
option remote-host xx.xx.xx.xx # IP address of the remote brick
option remote-subvolume home # name of the remote volume
option transport-timeout 30
end-volume

volume home2
type protocol/client
option transport-type tcp/client
option remote-host xx.xx.xx.xx # IP address of the remote brick
option remote-subvolume home # name of the remote volume
option transport-timeout 10
end-volume

volume home-ha
type cluster/ha
subvolumes home1 home2
end-volume
----------------------------------------------------------------
The client machine is 64-bit, kernel 2.6.25.4.
fuse is compiled into the kernel.

The leak is present with both standard fuse 2.7.4 and your custom fuse.

I logged at WARNING level; there is nothing in the log file except:
2009-02-05 10:02:45 E [client-protocol.c:263:call_bail] home2: activating bail-out. pending frames = 1. last sent = 2009-02-05 10:02:29. last received = 2009-02-05 10:02:29. transport-timeout = 10
2009-02-05 10:02:45 C [client-protocol.c:298:call_bail] home2: bailing transport
2009-02-05 10:02:45 E [saved-frames.c:148:saved_frames_unwind] home2: forced unwinding frame type(1) op(OPEN)
2009-02-05 10:25:34 W [fuse-bridge.c:2526:fuse_thread_proc] fuse: unmounting /mnt/gluster/
2009-02-05 10:25:34 W [glusterfsd.c:775:cleanup_and_exit] glusterfs: shutting down
-------------------------------------------------------------
server1 configs:

volume posix
type storage/posix
option directory /home2
end-volume

volume brick
type features/posix-locks
subvolumes posix
end-volume

volume krishna
type protocol/client
option transport-type tcp/client
option remote-host xx.xx.xx.xx
option remote-subvolume brick
end-volume

volume rama
type protocol/client
option transport-type tcp/client
option remote-host xx.xx.xxx.xx
option remote-subvolume brick
end-volume

volume home
type cluster/afr
option read-subvolume krishna
subvolumes krishna rama
end-volume

volume server
type protocol/server
option transport-type tcp/server
# solaire tornado
option auth.ip.brick.allow *
option auth.ip.home.allow xxx.xx.xxx.xxx,xx.xx.xxx.xx
subvolumes brick home
end-volume

----------------------------------------------------------------
server2 config:
volume posix
type storage/posix
option directory /home2
end-volume

volume brick
type features/posix-locks
subvolumes posix
end-volume

#volume brick
# type performance/io-threads
# option thread-count 8
# subvolumes locks
#end-volume

volume krishna
type protocol/client
option transport-type tcp/client
option remote-host 64.88.252.51
option remote-subvolume brick
end-volume

volume rama
type protocol/client
option transport-type tcp/client
option remote-host 64.88.252.15
option remote-subvolume brick
end-volume

volume home
type cluster/afr
option read-subvolume rama
subvolumes krishna rama
end-volume

volume server
type protocol/server
option transport-type tcp/server
option auth.ip.brick.allow xx.xx.xx.xx
option auth.ip.home.allow xx.xx.xx.xx
subvolumes brick home
end-volume

The glusterfs server memory (VSZ) is 100 MB, although it started at 33 MB. Maybe the servers also leak something, but not much.

Let me know if you need any other info.

Hrvoje

Anonymous

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by gowda (Posted a comment)
  • -unavailable- added by strasbur (Posted a comment)
  • -unavailable- added by strasbur
  • -unavailable- added by hrvoje (Posted a comment)
  • -unavailable- added by raghavendra (Updated the item)
  • -unavailable- added by hrvoje

    Follow 7 latest changes.

Date | Changed By | Updated Field | Previous Value => Replaced By
Wed 24 Jun 2009 10:22:20 AM UTC | gowda | Status | None => Postponed
Wed 24 Jun 2009 10:22:20 AM UTC | gowda | Open/Closed | Open => Closed
Thu 09 Apr 2009 11:14:07 AM UTC | strasbur | Carbon-Copy | - => Added strasbur
Mon 09 Feb 2009 01:16:53 AM UTC | raghavendra | Assigned to | None => raghavendra
Mon 09 Feb 2009 01:16:53 AM UTC | raghavendra | Originator Name | => -
Mon 09 Feb 2009 01:16:53 AM UTC | raghavendra | Originator Email | => -
Thu 05 Feb 2009 08:22:01 PM UTC | hrvoje | Carbon-Copy | - => Added hrvoje
