bug #25177: Memory leak...

Submitted by:  swankier <swankier>
Submitted on:  Fri 26 Dec 2008 06:04:24 PM UTC  
 
Category: GlusterFS
Severity: 3 - Normal
Priority: 5 - Normal
Item Group: Improper behaviour
Status: Fixed
Privacy: Public
Assigned to: Raghavendra <raghavendra>
Originator Name: swankier
Open/Closed: Closed
Release: 
Operating System: GNU/Linux
Reproducibility: Every Time


Tue 23 Jun 2009 05:56:21 PM UTC, comment #7:

A lot of enhancements went into glusterfs to reduce its memory footprint, and many leaks were fixed.

The most notable change, which fixed the io-cache leak reported here, is the use of mmap()ed iobufs for I/O buffers.
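For anyone who wants to verify this on a running process, a rough check on Linux is to look at where the memory actually lives (a sketch; it assumes a single client process and the procps tools):

# Heap vs. mmap()ed regions of the running client (use glusterfsd for the server side)
pid=$(pidof glusterfs)
grep -E 'VmSize|VmRSS' /proc/$pid/status   # overall virtual size and resident set
pmap -x $pid | sort -n -k2 | tail -15      # largest mappings; "anon" lines are the mmap()ed buffers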

--
Gowda

Basavanagowda Kanur <gowda>
Project Member
Thu 14 May 2009 06:05:48 PM UTC, comment #6:

I just filed http://savannah.nongnu.org/bugs/index.php?26579 regarding my last comment; it is a different leak.

Sorry for the spam.

Anonymous
Thu 14 May 2009 12:04:59 PM UTC, comment #5:

I'm experiencing a worrying leak as well, but maybe it's not the same. I'll open a different bug if it isn't.

Using 2.0.1 (git tag) and fuse 2.7.4-1.1glfs11 on Debian Etch i386.
The leak is on the server side. After six hours of running:

root 32437 0.7 0.2 2406412 21504 ? - 07:32 3:05 /usr/sbin/glusterfsd
root 32462 0.1 0.1 1446240 14100 ? - 07:32 0:41 /usr/sbin/glusterfsd

The virtual size (VSZ) is really big and growing faster than the RSS, which also keeps growing over time.

The volfile is pretty simple:
volume brick-homes
type storage/posix
option directory /srv/data/homes
end-volume

volume locks
type features/posix-locks
subvolumes brick-homes
end-volume

volume export-homes
type performance/io-threads
option autoscaling on
option thread-count 16 # default is 16
subvolumes locks
end-volume

### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
option listen-port 6997
subvolumes export-homes
option auth.addr.locks.allow 192.168.3.2 # AFR, turing2
option auth.addr.export-homes.allow * # Clients
end-volume
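
To put numbers on the growth rather than eyeballing ps, something like this can be left running (a sketch; it assumes the server process shows up as glusterfsd, and the log path is just an example):

# Record VSZ/RSS of every glusterfsd process every 10 minutes
# (/var/tmp/glusterfsd-mem.log is just an example path)
while true; do
  date
  ps -C glusterfsd -o pid,vsz,rss,args
  sleep 600
done >> /var/tmp/glusterfsd-mem.log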

Anonymous
Tue 20 Jan 2009 02:08:31 PM UTC, comment #4:

The same issue here, on Debian Lenny.

# glusterfs --version
glusterfs 2.0.0rc1 built on Jan 20 2009 02:23:06

On test filesystem:

/mnt/gluster/1 - a 128MB file
/mnt/gluster/linux - a Linux kernel source tree

How to reproduce:

while true; do
dd if=/mnt/gluster/1 of=/dev/null bs=1M count=128
grep -r '' /mnt/gluster/linux/fs/ >/dev/null
done

After 5 minutes of running, the process has grown to 300MB:

# ps auxw|grep gluster
root 19571 20.1 15.2 870660 315576 ? Ssl 08:56 0:44 /usr/local/sbin/glusterfs --log-level=WARNING --volfile=/etc/glusterfs/client.vol /mnt/gluster

My config:

=========

volume localdisk
type storage/posix
option directory /mnt/disk2
end-volume

volume plock-localdisk
type features/posix-locks
option mandatory-locks on
subvolumes localdisk
end-volume

volume iothreads-localdisk
type performance/io-threads
option thread-count 8
subvolumes plock-localdisk
end-volume

volume debian1
type protocol/client
option transport-type tcp/client
option remote-host [another-machine]
option remote-subvolume iothreads-localdisk
end-volume

volume afr
type cluster/afr
subvolumes plock-localdisk debian1
option data-self-heal off
option entry-self-heal off
option metadata-self-heal off
end-volume

volume io-cache
type performance/io-cache
option cache-size 32MB
option page-size 128KB
option cache-timeout 10
subvolumes afr
end-volume

=========

Without io-cache it looks much better. But io-cache is configured with a 32MB cache, so it should not be using more than that.
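
For anyone who wants to reproduce the comparison, a sketch (the volfile and mount point names are just examples):

# Second client mount whose volfile simply omits the io-cache volume
/usr/local/sbin/glusterfs --log-level=WARNING --volfile=/etc/glusterfs/client-no-iocache.vol /mnt/gluster-nocache
# ...run the same dd/grep loop against /mnt/gluster-nocache for 5 minutes, then compare:
ps -C glusterfs -o pid,vsz,rss,args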

John Lepikhin <johnlepikhin>
Mon 05 Jan 2009 05:01:50 PM UTC, comment #3:

I just left it running for a day.

I suspect it may have something to do with the cron tasks in Debian Etch that crawl file systems.
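
A quick way to see which scheduled jobs might be walking the mount (standard Debian cron paths; /mnt is just a placeholder for wherever glusterfs is mounted):

# List daily cron jobs and check whether any of them reference the mount;
# the locate/updatedb job, if installed, walks every mounted filesystem
ls /etc/cron.daily/
grep -rn '/mnt' /etc/crontab /etc/cron.d/ /etc/cron.daily/ 2>/dev/null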

swankier <swankier>
Mon 05 Jan 2009 04:50:11 PM UTC, comment #2:

As reported by the user, the command causing the leak is ls -lahR.

swankier,

I tried to reproduce the issue with afr self-heal off, but without success.

Did you observe afr self-heal in action while the client was consuming lots of memory (one way to check the logs for this is sketched after the options below)? Can you try the same operation with afr self-heal turned off? Self-heal can be disabled with the following options on the afr volume:

option data-self-heal off
option entry-self-heal off
option metadata-self-heal off
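
If it is not obvious whether self-heal was active, grepping the client log around the time the memory grows can help (a sketch; the log location depends on how glusterfs was started, so adjust the path):

# Look for self-heal messages in the client log
# (/var/log/glusterfs/*.log is an example; use whatever the log file option points to)
grep -i 'self-heal' /var/log/glusterfs/*.log | tail -20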

regards,
Raghavendra.

Raghavendra <raghavendra>
Project Member
In charge of this item.
Mon 29 Dec 2008 12:19:43 AM UTC, comment #1:

I misread this output.

This server actually has 2GB of memory; however, gluster is still using more than expected.

swankier <swankier>
Fri 26 Dec 2008 06:04:24 PM UTC, original submission:

As you can see from the top output below...

This server has 12GB of memory, and 55% of it is being used by gluster.

This has grown from 29% at this time yesterday.

========

top - 12:47:54 up 2 days, 16:20, 9 users, load average: 1.03, 1.04, 1.00
Tasks: 109 total, 1 running, 108 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.0%id, 0.4%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 2059668k total, 2049412k used, 10256k free, 15168k buffers
Swap: 12287992k total, 180668k used, 12107324k free, 65852k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13579 root 15 0 1141m 1.1g 696 S 1 55.1 57:49.02 glusterfs

========

[root@nas1 glusterfs]# cat glusterfs-client.vol
volume nas1brick1
type protocol/client
option transport-type tcp/client
option remote-host 10.5.1.180
option remote-subvolume nas1brick1p
end-volume

volume nas1brick2
type protocol/client
option transport-type tcp/client
option remote-host 10.5.1.180
option remote-subvolume nas1brick2p
end-volume

volume nas1-ns
type protocol/client
option transport-type tcp/client
option remote-host 10.5.1.180
option remote-subvolume nas1-nsp
end-volume

volume nas1
type cluster/unify
option namespace nas1-ns
option scheduler rr
subvolumes nas1brick1 nas1brick2
end-volume

volume nas2
type protocol/client
option transport-type tcp/client
option remote-host 10.10.1.185
option remote-subvolume nas2
end-volume

volume array
type cluster/afr
subvolumes nas2 nas1
option favorite-child nas2
end-volume

swankier <swankier>

 


 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by gowda (Posted a comment)
  • -unavailable- added by johnlepikhin (Posted a comment)
  • -unavailable- added by raghavendra (Posted a comment)
  • -unavailable- added by swankier (Submitted the item)

    Follow 4 latest changes.

    Date | Changed By | Updated Field | Previous Value => Replaced By
    Tue 23 Jun 2009 05:56:21 PM UTC | gowda | Status | None => Fixed
     | | Open/Closed | Open => Closed
    Mon 05 Jan 2009 04:50:11 PM UTC | raghavendra | Assigned to | None => raghavendra
     | | Originator Name | => swankier
