Wed 06 Feb 2008 07:06:00 PM UTC, comment #1:
Gow,
Did the OOM kill of glusterfsd happen again? Do we still have a memory leak in the product? In all my testing with the recent code base, I never observed memory leaks, so I assume all of these memory leak bugs are fixed. Can you confirm it is fixed? It has been 3 months since this bug was opened, and we need to close it.
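One way to answer that empirically (a minimal sketch, not part of the original report; the PID argument and the one-hour sampling window are placeholders) is to poll the daemon's resident set size from /proc and watch for monotonic growth:

    import sys
    import time

    def rss_kb(pid):
        # VmRSS is reported in kB in /proc/<pid>/status on Linux.
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        raise RuntimeError("no VmRSS for pid %d" % pid)

    def main():
        pid = int(sys.argv[1])          # PID of the glusterfs server
        samples = [rss_kb(pid)]
        for _ in range(59):             # one sample per minute for an hour
            time.sleep(60)
            samples.append(rss_kb(pid))
        print("start %d kB, end %d kB, delta %+d kB"
              % (samples[0], samples[-1], samples[-1] - samples[0]))

    if __name__ == "__main__":
        main()

A flat RSS under sustained load would support closing the bug; steady growth would reproduce the leak.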
Tue 30 Oct 2007 10:44:24 AM UTC, original submission:
GlusterFS was killed by the kernel OOM killer.
The host system was a 64-bit, dual-core Pentium machine running Linux 2.6.18 built for the x86_64 architecture.
oom-killer: gfp_mask=0x200d2, order=0
Call Trace:
[<ffffffff802a6950>] out_of_memory+0x33/0x212
[<ffffffff8020e002>] __alloc_pages+0x21a/0x2a5
[<ffffffff802301d9>] read_swap_cache_async+0x45/0xd8
[<ffffffff802ab12a>] swapin_readahead+0x62/0xd5
[<ffffffff802089e5>] __handle_mm_fault+0x644/0x91f
[<ffffffff8020a6ab>] do_page_fault+0x39d/0x706
[<ffffffff8020c4a2>] do_lookup+0x63/0x173
[<ffffffff8020c8c5>] dput+0x23/0x153
[<ffffffff80209ba9>] __link_path_walk+0xde8/0xf44
[<ffffffff80258ea5>] error_exit+0x0/0x84
[<ffffffff8026eff6>] physflat_send_IPI_mask+0x0/0x6a
[<ffffffff8025ba73>] copy_user_generic+0x93/0x12a
[<ffffffff8021469c>] sys_select+0x2ac/0x3d6
[<ffffffff80218b3c>] cp_new_stat+0xe7/0xff
[<ffffffff8021ffe3>] __up_read+0x13/0x8a
[<ffffffff8020a6df>] do_page_fault+0x3d1/0x706
[<ffffffff802214ec>] sys_newstat+0x28/0x31
[<ffffffff802581d6>] system_call+0x7e/0x83
Mem-info:
Node 0 DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
Node 0 DMA32 per-cpu:
cpu 0 hot: high 186, batch 31 used:2
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:22
cpu 1 cold: high 62, batch 15 used:59
Node 0 Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:4
cpu 0 cold: high 62, batch 15 used:14
cpu 1 hot: high 186, batch 31 used:29
cpu 1 cold: high 62, batch 15 used:53
Node 0 HighMem per-cpu: empty
Free pages: 23400kB (0kB HighMem)
Active:152 inactive:979221 dirty:0 writeback:0 unstable:0 free:5850 slab:13802 mapped:21 pagetables:2375
Node 0 DMA free:12576kB min:24kB low:28kB high:36kB active:0kB inactive:0kB present:12224kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 3510 4015 4015
Node 0 DMA32 free:9496kB min:7076kB low:8844kB high:10612kB active:48kB inactive:3513112kB present:3594848kB pages_scanned:1056096 all_unreclaimable? no
lowmem_reserve[]: 0 0 505 505
Node 0 Normal free:1328kB min:1016kB low:1268kB high:1524kB active:560kB inactive:403644kB present:517120kB pages_scanned:25460 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 4*4kB 4*8kB 5*16kB 3*32kB 3*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12576kB
Node 0 DMA32: 112*4kB 7*8kB 2*16kB 0*32kB 68*64kB 34*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 9496kB
Node 0 Normal: 182*4kB 35*8kB 2*16kB 1*32kB 0*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1328kB
Node 0 HighMem: empty
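The buddy-list lines above can be cross-checked: the per-order free counts must sum to the total the kernel prints after the "=". For the DMA zone, 4*4 + 4*8 + 5*16 + 3*32 + 3*64 + 128 + 256 + 512 + 1024 + 2048 + 2*4096 = 12576kB, as printed. A small sketch of that check (a hypothetical helper, written against the line format in this report):

    import re

    def check_buddy_line(line):
        # Sum the per-order "count*sizekB" terms and compare with the
        # "= totalkB" figure the kernel printed at the end of the line.
        terms = re.findall(r"(\d+)\*(\d+)kB", line)
        total = int(re.search(r"= (\d+)kB", line).group(1))
        computed = sum(int(n) * int(size) for n, size in terms)
        assert computed == total, (computed, total)
        return computed

    print(check_buddy_line(
        "Node 0 DMA: 4*4kB 4*8kB 5*16kB 3*32kB 3*64kB 1*128kB "
        "1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12576kB"))  # 12576

The DMA32 and Normal totals (9496kB and 1328kB) check out the same way. With only about 23 MB free across all zones and the inactive lists almost entirely swap-backed, the allocator had nowhere left to go.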
Swap cache: add 988309, delete 9069, find 379/727, race 0+0
Free swap = 3868972kB
Total swap = 7815580kB
Free swap: 3868972kB
1179648 pages of RAM
166655 reserved pages
184 pages shared
979245 pages swap cached
Out of Memory: Kill process 8873 (glusterfs) score 31175 and children.
Out of memory: Killed process 8873 (glusterfs).
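For context on the score: in 2.6-era kernels the OOM killer picks the task with the highest badness() value, which starts from the task's total virtual memory in pages, adds half of each child's, and is then discounted for CPU time, runtime, and privilege. A simplified model for intuition (an approximation, not a bit-exact reimplementation of mm/oom_kill.c; all parameters here are illustrative):

    import math

    def badness(total_vm, child_vms=(), cpu_time=0, run_time=0,
                niced=False, superuser=False, oom_adj=0):
        # Baseline: the task's virtual memory size, in pages.
        points = total_vm
        # Half of each child's VM counts against the parent, so a
        # parent of many large children is a preferred victim.
        points += sum(vm // 2 + 1 for vm in child_vms)
        # Long-running, CPU-heavy tasks are treated as more valuable.
        s = math.isqrt(cpu_time)
        if s:
            points //= s
        s = math.isqrt(math.isqrt(run_time))
        if s:
            points //= s
        # Niced tasks score double; privileged tasks are quartered.
        if niced:
            points *= 2
        if superuser:
            points //= 4
        # /proc/<pid>/oom_adj shifts the score left or right.
        if oom_adj > 0:
            points <<= oom_adj
        elif oom_adj < 0:
            points >>= -oom_adj
        return points

Before any discounts, a score of 31175 corresponds to a footprint on the order of 31175 pages (roughly 122 MB with 4 kB pages), consistent with a daemon that had grown large enough to dominate victim selection.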