lwIP - A Lightweight TCP/IP stack - Bugs: bug #23726, pbuf pool exhaustion on slow recv()

 
 


bug #23726: pbuf pool exhaustion on slow recv()

Submitted by:  Thomas Taranowski <taranowski>
Submitted on:  Fri 27 Jun 2008 11:50:27 AM UTC  
 
Category: sockets/netconn
Severity: 3 - Normal
Item Group: Feature Request
Status: Fixed
Privacy: Public
Assigned to: Simon Goldschmidt <goldsimon>
Open/Closed: Closed
Planned Release: None
lwIP version: 1.2.0


Wed 11 Feb 2009 07:09:34 PM UTC, comment #10:

Fixed by adding configurable default value for conn->recv_bufsize: RECV_BUFSIZE_DEFAULT
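
For reference, a minimal sketch of how such a default could be overridden from lwipopts.h (the option name follows the comment above; the byte value is only an illustrative budget, and LWIP_SO_RCVBUF must be enabled for conn->recv_bufsize to take effect):

    /* Illustrative lwipopts.h excerpt -- example budget, not lwIP's shipped defaults */
    #define LWIP_SO_RCVBUF        1            /* enable conn->recv_bufsize / SO_RCVBUF */
    #define RECV_BUFSIZE_DEFAULT  (8 * 1536)   /* bytes buffered per netconn before drops */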

Simon Goldschmidt <goldsimon>
Project Administrator - In charge of this item.
Sun 13 Jul 2008 01:24:54 PM UTC, comment #9:

Just about INT_MAX as the default value for conn->recv_bufsize: I just want to point out that it was chosen to keep the same behavior as before SO_RCVBUF was introduced. But a configurable default value in opt.h/lwipopts.h could be good.

Frédéric Bernon <fbernon>
Project Member
Fri 11 Jul 2008 12:03:12 PM UTC, comment #8:

Hehe, I'm sorry Jim, but I meant to ask for Thomas' comments; your name only caught my eye because it was in the topmost comment ;-)

Thomas, I wanted to implement the configurable recv_bufsize default just yesterday, but I didn't find the time. Maybe today...

Simon Goldschmidt <goldsimon>
Project Administrator - In charge of this item.
Thu 10 Jul 2008 08:07:08 PM UTC, comment #7:

Regarding the initial question, the socket in question was UDP. I see some mention here that SO_RCVBUF has been implemented, and I see it in CVS head. I believe that solution will work for my needs, with the following tweak in netconn_alloc():
conn->recv_bufsize = INT_MAX;
to
conn->recv_bufsize = RECV_BUFSIZE; /* as defined in opt.h and overridable in lwipopts.h, as Simon mentions */

I see this feature was implemented by Frederic back in November, so my bad for not seeing it. If we implement Simon's proposed change to default to a sane recvbuf size (num pbufs/2 or something similar), defined in opt.h, rather than INT_MAX, then this bug can be closed out.
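
One way to express that suggestion as a compile-time default, reusing Thomas' RECV_BUFSIZE name from above (purely illustrative: half the pool, converted to bytes via the pool pbuf payload size):

    /* Hypothetical default: allow each connection at most half the pbuf pool, in bytes */
    #define RECV_BUFSIZE  ((PBUF_POOL_SIZE / 2) * PBUF_POOL_BUFSIZE)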

Thomas Taranowski <taranowski>
Project Member
Thu 10 Jul 2008 07:22:18 PM UTC, comment #6:

I think I commented enough ;) This was never an issue for me, as my targets are strictly raw API and TCP apps (except for ARP, DNS and ICMP which should never get backed up). It did get me thinking about other things I've seen though, as you noticed.

Jim Pettinato <jim_pettinato>
Project Member
Thu 10 Jul 2008 07:07:04 PM UTC, comment #5:

Jim, any more comments on this? Do you still have a problem even though you could use LWIP_SO_RCVBUF in 1.3.0?

Simon Goldschmidt <goldsimon>
Project Administrator - In charge of this item.
Fri 27 Jun 2008 09:18:42 PM UTC, comment #4:

re comment #3:
Jim, your ideas are interesting, but seem a little off-topic here, I'm afraid.

re original post:

First, which kind of connection are you talking about, RAW/UDP (which are handled the same way) or TCP?

I think this is already covered but must (of course) be achieved by the programmer through lwipopts.h:

- For TCP, of course limit the receive window: you have to tweak the window and the PBUF_POOL count to never run out of pbufs!
- For UDP/RAW: use SO_RCVBUF (it is already implemented!) and change the default from INT_MAX (as it is now) to a reasonable, configurable default (again, PBUF_POOL must be configured accordingly, keeping the number of connections in mind)

Of course, this configuration (buf-size per connection * connection count vs. PBUF_POOL size) is not easy and it would be better to have some kind of handling here, but like Kieran, I think the current situation is acceptable.
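
To illustrate that trade-off, a rough lwipopts.h sketch (the numbers are an example budget only, not recommended values; the idea is that buffering per connection times the expected connection count should stay below the pool size):

    /* Example budget only: keep per-connection buffering and the pbuf pool consistent */
    #define PBUF_POOL_SIZE        32              /* total RX pool pbufs */
    #define PBUF_POOL_BUFSIZE     1536            /* payload bytes per pool pbuf */
    #define TCP_MSS               1460            /* per-segment payload */
    #define TCP_WND               (4 * TCP_MSS)   /* bounds TCP receive buffering */
    #define LWIP_SO_RCVBUF        1               /* enables conn->recv_bufsize for UDP/RAW */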

The only reasonable change would be to introduce a second limitation (number of pbufs) in addition to the already introduced TCP window size and conn->recv_bufsize (which is used by SO_RCVBUF) and to change the default of conn->recv_bufsize.

Simon Goldschmidt <goldsimon>
Project Administrator - In charge of this item.
Fri 27 Jun 2008 02:30:07 PM UTC, comment #3:

Well, with the callback (raw) API, since there is typically no packet queueing between stack and app (unless the application itself implements one for some reason), it seems to me that this issue would not be present.

Typically I would think that pbuf pool depletion when using the raw API happens when the driver (at the interrupt level) queues packets faster than the combined lwIP+application(s) task can process them. Careful tuning can avoid problems here (at least I finally figured out how to strike a balance).

This brings me to some related issues when using the raw API - one, the potential for broadcast flooding, and also the presence of a disconnect in the TCP receive window advertising.

With the raw API, since there is not typically an app-level queue, incoming packets must be buffered at the ISR/driver level until the stack task runs and processes them. A high level of broadcast traffic really adds to the driver buffering requirements. I don't know about your corporate LAN, but ours has a ridiculous amount of broadcast traffic and makes a good stress test platform for our lwIP devices!

Would it be beneficial for raw API users to have a function or macro to block broadcast packets at the ISR level - packets that will eventually be discarded up the chain anyway? That would save buffering them and tying up pbufs...
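
No such hook exists in lwIP as far as this thread goes; a minimal sketch of what a driver-level check could look like, assuming the driver can inspect the raw Ethernet frame before allocating a pbuf for it (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical ISR/driver-level helper: returns true for link-level broadcast
     * frames so the driver can drop them before a pbuf is allocated for them. */
    static bool eth_frame_is_broadcast(const uint8_t *frame)
    {
      static const uint8_t bcast[6] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
      return memcmp(frame, bcast, sizeof(bcast)) == 0;  /* destination MAC is the first 6 bytes */
    }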

Regarding the TCP window advertising... when I watch a large file download via FTP (for example) to my lwIP raw-API based system, I don't see the window being reduced by more than one TCP_MSS - since the stack hasn't seen the buffered packets yet, and when it does, the app processes each packet immediately via the callback anyway, restoring the original window size. That makes it difficult to get TCP working smoothly, since we have no way to tell a fast sender that the driver (ISR) receive buffer is full other than dropping packets. Am I missing something here? I can't think of any way to do TCP receive window updates at the ISR level, yet with the raw API that seems to be the only place where it is known how much data has already been received and buffered but not acknowledged.
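
For context, a rough sketch of the raw-API receive path being described: the callback consumes the data and immediately calls tcp_recved(), which re-opens the window, which is why the advertised window rarely shrinks by more than one segment here (error handling trimmed for brevity):

    #include "lwip/tcp.h"

    /* Raw-API receive callback sketch: data is processed inline and the
     * consumed bytes are returned to the receive window right away. */
    static err_t my_recv(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
    {
      (void)arg; (void)err;
      if (p == NULL) {                /* remote end closed the connection */
        tcp_close(tpcb);
        return ERR_OK;
      }
      /* ... process p->payload / the pbuf chain here ... */
      tcp_recved(tpcb, p->tot_len);   /* restore the advertised window */
      pbuf_free(p);
      return ERR_OK;
    }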

Speaking of TCP windows - wouldn't that be a good way to handle the socket buffer problem? With UDP and raw IP, just dropping packets posted to a full mailbox would be okay... and with TCP, syncing the receive window to the predetermined available mailbox size, rather than initializing all ports' rcv_wnd to some generic value?

Jim Pettinato <jim_pettinato>
Project Member
Fri 27 Jun 2008 12:09:00 PM UTC, comment #2:

Good point on the weakness in the second solution. It looks like SO_RCVBUF should be the primary solution, as it's already part of the standard. For my case, that probably won't be enough. I may get a chance to implement these in the next couple of months, and will do so on a 1.3+ baseline.

Is this issue present with the native callback interface? If not, there could be some way to fix this in the socket layer.

Thomas Taranowski <taranowski>
Project Member
Fri 27 Jun 2008 12:03:48 PM UTC, comment #1:

This is one of those weaknesses in our sockets layer that is close to the point where we say that the complexity to fix it is not worth the extra code, and that the lwIP sockets layer isn't going to work in all corner cases.

However, the correct solution is definitely to do SO_RCVBUF, or possibly your last solution, where we stop enqueueing packets on a socket once it already has more than a fraction of the total pbufs queued.

The second solution you suggest of cleaning up packets is not good in that we'll be binning data that has been acknowledged, thus potentially leading to data corruption. This might be better than the stack locking up altogether, but not by much! I'd much prefer any solution that relies on throwing away received packets to do so early, i.e. before they are acknowledged.

Kieran Mansley <kieranm>
Project Administrator
Fri 27 Jun 2008 11:50:27 AM UTC, original submission:

Using a late 1.2-based CVS head.

When a user binds a socket, the stack will enqueue incoming data into that socket's port. If recv() is never called to empty that socket, the port's queue will grow until it consumes all available pbufs, thereby essentially latching up the stack. There are several possible fixes:

* Implement SO_RCVBUF (this still leaves the stack open to failure).

* Implement a handler that is called when no PBUF_POOL pbufs are available. This handler would walk through the open ports and clear each port's queue, freeing PBUF_POOL pbufs. This would allow the stack to recover from such a condition.

* Implement a mechanism to limit the depth of each port's queue. Similar to SO_RCVBUF, but it would be a stack-enforced limit: if the stack attempts to enqueue an incoming payload but the port already holds the maximum number of pbufs, discard the incoming message (see the sketch after this list).
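
A minimal sketch of what such a stack-enforced limit could look like for UDP/RAW connections. MAX_QUEUED_PBUFS_PER_CONN and the caller-maintained queued counter are illustrative names, not existing lwIP symbols:

    #include "lwip/opt.h"
    #include "lwip/pbuf.h"

    /* Hypothetical per-connection cap: at most a quarter of the pool per connection */
    #ifndef MAX_QUEUED_PBUFS_PER_CONN
    #define MAX_QUEUED_PBUFS_PER_CONN  (PBUF_POOL_SIZE / 4)
    #endif

    /* Returns 1 if the incoming chain may be enqueued, 0 if it should be dropped.
     * 'queued' is a caller-maintained count of pbufs already waiting in this
     * connection's receive mailbox (a field lwIP does not have today). */
    static int may_enqueue(u16_t *queued, struct pbuf *p)
    {
      u16_t n = pbuf_clen(p);
      if ((u16_t)(*queued + n) > MAX_QUEUED_PBUFS_PER_CONN) {
        return 0;   /* drop instead of letting one slow reader exhaust the pool */
      }
      *queued = (u16_t)(*queued + n);
      return 1;
    }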

Thomas Taranowski <taranowski>
Project Member

 

No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by fbernon (Posted a comment)
  • -unavailable- added by goldsimon (Posted a comment)
  • -unavailable- added by jim_pettinato (Posted a comment)
  • -unavailable- added by kieranm (Posted a comment)
  • -unavailable- added by taranowski (Submitted the item)


    Follow 3 latest changes.

    Date                             Changed By   Updated Field   Previous Value => Replaced By
    Wed 11 Feb 2009 07:09:34 PM UTC  goldsimon    Status          None => Fixed
    Wed 11 Feb 2009 07:09:34 PM UTC  goldsimon    Assigned to     None => goldsimon
    Wed 11 Feb 2009 07:09:34 PM UTC  goldsimon    Open/Closed     Open => Closed
