bug #62231: LwIP tcp socket stalls when "plug pulled", creates leak when closed

Submitter:  Chris Hultqvist <chrishul>
Submitted:  Tue 29 Mar 2022 10:52:34 AM UTC
Category:  sockets/netconn
Severity:  3 - Normal
Item Group:  Faulty Behaviour
Status:  None
Privacy:  Public
Assigned to:  None
Open/Closed:  Open
Planned Release:  None
lwIP version:  2.1.3


Wed 04 Sep 2024 04:35:07 PM UTC, comment #21: 

comment #19:

> Showing that other systems' APIs provide similar functionality would certainly help you push your suggested patch through.


Good idea.

FreeBSD (for example) seems to match my logic. I see nothing in their close path that evaluates whether there is unsent or unacked data (the check my PR removes) when determining whether to drop the TCP connection.

https://github.com/freebsd/freebsd-src/blob/c75a18905e308f69b01f19c3d7d613883a008e79/sys/netinet/tcp_usrreq.c#L2675-L2676

Brendan McDonnell <brendan_mod_emb>
Fri 05 Jul 2024 04:52:36 PM UTC, comment #20: 


comment #19:

> Still, it raises the question of in which instances RST is being used in other IP layers (lwIP counterparts), if it is implemented at all.


RST is a feature of TCP. Not of IP, nor of UDP, etc.

Brendan McDonnell <brendan_mod_emb>
Fri 05 Jul 2024 04:07:49 PM UTC, comment #19: 


> OTOH, if an application is periodically making (or attempting to make) connections, and the allocated heap memory for those connections is persisting for a long time beyond what's expected, one could wind up with similar results to a memory leak. If the total available heap is less than the steady-state excess allocation that would occur (between connections/allocations and timeouts), then the resulting behavior would look like a memory leak, right?

Correct, absolutely. That's why I ran the stress test, repeatedly triggering the server module (the one having the issue), and left it overnight. On each operation the heap usage only increased by at most 40 bytes (random sizes), but by morning this procedure had consumed tens of kilobytes. Without a doubt this memory had been consumed by a leak. Testing the same thing with a normal disconnect (client TCP_close) did not have the same effect.
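(A rough sanity check, assuming one reconnect every 20 seconds as in comment #9: 40 bytes × 3 reconnects/minute × 60 minutes × 8 hours ≈ 57.6 kB, which is consistent with the tens of kilobytes observed overnight.)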

I will not argue too much with you about the TCP protocol. I did not consider the possibility of queuing the closure, but I guess it may occur in some cases. You definitely have a point if FIN requires a handshake and RST doesn't. Still, it raises the question of in which instances RST is being used in other IP layers (lwIP counterparts), if it is implemented at all.

Showing that other systems' APIs provide similar functionality would certainly help you push your suggested patch through.

Chris Hultqvist <chrishul>
Fri 05 Jul 2024 02:25:08 PM UTC, comment #18: 

comment #17:

> Brendan, I hope we can agree on the following:
> A memory leak is: 1. memory allocation that is never freed, and 2. allocation that is not done intentionally (as a deliberate permanent allocation on the heap would be).
> This means that an allocation kept beyond regular use and only returned after some sort of timeout is not a leak.


I appreciate the distinction you're making here. I may have conflated a detail here or there, as I troubleshot a few network-related issues in quick succession.

OTOH, if an application is periodically making (or attempting to make) connections, and the allocated heap memory for those connections is persisting for a long time beyond what's expected, one could wind up with similar results to a memory leak. If the total available heap is less than the steady-state excess allocation that would occur (between connections/allocations and timeouts), then the resulting behavior would look like a memory leak, right?

Is it possible that was the case in your stress testing?

> Somewhere I read that whenever a new PCB is needed it is allocated permanently, never to be freed, and reused over and over again. This is not a memory leak either.


IDK if that is the case, but I would agree (not a leak), hypothetically.

> I managed to stress-test my system over a long period of time and the remaining heap space kept decreasing. That is a memory leak. I might be able to do some more testing this coming fall.


Should we table this part of discussion until then?

Should discussion of my issue and patch continue here on this bug report? Since it may or may not be the same as OP.

> I can perfectly understand your point about wanting to terminate the connection altogether instantly. You probably have secondary effects from the presence of the zombie connection.


Indeed.

> I don't know a lot about the TCP protocol (particularly the difference between FIN and RST), but it seems to me that once you want the connection to terminate there should be only one single way to do it.


No, a normal graceful termination (w/ FIN/ACK etc.) includes a handshake to ensure that both sides of the connection acknowledge it. The sockets implementation ensures pending transmissions complete before initiating the termination sequence. The necessary heap remains allocated (on both ends) until the handshake is done, and is then cleaned up at termination.

A hard disconnect doesn't handshake. The peer that wants to disconnect sends a TCP RST and frees its resources right away. The other end may receive that RST and abort/clean up then; or it may not receive it, and eventually time out (and clean up).
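As a minimal sketch of these two paths at the sockets layer (standard BSD socket semantics; the function names are mine and error handling is omitted):

#include <sys/socket.h>
#include <unistd.h>

/* Graceful termination: FIN handshake; pending data is flushed first. */
void gracefulClose(int sock)
{
   char buf[256];

   shutdown(sock, SHUT_WR);                  /* queue a FIN after any unsent data */
   while (recv(sock, buf, sizeof buf, 0) > 0)
      ;                                      /* drain until the peer's FIN (recv() == 0) */
   close(sock);                              /* freed once the handshake completes */
}

/* Hard disconnect: SO_LINGER with a zero timeout requests a RST on close. */
void hardClose(int sock)
{
   struct linger lin = { .l_onoff = 1, .l_linger = 0 };

   setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof lin);
   close(sock);                              /* sends RST; local resources freed immediately */
}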

> What could be gained by letting the connection hang around for a while?


In normal cases, it is part of a non-blocking implementation: e.g. it allows the application to enqueue its final transmission and the socket closure.

In hard disconnect cases, for the peer (A) initiating the closure, nothing is to be gained. But on the other end (B), if the RST is not received, (B) can only wait to receive something from (A), or time out.

Brendan McDonnell <brendan_mod_emb>
Fri 05 Jul 2024 07:14:47 AM UTC, comment #17: 

Brendan, I hope we can agree on the following:
A memory leak is: 1. memory allocation that is never freed, and 2. allocation that is not done intentionally (as a deliberate permanent allocation on the heap would be).
This means that an allocation kept beyond regular use and only returned after some sort of timeout is not a leak.
Somewhere I read that whenever a new PCB is needed it is allocated permanently, never to be freed, and reused over and over again. This is not a memory leak either.
I managed to stress-test my system over a long period of time and the remaining heap space kept decreasing. That is a memory leak. I might be able to do some more testing this coming fall.
I can perfectly understand your point about wanting to terminate the connection altogether instantly. You probably have secondary effects from the presence of the zombie connection.
I don't know a lot about the TCP protocol (particularly the difference between FIN and RST), but it seems to me that once you want the connection to terminate there should be only one single way to do it. What could be gained by letting the connection hang around for a while?

Chris Hultqvist <chrishul>
Thu 04 Jul 2024 06:48:45 PM UTC, comment #16: 

comment #14:

> It is quite obvious to me that Brendan has not checked the heap for memory leaks, so that part of my bug report is not really addressed.


Yes I had.

I will (attempt to) attach some excerpt outputs from stats_display(), with and without my patch. (If it works, you can) diff them and note the "used:" differences.
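(For reference, a sketch of the lwipopts.h settings I believe are needed for stats_display() to report these "used:" figures, assuming an otherwise default configuration:)

/* lwipopts.h excerpt (sketch): enable the statistics module so that
   stats_display() prints memory usage, including the "used:" counters. */
#define LWIP_STATS          1
#define LWIP_STATS_DISPLAY  1
#define MEM_STATS           1
#define MEMP_STATS          1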

> Still, I would have tested the linger(0) procedure to see whether the cleanup within the lwIP layer works better after a sudden physical breakup of the connection. I was pretty convinced at the time that the issue was to be found in that layer.


If you had unsent or unacked data, I believe your setup may have worked w/ just linger(0), w/o my patch.

> Unfortunately I currently don't even have the hardware to test it.
> Besides, the problem is rather insidious, with only a few bytes being consumed each time it happens. Apparently no one else has come across it during the two years it has been in the bug listing.


(file #56227, file #56228)

Brendan McDonnell <brendan_mod_emb>
Thu 04 Jul 2024 05:52:03 PM UTC, comment #15: 

comment #13:

> Re comment #12:
> Looks like that patch may be correct.


Good news, thanks.

> However, I still don't get why we need this in the first place. Does anyone think there's a memory leak bug in lwIP that we want to work around by using linger(0)? That would not be the best thing to do, in my opinion! If there's a memory leak, try to find it and report the fix to us! I'm not aware of any problems in this area!


= Summary =

An application times out on a TCP download; there are no unacked or unsent packets. The application wants to abort the transfer, close the socket, and free its resources immediately. Is linger(0) the way to do so? If not, what is?

= Details =

My simplified test setup:

lwip_httpClient <---> ethSwitch1 <---> ethSwitch2 <---> httpServerPC

For the data download of an HTTP GET operation, my HTTP client application uses lwip_select() (with a nonzero struct timeval) and lwip_recv(). While the download is in progress, I break the link between ethSwitch1 and ethSwitch2, so that the client application times out (lwip_select() returns 0). At that time, there are neither unsent nor unacked data in the tcp pcb. My application - wishing to send a TCP RST, abort the socket, and free the pcb/resources immediately - calls its hardDisconnect() function (quoted below for convenience).
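A condensed sketch of that flow (the function name, buffer size, and 5-second timeout are my assumptions; hardDisconnect() is the helper quoted just below):

#include "lwip/sockets.h"

void hardDisconnect(int sock);   /* quoted below */

/* Wait up to 5 s for download data; abort the transfer on timeout. */
void pollOnce(int sock)
{
   char buf[1024];
   fd_set rfds;
   struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

   FD_ZERO(&rfds);
   FD_SET(sock, &rfds);

   if (lwip_select(sock + 1, &rfds, NULL, NULL, &tv) == 0)
   {
      /* lwip_select() returned 0: timed out with no data */
      hardDisconnect(sock);
   }
   else
   {
      (void)lwip_recv(sock, buf, sizeof buf, 0);
   }
}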

Without my patch, lwIP sends a TCP FIN (not RST), and the resources (tcp pcb, etc.) remain allocated until the TCP_MAXRTX (12) FIN retransmissions time out.

#include "lwip/sockets.h"

/* Intended effect: SO_LINGER with l_linger = 0 requests an abortive close
   (TCP RST, resources freed immediately); see the surrounding discussion
   for when lwIP actually honors this. */
void hardDisconnect(int sock)
{
   struct linger linger =
   {
     .l_onoff  = 1,
     .l_linger = 0,
   };

   lwip_setsockopt(sock, SOL_SOCKET, SO_LINGER, &linger, sizeof linger);
   lwip_shutdown(sock, SHUT_RDWR);
   lwip_close(sock);
}


Brendan McDonnell <brendan_mod_emb>
Thu 04 Jul 2024 02:39:28 PM UTC, comment #14: 

It is quite obvious to me that Brendan has not checked the heap for memory leaks, so that part of my bug report is not really addressed.
Still, I would have tested the linger(0) procedure to see whether the cleanup within the lwIP layer works better after a sudden physical breakup of the connection. I was pretty convinced at the time that the issue was to be found in that layer.
Unfortunately I currently don't even have the hardware to test it.
Besides, the problem is rather insidious, with only a few bytes being consumed each time it happens. Apparently no one else has come across it during the two years it has been in the bug listing.

Chris Hultqvist <chrishul>
Thu 04 Jul 2024 02:02:49 PM UTC, comment #13: 

Re comment #12:
Looks like that patch may be correct.

However, I still don't get why we need this in the first place. Does anyone think there's a memory leak bug in lwIP that we want to work around by using linger(0)? That would not be the best thing to do, in my opinion! If there's a memory leak, try to find it and report the fix to us! I'm not aware of any problems in this area!

Simon Goldschmidt <goldsimon>
Group administrator
Wed 03 Jul 2024 11:06:21 AM UTC, comment #12: 
Brendan McDonnell <brendan_mod_emb>
Tue 02 Jul 2024 09:12:06 PM UTC, comment #11: 

Enrico, unfortunately I don't have the possibility to test the patch (it's been a while since both bugs were reported).
Still, I find it hard to believe they are related. The triggering event in my case is the interruption of the communication with the other peer.
On the other hand, tcp_input() is called when a packet is complete and ready to be processed. A packet in progress when the communication is interrupted will most likely never arrive there.
I would rather suspect a deeper level, such as ip.c, in my case. I was discouraged from debugging it by the complex memory management, with delayed releases of allocations.

Chris Hultqvist <chrishul>
Tue 02 Jul 2024 08:27:49 AM UTC, comment #10: 

Maybe this behaviour could be related to bug #61666? Could you try to apply the patch proposed there (e.g. diff file #53000)?

Enrico Murador <hen>
Tue 02 Jul 2024 07:38:03 AM UTC, comment #9: 

As stated before, tcp_close worked for me. I made the client reset every 20 seconds, and when it reconnected the server killed the old socket with a simple close. It would have saturated in no time with some sockets left "lingering". The issue was that overnight this accumulated massive amounts of leaked memory.

Brendan, maybe you should open a new bug report with more detailed information on your setup and a more accurate title. How do you detect that the socket is still alive, by the way?

Chris Hultqvist <chrishul>
Tue 02 Jul 2024 05:14:11 AM UTC, comment #8: 

I'm not sure I can follow this discussion. However, if you want to abort (send RST), call tcp_abort, not tcp_close.
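(A minimal raw-API sketch of that advice; the function name is mine:)

#include "lwip/tcp.h"

/* Raw API: tcp_abort() sends a RST (if connected) and frees the pcb
   immediately; the pcb must not be used afterwards. When aborting from
   within a tcp callback, return ERR_ABRT so lwIP knows the pcb is gone. */
void abortConnection(struct tcp_pcb *pcb)
{
   tcp_abort(pcb);
}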

Simon Goldschmidt <goldsimon>
Group administrator
Tue 02 Jul 2024 02:50:53 AM UTC, comment #7: 

On further research and testing, I believe this is an lwIP bug. I may submit a patch.

Brendan McDonnell <brendan_mod_emb>
Tue 02 Jul 2024 12:46:11 AM UTC, comment #6: 

Stian, I don't follow. The whole reason I started using linger on (+ timeout = 0) is that otherwise it was not aborting the socket.

Brendan McDonnell <brendan_mod_emb>
Tue 02 Jul 2024 12:31:11 AM UTC, comment #5: 

My thought: Disable linger, if you want to go directly to abort.

For "linger" in general, see Linux man page regarding SO_LINGER.

https://man7.org/linux/man-pages/man7/socket.7.html

Stian Sebastian Skjelstad <mywave>
Tue 02 Jul 2024 12:16:04 AM UTC, comment #4: 

In further testing, I've found that my proposed solution only works when there is unsent or unacked data in the TCP PCB. So for instance, if the route between host and server goes down in the middle of an HTTP GET download (where there is typically neither), and you try to abort as I described in comment #1, lwIP will try to close the socket normally (i.e. the client sends a FIN and waits for FIN/ACK).

It's this line of code that makes it behave that way:
https://github.com/lwip-tcpip/lwip/blob/62ac0faad804198e00bbcc70f5ff5927e05a5791/src/api/api_msg.c#L985

Can anyone clarify:
 - Is this lwIP behavior correct? If so, why?
 - How can an application be sure to trigger a TCP RST and immediately free the socket resources when there is no unsent or unacked data?

(... When my application wants to abort and immediately free the socket resources in this scenario, I think my workaround for the moment may be to add a call to lwip_send() w/ a dummy byte before calling lwip_shutdown() / lwip_close(); a sketch follows.)
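A sketch of that stopgap (hypothetical, and the dummy send may itself fail once the route is down; hardDisconnect() is the helper quoted in comment #15):

#include "lwip/sockets.h"

void hardDisconnect(int sock);   /* as quoted in comment #15 */

/* Enqueue one dummy byte so the pcb has unsent/unacked data, which
   steers the SO_LINGER(0) close down the abort (RST) path. */
void hardDisconnectWorkaround(int sock)
{
   char dummy = 0;

   (void)lwip_send(sock, &dummy, sizeof dummy, 0);  /* return value ignored on purpose */
   hardDisconnect(sock);
}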

Brendan McDonnell <brendan_mod_emb>
Tue 25 Jun 2024 12:27:26 PM UTC, comment #3: 

I think the key thing is whether the application wants the socket to close normally - in which case the app should wait for lwip to complete the transfer (or time out) - or whether the app wants to abort the operation (as in my case) - in which case lwip should (and I believe does) send a RST and clean up immediately.

Brendan McDonnell <brendan_mod_emb>
Tue 25 Jun 2024 12:16:31 PM UTC, comment #2: 

Hi Brendan and thanks for the input

Interesting. It has been a while, and I have since migrated the server side to a Raspberry Pi for various reasons.

My memory has faded slightly, but I remember that closing the socket actually worked, because the number of allowed connections is very limited in the ESP IDF framework and it rapidly saturated when client modules were abruptly reset and started up again as new connections. When the old ones were identified and killed, the only remains were small chunks of memory on the heap that didn't go away after the TCP timeout.

It does seem probable that this memory corresponds to pending socket transfer data, but in my view it should be freed by the "lwip_close(socket)" call.

I would really like to test your approach to see if that stops the module from leaking memory, but currently I have no setup that enables me to do so.

My conclusion is that our cases differ in that I managed to close the identified stalled socket while you didn't. The suggested linger option may have resolved the leak.

Chris Hultqvist <chrishul>
Mon 24 Jun 2024 08:56:59 PM UTC, comment #1: 

I had a similar issue. In my case, the application had timed out and decided to abort the transfer, but it was not telling the socket that it wanted to abort (instead of a normal shutdown/close), so the socket stayed alive, trying to complete the transfer of the in-flight data (whose memory remained allocated), until the socket timed out.

My solution, when the application wants to abort the connection, is to use lwip_setsockopt() to set linger on with 0 seconds before lwip_shutdown() / lwip_close(), as described here:

https://stackoverflow.com/a/41431236/1995714

Brendan McDonnell <brendan_mod_emb>
Tue 29 Mar 2022 10:52:34 AM UTC, original submission:  

The socket connection stalls when the other peer is reset and cannot be reached. The connection only resolves by itself after 2 hours.

When the socket is closed by force, it leaves behind small chunks on the heap (0-36 bytes).

Running the Espressif IDF framework; unable to read the actual lwIP version in the release. Using the Mongoose 7.4 layer.

Refer to https://github.com/espressif/esp-idf/issues/8668 for details.

Chris Hultqvist <chrishul>

 


Attached Files

  • stats-without-patch.txt, file #56227
  • stats-with-patch.txt, file #56228

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • hen (posted a comment)
  • goldsimon (posted a comment)
  • mywave (posted a comment)
  • brendan_mod_emb (posted a comment)
  • chrishul (submitted the item)


     

    Follow 3 latest changes.

    Date        Changed by       Updated Field   Previous Value => Replaced by
    2024-07-04  brendan_mod_emb  Attached File   - => Added stats-without-patch.txt, #56227
    2024-07-04  brendan_mod_emb  Attached File   - => Added stats-with-patch.txt, #56228
    2022-03-29  chrishul         Carbon-Copy     - => Added chrishul
