
bug #50837: TCP: zero window probe doesn't timeout

Submitter:        preet <preetpal>
Submitted:        Thu 20 Apr 2017 07:12:52 PM UTC
Category:         TCP
Severity:         3 - Normal
Item Group:       Faulty Behaviour
Status:           Fixed
Privacy:          Public
Assigned to:      jcunningham
Open/Closed:      Closed
Planned Release:  None
lwIP version:     2.0.0


Fri 25 Aug 2017 07:01:08 PM UTC, comment #53: 


> tcp_create_segment() is being used


Right, didn't think of that ;-)

Simon Goldschmidt <goldsimon>
Group administrator
Fri 25 Aug 2017 03:11:44 PM UTC, comment #52: 

I attempted to move tcp_split_unsent_seg() to tcp.c, but ran into the complication that tcp_create_segment() is being used, which is local to tcp_out.c

I have committed the patch as is in f582c8833969ac7f4ab0c8329ac277bf3ac3a3e3

Closing issue for now, since this change should resolve the remaining reported persist timer/zero window issues :)

Joel Cunningham <jcunningham>
Group Member
Thu 24 Aug 2017 07:30:26 PM UTC, comment #51: 


> Yes, that's the core concept change


Alright. Just making sure... :-)

> Would you prefer it in tcp.c


Perhaps. If it's used somewhere else, we can still move it, but keeping it static might make it smaller?

Other than that, go ahead, I'd say...

Simon Goldschmidt <goldsimon>
Group administrator
Tue 22 Aug 2017 08:51:17 PM UTC, comment #50: 


> tcp_in.c not resetting the persist timer any more looks a little odd. But since we always call tcp_output() for every received ACK, it should be good I guess.


Yes, that's the core concept change, only perform maintenance on the timer in tcp_output(), which determines whether we can really send a segment or not :)

> Maybe I'd move tcp_split_unsent_seg() and make it static...


So tcp_split_unsent_seg() is called from tcp.c, and I placed the implementation in tcp_out.c since it is an "output"-specific function.  Would you prefer it in tcp.c?



Joel Cunningham <jcunningham>
Group Member
Tue 22 Aug 2017 07:55:14 PM UTC, comment #49: 

OK. From reading the patch, it seems to look better now.
tcp_in.c not resetting the persist timer any more looks a little odd. But since we always call tcp_output() for every received ACK, it should be good I guess.

And with TCP_CHECKSUM_ON_COPY==0, the added code size should be small, too.

Maybe I'd move tcp_split_unsent_seg() and make it static...

Simon Goldschmidt <goldsimon>
Group administrator
Tue 22 Aug 2017 03:08:13 PM UTC, comment #48: 

Simon,

I have rebased the patch onto current master (1ed1cfe83adee1c7c4c0e91fa5d728c781e3803a), there was a merge conflict with CHANGELOG.  Can you try to apply it again?  Thanks

(file #41616)

Joel Cunningham <jcunningham>
Group Member
Mon 21 Aug 2017 07:22:35 AM UTC, comment #47: 

Joel, I tried, but it says "revspec 'c2af41b' not found"

Simon Goldschmidt <goldsimon>
Group administrator
Fri 11 Aug 2017 06:49:57 PM UTC, comment #46: 

Will do, thanks!

Joel Cunningham <jcunningham>
Group Member
Fri 11 Aug 2017 06:29:47 PM UTC, comment #45: 

Yeah, sorry, I tried to find the time but failed. Give me a week or so :-/

Simon Goldschmidt <goldsimon>
Group administrator
Fri 11 Aug 2017 02:05:56 PM UTC, comment #44: 

Ping!  Anyone have a chance to review my patch?

Joel Cunningham <jcunningham>
Group Member
Fri 04 Aug 2017 07:16:40 PM UTC, comment #43: 

Attached patch re-works the persist timer to make the discussed changes.  From commit message:

This re-works the persist timer to have the following behavior:

  1) Only start persist timer when a buffered segment doesn't fit within
     the current window and there is no in-flight data.  Previously, the
     persist timer was always started when the window went to zero even
     if there was no buffered data (since the timer was managed in the
     receive pathway rather than the transmit pathway)
  2) Upon first fire of persist timer, fill the remaining window if
     non-zero by splitting the unsent segment.  If split segment is sent,
     persist timer is stopped, RTO timer is now ensuring reliable window
     updates
  3) If window is already zero when persist timer fires, send 1 byte probe
  4) Persist timer and zero window probe should only be active when the
     following are true:
       * no in-flight data (pcb->unacked == NULL)
       * when there is buffered data (pcb->unsent != NULL)
       * when pcb->unsent->len > pcb->snd_wnd

I also found a really helpful way to exercise a full window.  Download a large file (hosted by an LwIP device with an HTTP server) via curl, using rate limiting, which only reads from the socket buffer at the defined rate:

curl --limit-rate 1024k http://lwip-server/video.mp4

I'd appreciate any testing and code review, especially of the new function tcp_split_unsent_seg().  I have exercised both the TCP_CHECKSUM_ON_COPY and !TCP_CHECKSUM_ON_COPY cases with full windows to ensure the split is working correctly.
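
To restate rule 4 from the commit message as code, here is a minimal sketch (stand-in types; persist_timer_should_run() is a hypothetical helper for illustration, not a function from the patch):

/* Stand-in types, just enough for the sketch to compile. */
struct tcp_seg_s { unsigned short len; };
struct tcp_pcb_s {
  struct tcp_seg_s *unacked;  /* in-flight segments */
  struct tcp_seg_s *unsent;   /* buffered, not yet sent */
  unsigned int snd_wnd;       /* send window */
};

/* The persist timer runs exactly when all three rule-4 conditions hold. */
static int persist_timer_should_run(const struct tcp_pcb_s *pcb)
{
  return (pcb->unacked == 0) &&             /* no in-flight data */
         (pcb->unsent != 0) &&              /* data is buffered */
         (pcb->unsent->len > pcb->snd_wnd); /* and it doesn't fit */
}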

(file #41415)

Joel Cunningham <jcunningham>
Group Member
Thu 27 Jul 2017 09:19:56 PM UTC, comment #42: 


> Can we combine this with sending a segment partly once the persist timer has fired (instead of sending a probe then)? Of course we would still have to start the persist timer earlier than the standard - if a pre-packaged segment doesn't fit into the window.


Yes I can work this change in as well :)  I'll try and find some time to work on this issue over the coming weekend/week

Joel Cunningham <jcunningham>
Group Member
Thu 27 Jul 2017 07:00:25 PM UTC, comment #41: 

That sounds like an interesting solution that could work for us!

Can we combine this with sending a segment partly once the persist timer has fired (instead of sending a probe then)? Of course we would still have to start the persist timer earlier than the standard - if a pre-packaged segment doesn't fit into the window.

Simon Goldschmidt <goldsimon>
Group administrator
Wed 26 Jul 2017 09:46:54 PM UTC, comment #40: 

Anymore thoughts on how to move forward?

Following up on the NetBSD behavior: another interesting thing about their stack is that when receiving a window update, it doesn't cancel the persist timer, but simply calls tcp_output(), which then performs maintenance on the timer

Maybe we could adopt similar changes in our implementation and only stop the persist timer when we successfully send again.  This would be an alternate fix to what was described in comment #34 and would have the benefit of not introducing more checks for our "early" closed window
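
As a sketch, the alternate fix would look roughly like this (illustrative only; in the real code this bookkeeping would live inside tcp_output()):

/* Sketch: the rx path no longer touches the persist timer; tcp_output()
   stops it after a successful send and starts it when the window blocks
   the queued segment. */
struct pcb_s {
  unsigned char persist_backoff;  /* 0 = persist timer stopped */
  unsigned char persist_cnt;
};

static void maintain_persist_timer(struct pcb_s *pcb,
                                   int segment_was_sent,
                                   int window_blocks_send)
{
  if (segment_was_sent) {
    pcb->persist_backoff = 0;     /* sending works again: stop the timer */
  } else if (window_blocks_send && (pcb->persist_backoff == 0)) {
    pcb->persist_cnt = 0;         /* blocked by the window: start it */
    pcb->persist_backoff = 1;
  }
}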

Joel Cunningham <jcunningham>
Group Member
Sun 02 Jul 2017 04:41:33 PM UTC, comment #39: 

Simon,

I spent some time investigating how NetBSD uses its persist timer and have found some similar behavior to the proposed delayed small send. It takes some spinning up, but the logic is contained within the Sender Silly Window check:

https://github.com/jsonn/src/blob/trunk/sys/netinet/tcp_output.c#L996

When the connection is idle (i.e. no in-flight data), the window doesn't allow a full segment, Nagle is on, AND there is more data in the socket buffer, then rather than sending a small segment, sending is skipped and the persist timer is started (follow down to line 1075).

When the persist timer fires (https://github.com/jsonn/src/blob/trunk/sys/netinet/tcp_timer.c#L435), it sets tp->t_force to 1 and calls tcp_output(), which now forces sending of the small segment. See https://github.com/jsonn/src/blob/trunk/sys/netinet/tcp_output.c#L819 for the logic when t_force is 1 and the window is zero vs small

So while there isn't behavior that indefinitely treats the window as closed, delaying sending in this case and using the persist timer is done by at least NetBSD
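
Paraphrased as a sketch (heavily simplified, not the actual NetBSD source):

/* NetBSD-style persist expiry: force one send past the silly window
   check, then let tcp_output() decide what fits. */
struct tcpcb_s { int t_force; };

static void tcp_output_s(struct tcpcb_s *tp)
{
  /* with tp->t_force set, this sends a 1-byte probe (zero window)
     or a small segment (non-zero but too-small window) anyway */
  (void)tp;
}

static void persist_timer_fired(struct tcpcb_s *tp)
{
  tp->t_force = 1;
  tcp_output_s(tp);
  tp->t_force = 0;
}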

Joel Cunningham <jcunningham>
Group Member
Mon 26 Jun 2017 02:38:43 PM UTC, comment #38: 

I agree that treating the send window as effectively zero when the next unsent segment is larger is becoming messy to maintain.

Good idea on only splitting the segment once the persist timer has expired; that should ensure that we are not doing the split at the end of every transmission burst that fills the window.  There is still my original patch on bug #49533 which added functionality to split the unsent segment.  I could work on getting that working again with this idea, but it probably won't be for a couple of weeks

Joel Cunningham <jcunningham>
Group Member
Thu 22 Jun 2017 06:22:11 PM UTC, comment #37: 

Reading comment #34 (which totally slipped through), I guess Joel's proposed fix for this looks good.

Nevertheless, by now I'm wondering if we should instead go the standard way and send what we can. Probably at the time we would send a zero window probe. That would ensure we wouldn't do the necessary copying too often...

On the other hand, I can't afford the time to do this right now...

Simon Goldschmidt <goldsimon>
Group administrator
Thu 22 Jun 2017 04:11:29 PM UTC, comment #36: 

Hi Simon,
Any chance to review Joel's comment #34, comment #35?
I'm wondering if it's a bug fix?

Axel Lin <axellin>
Group Member
Thu 18 May 2017 02:59:54 PM UTC, comment #35: 

Minor correction in the proposed check: the inequality should be flipped, so the persist timer is only stopped once snd_wnd is large enough to cover pcb->unsent->len:

(lwip_ntohl(pcb->unsent->tcphdr->seqno) - pcb->lastack + pcb->unsent->len <= pcb->snd_wnd))) {

(For example, with seqno == lastack, an unsent segment of 464 bytes, and snd_wnd = 320, the left-hand side is 464 > 320, so the timer keeps running until the window opens to at least 464.)

Joel Cunningham <jcunningham>
Group Member
Thu 18 May 2017 02:56:49 PM UTC, comment #34: 

Fayek,

From your wireshark capture, the backport looks to be working correctly.  The zero window probing begins at time 170.718 (packet #1692) and gets restarted a number of times due to Windows sending window probe ACKs with an increasing SEQ.  Then at time 188.719 (packet #1713), LwIP starts its final run of probes.  I think your capture doesn't include all the probes.  See my attached spreadsheet; there should have been 6 more probes before the connection closed

Simon,

These captures have highlighted an issue in the following code in tcp_receive:


    /* Update window. */
    if (TCP_SEQ_LT(pcb->snd_wl1, seqno) ||
       (pcb->snd_wl1 == seqno && TCP_SEQ_LT(pcb->snd_wl2, ackno)) ||
       (pcb->snd_wl2 == ackno && (u32_t)SND_WND_SCALE(pcb, tcphdr->wnd) > pcb->snd_wnd)) {
      pcb->snd_wnd = SND_WND_SCALE(pcb, tcphdr->wnd);
      /* keep track of the biggest window announced by the remote host to calculate
         the maximum segment size */
      if (pcb->snd_wnd_max < pcb->snd_wnd) {
        pcb->snd_wnd_max = pcb->snd_wnd;
      }
      pcb->snd_wl1 = seqno;
      pcb->snd_wl2 = ackno;
      if (pcb->snd_wnd == 0) {
        if (pcb->persist_backoff == 0) {
          /* start persist timer */
          pcb->persist_cnt = 0;
          pcb->persist_backoff = 1;
          pcb->persist_probe = 0;
        }
      } else if (pcb->persist_backoff > 0) {
        /* stop persist timer */
        pcb->persist_backoff = 0;
      }


What's happening in these captures is that pcb->persist_backoff is being cleared when seqno increases, but snd_wnd doesn't increase and satisfies 0 < snd_wnd < pcb->unsent->len (remember LwIP now treats this case as zero window).  LwIP calls tcp_output() after input processing and this restarts pcb->persist_backoff.

I'm wondering if we should have an additional check before clearing pcb->persist_backoff to ensure that the previous non-zero small window which prevented sending pcb->unsent, has resolved.  Something like


      if (pcb->snd_wnd == 0) {
        if (pcb->persist_backoff == 0) {
          /* start persist timer */
          pcb->persist_cnt = 0;
          pcb->persist_backoff = 1;
          pcb->persist_probe = 0;
        }
      } else if ((pcb->persist_backoff > 0) &&
                 (pcb->unsent == NULL ||
                  (lwip_ntohl(pcb->unsent->tcphdr->seqno) - pcb->lastack + pcb->unsent->len > pcb->snd_wnd))) {
        /* stop persist timer */
        pcb->persist_backoff = 0;
      }


(file #40733)

Joel Cunningham <jcunningham>
Group Member
Wed 17 May 2017 10:32:18 PM UTC, comment #33: 

Joel/Simon,

I'm working with Preet on this issue. I cherry-picked the patch commit (c03fef9a3cf030e7dced45b7af1d0d17f2233d42) to the head of 2.0.2 and tried reproducing the issue using port 5004. The socket lingered for about 4 minutes before a timeout occurred. Other socket connections on this port failed to receive data from the server until the zombie socket disconnected.

Attached is the Wireshark trace of this new test, hopefully the "backporting" issue has been resolved.

(file #40729)

Fayek <fwahhab>
Tue 16 May 2017 10:01:10 PM UTC, comment #32: 

Preet,

Taking a look at your wireshark capture, I see 12 zero window probes, so the timeout based on count looks correct.  But the thing that looks incorrect is the time between probes.  It appears that the zero window timer is being reset during the first 7 probes when the Windows PC sends a zero window probe ACK with an increasing SEQ.  See packets #1710, 1717, 1732.  Anytime persist_backoff is set to 1 (from 0), persist_probe is also set to 0.  Did the resetting of persist_probe not get backported correctly in your git repo?  persist_probe should also be reset anytime an ACK is received (same place as pcb->keep_cnt_sent)

I have attached an Excel spreadsheet with the time deltas of the probes

(file #40717)

Joel Cunningham <jcunningham>
Group Member
Tue 16 May 2017 08:36:48 PM UTC, comment #31: 

I had to manually add the changes.  The patch did not apply; I have never tried the "git apply" command, so maybe it was a user error on my end.

First I tested the workaround fix that we have on our end, which uses a 5-second send timeout.  The workaround still works and there was no lingering socket.  The send() eventually returned an error after the send timeout, the socket was gracefully closed by the application, and I was able to re-connect as expected.  The PCAP file for this is attached.

Test results WITHOUT the 5 second socket timeout workaround:
After the Windows/Python app got killed, I noticed that the LWIP memory pools claimed that 10 buffers from my MALLOC_256 byte pool were busy, but this time I did not get any of the mailbox errors previously tracked when sys_mbox_trypost() failed.

One thing I did notice is that while the send() was blocked, the listening socket was accepting connections even though the listen backlog was set to 1 in the listen() call and nothing was calling accept().  This may be a separate issue, so let's not get distracted by it.

After the Windows/Python app was killed, I noticed LWIP was consistently using 9 MALLOC_256 pools and 8 MALLOC_1564 pools while the zombie socket was still alive.  After about 2 minutes and 30 seconds, most likely the send() timed out with this patch; the socket error led the application to close the socket, and the listen backlog released the allocated NETCONN memory resources.  The memory pools also dropped down to near zero, proving that the socket let go of the allocated memory resources.  So the fix seems to work, except that it did not actually take all 11 minutes to release the LWIP resources.

Please inspect the PCAP files to make sure things went well.

(file #40714, file #40715)

preet <preetpal>
Fri 12 May 2017 11:57:22 PM UTC, comment #30: 

I will test this by Monday.  Sorry about the delay guys.

preet <preetpal>
Fri 12 May 2017 07:06:20 PM UTC, comment #29: 

Preet? Did you have the chance to test this?

Simon Goldschmidt <goldsimon>
Group administrator
Tue 09 May 2017 02:10:12 PM UTC, comment #28: 

Simon,

I've pushed the patch (committed in c03fef9a3cf030e7dced45b7af1d0d17f2233d42). 

This fixes the current zero-window probing implementation, but as per the recommendation of RFC 1122, I think we can combine the retransmission timer and zero-window probe into a single mechanism.  This should reduce the number of TCP counters (and hopefully result in smaller code size).

This is a bigger feature development, so I'll open a new task for the work.  We can also use this work to audit the timeouts.  RFC 1122 section 4.3.2.5 TCP Connection Failures contains guidance. (https://tools.ietf.org/html/rfc1122#page-100)

Is there anything else we need to do for this reported issue?

Preet,

If you want to try reproducing with the latest git head (including c03fef9a3cf030e7dced45b7af1d0d17f2233d42), the socket should not block indefinitely, though it will block for a long time (around 11 minutes once in zero window)

Joel Cunningham <jcunningham>
Group Member
Mon 08 May 2017 08:07:27 PM UTC, comment #27: 

Joel, sorry for taking so long again.

Technically, the patch seems to do what we need. I'm not happy with adding yet another counter though. Our TCP keeps growing lately :-( (even if this doesn't influence RAM in this special case).

This and the fact that TCP_MAXRTX leads to a different timeout for the persist phase (backoff timeouts are different) makes me think we should rather have a connection timeout expressed in seconds, but I don't know if that's OK with the RFCs...

Maybe it's best for now to push your patch.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 04 May 2017 04:56:43 AM UTC, comment #26: 

Attached patch adds a zero-window probing timeout.  The implementation was straightforward; the only exception was that I couldn't easily re-use pcb->nrtx, because when the persist timer is started in tcp_receive(), there could be in-flight data and thus an active RTO timer running.

The current interaction between the persist timer and the RTO timer is that the persist timer pre-empts any running RTO timer, and then when the zero-window condition resolves, the RTO timer is resumed.

Realistically, this condition could only happen if the remote TCP shrank the window by more than the amount of data it received, since our in-flight data should match the advertised window.  Having this behavior in LwIP seems like good defensive programming, so I didn't want to change it

The good news is that struct tcp_pcb had an existing u8_t hole being used for padding, so I just added a new u8_t counter for the probes and the size did not change

Lastly, the patch adds two unit tests to ensure the RTO timeout and the zero-window probing timeout reset the connection after the maximum number of transmissions is reached
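
The core of the added check, as a sketch (persist_probe and TCP_MAXRTX are the names from the patch and lwIP; the surrounding slow-timer code is elided):

#define TCP_MAXRTX 12            /* lwIP default */

struct pcb_s { unsigned char persist_probe; };

/* Called once per zero-window probe; returns 1 when the caller
   should abort the pcb, mirroring the RTO limit. */
static int probe_and_check_timeout(struct pcb_s *pcb)
{
  if (pcb->persist_probe >= TCP_MAXRTX) {
    return 1;                    /* give up, reset the connection */
  }
  ++pcb->persist_probe;          /* reset to 0 whenever an ACK arrives */
  return 0;
}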

(file #40576)

Joel Cunningham <jcunningham>
Group Member
Wed 03 May 2017 10:27:20 PM UTC, comment #25: 


> My suggestion would be to use pcb->nrtx here, too. We could increase it for every zero window probe and reset it for every matching rx segment if the persist timer is running (as opposed to resetting it when new data is acknowledged otherwise).


> With default settings, that would still take ~11 minutes to time out such a connection (if my calculations with 'tcp_persist_backoff' and 'TCP_MAXRTX' are correct).


I think I can put a patch together that does this.

I went back and looked at the original persist timer patch in an attempt to understand the backoff and maximum timeout.

Originally it used tcp_output() to send a 1-byte segment, then tcp_rexmit() to send subsequent segments.  I believe this would have timed out, since tcp_rexmit() increments pcb->nrtx.  But when the patch was cleaned up and submitted, this behavior was omitted.  See bug #20511.  TCP_MAXRTX seems to have always been 12, so the upper bound doesn't seem to have changed from the original patch

11 minutes seems very long to me, but I'm fine with adding the timeout behavior first, then tuning the timeout (if we decide to).  RFC 1122 has guidance on the upper bound before closing a connection




Joel Cunningham <jcunningham>
Group Member
Wed 03 May 2017 08:02:57 PM UTC, comment #24: 

As to the mbox not being able to buffer one whole TCP RX window: I'm tempted to use pbufs as an RX chain and remove the mbox from each netconn...

That would remove the extra configuration option to set the mbox size at the risk of consuming more pbufs...

Simon Goldschmidt <goldsimon>
Group administrator
Wed 03 May 2017 08:00:57 PM UTC, comment #23: 


> All that aside, I think we have a problem with a connection stuck in zero window probing.


Right. I haven't found the time to look at the source before, but it seems like my memory tricked me. There's really no upper limit on the zero window phase.

My suggestion would be to use pcb->nrtx here, too. We could increase it for every zero window probe and reset it for every matching rx segment if the persist timer is running (as opposed to resetting it when new data is acknowledged otherwise).

With default settings, that would still take ~11 minutes to time out such a connection (if my calculations with 'tcp_persist_backoff' and 'TCP_MAXRTX' are correct).

Simon Goldschmidt <goldsimon>
Group administrator
Tue 02 May 2017 07:00:04 PM UTC, comment #22: 

Preet,

The mailbox implementation is port specific.  My port uses static memory to provide storage for the mbox messages; it sounds like yours uses LwIP's memory pools

All that aside, I think we have a problem with a connection stuck in zero window probing. 

Simon,

I spent more time examining the source, and for a blocking socket (not using SO_SNDTIMEO) I can't find anything that would time out the connection.  If I've missed something, please point me at the code that contains the timeout

If I get some more time, I might put together a unit test.  We should be able to reasonably test connection failures for both retransmission timeouts and zero window probes


Joel Cunningham <jcunningham>
Group Member
Tue 02 May 2017 06:45:02 PM UTC, comment #21: 

Yes, I believe it is indeed memp errors: lwip_stats.memp[128_pool]->err

From what I can tell, the mailbox of the socket (depth of 8) has already absorbed the buffers obtained from the MALLOC_128 pool, and it continues to try to get more of them; however, since sys_mbox_trypost() is failing, it probably releases the malloc'd 128 bytes of data back.

preet <preetpal>
Tue 02 May 2017 06:28:25 PM UTC, comment #20: 

I've added bug #50912. However, note that in your packet traces, fixing mbox handling of RST does NOT change anything as the RST is rejected because of invalid seqno!

Regarding the "mailbox errors": do you mean "memp errors"? "memp" stands for memory pool, not mailbox. I'm still confused about what "mailbox errors" means...

Simon Goldschmidt <goldsimon>
Group administrator
Tue 02 May 2017 02:41:03 PM UTC, comment #19: 

I think the root of the issue is that the mailbox used to post a pbuf pointer is full, and the RST is dropped.  But even if the client on the other side goes missing without sending the RST, we should still be able to reset the connection on the LWIP (server) side.

To recap, what is happening is that the CLIENT's (win/python) window is shrinking, and eventually the client is killed.  Meanwhile, our LWIP SERVER (accepted conn) is stuck in the send() operation.  In this scenario, the mailbox used for the sys_mbox_trypost() operation is full, and hence the stack fails to handle the socket logic gracefully.  The workaround is that I set the socket send timeout (to e.g. 5 seconds); with that set, the send() eventually times out rather than being stuck inside indefinitely.

The mailbox error counter that I see continuously incrementing when I do not use the socket send timeout is this:

      MALLOC_128:   8/  8/8   - 232/0


It doesn't go to the next pool of memory; just that 128-byte pool's error counter continues to climb, and the send() function never returns.

preet <preetpal>
Tue 02 May 2017 02:19:49 PM UTC, comment #18: 


> I haven't looked at the zero window timeout. I thought that would work, though. After all, the zero window time shown in the pcap is "only" a little less than 7 minutes. I'm not sure this is enough to let a TCP connection time out. And it's certainly not "forever"!


What mechanism is the connection timing out with then if it's not explicitly in the handling for persist_backoff/persist_cnt?


Joel Cunningham <jcunningham>
Group Member
Tue 02 May 2017 10:23:56 AM UTC, comment #17: 

I haven't looked at the zero window timeout. I thought that would work, though. After all, the zero window time shown in the pcap is "only" a little less than 7 minutes. I'm not sure this is enough to let a TCP connection time out. And it's certainly not "forever"!

What I still don't get is that preet says the mbox error count goes up for the sending task. Can you explain that, preet? Which error count is that and why does it go up? I'm not aware there's an mbox involved in sending.

There's still an issue of RST not closing the connection (if mbox is full), but that should not be the problem here (since the RST is not accepted). I'll file a bug for that.

Simon Goldschmidt <goldsimon>
Group administrator
Tue 02 May 2017 03:05:19 AM UTC, comment #16: 

preet,

Thanks for the update.  I analyzed the wireshark logs and have a couple of findings to report:

1) LwIP 2.0.2 is now sending zero window probes from server to client, but unfortunately this does not re-trigger an RST after the challenge ACK from the Windows client.

I'm not sure why Windows does not respond to the challenge ACK.  I found some evidence online that Windows has yet to implement RFC 5961, which may or may not explain its ACK processing once the control block has been reset

If anyone remembers back in August of 2016, the Linux kernel came into the news with an exploit that took advantage of the implementation of RFC 5961.  See here: https://arstechnica.com/security/2016/08/linux-bug-leaves-usa-today-other-top-sites-vulnerable-to-serious-hijacking-attacks/.  This article and the underlying security paper both claim Windows has not implemented RFC 5961

See section 8 here: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_cao.pdf

2) The thing that still didn't seem right to me (which I mentioned in comment #11) is that if a TCP socket is sending, a dead connection should eventually timeout.  Now with LwIP 2.0.2, we've got the zero window probes fixed and working, but the connection still lingers indefinitely.

Taking a look at the persist timer implementation in tcp_slowtmr(), once we hit the largest exponential backoff, we don't close the connection, but instead just repeat the probing at the largest value.  The connection is only closed if pcb->nrtx >= TCP_MAXRTX and pcb->nrtx is not incremented for the persist timer logic.

I reviewed the zero window probe specification from RFC 1122 (section 4.2.2.17):


A TCP MAY keep its offered receive window closed
indefinitely.  As long as the receiving TCP continues to
send acknowledgments in response to the probe segments, the
sending TCP MUST allow the connection to stay open.


https://tools.ietf.org/html/rfc1122#section-4.2.2.17

In the case provided by preet, the remote TCP is not responding to the probes and nothing is sent after the RST.  You can see at the end of the capture that there are 6 probes 60 seconds apart, which is the last tier in tcp_persist_backoff[].  I'm guessing preet stopped the capture at this point.
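
For reference, the backoff table in question (from lwIP's tcp.c, if I'm reading it right; intervals are in 500 ms slow-timer ticks, and the index saturates at the last entry):

const u8_t tcp_persist_backoff[7] = { 3, 6, 12, 24, 48, 96, 120 };
/* last tier: 120 ticks * 500 ms = 60 seconds between probes, which
   matches the spacing seen at the end of the capture */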

Regardless of the RST that was not processed, we should time out.  This case is really no different than if the wire was silently disconnected during the zero-window condition

Section 4.2.2.17 goes on to mention that the zero-window probe logic should back off like the regular RTO, and the two mechanisms may even be combined into the same implementation.

Joel Cunningham <jcunningham>
Group Member
Tue 02 May 2017 12:14:13 AM UTC, comment #15: 

Tested with LWIP 2.0.2 with the same behavior.  I noticed this time that there were KEEP-ALIVE probes being sent, as customized by my application, but they were still not being used to close the lingering/zombie connection.  The "mbox" error count continued to climb even though the CLIENT (windows/python) had already disconnected.


---------------------------------
        WITH TIMEOUT
Captured before the test started:
---------------------------------
      : Avl/Usd/Max/Err
 Mbox :   -/ 12/ 12/  0
Mutex :   -/  1/  1/  0
  Sem :   -/ 11/ 11/  0
Memory:   0/ 64/1772/  0/0
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 6/5 (rx/tx) 0 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   1/  1/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   2/  2/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  11/ 11/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  2/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   0/  2/8   - 0/0
      MALLOC_256:   1/  1/8   - 0/0
     MALLOC_1564:   1/  2/24  - 0/0

---------------------------------
Captured after the test
after CLIENT(windows/python) disconnected
---------------------------------
                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   1/  2/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   2/  3/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  11/ 12/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  3/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   0/  8/8   - 11/0
      MALLOC_256:   1/  3/8   - 0/0
     MALLOC_1564:   1/  6/24  - 0/0





---------------------------------
        WITHOUT TIMEOUT
Captured before the test started:
---------------------------------
      : Avl/Usd/Max/Err
 Mbox :   -/ 12/ 12/  0
Mutex :   -/  1/  1/  0
  Sem :   -/ 11/ 11/  0
Memory:   0/ 64/1860/  0/0
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 5/4 (rx/tx) 0 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   1/  1/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   2/  2/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  11/ 11/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  4/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   0/  4/8   - 0/0
      MALLOC_256:   1/  1/8   - 0/0
     MALLOC_1564:   1/  2/24  - 0/0


---------------------------------
Captured after the test
---------------------------------
      : Avl/Usd/Max/Err
 Mbox :   -/ 13/ 13/ 57
Mutex :   -/  1/  1/  0
  Sem :   -/ 12/ 12/  0
Memory:   0/5360/6108/  0/0
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 630/219 (rx/tx) 7 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   2/  2/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   5/  5/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  12/ 12/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  4/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   8/  8/8   - 37/0
      MALLOC_256:   2/  2/8   - 0/0
     MALLOC_1564:   5/  5/24  - 0/0

      : Avl/Usd/Max/Err
 Mbox :   -/ 13/ 13/171
Mutex :   -/  1/  1/  0
  Sem :   -/ 12/ 12/  0
Memory:   0/5360/7156/  0/0
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 656/238 (rx/tx) 8 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   2/  2/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   5/  5/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  12/ 12/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  4/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   8/  8/8   - 169/0
      MALLOC_256:   2/  4/8   - 0/0
     MALLOC_1564:   5/  6/24  - 0/0


      : Avl/Usd/Max/Err
 Mbox :   -/ 13/ 13/300
Mutex :   -/  1/  1/  0
  Sem :   -/ 12/ 12/  0
Memory:   0/5360/7264/  0/0
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 662/243 (rx/tx) 8 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   - 0/0
         TCP_PCB:   2/  2/12  - 0/0
  TCP_PCB_LISTEN:   5/  5/8   - 0/0
         TCP_SEG:   5/  5/20  - 0/0
       REASSDATA:   0/  0/5   - 0/0
       FRAG_PBUF:   0/  0/8   - 0/0
          NETBUF:   0/  0/1   - 0/0
         NETCONN:  12/ 12/20  - 0/0
   TCPIP_MSG_API:   0/  0/8   - 0/0
 TCPIP_MSG_INPKT:   0/  4/24  - 0/0
     SYS_TIMEOUT:   5/  5/8   - 0/0
    PBUF_REF/ROM:   0/  0/8   - 0/0
       PBUF_POOL:   0/  0/0   - 0/0
       MALLOC_64:   1/  2/8   - 0/0
      MALLOC_128:   8/  8/8   - 232/0
      MALLOC_256:   2/  4/8   - 0/0
     MALLOC_1564:   5/  6/24  - 0/0



(file #40548, file #40549)

preet <preetpal>
Tue 25 Apr 2017 08:32:32 PM UTC, comment #14: 

Windows Version: 7
Python Version: 2.7.11

I will try LWIP 2.0.2 within the next few days to see if I can reproduce the issue.

Just for sanity's sake, I ran the test again and observed that there were approximately 85 Mbox errors during the "sys_mbox_trypost()" function call.  I hope this can help narrow down which timer is responsible for handling the socket operation.  While the mailbox errors are going up, I do not see any activity of any sort in Wireshark.

preet <preetpal>
Mon 24 Apr 2017 07:56:21 PM UTC, comment #13: 

My thought is that the zero window probe (from the server) would ensure the RST mechanism is re-triggered once the client has closed the socket; otherwise we end up with a half-open connection and send() is stuck blocking, which should not happen.

Whether or not we can end up getting the RST with SEQ = RCV.NXT is unknown :P

Preet,

What version of Windows is this BTW?

Joel Cunningham <jcunningham>
Group Member
Mon 24 Apr 2017 07:22:05 PM UTC, comment #12: 


> Can you pull up to 2.0.1 at least and try to reproduce this?


Does that help much? The remote host does not seem to open the window for 26 seconds although it is constantly sending packets. A zero-window probe would only be required if the two hosts weren't exchanging packets.

This might be different if we would either transmit 320 bytes (out of 464), or just send 464 bytes and risk 144 bytes being retransmitted.

Simon Goldschmidt <goldsimon>
Group administrator
Sun 23 Apr 2017 02:42:11 PM UTC, comment #11: 

Simon,

I reviewed RFC 5961, section 3.2, and our implementation follows the specification, so I'm not sure why Windows did not respond to the challenge ACK.

There was one remaining piece of this that didn't make sense.  With sockets and TCP you always have the chance of ending up with a half-open connection if you aren't sending on the socket.  In this case, the server ends up with the connection half open, but is constantly sending.  Even if Windows responded to the challenge ACK with an RST carrying SEQ = RCV.NXT, that RST could be dropped in transit and again we'd have a half-open connection (but still sending).

I reviewed the wireshark log again, and this time I think the server encountered an effective zero window, but not the client.  If you look at packet #324, the reported window from the client drops to 320, which is below 464 (the server's segment size).  LwIP won't split the next 464-byte segment, so we effectively have a zero window.  After this the server never sends another data segment.

LwIP 2.0.0 didn't have the fix to start the persist timer in this case (https://savannah.nongnu.org/bugs/?49533).

Preet,

Can you pull up to 2.0.1 at least and try to reproduce this?

Joel Cunningham <jcunningham>
Group Member
Fri 21 Apr 2017 07:38:50 PM UTC, comment #10: 

Joel, thanks for that excellent analysis. I wonder why the Win/Python client doesn't re-send that ACK. Is there something wrong with our implementation of RFC 5961 section 3.2?

Preet, the mbox holds pbuf pointers. Increasing the mbox size indeed does not help much, as you can get into this situation at any time. You would have to set the mbox size to your window's size, as every byte could come in as its own segment/pbuf.
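
In lwipopts.h terms, that sizing rule would look something like the following sketch (values are illustrative, not a recommendation; the worst case of one pbuf per byte is rarely practical, so one entry per MSS-sized segment is the usual compromise):

/* Sketch: the recvmbox holds one pointer per queued pbuf, so size it
   for a full window's worth of segments. */
#define TCP_MSS                    1460
#define TCP_WND                    (4 * TCP_MSS)
#define DEFAULT_TCP_RECVMBOX_SIZE  (TCP_WND / TCP_MSS)  /* = 4 entries */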

In fact, the segments dropped by lwIP could just as well have been dropped by the network. In that case, we would have a zombie pcb without doing anything wrong other than protecting against the RST spoofing attack!

Simon Goldschmidt <goldsimon>
Group administrator
Fri 21 Apr 2017 06:13:20 PM UTC, comment #9: 

Joel,

You are absolutely right, I did not look through the wireshark trace in detail and hypothesized that the window was shrinking down to zero.  So the correction here is that the window size of the Win/Python client was shrinking, until it stopped sending ACKs as you mentioned.

The mailbox errors are coming from:

err_t sys_mbox_trypost( sys_mbox_t *pxMailBox, void *pxMessageToPost )


I don't think it is a matter of the mbox implementation not handling a full window's worth of data, because the mailbox in question queues a pointer, not the entire window (it is probably the pbuf pointer being queued?).  The recvmbox is likely full because the RTOS task is stuck inside the send() function.  I also see that in this situation, the TCP/IP dropped packet count starts to go up.  I will look into pcb->refused_data next

We should not make the mailbox queue bigger, because otherwise it starts to eat up all of the LWIP buffer pools and locks up the entire LWIP stack.  More specifically, if I increase DEFAULT_TCP_RECVMBOX_SIZE then it starts to chew through the memory pools and eventually pings and other sockets become non-operational.  So this is the deadlock I was talking about when I opened the issue.  I disagree that the situation should be resolved by asking the programmer to read the data, because LWIP is blocking the send() operation.  Maybe the next line after send() could be a read, but while a blocking call to send() is in progress, the receive mailboxes should be serviced somehow and not chew up the buffers defined by DEFAULT_TCP_RECVMBOX_SIZE.  I don't think I am mis-using the LWIP or socket API by any means.  The RTOS combined with LWIP should be able to guard itself against this.

Thanks again, Joel, for looking into the wireshark capture.  Like I mentioned, I put in a workaround by forcing a blocking timeout of 5 seconds on EVERY socket, but we should be able to handle this deadlock gracefully.  I do not know the LWIP code in detail, but my gut feeling says that we should handle the RST more gracefully, and we need to figure out which code is consuming the mailbox slots posted via the sys_mbox_trypost() function

preet <preetpal>
Fri 21 Apr 2017 02:11:39 PM UTC, comment #8: 

Also, the comment in the code below makes it seem like we are doing non-standard behavior, but we aren't.  It's following RFC 5961 section 3.2

Joel Cunningham <jcunningham>
Group Member
Fri 21 Apr 2017 01:31:05 PM UTC, comment #7: 

preet,

I took a look at your wireshark capture and I have a new idea of what's going on.  You're not encountering a zero-window situation; instead, the server stops ACKing new data.  You can see this starting at packet 349, where the client is retransmitting the same packet with SEQ = 1457 (wireshark relative numbering)

Combine this with the report of mbox errors, and what I think is happening is that your mbox implementation can't handle a full window's worth of data according to the segmentation from the client.  Once the recvmbox is full and your application is not draining it, LwIP has a feature called "refused data" where it stops accepting new data and won't issue any ACKs.  Search the code for pcb->refused_data.  The refused-data situation can be resolved by having your application read data, or by increasing your mbox to accommodate a full window's worth of data.
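
In rough outline, the refused-data path works like this (a sketch with stand-in types, not the exact tcp_in.c code):

/* Sketch: when the upper layer (e.g. a full recvmbox) cannot take the
   pbuf, it is parked on pcb->refused_data and NOT acked; delivery is
   retried on a later input event, so the sender sees a stalled window. */
struct pbuf;
struct pcb_s {
  struct pbuf *refused_data;
  int (*recv)(struct pcb_s *pcb, struct pbuf *p); /* 0 = ok, -1 = refused */
};

static void deliver_rx_data(struct pcb_s *pcb, struct pbuf *p)
{
  if (pcb->recv(pcb, p) != 0) {
    pcb->refused_data = p;  /* park it; no ACK goes out for this data */
  }
}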

Then the client tries to issue an RST at packet 378 and the server issues an ACK in response.  I believe this is because the client is using SEQ = 2097, but since we've been dropping data, pcb->rcv_nxt is at 1457.  See the code segment below from tcp_process:


/* Process incoming RST segments. */
  if (flags & TCP_RST) {
    /* First, determine if the reset is acceptable. */
    if (pcb->state == SYN_SENT) {
      /* "In the SYN-SENT state (a RST received in response to an initial SYN),
          the RST is acceptable if the ACK field acknowledges the SYN." */
      if (ackno == pcb->snd_nxt) {
        acceptable = 1;
      }
    } else {
      /* "In all states except SYN-SENT, all reset (RST) segments are validated
          by checking their SEQ-fields." */
      if (seqno == pcb->rcv_nxt) {
        acceptable = 1;
      } else  if (TCP_SEQ_BETWEEN(seqno, pcb->rcv_nxt,
                                  pcb->rcv_nxt + pcb->rcv_wnd)) {
        /* If the sequence number is inside the window, we only send an ACK
           and wait for a re-send with matching sequence number.
           This violates RFC 793, but is required to protect against
           CVE-2004-0230 (RST spoofing attack). */
        tcp_ack_now(pcb);
      }
    }


acceptable is not being set to 1, and this basically means the RST did not reset the connection.  The client never attempted to send another RST with SEQ = 1457, so the connection enters a zombie state

Joel Cunningham <jcunningham>
Group Member
Thu 20 Apr 2017 10:51:09 PM UTC, comment #6: 

Sorry to go back and forth (git submodule confusion).
I am using STABLE-2.0.0

preet <preetpal>
Thu 20 Apr 2017 09:37:15 PM UTC, comment #5: 

The CLIENT was windows machine running a python script.

The SERVER was an embedded CPU with a few hundred kilobytes of RAM, with the LWIP options I provided, on LWIP 1.4.x

The "mbox" errors are seen on the CLIENT running LWIP.  I have fairly limited knowledge of LWIP right now beyond reading the basics of the code and the LWIP options.  I took it for granted that the "mbox" is used during transmit, but surely something in that zombie sockets keeps trying while the mbox is full and not getting emptied.  Maybe the PCAP file can tell more?

 Let me know if I answered all of the questions.

preet <preetpal>
Thu 20 Apr 2017 09:28:36 PM UTC, comment #4: 

The target port was 6060, and I was collecting the memory statistics in parallel on port 4005.  Attached are the PCAP file and my LWIP configuration options.  Note that this is with the 1.4.x version, and not the 2.x I accidentally selected while opening the issue.  The MBOX err counter related to the socket continues to climb, and the task that was serving port 6060 is stuck inside the send() function.


Before the test starts:

Memory: 0/64/1004/0 (Avail/Used/Max/Err)
 Mbox :  0 11 11 (Err/Used/Max)
Mutex :  0  1  1 (Err/Used/Max)
  Sem :  0 10 11 (Err/Used/Max)
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 15/9 (rx/tx) 0 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   -  0
         TCP_PCB:   1/  1/12  -  0
  TCP_PCB_LISTEN:   4/  4/6   -  0
         TCP_SEG:   1/  2/16  -  0
       REASSDATA:   0/  0/5   -  0
       FRAG_PBUF:   0/  0/8   -  0
          NETBUF:   0/  0/1   -  0
         NETCONN:  10/ 10/20  -  0
   TCPIP_MSG_API:   0/  0/8   -  0
 TCPIP_MSG_INPKT:   0/  6/20  -  0
     SYS_TIMEOUT:   5/  5/8   -  0
    PBUF_REF/ROM:   0/  0/8   -  0
       PBUF_POOL:   0/  0/8   -  0
       MALLOC_64:   1/  2/8   -  0
      MALLOC_256:   0/  6/32  -  0
      MALLOC_768:   2/  2/32  -  0
     MALLOC_1564:   0/  0/8   -  0



After the problem hits:
Memory: 0/2320/3648/0 (Avail/Used/Max/Err)
 Mbox : 191 12 12 (Err/Used/Max)
Mutex :  0  1  1 (Err/Used/Max)
  Sem :  0 11 12 (Err/Used/Max)
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 208/176 (rx/tx) 8 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   -  0
         TCP_PCB:   2/  2/12  -  0
  TCP_PCB_LISTEN:   4/  4/6   -  0
         TCP_SEG:   3/  4/16  -  0
       REASSDATA:   0/  0/5   -  0
       FRAG_PBUF:   0/  0/8   -  0
          NETBUF:   0/  0/1   -  0
         NETCONN:  11/ 11/20  -  0
   TCPIP_MSG_API:   0/  0/8   -  0
 TCPIP_MSG_INPKT:   0/  6/20  -  0
     SYS_TIMEOUT:   5/  5/8   -  0
    PBUF_REF/ROM:   0/  0/8   -  0
       PBUF_POOL:   0/  0/8   -  0
       MALLOC_64:   1/  2/8   -  0
      MALLOC_256:  10/ 13/32  -  0
      MALLOC_768:   4/  5/32  -  0
     MALLOC_1564:   0/  0/8   -  0


After some more time when the CLIENT has disconnected at least 30 seconds prior:
Memory: 0/2320/3648/0 (Avail/Used/Max/Err)
 Mbox : 294 12 12 (Err/Used/Max)
Mutex :  0  1  1 (Err/Used/Max)
  Sem :  0 11 12 (Err/Used/Max)
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 211/178 (rx/tx) 8 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   -  0
         TCP_PCB:   2/  2/12  -  0
  TCP_PCB_LISTEN:   4/  4/6   -  0
         TCP_SEG:   3/  4/16  -  0
       REASSDATA:   0/  0/5   -  0
       FRAG_PBUF:   0/  0/8   -  0
          NETBUF:   0/  0/1   -  0
         NETCONN:  11/ 11/20  -  0
   TCPIP_MSG_API:   0/  0/8   -  0
 TCPIP_MSG_INPKT:   0/  6/20  -  0
     SYS_TIMEOUT:   5/  5/8   -  0
    PBUF_REF/ROM:   0/  0/8   -  0
       PBUF_POOL:   0/  0/8   -  0
       MALLOC_64:   1/  2/8   -  0
      MALLOC_256:  10/ 13/32  -  0
      MALLOC_768:   4/  5/32  -  0
     MALLOC_1564:   0/  0/8   -  0

Memory: 0/2320/3648/0 (Avail/Used/Max/Err)
 Mbox : 324 12 12 (Err/Used/Max)
Mutex :  0  1  1 (Err/Used/Max)
  Sem :  0 11 12 (Err/Used/Max)
TCP Errors: 0-C 0-E 0-L 0-M 0-O 0-P 0-T
TCP Counts: 214/180 (rx/tx) 8 dropped

                :  use/wmk/MAX - err
         UDP_PCB:   6/  6/8   -  0
         TCP_PCB:   2/  2/12  -  0
  TCP_PCB_LISTEN:   4/  4/6   -  0
         TCP_SEG:   3/  4/16  -  0
       REASSDATA:   0/  0/5   -  0
       FRAG_PBUF:   0/  0/8   -  0
          NETBUF:   0/  0/1   -  0
         NETCONN:  11/ 11/20  -  0
   TCPIP_MSG_API:   0/  0/8   -  0
 TCPIP_MSG_INPKT:   0/  6/20  -  0
     SYS_TIMEOUT:   5/  5/8   -  0
    PBUF_REF/ROM:   0/  0/8   -  0
       PBUF_POOL:   0/  0/8   -  0
       MALLOC_64:   1/  2/8   -  0
      MALLOC_256:  10/ 13/32  -  0
      MALLOC_768:   4/  5/32  -  0
     MALLOC_1564:   0/  0/8   -  0


(file #40456)

preet <preetpal>
Thu 20 Apr 2017 07:59:04 PM UTC, comment #3: 

Thanks for the correction.  I'm still not 100% clear on what happened in preet's case.  If the RST was received but failed to be delivered to the mbox, the pcb would still be freed as part of the RST processing, meaning there would be no more zero-window probes.  I would also expect the blocking send to return with an error (maybe that's also a bug)?

Joel Cunningham <jcunningham>
Group Member
Thu 20 Apr 2017 07:34:43 PM UTC, comment #2: 


> things like an empty ACK or RST shouldn't end up in the mbox


Well, the RST does end up in the mbox these days, to make it possible to read the data received before the RST (if any). It's true that we need to fix the case where putting the RST into the mbox fails...

Simon Goldschmidt <goldsimon>
Group administrator
Thu 20 Apr 2017 07:30:56 PM UTC, comment #1: 

Hi preet, a couple of questions about your scenario:

> While this was occurring, the "Mbox" error counts are going up to indicate that the LWIP SERVER is trying to send the data. But while these errors have occurred, the CLIENT's RST/ACK packet with the FIN was dropped and could not be queued onto the socket's mailbox.


Are you seeing mbox errors on the server?  I don't believe mboxes are used in the transmit path.  The recvmbox used in the receive path should only contain data, so things like an empty ACK or RST shouldn't end up in the mbox

> So, we end up with a zombie socket, which is trying to send the data in blocking mode, and is never successful.


Once the client exited, I would expect each following zero-window probe to be received by the client and have an RST generated in response.  Is this not the case?  Or does the server not process the RST in this case (could be a bug)?

Joel Cunningham <jcunningham>
Group Member
Thu 20 Apr 2017 07:12:52 PM UTC, original submission:  

LWIP race condition:

Consider a TCP/IP CLIENT connected to an LWIP TCP/IP server:
    while True:
        send(data_128)
        time.sleep(.1)

Meanwhile, the LWIP TCP/IP server, after accepting the connection, is doing:

while(1)
{
    // Socket is in blocking mode
    send(client_sock, foo_string, 128, no_flags);
    recv(all);
}

In this situation, the CLIENT's OS is shrinking its TCP/IP WINDOW size because its application is not pulling any data out of the socket (it is only transmitting).  Meanwhile, the LWIP SERVER is continuing to send data to the CLIENT until its window doesn't allow it any longer.

The Wireshark trace shows that the LWIP SERVER starts to send zero-byte packets and the CLIENT responds with an ACK with the PSH flag set.  There are about 10 of these packets until the CLIENT just gives up and its application terminates.

While this was occurring, the "Mbox" error counts are going up to indicate that the LWIP SERVER is trying to send the data.  But while these errors have occurred, the CLIENT's RST/ACK packet with the FIN was dropped and could not be queued onto the socket's mailbox.

So, we end up with a zombie socket which is trying to send the data in blocking mode and is never successful.  In this state, the rest of the LWIP stack is working fine; however, the "Mbox" error count continues to go up (forever), particularly for this socket.

I believe there is a race condition in the stack when using blocking mode.  As a workaround, we enabled a 5-second timeout on every socket; after 5 seconds, the send() times out and the zombie socket can be closed and re-opened later.
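
For reference, the workaround looks roughly like this with the socket API (a sketch for lwIP 2.x, where SO_SNDTIMEO takes a struct timeval and LWIP_SO_SNDTIMEO must be enabled in lwipopts.h; 1.4.x used an integer millisecond value instead):

#include "lwip/sockets.h"

/* Workaround sketch: give the socket a 5-second send timeout so a
   blocked send() eventually returns an error instead of hanging. */
static void set_send_timeout_5s(int sock)
{
  struct timeval tv;
  tv.tv_sec  = 5;
  tv.tv_usec = 0;
  lwip_setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}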

preet <preetpal>

 


Attached Files
file #40733:  bug-50837-zero-win-probes -2.xlsx added by jcunningham (9KiB - application/vnd.openxmlformats-officedocument.spreadsheetml.sheet)
file #40729:  lwip_5004_trace.pcapng added by fwahhab (273KiB - application/octet-stream - 4 minute timeout)
file #40717:  bug-50837-zero-win-probes.xlsx added by jcunningham (9KiB - application/vnd.openxmlformats-officedocument.spreadsheetml.sheet)
file #40714:  lwip_50837_patch_test___without_timeout.pcapng added by preetpal (336KiB - application/octet-stream)
file #40548:  lwip202_with_timeout.pcapng added by preetpal (217KiB - application/octet-stream)
file #40549:  lwip202_WITHOUT_timeout.pcapng added by preetpal (252KiB - application/octet-stream)
file #40457:  lwipopts.h added by preetpal (6KiB - application/octet-stream)
file #40456:  lwip_trace.pcapng added by preetpal (117KiB - application/octet-stream)

 

Depends on the following items: None found

Items that depend on this one: None found

 


     

    Follow 18 latest changes.

    Date        Changed by   Updated Field   Previous Value => Replaced by

    2017-08-25  jcunningham  Status          In Progress => Fixed
                             Open/Closed     Open => Closed
    2017-08-22  jcunningham  Attached File   Added 0001-tcp-persist-timer-re-work-bug-50837-rebase.patch, #41616
    2017-08-04  jcunningham  Attached File   Added 0001-tcp-persist-timer-re-work-bug-50837.patch, #41415
    2017-07-27  jcunningham  Status          None => In Progress
    2017-05-18  jcunningham  Attached File   Added bug-50837-zero-win-probes -2.xlsx, #40733
    2017-05-17  fwahhab      Attached File   Added lwip_5004_trace.pcapng, #40729
    2017-05-16  jcunningham  Attached File   Added bug-50837-zero-win-probes.xlsx, #40717
    2017-05-16  preetpal     Attached File   Added lwip_50837_patch_test___without_timeout.pcapng, #40714
                             Attached File   Added lwip_50837_patch_test___with_5s_send_timeout.pcapng, #40715
    2017-05-04  jcunningham  Attached File   Added 0001-bug-50837-add-zero-window-probe-timeout.patch, #40576
    2017-05-03  jcunningham  Assigned to     None => jcunningham
                             Summary         LWIP TCP/IP race condition => TCP: zero window probe doesn't timeout
    2017-05-02  preetpal     Attached File   Added lwip202_with_timeout.pcapng, #40548
                             Attached File   Added lwip202_WITHOUT_timeout.pcapng, #40549
    2017-04-20  preetpal     Attached File   Added lwipopts.h, #40457
    2017-04-20  preetpal     Attached File   Added lwip_trace.pcapng, #40456
    2017-04-20  preetpal     Carbon-Copy     Added preetpal
