lwIP - A Lightweight TCP/IP stack - Bugs: bug #34176, select after non-blocking send times out

 
 


bug #34176: select after non-blocking send times out

Submitted by:  Cyril <xryl669>
Submitted on:  Tue 30 Aug 2011 05:14:26 PM UTC  
 
Category: TCP
Severity: 2 - Minor
Item Group: Faulty Behaviour
Status: Fixed
Privacy: Public
Assigned to: Simon Goldschmidt <goldsimon>
Open/Closed: Closed
Planned Release: None
lwIP version: 1.4.0


Wed 19 Oct 2011 11:07:26 AM UTC, comment #7:

Right. What's missing is limiting it to "TCP_SND_BUF - 1". However, that's only a problem for the default value of TCP_SND_BUF (2 * TCP_MSS). And I think we should change that default value to "4 * TCP_MSS": with a send buffer of only 2 * MSS and a remote host implementing delayed ACKs, you degrade from a sliding window into ping-pong behaviour (send 2 segments, wait for the ACK). It doesn't break things, but it slows down transmission.

Anyway, I've added a MIN(old-value, TCP_SND_BUF - 1), which should fix things.
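For illustration, a minimal sketch of what that clamp could look like in opt.h (the exact committed form is an assumption; LWIP_MIN and LWIP_MAX are lwIP's existing helper macros from def.h):

/* Keep the low-water mark strictly below TCP_SND_BUF, even for the
 * small default send buffer of 2 * TCP_MSS (assumed form of the fix). */
#define TCP_SNDLOWAT  LWIP_MIN(LWIP_MAX(((TCP_SND_BUF)/2), (2 * TCP_MSS) + 1), (TCP_SND_BUF) - 1)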

Thanks for checking this.

Simon Goldschmidt <goldsimon>
Project Administrator, in charge of this item.
Wed 19 Oct 2011 09:16:44 AM UTC, comment #6:

The comment above TCP_SNDLOWAT says it must be less than TCP_SND_BUF.

But TCP_SND_BUF is by default defined to (2 * TCP_MSS), and then:

TCP_SNDLOWAT = MAX(TCP_SND_BUF / 2, 2 * TCP_MSS + 1)
             = MAX(TCP_SND_BUF / 2, TCP_SND_BUF + 1)
             = TCP_SND_BUF + 1

To sum it up, with the default TCP_SND_BUF:
1. the MAX() macro is irrelevant, and TCP_SNDLOWAT equals TCP_SND_BUF + 1;
2. it's always bigger than TCP_SND_BUF.
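As a concrete instance (assuming lwIP's stock default of TCP_MSS = 536):

TCP_SND_BUF  = 2 * 536        = 1072
TCP_SNDLOWAT = MAX(536, 1073) = 1073

Since the free send-buffer space can never exceed TCP_SND_BUF (1072), it can never rise above the low-water mark (1073), so select() never reports the socket writable again.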

Amir Shalem <amirshalem>
Tue 18 Oct 2011 06:31:43 PM UTC, comment #5:

I've changed these defines to:

#define TCP_SNDLOWAT LWIP_MAX(((TCP_SND_BUF)/2), (2 * TCP_MSS) + 1)
#define TCP_SNDQUEUELOWAT LWIP_MAX(((TCP_SND_QUEUELEN)/2), 5)

This should ensure that select works correctly even for small TCP windows.

Simon Goldschmidt <goldsimon>
Project Administrator, in charge of this item.
Tue 27 Sep 2011 07:09:11 PM UTC, comment #4:

Alright, so there's no bug in lwIP. But we have to decide whether we want the defaults in opt.h to give decent performance or to minimize RAM usage (while sacrificing performance). I'd vote for the first.

Simon Goldschmidt <goldsimon>
Project Administrator, in charge of this item.
Fri 23 Sep 2011 09:37:00 AM UTC, comment #3:

Well, since we had another bug (EMSGSIZE) that required patching, and a buggy Ethernet switch, I've tried many things to solve this, and it finally works.

I'm afraid I can't really say what solved the issue, but I have permanently disabled the low-water test, patched the code to fix the EMSGSIZE bug, and changed the Ethernet switch.

It also works with these parameters:
#define TCP_SNDLOWAT 2 * TCP_MSS + 1
#define TCP_SNDQUEUELOWAT 5

It seems that, even with the new parameters, lwIP performs worse than the Win32 / Linux IP stacks in case of packet loss: transmission eventually succeeds, but only after a few minutes (!), while a PC on the same (faulty) Ethernet switch works in a decent time.

Cyril <xryl669>
Wed 21 Sep 2011 05:44:26 PM UTC, comment #2:

Any update on this, please?

Simon Goldschmidt <goldsimon>
Project Administrator, in charge of this item.
Fri 09 Sep 2011 06:55:44 PM UTC, comment #1:

This works for me; however, I can see that it might not work with the defaults from opt.h.

Could you make sure that:
- TCP_SNDLOWAT is greater than (2 * TCP_MSS), and
- TCP_SNDQUEUELOWAT is greater than 4?

For a safe test, please set these values in your lwipopts.h:

#define TCP_SNDLOWAT 0
#define TCP_SNDQUEUELOWAT TCP_SND_QUEUELEN

(This will completely disable the low-water test.)

If that works, my solution would be to add compile-time sanity checks in init.c to verify that these defines don't break select.
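For illustration, a minimal sketch of what such checks could look like (lwIP's init.c already uses #if/#error blocks in this style, but these exact conditions and messages are an assumption, not the committed code):

/* select() can only ever report the socket writable if the low-water
 * marks lie strictly below their corresponding limits. */
#if (TCP_SNDLOWAT >= TCP_SND_BUF)
#error "TCP_SNDLOWAT must be less than TCP_SND_BUF, or select() after send will always time out"
#endif
#if (TCP_SNDQUEUELOWAT >= TCP_SND_QUEUELEN)
#error "TCP_SNDQUEUELOWAT must be less than TCP_SND_QUEUELEN"
#endif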

Setting severity to minor because this seems to be only an issue of sane configuration settings.

Simon Goldschmidt <goldsimon>
Project Administrator, in charge of this item.
Tue 30 Aug 2011 05:14:26 PM UTC, original submission:

Hi,

I'm using lwIP version 1.4.0.
I'm sending a big buffer, split into TCP_MSS-sized blocks, over a non-blocking SOCK_STREAM socket.

Before sending a single byte, select(write_fd, timeout) works (it returns 1, and write_fd contains my socket).
Then I send TCP_MSS bytes and call select(write_fd, timeout) again.
No matter what I try, the second select times out and never reports the socket as writable again.
This obviously breaks the sending code.
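For reference, a minimal sketch of the kind of send loop described (a hypothetical reconstruction: the helper name, timeout value, and error handling are assumptions, and the BSD-style API is assumed to be mapped via LWIP_COMPAT_SOCKETS):

#include <stddef.h>
#include "lwip/opt.h"       /* for TCP_MSS */
#include "lwip/sockets.h"   /* BSD-style socket API: select(), send() */

/* Send `len` bytes from `buf` over a non-blocking socket in
 * TCP_MSS-sized chunks, waiting for writability before each one. */
static int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        fd_set wfds;
        struct timeval tv = { 5, 0 };   /* 5-second timeout (assumed value) */
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        /* Bug symptom: after the first chunk, this select() returns 0
         * (timeout) and never reports the socket writable again. */
        if (select(sock + 1, NULL, &wfds, NULL, &tv) <= 0) {
            return -1;
        }
        size_t chunk = (len - sent > TCP_MSS) ? TCP_MSS : (len - sent);
        int n = send(sock, buf + sent, chunk, 0);
        if (n < 0) {
            return -1;
        }
        sent += (size_t)n;
    }
    return 0;
}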

If I send the whole buffer at once, send fails with an EMSGSIZE error instead of returning the amount of data sent as it does on the other platforms (but I think this bug is already reported).

Since the code is a cross-platform server that works perfectly on Linux, Mac and Windows, I would like to minimize the changes required for lwIP.

Regards,

Cyril <xryl669>

 

No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -unavailable- added by amirshalem (Posted a comment)
  • -unavailable- added by goldsimon (Posted a comment)
  • -unavailable- added by xryl669 (Submitted the item)


Follow the 8 latest changes.

Tue 18 Oct 2011 06:31:43 PM UTC, goldsimon:
  Category: None => TCP
  Status: Works For Me => Fixed
  Open/Closed: Open => Closed
  Planned Release: (empty) => 1.4.1
  Summary: "select after non-blocking send timesout" => "select after non-blocking send times out"

Fri 09 Sep 2011 06:55:44 PM UTC, goldsimon:
  Severity: 3 - Normal => 2 - Minor
  Status: None => Works For Me
  Assigned to: None => goldsimon
