Wed 19 Oct 2011 11:07:26 AM UTC, comment #7:
Right. What's missing is limiting it to "TCP_SND_BUF - 1". However, that's only a problem with the default value of TCP_SND_BUF (2 * TCP_MSS). And I think we should change that default to "4 * TCP_MSS": with 2 * mss and a remote host implementing delayed ACKs, you degrade from a sliding window into ping-pong (send 2 segments, wait for the ACK). It doesn't break things, but it slows down transmission.
Anyway, I've added a MIN(old-value, TCP_SND_BUF - 1), which should fix things.
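For reference, a sketch of what the changed defines might look like in opt.h (illustrative; the exact committed code may differ, and LWIP_MIN is assumed to be available next to LWIP_MAX in def.h):

/* Larger default send buffer, to avoid the delayed-ACK ping-pong: */
#define TCP_SND_BUF (4 * TCP_MSS)
/* Clamp the low-water mark so it always stays below the send buffer: */
#define TCP_SNDLOWAT LWIP_MIN(LWIP_MAX(((TCP_SND_BUF)/2), (2 * TCP_MSS) + 1), (TCP_SND_BUF) - 1)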
Thanks for checking this.
|
Wed 19 Oct 2011 09:16:44 AM UTC, comment #6:
The comment above TCP_SNDLOWAT says it must be less than TCP_SND_BUF.
But TCP_SND_BUF is by default defined as (2 * TCP_MSS),
and then
TCP_SNDLOWAT = MAX(TCP_SND_BUF / 2, 2 * TCP_MSS + 1) = MAX(TCP_SND_BUF / 2, TCP_SND_BUF + 1) = TCP_SND_BUF + 1
To sum it up:
1. the MAX() macro is irrelevant, and TCP_SNDLOWAT always equals TCP_SND_BUF + 1;
2. it's always bigger than TCP_SND_BUF.
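To make this concrete, here is a small standalone program reproducing the arithmetic (TCP_MSS = 536 is assumed for illustration; the conclusion holds for any TCP_MSS):

#include <stdio.h>

#define TCP_MSS        536
#define TCP_SND_BUF    (2 * TCP_MSS)  /* the opt.h default */
#define LWIP_MAX(x, y) (((x) > (y)) ? (x) : (y))
#define TCP_SNDLOWAT   LWIP_MAX(((TCP_SND_BUF) / 2), (2 * TCP_MSS) + 1)

int main(void)
{
    /* Prints "TCP_SND_BUF = 1072, TCP_SNDLOWAT = 1073": the low-water
       mark exceeds the entire send buffer, so the writable condition
       can never be satisfied. */
    printf("TCP_SND_BUF = %d, TCP_SNDLOWAT = %d\n", TCP_SND_BUF, TCP_SNDLOWAT);
    return 0;
}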
|
Tue 18 Oct 2011 06:31:43 PM UTC, comment #5:
I've changed these defines to:
#define TCP_SNDLOWAT LWIP_MAX(((TCP_SND_BUF)/2), (2 * TCP_MSS) + 1)
#define TCP_SNDQUEUELOWAT LWIP_MAX(((TCP_SND_QUEUELEN)/2), 5)
to ensure that select() works correctly even with small TCP send windows.
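For context, the low-water test these values feed into amounts to roughly the following condition (a sketch based on the behavior discussed in this thread; the actual lwIP source is structured differently):

/* select() reports a socket writable only while there is more free
   send-buffer space than TCP_SNDLOWAT and fewer queued segments than
   TCP_SNDQUEUELOWAT: */
if ((tcp_sndbuf(pcb) > TCP_SNDLOWAT) &&
    (tcp_sndqueuelen(pcb) < TCP_SNDQUEUELOWAT)) {
    /* post the send event that marks the socket writable */
}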
|
Tue 27 Sep 2011 07:09:11 PM UTC, comment #4:
Alright, so there's no bug in lwIP. But we have to decide whether we want the defaults in opt.h to give decent performance or to minimize RAM usage (while sacrificing performance). I'd vote for the first.
|
Fri 23 Sep 2011 09:37:00 AM UTC, comment #3:
Well, since we also had another bug (EMSGSIZE) that required patching, plus a buggy Ethernet switch, I tried many things to solve this, and finally it works.
I'm afraid I can't really say which change solved the issue, but in the end I disabled the low-water test, patched the code to fix the EMSGSIZE bug, and replaced the Ethernet switch.
It also works with these parameters:
#define TCP_SNDLOWAT ((2 * TCP_MSS) + 1)
#define TCP_SNDQUEUELOWAT 5
It seems that, even with the new parameters, lwIP performs worse than the Win32/Linux IP stacks in case of packet loss: a transmission eventually succeeds only after a few minutes (!), while a PC on the same (faulty) Ethernet switch finishes in a decent time.
|
Wed 21 Sep 2011 05:44:26 PM UTC, comment #2:
Any update on this, please?
|
Fri 09 Sep 2011 06:55:44 PM UTC, comment #1:
This works for me; however, I can see that it might not work with the defaults from opt.h.
Could you make sure that:
- TCP_SNDLOWAT is greater than (2 * TCP_MSS), and
- TCP_SNDQUEUELOWAT is greater than 4?
For a safe test, please set these values in your lwipopts.h:
#define TCP_SNDLOWAT 0
#define TCP_SNDQUEUELOWAT TCP_SND_QUEUELEN
(this will completely disable the low-water test).
If that works, my solution would be to add compile-time sanity checks in init.c to ensure these defines don't break select().
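A sketch of what such checks could look like (illustrative; the exact error messages and their placement would differ):

#if (TCP_SNDLOWAT >= TCP_SND_BUF)
#error "TCP_SNDLOWAT must be less than TCP_SND_BUF, or select() never reports the socket writable"
#endif
#if (TCP_SNDQUEUELOWAT >= TCP_SND_QUEUELEN)
#error "TCP_SNDQUEUELOWAT must be less than TCP_SND_QUEUELEN, or select() never reports the socket writable"
#endif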
Setting severity to minor because this seems to be only an issue of sane configuration settings.
|
Tue 30 Aug 2011 05:14:26 PM UTC, original submission:
Hi,
I'm using lwIP version 1.4.0.
I'm sending a big buffer, split into TCP_MSS-sized blocks, over a non-blocking SOCK_STREAM socket.
Before sending a single byte, select(write_fd, timeout) works (it returns 1, and write_fd contains my socket).
Then I send TCP_MSS bytes and call select(write_fd, timeout) again.
No matter what I try, the second select() times out and never reports the socket as writable again.
This obviously breaks the sending code.
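A minimal sketch of the sending loop in question (reconstructed from the description above; send_all() and the 5-second timeout are illustrative, not taken from my actual code):

#include <stddef.h>
#include "lwip/opt.h"
#include "lwip/sockets.h"

static int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        fd_set wfds;
        struct timeval tv = { 5, 0 };  /* 5-second timeout */
        size_t chunk;
        int n;

        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        /* The first call succeeds; every subsequent call times out here. */
        if (lwip_select(sock + 1, NULL, &wfds, NULL, &tv) <= 0) {
            return -1;
        }
        chunk = len - sent;
        if (chunk > TCP_MSS) {
            chunk = TCP_MSS;  /* send in TCP_MSS-sized blocks */
        }
        n = lwip_send(sock, buf + sent, chunk, 0);
        if (n < 0) {
            return -1;
        }
        sent += (size_t)n;
    }
    return 0;
}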
If I send the whole buffer at once, I get an EMSGSIZE error from send() instead of the amount of data sent, as on the other platforms (but I think that bug is already reported).
Since the code is a cross-platform server that works perfectly on Linux, Mac, and Windows, I would like to minimize the changes required for lwIP.
Regards,
|