lwIP - A Lightweight TCP/IP stack - Tasks: task #7040, Work on tcp_enqueue

task #7040: Work on tcp_enqueue

Submitter:  Simon Goldschmidt <goldsimon>
Submitted:  Tue 26 Jun 2007 07:56:13 PM UTC

Category:  None
Should Start On:  Tue 26 Jun 2007 12:00:00 AM UTC
Should be Finished on:  Tue 26 Jun 2007 12:00:00 AM UTC
Priority:  1 - Later
Status:  Done
Privacy:  Public
Assigned to:  goldsimon
Percent Complete:  100%
Open/Closed:  Closed
Planned Release:  1.4.0
Effort:  0.00


Tue 04 May 2010 07:39:58 PM UTC, comment #27: 

Setting to 'done' as there haven't been any bug reports since setting to 'ready for test'.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 11 Mar 2010 09:30:16 PM UTC, comment #26: 

Just found a bug in the committed patch: mixing COPY with NOCOPY writes leads to NOCOPY writes being copied, too (which doesn't corrupt anything, but causes unnecessary copying).

Simon Goldschmidt <goldsimon>
Group administrator
Fri 05 Mar 2010 12:27:11 PM UTC, comment #25: 


> When you say "merged tcp_write with tcp_enqueue_data" I'm a
> little confused - perhaps I should look at the change - as
> tcp_write sometimes will want to enqueue flags and data.


I don't think so: tcp_write always enqueued data - sometimes together with apiflags or options, but never with (tcp-header-)flags: SYN/FIN are now covered by tcp_enqueue_flags, RST is sent directly, ACK is added in tcp_output.

Of course I'm still happy for someone else to review the code!

> One thing to check if you're concerned about breaking things is
> if the TCP timestamp option code still works.


I already tested that and it still works ;-)

The only remaining big function is tcp_write. And although I also think that function is too long, it will be difficult to split it into smaller parts as many local vars are used... I'll see what I can do.

Simon Goldschmidt <goldsimon>
Group administrator
Fri 05 Mar 2010 11:47:10 AM UTC, comment #24: 

Thanks for taking the time to get this change in.

When you say "merged tcp_write with tcp_enqueue_data" I'm a little confused - perhaps I should look at the change - as tcp_write sometimes will want to enqueue flags and data.

I would still like to see some of the big functions in this code path (tcp_enqueue, tcp_write, etc.) split into multiple smaller, more manageable functions, but that could be a new task.

One thing to check if you're concerned about breaking things is if the TCP timestamp option code still works.

Kieran Mansley <kieranm>
Group Member
Fri 05 Mar 2010 11:39:09 AM UTC, comment #23: 

I just noticed we might want to trim (pbuf_realloc) the oversized pbufs once they are sent (i.e. moved from unsent to unacked) to free unused memory.

Also, I'll have to review the places where pcb->unsent_oversize is used/reset and where the last unsent segment changes, to make sure this keeps track of the oversize in the correct segment/pbuf and doesn't accidentally add "oversize" to the wrong pbuf.

Simon Goldschmidt <goldsimon>
Group administrator
Fri 05 Mar 2010 11:15:23 AM UTC, comment #22: 

I finally found the time to integrate Jakob's patches into CVS. There have been some changes to the CVS tcp code though since creating the patch - I hope I didn't break it while adapting to CVS HEAD.

While at it, I improved some things in the patch and some other things in tcp:
- renamed tcp_send_ctrl -> tcp_send_fin (only used for FIN)
  -> tcp_send_fin can add the TCP_FIN flag to the last unsent header
- merged tcp_write with tcp_enqueue_data
- renamed tcp_enqueue_options to tcp_enqueue_flags (as it is only used for SYN and FIN, which are flags, not options)
- created the define TCPH_HDRLEN_FLAGS_SET() to set flags and hdrlen at the same time (see the sketch below)
- moved the calculation of which options to enqueue into tcp_write/tcp_enqueue_flags
- use TCPH_SET_FLAG in some places instead of TCPH_FLAGS_SET
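
For illustration, the combined macro can have roughly this shape (a hedged reconstruction, not a verbatim copy of the lwIP header): the TCP header keeps the 4-bit data offset, the reserved bits and the flag bits in a single 16-bit field, so setting hdrlen and flags together needs only one store and one byte swap:

    #define TCPH_HDRLEN_FLAGS_SET(phdr, len, flags) \
      (phdr)->_hdrlen_rsvd_flags = htons((u16_t)(((len) << 12) | (flags)))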


I hope I didn't break anything. I tested the tcp apps in contrib and everything works, so I'm pretty hopeful it works ;-)

Simon Goldschmidt <goldsimon>
Group administrator
Sun 19 Apr 2009 12:35:35 PM UTC, comment #21: 

Reminder from bug #25094: at least for no-copy data, TCP header should not be allocated until we know whether we enqueue the data in a previous segment, but I think that's covered by Jakob's patches.

Simon Goldschmidt <goldsimon>
Group administrator
Fri 03 Apr 2009 11:07:53 AM UTC, comment #20: 

There have been some changes to tcp_enqueue recently that probably mean these patches don't apply cleanly, and that splitting the tcp_enqueue function into two (one for options and one for data) is no longer viable as tcp_write() may want to do both.  However, there is still room for cleaning up tcp_enqueue() further (e.g. by sorting out the coalescing of segments, and abstracting some of the code out into separate smaller functions).

The oversize stuff is interesting, particularly if it gets a big performance gain.

I've marked this as targeted at 1.4.0 rather than 1.3.1 as I think it's not urgent.

Kieran Mansley <kieranm>
Group Member
Fri 13 Feb 2009 05:16:51 PM UTC, comment #19: 

There's a typo in tcp-enqueue-split: "SYN for FIN" should be "SYN or FIN".

Charles Landau <clandau>
Fri 06 Feb 2009 09:55:05 AM UTC, comment #18: 

Ok, attached patch tcp-oversize-smaller reduces the code size a bit when applied on top of tcp-oversize.

It looks like my compiler spotted the same optimizations as me, so the reductions are small. Object code size for tcp_out:

  • CVS: 5626
  • 3 patches: 6456 (was 6494)
  • TCP_OVERSIZE=0: 5886


So the cost of cleaning up and fixing the segmentation issues is 260 bytes; oversizing now costs 570 bytes of code (608 before this reduction).

Patch stack now is:

  1. tcp-enqueue-split
  2. tcp-oversize
  3. tcp-oversize-smaller



(file #17413)

Jakob Stoklund Olesen <stoklund>
Group Member
Fri 06 Feb 2009 08:35:30 AM UTC, comment #17: 

Here are some code size numbers for tcp_out.c (Blackfin):

  • CVS:      5626
  • split:    5906
  • oversize: 6494


(These numbers differ from what I reported earlier because I am now using LWIP_PLATFORM_BYTESWAP)

I can try to reduce the impact on code size, but at some point readability suffers.

The patches are rather big because tcp_enqueue has been more or less rewritten. Here is what I did for tcp-enqueue-split:

  • Split tcp_enqueue into two identical functions.
  • Remove superfluous code from each.
  • Move common code to tcp_create_segment.
  • Move coalescing of segments from the bottom of tcp_enqueue_data to the top.


The tcp-oversize patch is actually a simple addition with almost everything bracketed by #if TCP_OVERSIZE. Unfortunately, I couldn't resist the temptation of further cleanups in tcp_enqueue_data. This makes the patch a bit bigger than it needs to be.

If you want to understand what is going on, I would recommend applying both patches, then comparing with the old tcp_enqueue side by side.

Jakob Stoklund Olesen <stoklund>
Group Member
Thu 05 Feb 2009 09:00:54 PM UTC, comment #16: 

Cool new features! But the patch is rather big, I'll try to test it on the weekend.

Did you measure code size changes with the new patch?

Simon Goldschmidt <goldsimon>
Group administrator
Wed 04 Feb 2009 11:47:44 AM UTC, comment #15: 

Attached patch tcp-oversize applies on top of tcp-enqueue-split. Please disregard the earlier tcp-enqueue-concat.

Implement the TCP_OVERSIZE setting
----------------------------------

When relevant, tcp_enqueue_data will allocate pbufs that are larger than strictly necessary. Data from following writes is copied into the oversized pbuf before new pbufs are allocated. The TCP_OVERSIZE setting controls the maximum overallocation. It ranges from 0 to TCP_MSS.

Oversized allocation has the following advantages:

- Pbuf lengths are aligned to help picky DMA engines.
- Pbuf chain length and snd_queuelen can be controlled when using many small writes.
- Pbuf vs. data overhead can be controlled.
- Unfragmented TCP packets can be created for interfaces that can do DMA but not scatter-gather.

The disadvantage is a slightly higher RAM usage:

- struct tcp_pcb has an extra 16-bit field (unsent_oversize). This overhead disappears due to alignment on 32-bit platforms.
- Unused overallocation in transmitted pbufs is wasted. This typically happens in the last segment produced by a burst of writes.

To control the extra memory usage, tcp_enqueue_data will not allocate an oversized pbuf if it thinks the segment will be transmitted immediately. The heuristic is not perfect, but it works very well for slow stop-and-go protocols like telnet.

Setting the TF_NODELAY flag on a pcb will disable oversizing unless TCP_WRITE_FLAG_MORE is passed to tcp_write.
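
As a standalone model of the idea (an illustrative sketch with made-up names, not the patch code): a write first fills the spare tail of the last oversized buffer, and only then allocates a new buffer for the remainder:

    #include <string.h>

    #define TCP_MSS      1460
    #define TCP_OVERSIZE TCP_MSS            /* maximum overallocation, 0..TCP_MSS */

    struct buf {
      unsigned char  data[TCP_MSS];         /* allocated "oversized": a full MSS
                                               even if the first write is smaller */
      unsigned short len;                   /* bytes actually in use */
    };

    static unsigned short unsent_oversize;  /* spare bytes at the tail of the
                                               last unsent buffer */

    /* Append into the spare tail; returns how many bytes were consumed.
       The caller allocates a new oversized buffer for whatever is left. */
    static unsigned short
    oversize_append(struct buf *last, const unsigned char *src, unsigned short len)
    {
      unsigned short chunk = (len < unsent_oversize) ? len : unsent_oversize;
      memcpy(&last->data[last->len], src, chunk);
      last->len = (unsigned short)(last->len + chunk);
      unsent_oversize = (unsigned short)(unsent_oversize - chunk);
      return chunk;
    }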



(file #17402)

Jakob Stoklund Olesen <stoklund>
Group Member
Tue 03 Feb 2009 09:28:33 AM UTC, comment #14: 

OK, second attempt: The attached patch tcp-enqueue-split splits tcp_enqueue into two separate functions for data and options.

This patch is against CVS HEAD and replaces the previous tcp-enqueue-concat patch.

- tcp_enqueue_data does proper TCP segmentation.

- tcp_enqueue_data has static linkage and will be inlined in tcp_write by a savvy compiler.

- tcp_enqueue_options fixes a bug where sending a FIN with snd_buf==0 would underflow snd_buf.

This change increases the tcp_out code size from 5356 to 5616 bytes on a Blackfin.


(file #17393)

Jakob Stoklund Olesen <stoklund>
Group Member
Tue 03 Feb 2009 09:18:14 AM UTC, comment #13: 

I can confirm that by sending MSS-sized blocks of data I gained a huge performance boost. My application generates small chunks of data (20-30 bytes), which I cache in an MSS-sized application buffer (not a pbuf). I'm sending files over a GPRS connection and I am not saturating the link, so I do not need zero-copy.

At first I achieved a transfer rate of at most 200-300 bytes/sec. Now I'm sending 3-3.5 KBytes/sec :)

At the application level I use something like this:

int my_enque(struct tcp_pcb *pcb, u8_t *src, u16_t len, u8_t push_flag);

Most of the time it is called with len = 20-30 and push_flag = FALSE. These bytes are accumulated in the buffer, and when it reaches MSS, I call tcp_write() and tcp_output() to send it (note: I do not call tcp_enqueue() directly).

When I want to send the data even though the temp buffer is not filled up to MSS, I pass push_flag = TRUE. This is used when I'm enqueueing the last chunk just before I close the file, and also when I send commands to the FTP server over the control connection.
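
A hedged sketch of that scheme (my reconstruction of what the comment describes, not Iordan's actual code; TX_BUF_SIZE stands in for the negotiated MSS):

    #include <string.h>
    #include "lwip/tcp.h"

    #define TX_BUF_SIZE 1460                 /* one MSS-sized staging buffer */

    static u8_t  tx_buf[TX_BUF_SIZE];
    static u16_t tx_len;                     /* bytes buffered so far */

    int my_enque(struct tcp_pcb *pcb, u8_t *src, u16_t len, u8_t push_flag)
    {
      while (len > 0) {
        u16_t space = (u16_t)(TX_BUF_SIZE - tx_len);
        u16_t chunk = (len < space) ? len : space;
        memcpy(&tx_buf[tx_len], src, chunk);
        tx_len = (u16_t)(tx_len + chunk);
        src += chunk;
        len = (u16_t)(len - chunk);
        /* flush on a full buffer, or when the caller pushes the tail */
        if (tx_len == TX_BUF_SIZE || (push_flag && len == 0)) {
          if (tcp_write(pcb, tx_buf, tx_len, TCP_WRITE_FLAG_COPY) != ERR_OK) {
            return -1;                       /* out of send buffer; retry later */
          }
          tcp_output(pcb);
          tx_len = 0;
        }
      }
      return 0;
    }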

Maybe it would be best if I added the small amounts of bytes directly to the last pbuf. Then I would not need the temp buffer any more, but I did not want to touch the core.

The price I have to pay is the extra MSS-sized buffer and a very small amount of code, which is not a problem in my case. This approach does not support zero-copy, but even if it did I would not be able to benefit from it, since the GPRS net is extremely slow and the RTT is huge compared to Ethernet (an ACK comes after anywhere from 500 ms to 30-40 sec!).

To me, integrating something like this somewhere in the stack:

 1.   if (len > pcb->mss)
        len -= len % pcb->mss;
 2.   Add len bytes to the current pbuf
 3.   ...

looks pretty good. Everyone should benefit from this change.

I'm a bit conservative about changing the API. Also, I don't think that tcp_sndbuf(pcb) should return a multiple of MSS.

We could add a new function which does what my_enque() does. It could be implemented in different ways with #ifdefs according to user needs. Then everyone would be free to choose which approach is best for their application, and whether to use my_enque at all, because it works for me but for most of you it will not.

Greetings,
Iordan

Iordan Neshev <iordan_neshev>
Mon 02 Feb 2009 06:12:57 PM UTC, comment #12: 


> Bill, thanks for testing the patch!


No problem - I was looking into the code and thinking maybe I could give it a shot. When I saw what it took you to do it I knew I was in over my head!

> What is the limiting factor for your performance?
> CPU? Network bandwidth? Bandwidth-delay product? DMA
> bandwidth/fragmentation?
> Do you have retransmissions?


CPU and CPU memory accesses (large blocks of outbound data, fairly slow SDRAM, zero-copy RAW_API). We plan on adding a DMA/checksum FPGA, and until then I'm trying to improve performance in the areas that won't cover. There is no retransmission; every Wireshark capture has been error-free. That was when I noticed the handful of non-MSS-sized packets in a 300+ kB transfer.

> Sending full-sized TCP segments only helps you if you are saturating
> the network link. It minimizes the protocol header overhead.


It's safe to say I am not saturating the link, but I'm trying to. :) However, it did slightly help performance to have full packets (measured as how many 300 kB chunks of data I can send to the PC per second).

> If you are
> CPU-limited, it could well be more work. The pbuf chains will be
> slightly longer on average.


It does take slightly longer to send data. Whether there is benefit to "remembering" the last pbuf in the change I don't know. I would think we would want to maintain this, because when sending small data chunks with tcp_write, the chain could be quite large (100 or more pbufs in the list).

> Your 1% number is rather small. Did you measure a variance as well?


That's an average over several MB of data. But it's a fair test, because only one change was made: the patch, versus my calling tcp_write with MSS-sized chunks (except for the last packet). This includes an optimized assembly inet_chksum, as that is where there is a lot of gain.

Bill Auerbach <billauerbach>
Sat 31 Jan 2009 01:35:09 AM UTC, comment #11: 

Re comment #8: I think that this is leaning even further towards the argument of shaking up the raw TCP API by allowing data to be written as pbufs rather than char * + lengths.

Because of the need to allocate pbufs in chunks of the segment size, there could perhaps be a separate function to allocate a pbuf chain of the appropriate dimensions. Thinking out loud, that function could also maybe then be used in future to allocate pbufs complying with zero-copy requirements of alignment etc.? Or at least possibly call such a function. So for example:

tcp_pbuf_alloc(tcp_pcb, size, pbuf_type)
(where pbuf_type is PBUF_RAM, PBUF_ROM etc.)

and in turn that calls pbuf_alloc_zerocopy(pbuflayer, len, pbuftype, netif)

The netif would be required in order to identify driver/hardware requirements for alignment, address, etc.
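
Putting hypothetical signatures on that (neither function exists in lwIP; the shapes simply follow the text above):

    struct pbuf *tcp_pbuf_alloc(struct tcp_pcb *pcb, u16_t size, pbuf_type type);
    struct pbuf *pbuf_alloc_zerocopy(pbuf_layer layer, u16_t len, pbuf_type type,
                                     struct netif *netif);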

Should something like this even be a prerequisite before doing a full zero-copy implementation, essentially by being its first phase? I think it might be.

There would be API breakage doing this of course. Maybe we could minimise this by having a wrapper used for backwards compatibility.

Further discussion about this should perhaps move back to lwip-devel though, as it's out of place in this bug.

Jifl

Jonathan Larmour <jifl>
Group Member
Fri 30 Jan 2009 11:26:24 PM UTC, comment #10: 

Bill, thanks for testing the patch!

What is the limiting factor for your performance?
CPU? Network bandwidth? Bandwidth-delay product? DMA bandwidth/fragmentation?
Do you have retransmissions?

Sending full-sized TCP segments only helps you if you are saturating the network link. It minimizes the protocol header overhead. If you are CPU-limited, it could well be more work. The pbuf chains will be slightly longer on average.

Your 1% number is rather small. Did you measure a variance as well?


Jakob Stoklund Olesen <stoklund>
Group Member
Fri 30 Jan 2009 10:19:07 PM UTC, comment #9: 

I can confirm that this works with the same test I've been using when I first noticed the problem.

I don't know the impact on code size, but the impact on performance (throughput) for sending large amounts of data is about 1%. It might help if the last packet in the queue were saved in the pcb to eliminate the search to the end that is currently required - maybe for long chains this is where the time goes? Or have tcp_write return the number of bytes queued, and let it simply queue 'len - len % mss' bytes and return this amount. The problem is that this breaks code that calls tcp_write.

I know there was a lot of effort put into this and I appreciate that.  From my position, I would like to use it, but I need a performance increase, not decrease.

Bill Auerbach <billauerbach>
Fri 30 Jan 2009 09:39:28 PM UTC, comment #8: 

First of all: I am sorry for posting comments directly to the lwip-devel list. For any new listeners out there, there is some discussion about this task in the January 2009 archives of lwip-devel.

I have attached a patch for tcp_enqueue that fixes the issue discussed here. It was not as simple as I had hoped. The tcp_enqueue function is quite scary. I have tested this patch, but please try to test it on different systems. There are many things that can go wrong.

The code to extend the last unsent segment is separate from the code that creates new segments. This has a couple of advantages: 1. pbufs are allocated with PBUF_RAW, so no memory is wasted on a header that is never prepended. 2. The old code would allocate a segment only to free it again immediately. In the no-copy case, we can also avoid allocating an extra pbuf for the header.

The downside is that the pbuf allocation code is duplicated. Twice as many bugs!

I think it would be a good idea to split tcp_enqueue into two separate functions:

tcp_enqueue_data to be called from tcp_write and
tcp_enqueue_options for everybody else.

The logic trying to handle data or options has become rather convoluted.
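
In prototype form, the proposed split might look like this (a sketch only; the argument lists are guesses modeled on the existing tcp_enqueue, not the final patch):

    static err_t tcp_enqueue_data(struct tcp_pcb *pcb, const void *data,
                                  u16_t len, u8_t apiflags);
    static err_t tcp_enqueue_options(struct tcp_pcb *pcb, u8_t tcp_flags,
                                     u8_t *optdata, u8_t optlen);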

I think the user should be able to call tcp_write with small data chunks without worrying too much about performance. My own user code does that. Currently small writes cause very long pbuf chains because each write creates at least one pbuf. With no-copy tcp_write this is inevitable. With copying tcp_write it is really not necessary.

tcp_enqueue_data could allocate a pbuf with room for a full segment (or some smaller chunk size). Following tcp_writes could append data to the same pbuf. This would use more memory, so it must be considered carefully.

This is perhaps related to the zero-copy driver discussion on lwip-devel? Maybe the driver could have a say in how outgoing pbufs are allocated?

My own Ethernet driver cannot directly DMA the long pbuf chains created by small writes. I have to copy them into contiguous memory first. This means that data is copied both by tcp_enqueue AND the driver.


(file #17364)

Jakob Stoklund Olesen <stoklund>
Group Member
Wed 28 Jan 2009 10:34:47 PM UTC, comment #7: 

Can we revisit this? I don't think this has to do with Nagle - or it also occurs without Nagle. On large transfers (RAW_API) I see it fail to send a full payload in about 1 in 40 packets. True, this isn't a huge problem, but I would expect that when sending e.g. 300 kB, all but the last packet would have a full payload.

Do we know why it sends a partial packet? In my tcp_sent callback, if I have more than MTU bytes to send, should I wait for tcp_sndbuf to have MTU bytes free? Is this what was meant by solving it at the application level?

Thanks,
Bill

Bill Auerbach <billauerbach>
Thu 28 Jun 2007 11:08:28 AM UTC, comment #6: 

OK, I'll wait until Nagle is fixed and then take a look at this again. I'd still love to fix it, but only if the fix isn't much bigger or slower than the existing solution!

Simon Goldschmidt <goldsimon>
Group administrator
Thu 28 Jun 2007 10:54:01 AM UTC, comment #5: 

Nagle should indeed be solving that problem, and I suggest we fix that rather than the segmentation in the queue.

Kieran Mansley <kieranm>
Group Member
Thu 28 Jun 2007 07:16:13 AM UTC, comment #4: 

re comment #3:
What I've seen there might also come from an incorrect implementation of tcp_output_nagle(): tcp_output should only be called if there is a full-sized segment to be sent (or unacked == NULL). pcb->snd_queuelen is not sufficient for this when calling tcp_write with many small chunks! I'll work on that.
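
A sketch of that gate (a loose reconstruction, not the exact lwIP macro - and, as the comment notes, snd_queuelen alone is a poor proxy for "a full-sized segment is queued"):

    #define tcp_output_nagle(pcb)                                      \
      ((((pcb)->unacked == NULL) ||                                    \
        ((pcb)->flags & TF_NODELAY) ||                                 \
        ((pcb)->snd_queuelen > 1)) /* imperfect full-segment check */  \
       ? tcp_output(pcb) : ERR_OK)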

Anyway, the current code already tries to combine an existing segment with the newly created segments, but only succeeds if the first segment on 'queue' fits in. I would see this as a bug (it both wastes space and is inefficient).

Simon Goldschmidt <goldsimon>
Group administrator
Wed 27 Jun 2007 05:13:22 PM UTC, comment #3: 

Not? If you call tcp_write many times with small chunks instead of one time with a big chunk, you will get every chunk in its own segment. Now that's what I call inefficient!

I'm not saying I'm using it like this, but that's not what I thought tcp should be doing...


Simon Goldschmidt <goldsimon>
Group administrator
Wed 27 Jun 2007 03:51:01 PM UTC, comment #2: 

I'd suggest this is not worth worrying about.  The consequences are very minor.


Kieran Mansley <kieranm>
Group Member
Wed 27 Jun 2007 03:42:19 PM UTC, comment #1: 

OK, some more details: when sending much data very fast (so fast that tcp_pcb->snd_buf becomes empty), an incomplete segment (smaller than pcb->mss) can be queued (and in most cases will be).

When send buffer is available again, the caller will call tcp_enqueue again, which splits the data into pcb->mss-sized chunks and tries to enqueue them. At this point, the last queued segment (which is smaller than mss) will not be filled, which results in sending segments smaller than mss although there is enough data to be sent/enqueued.

To solve this, we can either
- in tcp_enqueue, check if the last segment on pcb->unsent is < mss, or
- at application level, check the last segment and call tcp_write with a smaller length so that the last segment will be filled (see the sketch below).

I'd prefer the first solution because it also solves the problem for the raw API and is faster.
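
A sketch of the application-level workaround (illustrative names and helper; this is the second option above, trimming a write so that only whole segments are enqueued):

    /* Write as much as possible, but trim so that only whole MSS-sized
       segments are enqueued; the short tail goes out once less than
       one MSS remains. */
    static err_t write_full_segments(struct tcp_pcb *pcb,
                                     const u8_t *data, u16_t remaining)
    {
      u16_t send_len = tcp_sndbuf(pcb);
      if (send_len > remaining) {
        send_len = remaining;
      }
      if (send_len > pcb->mss) {
        send_len -= send_len % pcb->mss;  /* whole segments only */
      }
      return tcp_write(pcb, data, send_len, 1 /* copy */);
    }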

Any comments? Maybe this is intended, in order to keep the code small?

Simon Goldschmidt <goldsimon>
Group administrator
Tue 26 Jun 2007 07:56:13 PM UTC, original submission:  

If the last segment on pcb->unsent is < pcb->mss, it could hold additional data. Instead, tcp_enqueue only fills it with additional data if all the data passed to it fits in that segment.

ToDo: first check the space in the last unsent segment, then create pbufs.

Simon Goldschmidt <goldsimon>
Group administrator

 


Attached Files
file #17413:  tcp-oversize-smaller added by stoklund (7KiB - application/octet-stream - Smaller code size for tcp-oversize)
file #17402:  tcp-oversize added by stoklund (15KiB - application/octet-stream)
file #17393:  tcp-enqueue-split added by stoklund (25KiB - application/octet-stream - Alternate patch: Split tcp_enqueue in two.)
file #17364:  tcp-enqueue-concat added by stoklund (8KiB - application/octet-stream - Patch for tcp_enqueue())

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • clandau (Posted a comment)
  • iordan_neshev (Posted a comment)
  • jifl (Posted a comment)
  • stoklund (Updated the item)
  • billauerbach (Posted a comment)
  • kieranm (Posted a comment)
  • goldsimon (Submitted the item)


    Follow 10 latest changes.

    Date        Changed by  Updated Field     Previous Value => Replaced by
    2010-05-04  goldsimon   Status            Ready For Test => Done
                            Open/Closed       Open => Closed
    2010-03-05  goldsimon   Status            None => Ready For Test
                            Percent Complete  0% => 100%
                            Assigned to       None => goldsimon
    2009-04-03  kieranm     Planned Release   None => 1.4.0
    2009-02-06  stoklund    Attached File     - => Added tcp-oversize-smaller, #17413
    2009-02-04  stoklund    Attached File     - => Added tcp-oversize, #17402
    2009-02-03  stoklund    Attached File     - => Added tcp-enqueue-split, #17393
    2009-01-30  stoklund    Attached File     - => Added tcp-enqueue-concat, #17364
