lwIP - A Lightweight TCP/IP stack - Tasks

 
 


task #6935: Problems to be solved with the current socket/netconn API

Submitter:  Simon Goldschmidt <goldsimon>
Submitted:  Thu 24 May 2007 12:00:11 PM UTC

Category:               None
Should Start On:        Thu 24 May 2007 12:00:00 AM UTC
Should be Finished on:  Thu 24 May 2007 12:00:00 AM UTC
Priority:               1 - Later
Status:                 None
Privacy:                Public
Assigned to:            None
Percent Complete:       0%
Open/Closed:            Closed
Planned Release:        None
Effort:                 0.00


Mon 11 Jun 2007 06:26:57 PM UTC, comment #20: 

yep

Simon Goldschmidt <goldsimon>
Group administrator
Mon 11 Jun 2007 06:20:16 PM UTC, comment #19: 

Simon, is it OK to close this and continue with https://savannah.nongnu.org/task/?6994 ?

Frédéric Bernon <fbernon>
Group Member
Sat 09 Jun 2007 11:16:25 AM UTC, comment #18: 

A design goal should be to make it easy to disable code that is not used. While function-based linking helps with this, things like non-blocking sockets or the code needed for select live inside larger functions and can only be left out by creating compile-time switches for them.
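For example (a rough sketch only; the option name below is invented for illustration and is not an existing opt.h symbol), the select machinery in sockets.c could be guarded like this:

/* opt.h: hypothetical switch, name invented for illustration */
#ifndef LWIP_SOCKET_SELECT_SUPPORT
#define LWIP_SOCKET_SELECT_SUPPORT 1
#endif

/* sockets.c */
#if LWIP_SOCKET_SELECT_SUPPORT
int lwip_select(int maxfdp1, fd_set *readset, fd_set *writeset,
                fd_set *exceptset, struct timeval *timeout)
{
  /* ... existing select implementation ... */
}
#endif /* LWIP_SOCKET_SELECT_SUPPORT */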

Simon Goldschmidt <goldsimon>
Group administrator
Fri 08 Jun 2007 07:30:02 PM UTC, comment #17: 

OK, I have checked it in. I hope this code can help the sequential API reach the same performance as the raw API...

Frédéric Bernon <fbernon>
Group Member
Mon 04 Jun 2007 10:13:39 AM UTC, comment #16: 

I'd consider this task relatively low priority, but if you're keen to experiment (and can keep it clean so it's easy to back out again if we decide this approach isn't worth it) then go ahead.

Kieran Mansley <kieranm>
Group Member
Mon 04 Jun 2007 09:47:18 AM UTC, comment #15: 

Wrong copy/paste in the last part:

In api_msg, most of the "sys_mbox_post(msg->conn->mbox, NULL);" calls would be replaced by TCPIP_APIMSG_ACK(m).

#if LWIP_TCPIP_CORE_LOCKING
#define TCPIP_APIMSG_ACK(m)
#else
#define TCPIP_APIMSG_ACK(m) sys_mbox_post(m->conn->mbox, NULL)
#endif
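
As a sketch of how a handler in api_msg.c would then look (the function and field names follow the current CVS code, but treat them as illustrative):

void
do_recv(struct api_msg_msg *msg)
{
  /* ... perform the operation on msg->conn as today ... */

  /* With LWIP_TCPIP_CORE_LOCKING the caller already runs the handler while
   * holding the core lock, so the macro expands to nothing; otherwise it
   * expands to the usual sys_mbox_post() that wakes the blocked
   * application thread. */
  TCPIP_APIMSG_ACK(msg);
}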

Frédéric Bernon <fbernon>
Group Member
Mon 04 Jun 2007 09:38:55 AM UTC, comment #14: 

To be able to continue this item and to integrate the latest patches, I would like to add a way to enable the alternative way of communicating with tcpip_thread (the global mutex).

The idea is to add a define in opt.h like:

/* EXPERIMENTAL, Don't use it if you're not a lwIP project member */
#ifndef LWIP_TCPIP_CORE_LOCKING
#define LWIP_TCPIP_CORE_LOCKING 0
#endif

#if LWIP_TCPIP_CORE_LOCKING
#define LOCK_TCPIP_CORE()    sys_sem_wait(lock_tcpip_core)
#define UNLOCK_TCPIP_CORE()  sys_sem_signal(lock_tcpip_core)
#else
#define LOCK_TCPIP_CORE()
#define UNLOCK_TCPIP_CORE()
#endif
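
On the tcpip.c side, a minimal sketch could look like this (assuming the current sys_sem_* port API; the details are only illustrative):

#if LWIP_TCPIP_CORE_LOCKING
sys_sem_t lock_tcpip_core;
#endif

void
tcpip_init(void (* initfunc)(void *), void *arg)
{
#if LWIP_TCPIP_CORE_LOCKING
  lock_tcpip_core = sys_sem_new(1);  /* binary semaphore used as a mutex */
#endif
  /* ... create the mbox and start tcpip_thread as before ... */
}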

lock_tcpip_core itself would be defined and initialized in tcpip.c (roughly as sketched above), and macros would be added so that existing code does not need to change... For tcpip_apimsg, we could add a new function such as tcpip_apimsg_lock, and a define in api_lib to select one or the other:

#if LWIP_TCPIP_CORE_LOCKING
#define TCPIP_APIMSG(m) tcpip_apimsg_lock(m)
#else
#define TCPIP_APIMSG(m) tcpip_apimsg(m)
#endif
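
A sketch of the locking variant itself (simplified, and assuming the api_msg layout of the current CVS code):

err_t
tcpip_apimsg_lock(struct api_msg *apimsg)
{
  /* The application thread executes the handler itself while holding the
   * core lock, so no mailbox round-trip and no task switch are needed. */
  LOCK_TCPIP_CORE();
  apimsg->function(&(apimsg->msg));
  UNLOCK_TCPIP_CORE();
  return ERR_OK;
}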

Note that all the functions in api_lib.c would use TCPIP_APIMSG, except "netconn_connect", which is the only "asynchronous" function (it needs several tcpip_thread main-loop iterations to provide a result).

In api_msg, most of the "sys_mbox_post(msg->conn->mbox, NULL);" calls would be replaced by a TCPIP_ACKNOWLEDGE_CONN(c).

#if LWIP_TCPIP_CORE_LOCKING
#define TCPIP_ACK_APIMSG(m)
#else
#define TCPIP_ACK_APIMSG(m) sys_mbox_post(m->conn->mbox, NULL)
#endif

The exception is once again the "connect" functions (do_connected, and err_tcp while "conn->state == NETCONN_CONNECT").

Do you agree?

Frédéric Bernon <fbernon>
Group Member
Fri 01 Jun 2007 08:29:58 AM UTC, comment #13: 

Another thing about the current socket layer (it's not true for the netconn layer): "struct netbuf" is just used as a wrapper around a "struct pbuf" with addr & port fields. In the socket layer, a lot of the netbuf functions are not used, and each time a packet has to be exchanged between the socket layer and api_msg, there is a netbuf allocation, initialization and free.

If we redesign the socket layer, I propose to remove any code using netbuf and to replace it by adding "addr & port" fields directly to the pbuf struct (see the sketch below). I don't think memory use would increase (even if we lose some memory in each pbuf, we save the netbuf memp pool and the netbuf function code).
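
A sketch of the proposal (illustrative only; this is not the current pbuf layout):

struct pbuf {
  /* ... existing pbuf fields (next, payload, tot_len, len, ...) ... */
  /* proposed additions so the socket layer no longer needs struct netbuf: */
  struct ip_addr addr;   /* peer address on RX, destination address on TX */
  u16_t port;            /* peer/destination port */
};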




Frédéric Bernon <fbernon>
Group Member
Tue 29 May 2007 05:23:48 PM UTC, comment #12: 

Some details about where the time is spent:

First, the "sendto" measurement is done inside lwip_sendto, from the first instruction (just after the local variable declarations) until the return (just before it). That's what David calls the "total elapsed time". The application thread and tcpip_thread have the same priority (to be in the worst case, which is important for this benchmark).

Some details on the current CVS HEAD code. Of the ~204µs:

  • ~31µs before the sys_mbox_post in tcpip_apimsg (that is, get_socket, netbuf_ref + pbuf_alloc, netconn_send, preparing the api_msg struct, and tcpip_apimsg).

  • The application task keeps running for ~6µs to start the sys_arch_mbox_fetch.

  • Here the task switch happens.

  • tcpip_thread spends ~6µs completing the sys_arch_mbox_fetch (note that the delay between the application's post and tcpip_thread's fetch is around ~12µs).

  • There are ~18µs between the previous step and udp_sendto (decrementing the next timeout, unpacking tcpip_msg, unpacking api_msg, some checks in api_msg).

  • udp_sendto takes ~74µs.

  • The return and post take ~6µs; tcpip_thread keeps running for roughly another 32µs.

  • Here the task switch happens again.

  • The application thread completes the fetch in ~12µs.

  • Finishing lwip_sendto takes another ~17µs.

Note that the sum comes out a little different from 204µs, just because I'm not giving exact measurements. So take these as relative values...

Regarding priority, I wanted to measure the worst concurrent case, to maximize the delay before the end of tcpip_apimsg.

About the idea of changing tcpip_apimsg: once again, remember that it's experimental and just meant to give ideas for the "next" sequential layer. There are some exceptions, the main one of course being "connect": in all the other cases, the processing is synchronous and done in a single tcpip_thread main-loop iteration. But "connect" is different, and the real end of processing happens after an ip_input or a timer (by invoking the callback do_connected or err_tcp). So we cannot lock the core in the same way; perhaps we could keep the current mechanism for that case...
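
A simplified sketch of why connect is the exception (names follow the current api_msg.c, but the details are only illustrative):

static err_t
do_connected(void *arg, struct tcp_pcb *pcb, err_t err)
{
  struct netconn *conn = arg;

  LWIP_UNUSED_ARG(pcb);
  conn->err = err;
  if (conn->type == NETCONN_TCP && conn->state == NETCONN_CONNECT) {
    conn->state = NETCONN_NONE;
    /* This runs from a later tcp callback, not from the same tcpip_thread
     * loop iteration that ran do_connect, so the application thread still
     * has to be woken through the mailbox even with core locking enabled. */
    sys_mbox_post(conn->mbox, NULL);
  }
  return ERR_OK;
}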

I've attached a JPEG image of the tool I use for the analysis (to give you an idea)...


Frédéric Bernon <fbernon>
Group Member
Mon 28 May 2007 11:30:57 PM UTC, comment #11: 

>I at least would be interested in knowing where the stack spends the ~100 extra microseconds when using mboxes.

That's an extra 128 microseconds (0.128 ms) in total, or 87 for the first solution (replacing an mbox with a global mutex but still using netconn_sendto).

The current mbox method requires the TCPIP thread to take over and process the API message from netconn_send/sendto, calling udp_send/sendto. With a global mutex, I assume the application thread would directly call udp_sendto (possibly via netconn_sendto), but might be blocked while another thread owns the global mutex.

Either solution 1 or 2 will therefore be avoiding a context switch, but the quoted times will be best case results (with the global mutex available).

Do Frédéric's stats for the current CVS method include time required for context switches, i.e. total elapsed time, or is it just execution times for the application thread, without any overhead?

Even if it is just execution time for the application thread, there is quite a lot of overhead in using the mailboxes, notably the construction and posting of an API message, and several layers of function calls. These could be adding up to a significant chunk of that 87 microseconds, most of which is avoided by using a global mutex.

David Empson <dempson>
Mon 28 May 2007 08:40:29 AM UTC, comment #10: 


> Conclusion: starting from the current CVS code, it seems we can easily get a very significant performance gain (from 0.204ms to 0.076ms, a ratio of ~2.7).

Wow, that's impressive. It seems that, at least on your platform, using mboxes alone consumes about half of the time required to send a packet. Do you have the possibility to do an even more detailed analysis? I at least would be interested in knowing where the stack spends the ~100 extra microseconds when using mboxes.

It seems that just by replacing mboxes with a giant mutex one could get a nice performance gain (~2x) without making any extensive changes to the existing APIs.

> But there is a problem I see with the global mutex solution: if a task with a low priority locks the core to do some action (a send, for example), and other tasks in the system with a higher priority run and often preempt it, the overall performance of the stack can decrease (because the performance no longer depends only on the tcpip_thread priority, but on the time during which the core is locked).

But isn't this actually desirable in some situations? Now even if the low-priority task generates a lot of network traffic, it won't disturb higher-priority tasks when they don't use the stack. And with a mutex supporting priority inheritance, priority inversion isn't even a problem.

Atte Kojo <kojo>
Sat 26 May 2007 01:51:17 PM UTC, comment #9: 

Here are some statistics I got for the "redesign socket api" idea. I've attached some parts of the code I patched to test it, but it's not a patch file (just excerpts to show the differences).

The test is based on a simple "while(1) sendto( s, data, 1458, 0, (struct sockaddr *)&saRemote, sizeof(saRemote));" (using a performance-measurement tool for NXP, which is a little intrusive, so these stats are meant more to give you an idea of the gain)...
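
Roughly, the benchmark loop looks like this (a reconstruction only; the timing hooks are platform-specific and omitted):

#include <string.h>
#include "lwip/sockets.h"

static void
sendto_bench(struct sockaddr_in *saRemote)
{
  static char data[1458];
  int s = lwip_socket(AF_INET, SOCK_DGRAM, 0);

  memset(data, 0x55, sizeof(data));
  while (1) {
    /* measured section: from entry to return of lwip_sendto() */
    lwip_sendto(s, data, sizeof(data), 0,
                (struct sockaddr *)saRemote, sizeof(*saRemote));
  }
}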
 
CURRENT CVS:
Statistics: lwip_sendto takes ~0.204ms, of which the udp_sendto part is ~0.074ms.

SOLUTION 1: "always use netconn_sendto, but add a global mutex to replace the mbox"
Statistics: lwip_sendto takes ~0.117ms, of which the udp_sendto part is ~0.074ms.

SOLUTION 2: SOLUTION 1 patches + "directly use udp_sendto, and the global mutex"
Statistics: lwip_sendto takes ~0.076ms, of which the udp_sendto part is ~0.074ms.

Conclusion: starting from the current CVS code, it seems we can easily get a very significant performance gain (from 0.204ms to 0.076ms, a ratio of ~2.7). For socket layer users, we could also gain independence from netconn (as in SOLUTION 2): we could avoid using netbuf, avoid the api_msg preparation, and reduce the footprint.

But there is a problem I see with the global mutex solution: if a task with a low priority locks the core to do some action (a send, for example), and other tasks in the system with a higher priority run and often preempt it, the overall performance of the stack can decrease (because the performance no longer depends only on the tcpip_thread priority, but on the time during which the core is locked).

Comments? Atte, Simon, I would be interested in stats with such patches...





(file #12864)

Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 03:12:42 PM UTC, comment #8: 

About comment #6:

>Function-like macros could be used to control it so that we don't have a mess of ifdefs - depending on the user's chosen options, some will evaluate to nothing. So for example, on entry to an API function, something like LOCK_CONNECTION(conn) would be called, which in some configurations would actually lock the whole stack, not just the connection.


Jonathan, is the idea to avoid #if/#else/#endif? If so, can you give me your opinion on https://savannah.nongnu.org/patch/?5959, comment #2?

Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 01:52:41 PM UTC, comment #7: 


>I vote for re-implementing the sockets API from scratch directly on top of the raw API using mutexes for synchronization.


Me too. Only I think we should first shrink the list of bugs, patches and tasks. Then we can decide if the rewrite is going to be before or after the next release.

> Function-like macros could be used to control it so that we don't have a mess of ifdefs - depending on the user's chosen options, some will evaluate to nothing.


That's the way I'd do it. Only (as I said before) I would also integrate some of those macros into the core. That way, ARP doesn't get locked while TCP is still copying segments.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 24 May 2007 01:39:13 PM UTC, comment #6: 

Re comment #14: Speed versus footprint is usually a trade-off. But it's harder to shrink a complex implementation than it is to speed up a small implementation. So I suggest making footprint the first priority, with speed merely kept in mind, rather than the other way round.

I do sometimes wonder whether lwIP is being used by some people not because of the "lw" but because there just aren't many free, easy-to-port TCP/IP stacks out there.

On implementing this in terms of mutexes, I'll just draw attention to comment #22 of patch #5960.

I think it's worth mentioning that there could be different granularities of locking, which could be options:
1) Lock the whole stack
2) Lock each connection
3) Lock each relevant chunk of functionality.

Each one in turn would increase the footprint, but by having different granularity, overall performance with multiple threads could be improved. Only with 3) would you be able to do things like send and receive on the same socket from multiple threads. But many people don't need that, so 1 and 2 would reduce both complexity and footprint for those.

Function-like macros could be used to control it so that we don't have a mess of ifdefs - depending on the user's chosen options, some will evaluate to nothing. So for example, on entry to an API function, something like LOCK_CONNECTION(conn) would be called, which in some configurations would actually lock the whole stack, not just the connection.
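
A sketch of how such macros could be wired up (all names here are invented for illustration; none of them are existing lwIP symbols, and the per-connection semaphore is hypothetical):

#define LOCK_NONE            0
#define LOCK_WHOLE_STACK     1
#define LOCK_PER_CONNECTION  2

#ifndef LOCK_GRANULARITY
#define LOCK_GRANULARITY     LOCK_WHOLE_STACK
#endif

#if LOCK_GRANULARITY == LOCK_WHOLE_STACK
#define LOCK_CONNECTION(conn)     LOCK_TCPIP_CORE()
#define UNLOCK_CONNECTION(conn)   UNLOCK_TCPIP_CORE()
#elif LOCK_GRANULARITY == LOCK_PER_CONNECTION
#define LOCK_CONNECTION(conn)     sys_sem_wait((conn)->op_sem)   /* hypothetical per-netconn semaphore */
#define UNLOCK_CONNECTION(conn)   sys_sem_signal((conn)->op_sem)
#else
#define LOCK_CONNECTION(conn)     /* expands to nothing */
#define UNLOCK_CONNECTION(conn)
#endif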

Just some thoughts anyway.


Jonathan Larmour <jifl>
Group Member
Thu 24 May 2007 01:33:42 PM UTC, comment #5: 


>The suggestions you make seem to be along the lines of fixing up the existing sockets API.
> I agree speed is not the main goal for lwIP.


I agree with Kieran. Performance is not the primary goal of lwIP. Stability is, for me, the top priority.

Of course, if some small changes can improve performance (without affecting the stability of the stack), I agree with them. But I don't think that writing a new socket layer as stable as the current one will be so "easy"...

But OK, Atte and Simon (and others) seem ready to do it, and if I can help, I will do so with pleasure...

First of all, I think it would be good to define some "unit tests" to measure this performance gain. If it's just to say "yeah, it's better, I can feel it..."  ;) , without any statistics and measurements, I don't think that would be very useful. On the mailing list, we have talked about defining some tests to get an evaluation framework (with Thomas, if I remember correctly). I think that's the first thing to do: get some common tools or applications to measure performance and study behaviour, so we can compare the current API with the new/future one...

Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 01:14:12 PM UTC, comment #4: 

Actually, our number one, number two, and number three customer requirements are: 1. speed: it must be able to run near 100 Mbps; 2. processor loading: it must not use 100% of the CPU to do that; and 3. footprint: it needs to be small. We are likely going to be taking a very hard look at these three items in parallel with the R&D on the extended protocols I was talking about yesterday.

Regards,
Paul Decker
ADI

Paul Decker <pdecker>
Group Member
Thu 24 May 2007 01:02:24 PM UTC, comment #3: 

I vote for re-implementing the sockets API from scratch directly on top of the raw API using mutexes for synchronization.

Actually, what I'd really like is that Kieran wouldn't have to use phrases "lwIP is not thread-safe" and "to get good performance, you should use raw API" ever again on lwip-users ;-).

> I agree speed is not the main goal for lwIP.


Me too, but if everybody and their dog are running lwIP at wire-speeds using the raw API, getting good performance using a sequential API shouldn't be that difficult.

Maybe integrate this with tasks #6849 and #6863, and suddenly we might end up with the smallest, fastest and meanest embedded TCP/IP stack ever ;-D.

Atte Kojo <kojo>
Thu 24 May 2007 12:26:06 PM UTC, comment #2: 


> The suggestions you make seem to be along the lines of fixing up the existing sockets API.


Whether this gets fixed up in the current implementation or in a new one is a question to be asked, but it would work for both!

> (ii) and not require additional stuff inside the stack to support it on the raw API


While locking the whole stack core would work, I would (in a second stage) also implement more fine-grained locks (as defines, so raw-API-only builds would not require them) to let one thread run in ARP while another one puts together TCP segments.

> However, this is very low priority.


I agree. But talking about it doesn't hurt, and that way we get an impression of whether to 'repair' the old API or implement a new one.

> (i) do a better job of supporting the sockets API than we have already


I think in terms of speed and prioritization, we can. Otherwise, the sockets API seems kind of OK to me (if you don't count the multithreading issue from patch #5960).

I agree speed is not the main goal for lwIP. But what I want is not so much gaining wire-speed as spending as little time as I can on TCP/IP processing (because in our device, it's not the most important thing to do).

Simon Goldschmidt <goldsimon>
Group administrator
Thu 24 May 2007 12:17:21 PM UTC, comment #1: 

The suggestions you make seem to be along the lines of fixing up the existing sockets API.

There seems to be a lot of support at the moment for implementing a new sockets API on top of the raw API rather than the netconn API.  I think this is worthy of further investigation, and on the condition that we can (i) do a better job of supporting the sockets API than we have already; (ii) and not require additional stuff inside the stack to support it on the raw API; then I think it would be worth trying.  However, this is very low priority.

Note, I would still retain the netconn API even if the sockets API on top of it was removed.

Kieran Mansley <kieranm>
Group Member
Thu 24 May 2007 12:00:11 PM UTC, original submission:  

Note that this is marked as 'later'!

I'd like to have this task to collect all (design) problems (or change requests) with the current socket/sequential API so we have one place where we list them (instead of here and there in other threads).

I'll make a start with these:
- move from message passing to mutual exclusion (lock netconns)
- for send, we could lock the whole core instead of passing a message to another thread: that way, TX can be prioritized (if using a mutex). The runtime of locking a mutex should not be much longer than message passing (in fact, message passing always includes a task switch, while locking only sometimes does); see the sketch after this list.
- for RXed frames, we could also lock the whole core. That way, tcpip_thread would only have to process RX and timers. Also, incoming RX frames could be prioritized by handling low-priority frames (certain ports or protocols) in a lower-priority thread.
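
A minimal sketch of the TX idea (the function name is invented for illustration, and it assumes a LOCK_TCPIP_CORE()-style core mutex as discussed in the comments above):

err_t
netconn_send_locked(struct netconn *conn, struct netbuf *buf)
{
  err_t err;

  /* The application thread calls into the core directly instead of posting
   * a message to tcpip_thread; the mutex keeps the core single-threaded. */
  LOCK_TCPIP_CORE();
  err = udp_send(conn->pcb.udp, buf->p);
  UNLOCK_TCPIP_CORE();
  return err;
}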

Simon Goldschmidt <goldsimon>
Group administrator

 


Attached Files
file #12899:  API Layers measures.JPG added by fbernon (59KiB - image/jpeg)
file #12864:  API Layers measures.txt added by fbernon (4KiB - text/plain)

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by dempson (Posted a comment)
  • -email is unavailable- added by jifl (Posted a comment)
  • -email is unavailable- added by fbernon (Posted a comment)
  • -email is unavailable- added by pdecker (Posted a comment)
  • -email is unavailable- added by kieranm (Posted a comment)
  • -email is unavailable- added by goldsimon (Submitted the item)

     

    Latest changes:

    Date        Changed by   Updated Field   Previous Value => Replaced by
    2007-06-11  fbernon      Open/Closed     Open => Closed
    2007-05-29  fbernon      Attached File   - => Added API Layers measures.JPG, #12899
    2007-05-26  fbernon      Attached File   - => Added API Layers measures.txt, #12864
