lwIP - A Lightweight TCP/IP stack - Patches: patch #5960, Enable multithread send/recv operations on same socket on TCP netconns

 
 


patch #5960: Enable multithread send/recv operations on same socket on TCP netconns

Submitter:  Frédéric Bernon <fbernon>
Submitted:  Wed 23 May 2007 10:44:53 AM UTC
   
 
Category:         None        Priority:    5 - Normal
Status:           Wont Do     Privacy:     Public
Assigned to:      fbernon     Open/Closed: Closed
Planned Release:  None


Mon 30 Nov 2009 11:31:53 PM UTC, comment #41: 

Had you done this, you would have saved me a lot of time!

Let me add a vote for full duplex support... I assumed it would work and felt it was a bug when it didn't. (It seemed to work, but with a rare and difficult-to-diagnose problem. That made lwIP seem unstable.)

We have one thread waiting for data from a controller, plus an occasional need to notify the controller of an event. So with one thread blocked on netconn_recv(), how can we send on the same connection?

Mike <mikeredd>
Sat 09 Jun 2007 11:10:27 AM UTC, comment #40: 

Because the LWIP_TCPIP_CORE_LOCKING option is already a partial solution to this problem, and because it seems I'm the only active developer using full-duplex protocols, I'm closing this (but it should be one of the goals of a new socket layer)...
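For reference, here is a minimal sketch of what that option does (names follow the CVS-era code; the exact shape of tcpip_apimsg() may differ). Instead of posting the api_msg to the tcpip_thread mailbox and waiting, the caller takes one global stack mutex and runs the handler in its own context, so a reader and a writer on the same netconn only contend for that mutex:

#if LWIP_TCPIP_CORE_LOCKING
err_t
tcpip_apimsg(struct api_msg *apimsg)
{
  LOCK_TCPIP_CORE();                 /* take the global stack lock  */
  apimsg->function(&(apimsg->msg));  /* run the do_* handler inline */
  UNLOCK_TCPIP_CORE();
  return ERR_OK;
}
#endif /* LWIP_TCPIP_CORE_LOCKING */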

Frédéric Bernon <fbernon>
Group Member
Tue 29 May 2007 01:11:55 PM UTC, comment #39: 

About comment #36: of course I can (I have some other differences with the current CVS that I don't check in). But I don't share your view that this is "a partial solution to a rare problem": full-duplex protocols are pretty common (perhaps only in my domain of activity?), and I thought it could help some other users... But OK if you prefer not to check it in...

About comment #38, "Although at least conn->err is set in api_lib.c": I think that's true, but only until the next netconn_write call. To be sure, we could remove the "conn->err = ERR_OK;" in netconn_write and replace the "while" condition with:

"while (((conn->err == ERR_OK) || (conn->err == ERR_MEM)) && size > 0) {"

   

Frédéric Bernon <fbernon>
Group Member
Tue 29 May 2007 12:58:18 PM UTC, comment #38: 

OK, now I understand you ;-)
It's the other way round from what we have now: first wait, then signal. I don't know from memory, but as long as all the processing is done in api_msg.c and no values in struct netconn are set in api_lib.c, that should be fine. (Although at least conn->err is set in api_lib.c, so that would need to be investigated.)

Simon Goldschmidt <goldsimon>
Group administrator
Tue 29 May 2007 12:55:36 PM UTC, comment #37: 

Some answers are in comment #31:

>Where would you add the sys_sem_signal(apimsg->msg.conn->apisem)?

"in "tcpip.c", lock the connection to protect concurrent access in "tcpip_apimsg()", adding a sys_sem_wait(apimsg->msg.conn->apisem) and a sys_sem_signal(apimsg->msg.conn->apisem)."

>When/How would it be initialized?

In netconn_new_with_proto_and_callback(), if LWIP_API_MULTITHREAD_PROT is set to 1.

>Would it be one apisem for all netconns or one per thread? (if one per netconn, that would be the same as what we have now, wouldn't it?)

It's one per connection (netconn), but I don't understand "would be the same as what we have now". Which one? The "sys_sem_t sem;" in "struct netconn"? If it's that one, it is not used for that (I suppose it's mainly used in "write" and "close" to avoid consuming cycles polling a state).



Frédéric Bernon <fbernon>
Group Member
Tue 29 May 2007 12:49:45 PM UTC, comment #36: 

From the sound of it, it's a pretty small change. Would it be possible for you to keep this in your own tree for the time being? I know CVS doesn't make that easy, but this seems to be a partial solution to a rare problem, and so I'm not convinced it should be checked in yet.


Kieran Mansley <kieranm>
Group Member
Tue 29 May 2007 12:48:31 PM UTC, comment #35: 

- Where would you add the sys_sem_signal(apimsg->msg.conn->apisem)?
- When/How would it be initialized?
- Would it be one apisem for all netconns or one per thread? (if one per netconn, that would be the same as what we have now, wouldn't it?)

Simon Goldschmidt <goldsimon>
Group administrator
Tue 29 May 2007 12:36:07 PM UTC, comment #34: 

About comment #32: I think there is no risk with a patch like this, even for netconn_write (where the problem is more on the RX side, with "tcp_sndbuf", but here only one thread writes). The main problem is to avoid task1 fetching a msg posted in answer to an action from task2. Note that, unlike you said, it will not "protect the core stack", but only the connection (the apisem only exists in struct netconn).

Because I would like to support full-duplex protocols, and because the patch is small and can be disabled, I would like to integrate this feature...

About comment #33: even if I have provided some results about a possible new design for the sequential API in http://savannah.nongnu.org/task/?6935, I would first like to "terminate" the potential problems with the current one...

Agree to check in?

Frédéric Bernon <fbernon>
Group Member
Tue 29 May 2007 12:17:58 PM UTC, comment #33: 

Couldn't we leave that limitation (only use the sockets from one thread) in the current socket layer if we develop a new one?
Or do you really need that feature now, Frédéric?

Simon Goldschmidt <goldsimon>
Group administrator
Tue 29 May 2007 09:17:26 AM UTC, comment #32: 

Does this really solve the problem?  I can see that it is going to protect the core stack, but is there no state in the upper layers (netconn, sockets) that will also require protection?


Kieran Mansley <kieranm>
Group Member
Sat 26 May 2007 12:53:40 PM UTC, comment #31: 

So, to fix the problem of "one thread reads, another one writes on a TCP connection" and support full duplex on a TCP connection, I propose this solution:

- add a "sys_sem_t apisem;" to "struct netconn" in "api.h".
- in "tcpip.c", lock the connection to protect against concurrent access in "tcpip_apimsg()", adding a sys_sem_wait(apimsg->msg.conn->apisem) and a sys_sem_signal(apimsg->msg.conn->apisem).

This solves the problem. To avoid such protection when users don't need it, I propose adding a LWIP_API_MULTITHREAD_PROT option in opt.h.
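A minimal sketch of the proposal (the exact shape of tcpip_apimsg() in CVS may differ, and where the completion wait happens is elided):

/* api.h: one binary semaphore per netconn, created in
 * netconn_new_with_proto_and_callback() when the option is enabled. */
struct netconn {
  /* ... existing fields ... */
#if LWIP_API_MULTITHREAD_PROT
  sys_sem_t apisem;  /* serializes all API calls on this connection */
#endif
};

/* tcpip.c: bracket the whole message exchange with the lock. */
void
tcpip_apimsg(struct api_msg *apimsg)
{
  struct tcpip_msg msg;
#if LWIP_API_MULTITHREAD_PROT
  sys_sem_wait(apimsg->msg.conn->apisem);    /* lock this netconn */
#endif
  msg.type = TCPIP_MSG_API;
  msg.msg.apimsg = apimsg;
  sys_mbox_post(mbox, &msg);
  /* ... wait until tcpip_thread has processed the message ... */
#if LWIP_API_MULTITHREAD_PROT
  sys_sem_signal(apimsg->msg.conn->apisem);  /* unlock            */
#endif
}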

Do you agree?

Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 12:34:33 PM UTC, comment #30: 

I think this task should stay open for the "one thread reads, another one writes on a TCP connection" case. Just one example of that: a remote serial port (like Moxa products). And any other full-duplex protocol.

About task #1549, I prefer Kieran's title, "new higher performance Sockets API", and yes, it will perhaps duplicate some of the work. But, once again, given the time already spent improving the current architecture, it would be good to keep that for the next release, without making too many changes in tcpip_thread...

(I will also take measurements with Mumtaz to compare performance on the same target...)

Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 12:12:30 PM UTC, comment #29: 

I've already opened task #1549 (prio later) for that discussion.
Does that imply closing this one?

Simon Goldschmidt <goldsimon>
Group administrator
Thu 24 May 2007 12:09:13 PM UTC, comment #28: 

Lots of separate issues being discussed here, but here are my thoughts on the big ones:

 - the issue of having poor protection against the application using the same socket from multiple threads doesn't worry me greatly. I can see the situations where it is useful, but for most of these there are application-level solutions that the applications wanting this rare behaviour can use. I'm not against this protection going into lwIP, but we shouldn't have it at the top of our list. Of course, if someone like Frédéric finds it important, they are free to ignore me and spend as long as they like on it!

 - likewise for the "new higher performance Sockets API". lwIP's primary goal is not high performance. That's not to say we shouldn't move towards higher performance, or consider it when making changes, but it's lower priority than some of the other pending tasks. However, again, I'm not against it, and if someone wants to spend the time then I can't stop them! This should be discussed in a separate task from this one from now on, please.

Kieran Mansley <kieranm>
Group Member
Thu 24 May 2007 10:43:03 AM UTC, comment #27: 


>On UDP, it's not a problem; each thread gets different packets, but that's a usual way to do parallel processing.


With multicast UDP, each socket should get all the data! But that's a task for bug #2809

>I would like to see the current architecture fixed for this kind of problem,


Which problem, the recv/close or read/write?
If we invent a new socket layer, that would be twice the work...

> propose opening a new task named "New Socket API" where we could discuss what this new API would be (but after the next release?)


Agreed. But I think this is very independent of releases. It would be a little change to tcpip.c, plus simply a new socket.c (which I would keep alongside the current sockets.c until it is stable). Of course, not something we can integrate really easily in a next release...

Simon Goldschmidt <goldsimon>
Group administrator
Thu 24 May 2007 10:34:53 AM UTC, comment #26: 

About the recv/close scenario, I don't think it's really important; some OSes offer it, but most of the time only to shut down a running service.

About multiple recv on TCP, I agree; as I said in the initial submission, "it's not a usual case". On UDP, it's not a problem; each thread gets different packets, but that's a usual way to do parallel processing.

The main problem, from my point of view, is one thread reading a TCP socket while another one writes to it (for full-duplex protocols).

About blocking an application thread while an api_msg is processed, I think the best solution is not semaphores or mutexes but events (with the current architecture, of course). But sys_arch doesn't provide them...

About a new socket API, I would be really interested to know what performance we could get. But is it only for the socket layer (independent of netconn; poor netconn users :( ), or for the whole sequential API? Even so, interesting as that task is, I would like to see the current architecture fixed for this kind of problem, and I propose opening a new task named "New Socket API" where we could discuss what this new API would be (but after the next release?)...




Frédéric Bernon <fbernon>
Group Member
Thu 24 May 2007 07:57:03 AM UTC, comment #25: 


> As Frédéric said: one task receives, the other sends. Or one task is blocking in recv(), another can call close() to gracefully shut down an app. (I don't think that's standard, but it works on some OSes...).


Seems quite weird to me ;-). But could also be handled by a bit of message passing by the application so that there would only be one task poking the socket.

> I think the idea of having one mutex per netconn is a very good one! (Only we would have to have mutexes, but could eliminate mboxes).


Me too :-D. The mutexes aren't even mandatory. They can be replaced (with a few #defines) by binary semaphores on platforms that don't support them. Should still be better than the current implementation.

> Then again, maybe it's time to re-write the api layer? :-)


If I can convince my employer that this would be a good way to use my time, I'm so with you on this :). It seems very strange to me that currently, whenever somebody asks about lwIP performance, the standard answer is: use the raw API and write your own sequential layer on top of it. I mean, it doesn't make much sense that everybody writes their own high-performance sequential layer while the one included with lwIP sucks :(. Maybe make this a very low-priority task?

Atte Kojo <kojo>
Thu 24 May 2007 07:13:05 AM UTC, comment #24: 

re comment #23:

>The first case is where connection state is not sufficiently protected against concurrent access from a user thread and the tcpip thread.


That should be solved through the mboxes and the callbacks running in tcpip_thread context.

>The second case, which is the issue with the original patch, is how to protect a connection from concurrent access by several user threads. My first suggestion would be: don't do it. What even is the supposed result when multiple threads try to recv from the same socket?


As Frédéric said: one task receives, the other sends. Or one task is blocking in recv(), another can call close() to gracefully shut down an app. (I don't think that's standard, but it works on some OSes...).

re comment #22:

>In general condition variables and mutexes are better, as they allow priority inversion to be prevented (if the OS implements it). That's impossible with semaphores. So in general, they are better.


Agree!

> [1 mutex per conn]

That would help a lot, since it would eliminate many OS calls (apart from the mutexes). But here again, semaphores would be a bad idea; mutexes would be much better!

> [..] All this needs protecting from other threads. It's not only a case of which thread was intended to get the NULL to the conn->mbox.


You're right, but most of the problems you've stated are not really race conditions (at least on 32-bit systems; e.g. conn->pcb.tcp->state is only read) or can be solved somehow.

But don't get me wrong: You're right that with the current code, changing conn->mbox to a per-thread mbox (or sem) is not the only thing to do!

I think the idea of having one mutex per netconn is a very good one! (Only we would have to have mutexes, but could eliminate mboxes).

Then again, maybe it's time to re-write the api layer? :-)
I'd like to have mutexes protecting parts of the stack. That way one could prioritize RX-frames by processing them in different threads...

Simon Goldschmidt <goldsimon>
Group administrator
Thu 24 May 2007 06:24:07 AM UTC, comment #23: 

If I've understood correctly, there are actually two multithreading issues here.

The first case is where connection state is not sufficiently protected against concurrent access from a user thread and the tcpip thread. To protect against this I can come up with two different ideas. The first is to protect each connection using a per-connection mutex. The second is to protect all accesses to lwIP internal data structures using a traditional BSD-style giant (global) mutex.

The second case, which is the issue with the original patch, is how to protect a connection from concurrent access by several user threads. My first suggestion would be: don't do it. What even is the supposed result when multiple threads try to recv from the same socket? Everyone gets the same data? Each thread gets a chunk of data in round-robin fashion? Protecting against this is done similarly to the previous case: another per-connection mutex locking the connection over an API call, or a global mutex locking the whole netconn API during an API call.

Actually, using a giant mutex for lwIP would eliminate the need for API and callback messages completely, but probably isn't otherwise a very good idea ;-).

Atte Kojo <kojo>
Thu 24 May 2007 12:17:50 AM UTC, comment #22: 

Re comment #15:

In general condition variables and mutexes are better, as they allow priority inversion to be prevented (if the OS implements it). That's impossible with semaphores. So in general, they are better.

Anyway, what I was thinking about, and to be honest I was only intending to mention it in passing, is that you can remove a lot of the mbox-based message passing with the TCP/IP thread entirely if there are per-connection mutexes and condition variables. The mutexes are used to protect the connection, and that includes even from the TCP/IP thread. An extra requirement would be that the api_lib functions do not block when holding the mutex, except on the condition variable.

Then you do away with much of the message-passing machinery. Threads operate directly on shared data rather than on messages in a mailbox. To get data, you lock the mutex and just, e.g., take data out of a queue (no need for an mbox). The TCP/IP thread also locks the mutex when changing connection state, and the condition variable is used to wake up potentially waiting threads. You can't do this sort of thing easily with a semaphore, as you can't choose which thread you wake up with a semaphore. You can "broadcast" a condition variable though: every thread then wakes up and checks for the condition it was waiting for. Most times there will only be one waiter anyway, of course.
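To make that concrete, a purely illustrative sketch; lwIP's sys layer has neither mutexes nor condition variables at this point, so sys_mutex_t, sys_cond_t and the helpers below are assumptions, not existing API:

/* Shared state, everything guarded by one per-connection mutex. */
struct conn_shared {
  sys_mutex_t lock;     /* protects all connection state           */
  sys_cond_t  cond;     /* broadcast by tcpip_thread on any change */
  struct pbuf *recv_q;  /* in-place receive queue (one slot here)  */
};

/* Application side: no mbox, data is taken straight off shared state. */
struct pbuf *
conn_recv(struct conn_shared *c)
{
  struct pbuf *p;
  sys_mutex_lock(&c->lock);
  while (c->recv_q == NULL) {          /* re-check after each wakeup  */
    sys_cond_wait(&c->cond, &c->lock); /* atomically unlocks + sleeps */
  }
  p = c->recv_q;
  c->recv_q = NULL;
  sys_mutex_unlock(&c->lock);
  return p;
}

/* tcpip_thread side: publish data and wake any waiting threads. */
void
conn_deliver(struct conn_shared *c, struct pbuf *p)
{
  sys_mutex_lock(&c->lock);
  c->recv_q = p;
  sys_cond_broadcast(&c->cond);  /* each waiter re-checks its predicate */
  sys_mutex_unlock(&c->lock);
}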

But I don't think I can seriously propose this now really - it's pretty much a complete rewrite. It would be nice though!

> I see the semaphore as a global event -> tcpip_thread has
> finished processing my request. And for that, you only need one
> semaphore per thread.


That's not how I see it. I see it as "protecting" the connection. More happens than just the tcpip_apimsg call, and its NULL response. Most of the netconn functions are like this: to protect from multiple threads, you need more.
e.g. netconn_recv also has race conditions on changes to conn->err, conn->pcb.tcp->state, conn->recv_avail, and conn->recvmbox (if the connection gets closed). All this needs protecting from other threads. It's not only a case of which thread was intended to get the NULL to the conn->mbox.

Jonathan Larmour <jifl>
Group Member
Wed 23 May 2007 08:07:24 PM UTC, comment #21: 

Misusing a field? ;) Worth studying, but if we replace the "sys_mbox_t mbox;" with a "sys_sem_t* apisem", the size won't change (compared to the current code, of course).

But to be honest, I think that adding this field is the only "clean" thing to do; there are, however, several fields which could be optional: conn->socket, conn->callback, conn->recv_avail. Most of them seem to be used for "select" only...

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 07:53:49 PM UTC, comment #20: 

Ah, OK, you're right, I missed that. Can't we somehow reuse the acceptmbox for that? That way, the struct netconn wouldn't grow, and accept and connect can't be called on the same netconn anyway.

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 06:29:31 PM UTC, comment #19: 

Hmm, I think I have understood, but there is still the problem with "connect". For all the others, you could have something like:

do_xxx(msg)
{ /* todo */
  /* ...  */
  sys_sem_signal(msg->sem);
}

But for connect, you leave the context of the "msg" without having any result (tcpip_thread doesn't block until the connection succeeds or fails, of course :) ).

It is in a later "loop" of tcpip_thread that the final result will be signalled, by err_tcp or do_connected.

The problem is knowing which "msg->sem" to signal in these two functions.
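One hypothetical way out (all names here are assumptions): remember the caller's semaphore in the netconn while the connect is in flight, so both callbacks know what to signal:

/* struct netconn grows a "sys_sem_t *connect_sem;", set by
 * do_connect() before posting, cleared when the result arrives. */
static err_t
do_connected(void *arg, struct tcp_pcb *pcb, err_t err)
{
  struct netconn *conn = (struct netconn *)arg;
  LWIP_UNUSED_ARG(pcb);
  conn->err = err;
  if (conn->connect_sem != NULL) {       /* a connect is pending     */
    sys_sem_signal(*conn->connect_sem);  /* wake the connecting task */
    conn->connect_sem = NULL;
  }
  return ERR_OK;
}

(err_tcp would do the same.)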

Does that sound right to you?


Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 06:21:23 PM UTC, comment #18: 


>It could be good, but there is ONE case where it's necessary to keep it inside the "netconn" struct: connect!!!


I think you got me wrong. I meant to include a sys_sem_t field in struct api_msg_msg so that all the do_* functions don't post to conn->sem (as they do to conn->mbox now) but post to msg->sem instead (same for the pend in tcpip_apimsg()). That way, you are free to set this sem to conn->sem or to use a per-thread sem, and api_lib.c is the only file where the choice is made (tcpip.c and api_msg.c would always use this msg.sem).
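Roughly like this (field names assumed, "sem" standing in for whatever it would be called):

/* api.h: struct api_msg_msg carries the completion semaphore. */
struct api_msg_msg {
  struct netconn *conn;
  sys_sem_t sem;  /* chosen by api_lib.c: per-conn or per-thread */
  /* ... union of per-call arguments ... */
};

/* api_msg.c: every do_* handler then signals the same way. */
static void
do_send(struct api_msg_msg *msg)
{
  /* ... perform the operation on msg->conn ... */
  sys_sem_signal(msg->sem);  /* wakes exactly the thread that posted */
}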

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 04:22:11 PM UTC, comment #17: 


>If we integrate the semaphore to use in one of the messages,...

It could be good, but there is ONE case where it's necessary to keep it inside the "netconn" struct: connect!!!

Because in this special case, the final result handlers (do_connected or err_tcp) don't have any trace of the "original" api_msg.

That's why, as a first step, I was thinking of just doing a simple type change (mbox -> sem). But maybe you have a good idea for that (adding a "struct api_msg*" to the netconn struct, just for "connect"?)...

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 04:11:26 PM UTC, comment #16: 


>As a first step, I propose to replace conn->mbox with a conn->apisem. Is that good for you?


Yes. If we integrate the semaphore to use in one of the messages, we can make tcpip.c and api_msg.c independent of the type of semaphore (per-conn vs. per-thread) and use one or the other depending on a configuration switch. That way, systems without per-thread storage can use the old mechanism, which is faster but lacks "multithread send/recv operations on same socket on TCP netconns". (I would make per-thread the default to keep users from being confused!)

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 02:48:24 PM UTC, comment #15: 


> With a mutex and condition variable instead of a semaphore, you can interlock more nicely with the tcpip thread.


You can do that with a semaphore used as a binary semaphore, can't you? Why the condition variable and mutex?

> That's hard now without things like SYS_ARCH_PROTECT (which just acts like a really large mutex). These would be per-connection, not per-thread though. But it would more nicely solve the send/recv safety issue.


But per-connection would mean the mutex is used by two threads again; how would you solve that?

I see the semaphore as a global event -> tcpip_thread has finished processing my request. And for that, you only need one semaphore per thread.

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 02:47:05 PM UTC, comment #14: 

prio can be used as an ID if you never have two tasks with the same prio value. In my OS, sys_get_current_task() returns a value in [0...MAX_TASK-1], and that's good enough for me...

But I think this is a little bit outside the scope of this task (though it's important, because we want to reduce port breakage after the next release)...

As a first step, I propose to replace conn->mbox with a conn->apisem. Is that good for you?

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 02:43:18 PM UTC, comment #13: 

Re timeouts: sort of, yes. But it's a pain if you don't have real per-thread data. Instead you have to have a table with entries for each thread, and look it up each time (and this is even worse if threads can be deleted). Anyway, this is a side issue - more important is whether a per-thread sem can even work anyway.

With a mutex and condition variable instead of a semaphore, you can interlock more nicely with the tcpip thread. That's hard now without things like SYS_ARCH_PROTECT (which just acts like a really large mutex). These would be per-connection, not per-thread though. But it would more nicely solve the send/recv safety issue.


Jonathan Larmour <jifl>
Group Member
Wed 23 May 2007 02:40:41 PM UTC, comment #12: 


>return &(tasks_sys_timeouts[sys_get_current_task()]);


OK, but on systems like Windows, sys_get_current_task() would return a DWORD for the ID or a void* for a HANDLE, and you wouldn't want to implement an array for that. On small embedded OSes you'd normally use the prio to index the array and have a MAX_PRIO. But on systems using round-robin, where tasks can have the same prio, the prio is not enough again...

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 02:36:33 PM UTC, comment #11: 


>Select doesn't use timeouts


For "select", it's an indirect call: lwip_select calls sys_sem_wait_timeout which calls sys_timeout.

>a workaround that can be used for systems not supporting per-thread storage, but is (almost for sure) slower than direct per-thread storage


I think "per-thread storage" is OS-dependent, but I'm sure this code is pretty efficient:

struct sys_timeouts *
sys_arch_timeouts(void)
{
  return &(tasks_sys_timeouts[sys_get_current_task()]);
}

But it reserves memory for each task in the system (one pointer per task; I have at most 20 tasks, so 80 bytes on a 32-bit architecture).


Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 02:24:00 PM UTC, comment #10: 


>Note that, for now, timeouts are mainly used by tcpip_thread (the exceptions are select & PPP)...


Select doesn't use timeouts, only tcpip_thread and PPP.
But if another thread (an API thread, before your last check-in changing sys_mbox_fetch to sys_arch_mbox_fetch) waits while parsing timeouts and walks the timeout list of tcpip_thread, the core is very likely to crash! Therefore, for the current implementation, per-thread storage is necessary.

What you do (a static table indexed by task id) is a workaround that can be used for systems not supporting per-thread storage, but is (almost for sure) slower than direct per-thread storage. I think we should keep the speed of that in mind while re-designing!

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 02:14:35 PM UTC, comment #9: 


>Yes, eCos for example has this issue - mboxes are a fixed size. Even for variable size queues, ultimately it can fail - memory has to come from somewhere!


Good to know, I will use it in a future project.

>not all OSes have per-thread data


Mine doesn't, but I have memory to spare, so I have a static table indexed by task id. I will look at how the eCos port implements that. Note that, for now, timeouts are mainly used by tcpip_thread (the exceptions are select & PPP)...

>More importantly what do you do when threads go away? lwIP may not be told.


Yes, that's true. In my project, the thread configuration is pretty static, but that's not the case for everyone...




Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 02:06:31 PM UTC, comment #8: 


> not all OSes have per-thread data


The current timeouts / sys_arch_timeouts() implementation relies on per-thread data... if that doesn't work, timers can be called from a wrong thread-context...

Creating/destroying every time makes the interface very slow!

> If we had mutexes and condition variables, it might be different though!


What do you mean by that?

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 02:01:13 PM UTC, comment #7: 

Yes, eCos for example has this issue - mboxes are a fixed size. Even for variable size queues, ultimately it can fail - memory has to come from somewhere!

Therefore it seems a return value for sys_mbox_post would be good. In another task, as Frédéric says, I guess.

As for the original suggestion of one sem per thread.... that might be hard to do - not all OSes have per-thread data. I guess for those systems it might not be so bad to create/destroy the semaphore each time (depending on a config option).

More importantly what do you do when threads go away? lwIP may not be told. Maybe creating/destroying the semaphore each time is the only solution if you want it to be per-thread.

If we had mutexes and condition variables, it might be different though!


Jonathan Larmour <jifl>
Group Member
Wed 23 May 2007 01:33:22 PM UTC, comment #6: 

Yes, it's something to do (once we use a semaphore instead of conn->mbox), but in another task, I think...

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 01:24:21 PM UTC, comment #5: 

Introducing a return value to show that the message wasn't posted would prevent tcpip_thread from blocking; the datagram would simply not be received.
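As a sketch (sys_mbox_trypost is an assumed name for such a non-blocking post; it doesn't exist at this point):

/* sys.h: posting returns an error instead of ever blocking. */
err_t sys_mbox_trypost(sys_mbox_t mbox, void *msg);  /* ERR_MEM if full */

/* e.g. in the UDP receive callback run by tcpip_thread: */
if (sys_mbox_trypost(conn->recvmbox, buf) != ERR_OK) {
  netbuf_delete(buf);  /* queue full: drop the datagram, don't block */
}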

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 01:19:52 PM UTC, comment #4: 

Yes, an ASSERT can be used to signal that the chosen size is too small and that we have filled the queue, but that's only useful at development time. Take the current code and this situation:

You open a UDP connection and bind it to a port (514, syslog, for example). If the process that has to receive doesn't call "recvfrom" as fast as packets arrive (for example, during a DoS attack), the conn->recvmbox for this connection will fill up, and tcpip_thread will block. If tcpip_thread blocks, there is no way to close/delete the UDP connection, for example, or to "sendto" any datagram.
 



Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 12:57:26 PM UTC, comment #3: 


>If you have a "variable size queue", and if you don't tune your sys_new_queue with a good size, you can get a deadlock situation and block tcpip_thread...


For that case you should have an ASSERT. Maybe we should also give sys_mbox_post a return value so that ASSERTs can safely be turned off in RELEASE mode. (If we use a semaphore instead of conn->mbox, we should give sys_sem_signal a return value, too!)

But I think the idea of tuning the mbox sizes is a good one!

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 12:39:56 PM UTC, comment #2: 


>And that would also significantly reduce mboxes needed to run lwIP. Currently we need one conn->mbox per netconn. That would be reduced to one mbox (or semaphore?) per thread since one thread can only be executing one netconn function at a time.


Yes, I think it would be better with a semaphore, since the "mbox" is in fact used like one (before the "err_tcp" bug was fixed, that could mask the problem).

Another thing about mailboxes: sys_mbox_new doesn't take any "size" parameter, and lwIP presumes (I don't think it's written anywhere, but it's true) that you never block on a sys_mbox_post. Some OSes implement "variable size queues", and there's no problem with that. But a lot of them use "fixed size queues", so with lwIP all queues must have the same size. Now, there are mainly three "real" mboxes (if I don't consider "conn->mbox"):

- the tcpip_thread mailbox, which has to receive all API messages, all input packets, and some callbacks;
- conn->acceptmbox for listening connections (which should be limited to the "backlog" size, but netconn_listen doesn't have such a parameter);
- conn->recvmbox for received netbufs (datagrams) and pbufs (TCP segments).

If you have a "variable size queue", and if you don't tune your sys_new_queue with a good size, you can get a deadlock situation and block tcpip_thread...





Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 11:13:20 AM UTC, comment #1: 


>Doing something clean would require having a semaphore "per thread" to set in the api_msg struct...


And that would also significantly reduce the number of mboxes needed to run lwIP. Currently we need one conn->mbox per netconn. That would be reduced to one mbox (or semaphore?) per thread, since one thread can only be executing one netconn function at a time.

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 10:44:53 AM UTC, original submission:  

Currently, with UDP connections, you can sendto and recvfrom datagrams from two threads (one reading, the other writing). You can't do two reads at the same time, but that's not a usual case. You could do two sends at the same time, but there is a problem with the current tcpip_thread API, because tcpip_thread can't know which task will fetch the message (and be woken up). Moreover, with TCP, it's not possible to have one thread reading and another writing (to implement full-duplex protocols)...

Doing something clean would require having a semaphore "per thread" to set in the api_msg struct...
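A rough sketch of that idea (sys_arch_thread_sem() is an assumed per-thread accessor, mirroring sys_arch_timeouts(); the api_msg "sem" field is assumed too):

sys_sem_t *sys_arch_thread_sem(void);  /* one semaphore per thread */

/* api_lib.c: every blocking netconn call would follow this pattern. */
static err_t
netconn_do_op(struct netconn *conn, struct api_msg *msg)
{
  msg->msg.conn = conn;
  msg->msg.sem = *sys_arch_thread_sem();  /* this thread's own sem    */
  tcpip_apimsg(msg);                      /* post to tcpip_thread ... */
  sys_sem_wait(msg->msg.sem);             /* ... wait for completion  */
  return conn->err;
}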

See https://savannah.nongnu.org/task/?6683, comment #3, comment #4, and comment #5...

Frédéric Bernon <fbernon>
Group Member

 


No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by mikeredd (Posted a comment)
  • -email is unavailable- added by kieranm (Posted a comment)
  • -email is unavailable- added by jifl (Posted a comment)
  • -email is unavailable- added by goldsimon (Posted a comment)
  • -email is unavailable- added by fbernon (Submitted the item)


     

    Follow 3 latest changes.

    Date        Changed by  Updated Field  Previous Value => Replaced by
    2007-06-09  fbernon     Status         None           => Wont Do
    2007-06-09  fbernon     Assigned to    None           => fbernon
    2007-06-09  fbernon     Open/Closed    Open           => Closed
