lwIP - A Lightweight TCP/IP stack - Bugs: bug #19157, lwip_close problems

 
 


bug #19157: lwip_close problems

Submitter:       Frédéric Bernon <fbernon>
Submitted:       Mon 26 Feb 2007 08:40:01 PM UTC
Category:        sockets/netconn
Severity:        3 - Normal
Item Group:      Faulty Behaviour
Status:          In Progress
Privacy:         Public
Assigned to:     fbernon
Open/Closed:     Closed
Planned Release: None
lwIP version:    None


Sat 09 Jun 2007 10:17:20 AM UTC, comment #31: 

Because I couldn't reproduce the problem during my last tests (perhaps it was fixed by another modification?), I'm closing this (if I reproduce it in later tests, I can reopen it)...

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 11:16:22 AM UTC, comment #30: 

Well, we have determined that waiting in close is wrong, that's for sure.

So Frederic, is the problem really that the mechanism by which lwIP is meant to send unacked data and FINs until the connection is closed, and doing so from timers, is not working correctly for you?

Jonathan Larmour <jifl>
Group Member
Wed 23 May 2007 10:41:55 AM UTC, comment #29: 

Not a patch file, but look at the first comment and comment #5... I do something a little bit different now, but that's the idea. I just don't like using sys_sleep...

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 10:39:30 AM UTC, comment #28: 

Your patch? Sorry, must have missed that.

Simon Goldschmidt <goldsimon>
Group administrator
Wed 23 May 2007 10:01:02 AM UTC, comment #27: 

Not yet, but it's true that I haven't taken the time to show, with traces and other evidence, that the problem is real. I will try to find time for that because, even if my patch works for me, several users seem to have hit the same problem.

For SO_LINGER, we will continue in task #6930.

Frédéric Bernon <fbernon>
Group Member
Wed 23 May 2007 09:52:34 AM UTC, comment #26: 

I've opened a task (#6930) for implementing SO_LINGER. Can we close this now?

Simon Goldschmidt <goldsimon>
Group administrator
Tue 27 Mar 2007 03:02:40 PM UTC, comment #25: 

If one day we integrate SO_LINGER, I think adding SO_DONTLINGER could be done at the same time: first, because it is already defined in sockets.h (for source-code compatibility, I think); second, because there would not be a lot of work to do. It could be useful to someone who activates SO_LINGER on a socket and, because of his application or protocol, needs to go back to SO_DONTLINGER.

But this is not the first problem. Before anything else, I have to provide information about the problem I see, to be sure that the current close really does the job it should (and it seems I am "alone" with this problem)...

Frédéric Bernon <fbernon>
Group Member
Tue 27 Mar 2007 02:33:13 PM UTC, comment #24: 

SO_DONTLINGER is a non-standard option. I would be against its provision in lwIP on that basis (why support a non-standard bell and whistle when we don't support all standard ones!).

SO_DONTLINGER corresponds to what the standards say is the default state. Windows may have the option, but only as a way of returning to that state. From Per's comment #20, if it's true that SO_LINGER defaults to on there, then that is non-standard (Microsoft being non-standard? Surely not!). The standard is clear on correct behaviour for a BSD sockets API. That would explain the need for SO_DONTLINGER on Windows.


Jonathan Larmour <jifl>
Group Member
Tue 27 Mar 2007 01:13:01 PM UTC, comment #23: 

Ok, I will try to provide a trace + a network capture to show the problem...

About SO_LINGER/SO_DONTLINGER, I would like to be sure what the official BSD behavior is, and how current lwIP should work (or does work):

1/ close with SO_DONTLINGER: the calling thread is not blocked, and the end of transmission is done in the background.

2/ close with SO_LINGER with a 0 interval: the calling thread is not blocked, but pending data is discarded.

3/ close with SO_LINGER with an interval > 0: the calling thread is blocked for at most "interval", or until the end of transmission.

So, if I understand what you say, lwIP behavior should be 1/, right?
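
For reference, the three cases map onto the standard BSD sockets API roughly like this (a minimal sketch; "s" is assumed to be a connected TCP socket, and error checking is omitted):

#include <sys/socket.h>

struct linger li;

/* 1/ default (SO_DONTLINGER): close() returns immediately and the
   remaining data is transmitted in the background */
li.l_onoff  = 0;
li.l_linger = 0;
setsockopt(s, SOL_SOCKET, SO_LINGER, &li, sizeof(li));

/* 2/ SO_LINGER with a 0 interval: close() returns immediately and
   pending data is discarded (abortive close) */
li.l_onoff  = 1;
li.l_linger = 0;
setsockopt(s, SOL_SOCKET, SO_LINGER, &li, sizeof(li));

/* 3/ SO_LINGER with an interval > 0: close() blocks for up to
   l_linger seconds, or until all data is transmitted and acked */
li.l_onoff  = 1;
li.l_linger = 5;
setsockopt(s, SOL_SOCKET, SO_LINGER, &li, sizeof(li));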

Frédéric Bernon <fbernon>
Group Member
Tue 27 Mar 2007 10:33:14 AM UTC, comment #22: 

A detailed description of an error situation (maybe including a packet log) would be helpful in deciding whether this is a real problem or not.

Simon Goldschmidt <goldsimon>
Group administrator
Tue 27 Mar 2007 07:53:51 AM UTC, comment #21: 

We seem to be tracking two issues on this bug report:
 - the need for SO_LINGER; this should be moved to a new bug/task
 - an alleged problem with unreliable data transfer when closing sockets. This worries me as, from looking at the code, it should all work fine at the moment (as described by Jonathan in comment #19). I'd really like to understand whether this is a real problem or just a misunderstanding of what should happen.

Kieran Mansley <kieranm>
Group Member
Mon 26 Mar 2007 02:53:17 PM UTC, comment #20: 

Hi,

I'm using lwIP in an embedded device (which uses the raw API).

I've used the SO_LINGER option on the "Windows" side when communicating with my devices.
When one sends a large chunk of data (using TCP) and the send() command fails due to a timeout (slow network or some other problem), one must close the socket without lingering.
Otherwise the Win32 TCP/IP stack will complete the transmission.

Normally, when a send() command fails, some upper layer in the application performs retransmissions. Therefore it's important that a transmission that failed due to a timeout isn't completed by the TCP/IP stack itself.

In Windows, SO_LINGER==1 is the default when the option isn't set/specified.

Regards
/Per/

Per Strandh <strandh>
Mon 26 Mar 2007 02:27:12 PM UTC, comment #19: 

Okay, I think we need to be clear here... my understanding of the current state is that do_delconn() will call tcp_close(). tcp_close() will send a FIN and move the connection to FIN_WAIT_1. That FIN is also added to the unacked queue, and will be retransmitted as needed in tcp_slowtmr().

So the data transfer should be reliable. The only exception is if there is an error, in which case that error should be passed right back up to the user.

But in the normal case, there isn't an error, all data should be reliably sent, and the stack waits for an ACK. All this happens in the "background" as far as the application is concerned - control returns to the application, possibly before all data has actually been sent. But it will be sent.

This is all correct behaviour.
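
For illustration, the same mechanism seen from the raw API: tcp_close() queues the FIN and returns, and the stack's timers drive retransmission from then on. A minimal sketch, assuming the raw API; close_conn and close_poll_cb are illustrative names, not existing lwIP functions:

#include "lwip/tcp.h"

/* Retry the close from the poll timer if it failed earlier. */
static err_t
close_poll_cb(void *arg, struct tcp_pcb *pcb)
{
  if (tcp_close(pcb) == ERR_OK) {
    tcp_poll(pcb, NULL, 0); /* closed successfully: stop polling */
  }
  return ERR_OK;
}

static void
close_conn(struct tcp_pcb *pcb)
{
  if (tcp_close(pcb) != ERR_OK) { /* queues a FIN; pcb goes to FIN_WAIT_1 */
    /* Out of memory: the FIN could not be queued. Retry later from the
       poll callback instead of aborting the connection at once. */
    tcp_poll(pcb, close_poll_cb, 2);
  }
  /* Once tcp_close() succeeds, control is back with the application;
     unacked data and the FIN are retransmitted from tcp_slowtmr(). */
}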

Implementing SO_LINGER is a different issue. The default is indeed disabled (non-blocking). I personally don't see a need in lwIP for SO_DONTLINGER.

Jonathan Larmour <jifl>
Group Member
Mon 26 Mar 2007 02:26:28 PM UTC, comment #18: 

Adding SO_LINGER (with default off) is ok for me. That way, unsent data is purged on close by default, which seems to be what opengroup.org wants to say. I would stick to this standard instead of Microsoft's, if only to be portable...

Simon Goldschmidt <goldsimon>
Group administrator
Mon 26 Mar 2007 02:04:36 PM UTC, comment #17: 

Ok, implementing SO_LINGER will give us the possibility to choose. I suppose the default value is disabled ("non-blocking on close").

So, what is for you a good (and simple) way to poll the sent & unacked segment lists?

Last, what do you think about SO_DONTLINGER (which seems to be Kieran & Simon's idea)? It doesn't seem to be BSD, but it is useful...

http://msdn2.microsoft.com/en-us/library/ms737582.aspx

Frédéric Bernon <fbernon>
Group Member
Mon 26 Mar 2007 01:49:22 PM UTC, comment #16: 

Just an information point to confirm what was said earlier. According to http://www.opengroup.org/onlinepubs/000095399/functions/close.html:

"If fildes refers to a socket, close() shall cause the socket to be destroyed. If the socket is in connection-mode, and the SO_LINGER option is set for the socket with non-zero linger time, and the socket has untransmitted data, then close() shall block for up to the current linger interval until all data is transmitted."

and for the avoidance of doubt, http://www.opengroup.org/onlinepubs/009695399/functions/setsockopt.html
describes SO_LINGER which says the default is "the system handles the call in a way that allows the calling thread to continue as quickly as possible".

So the data has to be reliably sent, but if possible, the application should not block.

Jonathan Larmour <jifl>
Group Member
Mon 26 Mar 2007 01:22:21 PM UTC, comment #15: 

Yes, you're right, netconn_delete does part of what netconn_close does (except that netconn_close retries the send if there is an ERR_MEM error, but netconn_close is not called in the socket layer).

The problem is when your last sent segments are not received by the peer (on a WAN connection, for example, where some packets can be lost). do_close and do_delconn will "push out" the last segments, but don't check whether they are acknowledged.

Without such a patch, I sometimes lose the end of a transfer, and the client doesn't receive the end of the document...

But as I said in comment #9, if you think this is not a problem, we can close this item...

Frédéric Bernon <fbernon>
Group Member
Mon 26 Mar 2007 01:14:01 PM UTC, comment #14: 

Hold on a minute: from looking at the code, isn't that what happens already? So why is there a problem? do_delconn(), among other things, simply calls tcp_close(), which should do an orderly close and send pending packets, and will only abort the connection if that returns an error. The only case I can see for this happening is if we're out of memory or the send queue is full, as then we fail to add the FIN, return an error, and abort the connection. Is this the problem Frédéric is seeing?

Kieran Mansley <kieranm>
Group Member
Mon 26 Mar 2007 12:55:57 PM UTC, comment #13: 

Hmm (as you say), ok, the idea is good, but I'm not sure I'd be able to write something like that. Kieran or Simon, would one of you take this item and propose a patch based on your idea?

Frédéric Bernon <fbernon>
Group Member
Mon 26 Mar 2007 12:43:35 PM UTC, comment #12: 

Hmm, yes, I suppose so. It should already get recycled once it gets to CLOSED, meaning we'd need no extra mechanism to do that, and so the only modification would be to not delete the pcb at lwip_close() time if there are outstanding packets.

Any problem with that?

Kieran Mansley <kieranm>
Group Member
Mon 26 Mar 2007 12:38:27 PM UTC, comment #11: 

Can't the netconn be deleted while the tcp_pcb lives on (e.g. in a closing-list which is handled by timers)?

Simon Goldschmidt <goldsimon>
Group administrator
Mon 26 Mar 2007 12:33:06 PM UTC, comment #10: 

OK, thanks. I understand it better now.

lwIP should guarantee reliable transmission of the data, but I don't think it needs to block the application to do this.  It can send the remaining packets from timers for example.

The problem is that lwip_close() is calling netconn_delete(). What we really want is for the connection to delete itself once it is finished, i.e. once it has reached the CLOSED TCP state (for TCP connections).

Perhaps a possibility would be for lwip_close() to either:
 i) do netconn_delete if the connection is already finished
 ii) or mark the connection as "needs delete" and then do the netconn_delete() once it has reached the CLOSED state. 

Not sure how this would work in practice though, particularly without adding special-case code into the core to support it. Perhaps we could have a "connection is closed" callback that would be set by lwip_close() in case (ii) above, and this callback would then do the delete? (A rough sketch of that idea follows.)
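
Something like this hypothetical sketch, perhaps (NETCONN_FLAG_NEEDS_DELETE, flags, closed_cb and conn_is_finished are invented names for illustration; none of them exist in lwIP today):

#define NETCONN_FLAG_NEEDS_DELETE 0x01 /* hypothetical flag */

/* Hypothetical callback, invoked by the core once the pcb reaches CLOSED. */
static void
conn_closed_cb(struct netconn *conn)
{
  if (conn->flags & NETCONN_FLAG_NEEDS_DELETE) {
    netconn_delete(conn); /* case (ii): deferred delete */
  }
}

int
lwip_close(int s)
{
  struct lwip_socket *sock = get_socket(s);
  if (!sock) {
    set_errno(EBADF);
    return -1;
  }
  if (conn_is_finished(sock->conn)) { /* hypothetical: CLOSED state reached? */
    netconn_delete(sock->conn);       /* case (i): delete immediately */
  } else {
    sock->conn->flags |= NETCONN_FLAG_NEEDS_DELETE;
    sock->conn->closed_cb = conn_closed_cb; /* delete later, from the core */
  }
  /* ...lastdata/errno bookkeeping as in the existing lwip_close()... */
  return 0;
}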

Any other ideas/thoughts?

Kieran Mansley <kieranm>
Group Member
Mon 26 Mar 2007 12:23:11 PM UTC, comment #9: 

Ok, the "classic" example is a - simple - web server, which only return one document per connection.

The algorithm is

WEBSERVER::accept HTTP connection()
WEBSERVER::recv headers()
WEBSERVER::recv data()
WEBSERVER::send headers()
WEBSERVER::send data()
WEBSERVER::close socket()
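
At the sockets level this is simply the following (a minimal sketch; "srv" is an assumed listening socket, the buffers are illustrative, and error handling is omitted):

#include <string.h>
#include "lwip/sockets.h"

static void
serve_one_document(int srv)
{
  char req[1024];
  const char *hdr = "HTTP/1.0 200 OK\r\n\r\n";
  const char *doc = "<html>...</html>";
  int c;

  c = lwip_accept(srv, NULL, NULL);  /* accept HTTP connection */
  lwip_recv(c, req, sizeof(req), 0); /* recv headers + data */
  lwip_send(c, hdr, strlen(hdr), 0); /* send headers */
  lwip_send(c, doc, strlen(doc), 0); /* send data */
  lwip_close(c);                     /* close socket: today this may delete
                                        the pcb while "doc" is still on the
                                        unsent/unacked queues */
}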

The problem is when you close the socket just after a send(), at the socket layer. In the current socket implementation, lwip_close just deletes the connection: it calls netconn_delete() in api_lib.c, which calls do_delconn() in api_msg.c. In the latter, we try to send the last data with a tcp_output() (it is done at the end of tcp_close, or in tcp_abort() if tcp_close "failed"). There can be several reasons why this last data is not received by the peer, and, because we don't wait for the acknowledgment, the last "writes" can be lost (so the TCP connection can't be called "reliable").

I think that most implementations have to guarantee that any write is successfully received, or that you can detect an error. I can only suppose they "block" until an error occurs or until all data is sent & acknowledged. Perhaps some implementations do this in an asynchronous way, but with lwIP that's not a simple thing to change...

Most of the time, this blocking is very short. And it seems to be an old problem which was never really solved. We can find it in bug #15926, and again this morning in the "local connection failed on loopif, race?" thread.

A first and simple thing to do is perhaps to call netconn_close in lwip_close (it also forces a tcp_output, and checks that there is no ERR_MEM error). But even with that, we can't be sure that the peer received the data if we don't check the pcb's unacked list.

I'm surprised we don't get more problems with this in the socket layer. So perhaps it's something in my port, but I don't see what in lwIP can guarantee reception...

But if you don't think it's a problem, ok, we can close this item...

Frédéric Bernon <fbernon>
Group Member
Mon 26 Mar 2007 11:43:32 AM UTC, comment #8: 

I agree that this doesn't seem like a good solution. Can you briefly describe what the remaining problem is? I've rather lost track of this one. I don't think you normally get blocked in close() (on Linux, for example) until all packets are sent. Can you remind me why this is necessary for lwIP?

Kieran Mansley <kieranm>
Group Member
Mon 26 Mar 2007 10:33:51 AM UTC, comment #7: 

I can just tell you that closesocket() on Winsock gives you assurance that all pending segments are sent, even if you call the close directly just after the write. If the peer crashes at close time, you should get an error after MAXRTX retransmissions (I suppose)?

Frédéric Bernon <fbernon>
Group Member
Mon 26 Mar 2007 10:29:11 AM UTC, comment #6: 

This does not look like a good solution to me. What I don't like is that the stack keeps the application waiting without doing anything, and the time it needs depends on the peer. I don't think any other OS makes you wait when closing a socket...
I did not investigate the problem myself, but can't we look into what other OSes/stacks do when faced with this problem?

Simon Goldschmidt <goldsimon>
Group administrator
Mon 26 Mar 2007 07:31:41 AM UTC, comment #5: 

Can I get some comments about this (I still have to test it, but it concerns the polling feature):

err_t
netconn_close(struct netconn *conn)
{
  struct api_msg msg;

  if (conn == NULL) {
    return ERR_VAL;
  }

  conn->state = NETCONN_CLOSE;
again:
  msg.type = API_MSG_CLOSE;
  msg.msg.conn = conn;
  api_msg_post(&msg);
  if (conn->err == ERR_MEM && conn->sem != SYS_SEM_NULL) {
    /* Closing failed for lack of memory: wait for some segments to be
       freed, then retry the close. */
    sys_sem_wait(conn->sem);
    goto again;
  }
  if ((conn->type == NETCONN_TCP) && (conn->pcb.tcp != NULL)) {
    /* Poll until everything queued (including the FIN) has been acked. */
    if (((conn->pcb.tcp->unacked != NULL) || (conn->pcb.tcp->unsent != NULL)) &&
        (conn->err == ERR_OK)) {
      sys_msleep(1); /* I don't like that, but... */
      goto again;
    }
  }
  conn->state = NETCONN_NONE;
  return conn->err;
}

Frédéric Bernon <fbernon>
Group Member
Fri 23 Mar 2007 08:03:03 AM UTC, comment #4: 

I can confirm that Christiaan's change doesn't solve this problem. When he committed it, I tried disabling my patch (which is not committed) and I still got the problem.

Because even if you force tcp_output in do_close: first, I would have to call netconn_close (which is not called in the sockets layer in the CVS release); second, you can't be sure that the last segments are received & acknowledged by the peer...

So that's why I haven't yet proposed a patch for this (mine works for me, but it is not very nice).

Frédéric Bernon <fbernon>
Group Member
Fri 23 Mar 2007 05:38:13 AM UTC, comment #3: 

I thought the second part was fixed with Christiaan's change in comment #3 of bug #15926. Are you sure there's still a problem here?
If not, I guess this bug can be closed.

Jonathan Larmour <jifl>
Group Member
Sun 11 Mar 2007 05:00:01 PM UTC, comment #2: 

First part committed with the fix of bug #19225.

Second part: we need to see if it's possible to add a feature like the one described in "bug #15926: netconn API bugs", comment #1:

http://savannah.nongnu.org/bugs/index.php?15926

Frédéric Bernon <fbernon>
Group Member
Mon 26 Feb 2007 08:54:48 PM UTC, comment #1: 

The second part is one of the solutions from:

http://savannah.nongnu.org/bugs/?func=detailitem&item_id=15926


Frédéric Bernon <fbernon>
Group Member
Mon 26 Feb 2007 08:40:01 PM UTC, original submission:  

First, in lwip_close, I think that the last sock_set_errno has to be done BEFORE sys_sem_signal, like this:
 
   sock->lastdata   = NULL;
   sock->lastoffset = 0;
   sock->conn       = NULL;
   sock_set_errno(sock, 0);
   sys_sem_signal(socksem);
 
Because with the "original" code :

   sys_sem_signal(socksem);
   sock_set_errno(sock, 0);

If, between these two calls, another thread (with a higher priority) reopens and uses the same socket and gets an error, the error will be masked by the sock_set_errno.

Second, closing a TCP socket which has just sent data causes the data to be purged. The "old" bug was already reported, but none of the fixes are really correct. The only solution seems to be to poll the unacked and unsent lists until they're empty, until an error, or until a timeout. To handle this polling timeout, I use my own sys_arch.c functions (there is no equivalent function in sys.h).

extern u32_t sys_start_tickcount(void);
extern u32_t sys_stop_tickcount(u32_t tickcount);

/* ... */
#define CLOSE_TIMEOUT 2000 /* ms */
/* ... */
int
lwip_close(int s)
{
  struct lwip_socket *sock;
  u32_t tickcount;

  LWIP_DEBUGF(SOCKETS_DEBUG, ("lwip_close(%d)\n", s));

  sock = get_socket(s);
  if (!sock) {
    set_errno(EBADF);
    return -1;
  }

  if (sock->conn->type == NETCONN_TCP) {
    /* Poll until all queued segments are sent & acked, an error occurs,
       or CLOSE_TIMEOUT expires. */
    tickcount = sys_start_tickcount();
    while (((sock->conn->pcb.tcp->unacked != NULL) ||
            (sock->conn->pcb.tcp->unsent != NULL)) &&
           (sock->conn->err == ERR_OK) &&
           (sys_stop_tickcount(tickcount) < CLOSE_TIMEOUT)) {
      sys_sleep(1);
    }
  }

  sys_sem_wait(socksem);
  netconn_close(sock->conn); /* FB: really useful? */

  netconn_delete(sock->conn);
  if (sock->lastdata) {
    netbuf_delete(sock->lastdata);
  }
  /* Set errno before releasing the socket semaphore (see the first part
     of this report). */
  sock->lastdata   = NULL;
  sock->lastoffset = 0;
  sock->conn       = NULL;
  sock_set_errno(sock, 0);
  sys_sem_signal(socksem);
  return 0;
}

I think that code would have to be in the netconn layer...

Frédéric Bernon <fbernon>
Group Member

 


No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by strandh (Posted a comment)
  • -email is unavailable- added by kieranm (Posted a comment)
  • -email is unavailable- added by goldsimon (Posted a comment)
  • -email is unavailable- added by jifl (Posted a comment)
  • -email is unavailable- added by fbernon (Submitted the item)


     

3 latest changes:

Date        Changed by  Updated Field   Previous Value => Replaced by
2007-06-09  fbernon     Open/Closed     Open => Closed
2007-03-11  fbernon     Status          None => In Progress
2007-03-04  fbernon     Assigned to     None => fbernon

