lwIP - A Lightweight TCP/IP stack - Bugs: bug #3031, Implement a new fully pool-based pbuf implementation

 
 


bug #3031: Implement a new fully pool-based pbuf implementation.

Submitter:  Leon Woestenberg <likewise>
Submitted:  Tue 01 Apr 2003 11:44:15 AM UTC
   
 
Category:  pbufs
Severity:  2 - Minor
Item Group:  Feature Request
Status:  None
Privacy:  Public
Assigned to:  None
Open/Closed:  Open
Planned Release:  None
lwIP version:  None


Thu 27 Mar 2008 09:57:28 PM UTC, comment #29: 

Of course the current approach has the downside that MEM_USE_POOLS diverts the whole heap to pools, not only PBUF_RAM. Maybe we should review this... ?
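(For context, a minimal sketch of the mechanism being discussed, assuming the MEM_USE_POOLS / MEMP_USE_CUSTOM_POOLS options as of 1.3.0; the comments are illustrative only.)

/* lwipopts.h (sketch): route every mem_malloc() through custom pools
 * instead of the byte heap. */
#define MEM_USE_POOLS          1   /* serve mem_malloc() from pools        */
#define MEMP_USE_CUSTOM_POOLS  1   /* pool sizes are listed in lwippools.h */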

Simon Goldschmidt <goldsimon>
Group administrator
Tue 25 Mar 2008 09:19:06 PM UTC, comment #28: 

As nice as this idea is, it's a little off-topic in this thread: The original idea of this 'bug' has already been implemented: PBUF_RAM using pools. This can be achieved by the new memp_std.h file.

Apart from this, a new task to optimize memory usage might be worth it. The idea of a netif_dhcp seems quite interesting. SNMP is another candidate for optimization, I think.

Simon Goldschmidt <goldsimon>
Group administrator
Tue 25 Mar 2008 01:50:04 PM UTC, comment #27: 

I think the mem_malloc in dhcp can easily be removed by creating a netif_dhcp struct that contains a netif plus a struct dhcp. The allocation looks to be a one-time event, but I have not dug that deeply into the code. In dhcp_inform, it's allocated and freed - maybe that struct dhcp can be stack-based?
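(A minimal sketch of that idea; the netif_dhcp name and layout are hypothetical, not existing lwIP API.)

#include "lwip/netif.h"
#include "lwip/dhcp.h"

/* Hypothetical wrapper: co-allocate the DHCP client state with the netif so
 * the DHCP code would no longer need mem_malloc() for struct dhcp. */
struct netif_dhcp {
  struct netif netif;  /* the interface itself */
  struct dhcp  dhcp;   /* DHCP state, part of the same object */
};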

Bill Auerbach <billauerbach>
Tue 25 Mar 2008 01:38:41 PM UTC, comment #26: 

Would anyone like to summarise what remains to be done to close this bug? Now that 1.3.0 has been released, we should perhaps reconsider this. I feel this should really be moved to a task.

Kieran Mansley <kieranm>
Group Member
Thu 09 Aug 2007 02:22:47 PM UTC, comment #25: 


> That's not true! The first changes I made to tcp_in.c were in
> version 1.65 of the file. But the line 'pcb->snd_queuelen -=
> pbuf_clen(next->p);' was already present in tcp_receive() before
> that version!


Bizarre - I had used cvs annotate before. I must have just misread it, sorry. It doesn't make a difference to what we do going forward, fortunately.

> As for the macro, I agree. Perhaps we can make the function use
> that macro so that we don't have the same code twice.


Indeed.

Jonathan Larmour <jifl>
Group Member
Thu 09 Aug 2007 02:01:40 PM UTC, comment #24: 


>> snd_queuelen is decremented using pbuf_clen
> before your changes it wasn't decremented using pbuf_clen either.


That's not true! The first changes I made to tcp_in.c were in version 1.65 of the file. But the line 'pcb->snd_queuelen -= pbuf_clen(next->p);' was already present in tcp_receive() before that version!

As for the macro, I agree. Perhaps we can make the function use that macro so that we don't have the same code twice.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 26 Jul 2007 03:12:09 PM UTC, comment #23: 


> Ah, OK. But there are two problems with counting the segments:
> - you have no control of how many pbufs you need to configure


That is true.

> - snd_queuelen is decremented using pbuf_clen (because when
> sending with copy=0, a segment will consist of one PBUF_REF for
> the data and one PBUF_RAM for the header) so it must be
> incremented using pbuf_clen also, I think.


Before your changes it wasn't decremented using pbuf_clen either.

But I guess if pbuf_clen becomes a macro, it's not worth worrying about - the only problem now being that I'm not sure it can be made a macro in its current form. Perhaps we need a different name for the macro version:

#define pbuf_clen_p(p, c)     \
do {                          \
  struct pbuf *_p = (p);      \
  u8_t _len = 0;              \
  while (_p != NULL) {        \
    ++_len;                   \
    _p = _p->next;            \
  }                           \
  *(c) = _len;                \
} while (0)
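(For illustration only, a hypothetical call site of the macro above:)

u8_t clen;
pbuf_clen_p(next->p, &clen);   /* count the pbufs in the segment's chain */
pcb->snd_queuelen -= clen;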

Jonathan Larmour <jifl>
Group Member
Thu 26 Jul 2007 02:43:53 PM UTC, comment #22: 

Ah, OK. But there are two problems with counting the segments:
- you have no control of how many pbufs you need to configure
- snd_queuelen is decremented using pbuf_clen (because when sending with copy=0, a segment will consist of one PBUF_REF for the data and one PBUF_RAM for the header) so it must be incremented using pbuf_clen also, I think.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 26 Jul 2007 02:38:27 PM UTC, comment #21: 


>> So perhaps is it ok for it to just represent the length of the
>> queue, rather than being an accurate count of the number of
>> pbufs?
>
> I don't really get what you mean here,


What I meant is that the specific number of pbufs limited by TCP_SND_QUEUELEN may not be that important. I might be wrong, but it seems to me that TCP_SND_QUEUELEN is just there to act as a limit to stop the queue getting too long. At present it looks like instead of counting pbufs, it is counting segments. But is this actually a problem?

Maybe we should just change the comments in tcp_out.c and opt.h!

I'm prepared to be corrected on this though.


Jonathan Larmour <jifl>
Group Member
Tue 24 Jul 2007 11:03:20 AM UTC, comment #20: 


> So perhaps it is OK for it to just represent the length of the
> queue, rather than being an accurate count of the number of
> pbufs?


I don't really get what you mean here, I think. The problem came from the fact that on enqueueing, queuelen is incremented by one for each pbuf (whether it is a chain or not), while when removing from the queue, pbuf_clen is subtracted. So for chained pbufs, the amount subtracted is bigger than the amount previously added.

I'll change pbuf_clen to a macro and use it in tcp_enqueue as well; then we should be pretty safe while not getting noticeably slower.
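(A sketch of the intended change in tcp_enqueue, based on the description above; not the committed code.)

/* tcp_enqueue (sketch): count every pbuf in the chain when enqueueing, so it
 * matches the pbuf_clen() subtraction done when segments are freed. */
/* before: */  queuelen++;
/* after:  */  queuelen += pbuf_clen(p);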

Simon Goldschmidt <goldsimon>
Group administrator
Fri 20 Jul 2007 06:43:17 PM UTC, comment #19: 

Re comment #17: Only raw API applications would be affected. Anything using the socket API or the netbuf_* functions of the sequential API would be fine. But even then, I think it's always been the case that we should have been able to handle chained PBUF_RAMs, and the issue in tcp_enqueue falls into the category of a bug, not a feature ;).

Re comment #16: pbuf_clen() versus a new pbuf_count.
Firstly, be aware that adding one byte to struct pbuf would in practice probably add four for most people, due to alignment constraints (assuming 32-bit ints).

I think most chains will be short enough that pbuf_clen() isn't much of an issue - the biggest thing would be the function call overhead itself, and to be honest, pbuf_clen() is so short it should probably be made a function-like macro anyway.

OTOH, I'm not sure there is anything magic about the queue len. Correct me if I'm wrong, but it doesn't appear to represent anything other than a contrived view of "how long the queue is". The value that really represents something is the number of bytes in the queue. So perhaps it is OK for it to just represent the length of the queue, rather than being an accurate count of the number of pbufs?

Jonathan Larmour <jifl>
Group Member
Tue 03 Jul 2007 08:30:48 PM UTC, comment #18: 

I've added the asserts for a start.

Simon Goldschmidt <goldsimon>
Group administrator
Mon 25 Jun 2007 01:23:46 PM UTC, comment #17: 


> I see no reason why we shouldn't allow chained PBUF_RAMs, so the assertions are probably a good idea.


I still have doubts about this since I think it could affect existing applications that are built on the assumption that PBUF_RAM pbufs are not chained. Maybe we have to implement a new type for this? PBUF_RAM_NOCOPY could be the right name.
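(Sketch of what such a new type might look like; PBUF_RAM_NOCOPY is only a proposal here, not an existing lwIP type.)

/* pbuf.h (hypothetical extension): */
typedef enum {
  PBUF_RAM,        /* heap-allocated, guaranteed contiguous (today's contract) */
  PBUF_RAM_NOCOPY, /* proposed: like PBUF_RAM, but allowed to be a chain       */
  PBUF_ROM,
  PBUF_REF,
  PBUF_POOL
} pbuf_type;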

Simon Goldschmidt <goldsimon>
Group administrator
Mon 25 Jun 2007 01:20:37 PM UTC, comment #16: 


> However, i don't understand why pbuf_clen() is insufficient to count the length of such a chain.


Sorry, my explanation was missing the description of where pcb->snd_queuelen is incremented. This is done in tcp_enqueue via a simple 'queuelen++'. Of course it's also OK to change that to 'queuelen += pbuf_clen(p)', which gives the same result. (Maybe it's even better; the only downside would be that it's slower.)

Simon Goldschmidt <goldsimon>
Group administrator
Mon 25 Jun 2007 12:51:34 PM UTC, comment #15: 

I see no reason why we shouldn't allow chained PBUF_RAMs, so the assertions are probably a good idea. However, I don't understand why pbuf_clen() is insufficient to count the length of such a chain.


Kieran Mansley <kieranm>
Group Member
Sat 23 Jun 2007 03:56:33 PM UTC, comment #14: 


> In the meantime, I'll add assertions in places where a PBUF_RAM type that is chained can be dangerous.


Should I add those or not? Kieran, if we decide never to move to PBUF_RAM pbufs that can be chained, we can leave the assertions out.

I think having chained PBUF_RAMs can be a good compromise to get the speed of a pool while not wasting much memory (small pool elements chained together into multiple pbufs).

Simon Goldschmidt <goldsimon>
Group administrator
Sat 23 Jun 2007 01:29:23 PM UTC, comment #13: 

Some changes are necessary to make tcp work with chained pbufs:

- introduce a 'pbuf_count' counter in struct tcp_seg and increment it when chaining 2 segments together for output using pbuf_cat (pbuf_cat(useg->p, queue->p); in tcp_enqueue; around line 342).

- this counter is then used for 'pcb->snd_queuelen -= next->pbuf_count;' instead of 'pcb->snd_queuelen -= pbuf_clen(next->p);' (pbuf_clen(next->p) assumed one pbuf per tcp_enqueue call)

This is both compatible with chained PBUF_RAM pbufs and faster than the current version, since pbuf_clen() doesn't have to be called - of course at the cost of one u8_t per segment.
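(A sketch of the change described above; field placement and the surrounding code are illustrative.)

/* struct tcp_seg (sketch): add a counter of pbufs belonging to the segment */
u8_t pbuf_count;

/* tcp_enqueue (sketch): when concatenating the new segment onto the last one */
pbuf_cat(useg->p, queue->p);
useg->pbuf_count += queue->pbuf_count;

/* tcp_receive (sketch): when an acked segment is removed from the queue */
pcb->snd_queuelen -= next->pbuf_count;   /* instead of pbuf_clen(next->p) */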

Should I check this in or am I the only one interested in this 'bug' (I'd prefer to call it a 'task' though)?

Simon Goldschmidt <goldsimon>
Group administrator
Fri 22 Jun 2007 09:41:54 PM UTC, comment #12: 

... and it is a problem! At least for TCP; the other parts have such low requirements that small PBUF_POOL pbufs are big enough.

tcp_enqueue wants to copy the whole data (a segment) using memcpy, and that fails for a pbuf chain. I don't know yet whether it is the only place; since a segment has a dataptr and a len value, I'll study that.
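(A hedged sketch of what a chain-aware copy could look like; the helper name is made up, and MEMCPY is assumed to be the usual lwIP macro.)

#include "lwip/pbuf.h"
#include "lwip/err.h"

/* Copy 'len' bytes from 'data' into a possibly chained pbuf, one pbuf at a
 * time, instead of a single memcpy into p->payload. Illustrative only. */
static err_t
copy_into_chain(struct pbuf *p, const void *data, u16_t len)
{
  const u8_t *src = (const u8_t *)data;

  while ((p != NULL) && (len > 0)) {
    u16_t chunk = (len < p->len) ? len : p->len;
    MEMCPY(p->payload, src, chunk);
    src += chunk;
    len -= chunk;
    p = p->next;
  }
  return (len == 0) ? ERR_OK : ERR_ARG;
}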

In the meantime, I'll add assertions in places where a PBUF_RAM type that is chained can be dangerous.

By the way, in my sources, in pbuf_alloc, I am allocating PBUF_POOLs instead of PBUF_RAMs and failing if the pool has fallen below a low-water mark. PBUF_RAMs are effectively removed from the binary (and so is mem.c for me, aside from dhcp, snmp & loopif, but I don't use them ;-) ).

Simon Goldschmidt <goldsimon>
Group administrator
Tue 22 May 2007 11:14:59 AM UTC, comment #11: 

What I meant was that there may be some corner cases where a PBUF_RAM is allocated and, when the allocation does not fail, the code allocating it expects it to be in one piece. The parts of the stack that can deal with PBUF_REF or PBUF_POOL can cope with chains, of course!

Simon Goldschmidt <goldsimon>
Group administrator
Tue 22 May 2007 11:12:42 AM UTC, comment #10: 

I think the stack should already be handling pbuf chains. For transmission, the netconn API pretty much encourages it with netbuf_chain, and that function is presumably well used.

For reception, people frequently set PBUF_POOL_BUFSIZE much smaller than the MTU.

Jonathan Larmour <jifl>
Group Member
Tue 22 May 2007 07:08:37 AM UTC, comment #9: 

[I'm working on a mem-to-memp implementation that is written in ANSI C; I'm short of time at the moment, so it could take a while.]

>If an application sends and receives a lot of small packets (text-based communication, for example) with an occasional large packet, the pools can be configured to be small and pbuf chaining takes care of the large packets.


That would imply that PBUF_RAM packets are no longer guaranteed to be one contiguous piece of memory. That would be a good option for low-memory targets, though. But I think the stack is not yet ready to handle pbuf chains everywhere...

Also, we might need some kind of algorithm to decide when to use a bigger buffer (and waste memory) and when to use two smaller ones. And we would need configuration options to choose between the 'one-piece' and chained methods.

Simon Goldschmidt <goldsimon>
Group administrator
Thu 12 Apr 2007 07:18:24 AM UTC, comment #8: 


> I strongly suspect Atte and Simon have significantly more memory.


I have 72Kish (depending on how you count, my byte is 24 bits) for the whole system and the application pushes a lot of data through the stack (maximum 12 channels of data @ 10 kHz), although using UDP. TCP is used for control connections.

Jonathan, I'm sorry but I really didn't understand your argument against my suggestion about multiple memory pools. As of now lwIP has three memory managers (mem_, memp_, and pbuf) of which memp has 11 memory pools, one for each dynamic structure type. It is a difficult system for a beginner to understand, let alone tune. At least I was confused for quite some time ;-).

I personally don't see a problem in using a fixed-size pool allocator in low-memory systems; the pool sizes just have to be tuned appropriately. There are several possible methods for this. If an application sends and receives a lot of small packets (text-based communication, for example) with an occasional large packet, the pools can be configured to be small and pbuf chaining takes care of the large packets. To complement this, TCP's MSS option can be used to keep TCP packets reasonably small.
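(For illustration, the kind of lwipopts.h tuning described above might look like this; the values are made up for the example.)

/* lwipopts.h (sketch): small pool pbufs plus a small TCP MSS, so that large
 * packets are handled by chaining rather than by big contiguous buffers. */
#define TCP_MSS            536   /* keep TCP segments reasonably small */
#define PBUF_POOL_BUFSIZE  256   /* small pool elements; chains cover big frames */
#define PBUF_POOL_SIZE      32   /* tune from observed usage statistics */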

Using any fully-functional TCP/IP stack on a system with 16K of RAM is going to be a tight fit, but I don't see any reason why it couldn't be done using fixed-size pools with proper tuning. I'm also guessing that systems with 16K of RAM don't have very large ROMs either so the code space savings are also going to matter in such systems.

BTW, I don't see any good reason to use a buddy allocator over a decent malloc implementation in a small-memory system. As far as I understand, with small block sizes a buddy allocator has the same fragmentation problems as a dynamic allocator. And it is probably just as complicated to implement, too.

Atte Kojo <kojo>
Thu 12 Apr 2007 06:56:44 AM UTC, comment #7: 

Oh, I forgot to comment about the buddy allocator. It was me you mentioned that one to, about half a year ago... But I must say I'm not really satisfied with it. While it gives less fragmentation than our current heap implementation, it's still worse than our pool implementation. And since we are not that low on RAM, we've chosen to stay with our pools ;-)

Simon Goldschmidt <goldsimon>
Group administrator
Thu 12 Apr 2007 06:26:19 AM UTC, comment #6: 


>Since you've already done the work....
>Would it be possible to have it in the repository


Hm, I'm afraid that won't be possible as-is: I'm using our internal (self-written) memory pools, which we use elsewhere in our application. That would add extra code AND they are written in C++ ;-)

I can easily rewrite it into C, of course, but that would mean having almost the same code twice (memp). I'd rather make mem-alt.c refer to new memp pools and have those included with mem-alt.c using a compiler switch.

Anyway, first dhcp and snmp have to get rid of mem_malloc(), I think...

Simon Goldschmidt <goldsimon>
Group administrator
Wed 11 Apr 2007 10:37:58 PM UTC, comment #5: 

Yes, I was slightly more concerned about Atte's suggestion of making everything come from pools.

I half recall mentioning this somewhere before, but I'm not sure. But one fast and compact way to implement power of two buffers is a buddy allocator (see http://en.wikipedia.org/wiki/Buddy_memory_allocation if you don't know what I'm referring to, or know it by a different name).

So rather than having, say, three pools for three sizes of pbufs, you have a single pool, but can efficiently split up and coalesce pbufs. The fact we can chain pbufs makes this even better. So we can satisfy a 64K allocation with a 64K pbuf, or two 32K pbufs chained together. Of course ethernet frames are not (usually) a power of two, and although a 1024+512 chain would be close, it's probably better to keep using a separate pool for that special case.
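(A minimal illustration of the buddy mechanism referred to above; this is not lwIP code and assumes blocks are naturally aligned within the arena.)

#include <stddef.h>

/* For a free block of size 2^k bytes at byte offset 'off' inside the arena,
 * the buddy to split from or coalesce with is found by toggling bit k. */
static size_t
buddy_offset(size_t off, unsigned k)
{
  return off ^ ((size_t)1 << k);
}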

It's just one possibility anyway.

Since you've already done the work... Does your rewritten mem.c have precisely the same API as the present mem.c? Is it fairly self-contained? Would it be possible to have it in the repository, named perhaps mem-alt.c (for alternative), so users can choose which they build?


Jonathan Larmour <jifl>
Group Member
Wed 11 Apr 2007 06:20:27 PM UTC, comment #4: 


>I strongly suspect Atte and Simon have significantly more memory.


Of course, you're right. We have around 500KB for the IP stack, and I wasn't thinking about memory usage on such low-memory systems (although we might be developing such a system in the near future...).

You're also right about the wasting of memory (which I accept in order to avoid fragmentation).

I definitely would take this as an option for people who want the pbufs to come from pools (and PBUF_RAM is the main client of mem.c), or from a byte-heap. Other than that, the heap would mainly be used in dhcp.c and snmp/msg_in.c.

I have solved this issue for myself by creating different-sized pools in a completely rewritten mem.c file; I only thought maybe other users would like to share this.

Oh, and also: I would not regard this as important!

-> setting this to minor severity!

Simon Goldschmidt <goldsimon>
Group administrator
Wed 11 Apr 2007 05:17:11 PM UTC, comment #3: 

By having multiple separate pools, you increase overall RAM memory use - which is far more of an issue than code footprint for most embedded systems. It's inevitable you will sometimes run out of memory in one pool while still having memory in another, which is wasteful. And pools of fixed chunks which don't fit the allocations will intrinsically have waste. Statistical analysis may reduce some but not all of the overhead, and only really helps "average" operation.

If you want to be memory efficient, you should only use pools of fixed size memory chunks for fixed size allocations. With fixed size allocations that is when an implementation like the current memp one becomes efficient, rather than wasteful as Atte states: users can fine-tune precisely the resources to match the number of TCP connections (pcbs), in-flight TCP segments, etc. they intend to support. They are tuning lwIP so that it will meet their requirements, rather than statistically assume that sometimes it will, sometimes it may not, because you gave it a fairly arbitrarily chosen lump of memory to play with. Otherwise it can mean that "low priority" allocations can use up memory for high priority ones - dropping a packet can be recovered from far more easily than no longer being able to create new outbound connections. The point is, the user chooses how to divide the resources according to the application requirements.

No, it may not be as fast, but lwIP's primary goal is low memory use. Speed is secondary.

If people want this type of solution then I think it must be a configurable option. With what Leon describes, achieving that wouldn't be so hard - he's only referring to pbuf allocations, so that's all fairly self-contained. Trying to adopt fixed size pools for all pbufs permanently, or for variable sized quantities more generally (although it is true that there aren't many non-pbuf variable sized allocations around), or using wrongly sized pools for fixed size quantities, would be bad. 

I'm using lwIP on systems with 32K RAM, and I believe it should get into 16K for some people with simpler applications. I strongly suspect Atte and Simon have significantly more memory. I think it would be a step in the wrong direction to increase memory overheads for everyone.

Jonathan Larmour <jifl>
Group Member
Fri 09 Mar 2007 01:58:43 PM UTC, comment #2: 

I'm also in favor of a general pool allocator. But the current memp implementation is unnecessarily complex. It reserves separate pools for all possible object types that can be allocated, thus wasting a lot of space and time. And on top of that, the pbuf module has its own pool.

In my implementation I have replaced memp_malloc and memp_free with uC/OS-II's similar functions and two pools, one big for tcp_pcbs and one small for everything else. This simplified the code tremendously and also brought a nice performance gain.

Rewriting the memp module to use 3 or 4 exponentially larger pools, and using this module for all memory allocation inside the lwIP core, should reduce the code footprint and also make memory allocation and deallocation a lot faster. An example of a 3-pool scheme could be:

pool #1: small chunks for all dynamically allocated structures inside lwIP (excluding tcp_pcb, which is HUGE).
pool #2: 128-byte chunks for small packets and tcp_pcbs (must be at least sizeof(tcp_pcb))
pool #3: 1536-byte chunks for big packets (MTU of ethernet is 1500, but I rounded it up a bit (1536 == 3 * 2^9)).

Making all pool sizes a multiple of 8 also guarantees that dynamically allocated memory always satisfies alignment restrictions.
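(In today's terms - the MEM_USE_POOLS / lwippools.h mechanism mentioned in comment #28 - the scheme above might be expressed roughly as follows; the element counts are made up.)

/* lwippools.h (sketch): the three pools described above. */
LWIP_MALLOC_MEMPOOL_START
LWIP_MALLOC_MEMPOOL(32,   32)   /* pool #1: small lwIP structures        */
LWIP_MALLOC_MEMPOOL(16,  128)   /* pool #2: small packets and tcp_pcbs   */
LWIP_MALLOC_MEMPOOL( 8, 1536)   /* pool #3: full-size (MTU 1500) packets */
LWIP_MALLOC_MEMPOOL_END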

With some simple statistics collecting, the user could tune the size of each pool individually for the application at hand, and the resulting RAM footprint would probably be smaller than what lwIP currently has with its multiple memory allocation schemes.

Atte Kojo <kojo>
Tue 06 Mar 2007 08:27:08 AM UTC, comment #1: 

This one would be of great interest for me, as I think embedded systems running for a long time should not have a heap if memory is allocated AND deallocated often with different sizes. (And our devices have to run a looong time; ~30 years is our goal.)

Implementing fully pooled pbufs would be one step in that direction.
BUT: I think this would only make sense if mem_malloc() could be eliminated for the rest of the core code. Since this is mainly dhcp and snmp, I hope introducing new memp pools for those would be enough.

This could also solve the issue where no pbufs were free to acknowledge tcp segments (which in turn would free some tcp-enqueued pbufs). There was an error report about this somewhere, though I don't remember where.

I'm not planning to take on this right now, just wanted to know your opinions.

Simon Goldschmidt <goldsimon>
Group administrator
Tue 01 Apr 2003 11:44:15 AM UTC, original submission:  

Experience has shown that a flexible pool-based pbuf system can replace all current pbuf types. A pool allocator with 2^n-sized pools, with some special sizes added (such as sizeof(struct pbuf) and sizeof(struct pbuf) + IP_HLEN + ETH_HLEN), has high potential for realizing a low-latency, high-performance buffering system for packet representation.
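(A rough sketch of such a size table; the exact set of sizes is purely illustrative, and ETH_HLEN is taken from the text above rather than from a core lwIP header.)

#include "lwip/pbuf.h"
#include "lwip/ip.h"

#ifndef ETH_HLEN
#define ETH_HLEN 14   /* Ethernet header length, as referred to above */
#endif

/* Pool element sizes (sketch): powers of two plus a few special sizes. */
static const u16_t pbuf_pool_sizes[] = {
  sizeof(struct pbuf),
  sizeof(struct pbuf) + ETH_HLEN + IP_HLEN,
  64, 128, 256, 512, 1024, 2048
};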

This pbuf layer API should closely resemble the current pbuf.c API.

Added features may be:
- demanding minimum sizes of contiguous pbufs in a chain.

Any developers wishing to cooperate, please discuss this issue on -email is unavailable-

Leon Woestenberg <likewise>
Group Member

 


No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by billauerbach (Posted a comment)
  • -email is unavailable- added by kieranm (Posted a comment)
  • -email is unavailable- added by kojo (Posted a comment)
  • -email is unavailable- added by jifl (Posted a comment)
  • -email is unavailable- added by goldsimon (Posted a comment)


     

Follow 3 latest changes.

Date        Changed by  Updated Field  Previous Value  =>  Replaced by
2007-04-11  goldsimon   Severity       4 - Important   =>  2 - Minor
2007-03-09  kojo        Carbon-Copy    Removed 51921   =>  -
2007-03-06  goldsimon   Carbon-Copy    -               =>  Added -email is unavailable-
