Mon 07 Mar 2016 04:19:55 PM UTC, comment #19:
I'm going to suggest that this is probably due to the other open bug, where the packet's size field is probably not calculated properly. So it might be that complete packets remain in the queue because of a miscalculated expected size. Let's test this again after we fix the open bug (sometime this week, I hope).
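To illustrate the suspicion (a sketch of the idea only, not the actual lowpan6.c code):

```c
#include "lwip/pbuf.h"

/* Illustrative sketch, not the real lwIP source: reassembly of one
 * datagram is complete only when the summed fragment lengths reach
 * the advertised datagram_size. If the sender computes that field
 * too large, this never becomes true and the fully received packet
 * sits in the queue until it times out. */
static int reass_complete(const struct pbuf *p, u16_t datagram_size)
{
    return p->tot_len == datagram_size;
}
```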
|
Sat 05 Mar 2016 03:21:06 PM UTC, comment #18:
> OK now I see: you're probably coming from a uIP world where you don't want to allocate 10 KByte for the IP stack?
Yep, 10 KByte is huge in my world (though it's not the uIP world ^^).
> Are you sure that your PBUF_POOL_BUFSIZE is set to around 127?
Since I'm not setting it: probably not. I know 6LoWPAN is pretty new to master, but it would be nice to have some documentation on what needs to be set and called, apart from the usual stuff for Ethernet :-).
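Something like the following is what I ended up piecing together myself (the option names are from opt.h, but the values are just my illustrative guesses):

```c
/* Illustrative lwipopts.h excerpt for a constrained 6LoWPAN node;
 * the values are assumptions, not documented recommendations. */
#define MEM_SIZE           4096  /* lwIP heap size in bytes */
#define MEMP_MEM_MALLOC    1     /* serve the memp pools from the heap */
#define PBUF_POOL_BUFSIZE  127   /* one IEEE 802.15.4 frame per pbuf */
#define LWIP_IPV6          1     /* 6LoWPAN carries IPv6 */
```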
> I'm closing this, as you said the crash (which is what this bug is about, after all) might probably be in your driver.
That's alright. Thanks for the nice discussion anyway.
|
Sat 05 Mar 2016 02:34:20 PM UTC, comment #17:
OK now I see: you're probably coming from a uIP world where you don't want to allocate 10 KByte for the IP stack? In lwIP, this is rather little memory, given that you allocate all memory from the heap, even all RX pbufs!
Are you sure that your PBUF_POOL_BUFSIZE is set to around 127?
Anyway, this is not a 6LoWPAN issue but rather the question "why does lwIP consume more memory than I would think it would?"
By tracing the calls to mem_malloc(), you should find out...
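The quickest way to do that is probably to enable the heap statistics (LWIP_STATS/MEM_STATS are standard options; the printf is just for illustration):

```c
/* In lwipopts.h: */
#define LWIP_STATS  1
#define MEM_STATS   1

/* In the application, after some traffic has flowed: */
#include "lwip/stats.h"
printf("heap used: %d, max: %d, alloc errors: %d\n",
       (int)lwip_stats.mem.used,
       (int)lwip_stats.mem.max,
       (int)lwip_stats.mem.err);
```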
I'm closing this, as you said the crash (which is what this bug is about, after all) might probably be in your driver.
|
Sat 05 Mar 2016 02:06:07 PM UTC, comment #16:
(I have MEMP_MEM_MALLOC set to 1 btw)
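(Which, if I read memp.c correctly, is exactly why MEM_SIZE matters in my setup: with MEMP_MEM_MALLOC == 1, every pool allocation, including PBUF_POOL pbufs, is redirected to the mem_malloc() heap, conceptually like this simplified sketch:)

```c
/* Conceptual sketch of MEMP_MEM_MALLOC == 1 (simplified, not the
 * literal memp.c source): pool allocations come from the MEM_SIZE
 * heap instead of statically sized pools. */
void *memp_malloc(memp_t type)
{
    return mem_malloc(memp_sizes[type]);
}
```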
|
Sat 05 Mar 2016 01:41:09 PM UTC, comment #15:
> What I don't understand is what MEM_SIZE has to do with it. RX pbufs should be allocated as PBUF_POOL, which has nothing to do with MEM_SIZE. Could it be that you allocate RX pbufs as PBUF_RAM and allocate them much bigger than the 127 bytes required for 6LoWPAN?
All I do with the packet buffer is allocate each received packet as PBUF_POOL, with exactly the length it has, which can't exceed 125 bytes (excluding the FCS) for an IEEE 802.15.4 radio: https://github.com/RIOT-OS/RIOT/pull/3551/files#diff-6629cafbf0fc4d482ccec31da7bb62bdR200 Nothing else uses lwIP's packet buffers (sizeof(_tmp_buf) is 127 in my application's context, btw). Where else would the data be stored, if not in the heap whose size is determined by MEM_SIZE? What is mem for, if not for allocating things like packets?
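For clarity, the essence of the linked code is this (a simplified sketch with hypothetical names, not the verbatim RIOT port):

```c
/* Simplified RX-path sketch; names are hypothetical. */
static void rx_frame(struct netif *netif, const u8_t *frame, u16_t len)
{
    /* allocate exactly the received length (<= 125 bytes excluding
     * the FCS on an IEEE 802.15.4 radio) from the PBUF_POOL */
    struct pbuf *p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
    if (p == NULL) {
        return; /* out of pbufs: the frame is dropped */
    }
    pbuf_take(p, frame, len);    /* copy the frame into the pbuf chain */
    if (netif->input(p, netif) != ERR_OK) {
        pbuf_free(p);            /* the input function rejected it */
    }
}
```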
|
Sat 05 Mar 2016 01:20:42 PM UTC, comment #14:
What I don't understand is what MEM_SIZE has to do with it. RX pbufs should be allocated as PBUF_POOL, which has nothing to do with MEM_SIZE. Could it be that you allocate RX pbufs as PBUF_RAM and allocate them much bigger than the 127 bytes required for 6LoWPAN?
|
Sat 05 Mar 2016 01:06:41 PM UTC, comment #13:
> Maybe you did not call lowpan6_tmr()?
In fact, I don't. But nevertheless, there is no time for the fragments to time out (as I said, I pretty much flood the interface, there are no duplicates, and I receive either all packets (with a very big MEM_SIZE) or none at all (with a normal MEM_SIZE)). As far as I can see, completed datagrams are removed from the reassembly buffer when the last fragment is received, so there shouldn't be any packets left in the reassembly buffer.
|
Sat 05 Mar 2016 12:26:50 PM UTC, comment #12:
> However, it is still odd that I need 9 KB of RAM just to receive 1 KB fragmented packets
It is. And although I could not test it, from reading the code I don't see how that would happen. There are simply no allocations involved in the 6LoWPAN file other than the 22 bytes per fragmented packet.
Maybe you did not call lowpan6_tmr()?
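It should be driven periodically, e.g. like this (a sketch; I'm assuming the LOWPAN6_TMR_INTERVAL define and the header names of current master):

```c
#include "lwip/timeouts.h"
#include "netif/lowpan6.h"

/* Sketch: calling lowpan6_tmr() periodically ages out stale
 * reassembly buffers so their memory is freed again. */
static void lowpan6_timer(void *arg)
{
    LWIP_UNUSED_ARG(arg);
    lowpan6_tmr();
    sys_timeout(LOWPAN6_TMR_INTERVAL, lowpan6_timer, NULL);
}

/* start it once at init time:
 *   sys_timeout(LOWPAN6_TMR_INTERVAL, lowpan6_timer, NULL); */
```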
|
Fri 04 Mar 2016 10:06:40 PM UTC, comment #11:
> We could enforce an upper limit on allocated packets and/or fragments, but I don't see how it would crash.
Hi, as stated the crashes occur rarely, and after Ivan's comment I assume it could be related to the low-level driver I'm using. However, it is still odd that I need 9 KB of RAM just to receive 1 KB fragmented packets at all (instead of having them discarded because there is no space left to allocate them; at least that's what I observed when I debugged this).
|
Fri 04 Mar 2016 08:32:56 PM UTC, comment #10:
> So that means you have 308 extra bytes allocated
Oops, I got that wrong: struct lowpan6_reass_helper is only allocated once per FRAG1 (not per FRAGN), so you only have 22 bytes of overhead per fragmented IPv6 packet.
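For context, the helper looks roughly like this (quoted from memory, so treat the exact fields as approximate); on a 32-bit platform the members add up to about 22 bytes:

```c
/* Approximate shape of the reassembly bookkeeping in lowpan6.c
 * (from memory, not verbatim). */
struct lowpan6_reass_helper {
  struct pbuf *pbuf;                        /* 4 bytes: fragment data   */
  struct lowpan6_reass_helper *next_packet; /* 4 bytes: list link       */
  u8_t timer;                               /* 1 byte: reassembly timer */
  struct ieee_802154_addr sender_addr;      /* 9 bytes: len + 8B addr   */
  u16_t datagram_size, datagram_tag;        /* 2 + 2 bytes              */
};                                          /* ~22 bytes before padding */
```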
Given that, I don't know what the issue is or what the proposed fix would be.
We could enforce an upper limit on allocated packets and/or fragments, but I don't see how it would crash.
|
Fri 04 Mar 2016 06:22:43 PM UTC, comment #9:
> Sorry, I meant MEM_USE_POOLS
I'm not quite sure I understand. MEM_USE_POOLS makes mem_malloc() use different-sized memp pools. MEM_SIZE is not needed in this case.
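To spell that out (the pool sizes below are made-up examples):

```c
/* In lwipopts.h; mem_malloc() then draws from fixed-size pools
 * declared in lwippools.h instead of the MEM_SIZE heap: */
#define MEM_USE_POOLS          1
#define MEMP_USE_CUSTOM_POOLS  1

/* In lwippools.h (illustrative sizes): */
LWIP_MALLOC_MEMPOOL_START
LWIP_MALLOC_MEMPOOL(20, 128)   /* 20 buffers of 128 bytes each  */
LWIP_MALLOC_MEMPOOL(4, 1536)   /*  4 buffers of 1536 bytes each */
LWIP_MALLOC_MEMPOOL_END
```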
> so we are talking about the range of 12-14 received fragments here
So that means you have 308 extra bytes allocated through mem_malloc until you have received an MTU-sized fragmented frame. I don't quite get how that correlates to your 10 KByte.
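(That is: 14 fragments x 22 bytes of helper overhead each = 308 bytes.)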
|
Fri 04 Mar 2016 05:32:31 PM UTC, comment #8:
Ah, you were referring to the MTU of IPv6 over IEEE 802.15.4. That is of course correct (and I'm not going over it, btw). I thought you were referring to the general MTU in the scope of IPv6.
|
Fri 04 Mar 2016 05:26:55 PM UTC, comment #7:
The MTU for transmission of IPv6 packets over IEEE 802.15.4 is set to 1280.
Larger packets must go through IPv6-layer fragmentation first, in order to comply with this maximum.
https://tools.ietf.org/html/rfc4944#section-4
https://tools.ietf.org/html/rfc4944#section-5.3 [datagram_size]
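For reference, the fragment headers from those sections (layout per RFC 4944; 4 bytes for the first fragment, 5 for each subsequent one):

```
FRAG1: | 1 1 0 0 0 | datagram_size (11 bits) | datagram_tag (16 bits) |
FRAGN: | 1 1 1 0 0 | datagram_size (11 bits) | datagram_tag (16 bits) | datagram_offset (8 bits) |
```

datagram_size is the size of the unfragmented IP packet and is repeated in every fragment; datagram_offset counts in units of 8 bytes.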
|
Fri 04 Mar 2016 05:21:23 PM UTC, comment #6:
This is not correct. 1280 is the minimum MTU (source: https://tools.ietf.org/html/rfc2460#section-5). Otherwise the M in MTU would be quite redundant, and the fragmentation and compression in 6LoWPAN would not have been required ;-).
My setup is this application for RIOT-OS: https://github.com/authmillenon/RIOT_playground/tree/master/stack_comparison/time_rx/lwip (note that this is not intended for publication, so I hope my funky symlink usage isn't too appalling ;-P). You can find my RIOT-OS port here: https://github.com/RIOT-OS/RIOT/pull/3551
|
Fri 04 Mar 2016 05:09:39 PM UTC, comment #5:
Please note that max MTU is 1280, although it should be safe if we receive larger packets.
We are going to need more info to debug this. My own flooding tests have gone well. Not to imply that there is no bug, but we also need to rule out a problem with the low-level driver.
|
Fri 04 Mar 2016 02:04:24 PM UTC, comment #4:
(so we are talking about the range of 12-14 received fragments here)
|
Fri 04 Mar 2016 01:58:49 PM UTC, comment #3:
This issue was written very badly... It was very late and I was about to go to bed. Sorry for that. I also forgot to mention that this only happens for very big IPv6 datagrams (though still within IPv6's minimum MTU of 1280 bytes).
|
Fri 04 Mar 2016 01:49:50 PM UTC, comment #2:
Sorry, I meant `MEM_USE_POOLS` (^.^).
|
Fri 04 Mar 2016 07:31:45 AM UTC, comment #1:
What does LWIP_USE_POOLS mean? I can't find that in the sources.
6LoWPAN reassembly stores the received pbufs plus a helper struct of 22 bytes per fragment. How many bytes are wasted per 6LoWPAN RX packet then depends on the input pbuf size. But 10 KByte seems a bit too much...
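A rough worked example (my own numbers; assuming PBUF_POOL_BUFSIZE = 127 and roughly 16 bytes of struct pbuf overhead per buffer):

```
1280-byte datagram -> ~14 link-layer fragments
~14 x (127 pbuf payload + ~16 struct pbuf + 22 helper) = ~14 x 165
                                                       = ~2310 bytes
```

That is still far below 10 KByte.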
|
Fri 04 Mar 2016 03:53:53 AM UTC, original submission:
For an evaluation of several stacks I flood each stack (admittedly with a virtual network device, so there is next to no delay compared to realistic situations). After the introduction of 6LoWPAN to your stack last week I considered it as well, but currently I need to set the `MEM_SIZE` value (with `LWIP_USE_POOLS`) to a comparably very big value (10592 + stack size for the TCP/IP thread, on a Cortex-M platform [this one: https://www.iot-lab.info/hardware/m3/, but I also saw the same behavior on another one]). Otherwise, the stack can't reassemble the packets since it runs out of memory, and in some cases it even crashes.
|