Re: [Yaffs] YAFFS2 Memory Usage Redux

Author: Chris
Date:  
To: yaffs
Subject: Re: [Yaffs] YAFFS2 Memory Usage Redux

On 2/1/2010 9:45 PM, Charles wrote:

> Making large virtual pages like this is certainly the easiest way to
> reduce memory footprint. As Ross says, the down side is that you end
> up with more space wastage due to fractional chunk use. However the
> impact of this is very dependent on usage. If you have many small
> files then the wastage is significant. If, instead, your application
> tends towards larger files (say a Mbyte or more typically) then the
> wastage is only going to be 1% or so.
>


Agreed. It would be very easy for me to go to larger virtual pages
because I could handle this at the lowest flash layer. Additionally, my
system uses hundreds of files max, not thousands or tens of thousands,
so most of the RAM use will be in the tnodes.
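
Just to put numbers on the fractional chunk use, here is a quick
back-of-envelope. The chunk and file sizes are only assumptions, and the
average waste is taken as half a chunk per file:

/* Rough wastage estimate for larger virtual chunks. Assumes each file
 * wastes, on average, half a chunk at its tail; the sizes below are
 * illustrative only.
 */
#include <stdio.h>

int main(void)
{
    const unsigned chunk_size = 8 * 1024;   /* assumed 8 KB virtual chunk */
    const unsigned file_sizes[] = { 4 * 1024, 64 * 1024, 1024 * 1024 };
    const unsigned n = sizeof(file_sizes) / sizeof(file_sizes[0]);

    for (unsigned i = 0; i < n; i++) {
        double waste_pct = (chunk_size / 2.0) / file_sizes[i] * 100.0;
        printf("%7u byte file: ~%.1f%% of its size wasted\n",
               file_sizes[i], waste_pct);
    }
    return 0;
}

That comes out well under 1% for files of a MByte or more, which lines
up with your figure; it is only the small-file case that really hurts.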

However, what I am really after is to break the O(n) relationship between
flash size and system RAM requirement. Again, IMO this is structurally
unsustainable moving forward. Even if I go to 8k pages for a 2GB flash I
still need another 2- to 4-fold reduction to hit my memory target of
<256KB for the filesystem. Even if you think 256KB is unreasonably
small, once you use a FLASH of 4, 8, or 16GB on a single chip (current
technology) you will be beyond the RAM size of most SRAM-based embedded
systems. In another two years you'd be hard pressed to even buy a <16GB
single chip, as old flash goes EOL very quickly. FLASH sizes will
increase, but the embedded system needed to drive them shouldn't have to
grow with them, any more than I should need to add another 1GB memory
stick to my PC just to plug in a 128GB SD card.
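
To make the O(n) point concrete, here is the same sort of
back-of-envelope for the tnode RAM. The 2 bytes per level-0 entry is an
assumption; the real entry width depends on the number of chunks, and
the tree itself adds overhead on top:

/* Rough tnode RAM vs. flash size, assuming ~2 bytes of level-0 tnode per
 * chunk. Only meant to show the linear scaling, not exact YAFFS figures.
 */
#include <stdio.h>

int main(void)
{
    const unsigned long long GiB = 1ULL << 30;
    const unsigned long long flash_sizes[] = { 2 * GiB, 4 * GiB, 16 * GiB };
    const unsigned chunk_size = 8 * 1024;        /* 8 KB virtual chunk */
    const unsigned bytes_per_chunk_entry = 2;    /* assumed entry width */

    for (int i = 0; i < 3; i++) {
        unsigned long long chunks = flash_sizes[i] / chunk_size;
        unsigned long long ram_kb = (chunks * bytes_per_chunk_entry) >> 10;
        printf("%2llu GB flash -> ~%llu KB of tnode RAM\n",
               flash_sizes[i] / GiB, ram_kb);
    }
    return 0;
}

Even at 8k chunks that is roughly 512KB for 2GB of flash, and it just
keeps scaling up from there.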

> I did once implement something experimental that could work in some
> cases. This approach would replace the lowest level node of the tnode
> tree, if it contained a sequential run of data chunks, with a start
> and extent in place of the level 1 tnode pointer. That really reduces
> memory significantly for large files with sequentially written data.
>


This is more or less what I was describing. Statistically this works
out great in data logging systems because you log lots of data until
your disk is full. Then you read all the data out at once, and erase it
all at once for a new run. The worst case is if you continuously
interleave the fwrite()'s of two or more files at the same time, since
that breaks the consecutive sequence.
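
Something along these lines is roughly what I had in mind, purely as a
sketch; the names and layout are mine, not the experimental code you
describe:

/* Sketch only: collapse a lowest-level run of consecutive data chunks
 * into a (start, count) extent. Identifiers are made up for illustration
 * and are not YAFFS names.
 */
struct chunk_extent {
    unsigned start_chunk;   /* NAND chunk id of the first chunk in the run */
    unsigned n_chunks;      /* length of the consecutive run */
};

/* Returns 1 and fills *out if the level-0 entries form one consecutive
 * run, 0 otherwise (e.g. interleaved writes have broken the sequence). */
static int try_compress_run(const unsigned *entries, unsigned n,
                            struct chunk_extent *out)
{
    if (n == 0)
        return 0;
    for (unsigned i = 1; i < n; i++) {
        if (entries[i] != entries[0] + i)
            return 0;
    }
    out->start_chunk = entries[0];
    out->n_chunks = n;
    return 1;
}

The interleaved-write case is exactly the early return above: one chunk
out of sequence and you fall back to the full tnode.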

>
> Of course that is absolutely best case so don't count on it in the
> real world.
> That code is also not available for release into the public domain. :-(.
>


Doh!

> I'm starting to dabble with some ideas for a yaffs3 (no promises yet).
> This would provide mechanisms to use the space in the file headers
> better and use them as a way to store either data or a tnode tree.
> This would give both better flash usage for small files and the
> ability to load a file's tnode tree only when it is required and dump
> it from memory when it is not needed. That would certainly reduce the
> RAM footprint considerably.


I think this is certainly the way to go and needs to be addressed. I
would draw an analogy to the JFFS2 mount time problem. It was never on
the radar as a fundamental problem, but as flash sizes grew faster than
anyone expected, it has now become such a serious issue that it
disqualifies JFFS2 for a whole class of embedded systems. Fifteen
minutes to mount a full 2GB partition on a 200MHz ARM9. Ouch!

I would try to think in terms of a hybrid solution where you could set a
user performance parameter (such as a tnode level) at which FLASH is
substituted for RAM, or in terms of certain operations not being
penalized (sequential reads, open for append) at the expense of others
(random seeks, truncations, etc.).
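
For example (purely hypothetical names, not an existing YAFFS
interface), the knob might look something like:

/* Hypothetical tuning knobs, not real YAFFS structures: pick the tnode
 * level above which the tree stays resident in RAM, with lower levels
 * re-read from flash (or the file header) on demand.
 */
struct yaffs_mem_policy {
    int ram_tnode_level;    /* tnode levels >= this are kept in RAM */
    int keep_append_files;  /* keep full trees for files open for append */
    int favour_sequential;  /* accept slower seeks/truncates to keep
                               sequential reads cheap */
};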

Thanks Charles!

Chris