Re: [Yaffs] YAFFS2 Memory Usage Redux

Author: Charles Manning
Date:  
To: yaffs
Subject: Re: [Yaffs] YAFFS2 Memory Usage Redux
On Saturday 30 January 2010 22:52:46 Ross Younger wrote:
> Chris wrote:
> > Is there any "low hanging fruit" to be had as far as compromising on
> > certain file system performance aspects, but gaining a much more
> > aggressive memory footprint?


If there was any really low hanging fruit it would have been taken ages
ago. :-).

>
> If you can accept the higher wastage you could move to larger virtual
> blocks: you could do this without touching the guts of the filesystem by
> creating a funky NAND driver which exposed a larger page size (say 8k or
> 16k, or perhaps even as large as a single eraseblock) to YAFFS.
>
> As every page of every file requires a Tnode entry, you've then at a
> stroke cut the number of entries required by a factor of four or eight.
> You've perhaps also shrunk the size of a single Tnode, and hence a few
> more bytes off every Tnode group of every file, if your NAND array had
> 2^16 or more physical pages. (This wouldn't help if your filesystem
> comprised mainly small files, as I think the per-object overhead would
> dominate?)
>
> Such a driver might conceivably be written as an intervening layer -
> that is to say, pretending to YAFFS to be an MTD device and itself
> talking to the real MTD device - which would I think be a good candidate
> to be contributed.


Making large virtual pages like this is certainly the easiest way to reduce
memory footprint. As Ross says, the downside is that you end up with more
space wastage due to fractional chunk use. However, the impact of this is very
dependent on usage. If you have many small files then the wastage is
significant. If, instead, your application tends towards larger files (say a
megabyte or more, typically) then the wastage is only going to be 1% or so.
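
For illustration, the intervening-layer driver Ross describes might look
something like this. This is only a sketch: the names and signatures are
invented, not the real YAFFS/MTD API, and the lower-level phys_* functions
are assumed to exist elsewhere.

#include <stdint.h>

#define PHYS_CHUNK  2048
#define AGGREGATE   4                       /* 4 x 2k -> 8k virtual chunks */
#define VIRT_CHUNK  (PHYS_CHUNK * AGGREGATE)

/* Lower-level driver we delegate to (assumed, hypothetical names). */
int phys_read_chunk(int chunk, uint8_t *buf);
int phys_write_chunk(int chunk, const uint8_t *buf);

/* What we present upward: one big chunk mapped to AGGREGATE small ones. */
int virt_read_chunk(int vchunk, uint8_t *buf)
{
    for (int i = 0; i < AGGREGATE; i++) {
        int err = phys_read_chunk(vchunk * AGGREGATE + i,
                                  buf + i * PHYS_CHUNK);
        if (err)
            return err;
    }
    return 0;
}

int virt_write_chunk(int vchunk, const uint8_t *buf)
{
    for (int i = 0; i < AGGREGATE; i++) {
        int err = phys_write_chunk(vchunk * AGGREGATE + i,
                                   buf + i * PHYS_CHUNK);
        if (err)
            return err;
    }
    return 0;
}

With AGGREGATE = 4 the number of level 0 tnode entries drops by a factor
of four, at the cost of wasting up to one virtual chunk per file.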

I did once implement something experimental that could work in some cases.
If the lowest-level node of the tnode tree contained a sequential run of
data chunks, this approach would strip it out, replacing the level 1 tnode
pointer with a start and extent. That reduces memory significantly for
large files with sequentially written data.
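
Purely as an illustration of the idea (this is not the unreleased code,
and the field names are made up), a level 1 slot could carry either form:

/* Illustrative only: a level 1 slot that either points at a level 0
 * tnode (sparse or randomly written data) or describes a sequential
 * run of chunks directly. */
struct level1_slot {
    unsigned is_extent : 1;        /* 1 => extent form, no level 0 tnode */
    union {
        struct yaffs_tnode *tnode; /* normal case: 16-entry level 0 tnode */
        struct {
            unsigned start;        /* first NAND chunk of the run */
            unsigned extent;       /* number of sequential chunks */
        } run;
    } u;
};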

As an example consider a 3MB file written using 2k pages, i.e. roughly 1500
data chunks. (A level 0 tnode holds 16 entries, a higher-level tnode holds
8 pointers, and a tnode is approx 36 bytes.) The current mechanism needs:
1500 level 0 tnode entries, i.e. 94 level 0 tnodes, approx 3384 bytes.
94 level 1 tnode entries, i.e. 12 level 1 tnodes, approx 432 bytes.
12 level 2 tnode entries, i.e. 2 level 2 tnodes, approx 72 bytes.
2 level 3 tnode entries, i.e. 1 level 3 tnode, approx 36 bytes.
Total: 109 tnodes = approx 3924 bytes.
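
Spelled out as a quick sanity check (a throwaway sketch using the figures
above: 16 entries per level 0 tnode, 8 pointers per internal tnode,
~36 bytes per tnode, and the round number of 1500 chunks):

#include <stdio.h>

int main(void)
{
    const int bytes_per_tnode = 36;
    int entries = 1500;   /* level 0 entries: one per 2k data chunk */
    int per_node = 16;    /* level 0 holds 16 entries */
    int total = 0;

    while (entries > 1) {
        int tnodes = (entries + per_node - 1) / per_node; /* round up */
        printf("%d tnodes at this level\n", tnodes);
        total += tnodes;
        entries = tnodes;  /* each tnode needs one entry one level up */
        per_node = 8;      /* internal tnodes hold 8 pointers */
    }
    printf("total: %d tnodes = %d bytes\n", total,
           total * bytes_per_tnode);
    return 0;
}

This prints 94, 12, 2 and 1 tnodes per level, for a total of 109 tnodes
= 3924 bytes, matching the figures above.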

Absolute best case, the level 0 stripping would dispense with all the level 0
tnodes except for the fractional one at the end of the file, so it would
need:
1 level 0 tnode
12 level 1 tnodes
2 level 2 tnodes
1 level 3 tnode
Total: 16 tnodes = approx 576 bytes.

Of course that is absolutely best case so don't count on it in the real world.
That code is also not available for release into the public domain. :-(.

I'm starting to dabble with some ideas for a yaffs3 (no promises yet). This
would provide mechanisms to use the space in the file headers better, using
them as a way to store either data or a tnode tree. This would give better
flash usage for small files, as well as the ability to load a file's tnode
tree only when it is required and dump it from memory when it is not needed.
That would certainly reduce the RAM footprint considerably.
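
To make that concrete, here is one purely speculative shape such a header
might take. Nothing here reflects an actual yaffs3 design; all names and
sizes are invented for the sketch.

#include <stdint.h>

#define HDR_SPARE 1024  /* say, space left over in a 2k header chunk */

enum hdr_payload { HDR_NONE, HDR_INLINE_DATA, HDR_TNODE_TREE };

struct yaffs3_header {
    /* ...usual object header fields: name, mode, size, etc... */
    enum hdr_payload payload_type;
    unsigned payload_len;
    union {
        uint8_t inline_data[HDR_SPARE];   /* whole small file in header */
        uint8_t packed_tnodes[HDR_SPARE]; /* serialized tnode tree, loaded
                                             on demand and dropped when
                                             the file is idle */
    } payload;
};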



-- Charles