On Thursday 02 July 2009 06:29:17 Ross Younger wrote:
> Hi all,
>
> I've ported YAFFS to eCos and am currently confirming the stability of the
> port.
>
> I've run into questions surrounding YAFFS's expected memory usage.
>
>
> The board I am working with at the moment has a Samsung 1Gbit NAND chip
> (2kB pages x 64 pages per block x 1024 blocks), and I am using the whole
> NAND as my filesystem.
>
> I found a hint ("system requirements" in
> http://www.yaffs.net/yaffs-direct-user-guide) that one should budget 2
> bytes of RAM per page of NAND. On my chip that comes out to 128kB. I
> then found http://www.yaffs.net/yaffs-memory-footprint, which feels like
> a "better" estimate given the greater number of inputs.
>
> I have created a test filesystem with 10,000 files of around 10kB each.
> By the latter estimate above I expect to use around 1.74MB. On querying
> my heap usage, I'm coming in slightly under that at 1.6MB.
>
> I'm pretty sure this number is accurate as nothing else is calling malloc;
> after I unmount the filesystem, heap usage drops to zero.
>
> Admittedly this is a contrived test, but I'm slightly surprised that I came
> close to a YAFFS1 estimate despite using the large-page code of YAFFS2;
> wasn't there intended to be a saving somewhere? Is this estimate generally
> reliable with both small-page and large-page devices? It seems to be more
> like an upper bound than an average, which itself would be useful to have
> confirmed.
>
> Any insights would be gratefully received...
Hi Ross
That calculation is a bit old.
YAFFS's main memory consumers are in two places:
* yaffs_Objects: Each object (file, directory, ...) is represented by a
yaffs_Object in memory, which is around 120 bytes per object.
* yaffs_Tnodes: These are the structures used to build the trees.
The Tnode calculation is more than a one-liner.
To calculate the width of a Tnode entry:
0) "Non-wide tnodes" are 16 bits. However these are slower on large flash.
For wide tnodes (default):
1)Take the number of chunks in your flash and figure out the number of bits
required to represent that. Note that if you are using all the chunks in a
NAND you need to add one to that because we still need to represent "0" ==
the invalid chunk. In your case In your case 64k chunks + 1 ==> 17 bits.
2) Round up to a multiple of 2. In your case 18 bits
3) Minimum of 16 bits. In your case still 18 bits.
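If it helps, here is a minimal sketch of that width rule as code. This is
my own illustration rather than YAFFS source, and tnode_width_bits is a
hypothetical name:

    #include <stdio.h>

    /* Bits per wide-tnode entry: enough bits to encode 0 (the invalid
     * chunk) through n_chunks inclusive, rounded up to a multiple of 2,
     * with a minimum of 16. */
    static unsigned tnode_width_bits(unsigned long n_chunks)
    {
        unsigned bits = 0;

        while (n_chunks) {  /* significant bits of n_chunks: enough to
                               represent the values 0..n_chunks */
            bits++;
            n_chunks >>= 1;
        }
        if (bits & 1)
            bits++;         /* step 2: round up to a multiple of 2 */
        if (bits < 16)
            bits = 16;      /* step 3: enforce the 16-bit minimum */
        return bits;
    }

    int main(void)
    {
        /* Your chip: 64 pages/block * 1024 blocks = 65536 chunks */
        printf("%u bits\n", tnode_width_bits(64UL * 1024)); /* 18 */
        return 0;
    }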
With "reasonable size" files, each chunk in a file needs 18 bits to represent
it, plus a bit.
However, the Tnodes are allocated in groups of 16 entries. When you're
only storing 10kB files, you end up wasting a lot of space.
If your test had used a mix of file sizes between, say, 10kB and 10MB,
you would see far less wasted memory.
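To make that concrete, a rough back-of-envelope check of your test, using
the ~120-byte object figure and the 18-bit, 16-entry groups above (exact
sizes vary with build options): a 10kB file occupies 5 of the 2kB chunks,
so its single 16-entry group wastes 11 slots. That group costs
16 x 18 bits = 36 bytes, so each file needs roughly 120 + 36 = 156 bytes,
and 10,000 files come to about 1.56MB, which lines up with the 1.6MB you
measured.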
There are some other costs: the Tnodes used within a tree (level 1,
level 2, etc.), as well as the cache and buffers, but the above are the
major items. A rough way to count the tree Tnodes is sketched below.
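Again a sketch rather than YAFFS code: tnode_count is a hypothetical
helper, and the 8-way fan-in at the higher levels is from memory, so
check it against the source.

    #include <stdio.h>

    /* Rough count of the Tnodes needed for one file, given how many
     * chunks it occupies. Assumes the 16-entry level-0 groups above and
     * (assumed, from memory) 8-way internal tnodes at higher levels. */
    static unsigned long tnode_count(unsigned long file_chunks)
    {
        unsigned long total = 0;
        unsigned long nodes = (file_chunks + 15) / 16; /* level-0 groups */

        while (nodes > 1) {
            total += nodes;
            nodes = (nodes + 7) / 8;  /* levels 1, 2, ... fan in by 8 */
        }
        return total + nodes;         /* plus the single top tnode */
    }

    int main(void)
    {
        printf("10kB file: %lu tnode(s)\n", tnode_count(5));    /* 1 */
        printf("10MB file: %lu tnode(s)\n", tnode_count(5120)); /* 366 */
        return 0;
    }

For a 10MB file the higher levels (40 + 5 + 1) add only about 15% on top
of the 320 level-0 groups, so level 0 dominates.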
Hope that helps
Charles