Hey Charles,
When testing with the 2GB NAND I was visited by the OOM killer a few times.
This makes me think we're short on RAM for handling a 2GB NAND part. Our
board currently has 32MB of RAM, of which 8MB is used for a RAM disk. When
I dropped the RAM disk down to 3.5MB for testing purposes, I didn't have
issues with the OOM killer any more. We're looking at moving up to 64MB of
RAM to avoid this issue. However, in the future I'd like to be able to
estimate the memory usage of YAFFS2 based on NAND size.
I found a thread about YAFFS2 memory usage, and I just want to make sure I
understand it correctly.
http://www.yaffs.net/lurker/message/20090701.190059.23524635.ca.html#yaffs
> * yaffs_Objects: Each object (file, directory,...) holds a yaffs_Object
in memory which is around 120 bytes per object.
So every file, directory, etc. uses 120 bytes of RAM, all the time, right
from when the filesystem is mounted? So if I have 1000 objects on the
device I'll be using 120000 bytes?
> * yaffs_Tnodes: These are the things used to build the trees.
The part I'm using is 8192 erase blocks, and 64 pages per erase block.
That means there are 524288 chunks in my filesystem. Using your equation, I
come up with:
Log2(524288) = 19 bits
19 + 1 = 20 (which is already even)
So 20 bits will be used to represent each chunk.
Assuming the worst case, where the filesystem is full, I will be using all
524288 chunks. That works out to 20 bits * 512K chunks = 10 Mbits, or about
1.25MB of RAM, to store all the Tnodes. Does that seem about right?
I was also copying my Linux source tree to NAND; it's about 22527 files.
That would require 120 * 22527 bytes, or about 2.6MB of RAM, for all of
the Objects.
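The same back-of-the-envelope style for the object side; the 120-byte
figure is just the number quoted from the thread, not something I've
measured:

```python
# Rough yaffs_Object cost, using the ~120 bytes/object figure from the
# linked message and the file count from my Linux source tree copy.
objects = 22527
bytes_per_object = 120
object_bytes = objects * bytes_per_object
print(object_bytes)  # 2703240 bytes (~2.6MB)
```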
Of course, as you mentioned in that email, there is some other overhead on
top of this, but should these two account for the bulk of the memory
required to handle a YAFFS filesystem?
Thanks again,
Andrew McKay
Iders Inc.