Andrew McKay wrote:
> > Do you have a rule of thumb for amount of RAM
> > required based on the size or number of chunks of NAND?
YAFFS's actual RAM usage varies with the number of files and directories,
but - given the current code - you can compute a worst-case bound by
assuming the filesystem is filled entirely with directories, or entirely
with very small files.
For example, on the device I've been using recently, there are 65536 chunks
of 2k each. This leads to each Tnode requiring 18 bits; a Tnode group
holds 16 Tnodes, so each group takes 16 * 18 bits = 36 bytes.
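As a quick sanity check on that arithmetic, here's a back-of-envelope C
sketch (not YAFFS code; it assumes only the 16-entry level-0 Tnode group
layout, with each entry bit-packed to the device's Tnode width):

    #include <stdio.h>

    /* RAM per level-0 Tnode group: 16 entries, bit-packed. */
    static unsigned tnode_group_bytes(unsigned tnode_width_bits)
    {
        return 16u * tnode_width_bits / 8u;
    }

    int main(void)
    {
        printf("%u bytes per Tnode group\n", tnode_group_bytes(18)); /* 36 */
        return 0;
    }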
Assuming each directory takes a whole chunk of NAND to record (Charles,
please correct me if I'm wrong, but that's my understanding!) you can fit
nearly 65536 of them into this device [0]. Each takes a yaffs_Object of 124
bytes in RAM: a total of 8.1MB (plus a few k more for the yaffs_DeviceStruct
and the short-op caches).
[0] YAFFS reserves some blocks to allow for blocks failing in service and
for garbage collection, hence "nearly" rather than the full count.
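Spelling out the directory-case arithmetic (taking the full 65536 for the
bound and ignoring the handful of reserved blocks from [0]):

    65536 yaffs_Objects * 124 bytes = 8126464 bytes ~= 8.1MB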
If you instead fill the NAND with small files (each of size up to one
chunk) and no directories, each file takes two chunks: one for the metadata,
one for the file data. So we can fit nearly 32768 of these worst-case files
onto the array. Given the current YAFFS code, each requires a yaffs_Object
of 124 bytes, and a Tnode group of 36 bytes: a RAM consumption of 32768 *
160 bytes, or 5.24MB.
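If it helps, here are both worst cases in runnable form. The constants are
the figures quoted in this mail (chunk count, sizeof(yaffs_Object), Tnode
group size), not values queried from YAFFS itself:

    #include <stdio.h>

    #define N_CHUNKS     65536UL /* 2k chunks on this device */
    #define OBJ_BYTES    124UL   /* sizeof(yaffs_Object) as quoted above */
    #define TGROUP_BYTES 36UL    /* one Tnode group: 16 * 18 bits */

    int main(void)
    {
        unsigned long all_dirs  = N_CHUNKS * OBJ_BYTES;
        unsigned long all_files = (N_CHUNKS / 2) * (OBJ_BYTES + TGROUP_BYTES);

        printf("all directories: %lu bytes\n", all_dirs);  /* 8126464 */
        printf("all small files: %lu bytes\n", all_files); /* 5242880 */
        return 0;
    }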
These are drastic worst-case bounds, though. In my own testing - in the
thread you cited earlier - I stored 10000 10k files. The RAM use calculation
came out around 1.6MB, which my heap instrumentation agreed with.
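That figure falls out of the same per-object accounting: a 10k file spans
five 2k chunks, which still fit within a single 16-entry Tnode group, so
each file costs the same 124 + 36 bytes as the small-file case above:

    10000 files * (124 + 36) bytes = 1600000 bytes ~= 1.6MB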
Ross
--
Embedded Software Engineer, eCosCentric Limited.
Barnwell House, Barnwell Drive, Cambridge CB5 8UU, UK.
Registered in England no. 4422071. www.ecoscentric.com