Greetings, I am new to YAFFS and am looking to use it in my embedded system, as I really like what I have seen of its technical inner workings. The one killer issue for me, however, is the tnode memory requirement.

My system is a fairly typical data-logging application: lots of data is logged into a few files by a moderately powered system (ARM7, 2MB RAM, 2GB NAND). The NAND requirement could easily grow to 4, 8, or 16GB and beyond.

Is there a general consensus that a linear relationship between NAND size and system RAM requirement (no matter how good the ratio) is unsustainable? I would say this is especially true in embedded systems. 2GB of NAND costs roughly 1MB of RAM (with 4K blocks), which is already half of my system memory, and as NAND scales to higher densities (in footprint-compatible packages), the rest of the system, such as the SRAM, will not scale with it.

Is there any "low hanging fruit" to be had by trading away certain file-system performance aspects in exchange for a much more aggressive memory footprint? Perhaps a hybrid tree / run-length encoding of tnodes? Much of the file writing consists of consecutive pages anyway. How can one contribute toward this effort?