Author: Charles Manning
Date:
To: yaffs
Subject: Re: [Yaffs] YAFFS2 Memory Usage Redux
On Tuesday 02 February 2010 17:17:19 Chris wrote:
> On 2/1/2010 9:45 PM, Charles Manning wrote:
> > Making large virtual pages like this is certainly the easiest way to
> > reduce memory footprint. As Ross says, the downside is that you end up
> > with more space wastage due to fractional chunk use. However the impact
> > of this is very dependent on usage. If you have many small files then
> > the wastage is significant. If, instead, your application tends towards
> > larger files (say a Mbyte or more typically) then the wastage is only
> > going to be 1% or so.
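To put some rough numbers on that wastage point, here is a back-of-envelope
sketch (the 8k chunk size and the half-a-chunk average waste per file are
assumptions, not measurements):

#include <stdio.h>

/* Rough wastage model: on average each file wastes about half a
 * chunk to fractional chunk use. Illustrative numbers only. */
int main(void)
{
    const double chunk = 8 * 1024;   /* assumed 8k virtual chunk */
    const double waste = chunk / 2;  /* assumed average waste per file */
    const double sizes[] = { 4 * 1024, 100 * 1024, 1024 * 1024 };

    for (int i = 0; i < 3; i++)
        printf("%8.0f byte file: ~%.1f%% wasted\n",
               sizes[i], 100.0 * waste / sizes[i]);
    return 0;
}

That gives roughly 100% waste for a 4k file, 4% for a 100k file and 0.4%
for a Mbyte file, which is where the "1% or so" comes from.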
>
> Agreed. It would be very easy for me to go to larger virtual pages
> because I could handle this at the lowest flash layer. Additionally my
> system uses hundreds of files max, not thousands or tens of thousands,
> so most of the RAM use will be the tnodes.
>
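Handling it at the lowest layer is indeed straightforward. A sketch of the
idea (invented names; phys_page_write() is an assumed driver primitive, not
a real yaffs interface):

/* Present four 2k physical NAND pages as one 8k virtual chunk
 * at the flash access layer. */
#define PHYS_PAGE        2048
#define PAGES_PER_VCHUNK 4

extern int phys_page_write(unsigned page, const unsigned char *buf);

int vchunk_write(unsigned vchunk, const unsigned char *buf)
{
    for (unsigned i = 0; i < PAGES_PER_VCHUNK; i++) {
        int err = phys_page_write(vchunk * PAGES_PER_VCHUNK + i,
                                  buf + i * PHYS_PAGE);
        if (err)
            return err;  /* propagate driver error */
    }
    return 0;
}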
> However what I am really after is to break the O(n) relationship between
> flash size and system RAM requirement. Again, IMO this is structurally
> unsustainable moving forward. Even if I go to 8k pages for 2GB of flash I
> still need another 2 to 4 fold reduction to hit my memory target of
> <256KB for the filesystem. Even if you think 256KB is unreasonably
> small, once you use a FLASH of 4, 8, or 16GB on a single chip (current
> technology) you will be beyond the RAM size of most SRAM based embedded
> systems. In another two years you'd be hard pressed to even buy a <16GB
> single chip as old flash goes EOL very quickly. FLASH sizes will
> increase, but the embedded system needed to drive it shouldn't have to,
> any more than I should need to add another 1GB memory stick to my PC
> just to plug in a 128GB SD card.
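Just to make that scaling concrete, a back-of-envelope model (the ~2 bytes
of tnode per chunk is a rough assumption; the real cost depends on tnode
width and packing):

#include <stdio.h>

/* Index RAM ~ (flash size / chunk size) * bytes per chunk.
 * The 2 bytes/chunk figure is an assumed ballpark, not measured. */
int main(void)
{
    const double bytes_per_chunk = 2.0;
    const double chunk = 8.0 * 1024;  /* 8k virtual chunk */
    const double flash_gb[] = { 2, 4, 16 };

    for (int i = 0; i < 3; i++) {
        double chunks = flash_gb[i] * 1024 * 1024 * 1024 / chunk;
        printf("%2.0fGB flash -> ~%.0f KB of tnode RAM\n",
               flash_gb[i], chunks * bytes_per_chunk / 1024);
    }
    return 0;
}

Under those assumptions 2GB at 8k chunks sits around 512KB of index RAM,
which is indeed 2x over a 256KB budget, and it only grows from there.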
All file systems, and flash file systems especially so, have various
constraints, and the file system needs to be designed in a way that gives a
useful trade-off between function and those constraints. YAFFS works well
into the few Gbyte range but does not scale well to the many tens of GB.
As a result, efforts to get one set of useful features tend to require giving
away some other useful attributes.
This is what has led to there being so many different file systems. Each
exists because it does something that others don't, and it would be incorrect
to expect any one fs to fit all requirements.
>
> > I did once implement something experimental that could work in some
> > cases. This approach would replace the lowest level node of the tnode
> > tree if it contained a sequential run of data chunks: it replaces the
> > level 1 tnode pointer with a start and extent. That really reduces
> > memory significantly for large files with sequentially written data.
>
> This is more or less what I was describing. Statistically this works
> out great in data logging systems because you log lots of data until
> your disk is full. Then you read out the data all at once, then erase
> all the data at once for a new run. The worst case is if you
> continuously interleave the fwrite()s of two or more files at the same
> time; then you break the consecutive sequence.
It works well for some data logging applications and things like storing
executables, photos or MP3s and such, which get written sequentially. It does
not work well for databases and the like.
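To sketch the shape of it (from memory; this is not the actual yaffs tnode
layout, and the names are invented):

/* Run-encoded level-1 tnode idea: a sequentially written region
 * collapses from one entry per data chunk to a (start, extent) pair. */
struct yaffs_run {
    unsigned start;   /* first flash chunk id of the run */
    unsigned extent;  /* number of consecutive chunks    */
};

union level1_tnode {
    unsigned chunk_id[16];  /* normal case: one entry per data chunk   */
    struct yaffs_run run;   /* sequential case: whole level collapses  */
};
/* A flag bit elsewhere in the tree would record which form is in use. */

With a straight sequential write, each bottom-level node collapses from an
array of per-chunk entries to just 8 bytes, which is where the saving comes
from.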
> > Of course that is absolutely best case so don't count on it in the
> > real world. That code is also not available for release into the
> > public domain. :-(
>
> Doh!
>
> > I'm starting to dabble with some ideas for a yaffs3 (no promises yet).
> > This would provide mechanisms to use the space in the file headers
> > better and use them as a way to store either data or a tnode tree. This
> > would give both better flash usage for small files as well as the
> > ability to load a file's tnode tree only when it is required and dump
> > it from memory when it is not needed. That would certainly reduce the
> > RAM footprint considerably.
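To give a feel for the direction (purely a sketch; field names and sizes
are invented, not a committed yaffs3 format):

/* Reusing spare space in the object header chunk to hold either a
 * whole small file or a serialised tnode tree, loaded on demand. */
#define HDR_SPARE_BYTES 1024  /* assumed unused space in a header chunk */

struct obj_header_extra {
    unsigned char type;  /* 0: unused, 1: inline data, 2: serialised tnodes */
    union {
        unsigned char inline_data[HDR_SPARE_BYTES - 1]; /* whole small file */
        unsigned char tnode_image[HDR_SPARE_BYTES - 1]; /* tree image       */
    } u;
};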
>
> I think this is certainly the way to go and needs to be addressed. I
> would draw an analogy to the JFFS2 mount time problems. It was never on
> the radar as a fundamental problem, but as flash sizes grew faster than
> anyone expected, it has now become such a serious issue that it
> disqualifies JFFS2 for a whole class of embedded systems. 15 minutes to
> mount a full 2GB partition on a 200MHz ARM9. Ouch!
This is an example of what I mentioned above. JFFS2 works pretty well on small
NOR partitions. The design is a good match for some features. However it is
terrible for overwrite or larger fs sizes.
>
> I would try to think in terms of a hybrid solution where you could set a
> user performance parameter (like a tnode level) at which FLASH is
> substituted for RAM, or in terms of certain operations not being
> penalized (sequential reads, open for append) at the expense of others
> (random seeks, truncations, etc).
That's a bit like my thinking. I would limit the RAM use to a certain size so
that some would be cached. If there is enough RAM then all would be
cached and you might get something like what we have now.
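Something along these lines, as a sketch only (invented names, assuming
trees can be reloaded from flash on demand; this is not current yaffs code):

#include <stdlib.h>

/* Cap tnode RAM at a fixed budget, dropping the least recently
 * used tree when over budget; it gets reloaded on next access. */
struct cached_tree {
    struct cached_tree *next;  /* singly linked, most recent first */
    int obj_id;                /* object whose tree this is */
    void *tree;                /* in-RAM tnode tree */
    unsigned bytes;            /* RAM this tree occupies */
};

static struct cached_tree *lru_head;
static unsigned cache_bytes;
static unsigned cache_limit = 256 * 1024;  /* user-tunable RAM budget */

/* Called before loading a tree of 'needed' bytes from flash. */
static void evict_until_fits(unsigned needed)
{
    while (cache_bytes + needed > cache_limit && lru_head) {
        struct cached_tree **pp = &lru_head;
        while ((*pp)->next)          /* walk to least recently used */
            pp = &(*pp)->next;
        struct cached_tree *victim = *pp;
        *pp = NULL;                  /* unlink tail */
        cache_bytes -= victim->bytes;
        free(victim->tree);          /* reload from flash on next access */
        free(victim);
    }
}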