> -----Original Message-----
> From: yaffs-bounces@lists.aleph1.co.uk [mailto:yaffs-
> bounces@lists.aleph1.co.uk] On Behalf Of Chris
> Sent: Tuesday, 2 February 2010 2:17 PM
> To: yaffs@lists.aleph1.co.uk
> Subject: Re: [Yaffs] YAFFS2 Memory Usage Redux
>
...
>
> However what I am really after is to break the O(n) relationship
> between flash size and system ram requirement. Again, IMO this is
> structurally unsustainable moving forward.
I don't think *anyone* has yet figured out how to break the O(n)
relationship when using Flash. JFFS2, Yaffs, UBI - all linear (IIRC;
some might be worse). Everyone needs some sort of dynamic one-to-one
logical-block -> physical-block mapping, and that scales as the number
of blocks.
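(To make that concrete, here's a minimal sketch - in no way the actual
YAFFS structures - of why a one-to-one map costs O(n) RAM. Four bytes
per block means a 1GB part with 128KB blocks needs 8192 entries = 32KB,
and doubling the flash doubles the table:

    #include <stdint.h>
    #include <stdlib.h>

    /* One entry per logical block: which physical erase block holds it. */
    struct block_map {
        uint32_t nblocks;    /* total erase blocks on the medium */
        uint32_t *log2phys;  /* log2phys[logical] = physical block */
    };

    /* RAM cost is linear in flash size: nblocks grows with the chip,
     * and the table must grow with it. */
    static struct block_map *block_map_alloc(uint64_t flash_bytes,
                                             uint32_t block_bytes)
    {
        struct block_map *m = malloc(sizeof(*m));
        if (!m)
            return NULL;
        m->nblocks = (uint32_t)(flash_bytes / block_bytes);
        m->log2phys = malloc((size_t)m->nblocks * sizeof(uint32_t));
        if (!m->log2phys) {
            free(m);
            return NULL;
        }
        return m;
    }

Whatever form the map takes - flat table, tnode tree, whatever - the
entry count tracks the medium.)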
Perhaps someone needs to get an Intel SSD engineer drunk sometime - but
then again, maybe their huge amounts of RAM aren't being used just as a
data cache...
> ... FLASH sizes will increase, but the embedded system needed
> to drive it shouldn't have to, any more than needing to add another
> 1GB memory stick to my PC just to plug in a 128GB SD card.
>
Block sizes are scaling too (even on magnetic media, now). That helps
things.
Purely in terms of component-EOL management, nothing says you have to
allocate the whole 1GB of the new chip as filesystem; it wouldn't be
elegant, but your product wouldn't break.
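(For example - and this is just an illustrative Linux MTD board-file
layout, sizes made up and registration code elided - you could pin the
filesystem partition at the old part's size and leave the remainder of
the bigger replacement chip unclaimed:

    #include <linux/sizes.h>
    #include <linux/mtd/partitions.h>

    /* 1GB replacement chip, but only the first 256MB exposed to the
     * filesystem; the spare area is left alone so the in-RAM mapping
     * tables don't have to grow just because the component did. */
    static struct mtd_partition board_nand_parts[] = {
        {
            .name   = "yaffs",
            .offset = 0,
            .size   = SZ_256M,
        },
        {
            .name   = "spare",
            .offset = SZ_256M,
            .size   = MTDPART_SIZ_FULL,  /* whatever is left */
        },
    };

Ugly, but it decouples RAM budget from component size.)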
> I would try to think in terms of a hybrid solution where you could set
> a user performance parameter (like a tnode level) at which FLASH is
> substituted for RAM, or in terms of certain operations not being
> penalized (sequential reads, open for append) at the expense of others
> (random seeks, truncations, etc).
>
This is probably the way forward - if there were some method of moving
part of that data out of RAM which didn't require a 'rescan the entire
media' pass to recreate it...
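(Very rough sketch of what such a hybrid lookup could look like - the
two-tier split, the names, and read_tnode_from_flash() are all
hypothetical, not existing YAFFS code. Levels above the cut-off stay in
RAM; below it, each descent costs a flash read, cheap for sequential
access and expensive for random seeks. The flash-resident part would
still need checkpointing so it can be rebuilt without a full media scan:

    #include <stdint.h>

    #define SWAP_LEVEL 2  /* user-tunable: tnode levels <= this live in flash */

    struct tnode {
        union {
            struct tnode *ram;    /* valid above SWAP_LEVEL */
            uint32_t flash_addr;  /* valid at or below SWAP_LEVEL */
        } child[16];
    };

    /* Assumed helper: fetch one tnode's worth of data from the medium. */
    extern struct tnode *read_tnode_from_flash(uint32_t flash_addr);

    /* Walk one level down the tree toward chunk index 'idx'. */
    static struct tnode *descend(const struct tnode *t, int level,
                                 uint32_t idx)
    {
        uint32_t slot = (idx >> (4 * level)) & 0xf;
        if (level > SWAP_LEVEL)
            return t->child[slot].ram;  /* stays in RAM: no I/O */
        return read_tnode_from_flash(t->child[slot].flash_addr);
    }

Open for append and sequential reads keep hitting the same cached
subtree, so they'd barely notice.)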
J