Re: [Yaffs] Products shipping YAFFS ??

Author: Charles Manning
Date:  
To: yaffs
Subject: Re: [Yaffs] Products shipping YAFFS ??
On Sunday 04 March 2007 06:09, Vitaly Wool wrote:
> On 3/2/07, Charles Manning <> wrote:
> > 1) YAFFS uses less run-time RAM to hold its state so YAFFS scales better
> > than JFFS2. There are quite a few systems using large NAND arrays (the
> > largest partitions I know of are around the 6Gbyte mark).
> >
> > 2) YAFFS garbage collection is simpler and faster typically making for
> > better performance under hard writing.
> >
> > 3) YAFFS uses a whole page per file for headers, and does not provide
> > compression. This means that JFFS2 is probably a better fit to small
> > partitions. The flip side though is that a lot of data (eg. MP3) does
> > not compress very well and enabling compression really hurts performance.
> >
> > 4) While YAFFS has been used on NOR, it does not fit that well (because
> > YAFFS does not use erase suspend). Thus if you're using smaller NAND
> > partitions and using jffs2 on NOR, then a jffs2-only solution is very
> > appealing.
>
> Absolutely agreeing with Charles on these points, I'd like to add a
> couple of cents.


Vitaly, thanx for your 2c, but a few corrections, I think:
>
> a) YAFFS2 is not in the mainline, thus gaining less testing/support
> from the open-source community

This is possibly true, but I don't think that it necessarily follows. Just
because stuff is in the mainline does not mean it is well tested. I have
found more than one driver in the mainline that was broken by a kernel
change.

Stuff only gets tested if people use it and test it. For example, consider the
Lego USB tower driver. That has been broken more than once. It only gets
tested by the people who really use it. I doubt that, for instance, Linus has
the Lego USB tower in any tests.

Changes in the VFS do break YAFFS occasionally, but I think that they tend to
get found and fixed quite soon.

> b) but YAFFS2 code is generally simpler than that of JFFS2, so once
> you're in a trouble debugging something, you'll probably cope with it
> faster with YAFFS2
> c) YAFFS2 extensively uses OOB area which might make it unusable with
> some NAND controllers taking a lot of OOB space for ECC, and with some
> types of NAND flash (e. g. OneNAND)
> d) YAFFS2 is likely to be slower with bitbanging NAND due to hard OOB usage

Can you explain that a bit more? I don't understand what point you are trying
to make.
> e) YAFFS2 supports a lot of legacy kernels which is probably a plus
> for the most cases
> f) YAFFS doesn't implement any wear levelling AFAIK which is a minus.


Wear levelling is typically a red herring for any log-structured file system
like YAFFS or jffs2. Flash wear is a problem in file systems that make
extensive writes to the same positions in the media, for example FAT. On a
FATfs, the FAT area is continually being overwritten, which would cause gross
media failures if you did not have wear levelling. There are two wear-levelling
strategies that are typically used:
1) Free-pool (as per SmartMedia and SD): Instead of using physical NAND
addresses, the blocks are assigned a logical block number. When a block is
rewritten, a new physical block is selected from the free pool. This is
assigned the logical block number and the old block is erased and thrown
into the free pool. This means that the blocks get moved around (ie. the
logical to physical mapping changes), so over time we get some wear
levelling behaviour (there is a small code sketch after this list).
2) Explicitly managed (as per M-Systems): Each block has usage counters and
the wear levelling is explicitly managed.

It is true that YAFFS does not do any explicit wear levelling. Instead, a
degree of wear levelling is a side-effect of how free blocks are managed.
Although the code and motivation are entirely different, the result is similar
to how the SmartMedia/SD free pool is managed, but the pool is typically far
larger for YAFFS, meaning that the wear levelling is far better.

To explain that last statement a bit: in a FATfs on SmartMedia/SD, typically
1000 out of 1024 blocks are formatted and in use, and the free pool is only 24
blocks (reduced by the number of bad blocks). This makes the pool relatively
small, meaning that the averaging effect is spread over only a few blocks at a
time. In other words: a formatted block is "in use" whether it holds data or
not. In YAFFS, blocks that do not contain useful data are unused and in the
free pool. This means that the free pool is typically far bigger and the
averaging effect is better.
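
To put rough numbers on that (purely illustrative figures, not measurements):
if hot traffic causes a million block erases and those erases rotate over only
about 30 physical blocks (a 24-block free pool plus a handful of hot blocks),
each block absorbs on the order of 30,000 erases, which starts to approach the
commonly quoted endurance of around 100,000 cycles for SLC NAND. Spread the
same million erases over a free pool of, say, 500 unused blocks and each block
sees only about 2,000.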

Of course YMMV according to system usage, but I have done a few accelerated
lifetime tests with YAFFS, doing over 100Gbytes of writes to a system in some
tests. The wear came nowhere near anything that caused me the slightest
concern. 100GB == approx 30MB per day for 10 years.
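(As a sanity check on that conversion: 30MB per day x 365 days x 10 years is
roughly 107Gbytes, so the round numbers line up.)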

So, for most YAFFS systems, I don't see wear being a real issue. Of course
there will always be some systems where this might be a concern. If you find
yourself in that category, then I'm more than happy to help analyse what the
impact might be.


-- Charles