On Friday 06 October 2006 09:08, Vitaly Wool wrote:
> Charles,
>
> > Can you also post details of the tests you used?
>
> The tests were very simple.
> The "write big file" thing stands for { dd if=/dev/urandom
> of=/mnt/bigfile bs=512 count=10240 && sync }, the "write folder" thing
> is {cp -a /bin /mnt && sync }.
>
> The reason I started this was a message from someone on the YAFFS2
> list saying that the r/w performance of JFFS2 is higher than that of
> YAFFS2... which was contrary to my expectations.
>
> > I find it hard to understand why there should be such a difference in the
> > write speed (JFFS2 is 2x YAFFS2) because, apart from gc effects, they
> > should both be writing pretty much the same amount to flash and doing
> > equal amounts of processing (assuming there is no compression, as you
> > have said).
>
> Yeah, I was surprised by that as well. However, after some
> consideration, I'm close to the conclusion that it's
> controller-specific. The thing is that the OOB is spread across the
> page in order to facilitate HW ECC, so it's actually (512 B data,
> 16 B OOB) x 4 for each page, which makes each OOB write costly. Such
> layouts appear more and more as manufacturers produce cheaper chips
> that need stronger ECC.
Well, that is a "driver thing" in some ways too. A particular driver that is
unfriendly to oob writing will penalise yaffs2. As always, a streamlined
custom driver is going to be faster than a generic one, and that is the
approach taken by almost all those interested in top performance.
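To make the layout point above concrete, here is a minimal sketch in plain C
(not yaffs or mtd code; the sizes are taken from Vitaly's description) of how
a controller that interleaves (512 B data + 16 B spare) x 4 scatters the
spare bytes through the page, so that a tags-in-OOB write touches four
separate spare regions rather than one contiguous area:

/*
 * Sketch only: interleaved (512 B data + 16 B spare) x 4 per 2 KiB page,
 * as used so that hardware ECC can cover each 512 B subpage.
 */
#include <stdio.h>

#define SUBPAGE_DATA   512   /* data bytes per ECC unit   */
#define SUBPAGE_SPARE   16   /* spare bytes per ECC unit  */
#define SUBPAGES         4   /* ECC units per 2 KiB page  */

int main(void)
{
    int i;

    printf("interleaved layout (controller's view):\n");
    for (i = 0; i < SUBPAGES; i++) {
        int base = i * (SUBPAGE_DATA + SUBPAGE_SPARE);
        printf("  subpage %d: data %4d..%4d  spare %4d..%4d\n", i,
               base, base + SUBPAGE_DATA - 1,
               base + SUBPAGE_DATA,
               base + SUBPAGE_DATA + SUBPAGE_SPARE - 1);
    }

    printf("contiguous layout (data then spare), for comparison:\n");
    printf("  data  %4d..%4d\n", 0, SUBPAGES * SUBPAGE_DATA - 1);
    printf("  spare %4d..%4d\n", SUBPAGES * SUBPAGE_DATA,
           SUBPAGES * (SUBPAGE_DATA + SUBPAGE_SPARE) - 1);
    return 0;
}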
I am doing some stuff to get in-band tags going, which would get around this.
That will allow yaffs2 to be used in various situations where it cannot be
used at present (including weirdo flash, block drivers, etc.).
***However*** it is important to understand that in-band tags break page
alignment, which means that pages need to be copied and buffered. Thankfully,
yaffs already has the short op cache, which will do this automatically, but
there is still the extra copying cost.
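A small illustration of that alignment point (hypothetical numbers, not yaffs
code): once the tag lives inside the data area, each chunk carries a little
less than a full page of file data, so page-aligned writes no longer map 1:1
onto flash pages and have to be staged through a buffer such as the short op
cache.

/* Sketch: how in-band tags shrink the data area per chunk. */
#include <stdio.h>

#define PAGE_SIZE 2048          /* NAND page size (assumed)          */
#define TAG_SIZE    16          /* assumed in-band tag footprint     */

int main(void)
{
    long file_size = 100 * 1024;             /* a 100 KiB file, say         */
    long oob_chunk = PAGE_SIZE;              /* tags in OOB: full data page */
    long ib_chunk  = PAGE_SIZE - TAG_SIZE;   /* in-band: data area shrinks  */

    printf("tags in OOB : %ld chunks, file offsets stay page aligned\n",
           (file_size + oob_chunk - 1) / oob_chunk);
    printf("in-band tags: %ld chunks, offsets drift %d bytes per chunk,\n"
           "              so data is copied through a staging buffer\n",
           (file_size + ib_chunk - 1) / ib_chunk, TAG_SIZE);
    return 0;
}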
As a rule of thumb, I think the following outlines the preferences for best
performance:
* Fastest: tags stored in oob, with a custom NAND driver that is oob friendly.
* Using in-band tags, generic drivers, forcing erased checks on, etc. will slow
things down, but the mix is difficult to predict. For instance, a driver that
is oob-unfriendly might do better with in-band tags than with oob tags (a
rough sketch of the driver difference follows).
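The sketch below (hypothetical driver functions, not real mtd or yaffs code)
just counts program operations to show why the distinction matters: a driver
that can program data and spare in one pass pays one operation per chunk,
while one that programs them separately pays two.

#include <stdio.h>

struct chunk_write {
    const void *data;        /* one page of file data       */
    const void *tags;        /* packed tags for the chunk   */
};

/* oob-friendly: data and spare go out in a single program cycle. */
static int ops_friendly(const struct chunk_write *w)
{
    (void)w;
    return 1;
}

/* oob-unfriendly: data and spare are programmed separately. */
static int ops_unfriendly(const struct chunk_write *w)
{
    (void)w;
    return 2;
}

int main(void)
{
    struct chunk_write w = { 0, 0 };
    int chunks = 2560;       /* e.g. a 5 MiB file on 2 KiB pages */

    printf("oob-friendly driver  : %d program ops\n",
           chunks * ops_friendly(&w));
    printf("oob-unfriendly driver: %d program ops\n",
           chunks * ops_unfriendly(&w));
    return 0;
}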
>
> > I would expect YAFFS2 to grind a bit if you're writing to a partition
> > that has just had a lot of files deleted. Since YAFFS2 defers the garbage
> > collection until subsequent writes, this impacts write speed. However,
> > that effect should not last long.
>
> I haven't had enough time to continue with that, but YAFFS2's write
> times look more stable than JFFS2's, probably because of the absence
> of a separate GC thread...
YAFFS can grind quite a bit under gc load, but my hunch is that JFFS2 does
worse in these scenarios because of differences in the log-structured
handling. NB that's just a hunch, with no hard measurements to back it up.
I don't think a gc thread would have a negative impact for yaffs. Indeed, some
experimentation in this area has been done. The thread would do some of the
cleanups that yaffs currently defers to write time, making them happen at
times that are less visible to the outside world.
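As a rough illustration of that idea (a userspace pthread sketch with made-up
function names, not actual yaffs code), a background collector simply loops,
doing a little gc when there is work and backing off when there isn't, so that
write() sees less of the cost:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int running = 1;

/* Stand-in for "collect one block's worth of garbage if any is pending". */
static int do_some_gc(void)
{
    /* Real code would pick a dirty block and copy live chunks out of it. */
    return 0;               /* 0 = nothing left to collect */
}

static void *gc_thread(void *arg)
{
    (void)arg;
    while (running) {
        if (!do_some_gc())  /* nothing to do: back off */
            sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, gc_thread, NULL);
    /* ... filesystem writes proceed here, paying less gc cost ... */
    sleep(2);
    running = 0;
    pthread_join(&t, NULL);
    puts("gc thread stopped");
    return 0;
}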
-- Charles