[Yaffs] heavy file usage in yaffs

Charles Manning manningc2@actrix.gen.nz
Tue, 18 Jan 2005 15:20:57 +1300


On Tuesday 18 January 2005 04:00, Jacob Dall wrote:
> Hello Charles,
>
> All file names are in an 8.3 format, and NO, I'm not using
> SHORT_NAMES_IN_RAM.
>
> I've just recompiled my project defining CONFIG_YAFFS_SHORT_NAMES_IN_RAM,
> but unfortunately I notice no change in the time used to perform the
> dumpDir().
>
> The files I'm listing were written before I defined short names in RAM.
> In this case, should one expect the operation to take less time?
>
> The CPU I'm running this test on is comparable to a Pentium I-200MHz.


NB This only applies to yaffs_direct, not to Linux.

I did some tests using yaffs direct as a user application on a RAM emulation
under Linux.

This too showed the slowdown. I did some profiling with gprof, which pretty
quickly pointed to the problem...

The way yaffs does the directory searching is to create a linked list of
items found so far in the DIR handle. When it does a read_dir, it has to walk
the list of children in the directory and check whether each entry is in the
list of items found so far. This makes the lookup time increase in proportion
to the square of the number of items found (O(n^2)), since each call both
looks at more directory entries and compares them against a longer "already
found" list.

The current implementation could be sped up somewhat by using a balanced
binary tree for the "found" list. This would reduce the time to O(n log n). I
could be motivated to do something about this, but it is not a current
priority for me.

The other approach is the weasel approach: don't use such large directories,
but rather structure your directory tree to use smaller sub-directories.

-- Charles