Hi Charles,
I am also facing the same problem.
I am thinking of the following approaches to address this issue. Please correct me if I am wrong.
1. I tried running the GC in the SoftDeleteWorker context, i.e., in the function yaffs_SoftDeleteChunk, after incrementing the softDeletions counter in the block information, call yaffs_CheckGarbageCollection. I tried this method to minimise the GC load during file writes (i.e., the load on yaffs_WriteChunkDataToObject). See the first sketch below.
2. I observed during garbage collection that, even when a block's live page count drops to zero (i.e., bi->pagesInUse - bi->softDeletions == 0 in the yaffs_FindBlockForGarbageCollection function), yaffs_GarbageCollectBlock still reads all the chunks of the block and decrements the nDataChunks counter of the corresponding object. I am wondering whether it is possible to remove this overhead (reading all the chunks when the block's live page count is zero) by updating the nDataChunks counter at soft-delete time on the corresponding object, and erasing the block directly once its live page count reaches zero. See the second sketch below.
Also, I am wondering whether the above two approaches would cause any abnormal behaviour in any other context.
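
Here is a minimal sketch of approach 1. The types and helper declarations are simplified stand-ins based on my reading of yaffs_guts.c, not the exact upstream code; only the yaffs_CheckGarbageCollection call at the end is the proposed change.

/* Approach 1: trigger a GC check from the soft-delete path.
 * Simplified stand-in types; the real yaffs structures have
 * many more fields. */

typedef struct {
        int pagesInUse;
        int softDeletions;
} yaffs_BlockInfo;

typedef struct {
        int nChunksPerBlock;
        int nFreeChunks;
} yaffs_Device;

extern yaffs_BlockInfo *yaffs_GetBlockInfo(yaffs_Device *dev, int blockNo);
extern int yaffs_CheckGarbageCollection(yaffs_Device *dev);

static void yaffs_SoftDeleteChunk(yaffs_Device *dev, int chunk)
{
        yaffs_BlockInfo *theBlock =
                yaffs_GetBlockInfo(dev, chunk / dev->nChunksPerBlock);

        theBlock->softDeletions++;
        dev->nFreeChunks++;

        /* Proposed change: run a GC check here, in the soft-delete
         * context, so that the write path
         * (yaffs_WriteChunkDataToObject) carries less of the GC load. */
        yaffs_CheckGarbageCollection(dev);
}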
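
And a sketch of approach 2: decrement nDataChunks at soft-delete time so that a block whose live page count has reached zero can be erased without the chunk-by-chunk scan. The names softDeleteChunkOfObject, collectBlock, and yaffs_EraseBlock are hypothetical; the real change would hang off yaffs_SoftDeleteChunk and yaffs_GarbageCollectBlock.

/* Approach 2: account nDataChunks at soft-delete time so a fully
 * soft-deleted block can be erased without reading its chunks.
 * Types and helper names are simplified, hypothetical stand-ins. */

typedef struct {
        int pagesInUse;
        int softDeletions;
} yaffs_BlockInfo;

typedef struct {
        int nDataChunks;        /* live data chunks owned by this object */
} yaffs_Object;

typedef struct {
        int nChunksPerBlock;
} yaffs_Device;

extern yaffs_BlockInfo *yaffs_GetBlockInfo(yaffs_Device *dev, int blockNo);
extern void yaffs_EraseBlock(yaffs_Device *dev, int blockNo); /* hypothetical */

/* Charge the deletion to the object immediately, instead of during
 * yaffs_GarbageCollectBlock. */
static void softDeleteChunkOfObject(yaffs_Device *dev, yaffs_Object *obj,
                                    int chunk)
{
        yaffs_BlockInfo *bi =
                yaffs_GetBlockInfo(dev, chunk / dev->nChunksPerBlock);

        bi->softDeletions++;
        obj->nDataChunks--;     /* proposed: update here, not in GC */
}

/* If every page in use was soft-deleted, the block holds no live
 * data, so skip the chunk-by-chunk scan and erase it directly. */
static void collectBlock(yaffs_Device *dev, yaffs_BlockInfo *bi, int blockNo)
{
        if (bi->pagesInUse - bi->softDeletions == 0) {
                yaffs_EraseBlock(dev, blockNo);
                return;
        }
        /* ...otherwise fall through to the normal copying GC path... */
}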
Thanks,
Asif
> On Friday 06 April 2007 04:53, Николай М. Виноградов wrote:
>> Hello,
>>
>> I've run into a problem with yaffs2 usage.
>> When the yaffs2 fs was freshly created (0% usage), I get write
>> performance of about 1 MB/sec.
>> When usage increased to 40-50%, I get only 450-500 KB/sec.
>> When usage increased to 95-99%, it's only 50-90 KB/sec. :-(
>>
>> My test is very simple, just copying a 1 MB file from memory to yaffs2:
>> time cp /test.txt /usr/local
>>
>> /usr/local is yaffs2; / is the initramfs.
>>
>> Is there any reason for that? Maybe it's some GC-related thing and it's
>> normal behaviour?
> Yes, it is the GC causing this.
>
> Depending on how stuff is written to the fs, the GC sometimes has to work
> much harder to get erased space to write.
>
> It seems the GC could do with some exploration to look for some
> improvements.
>
> -- Charles