>
> We ran into this situation with a 1GB partition on NAND that had
> about 27MB of free space. I suppose there was pretty heavy file
> processing being performed at the time.
>
> There were dozens of these "Allocator out" messages,
> accompanied by "yaffs tragedy: no more eraased blocks", and
> many file operations failed.
>
> I think that in this situation GC was not fast enough to
> provide new blocks for allocation.
The GC does not run in a separate thread; it runs as a parasitic
activity from within writes and other operations. So there is no
question of it being "fast enough", and heavy file processing should
not matter.
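
To illustrate what "parasitic" means here, a minimal sketch in C. The
names and types below (dev_sketch, check_gc, gc_one_block) are
hypothetical, invented for the sketch; the real logic lives in
yaffs_guts.c and differs in detail:

    /* Hypothetical, simplified types for this sketch only. */
    struct dev_sketch {
            int nErasedBlocks;      /* erased blocks currently available */
            int gcThreshold;        /* start collecting below this */
    };

    static int gc_one_block(struct dev_sketch *dev);  /* reclaims one block */

    /* Called from inside the write path itself; there is no GC thread. */
    static void check_gc(struct dev_sketch *dev)
    {
            if (dev->nErasedBlocks < dev->gcThreshold)
                    gc_one_block(dev);      /* runs on the writer's stack */
    }

    static int write_chunk(struct dev_sketch *dev)
    {
            check_gc(dev);                  /* GC piggybacks on the write */
            /* ... allocate a chunk and program the NAND ... */
            return 0;
    }

Because GC is driven by the writers themselves, more write traffic
means more GC opportunities, not fewer.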
Are you getting a lot of bad blocks? About the only way I can see
allocation failures being likely is if blocks are going bad during
garbage collection.
>
> Question:
> ---------
> I see that the so-called "aggressive" GC strategy is currently
> triggered when the number of erased blocks drops below 15. I
> believe this value depends on these hard-coded values:
>
> dev->nReservedBlocks = 5;
> dev->nShortOpCaches = 10;
>
> What are the guidelines for choosing the above constants? Should
> we perhaps adjust them for larger partitions?
No, that will not really change things. It might just put off the evil
day, but it should not fix the issue.
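
For reference, one plausible reading of where the 15 comes from,
following your own arithmetic (this is an assumption on my part; check
the garbage-collection check in yaffs_guts.c for the actual
expression):

    /* Assumed derivation of the aggressive-GC threshold: reserved
     * blocks plus one block per short-op cache entry.
     */
    int threshold  = dev->nReservedBlocks + dev->nShortOpCaches;  /* 5 + 10 = 15 */
    int aggressive = (dev->nErasedBlocks <= threshold);

Raising those constants only moves the threshold; it does not change
why the pool of erased blocks is being exhausted.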
>
> Thank you in advance for any hint you can provide.
Two things to do:
1) Show us the contents of /proc/yaffs from before and after the problem.
2) Do more tracing to see whether bad blocks are occurring as well (see
the sketch below).
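
For (2), a minimal sketch of turning up the relevant trace flags.
yaffs_traceMask and the YAFFS_TRACE_* flags do exist in YAFFS, but the
exact header and spellings vary between versions, so verify against
your tree rather than taking this verbatim:

    /* Enable bad-block, erase, and GC tracing (flag names as found in
     * many YAFFS trees, e.g. yportenv.h; verify against your version).
     */
    extern unsigned yaffs_traceMask;

    void enable_gc_tracing(void)
    {
            yaffs_traceMask |= YAFFS_TRACE_BAD_BLOCKS |
                               YAFFS_TRACE_ERASE |
                               YAFFS_TRACE_GC |
                               YAFFS_TRACE_ERROR;
    }

With those flags set, the kernel log should show whether blocks are
being retired while the collector is trying to free space.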
-- Charles