On Friday 16 December 2005 04:39, John M Cavallo wrote:
> I am having problems with portions of a flash becoming increasingly
> unavailable during usage. After a few cycles of filling and emptying the
> flash, with occasional reboots/remounts thrown in (see the script at the
> end of this message), the majority of the flash will become lost. For
> example, I had two partitions on one device, with 160 Mbytes on each;
> one was stressed, otherwise they are identical. When I get the disk
> usage of the two, there is approximately the same amount of information
> on both:

Hi John

Could you please provide a /proc/yaffs dump from before and after running
the test?

>
> # du -sk /nand /nand1
> 22544   /nand
> 22560   /nand1
>
> However, when I look at the available space on the two, none is left on
> the one that had been stressed.
>
> # df /nand /nand1
> Filesystem        1k-blocks     Used  Available  Use%  Mounted on
> /dev/mtdblock/5      163840    24856     138984   16%  /nand
> /dev/mtdblock/4      163840   163840          0  100%  /nand1
>
> This effect is repeatable. We haven't been able to track down the exact
> cause of the problem, but it looks like it is connected to the initial
> scan not properly disposing of remnants of files that have been deleted.
>
> -------------------------
>
> This is a stripped-down version of the script that we used to stress
> test a partition mounted at /nand1.
>
> tar -cf /nand1/usr.tar /usr
>
> dirnum=1
>
> while true
> do
>     # fill and empty 5 times
>     for i in 1 2 3 4 5
>     do
>         # create a subdirectory and try filling it
>         mkdir -p /nand1/test/$dirnum
>         while cp -a /nand1/usr.tar /usr /nand1/test/$dirnum
>         do
>             dirnum=`expr $dirnum + 1`
>             mkdir -p /nand1/test/$dirnum
>         done
>         # the copy failed, so remove the test directory
>         rm -r /nand1/test
>     done
>     # unmount and remount. Could also reboot at this point,
>     # but that requires more than a single script.
>     umount /nand1
>     mount /nand1
> done
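
In case it helps, here is roughly how I would capture those dumps (a rough
sketch only; the output file names and the "stress-nand1.sh" script name are
just placeholders, and it assumes your kernel exposes /proc/yaffs and that
you stop the script by hand after a few fill/empty passes):

    # snapshot of the yaffs state on the freshly mounted partition
    cat /proc/yaffs > /tmp/yaffs-before.txt

    # run the stress script (placeholder name for the script above),
    # then interrupt it after a few fill/empty cycles
    sh stress-nand1.sh

    # remount so the initial scan runs again, then take the second snapshot
    umount /nand1
    mount /nand1
    cat /proc/yaffs > /tmp/yaffs-after.txt

Comparing the two dumps should show whether the deleted-file remnants are
being accounted for correctly after the rescan.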