Hi,
LvS has suggested that I post this issue here.
I'm looking for a way to recover a yaffs2 fs that has been marked with 66%
bad blocks. That is far more than I would expect given the use the device
has had, and a number of threads discussing similar defects lead me to
believe that most of the marked blocks are in fact good.
This is the same TSLogistics 7250 that I reported (off list) as having a
very long mount delay about 6 months ago.
Are you able to comment on whether having a very large number of bad
blocks would produce this behaviour?
Finally, what can be done to recover the device? I have a hacked
erase_all and a 2.4 kernel that simply ignore the bad block check. I am
holding off on using those until I have a strategy for rediscovering the
truly bad blocks afterwards; this failure will surely have created some.
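Roughly what I have in mind for that rediscovery pass, assuming the
standard Linux MTD char-device ioctls (MEMGETINFO, MEMERASE,
MEMSETBADBLOCK) behave on this hardware, is the sketch below. I have not
run it against the board yet, so treat it as an outline rather than a
tested tool:

/* Sketch only: erase every block of a raw /dev/mtdX, read it back,
 * and re-mark as bad anything that fails. Assumes the standard Linux
 * MTD user ioctls from mtd-user.h; run with the fs unmounted. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/mtdX\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    mtd_info_t info;
    if (ioctl(fd, MEMGETINFO, &info) < 0) { perror("MEMGETINFO"); return 1; }

    unsigned char *buf = malloc(info.erasesize);
    if (!buf) { perror("malloc"); return 1; }

    for (loff_t off = 0; off < info.size; off += info.erasesize) {
        erase_info_t ei;
        ei.start = off;
        ei.length = info.erasesize;

        /* Try to erase the block, then check it reads back as all 0xff.
         * Any failure gets the block marked bad again. */
        int bad = 0;
        if (ioctl(fd, MEMERASE, &ei) < 0) {
            bad = 1;
        } else if (pread(fd, buf, info.erasesize, off) != (ssize_t)info.erasesize) {
            bad = 1;
        } else {
            unsigned i;
            for (i = 0; i < info.erasesize; i++)
                if (buf[i] != 0xff) { bad = 1; break; }
        }

        if (bad) {
            printf("block at 0x%llx failed, marking bad\n",
                   (unsigned long long)off);
            if (ioctl(fd, MEMSETBADBLOCK, &off) < 0)
                perror("MEMSETBADBLOCK");
        }
    }

    free(buf);
    close(fd);
    return 0;
}

A more thorough pass would also write a test pattern and read it back
before trusting a block, but even this crude erase-and-verify should
catch anything the forced erase kills outright.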
This board has been running the stock TS 2.4 kernel from new. I have a
2.6 kernel with a yaffs CVS pull from a few months back. The device has
not yet been mounted with that kernel, but booting via NFS I get this:
# flash_info /dev/mtd1
Device /dev/mtd1 has 0 erase regions
That is the partition with the root fs in question. Whether that means
anything useful is questionable, since mnt0 and mnt1 also give the same
result, and anything else gives a spurious "File open error":
bash-4.0# flash_info --help
File open error
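As a cross-check on flash_info, I am considering querying the driver
directly with MEMGETINFO (again only a sketch, assuming the standard
mtd-user.h interface is present); cat /proc/mtd should also at least
show the partition sizes the kernel thinks it has:

/* Minimal cross-check of flash_info: ask the MTD driver directly
 * for the partition geometry via MEMGETINFO. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/mtdX\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    mtd_info_t info;
    if (ioctl(fd, MEMGETINFO, &info) < 0) { perror("MEMGETINFO"); return 1; }

    printf("type %u, flags 0x%x, size %u, erasesize %u, oobsize %u\n",
           (unsigned)info.type, info.flags, info.size,
           info.erasesize, info.oobsize);

    close(fd);
    return 0;
}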
I see one report on your site where a device became unusable after a
forced erase, but this manufacturer has apparently been using the (hack)
tool themselves and suggesting that customers use it. Presumably that
means it is non-critical for the NAND devices they are using.
In view of this, I would be prepared to try a forced erase if I had a
valid way of redetermining the true bad blocks (along the lines sketched
above) prior to reloading the fs.
Any help in recovering from this unfortunate mess would go a long way
towards restoring my confidence in yaffs2.
My aim is to move to a recent (2.6.29) kernel once this is cleared up.
If I can be confident that this issue has been fixed in recent yaffs
code, I may well stick with it.
TIA, Peter.