On Tue, May 17, 2011 at 09:33, Mark Whitney <markwhitney@gmail.com> wrote:
>
> For now, I have a few questions:
> 1. Is it expected to see duplicate file listings in "ls", as long as the entries do not persist?
> 2. Is there additional logging information that I can provide? Maybe set something in yaffs_trace_mask?
> 3. Should I try running any of the test scripts that are included in the current yaffs repo? Or are there other benchmarks you recommend? I picked dbench because it is already included in the version of Angstrom we are using, so it is easy to build in.
>
> Also, here is the output of /proc/yaffs for the affected partition after over 12 hours of running the benchmarks; I am not sure if there is anything notable here:
> Device 2 "userdata"
> start_block.......... 0
> end_block............ 2047
> total_bytes_per_chunk 2048
> use_nand_ecc......... 1
> no_tags_ecc.......... 0
> is_yaffs2............ 1
> inband_tags.......... 0
> empty_lost_n_found... 1
> disable_lazy_load.... 0
> refresh_period....... 500
> n_caches............. 10
> n_reserved_blocks.... 5
> always_check_erased.. 0
> data_bytes_per_chunk. 2048
> chunk_grp_bits....... 0
> chunk_grp_size....... 1
> n_erased_blocks...... 12
> blocks_in_checkpt.... 0
> n_tnodes............. 8135
> n_obj................ 750
> n_free_chunks........ 13841
> n_page_writes........ 67583164
> n_page_reads......... 45722362
> n_erasures........... 1053991
> n_gc_copies.......... 42778519
> all_gcs.............. 4236088
> passive_gc_count..... 3624301
> oldest_dirty_gc_count 608773
> n_gc_blocks.......... 1031592
> bg_gcs............... 189204
> n_retired_writes..... 0
> n_retired_blocks..... 52
> n_ecc_fixed.......... 449
> n_ecc_unfixed........ 0
> n_tags_ecc_fixed..... 0
> n_tags_ecc_unfixed... 0
> cache_hits........... 537206
> n_deleted_files...... 0
> n_unlinked_files..... 1033124
> refresh_count........ 2063
> n_bg_deletions....... 0
I have an update on my duplicated-file problem. I observed it again
on one of our systems during normal operation. This time I took a
nanddump of the partition, so I can now recreate the problem as a test
case.
Again, the problem appeared while running the android-omap 2.6.32 kernel:
git://android.git.kernel.org/kernel/omap
commit: 703932d07237252c0aca76ab693463664f0a71a3
After I observed the problem, I booted another 2.6.32 kernel with a
recent version from the yaffs repository patched in:
commit 912be3d8414dea3a2ebf1792698174d3b6d3cbf1
Author: Charles Manning <cdhmanning@gmail.com>
Date: Thu May 19 13:23:08 2011 +1200
yaffs: Add utils header file
Signed-off-by: Charles Manning <cdhmanning@gmail.com>
I then mounted the same partition, using the tags-ecc-off option that
was suggested here:
http://www.aleph1.co.uk/lurker/thread/20110627.225454.012ad745.en.html
The new version does not seem to clean up the duplicated files that
were created by the old version:
root@homebase:~# mount -t yaffs2 -o tags-ecc-off /dev/mtdblock7
/mnt/card/; ls -li /mnt/card/data/net.energyhub.homebase/
yaffs: dev is 32505863 name is "mtdblock7" rw
yaffs: passed flags "tags-ecc-off"
427 drwxrwx--x 1 10014 10014 2048 Apr 28 16:26 cache
229 drwxrwx--x 1 10014 10014 2048 Jul 7 14:30 databases
229 drwxrwx--x 1 10014 10014 2048 Jul 7 14:30 databases
229 drwxrwx--x 1 10014 10014 2048 Jul 7 14:30 databases
460 drwxrwx--x 1 10014 10014 2048 Jun 28 15:54 files
413 drwxr-xr-x 1 1000 1000 2048 Apr 28 16:25 lib
450 drwxrwx--x 1 10014 10014 2048 Jul 7 14:30 shared_prefs
root@homebase:~# cat /proc/yaffs
Multi-version YAFFS built:Jul 6 2011 18:43:04
Device 0 "userdata"
start_block.......... 0
end_block............ 2047
total_bytes_per_chunk 2048
use_nand_ecc......... 1
no_tags_ecc.......... 1
is_yaffs2............ 1
inband_tags.......... 0
empty_lost_n_found... 1
disable_lazy_load.... 0
refresh_period....... 500
n_caches............. 10
n_reserved_blocks.... 5
always_check_erased.. 0
data_bytes_per_chunk. 2048
chunk_grp_bits....... 0
chunk_grp_size....... 1
n_erased_blocks...... 511
blocks_in_checkpt.... 2
n_tnodes............. 3089
n_obj................ 544
n_free_chunks........ 92844
n_page_writes........ 0
n_page_reads......... 15
n_erasures........... 0
n_gc_copies.......... 0
all_gcs.............. 0
passive_gc_count..... 0
oldest_dirty_gc_count 0
n_gc_blocks.......... 0
bg_gcs............... 0
n_retired_writes..... 0
n_retired_blocks..... 0
n_ecc_fixed.......... 0
n_ecc_unfixed........ 0
n_tags_ecc_fixed..... 0
n_tags_ecc_unfixed... 0
cache_hits........... 0
n_deleted_files...... 2
n_unlinked_files..... 85
refresh_count........ 0
n_bg_deletions....... 0
tags_used............ 0
summary_used......... 0
One improvement in the new version is that it always appears to pick
inode 229, instead of flipping back and forth between different inodes
and directory contents as the old version did, so that is nice.
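To keep track of which duplicates show up across remounts, I put together a rough helper (my own script, not part of yaffs; it assumes `ls -li`-style output where the inode is the first field and the name is the last) that groups entries by name and reports the inodes seen for each duplicate:

```python
from collections import defaultdict

def find_duplicates(ls_lines):
    """Group `ls -li` output lines by entry name and return the names
    that appear more than once, with every inode number observed."""
    by_name = defaultdict(list)
    for line in ls_lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        inode, name = fields[0], fields[-1]
        by_name[name].append(inode)
    return {name: inodes for name, inodes in by_name.items()
            if len(inodes) > 1}
```

Running it over the listing above reports `databases` three times, all resolving to inode 229, which matches what the new version shows.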
If I mount the partition without the tags-ecc-off flag, almost all of
my data on the partition disappears, even if I had previously mounted
it with tags-ecc-off. lost+found is also empty. Here are the contents
of /proc/yaffs when I lose my data:
Multi-version YAFFS built:Jul 6 2011 18:43:04
Device 0 "userdata"
start_block.......... 0
end_block............ 2047
total_bytes_per_chunk 2048
use_nand_ecc......... 1
no_tags_ecc.......... 0
is_yaffs2............ 1
inband_tags.......... 0
empty_lost_n_found... 1
disable_lazy_load.... 0
refresh_period....... 500
n_caches............. 10
n_reserved_blocks.... 5
always_check_erased.. 0
data_bytes_per_chunk. 2048
chunk_grp_bits....... 0
chunk_grp_size....... 1
n_erased_blocks...... 2046
blocks_in_checkpt.... 0
n_tnodes............. 0
n_obj................ 5
n_free_chunks........ 131007
n_page_writes........ 1
n_page_reads......... 2
n_erasures........... 2
n_gc_copies.......... 1
all_gcs.............. 2
passive_gc_count..... 2
oldest_dirty_gc_count 0
n_gc_blocks.......... 1
bg_gcs............... 1
n_retired_writes..... 0
n_retired_blocks..... 0
n_ecc_fixed.......... 0
n_ecc_unfixed........ 0
n_tags_ecc_fixed..... 0
n_tags_ecc_unfixed... 0
cache_hits........... 0
n_deleted_files...... 0
n_unlinked_files..... 10
refresh_count........ 1
n_bg_deletions....... 0
tags_used............ 96512
summary_used......... 0
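To compare the two /proc/yaffs dumps above more systematically, I have been using a small parser (again my own throwaway script, assuming the "name...... value" layout shown above) that turns a dump into a dict and diffs two snapshots:

```python
def parse_proc_yaffs(text):
    """Parse the per-device counter lines of /proc/yaffs
    ("name......... value") into a {name: int} dict; header lines
    like the device name are skipped."""
    stats = {}
    for line in text.splitlines():
        parts = line.replace('.', ' ').split()
        if len(parts) == 2 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

def diff_stats(before, after):
    """Return {counter: (before, after)} for every counter whose
    value changed between two snapshots."""
    return {k: (before.get(k, 0), after[k])
            for k in after if after[k] != before.get(k, 0)}
```

For example, diffing the tags-ecc-off mount against the default mount immediately flags the jump in n_erased_blocks and n_free_chunks that corresponds to the data loss.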
Are there consistency checks on mount in the recent version I am using
that should catch and fix anomalies like these duplicate entries?
Are the extra directory entries I am observing now mostly harmless?
If the same duplicate entry is always picked, it seems like it is not
a big problem in terms of data loss.
Thank you.