Re: [Yaffs] duplicate directories and file reverting to prev…

Author: Mark Whitney
Date:  
To: Charles Manning
CC: yaffs
New-Topics: Re: [Yaffs] duplicate directories and file reverting to previous contents
Subject: Re: [Yaffs] duplicate directories and file reverting to previous contents
On Wed, Apr 20, 2011 at 08:46, Mark Whitney <> wrote:

>
>
>
> I saw this same duplicate-directory-entry issue again (this time with 2
> different directories) on a second device.
>
> There is an error that appears on the console each time on boot:
>
> yaffs: dev is 32505863 name is "mtdblock7"
> yaffs: passed flags ""
> yaffs: Attempting MTD mount on 31.7, "mtdblock7"
> yaffs tragedy: Bad object type, -1 != 15, for object 268435455 at chunk
> 67709 during scan
> yaffs tragedy: Bad object type, 1 != 3, for object 266 at chunk 92802
> during scan
> yaffs_read_super: isCheckpointed 0
> yaffs: dev is 32505864 name is "mtdblock8"
> yaffs: passed flags ""
> yaffs: Attempting MTD mount on 31.8, "mtdblock8"
> yaffs_read_super: isCheckpointed 0
>
> I remember seeing a "yaffs tragedy" error msg on the last system, but I did
> not capture the error text, so I do not know if it was also a "Bad object
> type".
>
> Is this helpful at all?
>

I have set up a filesystem benchmark (dbench, distributed with Angstrom Linux) and
was able to reproduce a similar case reliably, in less time (< 2 hours). I
tried it with the old release in which we originally saw the problem and
with a recent snapshot of the yaffs master branch
(commit 7715144e7d55b2854f907001c432348e4caa5954), also patched into the
2.6.32 kernel.
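
The dbench run itself is a stock invocation with its working directory on
the yaffs partition, roughly as below (assuming the "userdata" partition is
mounted at /data, as in the listings further down; the client count and
time limit are only representative, not my exact values):

# dbench builds the clients/clientN/~dmtmp/... tree under the -D
# directory itself; 8 clients for 7200 seconds is an illustrative load.
dbench -D /data -t 7200 8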

My current experiment is to run dbench, which creates a directory tree and
does a mix of file writes, reads, deletes, and creates, and to take
snapshots of the directory structure while it runs, looking for duplicate
file entries. In this simple test, I find duplicates by running "ls -Ri1"
and uniq'ing the output (there is a sketch of this further below). Here are
a couple of example directory listings with duplicate files (using yaffs
7715144e...):

/data/clients/client5/~dmtmp/PARADOX:
   1134 ANSWER.DB
   1386 ANSWER.MB
   1109 CHANGED.DB
    869 COURSES.DB
    869 COURSES.DB
    760 COURSES.FSL
   1130 COURSES.PX
...



/data/clients/client0/~dmtmp/PWRPNT:
   1307 NEWPCB.PPT
   1307 NEWPCB.PPT
    478 PCBENCHM.PPT
    478 PCBENCHM.PPT
   1029 PPTB1E4.TMP
    495 PPTOOLS1.PPA
    495 PPTOOLS1.PPA
    762 TIPS.PPT
    762 TIPS.PPT
    514 TRIDOTS.POT
    514 TRIDOTS.POT
    741 ZD16.BMP
    741 ZD16.BMP


...

etc. There have been 51 duplicate entries observed over approximately 18
hours. It does seem like I see fewer duplicate file entries when running
the new yaffs version (7715144e...), but I have not collected statistics on
this yet. In the above experiment, I also do not know whether the duplicate
files persist as they did when I originally reported this problem, or
whether this is only transient oddness, because the benchmark keeps running
and periodically deletes the whole directory tree. If it would be useful, I
can halt the benchmark when an error case is detected and analyze it
further.
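
For reference, the duplicate check mentioned above is just a small loop
along these lines (the path and snapshot interval are from my setup and
fairly arbitrary):

#!/bin/sh
# Take periodic snapshots of the dbench tree.  "ls -Ri1" prints one
# "inode name" entry per line and sorts each directory, so a file that
# is listed twice in the same directory shows up as two adjacent
# identical lines, which "uniq -d" then reports.
while true; do
    date
    ls -Ri1 /data/clients | uniq -d
    sleep 60
done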


For now, I have a few questions:

1. Is it expected to see duplicate file listings in "ls", as long as the
entries do not persist?

2. Is there additional logging information that I can provide? Maybe set
something in yaffs_trace_mask?

3. Should I try running any of the test scripts that are included in the
current yaffs repo? Or are there other benchmarks you recommend? I picked
dbench because it is already included in the version of Angstrom we are
using, so it is easy to build in.


Also, here is the output of /proc/yaffs for the affected partition after
more than 12 hours of running the benchmark; I am not sure if there is
anything notable here:

Device 2 "userdata"
start_block.......... 0
end_block............ 2047
total_bytes_per_chunk 2048
use_nand_ecc......... 1
no_tags_ecc.......... 0
is_yaffs2............ 1
inband_tags.......... 0
empty_lost_n_found... 1
disable_lazy_load.... 0
refresh_period....... 500
n_caches............. 10
n_reserved_blocks.... 5
always_check_erased.. 0

data_bytes_per_chunk. 2048
chunk_grp_bits....... 0
chunk_grp_size....... 1
n_erased_blocks...... 12
blocks_in_checkpt.... 0

n_tnodes............. 8135
n_obj................ 750
n_free_chunks........ 13841

n_page_writes........ 67583164
n_page_reads......... 45722362
n_erasures........... 1053991
n_gc_copies.......... 42778519
all_gcs.............. 4236088
passive_gc_count..... 3624301
oldest_dirty_gc_count 608773
n_gc_blocks.......... 1031592
bg_gcs............... 189204
n_retired_writes..... 0
n_retired_blocks..... 52
n_ecc_fixed.......... 449
n_ecc_unfixed........ 0
n_tags_ecc_fixed..... 0
n_tags_ecc_unfixed... 0
cache_hits........... 537206
n_deleted_files...... 0
n_unlinked_files..... 1033124
refresh_count........ 2063
n_bg_deletions....... 0


Thanks a lot for your time.