Any new updates on this issue? Thanks. I just came across a similar issue.
> I have now tested the latest version of YAFFS2 from the CVS repository and
> am still getting the same bug with the file system in Linux 2.6.20.4. I have
> tested other filesystems, ext2, and some network shares, and I do not have
> any issues with the number of files in a directory. This is exclusive to my
> YAFFS2 mount point. I have managed to "fix" the problem, but I'm pretty sure
> it's not the appropriate solution. Here are some more details of my test and
> results.
>
> > I'm having some issues with YAFFS and directories with quite a few files
> > in them (110+ files). When I do an ls in the directory the system locks
> > up and eventually ls dies with an out-of-memory error.
>
> I'm testing the file system with the following script:
>
> #!/bin/sh
>
> # Count the files in the current directory.
> getfiles(){
>     FILES=`ls | wc -l`
>     echo "FILES=[$FILES]"
> }
>
> # Create a 128-byte file named file<N>.txt.
> generate_file(){
>     dd if=/dev/zero of=file$1.txt bs=128 count=1 > /dev/null 2>&1
> }
>
> NUM=100000
> echo "working in $1"
> cd "$1"
> while true
> do
>     # Drop the leading "1" of $NUM to get a zero-padded name (00000, 00001, ...).
>     FILENUM=`echo $NUM | cut -c 2-`
>     echo "--- Generating file $FILENUM ---"
>     generate_file "$FILENUM"
>     echo "--- Running ls ---"
>     getfiles
>     NUM=`expr $NUM + 1`
>     echo " "
>     echo " "
> done
>
> I checked out the latest version of yaffs2, patched it into my kernel, and
> ran the script again with the same results. Here's a trimmed version of the
> log from this script.
>
>
> # sh filelimit_test.sh /mnt/nand/test/tempdir
> working in /mnt/nand/test/tempdir
> --- Generating file 00000 ---
> --- Running ls ---
> FILES=[ 1]
>
>
> --- Generating file 00001 ---
> --- Running ls ---
> FILES=[ 2]
>
>
> --- Generating file 00002 ---
> --- Running ls ---
> FILES=[ 3]
>
>
> --- Generating file 00003 ---
> --- Running ls ---
> FILES=[ 4]
>
>
> --- Generating file 00004 ---
> --- Running ls ---
> FILES=[ 5]
>
>
> --- Generating file 00005 ---
> --- Running ls ---
> FILES=[ 6]
>
>
> --- Generating file 00006 ---
> --- Running ls ---
> FILES=[ 7]
>
>
> --- Generating file 00007 ---
> --- Running ls ---
> FILES=[ 8]
>
>
> --- Generating file 00008 ---
> --- Running ls ---
> FILES=[ 9]
>
>
> --- Generating file 00009 ---
> --- Running ls ---
> FILES=[ 10]
>
> .
> .
> .
>
> --- Generating file 00123 ---
> --- Running ls ---
> FILES=[ 124]
>
>
> --- Generating file 00124 ---
> --- Running ls ---
> FILES=[ 125]
>
>
> --- Generating file 00125 ---
> --- Running ls ---
> FILES=[ 126]
>
>
> --- Generating file 00126 ---
> --- Running ls ---
> FILES=[ 253]
>
>
> --- Generating file 00127 ---
> --- Running ls ---
> FILES=[ 254]
>
>
> --- Generating file 00128 ---
> --- Running ls ---
> syslogd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
> Mem-info:
> DMA per-cpu:
> CPU 0: Hot: hi: 0, btch: 1 usd: 0 Cold: hi: 0, btch: 1 usd: 0
> Active:4922 inactive:962 dirty:0 writeback:0 unstable:0 free:179 slab:791 mapped:96 pagetables:37
> DMA free:716kB min:716kB low:892kB high:1072kB active:19688kB inactive:3848kB present:32064kB pages_scanned:36201 all_unreclaimable? yes
> lowmem_reserve[]: 0 0
> DMA: 1*4kB 1*8kB 0*16kB 0*32kB 1*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 716kB
> Free swap: 0kB
> 8192 pages of RAM
> 251 free pages
> 1052 reserved pages
> 791 slab pages
> 310 pages shared
> 0 pages swap cached
> Out of memory: kill process 7455 (ls) score 222 or a child
> Killed process 7455 (ls)
> .
> .
> .
> [six more near-identical oom-killer reports from klogd, syslogd, and ls trimmed]
> Killed
> FILES=[ 0]
>
> You'll notice that the file count goes funny for two executions of ls,
> where it lists all the files twice, and then ls gets stuck in an infinite
> loop until the oom-killer kills it.
>
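To make the failure mode concrete: ls reads the directory through the
getdents64 syscall and buffers every entry (for sorting) before printing
anything. If the kernel keeps rewinding the read position, the loop never
sees end-of-directory and the buffered entries grow until the OOM killer
fires. Below is a minimal sketch of that read loop, not ls's actual source;
the per-entry buffering is reduced to a comment.

#define _GNU_SOURCE
#include <dirent.h>
#include <fcntl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch of the loop ls effectively runs: read raw directory entries
 * until getdents64() returns 0.  If the filesystem rewinds the file
 * position behind our back, nread never reaches 0 and the (omitted)
 * per-entry allocations grow without bound. */
int main(int argc, char **argv)
{
    char buf[4096];
    int fd;

    if (argc < 2)
        return 1;
    fd = open(argv[1], O_RDONLY | O_DIRECTORY);
    if (fd < 0)
        return 1;

    for (;;) {
        long nread = syscall(SYS_getdents64, fd, buf, sizeof(buf));
        if (nread <= 0)
            break;  /* 0 = end of directory, <0 = error */
        for (long off = 0; off < nread; ) {
            struct dirent64 *d = (struct dirent64 *)(buf + off);
            /* a real ls would copy d->d_name into a growing array
             * here; that allocation is what eventually OOMs */
            off += d->d_reclen;
        }
    }
    close(fd);
    return 0;
}
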
> > It seems the action of rewinding the directory when the file pointer and
> > inode pointer versions differ is not the correct solution. I have added
> > trace code and confirmed that when the following if statement is entered,
> > the directory listing starts from the beginning.
> >
> > if (f->f_version != inode->i_version) {
> >         offset = 2;
> >         f->f_pos = offset;
> >         f->f_version = inode->i_version;
> > }
> >
>
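For context, this check is the long-standing pattern Linux filesystems use in
their readdir implementations: the directory's i_version is bumped whenever
the directory is modified, and a reader whose f_version doesn't match is
rewound to offset 2 (past the "." and ".." entries) so it doesn't continue
from a position that may no longer be valid. A rough sketch of the
surrounding structure, based on the usual readdir shape rather than the exact
yaffs_fs.c source:

static int example_readdir(struct file *f, void *dirent, filldir_t filldir)
{
        struct inode *inode = f->f_dentry->d_inode;
        loff_t offset = f->f_pos;  /* resume where this reader left off */

        /* Directory changed since this reader last looked: restart
         * after the "." and ".." entries (positions 0 and 1). */
        if (f->f_version != inode->i_version) {
                offset = 2;
                f->f_pos = offset;
                f->f_version = inode->i_version;
        }

        /* ... emit "." and ".." while offset < 2, then walk the
         * directory's entries, calling filldir() for each one and
         * advancing offset and f->f_pos ... */
        return 0;
}

The trace described above confirms this branch fires in the middle of a
listing, so every time it fires the walk restarts from the top, which matches
both the doubled file counts and the eventual livelock.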
>
> If I change the previous code to not rewind the directory when f->f_version
> does not equal inode->i_version:
>
> if (f->f_version != inode->i_version) {
>         // offset = 2;
>         f->f_pos = offset;
>         f->f_version = inode->i_version;
> }
>
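With offset = 2; commented out, offset still holds the value read from
f->f_pos earlier in the function (assuming the usual readdir shape sketched
above), so the f->f_pos = offset; assignment is a no-op and the workaround
reduces to just resynchronizing the version stamp:

        /* Equivalent form of the workaround: keep the reader's position
         * but resynchronize its version stamp.  Assumes offset was
         * initialized from f->f_pos. */
        if (f->f_version != inode->i_version)
                f->f_version = inode->i_version;

Note that this silences the rewind but also discards the protection the
rewind was meant to provide: a reader positioned at a now-removed entry can
see a stale or incomplete listing, which is consistent with the caveat above
that this is probably not the appropriate solution.
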
> Then I don't have the problem with ls:
>
> # sh filelimit_test.sh /mnt/nand/test/tempdir
> working in /mnt/nand/test/tempdir
> --- Generating file 00000 ---
> --- Running ls ---
> FILES=[ 1]
>
>
> --- Generating file 00001 ---
> --- Running ls ---
> FILES=[ 2]
>
>
> --- Generating file 00002 ---
> --- Running ls ---
> FILES=[ 3]
>
>
> --- Generating file 00003 ---
> --- Running ls ---
> FILES=[ 4]
>
>
> --- Generating file 00004 ---
> --- Running ls ---
> FILES=[ 5]
>
>
> --- Generating file 00005 ---
> --- Running ls ---
> FILES=[ 6]
>
>
> --- Generating file 00006 ---
> --- Running ls ---
> FILES=[ 7]
>
>
> --- Generating file 00007 ---
> --- Running ls ---
> FILES=[ 8]
>
>
> --- Generating file 00008 ---
> --- Running ls ---
> FILES=[ 9]
>
>
> --- Generating file 00009 ---
> --- Running ls ---
> FILES=[ 10]
>
> .
> .
> .
>
> --- Generating file 00494 ---
> --- Running ls ---
> FILES=[ 495]
>
>
> --- Generating file 00495 ---
> --- Running ls ---
> FILES=[ 496]
>
>
> --- Generating file 00496 ---
> --- Running ls ---
> FILES=[ 497]
>
>
> --- Generating file 00497 ---
> --- Running ls ---
> FILES=[ 498]
>
>
> --- Generating file 00498 ---
> --- Running ls ---
> FILES=[ 499]
>
>
> --- Generating file 00499 ---
> --- Running ls ---
> FILES=[ 500]
>
> I'm hoping the added information helps in understanding this problem. Has
> anyone seen anything like this with YAFFS2 and the Linux kernel? I'm going
> to test my change with our other software developers and see whether they
> run into any issues with the fix. We're using YAFFS2 in an application where
> we get 100+ files in a directory, and it has been having problems.
>
> Andrew McKay
> Iders Inc.
>