[Yaffs] yaffs_direct performance mounting device

Author: Rob Philip
Date:  
To: yaffs
Subject: [Yaffs] yaffs_direct performance mounting device

I have a question on yaffs2 performance.

First, some specifications.

I am running on a Cortex-M4 processor from Freescale, clocked at 120 MHz.
512 MB NAND chip from Micron.
32 MB of RAM. Basically no resource constraints.
The NAND chip is connected via a controller interface and accessed via DMA.

Everything works great in terms of functionality, and I'm now investigating
performance.

For comparison, I also created a 16 MB RAM disk.

For my testing I wrote 7200 files, each of 100 bytes, to both devices.

I then unmounted them both and remounted them.
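
In case it helps, the write pass is essentially this loop (a simplified
sketch, not my exact code: the mount point, file naming, and the missing
error checking are placeholders, and the calls are the standard yaffs
direct interface as I understand it):

    #include "yaffsfs.h"
    #include <stdio.h>

    static void write_test(const char *mountpt)   /* e.g. "/nanddisk" */
    {
        char path[64];
        char data[100] = { 0 };                   /* 100-byte payload per file */
        int i;

        yaffs_mount(mountpt);
        for (i = 0; i < 7200; i++) {
            int fd;
            sprintf(path, "%s/f%05d", mountpt, i);
            fd = yaffs_open(path, O_CREAT | O_RDWR | O_TRUNC, S_IREAD | S_IWRITE);
            yaffs_write(fd, data, sizeof(data));
            yaffs_close(fd);
        }
        yaffs_unmount(mountpt);                   /* checkpoint gets written here */
    }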


Here are the results:

MU2> mount ramdisk
yaffs: yaffs: Mounting ramdisk
yaffs: yaffs: restored from checkpoint
ramdisk mounted. Elapsed time = 2.243 seconds
MU2>
MU2>
MU2> mount nanddisk
yaffs: yaffs: Mounting nanddisk
yaffs: yaffs: restored from checkpoint
nanddisk mounted. Elapsed time = 2.556 seconds
MU2>

I noted that yaffs does 776 page reads on the NAND device. I did not
instrument the RAM disk, but since its pages are half the size and the device
is full, I'm going to guess there are about 1500 page "reads" there.

The time to read those 776 pages of NAND is consistent with the I/O time I've
measured previously of about 300 microseconds per page read.
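
To put numbers on that (my arithmetic from the figures above):

    776 page reads x 300 us/read = ~0.23 seconds of raw NAND I/O

so roughly 2.3 of the 2.556 seconds spent mounting nanddisk is something
other than flash I/O, which lines up with the 2.243 seconds for the ramdisk.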

And now my question:

What is yaffs doing that is consuming 2.2 seconds of processor time to
"mount" the ramdisk? Nothing else is going on, so semaphore and memory
management time is negligible.

Obviously the RAM disk is the degenerate case, for testing. My application
*may* have as many as 20,000 - 30,000 small-ish files in NAND when it boots.
It won't ever *do* much with those files, and under the correct circumstances
the file count will drop to the low hundreds.

Thoughts? Anything I can do to speed things up?



Here is the configuration for the NAND


#define NAND_FLASH_PAGE_COUNT (4096)
#define NAND_FLASH_PAGE_USER_DATA_SIZE (2048)
#define NAND_FLASH_PAGE_SPARE_AREA_SIZE (64)


#define DEVICE_SIZE        (NAND_FLASH_PAGE_COUNT * NAND_FLASH_PAGE_USER_DATA_SIZE)
#define CHUNKS_PER_BLOCK   (NAND_FLASH_PAGE_COUNT / NAND_FLASH_BLOCK_COUNT)
#define CHUNK_SIZE         (NAND_FLASH_PAGE_USER_DATA_SIZE)
#define SPARE_SIZE         (NAND_FLASH_PAGE_SPARE_AREA_SIZE)
#define BLOCK_SIZE         (CHUNK_SIZE * CHUNKS_PER_BLOCK)
#define PAGE_SIZE          (NAND_FLASH_PHY_PAGE_SIZE)




    dev->param.total_bytes_per_chunk  = CHUNK_SIZE;
    dev->param.spare_bytes_per_chunk  = SPARE_SIZE;
    dev->param.chunks_per_block       = CHUNKS_PER_BLOCK;
    dev->param.start_block            = 1;
    dev->param.end_block              = BLOCKS_PER_DEVICE - 1;


    dev->param.n_reserved_blocks     = 10;
    dev->param.is_yaffs2             = 1;   // yaffs2, of course
    dev->param.use_nand_ecc          = 1;   // let the NFC hardware do the ECC
    dev->param.no_tags_ecc           = 1;   // no ECC data in the tags
    dev->param.n_caches              = 40;  // memory page cache size.
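
For completeness, the rest of the setup is roughly this (a sketch, not my
exact code: the mount-point name, timer hook and printf are placeholders, and
the driver read/write/erase callbacks are installed here too but omitted
because the field names vary between yaffs versions):

    unsigned start_ms, elapsed_ms;
    int result;

    dev->param.name = "/nanddisk";
    /* driver read/write/erase callbacks get hooked up here (omitted) */
    yaffs_add_device(dev);

    start_ms   = timer_read_ms();             /* placeholder timer hook */
    result     = yaffs_mount("/nanddisk");
    elapsed_ms = timer_read_ms() - start_ms;
    printf("nanddisk mounted. result=%d elapsed=%u ms\n", result, elapsed_ms);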



Here is the configuration for the RAMDisk

#define DATA_SIZE         (1024)
#define SPARE_SIZE        (32)
#define PAGES_PER_BLOCK   (128)


    dev->param.start_block           = 1;
    dev->param.end_block             = (16*1024*1024)/(PAGES_PER_BLOCK*PAGE_SIZE);
    dev->param.chunks_per_block      = PAGES_PER_BLOCK;
    dev->param.total_bytes_per_chunk = DATA_SIZE;
    dev->param.spare_bytes_per_chunk = SPARE_SIZE;


    dev->param.n_reserved_blocks     = 2;
    dev->param.is_yaffs2             = 1;
    dev->param.use_nand_ecc          = 1;
    dev->param.n_caches              = 10;
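
(For what it's worth, assuming PAGE_SIZE here refers to the 1024-byte chunk,
that end_block expression works out to (16*1024*1024)/(128*1024) = 128, i.e.
128 blocks of 128 x 1 KB chunks each.)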