Re: optimising filesystem for many small files
Viji V Nair wrote:
> Hi,
>
> System: Fedora 11 x86_64
> Current filesystem: 150G ext4 (formatted with the "-T small" option)
> Number of files: 50 million PNG images, 1 to 30K each
>
> We are generating these files with a Python program and getting very
> slow I/O performance. During generation there are only writes, no
> reads; after generation there are heavy reads and no writes.
>
> I am looking for best practices/recommendations to get better
> performance. Any suggestions on the above are greatly appreciated.
>
> Viji
I would start with blktrace and/or seekwatcher to see what your I/O patterns look like while you are populating the disk; my guess is that you will see I/O scattered all over the device.
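A minimal tracing session might look like the following; the device name /dev/sdb1 and the trace prefix "mytrace" are placeholders, and blktrace needs root and a real block device, so treat this as a sketch rather than a recipe:

```shell
# Record block-layer I/O on the device backing the filesystem
# (substitute the device that actually holds your image store).
DEV=/dev/sdb1
blktrace -d "$DEV" -o mytrace &
TRACE_PID=$!

# ... let the Python generator run for a minute or two ...

kill "$TRACE_PID"

# Render the trace as a seek map plus throughput graph; a plot with
# seeks spread across the whole device confirms scattered allocation.
seekwatcher -t mytrace -o mytrace.png
```

If seekwatcher is not installed, `blkparse -i mytrace` will at least show the raw per-request sector numbers, which is enough to spot long-distance seeking.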
How you place the files in subdirectories will affect this quite a lot: filling one directory with images for a while before moving on to the next will probably help, while putting each new file in a different subdirectory will probably give very bad results.
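The "fill one directory at a time" approach can be sketched in Python like this; the function name, the 10,000-files-per-directory batch size, and the naming scheme are all illustrative choices, not anything from the original post. The idea is simply that ext4 tries to keep a directory's inodes and data blocks in the same block group, so filling directories sequentially keeps the writes localized:

```python
import os

def write_images_batched(root, n_files, per_dir=10000, data=b"png-stub"):
    """Write n_files small files under root, completely filling one
    numbered subdirectory before moving on to the next, so that
    allocations stay clustered instead of scattering across the disk."""
    for i in range(n_files):
        # All files with the same i // per_dir land in the same subdir.
        subdir = os.path.join(root, "%04d" % (i // per_dir))
        if i % per_dir == 0:
            os.makedirs(subdir, exist_ok=True)
        with open(os.path.join(subdir, "%08d.png" % i), "wb") as f:
            f.write(data)
```

For example, `write_images_batched("/data/images", 50_000_000)` would produce directories 0000 through 4999 with 10,000 files each, written strictly one directory at a time.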
-Eric

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users