Re: btrfs and 1 billion small files

On 07/05/12 20:06, Boyd Waters wrote:

> Use a directory hierarchy. Even if the filesystem handles a
> flat structure effectively, userspace programs will choke on
> tens of thousands of files in a single directory. For example
> 'ls' will try to lexically sort its output (very slowly) unless
> given the command-line option not to do so.
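A minimal sketch of that directory-hierarchy advice (my illustration, not from Boyd's mail): spread files across two levels of subdirectories keyed on a hash of the name, so no single directory ever grows past a few hundred entries. The function name and layout here are hypothetical.

```python
import hashlib
import os

def shard_path(root, name):
    # Two hex characters per level -> 256 * 256 = 65536 buckets,
    # so a billion files averages ~15,000 per leaf directory.
    h = hashlib.sha1(name.encode()).hexdigest()
    return os.path.join(root, h[:2], h[2:4], name)

# e.g. shard_path("/data", "frame000001.fits")
# -> /data/<xx>/<yy>/frame000001.fits
```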

In my experience it's not so much the lexical sorting that kills you
as the -F option, which gets set by default for users these days.
That makes ls do an lstat() on every file to work out whether it's an
executable, directory, symlink, etc., so it can decorate the name when
displaying it to you.

For instance, on one of our HPC systems here we have a user with over
200,000 files in one directory.  Plain \ls takes about 4 seconds,
whereas \ls -F takes... well, I can't tell you, because it was still
running after 53 minutes (strace confirmed it was still lstat()ing)
when I killed it.
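You can reproduce the effect without ls at all (a rough sketch of my own, not something from the thread): listing the names of a directory is one cheap getdents() loop, while adding an lstat() per entry is what -F pays for. The file counts and timings below are illustrative.

```python
import os
import tempfile
import time

def make_files(d, n):
    # Populate a directory with n empty files.
    for i in range(n):
        open(os.path.join(d, f"f{i:06d}"), "w").close()

def list_only(d):
    # What plain `ls` (unsorted) roughly does: read the names.
    return os.listdir(d)

def list_and_stat(d):
    # What `ls -F` adds: an lstat() per entry to classify it.
    names = os.listdir(d)
    for name in names:
        os.lstat(os.path.join(d, name))
    return names

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        make_files(d, 10_000)
        t0 = time.perf_counter(); list_only(d)
        t1 = time.perf_counter(); list_and_stat(d)
        t2 = time.perf_counter()
        print(f"listdir: {t1 - t0:.3f}s  listdir+lstat: {t2 - t1:.3f}s")
```

On a cold cache, or over a network filesystem, the per-entry lstat() column blows up by orders of magnitude, which is exactly the 4-seconds-vs-53-minutes gap above.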

cheers,
Chris
-- 
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

