Space cache degradation

I've found that, after using some btrfs filesystems for some time,
the first large write after a reboot takes a very long time.  So I
went to work trying out different test cases to simplify reproduction
of the issue, and I've got it down to just these steps (collected
into a script sketch after the list):

1) mkfs.btrfs on a large-ish device; I used a 14TB MD RAID5 array.
2) Fill it a bit over half-way with ~5MB files.  In my test I made
30 copies of a 266GB data set consisting of 52,356 files and 20,268
folders.
3) umount
4) mount
5) time fallocate -l 2G /mount/point/2G.dat
real 3m9.412s
user 0m0.002s
sys 0m2.939s
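
For convenience, here is the whole sequence as a rough script.  The
device name /dev/md0 and the data set path /data/set are placeholders
for my RAID5 array and the ~266GB tree of ~5MB files; adjust to
taste:

#!/bin/sh
set -e

mkfs.btrfs /dev/md0
mount /dev/md0 /mount/point

# Fill the filesystem a bit over half-way with ~5MB files.
for i in $(seq 1 30); do
    cp -a /data/set "/mount/point/copy-$i"
done

# Remount so the fallocate below is the first large write after the
# space cache has to be read back from disk.
umount /mount/point
mount /dev/md0 /mount/point

time fallocate -l 2G /mount/point/2G.dat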

By comparison, if I don't use the space cache, things go much better:
# umount
# mount -o nospace_cache
# time fallocate -l 2G /mount/point/2G.dat
real 0m15.982s
user 0m0.002s
sys 0m0.103s

Using the clear_cache mount option also resolves the slowness.
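
For reference, that variant looks like this (same placeholder device
and mount point as above); as I understand it, clear_cache throws
away the existing free space cache so it gets rebuilt:

# umount /mount/point
# mount -o clear_cache /dev/md0 /mount/point
# time fallocate -l 2G /mount/point/2G.dat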

Is this a known issue?  For me it's 100% reproducible on various
kernel versions, including 3.14-rc8.  Is there anything I should
provide to help debug?

-Justin