On 04/02/2014 01:54 PM, Justin Maggard wrote:
I've found that, after using some btrfs filesystems for a while, the first large write after a reboot takes a very long time. So I went to work trying out different test cases to simplify reproduction of the issue, and I've got it down to just these steps:

1) mkfs.btrfs on a large-ish device. I used a 14TB MD RAID5 device.
2) Fill it up a bit over half-way with ~5MB files. In my test I made 30 copies of a 266GB data set consisting of 52,356 files and 20,268 folders.
3) umount
4) mount
5) time fallocate -l 2G /mount/point/2G.dat

real    3m9.412s
user    0m0.002s
sys     0m2.939s

By comparison, if I don't use the space cache, things go much better:

# umount
# mount -o nospace_cache
# time fallocate -l 2G /mount/point/2G.dat

real    0m15.982s
user    0m0.002s
sys     0m0.103s

Using the clear_cache mount option also resolves the slowness.

Is this a known issue? For me it's 100% reproducible on various kernel versions, including 3.14-rc8. Is there anything I should provide to help debug?
Neat, not a known issue. What's probably happening is that with space cache off, you're jumping out into unused space while the caching thread regenerates the free-space information in memory. Once the caching thread is done caching all the free space, you should go slowly again.
I'll try to reproduce.

-chris
