On 5/1/20 21:18, Zygo Blaxell wrote:
>
> Also, in large delete operations, half of the IOs are random _reads_,
> which can't be optimized by write caching. The writes are mostly
> sequential, so they take less IO time. So, say, 1% of the IO time
> is made 80% faster by write caching, for a net benefit of 0.8% (not real
> numbers). Write caching helps fsync() performance and not much else.

Thanks for everyone's help, but after listening to everyone else talk about taking weeks or months to delete a drive, with terrible performance for other applications because of all the background I/O, it seems to me that despite the many theoretical advantages of integrating RAID into btrfs, it simply doesn't work in the real world on real spinning disk drives with real and significant seek latencies. Btrfs is too far ahead of the technology; its drive management features look great until you actually try to use them.

Maybe I can revisit this in a few years, when SSDs have displaced spinning drives and made seek latencies a thing of the past. Spinning drives seem to have pretty much hit their technology limits, while SSDs are still making good progress in both capacity and price.

In the meantime I think I'll return to what I used before I tried btrfs several years ago: XFS over LVM, with LVM working in large contiguous allocation chunks that can be efficiently copied, moved, and resized on real spinning disks regardless of how the file system above them allocates and uses them.

I do give btrfs considerable credit for not (yet) losing any of my data through all this. But that's what offline backups and LVM snapshots are also for.

Phil
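
P.S. For the curious, a rough sketch of the LVM layout I mean. The device names, sizes, and mount point are placeholders, and the 1 GiB extent size is just one illustration of "large", not a recommendation:

    # Create PVs and a VG with large (1 GiB) physical extents instead of
    # the 4 MiB default, so every allocation is a big contiguous chunk.
    pvcreate /dev/sda1 /dev/sdb1
    vgcreate -s 1g vg0 /dev/sda1 /dev/sdb1

    # Carve out an LV and put XFS on it.
    lvcreate -L 500G -n data vg0
    mkfs.xfs /dev/vg0/data

    # Growing later is cheap: extend the LV, then grow XFS online.
    # (XFS can grow but not shrink, so leave headroom.)
    lvextend -L +100G /dev/vg0/data
    xfs_growfs /mnt/data

    # Evacuating a suspect disk is a mostly-sequential extent copy,
    # once the replacement has been added to the VG:
    vgextend vg0 /dev/sdc1
    pvmove /dev/sda1 /dev/sdc1

    # And a snapshot before anything risky, to go with offline backups:
    lvcreate -s -L 20G -n data-snap /dev/vg0/data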
