On 25/03/14 01:49, Marc MERLIN wrote:
> I had a tree with some thousands of files (less than 1 million)
> on top of md raid5.
>
> It took 18h to rm it in 3 tries:
> Data, single: total=3.28TiB, used=2.70TiB
> System, DUP: total=8.00MiB, used=384.00KiB
> System, single: total=4.00MiB, used=0.00
> Metadata, DUP: total=73.50GiB, used=62.46GiB
> Metadata, single: total=8.00MiB, used=0.00
>
> This is running from
> md8 : active raid5 sdf1[6] sdb1[5] sda1[3] sde1[2] sdd1[1]
>       7814045696 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
>       bitmap: 0/15 pages [0KB], 65536KB chunk
>
> The filesystem is pretty new; it shouldn't be fragmented much.
> The problem does not seem to be just rm, though; du is taking far too
> long as well. I started one; it's been running for 30 minutes now.
> Interestingly, sending ^C to that du takes 15 seconds to get a response,
> so it seems that each system call is just slow.
>
> I checked that btrfs scrub is not running.
> What else can I check from here?

Is "noatime" set? What is your CPU I/O wait time?

And isn't the 512 kByte RAID chunk going to give you horrendous write
amplification? For example, rm updates a few bytes in one 4 kByte metadata
block, and the system then has to do a read-modify-write on 512 kBytes...

Also, the write-intent bitmap, with its 64 MByte chunks, will add a lot of
head seeks to anything you do on that RAID. (The bitmap would be better
kept on a separate SSD or other separate drive.)

So... that sort of setup is fine for archived data that is effectively
read-only, but you will see poor performance for small writes and changes.

Hope that helps,

Martin
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
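
[The read-modify-write point above can be put in rough numbers. The 4 kByte metadata block and 512 kByte chunk sizes come from the thread; treating the whole chunk as rewritten is a worst-case assumption, not a measurement:]

```python
# Back-of-the-envelope write amplification for a small metadata update
# on an md RAID5 array with 512 kByte chunks, as described in the thread.
# Worst case sketch: the entire chunk is read, modified, and written back.

chunk_bytes = 512 * 1024   # md chunk size (from /proc/mdstat: "512k chunk")
update_bytes = 4 * 1024    # one btrfs metadata block touched by rm

# Ratio of bytes moved to bytes actually changed:
amplification = chunk_bytes // update_bytes
print(amplification)  # -> 128
```

In practice md can service a small write with a narrower read-modify-write (old data plus old parity), so 128x is an upper bound for this chunk size rather than the typical cost.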
