btrfs-scrub: slow scrub speed (raid5)

Hi everyone,

When I run a scrub on my 5-disk raid5 array (data: raid5, metadata:
raid6), I notice a very slow scrubbing speed: at most 5 MB/s per
device, about 23-24 MB/s in total (according to btrfs scrub status).
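
The per-device figures come from the per-device view of scrub status,
i.e. something like the following (using the same mount point as in
the dd test further down):

btrfs scrub status -d /mnt/raid5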

What's interesting is that, at the same time, the gross read rate
across the involved devices (according to iostat) is about 71 MB/s in
total (14-15 MB/s per device). Where are the remaining ~47 MB/s going?
I expect some overhead because it's raid5, but the useful fraction of
what is read shouldn't be much less than (n-1)/n, no? At the moment
scrub appears to be scrubbing only about a third of all the data being
read, with the rest thrown away (and probably re-read again at a
different time).
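
To put rough numbers on that (my arithmetic; assuming scrub status
counts scrubbed data bytes and iostat counts raw device reads):

expected, n=5:  71 MB/s raw * (n-1)/n = 71 * 4/5 ≈ 57 MB/s of data
observed:       23-24 MB/s of data, i.e. 24/71 ≈ 1/3 of what is read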

Surely this can't be right? Is iostat, or possibly btrfs scrub status,
lying to me? What am I seeing here? I've never seen this problem when
scrubbing a raid1, so maybe there's a bug in how scrub reads data from
the raid5 data profile?

Just to be clear: I can read data from the array much faster in
regular file system use - it's just the scrub that is very slow for
some reason:

ionice -c idle dd if=/mnt/raid5/testfile.mkv bs=1M of=/dev/null
7876+1 records in
7876+1 records out
8258797247 bytes (8.3 GB, 7.7 GiB) copied, 63.2118 s, 131 MB/s

It seems to me that I could perform a much faster scrub by rsyncing
the whole fs into /dev/null... btrfs verifies the checksums anyway
when reading data, no?
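
As a rough stand-in (a sketch only; I'm using find+cat rather than
rsync, since rsync needs a directory target - plain reads make btrfs
verify the data checksums of everything read, but unlike a real scrub
this checks neither the raid5 parity stripes nor anything it doesn't
happen to read):

# sketch: read every file once so btrfs checksum-verifies the data
find /mnt/raid5 -xdev -type f -exec cat {} + > /dev/null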


Best regards,

Sebastian


~ » btrfs --version
btrfs-progs v5.4.1

kernel version: 5.5.2



