Re: Filesystem Went Read Only During Raid-10 to Raid-6 Data Conversion

On 2020-07-21 21:48, Goffredo Baroncelli wrote:
On 7/21/20 12:15 PM, Steven Davies wrote:
On 2020-07-20 18:57, Goffredo Baroncelli wrote:
On 7/18/20 12:36 PM, Steven Davies wrote:

/dev/sdf, ID: 12
    Device size:             9.10TiB
    Device slack:              0.00B
    Data,RAID10:           784.31GiB
    Data,RAID10:             4.01TiB
    Data,RAID10:             3.34TiB
    Data,RAID6:            458.56GiB
    Data,RAID6:            144.07GiB
    Data,RAID6:            293.03GiB
    Metadata,RAID10:         4.47GiB
    Metadata,RAID10:       352.00MiB
    Metadata,RAID10:         6.00GiB
    Metadata,RAID1C3:        5.00GiB
    System,RAID1C3:         32.00MiB
    Unallocated:            85.79GiB

[...]

RFE: improve 'dev usage' to show these details.

As a user I'd look at this output and assume a bug in btrfs-tools because of the repeated conflicting information.

What would be the expected output?
What about the example below?

 /dev/sdf, ID: 12
     Device size:             9.10TiB
     Device slack:              0.00B
     Data,RAID10:           784.31GiB
     Data,RAID10:             4.01TiB
     Data,RAID10:             3.34TiB
     Data,RAID6[3]:         458.56GiB
     Data,RAID6[5]:         144.07GiB
     Data,RAID6[7]:         293.03GiB
     Metadata,RAID10:         4.47GiB
     Metadata,RAID10:       352.00MiB
     Metadata,RAID10:         6.00GiB
     Metadata,RAID1C3:        5.00GiB
     System,RAID1C3:         32.00MiB
     Unallocated:            85.79GiB

That works for me for RAID6. There are three lines for RAID10 too - what's the difference between these?

The difference is the number of disks involved. In raid10, the
first 64K is on the first disk, the 2nd 64K is on the 2nd disk and
so on until the last disk. Then the (n+1)th 64K is on the first
disk again... and so on (ok, I skipped over the RAID1 part, but I
think that gives the idea).

So the chunk layout depends on the number of disks involved, even if
the difference is not so dramatic.
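
[To make that concrete, here is a minimal sketch, not btrfs code: it assumes the 64K stripe length and pair-wise mirroring described above, and all names are made up for illustration. It just shows how the same logical offset lands on different disks depending on how many disks are in the chunk.]

#include <stdio.h>

#define STRIPE_LEN (64 * 1024ULL)

/* Hypothetical sketch: map a logical offset inside a raid10 chunk to
 * the mirror pair that holds it, assuming a 64K stripe and two copies
 * per stripe. With 6 disks the data rotates over three pairs, with 4
 * disks over two, so the layout changes with the chunk's disk count. */
static void map_raid10(unsigned long long logical, int num_disks)
{
    int pairs = num_disks / 2;                 /* two copies per stripe */
    unsigned long long stripe_nr = logical / STRIPE_LEN;
    int pair = (int)(stripe_nr % pairs);       /* which pair gets this 64K */

    printf("offset %10llu -> stripe %llu on disks %d and %d\n",
           logical, stripe_nr, pair * 2, pair * 2 + 1);
}

int main(void)
{
    /* 6-disk raid10: the stripes rotate over pairs 0, 1, 2, 0, 1, 2, ... */
    for (unsigned long long off = 0; off < 6 * STRIPE_LEN; off += STRIPE_LEN)
        map_raid10(off, 6);
    return 0;
}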

Is this information that the user/sysadmin needs to be aware of in a similar manner to the original problem that started this thread? If not, I'd be tempted to sum all the RAID10 chunks into one line (one each for data and metadata).

    Data,RAID6:        123.45GiB
        /dev/sda     12.34GiB
        /dev/sdb     12.34GiB
        /dev/sdc     12.34GiB
    Data,RAID6:        123.45GiB
        /dev/sdb     12.34GiB
        /dev/sdc     12.34GiB
        /dev/sdd     12.34GiB
        /dev/sde     12.34GiB
        /dev/sdf     12.34GiB

Here there would need to be something that shows what the difference between the RAID6 blocks is - if it's the chunk size then I'd do the same as in the above example, e.g. Data,RAID6[3].

We could add an '[n]' suffix to the profile where it matters, e.g. raid0,
raid10, raid5, raid6.
What do you think?

So like this? That would make sense to me, as long as the meaning of [n] is explained in --help or the manpage.
     Data,RAID6[3]:     123.45GiB
         /dev/sda     12.34GiB
         /dev/sdb     12.34GiB
         /dev/sdc     12.34GiB
     Data,RAID6[5]:     123.45GiB
         /dev/sdb     12.34GiB
         /dev/sdc     12.34GiB
         /dev/sdd     12.34GiB
         /dev/sde     12.34GiB
         /dev/sdf     12.34GiB
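
[For the tools side, a rough sketch of how 'dev usage' could print such lines, with the '[n]' suffix only for the striped profiles where the width matters. This is illustrative only, not btrfs-progs code; every structure and value below is made up to mirror the example output above.]

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

struct usage_line {
    const char *type;     /* "Data", "Metadata", "System" */
    const char *profile;  /* "RAID10", "RAID6", "RAID1C3", ... */
    int nstripes;         /* number of devices in the chunk's stripe */
    double gib;           /* usage on this device, in GiB */
};

/* Only the striped profiles change layout with the device count. */
static bool stripe_count_matters(const char *profile)
{
    return !strcmp(profile, "RAID0") || !strcmp(profile, "RAID10") ||
           !strcmp(profile, "RAID5") || !strcmp(profile, "RAID6");
}

static void print_line(const struct usage_line *l)
{
    if (stripe_count_matters(l->profile))
        printf("    %s,%s[%d]:  %10.2fGiB\n",
               l->type, l->profile, l->nstripes, l->gib);
    else
        printf("    %s,%s:  %10.2fGiB\n", l->type, l->profile, l->gib);
}

int main(void)
{
    const struct usage_line lines[] = {
        { "Data",     "RAID6",   3, 458.56 },
        { "Data",     "RAID6",   5, 144.07 },
        { "Data",     "RAID6",   7, 293.03 },
        { "Metadata", "RAID1C3", 3,   5.00 },
    };

    for (size_t i = 0; i < sizeof(lines) / sizeof(lines[0]); i++)
        print_line(&lines[i]);
    return 0;
}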

--
Steven Davies


