On 7/21/20 12:15 PM, Steven Davies wrote:
On 2020-07-20 18:57, Goffredo Baroncelli wrote:
On 7/18/20 12:36 PM, Steven Davies wrote:
On 17/07/2020 06:57, Zygo Blaxell wrote:
On Thu, Jul 16, 2020 at 09:11:17PM -0400, John Petrini wrote:
--snip--
/dev/sdf, ID: 12
   Device size:             9.10TiB
   Device slack:              0.00B
   Data,RAID10:           784.31GiB
   Data,RAID10:             4.01TiB
   Data,RAID10:             3.34TiB
   Data,RAID6:            458.56GiB
   Data,RAID6:            144.07GiB
   Data,RAID6:            293.03GiB
   Metadata,RAID10:         4.47GiB
   Metadata,RAID10:       352.00MiB
   Metadata,RAID10:         6.00GiB
   Metadata,RAID1C3:        5.00GiB
   System,RAID1C3:         32.00MiB
   Unallocated:            85.79GiB
[...]
RFE: improve 'dev usage' to show these details.
As a user I'd look at this output and assume a bug in btrfs-tools because of the repeated conflicting information.
What would be the expected output?
What about the example below?
/dev/sdf, ID: 12
   Device size:             9.10TiB
   Device slack:              0.00B
   Data,RAID10:           784.31GiB
   Data,RAID10:             4.01TiB
   Data,RAID10:             3.34TiB
   Data,RAID6[3]:         458.56GiB
   Data,RAID6[5]:         144.07GiB
   Data,RAID6[7]:         293.03GiB
   Metadata,RAID10:         4.47GiB
   Metadata,RAID10:       352.00MiB
   Metadata,RAID10:         6.00GiB
   Metadata,RAID1C3:        5.00GiB
   System,RAID1C3:         32.00MiB
   Unallocated:            85.79GiB
That works for me for RAID6. There are three lines for RAID10 too - what's the difference between these?
The difference is the number of disks involved. In RAID10, the first 64K is on the first disk, the second 64K on the second disk, and so on up to the last disk; then the (n+1)-th 64K is on the first disk again, and so on. (OK, I skipped the RAID1 mirroring part, but I think this gives the idea.)
So the chunk layout depends on the number of disks involved, even if the difference is not so dramatic.
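To make the striping concrete, here is a rough standalone sketch of the placement described above (not btrfs code; the 64KiB stripe element is real, but the helper is invented and the RAID1 mirroring half is omitted, as in the explanation):

#include <stdio.h>

/* btrfs stripes data in 64KiB elements across the member disks */
#define STRIPE_LEN (64 * 1024ULL)

/* which disk a logical offset lands on in an n-disk stripe
 * (the RAID1 mirroring half is omitted, as above) */
static unsigned stripe_disk(unsigned long long offset, unsigned num_disks)
{
        return (unsigned)((offset / STRIPE_LEN) % num_disks);
}

int main(void)
{
        unsigned long long off;

        /* with 4 disks: offsets 0,64K,128K,192K,256K -> disks 0,1,2,3,0 */
        for (off = 0; off <= 4 * STRIPE_LEN; off += STRIPE_LEN)
                printf("offset %4lluKiB -> disk %u\n",
                       off / 1024, stripe_disk(off, 4));
        return 0;
}

This is why the stripe width (and so the effective layout) changes with the number of member disks.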
Another possibility (but the output would change drastically, so I am thinking of another command):
Filesystem '/'
   Data,RAID1: 123.45GiB
      /dev/sda   12.34GiB
      /dev/sdb   12.34GiB
   Data,RAID1: 123.45GiB
      /dev/sde   12.34GiB
      /dev/sdf   12.34GiB
Is this showing that there's 123.45GiB of RAID1 data which is mirrored between sda and sdb, and 123.45GiB which is mirrored between sde and sdf? I'm not sure how useful that would be if there are a lot of disks in a RAID1 volume with different blocks mirrored between different ones. For RAID1 (and RAID10) I would keep it simple.
   Data,RAID6: 123.45GiB
      /dev/sda   12.34GiB
      /dev/sdb   12.34GiB
      /dev/sdc   12.34GiB
   Data,RAID6: 123.45GiB
      /dev/sdb   12.34GiB
      /dev/sdc   12.34GiB
      /dev/sdd   12.34GiB
      /dev/sde   12.34GiB
      /dev/sdf   12.34GiB
Here there would need to be something which shows what the difference in the RAID6 blocks is - if it's the chunk size then I'd do the same as the above example with e.g. Data,RAID6[3].
We could add an '[n]' suffix to the profile where it matters, e.g. raid0, raid10, raid5, raid6.
What do you think?
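A minimal sketch of how the printed profile string could carry that suffix (hypothetical helper, not actual btrfs-progs code; only the striped profiles get the count):

#include <stdio.h>
#include <string.h>

/* append "[n]" only for profiles where the stripe count matters */
static void format_profile(char *buf, size_t len, const char *type,
                           const char *profile, int num_stripes)
{
        if (!strcmp(profile, "RAID0") || !strcmp(profile, "RAID10") ||
            !strcmp(profile, "RAID5") || !strcmp(profile, "RAID6"))
                snprintf(buf, len, "%s,%s[%d]", type, profile, num_stripes);
        else
                snprintf(buf, len, "%s,%s", type, profile);
}

int main(void)
{
        char buf[64];

        format_profile(buf, sizeof(buf), "Data", "RAID6", 5);
        printf("%s: 144.07GiB\n", buf);  /* Data,RAID6[5]: 144.07GiB */
        format_profile(buf, sizeof(buf), "Metadata", "RAID1C3", 3);
        printf("%s: 5.00GiB\n", buf);    /* Metadata,RAID1C3: 5.00GiB */
        return 0;
}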
The numbers are the chunk sizes (invented). Note: for RAID5/RAID6 a chunk will use nearly all the disks; however, for (e.g.) RAID1 it is possible that different chunks use different disk pairs (see the two RAID1 instances).
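For illustration, a self-contained sketch of that grouping idea (invented figures and a hypothetical struct, not btrfs-progs code): one entry per distinct (profile, device set) pair, printed in the proposed layout:

#include <stdio.h>

/* one entry per distinct (profile, device set) chunk group */
struct group {
        const char *profile;
        double total_gib;       /* total data in this group (invented) */
        unsigned devmask;       /* bit i set => device i is a member */
        double per_dev_gib;     /* share stored on each member device */
};

int main(void)
{
        const char *devs[] = { "/dev/sda", "/dev/sdb", "/dev/sdc",
                               "/dev/sdd", "/dev/sde", "/dev/sdf" };
        /* two RAID1 groups on different disk pairs, as in the example */
        struct group groups[] = {
                { "RAID1", 123.45, 0x03, 12.34 },       /* sda + sdb */
                { "RAID1", 123.45, 0x30, 12.34 },       /* sde + sdf */
        };
        unsigned g, d;

        printf("Filesystem '/'\n");
        for (g = 0; g < sizeof(groups) / sizeof(groups[0]); g++) {
                printf("   Data,%s: %.2fGiB\n",
                       groups[g].profile, groups[g].total_gib);
                for (d = 0; d < 6; d++)
                        if (groups[g].devmask & (1u << d))
                                printf("      %-10s %.2fGiB\n",
                                       devs[d], groups[g].per_dev_gib);
        }
        return 0;
}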
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5