BTRFS state on kernel 5.2

Being a long-time BTRFS user and frequent reader of the mailing list, I have some (hopefully practical) questions / requests. Some have perhaps been asked before, but I think it is about time for an update. So without further ado... here we go:

1. THE STATUS PAGE:
The status page has not been updated with information for the latest stable kernel, which is 5.2 as of this writing. Can someone please update it?

2. DEFRAG: (status page)
The status page marks defrag as "mostly ok" for stability and "ok" for performance. While I understand that extents get unshared, I don't see how this affects stability. Performance (as in space efficiency), on the other hand, is more likely to be affected. It is also not (perfectly) clear what the difference in consequence is between using the autodefrag mount option and running "btrfs filesystem defrag". Can someone please consider rewriting this?
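For reference, this is how I understand the two approaches are invoked (device and paths are placeholders):

  # one-shot defragmentation of a mounted subtree; may unshare
  # extents that are shared with snapshots/reflinks
  btrfs filesystem defragment -r /mnt/data

  # continuous background defragmentation via a mount option
  mount -o autodefrag /dev/sdb /mnt

It would help if the status page spelled out whether both paths carry the same unsharing caveat.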

3. SCRUB + RAID56: (status page)
The status page says it is mostly ok for both stability and performance.
It is not stated what the stability problem is. Does this have to do with the write hole?
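For context, the operation in question (mount point is a placeholder):

  # start a scrub on a raid56 filesystem and check its progress
  btrfs scrub start /mnt
  btrfs scrub status /mnt

Knowing whether a scrub on raid56 can itself make things worse, or merely fails to repair, would be valuable on the status page.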

4. DEVICE REPLACE: (status page)
This is also marked mostly ok for stability. As I understand it, BTRFS has no issues recovering from a failed device if it is completely removed. If the device is still (partly) working, you may get "stuck" during a replace operation because BTRFS keeps trying to read from the failed device. From my point of view it is important to clear this up a bit, so that people understand that it is not the ability to replace a device that is "mostly ok", but the "online replace" functionality that might be problematic (though it will not damage data).
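If I read the manpage correctly, there is already a flag aimed at exactly the "keeps reading from the failing device" case (device names and mount point are placeholders):

  # -r: only read from the source device if no other good
  # mirror exists, which should avoid hanging on a partly
  # failed disk
  btrfs replace start -r /dev/sdb /dev/sdc /mnt
  btrfs replace status /mnt

Perhaps the status page could mention -r and state whether it makes online replace reliable.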

5. DEVICE REPLACE: (Using_Btrfs_with_Multiple_Devices page)
It is not clear what to do to recover from a device failure on BTRFS.
If a device is partly working, you can run the replace functionality and hopefully you're good to go afterwards. Fine, but if this does not work, or you have a completely failed device, it is a different story. My understanding is: if not enough free space (or devices) is available to restore redundancy, you first need to add a new device, and then you need to A: run a metadata balance (to ensure that the filesystem structures are redundant) and then B: run a data balance to restore redundancy for your data. Are there any filters that can be applied to only restore chunks that are missing a mirror / stripe member?
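To make the question concrete, the flow I have pieced together looks roughly like this for a raid1 array; device names are placeholders and I am not sure the filters are the right ones:

  # mount without the dead device
  mount -o degraded /dev/sdb /mnt
  # add a replacement device
  btrfs device add /dev/sdd /mnt
  # restore metadata redundancy first, then data; "soft" should
  # skip chunks already in the target profile, but I do not know
  # whether it catches chunks that merely lost a mirror
  btrfs balance start -mconvert=raid1,soft /mnt
  btrfs balance start -dconvert=raid1,soft /mnt

Documenting the canonical recovery sequence on the wiki would save a lot of guesswork.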

6. RAID56 (status page)
RAID56 has had the write hole problem for a long time now, but it is not well explained what the consequence of it is for data, especially if you have metadata stored in raid1/10. If you encounter a power loss / kernel panic during a write, what will actually happen?
Will a fresh file simply be missing, or corrupted (as in partly written)?
If you overwrite or append to an existing file, what is the consequence then? Will you end up with A: the old data, or B: corrupted or zeroed data? This is not made clear in the comment, and it would be great if we, the BTRFS users, understood what the risk of hitting the write hole actually is.
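For reference, the mitigation I am alluding to above is the commonly recommended mixed-profile layout, set at mkfs time (devices are placeholders):

  # metadata on raid1, data on raid5, so a write-hole hit should
  # at worst affect file contents rather than filesystem
  # structure - if I understand the recommendation correctly
  mkfs.btrfs -m raid1 -d raid5 /dev/sdb /dev/sdc /dev/sdd

Spelling out what "at worst" means for file contents in this layout is exactly what the status page is missing.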

7. QUOTAS, QGROUPS (status page)
Again marked as "mostly ok" for stability. Is there any risk of data loss or irrecoverable failure? If not, I think it should be marked as stable; the only note seems to be performance related.
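For reference, the feature in question is enabled and inspected like this (mount point is a placeholder):

  # turn on quota tracking and list the resulting qgroups
  btrfs quota enable /mnt
  btrfs qgroup show /mnt

If the worst case is only slow operations (e.g. during snapshot deletion), the page could say so explicitly.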

8. PER SUBVOLUME REDUNDANCY LEVEL:
What is the state / plan for per-subvolume (or object-level) redundancy levels? Is that on the agenda somewhere? One use case is to flag the main filesystem as RAID1/10 and another subvolume as RAID5/6. That way you could be fairly sure that the server comes up, while being prepared to tolerate some issues (depending on the answer to question #6) on the subvolume that (for now) is prone to the write hole.

9. ADDING EXISTING FILESYSTEM TO THE POOL?:
Is it somehow possible, or will it ever be possible, to add an existing BTRFS filesystem to a pool? It would be a dream come true to be able to add a device containing an existing BTRFS filesystem and have it show up as a subvolume in the main pool.
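As far as I know, the closest thing today is copying the data over with send/receive, which is of course not the same as attaching the device in place; paths and device names are placeholders:

  # snapshot the old filesystem read-only and stream it into a
  # subvolume on the main pool
  btrfs subvolume snapshot -r /mnt/old/data /mnt/old/data.ro
  btrfs send /mnt/old/data.ro | btrfs receive /mnt/pool/
  # afterwards the old device can be wiped and joined to the pool
  btrfs device add -f /dev/sde /mnt/pool

Being able to skip the copy step entirely is the feature request here.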

10. PURE BTRFS BOOTLOADER?
This probably belongs somewhere else, but has someone considered the idea of a pure BTRFS bootloader that only supports booting a BTRFS filesystem, in as failsafe a way as possible? It is a pain to ensure that grub is installed on all devices and updated as you add/remove devices from the pool, and a "butterboot"-loader would be fantastic.
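To illustrate the pain point: today every disk in the pool needs something like the following after each add/remove, so the machine still boots if any single disk dies (BIOS boot shown; the device list is a placeholder):

  # reinstall the bootloader on every member disk
  for dev in /dev/sdb /dev/sdc /dev/sdd; do
      grub-install "$dev"
  done

A BTRFS-native bootloader could make this bookkeeping unnecessary.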

11. DEDUPLICATION:
Is deduplication planned to be part of the btrfs management tool? E.g. btrfs filesystem[/subvolume?] deduplicate /mnt
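Today, as far as I know, this is done out-of-band with third-party tools built on the kernel dedupe ioctl, e.g. (mount point is a placeholder):

  # duperemove: hash extents and submit identical ranges to the
  # kernel for deduplication (-d), recursing into directories (-r)
  duperemove -dr /mnt

A first-class subcommand would make the feature far more discoverable.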

12. SPACE CACHE: (Manpage/btrfs(5) page)
I have been using space cache v2 for a long time with no issues (that I know of) yet. The page states that the safe default space cache is v1. What is the current recommended default?
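For reference, the options in question (device and mount point are placeholders); if I remember correctly, the v2 setting persists after the first mount:

  # opt in to the v2 free space tree
  mount -o space_cache=v2 /dev/sdb /mnt
  # the old v1 cache can be dropped offline if desired
  btrfs check --clear-space-cache v1 /dev/sdb

If v2 is now considered at least as safe as v1, the manpage should say so.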

13. NODATACOW:
As far as I can remember there were some issues regarding NOCOW files/directories on the mailing list a while ago. I can't find any issues related to nocow on the wiki (I might not have searched enough), but I don't think they are fixed, so maybe someone can verify that. And by the way... are NOCOW files still not checksummed? If so, are there plans to add that? (It would be especially nice to be able to know whether a nocow file is correct or not.)
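For reference, NOCOW is set per file/directory with chattr, and only takes effect on empty files or on files created afterwards in a flagged directory (paths are placeholders):

  # new files created in this directory inherit NOCOW
  mkdir /mnt/vm-images
  chattr +C /mnt/vm-images
  # verify the attribute
  lsattr -d /mnt/vm-images

Since nodatacow currently implies nodatasum, these files are exactly the ones scrub cannot verify, which is why checksums for them would be so welcome.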

14. VIRTUAL BLOCK DEVICE EXPORT
Are there plans to allow BTRFS to export virtual block devices from the BTRFS pool? E.g. so it would be possible to run other filesystems on top of a "protected" BTRFS layer (much like LVM / mdraid).
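The closest approximation I am aware of today is a loopback device backed by a (NOCOW) file on BTRFS, which is not native export but shows the use case; paths are placeholders:

  # create a NOCOW backing file to avoid CoW overhead under the
  # foreign filesystem (the flag must be set while the file is empty)
  touch /mnt/pool/vol0.img
  chattr +C /mnt/pool/vol0.img
  truncate -s 100G /mnt/pool/vol0.img
  # expose it as a block device and format it with another fs
  losetup /dev/loop0 /mnt/pool/vol0.img
  mkfs.ext4 /dev/loop0

A native "export a volume from the pool" facility would avoid the loop-device indirection.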


