On Sat, Jul 11, 2020 at 12:44 PM Ken D'Ambrosio <ken@xxxxxxxx> wrote:

> * Swap files. At least last time I checked, it was a PITA to take a
> snapshot of a volume that had a swapfile on it -- I wound up writing a
> wrapper that does a swapoff, removes the file, creates the snapshot,
> and then re-creates the file. Is this still "a thing"? Or is there a
> way to work around that that isn't kludgey?

Put the swapfile in its own subvolume and don't snapshot it. One way is
to create a (nested) subvolume named "swap" inside of the "root"
subvolume created at installation time; use chattr +C on it; then create
the swapfile per 'man 5 btrfs'. Since btrfs snapshots aren't recursive,
taking a snapshot of 'root' will not also snapshot 'swap' or its
swapfiles.

> * When Stuff Goes Wrong(tm). Again, my experience is not terribly
> current, but when things hit the fan, for most FSes, you do an
> fsck -y /path/to/dev
> and hope things come together. But with btrfs, it seems that it's
> substantially more complicated to figure out what to do. Have the
> tools, perhaps, been updated to help end users figure out what choices
> to make, etc., when dealing with an issue?

The UX of the tools needs improvement. But for various reasons it's
difficult to repair a Btrfs file system, so the emphasis is on taking
advantage of the more tolerant read-only mount to freshen backups. Also,
stuff going wrong usually implies some sort of hardware/firmware
problem, with Btrfs being sensitive to critical areas getting damaged as
a result. The offline scrape tool is hard to use but really effective if
you stick with it.

> * RAID 5/6. Last time I looked, that was in an unhappy state, so I just
> set up a RAID with mdadm, lay btrfs on top of that, and call it good.

That's fine. You don't get btrfs self-healing, except for DUP metadata.
But you still get error detection, with the path to the damaged file.
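The swap-subvolume recipe above can be sketched roughly as follows. The
paths and the 4G size are illustrative, not from this mail; the swapfile
creation steps follow the recipe documented in 'man 5 btrfs':

```shell
# Run as root on a mounted btrfs root filesystem. Paths are examples.

# Create a nested subvolume for swap inside the root subvolume.
btrfs subvolume create /swap

# Mark the (still empty) subvolume NOCOW so new files inherit +C;
# a swapfile must not be copy-on-write.
chattr +C /swap

# Create the swapfile per 'man 5 btrfs' (size is an example).
truncate -s 0 /swap/swapfile
fallocate -l 4G /swap/swapfile
chmod 0600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile

# Snapshots are not recursive, so snapshotting the root subvolume
# does not descend into /swap, and swap can stay enabled.
btrfs subvolume snapshot / /root-snapshot
```

One design point: setting chattr +C on the subvolume before any files
exist matters, because the NOCOW attribute is only reliable when applied
to empty files or inherited from the parent directory.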
> That seems to do the job, though it loses lots of smarts that would be
> had with btrfs running the RAID. I see discussion on the wiki
> (https://btrfs.wiki.kernel.org/index.php/RAID56) talking about an RFC
> submitted to address the underlying issues; is this still broken?

You should read Zygo's recent write-up on raid5, most of which applies
to raid6:

https://lore.kernel.org/linux-btrfs/20200627032414.GX10769@xxxxxxxxxxxxxx/

-- 
Chris Murphy
