RE: how to best segment a big block device in resizeable btrfs filesystems?

> -----Original Message-----
> From: linux-btrfs-owner@xxxxxxxxxxxxxxx <linux-btrfs-
> owner@xxxxxxxxxxxxxxx> On Behalf Of Marc MERLIN
> Sent: Tuesday, 3 July 2018 2:16 PM
> To: Qu Wenruo <quwenruo.btrfs@xxxxxxx>
> Cc: Su Yue <suy.fnst@xxxxxxxxxxxxxx>; linux-btrfs@xxxxxxxxxxxxxxx
> Subject: Re: how to best segment a big block device in resizeable btrfs
> filesystems?
> 
> On Tue, Jul 03, 2018 at 09:37:47AM +0800, Qu Wenruo wrote:
> > > If I do this, I would have
> > > software raid 5 < dmcrypt < bcache < lvm < btrfs.
> > > That's a lot of layers, and that's also starting to make me nervous :)
> >
> > If you could keep the number of snapshots minimal (less than 10)
> > for each btrfs (and the number of send sources less than 5), one big
> > btrfs may work in that case.
> 
> Well, we kind of discussed this already. If btrfs falls over once you reach
> 100 snapshots or so, and it sure seems to in my case, I won't be much better
> off.
> Having btrfs check --repair fail because 32GB of RAM is not enough, and it
> is unable to use swap, is a big deal in my case. You also confirmed that
> btrfs check lowmem does not scale to filesystems like mine, so this
> translates into "if regular btrfs check --repair can't fit in 32GB, I am
> completely out of luck if anything happens to the filesystem".

Just out of curiosity, I had a look at my backup filesystem:
vm-server /media/backup # btrfs fi us /media/backup/
Overall:
    Device size:                   5.46TiB
    Device allocated:              3.42TiB
    Device unallocated:            2.04TiB
    Device missing:                  0.00B
    Used:                          1.80TiB
    Free (estimated):              1.83TiB      (min: 1.83TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:1.69TiB, Used:906.26GiB
   /dev/mapper/a-backup--a         1.69TiB
   /dev/mapper/b-backup--b         1.69TiB

Metadata,RAID1: Size:19.00GiB, Used:16.90GiB
   /dev/mapper/a-backup--a        19.00GiB
   /dev/mapper/b-backup--b        19.00GiB

System,RAID1: Size:64.00MiB, Used:336.00KiB
   /dev/mapper/a-backup--a        64.00MiB
   /dev/mapper/b-backup--b        64.00MiB

Unallocated:
   /dev/mapper/a-backup--a         1.02TiB
   /dev/mapper/b-backup--b         1.02TiB

Mount options: compress=zstd,space_cache=v2
202 snapshots, heavily de-duplicated
551G / 361,000 files in the latest snapshot
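
For reference, the snapshot count is simply the number of snapshot
subvolumes; something like the following (using my mount point from above)
gives the same figure:

  btrfs subvolume list -s /media/backup | wc -l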

btrfs check in normal mode took 12 minutes and 11.5G of RAM.
Lowmem mode I stopped after 4 hours; maximum memory usage was around 3.9G.
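
If anyone wants to compare numbers, both runs can be reproduced with
something along these lines, against the unmounted filesystem (the device
name is from my setup above; btrfs check is read-only by default, so neither
command modifies anything):

  /usr/bin/time -v btrfs check /dev/mapper/a-backup--a
  /usr/bin/time -v btrfs check --mode=lowmem /dev/mapper/a-backup--a

The peak RAM usage shows up as "Maximum resident set size" in the
time -v output.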