Re: btrfs-cleaner / snapshot performance analysis

On Sat, Feb 10, 2018 at 13:29:15 -0500, Ellis H. Wilson III wrote:

>> Well, sometimes those answers help. :) "Oh, yes, I disabled qgroups, I
>> didn't even realize I had those, and now the problem is gone."
> 
> I meant less than helpful for me, since for my project I need detailed 
> and fairly accurate capacity information per sub-volume, and the 

You won't get anything close to "accurate" in btrfs - quotas don't
account for space wasted by fragmentation, which can allocate tens to
thousands of times (sic!) more space than the files themselves.
And not just in some worst-case scenarios, but in real-life situations:
I had a 10 MB db-file that was eating 10 GB of space after a week of
regular updates - withOUT it being snapshotted. All of this was
described in my earlier thread.
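
If you want to see this effect on your own files, filefrag (from
e2fsprogs) shows the extent count, and the compsize tool - where
available - shows referenced vs. actual disk usage; the path below is
just a placeholder:

  # thousands of extents on a small file means heavy fragmentation
  filefrag /path/to/some.db

  # referenced size vs. space actually pinned on disk (needs compsize)
  compsize /path/to/some.db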

> relationship between qgroups and subvolume performance wasn't being 
> spelled out in the responses.  Please correct me if I am wrong about 
> needing qgroups enabled to see detailed capacity information 
> per-subvolume (including snapshots).

Yes, you need that. But while snapshots are in use, it's not
straightforward to interpret the values, especially with regard to
exclusive space (which is not a btrfs limitation, just a logical
consequence of extent sharing) - this was also described in my thread.
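
For reference, getting those per-subvolume numbers looks roughly like
this (the mountpoint is a placeholder):

  # turn qgroups on for the whole filesystem
  btrfs quota enable /mnt/pool

  # per-subvolume referenced ("rfer") and exclusive ("excl") usage
  btrfs qgroup show /mnt/pool

Just keep in mind that once subvolumes share extents, "exclusive" only
tells you what would be freed by deleting that one subvolume - nothing
more.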

> course) or how many subvolumes/snapshots there are.  If I know that 
> above N snapshots per subvolume performance tanks by M%, I can apply 
> limits on the use-case in the field, but I am not aware of those kinds 
> of performance implications yet.

It doesn't work like that. It all depends on the data being
snapshotted, and especially on how it is updated - how exactly,
including the write patterns.

I think you're expecting answers that can't be formulated - with a
filesystem architecture as advanced as ZFS or btrfs, the behavior
can't be reduced to simple rules like 'keep fewer than N snapshots'.

If you want a PRACTICAL rule, there is one that is not commonly known:
since btrfs defragmentation breaks CoW links - so already-snapshotted
data can grow into what are effectively regular copies - defragment the
data just before snapshotting it, not afterwards.
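
As a rough sketch (paths are examples only), the rule boils down to:

  # defragment the live data first...
  btrfs filesystem defragment -r /mnt/pool/data

  # ...then take the (read-only) snapshot, so it shares the freshly
  # laid-out extents
  btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/snap/data-$(date +%F)

Doing it the other way round - defragmenting data that is already
shared with snapshots - breaks the sharing and duplicates the space.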

> I noticed the problem when Thunderbird became completely unresponsive. 

Is it using some database engine for storage? Mark the files with nocow.

This is the one case with an easy answer: btrfs doesn't handle
databases with CoW. Period. It doesn't matter whether they are
snapshotted or not - ANY database files (systemd-journal, PostgreSQL,
sqlite, db) are not handled well at all. They slow the entire system
down to the speed of a cheap SD card.

If you have btrfs on your home partition, make sure that AT LEAST every
$USER/.cache directory is chattr +C. The same applies to the entire
/var partition and to dozens of other directories holding user
databases (~/.mozilla/firefox, ~/.ccache and many, many more
application-specific ones).
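
Note that +C only takes effect for files created after the flag is set
on a directory, so an existing database has to be rewritten, not just
flagged. A rough example (file and directory names are placeholders):

  # new files created under these directories will inherit NOCOW;
  # existing non-empty files are NOT converted by the flag alone
  chattr +C ~/.cache ~/.mozilla/firefox

  # rewrite an existing file so it really becomes NOCOW
  cp --reflink=never app.db app.db.new && mv app.db.new app.db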

In fact, if you want the quotas to be accurate, you NEED to mount every
volume with potentially hostile write patterns (like /home) as nocow.


Actually, if you don't use compression and don't need data-block
checksums, you may want to mount all your btrfs filesystems with nocow
by default. That way the quotas will be more accurate (no fragmentation
_between_ snapshots) and you'll get some decent performance with
snapshots - if that is all you care about.
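
In mount-option terms that means nodatacow, which also implies
nodatasum (no data checksums) and effectively disables compression. A
hypothetical fstab line:

  # example only - the device and mountpoint are placeholders
  UUID=...  /home  btrfs  defaults,nodatacow  0 0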

-- 
Tomasz Pala <gotar@xxxxxxxxxxxxx>



