Duncan,
thanks for your extensive answer.
On 17.03.2016 11:51, Duncan wrote:
> Ole Langbehn posted on Wed, 16 Mar 2016 10:45:28 +0100 as excerpted:
>
> Have you tried the autodefrag mount option, then defragging? That should
> help keep rewritten files from fragmenting so heavily, at least. On
> spinning rust it doesn't play so well with large (half-gig plus)
> databases or VM images, but on ssds it should scale rather larger; on
> fast SSDs I'd not expect problems until 1-2 GiB, possibly higher.
Since I do have some big VM images, I never tried autodefrag.
> For large dbs or VM images, too large for autodefrag to handle well, the
> nocow attribute is the usual suggestion, but I'll skip the details on
> that for now, as you may not need it with autodefrag on an ssd, unless
> your database and VM files are several gig apiece.
Since my original post, I have experimented with setting Firefox's
places.sqlite to nodatacow (on a new file). It has stayed at 1 extent
since, so that seems to work.
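For reference, the procedure was roughly the following sketch (the
profile path is a placeholder, not my actual one; chattr +C only takes
effect on an empty file, so the attribute has to be set before any data
is written):

```shell
# Sketch: replace a database file with a fresh NOCOW copy.
# Placeholder profile directory -- adjust to the real one.
cd ~/.mozilla/firefox/myprofile.default
touch places.sqlite.nocow
chattr +C places.sqlite.nocow            # mark the still-empty file NOCOW
cat places.sqlite > places.sqlite.nocow  # copy the data into it
mv places.sqlite places.sqlite.bak       # keep the old file as a backup
mv places.sqlite.nocow places.sqlite
```

Firefox has to be closed while doing this, of course.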
>> BTW: I did a VACUUM on the sqlite db and afterwards it had 1 extent.
>> Expected, just saying that vacuuming seems to be a good measure for
>> defragmenting sqlite databases.
>
> I know the concept, but out of curiosity, what tool do you use for
> that? I imagine my firefox sqlite dbs could use some vacuuming as well,
> but don't have the foggiest idea how to go about it.
A simple call to the command-line interface, as with any other SQL database:
# sqlite3 /path/to/db.sqlite "VACUUM;"
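If you want to do all of the profile's databases in one go, a small
wrapper like this works (the function name is mine, not part of sqlite;
Firefox must not have the databases open while it runs):

```shell
# Vacuum every SQLite database found in a directory.
# Assumes the sqlite3 CLI is on $PATH.
vacuum_sqlite_dbs() {
    dir="$1"
    for db in "$dir"/*.sqlite; do
        [ -e "$db" ] || continue      # glob matched nothing: skip
        sqlite3 "$db" "VACUUM;"       # rewrites the db contiguously
    done
}
```

Call it as e.g. `vacuum_sqlite_dbs ~/.mozilla/firefox/<profile>`.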
> Of *most* importance, you really *really* need to do something about that
> data chunk imbalance, and to a lesser extent that metadata chunk
> imbalance, because your unallocated space is well under a gig (306 MiB),
> with all that extra space, hundreds of gigs of it, locked up in unused or
> only partially used chunks.
I'm curious - why is that a bad thing?
> The subject says 4.4.1, but it's unclear whether that's your kernel
> version or your btrfs-progs userspace version. If that's your userspace
> version and you're running an old kernel, strongly consider upgrading to
> the LTS kernel 4.1 or 4.4 series if possible, or at least the LTS series
> before that, 3.18. Those or the latest couple current kernel series, 4.5
> and 4.4, and 4.3 for the moment as 4.5 is /just/ out, are the recommended
> and best supported versions.
# uname -r
4.4.1-gentoo
# btrfs --version
btrfs-progs v4.4.1
So, both 4.4.1 ;), but I meant userspace.
> Try this:
>
> btrfs balance start -dusage=0 -musage=0
I did this even though I'm reasonably up to date kernel-wise; I'm quite
sure the filesystem has never seen a kernel <3.18. It took some minutes
and ended up with:
# btrfs filesystem usage /
Overall:
    Device size:           915.32GiB
    Device allocated:      681.32GiB
    Device unallocated:    234.00GiB
    Device missing:            0.00B
    Used:                  153.80GiB
    Free (estimated):      751.08GiB  (min: 751.08GiB)
    Data ratio:                 1.00
    Metadata ratio:             1.00
    Global reserve:        512.00MiB  (used: 0.00B)

Data,single: Size:667.31GiB, Used:150.22GiB
   /dev/sda2    667.31GiB

Metadata,single: Size:14.01GiB, Used:3.58GiB
   /dev/sda2     14.01GiB

System,single: Size:4.00MiB, Used:112.00KiB
   /dev/sda2      4.00MiB

Unallocated:
   /dev/sda2    234.00GiB
-> Helped with data, not with metadata.
> Then start with metadata, and up the usage numbers which are percentages,
> like this:
>
> btrfs balance start -musage=5
>
> Then, if it works, up the number to 10, 20, etc.
I upped it to 70; that relocated a total of 13 out of 685 chunks:

Metadata,single: Size:5.00GiB, Used:3.58GiB
   /dev/sda2      5.00GiB
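In shell terms, the incremental approach amounts to something like this
sketch (run as root; the mount point / and the step values are mine):

```shell
# Rebalance metadata chunks in increasing usage steps, so each pass
# only rewrites chunks that are at most that percent full.
for pct in 5 10 20 30 50 70; do
    btrfs balance start -musage="$pct" /
done
```

The early, low-percentage passes are cheap because nearly-empty chunks
hold little data to move; the later ones do the bulk of the rewriting.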
> Once you have several gigs in unallocated, then try the same thing with
> data:
>
> btrfs balance start -dusage=5
>
> And again, increase it in increments of 5 or 10% at a time, to 50 or
> 70%.
I ran
# btrfs balance start -dusage=70
straight away. It took ages and regularly froze processes for minutes;
after about 8 hours the status is:
# btrfs balance status /
Balance on '/' is paused
192 out of about 595 chunks balanced (194 considered), 68% left
# btrfs filesystem usage /
Overall:
    Device size:           915.32GiB
    Device allocated:      482.04GiB
    Device unallocated:    433.28GiB
    Device missing:            0.00B
    Used:                  154.36GiB
    Free (estimated):      759.48GiB  (min: 759.48GiB)
    Data ratio:                 1.00
    Metadata ratio:             1.00
    Global reserve:        512.00MiB  (used: 0.00B)

Data,single: Size:477.01GiB, Used:150.80GiB
   /dev/sda2    477.01GiB

Metadata,single: Size:5.00GiB, Used:3.56GiB
   /dev/sda2      5.00GiB

System,single: Size:32.00MiB, Used:96.00KiB
   /dev/sda2     32.00MiB

Unallocated:
   /dev/sda2    433.28GiB
-> Looking good. Will proceed when I don't need the box to actually be
responsive.
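The pausing itself is just the standard btrfs-progs subcommands, for
anyone following along:

```shell
btrfs balance pause /    # finish the chunk in flight, then stop
# ... later, when the machine is otherwise idle:
btrfs balance resume /   # continue from where it left off
btrfs balance status /   # check progress
```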
> Second thing, consider tweaking your trim/discard policy [...]
>
> The recommendation is to put fstrim in a cron or systemd timer job,
> executing it weekly or similar, preferably at a time when all those
> unqueued trims won't affect your normal work.
It has been in cron.weekly since the creation of the filesystem:
fstrim -v / >> $LOG
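The full script is essentially the following sketch (the log path is an
example; $LOG in the one-liner above points somewhere similar):

```shell
#!/bin/sh
# Weekly TRIM job. fstrim -v reports how many bytes were trimmed,
# which is worth keeping in the log to spot anomalies.
LOG=/var/log/fstrim.log
date -R >> "$LOG"
fstrim -v / >> "$LOG" 2>&1
```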
Cheers,
Ole