Re: Have 15GB missing in btrfs filesystem.

27.10.2018 21:12, Remi Gauvin wrote:
> On 2018-10-27 01:42 PM, Marc MERLIN wrote:
> 
>>
>> I've been using btrfs for a long time now but I've never had a
>> filesystem where I had 15GB apparently unusable (7%) after a balance.
>>
> 
> The space isn't unusable.  It's just allocated.. (It's used in the sense
> that it's reserved for data chunks.).  Start writing data to the drive,
> and the data will fill that space before more gets allocated.. (Unless

No (at least, not necessarily).

On empty filesystem:

bor@10:~> df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1023M   17M  656M   3% /mnt
bor@10:~> sudo dd if=/dev/zero of=/mnt/foo bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.260088 s, 403 MB/s
bor@10:~> sync
bor@10:~> df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1023M  117M  556M  18% /mnt
bor@10:~> sudo filefrag -v /mnt/foo
Filesystem type is: 9123683e
File size of /mnt/foo is 104857600 (25600 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   14419:      36272..     50691:  14420:
   1:    14420..   25599:     125312..    136491:  11180:      50692: last,eof
/mnt/foo: 2 extents found
bor@10:~> sudo dd if=/dev/zero of=/mnt/foo bs=10M count=1 conv=notrunc seek=2
1+0 records in
1+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0300004 s, 350 MB/s
bor@10:~> sync
bor@10:~> df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1023M  127M  546M  19% /mnt
bor@10:~> sudo filefrag -v /mnt/foo
Filesystem type is: 9123683e
File size of /mnt/foo is 104857600 (25600 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..    5119:      36272..     41391:   5120:
   1:     5120..    7679:      33696..     36255:   2560:      41392:
   2:     7680..   14419:      43952..     50691:   6740:      36256:
   3:    14420..   25599:     125312..    136491:  11180:      50692: last,eof
/mnt/foo: 4 extents found
bor@10:~> sudo dd if=/dev/zero of=/mnt/foo bs=10M count=1 conv=notrunc seek=7
1+0 records in
1+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0314211 s, 334 MB/s
bor@10:~> sync
bor@10:~> df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1023M  137M  536M  21% /mnt
bor@10:~> sudo filefrag -v /mnt/foo
Filesystem type is: 9123683e
File size of /mnt/foo is 104857600 (25600 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..    5119:      36272..     41391:   5120:
   1:     5120..    7679:      33696..     36255:   2560:      41392:
   2:     7680..   14419:      43952..     50691:   6740:      36256:
   3:    14420..   17919:     125312..    128811:   3500:      50692:
   4:    17920..   20479:     136492..    139051:   2560:     128812:
   5:    20480..   25599:     131372..    136491:   5120:     139052: last,eof
/mnt/foo: 6 extents found
bor@10:~> ll -sh /mnt
total 100M
100M -rw-r--r-- 1 root root 100M Oct 27 23:30 foo
bor@10:~>

So you still have a single file with a size of 100M, but the space consumed
on the filesystem is 120M, because the two initial large extents remain
fully allocated. Each 10M overwrite gets a new extent allocated, but the
large extents it partially overwrites are not split or trimmed. If you look
at the file details

bor@10:~/python-btrfs/examples> sudo ./show_file.py /mnt/foo
filename /mnt/foo tree 5 inum 259
inode generation 239 transid 242 size 104857600 nbytes 104857600
block_group 0 mode 100644 nlink 1 uid 0 gid 0 rdev 0 flags 0x0(none)
inode ref list size 1
    inode ref index 3 name utf-8 foo
extent data at 0 generation 239 ram_bytes 46563328 compression none type regular disk_bytenr 148570112 disk_num_bytes 46563328 offset 0 num_bytes 20971520

This extent consumes about 44MB on disk, but only 20MB of it is part of the file.

extent data at 20971520 generation 241 ram_bytes 10485760 compression none type regular disk_bytenr 138018816 disk_num_bytes 10485760 offset 0 num_bytes 10485760
extent data at 31457280 generation 239 ram_bytes 46563328 compression none type regular disk_bytenr 148570112 disk_num_bytes 46563328 offset 31457280 num_bytes 15106048

And another 14MB of the same extent is referenced here. So 10MB allocated on disk is "lost".

extent data at 46563328 generation 239 ram_bytes 12500992 compression none type regular disk_bytenr 195133440 disk_num_bytes 12500992 offset 0 num_bytes 12500992
extent data at 59064320 generation 239 ram_bytes 45793280 compression none type regular disk_bytenr 513277952 disk_num_bytes 45793280 offset 0 num_bytes 14336000
extent data at 73400320 generation 242 ram_bytes 10485760 compression none type regular disk_bytenr 559071232 disk_num_bytes 10485760 offset 0 num_bytes 10485760
extent data at 83886080 generation 239 ram_bytes 45793280 compression none type regular disk_bytenr 513277952 disk_num_bytes 45793280 offset 24821760 num_bytes 20971520

Same here: 10MB of the extent at 513277952 is lost.
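The arithmetic can be checked directly from the show_file.py output above:
subtract the bytes each file reference actually uses from the on-disk size
of the extent (the constants below are copied from that output):

```shell
# Extent at disk_bytenr 148570112: 46563328 bytes on disk,
# referenced by the file as 20971520 + 15106048 bytes.
echo $((46563328 - 20971520 - 15106048))   # 10485760 bytes = exactly 10MiB lost

# Extent at disk_bytenr 513277952: 45793280 bytes on disk,
# referenced by the file as 14336000 + 20971520 bytes.
echo $((45793280 - 14336000 - 20971520))   # 10485760 bytes = exactly 10MiB lost
```

Each lost 10MiB is exactly one overwritten dd write: the old extent keeps
the superseded range allocated because extents are freed only as a whole.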

There is no way to regain this space using balance; balance only relocates
chunks, it does not rewrite extents. Only defragmentation can potentially
rewrite the file, freeing the partially referenced extents.
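On the test filesystem above that would look something like the following
(a sketch, not run here; how much is actually reclaimed depends on kernel
version, and defragmenting breaks sharing with reflinks and snapshots):

```shell
# Rewrite the file's extents; the partially referenced old extents
# then have no remaining references and can be freed.
sudo btrfs filesystem defragment -f /mnt/foo
sync

# Compare used space with the earlier df output; the two orphaned
# 10MiB tails should be gone.
df -h /mnt
```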

> you are using an older kernel and the filesystem gets mounted with ssd
> option, in which case, you'll want to add nossd option to prevent that
> behaviour.)
> 
> You can use btrfs fi usage to display that more clearly.
> 
> 
>> I can try a defrag next, but since I have COW for snapshots, it's not
>> going to help much, correct?
> 
> The defrag will end up using more space, as the fragmented parts of
> files will get duplicated.  That being said, if you have the luxury to
> defrag *before* taking new snapshots, that would be the time to do it.
> 



