Re: Problem with file system

At 04/25/2017 01:33 PM, Marat Khalili wrote:
On 25/04/17 03:26, Qu Wenruo wrote:
IIRC, with qgroups enabled, subvolume deletion triggers a full subtree rescan, which can consume tons of memory.
Could it be this bad, 24GB of RAM for a 5.6TB volume? What does it even use this absurd amount of memory for? Is it swappable?

The memory is used for two things:

1) Recording which extents need to be traced.
   Freed at transaction commit.

   We need a better way to handle these. Maybe create a new tree so that
   we can write them to disk?
   Or another qgroup rework?

2) Recording the current roots referring to each extent.
   Only present after v4.10, IIRC.

The memory allocated is not swappable.

How much memory it uses depends on the number of extents in that subvolume.

It's 56 bytes per extent, for both tree blocks and data extents.
To use up 16G of RAM, that's about 300 million extents.
For a 5.6T volume, that implies an average extent size of about 20K.
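A quick back-of-envelope check of those figures (a sketch assuming the stated 56 bytes of tracking state per extent; the actual kernel struct size may differ between versions):

```python
# Sanity-check the per-extent memory arithmetic from the mail above.
# Assumption: 56 bytes of qgroup tracking state per extent, as stated.

BYTES_PER_EXTENT = 56
GiB = 1 << 30
TiB = 1 << 40

def extents_for_ram(ram_bytes, bytes_per_extent=BYTES_PER_EXTENT):
    """Number of tracked extents that fit in the given amount of RAM."""
    return ram_bytes // bytes_per_extent

def avg_extent_size(volume_bytes, extent_count):
    """Average extent size implied by a volume size and an extent count."""
    return volume_bytes // extent_count

extents = extents_for_ram(16 * GiB)
avg = avg_extent_size(int(5.6 * TiB), extents)
print(extents)  # roughly 300 million extents
print(avg)      # roughly 20 KB average extent size
```

This matches the mail's numbers: ~16G of RAM corresponds to ~300 million extents, and spreading 5.6T over that many extents gives an average extent around 20K, hence the conclusion about heavy fragmentation.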

It seems that your volume is highly fragmented, though.

If that's the problem, disabling qgroups (e.g. with `btrfs quota disable <mountpoint>`) may be the best workaround.

Thanks,
Qu


I hadn't read about RAM limitations for running qgroups before, only about CPU load (which, importantly, only requires patience and does not crash servers).

--

With Best Regards,
Marat Khalili
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



