> >>> I'm trying to use quotas for a simple chrooted sftp setup, limiting
> >>> space for each user's subvolume (for testing, to 1M).
> >>>
> >>> I tried to hit the limit by uploading files, and once it reached the
> >>> limit I ran into the following problem: if I try to free space by
> >>> removing a file via the Linux sftp client (or FileZilla), I get the
> >>> error:
> >>> "Couldn't delete file: Failure"
> >>>
> >>> Sometimes, but not always, if I repeat it 3-5 times it does remove
> >>> the file in the end.
> >>> If I log in as root and try to remove the file via SSH, I get the
> >>> error:
> >>> "rm: cannot remove 'example.txt': Disk quota exceeded"
> >>>
> >>> What is the problem? And how can I solve it?
> >>
> >> Kernel version first.
> >>
> >> If possible, please use the latest kernel, at least newer than v4.10,
> >> since we have a lot of qgroup reservation related fixes in newer
> >> kernels.
> >>
> >> Then, for a small quota, due to the nature of btrfs metadata CoW and
> >> the relatively large default node size (16K), it's quite easy to hit
> >> the disk quota for metadata.
> >
> > Yes, but why do I get the error specifically on REMOVING a file? Even
> > if I hit the disk quota - if I free up space - it should be possible,
> > shouldn't it?
>
> That's only true for filesystems that modify their metadata in place
> (and use a journal to protect it).
>
> For a filesystem using metadata CoW, even freeing space needs extra
> space for new metadata.

Wait, that doesn't sound like a bug, but rather like a flaw in the
design. This means that each time a user hits his quota limit he will
get stuck, unable to free space?!

Thank you.
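
For what it's worth, a common workaround in this situation is for the
administrator to temporarily lift (or raise) the qgroup limit, delete the
offending files, and then re-apply the limit. A rough sketch, assuming the
subvolumes live under /srv/sftp and the affected subvolume has qgroup ID
0/257 (both the path and the qgroup ID here are made-up placeholders, not
taken from the thread above):

```shell
# Show current usage and limits; -r/-e include max referenced/exclusive.
btrfs qgroup show -re /srv/sftp

# Temporarily remove the limit so the deletion's new CoW metadata
# can be written:
btrfs qgroup limit none 0/257 /srv/sftp

# Now the file can be removed normally:
rm /srv/sftp/user1/example.txt

# Re-apply the limit once space has been freed:
btrfs qgroup limit 1M 0/257 /srv/sftp
```

This needs root and a mounted btrfs filesystem, so it is not something the
chrooted sftp user can do himself; keeping the limit slightly above the
intended quota so there is headroom for metadata CoW is another way to
reduce how often this bites.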
