Re: Massive loss of disk space

On August 3, 2017 7:01:06 PM GMT+03:00, Goffredo Baroncelli wrote:
>The file is physically extended
>
>ghigo@venice:/tmp$ fallocate -l 1000 foo.txt

For clarity, let's replace the first fallocate above with the following, so that foo.txt starts out containing real written data rather than just preallocated space:
$ head -c 1000 </dev/urandom >foo.txt

>ghigo@venice:/tmp$ ls -l foo.txt
>-rw-r--r-- 1 ghigo ghigo 1000 Aug  3 18:00 foo.txt
>ghigo@venice:/tmp$ fallocate -o 500 -l 1000 foo.txt
>ghigo@venice:/tmp$ ls -l foo.txt
>-rw-r--r-- 1 ghigo ghigo 1500 Aug  3 18:00 foo.txt
>ghigo@venice:/tmp$
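
For anyone repeating this at home: ls only shows the apparent length, so a quick way to see what is actually allocated is something like the following (my own commands, not from Goffredo's mail):

$ stat -c 'apparent=%s bytes, allocated=%b blocks of %B bytes' foo.txt
$ sync; du -B1 foo.txt

On btrfs the allocated figure may only settle after a sync, because of delayed allocation.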

According to the explanation by Austin, foo.txt at this point somehow occupies 2000 bytes of space, because I can reflink it and then write another 1000 bytes of data into it without losing the 1000 bytes I already have or running out of disk space. (Or is that only true while there are open file handles?)
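
If it helps, the test I have in mind would be roughly this (a sketch on my side; cp --reflink=always and the name foo-clone.txt are my additions, not something Austin posted):

$ cp --reflink=always foo.txt foo-clone.txt
$ dd if=/dev/urandom of=foo.txt bs=1000 count=1 conv=notrunc
$ sync
$ du -B1 foo.txt foo-clone.txt

i.e. clone the file, overwrite 1000 bytes of the original in place, and see whether the write succeeds on a nearly full filesystem while foo-clone.txt still reads back the old data.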
-- 

With Best Regards,
Marat Khalili