Re: how can I copy files bigger than ~32 GB when using compress-force?

On 12.10.2010 13:12, Tomasz Chmielewski wrote:
> On 05.10.2010 00:28, Chris Mason wrote:
>>>> Does it happen with any 32gb file that doesn't compress well?
>>>
>>> The 220 GB qcow2 file was basically uncompressible (backuppc archive
>>> full of bzip2-compressed files).
>>
>> Ok, I think I know what is happening here, they all lead to the same
>> chunk of code.  I'll be able to reproduce this locally.
>
> FYI, qemu/kvm doesn't seem to like its files located on btrfs mounted with compress-force.
>
> I have a filesystem mounted with noatime,compress-force, where I created a 100 GB sparse file.
> There, I wanted to install a Linux distribution - however, the whole qemu-kvm process hung, with these entries being repeated over and over.
> It's not possible to kill the qemu-kvm process (even with kill -9), etc.
>
> [103678.068429] INFO: task qemu-kvm:18722 blocked for more than 120 seconds.
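
(For anyone who wants to recreate the setup described above, it boils down to creating a large sparse image on a btrfs filesystem mounted with compress-force and pointing qemu-kvm at it. A rough Python sketch follows; the mount point, file name and the 100 GB size are only placeholders, not the exact values from my setup:)

import os

# Assumed paths: a btrfs filesystem mounted with "noatime,compress-force"
# on /mnt/btrfs (adjust to your own setup).
MOUNT_POINT = "/mnt/btrfs"
IMAGE = os.path.join(MOUNT_POINT, "disk.img")
SIZE = 100 * 1024 ** 3  # 100 GiB image, allocated sparsely

# truncate() extends the file without writing any data blocks, so the
# file is sparse: no space is used until the guest actually writes to it.
with open(IMAGE, "wb") as f:
    f.truncate(SIZE)

# The guest install then runs against this image, e.g.:
#   qemu-kvm -hda /mnt/btrfs/disk.img -cdrom installer.iso -m 1024
# (command line shown only as an illustration)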

Hmm, I see it blocks indefinitely like this whether qemu-kvm accesses a sparse file on a filesystem mounted with compress, compress-force, or no compression at all.

It also hangs with non-sparse files on a filesystem mounted without compression (I didn't try non-sparse files with compression).

So it must be some other bug.
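
One thing worth checking while it is stuck: whether the qemu-kvm task is sitting in uninterruptible sleep (D state), which would explain why even kill -9 does nothing, and what its kernel stack looks like. A small helper sketch for that is below; it assumes /proc/<pid>/stack is readable (root, kernel built with CONFIG_STACKTRACE), and the PID 18722 from the hung-task warning is just an example value:

import sys

def show_blocked_task(pid: int) -> None:
    """Print a task's scheduler state and, if readable, its kernel stack."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # The state field follows the parenthesised command name; "D" means
    # uninterruptible sleep, which is why kill -9 has no effect.
    state = data[data.rindex(")") + 2]
    print(f"pid {pid} state: {state}")
    try:
        # Kernel stack of the blocked task (root only, needs CONFIG_STACKTRACE).
        with open(f"/proc/{pid}/stack") as f:
            print(f.read())
    except OSError as err:
        print(f"could not read /proc/{pid}/stack: {err}")

if __name__ == "__main__":
    show_blocked_task(int(sys.argv[1]))  # e.g. 18722 from the warning above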


Also, when it happens, I occasionally see:

bad ordered accounting left 4096 size 12288


--
Tomasz Chmielewski
http://wpkg.org

