Re: how can I copy files bigger than ~32 GB when using compress-force?

On Mon, Oct 04, 2010 at 11:42:12PM +0200, Tomasz Chmielewski wrote:
> >I'm assuming this works without compress-force?  I can make a guess at
> >what is happening, the compression forces a relatively small extent
> >size, and this is making our worst case metadata reservations get upset.
> 
> Yes, it works without compress-force.
> 
> Interestingly, cp or rsync sometimes just exits quite quickly with
> "no space left".
> 
> Sometimes they just "hang" (I waited up to about an hour) - the file
> size no longer grows, the last-modified time is not updated, iostat
Sorry, is this hang/fast exit with or without compress-force?

> does not show any bytes read or written, no btrfs or other
> processes are using much CPU, and cp/rsync is not in "D" state
> (although it goes into "D" state and uses 100% CPU when I try to
> kill it).
> 
> Could it be we're hitting two different bugs here?
> 
> 
> >Does it happen with any 32 GB file that doesn't compress well?
> 
> The 220 GB qcow2 file was basically incompressible (a backuppc
> archive full of bzip2-compressed files).

Ok, I think I know what is happening here; both cases lead to the same
chunk of code.  I should be able to reproduce this locally.
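
A rough sketch of one way to trigger it, purely as an assumption about the
setup: a btrfs filesystem mounted with compress-force, a file a bit past the
~32 GB mark filled with random (poorly compressible) data, and a plain copy
on the same filesystem. The mount point, sizes, and the use of
shutil.copyfile in place of cp/rsync are placeholders, not details from the
original report.

#!/usr/bin/env python3
# Sketch only: paths and sizes below are assumed, not taken from the report.
import errno
import os
import shutil

MOUNT = "/mnt/btrfs-test"            # assumed btrfs mount, e.g. -o compress-force=zlib
SRC = os.path.join(MOUNT, "big-incompressible.img")
DST = os.path.join(MOUNT, "copy-of-big.img")
SIZE = 40 * 1024**3                  # ~40 GB, past the ~32 GB point reported above
CHUNK = 8 * 1024**2                  # write in 8 MiB pieces

def make_incompressible(path, size):
    # Random bytes compress badly, so compress-force still ends up
    # writing many small compressed extents for this file.
    written = 0
    with open(path, "wb") as f:
        while written < size:
            n = min(CHUNK, size - written)
            f.write(os.urandom(n))
            written += n

try:
    make_incompressible(SRC, SIZE)
    shutil.copyfile(SRC, DST)        # stand-in for the cp/rsync step
    print("copy completed")
except OSError as e:
    if e.errno == errno.ENOSPC:
        print("hit ENOSPC during the copy")
    else:
        raise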

-chris


