Re: high cpu load for random write

On Tue, Jun 30, 2009 at 03:54:09PM +0300, Piavlo wrote:
>  Hi,
> 
> I've just run a tiobench benchmark on 2.6.31-rc1-git5 with
> btrfs-progs-0.18
> on a single ATA disk with default mount options.  The performance now
> looks great compared with previous kernel versions.
> 
> But one thing I noticed is that the CPU load for random writes
> (besides being terribly high) is several times higher than for
> sequential writes.
> With all other file systems I've tried, I've always seen the
> opposite: the less data is written to the disk, the lower the CPU
> load (no matter whether the writes are random or sequential).

There are two causes of the high CPU load.  The first is data
checksumming (which costs about the same whether you are creating the
file or rewriting it randomly) and the second is the cost of
maintaining back references for the file data extents.
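
To give a feel for the checksumming side, here is a rough userspace
sketch of CRC32C over a 4KiB block.  This is only an illustration, not
btrfs code (btrfs checksums in the kernel with its own crc32c); the
point is that the work scales with the bytes written, not with whether
the writes are sequential or random:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t crc, const unsigned char *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78 : 0);
        }
        return ~crc;
    }

    int main(void)
    {
        unsigned char block[4096];
        memset(block, 0xab, sizeof(block));

        /* One checksum per block written; the cost is the same no
         * matter where on disk the block lands. */
        printf("crc32c of one 4KiB block: 0x%08x\n",
               crc32c(0, block, sizeof(block)));
        return 0;
    }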

In btrfs, we track the owners of each extent, which makes repair, volume
management and other things much easier.  Small random writes make for a
lot of extents, and so they also make for a lot of tracking.
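
You can watch this happen with filefrag from e2fsprogs, which prints
how many extents a file is mapped into (the exact count varies from
run to run, and the file name here is just whatever your benchmark
created):

    filefrag <testfile>

A sequentially written file usually lands in a handful of extents,
while a randomly rewritten one can end up with thousands, and each of
those extents carries its own back reference bookkeeping.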

In general, you'll find that mount -o ssd will be faster here, just
because it forces the allocator into more sequential allocations for
this workload.
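
For example (the device and mount point here are just placeholders):

    mount -o ssd /dev/sdX /mnt/btrfs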

You'll find that mount -o nodatacow uses much less CPU time, but this
disables checksumming and a few other advanced features.
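
Again with placeholder names, that would be:

    mount -o nodatacow /dev/sdX /mnt/btrfs

Keep in mind this is a mount-time switch, and data written under it
has no checksums for btrfs to verify later.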

-chris
