Re: Why does btrfs benchmark so badly in this case?

On Aug 8, 2013, at 2:23 PM, John Williams <jwilliams4200@xxxxxxxxx> wrote:
> 
> So I guess the reason that ZFS does well with that workload is that
> ZFS is using smaller blocks, maybe just 512B ?

Likely. It uses a variable block size.


> I wonder how common these type of non-4K aligned workloads are.
> Apparently, people with such workloads should avoid btrfs, but maybe
> these types of workloads are very rare?

I can't directly answer the question, but the typical file systems on OS X, Linux, and Windows have all defaulted to a 4KB block size for many years now, and the block size is baked in at filesystem creation time. On OS X, the block size scales automatically with volume size at creation time (8KB above 2TB, and up to 1MB for very large volumes), but it is never less than 4KB unless the filesystem is manually created that way. So I'd think such workloads are rare.
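(For reference, the block size of the filesystem a path lives on can be queried from Python via os.statvfs; a quick generic sketch, not specific to any filesystem discussed here, and the example path "/" is just a placeholder:)

```python
import os

def fs_block_size(path):
    """Return the fundamental block size of the filesystem containing path."""
    st = os.statvfs(path)
    return st.f_frsize  # fragment size: the fs's fundamental block size

# On most modern Linux/OS X/Windows-created filesystems this prints 4096.
print(fs_block_size("/"))
```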

I also don't know whether any filesystem in common use has an optimization whereby only the modified sector(s) are overwritten, rather than rewriting all of the sectors that make up the filesystem block.
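(The cost of that missing optimization is easy to see with a little arithmetic. The sketch below, a simplified model rather than how any particular fs implements it, computes how many bytes get rewritten when a write is rounded out to whole blocks: a 512-byte write on a 4KB-block fs rewrites 4096 bytes, and an unaligned one can touch two blocks.)

```python
def bytes_rewritten(offset, length, block_size=4096):
    """Bytes rewritten when a write is rounded out to whole fs blocks
    (the read-modify-write case), given the write's offset and length."""
    first_block = offset // block_size
    last_block = (offset + length - 1) // block_size
    return (last_block - first_block + 1) * block_size

# A 512-byte write on a 4KB-block fs rewrites a full 4KB block:
assert bytes_rewritten(512, 512) == 4096
# The same write with 512-byte blocks rewrites only 512 bytes:
assert bytes_rewritten(512, 512, block_size=512) == 512
# A non-4K-aligned 512-byte write straddling a block boundary touches two blocks:
assert bytes_rewritten(3840, 512) == 8192
```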

Chris Murphy--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



