On 2019-04-01 15:22, Hendrik Friedel wrote:
> Dear btrfs-team,
>
> I am aware that barriers are essential for btrfs [1]. I have some
> questions on that topic:
>
> 1) I am not aware of any way to determine whether barriers are
> supported, except for searching dmesg for a message that barriers are
> disabled. Is that correct? It would be nice if that could be
> determined before creating the FS.
AFAIK, this is correct. However, not supporting DPO or FUA is
non-critical, because the kernel emulates them properly (there would be
many problems far beyond BTRFS if it didn't, as most manufacturers treat
FUA the same way they treat SCT ERC: it's an 'enterprise' feature, so
consumers aren't allowed to have it).
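For what it's worth, the block layer's view of this is visible in sysfs
before you ever create a filesystem. A minimal sketch, assuming a
kernel new enough to expose /sys/block/<dev>/queue/fua ("sda" is a
placeholder device name; the sysfs root is a parameter only so the
helper can be exercised against a test tree):

```shell
#!/bin/sh
# check_fua: report whether a device advertises native FUA support,
# reading from a sysfs tree (defaults to the real /sys).
check_fua() {
    dev="$1"
    sysfs="${2:-/sys}"
    f="$sysfs/block/$dev/queue/fua"
    if [ ! -r "$f" ]; then
        echo "unknown"    # attribute absent: old kernel or no such device
    elif [ "$(cat "$f")" = "1" ]; then
        echo "native"     # device advertises FUA
    else
        echo "emulated"   # kernel emulates FUA with a post-write FLUSH
    fi
}

# Usage example; the output depends on the machine:
check_fua sda
```

The neighbouring attribute /sys/block/<dev>/queue/write_cache ("write
back" vs. "write through") similarly shows whether the kernel believes a
volatile write cache is in use.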
> 2) I find the location of the (only?) warning, the dmesg log, well
> hidden. I think it would be better to notify the user when creating
> the file-system.
A notification on creating the volume and one when adding devices
(either via `device add` or via a replace operation) would indeed be
nice, but we should still keep the kernel log warning. Note also that
messages like the ones Qu mentioned as being fine come from the SCSI
layer (yes, even if you're using ATA or USB disks; both go through the
SCSI layer in Linux), not from BTRFS.
> 3) Even more, it would be good if btrfs would disable the write cache
> in that case, so that one does not need to rely on the user.
I would tend to disagree here. We should definitely _recommend_ this to
the user if we know there is no barrier support, but just doing it
behind their back is not a good idea. There are also plenty of valid
reasons to want to use the write cache anyway.
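A sketch of what "recommend rather than silently apply" could look like,
assuming the sysfs write_cache attribute is available
(recommend_cache_setting is an illustrative helper, not an existing
btrfs tool; actually toggling the device cache is left to the admin,
e.g. `hdparm -W0` on ATA disks):

```shell
#!/bin/sh
# recommend_cache_setting takes the contents of
# /sys/block/<dev>/queue/write_cache and prints advice only.
recommend_cache_setting() {
    case "$1" in
        "write back")
            echo "volatile write cache in use: if FLUSH cannot be trusted, consider 'hdparm -W0 <dev>'" ;;
        "write through")
            echo "no volatile write cache in use: nothing to do" ;;
        *)
            echo "unknown cache mode: '$1'" ;;
    esac
}

# Usage, guarded so it is a no-op on machines without the attribute:
if [ -r /sys/block/sda/queue/write_cache ]; then
    recommend_cache_setting "$(cat /sys/block/sda/queue/write_cache)"
fi
```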
> 4) If [2] is still valid, there are drives 'lying' about their barrier
> support. Can someone comment? If that is the case, it would even be
> advisable to provide a test of the actual capability. In fact, if
> this is still valid, this may be the reason for some btrfs corruptions
> that have been discussed here. [I did read that LVM/Device-Mapper does
> not support barriers, but I think that this is outdated.]

There are two things to consider here: the FLUSH command, which is
mandatory as per SCSI, ATA, and pretty much every other storage protocol
specification, and FUA/DPO, which is not. If you have FLUSH, you can
emulate FUA/DPO.
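The equivalence can be seen at the application level with plain dd
(a sketch assuming GNU coreutils dd on Linux; the target is a scratch
file rather than a raw device):

```shell
#!/bin/sh
# Durability-wise, a FUA write is a write that does not complete until
# the data is on stable media; without FUA, the same guarantee is a
# normal write followed by a cache FLUSH.
out=$(mktemp)

# Write, then a single fsync() at the end: the kernel sends one FLUSH
# to the device after the data (the write+FLUSH emulation path).
dd if=/dev/zero of="$out" bs=4096 count=4 conv=fsync 2>/dev/null

# O_SYNC: each write returns only once it is durable; on a FUA-capable
# device the kernel can satisfy this with FUA writes instead of
# separate FLUSHes.
dd if=/dev/zero of="$out" bs=4096 count=4 oflag=sync 2>/dev/null

wc -c < "$out"   # same 16384 bytes either way; only durability timing differs
rm -f "$out"
```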
The only modern devices I know of that actually _lied_ about FLUSH are
OCZ SSDs. They've stopped making them because the associated data-loss
issues killed any consumer trust in the product. The only other devices
I've ever seen _any_ issue with the FLUSH implementation in are some
ancient SCSI-2 5.25-inch full-height disk drives where I work, which
have a firmware bug that reports the FLUSH completed before the last
sector in the write cache is written out (they still write that last
sector, they just report command completion early).
As far as FUA/DPO goes, I know of exactly _zero_ devices that claim to
implement it but don't. Unlike FLUSH, which is a required part of
almost all modern storage protocols, FUA/DPO isn't required, so there's
essentially zero incentive to claim you implement it when you don't
(people who would be looking for it generally know what they're doing).
As for the article you link about disks lying, note first that it's
just over 14 years old (almost three times the MTBF of a normal hard
drive), and much has changed since then. The actual issue there was not
the disks doing write caching (which is what is actually being
complained about), but the fact that Linux did not yet issue a FLUSH
command to the disk when you called fsync in userspace.
> Greetings,
> Hendrik
>
> [1] https://btrfs.wiki.kernel.org/index.php/FAQ#I_see_a_warning_in_dmesg_about_barriers_being_disabled_when_mounting_my_filesystem._What_does_that_mean.3F
> [2] https://brad.livejournal.com/2116715.html