Re: evidence of persistent state, despite device disconnects

Chris Murphy posted on Sat, 09 Jan 2016 15:29:31 -0700 as excerpted:

> On Sat, Jan 9, 2016 at 3:55 AM, Duncan <1i5t5.duncan@xxxxxxx> wrote:
>>
>> If you're mounting degraded,rw, and you're down to a single device on a
>> raid1, then once the existing chunks fill up, it /has/ to create single
>> chunks, because it can't create them raid1 as there aren't enough
>> devices (a minimum of two devices are required to create raid1 chunks,
>> since two copies are required and they can't be on the same device).
>>
>> And by mounting degraded,rw you've given it permission to create those
>> single mode chunks if it has to, so it's not "silent", as you've
>> explicitly mounted it degraded,rw, and single is what raid1 degrades to
>> when there's only one device.
> 
> It's esoteric for mortal users (especially without documentation) that
> degraded,rw means single chunks will be made, and that new data is no
> longer replicated even once the bad device is replaced and the volume
> scrubbed.
> 
> There's an incongruity between the promise of "fault tolerance, repair,
> and easy administration" and the esoteric reality. This is not easy,
> this is a gotcha. I'll bet almost no users have any idea that this is
> how rw,degraded behaves or the risk it entails.

Certainly, documentation is an issue.  But while the degraded option 
doesn't force a degraded mount, only allows one if devices are missing, 
it's not recommended for routine use, and this is one reason why.  Using 
the degraded option really /does/ give the filesystem permission to 
break the rules that would apply in normal operation, and adding it to 
your mount options shouldn't be done lightly or routinely.  Ideally, 
it's /only/ added after a device fails, in order to be able to mount the 
filesystem and replace the failing/failed device with a new one, or to 
reshape the filesystem to one less device if a new one isn't to be added.
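For the record, the usual recovery sequence looks something like the 
following sketch.  The device names, mountpoint, and devid are 
illustrative, not taken from any real system; check btrfs fi show for 
the actual missing devid:

  # degraded only /permits/ mounting with a device missing:
  mount -o degraded,rw /dev/sdb /mnt

  # Option 1: replace the missing device (devid 2 here, hypothetically)
  # with a new one:
  btrfs replace start 2 /dev/sdc /mnt

  # Option 2: reshape to one less device instead.  On a two-device
  # raid1 that means converting profiles first, since raid1 chunks
  # need two devices:
  btrfs balance start -dconvert=single -mconvert=dup /mnt
  btrfs device delete missing /mnt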

OTOH, if there are three devices in the raid1, and all three have 
unallocated space, then loss of a device shouldn't result in single-mode 
chunks even when mounting degraded, because in that case it's still 
possible to create raid1 chunks, as there are still two devices with 
free space available.  Again, creation of single chunks in that case 
would be a bug.
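
Either way, it's easy enough to check whether any single chunks snuck 
in while the filesystem was degraded, and to convert them back to raid1 
once the array is whole again (again, the mountpoint is illustrative):

  # Look for Data,single or Metadata,single lines alongside the raid1
  # lines; fi usage also shows per-device unallocated space:
  btrfs filesystem df /mnt
  btrfs filesystem usage /mnt

  # Convert any stray single chunks back to raid1; the "soft" filter
  # skips chunks already in the target profile:
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt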

But I think we're past the point of effective argument and are pretty 
much just restating our positions now.  Given that I'm definitely not a 
btrfs coder, and that to my knowledge, while you may well read the code 
and submit the occasional trivial patch, you're not really a btrfs coder 
either, alleviating the documentation issue we both agree exists is the 
best either of us can really do.  The rest remains with the real btrfs 
coders, and arguing further about it as non-btrfs-devs isn't going to 
help.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
