> That's actually the reason btrfs defaults to SINGLE metadata mode on
> single-device SSD-backed filesystems, as well.
>
> But as Imran points out, SSDs aren't all there is.  There's still
> spinning rust around.
>
> And defaults aside, even on SSDs it should be /possible/ to specify
> data-dup mode, because there's enough different SSD variants and enough
> different use-cases, that it's surely going to be useful
> some-of-the-time to someone. =:^)

We didn't start with SSDs, but the thread is heading there. Well, OK then.

Hard drives with more complex firmware, hybrids, and so on are becoming
available, and eventually they will share common problems with SSDs. To
make a long story short: eventually we will all have block-addressed
devices without any sensible, physically bound addresses.

Without physically bound addresses, any duplicate written to the device
MAY end up in the same unreliable portion of the device. Note that it
"MAY". However, the devices are so large that this probability is very
low. The paranoid user who wants to lower it further may simply increase
the number of duplicates. On the other hand, people who work with
multiple physical devices may want to decrease the number of duplicates
(probably to a single copy).

Hence, there is definitely a use case for a tunable number of duplicates,
for both data and metadata.

Now, there is one open issue: in its current form, "-d dup" interferes
with "-M". Is that a design constraint, or an arbitrary/temporary one?
And what will the situation be once the number of duplicates is tunable?

And more: is "-M" good for everyday use on a large filesystem, for
efficient packing? What's the penalty? Can it be cured? If so, why not
make it the default?

Imran
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
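[Editor's aside: the "this probability is very low" argument above can be made concrete with a toy model. Assuming, purely for illustration, that the firmware remaps each copy independently and uniformly across the device (no real FTL guarantees this), the chance that all n copies of a block land in an unreliable region covering a fraction f of the device is f^n, so each extra duplicate multiplies the risk by f:]

```python
def prob_all_copies_bad(bad_fraction: float, n_copies: int) -> float:
    """Toy model: probability that *every* copy of a block lands in the
    unreliable region, assuming each copy is placed independently and
    uniformly across the device (an illustrative assumption, not a
    property of any real drive firmware)."""
    return bad_fraction ** n_copies

# With 1% of the device unreliable:
#   1 copy  -> 0.01
#   2 copies ("dup") -> 0.0001
#   3 copies -> 0.000001
for n in (1, 2, 3):
    print(n, prob_all_copies_bad(0.01, n))
```

Under this model, going from a single copy to dup already cuts the all-copies-lost probability by a factor of 100, which is why even a small tunable duplicate count would be attractive to the paranoid.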
