It is the same idea as the way space-efficient snapshots currently work in btrfs. I'm just planning out one method where a process goes and finds the duplicate blocks, links them together, and increases the reference counts, and another method where the allocator does it while writing.

-Morey

On Wed, 2008-08-13 at 12:35 -0700, Kevin Cantu wrote:
> This would be a kind of filesystem block level compression, right?
>
> On Wed, Aug 13, 2008 at 12:28 PM, <btrfs-devel@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >> Don't do it!!!
> >>
> >> OK, I know Chris has described some block sharing.  But I hate it.
> >>
> >> If I copy "resume" to "resume.save", it is because I want 2 copies
> >> for safety.  I don't want the fs to reduce it to 1 copy.  And
> >> reducing the duplicates is exactly opposite to Chris's paranoid
> >> make-multiple-copies-by-default.
> >>
> >> Now feel free to tell me I'm an idiot (other people do) :)
> >
> > In situations where there are non-trivial benefits to some workloads, but
> > also non-trivial drawbacks, it strikes me as something that could be
> > enabled and disabled as a mount option, like data=ordered.
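
As a rough illustration of the first method (a process that scans for duplicate blocks before anything is linked), here is a minimal userspace sketch: it hashes fixed-size blocks of a file and reports blocks whose contents match an earlier block. The 4 KiB block size, the FNV-1a hash, the single-file scope, and the simple linear table are all assumptions for the sketch, not btrfs internals; an actual dedup pass would then link the matching extents and bump their reference counts inside the filesystem.

/*
 * Sketch of a "scan for duplicate blocks" pass: hash fixed-size blocks
 * of one file and report blocks whose contents match an earlier block.
 * Block size, hash choice, and single-file scope are illustrative
 * assumptions, not btrfs internals.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define BLOCK_SIZE 4096
#define MAX_BLOCKS 65536

struct seen_block {
    uint64_t hash;
    long     offset;   /* file offset of the first block with this hash */
};

/* Simple FNV-1a hash; a real pass would likely want something stronger. */
static uint64_t fnv1a(const unsigned char *data, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    static struct seen_block seen[MAX_BLOCKS];
    size_t nseen = 0;
    unsigned char buf[BLOCK_SIZE], prev[BLOCK_SIZE];
    long offset = 0;

    while (fread(buf, 1, BLOCK_SIZE, f) == BLOCK_SIZE) {
        uint64_t h = fnv1a(buf, BLOCK_SIZE);

        size_t i;
        for (i = 0; i < nseen; i++) {
            if (seen[i].hash != h)
                continue;
            /* Hash matches: re-read the earlier block and compare bytes,
             * since only identical contents could ever be linked. */
            long cur = ftell(f);
            fseek(f, seen[i].offset, SEEK_SET);
            if (fread(prev, 1, BLOCK_SIZE, f) == BLOCK_SIZE &&
                memcmp(prev, buf, BLOCK_SIZE) == 0)
                printf("block at %ld duplicates block at %ld\n",
                       offset, seen[i].offset);
            fseek(f, cur, SEEK_SET);
            break;
        }
        if (i == nseen && nseen < MAX_BLOCKS) {
            seen[nseen].hash = h;
            seen[nseen].offset = offset;
            nseen++;
        }
        offset += BLOCK_SIZE;
    }

    fclose(f);
    return 0;
}

The same hashing could in principle run inline, with the allocator checking each new block against blocks already written, which is the second method described above; the trade-off between an offline scan and allocator-time checks is the cost at write time versus the cost of a separate pass.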
