Why is dedup done inline rather than delayed (i.e. offline)? Explain like I'm five, please.

Hi,

This must be a silly question! Please assume that I know not much more than
nothing about filesystems.
I know dedup traditionally costs a lot of memory, but I don't really
understand why it is done that way. Let me explain my question:

AFAICT dedup matches file-level blocks (or whatever you call them) using a
hash function with a suitably low collision probability. The hash is used to
match blocks as they are committed to disk (I'm talking about online
dedup*), and the duplicated blocks are reflinked/eliminated as necessary.
This bloody great hash tree is kept in memory for speed of lookup (I
assume); something like the sketch below.
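To make my mental model concrete, here's a toy sketch of what I imagine the
write path doing. The names, the SHA-256 choice, and the plain dict are my
own assumptions for illustration, not btrfs internals:

import hashlib

# My mental model of the write path, not real btrfs code: 'table' is the
# bloody great in-memory hash structure, 'disk' stands in for storage.
table = {}   # block hash -> disk address
disk = []    # list index doubles as the disk address

def write_block(data: bytes) -> int:
    """Inline dedup: hash every block at write time, before it lands."""
    digest = hashlib.sha256(data).digest()  # low collision probability
    if digest in table:
        return table[digest]        # duplicate: reflink the existing block
    disk.append(data)               # new data: actually write it
    table[digest] = len(disk) - 1   # and remember where it lives
    return table[digest]

The point being that 'table' has to be consulted (and grown) on every single
write, which is where the memory cost comes from, right?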

But why?

Is there any urgency for dedup? What's wrong with storing the hash on disk
alongside the block and having a separate process dedup the written data
over time, something like the sketch below? Dedup'ing high-write-count data
immediately on write is counterproductive, because no sooner has it been
deduped than it is rendered obsolete by another COW write.
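Again a toy sketch, and again purely my own assumed structures: here the
write path just appends blocks untouched, and a background pass builds the
hash table as it scans, freeing duplicates as it finds them:

import hashlib

def dedup_pass(disk: list) -> list:
    """Delayed dedup: scan already-written blocks in the background,
    so the write path itself never waits on a hash lookup."""
    table = {}                       # built during the scan, then discarded
    refs = list(range(len(disk)))    # block address -> canonical copy
    for addr, data in enumerate(disk):
        if data is None:
            continue                 # hole freed by an earlier pass
        digest = hashlib.sha256(data).digest()
        if digest in table:
            refs[addr] = table[digest]  # point at the first copy...
            disk[addr] = None           # ...and free the duplicate
        else:
            table[digest] = addr        # first sighting of this block
    return refs

That way the big table only needs to exist for the duration of the scan,
not all the time.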

There's also the problem that inline dedup opens a potential window of risk
before the commit to disk, hopefully covered by the journal, whilst we seek
out a matching duplicate if there is one.

Help me out, peeps? Why is there such an urgency to have online dedup,
rather than a triggered/delayed dedup, similar to the current autodefrag
process?

Thank you. I'm sure the answer is obvious, but not to me!

* dedup/dedupe/deduplication 



