Re: About per-file dedup flag

Qu Wenruo posted on Tue, 12 Jan 2016 12:51:33 +0800 as excerpted:

> Duncan wrote on 2016/01/12 04:13 +0000:
>> Qu Wenruo posted on Tue, 12 Jan 2016 11:09:23 +0800 as excerpted:
>>
>>> Now we hope to add support to enable/disable dedup per-file.
>>> Much like current NODATACOW/NOCOMPRESS for inode.
>>
>> How is this going to work?
>>
>> NODATACOW/NOCOMPRESS can apply to a single file.  But a dup flag, by
>> definition, needs two files, except for the special case of parts of a
>> file duplicating other parts of the same file.
> 
> You are still thinking in terms of out-of-band dedup.

> So the things should be quite easy to understand:
> 
> For the normal case (no NODEDUP flag), valid data (page cache) is
> hashed to check whether it duplicates existing data.
> 
> For the NODEDUP case, the page cache is written directly to disk, or
> compressed and then written to disk.
> No hash is calculated.

Oh, _NO_DEDUP.  =:^)

That's the opposite of the dedup logic implied by the subject, and 
nothing in the original post hinted that the logic was actually the 
reverse.

NODEDUP indeed makes more sense: with a mount or filesystem option 
enabling dedup, dedup becomes the default, and a per-file nodedup 
exception is the next logical extension.

Thanks.  I knew I must be missing something.  A little negation makes a 
big difference!  =:^)
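For illustration, the flow Qu describes can be sketched in a few lines. 
This is a toy model, not btrfs code: the hash table, extent list, and 
the `nodedup` parameter are all assumptions standing in for the real 
in-band dedup machinery and the proposed per-inode flag.

```python
import hashlib

# Toy model of in-band dedup with a per-file NODEDUP flag (illustrative
# only): unflagged writes are hashed and deduplicated against a hash
# table; flagged writes go straight to "disk" with no hash calculated.

class DedupStore:
    def __init__(self):
        self.hash_table = {}   # hash -> extent index
        self.extents = []      # simulated on-disk extents

    def write_block(self, data: bytes, nodedup: bool = False) -> int:
        if nodedup:
            # NODEDUP case: direct write, no hash is calculated.
            self.extents.append(data)
            return len(self.extents) - 1
        h = hashlib.sha256(data).hexdigest()
        if h in self.hash_table:
            # Duplicate found: reuse the existing extent.
            return self.hash_table[h]
        self.extents.append(data)
        self.hash_table[h] = len(self.extents) - 1
        return self.hash_table[h]

store = DedupStore()
a = store.write_block(b"hello")                 # new extent
b = store.write_block(b"hello")                 # deduped, same extent
c = store.write_block(b"hello", nodedup=True)   # flagged: stored again
print(a, b, c)  # 0 0 1
```

The point of the sketch is the asymmetry: only the unflagged path ever 
touches the hash table, which is why NODEDUP also saves the hashing cost.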

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

