On 4/18/12 12:48 PM, NeilBrown wrote:
On Wed, 18 Apr 2012 08:58:14 +0800 Shaohua Li<shli@xxxxxxxxxx> wrote:
On 4/18/12 4:26 AM, NeilBrown wrote:
On Tue, 17 Apr 2012 07:46:03 -0700 Dan Williams<dan.j.williams@xxxxxxxxx> wrote:
On Tue, Apr 17, 2012 at 1:35 AM, Shaohua Li<shli@xxxxxxxxxx> wrote:
Discard for raid4/5/6 has a limitation. If a discard request is small, we only
discard on one disk, but we still need to calculate parity and write the parity
disk. To calculate the parity correctly, zero_after_discard must be guaranteed.
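The dependency on zero_after_discard can be shown with a small sketch (Python purely for illustration; the actual md code is C). For a read-modify-write parity update over a discarded range, md effectively assumes the discarded data reads back as zeros:

```python
# Illustrative sketch, not kernel code: read-modify-write parity update
# for a small discard that touches only one data disk of a RAID5 stripe.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity_after_discard(old_parity, old_data):
    # Treat the discard as writing zeros over old_data:
    #   new_parity = old_parity ^ old_data ^ new_data, with new_data == 0,
    # so new_parity = old_parity ^ old_data.
    return xor(old_parity, old_data)

d1 = bytes([0xAA] * 4)   # data disk 1 (range to be discarded)
d2 = bytes([0x55] * 4)   # data disk 2 (untouched)
p = xor(d1, d2)          # parity before the discard

p_new = parity_after_discard(p, d1)

# The stripe is only consistent if the discarded range really reads as
# zeros afterwards: reconstructing d2 from (zeroed d1, new parity) works.
assert xor(bytes(4), p_new) == d2
```

If the device returns indeterminate data after the discard instead of zeros, the stored parity no longer matches the data disks, which is exactly why zero_after_discard must be guaranteed here.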
I'm wondering if we could use the new bad blocks facility to mark
discarded ranges so we don't necessarily need determinate data after a
discard... but I have not looked into it beyond that.
The bad blocks framework can only store a limited number of bad ranges - 512
in the current implementation.
That would not be an acceptable restriction for discarded ranges.
You would need a bitmap of some sort if you wanted to record discarded ranges.
This appears to remove the unnecessary resync for a discarded range after a
crash or discard error, i.e. it is an enhancement. But from my understanding,
it can't solve the limitation I mentioned in the patch: for raid5 we still need
to discard a whole stripe at a time (discarding one disk but writing the parity
disk isn't good).
It is certainly not ideal, but is it worse than not discarding at all?
And would updating some sort of bitmap be just as bad as updating the parity?
How about treating a DISCARD request as a request to write a block full of
zeros, then at the lower level treating any request to write a block full of
zeros as a DISCARD request? So when the parity becomes zero, it gets discarded.
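The zero-write idea hinges on a simple parity property, sketched below (Python for illustration only): once every data block in a stripe has been "written" as zeros, the XOR parity is necessarily zero too, so the parity write itself can be converted back into a DISCARD by the lower level.

```python
# Illustrative sketch: treat DISCARD as "write zeros". When all data
# blocks of a stripe are zero, the parity is zero as well, so the
# lower layer can turn the parity write back into a DISCARD.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d1 = bytes(4)        # data disk 1, discarded -> written as zeros
d2 = bytes(4)        # data disk 2, discarded -> written as zeros
parity = xor(d1, d2)

# All-zero parity: eligible to be issued as a DISCARD downstream.
assert parity == bytes(4)
```

Note the conversion only triggers once the whole stripe's data is zero; a partial discard still leaves non-zero parity that must be written normally.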
Certainly it is best if the filesystem would discard whole stripes at a time,
and we should be sure to optimise that. But maybe there is still room to do
something useful with small discards?
Sure, it would be great if we could do small discards, but I don't see how to
do that with the bitmap approach. Let's take an example: data disk1, data
disk2, parity disk3. Say we discard some sectors of disk1. The suggested
approach is to mark that range bad. Then how do we deal with parity disk3? As
I said, writing parity disk3 isn't good. So do we mark the corresponding range
of parity disk3 bad too? If we did that and disk2 then failed, how could we
restore it?
Am I missing something, or are you talking about a different issue?
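The failure mode in this example can be made concrete with a sketch (Python for illustration; the 0xFF "junk" value is just a stand-in for whatever an indeterminate discard returns). If disk1's discarded range no longer reads back as the data the parity was computed from, rebuilding disk2 from disk1 and parity produces garbage:

```python
# Illustrative sketch: why indeterminate data after discard breaks
# RAID5 reconstruction of a surviving stripe member.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d1 = bytes([0xAA] * 4)       # data disk 1, before the discard
d2 = bytes([0x55] * 4)       # data disk 2
parity = xor(d1, d2)         # parity computed before the discard

# After discarding d1 on a device without zero_after_discard, reads of
# that range may return arbitrary junk rather than zeros:
d1_after = bytes([0xFF] * 4)  # hypothetical indeterminate data

# If disk2 fails, rebuilding it from d1 and parity now gives wrong data.
d2_rebuilt = xor(d1_after, parity)
assert d2_rebuilt != d2
```

Marking the parity range bad as well would avoid the wrong rebuild, but then the stripe has two "bad" members and disk2's data is unrecoverable, which is the objection being raised.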