Re: PATCH: md/raid1: sync_request_write() may complete r1_bio without rescheduling


On Mon, 16 Jul 2012 17:55:25 +0300 Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
wrote:

> Hi Neil,
> this is yet another issue I have encountered. It is indirectly related
> to the bad-blocks code, but I think it can be hit even when bad-blocks
> logging is disabled.
> Scenario:
> - RAID1 with one device A, one device missing
> - mdadm --manage /dev/mdX --add /dev/B (fresh device B added)
> - recovery of B starts
> At some point, end_sync_write() on B completes with an error. Then the
> following can happen:
> In sync_request_write() we do:
> 1/
> 	/*
> 	 * schedule writes
> 	 */
> 	atomic_set(&r1_bio->remaining, 1);
> 2/ then we schedule WRITEs, so for each WRITE scheduled we do:
> atomic_inc(&r1_bio->remaining);
> 3/ then we do:
> 	if (atomic_dec_and_test(&r1_bio->remaining)) {
> 		/* if we're here, all write(s) have completed, so clean up */
> 		md_done_sync(mddev, r1_bio->sectors, 1);
> 		put_buf(r1_bio);
> 	}
> So assume that end_sync_write() completed with an error before we got
> to 3/. Then in end_sync_write() we set R1BIO_WriteError and decrement
> r1_bio->remaining, so it becomes 1, so we bail out and don't call
> reschedule_retry().
> Then in 3/ we decrement r1_bio->remaining again, see that it is now 0,
> and complete the bio... without marking a bad block or failing the
> device. So we think this region is in-sync, while it is not, because
> we hit an IO error on B.
> I checked against the 2.6 versions, and such behavior makes sense
> there, because the R1BIO_WriteError and R1BIO_MadeGood cases do not
> exist (no bad-blocks functionality). But now we must call
> reschedule_retry() in both places (when needed). Does this make sense?
> I tested the following patch, which seems to work ok:

Thanks. I agree with your analysis.

I've made a small change to fix another problem with that code.


From af671b264f271563d343249886db16155a3130e0 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@xxxxxxx>
Date: Tue, 17 Jul 2012 11:43:47 +1000
Subject: [PATCH] md/raid1: close some possible races on write errors during
 resync

commit 4367af556133723d0f443e14ca8170d9447317cb
    md/raid1: clear bad-block record when write succeeds.

added a 'reschedule_retry' call possibility at the end of
end_sync_write, but didn't add matching code at the end of
sync_request_write.  So if the writes complete very quickly, or
scheduling makes it seem that way, then we can miss rescheduling
the request and the resync could hang.

Also commit 73d5c38a9536142e062c35997b044e89166e063b
    md: avoid races when stopping resync.

fixed a race condition in this same code in end_sync_write but didn't
make the matching change in sync_request_write.

This patch updates sync_request_write to fix both of those.
Patch is suitable for 3.1 and later kernels.
Patch is suitable for 3.1 and later kernels.

Reported-by: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
Original-version-by: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: NeilBrown <neilb@xxxxxxx>

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index e2e6ec2..506d055 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1892,8 +1892,14 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
 	if (atomic_dec_and_test(&r1_bio->remaining)) {
 		/* if we're here, all write(s) have completed, so clean up */
-		md_done_sync(mddev, r1_bio->sectors, 1);
-		put_buf(r1_bio);
+		int s = r1_bio->sectors;
+		if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
+		    test_bit(R1BIO_WriteError, &r1_bio->state))
+			reschedule_retry(r1_bio);
+		else {
+			put_buf(r1_bio);
+			md_done_sync(mddev, s, 1);
+		}
 	}
 }

