Re: [PATCH] raid5: Flush data when do_md_stop in order to avoid stripe_cache leaking.



On Mon, 16 Jul 2012 18:20:30 +1000 NeilBrown <neilb@xxxxxxx> wrote:

> On Fri, 13 Jul 2012 13:58:00 +0800 majianpeng <majianpeng@xxxxxxxxx> wrote:
> 
> > I found some kernel message below:
> > =============================================================================
> > [  432.213819] BUG raid5-md0 (Not tainted): Objects remaining on kmem_cache_close()
> > [  432.213820]
> > -----------------------------------------------------------------------------
> > [  432.213820]
> > [  432.213823] INFO: Slab 0xffffea00029cae00 objects=24 used=1
> > fp=0xffff8800a72bb910 flags=0x100000000004080
> > [  432.213825] Pid: 6207, comm: mdadm Not tainted 3.5.0-rc6+ #56
> > [  432.213827] Call Trace:
> > [  432.213833]  [<ffffffff8111b4a1>] slab_err+0x71/0x80
> > [  432.213836]  [<ffffffff810e4998>] ? __free_pages+0x18/0x30
> > [  432.213839]  [<ffffffff8111f45a>] ? kmem_cache_destroy+0x16a/0x3a0
> > [  432.213841]  [<ffffffff8111f47d>] kmem_cache_destroy+0x18d/0x3a0
> > [  432.213845]  [<ffffffffa007ef8d>] free_conf+0x2d/0xf0 [raid456]
> > [  432.213848]  [<ffffffffa007f8b1>] stop+0x41/0x60 [raid456]
> 
> This is certainly something to be concerned about...
> 
> 
> > 
> > By the following steps, it can reappear.
> > 1:create raid5
> > 2:dd if=/dev/zero of=/dev/md0 bs=1M
> > 3:when exec "mdadm -S /dev/md0", press "CTRL+C" in dd-command screen.
> 
> Thanks - this worked first time.
> Though the 'bug' I got was different.
> 
> [  526.720419] BUG: unable to handle kernel paging request at ffff8801363fbe30
> [  526.720877] IP: [<ffffffff8183cb34>] raid5_end_write_request+0x1f4/0x2f0
> 
> I think it is the same root cause though.
> 
> > 
> > In commit 271f5a9b8f8ae0db95de72779d115c9d0b9d3cc5
> > Author: NeilBrown <neilb@xxxxxxx>
> > Remove invalidate_partition call from do_md_stop
> > 
> > Because of the deadlock, Neil removed the flush of data. But in this patch,
> > Neil only considered the filesystem case, not raw block devices.
> 
> Please don't try to guess what I am thinking ;-)
> 
> Raw block device access is in many ways just like filesystem access.
> /dev/md0 provides access to a special filesystem that provides exactly one
> file which is 1-1 mapped to the block device, e.g. it uses the same page cache
> as other filesystems.
> 
> do_md_stop will fail with -EBUSY if any program has the block device open.
> When the last program closes /dev/md0, __blkdev_put will call sync_blockdev
> which will flush out any writes.  So when do_md_stop starts disassembling the
> array there should be no writes.
> But there are.
> 
> The problem happens if some other program (dd in this case) closes /dev/md0
> between the time that mdadm opens it and the time that mdadm issues the
> STOP_ARRAY ioctl.
> In this case dd isn't the last program with /dev/md0 open, so the
> sync_blockdev in __blkdev_put isn't called.
> 
> So if mdadm stopped the array by effectively doing
>    echo clear > /sys/block/md0/md/array_state
> (without holding /dev/md0 open) this race could not happen.  Either the
> 'echo' would fail with -EBUSY, or the last close of /dev/md0 would already
> have happened and the block device would have been flushed.
> 
> So the problem is that the ioctl holds the block device open preventing the
> last flush.  So the fix should be in md_ioctl.
> in md_ioctl we have a pointer to the bdev, so we don't need to try to find
> one with bdget_disk.
> 
> Unfortunately we need to call sync_blockdev *after* taking ->open_mutex
> otherwise the race still exists, and md_ioctl can't currently take open_mutex.
> 
> We need to fix md_set_readonly too - it has the same problem.
> Maybe I'll pass the bdev in to these two functions and get them to
> call sync_blockdev after taking the lock and checking the count.
> 
> But it's late today - I'll sort this out tomorrow.
> 

This is what I have come up with.

Thanks again,
NeilBrown


From 6191e4929a4a240b4eab9ca751715945dd9e3180 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@xxxxxxx>
Date: Tue, 17 Jul 2012 09:23:29 +1000
Subject: [PATCH] md: avoid crash when stopping md array races with closing
 other open fds.

md will refuse to stop an array if any other fd (or mounted fs) is
using it.
When any fs is unmounted or when the last open fd is closed, all
pending IO will be flushed (e.g. sync_blockdev call in __blkdev_put)
so there will be no pending IO to worry about when the array is
stopped.

However in order to send the STOP_ARRAY ioctl to stop the array one
must first get and open fd on the block device.
If some fd is being used to write to the block device and it is closed
after mdadm opens the block device, but before mdadm issues the
STOP_ARRAY ioctl, then there will be no last-close on the md device so
__blkdev_put will not call sync_blockdev.

If this happens, then IO can still be in-flight while md tears down
the array and bad things can happen (use-after-free and subsequent
havoc).

So in the case where do_md_stop is being called from an open file
descriptor, call sync_blockdev after taking the mutex to ensure there
will be no new openers.

This is needed when setting a read-write device to read-only too.

Cc: stable@xxxxxxxxxxxxxxx
Reported-by: majianpeng <majianpeng@xxxxxxxxx>
Signed-off-by: NeilBrown <neilb@xxxxxxx>

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 415c10e..7bd9c20 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -3921,8 +3921,8 @@ array_state_show(struct mddev *mddev, char *page)
 	return sprintf(page, "%s\n", array_states[st]);
 }
 
-static int do_md_stop(struct mddev * mddev, int ro, int is_open);
-static int md_set_readonly(struct mddev * mddev, int is_open);
+static int do_md_stop(struct mddev * mddev, int ro, struct block_device *bdev);
+static int md_set_readonly(struct mddev * mddev, struct block_device *bdev);
 static int do_md_run(struct mddev * mddev);
 static int restart_array(struct mddev *mddev);
 
@@ -3938,14 +3938,14 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
 		/* stopping an active array */
 		if (atomic_read(&mddev->openers) > 0)
 			return -EBUSY;
-		err = do_md_stop(mddev, 0, 0);
+		err = do_md_stop(mddev, 0, NULL);
 		break;
 	case inactive:
 		/* stopping an active array */
 		if (mddev->pers) {
 			if (atomic_read(&mddev->openers) > 0)
 				return -EBUSY;
-			err = do_md_stop(mddev, 2, 0);
+			err = do_md_stop(mddev, 2, NULL);
 		} else
 			err = 0; /* already inactive */
 		break;
@@ -3953,7 +3953,7 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
 		break; /* not supported yet */
 	case readonly:
 		if (mddev->pers)
-			err = md_set_readonly(mddev, 0);
+			err = md_set_readonly(mddev, NULL);
 		else {
 			mddev->ro = 1;
 			set_disk_ro(mddev->gendisk, 1);
@@ -3963,7 +3963,7 @@ array_state_store(struct mddev *mddev, const char *buf, size_t len)
 	case read_auto:
 		if (mddev->pers) {
 			if (mddev->ro == 0)
-				err = md_set_readonly(mddev, 0);
+				err = md_set_readonly(mddev, NULL);
 			else if (mddev->ro == 1)
 				err = restart_array(mddev);
 			if (err == 0) {
@@ -5346,15 +5346,17 @@ void md_stop(struct mddev *mddev)
 }
 EXPORT_SYMBOL_GPL(md_stop);
 
-static int md_set_readonly(struct mddev *mddev, int is_open)
+static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
 {
 	int err = 0;
 	mutex_lock(&mddev->open_mutex);
-	if (atomic_read(&mddev->openers) > is_open) {
+	if (atomic_read(&mddev->openers) > !!bdev) {
 		printk("md: %s still in use.\n",mdname(mddev));
 		err = -EBUSY;
 		goto out;
 	}
+	if (bdev)
+		sync_blockdev(bdev);
 	if (mddev->pers) {
 		__md_stop_writes(mddev);
 
@@ -5376,18 +5378,26 @@ out:
  *   0 - completely stop and dis-assemble array
  *   2 - stop but do not disassemble array
  */
-static int do_md_stop(struct mddev * mddev, int mode, int is_open)
+static int do_md_stop(struct mddev * mddev, int mode,
+		      struct block_device *bdev)
 {
 	struct gendisk *disk = mddev->gendisk;
 	struct md_rdev *rdev;
 
 	mutex_lock(&mddev->open_mutex);
-	if (atomic_read(&mddev->openers) > is_open ||
+	if (atomic_read(&mddev->openers) > !!bdev ||
 	    mddev->sysfs_active) {
 		printk("md: %s still in use.\n",mdname(mddev));
 		mutex_unlock(&mddev->open_mutex);
 		return -EBUSY;
 	}
+	if (bdev)
+		/* It is possible IO was issued on some other
+		 * open file which was closed before we took ->open_mutex.
+		 * As that was not the last close __blkdev_put will not
+		 * have called sync_blockdev, so we must.
+		 */
+		sync_blockdev(bdev);
 
 	if (mddev->pers) {
 		if (mddev->ro)
@@ -5461,7 +5471,7 @@ static void autorun_array(struct mddev *mddev)
 	err = do_md_run(mddev);
 	if (err) {
 		printk(KERN_WARNING "md: do_md_run() returned %d\n", err);
-		do_md_stop(mddev, 0, 0);
+		do_md_stop(mddev, 0, NULL);
 	}
 }
 
@@ -6476,11 +6486,11 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
 			goto done_unlock;
 
 		case STOP_ARRAY:
-			err = do_md_stop(mddev, 0, 1);
+			err = do_md_stop(mddev, 0, bdev);
 			goto done_unlock;
 
 		case STOP_ARRAY_RO:
-			err = md_set_readonly(mddev, 1);
+			err = md_set_readonly(mddev, bdev);
 			goto done_unlock;
 
 		case BLKROSET:


