Re: dm-table: delayed cleanup for dm_table_destroy()


On Tue, Mar 20 2012 at 11:35am -0400,
Hannes Reinecke <hare@xxxxxxx> wrote:

> Hi Mikulas,
> On 03/19/2012 10:45 PM, Mikulas Patocka wrote:
> > Hi Hannes
> > 
> > I wouldn't do this. This behavior actually existed in kernels <= 2.6.28.
> > And it caused trouble.
> > 
> > The problems are these:
> > 
> > Some code may hold a spinlock and call dm_table_put. Currently, 
> > dm_table_put just decrements the counter and the cleanup is done 
> > synchronously in dm_table_destroy. With your patch, cleanup would be done 
> > in dm_table_put --- you call a target destructor (which may sleep) in a 
> > non-sleeping context.
> > 
> > Some code may hold a mutex and call dm_table_put. If you call a target 
> > destructor from dm_table_put and it takes the same mutex, it deadlocks.
> > 
> > Currently, when dm_table_destroy exits, it is guaranteed that the table is 
> > destroyed and all target destructors have been called. With your patch, 
> > dm_table_destroy may exit and the table could be still alive because of 
> > some other reference. This may cause leaked references to some other files 
> > or devices. For example, suppose that the user has a device mapper device 
> > "dm" that redirects requests to "/dev/sda". The user assumes that if he 
> > runs "dmsetup remove dm" and this command returns, "/dev/sda" will not be 
> > open. Your patch breaks this assumption.
> > 
> Hmm. Yes, true.
> > Look at the commit "d58168763f74d1edbc296d7038c60efe6493fdd4" in 2.6.29 
> > --- I was actually removing the old-style reference counting (that is 
> > functionally equivalent to what your patch does) because of these three 
> > problems. The old code really caused a crash although it happened very 
> > rarely.
> > 
> Ah. That'll explain it.
> The actual problem I'm trying to track down is that I'm seeing an
> excessive duration for the 'resume' dm ioctl after a table update.
> I've got reports where the ioctl can take up to several seconds.
> Which (as this is multipath) causes extremely erratic behaviour.
> And the 'msleep' here is one of the obvious culprits.
> But I'll continue debugging, maybe I'll find something else.

I once wanted to replace all of the msleep(1) calls with cpu_relax(), but Alasdair wasn't sure whether cpu_relax() would provide the required delay.

dm-devel mailing list
