Re: [PATCH] btrfs: Handle ENOMEM gracefully in cow_file_range_async

On Fri, Jan 25, 2019 at 3:08 PM David Sterba <dsterba@xxxxxxx> wrote:
>
> On Wed, Jan 09, 2019 at 04:43:03PM +0200, Nikolay Borisov wrote:
> > If we run out of memory during delalloc filling in the compress case,
> > btrfs is going to BUG_ON. This is unnecessary since the higher level
> > code (btrfs_run_delalloc_range and its callers) gracefully handles
> > error conditions and errors out the page being submitted. Let's be a
> > model kernel citizen and not panic the machine due to ENOMEM, and
> > instead fail the IO.
> >
> > Signed-off-by: Nikolay Borisov <nborisov@xxxxxxxx>
> > ---
> >  fs/btrfs/inode.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> > index cde51ace68b5..b4b2d7f8a98b 100644
> > --- a/fs/btrfs/inode.c
> > +++ b/fs/btrfs/inode.c
> > @@ -1197,7 +1197,8 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page,
> >                        1, 0, NULL);
> >       while (start < end) {
> >               async_cow = kmalloc(sizeof(*async_cow), GFP_NOFS);
> > -             BUG_ON(!async_cow); /* -ENOMEM */
> > +             if (!async_cow)
> > +                     return -ENOMEM;
>
> The error handling here is very simple and breaks the usual rule that
> all functions must clean up after themselves before returning to the
> caller.
>
> This is async submission so it can be expected to do deferred cleanup,
> but this cannot be easily seen from the function and should be better
> documented.
>
> What happens with the inode reference (igrab), what happens with all
> work queued until now, or extent range state bits.
>
> It's true that btrfs_run_delalloc_range does error handling, though it
> does that from 4 different types of conditions (nocow, prealloc,
> compression and async). I'd really like to see explained that there's
> nothing left and cause surprises later. The memory allocation failures
> are almost never tested so we have to be sure we understand the error
> handling code flow. I can't say I do after reading your changelog and
> the correctness proof is left as an exercise.
>
> The error handling was brought by 524272607e882d04 "btrfs: Handle
> delalloc error correctly to avoid ordered extent hang", so there's a
> remote chance to cause lockups when the state is not cleaned up.

So taking a quick look at this, just returning does not seem correct:

- If you have, for example, a delalloc range of 1Mb, and submitting the
  first 512K async job succeeds but the second one fails due to the out
  of memory issue, then writepage_delalloc() will set the error bit on
  the page starting at offset 0 of the range, and not the one at 512K.
  What if the first submitted range succeeds? What happens? I think the
  error handling from the caller should be aware if writeback already
  started for a part of the range, and if so operate only on the
  remainder of the range.

Did you actually run a test where an iteration other than the first one
fails, and see that there were no hangs, that the error is reported to
user space (by means of an fsync for example), that there are no leaks
(kmemleak helps here), etc?

This is very fishy because of the async nature of the compression path.
Each submitted job also does error handling if anything fails, and they
all get a reference for the first page of the whole range. Did you check
if we don't end up with 2 tasks unlocking that page, for example?

Pre-allocating the async_cow structures at the beginning of
cow_file_range_async(), as suggested in another reply, seems ok to me
and would not need any adjustments of the existing error handling code.


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”

