On 13.03.20 at 21:58, Josef Bacik wrote:
> In debugging a generic/320 failure on ppc64, Nikolay noticed that
> sometimes we'd ENOSPC out with plenty of space to reclaim if we had
> committed the transaction. He further discovered that this was because
> there was a priority ticket that was small enough to fit in the free
> space currently in the space_info.
>
> Consider the following scenario. There is no more space to reclaim in
> the fs without committing the transaction. Assume there's 1MiB of space
> free in the space_info, but there are pending normal tickets with 2MiB
> of reservations.
>
> Now a priority ticket comes in with a 0.5MiB reservation. Because we
> have normal tickets pending, we add ourselves to the priority list,
> despite the fact that we could satisfy this reservation.
>
> The flushing machinery now gets to the point where it wants to commit
> the transaction, but because there's a 0.5MiB ticket on the priority
> list and we have 1MiB of free space, we assume the ticket will be
> granted soon, so we bail without committing the transaction.
>
> Meanwhile the priority flushing does not commit the transaction and
> eventually fails with ENOSPC. Then all other tickets are failed with
> ENOSPC because we were never able to actually commit the transaction.
>
> The fix is to simply grant the priority flusher its reservation,
> because there is space to make the reservation. Priority flushers by
> definition take priority, so they are allowed to make their
> reservations before any previously queued normal tickets. By not
> adding this priority ticket to the list, the normal flushing mechanisms
> will then commit the transaction and everything will continue normally.
>
> We still need to serialize ourselves with other priority tickets, so if
> there are any tickets on the priority list, we add ourselves to that
> list in order to maintain the serialization between priority tickets.
>
> Signed-off-by: Josef Bacik <josef@xxxxxxxxxxxxxx>

Reviewed-by: Nikolay Borisov <nborisov@xxxxxxxx>
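
For illustration only, a minimal standalone C sketch of the decision the changelog describes. This is not the actual btrfs code; the struct, fields, and function names (space_info_model, grant_priority_now, and so on) are invented to model the idea: a priority ticket is granted immediately when the space_info has enough free space and no earlier priority tickets are queued, and only otherwise joins the priority list.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical, simplified model of the reservation decision -- not
 * the btrfs implementation.  Names and fields are invented for
 * illustration only.
 */
struct space_info_model {
	unsigned long long free_bytes;	/* space currently available */
	int num_priority_tickets;	/* tickets already on the priority list */
	int num_normal_tickets;		/* tickets on the normal list */
};

/*
 * Decide whether a priority reservation can be granted right away.
 * Per the fix described above: grant immediately when there is enough
 * free space AND no earlier priority tickets are queued (to keep
 * priority tickets serialized among themselves).  Pending normal
 * tickets no longer force the priority ticket to wait.
 */
static bool grant_priority_now(struct space_info_model *si,
			       unsigned long long bytes)
{
	if (si->num_priority_tickets > 0)
		return false;	/* serialize behind earlier priority tickets */
	if (si->free_bytes >= bytes) {
		si->free_bytes -= bytes;	/* take the reservation */
		return true;
	}
	return false;		/* not enough space: queue and flush */
}

int main(void)
{
	/*
	 * The scenario from the changelog: 1MiB free, two pending
	 * normal tickets, a 0.5MiB priority request arrives.
	 */
	struct space_info_model si = {
		.free_bytes = 1 << 20,
		.num_priority_tickets = 0,
		.num_normal_tickets = 2,
	};

	if (grant_priority_now(&si, 512 * 1024))
		printf("priority ticket granted immediately, %llu bytes left\n",
		       si.free_bytes);
	else
		printf("priority ticket queued behind other tickets\n");

	return 0;
}

Checking num_priority_tickets before the free-space test is what preserves the serialization between priority tickets mentioned in the last paragraph of the changelog.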
