On 04/09/2017 08:10 PM, Chris Murphy wrote:
> On Sat, Apr 8, 2017 at 2:19 PM, Hans van Kranenburg
> <hans.van.kranenburg@xxxxxxxxxx> wrote:
>> After changing to nossd, another thing happened. The expiry process,
>> which normally takes about 1.5 hour to remove ~2500 subvolumes (keeping
>> it queued up to a 100 orphans all the time), suddenly took the entire
>> rest of the day, not being done before the nightly backups had to start
>> again at 10PM...
>
> Is this 'btrfs sub del' with 100 subvolumes listed? What happens if
> the delete command is issued with all 2500 at once? Deleting snapshots
> is definitely expensive, and deleting them one at a time is more
> expensive in total time than deleting them in one whack. But I've
> never deleted 100 or more at once.

It doesn't really matter how many, because it still cleans only one at a
time, in the order they were submitted. (This also means that if you
delete 1000 snapshots of the same huge subvolume, it will do all the
inefficient backref walking 1000 times, etc.)

Doing a subvolume delete (or several) on the command line only appends
them to this list, besides removing some tree items so that they are no
longer visible as normal subvolumes.

The list of subvolume ids queued for cleaning can be found in tree 1,
with keys of (ORPHAN_OBJECTID, ORPHAN_ITEM_KEY, <subvolid>).

The 100 is a somewhat arbitrarily chosen number that makes sure the
cleaner keeps working at full speed all the time, and also gives me an
acceptable time to wait for it to finish when I interrupt the process.

Here's some more background and a snippet of example code (at the end of
the commit message) which looks almost 100% like what I have in my backup
expiry code:

https://github.com/knorrie/python-btrfs/commit/9d697ba7d4782afbb070bf057aa4ff3e3aa51be0

--
Hans van Kranenburg
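
For illustration, here's a rough sketch of how those queued-for-cleaning
subvolume ids could be listed with python-btrfs. The low-level search
helper and Key usage are written from memory and may differ between
versions of the library, and the mount point path is a placeholder; the
authoritative part is the key layout described above, not the exact calls.

  #!/usr/bin/env python3
  # Sketch: list subvolume ids queued for cleaning, i.e. the
  # (ORPHAN_OBJECTID, ORPHAN_ITEM_KEY, <subvolid>) items in tree 1.
  # Assumes python-btrfs; btrfs.ioctl.search() and btrfs.ctree.Key()
  # usage is approximate and may not match every version of the library.

  import btrfs

  ROOT_TREE_OBJECTID = 1            # tree 1, the tree of tree roots
  ORPHAN_OBJECTID = (1 << 64) - 5   # -5ULL in the kernel headers
  ORPHAN_ITEM_KEY = 48

  fs = btrfs.FileSystem('/mountpoint')  # placeholder path
  min_key = btrfs.ctree.Key(ORPHAN_OBJECTID, ORPHAN_ITEM_KEY, 0)
  max_key = btrfs.ctree.Key(ORPHAN_OBJECTID, ORPHAN_ITEM_KEY, (1 << 64) - 1)

  # The offset of each orphan item is the id of a subvolume that is
  # still waiting for the cleaner.
  queued = [header.offset
            for header, _ in
            btrfs.ioctl.search(fs.fd, ROOT_TREE_OBJECTID, min_key, max_key)]
  print("{} subvolumes queued for cleaning: {}".format(len(queued), queued))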

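And, purely as an illustration of the "keep up to 100 queued" idea (this
is not the actual expiry code; the commit linked above shows what that
looks like), a throttled loop could be shaped like the following, assuming
a hypothetical count_queued_for_cleaning() helper like the sketch above
and a list of snapshot paths to expire:

  #!/usr/bin/env python3
  # Illustration of throttled expiry: never have more than ~100
  # subvolumes queued for cleaning at the same time.
  # count_queued_for_cleaning and snapshots_to_expire are hypothetical.

  import subprocess
  import time

  MAX_QUEUED = 100

  def expire(snapshots_to_expire, count_queued_for_cleaning):
      todo = list(snapshots_to_expire)
      while todo:
          room = MAX_QUEUED - count_queued_for_cleaning()
          if room <= 0:
              time.sleep(30)  # let the cleaner catch up first
              continue
          batch, todo = todo[:room], todo[room:]
          # btrfs subvolume delete accepts multiple paths at once; this
          # only queues them, the cleaner still handles them one by one.
          subprocess.check_call(['btrfs', 'subvolume', 'delete'] + batch)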