The way it works in snazzer (and btrbk, and I think btrfs-sxbackup as well) is that local snapshots continue to happen as normal (e.g. daily or hourly), so when your backup media or backup server is finally available again, the size of each individual incremental is still the same as usual; it just has to perform more of them.

Separating snapshotting from transport also lends itself to more flexibility IMHO. E.g. with snazzer I can keep multiple physical backup media in sync with each other even if I only rotate/attach those disks once a week/month (maintaining backup filesystems in parallel). The snazzer-receive script is very dumb - it just receives all the missing snapshots from the source. However, it does filter them first, cf. "btrfs subvolume list /subvolume | snazzer-prune-candidates --invert", in case some would just be deleted again shortly afterwards according to the retention policy (a rough sketch of this is below).

For the ssh transport you can do the same thing, but in series: push the snapshots up to a local server and then on to remote storage elsewhere (maintaining backup filesystems in series). Because the snapshotting, transport and pruning operations are asynchronous, the logic for all of this is relatively simple.

It's thanks to seeing send/receive struggles such as yours on this list (which have also happened to me, but only very rarely: it seems I tend to have reliable connectivity), among other issues, that I wrote snazzer-measure. It appends reproducible sha512sums and PGP signatures to a measurements file for each snapshot. Measurements happen more than just once, so they're timestamped and tagged with the hostname - the hope is that I should spot any corruption which happens after the first measurements are taken. This too is a separate/asynchronous operation (it's the most I/O- and CPU-intensive operation of all).
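
To make the filter-then-receive idea concrete, here is a rough sketch - not snazzer-receive itself. It assumes "snazzer-prune-candidates --invert" prints one snapshot path per line (check the real output format), and the hostname, /home/.snapshotz and /backup paths are made up for illustration:

    # List the sender's snapshots, drop the ones retention would prune
    # again shortly anyway, then receive whatever we don't already have.
    SRC=sendinghost                                   # hypothetical hostname
    KEEP=$(ssh "$SRC" "btrfs subvolume list /home | snazzer-prune-candidates --invert")
    for snap in $KEEP; do
        name=$(basename "$snap")
        # Skip snapshots that already exist on the backup filesystem
        [ -e "/backup/home/.snapshotz/$name" ] && continue
        # Full send shown for brevity; a real script would pick a common
        # parent snapshot and use "btrfs send -p" for incrementals.
        ssh "$SRC" "btrfs send /home/.snapshotz/$name" \
            | btrfs receive /backup/home/.snapshotz/
    done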
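
And for the snazzer-measure side, something in this spirit - again only a sketch under assumptions of mine: the measurements file location and record layout are invented, and hashing a deterministically sorted GNU tar of the snapshot is just a stand-in for however snazzer actually derives a reproducible sha512sum:

    SNAP=/home/.snapshotz/2016-03-01T000000Z         # example snapshot path
    MEASUREMENTS=/var/lib/backup-measurements.txt    # hypothetical location
    # Reproducible-ish archive of the (read-only) snapshot, hashed
    SUM=$(tar --sort=name --mtime=@0 --owner=0 --group=0 -C "$SNAP" -cf - . \
          | sha512sum | cut -d' ' -f1)
    # ASCII-armoured PGP signature over the hash
    SIG=$(printf '%s' "$SUM" | gpg --armor --detach-sign)
    # Append a timestamped, hostname-tagged record so later measurements
    # can be compared against the first one taken for each snapshot
    printf '%s %s %s sha512 %s\n%s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(hostname)" "$SNAP" "$SUM" "$SIG" \
        >> "$MEASUREMENTS"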
