> That's the problem.
>
> Deduped files caused heavy overload for backref walk.
> And send has to do backref walk, and you see the problem...

Interesting! But should it really be able to make btrfs send use
up >15GiB of RAM and cause a kernel panic because of that? The btrfs
doesn't even have that much metadata on-disk in total.

> I'm very interested how heavily deduped the file is.

So am I; how could I get my hands on that information? Are that
particular file's extents what cause btrfs send's memory usage to
spiral out of control?

> If it's just all 0 pages, hole punching is more effective than dedupe,
> and causes 0 backref overhead.

I did punch holes into the disk images I have stored on it by mounting
and fstrim'ing them, and the duperemove command I used has a flag that
ignores all-zero pages (those get compressed down to next to nothing
anyway), but it's likely that I ran duperemove once or twice before I
knew about that flag.

Is there a way to find such extents that could cause the backref walk
to overload?

Thanks,
Atemu
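
P.S. For the "how heavily deduped is it" question, the closest thing I
can think of is the FIEMAP ioctl: as far as I understand, btrfs marks
extents that are referenced from more than one place with the "shared"
flag (the same flag filefrag -v prints), though that only says *that* an
extent is shared, not how many references it has. A rough, untested
Python sketch that counts shared extents for a single file (constants
copied from <linux/fiemap.h>, Linux only):

#!/usr/bin/env python3
# Sketch: walk a file's extents via the FIEMAP ioctl and count how
# many carry the FIEMAP_EXTENT_SHARED flag.
import fcntl
import struct
import sys

FS_IOC_FIEMAP        = 0xC020660B   # _IOWR('f', 11, struct fiemap)
FIEMAP_FLAG_SYNC     = 0x00000001
FIEMAP_EXTENT_LAST   = 0x00000001
FIEMAP_EXTENT_SHARED = 0x00002000
FIEMAP_MAX_OFFSET    = 0xFFFFFFFFFFFFFFFF

HDR_FMT = "=QQLLLL"       # struct fiemap (header only)
EXT_FMT = "=QQQQQLLLL"    # struct fiemap_extent
HDR_LEN = struct.calcsize(HDR_FMT)
EXT_LEN = struct.calcsize(EXT_FMT)
BATCH   = 128             # extents requested per ioctl call

def extents(fd):
    start = 0
    while True:
        buf = bytearray(HDR_LEN + BATCH * EXT_LEN)
        # fm_start, fm_length, fm_flags, fm_mapped_extents,
        # fm_extent_count, fm_reserved
        struct.pack_into(HDR_FMT, buf, 0, start, FIEMAP_MAX_OFFSET,
                         FIEMAP_FLAG_SYNC, 0, BATCH, 0)
        fcntl.ioctl(fd, FS_IOC_FIEMAP, buf)
        mapped = struct.unpack_from(HDR_FMT, buf, 0)[3]
        if mapped == 0:
            return
        last = False
        for i in range(mapped):
            fields = struct.unpack_from(EXT_FMT, buf, HDR_LEN + i * EXT_LEN)
            logical, length, flags = fields[0], fields[2], fields[5]
            yield flags
            start = logical + length
            last = bool(flags & FIEMAP_EXTENT_LAST)
        if last:
            return

shared = total = 0
with open(sys.argv[1], "rb") as f:
    for flags in extents(f.fileno()):
        total += 1
        if flags & FIEMAP_EXTENT_SHARED:
            shared += 1

print(f"{shared} of {total} extents are flagged as shared")

If that turns up shared extents, I think the "physical" offsets that
filefrag -v reports (which on btrfs are addresses in the btrfs logical
address space) could then be fed to btrfs inspect-internal
logical-resolve to list which files reference a given extent, though I
don't know how well that holds up on a heavily reflinked disk image.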
