Hi Brad,

Just a user here, not a dev, but I think I may have run into a similar bug
about 6 months ago. At the time I was running Debian stable (IIRC that is
kernel 3.16, and probably btrfs-progs of a similar vintage).

The filesystem was originally a 2 x 6TB array, with a 4TB drive added later
when space began to get low. I'm pretty sure I did at least a partial
balance after adding the 4TB drive, but something like 1TB free on each of
the two 6TB drives and 2TB on the 4TB would have been 'good enough for me'.
It was nearly full again when a copy unexpectedly reported out-of-space,
and balance didn't fix it. In retrospect btrfs had probably run out of
unallocated space for new chunks on both 6TB drives.

I'm not sure what actually fixed it. I upgraded to Debian testing
(something I was going to do soon anyway), and I might have also
temporarily added another drive. (I have since had a 6TB drive fail, and
btrfs is running happily on 2x4TB and 1x6TB.)

More inline below.

On 24 March 2016 at 05:34, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> On Wed, Mar 23, 2016 at 10:51 AM, Brad Templeton <bradtem@xxxxxxxxx> wrote:
>> Thanks for assist. To reiterate what I said in private:
>>
>> a) I am fairly sure I swapped drives by adding the 6TB drive and then
>> removing the 2TB drive, which would not have made the 6TB think it was
>> only 2TB. The btrfs statistics commands have shown from the beginning
>> the size of the device as 6TB, and that after the remove, it had 4TB
>> unallocated.
>
> I agree this seems to be consistent with what's been reported.
>
<snip>
>>
>> Some options remaining open to me:
>>
>> a) I could re-add the 2TB device, which is still there. Then balance
>> again, which hopefully would move a lot of stuff. Then remove it again
>> and hopefully the new stuff would distribute mostly to the large drive.
>> Then I could try balance again.
>
> Yeah, to do this will require -f to wipe the signature info from that
> drive when you add it. But I don't think this is a case of needing
> more free space, I think it might be due to the odd number of drives
> that are also fairly different in size.
>

If I recall correctly, device delete did remove the btrfs signature when I
used it, but I could be wrong.

> But then what happens when you delete the 2TB drive after the balance?
> Do you end up right back in this same situation?
>

If balance manages to get the data properly distributed across the drives,
then the 2TB should be mostly empty and device delete should be able to
remove it. I successfully added a 4TB disk, did a balance, and then removed
a failing 6TB from the 3-drive array mentioned above. (Rough sketches of
the commands I have in mind are at the end of this mail.)

>
>>
>> b) It was suggested I could (with a good backup) convert the drive to
>> non-RAID1 to free up tons of space and then re-convert. What's the
>> precise procedure for that? Perhaps I can do it with a limit to see how
>> it works as an experiment? Any way to specifically target the blocks
>> that have their two copies on the 2 smaller drives for conversion?
>
> btrfs balance -dconvert=single -mconvert=single -f ## you have to
> use -f to force reduction in redundancy
> btrfs balance -dconvert=raid1 -mconvert=raid1

I would probably try upgrading to a newer kernel + btrfs-progs first.

Before converting back to raid1, I would also run btrfs device usage and
check whether all devices have approximately the same amount of
unallocated space. If they don't, maybe try running a full balance again.

<snip>
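To make option (a) concrete, the sequence I have in mind looks roughly
like this. It's only a sketch from memory, not something I've re-run for
this reply: /dev/sdX and /mnt/data are placeholders for the 2TB device
and your mount point, so adjust them to your setup and sanity-check each
step first.

    # Temporarily re-add the old 2TB drive; -f overwrites the stale btrfs
    # signature still on it from before it was removed.
    btrfs device add -f /dev/sdX /mnt/data

    # Full balance so existing chunks get spread across all of the devices.
    btrfs balance start /mnt/data

    # Check that unallocated space now looks reasonably even per device.
    btrfs device usage /mnt/data

    # Remove the 2TB drive again; its chunks get migrated off to the
    # remaining drives as part of the delete.
    btrfs device delete /dev/sdX /mnt/data

If the balance leaves the 2TB drive mostly empty, that last delete should
be quick; if not, you may end up right back where you started, as Chris
says.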
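For option (b), the full invocations for the convert-and-convert-back that
Chris describes would be something like the following. Again just a
sketch: /mnt/data is a placeholder, and the limit= filter is only worth
trying if your kernel and btrfs-progs are new enough to support it.

    # Drop data and metadata from raid1 to single; -f is required because
    # this reduces redundancy.
    btrfs balance start -dconvert=single -mconvert=single -f /mnt/data

    # Convert back to raid1 once space has been freed up and redistributed.
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data

    # As a small-scale experiment first, the limit filter caps how many
    # chunks a single balance run will touch, e.g. 10 data chunks:
    btrfs balance start -dconvert=single,limit=10 -f /mnt/data

Obviously the filesystem has no redundancy while it is sitting in single,
so a good backup beforehand, as you said.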
Andrew
