This is the btrfsck output for a real-world rsync backup onto a btrfs raid1 mirror across 4 drives (yes, I know that at the moment btrfs raid1 only ever keeps two copies of the data...):

checking extents
checking fs roots
root 5 inode 18446744073709551604 errors 2000
root 5 inode 18446744073709551605 errors 1
root 256 inode 18446744073709551604 errors 2000
root 256 inode 18446744073709551605 errors 1
found 3183604633600 bytes used err is 1
total csum bytes: 3080472924
total tree bytes: 28427821056
total fs tree bytes: 23409475584
btree space waste bytes: 4698218231
file data blocks allocated: 3155176812544
 referenced 3155176812544
Btrfs Btrfs v0.19
Command exited with non-zero status 1

So: what does that little lot mean?

The drives were mounted and active during an unexpected power-plug pull :-(

Is it safe to mount again, or are there other checks/fixes needed?

Thanks,

Martin
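An aside on those huge inode numbers, in case it helps anyone reading: objectids in btrfs are unsigned 64-bit values, so numbers that close to 2^64 are really small negative constants, which (as far as I can tell) btrfs reserves for internal bookkeeping items rather than ordinary files. A quick sketch to decode them (plain Python arithmetic, nothing btrfs-specific):

```python
# The "inode" numbers reported by btrfsck are unsigned 64-bit objectids.
# Viewing them in two's complement shows they are small negative
# constants, i.e. reserved internal items, not real inodes.
for objectid in (18446744073709551604, 18446744073709551605):
    signed = objectid - 2**64  # two's-complement reinterpretation
    print(objectid, "->", signed)

# 18446744073709551604 -> -12
# 18446744073709551605 -> -11
```

So the errors are against two internal items (-12 and -11), not against files in the backup itself.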
