Re: Replacing drives with larger ones in a 4 drive raid1

> a "replace" of the 3rd 6 TB drive onto a second 8 TB drive is currently in progress (at high speed).

This second replace is now finished, and it looks OK now:

	# btrfs replace status /data
	Started on 16.Jun 01:15:17, finished on 16.Jun 11:40:30, 0 write errs, 0 uncorr. read errs

Transfer rate of ~134 MiB/s, or ~2.2 hours per TiB.
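
(Back-of-the-envelope check: assuming the replace copied roughly the ~4.8 TiB allocated on the source device (4.79 TiB data plus ~10 GiB metadata) in the 10h25m13s = 37513 s it ran, that works out to:)

	# echo 'scale=1; 4.8 * 1024 * 1024 / 37513' | bc
	134.1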

	# btrfs device usage  /data 
	/dev/dm-2, ID: 3
	   Device size:             5.46TiB
	   Data,RAID1:              4.85TiB
	   Metadata,RAID1:          3.00GiB
	   Unallocated:           620.03GiB

	/dev/mapper/AAAAAAAA_enc, ID: 1
	   Device size:             7.28TiB
	   Data,RAID1:              6.66TiB
	   Metadata,RAID1:         12.69GiB
	   System,RAID1:           64.00MiB
	   Unallocated:           620.31GiB

	/dev/mapper/BBBBBBBB_enc, ID: 2
	   Device size:             7.28TiB
	   Data,RAID1:              4.79TiB
	   Metadata,RAID1:          9.69GiB
	   System,RAID1:           64.00MiB
	   Unallocated:           676.31GiB
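
Side note on the output above: the new ID 2 device reports a 7.28 TiB device size, but only about 5.46 TiB of it is accounted for (4.79 TiB data + ~10 GiB metadata + 676 GiB unallocated), i.e. the filesystem still seems to use only the old 6 TB drive's capacity. If I'm reading that right, it still needs to be grown before the extra space becomes usable, something like:

	# btrfs filesystem resize 2:max /data
	# btrfs device usage /data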

However, while the replace was in progress, the status output showed some weird values, like this completion percentage above 100% at 9am today (about 3 hours before it finished):

	# btrfs replace status /data       
	272.1% done, 0 write errs, 0 uncorr. read errs

Also, contrary to the first replace, the filesystem info was not updated while this one was running, and looked like this (for example):

	# btrfs device usage  /data 
	/dev/dm-2, ID: 3
	   Device size:             5.46TiB
	   Data,RAID1:              4.85TiB
	   Metadata,RAID1:          3.00GiB
	   Unallocated:           620.03GiB

	/dev/dm-3, ID: 2
	   Device size:             5.46TiB
	   Data,RAID1:              4.79TiB
	   Metadata,RAID1:          9.69GiB
	   System,RAID1:           64.00MiB
	   Unallocated:           676.31GiB

	/dev/mapper/AAAAAAAA_enc, ID: 1
	   Device size:             7.28TiB
	   Data,RAID1:              6.66TiB
	   Metadata,RAID1:         12.69GiB
	   System,RAID1:           64.00MiB
	   Unallocated:           620.31GiB

	/dev/mapper/BBBBBBBB_enc, ID: 0
	   Device size:             7.28TiB
	   Unallocated:             5.46TiB

I'm happy it worked; I'm just wondering why it behaved so oddly this second time.

During the first replace, my Fedora 23 system was booted in emergency mode, whereas for this second one it was booted normally.

I'm going to reboot now to update the kernel from 4.5.5 to 4.5.6, and then continue replacing drives.
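
For the remaining swaps the per-drive sequence should be the same as this one, roughly (the device IDs and mapper paths below are placeholders, not my actual ones):

	# btrfs replace start <old-devid> /dev/mapper/<new-drive>_enc /data
	# btrfs replace status /data
	# btrfs filesystem resize <devid>:max /data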
