Re: btrfs-tools: missing device delete/remove cancel option on disk failure

On 08.05.2016 at 02:54, Martin wrote:
> On 07/05/16 10:39, g6094199@xxxxxxxxxx wrote:
>> a brand new disk which has an upcounting raw error rate
> Note that this is the "raw error rate".
>
> For a brand new disk being run for the first time at maximum data
> writes, the "raw error rate" may well be expected to increase. Hard
> disks deliberately make use of error correction for normal operation.
>
> More importantly, what do the other smart values show?
>
> For myself, my concern would only be raised for sector failures.
>
>
> And... A very good test for a new disk is to first run "badblocks" to
> test the disk surface. Read the man page first. (Hint: Non-destructive
> is slow, destructive write is fast...)
>
> Good luck,
> Martin
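
For reference, the destructive badblocks pass suggested above would be something like the following, assuming the disk may be wiped and is still /dev/sdf:

    badblocks -wsv /dev/sdf    # -w destructive write test, -s show progress, -v verbose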

I guess this log settles the discussion:

[44388.089321] sd 8:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[44388.089334] sd 8:0:0:0: [sdf] tag#0 CDB: Read(10) 28 00 00 43 1c 48 00 00 08 00
[44388.089340] blk_update_request: I/O error, dev sdf, sector 35185216

...

May  7 06:39:31 NAS-Sash kernel: [35777.520490] sd 8:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
May  7 06:39:31 NAS-Sash kernel: [35777.520500] sd 8:0:0:0: [sdf] tag#0 Sense Key : Medium Error [current]
May  7 06:39:31 NAS-Sash kernel: [35777.520508] sd 8:0:0:0: [sdf] tag#0 Add. Sense: Unrecovered read error
May  7 06:39:31 NAS-Sash kernel: [35777.520516] sd 8:0:0:0: [sdf] tag#0 CDB: Read(10) 28 00 03 84 ee 30 00 00 04 00
May  7 06:39:31 NAS-Sash kernel: [35777.520522] blk_update_request: critical medium error, dev sdf, sector 472347008
May  7 06:39:35 NAS-Sash kernel: [35781.364117] sd 8:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
May  7 06:39:35 NAS-Sash kernel: [35781.364138] sd 8:0:0:0: [sdf] tag#0 Sense Key : Medium Error [current]
May  7 06:39:35 NAS-Sash kernel: [35781.364146] sd 8:0:0:0: [sdf] tag#0 Add. Sense: Unrecovered read error
May  7 06:39:35 NAS-Sash kernel: [35781.364154] sd 8:0:0:0: [sdf] tag#0 CDB: Read(10) 28 00 03 84 ee 30 00 00 04 00

Also, different vendors use the raw error rate differently: some count it up constantly, others only log real, destructive errors.
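
For what it is worth, this is roughly how I would dump the other SMART values Martin asked about, assuming smartmontools is installed and the disk is still reachable as /dev/sdf:

    smartctl -A /dev/sdf          # attribute table: Reallocated_Sector_Ct, Current_Pending_Sector, ...
    smartctl -l error /dev/sdf    # the drive's internal error log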

But as luck would have it, the system froze completely, without even a log entry. Now the file system is broken... argh!

Now I need some advice on what to do next, best-practice-wise. Try to mount degraded and copy off all the data? Then I will need at least 9 TB of new storage... :-(
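
Roughly what I have in mind, as a sketch (device name, mount point and target path are only placeholders):

    mount -o degraded,ro /dev/sdX /mnt      # read-only degraded mount via a surviving member device
    rsync -aHAX /mnt/ /new/storage/         # copy everything off while it still mounts
    btrfs restore /dev/sdX /new/storage     # fallback: pull files from the unmounted fs if even a degraded mount fails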


sash



