RAID1: sdb has a lot more work than sda



At the moment I'm trying to find a bottleneck on my LAMP server running
openSUSE 11.4 (Kernel

Sometimes the system performs very poorly because of high I/O wait.
Watching the system's disk access with "atop -dD", I can see that sda is
at about 10% busy most of the time, while sdb is sometimes at 100% or
higher at the same time.
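Besides atop, the same imbalance can be read directly from /proc/diskstats, whose 13th field is the milliseconds each device spent doing I/O. A small sketch (the sample lines fed to awk below are made up for illustration, not taken from my system):

```shell
# Field 13 of /proc/diskstats = total ms the device spent doing I/O.
# On a live system you would run:  awk '$3 ~ /^sd[ab]$/ { print $3, $13 }' /proc/diskstats
# The here-document below stands in for real /proc/diskstats content.
awk '$3 ~ /^sd[ab]$/ { print $3, $13 }' <<'EOF'
   8       0 sda 1000 0 8000 500 200 0 1600 300 0 700 800
   8      16 sdb 1000 0 8000 500 200 0 1600 9000 0 9500 9600
EOF
```

Sampling that counter twice and taking the difference gives roughly the busy percentage that atop reports per disk.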

In my opinion, both disks in a RAID1 system should have nearly the same load.

Or am I wrong about this?

Both hard disks were replaced three weeks ago; /proc/mdstat showed that
the rebuild was successful and the array is functional. Today I changed
the SATA cable and the port on the mainboard for sdb, but the behaviour
is still the same. Both disks passed the extended SMART self-test.
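For reference, this is the kind of check I mean: an array is fully synced when /proc/mdstat shows "[UU]" for both RAID1 members. A minimal sketch (the mdstat content below is a made-up example, not my actual output):

```shell
# "[2/2] [UU]" means 2 of 2 members are active and both are up;
# "[U_]" or "[_U]" would indicate a degraded array.
# On a live system you would inspect:  cat /proc/mdstat
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks super 1.0 [2/2] [UU]'
case "$mdstat" in
  *"[UU]"*) echo "array healthy" ;;
  *)        echo "degraded" ;;
esac
```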

Any ideas about that?



Daniel Spannbauer                         Software Entwicklung
marco Systemanalyse und Entwicklung GmbH  Tel   +49 8333 9233-27 Fax -11
Rechbergstr. 4 - 6, D 87727 Babenhausen   Mobil +49 171 4033220                      Email ds@xxxxxxxx
Geschäftsführer Martin Reuter             HRB 171775 Amtsgericht München