Re: iostat with raid device...

Hi Neil,

This is raid5. I have mounted /dev/md0 at /mnt and the filesystem is ext4.

The system is newly created. Steps (roughly as sketched below):
Create the RAID5 array with mdadm
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid
Export /mnt/raid to a remote PC over CIFS
Copy a file from the PC to the mounted share
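
Concretely, the commands were along these lines (a sketch; the exact
partition names and the Samba share definition may differ slightly on my
box):

mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# The CIFS export is via Samba: a share section in /etc/samba/smb.conf, e.g.
#   [raid]
#       path = /mnt/raid
#       read only = no
# followed by a restart of the Samba service; the PC then maps the share
# and the test file is copied onto it.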

An update....
I just ran the test again (without reformatting the device) and
noticed that all four HDDs incremented their Blk_wrtn counts equally. This
implies that when the RAID was first configured, raid5 was doing its
own work in the background (recovery)...
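
Looking back at the first run's numbers (the iostat output quoted below),
they do seem to fit the recovery theory; a quick check of the Blk_wrtn
totals:

awk 'BEGIN {
  md0   = 151629440                                  # Blk_wrtn for md0
  sum   = 51237136 + 51245712 + 51226640 + 70786576  # Blk_wrtn for sda..sdd
  avg3  = (51237136 + 51245712 + 51226640) / 3       # average of sda..sdc
  extra = 70786576 - avg3                            # excess writes on sdd
  printf "member writes / md0 writes        = %.2f  (~3:2)\n", sum / md0
  printf "minus sdd excess / md0 writes     = %.2f  (~4:3, raid5 over 4 disks)\n", (sum - extra) / md0
}'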

What I'm not sure about is: if the device is newly formatted, would
RAID recovery still happen? What else could explain the difference in the
first run of the I/O benchmark?
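
(As far as I understand, the initial resync is driven by md itself and runs
regardless of any mkfs on top, so it would happen on a freshly created array
either way.) The recovery state can be checked directly, something like:

cat /proc/mdstat                    # shows "recovery"/"resync" with a progress bar while active
mdadm --detail /dev/md0             # the "State :" line shows clean / recovering / degraded
cat /sys/block/md0/md/sync_action   # reads "idle" once the initial build has finished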


Thanks.

On Fri, Apr 8, 2011 at 4:46 PM, NeilBrown <neilb@xxxxxxx> wrote:
> On Fri, 8 Apr 2011 12:55:39 -0700 Linux Raid Study
> <linuxraid.study@xxxxxxxxx> wrote:
>
>> Hello,
>>
>> I have a raid device /dev/md0 based on 4 devices sd[abcd].
>
> Would this be raid0? raid1? raid5? raid6? raid10?
> It could make a difference.
>
>>
>> When I write 4GB to /dev/md0, I see the following output from iostat...
>
> Are you writing directly to /dev/md0, or to a filesystem mounted
> from /dev/md0?  It might be easier to explain in the second case, but your
> text suggests the first case.
>
>>
>> Ques:
>> Shouldn't I see writes/sec be the same for all four drives? Why does
>> /dev/sdd always have a higher value for Blk_wrtn/s?
>> My stripe size is 1MB.
>>
>> thanks for any pointers...
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>            0.02    0.00    0.34     0.03    0.00   99.61
>>
>> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>> sda               1.08       247.77       338.73   37478883   51237136
>> sda1              1.08       247.77       338.73   37478195   51237136
>> sdb               1.08       247.73       338.78   37472990   51245712
>> sdb1              1.08       247.73       338.78   37472302   51245712
>> sdc               1.10       247.82       338.66   37486670   51226640
>> sdc1              1.10       247.82       338.66   37485982   51226640
>> sdd               1.09       118.46       467.97   17918510   70786576
>> sdd1              1.09       118.45       467.97   17917822   70786576
>> md0              65.60       443.79      1002.42   67129812  151629440
>
> Doing the sums, for every 2 blocks written to md0 we see 3 blocks written to
> some underlying device.  That doesn't make much sense for a 4 drive array.
> If we assume that the extra writes to sdd were from some other source, then
> it is closer to a 3:4 ratio, which suggests raid5.
> So I'm guessing that the array is newly created and is recovering the data on
> sdd1 at the same time as you are doing the IO test.
> This would agree with the observation that sd[abc] see a lot more reads than
> sdd.
>
> I'll let you figure out the tps number.... do the math to find out the
> average blk/t number for each device.
>
> NeilBrown
>
>

