R515/H700 high iowait

Mark Nelson mark.nelson at inktank.com
Wed Jul 18 08:37:37 CDT 2012


On 7/18/12 7:26 AM, G.Bakalarski at icm.edu.pl wrote:
>
>> This is with XFS and barriers were left enabled.  We also tested having
>> fio write directly to the device.
>
> Hi Mark,
>
> Did you enable any virtualization settings?
> Does your Ubuntu run on bare-metal hardware without
> any Xen, Citrix, or similar hypervisor?
>
> Did you change anything in the BIOS settings, e.g.
> IOMMU on (DMA virtualization)? What is your
> C-states setting? Is the BIOS optimized for performance
> (CPU setting)?
>
> Have you monitored IO behaviour with e.g. iostat -xzk 2?
> What was the queue length? Did it change during the tests?
> Have you tested IO with other tools, e.g. iozone?
>
> Bests,
>
> GB
>
>

Hi GB,

Looks like virtualization technology is on in the BIOS, but DMA 
virtualization (IOMMU) is off.  The OS is running on bare metal.

C1E is on, which should probably be disabled.  CPU Power and Performance 
Management is set to Maximum Performance.
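
From the OS side, the idle states and governor the kernel is actually 
using can be double-checked via sysfs.  This is just a quick sanity 
check, and the paths assume the standard cpuidle/cpufreq drivers are 
loaded:

   cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name       # idle states the kernel sees
   cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor  # current frequency governor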

While I did not run iostat specifically, I did run collectl with the 
disk subsystem enabled during some of the tests.
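
The invocation was roughly along these lines (reconstructed from 
memory, so treat the exact flags as an approximation rather than the 
precise command line):

   collectl -sD -oT    # D = per-disk detail, T = timestamp each sample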

Here's an example when things are bad (i.e. multiple writers):

> #                   <---------reads---------><---------writes---------><--------averages--------> Pct
> #Time     Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
> 11:18:40 sdb              0      0    0    0   24964      0  196  127     127   142   725      5   99
> 11:18:41 sdb              0      0    0    0   22272      0  174  128     128   142   925      5   99
> 11:18:42 sdb              0      0    0    0   30464      0  238  128     128   145   716      4   99

And when things are good (256MB requests, DirectIO, one writer):

> #                   <---------reads---------><---------writes---------><--------averages--------> Pct
> #Time     Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
> 11:29:32 sdb              0      0    0    0  737920      0 5765  128     128   119    20      0   87
> 11:29:33 sdb              0      0    0    0  798464      0 6238  128     128   126    20      0   95
> 11:29:34 sdb              0      0    0    0  798976      0 6242  128     128   126    20      0   94
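
For reference, the fio invocations were roughly along these lines.  
These are sketches rather than the exact job files -- the writer 
count, queue depths, and target paths below are illustrative, not 
copied from the actual runs:

   # multiple concurrent writers, roughly the "bad" case above
   # (path is a placeholder)
   fio --name=multi --directory=/srv/test --rw=write --bs=4M --size=1G \
       --numjobs=4 --ioengine=libaio --iodepth=16 --group_reporting

   # one writer, direct IO, 256MB requests -- the "good" case above
   fio --name=single --filename=/dev/sdb --rw=write --bs=256M --direct=1 \
       --numjobs=1 --ioengine=libaio --iodepth=1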

Interestingly, in the second test the QLen, Wait time, Service time, 
and CPU utilization due to IO are all lower while the throughput is 
significantly higher.  I haven't tried iozone specifically, though 
we've been seeing problems under other IO-intensive workloads, so I 
don't think it's related specifically to fio.
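
As an aside, the two samples hang together via Little's law (QLen is 
roughly IOPS times average wait, with wait in seconds), so the long 
waits in the multi-writer case line up with a much deeper queue being 
drained at a far lower IOPS rate:

   echo "196 * 0.725" | bc    # bad case:  ~142, matching the reported QLen of 142
   echo "5765 * 0.020" | bc   # good case: ~115, close to the reported QLen of 119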

Mark


