C6105 poor disk performance

Jeremy MAURO jmauro at antidot.net
Mon Mar 5 05:30:29 CST 2012

Hi everyone,

We recently bought some C6105 servers, and while stress testing them we 
are seeing very poor disk performance:

[root@bench02]:~ # lspci -v
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic 
SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
         Subsystem: Inventec Corporation Device 6019
         Flags: bus master, fast devsel, latency 0, IRQ 50
         I/O ports at d000 [size=256]
         Memory at fea3c000 (64-bit, non-prefetchable) [size=16K]
         Memory at fea40000 (64-bit, non-prefetchable) [size=256K]
         Expansion ROM at fea80000 [disabled] [size=512K]
         Capabilities: [50] Power Management version 3
         Capabilities: [68] Express Endpoint, MSI 00
         Capabilities: [d0] Vital Product Data
         Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
         Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
         Capabilities: [100] Advanced Error Reporting
         Capabilities: [138] Power Budgeting <?>
         Kernel driver in use: mpt2sas
         Kernel modules: mpt2sas
[root@bench02]:~ # modinfo mpt2sas
license:        GPL
description:    LSI MPT Fusion SAS 2.0 Device Driver
author:         LSI Corporation <DL-MPTFusionLinux at lsi.com>
srcversion:     5D43B64DF8FE908E1FF129A
alias:          pci:v00001000d0000007Esv*sd*bc*sc*i*
alias:          pci:v00001000d0000006Esv*sd*bc*sc*i*
alias:          pci:v00001000d00000087sv*sd*bc*sc*i*
alias:          pci:v00001000d00000086sv*sd*bc*sc*i*
alias:          pci:v00001000d00000085sv*sd*bc*sc*i*
alias:          pci:v00001000d00000084sv*sd*bc*sc*i*
alias:          pci:v00001000d00000083sv*sd*bc*sc*i*
alias:          pci:v00001000d00000082sv*sd*bc*sc*i*
alias:          pci:v00001000d00000081sv*sd*bc*sc*i*
alias:          pci:v00001000d00000080sv*sd*bc*sc*i*
alias:          pci:v00001000d00000065sv*sd*bc*sc*i*
alias:          pci:v00001000d00000064sv*sd*bc*sc*i*
alias:          pci:v00001000d00000077sv*sd*bc*sc*i*
alias:          pci:v00001000d00000076sv*sd*bc*sc*i*
alias:          pci:v00001000d00000074sv*sd*bc*sc*i*
alias:          pci:v00001000d00000072sv*sd*bc*sc*i*
alias:          pci:v00001000d00000070sv*sd*bc*sc*i*
depends:        scsi_mod,scsi_transport_sas
vermagic:       2.6.18-308.el5 SMP mod_unload gcc-4.1
parm:           logging_level: bits for enabling additional logging info 
parm:           max_sectors:max sectors, range 64 to 8192  default=8192 
parm:           max_lun: max lun, default=16895  (int)
parm:           max_queue_depth: max controller queue depth  (int)
parm:           max_sgl_entries: max sg entries  (int)
parm:           msix_disable: disable msix routed interrupts (default=0) 
parm:           missing_delay: device missing delay , io missing delay 
(array of int)
parm:           diag_buffer_enable: post diag buffers 
(TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0) (int)
parm:           mpt2sas_fwfault_debug: enable detection of firmware 
fault and halt firmware - (default=0)
parm:           disable_discovery: disable discovery  (int)

We have tested with Debian Squeeze and Red Hat 5, and we see the same 
figures on both platforms.

Here is the test we set up: create a very large number (and I mean a 
lot) of small files spread across 3 directory levels.
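
For concreteness, here is a minimal sketch of that kind of workload. 
The file count, file size, and directory fan-out below are illustrative 
assumptions, not the exact parameters of our test:

```shell
#!/bin/bash
# Illustrative small-file workload: 5 x 5 x 40 = 1000 files of 4 KiB
# spread across 3 directory levels. All counts/sizes are assumptions;
# the real test used far more files. Metadata operations (create,
# mkdir) dominate the cost of a run like this, not raw throughput.
BASE="${1:-/tmp/smallfile-bench}"
rm -rf "$BASE"
for a in $(seq 1 5); do
  for b in $(seq 1 5); do
    mkdir -p "$BASE/$a/$b"
    for c in $(seq 1 40); do
      # one small file per iteration
      dd if=/dev/zero of="$BASE/$a/$b/f$c" bs=4k count=1 2>/dev/null
    done
  done
done
echo "created $(find "$BASE" -type f | wc -l) files"
```

Timing a run like this (e.g. under `time`, with caches dropped between 
runs) on each machine is roughly how we compared them.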

On a simple shuttle PC with 2 Raptor disks the same test finishes in a 
quarter of the time it takes on the server, so we are a little 
disappointed. Has anyone seen the same kind of results with this sort 
of server?



More information about the Linux-PowerEdge mailing list