PowerVault/Edge Configuration - ESX/Linux Server Install

Robert von Bismarck robert.vonbismarck at smart-telecom.ch
Wed Apr 4 05:12:17 CDT 2007


Hello,

> 
> Hello All,
> 
> Fairly new to the SAN/fibre stuff, so work with me here please :-) ....
> we have *tons* of Dell equipment here, I'm just new and just now getting
> my hands into the system configuration stuff.
> 
> I ran into these problems during my attempt to install VMware ESX3
> Server on the following hardware....  I'm just hoping for some help in
> ensuring that I have everything configured properly from a hardware
> perspective, I will provide as much information as possible

The hardware setup for a SAN is quite easy: just put the HBA into the
server, connect the fibers and off you go. Configuring the systems so
they see each other is something else, though :)
> 
> Dell PowerEdge 2650 is the server I am attempting to install ESX3
> Server on.
> 
> This server did not come with a fibre channel card, so I removed one 
> from another Dell system and installed a QLogic QLA2200 PCI Fibre 
> Channel card...

Do you see the card in 'lspci' and in 'dmesg' output when you boot the
server? Which OS do you run on the server?
Do you see the /proc entry for the card? /proc/scsi/qla2xxx/...
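If it helps, something along these lines should tell you quickly whether
the kernel sees the HBA (the /proc directory name depends on the driver
version, so treat the exact path as an assumption; older drivers use
qla2x00 or qla2200 instead of qla2xxx):

# is the QLA2200 visible on the PCI bus at all?
lspci | grep -i -E 'qlogic|fibre'

# did the qla driver bind to it at boot?
dmesg | grep -i qla

# per-HBA info from the driver (instance number may differ on your box)
ls /proc/scsi/
cat /proc/scsi/qla2xxx/0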

> 
> Attached that to a Dell PowerVault 50F 8-Port Fibre Channel Switch

Never worked with one of those, but it looks like a rebadged Brocade.
> 
> Which is then connected to a Dell PowerVault 650F (I wish I could
> name/identify the type of cable connecting the 50F to the 650F, however

The 650F is quite old, and AFAIK it only has Windows NT/2000 tools for
management (yuck!).
I worked with another type of array, so I can't really help you here.

> I have never seen a cable like this before)

This is probably an SFP fiber-optic cable.

> 
> Ok, from here... where do I go?  I have all of the physical connections
> and I can install ESX Server onto the PowerEdge 2650 internal RAID, no
> problem... but how do I know that the attached SAN is working
> properly?  How do I verify that the server is talking to the switch and
> the switch is talking to the SAN?

First, check the LEDs on the switch and the HBA: green is good, amber
is not :)
In Linux 2.6, you can 'cat /proc/scsi/qla2xxx/1' to check the 1st QLogic
card; it will show you which targets are visible to the card.
You'll also see some info in 'dmesg' output; if you see something about an
F_Port, that's usually a good sign.
Here's an example dmesg from a dual-attached server, for the first HBA (it
sees two LUNs, one on each switch; sdb is the backup of sdc):

qla2300 0000:02:0c.0: Found an ISP2312, irq 201, iobase 0xf881c000
qla2300 0000:02:0c.0: Configuring PCI space...
qla2300 0000:02:0c.0: Configure NVRAM parameters...
qla2300 0000:02:0c.0: Verifying loaded RISC code...
qla2300 0000:02:0c.0: Waiting for LIP to complete...
qla2300 0000:02:0c.0: LOOP UP detected (2 Gbps).
qla2300 0000:02:0c.0: Topology - (F_Port), Host Loop address 0xffff
scsi1 : qla2xxx
qla2300 0000:02:0c.0:
 QLogic Fibre Channel HBA Driver: 8.01.04-d7
  QLogic QLA2340 -
  ISP2312: PCI-X (100 MHz) @ 0000:02:0c.0 hdma+, host#=1, fw=3.03.20 IPX
  Vendor: DGC       Model: RAID 5            Rev: 0216
  Type:   Direct-Access                      ANSI SCSI revision: 04
qla2300 0000:02:0c.0: scsi(1:0:0:0): Enabled tagged queuing, queue depth
32.
SCSI device sdb: 62914560 512-byte hdwr sectors (32212 MB)
sdb: asking for cache data failed
sdb: assuming drive cache: write through
SCSI device sdb: 62914560 512-byte hdwr sectors (32212 MB)
sdb: asking for cache data failed
sdb: assuming drive cache: write through
 sdb:<6>Device sdb not ready.
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
Device sdb not ready.
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
Device sdb not ready.
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
 unable to read partition table
Attached scsi disk sdb at scsi1, channel 0, id 0, lun 0
  Vendor: DGC       Model: RAID 5            Rev: 0216
  Type:   Direct-Access                      ANSI SCSI revision: 04
qla2300 0000:02:0c.0: scsi(1:0:1:0): Enabled tagged queuing, queue depth
32.
SCSI device sdc: 62914560 512-byte hdwr sectors (32212 MB)
sdc: cache data unavailable
sdc: assuming drive cache: write through
SCSI device sdc: 62914560 512-byte hdwr sectors (32212 MB)
sdc: cache data unavailable
sdc: assuming drive cache: write through
 sdc: sdc1
Attached scsi disk sdc at scsi1, channel 0, id 1, lun 0
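If your dmesg shows nothing like the above, two quick sanity checks
(standard Linux /proc files, nothing qla-specific) are:

cat /proc/scsi/scsi      # every SCSI target/LUN the kernel has attached
cat /proc/partitions     # any SAN LUNs should show up as extra sd* devices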

> 
> <additional notes for the card>
> During the system POST/startup I see that the machine recognizes that
> the Fibre card has been installed, it shows the card model information
> and the BIOS version.
> Tells me I can press Alt-Q for the Fast!UTIL
> But then, it says "BIOS for Adapter 0 is disabled
> ROM BIOS NOT INSTALLED
> 
> Is that bad?

No, it's not; it just means that you won't be able to boot from it.

> 
> I press Alt-Q, click on fibre disk utility, the adapter is there, I 
> click enter, Go into disk-utility options, and if I try any of the 
> options there (low level format, verify disk data, etc..), I just get 
> SCSI command errors
> 
> SCSI operation code:  00
> SCSI sense key: 05
> SCSI additional sense code:  04
> SCSI additional Sense Code Qualifier:  00
> 

This probably means your fiber switch is not allowing connections to the
SAN (sense key 05 is ILLEGAL REQUEST, so something in the path is rejecting
the command).
Is the port enabled on the switch?
Are there other systems on this switch that access the SAN array?
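If the 50F really is a rebadged Brocade (I've only used real Brocades, so
treat the exact commands as an assumption), you can usually telnet into the
switch and check the fabric from there:

telnet <switch-ip>   # log in as admin
switchshow           # port states: online/offline, F-Port or no light
nsshow               # name server: which WWNs have logged in to the fabric
cfgshow              # zoning: is the HBA's WWN zoned to the array's ports?

If the HBA's WWN never shows up in nsshow, the problem is between the server
and the switch; if it shows up but isn't zoned to the array's ports, that
would explain the SCSI errors.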

> That's as much information/knowledge as I have.... I hope someone can
> help point me in a good direction
> 
> I'm just lost :-)

I was a little overwhelmed by my first SAN setup too. A friendly guy
working for a Dell systems integrator spent a day with me explaining
the basics. Once I came back from EMC training, I had to rebuild the
whole thing anyway :)

> 
> Any guidance, again, is greatly appreciated.
> 
> Thanks!
> --jeff


Cheers,

Robert


