powervault/linux query

Kevin_Burroughs@Dell.com Kevin_Burroughs at Dell.com
Mon Apr 15 20:08:00 CDT 2002



Kevin Burroughs, MCSE, MCP+I
PowerVault and Fibre Channel Gold Storage Support Specialist
Dell Computer Corp. Enterprise Expert Center
This email is not subject to a legally binding commitment


-----Original Message-----
From: jason andrade [mailto:jason at dstc.edu.au]
Sent: Monday, April 15, 2002 6:31 PM
To: Kevin_Burroughs at exchange.dell.com
Cc: linux-poweredge at exchange.dell.com
Subject: RE: powervault/linux query


On Mon, 15 Apr 2002 Kevin_Burroughs at Dell.com wrote:

> Off the top of my head, a few questions jump up:
> 1)  Managing a 650 with Array Manager: never heard of that being done: we
> use Data Supervisor or Data Administrator

hi kevin,

i should have been clearer.  i am using data supervisor.  i haven't heard of
data administrator, nor do i appear to have the cds for it.

KB: 1)  Data Administrator is similar to Supervisor, but because of
licensing it is sold for a price.  It does much the same tasks as
Supervisor, but can configure multiple 650's in one session.

> 2)  Failover: we use ATF in Windows to manage failover.  Linux is not
> supported in our SAN software at this time, so there is no corresponding
> application.  Besides, if you only have 1 HBA, there is no second HBA to
> manage failover (or the second SP for that matter).

but do you have "ATF" in the SP?  so if your storage processor fails, the
luns can fail over to the other SP.  then if you reboot your linux box it
should see all the luns again.   the data supervisor _appears_ to have this
but i wanted to confirm.

KB:  2)  There is no SP "ATF" so to speak.  This function is
enabled/controlled by software.  Hence, no failover software (like ATF), no
failover functionality.
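(Editor's aside, not from the thread: after a reboot following an SP failover, one quick sanity check is to count the LUN entries the kernel sees in /proc/scsi/scsi.  The sketch below parses a canned sample of that file so it runs anywhere; the Vendor/Model strings are illustrative, not copied from a real 650F.)

```shell
# minimal sketch: count the LUNs the kernel currently sees.
# on a live 2.4 box you would read /proc/scsi/scsi itself; the heredoc
# below stands in for that file so the sketch runs anywhere.
# the Vendor/Model strings are illustrative, not from a real array.
scsi_info=$(cat <<'EOF'
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: DELL     Model: PV650F           Rev: 0512
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi1 Channel: 00 Id: 00 Lun: 01
  Vendor: DELL     Model: PV650F           Rev: 0512
  Type:   Direct-Access                    ANSI SCSI revision: 03
EOF
)
# each LUN shows up as one "Host:" line
luns=$(printf '%s\n' "$scsi_info" | grep -c '^Host:')
echo "visible LUNs: ${luns}"
```

Comparing that count before the SP failure and after the reboot tells you whether all the luns came back.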

> 3)  Soft peer errors are something that occurs on the 650/630, but again,
> it would be highly unusual to see these in Array Manager.

can you elaborate on what it is and why it happens ?  i seem to get quite a
large amount of them.

KB:  Hard for me to elaborate much without seeing the logs.  However, soft
bus errors are generally viewed as similar to SCSI timeouts that were
corrected by the device: in other words, there was a temporary communication
issue that was subsequently resolved.

> The proper way to shut down the array is to take the servers down first,
> then use the SPS to power off the 650 (this sends a signal to the SPs
> telling them to dump their cache: don't remove the 650 fan pack to shut the power

nod.  did that.  it took me a little while to puzzle out that "resetting"
a 650F is a matter of powering it down and back up and there is no "reset"
command in the data supervisor :-)

KB:  Exactly.

> supplies down directly: this will cause data corruption).  When the 650
> shuts itself down, then the cache is written and it's safe to shut down
> the 630's.  On power up, reverse the order, making sure the 630's are fully
> initialized prior to powering up the 650 so the SP's are able to find all
> their LUNs.  Once the 650 is fully initialized, then power up the servers.

is there any reason to shut down the 630s if all you want to do is reset the
650?  

KB:  Nope...the 630's can stay up in that instance.
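(Editor's aside: the full shutdown/power-up ordering discussed above can be summarised as a checklist.  The power switching on the SPS and 630s is done by hand, so this sketch only documents the sequence, it does not perform it.)

```shell
# checklist sketch of the ordering described in the thread; the
# SPS/630 power switching is manual, so these lines only record
# the sequence rather than execute it.
power_cycle_order() {
  echo "1. shut down the attached servers (e.g. shutdown -h now on each)"
  echo "2. power off the 650 via the SPS so the SPs flush their cache"
  echo "3. power off the 630 enclosures (only if they need to come down)"
  echo "4. power up the 630s first and wait until they fully initialize"
  echo "5. power up the 650 so the SPs can find all their LUNs"
  echo "6. power up the servers"
}
power_cycle_order
```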

one other query - am i right in assuming you need 2 HBAs, each one plugged
into an SP, if you want to see event logs from both in data supervisor? at
this point i can only see and configure SPA.

KB:  Not necessarily: for example, SPS's do a battery recharge cycle weekly.
You would see SPS A's recharge cycle on SPB's log and vice versa.

in the end it all appeared to work well.  i now have a 2450 with 2 QLA2200
HBAs connected to SPA and SPB.  i have 20*36G on each SP and i think this
will load balance nicely - while i contemplate the 650 -> 4500 upgrade, the
main attraction of which (apart from performance) is the ability to ditch
data supervisor under Win 2000 and use navisphere under linux :-)
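(Editor's aside: for the record, the raw capacity in that layout works out as below - simple shell arithmetic, and a statement about raw disk space only, not usable space after RAID overhead.)

```shell
# raw capacity sketch for 20 x 36 GB disks behind each SP
per_sp=$((20 * 36))      # GB presented through one storage processor
total=$((2 * per_sp))    # GB across SPA and SPB combined
echo "per SP: ${per_sp} GB, total: ${total} GB"
# prints: per SP: 720 GB, total: 1440 GB
```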

KB:  Right...the Dell/EMC arrays will offer more *nix functionality than we
currently are able to offer.

thanks for the followup/answers.

regards,

-jason



