RHEL 4 + PE 1850 + EMC AX100 PowerPath

Tom Ferony tomf at novia.net
Wed Jul 13 20:13:48 CDT 2005


BTW - With RHEL4 U1, I believe RedHat will be including the multipath
updates that understand the Clariion service processor stuff.  
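For anyone who wants to try the in-box multipathing instead of
PowerPath, the device-mapper-multipath setup for a Clariion-class
array usually boils down to a device stanza along these lines in
/etc/multipath.conf.  This is only a rough sketch from memory - the
vendor/handler values are what I'd expect for an AX/CX, so check the
defaults that ship with your multipath-tools package before trusting
it:

    # Illustrative /etc/multipath.conf fragment for an active/passive
    # Clariion (AX/CX) array - treat the values as assumptions, not a
    # Dell- or EMC-blessed configuration.
    devices {
        device {
            vendor                "DGC"     # Clariion arrays report vendor DGC
            product               "*"
            hardware_handler      "1 emc"   # handler that understands SP trespasses
            path_grouping_policy  group_by_prio
            prio_callout          "/sbin/mpath_prio_emc /dev/%n"
            failback              immediate
            no_path_retry         60
        }
    }

After editing, restart multipathd and check the resulting maps with
"multipath -ll".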



On Mon, 2005-07-11 at 00:12 -0500, JACOB_LIBERMAN at Dell.com wrote:

> Tom,
> 
> I'm sorry to hear about the problems you're experiencing.
> 
> If you need assistance connecting a RHEL3 server to an AX100, you can
> call the storage support group at 800.945.3355 and follow the
> appropriate prompts to the UNIX/Linux host attach team. 
> 
> Usually a host won't boot when the external storage is attached if the
> HBAs' SCSI adapter aliases are listed before the internal boot HBA in
> /etc/modules.conf. Usually you can swap their numeric position and
> remake the initial RAMdisk to get past that problem. Please give us a
> ring if that doesn't work.
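> As a rough illustration (the driver names below are placeholders, so
> match them to your actual hardware), the fix is usually just
> reordering the scsi_hostadapter aliases and rebuilding the initrd:
> 
>     # /etc/modules.conf - keep the internal RAID controller first so
>     # it stays the lowest-numbered SCSI host and remains the boot device
>     alias scsi_hostadapter  megaraid2     # internal PERC (example driver)
>     alias scsi_hostadapter1 qla2300       # fibre channel HBA (example driver)
>     alias scsi_hostadapter2 qla2300
> 
>     # then rebuild the initial RAMdisk for the running kernel and reboot
>     cd /boot
>     mkinitrd -f initrd-$(uname -r).img $(uname -r)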
> 
> In general, the GFS 6.x pool_mp service for load balancing and fault
> tolerance does not accommodate back-end trespasses because the AX100
> (like the whole Clariion line) is an active/passive array, not
> active/active. However, I have seen GFS used in conjunction with
> PowerPath and it seems to work, although I do not believe it is on the
> ESM at this time. Incidentally, PowerPath also works in conjunction
> with OCFS if you choose to use it as a clustered filesystem instead of
> GFS.
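> If you do go the PowerPath route, it is worth confirming that every
> LUN actually has paths to both SPs before layering GFS or OCFS on
> top. Roughly (the exact output varies by PowerPath release):
> 
>     powermt config               # pick up newly discovered paths
>     powermt display dev=all      # each LUN should show paths to SP A and SP B
>     powermt save                 # persist the path configuration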
> 
> Running GFS on a single node (in an SLM configuration) also presents a
> single point of failure. It would be preferable to use the Redundant
> Lock Manager file locking mechanism if you have the hardware capacity.
> You can run the lock manager service on hosts that are not physically
> connected to the array as long as they have network access to the other
> lock management servers.
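> To sketch what that looks like in GFS 6.0's CCS files (the cluster
> and host names here are invented), the redundant setup is mostly a
> matter of listing an odd number of lock servers in cluster.ccs
> instead of a single one:
> 
>     # cluster.ccs - illustrative lock_gulm section with three
>     # redundant lock servers; lock1/lock2/lock3 are placeholders
>     cluster {
>         name = "pilot"
>         lock_gulm {
>             servers = ["lock1", "lock2", "lock3"]
>         }
>     }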
> 
> There are a few Veritas VxVM solutions that are validated on the ESM,
> but I think it's primarily validated under Solaris, and very version
> specific. I have talked to people who are using Veritas VxVM with DMP in
> conjunction with PowerPath on Solaris. I know a few of our customers use
> Veritas VxVM with DMP and PowerPath on Linux, but I do not believe that
> EMC has validated that configuration.
> 
> Finally, if you are still in the pilot phase, I also want to suggest
> that an AX100 might not be the best array for a high volume Oracle
> database since it uses ATA drives rather than Fibre Channel. Plus,
> ATA drives have only single back-end loops, so you can't really load
> balance them across SPs effectively. The CX300, although more
> expensive, would give better performance with internal Fibre Channel
> drives.
> 
> Thanks, Jacob
> 
> > -----Original Message-----
> > From: linux-poweredge-bounces-Lists On Behalf Of Tom Ferony
> > Sent: Sunday, July 10, 2005 8:53 PM
> > To: linux-poweredge-Lists
> > Subject: RHEL 4 + PE 1850 + EMC AX100 PowerPath
> > 
> > I'm not sure what Dell recommends, but I know what works, and 
> > I believe this is now certified by EMC.  I happen to work at 
> > a very large railroad that is piloting an EM64T Dell box 
> > (running x86_64 RHEL 3) and have had horrible experiences so 
> > far, but we have solved the lack of EMC support for Linux by 
> > using RedHat's GFS on a single node to provide both failover 
> > and round-robin load balancing on RHEL 3.  Neither this 
> > solution nor another involving Veritas with DMP was offered 
> > by Dell, and the funny thing is that Dell sent us the box 
> > with 4 Fibre Channel adapters installed (although one was 
> > faulty, which we had to troubleshoot) for the purposes of a 
> > pilot with a very high volume Oracle database.  The problem 
> > determination and resolutions came after a Dell support 
> > person had left the premises; I came back from vacation and 
> > had most of it resolved within two days.
> > 
> > Now if we can get the pilot box to reboot with the Fibre 
> > Channel adapters plugged in and not panic when loading the 
> > USB drivers, we may be happier.  
> > 
> > We also had 4 Dell blades as pilots; two died.  One of them 
> > melted, and a replacement board was sent in, but it was 
> > warped so badly it couldn't even be installed.  The blades 
> > were installed in their own rack in a brand new data center 
> > that is very cool and well ventilated. 
> > 
> > We have 350 UNIX boxes and hundreds of Intel boxes, very few 
> > of them Dell, but if someone convinces us that Dell can 
> > support Linux and that the hardware problems we've had were 
> > flukes, Dell might have the whole ball of wax, and we're 
> > exploding with growth.
> > 
> > -- 
> > Tom Ferony 	
> > 
> 

-- 
Tom Ferony <tomf at novia.net>

