RH73 install, devices and modules.conf
sigbjorn.strommen at roxar.com
Mon Jun 24 09:41:00 CDT 2002
I got a new PE2650 and two PV220S's, which I've been installing over the
weekend. I am using plain SCSI (no HW RAID) on the internal disks, and two
extra Adaptec 39160's for the external disks (software RAID on those).
I hooked up all the hardware, and started installing RH73. The install
failed consistently whatever I tried. Each time it failed right after
formatting the internal disk partitions.
I detached the SCSI cabinets, and then the install went OK. After that
I tried to attach one of the 220S's again, but the boot then failed
with heaps of SCSI error messages.
So I checked the docs for the server and found that it scans the
internal SCSI controller last: all PCI expansion cards are scanned
before the internal one.
This means that all the device names for the internal disks change
when you insert disks into the external SCSI chains. I guess this
is why the install failed as well.
One solution is to mount all filesystems by label, not by device
name as I had done. The problem will resurface, though, if I add
external disks, as that will probably break the md devices for my
software RAIDs.
The question is: since Linux uses the same driver name for the internal
and external SCSI cards, how can I add aliases to the modules.conf
file so that it probes the cards in the correct order?
Can I add something like:
alias scsi_hostadapter aic7xxx.0
alias scsi_hostadapter1 aic7xxx.1
alias scsi_hostadapter2 aic7xxx.2
...or in any other way map the physical location/name to an alias?
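For reference, the stock RH73 modules.conf uses numbered aliases of this form (a sketch; note that with identical Adaptec channels all the alias lines name the same aic7xxx module, so as far as I can tell they cannot by themselves distinguish one physical card from another):

```
alias scsi_hostadapter aic7xxx
alias scsi_hostadapter1 aic7xxx
alias scsi_hostadapter2 aic7xxx
```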
Also, if I create, for example, a RAID-5 volume from devices sdh, sdi and
sdj, but later add a disk that pushes these device names one step to the
right, will I have to edit the raidtab file and rename the members to
sdi, sdj and sdk so the RAID volume gets the correct devices? Or is
there another way to keep device names consistent across disks? (I have
24 disks and 50-60 partitions in this system, and may add more later,
so I would rather not do it this way.)
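With the raidtools in RH73, member devices are listed literally in /etc/raidtab, so a hypothetical RAID-5 entry for those three disks might look like this (partition numbers and chunk size are assumptions):

```
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              64
        device                  /dev/sdh1
        raid-disk               0
        device                  /dev/sdi1
        raid-disk               1
        device                  /dev/sdj1
        raid-disk               2
```

As far as I know, if the partitions are marked type fd (Linux raid autodetect) and persistent-superblock is 1, the 2.4 kernel's md autodetection reassembles the array from the on-disk superblocks at boot, which should survive device-name shifts; raidtab would then only matter when the array is first created.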
I'm sure there is an easy solution to this, as I can't possibly be
the first one with a lot of disks on a server :-)