JBOD a PERC4/DC?

John Jung john.jung at ugs.com
Tue Jul 26 10:56:12 CDT 2005


Hi Paul,

> You're in charge, so why don't you use RAID (>0) ?

   Of the ~560GB available in each set of disks, about 480GB is in use.  And 
because the server I'm upgrading uses LVM to span two sets of 5 disks, I 
can't just isolate the contents of the server to a subset of disks.

> You should create the desired Raid-Arrays with Raid1 or Raid5 and then
> make the rest via LVM on the logical disks.

   Right now I don't have the free disk space to move things around.  And 
because of the server's usage, I can't take it down for an extended period of 
time, unless the system blows up.

> JBOD and Raid0 are really crap for server environment: What do you do
> when one disk crashes?

   In the HPUX environment, I used LVM to create a spanned filesystem for 
each set of 5 JBODs (resulting in two 560GB filesystems).  Both filesystems 
were mounted and I used cpbk to do crude filesystem-level RAID-1 on a nightly 
basis.  One of the filesystems was the "live" filesystem, while the other was 
backup.
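
   (On the Linux side, the equivalent nightly copy could be done with rsync 
out of cron.  The mount points below are made-up examples, not the real 
ones:)

```shell
# Nightly filesystem-level "RAID-1": mirror the live filesystem onto the
# backup filesystem.  /fs_live and /fs_backup are example mount points.
# -a preserves permissions/ownership/timestamps; --delete removes files
# from the backup that no longer exist on the live side.
rsync -a --delete /fs_live/ /fs_backup/

# Example crontab entry to run the mirror at 2am every night:
# 0 2 * * * /usr/bin/rsync -a --delete /fs_live/ /fs_backup/
```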

   If a disk on the "live" filesystem went bad, I would unmount the "live" 
one and mount the backup in its place.  That way the server was only down 
for about 10 minutes while I changed mount points.
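
   In Linux terms, the swap is just a couple of mount commands (the device 
names and mount points here are made-up examples):

```shell
# Fail over to the backup copy: unmount both filesystems, then mount the
# backup logical volume where the "live" filesystem used to be.
# /dev/vg01/lv_backup, /fs_live and /fs_backup are example names.
umount /fs_live
umount /fs_backup
mount /dev/vg01/lv_backup /fs_live
```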

   And doing this bought me time to get a replacement disk, take the system 
down to swap drives, rebuild the LVM with the replacement disk, re-copy all 
the content over, and eventually restore the original "live" filesystem as 
"live" and the backup filesystem as the backup.

   In about seven years there have been three such incidents, and the 
typical turnaround time to resolution was about two weeks.


   On the plus side, I'm going to be getting two 300GB SCSI disks in a few 
weeks, so I'll finally have some wiggle room.  I'm thinking about:

   1. RAID-0 the 300GB SCSIs, create a new 600GB tertiary filesystem, copy
      all the content (480GB) to it, and make it the "live" filesystem.
   2. Use dellmgr to completely RAID-1 the disks from each channel to create
      five RAID-1 logical drives.
   3. Use Linux's LVM to create a single filesystem across the 5 RAID-1
      logical drives, copy the contents to it, and make it the "live"
      filesystem.
   4. RAID-1 the 300GB SCSIs and add the resulting logical drive to the
      LVM volume group of RAID-1 logical drives.
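
   For steps 2 and 3, the LVM side would look roughly like this (device 
names and the size are made-up examples; the five RAID-1 logical drives 
from dellmgr show up to Linux as plain SCSI disks):

```shell
# Turn the five RAID-1 logical drives into LVM physical volumes
# (the sd* names are examples; the real ones will differ).
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# One volume group spanning all five RAID-1 logical drives.
vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# One logical volume across the whole group (size is an example),
# then a filesystem on top of it, mounted as the new "live" filesystem.
lvcreate -L 550G -n lv_data vg_data
mkfs.ext3 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /fs_live
```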

   Does that sound like a viable plan?

   Thanks.

						John

-- 
+-------------------------------------+-------------------------------------+
|  John Jung (john.jung at ugs.com)      |   UGS                               |
|  Global Technical Access Center     |   10824 Hope Street, 1S259          |
|  Operating Systems Group            |   Cypress, California 90630         |
+------------------------------(800) 955-0000-------------------------------+



More information about the Linux-PowerEdge mailing list