PE2850 RAID1 upgrade drives

Jefferson Ogata poweredge at
Sat Sep 4 11:55:25 CDT 2010

On 2010-09-04 12:15, Raymond Kolbe wrote:
> Over the past couple of months I have been looking into upgrading the 
> drives in one of our servers, a PE2850 running CentOS 4.8. Currently it 
> has 3x146GB 10K drives, two of which are RAID1 and the third being a hot 
> spare.
> I would like to upgrade the drives to 3x300GB 15K drives but I do not 
> want to reinstall the OS. I have found many articles on the web related 
> to upgrading RAID1 configurations and it seems like everyone says the 
> following:
> 1) Create a Ghost image of OS/data, etc. for backup.
> 2) Break the array (degrade it).
> 3) Pull one of the drives (drive 1) and replace it with the newer 300GB 
> drive.
> 4) Let the array rebuild to the bigger drive.
> 5) Pull drive 0 and replace it with the newer 300GB drive.
> 6) Let the array rebuild.
> 7) Use gParted or another partition resizing program to increase my 
> partitions.
> or
> 1) Create a Ghost image of OS/data, etc. for backup and restore.
> 2) Turn off the server and replace both drives with the newer 300GB drives.
> 3) Turn on the server and create a new RAID1 array.
> 4) Restore the Ghost image from step 1.
> 5) Use gParted or another partition resizing program to increase my 
> partitions.
> However, no one has confirmed that these methods worked for them.
> Now, both ways sound like they would work, but I am extremely nervous 
> about this because I have also found forum postings and articles about 
> having to manually copy over partition information, and that disk block 
> sizes matter, etc. (not exactly sure about the technical issues here), 
> etc. This is also a mission critical production server so uptime is key.
> So my question is, are either of the two methods above realistic, and/or 
> has anyone actually upgraded RAID1 in a PE2850 or PE server before 
> without having to reinstall their OS?

Method 1 may not give you a larger RAID1.

Method 2 may not preserve your boot record (MBR) and partition table,
which are stored in the first track of the RAID volume, i.e. blocks
0-62. Ghost may or may not be able to image these.
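For reference, imaging that first track with dd is straightforward; a
minimal sketch, run against a scratch file standing in for /dev/sda so
nothing real is touched:

```shell
# Scratch "disk" in place of /dev/sda (assumption: 2048 sectors is
# plenty for a demo; a real disk is of course far larger).
dd if=/dev/zero of=/tmp/fakedisk bs=512 count=2048 2>/dev/null

# Copy the first track: the MBR (sector 0) plus the remainder of
# track 0, i.e. 63 sectors of 512 bytes each.
dd if=/tmp/fakedisk of=/tmp/track0 bs=512 count=63 2>/dev/null

# The saved image should be exactly 63 * 512 = 32256 bytes.
stat -c %s /tmp/track0
```

On a real system you would substitute the actual RAID volume device for
the scratch file.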

In both cases, there may be limits to how you can resize the partitions
because of the actual layout on disk. Also, depending on utilization,
copying full images of your filesystems, rather than using dump/restore,
may waste a lot of time on unallocated blocks.

Since you have a third disk in there, you could also do something like
the following, which would have lower downtime:

1. Replace the hot spare with a 300 GB disk.
2. Create a RAID0 volume on the new 300 GB disk (get rid of the hot spare).
3. Create a partition on the new RAID0 volume and make a filesystem on it.
4. Use dd to transfer the partition table and boot record to a file on
the new filesystem. (dd if=/dev/sda of=/mnt/foo/track0 count=63)
5. Boot in emergency mode, or live boot a CentOS install disc or other
live CD; the objective is to make sure all filesystems are either
unmounted or mounted read-only.
6. Use dump to copy the other filesystems to the new disk. (e.g. dump 0f
/mnt/foo/root.0 /dev/sda1, etc.)
7. Make copies of the static dump and restore binaries on the new disk,
in case you don't have them later.
8. Delete the RAID1 volume.
9. Replace the other two disks.
10. Create a new RAID1 volume.
11. Boot a live or install disc again and mount the RAID0 filesystem.
12. Use dd to copy the partition table and boot record to the new RAID1.
13. Tweak the partition table to suit your needs.
14. Create new filesystems on the new partitions. Run mkswap on your
swap partition, if you have one. Check /etc/fstab and be sure to specify
filesystem labels where needed, e.g. if /etc/fstab says "LABEL=/usr" for
the /usr mount, be sure to add -L /usr to your mkfs line. You can also
tweak labels after the fact using e2label. Also pay attention to whether
there's a label on your swap partition, and use -L with mkswap in that
case as well.
15. Mount each new filesystem and use restore to recover the appropriate
dump.
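The label handling in the filesystem-creation step is the part that most
often trips people up. A minimal sketch of setting and reading back a
label, run against a scratch file rather than a real swap partition (the
label name SWAP-demo is made up for illustration):

```shell
# Scratch file standing in for a swap partition; 1 MiB is enough for mkswap.
dd if=/dev/zero of=/tmp/swap.img bs=1024 count=1024 2>/dev/null

# -L sets the label that an /etc/fstab line like "LABEL=SWAP-demo" expects.
mkswap -L SWAP-demo /tmp/swap.img >/dev/null

# blkid reads the label back; for ext2/ext3 filesystems, e2label does
# the same job.
blkid -o value -s LABEL /tmp/swap.img
```

mke2fs takes the same -L flag, so the same pattern applies to the data
filesystems.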

I'm assuming you're not using LVM. If you are, then some of these steps
would become simpler.

It might be advisable to use Ghost, as you suggested, to make a backup
over the network to a different system just in case. But it will add
time to the process.

If you're not completely familiar with all of this, it may be best for
you to set up another system to practice on. Specific things to practice
ahead of time (that are good skills for you to have as a sysadmin anyway):

- Using grub to rewrite the boot record.
- Changing grub config options (e.g. "root", "kernel") at the boot
screen in order to boot a system whose disks have been shuffled around.
- Booting in emergency mode.
- Getting your root filesystem remounted r/w in emergency mode.
- Using dump and restore.
- Checking and modifying filesystem labels with e2label.
- Identifying which RAID volume /dev/sda actually refers to.
- Using a CentOS/Red Hat install disk to get to a command line without
nuking anything on your disk.
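For the first item, on a CentOS 4 era system (grub legacy) rewriting the
boot record from a rescue shell or the installed system looks roughly
like this; a sketch, assuming /boot lives on the first partition of the
first disk:

```
grub
grub> root (hd0,0)    # partition holding /boot/grub
grub> setup (hd0)     # write stage1 to the MBR of the first disk
grub> quit
```

Adjust (hd0,0) to wherever /boot actually ends up after the disks have
been shuffled.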
