PE2850 RAID1 upgrade drives

J. Epperson Dell at epperson.homelinux.net
Sat Sep 4 11:27:46 CDT 2010


On Sat, September 4, 2010 10:24, Stroller wrote:
>
> On 4 Sep 2010, at 13:15, Raymond Kolbe wrote:
>> ... 1) Create a Ghost image of OS/data, etc. for backup and restore.
>> 2) Turn off the server and replace both drives with the newer 300GB
>> drives.
>> 3) Turn on the server and create a new RAID1 array.
>> 4) Restore the Ghost image from step 1.
>> 5) Use gParted or another partition resizing program to increase my
>> partitions.
>>
>> However, no one has confirmed that these methods worked for them.
>>
>> Now, both ways sound like they would work, but I am extremely nervous
>> about this because I have also found forum postings and articles about
>> having to manually copy over partition information, and that disk block
>>  sizes matter, etc. (not exactly sure about the technical issues here),
>>  etc. This is also a mission critical production server so uptime is
>> key.
>>
>> So my question is, are either of the two methods above realistic,
>> and/or has anyone actually upgraded RAID1 in a PE2850 or PE server
>> before without having to reinstall their OS?
>
>
> I've definitely done this sort of thing with another model of PowerEdge,
> the 2800. I think I've done it with a 2850, although mine doesn't have
> the RAID key.
>
> The drives will just appear to the o/s as block devices - if you boot
> from a LiveCD (well, as long as it's one that supports the RAID
> controller) you'll see the current array as (something like) /dev/sda.
> Take a note of the current configuration, just so you're completely
> confident (e.g. `ls -l /dev/disk/* > /mnt/floppy/file.txt`).
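>
> A minimal sketch of that note-taking, assuming the array shows up as
> /dev/sda and you've got a USB stick mounted at /mnt/usb (both
> assumptions, adjust to suit):
>
>   # record the device names and partition layout before changing anything
>   ls -l /dev/disk/by-id/ > /mnt/usb/disks-before.txt
>   fdisk -l /dev/sda >> /mnt/usb/disks-before.txt
>   parted /dev/sda unit s print >> /mnt/usb/disks-before.txt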
>
> Shut down the system, slap the new drives in (don't remove the existing
> ones), create an array of them in the RAID BIOS, and reboot again to the
> Live CD. You'll see the existing /dev/sda as it was before (compare
> /dev/disk/by-id/* with what you had before) and a new /dev/sdb. The RAID
> controller consolidates the drives (hardware RAID) and presents them to
> the o/s as the single /dev/sdb block device.
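>
> To make the before/after comparison trivial (same hypothetical /mnt/usb
> mount as above):
>
>   # dump the listing again and diff against the earlier snapshot; the
>   # only new entries should belong to the new array (/dev/sdb here)
>   ls -l /dev/disk/by-id/ > /mnt/usb/disks-after.txt
>   diff /mnt/usb/disks-before.txt /mnt/usb/disks-after.txt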
>
> You can simply `dd if=/dev/sda of=/dev/sdb`, shut down the system,
> remove the original array, power up, change the boot device in the RAID
> BIOS, and boot to the cloned system.
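>
> Something like this, though triple-check if= and of= before you press
> enter, because swapping source and destination destroys the original:
>
>   # clone the old array onto the new one, then have the kernel re-read
>   # the copied partition table and sanity-check the result
>   dd if=/dev/sda of=/dev/sdb bs=1M
>   sync
>   blockdev --rereadpt /dev/sdb
>   fdisk -l /dev/sdb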
>
> I personally don't use Ghost - Linux has `dd`, which is perfectly
> adequate, and I trust it more than Ghost. You can "ghost" to a backup
> image file on an external USB drive with
> `dd if=/dev/sda of=/mnt/seagate/file.img`. Nor do I use gParted, but the
> command-line parted. `sudo parted /dev/sda p` should show that the new
> array is larger. It can be a pain to manipulate Windows Server
> partitions from Linux, as I've mentioned in the past.
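>
> For instance (device names and mount point are only examples; the gzip
> is optional, but it saves a lot of space if the disk is mostly empty):
>
>   # image the whole array to a compressed file on the USB disk
>   dd if=/dev/sda bs=1M | gzip > /mnt/seagate/file.img.gz
>   # and to restore it later
>   gunzip -c /mnt/seagate/file.img.gz | dd of=/dev/sda bs=1M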
>
> I'm tending to assume 2 system drives in a RAID1 here, so that you have
> enough empty bays for the two replacement disks. You can unmount and
> remove data arrays whilst you're upgrading the system drives. I perform
> all partition / filesystem resizes from the LiveCD, with the disks
> unmounted.
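>
> A sketch of the resize step for an ext2/3 partition (NTFS wants
> ntfsresize or a Windows-side tool instead; newer parted has resizepart,
> older versions make you delete and recreate the partition at the
> identical start sector):
>
>   parted /dev/sda unit s print          # note the partition layout
>   parted /dev/sda resizepart 2 100%     # grow partition 2 to end of disk
>   e2fsck -f /dev/sda2                   # resize2fs insists on a fresh fsck
>   resize2fs /dev/sda2                   # grow the filesystem to fill it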
>
> DON'T do this sort of thing on a production system without a backup. This
> mailing-list posting confers no warranty, express or implied. If, like
> me, you're a lone IT consultant working on a client's only
> mission-critical server (or even one of your own) this kinda stuff can be
> tremendously stressful. There is potential for you to foul-up at any time
> if you just once confuse your source and destination drives.
>
> This demonstrates the need for backups constantly throughout the
> system's life - really, as soon as you've commissioned the system you
> should be taking backups and testing them; you should do a full restore
> just to prove you can, before you have any important data on there. Of
> course there are many occasions when we are not so perfect, but this
> migration is perfectly manageable if you're careful; 99 times out of 100
> there will be no problems, but it can be a bit nerve-wracking.
>
> You ought to be confident about this before you start, so if you can't
> get (or afford) someone more experienced to help then my best
> recommendation is to practice it on another system first. I got my 2850
> at least 6 months ago, and they were going for less than £200 on eBay
> then - I wouldn't be surprised if they're less than £100 now.
>
> This is a really straightforward migration that most of the guys on this
> list - or any other experienced Linux system administrator - would have
> no trouble at all with. I'm surprised you can't find "confirmation" of
> this working (although I think few of us would use Ghost, if that's part
> of your search criteria) because I think there are probably people doing
> this on a daily basis with no problems. But one can't write exact
> instructions for you at one remove like this - the block devices may be
> named differently on your system, for example as /dev/hda instead of
> /dev/sda, and of course there's the risk that a single tiny omission
> can foul you up. But, yes, this technique, generally speaking, does work.
>

I can vouch for pretty much everything Stroller says; I've done a number
of variations of this, often using g4l (on SourceForge), which just glues
together a live CD, dd, compression, and ncftp to put the compressed
image on an FTP server.  When restoring to a larger array I've sometimes
ended up with a partition table that no partition editor could stomach
modifying, but IIRC that's only happened with RAID5 (shouldn't matter, I
know, but I don't think I've seen it on a RAID1).  If you know what
you're doing and have preserved a partition map to refer to, you can
delete the table under those circumstances, build a new one without
initializing anything, and come out OK.  You can also avoid the problem
entirely by manually partitioning the new array, copying sda1-->sdb1
etc., and doing a grub-install or copying the MBR to make it bootable;
see the sketch below.
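
A rough sketch of that manual route, assuming sda is the old array and
sdb the new one (device names and the mount point are only examples):

  sfdisk -d /dev/sda > sda-table.txt          # preserve the partition map
  sfdisk /dev/sdb < sda-table.txt             # recreate it on the new array
  dd if=/dev/sda1 of=/dev/sdb1 bs=1M          # copy each partition in turn
  dd if=/dev/sda2 of=/dev/sdb2 bs=1M
  # either copy just the 446 bytes of MBR boot code (leaves the table alone):
  dd if=/dev/sda of=/dev/sdb bs=446 count=1
  # or mount the new root and reinstall grub:
  mkdir -p /mnt/new && mount /dev/sdb1 /mnt/new
  grub-install --root-directory=/mnt/new /dev/sdb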

Finally, although I've been slicing/dicing disk partitions from the
command line since HP-UX v4, I like Parted Magic as a lazy man's toolkit
to move/resize partitions.  But I wouldn't use it if I didn't know how to
get under the hood and hotwire things if it left me in the lurch.


