raid-5 to raid-10 conversion for PERC-6i?

Stroller stroller at
Mon Feb 15 19:39:33 CST 2010

On 15 Feb 2010, at 20:02, Alexander Dupuy wrote:
> ...
> The only approach that I can see might be to remove a drive from the  
> RAID-5 (making it into a RAID-0), then using the removed drive plus  
> the new one to create two RAID-1s in degraded state (is this even  
> possible?) ... Neither approach seems ideal - a disk failure during  
> the conversion would be fatal

Suspect this migration is in no way supported, for the very reason you  
mention (disk failure). It sounds awfully messy.

The solution that springs to mind is to buy a couple of large external  
hard-drives (or large SATA hard-drives in cheap external USB  
enclosures), boot to a Live CD (System Rescue CD, Knoppix) and clone  
the disk off onto them (e.g. using dd if=/dev/sda
of=/mnt/usbdrive/system.img). Reconfigure the array, then clone back
onto the new
"virtual disk".
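As a sketch of that clone-and-restore cycle - with throwaway files
standing in for /dev/sda and the USB drive, since the real paths and
devices depend on your system:

```shell
#!/bin/sh
set -e
# Stand-ins for the real devices: on the live system SRC would be
# /dev/sda and BACKUP a file on the mounted USB drive.
SRC=/tmp/fake_array_disk.img
BACKUP=/tmp/usb_system.img
RESTORED=/tmp/new_virtual_disk.img

# Fabricate a small "disk" to clone (4 MB of random data).
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# 1. Clone the disk off onto the external drive.
dd if="$SRC" of="$BACKUP" bs=1M 2>/dev/null

# 2. Reconfigure the array here (this destroys the original contents).

# 3. Clone the image back onto the new virtual disk.
dd if="$BACKUP" of="$RESTORED" bs=1M 2>/dev/null

# Verify the round trip was bit-for-bit identical.
cmp "$SRC" "$RESTORED" && echo "clone verified"
```

A larger bs= (e.g. bs=4M) speeds up dd considerably on real disks.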

I'm basing this on the assumption that your currently RAID'ed drives
are not huge - e.g. if you have 3 x 500GB drives in your current
RAID5, then an image of their virtual drive (1TB) will easily fit on
a 1.5TB external drive; 3 x 750GB in RAID5 (1.5TB) will easily fit on
a 2TB drive. Gzipping the image saves a little space but is sloooow.
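Piping dd through gzip looks like this - again with throwaway files
in place of the real device, and gzip -1 chosen to trade some
compression for speed:

```shell
#!/bin/sh
set -e
# A compressible stand-in disk (zeros); the real source is /dev/sda.
SRC=/tmp/demo_disk.img
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null

# Clone and compress in one pass - no intermediate uncompressed image.
dd if="$SRC" bs=1M 2>/dev/null | gzip -1 > /tmp/demo.img.gz

# Restore: decompress and write back onto the (new) virtual disk.
gunzip -c /tmp/demo.img.gz | dd of=/tmp/demo_restored.img bs=1M 2>/dev/null

cmp "$SRC" /tmp/demo_restored.img && echo "restore ok"
```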

I did something similar over Christmas and it was stressful, but only
because the data was so valuable to the customer. I was sweating,
imagining that a finger-slip might lead me to clone the wrong array
to the wrong place and accidentally overwrite my backup with garbage.
Of course this didn't happen, and everything went smoothly.

Make two backup copies to be safe. If you're using two external
drives, it would be wise to take one offsite when you've finished the
first clone - connect it to another machine and check that you've got
the right data (loopback-mount the partition read-only). I would have
found this quite reassuring if I'd done it that way (rather than
between drives of the array, as I did; we bought something like 3
extra drives for our migration).
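Since system.img is a whole-disk image, the partition you want sits
at an offset inside it, which mount's offset= option handles. A small
sketch - the start sector (2048 here) is an assumed example value,
whatever fdisk -l actually reports for your image, and the mount line
itself needs root so it's only echoed:

```shell
#!/bin/sh
# fdisk -l /mnt/usbdrive/system.img reports each partition's start
# sector; 2048 is just an assumed example value.
START=2048
OFFSET=$((START * 512))   # assuming the usual 512-byte sectors
echo "mount -o ro,loop,offset=$OFFSET /mnt/usbdrive/system.img /mnt/check"
```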

Of course, this approach demands that you take the system offline in
order to perform the clone & restore, but that's the price you pay.

I'm a little confused as to why you want to do this, and in this
way. You seem to be penurious about drives, which really isn't a
responsible approach to valuable data. If you have external or SATA
drives left over when you've finished, you can always shred them (so
that no data is recoverable) and resell them on eBay at a very small
loss on what you paid for them. On the other hand, perhaps this is
simply related to the number of drives you can fit in your RAID cage
at one time?

I've read recently (here?) that RAID10 and/or RAID01 performs better
than RAID5, but RAID5 is really nice & flexible & easily expandable -
surely if performance were a major bottleneck & consideration, you
would have planned for this? I'm not putting you down, but most of us
don't need to bleed every ounce of performance from our hardware.

BTW, removing a drive from a RAID5 does not make it a RAID0 - it
makes it a degraded RAID5.

I would imagine that migrating a RAID0 or RAID1 to RAID10 or RAID01  
might well be officially supported (through the browser-based RAID  
configuration UI or whatever). But insufficient disks means taking  


More information about the Linux-PowerEdge mailing list