Disk cloning redux

Preston Hagar prestonh at gmail.com
Tue Feb 10 16:15:06 CST 2009

On Fri, Jan 23, 2009 at 7:17 AM, Dimitri Yioulos <dyioulos at firstbhph.com> wrote:
> Hi, folks.
> At the risk of earning your scorn and enmity, I'd like to ask once more about
> how I might proceed with a disk cloning project.  I have certainly
> appreciated everyone's feedback, but am not sure I've gotten a definitive
> solution.  Once again, here are the key points:
> -- system to be cloned is remote; no access but for minor tasks (e.g. hard
> reboot)
> -- system resides in its entirety on sda
> -- need faithful reproduction of sda on sdb in order to save Ensim
> installation, and as DR mechanism
> -- no RAID or LVM on system
> -- down time not an option (so not able to use Clonezilla, G4L, etc.)
> -- active MySQL databases
> -- have heard/read that:
>         * dd shouldn't be used on systems with open files
>         * creating software RAID post-system installation may lead to data
> loss
> But, because I've also read/heard that others have used dd for this purpose,
> I'm still leaning toward it.
> Any final thoughts, ideas, tips, tricks, etc.  Or, again, am I SOL?
> And again, thanks for your patience and input.
> Dimitri
> --

I think a major issue with what you are attempting is that you are
trying to copy live data.  While you are copying/cloning it, things
could, and likely would, change, invalidating the copy you just made.
One idea you might try, although I would highly recommend testing it
on a non-production server first, is cp -a.  I have, with much
success, cloned systems by booting to a live CD, mounting the old
partition and the new one, using cp -a to copy from one to the other,
fixing /etc/fstab (if needed), and then reinstalling grub on the new
disk, similar to what is outlined here:


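A minimal sketch of that live-CD procedure, run as root from the live
CD; the device names /dev/sda1 and /dev/sdb1 are assumptions, so
substitute your real partitions:

```shell
# Mount the old and new root partitions (device names are assumptions)
mkdir -p /mnt/sda /mnt/sdb
mount /dev/sda1 /mnt/sda
mount /dev/sdb1 /mnt/sdb

# Copy everything, preserving permissions, ownership, and symlinks
cp -a /mnt/sda/. /mnt/sdb/

# Fix /mnt/sdb/etc/fstab if the new disk will boot under a different
# device name, then put grub on the new disk's MBR (grub legacy calls
# the second disk hd1)
grub-install --root-directory=/mnt/sdb /dev/sdb
```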
The difference here is that you cannot reboot your system to a live cd
(and have your production services be down).  My thinking is to try
the following (again, test this out first; I am not sure if it will
work):

* First, on the production server, shut down anything you can get away
with (if there is anything).
* Mount sdb somewhere, say /mnt/sdb.
* Copy your system from / to /mnt/sdb using cp -a.  Normally you could
just do something like cp -a /mnt/sda/* /mnt/sdb/, but since you are on
a live system, and you have to mount sdb somewhere under /, you will
have to cp the folders in / by listing them, omitting /mnt and
possibly /dev (which should get recreated on reboot).  So, for
example, you would start with cp -a /bin /mnt/sdb/
*  Once everything is copied, you could do a mysqldump of your
databases and copy the dumps over to /mnt/sdb somewhere.
*  Next, copy any other data files that might have changed.
*  Edit the fstab at /mnt/sdb/etc/fstab to make sure the correct
drive, sdb, shows up as the / partition.  Assuming you use grub to
boot, I would edit /mnt/sdb/boot/grub/menu.lst to make sdb the root
drive, then install grub on that drive (make sure to get grub's weird
hard drive numbering straight; it is 0-based, so sdb should be hd1).
*  For good measure, unmount /mnt/sdb and run an fsck on it.
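The copy-and-exclude step above can be sketched as a loop.  The
snippet below demonstrates the logic on scratch directories created
with mktemp, so it is safe to run as-is; on the real box SRC would be
/ and DEST would be /mnt/sdb, and the directory list would be whatever
actually lives in your /:

```shell
# Scratch source and destination standing in for / and /mnt/sdb
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/bin" "$SRC/etc" "$SRC/mnt" "$SRC/proc"
echo "web01" > "$SRC/etc/hostname"
touch "$SRC/mnt/should_not_copy"

# Copy the top-level directories by name, skipping /mnt, /proc, /sys,
# and /dev -- on a real system, list every directory in / except those
for d in bin boot etc home lib opt root sbin srv usr var; do
    if [ -d "$SRC/$d" ]; then cp -a "$SRC/$d" "$DEST/"; fi
done

# Recreate the skipped mount points as empty directories
mkdir -p "$DEST/mnt" "$DEST/proc" "$DEST/sys" "$DEST/dev"
```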

Once all this was done, in theory, if things went bad, you would just
need to reboot and make sure you booted to the second drive (which, in
the case of total drive failure of sda, should happen automatically,
although if sda still "works" but won't boot, you may need remote
hands to get the system to boot to sdb first).  Once
you booted to sdb, you could then restore the mysql databases from the
dumps since they will almost certainly be corrupt (as a side note,
after all this is done, I would keep sdb mounted and regularly do
mysql dumps to it as well as rsyncs of any other important files that
might change).
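That ongoing refresh could be a couple of cron jobs, assuming
/mnt/sdb stays mounted; the schedule, the database credentials (here
assumed to come from root's ~/.my.cnf), and the /var/www path are all
placeholders to adjust for your system:

```shell
# Hypothetical root crontab entries -- times and paths are examples
15 2 * * *  mysqldump --all-databases > /mnt/sdb/root/all-databases.sql
45 2 * * *  rsync -a --delete /var/www/ /mnt/sdb/var/www/
```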

Hopefully this would work.  In theory, it should be more or less as if
the plug had been pulled on the server and it was then started back
up.  Usually Linux handles this pretty well, although I would still
say it isn't 100% guaranteed.  Like others have said, if you could
drop to a lower init level, that would help a great deal, although
then you would likely have to shut off production services.

Anyway, this is just an idea I had when reading your email.  I have
never tried this before on a live system and have no idea if it would
work.  If possible, I would take a similar or identical system at my
physical location, set it up, and try this out.  Once you had
everything copied to sdb, you could even try bzipping the entire
partition up in a tar archive, scping it to a local server, copying it
onto a partition on a blank drive, putting that drive in a similar
machine, and trying to boot it to see what happens (remember, you will
need to run grub setup again).  If the machine boots, then you can be
pretty confident it might work.  If not, then it might be back to the
drawing board.  Depending on how much data and bandwidth you have,
this may not be an option, though.
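That archive-and-restore test is just tar over the mounted copy.  The
sketch below demonstrates the round trip on scratch directories so it
is safe to run; on the real machines you would tar -C /mnt/sdb and scp
the archive to the test box (the hostname and paths in the comments
are made up):

```shell
# Scratch directories standing in for /mnt/sdb and the test box's disk
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/etc"
echo "web01" > "$SRC/etc/hostname"

# On the production box:  tar -cjf sdb.tar.bz2 -C /mnt/sdb .
tar -cjf "$SRC.tar.bz2" -C "$SRC" .

# scp sdb.tar.bz2 user@testbox:/tmp/   ... then, on the test box, with
# the blank drive's partition mounted:
# tar -xjf /tmp/sdb.tar.bz2 -C /mnt/newdisk
tar -xjf "$SRC.tar.bz2" -C "$DEST"
```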

Hope this helps,
Preston


More information about the Linux-PowerEdge mailing list