Expanding RAID5 guidance - SOLUTION/RESULTS

Frank Warnke frank at newspapersystems.com
Tue Mar 3 08:53:56 CST 2009


On Mon, 2009-03-02 at 11:40 -0800, Jeff Boyce wrote:

> See posting at bottom.
> 
> ----- Original Message ----- 
> From: "Jeff Boyce" <jboyce at meridianenv.com>
> To: <linux-poweredge at dell.com>
> Sent: Friday, January 30, 2009 2:33 PM
> Subject: Expanding RAID5 guidance
> 
> 
> > Greetings -
> >
> > I am a novice Linux user who manages a file server for a small business 
> > and am looking for a little general guidance on understanding the right 
> > steps for expanding the storage capacity on a Dell Server.  Sorry for the 
> > long post, but I know how irritating it is when people don't describe 
> > their objective or provide all the details for someone else to understand 
> > the problem.
> >
> > Existing Server Setup:
> >   Dell PE 2600
> >   PERC 4/Di
> >   3 - 36GB hard drives in Raid5
> >   1 - 36GB dedicated hot spare
> >   No LVM used
> >   RHEL 3 update 9
> >   OMSA 5.1
> >   Used as small office Samba file server
> >
> > Proposed Objective:
> >   Add 2 - 36GB drives in remaining spare slots
> >   Expand Raid5 to include the added space
> >   Make use of the added space by users of the file server
> >
> > I know the first rule of thumb with managing raids and file systems is to 
> > have good backups (multiple backups are better), and I am writing up a 
> > detailed list of additional files to back up besides my home and data 
> > directories, so I think I have this covered.  My second task has been to 
> > make sure that I have a rescue disk or reinstallation disks available in 
> > case it's needed.  If OS reinstallation becomes necessary for some reason, 
> > I am considering upgrading to 5.2 (but that is beside the point).
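> > For concreteness, a sketch of the extra-files backup; /var/lib/samba is 
> > a guess at the Samba state directory and depends on the build:
> >         # tar czf /root/config-backup.tar.gz /etc /var/lib/samba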
> >
> > I have read through the OMSA user guide and feel comfortable going through 
> > the task to physically install the new drives, and the steps for 
> > reconfiguring the existing virtual disk.  This point is where my 
> > information and comfort level begin to fall apart.  The first questions 
> > that I would like answered are:
> >
> > 1.  How long does the reconfiguration process take (I will do this on a 
> > weekend when no one is using the system)?
> > 2.  How do I know when the reconfiguration process is done (something the 
> > user guide doesn't describe)?  As you can see, I want to know what to 
> > expect (good or bad) prior to completing the reconfiguration.
> >
> > Then from my reading of numerous descriptions of expanding raids through 
> > google searches (including a decent summary of the steps written by Matt 
> > Domsch, "Expanding Storage on Linux-based Servers", Feb. 2003), it appears 
> > that I will need to expand the file system to use the new space; but do I 
> > also need to add/create a new partition for this space, or can I expand an 
> > existing partition into this space?  What I would like to do is just 
> > expand one or two existing partitions and distribute the new space among 
> > them, if that is possible (see fstab listed below).  So my next questions 
> > would be:
> >
> > 3.  What are the general steps that I need to do after my raid 
> > reconfiguration is complete to achieve my general objective?
> > 4.  Would it be possible to add the new space to one or two existing 
> > partitions?  I am thinking sda2 and sda10 (/ecosystem is our samba share 
> > data directory that would be given 90% of the new space).
> > 5.  Will I need to add/create a new partition (and samba mount point) to 
> > make use of the new space?  If so I could reorganize our data files to 
> > make use of two samba mount points.
> > 6.  Any other pitfalls I should be aware of, such as what steps need to be 
> > done on unmounted drives?
> >
> > Thanks for any and all comments and suggestions; good howto links are 
> > always welcome.
> >
> > FSTAB
> > -------------------------
> > LABEL=/      /             ext3         defaults               1 1
> > LABEL=/boot  /boot         ext2         defaults               1 2
> > none         /dev/pts      devpts       gid=5,mode=620         0 0
> > none         /proc         proc         defaults               0 0
> > none         /dev/shm      tmpfs        defaults               0 0
> > LABEL=/tmp   /tmp          ext3         defaults               1 2
> > LABEL=/usr   /usr          ext3         defaults               1 2
> > LABEL=/var   /var          ext3         defaults               1 2
> > /dev/sda9    swap          swap         defaults               0 0
> > /dev/sda2    /home         ext3         defaults               1 2
> > /dev/cdrom   /mnt/cdrom    udf,iso9660  noauto,owner,kudzu,ro  0 0
> > /dev/fd0     /mnt/floppy   auto         noauto,owner,kudzu     0 0
> > /dev/st0     /mnt/tape     ext3         noauto,owner,kudzu     0 0
> > /dev/sda10   /ecosystem    ext3         defaults               1 2
> >
> > RECENT LOGWATCH OUTPUT
> > ------------------ Disk Space --------------------
> > Filesystem            Size  Used Avail Use% Mounted on
> > /dev/sda7             2.0G  1.5G  385M  80% /
> > /dev/sda3             190M   54M  128M  30% /boot
> > none                  501M     0  501M   0% /dev/shm
> > /dev/sda8            1012M   33M  928M   4% /tmp
> > /dev/sda5             9.7G  2.8G  6.4G  31% /usr
> > /dev/sda6             9.7G  2.9G  6.3G  32% /var
> > /dev/sda2             2.5G  2.0G  443M  82% /home
> > /dev/sda10             40G   35G  3.7G  91% /ecosystem
> >
> >
> > Jeff Boyce
> > www.meridianenv.com
> 
> I finally had my scheduled maintenance downtime and completed this task.  I 
> thought I would share generally what I did and how I did it, in case other 
> novice administrators out there are interested.
> 
> 1.  Ran my normal tape backup on the Friday night before the down weekend to 
> back up all data files.
> 2.  Rebooted the system to verify it shuts down and restarts properly.  The 
> system had been up 450+ days, so this triggered a file system check, which 
> was also part of my plan.
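>         On a box with less uptime the check can be forced rather than left 
> to ext3's mount-count trigger.  On RHEL 3, either of the following should 
> work (shutdown -F touches /forcefsck, which the init scripts honor at the 
> next boot):
>         # shutdown -rF now
>         # touch /forcefsck && reboot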
> 3.  Modified my /etc/fstab so that I could mount a USB flash drive and a 
> USB-connected portable hard drive.
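>         The added lines looked roughly like this (device names and 
> filesystem types are illustrative and depend on detection order; check 
> dmesg after plugging the drives in, and create the mount points first):
>         /dev/sdb1  /mnt/usbflash  vfat  noauto,user  0 0
>         /dev/sdc1  /mnt/usbdisk   ext3  noauto,user  0 0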
> 4.  Copied some specific RPMs and system files to the flash drive.
> 5.  Inserted the GParted LiveCD and rebooted from it so that the server's 
> drives were not mounted.
> 6.  Made an image of the server onto the USB-connected portable hard drive, 
> in case something went very wrong in subsequent steps.
>         # dd if=/dev/sda of=/dev/sdb bs=8192 conv=noerror,sync
>         This gives no progress report to indicate how long it will take, 
> so I opened another terminal and ran the following command to make dd 
> print a periodic status report (GNU dd reports its I/O statistics when it 
> receives SIGUSR1).
>         # watch -n120 -- pkill -USR1 ^dd$
>         Since the USB transfer rate was 1.1 MB/sec, it took about 18 hours 
> to transfer about 70 GB.
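>         If dd reports no read errors, a rough way to double-check a clone 
> like this is to compare the two devices over the size of the source 
> (rereading both disks over USB takes about as long as the copy did):
>         # cmp -n $(blockdev --getsize64 /dev/sda) /dev/sda /dev/sdb
>         Note that conv=noerror,sync pads unreadable blocks with zeros, so 
> after read errors the two devices would legitimately differ.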
> 7.  Unmounted the USB portable drive, shut down GParted, and powered down 
> the server.
> 8.  Vacuumed all the dust from the server and installed the two new hard 
> drives in my last open slots.
> 9.  Rebooted the system into the standard OS in order to use the OMSA tools.
> 10.  In OMSA, selected the virtual disk, chose the Reconfigure task, and 
> clicked Execute, then stepped through the wizard to select the new physical 
> disks to include in the virtual disk, the RAID level, and the size for the 
> reconfigured virtual disk.
> 11.  OMSA gives a progress report during the reconfiguration process.  It 
> took about 1 hour and 10 minutes to reconfigure a 3-disk RAID 5 of 67 GB 
> into a 5-disk RAID 5 of 135 GB.
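>         For question 2 above: the same progress percentage (and, later, 
> the background initialization in step 15) can also be watched from a 
> terminal if the OMSA command-line tools are installed; the controller 
> number may differ on other systems:
>         # omreport storage vdisk controller=0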
> 12.  Rebooted the server from the GParted LiveCD so that the drives were not 
> mounted.
> 13.  My goal was to just expand my existing last partition (sda10) into the 
> new space.  However, I realized that I first had to expand the existing 
> extended partition (sda4) to include the new space, then expand the last 
> logical partition (sda10), and finally grow the filesystem into it.
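>         GParted does all of this through the GUI, but the shell equivalent 
> of the final "grow the filesystem" step would be roughly the following, 
> run only on the unmounted partition after the partition itself has been 
> enlarged (with no size argument, resize2fs grows the ext3 filesystem to 
> fill the partition):
>         # e2fsck -f /dev/sda10
>         # resize2fs /dev/sda10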
> 14.  Removed the GParted LiveCD and rebooted into the standard OS.
> 15.  Checked the virtual disk in OMSA; it indicated that a background 
> initialization was running, which took about 1 hour to complete.
> 16.  Checked the new size of the Samba share (sda10) on a Windows client box 
> and everything looked good.
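>         From the server side, the same check is simply:
>         # df -h /ecosystem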
> 
> When I returned to the office on Monday morning, the staff was working on 
> the Samba share without even noticing anything had changed.  I am glad no 
> one noticed the change: success goes unnoticed by normal users; failure 
> gets noticed.  I would like to thank everyone who educated me and gave me 
> guidance (both on- and off-list) on how to complete this task and what to 
> expect.  I hope some other novice might now learn from my description.
> 
> Jeff Boyce
> www.meridianenv.com
> 

Very good information.

FWIW, I have done similar tasks, and for steps 5-7 I have used G4L
(http://sourceforge.net/projects/g4l).  It is a live Linux CD that you
can boot from.  G4L will compress the dd image to a USB drive, an FTP
server, or another drive in the system, and it reports the status of its
progress.  I have not booted G4L on a Dell server in some time, so I
don't know for sure what support it has for the latest Dell PERCs.
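
The by-hand equivalent of what G4L automates would be roughly the
following (writing a compressed image file to a mounted target instead
of cloning device-to-device; the paths are illustrative):

        # dd if=/dev/sda bs=8192 conv=noerror,sync | gzip -c > /mnt/usbdisk/sda.img.gz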

Thanks,
Frank


