Expanding RAID5 guidance
jboyce at meridianenv.com
Mon Feb 2 13:17:14 CST 2009
Thanks for the input. I think this gets me a little closer. I am posting
more input at the bottom.
----- Original Message -----
From: "Ryan Bair" <ryandbair at gmail.com>
To: "Jeff Boyce" <jboyce at meridianenv.com>
Cc: <linux-poweredge at dell.com>
Sent: Sunday, February 01, 2009 12:06 PM
Subject: Re: Expanding RAID5 guidance
> Hi Jeff,
> Since no one has bitten, I'll throw in my $0.02.
> On Fri, Jan 30, 2009 at 5:33 PM, Jeff Boyce <jboyce at meridianenv.com> wrote:
>> Greetings -
>> I am a novice Linux user that manages a file server for a small business and
>> am looking for a little general guidance on understanding the right steps
>> for expanding the storage capacity on a Dell server. Sorry for the long
>> post, but I know how irritating it is when people don't describe their
>> objective or provide all the details for someone else to understand the problem.
>> Existing Server Setup:
>> Dell PE 2600
>> PERC 4/Di
>> 3 - 36GB hard drives in Raid5
>> 1 - 36GB dedicated hot spare
>> No LVM used
>> RHEL 3 update 9
>> OMSA 5.1
>> Used as small office Samba file server
>> Proposed Objective:
>> Add 2 - 36GB drives in remaining spare slots
>> Expand Raid5 to include the added space
>> Make use of the added space by users of the file server
>> I know the first rule of thumb with managing raids and file systems is to
>> have good backups (multiple backups are better) and I am writing up a
>> detailed list of additional files to backup besides my home and data
>> directories, so I think I have this covered. My second task has been to
>> make sure that I have a rescue disk or reinstallation disks available in
>> case it's needed. If an OS installation becomes necessary for some reason, I
>> am considering upgrading to 5.2 (but that is beside the point).
>> I have read through the OMSA user guide and feel comfortable going through
>> the task to physically install the new drives, and the steps for
>> reconfiguring the existing virtual disk. This point is where my knowledge
>> and comfort level begin to fall apart. The first questions that I would
>> like answered are:
>> 1. How long does the reconfiguration process take (I will do this on a
>> weekend when no one is using the system)?
> I'd certainly leave a weekend just in case things go terribly wrong.
> The drives are pretty small, so I wouldn't expect it to take over an
> hour to resize the array, but I've never resized on a PERC 4.
>> 2. How do I know when the reconfiguration process is done (something the
>> user guide doesn't describe)? As you can see I want to know what to expect
>> (good or bad) prior to completing the reconfiguration.
> The console should list the process and its percent completion.
>> Then from my reading of numerous descriptions of expanding RAIDs through
>> Google searches (including a decent summary of the steps written by Matt
>> Domsch, "Expanding Storage on Linux-based Servers", Feb. 2003), it appears
>> I will need to expand the file system to use the new space; but do I also
>> need to add/create a new partition for this space, or can I expand an
>> existing partition into this space? What I would like to do is just expand
>> one or two existing partitions and distribute this space among them, if that
>> is possible (see the fstab listed below). So my next questions would be:
>> 3. What are the general steps that I need to do after my raid
>> reconfiguration is complete to achieve my general objective?
>> 4. Would it be possible to add the new space to one or two existing
>> partitions? I am thinking sda2 and sda10 (/ecosystem is our samba share
>> data directory that would be given 90% of the new space).
>> 5. Will I need to add/create a new partition (and samba mount point) to
>> make use of the new space? If so I could reorganize our data files to
>> make use of two Samba mount points.
>> 6. Any other pitfalls I should be aware of, such as what steps need to be
>> done on unmounted drives?
> This will be very tricky, especially since no volume manager is being
> used. You can resize a partition, but it must be contiguous. So to
> expand anything other than the last partition, you'll need to play
> musical file systems, which will be tedious and error-prone. You could
> also just make a new partition and make a filesystem on it, but that
> would probably be the least friendly to the users of the system.
> Personally, I would backup the current system, test the backup (on a
> spare system or on a VM) and manually restore to a fresh setup
> preferably with LVM for everything but /boot. This will involve
> manually creating the physical and logical volumes, transferring the
> data back, and setting up the bootloader and fstab. However, I am
> completely crazy. Don't try this unless you're 100% positive your
> backups actually work and you've planned the whole process and did a
> dry run.
>> Thanks for any and all comments and suggestions; good howto links are
>> welcome. My current fstab:
>> LABEL=/ / ext3 defaults 1
>> LABEL=/boot /boot ext2 defaults 1
>> none /dev/pts devpts gid=5,mode=620 0
>> none /proc proc defaults 0
>> none /dev/shm tmpfs defaults 0
>> LABEL=/tmp /tmp ext3 defaults 1
>> LABEL=/usr /usr ext3 defaults 1
>> LABEL=/var /var ext3 defaults 1
>> /dev/sda9 swap swap defaults 0
>> /dev/sda2 /home ext3 defaults 1 2
>> /dev/cdrom /mnt/cdrom udf,iso9660 noauto,owner,kudzu,ro 0 0
>> /dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0
>> /dev/st0 /mnt/tape ext3 noauto,owner,kudzu 0 0
>> /dev/sda10 /ecosystem ext3 defaults 1 2
>> RECENT LOGWATCH OUTPUT
>> ------------------ Disk Space --------------------
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda7 2.0G 1.5G 385M 80% /
>> /dev/sda3 190M 54M 128M 30% /boot
>> none 501M 0 501M 0% /dev/shm
>> /dev/sda8 1012M 33M 928M 4% /tmp
>> /dev/sda5 9.7G 2.8G 6.4G 31% /usr
>> /dev/sda6 9.7G 2.9G 6.3G 32% /var
>> /dev/sda2 2.5G 2.0G 443M 82% /home
>> /dev/sda10 40G 35G 3.7G 91% /ecosystem
>> Jeff Boyce
>> Linux-PowerEdge mailing list
>> Linux-PowerEdge at dell.com
>> Please read the FAQ at http://lists.us.dell.com/faq
> Hope this helps,
I agree with your personal recommendation (backup and restore onto a fresh
setup), but I can't meet the assumptions you have made. I only have the
one server in the company, no spares, no VM, nada; and I would have loved to
have LVM on this system, but I did not set it up 5 years ago. This system
also contains my tape backup software (not something I want to reinstall
from scratch). So I have to work with what I have in front of me. Also
since I mentioned that it is a small business, admin'ing the server is only
a part time thing for me, my full time responsibility is as a consulting
forester. So one of my goals on this task is to minimize the potential for
me making a mistake that could cost a lot of my time to restore the system.
Thanks for the advice on the partitions. It looks like if I really knew
what I was doing I could expand the sda10 (ecosystem) partition to take up
the new space, but none of the other partitions. From this advice I will
probably consider just adding a new partition and then reorganizing my file
directory system to use it in an appropriate way. I don't think that would
be too much of an inconvenience to our users (all 9 of them) and might be my
safest approach for minimizing mistakes.
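As for knowing when the reconfigure itself is done, it looks from the OMSA
CLI guide like I can watch it from the command line. The controller and
vdisk IDs below are guesses for my box; running omreport storage vdisk with
no arguments should list the real ones:

```shell
# Poll the virtual disk state from the OMSA CLI; controller=0/vdisk=0
# are assumptions -- run "omreport storage vdisk" alone to list the
# real IDs on this PERC 4/Di.
omreport storage vdisk controller=0 vdisk=0
# While the grow runs, the output should show the reconfigure task and
# its percent complete; State should return to Ready when it finishes.
```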
So I am down to needing a few pieces of information:
1. How do I create a new partition sda11 on my expanded raid5? Is this
with fdisk? With disks mounted or unmounted?
2. How do I expand the file system? Is this with resize2fs? With disks
mounted or unmounted?
3. Then I assume that I need to create a new entry in fstab?
4. Then do I need to reboot in order to verify that I have the correct
entry in fstab?
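In case it helps anyone confirm or correct me, here is the whole sequence as
I currently understand it, with the filesystem steps rehearsed on a scratch
image file so I can practice them without root and without touching the
array. The device name sda11 and the /ecosystem2 mount point are my
assumptions, not anything I've verified:

```shell
# The real sequence, as I understand it (corrections welcome!):
#   fdisk /dev/sda        -> "n" to add sda11 in the new space, "w", then
#                            reboot so the kernel rereads the partition table
#   mke2fs -j /dev/sda11  -> ext3 on the new partition
#   (add "/dev/sda11 /ecosystem2 ext3 defaults 1 2" to /etc/fstab;
#    /ecosystem2 is a placeholder mount point)
#   mount -a              -> checks the fstab entry without another reboot
#
# Rehearsal of the filesystem steps on a scratch image file (no root,
# nothing real touched); this also shows resize2fs growing an existing fs:
PATH=$PATH:/sbin:/usr/sbin            # e2fsprogs often live in sbin
IMG=/tmp/scratch.img
dd if=/dev/zero of=$IMG bs=1M count=0 seek=64 2>/dev/null    # sparse 64 MB "disk"
mke2fs -F -q -j -b 1024 $IMG 32768    # ext3 using only the first 32 MB
dd if=/dev/zero of=$IMG bs=1M count=0 seek=128 2>/dev/null   # "grow" the disk
e2fsck -f -p $IMG >/dev/null          # resize2fs wants a clean check first
resize2fs $IMG                        # offline grow to fill the new space
dumpe2fs -h $IMG 2>/dev/null | grep 'Block count'
```

If the grow worked, that last grep should report a block count well above
the original 32768.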