Trying to build 7-45 TB storage on the cheap with only Dell hardware

Ryan Bair ryandbair at gmail.com
Fri Jun 6 11:49:41 CDT 2008


As mentioned, you could use DRBD, which mirrors at the block level,
plus a cluster filesystem that both nodes can mount, such as GFS2 or OCFS2.
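
A rough sketch of the DRBD side (hostnames, devices and addresses here
are invented, so adjust to taste):

  resource r0 {
      protocol C;
      net {
          allow-two-primaries;   # both nodes primary so GFS2/OCFS2 can mount on each
      }
      on node-a {
          device    /dev/drbd0;
          disk      /dev/sdb;    # the MD1000-backed volume on this head
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on node-b {
          device    /dev/drbd0;
          disk      /dev/sdb;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }

You would then do something like
mkfs.gfs2 -p lock_dlm -t mycluster:storage -j 2 /dev/drbd0 and mount it
on both heads. Keep in mind that dual-primary DRBD really wants proper
fencing and a cluster stack (cman/DLM for GFS2) underneath it, or a
split brain will ruin your day.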

GlusterFS works at the file level. In this scenario, each node's volume
would be formatted with an ordinary local filesystem (ext3, XFS, etc.)
and Gluster would be configured to keep at least two copies of every
file in the cluster. Failover (and balancing) would be automatic for
clients using the Gluster FUSE client, but if you exported via NFS or
similar you would be on your own to make sure clients failed over.
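
The replication is set up in the client-side volfile. Very roughly,
from memory of the 1.3 syntax (server names and brick names here are
made up, so check the wiki for the real thing):

  # client.vol -- mirror every file across two storage heads
  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host storage1        # first head / MD1000
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host storage2        # second head / MD1000
    option remote-subvolume brick
  end-volume

  volume mirror
    type cluster/afr                   # automatic file replication
    subvolumes remote1 remote2
  end-volume

Each server exports its local directory as "brick" in a matching server
volfile, and the FUSE client mounts "mirror", so every write goes to
both bricks.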

On Thu, Jun 5, 2008 at 8:09 PM, Ray Van Dolson <rvandolson at esri.com> wrote:
> On Mon, Jun 02, 2008 at 09:14:54AM -0700, Ryan Bair wrote:
>> I love the MD1000 PERC5/6 combination. It's really great performance
>> and scalability at a good price.
>>
>> I keep all of my drives in a big 15-disk block, then chain the
>> enclosures together with the dual EMMs. At the OS level I use LVM to
>> tie the individual MD1000s together into a single mega volume group.
>> Keep in mind that you can hook up to 45TB of raw disk space (with 1TB
>> drives) to a single PERC card. With a 2U server and 3 PERC cards, you
>> can really get a lot of storage on a single machine.
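
(For what it's worth, the LVM part above is just the usual dance,
something like the following, with the device names invented here:

  pvcreate /dev/sdb /dev/sdc /dev/sdd       # one PV per PERC virtual disk
  vgcreate storage /dev/sdb /dev/sdc /dev/sdd
  lvcreate -l 100%FREE -n bigvol storage
  mkfs.xfs /dev/storage/bigvol              # or ext3, whatever you prefer

...and you vgextend/lvextend later as you add enclosures.)
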
>>
>> If you need to scale across multiple servers, you may want to look at
>> GlusterFS.
>>
>> Some comments inline.
>
> This sounds like something we are interested in setting up.  Can you
> give insight into the following potential setup?
>
> Ideally we'd like to have one front-end box and one MD1000 loaded with
> disks in two separate colo rooms.  We'd like the backend storage to be
> mirrored between the two MD1000's and if one MD1000 fails the other
> can automatically step in and continue to be accessed via *either*
> front-end box (regardless of whether it's directly attached to the
> MD1000 or not).
>
> I envision each front-end box having a direct connection to one MD1000
> and an iSCSI connection to the other... then using LVM or software RAID
> to mirror between the two and formatting the whole thing with GFS.
>
> Better way to do this?
>
> Ray
>


