Trying to build 7-45 TB storage on the cheap with only Dell hardware

Ryan Bair ryandbair at gmail.com
Mon Jun 2 11:14:54 CDT 2008


I love the MD1000 + PERC 5/6 combination. It offers really great
performance and scalability at a good price.

I keep all of the drives in each enclosure in one big 15-disk block,
then chain the enclosures together via the dual EMMs. At the OS level
I use LVM to tie the individual MD1000s together into a single mega
volume group. Keep in mind that you can hook up to 45 TB of raw disk
space to a single PERC card (three daisy-chained MD1000s x 15 drives
x 1 TB). With a 2U server and 3 PERC cards, you can really get a lot
of storage on a single machine.
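
Roughly, the LVM part looks like this -- a minimal sketch where
/dev/sdb and /dev/sdc stand for two MD1000s each exported as a single
RAID virtual disk by the PERC, the volume and filesystem names are
placeholders, and XFS is just my pick for large files:

  pvcreate /dev/sdb /dev/sdc          # turn each array into a PV
  vgcreate storage /dev/sdb /dev/sdc  # one big volume group
  lvcreate -l 100%FREE -n bigvol storage
  mkfs.xfs /dev/storage/bigvol
  mount /dev/storage/bigvol /mnt/storage

  # when another MD1000 shows up, grow everything online
  pvcreate /dev/sdd
  vgextend storage /dev/sdd
  lvextend -l +100%FREE /dev/storage/bigvol
  xfs_growfs /mnt/storage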

If you need to scale across multiple servers, you may want to look at
GlusterFS.

Some comments inline.

On Mon, Jun 2, 2008 at 9:34 AM, bert hubert <bert.hubert at netherlabs.nl> wrote:
> Hi everybody!
>
> I hope you can help us - I am already talking to my account manager, but I
> think the Linux perspective is better on this list.
>
> I am looking for a storage solution, a DAS with a "flexible" growing
> capability from 7 to 16 TB, and possibly up to 45 TB.
>
> As I am trying not to spend too much money, I am hoping to see if we can
> let Linux do a lot of the heavy work so I don't need to get a NAS or a
> SAN.
>
> For convenience's sake, I only want to use Dell hardware, as this is easily
> supported, and I don't want to trawl the web for separate solutions.
>
> My storage needs are mostly very large files, so I don't need a lot in the
> way of dedicated filesystems.
>
> The MD1000 seems promising, but will it do what I want? It seems to be a
> great, not-too-expensive solution: a bunch of disks for storage, connected to
> one machine or two, let Linux do the work, and there you go.
>
> Do the individual disks show up (/dev/sda, sdb, sdc, etc.)? Or does the
> controller already convert the disks into huge devices? Either is fine with
> us, I have good experiences with md-support in Linux.
>
> In other words, a 'dumb container of disks' would also be fine by me - but
> the MD1000 appears to do more. If a 'dumb container of disks' is available,
> which one works best?
>
> Some more questions:
>
> - Which card is needed in the PowerEdge servers to connect to the Dell
> MD1000? A secondary PERC 6/E SAS RAID controller card, a PERC 5/E SAS RAID
> controller card, or something else?
I'd go with the PERC 6/E because it supports RAID 6. When using cheap
SATA drives, I like the extra redundancy.
>
> - In a 7/8 split configuration with a second EMM in the MD1000, what kind of
> card and cables do I need in the server(s)?
I wouldn't do the split unless I really, really needed it. All my
MD1000s came with all the SAS cables I needed for either a split or
redundant connections back to a single server. Note that with GFS and
a little magic, two servers can access all 15 drives simultaneously.
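
I won't pretend that's the whole recipe -- GFS needs the Red Hat
cluster stack underneath (cman with a cluster.conf, and clvmd so both
nodes see the same LV) -- but once that plumbing is in place the
filesystem part boils down to something like this, with all names
made up:

  # DLM locking, one journal per node, cluster "mycluster"
  gfs_mkfs -p lock_dlm -t mycluster:shared -j 2 /dev/vg_shared/lv_shared

  # then mount it on both servers
  mount -t gfs /dev/vg_shared/lv_shared /mnt/shared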
>
> - With a second EMM in a 7/8 split configuration (to separate servers), do you
> get two connections of 4 * 3 Gbps each, or two connections of 2 * 3 Gbps each?
> In other words is the backend of the MD1000 capable of acting like two
> individual MD1000, each filled with only half the disks?
Each EMM has its own 4 * 3 Gbps SAS link, so with two EMMs in split
mode each side gets the full 4 * 3 Gbps.
>
> - To be sure, is the size of the 7/8 split configuration expandable by
> adding a secondary MD1000?
Never tried it, but I'd be surprised if that wasn't the case.
> - Do you have any additional tips and tricks for the MD1000 - how to get
> Linux to work with it to get the best performance.
I have found that the deadline IO scheduler really helps performance.
A big read-ahead on the block device also helps.
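
For example, on a 2.6 kernel (the device name is only an
illustration; tune the read-ahead number to your workload):

  # switch the array's block device to the deadline elevator
  echo deadline > /sys/block/sdb/queue/scheduler

  # bump read-ahead to 8192 sectors (4 MB)
  blockdev --setra 8192 /dev/sdb

Neither setting survives a reboot, so once you're happy with the
numbers put them in rc.local or a udev rule.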
> Thanks!
>
> --
> http://www.PowerDNS.com      Open source, database driven DNS Software
> http://netherlabs.nl              Open and Closed source services
>
> _______________________________________________
> Linux-PowerEdge mailing list
> Linux-PowerEdge at dell.com
> http://lists.us.dell.com/mailman/listinfo/linux-poweredge
> Please read the FAQ at http://lists.us.dell.com/faq
>


