Trying to build 7-45 TB storage on the cheap with only Dell hardware

dell at dell at
Fri Jun 6 12:03:44 CDT 2008

On Fri, 6 Jun 2008, Ryan Bair wrote:

> As mentioned, you could use DRBD which operates on the block level
> plus a shared mountable filesystem like GFS2 or OCFS2.

Just bear in mind that you only need GFS (GFS2 is not yet production 
stable) or OCFS2 if you need active-active load balancing operation. This 
would only give you benefits if:

- There is no significant write contention
- Your files are few and big (as opposed to many and small)

If your use case doesn't fit those criteria, the chances are that you will 
see better performance with an active-passive fail-over solution.

(Note: There may be ease of administration reasons to use active-active 
despite that.)

> GlusterFS works on the file level. So in this scenario, each volume
> would be formatted with some random filesystem

Not quite random. The underlying FS must support xattrs for the AFR 
(mirroring) translator in GlusterFS. Most do, but it's important to make 
sure you aren't trying to use one that doesn't (e.g. Reiser4 still has no 
support for xattrs).
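A quick way to sanity-check a filesystem before committing to it is to try 
setting a user xattr on a scratch file (AFR itself uses trusted.* xattrs, 
but user.* support is a reasonable proxy, and it doesn't need root). This 
is just a sketch; it assumes the attr package (setfattr) is installed:

```shell
# Probe whether the filesystem under the current directory supports
# user xattrs. The attribute name is arbitrary; the file is a temp file.
f=$(mktemp ./xattr-probe.XXXXXX)
if setfattr -n user.glustertest -v yes "$f" 2>/dev/null; then
    result="xattrs supported"
else
    result="no xattr support"
fi
rm -f "$f"
echo "$result"
```

Run it from a directory on the filesystem you intend to use as a GlusterFS 
backend; "no xattr support" means pick another FS (or install attr and 
re-check).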

> and Gluster would be
> configured to ensure that each file has at least two copies in the
> cluster. Failover (and balancing) would be automatic for people using
> the Gluster FUSE client, but if you exported via NFS or similar you
> would be on your own to ensure clients failed over.

Indeed, but you can use heartbeat or RHCS to fail over resource groups 
from NAS to NAS (or SAN to SAN, depending on how you configure your 
storage servers).
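With heartbeat v1, the whole export stack can be grouped as one resource 
line so it moves between nodes as a unit. A minimal sketch (node names, IP, 
DRBD resource, and mount point are all placeholders, not a tested config):

```
# /etc/ha.d/haresources -- heartbeat v1 style, identical on both nodes.
# node-a normally owns the floating IP, the DRBD-backed filesystem,
# and the NFS server; heartbeat fails the whole group over together.
node-a IPaddr::192.168.0.50/24 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 nfs-kernel-server
```

Clients mount the floating IP, so they don't care which node is active.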

Also note that transparent failover is only the case if you use GlusterFS 
with client-side AFR. This has a disadvantage, in that all writes from a 
client go to both "servers". This may or may not be an issue depending on 
your network layout. You can also configure server-side AFR, which means 
each client only talks to one server, and the servers replicate writes to 
each other. It means you only need a fast pipe between the two servers 
(e.g. a dedicated cross-over interface), rather than also requiring a fast 
pipe to all clients.
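For the server-side case, each server combines its local posix brick with a 
protocol/client pointing at its peer, and exports the resulting AFR volume. 
A rough sketch in the GlusterFS 1.3-style vol file syntax (hostnames and 
paths are examples; treat this as an outline, not a drop-in config):

```
# glusterfs-server.vol on server1 (mirror image on server2)
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # the peer over the cross-over link
  option remote-subvolume posix
end-volume

volume afr
  type cluster/afr
  subvolumes posix remote             # writes replicate server-to-server
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes afr
  option auth.ip.afr.allow *
end-volume
```

Clients then carry a plain protocol/client vol file pointing at whichever 
server they are assigned to.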

Of course, if you have server-side AFR, the fail-over isn't as 
transparent, as you'd need to implement server fencing and resource 
fail-over. If you do it that way, then you are probably better off 
exporting GlusterFS to clients via NFS over UDP as that fails over much 
more gracefully (i.e. GlusterFS/AFR between servers, NFS export to 
clients).


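For that layout, the NFS side is ordinary exports plus a UDP mount against 
the floating service IP. A sketch (network, hostname, and mount options are 
examples; soft/intr vs hard is a judgment call for your workload):

```
# /etc/exports on the active server -- export the GlusterFS mount point
/mnt/glusterfs 192.168.0.0/24(rw,sync,no_subtree_check)

# client /etc/fstab -- mount the floating IP over UDP so a failover
# looks like a transient server restart rather than a dead TCP session
storage-vip:/mnt/glusterfs  /mnt/data  nfs  udp,soft,intr  0 0
```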
More information about the Linux-PowerEdge mailing list