FC RAID array -- One writer, Many Readers?
jake at dtiserv2.com
Thu Oct 30 17:24:00 CST 2003
Thanks for the reply.
I know I can do this with GFS, CXFS, SANergy, etc., and I have used GFS fairly extensively with (mostly) success.
The problem is that all these clustered file systems assume you'll want the ability to write from each node simultaneously, which adds a lot of complexity (file locking, concurrency issues, etc., which in turn require heartbeats, I/O fencing, and so on).
I want something much simpler. I want the ability to write from only a _single_ machine, while still being able to read the file system "live" from the other connected machines.
Obviously this requires something more than just a traditional file system in order for clients to be made "aware" of changes to the file system (that their kernels had nothing to do with). I just don't believe that it should require something as complex as a full-blown clustered file system.
I'm just asking around because I'm hoping that I've managed to miss something, and someone will chime in with "Hey man, why not just use X" -- No one has done that yet, damnit ;-)
I'm considering just using GFS and paring it down to what I can _not use_: mounting file systems RO, disabling I/O fencing, etc.
But that's still going to make it more complex than I think it should be.
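For reference, the naive shared-ext3 setup I'm trying to avoid looks something like this (device names and mount points are made up for illustration; /dev/sdb1 is the FC LUN as each host sees it):

```shell
# /etc/fstab on the single writer:
/dev/sdb1  /data  ext3  rw,noatime  0 0

# /etc/fstab on each of the four readers:
/dev/sdb1  /data  ext3  ro  0 0
```

The catch, as noted in the quoted reply below, is that ext3 caches metadata in each reader's kernel, so the readers never learn about the writer's changes and can behave very badly on the resulting inconsistent state.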
DTI Services, Inc.
On Fri, 31 Oct 2003 07:15:15 +1000 (EST)
jason andrade <jason at rtfmconsult.com> wrote:
> On Thu, 30 Oct 2003, Jake Gold wrote:
> > I have a FC RAID array with 5 machines (with FC HBAs) connected to it.
> > All 5 servers see this "drive" (as a SCSI device - using the QLogic driver).
> > I want to "serve" (read: read) the same data from all 5 servers, but only allow one server to write to it:
> > 4 servers read-only
> > 1 server read-write
> this is the problem i looked at 3+ years ago. there wasn't really a solution then and i don't
> think there is one now.
> > I'm well aware of (and have used) GFS for this exact scenario, but it strikes me as overkill.
> > And I could, of course, create an ext3 file system on the shared disk. I would be able to mount it properly, but changes to the file system would not be
> > propagated to the read-only servers. So this is a big problem obviously.
> yes. the read servers would probably crash or behave very strangely.
> > Is anyone aware of something a bit more than a simple file system (or a standard configuration of one, I should say) but less than a full-blown Cluster FS like GFS?
> you could try talking to veritas as they may have a cluster aware filesystem
> for linux.
> > Or maybe another solution to this?
> sgi may have a cluster aware filesystem too - cxfs so perhaps some
> research there.
> > (I'm using Linux on Dell servers, so that's how I rationalize asking this list ;-)
> > I also could not find any list that seemed more appropriate, but I would welcome a referral.)
> please let us know how you go - this has been a problem in linux for a while and
> it'd be great to know there is a FC level solution (apart from GFS).
> till now i have gotten around it by using a network protocol instead, e.g.
> NFS (and some people have used CODA or other such methods).
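For what it's worth, a minimal sketch of the NFS workaround jason describes, assuming only the writer mounts the FC LUN and the readers mount from it over the network (host names and paths here are hypothetical):

```shell
# /etc/exports on the writer (the only host touching the FC device):
/data  reader1(ro,sync) reader2(ro,sync) reader3(ro,sync) reader4(ro,sync)

# on each reader, instead of mounting the FC device directly:
mount -t nfs -o ro writer1:/data /data
```

This gives up the direct FC data path on the readers, which is exactly what I was hoping to keep, but it does preserve the simple single-writer semantics.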