Corrupt filesystem question

David Truchan-contr David.Truchan-contr at
Tue Apr 27 17:04:01 CDT 2004

Hi All,

Two Dell 6650 servers, clustered, running Red Hat AS 2.1 (e.24 kernel), attached to
an EMC CX600 via a single QLogic 2340 HBA in each server.  The switch is zoned
such that each server sees two paths to each LUN.  We have an unlicensed
version of PowerPath installed in case trespassing of LUNs occurs.

Recently, we have noticed filesystem corruption
on many of the filesystems attached to the SAN.

We believe this to have occurred over the past year, since
we had many server crashes due to kernel instability
issues prior to the e.24 kernel.  We suspect that an fsck has never
actually been run on these filesystems since their original creation.

The filesystems are ext3, and they are mounted and
unmounted by the Red Hat clustering software.

It appears that the Red Hat clustering software simply mounts the
devices but never performs any type of fsck check.  Hence,
we never noticed that we had filesystem corruption.

I'm wondering if there's a way to configure the clustering software to
better notify us when we have corrupt filesystems, or if there's some clever way
of scripting a check before the filesystems get mounted.  Keep in mind
that I wouldn't want to run an fsck if the other node in the cluster had
those filesystems mounted.  I don't think adding the code to the individual cluster service
stop and start scripts would work either, since by the time those scripts get
run the filesystems are already mounted, and I would be running an fsck on
an open filesystem.
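For what it's worth, one way to sketch such a check (a rough, untested idea, not anything Cluster Manager provides): a small wrapper that refuses to touch a device mounted locally and otherwise runs fsck in read-only mode, so the on-disk data is never modified.  The device name and where you hook it in are assumptions, and this only detects *local* mounts; you would still have to guarantee the other node doesn't have the filesystem mounted, e.g. by running it only on the node that currently owns the service.

```shell
#!/bin/sh
# Read-only corruption check for a clustered LUN, to be run before the
# cluster software mounts the filesystem.  Device name below is only an
# example; substitute the emcpower path your cluster services use.

# Return 0 if the given device appears as a mount source in /proc/mounts
# on this node.
is_mounted() {
    grep -q "^$1 " /proc/mounts
}

# Read-only fsck of a device, refusing if it is mounted locally.
check_device() {
    dev=$1
    if is_mounted "$dev"; then
        echo "$dev is mounted here; refusing to fsck an open filesystem" >&2
        return 1
    fi
    # -n: check only, answer "no" to all prompts, never write to the disk.
    # A nonzero exit status means fsck found (or suspects) problems.
    fsck.ext3 -n "$dev"
}

# Example invocation: ./precheck.sh /dev/emcpowera1
if [ $# -gt 0 ]; then
    check_device "$1"
fi
```

The exit status could then be used to send an alert or abort the service start instead of silently mounting a corrupt filesystem.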

The other question I have: I have configured the cluster to
use the /dev/emcpower device names instead of the /dev/sd?1 names.  I am calling the
emcpower path directly, since this helps avoid the device renaming issue that occurs
when I add/remove LUNs.  I'm wondering if this could be a cause
of the filesystem corruption, and whether I should use something
like devlabel instead.

Your thoughts and opinions are appreciated.



More information about the Linux-PowerEdge mailing list