Preventing I/O starvation on MD1000s triggered by a failed disk.
Carlson, Timothy S
Timothy.Carlson at pnl.gov
Tue Aug 24 19:10:56 CDT 2010
Just focusing on one type of disk error in my reply here. The really nasty one IMHO.
Consumers aren't going to care about single-bit silent data corruption. One slightly off-colour pixel per movie? The PC isn't even going to complain as the video comes off the disk, because it thinks it is getting the correct bits. Even a RAID controller is going to be happy with the bad data, because it has no idea the data is bad! If you are looking for the Higgs boson, though, that one bit may be the one you want :)
If you really do care about silent data corruption, then you need to be checksumming blocks as they come off the disk. There are some companies that now sell products to protect you from silent data corruption on the path from the disk to the controller. The other reply mentioned ZFS, which is end to end, including the silent bits.
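To make the idea concrete, here is a minimal sketch of application-level block checksumming, detecting corruption on re-read. All names here are hypothetical illustrations, not any vendor's product; real products do this at the controller or filesystem layer rather than in application code:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # checksum granularity: one digest per 64 KiB block

def block_checksums(path):
    """Compute a SHA-256 digest for each fixed-size block of a file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def verify_read(path, stored_digests):
    """Re-read the file and return the indices of blocks whose current
    digest no longer matches the stored one (i.e. silent corruption)."""
    current = block_checksums(path)
    return [i for i, (old, new) in enumerate(zip(stored_digests, current))
            if old != new]
```

The point is that the drive and the RAID controller will happily hand back a corrupted block without complaint; only an independent checksum computed at write time can tell you the bits changed.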
One of our recent RFPs for a file system (about 1 Petabyte) included a clause for silent data corruption because we cared about such losses.
From: linux-poweredge-bounces at dell.com [mailto:linux-poweredge-bounces at dell.com] On Behalf Of Kevin Davidson
Sent: Tuesday, August 24, 2010 4:05 PM
Cc: linux-poweredge List
Subject: Re: Preventing I/O starvation on MD1000s triggered by a failed disk.
On 24 Aug 2010, at 22:25, Stroller <stroller at stellar.eclipse.co.uk> wrote:
> Are you seriously telling me that if I go out and buy a 2TB external
> drive from PC World, fill it up with movies, it's sure to fail before
> I've used it 6 full times? Because that's what your "1 per every 12
> TB" claim seems to imply. I don't think manufacturers would release
> drives with such poor reliability, because I don't think consumers
> would stand for it.
You should search for reports of CERN's studies on silent data loss. They really care about this, since each run of the LHC generates terabytes of data that they cannot afford to have corrupted before they analyse it. Their findings turned out to be pretty scary: data reported as correctly written and correctly read back did not match a known good copy, at frequencies that will show up in regular use of terabyte-and-larger disks. It looks like a combination of controller errors and media failures. RAID doesn't help you at all; the best it can do is tell you there's a problem. ZFS is about the only technology that can combat this.
The problem is not drive failure, or even bad blocks; it's much more insidious, and it cannot be detected without storing checksums elsewhere and periodically checking that the data and the checksums still agree. This problem has always been there; it's an issue now because we are dealing with more data and hitting it more often.
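A periodic scrub along those lines might look like the following sketch: checksums kept in a separate database, compared against the data on each pass. The sidecar-JSON layout is purely an assumption for illustration; ZFS and commercial products do this internally and per-block:

```python
import hashlib
import json
import os

def scan(root):
    """Map each file under root to the SHA-256 digest of its contents."""
    sums = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

def scrub(root, db_path):
    """Compare current checksums against a previously saved database and
    return files whose contents silently changed. The first run seeds
    the database and reports nothing."""
    current = scan(root)
    if not os.path.exists(db_path):
        with open(db_path, "w") as f:
            json.dump(current, f)
        return []
    with open(db_path) as f:
        stored = json.load(f)
    return [p for p, digest in stored.items() if current.get(p) != digest]
```

Note the checksum database lives outside the tree being scrubbed; keeping the checksums on separate storage is exactly the "checksums elsewhere" requirement, since a checksum stored next to the data can rot along with it.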
Apple Certified System Administrator
Sent from my iPad
indigospring :Making Sense of IT
t 0870 745 4001