[Linux-PowerEdge] 2 predicted failure disks and RAID5
bakalarski.g at gmail.com
Tue Nov 14 12:52:45 CST 2017
Thanks for valuable input.
Regarding the punctured block: from fwtermlog I got several (not many) lines like:
11/13/17 3:24:45: EVT#08603-11/13/17 3:24:45: 97=Puncturing bad block on
PD 02(e0x20/s2) at 9ecd
T35: maintainPdFailHistory=0 disablePuncturing=0
All of them refer to the same PD and the same bad block (just at different times).
Is my RAID useless?
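Not part of the original thread, but a minimal sketch of how one could scan a saved fwtermlog dump and count distinct punctured blocks per physical disk; the line format is assumed from the sample above, so the regex is a guess that may need adjusting for other firmware versions:

```python
import re

# Matches the puncture event format seen in the fwtermlog excerpt above,
# e.g. "97=Puncturing bad block on PD 02(e0x20/s2) at 9ecd".
PUNCTURE_RE = re.compile(
    r"Puncturing bad block on\s+PD (\S+) at ([0-9a-f]+)",
    re.IGNORECASE,
)

def punctured_blocks(log_text):
    """Return a dict mapping PD identifier -> set of punctured block addresses."""
    hits = {}
    for pd, addr in PUNCTURE_RE.findall(log_text):
        hits.setdefault(pd, set()).add(addr)
    return hits
```

If every event maps to the same PD and the same address, as in this case, the dict will contain a single disk with a single block.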
BTW: why do you think RAID level migration to RAID-6 with 2 additional disks
would be better than with one disk? I would keep the VD size the same.
Anyway, will migration to RAID-6 fail with this "awful puncturing"?
Anyway x 2: would such a command generally work (assuming my VD0 (or Array0
or L0) is RAID-5):
megacli -LDRecon -CfgSpanAdd -r6 -Array0[32:0,32:1,32:2,32:3,32:4] -a0
(I try to figure out how to make LDRecon i.e. raid level migration with
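For reference, a sketch of the documented MegaCli form for online RAID level migration, which adds drives with -Add -PhysDrv rather than -CfgSpanAdd; the enclosure:slot IDs [32:4,32:5] are assumptions for the two empty slots and must be checked against -PDList output before running anything:

```
# Hypothetical sketch, not verified on this controller:
# migrate logical drive 0 on adapter 0 to RAID-6, pulling in two new drives.
MegaCli -LDRecon -Start -r6 -Add -PhysDrv[32:4,32:5] -L0 -a0
```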
2017-11-14 17:56 GMT+01:00 Stephen Dowdy <sdowdy at ucar.edu>:
> On 11/14/2017 09:52 AM, Grzegorz Bakalarski wrote:
> > I have a server (R815) with a PERC H710. I have 4 disks and RAID-5 on
> > them. The server has 2 empty disk slots.
> > Overnight I noticed that 2 disks entered the predicted-failure state - the
> > LEDs on those 2 disks flash yellow (not green). MegaCLI shows that the 2
> > disks have a high "Predictive Failure Count:" - a few thousand.
> *IF* this all rebuilds fine and goes Optimal again, you really want to:
> megacli -fwtermlog -dsply -aall | grep -i punct
> If you see a punctured block, you're gonna have to back up what you can,
> rebuild the RAID from scratch and restore, because there's no good way to
> fix a punctured stripe.
> (ignore the lines that say the controller supports puncturing).
> If you have to rebuild, i'd go RAID6 with your 6 drives.
> Stephen Dowdy - Systems Administrator - NCAR/RAL
> 303.497.2869 - sdowdy at ucar.edu - http://www.ral.ucar.edu/~
> Linux-PowerEdge mailing list
> Linux-PowerEdge at dell.com