[Linux-PowerEdge] Fixing a degraded RAID 1 configuration

Thibaut Pouzet thibaut.pouzet at lyra-network.com
Wed Sep 11 08:31:47 CDT 2013

Hi all,

A while ago, I noticed that one of my machines had a RAID 1 configuration 
running in degraded mode: only one of the disks was active. The machine is 
a PowerEdge R410 running an up-to-date CentOS 6.4, with the 
srvadmin-*-7.2.0-*x86_64 packages installed. Upon investigation, the 
"faulty" disk had a foreign configuration, for an unknown reason. To 
restore the RAID:
* I logged into the HTTPS GUI on port 1311 as the root user, then went to 
"SAS 6/iR Integrated" > "Information/Configuration" > "Controller Tasks", 
chose "Foreign Configuration Operations", and clicked "Execute". On the 
new page, I clicked "Clear".
* At this step, the disk went from the Foreign state to the Ready state. 
I now want to set it as a hot spare, so that it automatically rejoins the 
degraded RAID group and the array gets rebuilt.
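Before trying to assign the spare, it can be worth confirming which virtual 
disk is actually degraded. A minimal sketch of how I check this; the sample 
output below is hypothetical (field layout may differ between OMSA versions), 
and the real command on this box is "omreport storage vdisk controller=0":

```shell
# Hypothetical sample of 'omreport storage vdisk controller=0' output.
# Real invocation:
#   /opt/dell/srvadmin/sbin/omreport storage vdisk controller=0
sample='ID                        : 0
Status                    : Critical
Name                      : Virtual Disk 0
State                     : Degraded
Layout                    : RAID-1'

# Extract the State field (the " : " separator matches OMSA's key : value style).
vdisk_state=$(printf '%s\n' "$sample" | awk -F' : ' '/^State/ {print $2}')
echo "vdisk state: $vdisk_state"
```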
Nowhere in the GUI can I find anything relevant, so I turned to the CLI 
and gathered the correct controller ID and disk:
/opt/dell/srvadmin/sbin/omreport storage controller
/opt/dell/srvadmin/sbin/omreport storage pdisk controller=0
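The pdisk report is fairly verbose; this is a sketch of how I narrow it down 
to "pdisk-ID state" pairs so the Ready disk is easy to spot. The sample 
output is hypothetical, and the real command is the omreport invocation 
shown in the comment:

```shell
# Hypothetical sample of pdisk output. The real command on this box is:
#   /opt/dell/srvadmin/sbin/omreport storage pdisk controller=0
sample='ID                        : 0:0:0
Status                    : Ok
State                     : Online
ID                        : 0:0:1
State                     : Ready'

# Print "pdisk-ID state" pairs (remember the last ID seen, emit it with
# each following State line).
pdisk_states=$(printf '%s\n' "$sample" \
  | awk -F' : ' '/^ID/ {id=$2} /^State/ {print id, $2}')
printf '%s\n' "$pdisk_states"
```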
Then I tried this command:
sudo /opt/dell/srvadmin/sbin/omconfig storage pdisk 
action=assignglobalhotspare controller=0 pdisk=0:0:1 assign=yes
and got this result:
Operation disabled. Read, action=assignglobalhotspare assign=yes
Refer to the documentation for more information
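One alternative I am considering, in case the controller refuses *global* 
hot spares: assigning the disk as a *dedicated* hot spare for the degraded 
virtual disk instead. This is an untested sketch; "assigndedicatedhotspare" 
and the vdisk ID of 0 are assumptions on my part, so the block is written as 
a dry run that only prints the command:

```shell
# Untested alternative: dedicated hot spare instead of a global one.
# action=assigndedicatedhotspare and vdisk=0 are assumptions -- check your
# OMSA CLI documentation and 'omreport storage vdisk controller=0' first.
OMCONFIG=/opt/dell/srvadmin/sbin/omconfig
CONTROLLER=0
PDISK=0:0:1
VDISK=0

cmd="$OMCONFIG storage pdisk action=assigndedicatedhotspare controller=$CONTROLLER pdisk=$PDISK vdisk=$VDISK assign=yes"
echo "$cmd"   # drop the echo (and add sudo) to actually run it
```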

Am I missing a step? My goal here is to restore the RAID without losing 
data, and ideally without rebooting the machine. Am I following the right 
process, or is there another way?


