[Linux-PowerEdge] [Poweredgec-tools] c410x, GPU/M2050 and C6220

Philippe SENOT philippe.senot at univ-lorraine.fr
Fri Jul 12 06:52:35 CDT 2013


Hi Thomas!

Many thanks for your advice!

It works on node1 with the motherboard part number 3C9JJ, as you said!

I'm waiting for Dell to replace the motherboards on the other three nodes.
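
For the record, a quick way to confirm both M2050s are usable on the repaired node (just a sketch; it assumes the NVIDIA driver is already installed):

#> nvidia-smi -L      # should print one line per detected Tesla M2050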

It was a long, long (and crazy!) road!

Best regards,

Philippe.


On 09/07/2013 11:15, Thomas Dargel wrote:
> Hi Philippe,
>
>  Please don't hurt me, but the second connector from the left on the C410x is
> NOT inserted correctly! That is definitely not the reason, though, why you
> don't see any M2050 in any of the C6220 nodes.
>
> Regarding my question about the MB P/N:
> How old is your C6220? Mine is from August last year, and I saw related
> behaviour with an InfiniBand mezzanine card and a HIC.
> If possible, try removing the 10GbE card in one node and check whether you can
> see the HIC and the M2050s with lspci (it might also help to reduce to one GPU
> per node while testing).
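>
> A minimal check would be something like this (just a sketch; 10de is the
> NVIDIA PCI vendor ID):
>
>  #> lspci | grep -i -e "PLX" -e "NVIDIA"   # C410x PLX switches and the M2050s
>  #> lspci -nn -d 10de:                     # only the NVIDIA devices, with IDs
>
> If the PLX bridges show up but the M2050s don't, the HIC link itself is
> probably fine and the problem is more likely on the C410x side.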
>
> This worked in my case; if it does for you as well, you should open a support case with Dell.
>
> Keep me updated,
> best regards
>
>  Thomas.
>
> On 09.07.2013 10:33, Philippe SENOT wrote:
>> Hi Thomas,
>>
>>
>> On 09/07/2013 08:39, Thomas Dargel wrote:
>>> Hi Philippe,
>>>
>>>  is there a mezzanine card installed in addition to the HIC?
>> Yes.
>> There are two mezzanine cards: the HIC and an Intel 82599 dual-port 10 GbE.
>>
>>> Could you check and post which mainboard P/N you have?
>> TTH1R
>>>
>>> #> /opt/dell/pec/bmc allinfo  | grep "Board Extra"
>>> [out of the bmc-packages from poweredgec.com]
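>>>
>>> If the poweredgec packages are not at hand, standard tools should report the
>>> same part number (just a sketch; the exact field names can vary):
>>>
>>> #> ipmitool fru print | grep -i "board part"
>>> #> dmidecode -s baseboard-product-name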
>>>
>>> Furthermore, you should check that the iPass cables are connected
>>> properly -- my impression of this type of connector is that it's always a
>>> gamble whether it is seated correctly or not.
>>>
>> I verified again. It seems to be ok.
>> You can see that here:
>> https://filex.univ-lorraine.fr/get?k=PCKV9nSIOQnknTxrykl
>>
>>> Could you check and post the BIOS and BMC versions of the C6220 and
>>> C410x? (A quick way to read the node versions locally is sketched below.)
>>>
>>>  BIOS C6220: >= 1.0.28; better 1.1.19
>>>  BMC C6220: >= 1.24; better 2.02
>>>  BMC C410x: 1.28
>>>
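>>> Both can be read from the OS on each node (a sketch; ipmitool needs the
>>> ipmi_si and ipmi_devintf modules loaded for in-band access):
>>>
>>> #> dmidecode -s bios-version
>>> #> ipmitool mc info | grep -i "firmware revision"
>>>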
>> My firmware versions are all the same as those you listed.
>> I'm using CentOS 6.3, and there is no X server installed on the nodes.
>>
>> Thanks for your help.
>>
>>
>> Cheers,
>>
>> Philippe.
>>
>>
>>> Good luck,
>>> cheers,
>>>
>>>  Thomas.
>>>
>>> On 08.07.2013 16:09, Philippe SENOT wrote:
>>>> Hello,
>>>>
>>>> I'm using a cluster with one C6220, connected via HIC cards and 4 iPass
>>>> cables to one C410x containing 8 M2050 GPUs, mapped 2 GPUs per node.
>>>> CentOS 6.3 is installed on the C6220.
>>>>
>>>> I can't see the HIC card or the GPUs.
>>>>
>>>> An "lspci" command shows nothing about them.
>>>>
>>>>
>>>> I saw that Martin Flemming had the same problem.
>>>>
>>>> Which BIOS options did you disable to make the GPUs visible?
>>>>
>>>> Thanks for your help
>>>>
>>>> Philippe.
>>>>
>>>> ************************************************************************************************* 
>>>>
>>>>
>>>>
>>>>> From: "Martin Flemming" <martin.flemming at desy.de>
>>>>> To: "Dell poweredge Mailling-liste" <linux-poweredge at dell.com>
>>>>> Sent: Tuesday, March 27, 2012 1:39:49 AM (GMT-0500) America/New_York
>>>>> Subject: c410x, GPU/M2075 and C6145
>>>>>
>>>>>
>>>>> Hi !
>>>>>
>>>>> I've got a problem to get the
>>>>> NVIDIA M2075 PCIe x16 GPGPU Card in PowerEdge C410x
>>>>> to work with (at this time for two) C6145 :-(
>>>>>
>>>>> ... lspci shows nothing about them :-(
>>>>>
>>>>> After disabling all special PCIe BIOS settings,
>>>>> one machine shows the controllers :-)
>>>>>
>>>>> 42:00.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev bb)
>>>>> 43:04.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev bb)
>>>>> 43:08.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev bb)
>>>>> 45:00.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ff)
>>>>> 46:04.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ff)
>>>>> 46:08.0 PCI bridge: PLX Technology, Inc. PEX 8647 48-Lane, 3-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ff)
>>>>> 47:00.0 PCI bridge: PLX Technology, Inc. PEX 8696 96-lane, 24-Port PCI Express Gen 2 (5.0 GT/s) Multi-Root Switch (rev ff)
>>>>> 48:04.0 PCI bridge: PLX Technology, Inc. PEX 8696 96-lane, 24-Port PCI Express Gen 2 (5.0 GT/s) Multi-Root Switch (rev ff)
>>>>> 48:08.0 PCI bridge: PLX Technology, Inc. PEX 8696 96-lane, 24-Port PCI Express Gen 2 (5.0 GT/s) Multi-Root Switch (rev ff)
>>>>> 49:00.0 3D controller: NVIDIA Corporation Tesla M2075 Dual-Slot Computing Processor Module (rev ff)
>>>>> 4a:00.0 3D controller: NVIDIA Corporation Tesla M2075 Dual-Slot Computing Processor Module (rev ff)
>>>> *****************************************************************************************************
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> -- 
>>>> -------------------------
>>>> *Philippe SENOT*
>>>> Systems Administrator, Laboratoire Structure et Réactivité des Systèmes
>>>> Moléculaires Complexes (LSRSMC UMR CNRS 7565)
>>>> *UNIVERSITE de LORRAINE* Institut de Chimie, Physique et Matériaux
>>>>
>>>>
>>>
>>
>> -- 
>> -------------------------
>> *Philippe SENOT*
>> Systems Administrator, Laboratoire Structure et Réactivité des Systèmes
>> Moléculaires Complexes (LSRSMC UMR CNRS 7565)
>> Groupe de Physique des Collisions et Radio-biologie
>> IT infrastructure manager for the ICPM
>>
>> *UNIVERSITE de LORRAINE* Institut de Chimie, Physique et Matériaux
>> 1 Bd Arago, 57078 METZ Cedex 3, FRANCE.
>> Tel. (+33) 03.87.31.58.63  Fax (+33) 03.87.54.72.57  (from abroad, replace 03 with 3)
>> http://lpmc.sciences.univ-metz.fr
>> http://www.srsmc.uhp-nancy.fr
>>
>

-- 
-------------------------
*Philippe SENOT*
Systems Administrator, Laboratoire Structure et Réactivité des Systèmes
Moléculaires Complexes (LSRSMC UMR CNRS 7565)
Groupe de Physique des Collisions et Radio-biologie
IT infrastructure manager for the ICPM

*UNIVERSITE de LORRAINE* Institut de Chimie, Physique et Matériaux
1 Bd Arago, 57078 METZ Cedex 3, FRANCE.
Tel. (+33) 03.87.31.58.63  Fax (+33) 03.87.54.72.57  (from abroad, replace 03 with 3)
http://lpmc.sciences.univ-metz.fr
http://www.srsmc.uhp-nancy.fr