DRAC setup with bonded interfaces
dbrooks at mdah.state.ms.us
Thu Mar 1 10:56:07 CST 2012
And there may lie my problem: I am using mode 4. I guess I could migrate
over to mode 6, since it is about the only other one that can actually
utilize the entire bonded speed. I will look into this more. Thanks!
On 3/1/2012 10:48 AM, Jonathan wrote:
> Hi Donny,
> It is possible to get the iDRACs working over a bonded interface.
> But I guess it might differ by bonding mode: I could imagine bond-mode
> 4 (802.3ad mode) causing issues.
> I am sure that bonding modes 0, 1, 2, 3, 5, and 6 work for a shared
> iDRAC (and I have extensively used modes 5 and 6).
> However, watch out for a possible firmware bug (I have had an issue open
> with Dell for more than 10 months now, and only slowly are we getting
> somewhere) where the iDRAC Express 'shared' interfaces suddenly fail (and
> are only truly and reliably reachable while the OS is powered down) - in
> my case this is true for 5 R815 boxes.
> So keep an eye on things.
> On 03/01/2012 05:11 PM, Donny Brooks wrote:
>> I have a few PowerEdge servers that I am loading Proxmox (Debian-based)
>> onto. They are as follows:
>> R610 with an iDRAC 6 Enterprise
>> 2 x 2900s, each with (I think) an iDRAC 5 Express
>> T710 with an iDRAC 6 Express
>> The R610 and T710 each have 4 NICs; I plan to bond 2 for SAN traffic
>> and 2 bonded with a trunk for the standard network, while the 2900s
>> each only have 2, so it will be 1 and 1. In our setup we run various
>> VLANs, and I want the DRACs to be accessible on the default untagged
>> VLAN 1, just the same as the switches and the hypervisors for the
>> virtual machines. However, I have not found a concise setup to get them
>> accessible on the network. Since the R610 and T710 will be over a
>> trunked and bonded interface, is this even possible? Any guidance is
>> greatly appreciated.
>> Donny B.
>> Linux-PowerEdge mailing list
>> Linux-PowerEdge at dell.com