openmanage 6.2 conflict with Redhat Cluster Suite

Chandrasekhar_R at Dell.com Chandrasekhar_R at Dell.com
Fri May 14 12:53:30 CDT 2010


Hi Achievement,

OMSA has a feature called "Manage Remote Node", which appears at the
top of the left-hand navigation bar in the GUI.

This feature allows you to manage remote system(s) from one centralized
web server. If you remove those packages, you will no longer be able to
manage that system from the OpenManage Server Administrator web server.
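If you want to check afterwards what is left on the system, something
like this should work (a quick sketch; exact package names can differ
between OMSA versions):

rpm -qa 'srvadmin-*' | sort               # OMSA packages still installed
rpm -q --whatrequires libcmpiCppImpl0     # OMSA pieces that need the conflicting library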

For more details, please see chapter 3 of the following document:
http://support.dell.com/support/edocs/software/svradmin/6.2/en/UG/PDF/OMSAUG.pdf

Thanks,
Chandrasekhar R
Dell | OpenManage

-----Original Message-----
From: linux-poweredge-bounces at dell.com
[mailto:linux-poweredge-bounces at dell.com] On Behalf Of
linux-poweredge-request at dell.com
Sent: Friday, May 14, 2010 9:10 PM
To: linux-poweredge-Lists
Subject: Linux-PowerEdge Digest, Vol 71, Issue 20

Send Linux-PowerEdge mailing list submissions to
	linux-poweredge at dell.com

To subscribe or unsubscribe via the World Wide Web, visit
	https://lists.us.dell.com/mailman/listinfo/linux-poweredge
or, via email, send a message with subject or body 'help' to
	linux-poweredge-request at dell.com

You can reach the person managing the list at
	linux-poweredge-owner at dell.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-PowerEdge digest..."


Today's Topics:

   1. openmanage 6.2 conflict with Redhat Cluster Suite
      (Achievement Chan)
   2. dsm_sa_datamgrd crashing irregularly on PE1950 with EL5
      (Rainer Traut)
   3. iscsi offload on rh el based servers guide? (Gianluca Cecchi)


----------------------------------------------------------------------

Message: 1
Date: Fri, 14 May 2010 16:25:19 +0800
From: Achievement Chan <achievement.hk at gmail.com>
Subject: openmanage 6.2 conflict with Redhat Cluster Suite
To: linux-poweredge at dell.com
Message-ID:
	<AANLkTikPCPOhq7XJBXdohqw2YFxDLF6utzmQ6M9UMZoF at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Dear all,
After installing OpenManage 6.2, I tried to install Red Hat Cluster Suite.

I'm running CentOS 5.4 64-bit and used yum to install Red Hat Cluster Suite.

I found a package conflict between libcmpiCppImpl0-2.0.0Dell-1.1.el5.i386
and tog-pegasus (required by cluster-cim-0.12.1-2.el5.centos.x86_64).

To install Red Hat Cluster Suite, I need to remove the following packages,
but I'm not sure what negative impact that might have:

libcmpiCppImpl0
srvadmin-itunnelprovider
srvadmin-standardAgent

Is there any major function of OpenManage that will be impacted?
Or is there any standard way to resolve this problem?
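(For reference, the rough approach I had in mind, assuming plain rpm/yum
on CentOS 5, is below; I have not verified that it is safe:)

rpm -q --whatrequires libcmpiCppImpl0      # check first what would break
rpm -e libcmpiCppImpl0 srvadmin-itunnelprovider srvadmin-standardAgent
yum install cluster-cim                    # should now pull in tog-pegasus cleanly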


Regards,
Achievement

------------------------------

Message: 2
Date: Fri, 14 May 2010 12:16:20 +0200
From: Rainer Traut <tr.ml at gmx.de>
Subject: dsm_sa_datamgrd crashing irregularly on PE1950 with EL5
To: linux-poweredge at dell.com
Message-ID: <4BED22F4.4090902 at gmx.de>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hi,

I'm observing dsm_sa_datamgrd crashing irregularly on a two-node PE1950
cluster with fully patched EL5.5 x86_64. It runs OMSA from the Dell yum
repo.

node1
# grep segfault /var/log/messages*
/var/log/messages:May 14 00:10:29 n01asp7 kernel: dsm_sa_datamgrd[4802]: segfault at 00000000fffffffd rip 00000000004aa9c4 rsp 00000000f4eab008 error 4

node2
# grep segfault /var/log/messages*
/var/log/messages.2:Apr 26 08:04:24 n02asp7 kernel: dsm_sa_datamgrd[4564]: segfault at 00000000fffffffd rip 00000000f7e369c4 rsp 00000000f4d25008 error 4

In both cases this is then followed by:
Server Administrator: Instrumentation Service EventID: 1009  Systems
Management Data Manager Stopped

I'm not quite sure where the problem is; this thing is stable on a
couple of other servers we run, like PE2950s.

Could it be related to the kvm and drbd these two servers run?
Has anybody seen this?
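In case it helps with debugging, my rough plan for the next crash is to
grab a core dump and look at it with gdb, roughly like this (untested;
the service name and paths are just what I expect from a default OMSA
install):

mkdir -p /var/crash
echo '/var/crash/core.%e.%p' > /proc/sys/kernel/core_pattern
ulimit -c unlimited && /etc/init.d/dataeng restart   # the init script may reset the core limit
# after the next segfault:
BIN=$(ps -o cmd= -C dsm_sa_datamgrd | awk '{print $1}' | head -1)   # path to the running binary
gdb "$BIN" /var/crash/core.dsm_sa_datamgrd.*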

Thx
Rainer



------------------------------

Message: 3
Date: Fri, 14 May 2010 17:40:16 +0200
From: Gianluca Cecchi <gianluca.cecchi at gmail.com>
Subject: iscsi offload on rh el based servers guide?
To: linux-poweredge at dell.com
Message-ID:
	<AANLkTin9KoVAxkopU1Xb7sqEAWISxuKVvIrj8a0vP9qy at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello,
I found several guides (both Red Hat and Dell) about iSCSI configuration,
but it seems to me some doubts remain about the configuration itself and
about functionality when using iSCSI offload.

I can test two M610 blades, running RHEL 5.5 and RHEL 6 beta.
Both are x86_64 systems.
Below is information from the RHEL 5.5 server; lspci gives:
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
02:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E
PCI-Express Fusion-MPT SAS (rev 08)
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S
Gigabit Ethernet (rev 20)
05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711
10-Gigabit PCIe
05:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57711
10-Gigabit PCIe

I was able to configure the two 10 Gbit interfaces to connect to several
volumes on a Dell EQL 6010XV storage array, using software-based iSCSI.
Also, I was able to configure dm-multipath so that each volume is seen
through two paths.

Some raw performance tests show no more than 130-140 MB/s (both with
single devices and with multipath devices), whereas other Windows-based
blades, configured with Broadcom offload, reach more than 300 MB/s on
the same storage array.

Trying to configure iSCSI offload on RHEL, I have these doubts:

My 10 Gb BCM57711 card has two MAC addresses (actually I have two of
these cards in total, to bind to the EQL):
00:10:18:58:E8:F8  base Ethernet interface for the network
00:10:18:58:E8:F9  for iSCSI offload (not seen from ifconfig, for example)

1) Do I have to configure an IP for both the Ethernet interface and the
iSCSI offload MAC? If so, must they be different?
Or, if I use offload, is the NIC not assigned an IP at all?

2) Does the NIC have to be configured to start in
/etc/sysconfig/network-scripts/ifcfg-eth0 anyway?
For example, with something like this if it has no IP:
# Broadcom Corporation NetXtreme II BCM57711 10-Gigabit PCIe
DEVICE=eth0
HWADDR=00:10:18:58:E8:F8
ONBOOT=yes
BOOTPROTO=static
TYPE=Ethernet
MTU=9000

3) Can I set MTU=9000 when using iSCSI offload? Is it supported?
If instead I don't have to set the NIC to start at boot, where do I put
the MTU parameter?
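(For what it's worth, I suppose the MTU could also be set at runtime,
outside the ifcfg file, with something like the line below; I have not
verified this plays well with the offload path:)

ip link set dev eth0 mtu 9000    # runtime only, does not survive a reboot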

4) I configured the iSCSI offload iface this way, under the
/var/lib/iscsi/ifaces directory:

# cat bnx2i.00:10:18:58:e8:f9
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = bnx2i.00:10:18:58:e8:f9
iface.ipaddress = 10.10.100.178
iface.hwaddress = 00:10:18:58:e8:f9
iface.transport_name = bnx2i
# END RECORD
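(I suppose the same record could also be created and updated with
iscsiadm itself instead of editing the file by hand, something along
these lines, mirroring the fields above:)

iscsiadm -m iface -I bnx2i.00:10:18:58:e8:f9 -o new
iscsiadm -m iface -I bnx2i.00:10:18:58:e8:f9 -o update -n iface.ipaddress -v 10.10.100.178
iscsiadm -m iface -I bnx2i.00:10:18:58:e8:f9 -o update -n iface.hwaddress -v 00:10:18:58:e8:f9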

and this is for the associated NIC:
[root at orasvi2 ifaces]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:10:18:58:E8:F8
          inet addr:10.10.100.174  Bcast:10.10.100.255  Mask:255.255.255.0
          inet6 addr: fe80::210:18ff:fe58:e8f8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:1683 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:155976 (152.3 KiB)  TX bytes:4562 (4.4 KiB)
          Interrupt:114 Memory:dc800000-dcffffff

I get these results:
[root at orasvi2 ifaces]# iscsiadm -m node -P1
Target: iqn.2001-05.com.equallogic:0-8a0906-97d4b5e06-596000000264bc83-blg9-vol3
Portal: 10.10.100.30:3260,1
Iface Name: bnx2i.00:10:18:58:e8:f9
Iface Name: bnx2i.00:10:18:58:e8:fb
Target: iqn.2001-05.com.equallogic:0-8a0906-8904b5e06-b66000000204bc83-blg9-vol1
Portal: 10.10.100.30:3260,1
Iface Name: bnx2i.00:10:18:58:e8:f9
Iface Name: bnx2i.00:10:18:58:e8:fb
Target: iqn.2001-05.com.equallogic:0-8a0906-94c4b5e06-df9000000234bc83-blg9-vol2
Portal: 10.10.100.30:3260,1
Iface Name: bnx2i.00:10:18:58:e8:f9
Iface Name: bnx2i.00:10:18:58:e8:fb

But actually I can connect with only one card and only to one node... ;-(
[root at orasvi2 ifaces]# iscsiadm -m session -P1
Target: iqn.2001-05.com.equallogic:0-8a0906-94c4b5e06-df9000000234bc83-blg9-vol2
Current Portal: 10.10.100.32:48140,1
Persistent Portal: 10.10.100.30:3260,1
**********
Interface:
**********
Iface Name: bnx2i.00:10:18:58:e8:f9
Iface Transport: bnx2i
Iface Initiatorname: iqn.1994-05.com.redhat:ab59ecbce6c
Iface IPaddress: <empty>
Iface HWaddress: 00:10:18:58:e8:f9
Iface Netdev: eth0
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

Note the
Iface IPaddress: <empty> ...
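(To double-check whether the address actually got committed into the
record, I guess dumping the iface again should show it:)

iscsiadm -m iface -I bnx2i.00:10:18:58:e8:f9   # should list iface.ipaddress = 10.10.100.178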

Just a basic test, on this single volume out of the three, shows:
 dd if=/dev/sdb of=/dev/null bs=10240k count=9500

[root at orasvi2 ~]# vmstat 3
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  1     24 139100 46577740  43056    0    0   3513     2  203  533  0  0 97  2  0
 2  0     24 139660 46576220  42552    0    0 108117    23 3196 6280  0  2 87 11  0
 1  1     24 140576 46575508  42676    0    0 120064     0 3496 6508  0  2 87 11  0
 1  1     24 137440 46580264  42196    0    0 117888     0 3410 6429  0  2 87 11  0
 2  0     24 141512 46577092  41888    0    0 134741    17 3956 6979  0  2 87 11  0
 1  1     24 138568 46580280  41944    0    0 117376     0 3508 6458  0  2 87 11  0
 0  1     24 137620 46581736  41404    0    0 124629     0 3740 6088  0  2 87 11  0
 0  1     24 137148 46582524  41144    0    0 116651    11 3512 5570  0  2 87 11  0
 0  1     24 141428 46578336  40524    0    0 130560    32 3879 6844  0  2 87 11  0
 0  1     24 145376 46574348  41268    0    0 118997     0 3590 6512  0  2 87 11  0
 0  1     24 136468 46583624  40696    0    0 119467     0 3590 5904  0  2 87 11  0

Logs I can see:
May 14 16:35:10 orasvi2 kernel: bnx2i [05:00.00]: ISCSI_INIT passed
May 14 16:35:10 orasvi2 iscsid: Received iferror -19
May 14 16:35:10 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-19,11)
May 14 16:35:11 orasvi2 kernel: bnx2i [05:00.01]: ISCSI_INIT passed
May 14 16:35:11 orasvi2 kernel: bnx2i [05:00.00]: ISCSI_INIT passed
May 14 16:35:11 orasvi2 kernel: bnx2i [05:00.01]: ISCSI_INIT passed
May 14 16:35:11 orasvi2 kernel: bnx2i [05:00.00]: ISCSI_INIT passed
May 14 16:35:11 orasvi2 iscsid: Received iferror -101
May 14 16:35:11 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)
May 14 16:35:11 orasvi2 iscsid: Received iferror -19
May 14 16:35:11 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-19,11)
May 14 16:35:11 orasvi2 iscsid: Received iferror -101
May 14 16:35:11 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)
May 14 16:35:12 orasvi2 kernel: bnx2i [05:00.01]: ISCSI_INIT passed
May 14 16:35:12 orasvi2 iscsid: Received iferror -101
May 14 16:35:12 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)
May 14 16:35:13 orasvi2 kernel:  connection1:0: detected conn error (1011)
May 14 16:35:13 orasvi2 iscsid: Login authentication failed with target iqn.2001-05.com.equallogic:0-8a0906-94c4b5e06-df9000000234bc83-blg9-vol2
May 14 16:35:15 orasvi2 kernel: bnx2i [05:00.00]: ISCSI_INIT passed
May 14 16:35:17 orasvi2 kernel:  connection1:0: bnx2i: conn update - MBL 0x40000 FBL 0x10000 MRDSL_I 0x40000 MRDSL_T 0x10000
May 14 16:35:17 orasvi2 kernel:   Vendor: EQLOGIC   Model: 100E-00   Rev: 4.3
May 14 16:35:17 orasvi2 kernel:   Type:   Direct-Access   ANSI SCSI revision: 05
May 14 16:35:17 orasvi2 kernel: SCSI device sdb: 419450880 512-byte hdwr sectors (214759 MB)
May 14 16:35:17 orasvi2 kernel: sdb: Write Protect is off
May 14 16:35:17 orasvi2 kernel: SCSI device sdb: drive cache: write through
May 14 16:35:17 orasvi2 kernel: SCSI device sdb: 419450880 512-byte hdwr sectors (214759 MB)
May 14 16:35:17 orasvi2 kernel: sdb: Write Protect is off
May 14 16:35:17 orasvi2 kernel: SCSI device sdb: drive cache: write through
May 14 16:35:17 orasvi2 kernel:  sdb: unknown partition table
May 14 16:35:17 orasvi2 kernel: sd 2:0:0:0: Attached scsi disk sdb
May 14 16:35:17 orasvi2 multipathd: sdb: add path (uevent)
May 14 16:35:17 orasvi2 kernel: sd 2:0:0:0: Attached scsi generic sg3 type 0
May 14 16:35:17 orasvi2 kernel: device-mapper: multipath round-robin: version 1.0.0 loaded
May 14 16:35:17 orasvi2 multipathd: vol2: load table [0 419450880 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:16 10]
May 14 16:35:17 orasvi2 multipathd: vol2: event checker started
May 14 16:35:17 orasvi2 multipathd: dm-3: add map (uevent)
May 14 16:35:17 orasvi2 multipathd: dm-3: devmap already registered
May 14 16:35:17 orasvi2 iscsid: connection1:0 is operational now
May 14 16:35:17 orasvi2 iscsid: Could not write to /sys/bus/scsi/devices/2:0:0:0/queue_depth. Invalid permissions.
May 14 16:35:17 orasvi2 iscsid: Could not queue depth for LUN 0 err 13.

and
# multipath -l
vol2 (36090a068e0b5c49483bc3402000090df) dm-3 EQLOGIC,100E-00
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:0:0 sdb 8:16  [active][undef]


If I now try to connect with the other card (the one whose iSCSI MAC
ends in fb; .175 is the IP for eth1 and .179 for the iSCSI part):
# iscsiadm --mode node --portal 10.10.100.30:3260 -I bnx2i.00:10:18:58:e8:fb --login
Logging in to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-97d4b5e06-596000000264bc83-blg9-vol3, portal: 10.10.100.30,3260]
Logging in to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-8904b5e06-b66000000204bc83-blg9-vol1, portal: 10.10.100.30,3260]
Logging in to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-94c4b5e06-df9000000234bc83-blg9-vol2, portal: 10.10.100.30,3260]
iscsiadm: Could not login to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-97d4b5e06-596000000264bc83-blg9-vol3, portal: 10.10.100.30,3260]:
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not login to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-8904b5e06-b66000000204bc83-blg9-vol1, portal: 10.10.100.30,3260]:
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not login to [iface: bnx2i.00:10:18:58:e8:fb, target: iqn.2001-05.com.equallogic:0-8a0906-94c4b5e06-df9000000234bc83-blg9-vol2, portal: 10.10.100.30,3260]:
iscsiadm: initiator reported error (4 - encountered connection failure)

with this in messages:
May 14 17:28:59 orasvi2 kernel: bnx2i [05:00.01]: ISCSI_INIT passed
May 14 17:29:00 orasvi2 last message repeated 2 times
May 14 17:29:00 orasvi2 iscsid: Received iferror -101
May 14 17:29:00 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)
May 14 17:29:00 orasvi2 iscsid: Received iferror -101
May 14 17:29:00 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)
May 14 17:29:00 orasvi2 iscsid: Received iferror -101
May 14 17:29:00 orasvi2 iscsid: cannot make a connection to 10.10.100.30:3260 (-101,11)

Any insight or guide?

M610 firmware information:
BIOS 2.0.13
Network firmware: NETW_FRMW_LX_R259547.BIN applied
Release Date: February 12, 2010
Default Log File Name: R259547

Thanks in advance for any help and/or pointers,
Gianluca

------------------------------

_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge at dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq

End of Linux-PowerEdge Digest, Vol 71, Issue 20
***********************************************


