Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
Date: Mon, 14 May 2007 21:57:17 -0700
From: "Sandrine Boulanger" <sandrine.boulanger@onstor.com>
To: "Chris Vandever" <chris.vandever@onstor.com>,
	"Brian DeForest" <brian.deforest@onstor.com>,
	"Raj Kumar" <raj.kumar@onstor.com>
Cc: "Andy Sharp" <andy.sharp@onstor.com>

Weird, I ran into this earlier when I forgot to copy the latest rc.initial, but this time we did copy it.

You can either power off the filer, swap flashes, and power up, or power cycle it while you have the ssc console open: rush back into your cube, hit ^E on the ssc console when the "......." starts, get to ssc-prom, run "env view" to check the boot_dev variable, "env set boot_dev wd1a" (or wd0a, depending on the current value), then "autoload". This way you'll be back on the 2.2.2 flash and we can mount the other partition to check /etc/rc.initial (and verify we copied it to the right place).

The other option is to go through the first-install script just to specify the filer name and the sc1 IP (maybe also the default route, 10.2.0.1), then get to nfxsh and do a "system reboot -s -y" to go back to 2.2.2. This way the partitions are cleanly unmounted, which is not the case when you power cycle (and then you might need to run fsck).
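For reference, the ssc-prom sequence above as a console sketch. The prompt string and the "boot_dev = wd0a" output are from memory, so treat them as illustrative rather than verbatim:

```
# Power cycle the filer with the ssc console open.
# When the "......." countdown starts, press ^E to drop into ssc-prom.
ssc-prom> env view                # check which flash is currently selected
boot_dev = wd0a
ssc-prom> env set boot_dev wd1a   # switch to the other flash (wd0a <-> wd1a)
ssc-prom> autoload                # continue booting from the selected flash
```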


-----Original Message-----
From: Chris Vandever
Sent: Mon 5/14/2007 7:06 PM
To: Sandrine Boulanger; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
I followed Sandrine's instructions to set up the R2.2.2 flashes and then the R3.0 flashes, including typing 'y' to get the config copied to the secondary (R3.0) flash:

/mnt/R3.0.0.0/R3.0.0.0-050907/nfx-tree/Build/ch/dbg/Release/etc
Do you wish for this flash to automatically copy
the configuration from the other flash at boot time
rather than running the initial config menu? [N] y
I'll take y as a 'Yes' and set the new flash to autoupgrade.
eng28#

However, the system in its infinite wisdom booted into the initial config tool. Sounds like a bug to me.

ChrisV

________________________________

From: Sandrine Boulanger
Sent: Monday, May 14, 2007 9:52 AM
To: Sandrine Boulanger; Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

We lost the evidence again: the filer rebooted on its own yesterday, probably due to a resource or memory leak. Cluster errors are filling elog:

.
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
.
May 13 08:00:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:17:44 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started
May 13 08:17:46 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up

-----Original Message-----
From: Sandrine Boulanger
Sent: Saturday, May 12, 2007 6:14 PM
To: Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

Procedure:

Created 2 new flashes with the 2.2.2 version for g10r9 and g11r9.
Clustered the 2 filers.
Created the configuration (from gateway down to volume level).
From each filer's active flash, mounted the sub#16 Release directory.
Copied the new flash_install.sh under Release/etc.
Started flash_install.sh.
Answered Y at the end to trigger the config-file copy later, once the 3.0 flash boots.
Copied the new rc.initial to /etc on the new 3.0 flash.
Ran "system reboot -s -y" on g10r9.
g10r9 boots, gets the config files from the 2.2.2 flash through rc.initial, then comes up with version 3.0; the cluster db is still at 2.2.2. g11r9 is still running version 2.2.2. Cluster commands are fine.
Ran "system reboot -s -y" on g11r9.
g11r9 boots, gets the config files from the 2.2.2 flash through rc.initial, then comes up with version 3.0. Both filers are now at version 3.0 with the cluster db still at 2.2.2. Starting to see some cluster errors in elog on g10r9.
5 minutes later the cluster db is upgraded to 3.0. Still getting cluster errors on g10r9.
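Condensed as a console-style sketch of the same steps (the mount target and file paths are placeholders for the steps above, not verbatim commands):

```
# On each filer's active 2.2.2 flash:
mount <sub#16 Release directory>      # mount the build tree
cp flash_install.sh <Release/etc>/    # stage the new install script
./flash_install.sh                    # answer 'Y' at the end so the 3.0 flash
                                      # copies the config files at boot
cp rc.initial /etc                    # on the new 3.0 flash

# Then, one filer at a time (g10r9 first, g11r9 second):
system reboot -s -y
```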


-----Original Message-----
From: Chris Vandever
Sent: Sat 5/12/2007 5:36 PM
To: Sandrine Boulanger; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

I'll need the exact procedure that's being used for the upgrade, and access to any scripts, etc. Clearly, the clustering code is confused, since the clusDb shows it was successfully upgraded, yet the clustering code is still trying (unsuccessfully) to upgrade it.

I won't be able to look at it this evening, but I will try to do so tomorrow.

ChrisV

________________________________

From: Sandrine Boulanger
Sent: Sat 5/12/2007 12:33 PM
To: Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

I tested Andy's fix for the copy of the config files. I used the files standalone since we don't have a new build. It worked fine. Prior to the DBs being upgraded from 2.2.2 to 3.0, all commands were running OK. I tried to add a gns root and it refused because the DBs were not upgraded yet, which is good.

Then the DBs were upgraded to 3.0, and at that time I saw the same type of errors we saw last Thursday: "cluster show summary" on g10r9, or "gns show cifs", was failing with cluster2 errors; see the elog below. The commands are OK on g11r9.

g10r9 was the first to be rebooted on the 3.0 flash; g11r9 was second.

g10r9's ssc IP is 10.2.9.10.

I'll leave it in this state; let me know if I can help with the debugging.

# dbtools show /onstor/conf/cluster.db.DB0 -c

Cluster DB Header
------------------------------------
dbHdr.version = 0x30000
dbHdr.signature = 0x434c5553
dbHdr.blockSize = 256
dbHdr.freeOffset = 0
dbHdr.endOffset = 2336000
dbHdr.recCount = 0
dbHdr.hashTblSize = 4096
dbHdr.hashTblOffset = 256
dbHdr.DebugMode = 0

# nfxsh

Welcome to the ONStor NAS Gateway.

g10r9 diag> gns show
% Command incomplete.

g10r9 diag> gns show
  cifs  Show GNS CIFS

g10r9 diag> gns show cifs
  ROOTNAME[\Path]  Root and optional path
  all

g10r9 diag> gns show cifs all
This command is unavailable until after the cluster database has been upgraded.
% Command failure.

g10r9 diag> gns show cifs all
This command is unavailable until after the cluster database has been upgraded.
% Command failure.

g10r9 diag> cluster show summary
Cluster Name: g10r9       Cluster State:   On
------------------------------------------------------
NAS Gateways        IP              State   PCC
------------------------------------------------------
g10r9               10.2.9.10       UP      YES
g11r9               10.2.9.11       UP      NO
Getting cluster information failed
% Command failure.

g10r9 diag>

# cat /var/agile/messages

May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:00:38 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
May 12 12:00:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:09 g10r9 last message repeated 3 times
May 12 12:01:13 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
May 12 12:01:19 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_request_upgrade_db: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Request to upgrade clusDb failed, rc 30
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Running in degraded mode on version 0x20200, will retry upgrade to version 0x30000 later...
May 12 12:01:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:40 g10r9 : 0:0:cluster2:ERROR: cluster_set_lock_on_rpc: no reply bck -1
May 12 12:01:49 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:02:00 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:03:35 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:57 g10r9 last message repeated 3 times
May 12 12:03:57 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:04:06 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:07 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:07 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:04:16 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:26 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:27 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:04:36 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:42 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:04:46 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:56 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:05:28 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:33 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
May 12 12:05:34 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:34 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''
May 12 12:05:41 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:05:50 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:54 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:05:59 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:11 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:15 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:20 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:24 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:06:29 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:06:33 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:40 g10r9 : 0:0:cluster2:ERROR: cluster_upgrade_db_runtime: ClusDb update complete; successfully updated version from 0x20200 to 0x30000
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178995860(554), sending to 10.2.9.11
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: cluster db version matches
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0
May 12 12:07:45 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: gns show : status[4]
May 12 12:08:05 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
May 12 12:08:05 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns show cifs all : status[11]
May 12 12:10:12 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
May 12 12:10:12 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: gns show cifs all : status[11]

# zcat /var/agile/messages.0.gz

May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started
May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open failed
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open failed
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 5
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 6
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 8
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 9
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 7
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g10r9', State Up, Msg ''
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: Using 10.2.9.10 as my primary address
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64
May 12 11:38:17 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Create', State 'Down'
May 12 11:38:18 g10r9 : 1:2:sanm_ag:WARNING: 2: sanm_doRMCLocalClose: reopening session to sanmd
May 12 11:38:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Create', State 'Down'
May 12 11:38:19 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Create', State 'Down'
May 12 11:38:19 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting version 3.0.0.0b : status[2]
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Create', State 'Down'
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Create', State 'Down'
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Create', State 'Down'
May 12 11:38:21 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Create', State 'Down'
May 12 11:38:22 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting from /dev/wd0a : status[2]
May 12 11:38:22 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Create', State 'Down'
May 12 11:38:23 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'LSI_E4600A_R46_g10r9_core', Id 0x0000020000000074, Event 'Create', State 'Down'
May 12 11:38:32 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
May 12 11:38:39 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open failed
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000001026a8a93, WWN 100000068d002a40, LUN 1, vendor 'IBM', model 'ULTRIUM-TD3'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000299256570, WWN 100000068d002a40, LUN 2, vendor 'IBM', model 'ULTRIUM-TD3'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000474cbbcf7, WWN 100000068d002a40, LUN 4, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000005020ee656, WWN 100000068d002a40, LUN 5, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000007527722f4, WWN 100000068d002a40, LUN 7, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000008c30a5552, WWN 100000068d002a40, LUN 8, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000ad8b479e3, WWN 100000068d002a40, LUN 10, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000b326471cc, WWN 100000068d002a40, LUN 11, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000d7663d86b, WWN 100000068d002a40, LUN 13, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000e50bb2aad, WWN 100000068d002a40, LUN 14, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000010cecbc163, WWN 100000068d002a40, LUN 16, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001101709690, WWN 100000068d002a40, LUN 17, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_closeSess : NCM session closed
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open success
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000120c18b3b8, WWN 100000068d002a40, LUN 18, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001483e7a0c2, WWN 100000068d002a40, LUN 20, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000154c5cf731, WWN 100000068d002a40, LUN 21, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000168d36b6dd, WWN 100000068d002a40, LUN 22, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000017428de12e, WWN 100000068d002a40, LUN 23, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000019533a4544, WWN 100000068d002a40, LUN 25, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001ad886bb10, WWN 100000068d002a40, LUN 26, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000000b26acfa2, WWN 100000068d002a40, LUN 0, vendor 'ADIC', model 'Scalar-24'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000003632ebb68, WWN 100000068d002a40, LUN 3, vendor 'STK', model 'L180'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000061ed3ca31, WWN 100000068d002a40, LUN 6, vendor 'STK', model 'L180'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000093a855f24, WWN 100000068d002a40, LUN 9, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ca1e082bf, WWN 100000068d002a40, LUN 12, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ff746fa9b, WWN 100000068d002a40, LUN 15, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000013719221e7, WWN 100000068d002a40, LUN 19, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000182eb0d71b, WWN 100000068d002a40, LUN 24, vendor 'STK', model 'L180'
May 12 11:38:46 g10r9 : 0:0:auth_agent:NOTICE: main: auth-agent started
May 12 11:38:49 g10r9 : 1:2:sanm_ag:WARNING: 3: sanm_doRMCLocalClose: reopening session to sanmd
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 10: vstack_vlink_create:294 : vl FO active 1 prefered 1
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 11: vstack_vlink_create:329: vlink lp link op-state DOWN
May 12 11:38:52 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.1.1, Port bp0, State Up
May 12 11:38:55 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Online', was offline for roughly 255 sec.
May 12 11:38:55 g10r9 : 0:0:sanm:NOTICE: SANM: ONStor Data Mirror (c)2006: Started
May 12 11:38:57 g10r9 : 1:0:bsdrl:WARNING: 12: route dst 0 mask 0 gateway 100030a fail to be added/deleted. error = 51
May 12 11:38:57 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.8.1, Port bp0, State Up
May 12 11:38:59 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'VS_MGMT_512', Id 1, State 'Up'
May 12 11:38:59 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[1]
May 12 11:39:00 g10r9 : 0:0:ea:ERROR: ea_procMovInReq[1586] : Volume (vol_mgmt_512) is locked for Online operation
May 12 11:39:01 g10r9 : 0:0:vsd:ERROR: vsd_mountVolProc[4941] : Failed to mount volume ID=0x20000000066. Operation not allowed.
May 12 11:39:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Online', was offline for roughly 266 sec.
May 12 11:39:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'JUMBO_G10R9', Id 8, State 'Up'
May 12 11:39:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[8]
May 12 11:44:43 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: system version : status[0]
May 12 11:44:48 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]
May 12 11:45:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: interface show ip -n g11r9 : status[0]
May 12 11:45:37 g10r9 : 0:0:nfxsh:NOTICE: cmd[3]: cluster sho gr : status[0]
May 12 11:45:41 g10r9 : 0:0:nfxsh:NOTICE: cmd[4]: cluster show summary : status[0]
May 12 11:45:52 g10r9 : 0:0:nfxsh:NOTICE: cmd[5]: vsvr show all : status[0]
May 12 11:49:23 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog show config : status[0]
May 12 11:50:35 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
May 12 11:50:35 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
May 12 11:50:51 g10r9 : 0:0:cluster2:WARNING: ClusterCtrl_ProcessTimerFunc: post node down hostname g11r9
May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name 'g11r9', State Down, Msg ''
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=2), num_vsvr 2
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 3
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 4
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 5
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 6
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Enabled'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Takeover', State 'Up'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Takeover', State 'Up'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Up'
May 12 11:51:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[10]
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Online', was offline for roughly 69 sec.
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Takeover', State 'Up'
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Online', was offline for roughly 52 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Online', was offline for roughly 62 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Online', was offline for roughly 66 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Online', was offline for roughly 59 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Online', was offline for roughly 55 sec.
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.3.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.6.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.5.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.2, Port bp0, State Up
May 12 11:51:07 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
May 12 11:51:07 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Up'
May 12 11:51:07 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[3]
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Up'
May 12 11:51:08 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[6]
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.1, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.2, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.3, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.4, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.5, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.6, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.7, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.8, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
192.167.7.9, Port bp0, State Up

=20

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.10, Port bp0, State Up

=20

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.11, Port bp0, State Up

=20

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.12, Port bp0, State Up

=20

May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.13, Port bp0, State Up

=20

May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.14, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.15, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: =
ncm_open_filer(g11r9) succeeded=20

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.16, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.17, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.18, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.19, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -14=20

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.20, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -14=20

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.21, Port bp0, State Up

=20

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.22, Port bp0, State Up

=20

May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.23, Port bp0, State Up

=20

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.24, Port bp0, State Up

=20

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.25, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.26, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.27, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.28, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.29, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.30, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.31, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.7.32, Port bp0, State Up

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: IP i/f RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'LOCAL_NIS_G10R9', Id 5, State 'Up'

=20

May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[5]

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: =
asd_rmc EVENT: Vsvr RMC Error -3=20

=20

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'AUTHDOM_G10R9', Id 7, State 'Up'

=20

May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[7]

=20

May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: =
retries to filer(g11r9) failed - aborting attemtpts

=20

May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: =
ncm_repoen filer failed - -1

=20

May 12 11:51:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'WINDOWS_G10R9', Id 4, State 'Up'

=20

May 12 11:51:18 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[4]

=20

May 12 11:51:43 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: =
ncm_open_filer(g11r9) succeeded=20

=20

May 12 11:51:46 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: =
ncm_open_filer(g11r9) succeeded=20

=20

May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: =
retries to filer(g11r9) failed - aborting attemtpts

=20

May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: =
ncm_repoen filer failed - -1

=20

May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name =
'g11r9', State Up, Msg ''

=20

May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11=20

=20

May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send =
new file in progress, remote ver 1178991276(64), sending to 10.2.9.11=20

=20

May 12 11:54:01 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send =
new file end, code 0=20

=20

May 12 11:54:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]

=20

May 12 11:58:08 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog find cluster2 : =
status[0]

=20

May 12 11:59:10 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11=20

=20

May 12 11:59:12 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64 =


=20

May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply =
bck -1=20

=20

May 12 11:59:32 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns add root cifs =
gnsroot  : status[11] =E8 tried this before the DB was upgraded

=20

May 12 12:00:03 g10r9 : 0:0:spm:ERROR: failed to get filer list from =
cluster daemon=20

=20

May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over

=20

#

=20

=20



------_=_NextPart_001_01C796AD.82FEFAFE
Content-Type: text/html;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; =
charset=3Diso-8859-1">
<META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version =
6.5.7652.24">
<TITLE>RE: Test upgrade fom 222 to 30 prior to MD upgrade, still seeing =
cluster errors</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/plain format -->

<P><FONT SIZE=3D2>Weird, I ran into this earlier when I forgot to copy the latest rc.initial, but this time we did copy it.<BR>
You can either power off the filer and swap flashes, then power up, or power cycle it while you have the ssc console open: rush back into your cube, hit ^E on the ssc console when the ....... starts, get to ssc-prom, run env view to check the boot_dev variable, env set boot_dev to wd1a or wd0a (whichever is not the current one), then autoload. This way you'll be back on the 222 flash, and we can mount the other partition to check /etc/rc.initial (verify we copied it to the right place).<BR>
The other option is to go through the first-install script just to specify the filer name and sc1 IP (maybe also the default route, 10.2.0.1), then get to nfxsh and do a system reboot -s -y to go back to 2.2.2. This way the partitions are cleanly unmounted, which is not the case when you power cycle (and then you might need to run fsck).<BR>
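
For reference, the first recovery option above condenses to roughly this console sequence (a sketch only; the ssc-prom prompt string and the exact interrupt timing are assumed from the description, and the wd0a/wd1a choice depends on which flash is currently active):

```
(power cycle the filer with the ssc console open;
 press ^E when the "......." boot progress starts)

ssc-prom> env view                  (check the current boot_dev variable)
ssc-prom> env set boot_dev wd0a     (or wd1a -- whichever is NOT current)
ssc-prom> autoload                  (boot from the other, 222 flash)
```

After it boots from the 222 flash, the other partition can be mounted to verify /etc/rc.initial landed in the right place.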
<BR>
<BR>
-----Original Message-----<BR>
From: Chris Vandever<BR>
Sent: Mon 5/14/2007 7:06 PM<BR>
To: Sandrine Boulanger; Brian DeForest; Raj Kumar<BR>
Cc: Andy Sharp<BR>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing cluster errors<BR>
<BR>
I followed Sandrine's instructions to set up the R2.2.2 flashes and then =
the R3.0 flashes, including typing 'y' to get the config copied to the =
secondary (R3.0) flash:<BR>
<BR>
<BR>
<BR>
/mnt/R3.0.0.0/R3.0.0.0-050907/nfx-tree/Build/ch/dbg/Release/etc<BR>
<BR>
Do you wish for this flash to automatically copy<BR>
<BR>
the configuration from the other flash at boot time<BR>
<BR>
rather than running the initial config menu? [N] y<BR>
<BR>
I'll take y as a 'Yes' and set the new flash to autoupgrade.<BR>
<BR>
eng28#<BR>
<BR>
<BR>
<BR>
However, the system in its infinite wisdom booted into the initial =
config tool.&nbsp; Sounds like a bug to me.<BR>
<BR>
<BR>
<BR>
ChrisV<BR>
<BR>
________________________________<BR>
<BR>
From: Sandrine Boulanger<BR>
Sent: Monday, May 14, 2007 9:52 AM<BR>
To: Sandrine Boulanger; Chris Vandever; Brian DeForest; Raj Kumar<BR>
Cc: Andy Sharp<BR>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing cluster errors<BR>
<BR>
<BR>
<BR>
We lost the evidence again: the filer rebooted on its own yesterday, probably due to a resource or memory leak. Cluster errors are filling elog:<BR>
<BR>
<BR>
<BR>
.<BR>
<BR>
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
.<BR>
<BR>
May 13 08:00:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
May 13 08:17:44 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: =
Started<BR>
<BR>
May 13 08:17:46 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 =
State Up, Msg ''<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 0, State Up<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 =
State Down, Msg ''<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 1, State Up<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 =
State Down, Msg ''<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 2, State Up<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 =
State Down, Msg ''<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 3, State Up<BR>
<BR>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, =
CPU 0, State Up<BR>
<BR>
<BR>
<BR>
-----Original Message-----<BR>
From: Sandrine Boulanger<BR>
Sent: Saturday, May 12, 2007 6:14 PM<BR>
To: Chris Vandever; Brian DeForest; Raj Kumar<BR>
Cc: Andy Sharp<BR>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing cluster errors<BR>
<BR>
<BR>
<BR>
Procedure:<BR>
<BR>
<BR>
<BR>
Created 2 new flashes with 2.2.2 version for g10r9 and g11r9.<BR>
<BR>
Clustered the 2 filers<BR>
<BR>
Created configuration (from gateway to volume level)<BR>
<BR>
From the 2 filers' active flash, mount the sub#16 Release directory<BR>
<BR>
Copied the new flash_install.sh under Release/etc<BR>
<BR>
Started the flash_install.sh<BR>
<BR>
Answered Y at the end so that the config-file copy is triggered later, once the 3.0 flash boots<BR>
<BR>
Copied the new rc.initial to /etc on the new 3.0 flash<BR>
<BR>
system reboot -s -y on g10r9<BR>
<BR>
g10r9 boots, gets the config files from the 220 flash through rc.initial, then comes up with the 3.0 version; cluster db still at 220. g11r9 still running version 220. Cluster commands are fine<BR>
<BR>
system reboot -s -y on g11r9<BR>
<BR>
g11r9 boots, gets the config files from the 220 flash through rc.initial, then comes up with the 3.0 version; cluster db still at 220. Both filers are now at version 3.0 with the cluster db at 220. Starting to see some cluster errors in elog on g10r9<BR>
<BR>
5 minutes later the cluster db is upgraded to 300. Still getting cluster errors on g10r9<BR>
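
Condensed, the procedure above amounts to something like the following (a sketch, not verified commands: the mount source for the sub#16 Release directory and the flash mount points are placeholders, and flash_install.sh / system reboot -s -y are taken from the steps as written):

```
(on each filer's active 2.2.2 flash)
mount <sub#16>:/Release /mnt/Release      (placeholder path for the build's Release dir)
cp /mnt/Release/etc/flash_install.sh .    (pick up the new install script)
./flash_install.sh                        (install 3.0 on the other flash; answer Y at the
                                           final prompt so the config is copied at first boot)
cp <new rc.initial> <3.0 flash>/etc/      (copy the fixed rc.initial onto the 3.0 flash)

(then, from nfxsh, one filer at a time)
system reboot -s -y
```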
<BR>
<BR>
<BR>
<BR>
<BR>
-----Original Message-----<BR>
<BR>
From: Chris Vandever<BR>
<BR>
Sent: Sat 5/12/2007 5:36 PM<BR>
<BR>
To: Sandrine Boulanger; Brian DeForest; Raj Kumar<BR>
<BR>
Cc: Andy Sharp<BR>
<BR>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing cluster errors<BR>
<BR>
<BR>
<BR>
I'll need the exact procedure that's being used for the upgrade, and =
access to any scripts, etc.&nbsp; Clearly, the clustering code is =
confused since the clusDb shows it was successfully upgraded, yet the =
clustering code is still trying (unsuccessfully) to upgrade it.<BR>
<BR>
<BR>
<BR>
I won't be able to look at it this evening, but I will try to do so =
tomorrow.<BR>
<BR>
<BR>
<BR>
ChrisV<BR>
<BR>
<BR>
<BR>
________________________________<BR>
<BR>
<BR>
<BR>
From: Sandrine Boulanger<BR>
<BR>
Sent: Sat 5/12/2007 12:33 PM<BR>
<BR>
To: Chris Vandever; Brian DeForest; Raj Kumar<BR>
<BR>
Cc: Andy Sharp<BR>
<BR>
Subject: Test upgrade fom 222 to 30 prior to MD upgrade, still seeing =
cluster errors<BR>
<BR>
<BR>
<BR>
<BR>
<BR>
<BR>
<BR>
I tested Andy's fix for the copy of the config files. I used the files =
as standalone since we don't have a new build. It worked fine. Prior to =
the DBs being upgraded from 220 to 30, all commands were running OK. I =
tried to add a gns root and it refused because the DBs were not upgraded =
yet, which is good.<BR>
<BR>
<BR>
<BR>
Then the DBs were upgraded to 30, and at that time I saw the same type =
of errors we saw last Thursday: cluster show summary on g10r9, or gns =
show cifs was failing with cluster2 errors, see elog below. The commands =
are OK on g11r9.<BR>
<BR>
<BR>
<BR>
G10r9 was the first to be rebooted on 3.0 flash, g11r9 was second.<BR>
<BR>
<BR>
<BR>
G10r9 ssc ip is 10.2.9.10<BR>
<BR>
<BR>
<BR>
I'll leave it in this state, let me know if I can help debugging.<BR>
<BR>
<BR>
<BR>
# dbtools show /onstor/conf/cluster.db.DB0 -c<BR>
<BR>
<BR>
<BR>
Cluster DB Header<BR>
<BR>
<BR>
<BR>
------------------------------------<BR>
<BR>
<BR>
<BR>
dbHdr.version =3D 0x30000<BR>
<BR>
<BR>
<BR>
dbHdr.signature =3D 0x434c5553<BR>
<BR>
<BR>
<BR>
dbHdr.blockSize =3D 256<BR>
<BR>
<BR>
<BR>
dbHdr.freeOffset =3D 0<BR>
<BR>
<BR>
<BR>
dbHdr.endOffset =3D 2336000<BR>
<BR>
<BR>
<BR>
dbHdr.recCount =3D 0<BR>
<BR>
<BR>
<BR>
dbHdr.hashTblSize =3D 4096<BR>
<BR>
<BR>
<BR>
dbHdr.hashTblOffset =3D 256<BR>
<BR>
<BR>
<BR>
dbHdr.DebugMode =3D 0<BR>
<BR>
<BR>
<BR>
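Side note: dbHdr.version 0x30000 here, together with the "Running in degraded mode on version 0x20200, will retry upgrade to version 0x30000" errors later in the log, suggests the cluster DB version word packs major/minor/patch one byte each (my assumption from the two samples, not confirmed anywhere in this thread); the signature 0x434c5553 is just ASCII "CLUS". A quick sketch:<BR>

```python
# Decode a cluster DB version word into (major, minor, patch).
# ASSUMPTION: packing is major<<16 | minor<<8 | patch, which matches
# 0x20200 -> 2.2.0 and 0x30000 -> 3.0.0 as the thread describes them.
def decode_db_version(v: int) -> tuple[int, int, int]:
    return (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF

print(decode_db_version(0x20200))       # (2, 2, 0)
print(decode_db_version(0x30000))       # (3, 0, 0)
print(bytes.fromhex("434c5553"))        # b'CLUS' -- the dbHdr.signature
```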
# nfxsh<BR>
<BR>
<BR>
<BR>
Welcome to the ONStor NAS Gateway.<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; gns show<BR>
<BR>
<BR>
<BR>
% Command incomplete.<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; gns show<BR>
<BR>
<BR>
<BR>
&nbsp; cifs&nbsp; Show GNS CIFS<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; gns show cifs<BR>
<BR>
<BR>
<BR>
&nbsp; ROOTNAME[\Path]&nbsp; Root and optional path<BR>
<BR>
<BR>
<BR>
&nbsp; all<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; gns show cifs all<BR>
<BR>
<BR>
<BR>
This command is unavailable until after the cluster database has been =
upgraded.<BR>
<BR>
<BR>
<BR>
% Command failure.<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; gns show cifs all<BR>
<BR>
<BR>
<BR>
This command is unavailable until after the cluster database has been =
upgraded.<BR>
<BR>
<BR>
<BR>
% Command failure.<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt; cluster show summary<BR>
<BR>
<BR>
<BR>
Cluster Name: g10r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Cluster State:&nbsp;&nbsp; On<BR>
<BR>
------------------------------------------------------<BR>
<BR>
NAS Gateways&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; State&nbsp;&nbsp; PCC<BR>
<BR>
------------------------------------------------------<BR>
<BR>
g10r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10.2.9.10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; YES<BR>
<BR>
g11r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10.2.9.11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; NO<BR>
<BR>
<BR>
<BR>
Getting cluster information failed<BR>
<BR>
<BR>
<BR>
% Command failure.<BR>
<BR>
<BR>
<BR>
g10r9 diag&gt;<BR>
<BR>
<BR>
<BR>
# cat /var/agile/messages<BR>
<BR>
<BR>
<BR>
May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over<BR>
<BR>
<BR>
<BR>
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
<BR>
<BR>
May 12 12:00:38 g10r9 : 0:0:spm:ERROR: failed to get filer list from =
cluster daemon<BR>
<BR>
<BR>
<BR>
May 12 12:00:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:09 g10r9 last message repeated 3 times<BR>
<BR>
<BR>
<BR>
May 12 12:01:13 g10r9 : 0:0:spm:ERROR: failed to get filer list from =
cluster daemon<BR>
<BR>
<BR>
<BR>
May 12 12:01:19 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_request_upgrade_db: =
no reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: =
ClusterCtrl_ProcessTimerFunc: Request to upgrade clusDb failed, rc =
30<BR>
<BR>
<BR>
<BR>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: =
ClusterCtrl_ProcessTimerFunc: Running in degraded mode on version =
0x20200, will retry upgrade to version 0x30000 later...<BR>
<BR>
<BR>
<BR>
May 12 12:01:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:40 g10r9 : 0:0:cluster2:ERROR: cluster_set_lock_on_rpc: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:01:49 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:02:00 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
<BR>
<BR>
May 12 12:03:35 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:03:57 g10r9 last message repeated 3 times<BR>
<BR>
<BR>
<BR>
May 12 12:03:57 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get =
vsId list for node g10r9<BR>
<BR>
<BR>
<BR>
May 12 12:04:06 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:07 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:07 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get =
vsId list for node g10r9<BR>
<BR>
<BR>
<BR>
May 12 12:04:16 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:26 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:27 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:04:36 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:42 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:04:46 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:04:56 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:05:28 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:05:33 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down<BR>
<BR>
<BR>
<BR>
May 12 12:05:34 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:05:34 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name =
'g11r9', State Up, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 12:05:41 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:05:50 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:05:54 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:05:59 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:11 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:15 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:20 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:24 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get cluster rec, rcode 30<BR>
<BR>
<BR>
<BR>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:06:29 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get =
vsId list for node g10r9<BR>
<BR>
<BR>
<BR>
May 12 12:06:33 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up: will be contacted through 10.12.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:40 g10r9 : 0:0:cluster2:ERROR: cluster_upgrade_db_runtime: =
ClusDb update complete; successfully updated version from 0x20200 to =
0x30000<BR>
<BR>
<BR>
<BR>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send =
new file in progress, remote ver 1178995860(554), sending to =
10.2.9.11<BR>
<BR>
<BR>
<BR>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: cluster =
db version matches<BR>
<BR>
<BR>
<BR>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send =
new file end, code 0<BR>
<BR>
<BR>
<BR>
May 12 12:07:45 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: gns show : =
status[4]<BR>
<BR>
<BR>
<BR>
May 12 12:08:05 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply =
bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:08:05 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns show cifs all : =
status[11]<BR>
<BR>
<BR>
<BR>
May 12 12:10:12 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply =
bck -1<BR>
<BR>
<BR>
<BR>
May 12 12:10:12 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: gns show cifs all : =
status[11]<BR>
<BR>
<BR>
<BR>
# zcat /var/agile/messages.0.gz<BR>
<BR>
<BR>
<BR>
May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: =
Started<BR>
<BR>
<BR>
<BR>
May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 0, State Up<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 1, State Up<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 2, State Up<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 3, State Up<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, =
CPU 0, State Up<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 =
State Up, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 =
State Down, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 =
State Down, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 =
State Down, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open =
failed<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open =
failed<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session =
closed<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session =
closed<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
5<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
6<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
8<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
9<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
7<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name =
'g10r9', State Up, Msg ''<BR>
<BR>
<BR>
<BR>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open =
success<BR>
<BR>
<BR>
<BR>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: Using 10.2.9.10 as my primary address
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64
May 12 11:38:17 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Create', State 'Down'
May 12 11:38:18 g10r9 : 1:2:sanm_ag:WARNING: 2: sanm_doRMCLocalClose: reopening session to sanmd
May 12 11:38:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Create', State 'Down'
May 12 11:38:19 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Create', State 'Down'
May 12 11:38:19 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting version 3.0.0.0b : status[2]
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Create', State 'Down'
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Create', State 'Down'
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Create', State 'Down'
May 12 11:38:21 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Create', State 'Down'
May 12 11:38:22 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting from /dev/wd0a : status[2]
May 12 11:38:22 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Create', State 'Down'
May 12 11:38:23 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'LSI_E4600A_R46_g10r9_core', Id 0x0000020000000074, Event 'Create', State 'Down'
May 12 11:38:32 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
May 12 11:38:39 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open failed
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000001026a8a93, WWN 100000068d002a40, LUN 1, vendor 'IBM', model 'ULTRIUM-TD3'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000299256570, WWN 100000068d002a40, LUN 2, vendor 'IBM', model 'ULTRIUM-TD3'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000474cbbcf7, WWN 100000068d002a40, LUN 4, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000005020ee656, WWN 100000068d002a40, LUN 5, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000007527722f4, WWN 100000068d002a40, LUN 7, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000008c30a5552, WWN 100000068d002a40, LUN 8, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000ad8b479e3, WWN 100000068d002a40, LUN 10, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000b326471cc, WWN 100000068d002a40, LUN 11, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000d7663d86b, WWN 100000068d002a40, LUN 13, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000e50bb2aad, WWN 100000068d002a40, LUN 14, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000010cecbc163, WWN 100000068d002a40, LUN 16, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001101709690, WWN 100000068d002a40, LUN 17, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_closeSess : NCM session closed
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open success
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000120c18b3b8, WWN 100000068d002a40, LUN 18, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001483e7a0c2, WWN 100000068d002a40, LUN 20, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000154c5cf731, WWN 100000068d002a40, LUN 21, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000168d36b6dd, WWN 100000068d002a40, LUN 22, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000017428de12e, WWN 100000068d002a40, LUN 23, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000019533a4544, WWN 100000068d002a40, LUN 25, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001ad886bb10, WWN 100000068d002a40, LUN 26, vendor 'IBM', model 'ULTRIUM-TD2'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000000b26acfa2, WWN 100000068d002a40, LUN 0, vendor 'ADIC', model 'Scalar-24'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000003632ebb68, WWN 100000068d002a40, LUN 3, vendor 'STK', model 'L180'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000061ed3ca31, WWN 100000068d002a40, LUN 6, vendor 'STK', model 'L180'
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000093a855f24, WWN 100000068d002a40, LUN 9, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ca1e082bf, WWN 100000068d002a40, LUN 12, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ff746fa9b, WWN 100000068d002a40, LUN 15, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000013719221e7, WWN 100000068d002a40, LUN 19, vendor 'STK', model 'L180'
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000182eb0d71b, WWN 100000068d002a40, LUN 24, vendor 'STK', model 'L180'
May 12 11:38:46 g10r9 : 0:0:auth_agent:NOTICE: main: auth-agent started
May 12 11:38:49 g10r9 : 1:2:sanm_ag:WARNING: 3: sanm_doRMCLocalClose: reopening session to sanmd
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 10: vstack_vlink_create:294 : vl FO active 1 prefered 1
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 11: vstack_vlink_create:329: vlink lp link op-state DOWN
May 12 11:38:52 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.1.1, Port bp0, State Up
May 12 11:38:55 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Online', was offline for roughly 255 sec.
May 12 11:38:55 g10r9 : 0:0:sanm:NOTICE: SANM: ONStor Data Mirror (c)2006: Started
May 12 11:38:57 g10r9 : 1:0:bsdrl:WARNING: 12: route dst 0 mask 0 gateway 100030a fail to be added/deleted. error = 51
May 12 11:38:57 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.8.1, Port bp0, State Up
May 12 11:38:59 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'VS_MGMT_512', Id 1, State 'Up'
May 12 11:38:59 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[1]
May 12 11:39:00 g10r9 : 0:0:ea:ERROR: ea_procMovInReq[1586] : Volume (vol_mgmt_512) is locked for Online operation
May 12 11:39:01 g10r9 : 0:0:vsd:ERROR: vsd_mountVolProc[4941] : Failed to mount volume ID=0x20000000066. Operation not allowed.
May 12 11:39:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Online', was offline for roughly 266 sec.
May 12 11:39:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'JUMBO_G10R9', Id 8, State 'Up'
May 12 11:39:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[8]
May 12 11:44:43 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: system version : status[0]
May 12 11:44:48 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]
May 12 11:45:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: interface show ip -n g11r9 : status[0]
May 12 11:45:37 g10r9 : 0:0:nfxsh:NOTICE: cmd[3]: cluster sho gr : status[0]
May 12 11:45:41 g10r9 : 0:0:nfxsh:NOTICE: cmd[4]: cluster show summary : status[0]
May 12 11:45:52 g10r9 : 0:0:nfxsh:NOTICE: cmd[5]: vsvr show all : status[0]
May 12 11:49:23 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog show config : status[0]
May 12 11:50:35 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
May 12 11:50:35 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
May 12 11:50:51 g10r9 : 0:0:cluster2:WARNING: ClusterCtrl_ProcessTimerFunc: post node down hostname g11r9
May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name 'g11r9', State Down, Msg ''
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=2), num_vsvr 2
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 3
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 4
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 5
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 6
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Enabled'
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Enabled'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Takeover', State 'Up'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Takeover', State 'Up'
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Up'
May 12 11:51:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[10]
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Online', was offline for roughly 69 sec.
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Takeover', State 'Up'
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Takeover', State 'Up'
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Online', was offline for roughly 52 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Online', was offline for roughly 62 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Online', was offline for roughly 66 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Online', was offline for roughly 59 sec.
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Online', was offline for roughly 55 sec.
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.3.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.6.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.5.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.1, Port bp0, State Up
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.2, Port bp0, State Up
May 12 11:51:07 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
May 12 11:51:07 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Up'
May 12 11:51:07 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[3]
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Up'
May 12 11:51:08 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[6]
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.1, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.2, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.3, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.4, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.5, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.6, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.7, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.8, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.9, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.10, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.11, Port bp0, State Up
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.12, Port bp0, State Up
May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.13, Port bp0, State Up
May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.14, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.15, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.16, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.17, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.18, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.19, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.20, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.21, Port bp0, State Up
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.22, Port bp0, State Up
May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.23, Port bp0, State Up
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.24, Port bp0, State Up
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.25, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.26, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.27, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.28, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.29, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.30, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.31, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.32, Port bp0, State Up
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Up'
May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[5]
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: Vsvr RMC Error -3
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Up'
May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[7]
May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts
May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1
May 12 11:51:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Up'
May 12 11:51:18 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[4]
May 12 11:51:43 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
May 12 11:51:46 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts
May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1
May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''
May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178991276(64), sending to 10.2.9.11
May 12 11:54:01 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0
May 12 11:54:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]
May 12 11:58:08 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog find cluster2 : status[0]
May 12 11:59:10 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 11:59:12 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64
May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
May 12 11:59:32 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns add root cifs gnsroot : status[11] <- tried this before the DB was upgraded
May 12 12:00:03 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over
#
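As an aside, when scanning a long elog capture like the one above, it helps to pull out just the ERROR/CRITICAL entries first (here, the cluster_get_db_ver and vsd mount failures). This is only a generic sketch, not an ONStor tool; the severity tag position is assumed from the log format shown above:

```python
import re

# Severity appears as the fourth colon-separated field, e.g. "0:0:cluster2:ERROR:".
SEVERE = re.compile(r":(ERROR|CRITICAL):")

def filter_severe(lines):
    """Keep only log lines tagged ERROR or CRITICAL."""
    return [line for line in lines if SEVERE.search(line)]

sample = [
    "May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1",
    "May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''",
    "May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name 'g11r9', State Down, Msg ''",
]
for line in filter_severe(sample):
    print(line)
```

Run against the full capture, this would surface the two cluster errors without wading through the tape-device and IP-interface noise.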
</FONT>
</P>

</BODY>
</HTML>
------_=_NextPart_001_01C796AD.82FEFAFE--
