X-MimeOLE: Produced By Microsoft Exchange V6.5
Received: by onstor-exch02.onstor.net 
	id <01C7971B.39D404BE@onstor-exch02.onstor.net>; Tue, 15 May 2007 11:02:39 -0700
MIME-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----_=_NextPart_001_01C7971B.39D404BE"
Content-class: urn:content-classes:message
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
Date: Tue, 15 May 2007 11:02:39 -0700
Message-ID: <BB375AF679D4A34E9CA8DFA650E2B04E0138C688@onstor-exch02.onstor.net>
In-Reply-To: <BB375AF679D4A34E9CA8DFA650E2B04E02F1467C@onstor-exch02.onstor.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
Thread-Index: AceUzHDmrKQBNO3gTRuD2SAA03/ulgAKkyw3AAD8iXoAU0mw8AATUhXAAAXP0U4AAGAXOgAbTSeg
From: "Chris Vandever" <chris.vandever@onstor.com>
To: "Raj Kumar" <raj.kumar@onstor.com>,
	"Sandrine Boulanger" <sandrine.boulanger@onstor.com>,
	"Brian DeForest" <brian.deforest@onstor.com>
Cc: "Andy Sharp" <andy.sharp@onstor.com>


Sandrine, thanks for the info.  I power cycled and rebooted on the 222 flash.

Raj, I feel your pain, and I'm only seeing the tip of the iceberg.

Andy, you guys have a bug.

________________________________

From: Raj Kumar
Sent: Monday, May 14, 2007 10:00 PM
To: Sandrine Boulanger; Chris Vandever; Brian DeForest
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

Chris,

Welcome to our hell :)

________________________________

From: Sandrine Boulanger
Sent: Mon 5/14/2007 9:57 PM
To: Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

Weird: I ran into this earlier when I forgot to copy the latest rc.initial, but this time we did copy it.

You have two options. The first is to power off the filer and swap flashes, then power up; or power cycle it while you have the ssc console open, rush back into your cube, hit ^E on the ssc console when the "......." starts, get to ssc-prom, run "env view" to check the boot_dev variable, "env set boot_dev wd1a" or wd0a (depending on what the current value is), then "autoload". This way you'll be back on the 222 flash, and we can mount the other partition to check /etc/rc.initial (and verify we copied it to the right place).

The other option is to go through the first-install script just far enough to specify the filer name and sc1 IP (maybe also the default route, 10.2.0.1), then get to nfxsh and run "system reboot -s -y" to go back to 2.2.2. This way the partitions are cleanly unmounted, which is not the case when you power cycle (and then you might need to use fsck).
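
Condensed into console form, the first fallback above amounts to this at the ssc-prom (only the commands Sandrine names; the prompt and annotations are illustrative, not a verified transcript):

```
<power cycle with the ssc console open; hit ^E when the "......." starts>
ssc-prom> env view                  ; check the current boot_dev value
ssc-prom> env set boot_dev wd1a    ; or wd0a, whichever is not the current one
ssc-prom> autoload                  ; boots back onto the 222 flash
```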


-----Original Message-----
From: Chris Vandever
Sent: Mon 5/14/2007 7:06 PM
To: Sandrine Boulanger; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

I followed Sandrine's instructions to set up the R2.2.2 flashes and then the R3.0 flashes, including typing 'y' to get the config copied to the secondary (R3.0) flash:



/mnt/R3.0.0.0/R3.0.0.0-050907/nfx-tree/Build/ch/dbg/Release/etc
Do you wish for this flash to automatically copy
the configuration from the other flash at boot time
rather than running the initial config menu? [N] y
I'll take y as a 'Yes' and set the new flash to autoupgrade.
eng28#



However, the system in its infinite wisdom booted into the initial config tool.  Sounds like a bug to me.



ChrisV

________________________________

From: Sandrine Boulanger
Sent: Monday, May 14, 2007 9:52 AM
To: Sandrine Boulanger; Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

We lost the evidence again: the filer rebooted on its own yesterday, probably due to a resource or memory leak. Cluster errors are filling the elog:



.
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
.
May 13 08:00:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 13 08:17:44 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started
May 13 08:17:46 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up
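
Incidentally, the retry cadence in that excerpt is easy to confirm mechanically; a small sketch using the standard library, with sample timestamps taken from the elog lines above:

```python
from datetime import datetime

def intervals(lines):
    """Return gaps in seconds between consecutive syslog timestamps.

    Assumes the standard 15-character 'Mon DD HH:MM:SS' syslog prefix.
    """
    stamps = [datetime.strptime(line[:15], "%b %d %H:%M:%S") for line in lines]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

errors = [
    "May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1",
    "May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1",
    "May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1",
    "May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1",
]
print(intervals(errors))  # [180.0, 179.0, 180.0]
```

The roughly 180-second spacing matches the "will retry upgrade ... later" behavior the cluster2 messages describe: a 3-minute retry loop, not random noise.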



-----Original Message-----
From: Sandrine Boulanger
Sent: Saturday, May 12, 2007 6:14 PM
To: Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors



Procedure:

Created 2 new flashes with the 2.2.2 version for g10r9 and g11r9.

Clustered the 2 filers.

Created the configuration (from gateway down to volume level).

From the 2 filers' active flash, mounted the sub#16 Release directory.

Copied the new flash_install.sh under Release/etc.

Started flash_install.sh.

Answered Y at the end, to trigger the config-file copy later, once the 3.0 flash boots.

Copied the new rc.initial to /etc on the new 3.0 flash.

Ran "system reboot -s -y" on g10r9.

g10r9 boots, gets the config files from the 220 flash through rc.initial, then comes up with the 3.0 version; the cluster db is still at 220. g11r9 is still running version 220. Cluster commands are fine.

Ran "system reboot -s -y" on g11r9.

g11r9 boots, gets the config files from the 220 flash through rc.initial, then comes up with the 3.0 version. Both filers are now at version 3.0 with the cluster db at 220. Starting to see some cluster errors in the elog on g10r9.

5 minutes later the cluster db is upgraded to 300. Still getting cluster errors on g10r9.
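
In console form, the per-filer steps above look roughly like this (a summary of the list above with placeholder paths, not a verified script):

```
mount <sub#16 Release directory>     ; from the active 2.2.2 flash
cp flash_install.sh <Release>/etc
./flash_install.sh                   ; answer Y at the final prompt so the
                                     ; 3.0 flash copies the config at boot
cp rc.initial /etc                   ; on the new 3.0 flash
system reboot -s -y                  ; from nfxsh; unmounts partitions cleanly
```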





-----Original Message-----
From: Chris Vandever
Sent: Sat 5/12/2007 5:36 PM
To: Sandrine Boulanger; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

I'll need the exact procedure that's being used for the upgrade, and access to any scripts, etc.  Clearly, the clustering code is confused, since the clusDb shows it was successfully upgraded, yet the clustering code is still trying (unsuccessfully) to upgrade it.



I won't be able to look at it this evening, but I will try to do so tomorrow.



ChrisV



________________________________



From: Sandrine Boulanger
Sent: Sat 5/12/2007 12:33 PM
To: Chris Vandever; Brian DeForest; Raj Kumar
Cc: Andy Sharp
Subject: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

I tested Andy's fix for the copy of the config files. I used the files standalone since we don't have a new build. It worked fine. Prior to the DBs being upgraded from 220 to 30, all commands were running OK. I tried to add a gns root and it refused because the DBs were not upgraded yet, which is good.

Then the DBs were upgraded to 30, and at that point I saw the same type of errors we saw last Thursday: "cluster show summary" on g10r9, or "gns show cifs", was failing with cluster2 errors; see the elog below. The commands are OK on g11r9.



G10r9 was the first to be rebooted on the 3.0 flash; g11r9 was second.

G10r9's ssc IP is 10.2.9.10.

I'll leave it in this state; let me know if I can help with the debugging.



# dbtools show /onstor/conf/cluster.db.DB0 -c

Cluster DB Header
------------------------------------
dbHdr.version = 0x30000
dbHdr.signature = 0x434c5553
dbHdr.blockSize = 256
dbHdr.freeOffset = 0
dbHdr.endOffset = 2336000
dbHdr.recCount = 0
dbHdr.hashTblSize = 4096
dbHdr.hashTblOffset = 256
dbHdr.DebugMode = 0

# nfxsh

Welcome to the ONStor NAS Gateway.

g10r9 diag> gns show
% Command incomplete.

g10r9 diag> gns show
  cifs  Show GNS CIFS

g10r9 diag> gns show cifs
  ROOTNAME[\Path]  Root and optional path
  all

g10r9 diag> gns show cifs all
This command is unavailable until after the cluster database has been upgraded.
% Command failure.

g10r9 diag> gns show cifs all
This command is unavailable until after the cluster database has been upgraded.
% Command failure.

g10r9 diag> cluster show summary
Cluster Name: g10r9       Cluster State:   On
------------------------------------------------------
NAS Gateways        IP              State   PCC
------------------------------------------------------
g10r9               10.2.9.10       UP      YES
g11r9               10.2.9.11       UP      NO
Getting cluster information failed
% Command failure.

g10r9 diag>
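
As a sanity check, dbHdr.version packs the release into hex bytes: 0x30000 here is 3.0.0, and the 0x20200 seen in the elog is 2.2.0. A small decoding sketch; the byte layout is inferred from those two samples, so treat it as an assumption:

```python
def decode_db_version(v: int) -> str:
    """Unpack a cluster-db version like 0x30000 into dotted form.

    Assumed layout (from the 0x30000 -> 3.0 and 0x20200 -> 2.2.x
    correspondence in this thread): one byte each for major.minor.patch.
    """
    return f"{(v >> 16) & 0xff}.{(v >> 8) & 0xff}.{v & 0xff}"

print(decode_db_version(0x30000))  # 3.0.0
print(decode_db_version(0x20200))  # 2.2.0
```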



# cat /var/agile/messages

May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:00:38 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
May 12 12:00:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:09 g10r9 last message repeated 3 times
May 12 12:01:13 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
May 12 12:01:19 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_request_upgrade_db: no reply bck -1
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Request to upgrade clusDb failed, rc 30
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Running in degraded mode on version 0x20200, will retry upgrade to version 0x30000 later...
May 12 12:01:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:01:40 g10r9 : 0:0:cluster2:ERROR: cluster_set_lock_on_rpc: no reply bck -1
May 12 12:01:49 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:02:00 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:03:35 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:03:57 g10r9 last message repeated 3 times
May 12 12:03:57 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:04:06 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:07 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:07 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:04:16 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:26 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:27 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:04:36 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:42 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:04:46 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:04:56 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:05:28 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:33 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
May 12 12:05:34 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:34 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''
May 12 12:05:41 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:05:50 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:05:54 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:05:59 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:11 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:15 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:20 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:24 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
May 12 12:06:29 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
May 12 12:06:33 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
May 12 12:06:40 g10r9 : 0:0:cluster2:ERROR: cluster_upgrade_db_runtime: ClusDb update complete; successfully updated version from 0x20200 to 0x30000
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178995860(554), sending to 10.2.9.11
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: cluster db version matches
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0
May 12 12:07:45 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: gns show : status[4]
May 12 12:08:05 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
May 12 12:08:05 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns show cifs all : status[11]
May 12 12:10:12 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
May 12 12:10:12 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: gns show cifs all : status[11]

# zcat /var/agile/messages.0.gz



May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: =
Started



May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 0, State Up



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 1, State Up



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 2, State Up



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 3, State Up



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, =
CPU 0, State Up



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 =
State Up, Msg ''



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 =
State Down, Msg ''



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 =
State Down, Msg ''



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 =
State Down, Msg ''



May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open =
failed



May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open =
failed



May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session =
closed



May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session =
closed



May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
5



May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
6



May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
8



May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
9



May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID =
7



May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name =
'g10r9', State Up, Msg ''



May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open =
success



May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open =
success



May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session =
closed



May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session =
closed



May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open =
success



May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open =
success



May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: Using 10.2.9.10 as my =
primary address



May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64



May 12 11:38:17 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'nis-vol', Id 0x000002000000006a, Event 'Create', State 'Down'



May 12 11:38:18 g10r9 : 1:2:sanm_ag:WARNING: 2: sanm_doRMCLocalClose: =
reopening session to sanmd



May 12 11:38:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'jumbo-vol', Id 0x0000020000000073, Event 'Create', State 'Down'



May 12 11:38:19 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'win-vol', Id 0x000002000000006d, Event 'Create', State 'Down'



May 12 11:38:19 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting =
version 3.0.0.0b : status[2]



May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'openldap-vol', Id 0x0000020000000071, Event 'Create', State 'Down'



May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local-nis-vol', Id 0x000002000000006e, Event 'Create', State 'Down'



May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'authdom-vol', Id 0x0000020000000072, Event 'Create', State 'Down'



May 12 11:38:21 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local', Id 0x0000020000000075, Event 'Create', State 'Down'



May 12 11:38:22 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting =
from /dev/wd0a : status[2]



May 12 11:38:22 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'vol_mgmt_512', Id 0x0000020000000066, Event 'Create', State 'Down'



May 12 11:38:23 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'LSI_E4600A_R46_g10r9_core', Id 0x0000020000000074, Event 'Create', =
State 'Down'



May 12 11:38:32 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt =
Port 0.0.0.0 PCC, State Up



May 12 11:38:39 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open =
failed



May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000001026a8a93, WWN 100000068d002a40, LUN 1, vendor 'IBM', model =
'ULTRIUM-TD3'



May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000299256570, WWN 100000068d002a40, LUN 2, vendor 'IBM', model =
'ULTRIUM-TD3'



May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000474cbbcf7, WWN 100000068d002a40, LUN 4, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000005020ee656, WWN 100000068d002a40, LUN 5, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000007527722f4, WWN 100000068d002a40, LUN 7, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000008c30a5552, WWN 100000068d002a40, LUN 8, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000ad8b479e3, WWN 100000068d002a40, LUN 10, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000b326471cc, WWN 100000068d002a40, LUN 11, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000d7663d86b, WWN 100000068d002a40, LUN 13, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000000e50bb2aad, WWN 100000068d002a40, LUN 14, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000010cecbc163, WWN 100000068d002a40, LUN 16, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000001101709690, WWN 100000068d002a40, LUN 17, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_closeSess : NCM session closed



May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open =
success



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 000000120c18b3b8, WWN 100000068d002a40, LUN 18, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000001483e7a0c2, WWN 100000068d002a40, LUN 20, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 000000154c5cf731, WWN 100000068d002a40, LUN 21, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 000000168d36b6dd, WWN 100000068d002a40, LUN 22, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000017428de12e, WWN 100000068d002a40, LUN 23, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 00000019533a4544, WWN 100000068d002a40, LUN 25, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: =
id 0000001ad886bb10, WWN 100000068d002a40, LUN 26, vendor 'IBM', model =
'ULTRIUM-TD2'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 00000000b26acfa2, WWN 100000068d002a40, LUN 0, vendor =
'ADIC', model 'Scalar-24'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 00000003632ebb68, WWN 100000068d002a40, LUN 3, vendor =
'STK', model 'L180'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 000000061ed3ca31, WWN 100000068d002a40, LUN 6, vendor =
'STK', model 'L180'



May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 000000093a855f24, WWN 100000068d002a40, LUN 9, vendor =
'STK', model 'L180'



May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 0000000ca1e082bf, WWN 100000068d002a40, LUN 12, vendor =
'STK', model 'L180'



May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 0000000ff746fa9b, WWN 100000068d002a40, LUN 15, vendor =
'STK', model 'L180'



May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 00000013719221e7, WWN 100000068d002a40, LUN 19, vendor =
'STK', model 'L180'



May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new =
media-changer: id 000000182eb0d71b, WWN 100000068d002a40, LUN 24, vendor =
'STK', model 'L180'



May 12 11:38:46 g10r9 : 0:0:auth_agent:NOTICE: main: auth-agent started



May 12 11:38:49 g10r9 : 1:2:sanm_ag:WARNING: 3: sanm_doRMCLocalClose: =
reopening session to sanmd



May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 10: vstack_vlink_create:294 : vl =
FO active 1 prefered 1



May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 11: vstack_vlink_create:329: =
vlink lp link op-state DOWN



May 12 11:38:52 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.1.1, Port bp0, State Up



May 12 11:38:55 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'jumbo-vol', Id 0x0000020000000073, Event 'Online', was offline for =
roughly 255 sec.



May 12 11:38:55 g10r9 : 0:0:sanm:NOTICE: SANM: ONStor Data Mirror =
(c)2006: Started



May 12 11:38:57 g10r9 : 1:0:bsdrl:WARNING: 12: route dst 0 mask 0 =
gateway 100030a fail to be added/deleted. error =3D 51



May 12 11:38:57 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP =
192.167.8.1, Port bp0, State Up



May 12 11:38:59 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'VS_MGMT_512', Id 1, State 'Up'



May 12 11:38:59 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[1]



May 12 11:39:00 g10r9 : 0:0:ea:ERROR: ea_procMovInReq[1586] : Volume =
(vol_mgmt_512) is locked for Online operation



May 12 11:39:01 g10r9 : 0:0:vsd:ERROR: vsd_mountVolProc[4941] : Failed =
to mount volume ID=3D0x20000000066. Operation not allowed.



May 12 11:39:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'vol_mgmt_512', Id 0x0000020000000066, Event 'Online', was offline for =
roughly 266 sec.



May 12 11:39:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'JUMBO_G10R9', Id 8, State 'Up'



May 12 11:39:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[8]



May 12 11:44:43 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: system version  : =
status[0]



May 12 11:44:48 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]



May 12 11:45:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: interface show ip -n =
g11r9 : status[0]



May 12 11:45:37 g10r9 : 0:0:nfxsh:NOTICE: cmd[3]: cluster sho gr : =
status[0]



May 12 11:45:41 g10r9 : 0:0:nfxsh:NOTICE: cmd[4]: cluster show summary  =
: status[0]



May 12 11:45:52 g10r9 : 0:0:nfxsh:NOTICE: cmd[5]: vsvr show all : =
status[0]



May 12 11:49:23 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog show config  : =
status[0]



May 12 11:50:35 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down



May 12 11:50:35 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt =
Port 0.0.0.0 PCC, State Up



May 12 11:50:51 g10r9 : 0:0:cluster2:WARNING: =
ClusterCtrl_ProcessTimerFunc: post node down hostname g11r9



May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name =
'g11r9', State Down, Msg ''



May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 =
g10r9 has matching (vlink#=3D1, vol#=3D2), num_vsvr 2



May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 =
g10r9 has matching (vlink#=3D1, vol#=3D1), num_vsvr 3



May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 =
g10r9 has matching (vlink#=3D1, vol#=3D1), num_vsvr 4



May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 =
g10r9 has matching (vlink#=3D1, vol#=3D1), num_vsvr 5



May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 =
g10r9 has matching (vlink#=3D1, vol#=3D1), num_vsvr 6



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'NIS_G10R9', Id 3, State 'Enabled'



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'WINDOWS_G10R9', Id 4, State 'Enabled'



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'LOCAL_NIS_G10R9', Id 5, State 'Enabled'



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'OPENLDAP_G10R9', Id 6, State 'Enabled'



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'AUTHDOM_G10R9', Id 7, State 'Enabled'



May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'NONETBIOS_G10R9', Id 10, State 'Enabled'



May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'nis-vol', Id 0x000002000000006a, Event 'Takeover', State 'Up'



May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local', Id 0x0000020000000075, Event 'Takeover', State 'Up'



May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual =
server 'NONETBIOS_G10R9', Id 10, State 'Up'



May 12 11:51:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for =
vsId[10]



May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'nis-vol', Id 0x000002000000006a, Event 'Online', was offline for =
roughly 69 sec.



May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'win-vol', Id 0x000002000000006d, Event 'Takeover', State 'Up'



May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local-nis-vol', Id 0x000002000000006e, Event 'Takeover', State 'Up'



May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'openldap-vol', Id 0x0000020000000071, Event 'Takeover', State 'Up'



May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'authdom-vol', Id 0x0000020000000072, Event 'Takeover', State 'Up'



May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local', Id 0x0000020000000075, Event 'Online', was offline for roughly =
52 sec.



May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'openldap-vol', Id 0x0000020000000071, Event 'Online', was offline for =
roughly 62 sec.



May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'win-vol', Id 0x000002000000006d, Event 'Online', was offline for =
roughly 66 sec.



May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'local-nis-vol', Id 0x000002000000006e, Event 'Online', was offline for =
roughly 59 sec.



May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name =
'authdom-vol', Id 0x0000020000000072, Event 'Online', was offline for =
roughly 55 sec.



May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.3.1, Port bp0, State Up

May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.6.1, Port bp0, State Up

May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.5.1, Port bp0, State Up

May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.1, Port bp0, State Up

May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.2, Port bp0, State Up

May 12 11:51:07 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded

May 12 11:51:07 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Up'

May 12 11:51:07 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[3]

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Up'

May 12 11:51:08 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[6]

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.1, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.2, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.3, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.4, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.5, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.6, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.7, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.8, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.9, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.10, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.11, Port bp0, State Up

May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.12, Port bp0, State Up

May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.13, Port bp0, State Up

May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.14, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.15, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.16, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.17, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.18, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.19, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.20, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.21, Port bp0, State Up

May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.22, Port bp0, State Up

May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.23, Port bp0, State Up

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.24, Port bp0, State Up

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.25, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.26, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.27, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.28, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.29, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.30, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.31, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.32, Port bp0, State Up

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Up'

May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[5]

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: Vsvr RMC Error -3

May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Up'

May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[7]

May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts

May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1

May 12 11:51:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Up'

May 12 11:51:18 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[4]

May 12 11:51:43 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded

May 12 11:51:46 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded

May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts

May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1

May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''

May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11

May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178991276(64), sending to 10.2.9.11

May 12 11:54:01 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0

May 12 11:54:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]

May 12 11:58:08 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog find cluster2 : status[0]

May 12 11:59:10 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11

May 12 11:59:12 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64

May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1

May 12 11:59:32 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns add root cifs gnsroot : status[11]  <-- tried this before the DB was upgraded

May 12 12:00:03 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon

May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over

#








------_=_NextPart_001_01C7971B.39D404BE
Content-Type: text/html;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" =
xmlns:o=3D"urn:schemas-microsoft-com:office:office" =
xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:st1=3D"urn:schemas-microsoft-com:office:smarttags" =
xmlns=3D"http://www.w3.org/TR/REC-html40">

<head>
<meta http-equiv=3DContent-Type content=3D"text/html; =
charset=3Diso-8859-1">
<meta name=3DGenerator content=3D"Microsoft Word 11 (filtered medium)">
<!--[if !mso]>
<style>
v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style>
<![endif]-->
<title>RE: Test upgrade fom 222 to 30 prior to MD upgrade, still seeing =
cluster
errors</title>
<o:SmartTagType =
namespaceuri=3D"urn:schemas-microsoft-com:office:smarttags"
 name=3D"PlaceType"/>
<o:SmartTagType =
namespaceuri=3D"urn:schemas-microsoft-com:office:smarttags"
 name=3D"PlaceName"/>
<o:SmartTagType =
namespaceuri=3D"urn:schemas-microsoft-com:office:smarttags"
 name=3D"place"/>
<!--[if !mso]>
<style>
st1\:*{behavior:url(#default#ieooui) }
</style>
<![endif]-->
<style>
<!--
 /* Font Definitions */
 @font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;}
 /* Style Definitions */
 p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:12.0pt;
	font-family:"Times New Roman";}
a:link, span.MsoHyperlink
	{color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{color:navy;
	text-decoration:underline;}
p
	{mso-margin-top-alt:auto;
	margin-right:0in;
	mso-margin-bottom-alt:auto;
	margin-left:0in;
	font-size:12.0pt;
	font-family:"Times New Roman";}
span.EmailStyle18
	{mso-style-type:personal-reply;
	font-family:Arial;
	color:navy;}
@page Section1
	{size:8.5in 11.0in;
	margin:1.0in 1.25in 1.0in 1.25in;}
div.Section1
	{page:Section1;}
-->
</style>

</head>

<body lang=3DEN-US link=3Dblue vlink=3Dnavy>

<div class=3DSection1>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'>Sandrine, thanks for the info.=A0 I =
power
cycled and rebooted on the 222 flash.<o:p></o:p></span></font></p>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'><o:p>&nbsp;</o:p></span></font></p>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'>Raj, I feel your pain, and =
I&#8217;m only
seeing the tip of the iceberg.<o:p></o:p></span></font></p>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'><o:p>&nbsp;</o:p></span></font></p>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'>Andy, you guys have a =
bug.<o:p></o:p></span></font></p>

<p class=3DMsoNormal><font size=3D2 color=3Dnavy face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:navy'><o:p>&nbsp;</o:p></span></font></p>

<div>

<div class=3DMsoNormal align=3Dcenter style=3D'text-align:center'><font =
size=3D3
face=3D"Times New Roman"><span style=3D'font-size:12.0pt'>

<hr size=3D2 width=3D"100%" align=3Dcenter tabindex=3D-1>

</span></font></div>

<p class=3DMsoNormal><b><font size=3D2 face=3DTahoma><span =
style=3D'font-size:10.0pt;
font-family:Tahoma;font-weight:bold'>From:</span></font></b><font =
size=3D2
face=3DTahoma><span style=3D'font-size:10.0pt;font-family:Tahoma'> Raj =
Kumar <br>
<b><span style=3D'font-weight:bold'>Sent:</span></b> Monday, May 14, =
2007 10:00
PM<br>
<b><span style=3D'font-weight:bold'>To:</span></b> Sandrine Boulanger; =
Chris
Vandever; Brian DeForest<br>
<b><span style=3D'font-weight:bold'>Cc:</span></b> Andy Sharp<br>
<b><span style=3D'font-weight:bold'>Subject:</span></b> RE: Test upgrade =
fom 222
to 30 prior to MD upgrade, still seeing cluster =
errors</span></font><o:p></o:p></p>

</div>

<p class=3DMsoNormal><font size=3D3 face=3D"Times New Roman"><span =
style=3D'font-size:
12.0pt'><o:p>&nbsp;</o:p></span></font></p>

<div id=3DidOWAReplyText67955>

<div>

<p class=3DMsoNormal><font size=3D2 color=3Dblack face=3DArial><span =
style=3D'font-size:
10.0pt;font-family:Arial;color:black'>Chris,</span></font><o:p></o:p></p>=


</div>

<div>

<p class=3DMsoNormal><font size=3D3 face=3D"Times New Roman"><span =
style=3D'font-size:
12.0pt'>&nbsp;<o:p></o:p></span></font></p>

</div>

<div>

<p class=3DMsoNormal><font size=3D2 face=3DArial><span =
style=3D'font-size:10.0pt;
font-family:Arial'>Welcome to our hell :)</span></font><o:p></o:p></p>

</div>

</div>

<div>

<p class=3DMsoNormal><font size=3D3 face=3D"Times New Roman"><span =
style=3D'font-size:
12.0pt'><o:p>&nbsp;</o:p></span></font></p>

<div class=3DMsoNormal align=3Dcenter style=3D'text-align:center'><font =
size=3D3
face=3D"Times New Roman"><span style=3D'font-size:12.0pt'>

<hr size=3D2 width=3D"100%" align=3Dcenter tabIndex=3D-1>

</span></font></div>

<p class=3DMsoNormal style=3D'margin-bottom:12.0pt'><b><font size=3D2 =
face=3DTahoma><span
style=3D'font-size:10.0pt;font-family:Tahoma;font-weight:bold'>From:</spa=
n></font></b><font
size=3D2 face=3DTahoma><span =
style=3D'font-size:10.0pt;font-family:Tahoma'> Sandrine
Boulanger<br>
<b><span style=3D'font-weight:bold'>Sent:</span></b> Mon 5/14/2007 9:57 =
PM<br>
<b><span style=3D'font-weight:bold'>To:</span></b> Chris Vandever; Brian
DeForest; Raj Kumar<br>
<b><span style=3D'font-weight:bold'>Cc:</span></b> Andy Sharp<br>
<b><span style=3D'font-weight:bold'>Subject:</span></b> RE: Test upgrade =
fom 222
to 30 prior to MD upgrade, still seeing cluster =
errors</span></font><o:p></o:p></p>

</div>

<div>

<p style=3D'margin-bottom:12.0pt'><font size=3D2 face=3D"Times New =
Roman"><span
style=3D'font-size:10.0pt'>Weird, I ran into this earlier when I forgot =
to copy
the latest rc.initial, but this time we did it.<br>
You can either power off the filer and swap flashes, then power up, or =
power
cycle it when you have the ssc console open, rush back into your cube, =
^E on
ssc console when the ....... starts, get to ssc-prom, env view to check
boot_dev variable, env set boot_dev wd1a or wd0a depending on what the =
current
is, then autoload. This way you'll be back to the 222 flash and we can =
mount
the other partition to check /etc/rc.initial (verify we copied it at the =
right
place).<br>
Other option is to go through first install script just to specify filer =
name
and sc1 ip (maybe also default route 10.2.0.1), then get to nfxsh and do =
a
system reboot -s -y to go back to 2.2.2. This way the partitions are =
cleanly
unmounted, which is not the case when you power cycle (and then you might =
need
to use fsck).<br>
<br>
<br>
-----Original Message-----<br>
From: Chris Vandever<br>
Sent: Mon 5/14/2007 7:06 PM<br>
To: Sandrine Boulanger; Brian DeForest; Raj Kumar<br>
Cc: Andy Sharp<br>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing
cluster errors<br>
<br>
I followed Sandrine's instructions to set up the R2.2.2 flashes and then =
the
R3.0 flashes, including typing 'y' to get the config copied to the =
secondary
(R3.0) flash:<br>
<br>
<br>
<br>
/mnt/R3.0.0.0/R3.0.0.0-050907/nfx-tree/Build/ch/dbg/Release/etc<br>
<br>
Do you wish for this flash to automatically copy<br>
<br>
the configuration from the other flash at boot time<br>
<br>
rather than running the initial config menu? [N] y<br>
<br>
I'll take y as a 'Yes' and set the new flash to autoupgrade.<br>
<br>
eng28#<br>
<br>
<br>
<br>
However, the system in its infinite wisdom booted into the initial =
config
tool.&nbsp; Sounds like a bug to me.<br>
<br>
<br>
<br>
ChrisV<br>
<br>
________________________________<br>
<br>
From: Sandrine Boulanger<br>
Sent: Monday, May 14, 2007 9:52 AM<br>
To: Sandrine Boulanger; Chris Vandever; Brian DeForest; Raj Kumar<br>
Cc: Andy Sharp<br>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing
cluster errors<br>
<br>
<br>
<br>
We lost the evidence again, the filer rebooted on its own yesterday, =
probably
due to resource or memory leak, cluster errors are filling elog:<br>
<br>
<br>
<br>
.<br>
<br>
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot get
cluster rec, rcode 30<br>
<br>
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
.<br>
<br>
May 13 08:00:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
May 13 08:17:44 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: =
Started<br>
<br>
May 13 08:17:46 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: <st1:place =
w:st=3D"on"><st1:PlaceName
 w:st=3D"on">Process-EVENT</st1:PlaceName> <st1:PlaceType =
w:st=3D"on">Port</st1:PlaceType></st1:place>:
fp1.0 State Up, Msg ''<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 0,
State Up<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: <st1:place =
w:st=3D"on"><st1:PlaceName
 w:st=3D"on">Process-EVENT</st1:PlaceName> <st1:PlaceType =
w:st=3D"on">Port</st1:PlaceType></st1:place>:
fp1.1 State Down, Msg ''<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 1,
State Up<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: <st1:place =
w:st=3D"on"><st1:PlaceName
 w:st=3D"on">Process-EVENT</st1:PlaceName> <st1:PlaceType =
w:st=3D"on">Port</st1:PlaceType></st1:place>:
fp1.2 State Down, Msg ''<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 2,
State Up<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: <st1:place =
w:st=3D"on"><st1:PlaceName
 w:st=3D"on">Process-EVENT</st1:PlaceName> <st1:PlaceType =
w:st=3D"on">Port</st1:PlaceType></st1:place>:
fp1.3 State Down, Msg ''<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, =
CPU 3,
State Up<br>
<br>
May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, =
CPU 0,
State Up<br>
<br>
<br>
<br>
-----Original Message-----<br>
From: Sandrine Boulanger<br>
Sent: Saturday, May 12, 2007 6:14 PM<br>
To: Chris Vandever; Brian DeForest; Raj Kumar<br>
Cc: Andy Sharp<br>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing
cluster errors<br>
<br>
<br>
<br>
Procedure:<br>
<br>
<br>
<br>
Created 2 new flashes with 2.2.2 version for g10r9 and g11r9.<br>
<br>
Clustered the 2 filers<br>
<br>
Created configuration (from gateway to volume level)<br>
<br>
From the 2 filers active flash, mount sub#16 Release directory<br>
<br>
Copied the new flash_install.sh under Release/etc<br>
<br>
Started the flash_install.sh<br>
<br>
Answered Y at the end to trigger later the config files copy once the =
3.0 flash
is booting<br>
<br>
Copied the new rc.initial to /etc on the new 3.0 flash<br>
<br>
system reboot -s -y on g10r9<br>
<br>
g10r9 boots, get the config files from the 220 flash through rc.initial, =
then
comes up with 3.0 version, cluster db still at 220. g11r9 still running =
version
220. Cluster commands are fine<br>
<br>
system reboot -s -y on g11r9<br>
<br>
g10r9 boots, get the config files from the 220 flash through rc.initial, =
then
comes up with 3.0 version, cluster db still at 220. Both filers are now =
at
version 3.0 with cluster db at 2220. Starting to see some cluster errors =
in
elog on g10r9<br>
<br>
5 minutes later cluster db is upgraded to 300. Still getting cluster =
errors on
g10r9<br>
<br>
<br>
<br>
<br>
<br>
-----Original Message-----<br>
<br>
From: Chris Vandever<br>
<br>
Sent: Sat 5/12/2007 5:36 PM<br>
<br>
To: Sandrine Boulanger; Brian DeForest; Raj Kumar<br>
<br>
Cc: Andy Sharp<br>
<br>
Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still =
seeing
cluster errors<br>
<br>
<br>
<br>
I'll need the exact procedure that's being used for the upgrade, and =
access to
any scripts, etc.&nbsp; Clearly, the clustering code is confused since =
the
clusDb shows it was successfully upgraded, yet the clustering code is =
still
trying (unsuccessfully) to upgrade it.<br>
<br>
<br>
<br>
I won't be able to look at it this evening, but I will try to do so =
tomorrow.<br>
<br>
<br>
<br>
ChrisV<br>
<br>
<br>
<br>
________________________________<br>
<br>
<br>
<br>
From: Sandrine Boulanger<br>
<br>
Sent: Sat 5/12/2007 12:33 PM<br>
<br>
To: Chris Vandever; Brian DeForest; Raj Kumar<br>
<br>
Cc: Andy Sharp<br>
<br>
Subject: Test upgrade fom 222 to 30 prior to MD upgrade, still seeing =
cluster
errors<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
I tested Andy's fix for the copy of the config files. I used the files =
as
standalone since we don't have a new build. It worked fine. Prior to the =
DBs
being upgraded from 220 to 30, all commands were running OK. I tried to =
add a
gns root and it refused because the DBs were not upgraded yet, which is =
good.<br>
<br>
<br>
<br>
Then the DBs were upgraded to 30, and at that time I saw the same type =
of
errors we saw last Thursday: cluster show summary on g10r9, or gns show =
cifs
was failing with cluster2 errors, see elog below. The commands are OK on =
g11r9.<br>
<br>
<br>
<br>
G10r9 was the first to be rebooted on 3.0 flash, g11r9 was second.<br>
<br>
<br>
<br>
G10r9 ssc ip is 10.2.9.10<br>
<br>
<br>
<br>
I'll leave it in this state, let me know if I can help debugging.<br>
<br>
<br>
<br>
# dbtools show /onstor/conf/cluster.db.DB0 -c<br>
<br>
<br>
<br>
Cluster DB Header<br>
<br>
<br>
<br>
------------------------------------<br>
<br>
<br>
<br>
dbHdr.version =3D 0x30000<br>
<br>
<br>
<br>
dbHdr.signature =3D 0x434c5553<br>
<br>
<br>
<br>
dbHdr.blockSize =3D 256<br>
<br>
<br>
<br>
dbHdr.freeOffset =3D 0<br>
<br>
<br>
<br>
dbHdr.endOffset =3D 2336000<br>
<br>
<br>
<br>
dbHdr.recCount =3D 0<br>
<br>
<br>
<br>
dbHdr.hashTblSize =3D 4096<br>
<br>
<br>
<br>
dbHdr.hashTblOffset =3D 256<br>
<br>
<br>
<br>
dbHdr.DebugMode =3D 0<br>
<br>
<br>
<br>
# nfxsh<br>
<br>
<br>
<br>
Welcome to the ONStor NAS Gateway.<br>
<br>
<br>
<br>
g10r9 diag&gt; gns show<br>
<br>
<br>
<br>
% Command incomplete.<br>
<br>
<br>
<br>
g10r9 diag&gt; gns show<br>
<br>
<br>
<br>
&nbsp; cifs&nbsp; Show GNS CIFS<br>
<br>
<br>
<br>
g10r9 diag&gt; gns show cifs<br>
<br>
<br>
<br>
&nbsp; ROOTNAME[\Path]&nbsp; Root and optional path<br>
<br>
<br>
<br>
&nbsp; all<br>
<br>
<br>
<br>
g10r9 diag&gt; gns show cifs all<br>
<br>
<br>
<br>
This command is unavailable until after the cluster database has been =
upgraded.<br>
<br>
<br>
<br>
% Command failure.<br>
<br>
<br>
<br>
g10r9 diag&gt; gns show cifs all<br>
<br>
<br>
<br>
This command is unavailable until after the cluster database has been =
upgraded.<br>
<br>
<br>
<br>
% Command failure.<br>
<br>
<br>
<br>
g10r9 diag&gt; cluster show summary<br>
<br>
<br>
<br>
Cluster Name: g10r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Cluster
State:&nbsp;&nbsp; On<br>
<br>
<br>
<br>
------------------------------------------------------<br>
<br>
<br>
<br>
NAS Gateways&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
IP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;
State&nbsp;&nbsp; PCC<br>
<br>
<br>
<br>
------------------------------------------------------<br>
<br>
<br>
<br>
g10r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;
10.2.9.10&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
UP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
YES<br>
<br>
<br>
<br>
g11r9&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;
10.2.9.11&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
UP&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
NO<br>
<br>
<br>
<br>
Getting cluster information failed<br>
<br>
<br>
<br>
% Command failure.<br>
<br>
<br>
<br>
g10r9 diag&gt;<br>
<br>
<br>
<br>
# cat /var/agile/messages<br>
<br>
<br>
<br>
May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over<br>
<br>
<br>
<br>
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
<br>
<br>
May 12 12:00:38 g10r9 : 0:0:spm:ERROR: failed to get filer list from =
cluster
daemon<br>
<br>
<br>
<br>
May 12 12:00:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:01:09 g10r9 last message repeated 3 times<br>
<br>
<br>
<br>
May 12 12:01:13 g10r9 : 0:0:spm:ERROR: failed to get filer list from =
cluster
daemon<br>
<br>
<br>
<br>
May 12 12:01:19 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_request_upgrade_db: =
no
reply bck -1<br>
<br>
<br>
<br>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: =
ClusterCtrl_ProcessTimerFunc:
Request to upgrade clusDb failed, rc 30<br>
<br>
<br>
<br>
May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: =
ClusterCtrl_ProcessTimerFunc:
Running in degraded mode on version 0x20200, will retry upgrade to =
version
0x30000 later...<br>
<br>
<br>
<br>
May 12 12:01:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:01:40 g10r9 : 0:0:cluster2:ERROR: cluster_set_lock_on_rpc: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:01:49 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:02:00 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: =
cannot
get cluster rec, rcode 30<br>
<br>
<br>
<br>
May 12 12:03:35 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:03:57 g10r9 last message repeated 3 times<br>
<br>
<br>
<br>
May 12 12:03:57 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get =
vsId
list for node g10r9<br>
<br>
<br>
<br>
May 12 12:04:06 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:07 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:07 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get =
vsId
list for node g10r9<br>
<br>
<br>
<br>
May 12 12:04:16 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:26 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:27 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.2.9.11<br>
<br>
<br>
<br>
May 12 12:04:36 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:42 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.12.9.11<br>
<br>
<br>
<br>
May 12 12:04:46 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:04:56 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no =
reply
bck -1<br>
<br>
<br>
<br>
May 12 12:05:28 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.2.9.11<br>
<br>
<br>
<br>
May 12 12:05:33 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down<br>
<br>
<br>
<br>
May 12 12:05:34 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.2.9.11<br>
<br>
<br>
<br>
May 12 12:05:34 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name =
'g11r9',
State Up, Msg ''<br>
<br>
<br>
<br>
May 12 12:05:41 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.12.9.11<br>
<br>
<br>
<br>
May 12 12:05:50 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is =
back up:
will be contacted through 10.2.9.11<br>
<br>
<br>
<br>
May 12 12:05:54 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11<br>
May 12 12:05:59 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11<br>
May 12 12:06:11 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11<br>
May 12 12:06:15 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11<br>
May 12 12:06:20 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11<br>
May 12 12:06:24 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11<br>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1<br>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30<br>
May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1<br>
May 12 12:06:29 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9<br>
May 12 12:06:33 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11<br>
May 12 12:06:40 g10r9 : 0:0:cluster2:ERROR: cluster_upgrade_db_runtime: ClusDb update complete; successfully updated version from 0x20200 to 0x30000<br>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178995860(554), sending to 10.2.9.11<br>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: cluster db version matches<br>
May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0<br>
May 12 12:07:45 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: gns show : status[4]<br>
May 12 12:08:05 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1<br>
May 12 12:08:05 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns show cifs all : status[11]<br>
May 12 12:10:12 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1<br>
May 12 12:10:12 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: gns show cifs all : status[11]<br>
<br>
# zcat /var/agile/messages.0.gz<br>
<br>
May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started<br>
May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''<br>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open failed<br>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open failed<br>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed<br>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed<br>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 5<br>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 6<br>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 8<br>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 9<br>
May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 7<br>
May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g10r9', State Up, Msg ''<br>
May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success<br>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success<br>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed<br>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed<br>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success<br>
May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success<br>
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: Using 10.2.9.10 as my primary address<br>
May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64<br>
May 12 11:38:17 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Create', State 'Down'<br>
May 12 11:38:18 g10r9 : 1:2:sanm_ag:WARNING: 2: sanm_doRMCLocalClose: reopening session to sanmd<br>
May 12 11:38:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Create', State 'Down'<br>
May 12 11:38:19 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Create', State 'Down'<br>
May 12 11:38:19 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -&gt; rc.agile: booting version 3.0.0.0b : status[2]<br>
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Create', State 'Down'<br>
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Create', State 'Down'<br>
May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Create', State 'Down'<br>
May 12 11:38:21 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Create', State 'Down'<br>
May 12 11:38:22 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -&gt; rc.agile: booting from /dev/wd0a : status[2]<br>
May 12 11:38:22 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Create', State 'Down'<br>
May 12 11:38:23 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'LSI_E4600A_R46_g10r9_core', Id 0x0000020000000074, Event 'Create', State 'Down'<br>
May 12 11:38:32 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up<br>
May 12 11:38:39 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open failed<br>
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000001026a8a93, WWN 100000068d002a40, LUN 1, vendor 'IBM', model 'ULTRIUM-TD3'<br>
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000299256570, WWN 100000068d002a40, LUN 2, vendor 'IBM', model 'ULTRIUM-TD3'<br>
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000474cbbcf7, WWN 100000068d002a40, LUN 4, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000005020ee656, WWN 100000068d002a40, LUN 5, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000007527722f4, WWN 100000068d002a40, LUN 7, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000008c30a5552, WWN 100000068d002a40, LUN 8, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000ad8b479e3, WWN 100000068d002a40, LUN 10, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000b326471cc, WWN 100000068d002a40, LUN 11, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000d7663d86b, WWN 100000068d002a40, LUN 13, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000e50bb2aad, WWN 100000068d002a40, LUN 14, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000010cecbc163, WWN 100000068d002a40, LUN 16, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001101709690, WWN 100000068d002a40, LUN 17, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_closeSess : NCM session closed<br>
May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open success<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000120c18b3b8, WWN 100000068d002a40, LUN 18, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001483e7a0c2, WWN 100000068d002a40, LUN 20, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000154c5cf731, WWN 100000068d002a40, LUN 21, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000168d36b6dd, WWN 100000068d002a40, LUN 22, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000017428de12e, WWN 100000068d002a40, LUN 23, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000019533a4544, WWN 100000068d002a40, LUN 25, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001ad886bb10, WWN 100000068d002a40, LUN 26, vendor 'IBM', model 'ULTRIUM-TD2'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000000b26acfa2, WWN 100000068d002a40, LUN 0, vendor 'ADIC', model 'Scalar-24'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000003632ebb68, WWN 100000068d002a40, LUN 3, vendor 'STK', model 'L180'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000061ed3ca31, WWN 100000068d002a40, LUN 6, vendor 'STK', model 'L180'<br>
May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000093a855f24, WWN 100000068d002a40, LUN 9, vendor 'STK', model 'L180'<br>
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ca1e082bf, WWN 100000068d002a40, LUN 12, vendor 'STK', model 'L180'<br>
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ff746fa9b, WWN 100000068d002a40, LUN 15, vendor 'STK', model 'L180'<br>
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000013719221e7, WWN 100000068d002a40, LUN 19, vendor 'STK', model 'L180'<br>
May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000182eb0d71b, WWN 100000068d002a40, LUN 24, vendor 'STK', model 'L180'<br>
May 12 11:38:46 g10r9 : 0:0:auth_agent:NOTICE: main: auth-agent started<br>
May 12 11:38:49 g10r9 : 1:2:sanm_ag:WARNING: 3: sanm_doRMCLocalClose: reopening session to sanmd<br>
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 10: vstack_vlink_create:294 : vl FO active 1 prefered 1<br>
May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 11: vstack_vlink_create:329: vlink lp link op-state DOWN<br>
May 12 11:38:52 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.1.1, Port bp0, State Up<br>
May 12 11:38:55 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Online', was offline for roughly 255 sec.<br>
May 12 11:38:55 g10r9 : 0:0:sanm:NOTICE: SANM: ONStor Data Mirror (c)2006: Started<br>
May 12 11:38:57 g10r9 : 1:0:bsdrl:WARNING: 12: route dst 0 mask 0 gateway 100030a fail to be added/deleted. error = 51<br>
May 12 11:38:57 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.8.1, Port bp0, State Up<br>
May 12 11:38:59 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'VS_MGMT_512', Id 1, State 'Up'<br>
May 12 11:38:59 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[1]<br>
May 12 11:39:00 g10r9 : 0:0:ea:ERROR: ea_procMovInReq[1586] : Volume (vol_mgmt_512) is locked for Online operation<br>
May 12 11:39:01 g10r9 : 0:0:vsd:ERROR: vsd_mountVolProc[4941] : Failed to mount volume ID=0x20000000066. Operation not allowed.<br>
May 12 11:39:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Online', was offline for roughly 266 sec.<br>
May 12 11:39:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'JUMBO_G10R9', Id 8, State 'Up'<br>
May 12 11:39:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[8]<br>
May 12 11:44:43 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: system version : status[0]<br>
May 12 11:44:48 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]<br>
May 12 11:45:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: interface show ip -n g11r9 : status[0]<br>
May 12 11:45:37 g10r9 : 0:0:nfxsh:NOTICE: cmd[3]: cluster sho gr : status[0]<br>
May 12 11:45:41 g10r9 : 0:0:nfxsh:NOTICE: cmd[4]: cluster show summary : status[0]<br>
May 12 11:45:52 g10r9 : 0:0:nfxsh:NOTICE: cmd[5]: vsvr show all : status[0]<br>
May 12 11:49:23 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog show config : status[0]<br>
May 12 11:50:35 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down<br>
May 12 11:50:35 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up<br>
May 12 11:50:51 g10r9 : 0:0:cluster2:WARNING: ClusterCtrl_ProcessTimerFunc: post node down hostname g11r9<br>
May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name 'g11r9', State Down, Msg ''<br>
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=2), num_vsvr 2<br>
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 3<br>
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 4<br>
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 5<br>
May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 6<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Enabled'<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Enabled'<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Enabled'<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Enabled'<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Enabled'<br>
May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Enabled'<br>
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Takeover', State 'Up'<br>
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Takeover', State 'Up'<br>
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NONETBIOS_G10R9', Id 10, State 'Up'<br>
May 12 11:51:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[10]<br>
May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Online', was offline for roughly 69 sec.<br>
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Takeover', State 'Up'<br>
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Takeover', State 'Up'<br>
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Takeover', State 'Up'<br>
May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Takeover', State 'Up'<br>
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Online', was offline for roughly 52 sec.<br>
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Online', was offline for roughly 62 sec.<br>
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Online', was offline for roughly 66 sec.<br>
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Online', was offline for roughly 59 sec.<br>
May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Online', was offline for roughly 55 sec.<br>
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.3.1, Port bp0, State Up<br>
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.6.1, Port bp0, State Up<br>
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.5.1, Port bp0, State Up<br>
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.1, Port bp0, State Up<br>
May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.4.2, Port bp0, State Up<br>
May 12 11:51:07 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded<br>
May 12 11:51:07 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Up'<br>
May 12 11:51:07 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[3]<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Up'<br>
May 12 11:51:08 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[6]<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.1, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.2, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.3, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.4, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.5, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.6, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.7, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.8, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.9, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.10, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.11, Port bp0, State Up<br>
May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.12, Port bp0, State Up<br>
May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.13, Port bp0, State Up<br>
May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.14, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.15, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.16, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.17, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.18, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.19, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.20, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.21, Port bp0, State Up<br>
May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.22, Port bp0, State Up<br>
May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.23, Port bp0, State Up<br>
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.24, Port bp0, State Up<br>
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.25, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.26, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.27, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.28, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.29, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.30, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.31, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.32, Port bp0, State Up<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Up'<br>
May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[5]<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: Vsvr RMC Error -3<br>
May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Up'<br>
May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[7]<br>
May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts<br>
May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1<br>
May 12 11:51:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Up'<br>
May 12 11:51:18 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[4]<br>
May 12 11:51:43 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded<br>
May 12 11:51:46 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded<br>
May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts<br>
May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1<br>
May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''<br>
May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11<br>
May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178991276(64), sending to 10.2.9.11<br>
May 12 11:54:01 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0<br>
May 12 11:54:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]<br>
May 12 11:58:08 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog find cluster2 : status[0]<br>
May 12 11:59:10 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11<br>
May 12 11:59:12 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64<br>
May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1<br>
May 12 11:59:32 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns add root cifs gnsroot : status[11] &lt;- tried this before the DB was upgraded<br>
May 12 12:00:03 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon<br>
May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over<br>
#<br>
<br>
</span></font><o:p></o:p></p>

</div>

</div>

</body>

</html>

------_=_NextPart_001_01C7971B.39D404BE--
