X-MimeOLE: Produced By Microsoft Exchange V6.5
Received: by onstor-exch02.onstor.net 
	id <01C79729.0FD0012A@onstor-exch02.onstor.net>; Tue, 15 May 2007 12:41:41 -0700
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Content-class: urn:content-classes:message
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
Date: Tue, 15 May 2007 12:41:41 -0700
Message-ID: <BB375AF679D4A34E9CA8DFA650E2B04E02679CD0@onstor-exch02.onstor.net>
In-Reply-To: <BB375AF679D4A34E9CA8DFA650E2B04E0138C68B@onstor-exch02.onstor.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors
Thread-Index: AceXGDfzw9vgwNSmRXWByjjNMK+hwwACcKmwAAG9VGA=
From: "Sandrine Boulanger" <sandrine.boulanger@onstor.com>
To: "Chris Vandever" <chris.vandever@onstor.com>,
	"Andy Sharp" <andy.sharp@onstor.com>,
	"Brian DeForest" <brian.deforest@onstor.com>
Cc: "Raj Kumar" <raj.kumar@onstor.com>

With sub17, we don't need to manually copy the files. Just use flash_install.sh from the mounted Release directory of sub17, answer Y at the end, and run reboot -s.
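For anyone following along, the sub17 path above would look roughly like the console session below. This is an illustrative sketch: the mount source, mount point, and prompt are hypothetical; only flash_install.sh, the Y answer, and reboot -s come from this thread (the question wording is taken from Chris's transcript later in the thread).

```
# Mount the sub17 Release directory (NFS source shown here is hypothetical)
eng28# mount buildhost:/builds/sub17/Release /mnt/sub17
eng28# cd /mnt/sub17/etc
eng28# ./flash_install.sh
...
Do you wish for this flash to automatically copy
the configuration from the other flash at boot time
rather than running the initial config menu? [N] y
eng28# reboot -s
```

No manual copy of rc.initial or the config files should be needed on this path.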

-----Original Message-----
From: Chris Vandever
Sent: Tuesday, May 15, 2007 11:53 AM
To: Andy Sharp; Brian DeForest
Cc: Sandrine Boulanger; Raj Kumar
Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

Clearly, there's confusion as to the procedure that should be followed and what CAN be followed on sub16.

Where EXACTLY is the write-up on the procedure?  Now that sub17 is available, is it a better version to use?  Will the write-up match sub17?

I'm stalled until I get the proposed procedure from dev.

ChrisV

-----Original Message-----
From: Andy Sharp
Sent: Tuesday, May 15, 2007 10:41 AM
To: Raj Kumar
Cc: Sandrine Boulanger; Chris Vandever; Brian DeForest
Subject: Re: Test upgrade from 222 to 30 prior to MD upgrade, still seeing cluster errors

Sorry Raj, but I owned that hell a long time before you did.  Or did I?

If I may ask a stupid question, why are you guys hand-copying
rc.initial and so forth?  It just seems error-prone.  But there's
probably some good reason that I'm not aware of.

In case it's of any use, there is another, slightly different way of
running flash_install.sh.  If you are booted and running 2.2.2.2.2.2,
you can run flash_install.sh to create a 3.0 flash, and at the end
specify N to the first question and Y to the second question.  It will
copy the configuration of the flash you are booted off of to the new
flash.  In that case, you don't need the special rc.initial.  You can't
do this, of course, if you are intending to take the flash to a
different machine.
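Sketched as a console session, that alternative flow would look something like the following. The exact wording of the two questions is not in this thread, so the prompts below are placeholders; the N/Y answers and the copy-from-the-booted-flash behavior are as described above.

```
# Booted and running 2.2.2; build a 3.0 flash in the other slot
eng28# ./flash_install.sh
...
<first question>   [Y] n    # placeholder wording; answer N
<second question>  [N] y    # placeholder wording; answer Y -- copies the
                            # config of the currently booted flash to the
                            # new flash, so no special rc.initial is needed
```

Again, this only works if the new flash stays in the same machine.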

Cheers,

a

On Mon, 14 May 2007 21:59:50 -0700 "Raj Kumar" <raj.kumar@onstor.com>
wrote:

> Chris,
> =20
> Welcome to our hell :)
>=20
> ________________________________
>=20
> From: Sandrine Boulanger
> Sent: Mon 5/14/2007 9:57 PM
> To: Chris Vandever; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: RE: Test upgrade fom 222 to 30 prior to MD upgrade, still
> seeing cluster errors
>=20
>=20
>=20
> Weird, I ran into this earlier when I forgot to copy the latest
> rc.initial, but this time we did it. You can either power off the
> filer and swap flashes, then power up, or power cycle it when you
> have the ssc console open, rush back into your cube, ^E on ssc
> console when the ....... starts, get to ssc-prom, env view to check
> boot_dev variable, env set boot_dev wd1a or wd0a depending on what
> the current is, then autoload. This way you'll be back to the 222
> flash and we can mount the other partition to check /etc/rc.initial
> (verify we copied it at the right place. Other option is to go
> through first install script just to specify filer name and sc1 ip
> (maybe also default route 10.2.0.1), then get to nfxsh and do a
> system reboot -s -y to go back to 2.2.2. This way the partitions are
> cleanly unmunted, which is not the case when you power cycle (and
> then you might need to use fsck).
>=20
>=20
> -----Original Message-----
> From: Chris Vandever
> Sent: Mon 5/14/2007 7:06 PM
> To: Sandrine Boulanger; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still
> seeing cluster errors
>
> I followed Sandrine's instructions to set up the R2.2.2 flashes and
> then the R3.0 flashes, including typing 'y' to get the config copied
> to the secondary (R3.0) flash:
>
> /mnt/R3.0.0.0/R3.0.0.0-050907/nfx-tree/Build/ch/dbg/Release/etc
>
> Do you wish for this flash to automatically copy
> the configuration from the other flash at boot time
> rather than running the initial config menu? [N] y
> I'll take y as a 'Yes' and set the new flash to autoupgrade.
> eng28#
>
> However, the system in its infinite wisdom booted into the initial
> config tool.  Sounds like a bug to me.
>
> ChrisV
>
> ________________________________
>
> From: Sandrine Boulanger
> Sent: Monday, May 14, 2007 9:52 AM
> To: Sandrine Boulanger; Chris Vandever; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still
> seeing cluster errors
>
> We lost the evidence again; the filer rebooted on its own yesterday,
> probably due to a resource or memory leak. Cluster errors are filling
> elog:
>
> .
> May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 04:00:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 04:03:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 04:06:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 04:09:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> .
> May 13 08:00:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 08:03:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 08:06:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 08:09:23 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 13 08:12:22 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 13 08:17:44 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started
> May 13 08:17:46 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up
> May 13 08:18:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up
>
> -----Original Message-----
> From: Sandrine Boulanger
> Sent: Saturday, May 12, 2007 6:14 PM
> To: Chris Vandever; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still
> seeing cluster errors
>
> Procedure:
>
> Created 2 new flashes with the 2.2.2 version for g10r9 and g11r9.
> Clustered the 2 filers.
> Created the configuration (from gateway to volume level).
> From the 2 filers' active flash, mounted the sub#16 Release directory.
> Copied the new flash_install.sh under Release/etc.
> Started flash_install.sh.
> Answered Y at the end, to trigger the config-file copy later, once the 3.0 flash boots.
> Copied the new rc.initial to /etc on the new 3.0 flash.
> Ran "system reboot -s -y" on g10r9.
> g10r9 boots, gets the config files from the 220 flash through
> rc.initial, then comes up with the 3.0 version; cluster db still at 220.
> g11r9 is still running version 220. Cluster commands are fine.
> Ran "system reboot -s -y" on g11r9.
> g11r9 boots, gets the config files from the 220 flash through
> rc.initial, then comes up with the 3.0 version; cluster db still at 220.
> Both filers are now at version 3.0 with the cluster db at 220. Starting
> to see some cluster errors in elog on g10r9.
> 5 minutes later the cluster db is upgraded to 300. Still getting cluster
> errors on g10r9.
>
> -----Original Message-----
> From: Chris Vandever
> Sent: Sat 5/12/2007 5:36 PM
> To: Sandrine Boulanger; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: RE: Test upgrade from 222 to 30 prior to MD upgrade, still
> seeing cluster errors
>
> I'll need the exact procedure that's being used for the upgrade, and
> access to any scripts, etc.  Clearly, the clustering code is confused,
> since the clusDb shows it was successfully upgraded, yet the
> clustering code is still trying (unsuccessfully) to upgrade it.
>
> I won't be able to look at it this evening, but I will try to do so
> tomorrow.
>
> ChrisV
>
> ________________________________
>
> From: Sandrine Boulanger
> Sent: Sat 5/12/2007 12:33 PM
> To: Chris Vandever; Brian DeForest; Raj Kumar
> Cc: Andy Sharp
> Subject: Test upgrade from 222 to 30 prior to MD upgrade, still seeing
> cluster errors
>
> I tested Andy's fix for the copy of the config files. I used the
> files standalone, since we don't have a new build. It worked fine.
> Prior to the DBs being upgraded from 220 to 30, all commands were
> running OK. I tried to add a gns root and it refused because the DBs
> were not upgraded yet, which is good.
>
> Then the DBs were upgraded to 30, and at that point I saw the same
> type of errors we saw last Thursday: "cluster show summary" on g10r9,
> or "gns show cifs", was failing with cluster2 errors; see the elog
> below. The commands are OK on g11r9.
>
> G10r9 was the first to be rebooted on the 3.0 flash; g11r9 was second.
>
> G10r9's ssc IP is 10.2.9.10.
>
> I'll leave it in this state; let me know if I can help with debugging.
>
> # dbtools show /onstor/conf/cluster.db.DB0 -c
>
> Cluster DB Header
> ------------------------------------
> dbHdr.version = 0x30000
> dbHdr.signature = 0x434c5553
> dbHdr.blockSize = 256
> dbHdr.freeOffset = 0
> dbHdr.endOffset = 2336000
> dbHdr.recCount = 0
> dbHdr.hashTblSize = 4096
> dbHdr.hashTblOffset = 256
> dbHdr.DebugMode = 0
>
> # nfxsh
>
> Welcome to the ONStor NAS Gateway.
>
> g10r9 diag> gns show
> % Command incomplete.
>
> g10r9 diag> gns show
>   cifs  Show GNS CIFS
>
> g10r9 diag> gns show cifs
>   ROOTNAME[\Path]  Root and optional path
>   all
>
> g10r9 diag> gns show cifs all
> This command is unavailable until after the cluster database has been upgraded.
> % Command failure.
>
> g10r9 diag> gns show cifs all
> This command is unavailable until after the cluster database has been upgraded.
> % Command failure.
>
> g10r9 diag> cluster show summary
> Cluster Name: g10r9       Cluster State:   On
> ------------------------------------------------------
> NAS Gateways        IP              State   PCC
> ------------------------------------------------------
> g10r9               10.2.9.10       UP      YES
> g11r9               10.2.9.11       UP      NO
> Getting cluster information failed
> % Command failure.
>
> g10r9 diag>
>
> # cat /var/agile/messages
>
> May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over
> May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:00:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 12 12:00:38 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
> May 12 12:00:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:01:09 g10r9 last message repeated 3 times
> May 12 12:01:13 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
> May 12 12:01:19 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: cluster_request_upgrade_db: no reply bck -1
> May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Request to upgrade clusDb failed, rc 30
> May 12 12:01:29 g10r9 : 0:0:cluster2:ERROR: ClusterCtrl_ProcessTimerFunc: Running in degraded mode on version 0x20200, will retry upgrade to version 0x30000 later...
> May 12 12:01:39 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:01:40 g10r9 : 0:0:cluster2:ERROR: cluster_set_lock_on_rpc: no reply bck -1
> May 12 12:01:49 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:02:00 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:03:24 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 12 12:03:35 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:03:57 g10r9 last message repeated 3 times
> May 12 12:03:57 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
> May 12 12:04:06 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:07 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:07 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
> May 12 12:04:16 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:26 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:27 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:04:36 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:42 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:04:46 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:04:56 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:05:28 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:05:33 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
> May 12 12:05:34 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:05:34 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''
> May 12 12:05:41 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:05:50 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:05:54 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:05:59 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:06:11 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:06:15 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:06:20 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:06:24 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getFilerNameList: cannot get cluster rec, rcode 30
> May 12 12:06:29 g10r9 : 0:0:cluster2:ERROR: cluster_getRecordIdByKey: no reply bck -1
> May 12 12:06:29 g10r9 : 0:0:vsd:ERROR: vsd_procGetVsListReq : Cannot get vsId list for node g10r9
> May 12 12:06:33 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 12:06:40 g10r9 : 0:0:cluster2:ERROR: cluster_upgrade_db_runtime: ClusDb update complete; successfully updated version from 0x20200 to 0x30000
> May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178995860(554), sending to 10.2.9.11
> May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: cluster db version matches
> May 12 12:06:40 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0
> May 12 12:07:45 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: gns show : status[4]
> May 12 12:08:05 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
> May 12 12:08:05 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns show cifs all : status[11]
> May 12 12:10:12 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
> May 12 12:10:12 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: gns show cifs all : status[11]
>
> # zcat /var/agile/messages.0.gz
>
> May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CHASSISD: Chassis Manager: Started
> May 12 11:38:00 g10r9 : 0:0:cm:NOTICE: CM: opened kseg1
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 0, State Up
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 1, State Up
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 2, State Up
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 1, CPU 3, State Up
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT CPU: Slot 2, CPU 0, State Up
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.0 State Up, Msg ''
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.1 State Down, Msg ''
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.2 State Down, Msg ''
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Port: fp1.3 State Down, Msg ''
> May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open failed
> May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open failed
> May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed
> May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed
> May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 5
> May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 6
> May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 8
> May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 9
> May 12 11:38:16 g10r9 : 0:0:sdm:ERROR: sdm_procScsiRelayRsp: Invalid TID 7
> May 12 11:38:16 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g10r9', State Up, Msg ''
> May 12 11:38:16 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success
> May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
> May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : NCM session closed
> May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_closeSess : SDM session closed
> May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : NCM session open success
> May 12 11:38:17 g10r9 : 0:0:evm:NOTICE: evm_rcvRmcMsg : SDM session open success
> May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: Using 10.2.9.10 as my primary address
> May 12 11:38:17 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64
> May 12 11:38:17 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'nis-vol', Id 0x000002000000006a, Event 'Create', State 'Down'
> May 12 11:38:18 g10r9 : 1:2:sanm_ag:WARNING: 2: sanm_doRMCLocalClose: reopening session to sanmd
> May 12 11:38:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Create', State 'Down'
> May 12 11:38:19 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'win-vol', Id 0x000002000000006d, Event 'Create', State 'Down'
> May 12 11:38:19 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting version 3.0.0.0b : status[2]
> May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'openldap-vol', Id 0x0000020000000071, Event 'Create', State 'Down'
> May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local-nis-vol', Id 0x000002000000006e, Event 'Create', State 'Down'
> May 12 11:38:20 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'authdom-vol', Id 0x0000020000000072, Event 'Create', State 'Down'
> May 12 11:38:21 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'local', Id 0x0000020000000075, Event 'Create', State 'Down'
> May 12 11:38:22 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: -> rc.agile: booting from /dev/wd0a : status[2]
> May 12 11:38:22 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Create', State 'Down'
> May 12 11:38:23 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'LSI_E4600A_R46_g10r9_core', Id 0x0000020000000074, Event 'Create', State 'Down'
> May 12 11:38:32 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
> May 12 11:38:39 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open failed
> May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000001026a8a93, WWN 100000068d002a40, LUN 1, vendor 'IBM', model 'ULTRIUM-TD3'
> May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000299256570, WWN 100000068d002a40, LUN 2, vendor 'IBM', model 'ULTRIUM-TD3'
> May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000474cbbcf7, WWN 100000068d002a40, LUN 4, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:42 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000005020ee656, WWN 100000068d002a40, LUN 5, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000007527722f4, WWN 100000068d002a40, LUN 7, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000008c30a5552, WWN 100000068d002a40, LUN 8, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000ad8b479e3, WWN 100000068d002a40, LUN 10, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000b326471cc, WWN 100000068d002a40, LUN 11, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000d7663d86b, WWN 100000068d002a40, LUN 13, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000000e50bb2aad, WWN 100000068d002a40, LUN 14, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:43 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000010cecbc163, WWN 100000068d002a40, LUN 16, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001101709690, WWN 100000068d002a40, LUN 17, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_closeSess : NCM session closed
> May 12 11:38:44 g10r9 : 0:0:ea:NOTICE: ea_rcvRmcMsg : NCM session open success
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000120c18b3b8, WWN 100000068d002a40, LUN 18, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001483e7a0c2, WWN 100000068d002a40, LUN 20, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000154c5cf731, WWN 100000068d002a40, LUN 21, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 000000168d36b6dd, WWN 100000068d002a40, LUN 22, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000017428de12e, WWN 100000068d002a40, LUN 23, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 00000019533a4544, WWN 100000068d002a40, LUN 25, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new tape device: id 0000001ad886bb10, WWN 100000068d002a40, LUN 26, vendor 'IBM', model 'ULTRIUM-TD2'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000000b26acfa2, WWN 100000068d002a40, LUN 0, vendor 'ADIC', model 'Scalar-24'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000003632ebb68, WWN 100000068d002a40, LUN 3, vendor 'STK', model 'L180'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000061ed3ca31, WWN 100000068d002a40, LUN 6, vendor 'STK', model 'L180'
> May 12 11:38:44 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000093a855f24, WWN 100000068d002a40, LUN 9, vendor 'STK', model 'L180'
> May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ca1e082bf, WWN 100000068d002a40, LUN 12, vendor 'STK', model 'L180'
> May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 0000000ff746fa9b, WWN 100000068d002a40, LUN 15, vendor 'STK', model 'L180'
> May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 00000013719221e7, WWN 100000068d002a40, LUN 19, vendor 'STK', model 'L180'
> May 12 11:38:45 g10r9 : 0:0:tape-driver:NOTICE: Adding new media-changer: id 000000182eb0d71b, WWN 100000068d002a40, LUN 24, vendor 'STK', model 'L180'
> May 12 11:38:46 g10r9 : 0:0:auth_agent:NOTICE: main: auth-agent started
> May 12 11:38:49 g10r9 : 1:2:sanm_ag:WARNING: 3: sanm_doRMCLocalClose: reopening session to sanmd
> May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 10: vstack_vlink_create:294 : vl FO active 1 prefered 1
> May 12 11:38:50 g10r9 : 1:0:ipm:NOTICE: 11: vstack_vlink_create:329: vlink lp link op-state DOWN
> May 12 11:38:52 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.1.1, Port bp0, State Up
> May 12 11:38:55 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'jumbo-vol', Id 0x0000020000000073, Event 'Online', was offline for roughly 255 sec.
> May 12 11:38:55 g10r9 : 0:0:sanm:NOTICE: SANM: ONStor Data Mirror (c)2006: Started
> May 12 11:38:57 g10r9 : 1:0:bsdrl:WARNING: 12: route dst 0 mask 0 gateway 100030a fail to be added/deleted. error = 51
> May 12 11:38:57 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.8.1, Port bp0, State Up
> May 12 11:38:59 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'VS_MGMT_512', Id 1, State 'Up'
> May 12 11:38:59 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[1]
> May 12 11:39:00 g10r9 : 0:0:ea:ERROR: ea_procMovInReq[1586] : Volume (vol_mgmt_512) is locked for Online operation
> May 12 11:39:01 g10r9 : 0:0:vsd:ERROR: vsd_mountVolProc[4941] : Failed to mount volume ID=0x20000000066. Operation not allowed.
> May 12 11:39:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name 'vol_mgmt_512', Id 0x0000020000000066, Event 'Online', was offline for roughly 266 sec.
> May 12 11:39:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'JUMBO_G10R9', Id 8, State 'Up'
> May 12 11:39:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[8]
> May 12 11:44:43 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: system version : status[0]
> May 12 11:44:48 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]
> May 12 11:45:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[2]: interface show ip -n g11r9 : status[0]
> May 12 11:45:37 g10r9 : 0:0:nfxsh:NOTICE: cmd[3]: cluster sho gr : status[0]
> May 12 11:45:41 g10r9 : 0:0:nfxsh:NOTICE: cmd[4]: cluster show summary : status[0]
> May 12 11:45:52 g10r9 : 0:0:nfxsh:NOTICE: cmd[5]: vsvr show all : status[0]
> May 12 11:49:23 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog show config : status[0]
> May 12 11:50:35 g10r9 : 0:0:cluster2:NOTICE: node 10.2.9.11 is down
> May 12 11:50:35 g10r9 : 0:0:eventd:WARNING: Process-EVENT 0.0.0.0: Mgmt Port 0.0.0.0 PCC, State Up
> May 12 11:50:51 g10r9 : 0:0:cluster2:WARNING: ClusterCtrl_ProcessTimerFunc: post node down hostname g11r9
> May 12 11:50:51 g10r9 : 0:0:eventd:CRITICAL: Process-EVENT Node: Name 'g11r9', State Down, Msg ''
> May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=2), num_vsvr 2
> May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 3
> May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 4
> May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 5
> May 12 11:50:52 g10r9 : 0:0:vtm:NOTICE: vtm_get_better_node[729]: node1 g10r9 has matching (vlink#=1, vol#=1), num_vsvr 6
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Enabled'
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Enabled'
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr:
> Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Enabled'
>=20
>=20
>=20
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr:
> Virtual server 'OPENLDAP_G10R9', Id 6, State 'Enabled'
>=20
>=20
>=20
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr:
> Virtual server 'AUTHDOM_G10R9', Id 7, State 'Enabled'
>=20
>=20
>=20
> May 12 11:51:01 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr:
> Virtual server 'NONETBIOS_G10R9', Id 10, State 'Enabled'
>=20
>=20
>=20
> May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'nis-vol', Id 0x000002000000006a, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'local', Id 0x0000020000000075, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr:
> Virtual server 'NONETBIOS_G10R9', Id 10, State 'Up'
>=20
>=20
>=20
> May 12 11:51:02 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event
> for vsId[10]
>=20
>=20
>=20
> May 12 11:51:02 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'nis-vol', Id 0x000002000000006a, Event 'Online', was offline for
> roughly 69 sec.
>=20
>=20
>=20
> May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'win-vol', Id 0x000002000000006d, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'local-nis-vol', Id 0x000002000000006e, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'openldap-vol', Id 0x0000020000000071, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:03 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'authdom-vol', Id 0x0000020000000072, Event 'Takeover', State 'Up'
>=20
>=20
>=20
> May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'local', Id 0x0000020000000075, Event 'Online', was offline for
> roughly 52 sec.
>=20
>=20
>=20
> May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'openldap-vol', Id 0x0000020000000071, Event 'Online', was offline
> for roughly 62 sec.
>=20
>=20
>=20
> May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'win-vol', Id 0x000002000000006d, Event 'Online', was offline for
> roughly 66 sec.
>=20
>=20
>=20
> May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'local-nis-vol', Id 0x000002000000006e, Event 'Online', was offline
> for roughly 59 sec.
>=20
>=20
>=20
> May 12 11:51:04 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Volume: name
> 'authdom-vol', Id 0x0000020000000072, Event 'Online', was offline for
> roughly 55 sec.
>=20
>=20
>=20
> May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
> 192.167.3.1, Port bp0, State Up
>=20
>=20
>=20
> May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
> 192.167.6.1, Port bp0, State Up
>=20
>=20
>=20
> May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
> 192.167.5.1, Port bp0, State Up
>=20
>=20
>=20
> May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
> 192.167.4.1, Port bp0, State Up
>=20
>=20
>=20
> May 12 11:51:06 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP
> 192.167.4.2, Port bp0, State Up
>=20
>=20
>=20
> May 12 11:51:07 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer:
> ncm_open_filer(g11r9) succeeded
>=20
>=20
>=20
> May 12 11:51:07 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'NIS_G10R9', Id 3, State 'Up'
> May 12 11:51:07 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[3]
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'OPENLDAP_G10R9', Id 6, State 'Up'
> May 12 11:51:08 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[6]
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.1, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.2, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.3, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.4, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.5, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.6, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.7, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.8, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.9, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.10, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.11, Port bp0, State Up
> May 12 11:51:08 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.12, Port bp0, State Up
> May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.13, Port bp0, State Up
> May 12 11:51:09 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.14, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.15, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.16, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.17, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.18, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.19, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.20, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -14
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.21, Port bp0, State Up
> May 12 11:51:10 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.22, Port bp0, State Up
> May 12 11:51:11 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.23, Port bp0, State Up
> May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.24, Port bp0, State Up
> May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:12 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.25, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.26, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.27, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.28, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.29, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.30, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.31, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT IP i/f: IP 192.167.7.32, Port bp0, State Up
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: IP i/f RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'LOCAL_NIS_G10R9', Id 5, State 'Up'
> May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[5]
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: forwardToLocalApps dstAPP: asd_rmc EVENT: Vsvr RMC Error -3
> May 12 11:51:13 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'AUTHDOM_G10R9', Id 7, State 'Up'
> May 12 11:51:13 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[7]
> May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts
> May 12 11:51:14 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1
> May 12 11:51:18 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Vsvr: Virtual server 'WINDOWS_G10R9', Id 4, State 'Up'
> May 12 11:51:18 g10r9 : 0:0:sanm:NOTICE: Processing fail-over event for vsId[4]
> May 12 11:51:43 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
> May 12 11:51:46 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: ncm_open_filer(g11r9) succeeded
> May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_reopen_filer: retries to filer(g11r9) failed - aborting attemtpts
> May 12 11:51:49 g10r9 : 0:0:ncm:WARNING: ncmd : ncm_handle_net_msg_done: ncm_repoen filer failed - -1
> May 12 11:53:48 g10r9 : 0:0:eventd:NOTICE: Process-EVENT Node: Name 'g11r9', State Up, Msg ''
> May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.2.9.11
> May 12 11:53:53 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file in progress, remote ver 1178991276(64), sending to 10.2.9.11
> May 12 11:54:01 g10r9 : 0:0:cluster2:NOTICE: urecovery_Interact: send new file end, code 0
> May 12 11:54:29 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: cl sh cl : status[0]
> May 12 11:58:08 g10r9 : 0:0:nfxsh:NOTICE: cmd[0]: elog find cluster2 : status[0]
> May 12 11:59:10 g10r9 : 0:0:cluster2:NOTICE: ubik:server 10.2.9.11 is back up: will be contacted through 10.12.9.11
> May 12 11:59:12 g10r9 : 0:0:cluster2:NOTICE: ubik init with buff size 64
> May 12 11:59:32 g10r9 : 0:0:cluster2:ERROR: cluster_get_db_ver: no reply bck -1
> May 12 11:59:32 g10r9 : 0:0:nfxsh:NOTICE: cmd[1]: gns add root cifs gnsroot : status[11]  <-- tried this before the DB was upgraded
> May 12 12:00:03 g10r9 : 0:0:spm:ERROR: failed to get filer list from cluster daemon
> May 12 12:00:02 g10r9 newsyslog[23005]: logfile turned over
>
> #