AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:<20080404101600.5f5e947a@ripper.onstor.net>
CFG:
PT:0
S:andy.sharp@onstor.com
RQ:
SSV:onstor-exch02.onstor.net
NSV:
SSH:
R:<sripal.surendiran@onstor.com>
MAID:1
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/andys@onstor.net@onstor-exch02.onstor.net/INBOX	0	47F65C9A.9040307@onstor.com
X-Sylpheed-End-Special-Headers: 1
Date: Fri, 4 Apr 2008 10:17:39 -0700
From: Andrew Sharp <andy.sharp@onstor.com>
To: Sripal <sripal.surendiran@onstor.com>
Subject: Re: Submittal 16 Beta Release Candidate available
Message-ID: <20080404101739.29f8b847@ripper.onstor.net>
In-Reply-To: <47F65C9A.9040307@onstor.com>
References: <BB375AF679D4A34E9CA8DFA650E2B04E12BFD4@onstor-exch02.onstor.net>
	<47F64800.4050200@onstor.com>
	<BB375AF679D4A34E9CA8DFA650E2B04E042F013E@onstor-exch02.onstor.net>
	<47F65C9A.9040307@onstor.com>
Organization: Onstor
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Sripal,

This is the contents of /etc/network/interfaces on Jobi's filer after
he did a config reset and then upgraded to sub16:

# cat /etc/network/interfaces 
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
#   address not_configured
#   netmask not_configured

# The secondary network interface
auto eth1
iface eth1 inet static
  address address
  netmask netmask


I saw the same breakage for eth1 on Sandrine's filer, except eth0 was
properly configured on hers.

This is the contents of /etc/hostname on Jobi's filer:

# cat /etc/hostname
localhostlocalhost:~# 

So the file has no newline at the end; the second localhost is
actually the shell prompt running into the cat output.
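A tiny helper along these lines (hypothetical, not part of any OnStor
tooling) rewrites such a file with exactly one name and a guaranteed
trailing newline, since printf emits exactly the newline we ask for:

```shell
#!/bin/sh
# Hypothetical repair helper: write exactly one hostname plus a
# trailing newline into the given file, replacing whatever was there.
fix_hostname_file() {
    file=$1
    name=$2
    # printf '%s\n' guarantees the single trailing newline that the
    # broken file was missing
    printf '%s\n' "$name" > "$file"
}
```

E.g. fix_hostname_file /etc/hostname g14r10 (g14r10 is Sandrine's
filer name from later in the thread; substitute the right name for
Jobi's box), then reboot or re-run hostname.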

So I suspect there is at least one problem here, if not several.  The
misconfigured interfaces file is obviously one of them.  I was going
to blame it on Sandrine, but now that I see it on Jobi's filer too,
I'm thinking a bug in the FTI code somewhere is causing it.
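For comparison, an unconfigured eth1 stanza should presumably mirror
the eth0 one above, with the auto/address/netmask lines commented out
(a sketch; the exact placeholder text is whatever the FTI generator
writes):

```
# The secondary network interface
# auto eth1
iface eth1 inet static
#   address not_configured
#   netmask not_configured
```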


On Fri, 04 Apr 2008 22:21:38 +0530 Sripal
<sripal.surendiran@onstor.com> wrote:

> Larry,
> 
> The hostname file is changed to localhost
> in /etc/init.d/onstor-config. The workflow is:
> 1. The reset happens if the /onstor/conf/nasgwayinit.status file is
> present but doesn't have an INITIALIZED entry (which means the filer
> is in the INITIALIZING state).
> 2. This is to ensure that the filer is brought back to its default
> values if the FTI failed on its last attempt after partially updating
> the configuration files.
> 
> I am not sure how this condition was hit by the filer, since there
> won't be a /onstor/conf/nasgwayinit.status file until the filer moves
> to the new FTI after the next boot-up, when you run nfxsh.
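That condition can be sketched as a small shell check (a hypothetical
rendering of the logic described above; the actual
/etc/init.d/onstor-config script may well differ):

```shell
#!/bin/sh
# Hypothetical sketch of the reset condition: reset the hostname file
# to localhost only when the status file exists but does NOT contain
# an INITIALIZED entry, i.e. a previous FTI run stopped partway.
maybe_reset_hostname() {
    status_file=$1
    hostname_file=$2
    if [ -f "$status_file" ] && ! grep -q INITIALIZED "$status_file"; then
        printf 'localhost\n' > "$hostname_file"
    fi
}
```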
> 
> Sandrine,
> 
> Did your system crash while rebooting, or after it had booted up
> following the upgrade? Did running nfxsh cause a crash? What
> operations did you perform before you noticed that the hostname had
> changed to localhost?
> 
> -Sripal.
> 
> Larry Scheer wrote:
> >
> > Sripal,
> >   Thanks for this summary. It helps with understanding the behavior
> > we are seeing.
> > Sandrine's machine was running submittal 15 already configured. She 
> > ran system upgrade to upgrade the primary flash. When the system
> > was rebooted the hostname was changed to localhost. Because of that
> > ncmd would not start.
> >
> > What scenarios would cause the hostname to change when 
> > /onstor/etc/nasgwayinit.status is not present and one SC interface
> > is configured?
> >
> > Something mis-fired in this case and we need to find the root cause.
> >
> > Thanks,
> >
> > Larry
> >
> >
> > -----Original Message-----
> > From: Sripal Surendiran (HCL)
> > Sent: Fri 4/4/2008 8:23 AM
> > To: Narain Ramadass
> > Cc: Sandrine Boulanger; Andy Sharp; Manohar Divate; Rendell Fong; 
> > Larry Scheer; dl-Cougar; dl-QA
> > Subject: Re: Submittal 16 Beta Release Candidate available
> >
> > Narain,
> >
> > 1. The /onstor/etc/nasgwayinit.conf file contains information
> > obtained from the DHCP server during boot-up. When nfxsh prompts
> > the user with the FTI questions, these values are read by the
> > Initial Configuration API and shown to the user as default values.
> >
> > 2. /onstor/etc/nasgwayinit.status contains the current status of
> > the filer (Initialized || Uninitialized).
> >
> > 3. When the user upgrades to the new Initial Configuration, running
> > nfxsh checks the contents of /onstor/etc/nasgwayinit.status. If
> > /onstor/etc/nasgwayinit.status is not present, it checks whether at
> > least one of the SC interfaces is configured before deciding that
> > the filer is in the uninitialized state. If at least one SC
> > interface is configured, then nfxsh will create a dummy
> > /onstor/etc/nasgwayinit.conf with the information available from
> > the configuration files on the filer.
> >
> > 4. It also sets the filer state to *Initialized* in the
> > /onstor/etc/nasgwayinit.status file.
> >
> > 5. Since Sandrine's filer is already in the initialized state (the
> > nasgwayinit.status file contains an Initialized entry), FTI doesn't
> > bother recreating /onstor/etc/nasgwayinit.conf after it has been
> > removed from the filer.
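Steps 3 and 4 of the flow above can be sketched roughly as follows
(hypothetical helper and argument names; the real nfxsh implementation
may differ, and the dummy-conf-file synthesis is omitted here):

```shell
#!/bin/sh
# Hypothetical sketch of the nfxsh decision in steps 3 and 4: if the
# status file exists, trust it; otherwise fall back to checking whether
# any SC interface is configured, and if so record the Initialized
# state for future boots.  Returns 0 = initialized, 1 = uninitialized.
decide_filer_state() {
    status_file=$1
    sc_configured=$2   # "yes" if at least one SC interface has an address
    if [ -f "$status_file" ]; then
        grep -qx Initialized "$status_file"
        return $?
    fi
    if [ "$sc_configured" = yes ]; then
        # step 4: persist the synthesized state
        printf 'Initialized\n' > "$status_file"
        return 0
    fi
    return 1
}
```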
> >
> > -Sripal
> >
> > Narain Ramadass wrote:
> >
> >         Sripal,
> >        
> >         Please investigate the defect Sandrine reports about the
> > conf file not being created when the filer is configured.
> >        
> >         Once again: as per design:-
> >        
> >         When the filer boots and the conf/semaphore files are not 
> > present, but the filer is configured for SC1 or SC2 (maybe there is 
> > another condition here that I missed?) the FTI screen will not be 
> > thrown and the conf/semaphore files would be created.
> >        
> >         If this is not the behavior on Cougar, I think we have a bug
> > there. 
> >         Narain.
> >        
> >        
> >         -----Original Message-----
> >         From: Sandrine Boulanger
> >         Sent: Thu 4/3/2008 6:32 PM
> >         To: Andy Sharp; Manohar Divate
> >         Cc: Rendell Fong; Narain Ramadass; Larry Scheer; dl-Cougar;
> > dl-QA
> >         Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         localhost:/onstor/conf# vi /etc/hostname => changed
> > localhost to g14r10
> >        
> >         localhost:/onstor/conf# rm
> > -rf /onstor/conf/nasgwayinit.conf => I removed it since Narain
> > said it would be created if not there when at least one of the sc
> > interfaces is configured 
> >         localhost:/onstor/conf# vi /etc/network/interfaces => 
> > commented out the 2 lines
> >        
> >        
> >        
> >         reboot
> >        
> >        
> >        
> >         now the system is up, but /onstor/conf/nasgwayinit.conf has 
> > not been created (!)
> >        
> >        
> >        
> >        
> >        
> >         Got this error for the core volume BTW:
> >        
> >         Apr  3 18:25:06 g14r10 : 1:2:coredump:INFO: 52: core dump 
> > state: on - volume: coreg14
> >        
> >         Apr  3 18:25:06 g14r10 : 0:0:evm:NOTICE: evm_closeLunReq:
> > SDM error[-1] for close of lunId[0x0], rc[-1]
> >        
> >         Apr  3 18:25:06 g14r10 : 0:0:evm:INFO: 
> > evm_closeAllLunsInVolume: Cannot close all luns for
> > volume[coreg14], rc[-5593]
> >        
> >        
> >        
> >         -----Original Message-----
> >         From: Andy Sharp
> >         Sent: Thursday, April 03, 2008 6:10 PM
> >         To: Manohar Divate
> >         Cc: Sandrine Boulanger; Rendell Fong; Narain Ramadass;
> > Larry Scheer; dl-Cougar; dl-QA
> >         Subject: Re: Submittal 16 Beta Release Candidate available
> >        
> >        
> >        
> >         Manny is, of course, quite correct: always listen to me.
> >        
> >        
> >        
> >         I've been poking around on Sandrine's system, g14r10, and 
> > there is more
> >        
> >         than a little weirdness.  Here's the scoop: it was a
> > _system upgrade_
> >        
> >         to primary flash; upgrading from a debug build to an opt 
> > build; with the
> >        
> >         new FTI code merged in; with SC2 not configured.  What
> > could go wrong
> >        
> >         with that?
> >        
> >        
> >        
> >         I believe this is a known bug, actually, and a fix for it
> > is pending.
> >        
> >         It might be combining with the bug Rendell mentioned which
> > I'm not 
> >         familiar with (is there a bug filed?).  The known bug I'm 
> > referring to
> >        
> >         isn't necessarily all that known when doing a system
> > upgrade, but it is
> >        
> >         known to be a problem if you do a flash_install, followed
> > by a "copy
> >        
> >         config from standby" and you don't have SC2 configured, 
> > although it's
> >        
> >         rumoured to only affect bobcat/bsd.
> >        
> >        
> >        
> >         Also, there is some bogosity in the /etc/network/interfaces
> > file: 
> >        
> >        
> >         # The secondary network interface
> >        
> >         auto eth1
> >        
> >         iface eth1 inet static
> >        
> >           address address
> >        
> >           netmask netmask
> >        
> >        
> >        
> >         The 'address', 'netmask' and 'auto eth1' lines should be 
> > commented out.
> >        
> >        
> >        
> >         Go ahead and fix the interfaces file, and fix
> > the /etc/hostname 
> >         and /onstor/conf/nasgwayinit.conf files and see if the
> > filer can resume
> >        
> >         operation.
> >        
> >        
> >        
> >        
> >        
> >         On Thu, 3 Apr 2008 17:53:56 -0700 "Manohar Divate"
> >        
> >         <manohar.divate@onstor.com> wrote:
> >        
> >        
> >        
> >         > Not me. All clear, and everything looks OK on my 2-node
> > cluster after the
> >        
> >         > flash install
> >        
> >         >
> >        
> >         > listen to Andy (do a flash install)
> >        
> >         >
> >        
> >         >
> >        
> >         >
> >        
> >         >
> >        
> >         >
> >        
> >         > -Manny
> >        
> >         >
> >        
> >         >
> >        
> >         > You are good to go if you use this procedure:
> >        
> >         >
> >        
> >         > Do a NfXSH>system copy init
> >        
> >         > Nfxsh>exit
> >        
> >         >
> >        
> >         > #mount -r
> >        
> >         > 
> > 10.0.0.236:/nx-d_buildup/build-trees/R4.0.0.0/R4.0.0.0-040308/nfx-tree/B
> >        
> >         > uild/cg/opt/Release /mnt
> >        
> >         >
> >        
> >         > #cd /mnt/etc
> >        
> >         > #./flash_install.sh /mnt 0 or 1, depending on your secondary
> >        
> >         >
> >        
> >         > #Answer N for FTI
> >        
> >         > #Answer Y for copy for upgrade
> >        
> >         >
> >        
> >         > Nfxsh>System reboot -s -f -y
> >        
> >         >
> >        
> >         >
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From: Sandrine Boulanger
> >        
> >         > Sent: Thursday, April 03, 2008 5:44 PM
> >        
> >         > To: Rendell Fong; Narain Ramadass; Larry Scheer;
> >         > dl-Cougar; 
> > dl-QA
> >        
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > Yes, looks like everyone upgrading to sub#16 who does not 
> > have this
> >        
> >         > file will end up with the same problem.
> >        
> >         > I'll file a defect.
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From: Rendell Fong
> >        
> >         > Sent: Thursday, April 03, 2008 5:17 PM
> >        
> >         > To: Sandrine Boulanger; Narain Ramadass; Larry Scheer; 
> > dl-Cougar;
> >        
> >         > dl-QA
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > The /onstor/conf/nasgwayinit.conf file is not configured
> >         > so 
> > it may be
> >        
> >         > contributing to the confusion.
> >        
> >         > I had to manually edit this file on my filer after doing
> >         > a 
> > system
> >        
> >         > upgrade recently.
> >        
> >         >
> >        
> >         > localhost:/onstor/conf# cat nasgwayinit.conf
> >        
> >         > SC1_IP:
> >        
> >         > SC1_NETMASK:
> >        
> >         > SC2_IP:
> >        
> >         > SC2_NETMASK:
> >        
> >         > GATEWAY:
> >        
> >         > NTP:0.debian.pool.ntp.or
> >        
> >         > HOST_NAME:localhost
> >        
> >         > DOMAIN_NAME:
> >        
> >         > RESOLVER:
> >        
> >         > BOOT_TYPE:STATIC
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From: Sandrine Boulanger
> >        
> >         > Sent: Thursday, April 03, 2008 5:12 PM
> >        
> >         > To: Narain Ramadass; Larry Scheer; dl-Cougar; dl-QA
> >        
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > Looks like after the reboot, the name of the filer was 
> > changed to
> >        
> >         > localhost, and that's what ncmd is tripping on, because
> >         > it's 
> > different
> >        
> >         > from the name in cluster.conf
> >        
> >         >
> >        
> >         > Apr  3 16:28:00 g14r10 pm: pm_terminate: child 1257 
> > (/onstor/bin/asd)
> >        
> >         > terminated
> >        
> >         > Apr  3 16:28:01 g14r10 pm: pm_terminate: child 1256
> >        
> >         > (/onstor/bin/snmpd) terminated
> >        
> >         > Apr  3 16:28:01 g14r10 pm: pm_terminate: child 1100
> >        
> >         > (/onstor/bin/cluster_server) terminated
> >        
> >         > Apr  3 16:28:01 g14r10 pm: pm_terminate: child 1099
> >        
> >         > (/onstor/bin/evm_cfgd) terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1069
> >        
> >         > (/onstor/bin/sdm_cfgd) terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1068
> >        
> >         > (/onstor/bin/chassisd) terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1067
> >        
> >         > (/onstor/bin/timekeeper) terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1066
> >        
> >         > (/onstor/bin/eventd) terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1046 
> > (/onstor/bin/ncmd)
> >        
> >         > terminated
> >        
> >         > Apr  3 16:28:02 g14r10 pm: pm_terminate: child 1036 
> > (/onstor/bin/elog)
> >        
> >         > terminated
> >        
> >         > Apr  3 16:30:27 localhost pm: /onstor/bin/elog: finished
> >        
> >         > initialization. Apr  3 16:30:28 localhost :
> >        
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: finished initialization.
> >        
> >         > Apr  3 16:30:29 localhost : 0:0:pm:ERROR: pm_sig_handler:
> >        
> >         > /onstor/bin/ncmd (pid 987) terminated with signal Aborted
> >        
> >         > Apr  3 16:30:29 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/eventd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:30 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:31 localhost : 0:0:pm:ERROR: pm_sig_handler:
> >        
> >         > /onstor/bin/ncmd (pid 1000) terminated with signal Aborted
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/timekeeper:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/chassisd:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:32:02 localhost : 0:0:cm:NOTICE: CHASSISD:
> >         > Chassis 
> > Manager:
> >        
> >         > Started
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From: Narain Ramadass
> >        
> >         > Sent: Thursday, April 03, 2008 5:00 PM
> >        
> >         > To: Sandrine Boulanger; Larry Scheer; dl-Cougar; dl-QA
> >        
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > Don't know for sure, but this may be a problem with the 
> > pmtab having
> >        
> >         > entries for cluster_contrl?
> >        
> >         >
> >        
> >         > Thought it was automatically started after a recent
> >         > change 
> > from Chris.
> >        
> >         > Maybe the merge from the FTI branch to dev messed this
> >         > up. 
> > Can you
> >        
> >         > manually uncomment this in your pmtab and retry, Sandrine? 
> > The system
> >        
> >         > does not come up because pm is waiting for cluster_contrl
> >         > to ack
> >        
> >         > (which it never will).
> >        
> >         >
> >        
> >         > Thanx,
> >        
> >         > Narain.
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From:     Sandrine Boulanger
> >        
> >         > Sent:     Thursday, April 03, 2008 4:50 PM
> >        
> >         > To: Sandrine Boulanger; Larry Scheer; dl-Cougar; dl-QA
> >        
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > Looks like it's crashed too, but does not show in the log
> >        
> >         >
> >        
> >         > localhost diag> system show chassis
> >        
> >         >
> >        
> >         >  module     cpu         state
> >        
> >         > ----------------------------------------------
> >        
> >         >  SSC        SSC         UP
> >        
> >         >  NFPNIM     TXRX0       DOWN
> >        
> >         >             TXRX1       DOWN
> >        
> >         >             FP0         DOWN
> >        
> >         >             FP1         DOWN
> >        
> >         >             FP2         DOWN
> >        
> >         >             FP3         DOWN
> >        
> >         > ----------------------------------------------
> >        
> >         >
> >        
> >         > _____________________________________________
> >        
> >         > From: Sandrine Boulanger
> >        
> >         > Sent: Thursday, April 03, 2008 4:48 PM
> >        
> >         > To: Larry Scheer; dl-Cougar; dl-QA
> >        
> >         > Subject: RE: Submittal 16 Beta Release Candidate available
> >        
> >         >
> >        
> >         > I upgraded this single node to sub#16 without issue, but 
> > then I ran
> >        
> >         > into a crash, and after reboot this is what I get.
> >        
> >         > How come ncmd keeps initializing and it does not get past
> >        
> >         > cluster_contrl?
> >        
> >         >
> >        
> >         > localhost:~# ps ax | grep onstor
> >        
> >         >   826 ?        Ss     0:00 /onstor/bin/sshd -u0
> >        
> >         >   968 ?        Ss     0:00 /onstor/bin/pm
> >        
> >         >   974 ?        S      0:02 /onstor/bin/elog
> >        
> >         >   992 ?        S      0:00 /onstor/bin/eventd
> >        
> >         >  1001 ?        S      0:00 /onstor/bin/timekeeper
> >        
> >         >  1016 ?        S      0:00 /onstor/bin/chassisd
> >        
> >         >  1024 ?        S      0:00 /onstor/bin/sdm_cfgd
> >        
> >         >  1034 ?        S      0:01 /onstor/bin/evm_cfgd
> >        
> >         >  1037 ?        S<     0:00 /onstor/bin/cluster_server
> >        
> >         >  1038 ?        R<    13:25 /onstor/bin/cluster_contrl
> >        
> >         >  1039 ?        S<     0:00 /onstor/bin/cluster_contrl
> >        
> >         >  1636 ?        Ss     0:00 /bin/sh /onstor/bin/emrscron
> >         > -g 
> > h_res_stats
> >        
> >         >  1683 pts/0    R+     0:00 grep onstor
> >        
> >         > localhost:~#
> >        
> >         >
> >        
> >         >
> >        
> >         > Apr  3 16:30:27 localhost syslogd 1.4.1#18: restart.
> >        
> >         > Apr  3 16:30:27 localhost pm: /onstor/bin/elog: finished
> >        
> >         > initialization. Apr  3 16:30:28 localhost kernel: fp1: 9:
> >        
> >         > ispfc:ISPFC_CS_NSDB_CHANGE,[8015] on port [10d00]
> >        
> >         > Apr  3 16:30:28 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:29 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/eventd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:30 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/timekeeper:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:30:31 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/chassisd:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:30:32 localhost kernel: fp2: 10: ispfc:
> >         > ispfc:sp1.1
> >        
> >         > Fibrechannel link now online
> >        
> >         > Apr  3 16:30:32 localhost kernel: fp3: 11:
> >        
> >         > ispfc:ISPFC_CS_NSDB_CHANGE,[8014] LoopID_LoginState
> >         > [10004]
> >        
> >         > Apr  3 16:30:50 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> >        
> >         > Apr  3 16:31:18 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> >        
> >         > Apr  3 16:31:49 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> >        
> >         > Apr  3 16:32:02 localhost : 0:0:cm:NOTICE: CHASSISD:
> >         > Chassis 
> > Manager:
> >        
> >         > Started
> >        
> >         > Apr  3 16:32:02 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/sdm_cfgd:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:32:02 localhost : 0:0:sdm:INFO: ONStor Storage
> >         > Device
> >        
> >         > Manager (c)2003: Started
> >        
> >         > Apr  3 16:32:02 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:32:03 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/evm_cfgd:
> >        
> >         > finished initialization.
> >        
> >         > Apr  3 16:32:03 localhost :
> >         > 0:0:pm:INFO: /onstor/bin/ncmd: 
> > finished
> >        
> >         > initialization.
> >        
> >         > Apr  3 16:32:03 localhost : 0:0:cluster2:INFO: 
> > ClusterCtrl_InitUbik:
> >        
> >         > connection opened e0a020a
> >        
> >         > Apr  3 16:32:03 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > SDM session
> >        
> >         > open success
> >        
> >         > Apr  3 16:32:04 localhost : 0:0:cluster2:INFO:
> >        
> >         > cluster_clientSendRmcRpc: Error sending rpc to cluster2, 
> > flags 820a,
> >        
> >         > name evm, rc -20, retrying... Apr  3 16:32:04 localhost :
> >        
> >         > 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess 
> > for app
> >        
> >         > cluster2, rc -20 Apr  3 16:32:04 localhost : 0:0:evm:INFO:
> >        
> >         > evm_rcvRmcMsg : NCM session open failed
> >        
> >         > Apr  3 16:32:04 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:cluster2:INFO:
> >        
> >         > cluster_clientSendRmcRpc: Error sending rpc to
> >         > clusterrpc, flags
> >        
> >         > 820a, name elog, rc -20, retrying...
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app clusterrpc, rc -20
> >        
> >         > Apr  3 16:32:05 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO:
> >        
> >         > cluster_clientSendRmcRpc: Error sending rpc to
> >         > clusterrpc, flags
> >        
> >         > 820a, name sdm, rc -20, retrying...
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO:
> >        
> >         > cluster_clientSendRmcRpc: Error sending rpc to
> >         > clusterrpc, flags
> >        
> >         > 820a, name elog, rc -20, retrying...
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app clusterrpc, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app clusterrpc, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:06 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:13 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:13 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:13 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:20 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> >        
> >         > Apr  3 16:32:22 localhost : 0:0:cluster2:INFO: 
> > cluster_lookup_sess:
> >        
> >         > fail to open sess for app cluster2, rc -20
> >        
> >         > Apr  3 16:32:23 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : 
> > NCM session
> >        
> >         > open failed
> >        
> >         > Apr  3 16:32:23 localhost : 0:0:evm:INFO: evm_closeSess : 
> > NCM session
> >        
> >         > closed
> >        
> >         > Apr  3 16:32:51 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> >        
> >         > Apr  3 16:33:22 localhost kernel: fp0: warning:
> >        
> >         > rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:33:51 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:51 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:51 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:51 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:51 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:52 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:52 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:52 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:52 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:52 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name sdm, rc -20, retrying...
> > > Apr  3 16:33:53 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:53 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:53 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:53 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:33:53 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:33:54 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:54 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:54 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:55 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:55 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:55 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:55 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:33:55 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:55 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:56 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:56 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:56 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:56 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:57 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1396, rc -20, retrying...
> > > Apr  3 16:33:57 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:33:57 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:57 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:57 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:33:57 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:33:57 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:33:58 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:33:58 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:33:58 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:34:24 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:34:55 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:35:15 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:16 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:16 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:16 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:16 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:16 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:16 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name sdm, rc -20, retrying...
> > > Apr  3 16:35:17 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1396, rc -20, retrying...
> > > Apr  3 16:35:17 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:17 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:17 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:17 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:35:18 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:35:18 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:35:18 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:18 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:18 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:19 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:19 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:19 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:19 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:19 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:19 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:20 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:20 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:20 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:20 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:35:20 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:35:21 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1451, rc -20, retrying...
> > > Apr  3 16:35:21 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:21 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:21 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:21 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:35:21 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:35:21 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:35:22 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:35:22 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:35:26 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:35:57 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:36:28 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:36:42 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:42 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:43 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:43 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:43 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:43 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:43 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:44 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:44 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:44 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:44 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1451, rc -20, retrying...
> > > Apr  3 16:36:44 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:44 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:45 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:45 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:36:45 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:45 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:45 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:45 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1470, rc -20, retrying...
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1479, rc -20, retrying...
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:46 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:46 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:56 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:56 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:36:56 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:36:56 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:36:56 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:56 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:36:59 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:37:06 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:37:06 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:37:06 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:37:06 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:37:06 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:37:15 localhost last message repeated 2 times
> > > Apr  3 16:37:15 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:37:16 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:37:16 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:37:16 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:37:24 localhost last message repeated 2 times
> > > Apr  3 16:37:25 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:37:25 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:37:25 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:37:25 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:37:25 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:37:30 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:38:01 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:38:08 localhost : 0:0:nfxsh:NOTICE: cmd[0]: -> EMRS: tried (5 times) without success to get the EMRS config from nfxsh : status[2]
> > > Apr  3 16:38:08 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:09 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:09 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:09 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:09 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:09 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:09 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:09 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:10 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:10 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name elog, rc -20, retrying...
> > > Apr  3 16:38:10 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name nfxsh-1506, rc -20, retrying...
> > > Apr  3 16:38:10 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:10 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:10 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:10 localhost : 0:0:cluster2:INFO: cluster_clientSendRmcRpc: Error sending rpc to clusterrpc, flags 820a, name sdm, rc -20, retrying...
> > > Apr  3 16:38:10 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:11 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:11 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:11 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:11 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:19 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:20 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:20 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:20 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:20 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:20 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:29 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:29 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:29 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:30 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:32 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > > Apr  3 16:38:39 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:39 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:39 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:39 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:39 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:48 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:38:48 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app cluster2, rc -20
> > > Apr  3 16:38:48 localhost : 0:0:evm:INFO: evm_rcvRmcMsg : NCM session open failed
> > > Apr  3 16:38:48 localhost : 0:0:evm:INFO: evm_closeSess : NCM session closed
> > > Apr  3 16:38:48 localhost : 0:0:cluster2:INFO: cluster_lookup_sess: fail to open sess for app clusterrpc, rc -20
> > > Apr  3 16:39:03 localhost kernel: fp0: warning: rmc_pm_handle_failure(): sess {unknown_app:pm.0.0} down
> > >
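A quick way to see what is actually failing in a flood like the one above (and how often) is to tally the "rc -20" lines by the reporting function. This is just a generic triage sketch against the syslog text shown above, not an Onstor tool; the field positions are inferred from the sample lines:

```shell
# Count each distinct "rc -20" failure signature in a syslog stream,
# most frequent first.  Assumes the reporting function name directly
# follows the "INFO: " tag, as in the log excerpt above.
summarize_rc20() {
    awk -F'INFO: ' '/rc -20/ {
        split($2, f, ":")      # f[1] = reporting function, e.g. cluster_lookup_sess
        count[f[1]]++
    }
    END { for (fn in count) print count[fn], fn }' | sort -rn
}
```

Running something like `summarize_rc20 < /var/log/messages` on the affected filer would show at a glance whether anything beyond cluster_lookup_sess and cluster_clientSendRmcRpc is failing.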
> > > _____________________________________________
> > > From: Larry Scheer
> > > Sent: Thursday, April 03, 2008 3:03 PM
> > > To: dl-Cougar; dl-QA
> > > Subject: Submittal 16 Beta Release Candidate available
> > >
> > > Submittal 16 made it through basic acceptance tests and is available for general use.
> > >
> > > This submittal is the first Beta Release candidate.
> > > This submittal has a fix for the blocker defect: 22970/22947 TXRX crashes during mirror test connect.
> > > This submittal adds the Bobcat Migration code to the cougar project.
> > > This submittal adds the First Time Install code to the cougar project.
> > >
> > > Source tree is located here:
> > > /n/Build-Trees/R4.0.0.0/R4.0.0.0-040308/nfx-tree
> > > /n/Build-Trees/R4.0.0.0/R4.0.0.0-040308/linux
> > >
> > > R3.3.0.0 images are here:
> > > Cheetah optimized:
> > > ftp://upgrade@10.2.0.2/home/upgrade/R3.3.0.0-040308.tar.gz
> > > Cheetah debug:
> > > ftp://upgrade@10.2.0.2/home/upgrade/R3.3.0.0DBG-040308.tar.gz
> > > Bobcat optimized:
> > > ftp://upgrade@10.2.0.2/home/upgrade/R3.3.0.0BC-040308.tar.gz
> > > Bobcat debug:
> > > ftp://upgrade@10.2.0.2/home/upgrade/R3.3.0.0BCDBG-040308.tar.gz
> > >
> > > R4.0.0.0 images are here:
> > > Cougar debug:
> > > ftp://upgrade@10.2.0.2/R4.0.0.0CGDBG-040308.tar.gz
> > > Cougar optimized:
> > > ftp://upgrade@10.2.0.2/R4.0.0.0CG-040308.tar.gz
> > >
> > > If you have not upgraded your cougar to the latest PROM version (1.0.5), it is recommended that you do. Please follow the instructions in sections 5 and 6 of the attached submittal notes carefully to upgrade your PROMs.
> > >
> > > Submittal 16 notes are attached. Please read them for more details about this build and for instructions specific to this product.
> > >
> > >  << File: Cougar_Submittal_16_Notes.htm >>
> >
