AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:
CFG:
PT:0
S:andy.sharp@lsi.com
RQ:
SSV:mhbs.lsil.com
NSV:
SSH:
R:<Richard.Hardiman@lsi.com>,<Jonathan.Goldick@lsi.com>,<Jobi.Ariyamannil@lsi.com>,<Siva.Kodiganti@lsi.com>
MAID:2
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/LSI/INBOX	0	6C678488C5CEE74F813A4D1948FD2DC7B50E7DED@cosmail02.lsi.com
X-Sylpheed-End-Special-Headers: 1
Date: Fri, 23 Oct 2009 18:32:01 -0700
From: Andrew Sharp <andy.sharp@lsi.com>
To: Richard Hardiman <Richard.Hardiman@lsi.com>
Cc: "Goldick, Jonathan" <Jonathan.Goldick@lsi.com>, "Ariyamannil, Jobi"
 <Jobi.Ariyamannil@lsi.com>, "Kodiganti, Siva" <Siva.Kodiganti@lsi.com>
Subject: Re: LSi 2600 (old old storage)
Message-ID: <20091023183201.6097d9ae@ripper.onstor.net>
In-Reply-To: <6C678488C5CEE74F813A4D1948FD2DC7B50E7DED@cosmail02.lsi.com>
References: <D7A889C980962746B30DE07864593C02CC802DE2@cosmail02.lsi.com>
	<6C678488C5CEE74F813A4D1948FD2DC7B50E7DE6@cosmail02.lsi.com>
	<D7A889C980962746B30DE07864593C02CC802DEC@cosmail02.lsi.com>
	<6C678488C5CEE74F813A4D1948FD2DC7B50E7DE8@cosmail02.lsi.com>
	<D7A889C980962746B30DE07864593C02CC802EE3@cosmail02.lsi.com>
	<6C678488C5CEE74F813A4D1948FD2DC7B50E7DE9@cosmail02.lsi.com>
	<D7A889C980962746B30DE07864593C02CC802F79@cosmail02.lsi.com>
	<6C678488C5CEE74F813A4D1948FD2DC7B50E7DEC@cosmail02.lsi.com>
	<6C678488C5CEE74F813A4D1948FD2DC7B50E7DED@cosmail02.lsi.com>
Organization: LSI
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Rich,

You probably already have, but just in case, go ahead and give Siva
access to the switch database.  But make sure you closely supervise
this obviously dangerous "new kid"!  ~:^)


On Fri, 23 Oct 2009 18:23:06 -0600 "Kodiganti, Siva"
<Siva.Kodiganti@lsi.com> wrote:

> Jonathan,
>
> I am the new kid here, so people may not yet trust me.  Could you
> please help me explain to my peers and the lab admin why I need these
> details: I am trying to resolve a bug and the long-standing issues on
> Larry's filers.
>
> For some reason, filer g4r5 still sees a few target arrays that no
> longer exist, or that are exposed only because stale zones for
> de-commissioned arrays are still lying around in the fabric.
>
> I see the following remote target ports show up on filer g4r5 w.r.t.
> Defect # TED00027568:
>
> WWN 0x22000080e5129a53
> WWN 0x22000080e5129a00
> WWN 0x220000a0b808a545
>
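[Aside: the vendor behind each of these ports can be read out of the WWN itself; in NAA type-1/2 names the IEEE OUI occupies bytes 2-4. A minimal sketch of the decode, using a tiny hand-made OUI table rather than the real IEEE registry, so the vendor names are best-effort guesses:]

```python
# Decode the NAA name format and vendor OUI from a 64-bit FC WWN.
# The WWNs tested below come from this thread; the OUI->vendor table
# is a small illustrative sample, NOT the authoritative IEEE registry.
KNOWN_OUIS = {
    "0080e5": "LSI Logic",
    "00a0b8": "Symbios Logic (LSI/Engenio)",
}

def decode_wwn(wwn_hex: str):
    """Return (naa_type, oui, vendor_guess) for a WWN given as a hex string."""
    w = wwn_hex.lower().removeprefix("0x").zfill(16)
    naa = int(w[0], 16)   # high nibble: NAA name format (1 = IEEE, 2 = IEEE Extended)
    oui = w[4:10]         # for NAA 1/2 names the OUI sits in bytes 2-4
    return naa, oui, KNOWN_OUIS.get(oui, "unknown")

for wwn in ("0x22000080e5129a53", "0x220000a0b808a545"):
    print(wwn, decode_wwn(wwn))
```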
> If I had access to the switch database, it would take only a few
> minutes to get these details and conclude/resolve the issue.  But
> that is not the case; it seems I have to do a lot of explaining to
> get what's needed.
>
> If you could share some thoughts with a few people here, it would
> make future requests for the info needed to resolve issues much
> easier.
>
> Thanks
> Siva Kodiganti
> LSI, Campbell, CA, USA
> ________________________________________
> From: Kodiganti, Siva
> Sent: Friday, October 23, 2009 12:36 PM
> To: Limato, Dave; Hardiman, Richard
> Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> Subject: RE: LSi 2600 (old old storage)
>
> A Read Capacity query (a SCSI command) is a common and very basic
> operation for disks/LUNs.  The response to this SCSI command reports
> the size of the addressed LUN.
>
> The issue is not with Jonathan's fix; a basic operation has failed.
> Early during system bootup, and on periodic checks, the system scans
> for LUNs visible in the fabric.  Since multiple arrays (LSI and 3PAR
> targets) are visible to this initiator (g4r5), SDM sends various SCSI
> commands to those targets to check which LUNs are visible from them,
> and Read Capacity is one of the commands sent in this process.
>
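[For context on the command being discussed: a READ CAPACITY(10) response is just 8 bytes, the address of the last logical block and the block length, both big-endian 32-bit, and the LUN size falls straight out of it. A sketch with a fabricated example payload, not actual output from these arrays:]

```python
import struct

def parse_read_capacity_10(payload: bytes):
    """Parse the 8-byte READ CAPACITY(10) response.

    Returns (num_blocks, block_size, total_bytes).
    """
    last_lba, block_len = struct.unpack(">II", payload)
    num_blocks = last_lba + 1   # the response carries the address of the LAST block
    return num_blocks, block_len, num_blocks * block_len

# Fabricated response: last LBA 0x003FFFFF with 512-byte blocks -> a 2 GiB LUN.
payload = struct.pack(">II", 0x003FFFFF, 512)
print(parse_read_capacity_10(payload))  # (4194304, 512, 2147483648)
```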
> At least according to /var/log/onstor/messages, the g4r5 filer still
> sees the LSI array, and thus these warning messages show up.
>
> Thanks
> Siva Kodiganti
> LSI, Campbell, CA, USA
> ________________________________________
> From: Limato, Dave
> Sent: Friday, October 23, 2009 11:45 AM
> To: Kodiganti, Siva; Hardiman, Richard
> Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> Subject: RE: LSi 2600 (old old storage)
>
> I bet Jonathan put something in to do a Read Capacity query for all
> LSI arrays, and this array firmware does not support that query?
> That's my guess.
>
> Here is what we state is supported on cougar for LSI.
>
> LSI (Engenio) 1932 (note 6)
> LSI (Engenio) 2882
> LSI (Engenio) 3992 or 3994
> LSI (Engenio) 4884
> LSI (Engenio) 5884
> LSI (Engenio) 6091
>
> The scripts can be found below. Zoning issues need to be sorted out
> through the lab. Thanks.
>
> >cd nfx-test/tst-labman/
> [davel@davel:~/p4/nfx-test/tst-labman]$
> >ls
> awk.baddevice              cf-storage.bydevandip.drv  create-csv-file  lab_linux_pwn.pl    sdm.conf.2.51   storagedev.drv
> awk.gooddevice             cf-storage.db              filerSetup.exp   readme.txt          sdm.conf.3.8    storagedev.list
> awk.gooddevice.dogfood     cf-storage.txt             jboddevice.ksh   sdm_10.2.11.1.conf  sdm.conf.7.20   storage-report
> awk.luntolabel             cf-storagexls.txt          jbod.map         sdm_10.2.11.2.conf  site.exp.eng40
> c21r19_STAF.cfg            collectlun.tcl             kill.sh          sdm.conf.17.2       start
> cf-error.txt               collectsdm.tcl             label.exp        sdm.conf.18.2       start-client
> cf-seed.txt                collectstorage.drv         label.readme     sdm.conf.19.1       startEng1
> cf-storage.bydevandip.awk  collectstorage.sh          label.sh         sdm.conf.19.2       start.org
> [davel@davel:~/p4/nfx-test/tst-labman]$
> >pwd
> /homes/davel/p4/nfx-test/tst-labman
> [davel@davel:~/p4/nfx-test/tst-labman]$
> >
>
>
> -----Original Message-----
> From: Kodiganti, Siva
> Sent: Friday, October 23, 2009 10:33 AM
> To: Limato, Dave; Hardiman, Richard
> Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> Subject: RE: LSi 2600 (old old storage)
>
> Thanks for sharing the info about the scripts.  Please let me know
> where these scripts are located.
>
> I learned from you that one of the issues on Larry's filer g4r5 is
> due to a faulty array.  His 2nd issue, i.e. missing LUNs from the
> LSI 2600A array, is on another filer.  I am not sure whether it comes
> from array de-commissioning; hence these emails.  Could someone
> please look into this issue and check that the zones are OK?
>
> Regarding the faulty-array LUNs on the g4r5 filer, I still see the
> following issues, which seem to indicate that the zoning is not
> correct.  Please verify that the zones are proper and let me know
> when they are corrected.  This will help in re-checking the issue on
> the g4r5 filer.
>
> Oct 23 10:20:40 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:20
> Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:0
> Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:0
> Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:1
> Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:2
> Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:3
> Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:4
> Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:5
> Oct 23 10:20:50 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:0
> Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:6
> Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:7
> Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:8
> Oct 23 10:20:54 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:9
> Oct 23 10:20:56 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:10
> Oct 23 10:20:57 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:11
> Oct 23 10:20:58 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:12
> Oct 23 10:20:58 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:13
> Oct 23 10:20:59 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:14
> Oct 23 10:21:00 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:15
> Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:16
> Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:17
> Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:18
> Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:19
> Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:1
> Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:2
> Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:3
> Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:4
> Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:20
> Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:21
> Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:1
> Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:2
> Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:3
> Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:4
> Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:5
> Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:6
> Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:7
> Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:8
> Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:9
> Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:10
> Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:11
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:12
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:13
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:14
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:15
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:16
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:17
> Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:18
> Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:19
> Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:20
> Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:21
>
>
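[A burst like the one above is easier to reason about when collapsed to a per-target summary. A rough sketch of that collapse; the regex is keyed to the exact NOTICE format shown in this thread, which is an assumption about the real log layout:]

```python
import re
from collections import defaultdict

# Matches the 'Read Capacity Query failed' NOTICE lines quoted in this thread.
NOTICE_RE = re.compile(
    r"Read Capacity Query failed, status\[(0x[0-9a-f]+)\]: WWN (0x[0-9a-f]+):(\d+)"
)

def summarize(log_text: str):
    """Group failed Read Capacity notices by target WWN -> sorted LUN ids."""
    luns_by_wwn = defaultdict(set)
    for _status, wwn, lun in NOTICE_RE.findall(log_text):
        luns_by_wwn[wwn].add(int(lun))
    return {wwn: sorted(luns) for wwn, luns in luns_by_wwn.items()}

sample = (
    "Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, "
    "status[0xe0000002]: WWN 0x22000080e5129a53:0\n"
    "Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, "
    "status[0xe0000002]: WWN 0x220000a0b808a545:0\n"
)
print(summarize(sample))
```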
> Thanks
> Siva Kodiganti
> LSI, Campbell, CA, USA
>
> -----------------------------------------------------
> RE: LSi 2600 (old old storage)
> From: Scheer, Larry
> Sent:   Friday, October 23, 2009 10:21 AM
> To: Limato, Dave; Kodiganti, Siva; Hardiman, Richard
> Cc: Sharp, Andy; Currin, Shawn
> Attachments:
> That is not correct.  Siva discovered that my cougar filer is
> complaining about storage that appears to be zoned for it
> incorrectly.  This has been around for a very long time; Jonathan's
> changes just make the problem more visible, with notice messages
> filling the elog until /var gets to 100%.  I filed a defect against
> this.  Siva says it is just a configuration issue at the storage
> array, not a NAS gateway issue.
>
> I would greatly appreciate it if this zoning issue were corrected so
> that my filers can run in peace.
> ________________________________________
> From: Limato, Dave
> Sent: Friday, October 23, 2009 9:38 AM
> To: Kodiganti, Siva; Hardiman, Richard
> Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> Subject: RE: LSi 2600 (old old storage)
>
> FYI, there are some scripts in nfx-test/tst-lab that we have used
> already.  Look at the storage script; it should have what you desire.
>
> Larry's filers disappearing, or LUNs going missing, is likely the
> result of broken 3.4 builds (and faulty storage).
>
> -----Original Message-----
> From: Kodiganti, Siva
> Sent: Friday, October 23, 2009 8:51 AM
> To: Limato, Dave; Hardiman, Richard
> Cc: Scheer, Larry; Sharp, Andy
> Subject: RE: LSi 2600 (old old storage)
>
> Larry has at least a couple of filers on which LUNs have disappeared
> or the zoning is wrong.  While de-commissioning arrays, please also
> clear the zones appropriately so we can avoid filing invalid-config
> bugs.
>
> Is it possible to include a few additional details in the array
> de-commissioning email to the group:
>   a) Array wwn:wwpn
>   b) List of NAS gateways to which the above array is visible.
>      I can check if it is feasible to develop a script to get these
>      details.
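[Point (b) above is mechanical once each gateway's lun topology output has been collected in one place. A toy sketch of the inversion; the gateway names and the topology dict are invented for illustration, not real lab data:]

```python
# Invert per-gateway storage views into "which gateways see this array?".
# Gateway names and controller labels below are invented for illustration.
seen_by_gateway = {
    "g4r5":   ["LSI_E2600A", "LSI_20000080e5129937"],
    "g7r204": ["LSI_E2600A"],
    "g8r204": ["LSI_E2600A"],
}

def gateways_seeing(array: str, views: dict) -> list:
    """Return the sorted list of gateways whose lun topology includes `array`."""
    return sorted(gw for gw, arrays in views.items() if array in arrays)

print(gateways_seeing("LSI_E2600A", seen_by_gateway))  # ['g4r5', 'g7r204', 'g8r204']
```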
>
> Thanks
> Siva Kodiganti
> LSI, Campbell, CA, USA
> ________________________________________
> From: Limato, Dave
> Sent: Thursday, October 22, 2009 7:08 PM
> To: Kodiganti, Siva; Hardiman, Richard
> Subject: RE: LSi 2600 (old old storage)
>
> I think you are safe; the 2600s are closer to the center of the lab
> and not near rack 59.  You should really be on newer LSI storage.
> Rich, please move Siva once you have found the appropriate gear.
>
> Siva, delete your volumes before he moves you; otherwise it will
> cause headaches when the storage is removed before the vsvr gets a
> chance to delete them, and the cluster will keep looking for them.
> Thanks,
>
> -----Original Message-----
> From: Kodiganti, Siva
> Sent: Thursday, October 22, 2009 6:53 PM
> To: Limato, Dave; Hardiman, Richard
> Subject: RE: LSi 2600 (old old storage)
>
> I do see LUNs from the E2600A array on g7r204 & g8r204.
> Could you please take care of updating the zones appropriately for
> the above cluster filers?
>
> g8r204 FOO diag> lun topology
> Controllers                              Type     Nodes
> -----------------------------------------------------------------------
> LSI_E2600A                               Disk     g7r204 g8r204
>
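[The lun topology table above is regular enough to scrape. A sketch that turns it into a controller-to-nodes map; it assumes every data row is `name  type  node...`, which matches the excerpt in this thread but may not cover all output variants:]

```python
def parse_lun_topology(output: str) -> dict:
    """Parse 'lun topology' table rows into {controller: (type, [nodes])}."""
    topology = {}
    for line in output.splitlines():
        fields = line.split()
        # Skip blank lines, the header row, and the dashed separator.
        if len(fields) < 3 or fields[0] == "Controllers" or set(line.strip()) == {"-"}:
            continue
        name, dev_type, *nodes = fields
        topology[name] = (dev_type, nodes)
    return topology

sample = """Controllers                              Type     Nodes
-----------------------------------------------------------------------
LSI_E2600A                               Disk     g7r204 g8r204
"""
print(parse_lun_topology(sample))  # {'LSI_E2600A': ('Disk', ['g7r204', 'g8r204'])}
```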
> Thanks
> Siva Kodiganti
> LSI, Campbell, CA, USA
> ________________________________________
> From: Limato, Dave
> Sent: Thursday, October 22, 2009 5:34 PM
> To: Limato, Dave; DL-ONStor-Elab; DL-ONStor-Hardware; DL-ONStor-QA;
> DL-ONStor-qualification; DL-ONStor-Software; Haverty, Patrick;
> Nguyen, Brian; Onstor-cs-mail-archive; Stark, Brian
> Cc: DL-ONStor-cstech
> Subject: RE: LSi 2600 (old old storage)
>
> We are looking for another array now… LSI_20000080e5129937 as well
> as LSI_E2600A.
>
> If you see this in your filer, let me know.  I suspect that these
> are serving up core and management volumes for Bobcats.
>=20
> Run        lun topology
>
>
> LSI_20000080e5129937                     Disk     eng216
> LSI_E2600A                               Disk     eng901
>
>
> From: Limato, Dave
> Sent: Wednesday, October 21, 2009 6:13 PM
> To: DL-ONStor-Elab; DL-ONStor-Hardware; DL-ONStor-QA;
> DL-ONStor-qualification; DL-ONStor-Software; Haverty, Patrick;
> Nguyen, Brian; Onstor-cs-mail-archive; Stark, Brian
> Cc: DL-ONStor-cstech
> Subject: LSi 2600 (old old storage)
>
> All,
>
> About a week ago I asked you to run lun topology on your filers,
> looking for Mylex arrays, as we need to migrate off legacy storage.
> We will be shutting down that storage tomorrow, so if you are using
> it, delete your volumes and file a help ticket for Rich so that he
> can connect you to newer storage.
>
> Now we are looking at about 20 trays of LSI 2600 storage.
>
> Please run lun topology again.  If you see the string below, let me
> and Rich Hardiman know, as we need to migrate you onto some shiny
> new storage.
>
> LSI_E2600A                               Disk
>
> We would like to shut this down in the next few days.  Thanks.
>
> Dave Limato - 408.963.2400 x3515 - 510.329.9994
