Date: Fri, 23 Oct 2009 20:02:55 -0700
From: Andrew Sharp <andy.sharp@lsi.com>
To: "Hardiman, Richard" <Richard.Hardiman@lsi.com>
Cc: "Ariyamannil, Jobi" <Jobi.Ariyamannil@lsi.com>, "Kodiganti, Siva"
 <Siva.Kodiganti@lsi.com>
Subject: Re: LSi 2600 (old old storage)
Message-ID: <20091023200255.4589a04d@ripper.onstor.net>
In-Reply-To: <98BD48946A20FF48932378FFD26CE9057AD13443@cosmail01.lsi.com>
References: <98BD48946A20FF48932378FFD26CE9057AD13443@cosmail01.lsi.com>
Organization: LSI
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

No, Siva is not exempt from the ticket process.  We'll all get together
Monday.

a


On Fri, 23 Oct 2009 20:38:54 -0600 "Hardiman, Richard"
<Richard.Hardiman@lsi.com> wrote:

> Hey Andy,
> Actually I haven't given him access.  I told him repeatedly today to
> send me the WWN he was looking for, and he never did.  I would have
> thought that if this was urgent he would have sent me the information
> I requested.  Siva has to understand that there are a lot of
> requests, and him not submitting tickets doesn't allow me to plan or
> work on other requests where a ticket was submitted.  It is very hard
> to work off verbal requests.  So I guess I need to understand what's
> required here.  Maybe I am mistaken: do I need to stop and help Siva
> whenever he needs it?  Is he required to submit ticket requests, or
> is he exempt from that process?  I can give him access to whatever
> switch he needs, but I have yet to get any info besides what is in
> this email, so I don't know which switch DB he needs access to.  I
> will get with you and Siva on Monday.
> -----Original Message-----
> From: Andrew Sharp <andy.sharp@lsi.com>
> Sent: Friday, October 23, 2009 6:32 PM
> To: Hardiman, Richard <Richard.Hardiman@lsi.com>
> Cc: Goldick, Jonathan <Jonathan.Goldick@lsi.com>; Ariyamannil, Jobi
> <Jobi.Ariyamannil@lsi.com>; Kodiganti, Siva <Siva.Kodiganti@lsi.com>
> Subject: Re: LSi 2600 (old old storage)
>
>
> Rich,
>
> You probably already have, but just in case, go ahead and give Siva
> access to the switch database.  But make sure you closely supervise
> this obviously dangerous "new kid"!  ~:^)
>
>
> On Fri, 23 Oct 2009 18:23:06 -0600 "Kodiganti, Siva"
> <Siva.Kodiganti@lsi.com> wrote:
>
> > Jonathan,
> >
> > I am the new kid here, and people may not have much trust in me
> > yet.  Could you please help me out here: I need to gain some trust
> > from my peers and the lab admin, and to explain why I am trying to
> > get these details, which is to resolve a bug and the long-standing
> > issues on Larry's filers.
> >
> > For some reason, filer g4r5 still sees a few target arrays that no
> > longer exist or are no longer exposed, most likely because stale
> > zones for de-commissioned arrays are still lying around in the
> > fabric.
> >
> > I see the following remote target ports show up on filer g4r5 with
> > respect to Defect # TED00027568:
> >
> > WWN 0x22000080e5129a53
> > WWN 0x22000080e5129a00
> > WWN 0x220000a0b808a545
> >
> > If I had access to the switch database, it would take only a few
> > minutes to get these details and conclude/resolve the issue.  But
> > this is not the case; it seems I have to do a lot of explaining to
> > get what's needed.
> >
> > If you could share some thoughts with a few people here, it will
> > make it easier, for future requests, to get the info needed to
> > resolve issues.
> >
> > Thanks
> > Siva Kodiganti
> > LSI, Campbell, CA, USA
> > ________________________________________
> > From: Kodiganti, Siva
> > Sent: Friday, October 23, 2009 12:36 PM
> > To: Limato, Dave; Hardiman, Richard
> > Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> > Subject: RE: LSi 2600 (old old storage)
> >
> > A Read Capacity query (a SCSI command) is a common and very basic
> > operation for disks/LUNs.  The response to this command reports the
> > size of the LUN in question.
> >
> > The issue is not with Jonathan's fix; a basic operation has failed.
> > Early during system bootup, and during periodic checks, the system
> > scans for LUNs visible in the fabric.  Since multiple arrays (LSI
> > and 3PAR targets) are visible to this initiator (g4r5), SDM sends
> > various SCSI commands to these targets to check which LUNs they
> > expose, and Read Capacity is one of the commands sent in this
> > process.
> >
> > At least according to /var/log/onstor/messages, the g4r5 filer
> > still sees the LSI array, and thus these warning messages show up.
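The LUN-scan exchange described above can be made concrete. The sketch below is not OnStor/SDM code; it is a minimal illustration of the READ CAPACITY(10) command the scan issues: a 10-byte CDB with opcode 0x25 (per the SCSI block-command spec), and an 8-byte response carrying the last LBA and block size. The helper names are ours.

```python
import struct

def read_capacity_10_cdb():
    """Build a 10-byte READ CAPACITY(10) CDB (opcode 0x25).

    The remaining bytes are zero for a plain 'report last LBA and
    block size' query, which is what a LUN scan typically issues.
    """
    return bytes([0x25] + [0x00] * 9)

def parse_read_capacity_10(data):
    """Parse the 8-byte READ CAPACITY(10) response.

    The response is two big-endian 32-bit fields: the last logical
    block address, then the block size in bytes.  Total capacity is
    (last_lba + 1) * block_size.
    """
    last_lba, block_size = struct.unpack(">II", data[:8])
    return last_lba, block_size
```

When a target is zoned in but gone (the stale-zone case above), this query never gets a valid response, which is exactly what the `Read Capacity Query failed` notices record.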
> >
> > Thanks
> > Siva Kodiganti
> > LSI, Campbell, CA, USA
> > ________________________________________
> > From: Limato, Dave
> > Sent: Friday, October 23, 2009 11:45 AM
> > To: Kodiganti, Siva; Hardiman, Richard
> > Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> > Subject: RE: LSi 2600 (old old storage)
> >
> > I bet Jonathan put something in to do a Read Capacity query for
> > all LSI arrays, and this array's firmware does not support that
> > query?  That's my guess.
> >
> > Here is what we state is supported on cougar for LSI.
> >
> > LSI (Engenio) 1932 (note 6)
> > LSI (Engenio) 2882
> > LSI (Engenio) 3992 or 3994
> > LSI (Engenio) 4884
> > LSI (Engenio) 5884
> > LSI (Engenio) 6091
> >
> > The scripts can be found below. Zoning issues need to be sorted out
> > through the lab. Thanks.
> >
> > >cd nfx-test/tst-labman/
> > [davel@davel:~/p4/nfx-test/tst-labman]$
> > >ls
> > awk.baddevice              cf-storage.bydevandip.drv
> > create-csv-file  lab_linux_pwn.pl    sdm.conf.2.51   storagedev.drv
> > awk.gooddevice             cf-storage.db
> > filerSetup.exp   readme.txt          sdm.conf.3.8    storagedev.list
> > awk.gooddevice.dogfood     cf-storage.txt
> > jboddevice.ksh   sdm_10.2.11.1.conf  sdm.conf.7.20   storage-report
> > awk.luntolabel             cf-storagexls.txt
> > jbod.map         sdm_10.2.11.2.conf  site.exp.eng40
> > c21r19_STAF.cfg            collectlun.tcl
> > kill.sh          sdm.conf.17.2       start cf-error.txt
> > collectsdm.tcl             label.exp        sdm.conf.18.2
> > start-client cf-seed.txt                collectstorage.drv
> > label.readme     sdm.conf.19.1       startEng1
> > cf-storage.bydevandip.awk  collectstorage.sh
> > label.sh         sdm.conf.19.2       start.org
> > [davel@davel:~/p4/nfx-test/tst-labman]$
> > >pwd
> > /homes/davel/p4/nfx-test/tst-labman
> > [davel@davel:~/p4/nfx-test/tst-labman]$
> > >
> >
> >
> > -----Original Message-----
> > From: Kodiganti, Siva
> > Sent: Friday, October 23, 2009 10:33 AM
> > To: Limato, Dave; Hardiman, Richard
> > Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> > Subject: RE: LSi 2600 (old old storage)
> >
> > Thanks for sharing the info about the scripts.  Please let me know
> > the locations of these scripts.
> >
> > I learned from you that one of the issues on Larry's filer g4r5 is
> > due to a faulty array.  His 2nd issue, i.e. missing LUNs from the
> > LSI 2600A array, is on another filer.  I am not sure if it is
> > coming from a de-commissioned array; hence these emails.  Please
> > could someone look into this issue to see if the zones are OK.
> >
> > Regarding the faulty array LUNs on the g4r5 filer, I still see the
> > following issues, which seem to indicate that the zoning is not
> > correct.  Please verify the zones are proper and let me know when
> > this is corrected.  That will help in re-checking the issue on the
> > g4r5 filer.
> >
> > Oct 23 10:20:40 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:20
> > Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:0
> > Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:0
> > Oct 23 10:20:48 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:1
> > Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:2
> > Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:3
> > Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:4
> > Oct 23 10:20:49 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:5
> > Oct 23 10:20:50 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:0
> > Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:6
> > Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:7
> > Oct 23 10:20:53 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:8
> > Oct 23 10:20:54 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:9
> > Oct 23 10:20:56 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:10
> > Oct 23 10:20:57 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:11
> > Oct 23 10:20:58 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:12
> > Oct 23 10:20:58 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:13
> > Oct 23 10:20:59 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:14
> > Oct 23 10:21:00 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:15
> > Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:16
> > Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:17
> > Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:18
> > Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:19
> > Oct 23 10:21:02 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:1
> > Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:2
> > Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:3
> > Oct 23 10:21:03 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x220000a0b808a545:4
> > Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:20
> > Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a53:21
> > Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:1
> > Oct 23 10:21:04 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:2
> > Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:3
> > Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:4
> > Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:5
> > Oct 23 10:21:05 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:6
> > Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:7
> > Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:8
> > Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:9
> > Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:10
> > Oct 23 10:21:06 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:11
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:12
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:13
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:14
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:15
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:16
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:17
> > Oct 23 10:21:07 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:18
> > Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:19
> > Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:20
> > Oct 23 10:21:08 g4r5 : 0:0:sdm:NOTICE: Read Capacity Query failed, status[0xe0000002]: WWN 0x22000080e5129a00:21
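Floods of NOTICE lines like these are easier to triage when summarized per remote WWN, since the stale zone members then stand out at a glance. A minimal sketch (the regex and function name are ours, matched to the message format in these logs):

```python
import re
from collections import Counter

# Matches the payload of a 'Read Capacity Query failed' NOTICE line,
# capturing the status code, the remote target WWN, and the LUN number.
LOG_RE = re.compile(r"Read Capacity Query failed, "
                    r"status\[(0x[0-9a-f]+)\]: WWN (0x[0-9a-f]+):(\d+)")

def failures_per_wwn(log_text):
    """Count failed Read Capacity queries per remote WWN."""
    counts = Counter()
    for _status, wwn, _lun in LOG_RE.findall(log_text):
        counts[wwn] += 1
    return dict(counts)
```

Feeding /var/log/onstor/messages through this would reduce the block above to three WWN/count pairs, matching the three target ports Siva lists earlier in the thread.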
> >
> >
> > Thanks
> > Siva Kodiganti
> > LSI, Campbell, CA, USA
> >
> > -----------------------------------------------------
> > RE: LSi 2600 (old old storage)
> > From: Scheer, Larry
> > Sent: Friday, October 23, 2009 10:21 AM
> > To: Limato, Dave; Kodiganti, Siva; Hardiman, Richard
> > Cc: Sharp, Andy; Currin, Shawn
> > Attachments:
> > That is not correct.  Siva discovered that my cougar filer is
> > complaining about storage that appears to be zoned for it
> > incorrectly.  This has been around for a very long time; Jonathan's
> > changes just make the problem more visible, with notice messages
> > filling the elog until /var gets to 100%.  I filed a defect against
> > this.  Siva says it is just a configuration issue at the storage
> > array, not a NAS gateway issue.
> >
> > I would greatly appreciate it if this zoning issue could be
> > corrected so that my filers can run in peace.
> > ________________________________________
> > From: Limato, Dave
> > Sent: Friday, October 23, 2009 9:38 AM
> > To: Kodiganti, Siva; Hardiman, Richard
> > Cc: Scheer, Larry; Sharp, Andy; Currin, Shawn
> > Subject: RE: LSi 2600 (old old storage)
> >
> > FYI, there are some scripts in nfx-test/tst-lab that we have used
> > already.  Look at the storage script; it should have what you
> > desire.
> >
> > Larry's filers disappearing, or LUNs going missing, is likely the
> > result of broken 3.4 builds (and faulty storage).
> >
> > -----Original Message-----
> > From: Kodiganti, Siva
> > Sent: Friday, October 23, 2009 8:51 AM
> > To: Limato, Dave; Hardiman, Richard
> > Cc: Scheer, Larry; Sharp, Andy
> > Subject: RE: LSi 2600 (old old storage)
> >
> > Larry has at least a couple of filers for which LUNs have
> > disappeared or the zoning is wrong.  While de-commissioning arrays,
> > please also clear the zones appropriately, to avoid the filing of
> > invalid-config bugs.
> >
> > Is it possible to send a few additional details in the array
> > de-commissioning email to the group:
> >   a) Array wwn:wwpn
> >   b) The list of NAS gateways to which the above array is visible.
> >      I can check if it is feasible to develop a script to get these
> >      details.
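The script Siva proposes for point (b) could be sketched roughly as follows, assuming `lun topology` output has been captured from the filers. This is an illustration, not an existing tool; the function name is ours, and only the table shape (Controllers / Type / Nodes) follows the examples in this thread.

```python
def gateways_seeing(array_name, topology_text):
    """Return the sorted list of NAS gateway (node) names that a
    'lun topology' capture lists for the given array.

    Each data row is expected to look like:
        LSI_E2600A    Disk    g7r204 g8r204
    i.e. controller name, type, then one node name per column.
    """
    nodes = set()
    for line in topology_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0] == array_name and fields[1] == "Disk":
            nodes.update(fields[2:])  # every remaining column is a node
    return sorted(nodes)
```

Run against pooled captures from the lab, this would give the "NAS gateways list" a de-commissioning notice could include.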
> >
> > Thanks
> > Siva Kodiganti
> > LSI, Campbell, CA, USA
> > ________________________________________
> > From: Limato, Dave
> > Sent: Thursday, October 22, 2009 7:08 PM
> > To: Kodiganti, Siva; Hardiman, Richard
> > Subject: RE: LSi 2600 (old old storage)
> >
> > I think you are safe; the 2600s are closer to the center of the
> > lab and not near rack 59.  You should really be on newer LSI
> > storage.  Rich, please move Siva once you have found the
> > appropriate gear.
> >
> > Siva, delete your volumes before he moves you; otherwise it will
> > cause headaches when the storage is removed before the vsvr gets a
> > chance to delete it, and the cluster will keep looking for it.
> > Thanks,
> >
> > -----Original Message-----
> > From: Kodiganti, Siva
> > Sent: Thursday, October 22, 2009 6:53 PM
> > To: Limato, Dave; Hardiman, Richard
> > Subject: RE: LSi 2600 (old old storage)
> >
> > I do see LUNs from the E2600A array on g7r204 & g8r204.
> > Could you please take care of updating the zones appropriately for
> > the above cluster filers?
> >
> > g8r204 FOO diag> lun topology
> > Controllers                              Type     Nodes
> > -----------------------------------------------------------------------
> > LSI_E2600A                               Disk     g7r204 g8r204
> >
> > Thanks
> > Siva Kodiganti
> > LSI, Campbell, CA, USA
> > ________________________________________
> > From: Limato, Dave
> > Sent: Thursday, October 22, 2009 5:34 PM
> > To: Limato, Dave; DL-ONStor-Elab; DL-ONStor-Hardware; DL-ONStor-QA;
> > DL-ONStor-qualification; DL-ONStor-Software; Haverty, Patrick;
> > Nguyen, Brian; Onstor-cs-mail-archive; Stark, Brian
> > Cc: DL-ONStor-cstech
> > Subject: RE: LSi 2600 (old old storage)
> >
> > We are looking for another array now… LSI_20000080e5129937 as well
> > as LSI_E2600A.
> >
> > If you see this in your filer let me know. I suspect that these are
> > serving up core and management volumes for Bobcats.
> >
> > Run        lun topology
> >
> >
> > LSI_20000080e5129937                     Disk     eng216
> > LSI_E2600A                               Disk     eng901
> >
> >
> > From: Limato, Dave
> > Sent: Wednesday, October 21, 2009 6:13 PM
> > To: DL-ONStor-Elab; DL-ONStor-Hardware; DL-ONStor-QA;
> > DL-ONStor-qualification; DL-ONStor-Software; Haverty, Patrick;
> > Nguyen, Brian; Onstor-cs-mail-archive; Stark, Brian
> > Cc: DL-ONStor-cstech
> > Subject: LSi 2600 (old old storage)
> >
> > All,
> >
> > About a week ago I asked you to run lun topology on your filers,
> > looking for Mylex arrays, as we need to migrate off legacy storage.
> > We will be shutting down that storage tomorrow, so if you are using
> > it, delete your volumes and file a help ticket for Rich so that he
> > can connect you to newer storage.
> >
> > Now we are looking at about 20 Trays of LSI 2600 Storage.
> >
> > Please run lun topology again.  If you see the string below, let
> > me and Rich Hardiman know, as we need to migrate you onto some
> > shiny new storage.
> >
> > LSI_E2600A                               Disk
> >
> > We would like to shut this down in the next few days. Thanks
> >
> > Dave Limato - 408.963.2400 x3515 - 510.329.9994
