AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:
CFG:
PT:0
S:andy.sharp@lsi.com
RQ:
SSV:mhbs.lsil.com
NSV:
SSH:
R:<Jobi.Ariyamannil@lsi.com>,<Maxim.Kozlovsky@lsi.com>,<Brian.Stark@lsi.com>,<Bill.Fisher@lsi.com>,<Rendell.Fong@lsi.com>
MAID:2
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/LSI/INBOX	0	4014E6EE2F9ED44299897AD701ED1C51F409D6B1@cosmail03.lsi.com
X-Sylpheed-End-Special-Headers: 1
Date: Wed, 23 Sep 2009 09:26:56 -0700
From: Andrew Sharp <andy.sharp@lsi.com>
To: "Ariyamannil, Jobi" <Jobi.Ariyamannil@lsi.com>
Cc: "Kozlovsky, Maxim" <Maxim.Kozlovsky@lsi.com>, "Stark, Brian"
 <Brian.Stark@lsi.com>, "Fisher, Bill" <Bill.Fisher@lsi.com>, "Fong,
 Rendell" <Rendell.Fong@lsi.com>
Subject: Re: TuxStor schedule v1
Message-ID: <20090923092656.3ddefebe@ripper.onstor.net>
In-Reply-To: <4014E6EE2F9ED44299897AD701ED1C51F409D6B1@cosmail03.lsi.com>
References: <E1EC65251D4B3D46BBC0AAA3C0629222A90B8239@cosmail02.lsi.com>
	<861DA0537719934884B3D30A2666FECC94302CEF@cosmail02.lsi.com>
	<4014E6EE2F9ED44299897AD701ED1C51F409D6B1@cosmail03.lsi.com>
Organization: LSI
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 23 Sep 2009 00:25:11 -0600 "Ariyamannil, Jobi"
<Jobi.Ariyamannil@lsi.com> wrote:

> I suggest adding the following:
> 
> 1.       The ability to read/write from/to the management volume.
> This is needed for volume exception core, ndmp, eek, core dump, etc.
> 
> 2.       The ability to read/write volumes from SSC.  fsdb needs to
> do this.

There is no SSC.  And this type of functionality hasn't mysteriously
gone away; it will still be there.

> 3.       RMC/SendAgile communication between FP and SSC - needed for
> a lot of daemons, fscmd, eek, dump, etc.

See above.

> 4.       We dump the stack trace of the thread to the elogs when a
> volume exception is thrown.  We should be capable of dumping the
> stack of the current thread from Linux.

Trivial.
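
To give a sense of why: glibc already provides backtrace() and
backtrace_symbols_fd(), so the current thread's stack can be written
straight to a file descriptor.  Rough sketch only (elog_fd is just a
placeholder name, not an existing interface):

    /* Sketch: dump the calling thread's stack on Linux.
     * Build with -g and -rdynamic so frames resolve to symbol names. */
    #include <execinfo.h>

    void dump_current_stack(int fd)
    {
        void *frames[64];
        int n = backtrace(frames, 64);        /* capture return addresses */
        backtrace_symbols_fd(frames, n, fd);  /* write symbolized frames */
    }

    /* e.g. dump_current_stack(elog_fd) at the point where the volume
     * exception is thrown. */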

> 5.       EEE is capable of dumping the stack traces of all the
> running threads on FP processors to a file in the management volume.
> This command needs to work - fscmd writestacks.  Currently, this
> command suspends all the CPUs while walking through the thread list.
> Also, this program requires a large amount of memory.

We may find that some of this type of specialty behavior isn't
necessary in Linux.  Try to remember that it is going to be a whole
different world that needs to be experienced before we know what it
needs.  That said, functionality along these lines can wait until
after the more core development tasks have made some progress.
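
That said, to illustrate that an equivalent can be had without
suspending all the CPUs: each thread listed in /proc/self/task can be
signalled in turn and write its own backtrace to a shared file.  This
is only a sketch of one possible approach (the names are made up, and
backtrace() is not strictly async-signal-safe, so treat it as a
debugging aid rather than production code):

    /* Sketch: "writestacks"-style dump of every thread in the process.
     * Link with -pthread. */
    #define _GNU_SOURCE
    #include <dirent.h>
    #include <execinfo.h>
    #include <fcntl.h>
    #include <semaphore.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int dump_fd = -1;
    static sem_t done;

    static void stack_handler(int sig)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, dump_fd);
        sem_post(&done);                  /* tell the driver we're done */
    }

    void dump_all_stacks(const char *path)
    {
        dump_fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (dump_fd < 0)
            return;
        sem_init(&done, 0, 0);
        signal(SIGUSR1, stack_handler);

        pid_t self = (pid_t)syscall(SYS_gettid);
        DIR *d = opendir("/proc/self/task");
        struct dirent *de;

        while (d && (de = readdir(d)) != NULL) {
            pid_t tid = (pid_t)atoi(de->d_name);
            if (tid <= 0 || tid == self)
                continue;                 /* skip ".", ".." and ourselves */
            dprintf(dump_fd, "---- tid %d ----\n", (int)tid);
            if (syscall(SYS_tgkill, getpid(), tid, SIGUSR1) == 0)
                sem_wait(&done);          /* one thread's dump at a time */
        }
        if (d)
            closedir(d);
        close(dump_fd);
    }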

> 6.       The FP code can be compiled with red-zoned stacks to catch
> bugs causing stack corruption.  There should be a similar capability
> with Linux as well.

Of course there is.  Stack overflow detection and all kinds of other
capabilities are available.
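
For example, gcc's stack protector gives red-zone-style detection of
stack smashes on Linux (and the sanitizers go further).  A minimal
sketch, assuming all we want is the canary check:

    /* Sketch: build with  gcc -fstack-protector-all -O0 -o smash smash.c
     * Overrunning buf trips the canary and aborts at function return
     * with "*** stack smashing detected ***". */
    #include <string.h>

    static void smash(const char *src)
    {
        char buf[8];
        strcpy(buf, src);   /* deliberate overflow past buf[7] */
    }

    int main(void)
    {
        smash("this string is much longer than eight bytes");
        return 0;
    }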

> 7.       Not sure whether Max accounted for the time to convert all
> the eee_rtc-related stuff, which tracks CPU time.

Already done.
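
For reference (not necessarily how it was converted), Linux exposes
per-thread CPU time directly via the per-thread CPU clock, so no
hand-rolled accounting is needed.  A minimal sketch (the function
name is made up):

    /* Sketch: CPU time consumed by the calling thread, in nanoseconds. */
    #include <stdint.h>
    #include <time.h>

    uint64_t thread_cpu_time_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }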

> Regards,
> Jobi
> 
> 
> From: Kozlovsky, Maxim
> Sent: Tuesday, September 22, 2009 4:23 PM
> To: Stark, Brian; Sharp, Andy; Fisher, Bill; Ariyamannil, Jobi; Fong,
> Rendell
> Subject: RE: TuxStor schedule v1
> 
> Hello,
> 
> What exactly do you mean by nfsperftest without networking?  To my
> limited knowledge no such thing exists; you might mean something
> else by that name.
> 
> The virtual servers being the long pole can be mitigated by
> structuring the development so that the initial deliverable is being
> able to bring a single virtual server with a single interface up and
> down, without aggregation, vlan, or jumbo frame support, without
> actual support for multiple virtual networking stacks, and without
> any extras like protocol statistics; interface delete and modify
> don't even need to work.  This will allow testing the NFS, CIFS,
> NDMP, and DMIP code without waiting for the completion of the rest
> of the virtual server work.  The performance testing will need full
> support for multiple interfaces and aggregation.
> 
> Here are my changes so far, I may have more later:
> 
> 23: need to have two separate items for TXRX and FP TPL code
> 
> Change 31: FS code compile/link - 1 week
> 
> Change 34, 35 (local and dmip mirroring) compile/link - collapse into
> a single item, set duration to 1 week.
> 
> Change 32 (NDMP) compile/link - set duration to 1 week
> 
> Add: SCSI/ISPFC compile/link - 1 week
> 
> (Hopefully this will go faster but I am rounding up).
> 
> Under FS threads:
> 
> 29, 30 (implement and test) - remove
> 
> FS and ACPU polling threads and wakeup mechanism - 1 week
> 
> Interrupt handler for qlogic - 2 days (?)
> 
> State machines and FS threads - 1 week
> 
> Non-failing/Non-blocking memory manager - 1 week
> 
> Debugger and core dump support for FS threads - 1 week
> 
> Watchdog support for FS and ACPU threads - 3 days (?)
> 
> 42 - RMC test automation - remove
> 
> Max
> 
> 
> ________________________________
> From: Stark, Brian
> Sent: Tuesday, September 22, 2009 3:42 PM
> To: Sharp, Andy; Kozlovsky, Maxim; Fisher, Bill; Ariyamannil, Jobi;
> Fong, Rendell
> Subject: TuxStor schedule v1
> 
> I've updated the TuxStor schedule based on the feedback from
> yesterday's meeting.  I've attached the Project file and a pdf and
> would like to get feedback on this by the end of the week.  The key
> dates are getting to task 36 (Dev Unit tests), which then drives task
> 39 (Phase 1 Integration).  After that, we are ready for QA.
> 
> By building in holiday times and also making nfsperftest a
> prerequisite for entering QA for Phase 1, the date is pushed out to
> the middle of June.  This is a little too close for comfort, and I'd
> like to see if we can pull it back in to May.  The long pole appears
> to be the virtual server work, so maybe nfsperftest without
> networking can start before virtual server work is completed.  This
> is just a suggestion, and keep in mind that my primary goal here is
> to create an accurate schedule that we can deliver on.
> 
> Take a look at tasks and durations in particular.  As a reminder, I'd
> like to get tasks down to 1-2 week intervals, so add more detail to
> tasks that are currently longer.
> 
> After I receive feedback, I'll work up another version and then we
> can meet early next week.
> 
> 
> Thanks,
> Brian
> 
> 
> 
