X-MimeOLE: Produced By Microsoft Exchange V6.5
Received: by onstor-exch02.onstor.net 
	id <01C72564.C6A87312@onstor-exch02.onstor.net>; Thu, 21 Dec 2006 17:01:56 -0800
MIME-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----_=_NextPart_001_01C72564.C6A87312"
Content-class: urn:content-classes:message
Subject: RE: Cougar meeting minutes
Date: Thu, 21 Dec 2006 17:01:55 -0800
Message-ID: <BB375AF679D4A34E9CA8DFA650E2B04E01D4CD19@onstor-exch02.onstor.net>
In-Reply-To: <BB375AF679D4A34E9CA8DFA650E2B04E01D4CC46@onstor-exch02.onstor.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: Cougar meeting minutes
thread-index: AcclVlR7vaqsmxlZQV6XYh5XdzIKLwABHmOQ
From: "Brian Stark" <brian.stark@onstor.com>
To: "Tim Gardner" <tim.gardner@onstor.com>,
	"dl-Cougar" <dl-Cougar@onstor.com>

This is a multi-part message in MIME format.

------_=_NextPart_001_01C72564.C6A87312
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

See below...


> _____________________________________________
> From:	Tim Gardner
> Sent:	Thursday, December 21, 2006 3:19 PM
> To:	dl-Cougar
> Subject:	Cougar meeting minutes
>
> This note summarizes a couple of Cougar meetings that were held this
> past week.
>
> *	The Cougar requirements state that the initial product release
> must support 60K NFS ops and 500MB/s streaming read performance.
> The issue with this is that the initial HW will only support 4 GigE
> interfaces and the initial SW will not support the expansion card that
> provides 4 additional GigE interfaces.
> I discussed this with Narayan. He said it is acceptable for the first
> product release to support only 4 GigE interfaces and that this
> product does not need to do 500MB/s.
> This is OK so long as we can support a follow-on product 3 months or
> so later that is capable of 500MB/s. The 500MB/s requirement implies
> that it must support more than 4 GigE interfaces.
>
		FYI... work on the hardware expansion option to support 4
additional GigE ports will start soon after the first motherboard
prototypes have been brought up.  This task has been built into the
hardware section of the schedule.
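As a sanity check on the 500MB/s point above, the arithmetic can be run directly. This is my own back-of-the-envelope calculation; the ~94% framing-efficiency figure is an assumption (standard 1500-byte MTU, no jumbo frames), not a measured number:

```python
# Rough check: can 4 GigE interfaces sustain 500 MB/s of NFS payload?
# Assumption: ~94% usable payload after Ethernet/IP/TCP framing overhead.
GIGE_LINE_RATE_BPS = 1_000_000_000      # 1 Gb/s per interface
PAYLOAD_EFFICIENCY = 0.94               # assumed framing efficiency

def payload_mb_per_sec(num_ports: int) -> float:
    """Approximate aggregate payload bandwidth in MB/s."""
    bits = num_ports * GIGE_LINE_RATE_BPS * PAYLOAD_EFFICIENCY
    return bits / 8 / 1_000_000

print(payload_mb_per_sec(4))  # 4 ports: ~470 MB/s, short of 500 MB/s
print(payload_mb_per_sec(8))  # 8 ports: ~940 MB/s, comfortable headroom
```

Note that even at 100% line rate, 4 x 1Gb/s is exactly 500 MB/s with zero protocol overhead, so any realistic framing pushes the requirement past 4 ports.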

> *	During a SW architecture discussion we outlined a plan for an
> architecture that will be capable of meeting the performance
> requirements:
> On the node to which the GigE interface is connected we will run
> ACPU0, NCPU0, FP0 and FP1. The FP cores will also run FC
> functionality.
> On the other node we will run ACPU1, FP2, SSC and either NCPU1 or FP3,
> depending on whether or not the GigE expansion board is installed.
> 1	The following questions came out of this discussion:
> *	Could the HW layout be set up so that in the base configuration
> 2 GigE interfaces are wired to each 1480? The expansion would then
> connect an additional 2 interfaces to each 1480. Brian needs to
> respond to this.
>
				The concern on the hardware side is with
the routing.  Right now, the connectivity of the expansion connectors is
already complicated, and by wiring 2 interfaces to each 1480, signals
from at least one of the 1480s will then have to cross to the other side
of the board.  We'll know more when we get further into routing, but my
initial take is that we would not want to do this.  Also, what impact
does this have on 4-port link aggregation?
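For the link-aggregation question: whatever the wiring ends up being, a typical aggregation scheme hashes each flow's addresses onto a member port. The sketch below is purely illustrative (the CRC-based hash and the 2-ports-per-1480 mapping are my assumptions for discussion, not the actual HW/SW behavior); it shows how a 2+2 split would pin each flow to one specific 1480:

```python
import zlib

# Illustrative flow-to-port selection in a 4-port aggregate.
# Hash choice and port mapping are assumptions, not the real design.
def member_port(src_mac: str, dst_mac: str, num_ports: int = 4) -> int:
    """Pick an aggregate member port from a flow's MAC pair."""
    return zlib.crc32((src_mac + dst_mac).encode()) % num_ports

# Hypothetical 2+2 wiring: ports 0-1 on 1480 #0, ports 2-3 on 1480 #1.
def owning_1480(port: int) -> int:
    return port // 2

flows = [("00:11:22:33:44:%02x" % i, "00:aa:bb:cc:dd:ee") for i in range(8)]
for src, dst in flows:
    p = member_port(src, dst)
    print(f"{src} -> port {p} (1480 #{owning_1480(p)})")
```

The point of the sketch: with a 2+2 split, each flow's traffic lands entirely on one 1480, so an uneven hash distribution would also load the two chips unevenly.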

> *	We need to verify that the QLogic can DMA from either bank of
> memory. Brian?
>
				I believe this is possible for both
descriptors and buffers.  At the very least, the QLogic gets the buffer
address from the descriptor, which is controlled by the host.  This
address is then mapped by the hardware to the respective memory.  I'm
not sure if the QLogic supports multiple descriptor queues, and I'll
check into this.
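To make the descriptor/buffer point concrete, here is a sketch of how a host-built descriptor carries the buffer address, which is what lets the host steer DMA at either memory bank. The 16-byte field layout and the bank base addresses are invented for illustration; this is not the QLogic's actual descriptor format:

```python
import struct

# Hypothetical 16-byte DMA descriptor: 64-bit buffer address,
# 32-bit length, 32-bit flags. NOT the actual QLogic layout.
DESC_FMT = "<QII"  # little-endian: u64 addr, u32 len, u32 flags

def make_descriptor(buf_addr: int, length: int, flags: int = 0) -> bytes:
    """The host builds the descriptor, so the host chooses which memory
    bank buf_addr points into; the device just DMAs to/from it."""
    return struct.pack(DESC_FMT, buf_addr, length, flags)

def parse_descriptor(raw: bytes):
    return struct.unpack(DESC_FMT, raw)

# Buffers placed in two different banks (addresses are made up).
BANK0_BASE = 0x0000_0000_1000_0000
BANK1_BASE = 0x0000_0001_0000_0000
d0 = make_descriptor(BANK0_BASE + 0x2000, 4096)
d1 = make_descriptor(BANK1_BASE + 0x8000, 4096)
print(parse_descriptor(d0))
print(parse_descriptor(d1))
```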

> *	Why is the BCM5715 attached to the PCIe bus and not the PCI-X
> bus? Brian?
>
				Routing area: PCI-X requires at least 40
signals while the PCIe bus requires 16.  Given that the cluster
interconnect may be pushing data for GNS applications, I wanted to give
the BCM5715 as much bandwidth as possible.  The PCIe connectivity offers
a lot of bandwidth, but the 32-bit PCI-X connectivity does not,
especially since it's running at 33MHz because of the PCI1520.  Plus,
Broadcom has indicated that PCI-X MACs will soon go EOL because of how
pervasive PCIe has become.
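For reference, the raw numbers behind that choice work out roughly as follows. This is my own arithmetic; it assumes the 32-bit PCI-X segment at 33 MHz behaves like conventional PCI (one 32-bit transfer per clock at peak), and that the 16-signal PCIe link is 4 lanes of Gen 1 at 250 MB/s per lane after 8b/10b encoding:

```python
# Peak theoretical bandwidth comparison (MB/s).
# Assumptions: 32-bit bus at 33 MHz moves one word per clock;
# PCIe link is 4 lanes of Gen 1 (2.5 GT/s, 8b/10b -> 250 MB/s/lane).
def pci_mbps(bus_width_bits: int, clock_hz: int) -> float:
    """Parallel PCI/PCI-X peak: width * clock, one transfer per clock."""
    return bus_width_bits / 8 * clock_hz / 1_000_000

PCIE_GEN1_LANE_MBPS = 250

print(pci_mbps(32, 33_000_000))   # 32-bit @ 33 MHz: 132.0 MB/s peak
print(4 * PCIE_GEN1_LANE_MBPS)    # PCIe x4 Gen 1: 1000 MB/s peak
```

Under these assumptions the PCIe attachment offers roughly 7-8x the peak bandwidth of the downgraded PCI-X segment, with fewer than half the signals to route.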


> *	The proposed architecture means that we have to solve the
> following SW problems:
> *	The ACPU functionality will not need to be made SMP if we tie
> all operations on a specific volume to a single ACPU.
> But we will need to handle some locking on the data structures (mostly
> VS related) that are shared by volumes.
> 1	We discussed how to perform load balancing. An idea was proposed
> to add an ACPU triage queue that would run at the highest processing
> priority. Incoming packets placed on this queue would be triaged to
> determine which ACPU should process the packet. The packet would be
> moved to the appropriate ACPU pending queue as part of triage.
> 2	We have not yet discussed what it means to split FC
> functionality between multiple nodes.
>
> *	Cougar planning continues. We are currently thinking about doing
> the Linux port to Bobcat early in the project.
> *	Much of the early development environment work is common to
> Cougar and the Bobcat port.
> 1	Doing the Bobcat port early will provide a platform for SSC app
> porting. This will enable more developers to start the porting earlier
> and to work in parallel, as there are plenty of Bobcats available and
> there will be very few Cougars.
> 2	There is likely to be slack in the schedule while waiting for
> Cougar HW. This could be utilized to bring up Linux on the Bobcat
> R9000.
>
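The triage/load-balancing idea above can be sketched as follows. The queue names, the modulo volume-to-ACPU mapping, and the packet fields are placeholders for discussion, not the actual design:

```python
from collections import deque

# Sketch of the proposed scheme: a single high-priority triage queue
# classifies incoming packets and moves each one to the pending queue
# of the ACPU that owns the packet's volume.
NUM_ACPUS = 2
triage_queue = deque()
acpu_pending = [deque() for _ in range(NUM_ACPUS)]

def acpu_for_volume(volume_id: int) -> int:
    """Pin all operations on a volume to one ACPU (here: simple modulo),
    so the per-volume ACPU state needs no SMP locking."""
    return volume_id % NUM_ACPUS

def triage():
    """Run at the highest priority: drain the triage queue and route
    each packet to its owning ACPU's pending queue."""
    while triage_queue:
        packet = triage_queue.popleft()
        acpu_pending[acpu_for_volume(packet["volume"])].append(packet)

# Example: packets for volumes 0..3 arrive interleaved.
for vol in [0, 1, 2, 3, 0, 2]:
    triage_queue.append({"volume": vol, "op": "nfs_read"})
triage()
print([len(q) for q in acpu_pending])  # -> [4, 2]
```

The sketch also makes the open question visible: triage itself touches every packet, so it must stay cheap, and a static volume-to-ACPU mapping only balances load if the volumes are evenly loaded.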

------_=_NextPart_001_01C72564.C6A87312--
