AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:
CFG:
PT:0
S:andy.sharp@lsi.com
RQ:
SSV:mhbs.lsil.com
NSV:
SSH:
R:<brian.stark@lsi.com>
MAID:2
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/LSI/INBOX	14770	A11ADD3877B3C543B65BE8E196B7BDFFBB725CBD@cosmail02.lsi.com
X-Sylpheed-End-Special-Headers: 1
Date: Thu, 25 Feb 2010 13:50:24 -0800
From: Andrew Sharp <andy.sharp@lsi.com>
To: Brian Stark <brian.stark@lsi.com>
Subject: Re: Graph of NFS Performance: xentop: 100% CPU in VXWorks?:
 Additional Debian-2 VM: MSI Data: Steve's Objective: Pikes Peak Project:
 Steve's Performance Setup....
Message-ID: <20100225135024.144fcaf5@ripper.onstor.net>
In-Reply-To: <A11ADD3877B3C543B65BE8E196B7BDFFBB725CBD@cosmail02.lsi.com>
References: <A11ADD3877B3C543B65BE8E196B7BDFFBB725CBD@cosmail02.lsi.com>
Organization: LSI
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

The large number of variables makes it difficult to draw many
conclusions at this point, so I'll just throw out this notion:

Step 1: raw network performance on the 10GbE: no VMs, no filesystem,
just netcat or similar.  I have a hacked version of netcat that
helps with this kind of thing if anyone wants it.

Step 2: raw NFS performance, streaming only, no NAS software (?), no
VMs, an in-memory filesystem, and a single client (obviously a
slightly special client, jah?).

Step 3: same as step 2, but over multiple clients.

This will give us some good data points on what the hardware is
capable of, minus some of the software variables plaguing us.  Step 2
might be too hard to do, but maybe a hybrid 2-3 scenario would work:
a very small number of clients, say 4 or 8.

Then there's the issue of the IOVM, but I see that as a constant we
have to address, in addition to the fact that the other numbers
aren't meeting our expectations.  If I understand correctly.
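For Step 2, serving an in-memory filesystem could be approximated with a tmpfs export; a rough sketch (assuming a stock Linux nfs-kernel-server and root access; the mount point, size, and subnet are illustrative):

```shell
# Back the export with RAM so back-end latency is near zero.
mkdir -p /srv/ramexport
mount -t tmpfs -o size=8g tmpfs /srv/ramexport

# Export it read/write to the test clients; async avoids sync stalls.
echo '/srv/ramexport 10.10.10.0/24(rw,async,no_root_squash)' >> /etc/exports
exportfs -ra

# Client side: mount and stream with dd to measure NFS-only throughput
# (no NAS software, no disks in the path).
#   mount -t nfs 10.10.10.121:/srv/ramexport /mnt
#   dd if=/dev/zero of=/mnt/stream bs=1M count=4096
```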


On Mon, 22 Feb 2010 18:24:04 -0700 "Padmanabhan, Seetharaman"
<Seetharaman.Padmanabhan@lsi.com> wrote:

> Brian:
>                 FYI on Be2NIC performance in the Pikes Peak / Debian VM
> environment.  We are working to recreate the issue of degraded
> performance with the VxWorks VM present.  This reflects the BE2's
> maximum capabilities, as RAM disks are used to reduce the back-end
> latency to near zero.  Of course, a standard open-source NAS server
> is used for this experimentation.
>
>                 I am not sure how this lines up with your expectations,
> but it certainly provides some baseline data to start the
> conversation.
>
> Paddu
>
> From: Steve Stokes [mailto:stevens@serverengines.com]
> Sent: Monday, February 22, 2010 3:43 PM
> To: Tammana, Dilip; Padmanabhan, Seetharaman; Monroe, Chip; Hari
> Mudaliar; Crabb, Mark; Jolad, Amarnath; Venkatesha, Pradeep
> Cc: sanjeevd@serverengines.com; subbus@serverengines.com;
> billw@serverengines.com; joer@serverengines.com;
> pikespeak-lsi@serverengines.com
> Subject: Graph of NFS Performance: xentop: 100% CPU in VXWorks?:
> Additional Debian-2 VM: MSI Data: Steve's Objective: Pikes Peak
> Project: Steve's Performance Setup....
>
> Dilip,
>
>  Attached is a graph of NFS performance to the Debian VM.
>
>  It provides a visual representation of the issue at hand.
>
> -- steve
>
> ________________________________
> From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com]
> To: Steve Stokes [mailto:stevens@serverengines.com], Padmanabhan,
> Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com], Monroe, Chip
> [mailto:Chip.Monroe@lsi.com], Hari Mudaliar
> [mailto:harim@serverengines.com], Crabb, Mark
> [mailto:Mark.Crabb@lsi.com], Jolad, Amarnath
> [mailto:Amarnath.Jolad@lsi.com], Venkatesha, Pradeep
> [mailto:Pradeep.Venkatesha@lsi.com]
> Cc: sanjeevd@serverengines.com, subbus@serverengines.com,
> billw@serverengines.com, joer@serverengines.com,
> lsi-pikespeak@serverengines.com
> Sent: Mon, 22 Feb 2010 15:30:17 -0800
> Subject: RE: xentop: 100% CPU in VXWorks?: Additional Debian-2 VM:
> MSI Data: Steve's Objective: Pikes Peak Project: Steve's
> Performance Setup....
>
> Hi Steve
>
> I am not really sure that explains the issue. The idle task uses 100%
> of the CPU, but Dom0 and Debian-64 should have access to their
> own physical CPUs and should be unaffected by this.
>
> Most PP controllers have 4 cores, and each VM is assigned to a core.
> So, even if VxWorks uses 1 core, we still have 3 more cores for the
> other VMs.
>
> Plus, the VxWorks idle task should not be doing anything on the PCI bus
> to cause performance degradation.
>
> Anyway, thanks for showing interest in figuring out this issue.
>
> Thanks
> -Dilip
>
> From: Steve Stokes [mailto:stevens@serverengines.com]
> Sent: Monday, February 22, 2010 5:06 PM
> To: Tammana, Dilip; Padmanabhan, Seetharaman; Monroe, Chip; Hari
> Mudaliar; Crabb, Mark; Jolad, Amarnath; Venkatesha, Pradeep
> Cc: sanjeevd@serverengines.com; subbus@serverengines.com;
> billw@serverengines.com; joer@serverengines.com;
> lsi-pikespeak@serverengines.com
> Subject: RE: xentop: 100% CPU in VXWorks?: Additional Debian-2 VM:
> MSI Data: Steve's Objective: Pikes Peak Project: Steve's
> Performance Setup....
>
> Dilip,
>
>  I think that the root cause of the networking performance
> degradation can be explained by the fact that VxWorks has no concept
> of an idle loop, and therefore it sits in a tight loop waiting for
> interrupts to happen.
>
> -- steve
>
> ________________________________
> From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com]
> To: Steve Stokes [mailto:stevens@serverengines.com], Padmanabhan,
> Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com], Monroe, Chip
> [mailto:Chip.Monroe@lsi.com], Hari Mudaliar
> [mailto:harim@serverengines.com], Crabb, Mark
> [mailto:Mark.Crabb@lsi.com], Jolad, Amarnath
> [mailto:Amarnath.Jolad@lsi.com], Venkatesha, Pradeep
> [mailto:Pradeep.Venkatesha@lsi.com]
> Cc: sanjeevd@serverengines.com, subbus@serverengines.com,
> billw@serverengines.com, joer@serverengines.com,
> lsi-pikespeak@serverengines.com
> Sent: Mon, 22 Feb 2010 15:01:30 -0800
> Subject: RE: xentop: 100% CPU in VXWorks?: Additional Debian-2 VM:
> MSI Data: Steve's Objective: Pikes Peak Project: Steve's
> Performance Setup....
>
> Hi Steve
>
> Yes, VxWorks will use 100% CPU even in idle mode, as the idle task
> itself will use the CPU.
>
> Can we add some kind of CPU load to the Debian-2 VM to increase
> its CPU usage?
>
> Thanks
> -Dilip
>
> From: Steve Stokes [mailto:stevens@serverengines.com]
> Sent: Monday, February 22, 2010 4:40 PM
> To: Tammana, Dilip; Padmanabhan, Seetharaman; Monroe, Chip; Hari
> Mudaliar; Crabb, Mark; Jolad, Amarnath; Venkatesha, Pradeep
> Cc: sanjeevd@serverengines.com; subbus@serverengines.com;
> billw@serverengines.com; joer@serverengines.com;
> lsi-pikespeak@serverengines.com
> Subject: xentop: 100% CPU in VXWorks?: Additional Debian-2 VM:
> MSI Data: Steve's Objective: Pikes Peak Project: Steve's
> Performance Setup....
>
> Dilip,
>
>  Thanks for the pointer to 'xentop'.
>
>  Attached is a screenshot showing that after the VxWorks VM is
> started, CPU utilization in the VxWorks VM stays at 100%.
>
>  But the idling Debian VM shows less than 1% CPU utilization.
>
>  Is this normal?
>
> -- steve
>
> ________________________________
> From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com]
> To: Steve Stokes [mailto:stevens@serverengines.com], Padmanabhan,
> Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com], Monroe, Chip
> [mailto:Chip.Monroe@lsi.com], Hari Mudaliar
> [mailto:harim@serverengines.com], Crabb, Mark
> [mailto:Mark.Crabb@lsi.com], Jolad, Amarnath
> [mailto:Amarnath.Jolad@lsi.com], Venkatesha, Pradeep
> [mailto:Pradeep.Venkatesha@lsi.com]
> Cc: sanjeevd@serverengines.com, subbus@serverengines.com,
> billw@serverengines.com, joer@serverengines.com,
> lsi-pikespeak@serverengines.com
> Sent: Sun, 21 Feb 2010 21:06:29 -0800
> Subject: RE: Additional Debian-2 VM: MSI Data: Steve's
> Objective: Pikes Peak Project: Steve's Performance Setup....
>
> Hi Steve
>
> Thanks for doing this experiment. It does look like the performance
> drop is related to settings in the IOVM rather than to the spawning
> of a generic VM. We are working on recreating the issue at LSI. So
> far, we have been unable to get anything beyond 2 Gbps on our setups.
>
> I am not really sure if the 95% idle time corresponds to any CPU
> utilization similar to the IOVM's. Did you happen to run xentop to
> check the CPU utilization of the Debian-2 VM as reported by Xen?
>
> I added some of the hypervisor team members to comment on this result.
>
> Thanks
> -Dilip
> ________________________________________
> From: Steve Stokes [stevens@serverengines.com]
> Sent: Sunday, February 21, 2010 8:22 PM
> To: Tammana, Dilip; Padmanabhan, Seetharaman; Monroe, Chip; Hari
> Mudaliar; Crabb, Mark
> Cc: sanjeevd@serverengines.com; subbus@serverengines.com;
> billw@serverengines.com; joer@serverengines.com;
> lsi-pikespeak@serverengines.com
> Subject: Additional Debian-2 VM: MSI Data: Steve's Objective: Pikes
> Peak Project: Steve's Performance Setup....
>
> Dilip,
>
> I installed a SECOND Debian VM on my Pikes Peak setup (called
> Debian-2), but did not map the PCI interrupts for the BE-2 into this
> new VM.
>
> That means when the new Debian-2 VM is up and running, an lspci does
> NOT see the BE-2 NIC functions.... which is as expected.
>
> 10 trials sending 4 TCP discard streams to the Debian-64 VM when the
> VxWorks VM is NOT active, AND the new Debian-2 VM is active, yields:
>
> 9.25, 9.23, 9.24, 9.24, 9.23, 9.24, 9.08, 9.16, 9.27, 9.26 (Gbps)
>
>
> Even better news:
>
> a) When running 'top' in the newly created Debian-2 VM, I see about
> 95%+ idle time inside Debian-2.
>
> b) At the same time 'top' is running in Debian-2, 4 TCP discard
> streams are going to the Debian-64 VM at about 9.0 Gbps. (Very Nice)
>
>
> Something about the configuration of the VxWorks VM severely impacts
> the networking performance of the Debian-64 VM.
>
> NOTE: The PCI string in demo.cfg for the VxWorks VM does NOT include
> the IDs for the BE-2 NIC functions.
>
> Any ideas?
>
> -- steve
>
>
>
> ----- Original Message -----
> From: Steve Stokes [mailto:stevens@serverengines.com]
> To: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com], Padmanabhan,
> Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com], Monroe, Chip
> [mailto:Chip.Monroe@lsi.com], Hari Mudaliar
> [mailto:harim@serverengines.com], Crabb, Mark
> [mailto:Mark.Crabb@lsi.com]
> Cc: sanjeevd@serverengines.com, subbus@serverengines.com,
> billw@serverengines.com, joer@serverengines.com
> Subject: MSI Data: Steve's Objective: Pikes Peak Project: Steve's
> Performance Setup....
>
>
> > Dilip,
> >
> > I installed the MSI version of the VXworks VM as instructed.
> >
> > This is what I see now:
> >
> > 10 trials sending 4 TCP discard streams to the Debian VM when
> > VxWorks is NOT active:
> >
> > 9.24, 9.26, 9.27, 9.26, 9.27, 9.26, 9.18, 9.23, 9.31, 9.26 (Gbps)
> >
> >
> >
> > 10 trials sending 4 TCP discard streams to the Debian VM when
> > VxWorks IS active (but idle):
> >
> > 5.00, 5.06, 5.01, 5.01, 5.02, 4.98, 5.07, 4.99, 5.05, 5.08 (Gbps)
> >
> > -- steve
> >
> > _____
> >
> > From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com]
> > To: Padmanabhan, Seetharaman
> > [mailto:Seetharaman.Padmanabhan@lsi.com], Steve Stokes
> > [mailto:stevens@serverengines.com], Monroe, Chip
> > [mailto:Chip.Monroe@lsi.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Cc: sanjeevd@serverengines.com, subbus@serverengines.com
> > Sent: Thu, 18 Feb 2010 14:13:24 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > Hi Steve
> >
> > I copied the vxWorks file that is MSI capable to the ftp site under
> > the name temp_files/vxWorks_MSI.
> >
> > Rename the file from vxWorks_MSI to vxWorks before you follow the
> > instructions below.
> >
> > Here are the instructions from Nirmal on how to update your
> > controller with the new vxWorks file:
> >
> >
> > * Mount the demo image:
> >
> > losetup /dev/loop2 demo.img
> > kpartx -a -v /dev/loop2
> > mount /dev/mapper/loop2p1 /mnt
> >
> > * Overwrite the vxWorks file at /mnt/boot with the new vxWorks file:
> >
> > rm -rf /mnt/boot/vxWorks
> > cp ./vxWorks /mnt/boot/vxWorks
> >
> > * Unmount the image:
> >
> > umount /mnt
> > kpartx -d /dev/loop2
> > losetup -d /dev/loop2
> >
> > * Start/create the vxw guest as usual
> >
> >
> > Let me know if you have any questions.
> >
> > Thanks
> > -Dilip
> >
> >
> >
> > From: Padmanabhan, Seetharaman
> > Sent: Thursday, February 18, 2010 12:32 AM
> > To: Steve Stokes; Tammana, Dilip; Monroe, Chip; Hari Mudaliar;
> > Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> > Dilip:
> > Let us sync up tomorrow morning… Barada can help to
> > upload the image to the SE ftp site…
> >
> > Paddu
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 10:12 PM
> > To: Padmanabhan, Seetharaman; Tammana, Dilip; Monroe, Chip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > Paddu,
> >
> >
> >
> > Please point me at an MSI enabled VxWorks so that I can get going
> > from there.
> >
> >
> >
> > Thanks!
> >
> >
> >
> > -- steve
> >
> >
> >
> > _____
> >
> >
> >
> > From: Padmanabhan, Seetharaman
> > [mailto:Seetharaman.Padmanabhan@lsi.com]
> > To: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com],
> > Monroe, Chip [mailto:Chip.Monroe@lsi.com], Steve Stokes
> > [mailto:stevens@serverengines.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Cc: sanjeevd@serverengines.com, subbus@serverengines.com
> > Sent: Wed, 17 Feb 2010 21:59:48 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > Steve:
> >
> > Can you please confirm whether the driver you are using in
> > your config is MSI enabled? To speed up the triage of this issue,
> > can I make an MSI-enabled VxWorks drop available to you? If this
> > resolves your problem, then we can move on to the next layer of
> > issues.
> >
> >
> >
> > Chip: My concern is that at just ~1.5 Gbps we are comparing
> > apples to oranges. We need to figure out a way to hit ~8 Gbps on
> > your system.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Tammana, Dilip
> > Sent: Wednesday, February 17, 2010 2:59 PM
> > To: Monroe, Chip; Steve Stokes; Padmanabhan, Seetharaman; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > I am waiting on SE to provide this to us.
> >
> >
> >
> > Thanks
> >
> > -Dilip
> >
> >
> >
> >
> >
> > From: Monroe, Chip
> > Sent: Wednesday, February 17, 2010 4:58 PM
> > To: Tammana, Dilip; Steve Stokes; Padmanabhan, Seetharaman; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > Is it built into the 2-11-10 Golden Image?
> >
> >
> >
> >
> >
> > From: Tammana, Dilip
> > Sent: Wednesday, February 17, 2010 3:44 PM
> > To: Monroe, Chip; Steve Stokes; Padmanabhan, Seetharaman; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > Hi Chip
> >
> >
> >
> > Thanks for the runs. We will definitely need the MSI capable be2nic
> > driver to continue further with this investigation. The be2nic
> > driver is the missing piece here.
> >
> >
> >
> > Thanks
> >
> > -Dilip
> >
> >
> >
> >
> >
> > From: Monroe, Chip
> > Sent: Wednesday, February 17, 2010 4:40 PM
> > To: Steve Stokes; Padmanabhan, Seetharaman; Tammana, Dilip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > ==========================================
> >
> > b) another 2 runs for each IP, now with VX VM started and Debian
> > running
> >
> > # on Chip's cfgis13 - from win2k3 Dell R710 server #
> >
> > The color code is just differentiating the 2 different IPs on this
> > run.
> >
> > ==========================================
> >
> > I don't see the delta that Steve has noted; the rates look similar
> > with and without the VXworks VM running.
> >
> >
> >
> >
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.120, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 4] local 10.10.10.110 port 1290 connected with 10.10.10.120 port 9
> >
> > [ 6] local 10.10.10.110 port 1293 connected with 10.10.10.120 port 9
> >
> > [ 3] local 10.10.10.110 port 1292 connected with 10.10.10.120 port 9
> >
> > [ 5] local 10.10.10.110 port 1291 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.0 sec 1.33 GBytes 380 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0-30.0 sec 1.32 GBytes 378 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.0 sec 1.32 GBytes 377 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.0 sec 1.34 GBytes 383 Mbits/sec
> >
> > [SUM] 0.0-30.0 sec 5.31 GBytes 1.52 Gbits/sec
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.121, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 4] local 10.10.10.110 port 1305 connected with 10.10.10.121 port 9
> >
> > [ 6] local 10.10.10.110 port 1306 connected with 10.10.10.121 port 9
> >
> > [ 3] local 10.10.10.110 port 1304 connected with 10.10.10.121 port 9
> >
> > [ 5] local 10.10.10.110 port 1307 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.0 sec 1.30 GBytes 371 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.0 sec 1.29 GBytes 369 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.0 sec 1.29 GBytes 369 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0-30.0 sec 1.29 GBytes 369 Mbits/sec
> >
> > [SUM] 0.0-30.0 sec 5.17 GBytes 1.48 Gbits/sec
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.120, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 6] local 10.10.10.110 port 1315 connected with 10.10.10.120 port 9
> >
> > [ 4] local 10.10.10.110 port 1314 connected with 10.10.10.120 port 9
> >
> > [ 5] local 10.10.10.110 port 1313 connected with 10.10.10.120 port 9
> >
> > [ 3] local 10.10.10.110 port 1312 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.1 sec 1.24 GBytes 354 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.1 sec 1.24 GBytes 355 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0-30.1 sec 1.27 GBytes 363 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.1 sec 1.22 GBytes 350 Mbits/sec
> >
> > [SUM] 0.0-30.1 sec 4.98 GBytes 1.42 Gbits/sec
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.121, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 5] local 10.10.10.110 port 1324 connected with 10.10.10.121 port 9
> >
> > [ 3] local 10.10.10.110 port 1321 connected with 10.10.10.121 port 9
> >
> > [ 6] local 10.10.10.110 port 1323 connected with 10.10.10.121 port 9
> >
> > [ 4] local 10.10.10.110 port 1322 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0-30.0 sec 1.30 GBytes 373 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.0 sec 1.28 GBytes 367 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.0 sec 1.30 GBytes 372 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.0 sec 1.29 GBytes 370 Mbits/sec
> >
> > [SUM] 0.0-30.0 sec 5.18 GBytes 1.48 Gbits/sec
> >
> >
> >
> > ~ Chip
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 3:01 PM
> > To: Monroe, Chip; Padmanabhan, Seetharaman; Tammana, Dilip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > Got it.... two runs of the same config.... and now you're trying to
> > get the data for when the VXworks VM is running??
> >
> >
> >
> >
> >
> > -- steve
> >
> >
> > _____
> >
> >
> >
> > From: Monroe, Chip [mailto:Chip.Monroe@lsi.com]
> > To: Monroe, Chip [mailto:Chip.Monroe@lsi.com], Steve Stokes
> > [mailto:stevens@serverengines.com], Padmanabhan, Seetharaman
> > [mailto:Seetharaman.Padmanabhan@lsi.com], Tammana, Dilip
> > [mailto:Dilip.Tammana@lsi.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Cc: sanjeevd@serverengines.com, subbus@serverengines.com
> > Sent: Wed, 17 Feb 2010 13:57:20 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > I'm fighting with the Pikes Peak… let me vent here
> >
> > Common problem: several reboots have found Dom0 not getting an IP
> > for xenbr0
> >
> >
> >
> >
> >
> > From: Monroe, Chip
> > Sent: Wednesday, February 17, 2010 2:47 PM
> > To: 'Steve Stokes'; Padmanabhan, Seetharaman; Tammana, Dilip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > No, the VX VM was NOT started.
> >
> >
> >
> > The two runs were just minutes apart, to the same IP, with the
> > Debian VM running, but the VX VM not started.
> >
> >
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 2:44 PM
> > To: Monroe, Chip; Padmanabhan, Seetharaman; Tammana, Dilip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > Chip,
> >
> >
> >
> >
> >
> > Thanks for the clarification.
> >
> >
> >
> >
> >
> > To re-state: You see networking performance in the Debian VM drop
> > from 1.79 Gbps to 1.05 Gbps when the VXworks VM is started but
> > remains idle.
> >
> >
> >
> >
> >
> > Is that correct?
> >
> >
> >
> >
> >
> > -- steve
> >
> >
> > _____
> >
> >
> >
> > From: Monroe, Chip [mailto:Chip.Monroe@lsi.com]
> > To: Steve Stokes [mailto:stevens@serverengines.com],
> > Padmanabhan, Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com],
> > Tammana, Dilip [mailto:Dilip.Tammana@lsi.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Cc: sanjeevd@serverengines.com, subbus@serverengines.com
> > Sent: Wed, 17 Feb 2010 13:40:55 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > Oooops… you are correct Steve… sorry… I'm fighting this POS Pikes
> > Peak and emailing…
> >
> >
> >
> > I'm using a Dell R710 that Wichita sent to us in Boulder. I don't
> > have anything else with a 10G NIC in it that isn't the same server.
> >
> >
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 2:33 PM
> > To: Monroe, Chip; Padmanabhan, Seetharaman; Tammana, Dilip; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > First run: $ iperf -c 10.10.10.121 -p 9 -P 4 -t 30
> > Second run: $ iperf -c 10.10.10.121 -p 9 -P 4 -t 30
> >
> >
> >
> >
> >
> > Looks like the same destination IP address to me!
> >
> >
> >
> >
> >
> > I would have thought that the difference between runs was that the
> > VXworks VM was idling during the second run?
> >
> >
> >
> >
> >
> > Also, I am getting 8+ Gbps, and you see less than 2 Gbps... do you
> > have access to faster machines?
> >
> >
> >
> >
> >
> > -- steve
> > _____
> >
> >
> >
> > From: Monroe, Chip [mailto:Chip.Monroe@lsi.com]
> > To: Padmanabhan, Seetharaman
> > [mailto:Seetharaman.Padmanabhan@lsi.com], Steve Stokes
> > [mailto:stevens@serverengines.com], Tammana, Dilip
> > [mailto:Dilip.Tammana@lsi.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Cc: sanjeevd@serverengines.com, subbus@serverengines.com
> > Sent: Wed, 17 Feb 2010 13:22:48 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > I believe you both may have misread the output!
> >
> >
> >
> > This is only with VXworks not running…
> >
> > I ran the same test to the two Debian Eth1 & Eth2 IPs that can be hit
> > with only one physical connection on the iSCSI HIC
> >
> >
> >
> >
> >
> >
> >
> > From: Padmanabhan, Seetharaman
> > Sent: Wednesday, February 17, 2010 2:19 PM
> > To: Steve Stokes; Monroe, Chip; Tammana, Dilip; Hari Mudaliar;
> > Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > Good… Chip: Why are we getting only 1.79 Gbps from this link? I would
> > expect this number to be close to 10 Gbps.
> >
> >
> >
> > Now that we have recreated a similar degradation, let us get MSI
> > enabled builds to compare and contrast. If we see similar issues
> > with MSI as well, then we can dig deeper to figure out the next
> > steps.
> >
> >
> >
> > Paddu Padmanabhan
> >
> >
> >
> >
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 1:11 PM
> > To: Monroe, Chip; Tammana, Dilip; Padmanabhan, Seetharaman; Hari
> > Mudaliar; Crabb, Mark
> > Cc: sanjeevd@serverengines.com; subbus@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > Seems to be about the same 40% performance hit I am seeing.....
> >
> >
> >
> >
> >
> > Thank you for re-creating this at LSI !!
> >
> >
> >
> >
> >
> > -- steve
> >
> >
> > _____
> >
> >
> >
> > From: Monroe, Chip [mailto:Chip.Monroe@lsi.com]
> > To: Monroe, Chip [mailto:Chip.Monroe@lsi.com], Steve Stokes
> > [mailto:stevens@serverengines.com], Tammana, Dilip
> > [mailto:Dilip.Tammana@lsi.com], Padmanabhan, Seetharaman
> > [mailto:Seetharaman.Padmanabhan@lsi.com], Hari Mudaliar
> > [mailto:harim@serverengines.com], Crabb, Mark
> > [mailto:Mark.Crabb@lsi.com]
> > Sent: Wed, 17 Feb 2010 13:07:32 -0800
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > ### Chip's results 2/17/10, from direct attached 2k3 host Devon:
> >
> > a) Debian VM active and VXworks VM not started.
> >
> > ------------------------------------------------------------------
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ iperf -c 10.10.10.121 -p 9 -P 4 -t 30
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.121, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 4] local 10.10.10.110 port 1709 connected with 10.10.10.121 port 9
> >
> > [ 5] local 10.10.10.110 port 1710 connected with 10.10.10.121 port 9
> >
> > [ 3] local 10.10.10.110 port 1711 connected with 10.10.10.121 port 9
> >
> > [ 6] local 10.10.10.110 port 1712 connected with 10.10.10.121 port 9
> >
> > write2 failed: Interrupted system callwrite2 failed: Interrupted
> > system callwrite2 failed: Interrupted system call[ ID] Interval
> > Transfer Bandwidth
> >
> >
> >
> >
> >
> >
> >
> > [ 3] 0.0-30.0 sec 1.57 GBytes 449 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.1 sec 1.56 GBytes 447 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.1 sec 1.57 GBytes 448 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.1 sec 1.56 GBytes 445 Mbits/sec
> >
> > [SUM] 0.0-30.1 sec 6.26 GBytes 1.79 Gbits/sec
> >
> > ================================================================
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ iperf -c 10.10.10.121 -p 9 -P 4 -t 30
> >
> > ------------------------------------------------------------
> >
> > Client connecting to 10.10.10.121, TCP port 9
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > [ 3] local 10.10.10.110 port 1723 connected with 10.10.10.121 port 9
> >
> > [ 5] local 10.10.10.110 port 1724 connected with 10.10.10.121 port 9
> >
> > [ 6] local 10.10.10.110 port 1726 connected with 10.10.10.121 port 9
> >
> > [ 4] local 10.10.10.110 port 1725 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0-30.0 sec 917 MBytes 256 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0-30.0 sec 962 MBytes 269 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0-30.0 sec 937 MBytes 262 Mbits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0-30.0 sec 954 MBytes 267 Mbits/sec
> >
> > [SUM] 0.0-30.0 sec 3.68 GBytes 1.05 Gbits/sec
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> >
> >
> >
> >
> > From: Monroe, Chip
> > Sent: Wednesday, February 17, 2010 1:59 PM
> > To: 'Steve Stokes'; Tammana, Dilip; Padmanabhan, Seetharaman; Hari
> > Mudaliar; Crabb, Mark
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> > Ahhhh
> >
> > You didn’t suggest “re-start debian” originally…
> >
> >
> >
> > Running now –
> >
> >
> >
> > Be patient
> >
> >
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 1:32 PM To: Tammana, Dilip;
> > Padmanabhan, Seetharaman; Hari Mudaliar; Crabb, Mark; Monroe, Chip
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > Did you enable the discard service (port 9) on the debian VM? You
> > need to un-comment a line in /etc/inetd.conf and re-start debian.
> >
> >
> >
> >
> >
> > A quick test to see if port 9 has a listener is to:
> >
> >
> >
> >
> >
> > telnet <ip_address> 9
> >
> >
> >
> >
> >
> > -- steve
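[Editor's note: Steve's discard-service setup above can be sketched in a couple of shell commands. This is a hypothetical sketch that operates on a sample copy of inetd.conf, not the VM's real file; the exact inetd restart command on the Debian image is an assumption.]

```shell
# Hypothetical sample line, as it would appear commented out in /etc/inetd.conf.
printf '#discard\t\tstream\ttcp\tnowait\troot\tinternal\n' > /tmp/inetd.conf.sample

# Un-comment the TCP discard line, as described above.
sed -i 's/^#discard/discard/' /tmp/inetd.conf.sample
grep '^discard' /tmp/inetd.conf.sample

# On the real VM: make the same edit in /etc/inetd.conf, restart inetd
# (e.g. /etc/init.d/openbsd-inetd restart), then verify a listener exists
# with: telnet <ip_address> 9
```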
> >
> >
> > _____
> >
> >
> >
> > From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com] To:
> > Steve Stokes [mailto:stevens@serverengines.com],
> > Padmanabhan, Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com],
> > Hari Mudaliar [mailto:harim@serverengines.com],
> > Crabb, Mark [mailto:Mark.Crabb@lsi.com],
> > Monroe, Chip [mailto:Chip.Monroe@lsi.com] Sent: Wed,
> > 17 Feb 2010 12:28:01 -0800 Subject: RE: Steve's Objective: Pikes
> > Peak Project: Steve's Performance Setup....
> >
> >
> > Hi Steve
> >
> >
> >
> > Are there any special considerations in running the iperf script?
> > Our PT team is unable to use the command. I have copied our PT
> > engineers on this email.
> >
> >
> >
> > Thanks
> >
> > -Dilip
> >
> >
> >
> > Here is the output so far:
> >
> >
> >
> > I just downloaded iperf off the net, had to compile etc...
> >
> > We're running this on an external win2k3 host “devon”, with cygwin
> >
> >
> >
> > 1st try, everything fails with "Transport endpoint is not connected"
> >
> >
> >
> > With XVW running ---- I can ping all 3 Debian ports from my IO
> > host, from the direct connected 2k3 host.
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ ping 10.10.10.120
> >
> > PING 10.10.10.120 (10.10.10.120): 56 data bytes
> >
> > 64 bytes from 10.10.10.120: icmp_seq=0 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.120: icmp_seq=1 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.120: icmp_seq=2 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.120: icmp_seq=3 ttl=64 time=0 ms
> >
> >
> >
> > ----10.10.10.120 PING Statistics----
> >
> > 4 packets transmitted, 4 packets received, 0.0% packet loss
> >
> > round-trip (ms) min/avg/max/med = 0/0/0/0
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ ping 10.10.10.121
> >
> > PING 10.10.10.121 (10.10.10.121): 56 data bytes
> >
> > 64 bytes from 10.10.10.121: icmp_seq=0 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=1 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=2 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=3 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=4 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=5 ttl=64 time=0 ms
> >
> > 64 bytes from 10.10.10.121: icmp_seq=6 ttl=64 time=0 ms
> >
> >
> >
> > ----10.10.10.121 PING Statistics----
> >
> > 7 packets transmitted, 7 packets received, 0.0% packet loss
> >
> > round-trip (ms) min/avg/max/med = 0/0/0/0
> >
> >
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ ping 147.145.174.145
> >
> > PING 147.145.174.145 (147.145.174.145): 56 data bytes
> >
> > 64 bytes from 147.145.174.145: icmp_seq=0 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=1 ttl=63 time=0 ms
> >
> > 86 bytes from 147.145.170.5: icmp_type=3 (Dest Unreachable)
> > icmp_code=3
> >
> > 64 bytes from 147.145.174.145: icmp_seq=2 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=3 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=4 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=5 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=6 ttl=63 time=0 ms
> >
> > 64 bytes from 147.145.174.145: icmp_seq=7 ttl=63 time=0 ms
> >
> >
> >
> > ----147.145.174.145 PING Statistics----
> >
> > 8 packets transmitted, 8 packets received, 0.0% packet loss
> >
> > round-trip (ms) min/avg/max/med = 0/0/0/0
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $
> >
> >
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ iperf -c 147.145.174.145 -p 9 -P 4 -t 30
> >
> > connect failed: Connection refused
> >
> > write1 failed: Transport endpoint is not connected
> >
> > connect failed: Connection refusedconnect failed: Connection refused
> >
> >
> >
> > write1 failed: Transport endpoint is not
> > connected------------------------------
> >
> > ------------------------------
> >
> >
> >
> > write1 failed: Transport endpoint is not connectedClient connecting
> > to 147.145.174.145, TCP port 9
> >
> >
> >
> > TCP window size: 8.00 KByte (default)
> >
> > ------------------------------------------------------------
> >
> > connect failed: Connection refused
> >
> > write1 failed: Transport endpoint is not connected
> >
> > [ 3] local 0.0.0.0 port 1416 connected with 147.145.174.145 port 9
> >
> > write2 failed: Transport endpoint is not connected[ 4] local
> > 0.0.0.0 port 1417 connected with 147.145.174.145 port 9
> >
> >
> >
> > write2 failed: Transport endpoint is not connectedwrite2 failed:
> > Transport endpoint is not connectedwrite2 failed: Transport
> > endpoint is not connected
> >
> >
> >
> >
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0- 0.0 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 5] local 0.0.0.0 port 1418 connected with 147.145.174.145 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 6] local 0.0.0.0 port 1419 connected with 147.145.174.145 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [SUM] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ iperf -c 10.10.10.120 -p 9 -P 4 -t 30
> >
> > connect failed: Connection refusedconnect failed: Connection refused
> >
> >
> >
> > write1 failed: Transport endpoint is not connectedwrite1 failed:
> > Transport endpoint is not connected
> >
> >
> >
> > connect failed: Connection refusedconnect failed: Connection refused
> >
> >
> >
> > write1 failed: Transport endpoint is not
> > connected------------------------------
> >
> > ------------------------------
> >
> >
> >
> > write1 failed: Transport endpoint is not connectedClient connecting
> > to 10.10.10.120, TCP port 9
> >
> >
> >
> > TCP window size: 8.00 KByte (default)
> >
> > write2 failed: Transport endpoint is not
> > connected------------------------------
> >
> > ------------------------------
> >
> >
> >
> > write2 failed: Transport endpoint is not connectedwrite2 failed:
> > Transport endpoint is not connectedwrite2 failed: Transport
> > endpoint is not connected
> >
> >
> >
> >
> >
> > [ 4] local 0.0.0.0 port 1427 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0- 0.0 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 5] local 0.0.0.0 port 1428 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 3] local 0.0.0.0 port 1426 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 6] local 0.0.0.0 port 1429 connected with 10.10.10.120 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [SUM] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $ iperf -c 10.10.10.121 -p 9 -P 4 -t 30
> >
> > connect failed: Connection refused
> >
> > write1 failed: Transport endpoint is not connected
> >
> > connect failed: Connection refused
> >
> > connect failed: Connection refused
> >
> > write1 failed: Transport endpoint is not connectedconnect failed:
> > Connection refused
> >
> >
> >
> > ------------------------------------------------------------
> >
> > write1 failed: Transport endpoint is not connectedClient connecting
> > to 10.10.10.121, TCP port 9
> >
> >
> >
> > write1 failed: Transport endpoint is not connectedTCP window size:
> > 8.00 KByte (default)
> >
> >
> >
> > ------------------------------------------------------------
> >
> > write2 failed: Transport endpoint is not connectedwrite2 failed:
> > Transport endpoint is not connectedwrite2 failed: Transport endpoint
> > is not connectedwrite2 failed: Transport endpoint is not connected
> >
> >
> >
> >
> >
> >
> >
> > [ 3] local 0.0.0.0 port 1436 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 3] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 5] local 0.0.0.0 port 1438 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 5] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 6] local 0.0.0.0 port 1439 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 6] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [ 4] local 0.0.0.0 port 1437 connected with 10.10.10.121 port 9
> >
> > [ ID] Interval Transfer Bandwidth
> >
> > [ 4] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> > [SUM] 0.0- 0.1 sec 0.00 Bytes 0.00 bits/sec
> >
> >
> >
> > floyd@devon /cygdrive/c/iperf/iperf-2.0.4
> >
> > $
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 12:37 PM To: Tammana, Dilip;
> > Weber, Chris; Padmanabhan, Seetharaman; Hari Mudaliar; Indurkar,
> > Nirmal; Jolad, Amarnath; Mishra, Barada Cc: Sanjeev Datla; Schwing,
> > Aaron; Stolte, Daryl; joer@serverengines.com;
> > billw@serverengines.com; subbus@serverengines.com;
> > sathyap@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> >
> >
> > OK, it seems that I should hold off until I get a unified set of
> > *.img files.
> >
> >
> >
> >
> >
> > 1) A debian.img file that contains our MSI-X version of be2net.o
> >
> >
> > 2) A vxworks.img file that has MSI-X enabled.
> >
> >
> >
> >
> >
> > I will continue preliminary performance work with the NFS server
> > that I have, but will hold off on publishing any data until AFTER
> > the new *.img files are running in my setup.
> >
> >
> >
> >
> >
> > -- steve
> >
> >
> > _____
> >
> >
> >
> > From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com] To:
> > Steve Stokes [mailto:stevens@serverengines.com],
> > Weber, Chris [mailto:Chris.Weber@lsi.com],
> > Padmanabhan, Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com],
> > Hari Mudaliar [mailto:harim@serverengines.com],
> > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com],
> > Jolad, Amarnath [mailto:Amarnath.Jolad@lsi.com],
> > Mishra, Barada [mailto:Barada.Mishra@lsi.com] Cc:
> > Sanjeev Datla [mailto:sanjeevd@serverengines.com],
> > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com],
> > joer@serverengines.com, billw@serverengines.com,
> > subbus@serverengines.com, sathyap@serverengines.com
> > Sent: Wed, 17 Feb 2010 10:13:32 -0800 Subject: RE: Steve's
> > Objective: Pikes Peak Project: Steve's Performance Setup....
> >
> > Hi Steve
> >
> > Thanks for the info.
> >
> > The VxWorks build you were using does not support MSI, so when you
> > run the IOVM, the interrupts will switch to the legacy model. This
> > can certainly cause a drop in performance. We can provide you with a
> > VxWorks build that has MSI enabled if you want to try it out.
> >
> > Thanks
> > -Dilip
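[Editor's note: one quick way to confirm which interrupt model the NIC actually got on the Debian VM is /proc/interrupts. This is a sketch; the `be2net` driver label and `eth` interface names are assumptions, not confirmed anywhere in this thread.]

```shell
# MSI/MSI-X interrupts are labelled "PCI-MSI" in /proc/interrupts; legacy
# (possibly shared) INTx lines are labelled "IO-APIC". Names are assumptions.
grep -E 'be2net|eth[0-9]' /proc/interrupts || echo "driver not loaded on this host"

# The header row shows one column per CPU, which also hints at how the
# interrupt load is being spread.
head -1 /proc/interrupts
```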
> >
> > -----Original Message-----
> > From: Steve Stokes [mailto:stevens@serverengines.com]
> > Sent: Wednesday, February 17, 2010 12:03 PM To: Tammana, Dilip;
> > Weber, Chris; Padmanabhan, Seetharaman; Hari Mudaliar; Indurkar,
> > Nirmal; Jolad, Amarnath; Mishra, Barada Cc: Sanjeev Datla; Schwing,
> > Aaron; Stolte, Daryl; joer@serverengines.com;
> > billw@serverengines.com; subbus@serverengines.com;
> > sathyap@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> > Dilip,
> >
> > I believe that I am using an MSI-X version of the be2net.o driver
> > in my setup.
> >
> > Subbu/Sathya: Could you be sure to send Dilip the be2net.o driver??
> >
> > -- steve
> >
> > ----- Original Message -----
> > From: Tammana, Dilip [mailto:Dilip.Tammana@lsi.com] To:
> > Weber, Chris [mailto:Chris.Weber@lsi.com],
> > Steve Stokes [mailto:stevens@serverengines.com],
> > Padmanabhan, Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com],
> > Hari Mudaliar [mailto:harim@serverengines.com],
> > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com],
> > Jolad, Amarnath [mailto:Amarnath.Jolad@lsi.com],
> > Mishra, Barada [mailto:Barada.Mishra@lsi.com] Cc:
> > Sanjeev Datla [mailto:sanjeevd@serverengines.com],
> > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com],
> > joer@serverengines.com, billw@serverengines.com
> > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > Performance Setup....
> >
> >
> > > Hi Steve
> > >
> > > I will be handling this issue on the LSI side.
> > >
> > > Can you let us know if the be2nic driver is using MSI or legacy
> > > interrupt model?
> > >
> > > If you are using the legacy interrupt model, it is possible that the
> > > be2nic interrupt is shared with other devices used by VxWorks. This
> > > can cause a drop in performance. Also, the VxWorks build you are
> > > using does not have MSI capability.
> > >
> > > Our PT is setting up a system to recreate this issue. Can you
> > > provide us a driver that is MSI capable?
> > >
> > >
> > > Thanks
> > > -Dilip
> > >
> > >
> > > -----Original Message-----
> > > From: Weber, Chris
> > > Sent: Wednesday, February 17, 2010 11:39 AM
> > > To: Steve Stokes; Padmanabhan, Seetharaman; Hari Mudaliar;
> > > Indurkar,
> > Nirmal;
> > > Jolad, Amarnath; Mishra, Barada; Tammana, Dilip
> > > Cc: Sanjeev Datla; Schwing, Aaron; Stolte, Daryl;
> > > joer@serverengines.com;
> > > billw@serverengines.com Subject:
> > > RE: Steve's Objective: Pikes Peak Project: Steve's Performance
> > > Setup....
> > >
> > > Adding Dilip
> > >
> > > --Chris
> > >
> > >
> > > -----Original Message-----
> > > From: Steve Stokes [mailto:stevens@serverengines.com]
> > > Sent: Tuesday, February 16, 2010 3:17 PM To: Padmanabhan,
> > > Seetharaman; Hari Mudaliar; Indurkar, Nirmal; Jolad, Amarnath;
> > > Weber, Chris; Mishra, Barada Cc: Sanjeev Datla; Schwing, Aaron;
> > > Stolte, Daryl; joer@serverengines.com;
> > > billw@serverengines.com Subject:
> > > RE: Steve's Objective: Pikes Peak Project: Steve's Performance
> > > Setup....
> > >
> > > Paddu,
> > >
> > > Attached are a simple script and iperf tool to re-create the
> > > Networking performance degradation I see.
> > >
> > > If the attached version of iperf does not work on your 10Gb
> > > client, then your team will need to download and compile iperf.
> > >
> > > Instructions for re-production:
> > >
> > > a) Enable the TCP discard server in Debian by editing
> > > /etc/inetd.conf and removing the comment for the discard service.
> > > b) Get iperf on a 10Gb client and run: iperf -c
> > > <IP_address_of_Debian> -P 4 -p 9 -t 30 (or edit the attached doit
> > > script).
> > > c) That does 4 concurrent TCP connections to the discard port (9)
> > > with each connection lasting 30 seconds.
> > > d) If VXworks is not started, I see an aggregate throughput of
> > > 8.0 Gbps.
> > > e) If VXworks is started and idle, I see an aggregate throughput
> > > of just 3.9 Gbps.
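[Editor's note: the repro steps above amount to a one-line client command. The attached "doit" script itself is not reproduced in this thread, so the wrapper below is only a sketch; the default IP is a placeholder taken from the captures elsewhere in the thread.]

```shell
# Placeholder target; substitute the Debian VM's actual address, or export
# DEBIAN_IP before running.
DEBIAN_IP=${DEBIAN_IP:-10.10.10.121}

# -P 4: four parallel TCP streams; -p 9: the discard port; -t 30: 30-second run.
CMD="iperf -c $DEBIAN_IP -p 9 -P 4 -t 30"
echo "$CMD"   # run this on the 10Gb client
```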
> > >
> > > This is using 3.4.1 XEN and the debian image you refer to in this
> > > e-mail thread.
> > >
> > > I would like to know what 4-connection iperf throughput your team
> > > sees to the Debian discard server for:
> > >
> > > a) Debian VM active and VXworks VM not started.
> > > b) Debian VM active and VXworks started (but idle)
> > >
> > > Thanks!!
> > >
> > > -- steve
> > >
> > >
> > >
> > >
> > >
> > > From: Padmanabhan, Seetharaman
> > > [mailto:Seetharaman.Padmanabhan@lsi.com]
> > > To: Steve Stokes [mailto:stevens@serverengines.com],
> > > Hari Mudaliar [mailto:harim@serverengines.com],
> > > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com],
> > > Jolad, Amarnath [mailto:Amarnath.Jolad@lsi.com],
> > > Weber, Chris [mailto:Chris.Weber@lsi.com],
> > > Mishra, Barada [mailto:Barada.Mishra@lsi.com] Cc:
> > > Sanjeev Datla [mailto:sanjeevd@serverengines.com],
> > > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com]
> > > Sent: Thu, 04 Feb 2010 08:24:50 -0800
> > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > Performance Setup....
> > >
> > > Steve and Hari:
> > > We found the problem and we are spinning a new image. Can we use
> > > your ftp site to upload this compressed image?
> > >
> > >
> > > -----Original Message-----
> > > From: Steve Stokes [mailto:stevens@serverengines.com]
> > > Sent: Wednesday, February 03, 2010 9:22 PM To: Padmanabhan,
> > > Seetharaman; Hari Mudaliar; Indurkar, Nirmal; Jolad, Amarnath;
> > > Weber, Chris; Mishra, Barada Cc: Sanjeev Datla; Schwing, Aaron;
> > > Stolte, Daryl Subject: RE: Steve's Objective: Pikes Peak Project:
> > > Steve's Performance Setup....
> > >
> > > Paddu,
> > >
> > > Just a clarification..... the debian image that I was given is:
> > > debian_29_jan_10
> > >
> > > Which debian version is supposed to contain the NFS server?
> > >
> > > -- steve
> > >
> > >
> > > ----- Original Message -----
> > > From: Padmanabhan, Seetharaman
> > > [mailto:Seetharaman.Padmanabhan@lsi.com]
> > > To: Hari Mudaliar [mailto:harim@serverengines.com],
> > > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com],
> > > Jolad, Amarnath [mailto:Amarnath.Jolad@lsi.com],
> > > Weber, Chris [mailto:Chris.Weber@lsi.com],
> > > Mishra, Barada [mailto:Barada.Mishra@lsi.com] Cc:
> > > Sanjeev Datla [mailto:sanjeevd@serverengines.com],
> > > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com],
> > > stevens@serverengines.com [mailto:stevens@serverengines.com]
> > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > Performance Setup....
> > >
> > >
> > > > This does not look right. Let me get back to you on this.
> > > >
> > > > Also, I found a new Quad core 24GB RAM controller for Steve. So
> > > > you should be able to keep using the one you have right now. It
> > > > is probably Monday before I will receive this additional
> > > > controller and will need a day to stage the latest code. Expect
> > > > to swap by Tuesday COB.
> > > >
> > > >
> > > >
> > > > From: Hari Mudaliar [mailto:harim@serverengines.com]
> > > > Sent: Wednesday, February 03, 2010 8:07 PM To: Padmanabhan,
> > > > Seetharaman; Indurkar, Nirmal; Jolad, Amarnath; Weber, Chris
> > > > Cc: Sanjeev Datla; Schwing, Aaron; Stolte, Daryl;
> > > > stevens@serverengines.com
> > > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > > Performance Setup....
> > > >
> > > > Paddu,
> > > > Steve could not get the NFS server working for the
> > > > following reasons -
> > > > 1) There are no UDP or TCP sockets listening on the NFS port (2049).
> > > > 2) There is no nfsd listed in a ps -ax.
> > > > 3) There is no /sbin/rpc.nfsd file (the binary that implements
> > > > the NFS server) on the supplied debian.img disk image.
> > > > 4) There are no instructions on bringing up some other kind of
> > > > proprietary NFS server.
> > > >
> > > > Basically, the NFS server package seems to be missing. Please
> > > > let us know how to proceed.
> > > >
> > > > - Hari
> > > >
> > > > ________________________________
> > > > From: Padmanabhan, Seetharaman
> > > > [mailto:Seetharaman.Padmanabhan@lsi.com]
> > > > To: Steve Stokes [mailto:stevens@serverengines.com],
> > > > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com],
> > > > Jolad, Amarnath [mailto:Amarnath.Jolad@lsi.com],
> > > > Weber, Chris [mailto:Chris.Weber@lsi.com] Cc:
> > > > sanjeevd@serverengines.com, joer@serverengines.com,
> > > > billw@serverengines.com, subbus@serverengines.com,
> > > > chinthuk@serverengines.com, 'lsisw@serverengines.com',
> > > > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > > > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com]
> > > > Sent: Mon, 01 Feb 2010 22:16:17 -0800 Subject: RE: Steve's
> > > > Objective: Pikes Peak Project: Steve's Performance Setup....
> > > >
> > > > Have you checked the Debian VM image that Nirmal passed on
> > > > yesterday? It already has the NFS Server package included.
> > > >
> > > > I understand the usage of RAMdisks for the NFS Server. Can you
> > > > please explain to me RAMdisks for the iSCSI target?
> > > >
> > > > You can use your Dual core 4GB controller to verify your setup.
> > > > In the meantime, we are working to ship a Quad core 24GB Glen
> > > > Cove Controller for this performance run.
> > > >
> > > >
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Steve Stokes [mailto:stevens@serverengines.com]
> > > > Sent: Monday, February 01, 2010 5:41 PM
> > > > To: Padmanabhan, Seetharaman; Indurkar, Nirmal
> > > > Cc: sanjeevd@serverengines.com; joer@serverengines.com;
> > > > billw@serverengines.com; subbus@serverengines.com;
> > > > chinthuk@serverengines.com; 'lsisw@serverengines.com';
> > > > Schwing, Aaron; Stolte, Daryl Subject: RE: Steve's Objective:
> > > > Pikes Peak Project: Steve's Performance Setup....
> > > >
> > > > Paddu,
> > > >
> > > > I am interested in obtaining the NFS server for the Pikes Peak
> > > > platform.
> > > >
> > > > Will RAMdisks be supported/configured on the drop that I
> > > > receive?
> > > >
> > > > I would like to have 40 RAMdisks available to the NFS Server and
> > > > another 40 RAMdisks for the iSCSI target.
> > > >
> > > > Is there a plan and timeline to provide an NFS server that
> > > > supports RAMdisks?
> > > >
> > > > Also, should I run the NFS performance numbers on the 4GB
> > > > controller that I have, or is there a plan to upgrade what I
> > > > have?
> > > >
> > > > Please let me know!
> > > >
> > > > Thanks!
> > > >
> > > > -- steve
> > > >
> > > >
> > > > ----- Original Message -----
> > > > From: Padmanabhan, Seetharaman
> > > > [mailto:Seetharaman.Padmanabhan@lsi.com]
> > > > To: Steve Stokes [mailto:stevens@serverengines.com],
> > > > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com]
> > > > Cc: sanjeevd@serverengines.com, joer@serverengines.com,
> > > > billw@serverengines.com, subbus@serverengines.com,
> > > > chinthuk@serverengines.com, 'lsisw@serverengines.com',
> > > > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com],
> > > > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com]
> > > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > > Performance Setup....
> > > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > > Performance Setup....
> > > >
> > > >
> > > > > If you have 4GB of RAM, then you have a dual-core system with
> > > > > HT enabled. How many controllers do you have on this system?
> > > > > All quad-core systems come with 12 or 24GB of memory.
> > > > >
> > > > > The LSI test team is working to integrate the NFS server into
> > > > > our Debian VM as we speak. I will ask Nirmal to pass on the
> > > > > latest Debian VM we have (it does have the default NFS server,
> > > > > but it is not configured yet).
> > > > >
> > > > >
> > > > >
> > > > > From: Steve Stokes [mailto:stevens@serverengines.com]
> > > > > Sent: Thursday, January 28, 2010 12:23 PM
> > > > > To: Padmanabhan, Seetharaman; Indurkar, Nirmal
> > > > > Cc: sanjeevd@serverengines.com; joer@serverengines.com;
> > > > > billw@serverengines.com; subbus@serverengines.com;
> > > > > chinthuk@serverengines.com; 'lsisw@serverengines.com';
> > > > > Schwing, Aaron; Stolte, Daryl
> > > > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > > > Performance Setup....
> > > > >
> > > > > Paddu,
> > > > >
> > > > > The box has 4 GB in it, and it looks like it could have 4
> > > > > cores....
> > > > >
> > > > > Xen seems to think that there are 4, Debian sees 2, and
> > > > > vxWorks sees 2.
> > > > >
> > > > > How can I confirm the # of cores?
> > > > >
> > > > > Also, when do you think I can see an NFS server running in
> > > > > the Debian VM?
> > > > >
> > > > > -- steve
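[One way to cross-check the core count from both the guest and the hypervisor
views is sketched below; the `xm` command assumes the Xen 3.x toolstack that
shipped with Xen 3.4.1, and the output fields will vary by system.]

```shell
# Logical CPUs the guest kernel can see (a domU only sees its assigned vCPUs)
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "logical CPUs: $cpus"

# Sockets / cores per socket / threads per core -- distinguishes 4 real
# cores from 2 cores with Hyper-Threading (where lscpu is available)
lscpu | grep -E 'Socket|Core|Thread' || true

# From dom0 the hypervisor itself reports every physical CPU
# (xm is the Xen 3.x toolstack; later Xen releases use xl):
#   xm info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'
```

If dom0's `nr_cpus` is 4 but the guest only reports 2, the difference is just
the number of vCPUs assigned to that domain, not the physical core count.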
> > > > >
> > > > > ________________________________
> > > > > From: Padmanabhan, Seetharaman [mailto:Seetharaman.Padmanabhan@lsi.com]
> > > > > Sent: Thu, 28 Jan 2010 12:14:20 -0800
> > > > > To: Steve Stokes [mailto:stevens@serverengines.com];
> > > > > Indurkar, Nirmal [mailto:Nirmal.Indurkar@lsi.com]
> > > > > Cc: sanjeevd@serverengines.com; joer@serverengines.com;
> > > > > billw@serverengines.com; subbus@serverengines.com;
> > > > > chinthuk@serverengines.com; 'lsisw@serverengines.com';
> > > > > Schwing, Aaron [mailto:Aaron.Schwing@lsi.com];
> > > > > Stolte, Daryl [mailto:Daryl.Stolte@lsi.com]
> > > > > Subject: RE: Steve's Objective: Pikes Peak Project: Steve's
> > > > > Performance Setup....
> > > > > Steve:
> > > > >
> > > > > Can you please let me know your Pikes Peak system configuration?
> > > > > I want to know whether you have a quad-core or dual-core system
> > > > > and the amount of installed memory.
> > > > >
> > > > > For IOVM-only validation, a dual-core system is sufficient, but
> > > > > for the simultaneous block + file performance numbers, I would
> > > > > like you to use a quad-core system with 12 or 24GB of RAM.
> > > > >
> > > > > Quad-core CPUs are in short supply, so I would like to start
> > > > > working now to replace yours.
> > > > >
> > > > > Paddu
> > > > >
> > > > >
> > > > > From: Steve Stokes [mailto:stevens@serverengines.com]
> > > > > Sent: Friday, January 22, 2010 11:15 AM
> > > > > To: Padmanabhan, Seetharaman; Indurkar, Nirmal
> > > > > Cc: sanjeevd@serverengines.com; joer@serverengines.com;
> > > > > billw@serverengines.com; subbus@serverengines.com;
> > > > > chinthuk@serverengines.com
> > > > > Subject: Steve's Objective: Pikes Peak Project: Steve's
> > > > > Performance Setup....
> > > > >
> > > > > Paddu,
> > > > >
> > > > > My objective is to produce graphs similar to the attached
> > > > > samples in the next few weeks....
> > > > >
> > > > > But first, I need to get a be2net driver from our team that
> > > > > supports XEN 3.4.1.
> > > > >
> > > > > -- steve
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > ________________________________
> > > > >
> > > > _______________________________________________________________________
> > > > This message, together with any attachment(s), contains confidential
> > > > and proprietary information of ServerEngines Corporation and is
> > > > intended only for the designated recipient(s) named above. Any
> > > > unauthorized review, printing, retention, copying, disclosure or
> > > > distribution is strictly prohibited. If you are not the intended
> > > > recipient of this message, please immediately advise the sender by
> > > > reply email message and delete all copies of this message and any
> > > > attachment(s). Thank you.
