Date: Thu, 4 Mar 2010 15:41:01 -0800
From: Andrew Sharp <andy.sharp@lsi.com>
To: "Kingsbury, Brent" <Brent.Kingsbury@lsi.com>
Subject: Re: x86 SIMD and the tuxfs world....
Message-ID: <20100304154101.25194911@ripper.onstor.net>
In-Reply-To: <4B6A08C587958942AA3002690DD4F8C3BA7DA6AA@cosmail02.lsi.com>
References: <4B6A08C587958942AA3002690DD4F8C3BA7DA6AA@cosmail02.lsi.com>
Organization: LSI
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu, 4 Mar 2010 16:22:53 -0700 "Kingsbury, Brent"
<Brent.Kingsbury@lsi.com> wrote:

> Andy,
> 
> 'Sorry to bother, but I've a question I imagine that you (or someone
> in your immediate circle) could make a judgment upon, which is this:
> 
>    Will it be possible for us to use the x86 SIMD instructions in our
> future tuxfs work when running upon our future Nehalem Xeon x86-based
> line?
> 
> I realize I probably look seriously nuts asking this, but there are
> some potential optimizations to be had once our C code dust settles,
> provided that we lay out certain data structures in the appropriate
> ways.   I'm thinking specifically of a possible directory block
> scheme wherein we look up pathname component names by hash (which we
> do already), but wherein all the hash values are in an array (they
> aren't this way now), and wherein we wouldn't necessarily be wanting
> to sort the hash values - depending upon the circumstances.
> 
> If our 'eee' environment wrapping in the future x86 product
> environment remains non-preemptive (it appears to me to be that way
> now from a cursory look), then there would be no need to save/restore
> the SIMD register context when context switching between threads.  I
> do however wonder how this would all work in the virtual machine
> environment of which I saw a recent diagram in the Orion slides.  In
> that world, I presume the individual virtual machines are
> preempted ... whenever ... which would seem to necessitate a
> save/restore of the SIMD register context when switching between
> virtual machines if more than one virtual machine context had code
> using those registers....
> 
> Thanks for any/all comments and insights,
> 
> --BK


Burger King?

Anyway, the very short answer is: it would be the kernel's job to
handle this, so I don't care!  Haha, tra-la-tra-lee.  I should smoke
less crack, really.

OK, so maybe you'd like to work on some tuxstor stuff?  I have several
tasks already with your name on them.  However, if you tell Jobama I told
you, he will come to my office and try to kill me.  If you'd be
interested in some of that work, let me know, and if there's anybody
else up there in B-town who would like to work on some tuxstor kernel
stuff, also let me know.  I'll never tell a soul who spilled the beans.

Normally I would just come around and ask for myself, but they won't
spring for a plane ticket just now.  And oh yeah, I don't have time
right now to make a trip to Oregon.  Maybe soon though.

Cheers,

a
