X-MimeOLE: Produced By Microsoft Exchange V6.5
Received: by onstor-exch02.onstor.net 
	id <01C76747.581FAEE7@onstor-exch02.onstor.net>; Thu, 15 Mar 2007 14:17:32 -0700
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Content-class: urn:content-classes:message
Subject: RE: memory map madness
Date: Thu, 15 Mar 2007 14:17:31 -0700
Message-ID: <BB375AF679D4A34E9CA8DFA650E2B04E02D8FC36@onstor-exch02.onstor.net>
In-Reply-To: <20070314181219.0c1d8913@ripper.onstor.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: memory map madness
Thread-Index: AcdmnvoSpDZN1d6BR86fSJucvNFaYwAkdWgw
References: <20070314131829.514b95c7@ripper.onstor.net><BB375AF679D4A34E9CA8DFA650E2B04E02D8F746@onstor-exch02.onstor.net><20070314173541.0d9e29f6@ripper.onstor.net><BB375AF679D4A34E9CA8DFA650E2B04E02D8F79F@onstor-exch02.onstor.net> <20070314181219.0c1d8913@ripper.onstor.net>
From: "Brian Stark" <brian.stark@onstor.com>
To: "Andy Sharp" <andy.sharp@onstor.com>

Andy,

Yeah, my brain is hurting now, too!  Let's just say the address
translation and remapping that I'm talking about is done at the
hardware level, after the virtual address has been converted to a
physical one.  It's not as magical as it sounds, and I'll draw it out
for you sometime if you're still interested.

The registers in the PMC that I'm referring to are called the OCD
registers.  We set them up in the PROM, and the kernel doesn't need to
touch them.
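[Since those registers are programmed once at boot, the PROM side is presumably just a handful of write-once device-register stores.  A generic sketch of that pattern -- the base/limit register layout and values are placeholders, NOT the real RM9K/PMC OCD registers; the two windows simply echo the 256MB local-memory ranges mentioned elsewhere in this thread:]

```c
#include <stdint.h>

/* Sketch of the "program it once in the PROM, kernel never touches
 * it" pattern.  The base/limit pair layout is hypothetical; in real
 * boot code `regs` would point at the memory-mapped OCD block. */
void prom_setup_ocd(volatile uint32_t *regs)
{
    regs[0] = 0x00000000u;  /* window 0 base:  local memory chunk 0 */
    regs[1] = 0x0fffffffu;  /* window 0 limit: 256MB                */
    regs[2] = 0x20000000u;  /* window 1 base:  local memory chunk 1 */
    regs[3] = 0x2fffffffu;  /* window 1 limit: 256MB                */
    /* From here on the kernel only sees the resulting physical map. */
}
```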


Brian

> -----Original Message-----
> From: Andy Sharp
> Sent: Wednesday, March 14, 2007 6:12 PM
> To: Brian Stark
> Subject: Re: memory map madness
>
> On Wed, 14 Mar 2007 17:47:28 -0700 "Brian Stark"
> <brian.stark@onstor.com> wrote:
>
> > That's right, we aren't using the same physical address for local
> > memory right now.  And the spaces you noted for the 512MB of local
> > memory are correct.  Sorry that the 256MB regions aren't contiguous.
> > Blame the PMC mapping registers for this.
>
> Yeah, I get this part now.  It doesn't matter to me or the Linux
> kernel if it's contiguous, especially since each chunk is going to a
> different core.  Of course, if it were all in kseg0 that would save
> having to use the TLB a little bit, but who cares.
>
> > In an attempt to clear any confusion, let me address one of the
> > questions in your previous email.  From the point of view of the
> > PMC, SysAD and local memory are considered peripherals.  The CPU
> > cores access these peripherals with a virtual address that is then
> > converted to a physical address.  This physical address is then
> > mapped by internal registers to one of these peripherals, and it
> > could then be remapped to a different physical address.  This is
> > often done to meet addressing requirements of one of the chips
> > attached to the same bus (e.g. Marvell).
>
> "mapped by internal registers to one of these peripherals" ... what
> internal registers are these?  I don't know of any magical internal
> registers in the RM9K that can deduce via mental telepathy which
> peripheral you meant when you issued an address that mapped to
> 4000.0000 vs. another address that maps to the same physical address.
> Unless I'm missing something, like an uncached TLB mapping that
> resolves to physical address 4000.0000 somehow going somewhere
> different from a cached TLB mapping that also resolves to physical
> 4000.0000.
> I hurt my brain just thinking that up.
>
> > So, while you're correct that the CPU cores can't deal with
> > peripherals at the same physical address, they have no notion of
> > the remapping that can happen.  This remapping can result in two
> > peripherals having the same physical address from an overall
> > system perspective, but since the peripherals are isolated from
> > each other by the internal remappers, this doesn't matter.  From
> > the perspective of the CPU cores, these peripherals must live at
> > different virtual and physical addresses (which they do).
>
> Where are these internal remappers?  I think I'm more confused now
> than before, if possible.
>
> > Hope I didn't confuse you any more.  If I did, then just read the
> > first paragraph in this reply again since we aren't using that
> > upper 512MB of space anyway!
>
> Okay.  I'll do a tlb_flush and reread the first paragraph.
>
> >
> >
> > > -----Original Message-----
> > > From: Andy Sharp
> > > Sent: Wednesday, March 14, 2007 5:36 PM
> > > To: Brian Stark
> > > Subject: Re: memory map madness
> > >
> > > On Wed, 14 Mar 2007 16:38:42 -0700 "Brian Stark"
> > > <brian.stark@onstor.com> wrote:
> > >
> > > > Andy,
> > > >
> > > > I see your confusion about how there appears to be a physical
> > > > address conflict between the Marvell and local memory.  Keep in
> > > > mind that after all of the address translation and remapping
> > > > stages inside the PMC, it's possible that different peripherals,
> > > > such as SysAD and local memory, actually show the same physical
> > > > address.  Since these peripherals are isolated from each other,
> > > > this is ok.  Plus, we don't actually use the virtual regions at
> > > > 20000000 and 30000000 as noted above.
> > >
> > > Never mind that previous reply of mine, I was still suffering
> > > some confusion.  Basically, I take it that we aren't using the
> > > phys 4000.0000 address for local memory right now; we are using
> > > phys 0 thru (1000.0000 - 1) and 2000.0000 thru (3000.0000 - 1)
> > > for local memory, right?
> > >
> > > Cheers,
> > >=20
> > > a
> > >
> > > > Brian
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Andy Sharp
> > > > > Sent: Wednesday, March 14, 2007 1:18 PM
> > > > > To: Brian Stark
> > > > > Subject: memory map madness
> > > > >
> > > > > Hi Brian,
> > > > >
> > > > > I'm apparently having trouble reading the memory map PDFs.
> > > > >
> > > > > Bobcat\ Memory\ Map\ 3.pdf lists two different things at phys
> > > > > 0x4000.0000: 256MB of local memory, and 256MB of Marvell
> > > > > memory.
> > > > >
> > > > >
> > > > > Bobcat\ Memory\ Map.pdf lists the Marvell memory as starting
> > > > > at 0x5000.0000
> > > > >
> > > > > What's the real and true story of the memory map?
> > > > >
> > > > > Cheers,
> > > > >
> > > > > a
> > > > >
> > >
>
