A computer components & hardware forum. HardwareBanter


Tyan Thunder K8W Questions



 
 
#1  May 19th 04, 12:02 AM
Frank Robinson

Tyan Thunder K8W Questions

AMDers,

The following is from Tyan's spec sheet:

"Eight 184-pin 2.5-Volt DDR DIMM sockets - Four per CPU"


What does "four per CPU" mean?

The SMP architecture requires that all processors
be able to access all memory. How does each Opteron
access the "other" four DIMMs?



#2  May 19th 04, 12:50 AM
Rick Jones

Frank Robinson wrote:
What does "four per cpu" mean?


It means that if you want 8 DIMMs in the system you have to have two
CPUs, since the memory controller is per-CPU and each CPU is "directly"
connected to only four DIMMs.

rick jones
--
portable adj, code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway...
feel free to post, OR email to raj in cup.hp.com but NOT BOTH...
#3  May 19th 04, 01:21 AM
Frank Robinson

OK, the other question was:

How does each CPU access the "other" 4 DIMMs?

"Rick Jones" wrote in message
...
Frank Robinson wrote:
What does "four per cpu" mean?


It means that if you want 8 DIMMs in the system you have to have two
CPUs as the memory controller is per-CPU and each CPU is "directly"
connected to only four DIMMs.

rick jones
--
portable adj, code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway...
feel free to post, OR email to raj in cup.hp.com but NOT BOTH...



#4  May 19th 04, 02:49 AM
David Schwartz

Frank Robinson wrote:

OK, the other questions was:

How does each CPU access the "other" 4 DIMMs?


Each CPU has an internal (HyperTransport) crossbar switch: one port goes to
its local memory controller, one goes to the other CPU, and one goes to the
local CPU core. Accesses to the "other" four DIMMs are routed over the
CPU-to-CPU HyperTransport link to the remote CPU's memory controller.

DS


#5  May 19th 04, 05:15 PM
spinlock

Wow, that is comical!

They make such a big deal about reducing memory
latency with the internal memory controller, and yet
half of all accesses go to an external memory controller!

AMD must have hired some of Intel's marketing people.


"David Schwartz" wrote in message
...
Frank Robinson wrote:

OK, the other questions was:

How does each CPU access the "other" 4 DIMMs?


Each CPU has a 6-way (hypertransport) switch. One port goes to the
memory, one goes to the other CPU, one goes to the local CPU.

DS




#6  May 19th 04, 08:09 PM
David Schwartz

spinlock wrote:

Wow, that is comical!


No, not really.

They make such a big deal about reducing memory
latency with the internal memory controller and
1/2 of all accesses go to an external memory controller!


That assumes you have dual CPUs and put equal memory on each CPU.

AMD must have hired some of Intel's marketing people.


It's still significantly faster and lower in latency than having all your
inter-processor traffic and all your memory traffic go over a single bus. The
three HyperTransport links together max out at over 19GB/sec, and in the worst
case scenario a port carries your inter-processor traffic and half your memory
traffic. Contrast that to a Xeon, with an FSB that carries less than 7GB/sec
and has to carry all your inter-processor traffic, all your memory traffic,
and all your I/O traffic.

DS



#7  May 19th 04, 10:00 PM
spinlock

First, there is no "inter-processor" traffic.

Second, AMD has all of the rest going through the processor to
memory!

In the Intel architecture, data can DMA directly between
memory and I/O space without going through the processor.

"David Schwartz" wrote in message
...
spinlock wrote:

Wow, that is comical!


No, not really.

They make such a big deal about reducing memory
latency with the internal memory controller and
1/2 of all accesses go to an external memory controller!


That assumes you have dual CPUs and put equal memory on each CPU.

AMD must have hired some of Intel's marketing people.


It's still significantly faster and lower in latency than having all your
inter-processor traffic and all your memory traffic go over a single bus. A
single HT port maxes out at over 19GB/sec. And that port, in the worst case
scenario, carries your inter-processor traffic and half your memory traffic.
Contrast that to a Xeon, with an FSB that carries less than 7GB/sec and has
to carry all your inter-processor traffic, all your memory traffic, and all
your I/O traffic.

DS





#8  May 20th 04, 07:52 PM
Bjørn-Ove Heimsund

spinlock wrote:
First, there is no "inter-processor" traffic.


All multi-cpu systems have inter-processor traffic, otherwise the
different CPUs would not be connected, right?

Second, AMD has all of the rest going trough the processor to
memory!

In the Intel Architecture data can DMA directly between
memory and IO space without going through the processor.


The memory controller is connected to the HyperTransport switch
(which the CPU is also connected to). This switch is full-duplex, and
thus supports full DMA operations between memory and I/O devices
without involving the CPU core.

--
Bjørn-Ove Heimsund
Centre for Integrated Petroleum Research
University of Bergen, Norway
#9  May 20th 04, 11:28 PM
David Schwartz


"Nate Edel" wrote in message
...

Opteron isn't quite SMP, timing-wise; all processors can access all memory,
but some memory is directly connected, and some is reached over the HT
links, with a little bit higher latency. So it's sort of NUMA-light.


Right. It's just that the latency/throughput for directly connected
memory is so close to the same as for remotely connected memory that doing
real NUMA-type stuff (like page migration or allocating close pages based
upon the allocating processor) isn't worth the effort. You tend to do better
interleaving pages across processors so that you have greater aggregate
bandwidth to consecutive pages (since you can use the memory controller and
the HT link).

DS


#10  May 21st 04, 05:21 PM
spinlock


"Bjørn-Ove Heimsund" wrote in message
...
spinlock wrote:
First, there is no "inter-processor" traffic.


All multi-cpu systems have inter-processor traffic, otherwise the
different CPUs would not be connected, right?


WRONG!

Threads executing on the different processors communicate with
each other through control structures, like spinlocks, in memory.

THERE ARE NO x86 instructions that read or write another
processor.

Everything is orchestrated through memory accesses.


Second, AMD has all of the rest going trough the processor to
memory!

In the Intel Architecture data can DMA directly between
memory and IO space without going through the processor.


The memory controller is connected to the hypertransport switch
(which the CPU is also connected to). This switch is full-duplex, and
thus supports full DMA operations between memory and I/O devices
without involving the CPU core.

--
Bjørn-Ove Heimsund
Centre for Integrated Petroleum Research
University of Bergen, Norway



 



