#1
Tyan Thunder K8W Questions
AMDers,

The following is from Tyan's spec sheet: "Eight 184-pin 2.5-Volt DDR DIMM sockets - Four per CPU."

What does "four per CPU" mean? The SMP architecture requires that all processors can access all memory, so how does the Opteron access the "other" 4 DIMMs?
#2
Frank Robinson wrote:
> What does "four per cpu" mean?

It means that if you want 8 DIMMs in the system you have to have two CPUs, as the memory controller is per-CPU and each CPU is "directly" connected to only four DIMMs.

rick jones
--
portable adj: code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway...
feel free to post, OR email to raj in cup.hp.com but NOT BOTH...
#3
OK, the other question was: how does each CPU access the "other" 4 DIMMs?

"Rick Jones" wrote in message:
> It means that if you want 8 DIMMs in the system you have to have two CPUs, as the memory controller is per-CPU and each CPU is "directly" connected to only four DIMMs.
#4
Frank Robinson wrote:
> OK, the other question was: how does each CPU access the "other" 4 DIMMs?

Each CPU has a 6-way (HyperTransport) switch. One port goes to the memory, one goes to the other CPU, and one goes to the local CPU core, so a request for the "other" DIMMs crosses the CPU-to-CPU HyperTransport link and is serviced by the other processor's memory controller.

DS
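To make that routing concrete, here is a rough conceptual sketch in C of the decision being described; it is not actual hardware or vendor code, and the address ranges, node layout, and function names are all made up for illustration. Addresses that fall in the local node's DRAM range go to the on-die memory controller, anything else is forwarded across the HyperTransport link to the other CPU.

/* Conceptual model of the address routing on a two-node (dual-Opteron
 * style) system.  Node 0 "owns" the first 4 DIMMs, node 1 the other 4;
 * on real hardware these ranges are programmed into address-map
 * registers at boot. */
#include <stdint.h>
#include <stdio.h>

#define NODE0_LIMIT 0x100000000ULL   /* first 4 GB assumed to be on node 0 */

/* Which node's memory controller owns this physical address? */
static int owning_node(uint64_t paddr)
{
    return (paddr < NODE0_LIMIT) ? 0 : 1;
}

/* The routing decision made by the switch on 'this_node'. */
static const char *route(int this_node, uint64_t paddr)
{
    if (owning_node(paddr) == this_node)
        return "local DRAM via the on-die memory controller";
    return "forwarded over the HyperTransport link to the other CPU";
}

int main(void)
{
    printf("CPU0 -> 0x%09llx: %s\n",
           (unsigned long long)0x040000000ULL, route(0, 0x040000000ULL));
    printf("CPU0 -> 0x%09llx: %s\n",
           (unsigned long long)0x140000000ULL, route(0, 0x140000000ULL));
    return 0;
}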
#5
Wow, that is comical!

They make such a big deal about reducing memory latency with the internal memory controller, and 1/2 of all accesses go to an external memory controller! AMD must have hired some of Intel's marketing people.

"David Schwartz" wrote in message:
> Each CPU has a 6-way (HyperTransport) switch. One port goes to the memory, one goes to the other CPU, and one goes to the local CPU core.
#6
spinlock wrote:
> Wow, that is comical!

No, not really.

> They make such a big deal about reducing memory latency with the internal memory controller, and 1/2 of all accesses go to an external memory controller!

That assumes you have dual CPUs and put equal memory on each CPU.

> AMD must have hired some of Intel's marketing people.

It's still significantly faster and lower in latency than having all your inter-processor traffic and all your memory traffic go over a single bus. A single HT port maxes out at over 19 GB/sec, and that port, in the worst-case scenario, carries your inter-processor traffic and half your memory traffic. Contrast that with a Xeon, whose FSB carries less than 7 GB/sec and has to carry all your inter-processor traffic, all your memory traffic, and all your I/O traffic.

DS
#7
First, there is no "inter-processor" traffic.

Second, AMD has all of the rest going through the processor to memory! In the Intel architecture, data can DMA directly between memory and I/O space without going through the processor.

"David Schwartz" wrote in message:
> It's still significantly faster and lower in latency than having all your inter-processor traffic and all your memory traffic go over a single bus.
#8
spinlock wrote:
> First, there is no "inter-processor" traffic.

All multi-CPU systems have inter-processor traffic, otherwise the different CPUs would not be connected, right?

> Second, AMD has all of the rest going through the processor to memory! In the Intel architecture, data can DMA directly between memory and I/O space without going through the processor.

The memory controller is connected to the HyperTransport switch (which the CPU core is also connected to). This switch is full-duplex, and thus supports full DMA operations between memory and I/O devices without involving the CPU core.

--
Bjørn-Ove Heimsund
Centre for Integrated Petroleum Research
University of Bergen, Norway
#9
"Nate Edel" wrote in message ... Opteron isn't quite SMP, timing-wise; all processors can access all memory, but some memory is directly connected, and some is accessible from the HT links, with a little bit higher latency. So it's sort of NUMA-light. Right. It's just that the latency/throughput for directly connected memory is so close to the same as for remotely connected memory that doing real NUMA-type stuff (like page migration or allocating close pages based upon the allocating processor) isn't worth the effort. You tend to do better interleaving pages across processors so that you have greater aggregate bandwidth to consecutive pages (since you can use the memory controller and the HT link). DS |
#10
Bjørn-Ove Heimsund wrote in message:
> All multi-CPU systems have inter-processor traffic, otherwise the different CPUs would not be connected, right?

WRONG! Threads executing on the different processors communicate with each other through control structures, like spinlocks, in memory. THERE ARE NO x86 instructions that read or write another processor. Everything is orchestrated through memory accesses.
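As a side note, here is a minimal sketch of the kind of memory-based control structure being described: a test-and-set spinlock built on a single shared variable. It is illustrative only and not from the thread; it uses C11 atomics and POSIX threads rather than raw x86 instructions, and the names are made up, but it shows the point that each processor only ever loads and stores shared memory, with the cache-coherence protocol carrying the "communication".

/* Two threads contend for a spinlock that lives in ordinary shared
 * memory.  No instruction targets another CPU; the processors only
 * read and write the shared flag and counter.
 * Build with something like: gcc -pthread spinlock.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void)
{
    /* Busy-wait until we atomically observe the flag clear and set it. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;               /* protected by the lock */
        spin_unlock();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* expect 200000 */
    return 0;
}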