A computer components & hardware forum. HardwareBanter


Multi-core and memory



 
 
  #1  
Old May 20th 08, 06:35 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Rui Pedro Mendes Salgueiro

Hello

Would it make sense to have multiple memory interfaces in multi-core CPUs?
Have Intel or AMD announced plans for such a thing?

--
http://www.mat.uc.pt/~rps/

.pt is Portugal| `Whom the gods love die young'-Menander (342-292 BC)
Europe | Villeneuve 50-82, Toivonen 56-86, Senna 60-94
  #2  
Old May 20th 08, 11:15 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
MitchAlsup
external usenet poster
 
Posts: 38
Default Multi-core and memory

Opteron with its on-chip DRAM controller and on-board chip-to-chip
interconnect has had multiple memory controllers for (¿what?) 4.5
years now.....
  #3  
Old May 21st 08, 11:05 AM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Rui Pedro Mendes Salgueiro

In comp.arch MitchAlsup wrote:
Opteron with its on-chip DRAM controller and on-board chip-to-chip
interconnect has had multiple memory controllers for (¿what?) 4.5
years now.....


It has multiple memory controllers only if you have multiple chips.
When the Opteron first appeared, each chip had only one core, so it
had one memory bank per core.

Then with dual-core Opterons, each memory bank served 2 cores.

Now, with quad-core chips, I think you still have only one memory
controller per chip (I saw something on AMD's web site about that memory
being able to be used either as one 128-bit-wide memory or 2 64-bit-wide
memories, but that is not quite the same thing).

Since even one core can saturate one memory controller, it seems to me
that these systems are getting more and more imbalanced, and it could be
useful to have multiple memory controllers per chip. But maybe it would
make more sense to have wider memory instead.

And I suppose for the moment it is not practical to do either thing (pin
count, price for base configurations, other reasons?).
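For concreteness, here is a back-of-the-envelope check of the "one core can saturate one controller" claim. The numbers (DDR2-800, a 2.5 GHz core streaming 4 bytes per cycle) are illustrative assumptions, not figures from the thread:

```python
# Peak bandwidth of a single 64-bit DDR2-800 channel.
channel_width_bytes = 8           # 64-bit channel
transfers_per_sec = 800e6         # DDR2-800 moves 800 MT/s
peak_bw = channel_width_bytes * transfers_per_sec   # bytes/s
assert peak_bw == 6.4e9           # 6.4 GB/s per channel

# A single core streaming, say, 4 bytes per cycle at 2.5 GHz demands
# 10 GB/s -- more than one channel can supply, so one streaming core
# really can saturate one controller.
core_demand = 4 * 2.5e9
print(peak_bw, core_demand)
```

Of course the exact crossover depends on the DRAM generation and the access pattern; the point is only that per-core demand and per-channel supply are the same order of magnitude.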

  #4  
Old May 21st 08, 05:08 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Evandro Menezes

On May 21, 5:05 am, Rui Pedro Mendes Salgueiro
wrote:

Now, with quad-core chips, I think you still have only one memory
controller per chip (I saw something in AMD's web site about that memory
being able to be used either as one 128-bit-wide memory or 2 64-bit-wide
memories, but that is not quite the same thing).


You're right to be concerned, but AMD's Barcelona has two memory
controllers that can be ganged together to control memory as a 128-bit
array (for greater bandwidth) or left independent to control separate
64-bit memory arrays (page-interleaved, IIRC).
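As a toy illustration of the difference: in ganged mode both channels service every access in lockstep; in unganged mode whole cache lines are steered to one channel or the other. The address-steering below is made up for illustration (a single low address bit), not AMD's actual mapping:

```python
LINE = 64  # bytes per cache line (assumed)

def channel_unganged(addr: int) -> int:
    # Independent controllers: interleave cache lines across two channels,
    # so two cores touching adjacent lines can hit different controllers.
    return (addr // LINE) % 2

def channels_ganged(addr: int) -> tuple:
    # Ganged 128-bit mode: every access is split across both channels.
    return (0, 1)

print(channel_unganged(0), channel_unganged(64))   # adjacent lines alternate
print(channels_ganged(0))                          # both channels, always
```

Unganged mode trades per-access width for more independent banks in flight, which is why it helps page-interleaved multi-core workloads.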

HTH
  #5  
Old May 21st 08, 05:12 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Evandro Menezes

On May 20, 5:15 pm, MitchAlsup wrote:
Opteron with its on-chip DRAM controller and on-board chip-to-chip
interconnect has had multiple memory controllers for (¿what?) 4.5
years now.....


Actually, K8 had a dual-channel controller, meaning that it would keep
track of RAM resources for both channels, such as open pages, etc.
For example, if a RAM page was open, it was open on both channels.

Barcelona, though, does have two independent controllers. For
example, a RAM page open on one "channel" might not have a corresponding
open page on the other "channel".

HTH
  #6  
Old May 21st 08, 05:26 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
MitchAlsup

On May 21, 10:12 am, Evandro Menezes wrote:
On May 20, 5:15 pm, MitchAlsup wrote:

Opteron with its on-chip DRAM controller and on-board chip-to-chip
interconnect has had multiple memory controllers for (¿what?) 4.5
years now.....


Actually, K8 had a dual-channel controller, meaning that it would keep
track of RAM resources for both channels, such as open pages, etc.
For example, if a RAM page was open, it was open on both channels.


Yes, the later K8s did (rev ¿C? and above).
Although generally marketed as "allowing more 'stuffings' of the DRAM
arrays", it was, in effect, two DRAM controllers hiding behind one memory
controller.

Barcelona, though, does have two independent controllers. For
example, a RAM page open on one "channel" might not have a corresponding
open page on the other "channel".


Barcelona is an enhancement of the later K8 dual controllers, with a
much greater write-buffer depth, a watermarked write-back scheme,
and a much more clever DRAM-scheduling scheme. Some prefetching is
done by the DRAM controller itself on cycles that are not otherwise
"in demand". This is something that CPU-based and cache-based prefetchers
cannot do, because they cannot tell when those cycles are free.

The decision to run the DRAM banks as one channel of 128 bits (plus
ECC as desired) or as two channels of 64 bits is made at boot time. If
all the DRAMs on both channels can comply with the same timings,
simulations showed that the single channel of twice the width generally
performed better. When a random mix of DRAM timings is "stuffed" into
the sockets, the controller manages both banks independently, each with the
slowest timings on that bank. Your particular BIOS may allow you to run
the DRAMs as 2 banks even if all the timings are the same.

Mitch
  #7  
Old May 21st 08, 09:41 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Nate Edel

In comp.sys.ibm.pc.hardware.chips Rui Pedro Mendes Salgueiro wrote:
Since even one core can saturate one memory controller, it seems to me
that the systems are getting more and more imbalanced, and it could be
useful to have multiple memory controllers per chip. But maybe it would
make more sense to have wider memory instead.

And I suppose for the moment it is not practical to do either thing (pin
count, price for base configurations, other reasons ?).


Intel went to wider memory (quad channel) with the current generation (5xxx
series) of dual-socket Xeons, but that's still multiple sockets - AMD has
effectively been doing that since the Opterons came out - and that's also
only between the northbridge and memory.

Bandwidth and memory width do go up at times; Intel hasn't widened from 64
bits since the Pentium classic came out, but they have upped the clock speed
many times and gone from a regular FSB to a QDR one (which is then split
into dual-channel DDR by the northbridge).
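The QDR arithmetic works out as follows; the 266 MHz base clock below is an example (the "FSB 1066" generation), not a figure from the post:

```python
# The FSB stayed 64 bits wide, but QDR moves 4 transfers per clock,
# so the marketed "FSB" number is 4x the base clock.
bus_bytes = 8            # 64-bit bus
base_clock = 266e6       # Hz (example: "FSB 1066")
transfers_per_clock = 4  # quad data rate
peak = bus_bytes * base_clock * transfers_per_clock
assert peak == 8.512e9   # ~8.5 GB/s without widening the bus at all
```

In other words, Intel bought bandwidth by raising the transfer rate on a fixed-width bus rather than adding width or controllers.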

With AMD, it should be hypothetically possible to stick additional memory
controllers on the end of some of the HT links, but it would be slow
compared to memory off the onboard controller, and I don't know if anyone's
actually done it.

--
Nate Edel http://www.cubiclehermit.com/
preferred email |
is "nate" at the | "I do have a cause, though. It is obscenity.
posting domain | I'm for it." - prologue to "Smut" by Tom Lehrer
  #8  
Old May 22nd 08, 07:23 AM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Chris Thomasson

"Rui Pedro Mendes Salgueiro" wrote in message
...
Hello

Would it make sense to have multiple memory interfaces in multi-core CPUs?
Have Intel or AMD announced plans to have such a thing ?


[...]

I have always wondered why a multi-core CPU could not be
__directly_integrated__ into a memory card. IMHO, a 2GB mem-card should be
able to physically integrate with one or two multi-core CPUs. The memory
which resides on the same card as the CPU(s) could be accessed using a
cache-coherent shared-memory model. If one card needs to communicate with
another, then a message-passing interface would be utilized. Think of a
single desktop-sized system that has 8 2GB cards with two 64-core CPUs per
card. That's 16GB of total distributed memory running on 1024 cores...

Does anybody know of any experimental projects that are trying to accomplish
something even vaguely similar?

Of course, the chip vendor would need to be the memory vendor as well...
Humm...

IMVHO, drastically reducing the physical distance between the chip and its
local memory can be a very important factor wrt scalability concerns. It
should be ideal to merge the chip and a couple of GB of memory into a single
unit.

Intra-CPU to local memory communication would use shared memory, and
inter-CPU and remote memory communication would use message passing. It
seems the scheme could be made to work... What am I missing?

With this type of setup, it seems like each card could be running a separate
operating system that is physically isolated from the other cards in the
system. Their only communication medium would be message passing. OS(a)
running on Card(a) could communicate with OS(b) running on Card(b) using
MPI. Card(a) intra-comm could use shared memory. This sure seems like it
would scale. Adding extra cards would not seem to be a problem. They might
even be able to be hot-swappable. Humm...
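The scheme described above can be sketched in miniature. Everything here is hypothetical (the `Card` class, the names, the sizes): threads stand in for cores sharing one card's coherent memory, and a queue stands in for the inter-card message channel, which carries copies rather than pointers:

```python
import threading
import queue

class Card:
    """A hypothetical CPU+memory card: coherent memory inside,
    message passing outside."""

    def __init__(self, name: str, ncores: int = 4):
        self.name = name
        self.local_mem = [0] * 8        # cache-coherent shared memory
        self.inbox = queue.Queue()      # the card's message-passing port
        self.ncores = ncores

    def compute(self):
        # All "cores" (threads) update the same local memory directly.
        lock = threading.Lock()
        def core(i):
            with lock:
                self.local_mem[i % len(self.local_mem)] += 1
        threads = [threading.Thread(target=core, args=(i,))
                   for i in range(self.ncores)]
        for t in threads: t.start()
        for t in threads: t.join()

    def send(self, other: "Card", payload):
        # Inter-card traffic goes by value only: a copy, never a pointer.
        other.inbox.put((self.name, list(payload)))

a, b = Card("a"), Card("b")
a.compute()                    # intra-card: shared memory
a.send(b, a.local_mem)         # inter-card: message passing
sender, data = b.inbox.get()
print(sender, data)            # card b received a copy of a's memory
```

The key property, matching the post's hot-swap point, is that card b never holds a reference into card a's memory, so a card can disappear without corrupting its neighbours.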

The programming model would be something like:

http://groups.google.com/group/comp....dbf634f491f46b

http://groups.google.com/group/comp....5eeaecd0e69aed

Basically, intra-node communication is analogous to intra-card comms, and
inter-node comms would be similar to inter-card comms...

Any thoughts?

  #9  
Old May 22nd 08, 04:38 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
MitchAlsup

On May 22, 12:23 am, "Chris Thomasson" wrote:
[...]
IMVHO, drastically reducing the physical distance between the chip and its
local memory can be a very important factor wrt scalability concerns. It
should be ideal to merge the chip and a couple of GB of memory into a single
unit.

Intra-CPU to local memory communication would use shared memory, and
inter-CPU and remote memory communication would use message passing. It
seems the scheme could be made to work... What am I missing?


Heat dissipation. One could, in principle, design a DRAM daughtercard
that had an Opteron/Barcelona socket in the middle, ranks of DRAMs to
the left and right, and HT links through the pins. Dealing with the
100 watts of power at the center would make the spacing between
daughtercards pretty large, and cooling a little on the difficult
side.

BUT it is possible today, if you wanted to try and make a go of it.

  #10  
Old May 28th 08, 09:20 PM posted to comp.arch,comp.sys.ibm.pc.hardware.chips
Chris Thomasson


"MitchAlsup" wrote in message
...
On May 22, 12:23 am, "Chris Thomasson" wrote:
[...]


Heat dissipation. One could, in principle, design a DRAM daughtercard
that had an Opteron/Barcelona socket in the middle, ranks of DRAMs to
the left and right, and HT links through the pins. Dealing with the
100 watts of power at the center would make the spacing between
daughtercards pretty large, and cooling a little on the difficult
side.


BUT it is possible today, if you wanted to try and make a go of it.


Humm. I was thinking that a clever mixture of high-end liquid-cooling
systems and fans might be able to help things out in this specific problem
domain.

 






