Old December 28th 03, 02:02 PM
Robert Myers

On Sat, 27 Dec 2003 23:55:30 GMT, daytripper wrote:

[snip]


>FB-DIMMs... Might be a lot less there than meets the eye of the article.
>
>FB-DIMMs translate a narrow but very fast memory interconnect into DDR2 SDRAM
>transactions, with each FB-DIMM having an ASIC (the "hub") doing all of the
>things discrete registers and PLLs used to do - PLUS the memory interconnect
>actually passes through the hub on one DIMM to get to the next DIMM/hub,
>through that one to the next, and so on. It's quite extensible, which
>addresses the problem of hooking a bunch of DIMMs to *anything* these days
>while maintaining interconnect speed.
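The daisy-chain topology described above can be sketched with a toy latency model. All of the numbers here are illustrative assumptions, not vendor figures: the point is only that a read must traverse every hub between the host and the target DIMM in both directions, so latency grows with the DIMM's position in the chain.

```python
# Toy model of the FB-DIMM daisy chain: a request passes through every
# hub up to and including the target DIMM, and the response passes back
# through the same hubs. Latency numbers are illustrative assumptions.

HUB_PASS_THRU_NS = 3.0   # assumed per-hub forwarding delay (each direction)
DRAM_ACCESS_NS = 45.0    # assumed DRAM access time at the target DIMM

def fbdimm_read_latency(dimm_index: int) -> float:
    """Round-trip read latency to DIMM `dimm_index` (0 = closest to host)."""
    hops = dimm_index + 1
    return 2 * hops * HUB_PASS_THRU_NS + DRAM_ACCESS_NS

for i in range(4):
    print(f"DIMM {i}: {fbdimm_read_latency(i):.1f} ns")
# DIMM 0: 51.0 ns ... DIMM 3: 69.0 ns
```

The extensibility claim falls out of the same structure: adding a DIMM only extends the chain by one hop rather than loading a shared multi-drop bus.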

Presumably solving the problems inherent in a multi-drop bus?

>Note, however, that memory latency is clearly not addressed in a positive
>manner - sticking n pass-through elements between the nth DIMM's DRAMs and
>the host chipset rarely results in quicker memory response ;-)
>
>One can surmise the era of (up to) 6MB on-chip caches is expected to reduce
>typical miss ratios down to where the even-longer-than-before latency isn't
>a significant hit to overall platform performance...
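That trade-off is just the standard average-memory-access-time arithmetic: AMAT = hit time + miss ratio x miss penalty. A quick sketch, with all figures invented purely for illustration, shows how a bigger cache can come out ahead overall even when the memory path behind it gets slower:

```python
# Back-of-the-envelope AMAT comparison. All latencies and miss ratios
# below are assumptions for illustration, not measured values.

def amat(hit_ns: float, miss_ratio: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time plus expected miss cost."""
    return hit_ns + miss_ratio * miss_penalty_ns

# Smaller cache in front of a faster (registered-DIMM) memory path:
small_cache = amat(hit_ns=2.0, miss_ratio=0.05, miss_penalty_ns=80.0)
# Larger 6MB cache in front of a slower FB-DIMM path with hub hops:
big_cache = amat(hit_ns=3.0, miss_ratio=0.02, miss_penalty_ns=100.0)

print(small_cache)  # 6.0 ns
print(big_cache)    # 5.0 ns
```

Whether the 6MB cache actually cuts the miss ratio enough to win is exactly the empirical question being argued below.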

The 6MB cache is an act of desperation on Intel's part. I don't
_think_ their strategy is to keep increasing cache size. It's a
losing strategy anyway, unless you go to COMA. Itanium's in-order
architecture is just too inflexible, and the problem is still cache
misses.

Intel will, I gather, move the memory controller onto the die. Other
than that, the strategy of the day (and for the foreseeable future) is
to hide latency, not to address it directly.

RM