NUMA question for Xeon 5500



 
 
  #1  
Old February 12th 10, 08:36 PM posted to comp.sys.intel
Sven Bächle

Hi,
I am wondering what the maximum malloc size is when working in a dual
Xeon 5500 setup, that is, two Xeon 5500s (Nehalem), each with some
amount of RAM installed. For example, with 8 GB of RAM attached to each
5500, could you do a single malloc of 10 GB?
  #2  
Old February 12th 10, 10:22 PM posted to comp.sys.intel
Robert Myers

On Feb 12, 3:36 pm, Sven Bächle wrote:
> Hi,
> I am wondering what the maximum malloc size is when working in a dual
> Xeon 5500 setup, that is, two Xeon 5500s (Nehalem), each with some
> amount of RAM installed. For example, with 8 GB of RAM attached to each
> 5500, could you do a single malloc of 10 GB?


You should see one big, undifferentiated memory space, with slightly
different latency depending on whether the data have to take an extra
hop to reach the processor that needs them.

Robert.
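
One way to see that extra hop is to ask the kernel for its NUMA distance
table, the same matrix that numactl --hardware prints. A minimal sketch in
C, assuming libnuma is installed (compile with -lnuma):

#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    int max = numa_max_node();
    for (int from = 0; from <= max; from++)
        for (int to = 0; to <= max; to++)
            /* numa_distance() reports 10 for local access and a larger
               value (typically 20 or 21) when the extra hop is needed. */
            printf("node %d -> node %d: distance %d\n",
                   from, to, numa_distance(from, to));
    return 0;
}

The distances are only relative figures supplied by the firmware, but they
show which node pairs pay the extra hop.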


  #3  
Old February 13th 10, 12:18 AM posted to comp.sys.intel
Nate Edel

Sven Bächle wrote:
> I am wondering what the maximum malloc size is when working in a dual
> Xeon 5500 setup, that is, two Xeon 5500s (Nehalem), each with some
> amount of RAM installed. For example, with 8 GB of RAM attached to each
> 5500, could you do a single malloc of 10 GB?


What OS?

On 64-bit Linux, that would work; I'm not sure about any other OS.

--
Nate Edel http://www.cubiclehermit.com/
preferred email |
is "nate" at the | "I do have a cause, though. It's obscenity. I'm
posting domain | for it."
  #4  
Old February 13th 10, 06:01 AM posted to comp.sys.intel
Yousuf Khan

Sven Bächle wrote:
> Hi,
> I am wondering what the maximum malloc size is when working in a dual
> Xeon 5500 setup, that is, two Xeon 5500s (Nehalem), each with some
> amount of RAM installed. For example, with 8 GB of RAM attached to each
> 5500, could you do a single malloc of 10 GB?


In what sense are you asking? Are you asking about peak-performance
considerations, or just whether it's possible at all? If it's the latter,
there should be no problem malloc'ing whatever size you want, even beyond
your installed memory size; the OS will take care of it through virtual
memory. If it's for performance, then it's obviously best to stay within
the memory installed on each processor. But you can't tell how much memory
other programs and drivers have already allocated on each processor, so
even if you think you've allocated less than one processor's installed
memory, you may still cross over into memory controlled by the other
processor.

Yousuf Khan
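
To make the "just malloc it" point concrete, here is a minimal sketch (not
from the thread) of what the original question describes: a single 10 GiB
allocation on a 64-bit OS with 8 GB attached to each socket. The malloc
itself only reserves address space; physical pages are committed as they
are touched and can come from either node (or swap).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* A single 10 GiB request, larger than the 8 GB attached to either socket. */
    size_t size = 10ULL * 1024 * 1024 * 1024;

    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touching the pages is what actually commits physical memory; the
       frames come from whichever node the faulting thread is running on,
       spilling over to the other node (or to swap) as needed. */
    memset(buf, 0, size);

    printf("allocated and touched %zu bytes\n", size);
    free(buf);
    return 0;
}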
  #5  
Old February 13th 10, 08:54 AM posted to comp.sys.intel
Sven Bächle

Thanks for all the answers. No, it's not a performance issue, at least
not at the moment. I was just unsure whether it was possible, but it
makes sense that the MMU and OS take care of that.
Raising the question of performance, though: is this true?

> More importantly, I should point out that even
> remote memory references in Nehalem-based Servers
> are faster than all memory references in the
> previous generation Xeon-based systems.

I read it on
http://kevinclosson.wordpress.com/20...gives-part-ii/

  #6  
Old February 13th 10, 08:00 PM posted to comp.sys.intel
Sven Bächle

On 13.02.2010 09:54, Sven Bächle wrote:
> More importantly, I should point out that even
> remote memory references in Nehalem-based Servers
> are faster than all memory references in the
> previous generation Xeon-based systems.

I found official Intel slides that say the same thing, on page 46 of a
presentation given at Aachen University:
http://www.rz.rwth-aachen.de/global/...aaaaaaaaabsyka

  #7  
Old February 13th 10, 08:00 PM posted to comp.sys.intel
Yousuf Khan[_2_]

Sven Bächle wrote:
> Thanks for all the answers. No, it's not a performance issue, at least
> not at the moment. I was just unsure whether it was possible, but it
> makes sense that the MMU and OS take care of that.
> Raising the question of performance, though: is this true?
>
> > More importantly, I should point out that even
> > remote memory references in Nehalem-based Servers
> > are faster than all memory references in the
> > previous generation Xeon-based systems.
>
> I read it on
> http://kevinclosson.wordpress.com/20...gives-part-ii/


I'd say it's probably true, though I have not done any actual testing
myself. Previous-generation Xeon systems used the outdated front-side bus
for memory accesses. Nehalem and later-generation Xeons use QuickPath
Interconnect (QPI, Intel's version of AMD's HyperTransport). The
front-side bus was very contention-prone, with multiple cores and
processors competing for the same bus, but it was a fully flat
architecture with no NUMA at all. During the heyday of AMD's Opteron
processors, which were NUMA from the beginning, they'd make mincemeat of
their opposite-number Xeons, even when the Xeons had a clock-speed
advantage.


Yousuf Khan
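
To put rough numbers on the local-versus-remote gap described above, one
can pin a thread to one node, place a buffer on each node in turn, and
time a walk over it. A sketch assuming libnuma and at least two populated
nodes (compile with -lnuma); this is an illustration, not a careful
benchmark:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <numa.h>

/* Walk the buffer once per pass, one read per cache line, and return the
   elapsed time in seconds. The volatile sink keeps the reads from being
   optimized away. */
static double walk(volatile const char *buf, size_t len, int passes)
{
    struct timespec t0, t1;
    volatile char sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < len; i += 64)
            sink ^= buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA machine with at least two nodes\n");
        return 1;
    }

    size_t len = 256UL * 1024 * 1024;       /* 256 MiB per buffer */
    numa_run_on_node(0);                    /* keep this thread on node 0 */

    for (int node = 0; node <= 1; node++) {
        /* Ask for memory placed on 'node', fault it in, then time reads. */
        char *buf = numa_alloc_onnode(len, node);
        if (buf == NULL) {
            perror("numa_alloc_onnode");
            return 1;
        }
        memset(buf, 1, len);
        printf("node 0 reading node %d memory: %.3f s\n",
               node, walk(buf, len, 4));
        numa_free(buf, len);
    }
    return 0;
}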
  #8  
Old February 13th 10, 08:12 PM posted to comp.sys.intel
Yousuf Khan[_2_]

Sven Bächle wrote:
> Thanks for all the answers. No, it's not a performance issue, at least
> not at the moment. I was just unsure whether it was possible, but it
> makes sense that the MMU and OS take care of that.
> Raising the question of performance, though: is this true?
>
> > More importantly, I should point out that even
> > remote memory references in Nehalem-based Servers
> > are faster than all memory references in the
> > previous generation Xeon-based systems.
>
> I read it on
> http://kevinclosson.wordpress.com/20...gives-part-ii/



Another thing to note is that the article you pointed to is talking
specifically about Oracle database performance. When NUMA first started
becoming a big issue back in 2003, when the Opteron came out, AMD coined
the term SUMA (Sufficiently Uniform Memory Access), meaning just treat the
memory as one big flat space and ignore the NUMA topology. At that point
the available operating systems were Windows 2000/XP and Server 2000/2003,
mainly in 32-bit versions. They weren't very NUMA-aware, though they had
some rudimentary awareness. Today's versions of Windows Vista or 7, and
their server counterparts, are much more NUMA-aware and have specific
optimizations available, though I don't know how much difference those
make compared to the non-optimized settings. But Microsoft seems to make a
lot of hay out of its NUMA optimizations.

Yousuf Khan
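
For what it's worth, the Windows NUMA support mentioned above is also
exposed to applications. As a hedged sketch (assuming Windows Vista /
Server 2008 or later, and a machine that actually has a node 1),
VirtualAllocExNuma lets a program name the node that should preferably
back an allocation:

/* VirtualAllocExNuma needs Vista-era headers or newer. */
#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Ask for 1 GiB preferentially backed by NUMA node 1 (the node
       number here is just an example). */
    SIZE_T size = (SIZE_T)1 << 30;
    DWORD node = 1;

    void *buf = VirtualAllocExNuma(GetCurrentProcess(), NULL, size,
                                   MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE, node);
    if (buf == NULL) {
        fprintf(stderr, "VirtualAllocExNuma failed: %lu\n", GetLastError());
        return 1;
    }

    /* Physical pages are taken from the preferred node as they are first
       touched, falling back to other nodes if that node is full. */
    memset(buf, 0, size);

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}

The node is only a preference, not a hard binding: Windows still falls
back to other nodes if the preferred one runs out of memory.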
  #9  
Old February 13th 10, 08:17 PM posted to comp.sys.intel
Yousuf Khan[_2_]

Sven Bächle wrote:
> On 13.02.2010 09:54, Sven Bächle wrote:
> > More importantly, I should point out that even
> > remote memory references in Nehalem-based Servers
> > are faster than all memory references in the
> > previous generation Xeon-based systems.
>
> I found official Intel slides that say the same thing, on page 46 of a
> presentation given at Aachen University:
> http://www.rz.rwth-aachen.de/global/...aaaaaaaaabsyka


Yes, I found something similar for AMD Opteron, on AMD's website:

AMD Opteron™ Solutions eNewsletter
"To be effective, NUMA systems need operating systems that are geared to
expect the different response times when a processor requests
information from memory, depending on where the memory is located within
the overall system. Initial benchmarks show as much as a 20 percent
performance increase in using a NUMA-aware Operating System. "
http://www.amd.com/us-en/0,,3715_11789,00.html

So it looks like there is only about a 20% difference between local and
non-local memory access here.

Yousuf Khan
  #10  
Old February 13th 10, 08:19 PM posted to comp.sys.intel
Robert Myers

On Feb 13, 3:00 pm, Yousuf Khan wrote:
> (QPI, Intel's version of AMD's HyperTransport).

HyperTransport is a standard, and QPI is not Intel's version of it. AMD
no more invented cache-coherent serial interconnects than the Russians
invented the laser.

Robert.
 



