Scott McNealy, AMD fanboy



 
 
  #1
December 11th 03, 04:14 PM
Yousuf Khan

Scott McNealy is sounding like an AMD fanboy these days. Here are some of
his latest quotes:

-"It was a slam-dunk reason. We maintained binary compatibility with the
entire x86 software base with Opteron. We took the Xeon Solaris binary and
it immediately ran on Opteron."
-"Itanium is last millennium architecture - it's too late. Sparc and IBM's
Power have already taken the space in the 64-bit, vertical scaling,
enterprise compute space. Itanium would need to be three or four times
faster than Power and/or Sparc to establish a position, coming in eight
years, nine years or you could say 10 years, late."
-"We got everybody to recompile for Sparc 64-bit way back in 1994 because we
were the first. They're last, nobody's going to recompile for that
environment. So AMD was just great. AMD is an Itanium killer."

Here's the link:
http://www.computing.co.uk/News/1151480

Anyways, he's supposed to talk like that; he's trying to sell Opterons now,
after all.

Also it looks like Sun is going to be buying its servers from Newisys:
http://www.theinquirer.net/?article=13145

Yousuf Khan


  #2
December 11th 03, 07:47 PM
Tony Hill

On Thu, 11 Dec 2003 16:14:50 GMT, "Yousuf Khan"
wrote:
Scott McNealy is sounding like an AMD fanboy these days. Here are some of
his latest quotes:

-"It was a slam-dunk reason. We maintained binary compatibility with the
entire x86 software base with Opteron. We took the Xeon Solaris binary and
it immediately ran on Opteron."


It should perhaps be noted that Sun has already done all the work to
get a combined 32-bit/64-bit operating system working. The way that
AMD extended x86 to 64 bits is nearly identical to how Sun extended
Sparc to 64 bits a few years back, and running a mixed 32-bit/64-bit
environment is already well supported on Sparc. As we've seen with
Linux, getting a mixed environment was the tricky part of getting
AMD64 support: most companies were able to compile most things for
AMD64 in a few days or weeks, but the distributions still took a year
or so to complete.

For Sun, most of the work is already done.
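
To see concretely why the recompile matters, here's a quick sketch (my
own illustration, nothing from Sun or AMD): under the ILP32 model used
by 32-bit x86 and Sparc, long and pointers are 4 bytes; under the LP64
model used by AMD64 and 64-bit Sparc they grow to 8. Assuming a gcc
toolchain with 32-bit and 64-bit multilib support, compile the same
file both ways (gcc -m32 vs. gcc -m64) and compare:

#include <stdio.h>

int main(void)
{
    /* ILP32 prints 4/4/4; LP64 prints 4/8/8 */
    printf("int:    %u bytes\n", (unsigned)sizeof(int));
    printf("long:   %u bytes\n", (unsigned)sizeof(long));
    printf("void *: %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}

Any code that quietly assumes sizeof(long) == 4, or stuffs a pointer
into an int, compiles fine in the first mode and breaks in the second;
that's the class of bug the distributions spent their year flushing out.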

-"Itanium is last millennium architecture - it's too late. Sparc and IBM's
Power have already taken the space in the 64-bit, vertical scaling,
enterprise compute space. Itanium would need to be three or four times
faster than Power and/or Sparc to establish a position, coming in eight
years, nine years or you could say 10 years, late."


Hehe, I don't know if this is AMD fanboy-ism, or simply "if it comes
from Intel, it's CRAP!"

Maybe they are one and the same?

-"We got everybody to recompile for Sparc 64-bit way back in 1994 because we
were the first. They're last, nobody's going to recompile for that
environment. So AMD was just great. AMD is an Itanium killer."


Give me an A!
Give me a M!
Give me a D!

What's that spell? AMD!!!

Yup, there is definitely some AMD-cheerleading going on here!

Here's the link:
http://www.computing.co.uk/News/1151480

Anyways, he's supposed to talk like that; he's trying to sell Opterons now,
after all.

Also it looks like Sun is going to be buying its servers from Newisys:
http://www.theinquirer.net/?article=13145


Yup, Scott and Hector Ruiz had a presentation a month or two back when
they announced the servers, and they showed off two "Sun" Opteron
servers. One was a two-processor 1U Newisys 2100 and the other was a
4-processor, 3U Newisys 4300, both all decked out in Sun colours.

Sun has also said that they would develop some 8-processor Opteron
systems as well as some workstations though, so they may end up doing
the development of those in-house, as currently Newisys does not
provide such systems.

-------------
Tony Hill
hilla underscore 20 at yahoo dot ca
  #3
December 12th 03, 03:10 PM
Yousuf Khan

"Tony Hill" wrote in message
.com...
Also it looks like Sun is going to be buying its servers from Newisys:
http://www.theinquirer.net/?article=13145


Yup, Scott and Hector Ruiz had a presentation a month or two back when
they announced the servers, and they showed off two "Sun" Opteron
servers. One was a two-processor 1U Newisys 2100 and the other was a
4-processor, 3U Newisys 4300, both all decked out in Sun colours.

Sun has also said that they would develop some 8-processor Opteron
systems as well as some workstations though, so they may end up doing
the development of those in-house, as currently Newisys does not
provide such systems.


The more I think about it, the more difficult the task sounds. How do you
get 8 processors (along with their huge heatsinks) populated on a standard
sized motherboard, and still have room for 8 sets of memory banks (presuming
that each processor will control its own memory banks), and then also have
PCI slots? It seems like a logistical nightmare. I can't see how they can do
it without splitting it up into two separate motherboards. Even if they
concentrate the CPUs physically close together and use some proprietary
all-in-one heatsinking solution it seems like heat will be a problem to get
rid of. Maybe once they go to 90nm the heat problem will be manageable at
that point?

Yousuf Khan


  #4
December 12th 03, 07:15 PM
Scott Alfter


Yousuf Khan wrote:
"Tony Hill" wrote in message:
Sun has also said that they would develop some 8-processor Opteron
systems as well as some workstations though, so they may end up doing
the development of those in-house, as currently Newisys does not
provide such systems.


The more I think about it, the more difficult the task sounds. How do you
get 8 processors (along with their huge heatsinks) populated on a standard
sized motherboard, and still have room for 8 sets of memory banks (presuming
that each processor will control its own memory banks), and then also have
PCI slots?


Who said it has to be an ATX motherboard that you can drop into any old
case? Sun might design a motherboard around one of its existing enclosures,
or it might come up with an all-new design. In their market, their system
designers pretty much have a blank slate. If they set out to make an 8-way
box and end up having to make it the size of a dorm fridge, they'll do that.

_/_ Scott Alfter (address in header doesn't receive mail)
/ v \ send mail to
(IIGS( http://alfter.us/ Top-posting!
\_^_/ rm -rf /bin/laden What's the most annoying thing on Usenet?

  #5
December 12th 03, 07:33 PM
Felger Carbon

"Yousuf Khan" wrote
in message
.rogers.com...


[Opteron heat problems snipped]

Maybe once they go to 90nm the heat problem will be manageable at
that point?


Good point. See how well that worked on Prescott? ;-) ;-)



  #6
December 12th 03, 09:32 PM
Tony Hill

On Fri, 12 Dec 2003 15:10:16 GMT, "Yousuf Khan"
wrote:
Sun has also said that they would develop some 8-processor Opteron
systems as well as some workstations though, so they may end up doing
the development of those in-house, as currently Newisys does not
provide such systems.


The more I think about it, the more difficult the task sounds. How do you
get 8 processors (along with their huge heatsinks) populated on a standard
sized motherboard,


Well, you start by scrapping any idea of using a standard sized
motherboard. It doesn't really matter what architecture you're
talking about, if you want 8 or more CPUs in a single system image,
you are NOT talking about a standard sized motherboard.

and still have room for 8 sets of memory banks (presuming
that each processor will control its own memory banks), and then also have
PCI slots? It seems like a logistical nightmare. I can't see how they can do
it without splitting it up into two separate motherboards.


Probably more than 2. Your memory will be stuck on memory risers, and
the PCI-X slots will be on yet another board (or boards). Personally
I'd envision starting with 8 processors on a board, roughly split into
2 rows of 4. Beside each row of 4 processors you have a memory riser
board with 8 banks of memory per board (two banks per processor).
Then you can move further out and put PCI-X riser boards on either
side, or possibly just go straight to a backplane.

Is it easy? Hell no. Is it possible? Sure. You aren't going to
find it in a desktop case though! You're probably looking at a 5U
rackmount case at a minimum.

Even if they
concentrate the CPUs physically close together and use some proprietary
all-in-one heatsinking solution it seems like heat will be a problem to get
rid of. Maybe once they go to 90nm the heat problem will be manageable at
that point?


Heat will be a bit of a problem, but not an insurmountable one. Just
make the CPU section of the system a great big wind tunnel and you
should do OK. You're only talking about 8 * 80W or so, about 640W.
Even with a handful of hard drives, memory, I/O and the general losses
due to power-supply inefficiency, you're still probably only looking at
about a kW of power. Again, it's a lot, but by no means impossible.
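
If you want to sanity-check that kW figure, here's the arithmetic as a
tiny C program. The 150 W for the non-CPU load and the 80% supply
efficiency are my own assumptions, not measured numbers:

#include <stdio.h>

int main(void)
{
    const double cpu_watts   = 8 * 80.0; /* eight Opterons at ~80 W each  */
    const double other_watts = 150.0;    /* drives, memory, I/O (a guess) */
    const double psu_eff     = 0.80;     /* assumed supply efficiency     */

    double dc_load = cpu_watts + other_watts;
    double at_wall = dc_load / psu_eff;  /* losses show up as extra draw  */

    printf("DC load:     %.0f W\n", dc_load); /* 790 W  */
    printf("At the wall: %.0f W\n", at_wall); /* ~988 W */
    return 0;
}

Call it a kilowatt at the wall, give or take.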

-------------
Tony Hill
hilla underscore 20 at yahoo dot ca
  #7
December 13th 03, 03:39 AM
Yousuf Khan

"Felger Carbon" wrote in message
k.net...
"Yousuf Khan" wrote
in message
Maybe once they go to 90nm the heat problem will be manageable at
that point?


Good point. See how well that worked on Prescott? ;-) ;-)


Yeah, but AMD already has SOI ready alongside its 90nm process.

Yousuf Khan


  #8
December 13th 03, 03:50 AM
Yousuf Khan

"Tony Hill" wrote in message
.com...
The more I think about it, the more difficult the task sounds. How do you
get 8 processors (along with their huge heatsinks) populated on a standard
sized motherboard,


Well, you start by scrapping any idea of using a standard sized
motherboard. It doesn't really matter what architecture you're
talking about, if you want 8 or more CPUs in a single system image,
you are NOT talking about a standard sized motherboard.


But still, it has to be a motherboard that will fit inside a
standard-footprint case, which in turn has to fit inside a standard-sized
rack. What is that footprint, 19" x 19"?

Probably more than 2. Your memory will be stuck on memory risers and
PCI-X slots will be on another board(s) still. Personally I'd
envision starting with 8 processor on a board, roughly split into 2
rows of 4. Beside each row of 4 processors you have a memory riser
board with 8 banks of memory per board (two banks per processor).
Than you can move further out and put PCI-X riser boards on either
side, or possibly just go straight to a backplane.


I can understand the PCI-X slots being put on a riser card, as they aren't
that speed-critical, but wouldn't putting the memory on risers add delays
to the memory path? Would registered DIMMs be enough to counteract those
kinds of delays?

Heat will be a bit of a problem, but not an insurmountable one. Just
make the CPU section of the system a great big wind tunnel and you
should do ok. You're only talking about 8 * 80W or so, about 640W.
Even with a handful of hard drives, memory, I/O and the general loses
due to power supply inefficiency you're still probably only looking at
about a KW of power. Again, it's a lot, but by no means impossible.


I think IBM makes some Power boards that hold up to 8 or 16 processors per
board. They are all very close together too. But these boards don't hold
any memory or I/O.

Yousuf Khan


  #9
December 13th 03, 09:41 PM
Bjørn-Ove Heimsund

Yousuf Khan wrote:
"Tony Hill" wrote in message
.com...
The more I think about it, the more difficult the task sounds. How do you
get 8 processors (along with their huge heatsinks) populated on a standard
sized motherboard,


Well, you start by scrapping any idea of using a standard sized
motherboard. It doesn't really matter what architecture you're
talking about, if you want 8 or more CPUs in a single system image,
you are NOT talking about a standard sized motherboard.


But still, it has to be a motherboard that will fit inside a
standard-footprint case, which in turn has to fit inside a standard-sized
rack. What is that footprint, 19" x 19"?


Many such systems put both memory and cpus on riser cards, so that the
motherboard is just some sort of bus-system without (many) on-board
components. Consequently, the board need not be very large, but
adding in all the risers, it may be somewhat tall.

--
Bjørn-Ove Heimsund
Centre for Integrated Petroleum Research
University of Bergen, Norway
  #10
December 13th 03, 11:32 PM
Tony Hill

On Sat, 13 Dec 2003 03:50:00 GMT, "Yousuf Khan"
wrote:
"Tony Hill" wrote in message
t.com...
Well, you start by scrapping any idea of using a standard sized
motherboard. It doesn't really matter what architecture you're
talking about, if you want 8 or more CPUs in a single system image,
you are NOT talking about a standard sized motherboard.


But still, it has to be a motherboard that will fit inside a
standard-footprint case, which in turn has to fit inside a standard-sized
rack. What is that footprint, 19" x 19"?


Your motherboard can be about 18" wide (19" rack with a tiny bit of
space on either side) but fairly deep. I'm not sure what the maximum
depth of a rack-mount case is off the top of my head, but it's more
than 19".

Probably more than 2. Your memory will be stuck on memory risers, and
the PCI-X slots will be on yet another board (or boards). Personally
I'd envision starting with 8 processors on a board, roughly split into
2 rows of 4. Beside each row of 4 processors you have a memory riser
board with 8 banks of memory per board (two banks per processor).
Then you can move further out and put PCI-X riser boards on either
side, or possibly just go straight to a backplane.


I can understand the PCI-X slots being put on a riser card, as they aren't
that speed-critical, but wouldn't putting the memory on risers add delays
to the memory path? Would registered DIMMs be enough to counteract those
kinds of delays?


Have you ever looked inside most 8P and even many 4P servers now?
They use memory riser cards all the time! You would probably need
slightly slower memory timings than you could use if you were putting
memory on the same board, but it can be done.

It may also even be possible to run HyperTransport between different
boards. I don't know enough about the protocol to know whether this
would work, but it may very well be possible to have the processors
and memory stuck on daughterboards. For example, you could have two
daughterboards, each with its accompanying memory, and the two middle
cc-HyperTransport channels going through the mainboard between them.

There is also always the option of doing an 8P Opteron server in a
non-glueless fashion, much like how the 8P Xeon servers are done
today. You could easily stick a pair of 4P boards in the same chassis
with a crossbar between the two boards (hypertransport even gives you
a nice high bandwidth/low latency I/O mechanism to attach that
crossbar to). This would give you a two-level NUMA setup, which would
be somewhat less than ideal, but it would still probably all work a
lot better than an 8P XeonMP system would.
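
To picture what "two-level" means, think of it as hop counts. Here's a
minimal sketch that models a hypothetical pair of 4P boards (each a
square of cc-HT links) joined by a single inter-board link, then does a
breadth-first search from CPU 0. The topology is my own guess for
illustration, not any actual Newisys or Sun design:

#include <stdio.h>
#include <string.h>

#define N 8

/* Board A is the square 0-1-3-2-0, board B is 4-5-7-6-4, and the
 * crossbar link joins node 0 to node 4. */
static const int adj[N][N] = {
    /*        0  1  2  3  4  5  6  7 */
    /* 0 */ { 0, 1, 1, 0, 1, 0, 0, 0 },
    /* 1 */ { 1, 0, 0, 1, 0, 0, 0, 0 },
    /* 2 */ { 1, 0, 0, 1, 0, 0, 0, 0 },
    /* 3 */ { 0, 1, 1, 0, 0, 0, 0, 0 },
    /* 4 */ { 1, 0, 0, 0, 0, 1, 1, 0 },
    /* 5 */ { 0, 0, 0, 0, 1, 0, 0, 1 },
    /* 6 */ { 0, 0, 0, 0, 1, 0, 0, 1 },
    /* 7 */ { 0, 0, 0, 0, 0, 1, 1, 0 },
};

int main(void)
{
    int hops[N], queue[N], head = 0, tail = 0;
    memset(hops, -1, sizeof hops);   /* -1 marks "not reached yet" */
    hops[0] = 0;
    queue[tail++] = 0;
    while (head < tail) {            /* plain breadth-first search */
        int u = queue[head++];
        for (int v = 0; v < N; v++)
            if (adj[u][v] && hops[v] < 0) {
                hops[v] = hops[u] + 1;
                queue[tail++] = v;
            }
    }
    for (int v = 0; v < N; v++)
        printf("CPU 0 -> CPU %d: %d hop(s)\n", v, hops[v]);
    return 0;
}

On-board neighbours come back as 1 or 2 hops, but the far corner of the
other board is 3; memory latency climbs the same staircase, which is
the "somewhat less than ideal" part.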

-------------
Tony Hill
hilla underscore 20 at yahoo dot ca
 



