65nm news from Intel

#41 - September 2nd 04, 09:54 AM - Nick Maclaren

In article , Grumble wrote:
spinlock wrote:

We are on track for mass shipment of a billion (that's with a B)
transistor die by '08.


Who's "we" ?


A good question. But note that "by '08" includes "in 2005".

I have read that there will be ~1.7e9 transistors in Montecito.
Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
transistor count. Montecito is expected next year.


By whom is it expected? And how is it expected to appear? Yes,
someone will wave a chip at IDF and claim that it is a Montecito,
but are you expecting it to be available for internal testing,
to all OEMs, to special customers, or on the open market?


Regards,
Nick Maclaren.
#42 - September 2nd 04, 10:01 AM - Nick Maclaren

In article ,
Russell Wallace wrote:

At least as far as your typical spaghetti C++ is concerned, yeah, not
going to happen anytime in the near future.


Sigh. You are STILL missing the point. Spaghetti C++ may be about
as bad as it gets, but the SAME applies to the cleanest of Fortran,
if it is using the same programming paradigms. I can't get excited
over factors of 5-10 difference in optimisability, when we are
talking about improvements over decades.

This has the consequence that large-scale parallelism is not a viable
general-purpose architecture until and unless we move to a paradigm
that isn't so intractable.


And yet, by that argument there should be no market for the big
parallel servers and supercomputers; yet there is. The solution is
that for things that need the speed, people just write the parallel
code by hand.


Sigh. Look, I am in that area. If it were only so simple :-(

If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
out is a chip with 1024 individually slow cores, then those games will
be written to use 1024-way parallelism, just as weather forecasting
and quantum chemistry programs are today. Ditto for Photoshop, 3D
modelling, movie editing, speech recognition etc. There's certainly no
shortage of parallelism in the problem domains. The reason things like
games don't use parallel code today whereas weather forecasting does
isn't because of any software issue, it's because gamers don't have
the money to buy massively parallel supercomputers whereas
organizations doing weather forecasting do. When that changes, so will
the software.


Oh, yeah. Ha, ha. I have been told that more-or-less continually
since about 1970. Except for the first two thirds of your first
sentence, it is nonsense.

Not merely do people sweat blood to get such parallelism, they
often have to change their algorithms (sometimes to ones that are
less desirable, such as being less accurate), and even then only
SOME problems can be parallelised.


Regards,
Nick Maclaren.
#43 - September 2nd 04, 11:16 AM - Grumble

Nick Maclaren wrote:

Grumble wrote:

spinlock wrote:

We are on track for mass shipment of a billion (that's with a B)
transistor die by '08.


Who's "we" ?


A good question. But note that "by '08" includes "in 2005".


I took "by 2008" to mean "sometime in 2008". Otherwise he would have
said "by 2005" or "by 2006", don't you think?

I have read that there will be ~1.7e9 transistors in Montecito.
Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
transistor count. Montecito is expected next year.


By whom is it expected? And how is it expected to appear? Yes,
someone will wave a chip at IDF and claim that it is a Montecito,
but are you expecting it to be available for internal testing,
to all OEMs, to special customers, or on the open market?


In November 2003, Intel's roadmap claimed Montecito would appear in
2005. Six months later, Otellini mentioned 2005 again. In June 2004,
Intel supposedly showcased Montecito dies and claimed that testing had
begun.

http://www.theinquirer.net/?article=15917
http://www.xbitlabs.com/news/cpu/dis...219125800.html
http://www.xbitlabs.com/news/cpu/dis...619180753.html

Perhaps Intel is being overoptimistic, but, as far as I understand, they
claim Montecito will be ready in 2005.

--
Regards, Grumble
#44 - September 2nd 04, 12:02 PM - Nick Maclaren


In article , Grumble writes:
|
| By whom is it expected? And how is it expected to appear? Yes,
| someone will wave a chip at IDF and claim that it is a Montecito,
| but are you expecting it to be available for internal testing,
| to all OEMs, to special customers, or on the open market?
|
| In November 2003, Intel's roadmap claimed Montecito would appear in
| 2005. 6 months later, Otellini mentioned 2005 again. In June 2004, Intel
| supposedly showcased Montecito dies, and claimed that testing had begun.
|
| Perhaps Intel is being overoptimistic, but, as far as I understand, they
| claim Montecito will be ready in 2005.

I am aware of that. Given that Intel failed to reduce power when
moving the Pentium 4 to 90 nm, that implies Montecito will need around
200 watts. Given that HP have already produced a dual-CPU package,
they will have boards rated for that. Just how many other vendors
will?

Note that Intel will lose more face if they produce the Montecito
and OEMs respond by dropping their IA64 lines than if they make
it available only on request to specially favoured OEMs.


Regards,
Nick Maclaren.
#45 - September 2nd 04, 01:21 PM - Alex Johnson

Nick Maclaren wrote:
Montecito is expected next year.


By whom is it expected? And how is it expected to appear? Yes,
someone will wave a chip at IDF and claim that it is a Montecito,
but are you expecting it to be available for internal testing,
to all OEMs, to special customers, or on the open market?


By Intel, and by everyone who has been believing their repeated,
unwavering claims that mid-2005 will see commercial revenue shipments
of Montecito. Based on all the past IPF releases, I expect a "launch"
in June '05, with customers running systems in their own environments
around August. There should be Montecito demonstrations at this coming
IDF; there were wafers shown at the last one. If my anticipated
schedule is correct, OEMs will have test chips soon.

Alex
--
My words are my own. They represent no other; they belong to no other.
Don't read anything into them or you may be required to compensate me
for violation of copyright. (I do not speak for my employer.)

#46 - September 2nd 04, 03:15 PM - Russell Wallace

On 2 Sep 2004 09:01:35 GMT, (Nick Maclaren) wrote:

Sigh. You are STILL missing the point. Spaghetti C++ may be about
as bad as it gets, but the SAME applies to the cleanest of Fortran,
if it is using the same programming paradigms. I can't get excited
over factors of 5-10 difference in optimisability, when we are
talking about improvements over decades.


"Cleanest of Fortran" usually means vector-style code, which is a
reasonable target for autoparallelization. I'll grant you if you took
a pile of spaghetti C++ and translated line-for-line to Fortran, the
result wouldn't autoparallelize with near-future technology any more
than the original did.
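
For instance, by "vector-style" I mean something like this (just a toy
sketch, names invented): every iteration touches only its own
elements, so a compiler or runtime can split it across cores or vector
lanes without heroic analysis:

    /* Each iteration is independent of the others, so the loop can
       be carved up across vector lanes or cores mechanically. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }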

And yet, by that argument there should be no market for the big
parallel servers and supercomputers; yet there is. The solution is
that for things that need the speed, people just write the parallel
code by hand.


Sigh. Look, I am in that area. If it were only so simple :-(


I didn't claim it was simple. I claimed that, even though it's
complicated, it still happens.

If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
out is a chip with 1024 individually slow cores, then those games will
be written to use 1024-way parallelism, just as weather forecasting
and quantum chemistry programs are today. Ditto for Photoshop, 3D
modelling, movie editing, speech recognition etc. There's certainly no
shortage of parallelism in the problem domains. The reason things like
games don't use parallel code today whereas weather forecasting does
isn't because of any software issue, it's because gamers don't have
the money to buy massively parallel supercomputers whereas
organizations doing weather forecasting do. When that changes, so will
the software.


Oh, yeah. Ha, ha. I have been told that more-or-less continually
since about 1970. Except for the first two thirds of your first
sentence, it is nonsense.


So you claim weather forecasting and quantum chemistry _don't_ use
parallel processing today? Or that gamers would be buying 1024-CPU
machines today if Id would only get around to shipping parallel code?

Not merely do people sweat blood to get such parallelism, they
often have to change their algorithms (sometimes to ones that are
less desirable, such as being less accurate), and even then only
SOME problems can be parallelised.


I didn't claim sweating blood and changing algorithms weren't
required. However, I'm not aware of any CPU-intensive problems of
practical importance that _can't_ be parallelized; do you have any
examples of such?

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
#47 - September 2nd 04, 03:39 PM - Nick Maclaren


In article ,
(Russell Wallace) writes:
|
| "Cleanest of Fortran" usually means vector-style code, which is a
| reasonable target for autoparallelization. ...

Not in my world, it doesn't. There are lots of other extremely
clean codes.

| Oh, yeah. Ha, ha. I have been told that more-or-less continually
| since about 1970. Except for the first two thirds of your first
| sentence, it is nonsense.
|
| So you claim weather forecasting and quantum chemistry _don't_ use
| parallel processing today? Or that gamers would be buying 1024-CPU
| machines today if Id would only get around to shipping parallel code?

I am claiming that a significant proportion of the programs don't.
In a great many cases, people have simply given up attempting the
analyses, and have moved to less satisfactory ones that can be
parallelised. In some cases, they have abandoned whole lines of
research! Your statement was that the existing programs would
be parallelised:

then those games will be written to use 1024-way parallelism,
just as weather forecasting and quantum chemistry programs are
today

| Not merely do people sweat blood to get such parallelism, they
| often have to change their algorithms (sometimes to ones that are
| less desirable, such as being less accurate), and even then only
| SOME problems can be parallelised.
|
| I didn't claim sweating blood and changing algorithms weren't
| required. However, I'm not aware of any CPU-intensive problems of
| practical importance that _can't_ be parallelized; do you have any
| examples of such?

Yes. Look at ODEs for one example that is very hard to parallelise.
Anything involving sorting is also hard to parallelise, as are many
graph-theoretic algorithms. Ones that are completely hopeless are
rarer, but exist - take a look at the "Spectral Test" in Knuth for
a possible candidate.

The characteristic of the most common class of unparallelisable
algorithm is that it is iterative: each step is small (i.e.
effectively scalar), yet it makes global changes, and the cost of
doing so is very small. This means that steps are never
independent, and are therefore serialised.
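
As a trivial sketch of the shape I mean (made-up names, an explicit
Euler step for dx/dt = f(x)): each pass needs the result of the
previous one, and the work per pass is tiny, so the time steps cannot
be spread across processors no matter how clever the compiler is:

    /* Step k+1 cannot start until step k has finished, because x
       feeds straight back into itself, and the work per step is far
       too small to split up. */
    double integrate(double x, double dt, int steps,
                     double (*f)(double))
    {
        for (int k = 0; k < steps; k++)
            x = x + dt * f(x);   /* global state updated every step */
        return x;
    }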

What I can't say is how many CPU-intensive problems of practical
importance are intrinsically unparallelisable - i.e. they CAN'T
be converted to a parallelisable form by changing the algorithms.
But that is not what I claimed.


Regards,
Nick Maclaren.
#49 - September 2nd 04, 05:34 PM - Scott Moore

Russell Wallace wrote:

On 1 Sep 2004 19:50:09 GMT, (Nick Maclaren) wrote:


There is effectively NO chance of automatic parallelisation working
on serial von Neumann code of the sort we know and, er, love. Not
in the near future, not in my lifetime and not as far as anyone can
predict. Forget it.



At least as far as your typical spaghetti C++ is concerned, yeah, not
going to happen anytime in the near future.


The statement is wrong in any case. C can be translated to hardware
(which is de facto parallelism) by "constraints", i.e. by refusing to
translate its worst features (look up SystemC, C-to-hardware and
similar). Other languages can do it without constraints. Finally,
any code, no matter how bad, could be so translated by executing it
(simulating it) and translating what it does dynamically rather than
statically. The simulation can then give the programmer a report
of what was not executed, and the programmer extends the test cases
until all code has been so translated.
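
To give a feel for it, a fragment in this style (an invented example;
fixed trip count, no heap pointers, no recursion) is the sort of thing
such flows accept, and the tool is free to unroll it into eight
multiply-accumulate units working in parallel:

    /* Constrained C: compile-time bounds, no malloc, no pointer
       arithmetic, so the loop can be flattened into parallel
       hardware. */
    #define TAPS 8

    int fir(const int coeff[TAPS], const int sample[TAPS])
    {
        int acc = 0;
        for (int i = 0; i < TAPS; i++)
            acc += coeff[i] * sample[i];
        return acc;
    }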


This has the consequence that large-scale parallelism is not a viable
general-purpose architecture until and unless we move to a paradigm
that isn't so intractable.



And yet, by that argument there should be no market for the big
parallel servers and supercomputers; yet there is. The solution is
that for things that need the speed, people just write the parallel
code by hand.

If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
out is a chip with 1024 individually slow cores, then those games will
be written to use 1024-way parallelism, just as weather forecasting
and quantum chemistry programs are today. Ditto for Photoshop, 3D
modelling, movie editing, speech recognition etc. There's certainly no
shortage of parallelism in the problem domains. The reason things like
games don't use parallel code today whereas weather forecasting does
isn't because of any software issue, it's because gamers don't have
the money to buy massively parallel supercomputers whereas
organizations doing weather forecasting do. When that changes, so will
the software.



--
Samiam is Scott A. Moore

Personal web site: http:/www.moorecad.com/scott
My electronics engineering consulting site:
http://www.moorecad.com
ISO 7185 Standard Pascal web site: http://www.moorecad.com/standardpascal
Classic Basic Games web site: http://www.moorecad.com/classicbasic
The IP Pascal web site, a high performance, highly portable ISO 7185 Pascal
compiler system: http://www.moorecad.com/ippas

Being right is more powerful than large corporations or governments.
The right argument may not be pervasive, but the facts eventually are.
 



