A computer components & hardware forum. HardwareBanter


65nm news from Intel

#61 - September 3rd 04, 10:52 AM - Jan Vorbrüggen

| ODEs, to a great extent.

Parallelize over initial conditions?
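Parallelising over initial conditions is embarrassingly parallel when the runs are independent, as in a parameter sweep. A minimal sketch (a toy fixed-step Euler on dx/dt = -x; all names invented for illustration, not from any poster's code):

```python
from concurrent.futures import ProcessPoolExecutor

def integrate(x0, dt=0.001, steps=1000):
    """Fixed-step Euler for dx/dt = -x, starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (-x)
    return x

if __name__ == "__main__":
    # Independent trajectories: one per initial condition, no coupling,
    # so each can run on a separate core.
    initial_conditions = [0.5, 1.0, 2.0, 4.0]
    with ProcessPoolExecutor() as pool:
        finals = list(pool.map(integrate, initial_conditions))
    print(finals)
```

Each trajectory needs no data from the others, which is the only reason this case parallelises trivially.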

| A great deal of transaction processing.

Parallelize over transactions? OK, the commit phase needs to be
serialized.
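"Parallel transactions, serialized commit" fits in a few lines. A toy in-memory store (the names are invented for illustration, not from any real TP monitor):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

store = {"balance": 0}
commit_lock = threading.Lock()   # only the commit phase is serialized

def run_transaction(delta):
    # Work phase: runs in parallel with other transactions.
    staged = delta * 2           # stand-in for the real computation
    # Commit phase: serialized so the store stays consistent.
    with commit_lock:
        store["balance"] += staged

with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(100):
        pool.submit(run_transaction, 1)
# The executor's context manager waits for all submitted work.
print(store["balance"])
```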

| A great deal of I/O.

Why that?

| Event handling in GUIs.


Is that really limiting performance in any way, nowadays?

Jan
#63 - September 3rd 04, 11:11 AM - TOUATI Sid

Yousuf Khan wrote:
http://www.reuters.com/locales/c_new...toryID=6098883

Yousuf Khan




"This is evidence that Moore's Law continues," said Mark Bohr (Intel's
director of process architecture and integration).


I remember reading, some months ago, an interesting study by two Intel
researchers arguing that Moore's Law had reached its end: they said we
now face a real wall that cannot be crossed.

Can a company contradict itself like this (within one year)?

S

#64 - September 3rd 04, 11:14 AM - Nick Maclaren


In article ,
Jan Vorbrüggen writes:
|
| ODEs, to a great extent.
|
| Parallelize over initial conditions?

If that is what you are doing. If they are a component of a more
complex application, you will find that unhelpful :-)

| A great deal of transaction processing.
|
| Parallelize over transactions? OK, the commit phase needs to be
| serialized.

Think multi-component and parallelising WITHIN transactions. In
theory, it can often be done. In practice, doing it and maintaining
consistency is hard enough that it isn't. Why do you think that so
many electronic transactions are so slow, and often getting slower?

Note that this is not a CPU limitation as such, but is a different
level of parallelism. But it is the same class of problem.
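Parallelising within a transaction means overlapping its independent sub-steps; the hard part described above (keeping the components consistent) is exactly what this sketch omits. Hypothetical sub-steps, invented names:

```python
from concurrent.futures import ThreadPoolExecutor

def check_inventory(order):
    # Stand-in for a query to one remote component.
    return {"in_stock": True}

def authorize_payment(order):
    # Stand-in for a call to another, independent component.
    return {"authorized": True}

def transaction(order):
    # The two sub-steps share no state, so their latencies can overlap...
    with ThreadPoolExecutor(max_workers=2) as pool:
        inv = pool.submit(check_inventory, order)
        pay = pool.submit(authorize_payment, order)
        ok = inv.result()["in_stock"] and pay.result()["authorized"]
    # ...but the commit decision still has to see both results together.
    return ok
```

Deciding which of a couple of dozen scattered steps are truly independent is the consistency problem; overlapping them is the easy part shown here.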

| A great deal of I/O.
|
| Why that?

Incompetence and historical, unparallelisable specifications.

| Event handling in GUIs.
|
| Is that really limiting performance in any way, nowadays?

Yes. I am sent gibbering up the wall by it, and am not alone in
that. The reason is that I am using some fairly ancient machines
with more modern software. Answers:

Never upgrade software, and don't connect to parts of the net
that need newer versions.

Upgrade your system. Oops. A few years down the line, you
will have the same problem. And remember that Not-Moore's Law
has reached the end of the line - so, while I can upgrade by a
healthy factor and remain serial, people with the latest and
greatest systems can't.


Regards,
Nick Maclaren.
#66 - September 3rd 04, 11:54 AM - Paul Repacholi

Stefan Monnier writes:

| Getting back to the issue of multiprocessors for "desktops" or even
| laptops: I agree that parallelizing Emacs is going to be
| excruciatingly painful so I don't see it happening any time soon.
| But that's not really the question.


In fact, Emacs IS a good candidate. Very little context that is not
buffer or window/frame local. Going into that swamp is another issue!
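If almost all editor state really is buffer-local, per-buffer parallelism is the natural split. A toy sketch of the idea (a made-up per-buffer pass, not actual Emacs internals):

```python
from concurrent.futures import ThreadPoolExecutor

buffers = {
    "a.txt": "hello world",
    "b.txt": "foo bar baz",
}

def fontify(text):
    """Stand-in for a per-buffer pass such as syntax highlighting."""
    return text.upper()

# Buffer-local state means the passes cannot interact, so they can
# run concurrently, one task per buffer.
with ThreadPoolExecutor() as pool:
    results = dict(zip(buffers, pool.map(fontify, buffers.values())))
```

The swamp is the context that is not buffer-local (global variables, the kill ring, hooks), which this sketch simply assumes away.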

--
Paul Repacholi 1 Crescent Rd.,
+61 (08) 9257-1001 Kalamunda.
West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
#67 - September 3rd 04, 01:24 PM - Jan Vorbrüggen

| Think multi-component and parallelising WITHIN transactions. In
| theory, it can often be done. In practice, doing it and maintaining
| consistency is hard enough that it isn't.


What kind of transaction - by itself - would take long enough to warrant
that?

| Why do you think that so
| many electronic transactions are so slow, and often getting slower?


I wonder myself. I put it down to general incompetence - in particular,
because so much data is unnecessarily slung around over none-too-fast
networks. Of course, anything XML-based will only make things worse.

| Event handling in GUIs.
|
| Is that really limiting performance in any way, nowadays?

| Yes. I am sent gibbering up the wall by it, and am not alone in
| that. The reason is that I am using some fairly ancient machines
| with more modern software.


Ancient as in a 30 MHz (IIRC) 68040 running NeXtStep - which had one of
the most responsive UIs I've ever seen? That is to say: any performance
problem with UIs is a problem of design and/or implementation, not
inherent to UIs as such. Not that that helps you any if the application
you are using is built on such a UI... cue WIN32 woes...

Jan
#68 - September 3rd 04, 01:31 PM - Sander Vesik

In comp.arch Nick Maclaren wrote:
| In article , Grumble wrote:
| | spinlock wrote:
| | | We are on track for mass shipment of a billion (that's with a B)
| | | transistor die by '08.
| |
| | Who's "we" ?
|
| A good question. But note that "by '08" includes "in 2005".
|
| | I have read that there will be ~1.7e9 transistors in Montecito.
| | Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
| | transistor count. Montecito is expected next year.
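A back-of-the-envelope check of the ~90% figure (assuming classic 6-transistor SRAM cells; tags, ECC and redundancy are ignored, which is why the data arrays alone come out lower):

```python
# 2 x 1 MB L2 plus 2 x 12 MB L3, as quoted above.
cache_bytes = (2 * 1 + 2 * 12) * 2**20
cache_bits = cache_bytes * 8
sram_transistors = cache_bits * 6        # 6T SRAM cell per bit
total_transistors = 1.7e9                # the quoted Montecito count
share = sram_transistors / total_transistors
print(round(share, 2))                   # data arrays alone: ~0.77
```

Tags, ECC, and redundant rows/columns add transistors on top of the data arrays, which is how the cache's share climbs toward the quoted ~90%.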


| By whom is it expected? And how is it expected to appear? Yes,
| someone will wave a chip at IDF and claim that it is a Montecito,
| but are you expecting it to be available for internal testing,
| to all OEMs, to special customers, or on the open market?

Is any kind of Itanium actually available on the open market (and
I mean open market for new chips, not resale of systems)?


--
Sander

+++ Out of cheese error +++
#69 - September 3rd 04, 01:36 PM - Sander Vesik

In comp.arch Nick Maclaren wrote:
| Event handling in GUIs.

GUIs and event processing, and the inability to trivially allow for
at least top-window-level total parallelism, is just a complete screwup.
It's made worse by middleware (like, say, Java and Swing) exporting
such braindeadness to application level.

So instead of "write to tolerate or take advantage of parallelism
if present", everybody writes for "serial everything in the GUI is the
one and only true way".
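What "tolerate parallelism" usually degenerates into instead: the one serial event thread hands long work to a worker so it can keep dispatching. A toy sketch (invented structure, not any real toolkit's API):

```python
import queue
import threading

events = queue.Queue()   # the single serial GUI event queue
work = queue.Queue()     # long jobs handed off the event thread

def worker(results):
    # Slow work runs here, off the event thread.
    while True:
        job = work.get()
        if job is None:
            break
        results.append(job())

def event_loop():
    # The event thread only dispatches; it never blocks on slow work.
    while True:
        ev = events.get()
        if ev is None:
            break
        work.put(lambda ev=ev: ev["id"])

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for i in range(3):
    events.put({"kind": "click", "id": i})
events.put(None)
event_loop()
work.put(None)   # after the loop drains, tell the worker to stop
t.join()
```

Top-window-level parallelism would go further: one such loop per top-level window instead of one for the whole application.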




--
Sander

+++ Out of cheese error +++
#70 - September 3rd 04, 01:55 PM - Nick Maclaren


In article ,
Jan Vorbrüggen writes:
| Think multi-component and parallelising WITHIN transactions. In
| theory, it can often be done. In practice, doing it and maintaining
| consistency is hard enough that it isn't.
|
| What kind of transaction - by itself - would take long enough to warrant
| that?

Anything that is built up of a couple of dozen steps, with the
various components scattered from here to New Zealand!

In practice, the cumulative latency issue bites earlier, but that
one is imposed by physical limits. Again, I am not denying the
overriding cause of incompetence.

| Why do you think that so
| many electronic transactions are so slow, and often getting slower?
|
| I wonder myself. I put it down to general incompetence - in particular,
| because so much data is unnecessarily slung around over none-too-fast
| networks. Of course, anything XML-based will make things only worse.

There is no doubt that General Incompetence is in overall command,
but the question is what form the incompetence takes :-)

| Ancient as in a 30 MHz (IIRC) 68040 running NeXtStep - which is the most
| responsive UIs I've ever seen? That is to say: any performance problem
| with UIs is a problem of design and/or implementation, not the problem
| as such. Not that that helps you any if the application you are using
| is programmed on such a UI...cue WIN32 woes...

No :-(

Ancient as in a 250 MHz processor with lashings of memory, and the
need to run Netscape 6 or beyond, because of the ghastly Web pages
I need to access.

Look, I was asked

What are some examples of important and performance-limited
computation tasks that aren't run in parallel?

not WHY are they not run in parallel, nor WHY they are performance-
limited, nor WHETHER that is unavoidable. As you point out, it
is due to misdesigns at various levels. But it IS an example of
what I was asked for.


Regards,
Nick Maclaren.
 







