65nm news from Intel
#71 - September 3rd 04, 01:58 PM - Nick Maclaren


In article ,
Sander Vesik writes:
|
| GUIs and event processing, and the inability to trivially allow for
| at least top-window-level total parallelism, is just a complete screwup.
| It's made worse by middleware (like, say, Java and Swing) exporting
| such braindeadness to the application level.
|
| So instead of "write to tolerate or take advantage of parallelism
| if present", everybody writes for "serial everything in the GUI is the
| one and only true way".

I should like to be able to disagree, but regret that I am unable
to. The one niggle that I have is that a FEW applications do
allow for parallelism at the top level, which is, as you say,
trivial.

There is no reason why most of the underlying morass ("layers"
implies a degree of structure that it does not possess) should
not be fully asynchronous and parallel. Well, no good reason.
But it isn't in most modern designs.


Regards,
Nick Maclaren.
#72 - September 3rd 04, 02:33 PM - Robert Myers

Rupert Pigott wrote:

Robert Myers wrote:

[SNIP]

The smallest unit that anyone will ever program for non-embedded
applications will support (I hesitate to guess how many) execution
pipes, but certainly more than one. Single-pipe programming, using
tools appropriate for single-pipe programming, will come to seem just
as natural as doing physics without vectors and tensors.

The fact that this reality is finally percolating into the lowly but
ubiquitous PC is what I'm counting on for magic.



I really wouldn't hold your breath. Look how long it took for SMP to
become ubiquitous with major league UNIXen... Has it had much of an
impact on the code base at large? IMO: it hasn't.

UNIX had three stumbling blocks:

1) UNIX does let you make use of multiple CPUs at a coarse-grained level
with stuff like pipes (i.e. good enough); a minimal sketch of this follows
the list.

2) The predominance of single-threaded languages that promote
single-threaded thinking.

3) Libraries designed for single-threaded, non-reentrant usage.
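
To make point 1) concrete, here is a minimal sketch (mine, not part of the
original post) of pipe-level coarse-grained parallelism: a producer and a
consumer run as separate processes joined by a pipe, which an SMP kernel is
free to schedule on different CPUs. The per-stage "work" is a placeholder.

/* Two-stage pipeline: producer (child) and consumer (parent) overlap. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: producer stage */
        close(fd[0]);
        FILE *out = fdopen(fd[1], "w");
        for (long i = 0; i < 1000000; i++)
            fprintf(out, "%ld\n", i);     /* CPU-bound work would go here */
        fclose(out);
        _exit(0);
    }

    close(fd[1]);                         /* parent: consumer stage */
    FILE *in = fdopen(fd[0], "r");
    long n, sum = 0;
    while (fscanf(in, "%ld", &n) == 1)
        sum += n;                         /* its own CPU-bound work */
    fclose(in);
    waitpid(pid, NULL, 0);
    printf("sum = %ld\n", sum);
    return 0;
}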


I wouldn't have the slightest clue, were it not for GNU. As it is, I
have a clue, but just barely. What I see happening is that, if there is
a better way to do business, people want to find a way to get there.
Given the millions of lines of code that are written in a language and
with an OS descended from ones written for the PDP-11, fundamental
change is very hard. To make change, though, you first have to want to
make change, and I'm optimistic enough to believe that the will is there.

By all accounts Windows NT suffers from the same problems, but to be fair
it has supported threading for a very long time and MS has been pushing
it very hard too. The codebase is positively riddled with threads by
comparison to UNIX, but I haven't seen much that is genuinely scalable.


Why should Microsoft make the necessary investment? The truly obscene
margin they are making on an OS they have foisted on the world by
illegal means keeps the empire running. Because they need that margin
to keep the empire running, it is never going to be invested in the
kinds of radical rework that would be needed to fix the supposedly
already fixed Windows NT/2000/XP stuff (as opposed to the Windows
95/98/ME stuff that even Microsoft effectively admits is hopelessly broken).

The fact that Microsoft has Tony Hoare and Leslie Lamport on staff and
_still_ manages to produce such horrifying stuff argues for your point
of view, and I think Microsoft is a dead waste to the world in terms of
making any kind of fundamental contribution to software.

I'm prematurely gloating over the fact that Microsoft isn't going to do
an IBM redux. IBM has software still in use so ancient it should be in
a Museum of Natural History. IBM got that software situated at a time
when the style of business that made such hopelessly proprietary and
hermetic software possible dominated the industry. IBM also understood
that, no matter what it took to avoid them, surprises were unacceptable.
Microsoft hasn't understood that or practically anything other than
the way that IBM's hermetic, proprietary software has, in the end, been
its passport to survival.

I don't believe that some kid will have a stunning insight as a result
of having a 2P or 4P NT/Linux box sitting on their desk either. Such boxes
have been around a *long* time and in the hands of some very clever
people, who have already cleaned out the low-hanging fruit and are about
a third of the way up the tree at the moment.


Stunning insights are hard to predict.

The lesson of the fundamental disciplines where I should have some
capacity to judge progress is that it is the kids who make the
breakthroughs. Nothing that I have learned about the world of software,
mostly as an outsider, would suggest to me that it works any differently
in that respect from physics or mathematics.

I think hard graft is needed; perhaps having more boxes in more hands
will help increase the volume of hard graft, and in turn that might get
us a result.


The respectable argument at the core of what I have said is somewhat
akin to the arguments that free market theoreticians make. A modest
number of people, no matter how smart, are unlikely to come up with the
best solution to a problem like planning a national economy. Turn a
large number of even modestly endowed free agents loose, though, and
amazing things will happen.

RM

#73 - September 3rd 04, 04:09 PM - Mitch Alsup

Robert Myers wrote in message news:EFLZc.105418$Fg5.9550@attbi_s53...
Stefan Monnier wrote:

Your second CPU will be mostly idle, of course, but so is the first CPU
anyway ;-)


I sometimes think: no one experienced the microprocessor revolution.


Indeed. One thing we noticed in the RISC revolution (may it rest in
peace) was that a dual processor workstation did not get an application
done any faster, but it made the person interacting with the application
a lot happier!

One of the big benefits of a dual processor that is difficult to measure
is the improvement in hand-eye coordination with the application. Let's
say a heavy CAD application is using 10% of one CPU for keyboard and mouse
activity, and 100% of the other CPU for application processing. This
dual-processor arrangement gives much better hand-(KB-app-graphics)-eye
coordination than a single CPU with 110% of the processing power.
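
A minimal sketch of that arrangement on a present-day system (mine, not
Mitch's): one thread dedicated to interactive work and one to the heavy
computation, pinned to different CPUs. It assumes Linux and the
non-portable pthread_setaffinity_np(); ui_loop and compute_loop are
illustrative placeholders.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ui_loop(void *arg)        /* light: keyboard/mouse handling */
{
    pin_to_cpu(0);
    /* ...poll input, update the cursor, stay responsive... */
    return NULL;
}

static void *compute_loop(void *arg)   /* heavy: the CAD number-crunching */
{
    pin_to_cpu(1);
    /* ...long-running application processing... */
    return NULL;
}

int main(void)
{
    pthread_t ui, worker;
    pthread_create(&ui, NULL, ui_loop, NULL);
    pthread_create(&worker, NULL, compute_loop, NULL);
    pthread_join(ui, NULL);
    pthread_join(worker, NULL);
    return 0;
}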
#74 - September 3rd 04, 04:42 PM - Stephen Fuld


"Nick Maclaren" wrote in message
...

snip

Look, I was asked

What are some examples of important and performance-limited
computation tasks that aren't run in parallel?

not WHY are they not run in parallel, nor WHY they are performance-
limited, nor WHETHER that is unavoidable. As you point out, it
is due to misdesigns at various levels. But it IS an example of
what I was asked for.


OK, let me rephrase the original question to better reflect what I think
the OP was asking.

What are some examples of important, CPU-bound applications that are
limited by not being parallelized?

I mean this to eliminate answers that depend on improving the latency
between the UK and New Zealand, which is a different sort of research
program. :-) I also mean to eliminate transaction processing, at least as
most commercial systems use it, since it is already highly parallel between
transactions and very few individual transactions use enough CPU to benefit
much from within-transaction CPU parallelism. I also mean to eliminate I/O,
as that has been parallelized for decades (as you well know).

So from your original list we still have ODEs and perhaps UIs, though there
the benefit may be limited to relatively simple things like what was
mentioned earlier - dedicating a CPU to user interaction to assure
responsiveness. Are there others?

--
- Stephen Fuld
e-mail address disguised to prevent spam


#75 - September 3rd 04, 04:50 PM - Yousuf Khan

TOUATI Sid wrote:
"This is evidence that Moore's Law continues," said Mark Bohr (Intel's
director of process architecture and integration).


I remember reading, some months ago, an interesting study by two
Intel researchers who had shown the end of Moore's Law: they said that
we now face a real wall that we cannot cross.

Can a company contradict itself like this (within one year)?


Only if one of the company reps is an executive. :-)

Yousuf Khan


#76 - September 3rd 04, 05:29 PM - Robert Myers

Stephen Fuld wrote:

snip


So from your original list we still have ODEs and perhaps UIs, though there
the benefit may be limited to relatively simple things like what was
mentioned earlier - dedicating a CPU to user interaction to assure
responsiveness. Are there others?


I take the question to be: how many applications have been created for
which appropriate hardware doesn't yet exist?

In the broad class of applications that will spring into existence when
appropriate resources become available, I would place those that depend
on brute force search.

RM

#77 - September 3rd 04, 05:39 PM - Nick Maclaren


In article ,
"Stephen Fuld" writes:
|
| OK, let me rephrase the original question to better reflect what I think
| the OP was asking.
|
| What are some examples of important, CPU-bound applications that are
| limited by not being parallelized?
|
| I mean this to eliminate answers that depend on improving the latency
| between the UK and New Zealand, which is a different sort of research
| program. :-) I also mean to eliminate transaction processing, at least as
| most commercial systems use it, since it is already highly parallel between
| transactions and very few individual transactions use enough CPU to benefit
| much from within-transaction CPU parallelism. I also mean to eliminate I/O,
| as that has been parallelized for decades (as you well know).

Actually, no, it doesn't eliminate it. I am not an expert on what
is normally known as transaction processing, but most of the things
that I have seen that fall under that heading have various steps. Now, in
many cases, many of those steps could be done in parallel, but aren't
(for the reasons I gave). Locking is all very well for some problems,
but not for others; Alpha-style LL/SC (load-locked/store-conditional)
designs can be applied more generally; and so on.
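
Alpha's LL/SC primitives are not directly exposed in C, but a
compare-and-swap retry loop gives the same lock-free flavour. A minimal
sketch (mine, not Nick's) using C11 atomics, with several threads updating
a shared counter and never taking a lock:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        long old = atomic_load(&counter);
        /* retry until no other thread changed the value under us;
           a failed CAS reloads 'old' with the current value */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("%ld\n", (long)atomic_load(&counter));  /* expect 4000000 */
    return 0;
}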

Also, some I/O has been parallelised for decades, but modern forms
typically aren't. TCP/IP over Ethernet is usually dire, and that is
today's de facto standard.

If, however, you are referring to problem areas where there is no
known way of parallelising them, and yet they are bottlenecks, I
should have to think harder. I am certain that there are some, but
(as I said) a lot of people will have abandoned them as intractable.
So I should have to think about currently untackled requirements.

| So from your original list we still have ODEs and perhaps UIs, though there
| the benefit may be limited to relatively simple things like what was
| mentioned earlier - dedicating a CPU to user interaction to assure
| responsiveness. Are there others?

Protein folding comes close. It is parallelisable in space, but
not easily in time. There are quite a lot of problems like that.
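
A one-line way to see that space/time asymmetry (my gloss, not part of
the original post): an explicit time-stepped simulation of M interacting
particles advances as

    x_{t+\Delta t} = x_t + \Delta t \, f(x_t), \qquad
    f(x_t) = \big(f_1(x_t), \ldots, f_M(x_t)\big)

The M components of f (the forces on each particle at a fixed time) can be
evaluated concurrently, which is the spatial parallelism; but x_{t+\Delta t}
cannot be formed until x_t is complete, so the time dimension stays
sequential.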



Regards,
Nick Maclaren.
#78 - September 3rd 04, 07:12 PM - Russell Wallace

On 3 Sep 2004 16:39:00 GMT, (Nick Maclaren) wrote:

In article ,
"Stephen Fuld" writes:
|
| OK, let me rephrase the original question to better reflect what I think
| the OP was asking.
|
| What are some examples of important, CPU-bound applications that are
| limited by not being parallelized?


Yes, that would be a better way of phrasing it.

| I mean this to eliminate answers that depend on improving the latency
| between the UK and New Zealand, which is a different sort of research
| program. :-)


Right, I'll agree it's an answer to the question I asked, but it's
not the sort of problem I'm interested in here.

If, however, you are referring to problem areas where there is no
known way of parallelising them, and yet they are bottlenecks, I
should have to think harder. I am certain that there are some, but
(as I said) a lot of people will have abandoned them as intractable.
So I should have to think about currently untackled requirements.


Okay.

Protein folding comes close. It is parallelisable in space, but
not easily in time. There are quite a lot of problems like that.


Speaking of which: It seems to me that a big problem with protein
folding and similar jobs (e.g. simulating galaxy collisions) is:

- If you want N digits of accuracy in the numerical calculations, you
just need to use N digits of numerical precision, for O(N^2)
computational effort.

- However, quantizing time produces errors; if you want to reduce
these to N digits of accuracy, you need to use exp(N) time steps.

Is this right? Or is there any way to put a bound on the total error
introduced by time quantization over many time steps?

(Fluid dynamics simulation has this problem too, but in both the space
and time dimensions; I suppose there's definitely no way of solving it
for the space dimension, at least, other than by brute force.)
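
For what it is worth, a standard bound does exist (my gloss, not part of
the thread): for a one-step method of order p with step size h on [0, T],
and f Lipschitz with constant L, the accumulated time-discretization error
satisfies roughly

    |e(T)| \le \frac{C}{L} \left( e^{LT} - 1 \right) h^{p}

So pushing the error down to 10^{-N} needs h of order 10^{-N/p}, i.e.
roughly 10^{N/p} steps: the step count does grow exponentially in the
number of digits demanded, with the method's order setting the base.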

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
#79 - September 3rd 04, 07:13 PM - Alex Johnson

Sander Vesik wrote:
Is any kind of Itanium actually available on the open market (and
I mean the open market for new chips, not resale of systems)?


Searching PriceWatch, I got 11 offers for boxed Itanium 2 CPUs.

--
My words are my own. They represent no other; they belong to no other.
Don't read anything into them or you may be required to compensate me
for violation of copyright. (I do not speak for my employer.)

#80 - September 3rd 04, 09:01 PM - Nick Maclaren

In article ,
Russell Wallace wrote:

Speaking of which: It seems to me that a big problem with protein
folding and similar jobs (e.g. simulating galaxy collisions) is:

- If you want N digits of accuracy in the numerical calculations, you
just need to use N digits of numerical precision, for O(N^2)
computational effort.


More or less.

- However, quantizing time produces errors; if you want to reduce
these to N digits of accuracy, you need to use exp(N) time steps.

Is this right? Or is there any way to put a bound on the total error
introduced by time quantization over many time steps?


There are ways, but they aren't very reliable. The worse problem
is that many such analyses are numerically unstable (a.k.a. chaotic),
and that the number of digits you need in your calculations is
exponential in the number of time steps. Also, reducing the size
of steps reduces one cause of error and increases this one.
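
As a rough sketch of that trade-off (my notation, not Nick's): with step
size h, unit roundoff \varepsilon, and a method of order p, the total error
behaves roughly like

    E(h) \approx C_1 h^{p} + C_2 \frac{\varepsilon}{h}

so shrinking h cuts the truncation term but lets the accumulated rounding
term grow. On top of that, a chaotic system amplifies any perturbation
\delta_0 as roughly \delta(t) \approx \delta_0 e^{\lambda t} (with \lambda
the largest Lyapunov exponent), which is why the working precision has to
keep climbing as the integration gets longer.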

You don't usually have to mince time as finely as you said, but
the problem remains. This is alleviated by the fact that most
numerical errors merely change one possible solution into another,
which is harmless. Unfortunately, there is (in general) no way of
telling whether that is happening or whether they are changing a
possible solution into an impossible one.

(Fluid dynamics simulation has this problem too, but in both the space
and time dimensions; I suppose there's definitely no way of solving it
for the space dimension, at least, other than by brute force.)


The same applies to the other problems. The formulae are different,
but the problems have a similar structure.

All this is why doing such things is a bit of a black art. I know
enough to know the problems in principle, but can't even start to
tackle serious problems in practice.


Regards,
Nick Maclaren.
 



