#121
Terje Mathisen wrote:
> Nick Maclaren wrote:
>> In article , Dean Kent wrote:
>>> "Nick Maclaren" wrote in message ...
>>>> But Robert Myers is right - there has been enough work that there
>>>> are grounds for believing that the problem is effectively insoluble.
>>>
>>> You mean "at the present time", correct? ;-).
>>
>> No. I said "insoluble", not "unsolved".
>
> You might be right, but that's still in the 'famous last words'
> category. :-)
>
> I believe the relevant quote is something like this:
>
>   "When an established expert in a field tells you that something is
>   possible, he is almost certainly right, but when he tells you that
>   something is impossible, he is very likely wrong."

I'm sure I don't have to say this for your benefit, but, as to what I
said on the subject, I really want to stick with my own exact words,
which I chose with some care.

RM
#122
Terje Mathisen wrote:
+---------------
| I believe the relevant quote is something like this:
|
| "When an established expert in a field tells you that something is
| possible, he is almost certainly right, but when he tells you that
| something is impossible, he is very likely wrong."
+---------------
You're probably thinking of Clarke's First Law:

    When a distinguished but elderly scientist states that something is
    possible, he is almost certainly right. When he states that
    something is impossible, he is very probably wrong.
        -- Arthur C. Clarke, "Profiles of the Future" (1962; rev. 1973),
           "Hazards of Prophecy: The Failure of Imagination"

But one should always temper that with Isaac Asimov's comment:

    When, however, the lay public rallies round an idea that is
    denounced by distinguished but elderly scientists and supports that
    idea with great fervor and emotion -- the distinguished but elderly
    scientists are then, after all, probably right.
        -- Isaac Asimov (1920-1992), in "Fantasy & Science Fiction",
           1977 [in answer to Clarke's First Law]

-Rob

Refs:
http://www.phantazm.dk/sf/arthur_c_clarke/s.htm
http://www.xs4all.nl/~jcdverha/scijokes/8_4.html
and many others...

-----
Rob Warnock                     <URL:http://rpw3.org/>
627 26th Avenue
San Mateo, CA 94403             (650) 572-2607
#123
Rob Warnock wrote:
> Terje Mathisen wrote:
> +---------------
> | I believe the relevant quote is something like this:
> |
> | "When an established expert in a field tells you that something is
> | possible, he is almost certainly right, but when he tells you that
> | something is impossible, he is very likely wrong."
> +---------------
> You're probably thinking of Clarke's First Law:
>
>   When a distinguished but elderly scientist states that something is
>   possible, he is almost certainly right. When he states that
>   something is impossible, he is very probably wrong.
>     -- Arthur C. Clarke, "Profiles of the Future" (1962; rev. 1973),
>        "Hazards of Prophecy: The Failure of Imagination"

Right, thanks for the reference!

> But one should always temper that with Isaac Asimov's comment:
>
>   When, however, the lay public rallies round an idea that is
>   denounced by distinguished but elderly scientists and supports that
>   idea with great fervor and emotion -- the distinguished but elderly
>   scientists are then, after all, probably right.
>     -- Isaac Asimov (1920-1992), in "Fantasy & Science Fiction", 1977
>        [in answer to Clarke's First Law]

Disproof by popular acclaim? :-)

Terje

--
- "almost all programming can be viewed as an exercise in caching"
#124
Nick Maclaren wrote:
> In article CI5%c.44980$3l3.13915@attbi_s03, Robert Myers writes:
>>> No. I said "insoluble", not "unsolved".
>>
>> So, it is your position that it cannot be solved now, nor anytime in
>> the future?
>>
>> Hot fusion plainly has its ready defenders. The expectations for
>> programming multiprocessors are apparently low, with no apparent and
>> certainly no strenuous dissent.
>
> Eh? I will dissent, strenuously, against such a sweeping statement!
> My comment was about such programming by the mass of 'ordinary'
> programmers, not about its use in HPC and embedded work (including
> games). And then there is Jouni Osmala ....

I wouldn't want to discourage Jouni or anyone else from optimism, even
overoptimism, even naive overoptimism. I was, on the other hand, trying
to provoke a dissent from you to the extent that you would say what you
do think is possible.

If SGI can get 1,024 Itanium processors to cooperate in a single Linux
system image, then somebody must know what they're doing. NASA Ames
apparently has enough confidence in its ability to program big SMP
boxes that it is buying 20 with 512 processors apiece.

On the other hand, the suggestion was recently made here that maybe we
should just banish SMP as an unacceptable programming style (meaning, I
think, that multiprocessor programming should not be done in a
globally-shared memory space, or at least that the shared space should
be hidden behind something like MPI).

The situation is _so_ bad that it doesn't seem embarrassing, apparently,
for Orion Multisystems to take a lame processor, to hobble it further
with a lame interconnect, and to call it a workstation. If the future of
computing really is slices of Wonder Bread in a plastic bag and not a
properly cooked meal, then the Orion box makes some sense. Might as well
get used to it and start programming on an architecture that at least
has the right topology and instruction set, as I believe Andrew Reilly
is suggesting.

If big computers are to be used to solve problems, they are inevitably
going to fall into the hands of people who are more interested in
solving problems than they are in the computers... as should be. If we
really can't conjure tools for programming them that are reliable in
the hands of relative amateurs, I see it as a more pressing issue than
not being able to do hot fusion (the prospects for wind and solar
having come along very nicely).

RM
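The "shared space hidden behind something like MPI" style can be
sketched without MPI itself. Here is a toy Python example (the function
names are invented for illustration, and OS processes stand in for
cluster nodes): each worker owns a private copy of its data and talks to
the rest of the program only through an explicit channel, so no variable
can be corrupted by concurrent access.

```python
# Toy message-passing sketch (hypothetical; not the MPI API): workers
# hold private data and communicate only through an explicit queue.
from multiprocessing import Process, Queue

def worker(rank, chunk, results):
    # No shared state: 'chunk' is a private copy; the only interaction
    # with the outside world is this single send.
    results.put((rank, sum(chunk)))

def parallel_sum(data, nworkers=4):
    results = Queue()
    chunks = [data[i::nworkers] for i in range(nworkers)]
    procs = [Process(target=worker, args=(r, chunks[r], results))
             for r in range(nworkers)]
    for p in procs:
        p.start()
    # Drain results before joining so no worker blocks on a full pipe.
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # 499500
```

The point of the sketch is the discipline, not the performance: the
programmer never sees the shared space, only sends and receives.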
#125
In article 8uh%c.47892$3l3.16380@attbi_s03, Robert Myers writes:
| I was, on the other hand, trying to provoke a dissent from you to the
| extent that you would say what you do think is possible.
|
| If SGI can get 1,024 Itanium processors to cooperate in a single Linux
| system image, then somebody must know what they're doing. NASA Ames
| apparently has enough confidence in its ability to program big SMP
| boxes that it is buying 20 with 512 processors apiece.

Yes, it can be done.

| On the other hand, the suggestion was recently made here that maybe we
| should just banish SMP as an unacceptable programming style (meaning,
| I think, that multiprocessor programming should not be done in a
| globally-shared memory space, or at least that the shared space should
| be hidden behind something like MPI).

My view is that, if it is to be done, it should be done properly. And
currently, it isn't. There are hardware issues where the primitives
provided are unsuitable, the operating system ones are definitely
unsuitable, and the language situation beggars belief. All soluble, in
theory.

Whether it is the BEST approach is unclear. Explicit synchronisation of
incoherent shared memory is a good model, too, as is message passing.
I can live with any of them, and so can most good parallel programmers.

| If big computers are to be used to solve problems, they are inevitably
| going to fall into the hands of people who are more interested in
| solving problems than they are in the computers... as should be. If we
| really can't conjure tools for programming them that are reliable in
| the hands of relative amateurs, I see it as a more pressing issue than
| not being able to do hot fusion (the prospects for wind and solar
| having come along very nicely).

And we need to start by developing some defined parallel programming
languages and paradigms that are acceptable to such users.

Regards,
Nick Maclaren.
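The "explicit synchronisation" point can be made concrete: in today's
mainstream languages the discipline is entirely the programmer's
responsibility, with nothing in the language forcing every access to a
shared word through a lock. A minimal sketch in Python (the language
choice is incidental; only the standard library is assumed):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    # Every touch of the shared counter goes through the lock; drop the
    # 'with lock:' line and nothing in the language objects, the program
    # merely becomes silently wrong on some runs.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

That the correct and the racy program differ by one easily omitted line,
and compile identically, is precisely the complaint about unsuitable
primitives.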
#126
Robert Myers wrote:
> On the other hand, the suggestion was recently made here that maybe we
> should just banish SMP as an unacceptable programming style (meaning,
> I think, that multiprocessor programming should not be done in a
> globally-shared memory space, or at least that the shared space should
> be hidden behind something like MPI).

With the latter presenting a different API to the programmer, or do you
mean doing the shared-memory virtualization in software rather than
hardware?

Distributed algorithms are attractive from a hardware point of view
because they force some nastier error checking into software. Do you
really want programmers who can't handle shared memory doing
distributed programming?

Joe Seigh
#127
Nick Maclaren wrote:
> [snip]
> All soluble, in theory.
> [snip]
> And we need to start by developing some defined parallel programming
> languages and paradigms that are acceptable to such users.

It _does_ seem rather like hot fusion.

RM
#128
Robert Myers wrote:
> On the other hand, the suggestion was recently made here that maybe we
> should just banish SMP as an unacceptable programming style (meaning,
> I think, that multiprocessor programming should not be done in a
> globally-shared memory space, or at least that the shared space should
> be hidden behind something like MPI).

I wonder how much SMP style, and the uniform address spaces that go with
it, can be hidden under VM, pointer swizzling and layers of
software-based caching. Probably not much, really.

> The situation is _so_ bad that it doesn't seem embarrassing,
> apparently, for Orion Multisystems to take a lame processor, to hobble
> it further with a lame interconnect, and to call it a workstation. If
> the future of computing really is slices of Wonder Bread in a plastic
> bag and not a properly cooked meal, then the Orion box makes some
> sense. Might as well get used to it and start programming on an
> architecture that at least has the right topology and instruction set,
> as I believe Andrew Reilly is suggesting.

Well, I think that the specific instruction set is probably a red
herring. I reckon that an object code specifically designed to be a
target for JIT compilation to a register-to-register VLIW engine of
indeterminate dimensions will turn out to be better ultimately. There
are projects moving in that direction: http://llvm.cs.uiuc.edu/, and,
from long, long ago, TAO Group's VM. Stack-based VMs like the JVM and
MS IL might or might not be the right answer. I guess we'll find out
soon enough. Code portability and density are important, of course, but
the main thing is winning back with dynamic recompilation some of the
unknowables that plain VLIW in-order RISC visits on code.

The Transmeta Efficeon is just the first widely available processor with
embedded levels of integration (memory and some peripheral interfaces,
and HyperTransport for other peripherals) and power consumption that can
do pipelined double-precision floating-point multiply/additions at two
flops/clock at an interesting clock rate. 1.5 GHz is significantly
faster than the DSP competitors: the TI C6700 tops out at 300 MHz and
only does single precision at the core rate. PowerPC+AltiVec doesn't
have the memory controller or the peripheral interconnect to drive up
the areal density. The BlueGene core is about the right shape, but I
haven't seen any industrial/embedded boxes with a few dozen of them in
them yet. The MIPS and ARM processors that have the integration don't
have the floating-point chops. Modern versions of the VIA C3 might be
getting interesting (or not: I haven't looked at their double-precision
performance), but they have neither the memory controller nor the
HyperTransport link, nor quite the MHz. Of course, Opterons fit that
description too, and clock much faster, but I thought that they consumed
considerably more power, too. Maybe their MIPS/watt is closer than I've
given it credit for.

> If big computers are to be used to solve problems, they are inevitably
> going to fall into the hands of people who are more interested in
> solving problems than they are in the computers... as should be. If we
> really can't conjure tools for programming them that are reliable in
> the hands of relative amateurs, I see it as a more pressing issue than
> not being able to do hot fusion (the prospects for wind and solar
> having come along very nicely).

For such people, I suspect that the appropriate level of programming is
that of science-fiction starship bridge computers: "here's what I want:
make it so". I wonder if anyone has looked at something like simulated
annealing or genetic optimisation to drive memory access patterns
revealed by problems expressed at an APL or Matlab (or higher) level.

For most of the "big science" problems, I suspect that the "what I want"
is not terribly difficult to express (once you've done the science-level
thinking, of course). The tricky part, at the moment, is having a human
understand the redundancy, dataflow (and numerical stability) issues
well enough to map the direct form of the solution to something
efficient (on one or on a bunch of processors). I think that from a
sufficient altitude, that looks like an annealing problem, with dynamic
recompilation being the lower-tier mechanism of the optimisation target.

The lucky thing about "big science" problems is that, by definition,
they have big data and run for a long time. That time and that amount of
data might as well be used by the machine itself to try to speed the
process up as by a bunch of humans attempting the same thing, without as
intimate access to the actual values in the data sets and computations.

It's late, I've had a few glasses of a nice red and I'm rambling. Sorry
about that. Hope the ramble sparks some other ideas.

--
Andrew
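The annealing idea floated above can at least be sketched in its generic
form. In this Python sketch everything problem-specific is hypothetical:
the quadratic cost function is a stand-in for whatever metric a real
system would extract from memory-access traces, and the neighbour move
stands in for perturbing a layout or schedule.

```python
import math
import random

def simulated_annealing(cost, start, neighbour, steps=5000, t0=1.0):
    """Generic annealing loop: accept a worse move with probability
    exp(-delta/T), so the search can escape local minima while T is
    high, then settles as T cools toward zero."""
    random.seed(0)                       # deterministic for illustration
    state = best = start
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        cand = neighbour(state)
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
    return best

# Hypothetical stand-in for a "memory layout" cost: distance from x = 3.
best = simulated_annealing(lambda x: (x - 3.0) ** 2,
                           start=10.0,
                           neighbour=lambda x: x + random.uniform(-0.5, 0.5))
print(best)  # converges near 3.0
```

The interesting open question in the post is, of course, not the loop
itself but whether a useful cost function can be extracted from a
running "big science" job cheaply enough to pay for itself.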
#129
Joe Seigh wrote:
> Robert Myers wrote:
>> On the other hand, the suggestion was recently made here that maybe
>> we should just banish SMP as an unacceptable programming style
>> (meaning, I think, that multiprocessor programming should not be done
>> in a globally-shared memory space, or at least that the shared space
>> should be hidden behind something like MPI).
>
> With the latter presenting a different API to the programmer, or do
> you mean doing the shared-memory virtualization in software rather
> than hardware?

I mean that one writes modules as if for a von Neumann architecture:
never any possibility of a variable being corrupted because of
concurrency. Data that fall outside the purview of the module are
received from or sent to an outside agent through a perfectly
encapsulated interface. How that agent does its work, whether in
hardware or software, is immaterial, so long as it does it according to
specification without intervention or oversight from the application
programmer.

> Distributed algorithms are attractive from a hardware point of view
> because they force some nastier error checking into software. Do you
> really want programmers who can't handle shared memory doing
> distributed programming?

I believe that it is possible to write formally incorrect programs in
any language currently in practical use. It seems likely that anyone
using such a language, no matter how competent, will eventually write a
formally incorrect program and introduce a bug that will prove to be
very hard to find. Artificial boundaries (separate processors, separate
memory spaces, separate processes, separate threads, separate system
images) might help in debugging and create an illusion of safety, but,
without formal verification, an illusion is what it is.

RM
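The "perfectly encapsulated interface" described above is roughly the
actor or mailbox discipline. A hypothetical Python sketch (the class and
method names are invented for illustration): the module's state is
touched by exactly one thread, which drains a message queue, so the
module body reads as straight-line von Neumann code with no visible
concurrency.

```python
import queue
import threading

class Accumulator:
    """A module behind an encapsulated interface: callers never touch
    its state, they only send messages into its mailbox."""
    def __init__(self):
        self._inbox = queue.Queue()
        self._total = 0          # private: only the worker thread touches it
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg, reply = self._inbox.get()
            if msg is None:          # request for the result; also shuts down
                reply.put(self._total)
                return
            self._total += msg       # sequential, race-free by construction

    def add(self, n):
        self._inbox.put((n, None))

    def result(self):
        reply = queue.Queue()
        self._inbox.put((None, reply))
        return reply.get()

acc = Accumulator()
for i in range(100):
    acc.add(i)
print(acc.result())  # 4950
```

FIFO ordering of the mailbox guarantees that `result()` observes every
earlier `add()`, which is exactly the "according to specification,
without oversight from the application programmer" property being asked
for, at least for this toy.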
#130
Joe Seigh wrote:
> Robert Myers wrote:
>> On the other hand, the suggestion was recently made here that maybe
>> we should just banish SMP as an unacceptable programming style
>> (meaning, I think, that multiprocessor programming should not be done
>> in a globally-shared memory space, or at least that the shared space
>> should be hidden behind something like MPI).
>
> With the latter presenting a different api to the programmer or do you
> mean doing the shared memory virtualization in software rather than
> hardware?
>
> Distributed algorithms are attractive from a hardware point of view
> because they force some nastier error checking into software. Do you
> really want programmers who can't handle shared memory doing
> distributed programming?

Too late to worry about whether we want them to be doing that kind of
stuff; they already have... Outlook has been terrorizing the Internet
for many years now, surely a decade by now in fact.

Cheers,
Rupert