#41
In article , Grumble wrote:
> spinlock wrote:
>> We are on track for mass shipment of a billion (that's with a B)
>> transistor die by '08.
>
> Who's "we" ?

A good question. But note that "by '08" includes "in 2005".

> I have read that there will be ~1.7e9 transistors in Montecito. Cache
> (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the transistor
> count. Montecito is expected next year.

By whom is it expected? And how is it expected to appear? Yes,
someone will wave a chip at IDF and claim that it is a Montecito,
but are you expecting it to be available for internal testing,
to all OEMS, to special customers, or on the open market?

Regards,
Nick Maclaren.
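As an aside, the ~90% figure quoted above can be sanity-checked with a small back-of-envelope sketch. The 6-transistor-per-bit SRAM cell and the omission of tags, ECC and decode logic are assumptions of this sketch, not claims made in the posts:

// Back-of-envelope check of the cache transistor figure quoted above.
// Assumptions (not from the thread): classic 6T SRAM cells, data
// arrays only, no tags/ECC/sense or decode logic counted.
#include <cstdio>

int main() {
    const double MB = 1024.0 * 1024.0;
    double cache_bytes = 2 * 1 * MB      // 2 x 1 MB L2
                       + 2 * 12 * MB;    // 2 x 12 MB L3
    double transistors_per_bit = 6.0;    // 6T SRAM cell
    double cache_transistors = cache_bytes * 8 * transistors_per_bit;
    double total = 1.7e9;                // figure quoted for Montecito

    std::printf("cache transistors ~ %.2e (%.0f%% of %.1e)\n",
                cache_transistors, 100 * cache_transistors / total, total);
    // Prints roughly 1.3e9, i.e. ~77% of 1.7e9 for the data arrays
    // alone; tags, ECC and the smaller caches would push it higher,
    // towards the ~90% mentioned above.
    return 0;
}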
#42
In article ,
Russell Wallace wrote:
> At least as far as your typical spaghetti C++ is concerned, yeah, not
> going to happen anytime in the near future.

Sigh. You are STILL missing the point. Spaghetti C++ may be about as
bad as it gets, but the SAME applies to the cleanest of Fortran, if it
is using the same programming paradigms. I can't get excited over
factors of 5-10 difference in optimisability, when we are talking
about improvements over decades.

>> This has the consequence that large-scale parallelism is not a
>> viable general-purpose architecture until and unless we move to a
>> paradigm that isn't so intractable.
>
> And yet, by that argument there should be no market for the big
> parallel servers and supercomputers; yet there is. The solution is
> that for things that need the speed, people just write the parallel
> code by hand.

Sigh. Look, I am in that area. If it were only so simple :-(

> If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
> out is a chip with 1024 individually slow cores, then those games
> will be written to use 1024-way parallelism, just as weather
> forecasting and quantum chemistry programs are today. Ditto for
> Photoshop, 3D modelling, movie editing, speech recognition etc.
> There's certainly no shortage of parallelism in the problem domains.
> The reason things like games don't use parallel code today whereas
> weather forecasting does isn't because of any software issue, it's
> because gamers don't have the money to buy massively parallel
> supercomputers whereas organizations doing weather forecasting do.
> When that changes, so will the software.

Oh, yeah. Ha, ha. I have been told that more-or-less continually since
about 1970. Except for the first two thirds of your first sentence, it
is nonsense. Not merely do people sweat blood to get such parallelism,
they often have to change their algorithms (sometimes to ones that are
less desirable, such as being less accurate), and even then only SOME
problems can be parallelised.

Regards,
Nick Maclaren.
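For readers wondering what "writing the parallel code by hand" looks like in practice, here is a minimal sketch. It assumes an OpenMP-capable compiler; the grid sizes and the update rule are placeholders, not taken from any real weather or chemistry code:

// A hand-parallelised stencil-style sweep, the kind of loop a weather
// or quantum-chemistry code writes explicitly. Each row is independent,
// so the programmer asserts that with a pragma; no automatic
// parallelisation by the compiler is involved.
#include <vector>

// Placeholder update rule; a real code would put its physics here.
static double relax(double left, double centre, double right) {
    return 0.25 * left + 0.5 * centre + 0.25 * right;
}

void step(const std::vector<double>& in, std::vector<double>& out,
          int nx, int ny) {
    #pragma omp parallel for          // programmer-asserted parallelism
    for (int y = 0; y < ny; ++y) {
        for (int x = 1; x < nx - 1; ++x) {
            out[y * nx + x] = relax(in[y * nx + x - 1],
                                    in[y * nx + x],
                                    in[y * nx + x + 1]);
        }
    }
}

The dispute in the posts above is not whether such loops exist, but how much of a real application can be recast this way without changing its algorithms.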
#43
Nick Maclaren wrote:
>> Grumble wrote:
>>> spinlock wrote:
>>>> We are on track for mass shipment of a billion (that's with a B)
>>>> transistor die by '08.
>>>
>>> Who's "we" ?
>>
> A good question. But note that "by '08" includes "in 2005".

I took "by 2008" to mean "sometime in 2008". Otherwise he would have
said "by 2005" or "by 2006", don't you think?

>> I have read that there will be ~1.7e9 transistors in Montecito. Cache
>> (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the transistor
>> count. Montecito is expected next year.
>
> By whom is it expected? And how is it expected to appear? Yes,
> someone will wave a chip at IDF and claim that it is a Montecito,
> but are you expecting it to be available for internal testing,
> to all OEMS, to special customers, or on the open market?

In November 2003, Intel's roadmap claimed Montecito would appear in
2005. 6 months later, Otellini mentioned 2005 again. In June 2004,
Intel supposedly showcased Montecito dies, and claimed that testing
had begun.

http://www.theinquirer.net/?article=15917
http://www.xbitlabs.com/news/cpu/dis...219125800.html
http://www.xbitlabs.com/news/cpu/dis...619180753.html

Perhaps Intel is being overoptimistic, but, as far as I understand,
they claim Montecito will be ready in 2005.

--
Regards, Grumble
#44
In article , Grumble writes:
|
| > By whom is it expected? And how is it expected to appear? Yes,
| > someone will wave a chip at IDF and claim that it is a Montecito,
| > but are you expecting it to be available for internal testing,
| > to all OEMS, to special customers, or on the open market?
|
| In November 2003, Intel's roadmap claimed Montecito would appear in
| 2005. 6 months later, Otellini mentioned 2005 again. In June 2004,
| Intel supposedly showcased Montecito dies, and claimed that testing
| had begun.
|
| Perhaps Intel is being overoptimistic, but, as far as I understand,
| they claim Montecito will be ready in 2005.

I am aware of that. Given that Intel failed to reduce the power going
to 90 nm for the Pentium 4, that implies it will need 200 watts.
Given that HP have already produced a dual-CPU package, they will have
boards rated for that. Just how many other vendors will have?

Note that Intel will lose more face if they produce the Montecito and
OEMs respond by dropping their IA64 lines than if they make it
available only on request to specially favoured OEMs.

Regards,
Nick Maclaren.
#45
Nick Maclaren wrote:
>> Montecito is expected next year.
>
> By whom is it expected? And how is it expected to appear? Yes,
> someone will wave a chip at IDF and claim that it is a Montecito,
> but are you expecting it to be available for internal testing,
> to all OEMS, to special customers, or on the open market?

By Intel and everyone who has been believing their repeated, unwavering
claims that mid-2005 will see commercial revenue shipments of Montecito.

Based on all the past releases in IPF, I expect a "launch" in June '05
and customers will have systems running in their environments around
August. There should be Montecito demonstrations at this coming IDF.
There were wafers shown at the last IDF. If my anticipated schedule is
correct, OEMs will have test chips soon.

Alex

--
My words are my own. They represent no other; they belong to no other.
Don't read anything into them or you may be required to compensate me
for violation of copyright. (I do not speak for my employer.)
#49
Russell Wallace wrote:
> On 1 Sep 2004 19:50:09 GMT, (Nick Maclaren) wrote:
>> There is effectively NO chance of automatic parallelisation working
>> on serial von Neumann code of the sort we know and, er, love. Not in
>> the near future, not in my lifetime and not as far as anyone can
>> predict. Forget it.
>
> At least as far as your typical spaghetti C++ is concerned, yeah, not
> going to happen anytime in the near future.

The statement is wrong in any case. C can be translated to hardware
(which is de facto parallelism) by "constraints", i.e., refusing to
translate its worst features (look up SystemC, C to hardware and
similar). Other languages can do it without constraints. Finally, any
code, no matter how bad, could be so translated by executing it
(simulating it), and then translating what it does dynamically and not
statically. This simulation can then give the programmer a report of
what was not executed, and the programmer modifies the test cases
until all code has been so translated.

>> This has the consequence that large-scale parallelism is not a
>> viable general-purpose architecture until and unless we move to a
>> paradigm that isn't so intractable.
>
> And yet, by that argument there should be no market for the big
> parallel servers and supercomputers; yet there is. The solution is
> that for things that need the speed, people just write the parallel
> code by hand.
>
> If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
> out is a chip with 1024 individually slow cores, then those games
> will be written to use 1024-way parallelism, just as weather
> forecasting and quantum chemistry programs are today. Ditto for
> Photoshop, 3D modelling, movie editing, speech recognition etc.
> There's certainly no shortage of parallelism in the problem domains.
> The reason things like games don't use parallel code today whereas
> weather forecasting does isn't because of any software issue, it's
> because gamers don't have the money to buy massively parallel
> supercomputers whereas organizations doing weather forecasting do.
> When that changes, so will the software.

--
Samiam is Scott A. Moore

Personal web site: http://www.moorecad.com/scott
My electronics engineering consulting site: http://www.moorecad.com
ISO 7185 Standard Pascal web site: http://www.moorecad.com/standardpascal
Classic Basic Games web site: http://www.moorecad.com/classicbasic
The IP Pascal web site, a high performance, highly portable ISO 7185
Pascal compiler system: http://www.moorecad.com/ippas

Being right is more powerful than large corporations or governments.
The right argument may not be pervasive, but the facts eventually are.
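To make the "translation by constraints" idea above concrete, here is a minimal sketch of the restricted style such C-to-hardware flows accept. The particular constraints shown (fixed trip count, no pointer arithmetic, no recursion or dynamic allocation) are illustrative choices, not a description of any specific tool:

// A "constrained" C/C++ fragment: compile-time bounds, no pointer
// arithmetic, no recursion, no dynamic allocation. Because every
// iteration is independent and the trip count is a constant, a
// C-to-hardware translator can unroll this into N parallel
// multiply-accumulate units instead of a sequential loop.
const int N = 8;                        // known at compile time

void mac8(const int a[N], const int b[N], int out[N]) {
    for (int i = 0; i < N; ++i) {
        out[i] = a[i] * b[i] + out[i];  // maps to one MAC unit per i
    }
}

The constraints matter because pointer aliasing and data-dependent bounds are exactly what prevent a static mapping of each iteration onto its own hardware unit; the dynamic, simulation-driven variant described above is one way around that for unconstrained code.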