A computer components & hardware forum. HardwareBanter



Intel found to be abusing market power in Japan



 
 
  #101  
Old March 20th 05, 07:18 PM
Yousuf Khan

Robert Myers wrote:
Before you cause a joint dislocation patting yourself on the back,
consider this: at roughly the same time as AMD was retooling for its
next generation architecture, Intel was redoing Netburst. I would
have bet on Intel successfully rejiggering Netburst to get better
performance before I would have bet on AMD having the resources to
produce Opteron.

It didn't turn out that way. Intel's failure with Netburst is
probably a mixture of physics and poor execution, but, in the end,
physics won. If you want to claim that you understood those physics
well enough ahead of time to predict the failure of Netburst, you
shouldn't have any problem at all giving us a concise summary of what
it is you understood so well ahead of time. You might also want to
offer some insights into Intel's management of the design process.

Had Intel done what it expected to with Prescott and done it on
schedule, Opteron would have been in a very different position.


Not at all likely that Intel would've turned things around with
Prescott. First of all, nobody (except those within Intel) knew that they
were trying to increase the pipeline from 20 to 30 stages. So until that
point, all anybody knew about Prescott was that it was just a die-shrunk
Northwood, which itself was just a die-shrunk Willamette. Anyway, longer
pipeline or not, it was just continuing along the same standard path --
more MHz. There wasn't much of an architectural improvement to it, unlike
what AMD did with Opteron.
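The pipeline-depth numbers above have a cost you can sketch directly: the deeper the pipe, the more work a branch mispredict throws away. A toy model (the branch frequency and mispredict rate below are illustrative assumptions, not measured Netburst figures):

```python
# Toy model: effective CPI grows with pipeline depth, because a
# mispredicted branch flushes roughly the whole pipeline.
# branch_freq and mispredict are illustrative assumptions.
def effective_cpi(depth, branch_freq=0.2, mispredict=0.05):
    return 1.0 + branch_freq * mispredict * depth

northwood = effective_cpi(20)   # ~1.20 CPI at 20 stages
prescott  = effective_cpi(30)   # ~1.30 CPI at 30 stages

# Under these assumptions, the 30-stage design needs roughly
# 8% more clock just to break even on delivered performance.
assert round(prescott / northwood, 2) == 1.08
```

Which is the point: a 30-stage pipeline is only a win if the extra MHz actually materialize.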

Your read of history is that 64-bit x86 and AMD won because of 64-bit
x86. My read of history is that IBM's process technology and AMD's
circuit designers won, and Intel's process technology and circuit
designers lost.


Not at all; AMD's 64-bit extensions had very little to do with it. I'd
say the bigger improvements were due to Hypertransport and the memory
controller, and yes, the inclusion of SOI manufacturing techniques. In
terms of engineering, I'd rank the important developments as 64-bit
(10%), Hypertransport (20%), process technology (20%), and memory
controller (50%). In terms of marketing, it was 100% 64-bit, which sort
of took on the mantle of spokesman for all of the other technology also
included.

It's easy to understand why AMD took the long odds with Hammer. It
didn't really have much choice. Intel wanted to close off the 64-bit
market to x86, and it might well have succeeded.


I'm hearing, "I'd have gotten away with it too, if it weren't for you
meddling kids!" :-)

As to "People were willing to wait..." who needed either, really?
Almost no one. This is all about positioning.


Not sure where you get that little piece of logic from. People weren't
waiting for just any old 64-bit processor, they could get those before.
They were looking for a 64-bit x86 processor. Itanium falls into the
category of "any old 64-bit processor", since it's definitely not x86
compatible.

Yousuf Khan
  #102  
Old March 20th 05, 08:02 PM
Yousuf Khan

Robert Myers wrote:
On Sat, 19 Mar 2005 12:27:03 -0500, keith wrote:
No hindsight at all. Google if you don't believe. Many of us here said
_at_the_time_ that Opteron was the Itanic killer. Evolution beats
revolution, once again.


I remember the exchanges very well, and I remember what the local AMD
chorus was saying. AMD took a gamble on very long odds, IMHO. The
fact that they were going to 64-bits didn't shorten those odds.


Sure, but if it had merely been a move to 64-bit instructions, then AMD
would've had these processors out by 2001 or 2002 -- but it would've been
more of a K7.5 than a K8. They admitted that the addition of the
64-bit instructions only added 5% to the die area. They could've had an
Athlon 64 out three years ago. But we saw how little time it took Intel
to copy the 64-bit instructions: maybe just a year and a half. So I
think AMD decided to give their architecture a value proposition well
beyond just a 64-bit language upgrade. They piled on an additional two
years of development and came up with the Hypertransport and memory
controller.

Now we see that Intel, in the same amount of time, doubled the size of
its core just to add 64-bit instructions and ten additional pipeline
stages. And it's not expected to have its own versions of Hypertransport
and the memory controller till at least 2007. Even then the memory
controller won't be onboard; Intel will still stick to a separate
memory controller model -- perhaps one memory controller chip per
processor, but still distinctly separate.

The AMD chorus here wanted: AMD win, x86 win, 64-bits. That, not any
realistic assessment of AMD actually succeeding, was what everybody
was betting on. Well done for AMD and IBM that they could make it
happen, but far from a safe bet.


There's not much reason to give IBM too much credit here; it only helped
out with one of the new technologies incorporated into AMD64: the
process technology.

I don't think Intel's plans for Itanium had much of an effect on the
success or failure of the x86 offerings of AMD and Intel. As Felger
pointed out, the money went into Prescott. For the return Intel got
on that investment, Intel might almost as well have put that money
into a big pile and burned it (yes, that's an overstatement). The
advice that Intel _should_ have followed would have been to have
canned NetBurst long before they did. Netburst, not Itanium, is the
marketing strategy that gave AMD the opening.


Well, it's true that AMD always wants you to compare it against Xeon
rather than Itanium. But Itanium was supposed to be the eventual
successor to x86 as we all know. Maybe not right away, but eventually,
and it's that strategy that is now in jeopardy.

Itanium sort of parallels a strategy Microsoft followed with its Windows
OSes. It created a series of legacy operating systems in the DOS -
Windows 3.x - Windows 9x/ME family of OSes, with the Windows NT -
2000/XP/2003 family running in parallel until they were ready to take
over from the other family. Except Microsoft actually got it done, but
Intel won't.

Circuit designers, my ass. Marketeers lost/users won.


I think it's safe to say that Intel didn't plan on spending all that
money for a redesign with such a marginal improvement in performance.
Somebody scoped out performance targets that couldn't be hit. Maybe
they hired a program manager from the DoD.


I can't disagree with the assessment about Prescott, but I don't think
it's as pivotal a problem as the one Intel faces with Itanium.

Intel certainly wanted to contain x86. That's something we can agree
on. Intel's major vendor, Dell, would have done just fine hustling
ia32 server hardware if the performance had been there. The
performance just wasn't there.


Thus the reason for AMD's gamble on more than just 64-bit for Opteron.
It added dimensions of scalability to the x86 processor that never
existed before.

Yousuf Khan
  #103  
Old March 20th 05, 10:05 PM
keith

On Sun, 20 Mar 2005 18:01:14 -0500, Robert Myers wrote:

On Sun, 20 Mar 2005 12:36:20 -0500, keith wrote:

On Sun, 20 Mar 2005 07:05:49 -0500, Robert Myers wrote:


I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.


Enhanced || added instructions != changed. Your use of language is very
(and I'm sure intentionally) misleading. "Changing the instruction set"
implies incompatibility.

I have no idea where you get the idea I would be interested in
misleading you or anyone else. It's a really unattractive accusation.


That's the only conclusion I can come to. I cannot say you're stupid.

In what way was hypertransport not a new memory interface for AMD?


Since it is *not* a memory interface, it's not a new one, now is it.

You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.


"The outside world" memory. Perhaps now you see why I think you're
purposely misleading. Certainly you *know* this. Hypertransport is
primarily an I/O interface, though it is used to reach memory in a NUMA
sort of system.

snip

They "had to go to a new process", whether Opteron came about or not.
Your argument is void. The world was going to 90nm and that had nothing
to do with Opteron or Prescott. It was time.

Every scale shrink requires work, but I think everybody understood that
90nm was going to be different.


I don't think that's true. Few really did "know" this until it was
over. *VERY* few saw that particular speed bump. After 130nm (a fairly
simple transition), everyone was cruising. Oops!

snip, nothing important


IBM is in business with AMD to make money for IBM, and AFAIK they do (as
a result of that alliance). You're the one who sees something sinister
in AMD. Everything there was obvious to anyone who has followed AMD for
a decade or so.

*Where* do you get the idea that I see anything sinister in AMD? That's
just bizarre. I don't particularly admire AMD, that's true, but I don't
think AMD is sinister. I'm glad that IBM has been able to take care of
business, because I don't want to see IBM pushed out of
microelectronics.


Your posts speak for themselves. You seem to be distraught that Opteron
brought Itanic to its grave. ...when it was really Intel's senior
management that blew it (on both ends).

snip

Open your eyes, man! Intel attempted to choke x86. Intel *let* it be
enough of a gamble so AMD could take control of the ISA.

One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.


I wonder why? The fact is that they didn't want x86 to "grow up" in any
way. They wanted it dead. Oops, AMD had other ideas.

snip


Perhaps they are, perhaps not. Intel has classically held the lead, but
it's not the point. Circuits don't define processors.

Besides botching the marketing, Intel botched the *micro-architecture*,
or more accurately the implementation of that micro-architecture, not
the circuits. Circuits have nothing to do with it.


Clouds of billowing smoke.


There *is* a difference between "circuits", "micro-architecture", and
"process", you know. You really are fabricating your argument out of,
err, smoke. My bet is that you didn't inhale.


You're both wrong. The money went into Itanic! Then an iceberg
happened. Prescott was what was left of the life-rafts. ...not
pretty.

Dramatic imagery is not an argument.


Imagery or not, that's exactly what happened. So far you haven't had
*any* argument, other than Intel == good, AMD == lucky that IBM happened
(and took pity on them).

I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.


No, it's *trapped* at the top. They cannot see what's on the lower
floors. Intel == Itanic, except it sunk.

IBM took pity on AMD? Wherever did you get that idea?


Perhaps you want to read your posts again.

I did say here
that the money that changed hands ($45 million, if I recall correctly)
didn't sound like a great deal of money for a significant technology
play.


Neither you nor I know what money has changed hands. I note that you
don't comment on the AMD bodies placed in IBM-EF as a joint venture. They
aren't exactly free either.


Sure, but that was *not* their plan. They wanted to isolate x86, starve
it, and take that business private to Itanic. Bad plan.

Unrealistic plan, to be sure, but, yes, that was their plan.


It *was* their plan. Who has 20-20 hindsight now?

They chose the wrong road for x86, which was to continue NetBurst.


Intel's preferred choice was *no* x86, but AMD didn't let that happen.
They answered with a poorly implemented, by all reports rushed, P4.

They did something wrong, that's for sure. If they thought Itanium was
ready to take out of the oven... No, I don't believe that. Itanium was
in an enterprise processor division, or something like that. They had
to know that x86 had to live on the desktop for at least a while.


They did not. That is the point. They wanted x86 to be buried by right
about now. Itanic, uber alles!

Oh, I see. You're moving the goal posts. I thought we were talking
about Opteron's position in the processor market and AMD's rise to the
top of x86.

Since I've been through this "Oh, you're moving the goalposts" gig of
yours before, I've left the conversation unsnipped 6 comments back and
I'll just invite you to review the exchange.


Nope. We *were* talking about Itanic and x86. *YOU* want to talk about
NetBurst/Opteron in a vacuum. Sorry, that's not the way the industry
works. Itanic (and thereby Intel's myopic marketing) is an important part
of this story.

Cost = die size. It didn't fit the marketing plan. When the P4 came
out there was still some headroom for power (there still is, but no one
wants to go there). The shifter and fixed multiplier wouldn't have added
all that much more power.

Intel is between a rock and a hard place on x86, and Itanium does come
into play.


My, we're being generous (not).

Whatever they do, they can't afford to have x86 outshine
Itanium in the benchmarks. I'd believe that as a reason for a
deliberately stunted x86 before I'd believe anything else.


Oh, so they somehow *let* AMD kill 'em here? ...on purpose? Come on,
don't treat us as phools. I thought Intel was HQ'd in Santa Clara, not
Roswell.

snip

Except that NetBurst is a dramatically different architecture that
runs into the teeth of the physics in a way that previous
architectures didn't. That's why the future is the Pentium-M branch,
and that's something _I_ was saying well before Prescott came out.


I thought you said (above) the "physics problem" was leakage, not MHz.


Pentium-M manifestly runs at much lower power for equivalent
performance. What is there that you don't understand?


Your flip-flop on the technology "problem". First you say it's a
"leakage" problem, then you say that P-M is better because it performs
better at lower frequency. Which is it?

snip


...and panic sets in. "What have we got? Ah, P4! Ship it!"

I don't think it happened that way. They did botch the Prescott
redesign, and it may well have been deliberately hobbled, as well.


From all accounts (inside and out), it was. Prescott wasn't much more
than a tumor on an ugly wart. Remember, these wunnerful marketeers didn't
want to sell P-M into the desktop. Why? Perhaps they didn't want to be
seen as the idiots they *are*? That still doesn't explain (though I've
tried) the 64bit thing, which *is* where we started here.

That's something we can agree
on. Intel's major vendor, Dell, would have done just fine hustling
ia32 server hardware if the performance had been there. The
performance just wasn't there.

Dell is simply Intel's box marketing arm. No invention there. Who
cares?

Keith, that's just ridiculous. _Where_ do you think the money in your
paycheck comes from?


It's certainly not signed by Mike (or Andy). It doesn't come from
PeeCee wrench-monkeys either.


Maybe not, but somebody has to sell something to somebody so the
technical weenies can be paid.


Then that's an even dumber response than I'd have expected. Certainly
someone has to sell boxes, but what I said *is* still true. There is no
invention in Dell. It is no more than Intel's box-making plant. It's
interesting that they couldn't even make a profit on white ones, since
that's all they do.

--
Keith

  #104  
Old March 20th 05, 11:01 PM
Robert Myers

On Sun, 20 Mar 2005 12:36:20 -0500, keith wrote:

On Sun, 20 Mar 2005 07:05:49 -0500, Robert Myers wrote:

On Sat, 19 Mar 2005 22:03:17 -0500, keith wrote:

On Sat, 19 Mar 2005 17:13:13 -0500, Robert Myers wrote:

On Sat, 19 Mar 2005 12:27:03 -0500, keith wrote:

On Sat, 19 Mar 2005 09:04:59 -0500, Robert Myers wrote:

On Fri, 18 Mar 2005 18:52:05 -0500, Yousuf Khan wrote:

Robert Myers wrote:
Opteron was ballsy. I didn't expect it to succeed. It wouldn't have
without IBM. Even had I been able to predict that intervention, I
still wouldn't have predicted it to succeed. So I guess it's pretty
clear my ability to predict AMD is... yet to be established. :-).

A lot of us in here thought Opteron was exactly the technology that was
needed by the vast majority of people, and that it was destined to
succeed. The only thing holding back absolute certainty on that
prediction was whether Intel's marketing was going to prevent that.

Hindsight is marvellous.

No hindsight at all. Google if you don't believe. Many of us here said
_at_the_time_ that Opteron was the Itanic killer. Evolution beats
revolution, once again.

I remember the exchanges very well, and I remember what the local AMD
chorus was saying. AMD took a gamble on very long odds, IMHO. The
fact that they were going to 64-bits didn't shorten those odds. They
had to change the instruction set, move the controller onto the die,
develop a new memory interface, and go to a new process. Not exactly
a conservative move.

GMAFB, they did *not* change the instruction set. It is still x86, and
*backwards* compatible, which is the key. They did move the controller
on-die (an obvious move, IMO), but certainly did *not* develop a new
memory interface (what *are* you smoking?). They also did not go with a
new process. They started on 130nm, which was fairly well known. I don't
see any huge "risks" here at all. The only risk I saw was that Intel
would pull the rug out with their own x86-64 architecture. But no, Intel
had no such intentions, since that would suck the life out of Itanic.
Instead they let AMD do the job.

I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.


Enhanced || added instructions != changed. Your use of language is very
(and I'm sure intentionally) misleading. "Changing the instruction set"
implies incompatibility.

I have no idea where you get the idea I would be interested in
misleading you or anyone else. It's a really unattractive accusation.

In what way was hypertransport not a new memory interface for AMD?


Since it is *not* a memory interface, it's not a new one, now is it.

You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.

It's true, they introduced at 130nm, but at a time when movement to 90nm
was inevitable, where new cleverness would be required..."They had
to...go to a new process." If Intel had been able to move to 90nm
(successfully) with Prescott and AMD was stuck at 130 nm, it would have
been a different ball game.


They "had to go to a new process", whether Opteron came about or not.
Your argument is void. The world was going to 90nm and that had nothing
to do with Opteron or Prescott. It was time.

Every scale shrink requires work, but I think everybody understood
that 90nm was going to be different.

The AMD chorus here wanted: AMD win, x86 win, 64-bits. That, not any
realistic assessment of AMD actually succeeding, was what everybody
was betting on. Well done for AMD and IBM that they could make it
happen, but far from a safe bet.

Nonsense. I was betting on the outcome, and unlike you, with real
greenies. ...and don't blame IBM for pulling it off. It was all AMD.
IBM is in the business of making money, nothing more.


Another odd choice of language. Blame IBM? Who? For what? Why?


You're saying that IBM enabled Opteron, which is hokum.

IBM is in the business of making money? What else do you imagine I
think IBM is up to? Trying to put Intel out of business?


IBM is in business with AMD to make money for IBM, and AFAIK they do (as a
result of that alliance). You're the one who sees something sinister in
AMD. Everything there was obvious to anyone who has followed AMD for a
decade or so.

*Where* do you get the idea that I see anything sinister in AMD?
That's just bizarre. I don't particularly admire AMD, that's true,
but I don't think AMD is sinister. I'm glad that IBM has been able to
take care of business, because I don't want to see IBM pushed out of
microelectronics.

snip


I don't think Intel's plans for Itanium had much of an effect on the
success or failure of the x86 offerings of AMD and Intel.

Bull****! Intel wanted Itanic to be the end-all, and to let x86 starve
to death. AMD had other plans and anyone who had any clue of the history
of the business *should* have known that AMD would win. They won
because their customers won. Intel will make loads of money off AMD64,
but they don't like it.

You haven't shown in what way Intel's plans for Itanium affected the
success or failure of x86 offerings of AMD and Intel. Intel had the
money for the huge gamble it made on Itanium. It was not a
bet-the-company proposition.


Open your eyes, man! Intel attempted to choke x86. Intel *let* it be
enough of a gamble so AMD could take control of the ISA.

One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.

As Felger pointed out, the money went into Prescott.

Felger and I have been known to disagree. Because he agrees with you
this time, he's now the authority? I see.

Later on, you say that Intel's circuits are better than AMD's. If they
didn't put enough money into Prescott circuit design (maybe because they
put it into IA-64), they got better circuits, anyway? Which is it,
Keith?


Perhaps they are, perhaps not. Intel has classically held the lead, but
it's not the point. Circuits don't define processors.

Besides botching the marketing, Intel botched the *micro-architecture*, or
more accurately the implementation of that micro-architecture, not the
circuits. Circuits have nothing to do with it.


Clouds of billowing smoke.

You're both wrong. The money went into Itanic! Then an iceberg
happened. Prescott was what was left of the life-rafts. ...not pretty.

Dramatic imagery is not an argument.


Imagery or not, that's exactly what happened. So far you haven't had
*any* argument, other than Intel == good, AMD == lucky that IBM happened
(and took pity on them).

I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.

IBM took pity on AMD? Wherever did you get that idea? I did say here
that the money that changed hands ($45 million, if I recall correctly)
didn't sound like a great deal of money for a significant technology
play.

For the return Intel got on
that investment, Intel might almost as well have put that money into a
big pile and burned it (yes, that's an overstatement). The advice
that Intel _should_ have followed would have been to have canned
NetBurst long before they did. Netburst, not Itanium, is the
marketing strategy that gave AMD the opening.

Wrong, wrong, wrong! You and Intel have the same dark glasses on.
Itanic was the failure. "Netburst" was the lifeboat with the empty water
containers. It was too little and *way* too late.

Ignore Itanium. Intel had the money to gamble on Itanium and to advance
x86 technology.


Sure, but that was *not* their plan. They wanted to isolate x86, starve
it, and take that business private to Itanic. Bad plan.

Unrealistic plan, to be sure, but, yes, that was their plan.

They chose the wrong road for x86, which was to continue NetBurst.


Intel's preferred choice was *no* x86, but AMD didn't let that happen.
They answered with a poorly implemented, by all reports rushed, P4.

They did something wrong, that's for sure. If they thought Itanium
was ready to take out of the oven... No, I don't believe that.
Itanium was in an enterprise processor division, or something like
that. They had to know that x86 had to live on the desktop for at
least a while.

snip


It didn't turn out that way. Intel's failure with Netburst is
probably a mixture of physics and poor execution, but, in the end,
physics won. If you want to claim that you understood those physics
well enough ahead of time to predict the failure of Netburst, you
shouldn't have any problem at all giving us a concise summary of
what it is you understood so well ahead of time. You might also
want to offer some insights into Intel's management of the design
process.

How do you figure that physics won?

Physics 1 Netburst 0.

You can repeat yourself into next week, but you're still wrong.

Intel had to curtail plans for Prescott and kill other Netburst projects
because of leakage--physics and heat--physics. They thought they could
beat those problems with process improvements--physics. They
couldn't--physics.
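The physics being invoked here is the textbook CMOS power relation, P = a*C*V^2*f, plus leakage that stopped being negligible at 90nm. A toy sketch of the squeeze (every number below is an illustrative assumption, not Prescott data):

```python
# Toy model of the Netburst power squeeze: dynamic CMOS power
# scales as activity * C * V^2 * f. Chasing clock (f) while
# voltage stays high blows the power budget, before leakage is
# even counted. All values are illustrative assumptions.
def dynamic_power(c_farads, volts, hertz, activity=1.0):
    return activity * c_farads * volts**2 * hertz

p_2ghz = dynamic_power(1e-9, 1.3, 2e9)   # 1 nF switched, 1.3 V, 2 GHz
p_4ghz = dynamic_power(1e-9, 1.3, 4e9)

# Doubling frequency alone doubles dynamic power...
assert abs(p_4ghz / p_2ghz - 2.0) < 1e-9

# ...and any voltage bump needed to actually reach that clock
# costs quadratically on top of it.
p_4ghz_hot = dynamic_power(1e-9, 1.4, 4e9)
assert abs(p_4ghz_hot / p_4ghz - (1.4 / 1.3) ** 2) < 1e-9
```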


Oh, I see. You're moving the goal posts. I thought we were talking
about Opteron's position in the processor market and AMD's rise to the top
of x86.

Since I've been through this "Oh, you're moving the goalposts" gig of
yours before, I've left the conversation unsnipped 6 comments back and
I'll just invite you to review the exchange.

Go just as fast as you can without wasting cycles, but no faster.
Netburst broke that rule. It was painful the instant the P4 came out,
it got more painful as the scale shrank, and finally it became
unacceptable.

You really should study microarchitecture some more. What broke the P4
was silly marketeering. It was too big to fit the die given (by
marketing), so they tossed overboard some rather important widgets.


Which generation are you talking about? The original P4 went on a
transistor reduction plan because of power consumption problems. That
Prescott would have had to go on a transistor budget for similar reasons
seems almost inevitable. The transistor budget was driven by die-size
considerations? That's a new one, and I'm skeptical, to put it mildly.


Cost = die size. It didn't fit the marketing plan. When the P4 came out
there was still some headroom for power (there still is, but no one wants
to go there). The shifter and fixed multiplier wouldn't have added all
that much more power.

Intel is between a rock and a hard place on x86, and Itanium does come
into play. Whatever they do, they can't afford to have x86 outshine
Itanium in the benchmarks. I'd believe that as a reason for a
deliberately stunted x86 before I'd believe anything else.

snip


Had Intel done what it expected to with Prescott and done it on
schedule, Opteron would have been in a very different position.

I don't believe this is so. Intel was forced to continue down the x86
path because AMD was forcing the issue. Intel wanted to dump x86 for
Itanic.

We'll just have to disagree about this.

You can disagree all you want. It's in the history books now. Physics
had *nothing* to do with this battle (AMD and Intel both are constrained
by the same physics, BTW).


Except that NetBurst is a dramatically different architecture that runs
into the teeth of the physics in a way that previous architectures
didn't. That's why the future is the Pentium-M branch, and that's
something _I_ was saying well before Prescott came out.


I thought you said (above) the "physics problem" was leakage, not MHz.


Pentium-M manifestly runs at much lower power for equivalent
performance. What is there that you don't understand?


Circuit designers, my ass. Marketeers lost/users won.

I think it's safe to say that Intel didn't plan on spending all that
money for a redesign with such a marginal improvement in performance.
Somebody scoped out performance targets that couldn't be hit. Maybe
they hired a program manager from the DoD.

Intel is a marketing driven company. They are responsible for this
mess. It had *NOTHING* to do with circuits (yeesh). I'm quite sure
(without first-hand evidence) Intel's circuits are still superior to
AMD's. Intel's helm _was_ "frozen" though.

You can call it marketing arrogance all you want. If you make a plan
driven by marketing, you have to be able to execute it. Intel couldn't
execute. If you say "I'm going to do X," and X is technical, I don't
call that a marketing failure. If X could have been done but wasn't,
it's a technical failure. If X couldn't have been done and that wasn't
recognized but should have been, it's a management failure.


The marketing failure was in Itanic. Ok, the fact they couldn't execute
may be called a technical failure, but it's marketing that sets the
schedule. Innovation usually doesn't follow M$ Planner.

snip

"Almost no one?" Rubbish. It was absolutely required to move x86 off
the desktop, where Intel had no interest in it going.

Intel certainly wanted to contain x86.

"Contain" it in a casket, perhaps. Intel had no interest in having x86
survive. All the patents were either expired or were cross-licensed
into oblivion. Why do you think Intel and HP formed a separate company
as a holder of Itanic IP?

Whatever Intel's original hopes for Itanium were, they had to have been
almost constantly scaled back as it became more and more obvious that
they had undertaken a mission to Mars.


...and panic sets in. "What have we got? Ah, P4! Ship it!"

I don't think it happened that way. They did botch the Prescott
redesign, and it may well have been deliberately hobbled, as well.

That's something we can agree
on. Intel's major vendor, Dell, would have done just fine hustling
ia32 server hardware if the performance had been there. The
performance just wasn't there.

Dell is simply Intel's box marketing arm. No invention there. Who
cares?


Keith, that's just ridiculous. _Where_ do you think the money in your
paycheck comes from?


It's certainly not signed by Mike (or Andy). It doesn't come from
PeeCee wrench-monkeys either.


Maybe not, but somebody has to sell something to somebody so the
technical weenies can be paid.

RM

  #105  
Old March 20th 05, 11:27 PM
Yousuf Khan

Robert Myers wrote:
GMAFB, they did *not* change the instruction set. It is still x86, and
*backwards* compatible, which is the key. They did move the controller
on-die (an obvious move, IMO), but certainly did *not* develop a new
memory interface (what *are* you smoking?). They also did not go with a
new process. They started on 130nm, which was fairly well known. I don't
see any huge "risks" here at all. The only risk I saw was that Intel
would pull the rug out with their own x86-64 architecture. But no, Intel
had no such intentions, since that would suck the life out of Itanic.
Instead they let AMD do the job.


I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.


Tidied it up a bit, here and there. Added some extra registers. But why
is that such an important point? It's the least they could be expected
to do, considering the major _capability_ improvement they were making to
this architecture. The same opcodes that worked in 8-, 16-, or 32-bit
mode are still the same in 64-bit mode.
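The "same opcodes" claim is visible in the encoding itself: per the public x86-64 manuals, AMD64 reuses the existing instruction bytes behind a one-byte REX prefix rather than replacing them. A quick sketch (the Python is just for illustration of the byte layout):

```python
# The 32-bit 'add eax, ebx' encodes as 01 D8 (opcode 01, ModRM D8).
# The 64-bit 'add rax, rbx' is the *identical* opcode/ModRM pair
# behind a one-byte REX.W prefix (0x48). The old encoding is
# extended, not changed.
ADD_32 = bytes([0x01, 0xD8])       # add eax, ebx
REX_W = 0x48                       # prefix selecting 64-bit operands
ADD_64 = bytes([REX_W]) + ADD_32   # add rax, rbx

assert ADD_64[1:] == ADD_32        # the old bytes survive untouched
print(ADD_64.hex())                # -> 4801d8
```

Which is exactly why "changed the instruction set" overstates it: existing binaries decode exactly as before.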

In what way was hypertransport not a new memory interface for AMD?


Well, it's not a memory interface; at least not exclusively for memory --
it's a generic I/O interface. The onboard memory controller is a
separate subsystem from Hypertransport. Hypertransport may be used to
access memory, especially in multiprocessor Opteron systems, where
memory addressing duties are split up amongst several processors: each
processor feeds neighbouring processors with data from its own local
pool of memory as requested.

It's true, they introduced at 130nm, but at a time when movement to
90nm was inevitable, where new cleverness would be required..."They
had to...go to a new process." If Intel had been able to move to 90nm
(successfully) with Prescott and AMD was stuck at 130 nm, it would
have been a different ball game.


They had already made a successful transition to 130nm with the K7
Athlon XPs, the Bartons and Thoroughbreds. However, that was 130nm
without SOI. With the K8s, they used a different 130nm process, with
SOI. They then transitioned from 130nm SOI to 90nm SOI.

Nonsense. I was betting on the outcome, and unlike you, with real
greenies. ...and don't blame IBM for pulling it off. It was all AMD.
IBM is in business of making money, nothing more.



Another odd choice of language. Blame IBM? Who? For what? Why?


In this case, Keith is just being sarcastic. He's saying "blame" when he
really means "give credit to".

As to investment decisions, the only investment prospects I could see
for Intel or AMD were downside, and not in a way that was sure enough
to justify a short position, even were I in the habit of taking such
positions. I didn't like Intel's plans any better than I liked AMD's.


Another thing: Intel's stock is just as heavily manipulated as AMD's. It's
a fool's game to try to short either one. There are some very powerful
interests who manipulate their prices up and down. Neither stock seems to
be heavily affected by its own news. Instead, things like global
markets, oil prices, inflation rates, and interest rates affect their
prices more often. Occasionally, you'll notice one go up while the other
goes down -- that's just the powers-that-be having fun with each other.

You haven't shown in what way Intel's plans for Itanium affected the
success or failure of x86 offerings of AMD and Intel. Intel had the
money for the huge gamble it made on Itanium. It was not a bet the
company proposition.


Well, that's going to be a little difficult now, considering Itanium is
no longer considered a competitive threat.

Yousuf Khan
  #106  
Old March 20th 05, 11:43 PM
Yousuf Khan

Robert Myers wrote:
Either way, I think Opteron would have been shipped, possibly
not in as good shape as it is now.



It would have shipped, but to whom? AMD would have another also-ran
processor and Intel would be moving to displace as much of x86 as it
could with Itanium, rather than emphasizing that the competition for
Itanium is really Power.


AMD already had a non-SOI 130nm process running for their K7s. They
would've just shipped Opteron on that process. They may not have been
able to crank Opteron all of the way up to 2.4GHz at 130nm as they did
with SOI, but they might've been able to go to 2.2GHz (which the K7s had
already reached). Also, I think if they weren't in a hurry to get to
90nm, they probably would've been able to crank the 130nm SOI up to 2.6
or even 2.8GHz.

Their next step would've been to get to 90nm, but without SOI, they
would've run into the exact same issue that Intel did, albeit in a much
less severe form, because they weren't pushing up to 4.0GHz. We would've
been seeing non-SOI Opterons at around 100W, instead of the 65W that we
see them at now with SOI.

No doubt about it, AMD needed to get to SOI eventually, and without
IBM's help, they probably would have gotten there by the middle of this
year, rather than at this time two years ago (when the first Opteron
shipped). From what I've heard about SOI, the technology has been
around a long time; it's only recently that it's started to find its way
into mass-market ICs -- until now it was used mainly in electronics for
airplanes, because of the extra radiation they are exposed to at altitude.

Yousuf Khan
  #107  
Old March 21st 05, 12:53 AM
Robert Myers

On Sun, 20 Mar 2005 14:18:27 -0500, Yousuf Khan
wrote:

Robert Myers wrote:


snip


Had Intel done what it expected to with Prescott and done it on
schedule, Opteron would have been in a very different position.


Not at all likely that Intel would've turned things around with
Prescott. First of all, nobody (except those within Intel) knew that they
were trying to increase the pipeline from 20 to 30 stages. So until that
point, all anybody knew about Prescott was that it was just a die-shrunk
Northwood, which itself was just a die-shrunk Willamette. Anyways,
bigger pipeline or not, it was just continuing along the same
standard path -- faster MHz. There wasn't much of an architectural
improvement to it, unlike what AMD did with Opteron.

We did know it was a complete die redesign. Bigger cache, longer
pipeline, different latencies. In the end, big investment,
disappointing results. Maybe my spin-o-meter is malfunctioning, but I
got the sense that Intel management was just as mystified as everyone
else. Northwood, if you remember, was about a 10% improvement on
Willamette at the same clock (bigger cache, if nothing else).

Your read of history is that 64-bit x86 and AMD won because of 64-bit
x86. My read of history is that IBM's process technology and AMD's
circuit designers won, and Intel's process technology and circuit
designers lost.


Not at all; AMD's 64-bit extensions had very little to do with it. I'd
say the bigger improvements were due to Hypertransport and the memory
controller, and yes, the inclusion of SOI manufacturing techniques. In terms
of engineering, I'd rank the important developments as 64-bit
(10%), Hypertransport (20%), process technology (20%), and memory
controller (50%). In terms of marketing, it was 100% 64-bit, which sort
of took on the mantle of spokesman for all of the other technology also
included.

The only thing out of the 64-bit that really mattered for most users
was more named registers.

It's easy to understand why AMD took the long odds with Hammer. It
didn't really have much choice. Intel wanted to close off the 64-bit
market to x86, and it might well have succeeded.


I'm hearing, "I'd have gotten away with it too, if it weren't for you
meddling kids!" :-)

As to "People were willing to wait..." who needed either, really?
Almost no one. This is all about positioning.


Not sure where you get that little piece of logic from. People weren't
waiting for just any old 64-bit processor, they could get those before.
They were looking for a 64-bit x86 processor. Itanium falls into the
category of "any old 64-bit processor", since it's definitely not x86
compatible.

Almost no one really needed a processor with 64-bit pointers. If you
don't really need them, and most people don't, you might go to the
trouble to compile so you use 32-bit pointers, anyway. The increased
number of named registers is what most users are really ready to use
(with appropriate compiler support). That's the part Intel really
didn't like, and they wouldn't have had to swallow it if they could
have delivered the performance without them. They couldn't.

RM
  #108  
Old March 21st 05, 01:55 AM
Yousuf Khan

George Macdonald wrote:
Isn't that enough? I didn't think you'd be the one to need convincing
about x86-64 as a necessary component of future PCs. Beyond that I'm not
sure but I'm pretty sure that AMD has some other patents which might be of
interest, e.g. large L1 cache efficiency.


"Large L1 cache efficiency" requires a patent?

Yousuf Khan
  #109  
Old March 21st 05, 02:57 AM
Yousuf Khan

Robert Myers wrote:
On Sun, 20 Mar 2005 12:36:20 -0500, keith wrote:
On Sun, 20 Mar 2005 07:05:49 -0500, Robert Myers wrote:
On Sat, 19 Mar 2005 22:03:17 -0500, keith wrote:
On Sat, 19 Mar 2005 17:13:13 -0500, Robert Myers wrote:
On Sat, 19 Mar 2005 12:27:03 -0500, keith wrote:
On Sat, 19 Mar 2005 09:04:59 -0500, Robert Myers wrote:
On Fri, 18 Mar 2005 18:52:05 -0500, Yousuf Khan
wrote:
Robert Myers wrote:


Yeesh! You guys oughta really crop the quotes down to two levels at most.

In what way was hypertransport not a new memory interface for AMD?


Since it is *not* a memory interface, it's not a new one, now is it.


You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.


The majority of its memory accesses are done through its memory
controller, not through Hypertransport. Regardless, what is the point
you're trying to make here about AMD changing the way it communicates
with the outside world? What difference does it make how AMD does its
i/o, it still works the same way it's always worked -- the software
can't tell the difference.

Also, it was already doing things differently from Intel as of the K7
Athlons, when it was using the EV6 bus while Intel was using its own
bus. Software couldn't tell the difference back then either. Now it's
changed over from the EV6 to Hypertransport -- and software still
remains blissfully unaware.

One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.


Where does this insight into Intel's inner mind come from? I've never
heard Intel disparage extra registers. They've certainly disparaged the
whole x86-64 concept before (boy, have they!), but nothing specifically
about extra registers.

I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.


Or perhaps it's going too often to the top, but not spending enough time
at the ground floors?

Sure, but that was *not* their plan. They wanted to isolate x86, starve
it, and take that business private to Itanic. Bad plan.


Unrealistic plan, to be sure, but, yes, that was their plan.


Which is the point we're trying to make about why we had a feeling that
Intel's Itanium was not going to take off, while AMD's Opteron had a
great chance to take off.

Intel's preferred choice was *no* x86, but AMD didn't let that happen.
They answered with a poorly implemented, by all reports rushed, P4.


They did something wrong, that's for sure. If they thought Itanium
was ready to take out of the oven... No, I don't believe that.


I don't think Itanium was ever ready to take out of the oven.

Itanium was in an enterprise processor division, or something like
that. They had to know that x86 had to live on the desktop for at
least a while.


They knew that x86 had to live on the desktop for a while; that's why
they were pursuing the same dual-path strategy of legacy and
new-technology that Microsoft employed so well in the transition from
the DOS-family OSes to the NT-family OSes. NT was too advanced for most
home users, and even most businesses, initially. But they kept grooming it
until it became easy enough to use even in home settings.

Intel would've pursued a similar strategy with x86 (legacy) and IA64
(new-tech).

Intel is between a rock and a hard place on x86, and Itanium does come
into play. Whatever they do, they can't afford to have x86 outshine
Itanium in the benchmarks. I'd believe that as the reason for a
deliberately stunted x86 before I'd believe anything else.


Benchmarks are the least of their worries. x86 has such a huge installed
base that it is basically immune to benchmarks. The only benchmarks
that matter are the ones comparing one x86 to another. Itanium quite
obviously could not be put into any x86 benchmark. If cross-architecture
benchmarks mattered, then x86 would've been gone a long time ago. My
feeling is that if any architecture is ever able to emulate x86 at full
speed, then that architecture has a chance to take over from x86.

Yousuf Khan
  #110  
Old March 21st 05, 03:10 AM
Yousuf Khan

Robert Myers wrote:
On Sun, 20 Mar 2005 14:18:27 -0500, Yousuf Khan
wrote:
Not at all likely that Intel would've turned things around with
Prescott. First of all, nobody (except those within Intel) knew that they
were trying to increase the pipeline from 20 to 30 stages. So until that
point, all anybody knew about Prescott was that it was just a die-shrunk
Northwood, which itself was just a die-shrunk Willamette. Anyways,
bigger pipeline or not, it was just continuing along the same
standard path -- faster MHz. There wasn't much of an architectural
improvement to it, unlike what AMD did with Opteron.


We did know it was a complete die redesign. Bigger cache, longer
pipeline, different latencies. In the end, big investment,
disappointing results. Maybe my spin-o-meter is malfunctioning, but I
got the sense that Intel management was just as mystified as everyone
else. Northwood, if you remember, was about a 10% improvement on
Willamette at the same clock (bigger cache, if nothing else).


When did we first know it was a complete redesign? I think we only found
out about the longer pipeline about a month before Prescott's release. We
all assumed a bigger cache; that's done during most die-shrinks
anyway. When we found out about the extensive pipeline redesign, we all
knew it was a major job that they had done on it, and many of us
expressed genuine surprise about it. Northwood had actually been doing a
credible job keeping things competitive with AMD, and we were just
expecting a "Northwood II" with Prescott.

Not at all; AMD's 64-bit extensions had very little to do with it. I'd
say the bigger improvements were due to Hypertransport and the memory
controller, and yes, the inclusion of SOI manufacturing techniques. In terms
of engineering, I'd rank the important developments as 64-bit
(10%), Hypertransport (20%), process technology (20%), and memory
controller (50%). In terms of marketing, it was 100% 64-bit, which sort
of took on the mantle of spokesman for all of the other technology also
included.


The only thing out of the 64-bit that really mattered for most users
was more named registers.


I don't know if that even matters to most users -- even to programming
users. The only thing that's going to matter to most users is when they
start seeing games use the additional features.

Not sure where you get that little piece of logic from. People weren't
waiting for just any old 64-bit processor, they could get those before.
They were looking for a 64-bit x86 processor. Itanium falls into the
category of "any old 64-bit processor", since it's definitely not x86
compatible.


Almost no one really needed a processor with 64-bit pointers. If you
don't really need them, and most people don't, you might go to the
trouble to compile so you use 32-bit pointers, anyway. The increased
number of named registers is what most users are really ready to use
(with appropriate compiler support). That's the part Intel really
didn't like, and they wouldn't have had to swallow it if they could
have delivered the performance without them. They couldn't.


Not sure why it matters to you whether people need the extra registers now or
the extra memory addressability now. It's not as if people have time to
go do major overhauls of architectures every day; they might as well have
added as much as they could fit in. Most of it is going to be needed
sooner or later anyways.

Yousuf Khan
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.