A computer components & hardware forum. HardwareBanter


Go Back   Home » HardwareBanter forum » Video Cards » Ati Videocards

CPU vs GPU



 
 
  #1  
Old May 15th 04, 12:50 PM
cowboyz
external usenet poster
 
Posts: n/a
Default CPU vs GPU

Skybuck Flying wrote:
Hi,

Today's CPUs are in the 3000 to 4000 MHZ clock speed range.

Today's GPU's are in the 250 to 600 MHZ clock speed range.

I am wondering if GPU's are holding back game graphics performance
because their clock speed is so low ?

For example suppose someone programs a game with the following
properties:

3 GHZ for os, game, network, etc logic
1 GHZ for graphics.

Would that game run faster than any other game ?

How about

2 GHZ for game
2 GHZ for graphics.

I know CPU's have generic instructions. so 4 GHZ means a cpu can do
about 4000 million generic instructions.

What does GPU 500 mhz mean ???? 500 million pixels ? 500 million
triangles ? 100 million T&L + 400 million pixels ? what ?

Bye,
Skybuck.



Instead of looking at clock speed you should be paying more attention to
bandwidth. The latest CPUs and GPUs are far more powerful than they need to
be. We are only just seeing the beginning of games that take advantage of
the faster gear we have had for the last 6 months or so. You would hardly
notice on a PII 450 and FX5200 though.




  #2  
Old May 15th 04, 08:06 PM
Dr Richard Cranium
external usenet poster
 
Posts: n/a
Default

ahhwwweeee. you most likely didn't have the answer anyway.

i personally would like to read this discussion with diverse input and opinions from
around the known galaxy.

it is rare that discussions like these are put into an open forum with no malice intended.

so please refrain your little tantrum to simply a kill file huh.

** No Fate **

cheers,
dracman
Tomb Raider: Shotgun City
http://www.smokeypoint.com/tomb.htm
http://www.smokeypoint.org/traod/traod.html

**savegame editors all versions Tomb Raider & TRAOD godmode
http://www.smokeypoint.com/tr2code.htm

http://www.smokeypoint.com/tombraide...r1pictures.htm
http://www.smokeypoint.com/3dfx.htm
http://www.smokeypoint.com/banshee.html
http://www.smokeypoint.com/My_PC.htm
http://www.smokeypoint.com/tomb2.htm#Tova

** GTA III vice City Character MOD
http://www.smokeypoint.com/uzi.htm#gta3

NFS: drive at the limits of tyre adhesion ! snag Lara's Croftraider IV sports car ! !
NFS3:NFS4
http://www.smokeypoint.com/3dfx.htm#raider

http://www.smokeypoint.com/3dfx.htm#blondstranger
NFS:HS - Reg method to add your d3d card to NFS:HS
NFS III - Reg method to add your d3d card to NFS:HP
-=-=-
Mad Onion - 3DMark 2001 and 3DMark 2001se -
{not the Pro} 4x4 Missile Launcher Truck Secret Game Demo screen snaps
(and secret password)
http://www.smokeypoint.com/3dmark2001.html
-=-=-



"Alan Tiedemann" wrote in message
...
: Skybuck Flying wrote:
:
: Oh my god... Freaks like you are who make the "Netiquette" for german
: speaking newsgroups useful.
:
: Could you *please* leave the de.-hierarchy again and in future postings
: *always* set a followup-to exactly *one* newsgroup where your topic is
: *on* topic?
:
: Crossposts without followup-to exactly ONE group are completely stupid.
: If you set a followup-to, the people who are interested in your topic
: will surely follow you to the ONE newsgroup that fits. If you do not set
: any followup-to, all other readers are annoyed by the flood of useless
: and off-topic messages.
:
: So, please: GO. And do not come back unless you have at least a *tiny*
: clue about how usenet works.
:
: It's impossible for me to set a followup-to a correct group because
: *none* of the groups you have chosen is correct for your topic. So,
: please: DO NOT REPLY TO THIS POST IN USENET. The discussion is OVER for
: me, I will NEVER EVER read any of your usenet postings again, as
: probably any other reader of the de.*-groups you have randomly chosen
: from your group list.
:
: I've now set a followup-to poster. So, if you want to discuss with me, I
: will *only* accept mails. No posts in usenet, as I will not read them
: anymore and because this is now *completely* off topic in *all*
: newsgroups this thread is posted to.
:
: Filter score adjusted, discussion over for me.
:
: Alan
:
: --
: Please reply only in the newsgroup! I retrieve e-mails only rarely.
: http://www.schwinde.de/webdesign/ ~ http://www.schwinde.de/cdr-info/
: Mail: at__0815athotmailpunktcom ~ news-2003-10atschwindepunktde




.................................................. ...............
Posted via TITANnews - Uncensored Newsgroups Access
at http://www.TitanNews.com

-=Every Newsgroup - Anonymous, UNCENSORED, BROADBAND Downloads=-

  #3  
Old May 15th 04, 09:18 PM
Damaeus
external usenet poster
 
Posts: n/a
Default

In news:alt.comp.periphs.videocards.nvidia, "Skybuck Flying"
posted on Sat, 15 May 2004 13:20:35 +0200:

Today's CPUs are in the 3000 to 4000 MHZ clock speed range.

Today's GPU's are in the 250 to 600 MHZ clock speed range.

I am wondering if GPU's are holding back game graphics performance because
their clock speed is so low ?


The graphics card just has to process graphics. The CPU has many other
things to contend with. It's probably pretty well balanced, IMHO. You can
have too much video card. You wouldn't want to run a GeForce 6800 on a
Pentium 60 computer. The performance would be horrid. And if you ran a
GeForce 6800 on a 500MHz system, your performance would still be crappy.
And it would also be crappy if you ran a Creative Labs 3Dfx Banshee w/ 16mb
of VRAM on a 3.2GHz processor.
  #4  
Old May 15th 04, 09:47 PM
Dr Richard Cranium
external usenet poster
 
Posts: n/a
Default

the real answer ? those folks are busy getting the 64/128 bit ATI video card ready to
ship sometime in June.
Then the 64 bit sound card will follow. (PCI2)

Microsoft is just sitting on the 64 bit windows OS until INTEL is READY to ship. Gosh -
you wouldn't want AMD to grab the 64 bit OS market first, would you.

okay so you will have to use what available talent is here in the ATI ng huh, well at
least these folks are trying to help more ng'ers than wallets (like in their own).

my 2 cents follows.

The more you can off load the graphics API's from the cpu the faster the cpu can process
the game (software api's)
The cost/overhead of building a perfect GPU goes well beyond what you or i care to spend
for a graphics card.
How about $12,000 for the almost perfect CADD workstation video card. I don't think so.
$500 per card is my limit. ATI needs trade-offs, and you are purchasing those trade-offs
at various niche prices you and I can afford. This does NOT put much money back into
ATI's pocket so they can research an inexpensive GPU that performs like a CPU. ATI ain't
stupid either - you ain't getting that $12,000 video card for peanuts dude. What you ARE
going to get is value for your money. ATI can afford to make and sell you at a
reasonable price what you see on the market. You want CPU = GPU performance, so does ATI.

as GPU and CPU creation processes get improved and cheaper - then you will see GPU
performance approaching CPU performance.

You need the GPU to run the CPU at good efficiency. Lose the GPU, and you bog/choke the
CPU.

CPU wins, GPU wins.

nuff said.


** no fate **

dracman
http://www.smokeypoint.com/My_PC.htm




"Skybuck Flying" wrote in message
...
: Well I still have seen no real answer.
:
: Can I safely conclude it's non-deterministic... in other words people don't
: know ****.
:
: So the truth to be found out needs testing programs !
:
: Test it on P4
:
: Test it on GPU
:
: And then see who's faster.
:
: Since I don't write games it's not interesting for me.
:
: I do hope game writers will be so smart to test it out
:
: Skybuck.
:
:





  #5  
Old May 16th 04, 01:01 AM
Charles Banas
external usenet poster
 
Posts: n/a
Default

Skybuck Flying wrote:
Hi,

Today's CPUs are in the 3000 to 4000 MHZ clock speed range.

i haven't seen one on the market at 3.8GHz yet.

Today's GPU's are in the 250 to 600 MHZ clock speed range.

and they run much hotter than current CPUs because they perform many
times more calculations per cycle than a CPU does.

I am wondering if GPU's are holding back game graphics performance because
their clock speed is so low ?

no. current GPUs process 4-16 pixels per cycle simultaneously. (more
accurately, 4-16 texels. that is, a textured and colored pixel.)
graphics are inherently parallelizable and as a result, (barring the
logistical problems) a GPU could conceivably be designed to process
every pixel on the screen simultaneously in one cycle.
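To put rough numbers on that parallelism argument, here is a back-of-the-envelope sketch in Python; the clock speeds, pipeline count, and cycles-per-pixel are illustrative assumptions, not measured figures for any real chip:

```python
def pixel_throughput(clock_hz, pixels_per_cycle):
    """Theoretical peak pixels per second: clock rate times the
    number of pixels processed in parallel each cycle."""
    return clock_hz * pixels_per_cycle

# A 400 MHz GPU with 16 parallel pixel pipelines:
gpu = pixel_throughput(400e6, 16)    # 6.4 billion pixels/s
# A 4 GHz CPU shading one pixel every ~40 cycles in software:
cpu = pixel_throughput(4e9, 1 / 40)  # ~100 million pixels/s
print(gpu / cpu)                     # the slower-clocked chip wins by ~64x
```

The point is that throughput is the product of clock and width, so a low clock with a wide parallel design easily beats a fast serial one at this workload.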

For example suppose someone programs a game with the following properties:

3 GHZ for os, game, network, etc logic
1 GHZ for graphics.

Would that game run faster than any other game ?

possibly slower.

How about

2 GHZ for game
2 GHZ for graphics.

even worse. that doesn't leave much for data processing, audio
processing, data dispatch, etc.

I know CPU's have generic instructions. so 4 GHZ means a cpu can do about
4000 million generic instructions.

no. that means it's capable of processing instructions at the rate of 4
billion cycles per second. it has nothing to do with the speed
instructions are actually executed.

why?

because CPUs generally break down instructions into a series of jobs.
the pentium 4, for example, does this with 3 instructions at a time in
optimal conditions. each of these instructions is broken down into
about 15 or so jobs. (i forget how many micro-ops.) each one of these
jobs is performed sequentially as it is sent down the pipeline. once it
leaves the pipeline, the instruction is deemed complete.

long pipelines allow for high clock speeds, but because of this, delays
of tens of cycles can happen if the CPU is fed an instruction it can't
process right away.
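The cost of those stall delays can be sketched with a toy model. Assuming (purely for illustration) a 20-stage pipeline that completes one instruction per cycle when full, but pays a full refill penalty whenever an instruction flushes it:

```python
def effective_ipc(pipeline_depth, stall_rate):
    """Average instructions per cycle when a fraction `stall_rate` of
    instructions flushes the pipeline and pays a refill penalty of
    `pipeline_depth` cycles (a deliberately simplified model)."""
    cycles_per_instruction = 1 + stall_rate * pipeline_depth
    return 1 / cycles_per_instruction

print(effective_ipc(20, 0.00))  # 1.0 -- pipeline stays full
print(effective_ipc(20, 0.05))  # 0.5 -- 5% stalls halve throughput
```

This is why a high clock from a long pipeline doesn't translate directly into executed instructions: a small stall rate eats a large share of the cycles.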

What does GPU 500 mhz mean ???? 500 million pixels ? 500 million triangles
? 100 million T&L + 400 million pixels ? what ?

generally, it doesn't mean much. the GeForce 6800, for example, can
process up to 16 pixels per cycle, which includes texturing and up to
two math or texture calculations per pixel, for every cycle. if more
than one texture is used for a given pixel, then two pipelines are used
bringing it down to 8 pixels per cycle, which includes two textures and
the two calculations. so the theoretical peak fill rate of a GeForce
6800 is (i think) about 4 billion texels per second. (i'm probably a
bit off, but that should give you an idea.) a 4GHz pentium 4 doing the
*same* job might be able to do about 100 million texels per second,
assuming that's the only job it's doing.
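The fill-rate trade-off described above (16 single-textured pixels per cycle, or 8 pixels with two textures each, same texel rate either way) can be sketched like this; the 400 MHz clock and pipeline count are illustrative stand-ins, not the card's exact specs:

```python
def texel_rate(clock_hz, pipelines, textures_per_pixel):
    """Texels per second. Applying 2 textures to one pixel consumes 2
    pipelines, halving pixels/s but leaving texels/s unchanged."""
    pixels_per_cycle = pipelines // textures_per_pixel
    return clock_hz * pixels_per_cycle * textures_per_pixel

single = texel_rate(400e6, 16, 1)  # 16 pixels/cycle, 1 texture each
double = texel_rate(400e6, 16, 2)  # 8 pixels/cycle, 2 textures each
print(single == double)            # True: same total texels per second
```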

ATis work a bit differently, but i skimmed over those articles and don't
recall the details.

--
-- Charles Banas
  #6  
Old May 16th 04, 02:07 AM
Skybuck Flying
external usenet poster
 
Posts: n/a
Default


"Dr Richard Cranium" wrote in message ...
: the real answer ? those folks are busy getting the 64/128 bit ATI video
: card ready to ship sometime in June.
: Then the 64 bit sound card will follow. (PCI2)
:
: Microsoft is just sitting on the 64 bit windows OS until INTEL is READY to
: ship. Gosh - you wouldn't want AMD to grab the 64 bit OS market first,
: would you.
:
: okay so you will have to use what available talent is here in the ATI ng
: huh, well at least these folks are trying to help more ng'ers than wallets
: (like in their own).
:
: my 2 cents follows.
:
: The more you can off load the graphics API's from the cpu the faster the
: cpu can process the game (software api's)
: The cost/overhead of building a perfect GPU goes well beyond what you or i
: care to spend for a graphics card.
: How about $12,000 for the almost perfect CADD workstation video card. I
: don't think so. $500 per card is my limit. ATI needs trade-offs, and you
: are purchasing those trade-offs at various niche prices you and I can
: afford. This does NOT put much money back into ATI's pocket so they can
: research an inexpensive GPU that performs like a CPU. ATI ain't stupid
: either - you ain't getting that $12,000 video card for peanuts dude. What
: you ARE going to get is value for your money. ATI can afford to make and
: sell you at a reasonable price what you see on the market. You want
: CPU = GPU performance, so does ATI.
:
: as GPU and CPU creation processes get improved and cheaper - then you
: will see GPU performance approaching CPU performance.
:
: You need the GPU to run the CPU at good efficiency. Lose the GPU, and you
: bog/choke the CPU.
:
: CPU wins, GPU wins.
:
: nuff said.


Absolutely not...

Since nowadays nvidia cards and maybe radeon cards have T&L... that means
transform and lighting.

That also means these cards can logically only do a limited amount of
transform and lighting.

5 years ago I bought a PIII 450 mhz and a TNT2 at 250 mhz.

Today there are P4's and AMD's at 3000 mhz... most graphics cards today are
stuck at 500 mhz and overheating with big cooling stuff on them.

So it seems cpu's have become 6 times as fast... (if that is true) and
graphics cards maybe 2 or 3 times ?! they do have new functionality.

Engines are expensive to build.

Look at the doom3 engine... or any other engine... Suppose that engine uses
T&L... Suppose 5 years from now cpu's are again 6 times faster... and
graphics cards only twice as fast (the best most expensive ones I doubt
that is going to happen any time soon seeing the big heat problem ).

That could mean the doom3 engine or any other engine is seriously held back
if all T&L is done in the graphics card...

I think john carmack is really smart and codes flexible stuff... so maybe he
is smart enough to do T&L in the cpu as well... Actually doom3 might not use T&L
at all... since I was able to run the doom3 alpha on a TNT2 which has no
T&L... also rumors are lol that john carmack likes using opengl that's
true and doom3 only (?) uses opengl.

So for doom3 this might actually not be a problem... But for games like call
of duty, homeworld 2, halo... this could become a problem... a slow problem
that is =D

Seeing this post my confidence in john delivering awesome powerful
flexible engines has just risen

Bye, bye,
Skybuck


  #7  
Old May 16th 04, 04:39 AM
Charles Banas
external usenet poster
 
Posts: n/a
Default

Skybuck Flying wrote:

"Dr Richard Cranium" wrote in message ...

[dracman's post quoted in full - snipped]



Absolutely not...

Since nowadays nvidia cards and maybe radeon cards have T&L... that means
transform and lighting.

do you know what T&L entails? one T&L operation on a GPU is roughly
equivalent to 10-15 operations on a CPU. video cards often use T&L at
lower precision than a CPU might use (16-bit vs. 32-bit z-buffers) by
default, but they do that work much more quickly, at a lower clock rate.

That also means these cards can logically only do a limited amount of
transform and lighting.

but they are designed to do it, and so are much more deft at it.

5 years ago I bought a PIII 450 mhz and a TNT2 at 250 mhz.

how nice.

Today there are P4's and AMD's at 3000 mhz... most graphic cards today are
stuck at 500 mhz and overheating with big cooling stuff on it.

i have one of those video cards, but AMDs aren't really up to 3GHz.
their performance rating is 3000+ or 3200+ etc., but they actually
operate down toward 2.5GHz.

So it seems cpu's have become 6 times as fast... (if that is true) and
graphics cards maybe 2 or 3 times ?! they do have new functionality.

graphics cards perform a function that is more parallelizable, so they
have more pipelines to perform calculations in. CPUs have relatively
few pipelines because they can't be parallelized as easily. as a
result, a GPU can do more work per clock than a CPU.

Engines are expensive to build.

really.

Look at the doom3 engine... or any other engine... Suppose that engine uses
T&L... Suppose 5 years from now cpu's are again 6 times faster... and
graphics cards only twice as fast (the best most expensive one's I doubt
that is going to happen any time soon seeing the big heat problem ).

That could mean the doom3 engine or any other engine is seriously held back
if all T&L is done in the graphics card...

WRONG.

the T&L by that time will be performed on /more vertices per clock/ than
it is now.

CPUs are not as capable.

I think john carmack is really smart and codes flexible stuff... so maybe he
is smart enough to do T&L in cpu as well... Actually doom3 might not use T&L
at all... since I was able to run the doom3 alpha on a TNT2 which has no
T&L... also rumors are lol that john carmack likes using opengl that's
true and doom3 only (?) uses opengl.

OpenGL is supported on more platforms. he doesn't set his sights as
narrowly as other developers. Windows is not the only future. other
platforms exist, and those platforms have OpenGL.

as a side note, OpenGL is a standard. as such, anything that uses it
must follow the standard. OpenGL mandates that T&L be supported -
either by the OpenGL driver or by the hardware. ever since the GeForce
series started, T&L has been done in hardware. your argument holds no
water.

So for doom3 this might actually not be a problem... But for games like call
of duty, homeworld 2, halo... this could become a problem... a slow problem
that is =D

again, you're wrong. Halo makes extensive use of Pixel Shaders. in
another post i noted that GPUs are
inherently parallelizable. as such, i expect to see GPUs coming out in
the future that support as many as 24 or 32 pixel pipelines with
multiple math and texture units per pipeline. as such, they will
process a great deal of pixel data every cycle - more than a CPU can
dream of doing in a hundred cycles. Halo especially will reap the
benefits here, and in a little over a year or so, we can probably see
Halo doing upwards of 100+ frames per second with CURRENT CPUs.

Seeing this post my confidence in john delivering awesome powerful
flexible engines has just risen

John is an exceptional programmer. i can only hope that my talent
proves itself to be half what he is known for.

but honestly, you're giving him too much credit.

--
-- Charles Banas
  #8  
Old May 16th 04, 05:03 AM
Dr Richard Cranium
external usenet poster
 
Posts: n/a
Default

cross posted.

valid question has he, however; no clear direction can the answer come from - so the man
with the answer doesn't quite know whenst or wherest he is answering or whom is responding
from where.

basically.

the guy just "shotgunned" the newsgroups he thought might answer his already made up
mind - a kinda guy you hope has no authority or money to carry out his whims. know what i
mean.

i would even venture to say - he is German - as he is having a difficult time reading and
interpreting the answer (hence my "already made up mind" inference to the poster).

** No Fate **

cheers,
dracman
Tomb Raider: Shotgun City
http://www.smokeypoint.com/tomb.htm

http://www.smokeypoint.com/3dfx.htm

http://www.smokeypoint.com/My_PC.htm





"C" wrote in message
om...
: Someone clue me in on all the attacks on this (mis)post.
: Seems so much energy was expended on bashing this guy's error of post. I'm
: new to newsgroups, so please explain the presidential-level error of such
: magnitude that it required such negative attention.
:
:
: "Skybuck Flying" wrote in message
: ...
: Hi,
:
: Today's CPUs are in the 3000 to 4000 MHZ clock speed range.
:
: Today's GPU's are in the 250 to 600 MHZ clock speed range.
:
: I am wondering if GPU's are holding back game graphics performance because
: their clock speed is so low ?
:
: For example suppose someone programs a game with the following properties:
:
: 3 GHZ for os, game, network, etc logic
: 1 GHZ for graphics.
:
: Would that game run faster than any other game ?
:
: How about
:
: 2 GHZ for game
: 2 GHZ for graphics.
:
: I know CPU's have generic instructions. so 4 GHZ means a cpu can do about
: 4000 million generic instructions.
:
: What does GPU 500 mhz mean ???? 500 million pixels ? 500 million
: triangles
: ? 100 million T&L + 400 million pixels ? what ?
:
: Bye,
: Skybuck.
:
:
:
:





  #9  
Old May 16th 04, 11:23 AM
Gerry Quinn
external usenet poster
 
Posts: n/a
Default

In article , "Dr Richard Cranium" wrote:
the guy just "shotgunned" the newsgroups he thought might answer his
already made up mind - a kinda guy you hope has no authority or money to
carry out his whims. know what i mean.


It wasn't that stupid a question, even if he had apparently developed
some prejudice on the matter. And the newsgroups he chose were targeted
reasonably well, except for the language thing, which was probably an
honest mistake.

At least he's not one of those complete ****wits who complain about
cross-posting per se.

- Gerry Quinn
  #10  
Old May 16th 04, 09:32 PM
J. Clarke
external usenet poster
 
Posts: n/a
Default

Skybuck Flying wrote:


"Dr Richard Cranium" wrote in message ...

[dracman's post quoted in full - snipped]


Absolutely not...

Since nowadays nvidia cards and maybe radeon cards have T&L... that means
transform and lighting.

That also means these cards can logically only do a limited amount of
transform and lighting.


Any DX compliant board is going to have hardware transform and lighting. As
for doing "a limited amount", I'm not sure what point you're trying to
make. They do it in dedicated hardware, not in software. What takes a
regular CPU a dozen or so operations happens in one on the GPU.

5 years ago I bought a PIII 450 mhz and a TNT2 at 250 mhz.


If you're comparing apples to apples that PIII was running 100 MHz with a
clock multiplier of 4.5 while the TNT2 was running an honest 125 MHz (not
250--nvidia didn't get to that speed until the Geforce2).

Today there are P4's and AMD's at 3000 mhz... most graphic cards today are
stuck at 500 mhz and overheating with big cooling stuff on it.


Those P4s and AMDs are running around 200 MHz with clock multipliers of
12-16. GPUs get speed by parallelism, not by a deep pipeline, and so
aren't clock-multiplied--a current GPU running 500 MHz is running that as
the primary clock, not the multiplied clock.
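The multiplier arithmetic above can be written out as a quick Python sketch; the bus speeds and multipliers here are the post's ballpark figures, not exact part specifications:

```python
def core_clock(bus_mhz, multiplier):
    """CPU core clock (MHz) = front-side bus clock x internal multiplier."""
    return bus_mhz * multiplier

print(core_clock(100, 4.5))  # 450.0 -- the Pentium III 450 example
print(core_clock(200, 16))   # 3200  -- a ~3.2 GHz P4 on a 200 MHz bus
# A 500 MHz GPU has no multiplier: 500 MHz is its primary clock.
```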

So it seems cpu's have become 6 times as fast... (if that is true) and
graphics cards maybe 2 or 3 times ?! they do have new functionality.


Nope, both have increased in performance at about the same rate. Clock
speed is not the whole story. If it was then the P4s would be walking all
over the AMD-64s.

Engines are expensive to build.

Look at the doom3 engine... or any other engine... Suppose that engine
uses T&L... Suppose 5 years from now cpu's are again 6 times faster... and
graphics cards only twice as fast (the best most expensive one's I doubt
that is going to happen any time soon seeing the big heat problem ).

That could mean the doom3 engine or any other engine is seriously held
back if all T&L is done in the graphics card...


Why? You're assuming that doubling the clock speed results in doubling the
T&L rate. Suppose at the same time the clock speed is being doubled the
number of clock cycles needed to perform a stage of T&L is also cut in
half.

You're also assuming that T&L is all that the board does. That assumption
is also not correct. Further, you're assuming that T&L is the bottleneck.
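That point about clock speed versus cycles-per-stage can be made concrete with a toy calculation; the clock rates and cycle counts below are invented purely for illustration:

```python
def tnl_rate(clock_hz, cycles_per_vertex):
    """Vertices transformed per second: clock rate divided by the
    per-vertex cycle cost (all figures here are illustrative)."""
    return clock_hz / cycles_per_vertex

old = tnl_rate(250e6, 4)  # 62.5M vertices/s on a hypothetical old GPU
new = tnl_rate(500e6, 2)  # 250M vertices/s: clock doubled AND cycles halved
print(new / old)          # 4x the throughput from only a 2x clock increase
```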

I think john carmack is really smart and codes flexible stuff... so maybe
he is smart enough to do T&L in cpu as well... Actually doom3 might not
use T&L at all... since I was able to run the doom3 alpha on a TNT2 which
has no T&L... also rumors are lol that john carmack likes using opengl
that's true and doom3 only (?) uses opengl.

So for doom3 this might actually not be a problem... But for games like
call of duty, homeworld 2, halo... this could become a problem... a slow
problem that is =D


If your assumptions are correct. They are not.

Seeing this post my confidence in john delivering awesome powerful
flexible engines has just risen

Bye, bye,
Skybuck


--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.