A computer components & hardware forum. HardwareBanter


The Coming Combo Of The CPU And GPU, Ray Tracing Versus Rasterization, And Why Billions Of Dollars Is At Stake



 
 
  #1  
Old August 30th 06, 06:33 PM posted to comp.sys.intel,alt.comp.hardware.amd.x86-64,alt.comp.periphs.videocards.ati,alt.comp.periphs.videocards.nvidia,comp.sys.ibm.pc.hardware.video
AirRaid

http://blogs.mercurynews.com/aei/200...comb.html#more

The Coming Combo Of The CPU And GPU, Ray Tracing Versus Rasterization,
And Why Billions Of Dollars Is At Stake

By Dean Takahashi, 12:01 AM, in Gaming

Not everybody may care about how they get their eye-popping graphics.
But how it gets delivered to you will be determined by the results of a
multi-billion dollar chess game between the chip industry's giants. Can
you imagine, for instance, a future where Nvidia doesn't exist? Where
there's no Intel? The survivor in the PC chip business may be the
company that combines a graphics chip and a
microprocessor on a single chip.

In the graphics chip industry, everyone remembers how Intel came into
the market and landed with a thud. After acquiring Lockheed's Real3D
division and building up its graphics engineering team, Intel launched
the i740 graphics chip in 1998 and it crashed and burned. The company
went on to use the i740 as the core of its integrated graphics chip
sets, which combined the graphics chip with the chip set that
controls input-output functions in the PC. Intel took the dominant
share of graphics as the industry moved to integrated, low-cost chip
sets, according to Jon Peddie Associates. But the company never gave up
on its ambition of breaking into graphics. Intel has a big team of
graphics engineers in Folsom, Calif., to work on its integrated
graphics chip sets. And it recently acquired graphics engineers from
3Dlabs. The Inquirer.net has been writing about rumors that Intel has a
stand-alone graphics chip cooking. That may have been one of the
factors that pushed Advanced Micro Devices into its $5.4 billion
acquisition of graphics chip maker ATI Technologies. Because of that
deal, the PC landscape has changed forever. Now there is an imbalance
as Intel, Nvidia, and AMD-ATI try to find the center of the future of
computing. (pictured: Intel's Jerry Bautista)

If AMD and ATI combine the central processing unit with the graphics
processing unit, it could collapse the barrier between
multibillion-dollar industries, leaving both Nvidia and Intel to
scramble. What will happen? Will Nvidia stick a CPU on the corner of
its graphics chip and take a lot of dollars away from Intel in the PC?
Intel is betting that something else will happen. In interviews, its
researchers are confident that graphics processing will naturally
shift from the GPU to the CPU. That's because they believe
that the decades-old technique dubbed "ray tracing" will replace
the technique of rasterization, or texture-mapping, that modern
graphics chips have grown up with. Check out more about this in a paper
by Intel researcher Jim Hurley at
http://www.intel.com/technology/itj/...10_authors.htm.
Ray tracing involves rendering an image by shooting a ray from a point
of view and seeing what it hits. You can see what is in a picture and
what is hidden from view. There is no need to render everything in the
whole scene, only what is visible. By contrast, rasterization is
getting more and more complicated. Programmers have to make numerous
passes, adding layer upon layer of shadows and lighting to a scene
until it looks just right. Hurley says ray tracing is a more accurate depiction
of reality, while rasterization can only approximate reality. Ray
tracing has been expensive, but the animation houses such as DreamWorks
and Pixar have used it in their latest movies. Perhaps the latest
efforts in video games will not be far behind, says Hurley. If you look
at the Cell chip for the PlayStation 3, it was clear that Sony thought
about putting graphics and the CPU on one chip. But it changed its mind
and brought Nvidia into the picture with the RSX graphics chip.
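To make the distinction concrete, here is a minimal C++ sketch of the
"render only what is visible" idea: one ray is shot from the eye through
each pixel, every object in a toy scene is tested, and only the nearest
hit colors the pixel. The sphere-only scene, the ASCII output and all of
the names are illustrative assumptions, not code from Intel or anyone
quoted in this article.

// Toy ray caster: one primary ray per pixel, nearest hit wins.
// Scene, names and shading are illustrative assumptions only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec center; double radius; char shade; };

// Distance along the ray to the sphere, or a negative value on a miss.
static double hit(Vec origin, Vec dir, const Sphere& s) {
    Vec oc = sub(origin, s.center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;
    return (-b - std::sqrt(disc)) / (2.0 * a);
}

int main() {
    std::vector<Sphere> scene = {
        {{0.0, 0.0, -3.0}, 1.0, '#'},   // near sphere
        {{1.5, 0.0, -6.0}, 1.0, '*'},   // far sphere, partly hidden behind it
    };
    for (int y = 0; y < 24; ++y) {
        for (int x = 0; x < 48; ++x) {
            // Shoot a ray from the eye at the origin through this pixel.
            Vec dir = {(x - 24) / 24.0, (12 - y) / 12.0, -1.0};
            double best = 1e30;
            char pixel = '.';
            for (const Sphere& s : scene) {        // nearest hit wins; geometry
                double t = hit({0, 0, 0}, dir, s); // hidden behind it is never
                if (t > 0.0 && t < best) {         // shaded at all
                    best = t;
                    pixel = s.shade;
                }
            }
            std::putchar(pixel);
        }
        std::putchar('\n');
    }
}

Geometry the rays never reach simply never contributes any work, which is
the property the ray-tracing camp keeps pointing to.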

I've interviewed a number of folks about what they think about the
possibility of combining the graphics chip and the CPU, as well as the
notion that ray tracing on the CPU might replace rasterization on
graphics chips. Some of these interviews took place before the AMD-ATI
merger, and some after. Here are some of the quotes.

Patrick Moorhead, vice president of advanced marketing at Advanced
Micro Devices, said, "The idea that the microprocessor and the
graphics chip might combine was an element in our merger. We could have
licensed that. We see that as a mid-term reality. We are announcing we
will do a combined CPU and GPU development in 2007. Initially it is
focused on emerging markets. That's where the right solution is
optimized for emerging solutions. The cost of the emerging platform is
governing a lot of things. The CPU controls the costs of peripherals in
the system. The more integrated it is, the less power it requires. You
can get by with a smaller power supply and have other benefits."

Pat Gelsinger, executive vice president of the Digital Enterprise Group at
Intel, said: "This idea is interesting for our tera-scale computing
initiative. We have had a certain architectural model for graphics. ATI
and Nvidia have evolved it very effectively. We do a lot of
business in integrated graphics. Now as people have moved to multipass,
more sophisticated rendering, and we have introduced ray tracing and
other models, the nature of the graphics pipelines is changing. You
shove polygons through. That is not right for the work load. Small
general-purpose algorithms look more like what we do on a CPU than a
GPU. Ray tracing is being done today. People ship it today, running in a
blade configuration with two-unit rack-mounted servers. They use it
for high-resolution rendering on things like Shrek 2.
They produce superior graphics results with it. It will start to
displace traditional rendering architectures. We're not there yet. We
are still a few years away from that point of saying that.

What can the graphics chip do instead? Well, why can you tool the
physics to run on the GPU or the CPU? Look at Havok versus Ageia. The
result of that is what we see as the next-generation visualization
workloads. We are taking that into account in our planning. I wouldn't
call it a collision of the graphics chip and the CPU. It's
next-generation workloads. I'm not expecting GPUs to go away, and
Jen-Hsun and Dave Orton aren't expecting them to go away."

Dave Kirk, Nvidia's chief scientist, said, "If ray tracing were
universally superior to rasterization, wouldn't digital film studios
use ray tracing exclusively? They do not. In fact, the first film to
extensively use ray tracing is Pixar/Disney's Cars, which has lots of
shiny reflective objects and scenes rather than soft, flexible
characters and natural environments. Most digital animated films use a
combination of many techniques including both ray tracing and
rasterization, to create the widest possible variety of effects as
efficiently as possible. It is likely that games and interactive
graphics applications will progress in the same way over time. It is
naïve to think that CPUs with limited parallelism will be competitive
with the massively parallel devices such as graphics chips."

On ray tracing versus rasterization, Greg Brandeau, the chief
technology officer at Pixar Animation, said, "This is a complicated
question. My simple summary would be that more studios would use ray
tracing if the computer power were cheap enough. There are workarounds
to get nice images but ray tracing is computationally very expensive.
Also, Cars was not the first movie to use ray tracing. Shrek 2 used ray
tracing to approximate global illumination. As to which of these
technologies will win out, only time will tell. These two technologies
are so different that it is hard to predict which will ultimately win
out. At Pixar, we don't know the answer but we are constantly
evaluating the state of the latest hardware to figure out what is going
to give us the most pretty pixels per dollar."

Dave Orton, CEO of ATI Technologies, said: "I think it is extremely realistic
that the CPU and the GPU will be combined on one chip. If you think
about the market and how it has moved from the GPU to the integrated
graphics chip sets, there are new opportunities. Like the ultramobile
PC or MIT's "One Laptop Per Child" project. Power continues to be
an issue. There is a huge opportunity in the low end of the market to
create a third platform stack to the overall PC platform. So you have
integrated chip sets and GPUs. It's a question of when.

The question is if the CPU architecture can do graphics processing. I
would agree there is a class of graphics processing that you would want
a graphics processor to do. But the question is if there is a
system-on-a-chip that could also do that. It will not happen in
performance laptops. But think of how many chip sets still use Direct X
7 or Direct X 8 graphics. Those are still shipping. The "One Laptop
Per Child" applications. I think there will be a class of problems
you can solve with current technology. It might be one generation
behind state-of-the-art graphics."

On ray tracing versus rasterization, Orton said: "Ray tracing is just
one form of how you render a pixel. It's not the only form. At a
scientific level, you can say that it is growing to fit more problems.
But the reality is there is a broad range of how you want to render a
pixel. Ray tracing is one form of how you do it. Other applications
will want to render it in different ways. I don't see the processor
doing it as much as I see extensions of the processor doing it."

Henri Richard, chief sales and marketing officer, Advanced Micro
Devices: "It's more of a question of when than if. We will have a
transistor budget at some point in time to combine the CPU and the GPU
on one piece of silicon. In a multicore environment, one core will be
the GPU."

Justin Rattner, chief technology officer of Intel, said: "Intel builds
raster-based graphics. We have for some time. It's mature as a
technology and has reached its highest evolutionary form. What you see
now is to achieve the desired look for a scene, you have to make many
passes over the data per scene. Fifteen or twenty-five times on a GPU
pipeline. That's not raster versus ray trace. It's about how GPUs
have fixed pipelines, anything that is longitudinal, you render it,
then take another pass. A more flexible architecture lets you render in
one pass. We're interested in that from a pure architectural point of
view. In the next five years, these two architectures will meet
somewhere in the middle. GPUs will become more flexible and CPUs will
do more things. Ray tracing has more to do with the fact that you get
the desired result with very little effort. Right now it's tedious to
get the desired look. If you can do ray tracing in real time, it's
the obvious choice for the solution. Right now we get six or so frames
per second. It can deliver an arbitrary degree of photorealism. The
idea has generated a lot of discussion with the merger of ATI and AMD
coming. We have been much more focused on working with the graphics and
rendering software communities to create the architecture and software
for a new generation of rendering. It's marked now by functions that
don't have much to do with image quality. If you want to add physics
and behavioral AI, you have to design the software in a different way.
Not piecemeal. That's where we come in. You have to do this in a
general-purpose environment. Our view is we have to beat them on
performance. You have to do something they can't do today. Otherwise,
you can't generate momentum."

Jim Hurley, researcher, Intel: "We think that ray-tracing is going to
take off. This is a technique where you render only what you see.
It's different from rasterization, which is what graphics chips do.
With rasterization, you feed triangles into a rasterizer and it
processes them in order. But it doesn't take into account the
relationship of the triangles. It can only do multiple passes and do
things over and over again. Ray-tracing lets you shoot a ray into a
model of a world and it will find the object it is aimed at. It is a
simulation of the physics of light. Rasterized graphics is an
approximation. It can achieve plausible images but it is using brute
force. Ray tracing can run efficiently on a CPU because of the large
caches. GPUs rely on brute-force bandwidth. Traditional raster graphics
doesn't do that well with ray-tracing. You can't do ray-tracing on
a GPU because there isn't that much memory. You get movie quality for
games and can do it in real time. Rasterization is trying to mimic what
ray tracing does in photorealism. Pixar's rendering was based on
raster graphics for a long time, but all the movie houses are moving
to ray tracing." (CPU magazine).

Jerry Bautista, director of Intel's Microcomputer Research Lab:
Regarding the graphics chip companies, he said, "If I were them, I
would be nervous. We see a trend. We watch the FLOPS, or floating point
operations, the watts, and dollars that go into the graphics cards and
the computational physics on GPUs. They have been a growing part of the
PC budget. We are aware of that. Some graphics computation is handled
well on a graphics processor. We can pull the graphics back on the CPU.
In the future, the load of rendering an image falls in favor of the
computer side, the microprocessor, and the pixelization task becomes
minor. Our horizon is three to five years."

He added, "Instead of saying that we will win over the graphics chip
makers, I'd talk with them about the applications themselves. In
today's systems, they are largely concerned with rendering. About 90
percent of resources are spent drawing pictures. There is not much left
for physics and artificial intelligence. What happens when real physics
and real AI kick in? In ray tracing, we see that if we had 10
or 100 cores, we would see a 10X speed-up. With 1,000 cores, we would
see 100X speed-up. It just keeps going. Ray-tracing can swallow up
whatever compute we build. At what point do you get diminishing
returns?" (CPU magazine)"

David Wu, game programmer and president of Pseudo Interactive in
Toronto, Canada, said, "Many concepts from ray tracing and
rasterization are converging; eventually they will meet. With current
architectures (CPU or GPU) and memory bottlenecks, rasterization has an
inherent advantage in performance. That will be enough to keep it as
the technique of choice for high performance applications for many
years. The main advantage of ray tracing is the fact that you can
create nice abstract images with little programming effort. However,
when you get down to all the details that are required to render real
scenes, there is not much savings in programming complexity. Ray
tracing might find its niche amongst hobbyists (who want to build their
own renderers from scratch), dogmatic programmer evangelists who like
the term "Ray Tracing", and existing, legacy systems."

Wu added, "There is no question about the GPU/CPU separation. They
will both be on one chip using pretty much the same sort of hardware by
the next console generation. Something like CELL, but without all of
the flaws and easier to program for. Physics will be done using the
same hardware. Relatively simple, massively parallel processors with a
lot of hardware dedicated to the issue of memory latency and
bandwidth."

Tim Sweeney, CEO of Epic Games and graphics expert, said, "I'm a very
strong believer in the coming convergence of CPU and GPU hardware and
programming models, enabling CPUs to once again implement great
software rendering, or alternatively for GPUs to be applied naturally
to general computing problems using mainstream programming languages.
This is a separate topic from the question of whether ray tracing is
the future of graphics. Many vast benefits would come from a CPU-GPU
convergence that would benefit all means of generating scenes:
rasterization, ray tracing, radiosity, voxels, volumetric rendering,
and other paradigms. Such a convergence means that real-time ray
tracing will become possible, but by no means does it imply that ray
tracing will become the de facto solution for 3D drawing. For example,
ray tracing is poorer for anti-aliasing (looking towards
multisampling and analytic anti-aliasing techniques), and typically
imposes a 20-40X computational penalty compared to rasterization. Ray
tracing is superior for handling bounced light, reflection, and
refraction. So, there are some places where you will definitely want
to ray trace, and some cases where it would be a very inefficient
choice. Certainly, future rendering algorithms will incorporate a mix
of techniques from different areas to exploit their strengths in
various cases without being universally penalized by one technique's
weakness."

Bob Drebin, chief technology officer of the PC business unit at ATI
Technologies, said, "They do pseudo ray tracing in the movies now.
They rasterize. If there is a polygon that needs complex reflections,
they start a ray trace for that. In both Shrek 2 and Cars, they use it
depending on the effect. With our Toy Shop demo, we did limited ray
tracing with cobblestones. It's limited. For the bricks. It's a
tool that they use in a shader program for certain situations where you
determine your color. What objects occlude you, what can you see. It is
a technique. The notion of casting rays to determine visibility or color is
something we use today. It's just one of the things. To me the more
interesting thing is the dynamics of the scene. Top tier developers
feel they are getting good. Now they want to make the scenes more
compelling from an interactive view. It's more about how I make it
more dynamic, more interactive, than to make the lighting more precise.
The physics, the interaction of objects. Character animation getting
muscle based. That is where the energy is going in game computing. In
terms of realism, I see ray tracing as a technique that will be used
selectively. Even if it goes that way, ray tracing is a highly parallel
operation. I don't see a time when they will be talking about thousands
of processors. I don't see the advantage of a CPU doing it. If
the question is who can do a single ray fastest, then the CPU will win.
Then the goal is to determine each reflection as soon as possible and
move to the next one. But if you have to complete a million of them,
the question is not how long it takes to do the first one. You can run
many of the rays in
parallel together. The throughput would be much higher. Thousands or
millions of ray intersections would collide. With thousands, then the
GPU is the clear winner. I think that in a lot of ways with the new
compute coming to the GPU, there are things that are not possible to do
on a CPU. In the past, the only place you could express it was the CPU.
The GPU is now becoming programmable. People aren't saying give me a
smaller CPU. They are saying now I can finally do more things. In a
game like Half-Life 2, you would be able to throw around all the
objects. Not just one or two. Multi-core is great for lots of
sequential computation. It may become less clear. With richer
programming languages, the GPU needs less interaction with the CPU. I
suspect CPUs will become more parallel. We aren't running out of
things we wish we could do."

Jen-Hsun Huang, CEO of Nvidia, before the ATI-AMD merger announcement,
said, "Programmability has different types. There are scalar
programs. A scalar program uses a scalar microprocessor with a flow of
instructions, and it fetches instructions out of a cache. It processes data in a
data-dependent way. That sort of programming is what microprocessors
are really wonderful at. We are not very good at that kind of
processing. Our processors are adept at processing large amounts of
data that have less dependency. Our processor is more akin to a stream
processor. The types of architectures are radically different. Just as
the CPU can run DSP programs, a DSP is much better at running DSP
programs. There are different types of programming models, whether it
is signal processing for baseband, or voice. There are scalar
processors. There are image processors for enormously large data sets
which is what a GPU does.

There is integration at two levels. There is the unification of
processing models. There is the CPU and the GPU, combined together in a
unified processor model. I think the latter is very unlikely. Although
on balance, transistors are free, we are challenged because most of the
opportunities require low power. So you have to have efficient
programming. It is far more efficient to run a program written for CPUs
on a CPU, and it's far more efficient to run a program for GPUs on a
GPU. There is the issue of power efficiency and cost efficiency. Brute
force is not a very good option. There is the second approach of
combining two processors onto one chip. In some markets, that would
happen. For example, integrated graphics combines two chips into one
where the technology is not very demanding. The market requirements are
much slower in commercial, corporate desktops and others that require
very little graphics. But if the graphics technology is a defining part
of that system, whether it is a game console or high-end PC or
workstations, the two devices innovate at different rhythms. There is
no reason the two devices want to merge into one in that case. In fact,
combining them into one makes it very difficult to combine two modern
cores into the same substrate on the same schedule. There, what causes
the two to move apart is not difference in programming models but
differences in market requirements and rhythms. By putting it in one
chip, you end up getting the worst of both worlds."

Nelson Gonzalez, CEO of Alienware, said about the merger of ATI and
AMD, "It may be a good thing. The reason I say that is I see ray
tracing is part of the way to go in the future. I don't think it's
going to be handled always at the CPU level. Maybe some FPGA chip.
That's the way to go. We are getting to the point where you have to
run eight geometric processors to process all these polygons. At some
point it doesn't make sense anymore. It makes sense to do ray
tracing.

I would think at this point it makes sense to keep the graphics chip
and the CPU separate. Unless you have many many cores, we're still
away from that. The writing is on the wall. Pixar rendered the Cars
movie with ray tracing. You're going to get a level of realism you
can't get with what we have. The combo of ray tracing and
rasterization makes sense at the beginning. Eventually, the future is
really just pure ray tracing. It's easier to model than to draw these
things out."

  #2  
Old August 31st 06, 06:57 PM
Yousuf Khan

AirRaid wrote:
http://blogs.mercurynews.com/aei/200...comb.html#more

The Coming Combo Of The CPU And GPU, Ray Tracing Versus Rasterization,
And Why Billions Of Dollars Is At Stake


Kind of a long-winded way of saying GPUs and CPUs might be coming
together, don't you think?

Yousuf Khan

  #3  
Old September 2nd 06, 05:07 AM
HockeyTownUSA


"Yousuf Khan" wrote in message
ups.com...
AirRaid wrote:
http://blogs.mercurynews.com/aei/200...comb.html#more

The Coming Combo Of The CPU And GPU, Ray Tracing Versus Rasterization,
And Why Billions Of Dollars Is At Stake


Kind of a long-winded way of saying GPUs and CPUs might be coming
together, don't you think?

Yousuf Khan


Yep.

But I doubt it will happen. Too much money to be made unless Intel buys
nVidia (since AMD already bought ATI - go figure, I was expecting AMD to buy
nVidia instead, since they make chipsets for AMD's CPUs). But either way,
selling GPUs and CPUs separately is a much bigger money maker than
integrating them together. I can see CPUs offering specialty areas on the
chip for basic GPU operations, thus eliminating a need for a separate
onboard GPU chip which would increase revenues. There is a market in the
business industry for basic GPU functions for desktop work, and relegating
that stuff into the CPU would save money by not having to offer a separate
onboard GPU chipset.

But a dedicated GPU will always be needed for intense graphical applications
(especially games). It's just technological progress.


  #4  
Old September 2nd 06, 11:28 AM
[email protected]

Yousuf Khan wrote:
Kind of a long-winded way of saying GPUs and CPUs might be coming
together, don't you think?


Them coming together means nothing if there isn't bandwidth to back it
up.

  #5  
Old September 3rd 06, 02:40 AM
Yousuf Khan

HockeyTownUSA wrote:
But I doubt it will happen. Too much money to be made unless Intel buys
nVidia (since AMD already bought ATI - go figure, I was expecting AMD to buy
nVidia instead, since they make chipsets for AMD's CPUs). But either way,
selling GPUs and CPUs separately is a much bigger money maker than
integrating them together. I can see CPUs offering specialty areas on the
chip for basic GPU operations, thus eliminating a need for a separate
onboard GPU chip which would increase revenues. There is a market in the
business industry for basic GPU functions for desktop work, and relegating
that stuff into the CPU would save money by not having to offer a separate
onboard GPU chipset.


I doubt integrating a GPU into a CPU is going to be done for performance
reasons; it's going to be done for the same reason they integrate it
into chipsets -- to save money.

Yousuf Khan
  #6  
Old September 3rd 06, 08:27 PM
jon

Using ray tracing would be awesome. I use it all the time in LightWave
3D, but it would only be usable in gaming when computer CPUs reach 1000s
of GHz in speed, trust me on this one. One frame of a complex object can
take 30 seconds to 2 minutes or more to render. Intel will win the
battle in the end once their chips break the 3 GHz barrier and reach the
1000s. I heard rumors of them working on that right now.
  #7  
Old September 3rd 06, 09:07 PM
the dog from that film you saw


"jon" wrote in message
om...
Using ray tracing would be awesome. I use it all the time in LightWave
3D, but it would only be usable in gaming when computer CPUs reach 1000s
of GHz in speed, trust me on this one. One frame of a complex object can
take 30 seconds to 2 minutes or more to render. Intel will win the
battle in the end once their chips break the 3 GHz barrier and reach the
1000s. I heard rumors of them working on that right now.




I remember back in the days of the Amiga when a magazine I read would print
great ray-traced pictures, only to reveal that the Amiga had taken 7 days to
render that single frame.


--
Gareth.
A french man who wanted a castle threw his cat into a pond.
http://www.audioscrobbler.com/user/dsbmusic/


  #8  
Old September 4th 06, 07:29 AM
Yousuf Khan

jon wrote:
Using ray tracing would be awesome. I use it all the time in LightWave
3D, but it would only be usable in gaming when computer CPUs reach 1000s
of GHz in speed, trust me on this one. One frame of a complex object can
take 30 seconds to 2 minutes or more to render. Intel will win the
battle in the end once their chips break the 3 GHz barrier and reach the
1000s. I heard rumors of them working on that right now.


It's not going to take thousands of gigahertz to do it. The only reason
it takes that long on regular CPUs is because they aren't optimized for
it. A GPU can be designed that can do it in milliseconds, if there is
enough parallelism available. Ray-traced images using Lightwave have
nothing to do with these sorts of raytraces.

Yousuf Khan

  #9  
Old September 6th 06, 10:29 PM
[email protected]

Yousuf Khan wrote:
It's not going to take thousands of gigahertz to do it. The only reason
it takes that long on regular CPUs is because they aren't optimized for
it. A GPU can be designed that can do it in milliseconds, if there is
enough parallelism available. Ray-traced images using Lightwave have
nothing to do with these sorts of raytraces.


Raytracing is easy to parallelize, very easy in fact. The problem is
bandwidth. Whereas for a triangle rasterizer all the information that is
needed is relatively compact, for a ray tracer the FULL SCENE must be
accessible (random access!) and the data is hierarchical and needs
traversal to even find a hit, whereas for a scan converter/rasterizer
most of the data is nested twice at best (dependent texture reads
notwithstanding).

The processing (before DX10) is at the vertex and fragment level. A
fragment reads from basically only a few places (I'm putting this in
OpenGL terms; the principle is very similar, for obvious reasons, for
D3D):

- uniforms (both built-in and user defined)
- varyings (ditto)
- samplers (read: textures)

Those are the primary data sources; there are more. For example, alpha
blending uses the render target for read/write access. Other associated
buffers also come to mind, like the depth/stencil buffer.

The biggest overhead is when dependent texture reads are used. If the
coordinate is computed, the latency of the computation is easier to
hide. For a dependent texture read it is less easy to hide, as it is
order invariant: the actual result from the texture sampler unit must
arrive before it can be used as a texture coordinate, which doesn't
pipeline very well. Enough of trivialities.
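To illustrate that last point in plain C++ rather than GLSL: in the mock
fragment below, the first sampler fetch uses a coordinate that is already
known from a varying, while the second fetch's coordinate is produced by the
first fetch, which is the dependency that is hard to pipeline. The types and
names are made up for illustration only.

// Plain C++ mock of the data sources listed above (uniforms, varyings,
// samplers). The second fetch below is "dependent": its coordinate only
// exists once the first fetch has returned, so it cannot be issued early.
// Types and names are made up for illustration; this is not a real API.
#include <array>
#include <cstdio>

struct Texture {                          // stand-in for a sampler
    std::array<float, 16> texels{};
    float fetch(float u) const {          // pretend each fetch has long latency
        int i = static_cast<int>(u * 15.0f) & 15;
        return texels[i];
    }
};

struct Uniforms { float exposure; };      // constant for the whole draw call
struct Varyings { float u; };             // interpolated per fragment

static float shade_fragment(const Uniforms& un, const Varyings& va,
                            const Texture& base, const Texture& lut) {
    // Independent read: the coordinate va.u is known before any fetch issues,
    // so its latency is easy to hide behind other work.
    float albedo = base.fetch(va.u);

    // Dependent read: the coordinate is the result of the previous fetch, so
    // this one cannot even start until 'albedo' has arrived.
    float graded = lut.fetch(albedo);

    return graded * un.exposure;
}

int main() {
    Texture base, lut;
    for (int i = 0; i < 16; ++i) {
        base.texels[i] = i / 15.0f;
        lut.texels[i]  = 1.0f - i / 15.0f;
    }
    std::printf("fragment = %f\n", shade_fragment({1.5f}, {0.4f}, base, lut));
}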

A raytracer, however, must for each ray see the *whole* database at
once and *traverse it*; binary space partitioning is an often-used
technique to reduce the order of the complexity by a degree or two.
Octrees and their close cousins, KD-trees, are often-employed
techniques; there are more, but these still require systematic
traversal of the tree to finally get down to an amount of data small
enough that a brute-force linear search can be done.
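A minimal C++ sketch of that traversal pattern, with an assumed BVH-style node
layout rather than any particular octree or KD-tree implementation: the ray
walks nodes scattered across the whole scene's memory and only falls back to a
brute-force test over the handful of primitives in a leaf it actually reaches.

// Sketch of acceleration-structure traversal: boxes all over the scene are
// visited (the random-access pattern discussed above), and only a small leaf
// is searched linearly. Layout and names are illustrative assumptions.
#include <algorithm>
#include <cstdio>
#include <memory>
#include <vector>

struct Ray  { float o[3], d[3]; };
struct Box  { float lo[3], hi[3]; };
struct Prim { int id; Box bounds; };

struct Node {                               // inner node: two children;
    Box bounds;                             // leaf: a handful of primitives
    std::unique_ptr<Node> left, right;
    std::vector<Prim> prims;
};

static bool hit_box(const Ray& r, const Box& b) {
    float tmin = -1e30f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {           // standard slab test per axis
        float inv = 1.0f / r.d[a];
        float t0 = (b.lo[a] - r.o[a]) * inv;
        float t1 = (b.hi[a] - r.o[a]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmax >= tmin && tmax >= 0.0f;
}

// Recursive descent: nodes live anywhere in memory, so each step is a jump.
static int traverse(const Node& n, const Ray& r) {
    if (!hit_box(r, n.bounds)) return -1;
    if (!n.left && !n.right) {
        for (const Prim& p : n.prims)       // leaf: brute-force over a few prims
            if (hit_box(r, p.bounds)) return p.id;
        return -1;
    }
    if (n.left)  { int id = traverse(*n.left, r);  if (id >= 0) return id; }
    if (n.right) { int id = traverse(*n.right, r); if (id >= 0) return id; }
    return -1;
}

int main() {
    Node root;
    root.bounds = {{-10, -10, -10}, {10, 10, 10}};
    root.left  = std::make_unique<Node>();
    root.left->bounds = {{-10, -10, -10}, {0, 10, 10}};
    root.left->prims  = {{7, {{-5, -1, -1}, {-3, 1, 1}}}};
    root.right = std::make_unique<Node>();
    root.right->bounds = {{0, -10, -10}, {10, 10, 10}};
    root.right->prims  = {{9, {{3, -1, -1}, {5, 1, 1}}}};

    Ray r = {{-9, 0, 0}, {1, 0, 0}};        // shoots along +x, should hit id 7
    std::printf("hit primitive %d\n", traverse(root, r));
}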

This involves a lot of maths; no problem there, this IS possible to
distribute. A design that, I think, would use hardware better than a
straightforward C-to-VHDL-like translation would be to have units that
do the computation common to raytracing: a ray-to-primitive
intersection unit, which could be chopped again into smaller pieces
like a barycentrics computation unit, a ray-to-plane solver, and so on,
so that each unit can be re-used as much as possible in the different
subtasks involved in this whole debacle.
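As a software stand-in for such a ray-to-primitive intersection unit, the
textbook Moller-Trumbore ray/triangle test below computes exactly the pieces
named above, a ray-to-plane solve plus the barycentric coordinates; the
hardware partitioning is this post's speculation, the code is just the usual
scalar form.

// Moller-Trumbore ray/triangle intersection: a ray-to-plane solve plus the
// barycentric coordinates (u, v). Software sketch only; no claim about how a
// hardware unit would actually be partitioned.
#include <cmath>
#include <cstdio>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) { return {a.y * b.z - a.z * b.y,
                                      a.z * b.x - a.x * b.z,
                                      a.x * b.y - a.y * b.x}; }
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true on a hit and fills t (distance) and the barycentrics u, v.
static bool ray_tri(V3 orig, V3 dir, V3 a, V3 b, V3 c,
                    double& t, double& u, double& v) {
    V3 e1 = sub(b, a), e2 = sub(c, a);
    V3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;   // ray parallel to the plane
    double inv = 1.0 / det;
    V3 s = sub(orig, a);
    u = dot(s, p) * inv;                        // first barycentric
    if (u < 0.0 || u > 1.0) return false;
    V3 q = cross(s, e1);
    v = dot(dir, q) * inv;                      // second barycentric
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;                       // ray-to-plane distance
    return t > 0.0;
}

int main() {
    double t, u, v;
    bool hit = ray_tri({0, 0, 5}, {0, 0, -1},
                       {-1, -1, 0}, {1, -1, 0}, {0, 1, 0}, t, u, v);
    std::printf("hit=%d t=%.2f u=%.2f v=%.2f\n", hit, t, u, v);
}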

Then see what the loads on each unit are, and see who's sleeping in the
class: the sleepers need more work, so add more of the units that are
under the heaviest load, in other words the ones that are the bottleneck.

The main trick in this sort of "stuff" is to avoid waste when possible;
you don't want to have 200K gates there doing nothing most of the time.
I think a design like this can be made practical, but the problem would
be that it would involve a rather large amount of random accessing into
memory. Random in ways that might be a challenge to cache efficiently
(think of texture sampler reads: caching reads from textures has
essentially been a solved problem for ages; first tiling, then
compressing the tiles to reduce even the RAM-to-cache fetch bandwidth,
and what not).

I could easily be wrong, but my first impression is that the caching and
memory bandwidth management would be the biggest technical issue for
this. The second issue is that there is no infrastructure in place to
make money with this on a large scale, as there is for the triangle
scan converters. Even that took a while to build up momentum, while it
was the *obvious* way to go.

Wasn't there a raytracing chip a few years ago..? I don't ever recall
hearing what happened with that.

  #10  
Old September 10th 06, 03:19 AM
jon

Yousuf Khan wrote:
jon wrote:

Using ray tracing would be awesome. I use it all the time in LightWave
3D, but it would only be usable in gaming when computer CPUs reach 1000s
of GHz in speed, trust me on this one. One frame of a complex object can
take 30 seconds to 2 minutes or more to render. Intel will win the
battle in the end once their chips break the 3 GHz barrier and reach the
1000s. I heard rumors of them working on that right now.



It's not going to take thousands of gigahertz to do it. The only reason
it takes that long on regular CPUs is because they aren't optimized for
it. A GPU can be designed that can do it in milliseconds, if there is
enough parallelism available. Ray-traced images using Lightwave have
nothing to do with these sorts of raytraces.

Yousuf Khan

A GPU is a video card's CPU. The faster the CPU, the faster you raytrace,
simple as that. A raytrace is a raytrace; it's an old technology not
used in today's video games, but in Hollywood effects, in render farms,
to make movies.
 



