GTC 2015 Review by Skybuck.



 
 
March 18th 15, 08:15 PM, posted to alt.comp.periphs.videocards.nvidia
Skybuck Flying

Hello,

I have kind of made it a tradition to review the GTC opening keynote by NVIDIA's CEO.

(GTC = GPU Technology Conference... so the G stands for GPU, not graphics. Fitting, now that it's all about CUDA.)

Anyway...

I can keep this review short... the only topic really covered was "deep learning".

Apparently NVIDIA is really happy about "neurons" and "deep learning", because they can probably execute well on its GPUs...

Though I do remain a little bit skeptical...

So neurons can be thought of as "pixels"... another nice data-parallel unit for NVIDIA to sink its GPUs' teeth into...

Though I think Intel and other chip manufacturers might integrate neuron circuitry into their chips in the near future, which might take away NVIDIA's current advantage.

These neural nets are a very old idea... as the presentation itself explained, the work shown dates back to 1995... so it's not really new... it's just that GPUs can now be applied to them.
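
To make that concrete: a neural-network layer is basically one big matrix-vector multiply followed by a nonlinearity... exactly the kind of embarrassingly parallel arithmetic GPUs are built for. A minimal CUDA sketch of one dense layer (my own illustration... the names and sizes are made up, nothing from the keynote):

#include <cstdio>
#include <cuda_runtime.h>

// One thread per output neuron: y[i] = tanh( sum_j W[i][j]*x[j] + b[i] ).
__global__ void denseLayer(const float *W, const float *x, const float *b,
                           float *y, int inputs, int outputs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= outputs) return;

    float sum = b[i];
    for (int j = 0; j < inputs; ++j)
        sum += W[i * inputs + j] * x[j];   // row-major weight matrix
    y[i] = tanhf(sum);                     // simple activation
}

int main()
{
    const int inputs = 1024, outputs = 256;
    float *W, *x, *b, *y;
    cudaMallocManaged(&W, sizeof(float) * outputs * inputs);
    cudaMallocManaged(&x, sizeof(float) * inputs);
    cudaMallocManaged(&b, sizeof(float) * outputs);
    cudaMallocManaged(&y, sizeof(float) * outputs);
    for (int i = 0; i < outputs * inputs; ++i) W[i] = 0.001f;
    for (int j = 0; j < inputs; ++j)           x[j] = 1.0f;
    for (int i = 0; i < outputs; ++i)          b[i] = 0.0f;

    denseLayer<<<(outputs + 255) / 256, 256>>>(W, x, b, y, inputs, outputs);
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]);   // tanh(1.024), about 0.77
    cudaFree(W); cudaFree(x); cudaFree(b); cudaFree(y);
    return 0;
}

Every output neuron is independent of the others, so the GPU can grind through all of them at once... that is the whole attraction.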

So basically nothing really new was presented during this GTC 2015... except a new processor/chip, which had also been somewhat announced earlier... the Titan X.

What is amazing about the Titan X is its near-complete lack of double-precision performance (on this Maxwell chip, FP64 runs at roughly 1/32 of the FP32 rate).

Who really cares about single-precision performance? Not me... it's like 32-bit technology... year-2000 technology.

We are now very quickly approaching the 64-bit era... so this chip is old stuff... it's not prepared for the future.

There was some hint toward a Titan Z, which supposedly has better double-precision performance... but no numbers were given...

Single precision might be fine for neurons... or maybe not... even there, with the sheer number of neuron connections, numerical drift might become a problem...
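
That drift is easy to demonstrate on the CPU before it ever reaches a GPU: accumulate enough small contributions in float and the running sum simply stops absorbing them, while double carries on. A tiny sketch (the count of 100 million is arbitrary, just enough to show the effect):

#include <cstdio>

int main()
{
    const long n = 100000000;   // 100 million small contributions
    float  f = 0.0f;
    double d = 0.0;
    for (long i = 0; i < n; ++i) {
        f += 1.0f;   // once f hits 2^24 = 16777216, adding 1.0f changes nothing
        d += 1.0;    // double keeps absorbing increments up to 2^53
    }
    printf("float  sum: %.1f\n", f);   // prints 16777216.0, not 100000000.0
    printf("double sum: %.1f\n", d);   // prints 100000000.0
    return 0;
}

A float has a 24-bit significand, so at 16777216 the spacing between representable values becomes 2.0 and each +1.0 rounds away. A big accumulation of neuron activations or gradients can hit the same wall.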

Perhaps single precision is also nice for graphics....

But for serious/future applications this chip will not do.

Therefore my prediction is this:

If NVIDIA keeps its chips at 32 bits of floating-point precision... it will very quickly find itself without customers in the near future... at least for general computation.

All that business will go either to more expensive chips... or to chips from competitors.

Perhaps NVIDIA will wise up next year.

Anyway... some of the deep-learning material was nice... it requires huge amounts of data, though... which makes it difficult to apply in practice...

It's more of a data-driven thing... a data-collection thing... than anything else... plus some compute, of course...

I also consider it a subject that sits on top of computer/processing technology... there are plenty of other topics... but none of them were covered, which made the keynote a bit boring...

I would advise NVIDIA to spend more time on caches... more resources on caches... perhaps offer a small dedicated cache per CUDA core... so that it can do more data processing without having to access main memory... I am also not yet sure how fast the "shared" data cache is...

I would worry a bit that the "shared" cache might see a lot of cache thrashing from other CUDA threads and such... but I'm not sure about that...
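
For the record, CUDA does already expose something along these lines: each multiprocessor has a small on-chip shared memory that a thread block can use as a software-managed cache, staging a tile of global memory once and then reusing it at much lower latency. A sketch of the usual pattern, here a 1-D moving sum (tile size, radius, and names are my own choices):

#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256
#define RADIUS 3

// Radius-3 moving sum: each block stages TILE + 2*RADIUS inputs into on-chip
// shared memory once, then every thread reads its 7 neighbours from there
// instead of going back to global memory 7 times.
__global__ void movingSum(const float *in, float *out, int n)
{
    __shared__ float tile[TILE + 2 * RADIUS];

    int g = blockIdx.x * TILE + threadIdx.x;   // global element index
    int l = threadIdx.x + RADIUS;              // local index inside the tile

    tile[l] = (g < n) ? in[g] : 0.0f;          // each thread stages one element
    if (threadIdx.x < RADIUS) {                // first RADIUS threads load halos
        int left  = g - RADIUS;
        int right = g + TILE;
        tile[l - RADIUS] = (left  >= 0) ? in[left]  : 0.0f;
        tile[l + TILE]   = (right <  n) ? in[right] : 0.0f;
    }
    __syncthreads();                           // wait until the tile is full

    if (g < n) {
        float sum = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; ++k)
            sum += tile[l + k];                // 7 reads, all from on-chip memory
        out[g] = sum;
    }
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in,  sizeof(float) * n);
    cudaMallocManaged(&out, sizeof(float) * n);
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    movingSum<<<n / TILE, TILE>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[1000] = %.1f\n", out[1000]);   // 7.0: seven neighbours of 1.0
    cudaFree(in); cudaFree(out);
    return 0;
}

Whether that counts as "a small cache per CUDA core" is debatable... it is shared per block, not per core, which is exactly where my thrashing worry comes from.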

Perhaps more of next year's presentation should cover how to actually program CUDA chips... how to deal with caches... how to prevent cache thrashing... and basically how to get the most out of the CUDA chips...

Perhaps dive into some algorithms... or just practical stuff for
programmers...

Shed some more light on performance considerations for algorithms, etc...

I think that's what might interest some programmers...

And of course new language support... and maybe new language features...

Or improved compiler speed...

Compiler improvements... debugging improvements... profiling improvements...

Keep it very technical... and very nvidia focused...

Again this year... the keynote was all about "you".

I don't really care about "you" when that means other companies...

As a programmer, I am more interested in what NVIDIA has to offer me... how they can help me...

And I think that goes for most programmers...

And yes... there will be noobs watching... experts watching... there should
be something in it for everybody...

Or perhaps just assume that people watched the previous GTCs... and expand on that... show how things have improved...

Noobs can always watch older recordings... and get up to speed !

Though the deep learning was somewhat interesting.

So I would suggest to divide presentation time as follows:

1. Time for a new chip introduction.
2. Time for CUDA improvements: compiler speed, bugs, profiler, debugging, PTX improvements, compatibility.
3. Time for how best to program NVIDIA chips: algorithm improvement suggestions, optimizations, do's and don'ts, cache usage, pitfalls, etc.
4. And only lastly... interesting applications of GPU/CUDA technology and the companies using it.

I think this is what developers would like best...

It doesn't have to be super detailed... just a high-level overview... and where time allows, dive deeper into it...

And where time does not allow... link towards other presentations...

Bye for now,
Skybuck

 



