GTC 2015 Review by Skybuck.
Hello,
I have kind of made it a tradition to review the GTC opening keynote by NVIDIA's CEO. (GTC = GPU Technology Conference... or is it still Graphics? I'm not sure about the G anymore... kind of odd, now that it's all about CUDA.) Anyway...

I can keep this review short: the only topic covered was "deep learning". Apparently NVIDIA is really happy about "neurons" and "deep learning", probably because they execute well on their GPUs... though I remain a little skeptical. Neurons can be thought of as "pixels"... another nice particle for their GPUs to sink their teeth into. Then again, Intel and other chip manufacturers might integrate neuron circuitry into their chips in the near future, which could take away NVIDIA's possible advantage. These neural nets are very old ideas, which was also explained in the presentation... dating from 1995... so it's not really new; it's just that GPUs can now be applied to them.

So basically nothing really new was presented at GTC 2015... except a new processor/chip, which had also been somewhat announced earlier: the Titan X. What is amazing about the Titan X is its almost complete lack of double precision performance. Who really cares about single precision performance? Not me... it's like 32-bit technology... year-2000 technology. We are quickly approaching the 64-bit era, so this chip is old stuff... it's not prepared for the future. There was some hint towards a Titan Z, which supposedly has better double precision performance... but no numbers were given. Single precision might be fine for neurons... or maybe not: with the sheer number of neuron connections, numerical drift might become a problem even there (see the small drift demo further down). Perhaps single precision is also fine for graphics... but for serious/future applications this chip will not do. So my prediction: if NVIDIA keeps their chips at 32 bits of floating point precision, they will very quickly find themselves without customers in the near future... at least for general computations. All that business will go either to more expensive chips or to chips from competitors. Perhaps NVIDIA will wise up next year ;)

Anyway... some of the deep learning stuff was nice. It requires huge amounts of data though, which makes it difficult to apply in practice. It's more of a data-driven thing... a data collection thing... than anything else... plus some compute, of course. I also consider it a subject on top of computer/processing technology... there are plenty of other topics, but none of them were covered, which made it a bit boring.

I would advise NVIDIA to spend more time on caches... more resources on caches... perhaps offer a small cache per CUDA core, so that it can do more data processing without having to access main memory. I am also not yet sure how fast the "shared" data cache is... I would worry a bit that the "shared" cache might suffer a lot of thrashing caused by other CUDA threads and such... not sure about that. Perhaps more of next year's presentation should be about how to actually program CUDA chips: how to deal with caches, how to prevent cache thrashing, and basically how to get the most out of the chips. Perhaps dive into some algorithms... or just practical stuff for programmers...
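To make that per-block cache point a bit more concrete, here is a minimal sketch of the kind of thing I mean (my own toy example, not anything from the presentation; all names are mine): each block stages a tile of data into the on-chip __shared__ memory once, and every thread then reuses neighbouring values from that tile instead of re-reading them from global memory.

[code]
// Each block copies its tile (plus one halo element on each side) into
// __shared__ memory once, then computes a 3-point average from the tile.
__global__ void blur3_shared(const float *in, float *out, int n)
{
    extern __shared__ float tile[];                  // blockDim.x + 2 floats per block

    int gid = blockIdx.x * blockDim.x + threadIdx.x; // global element index
    int lid = threadIdx.x + 1;                       // local index, shifted for the halo

    if (gid < n)
        tile[lid] = in[gid];                         // one global read per thread

    if (threadIdx.x == 0)                            // left halo of the tile
        tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1 || gid == n - 1)   // right halo of the tile
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;

    __syncthreads();                                 // wait until the tile is filled

    if (gid < n)                                     // three reads, all from shared memory
        out[gid] = (tile[lid - 1] + tile[lid] + tile[lid + 1]) / 3.0f;
}

// Launch sketch:
// blur3_shared<<<blocks, threads, (threads + 2) * sizeof(float)>>>(d_in, d_out, n);
[/code]

Without the shared tile every thread would read three values from global memory; with it, each value is fetched from global memory only once per block. Whether the hardware's own caches already handle this well enough is exactly the kind of thing I would like NVIDIA to spend presentation time on.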
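And going back to the precision worry from earlier in the post, here is a tiny host-side demo of what I mean by numerical drift (again my own sketch, compiles with nvcc or any C++ compiler). It adds the same small contribution a hundred million times, once into a 32-bit accumulator and once into a 64-bit one; the float sum stalls once the increment drops below half an ulp of the running total, while the double sum keeps going.

[code]
#include <cstdio>

int main()
{
    const long  n = 100 * 1000 * 1000;   // 1e8 tiny contributions
    const float w = 0.0001f;             // e.g. one small connection weight

    float  sumF = 0.0f;                  // 32-bit accumulator
    double sumD = 0.0;                   // 64-bit accumulator

    for (long i = 0; i < n; ++i) {
        sumF += w;
        sumD += w;
    }

    printf("float  sum: %f\n", sumF);    // falls far short of the expected ~10000
    printf("double sum: %f\n", sumD);    // stays close to the expected ~10000
    return 0;
}
[/code]

Of course a real network would not accumulate everything in one long chain like this, but with huge numbers of connections feeding into each sum the same effect can creep in... which is why the weak double precision on the Titan X worries me.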
Shed some more light on performance considerations for algorithms, etc... I think that's what might interest programmers. And of course also new language support... maybe new language features... or compiler speed, compiler improvements, debugging improvements, profiling improvements. Keep it very technical and very NVIDIA-focused.

Again this year it was all about "you". I don't really care about "you" being other companies... as a programmer I am more interested in what NVIDIA has to offer for me, how they can help me... and I think that goes for most programmers. And yes, there will be noobs watching and experts watching, so there should be something in it for everybody... or perhaps just assume that people watched the previous GTCs and expand on that: how things improved. Noobs can always watch the older recordings and get up to speed! ;) Though the deep learning was somewhat interesting.

So I would suggest dividing the presentation time as follows:
1. Time for a new chip introduction.
2. Time for CUDA improvements: compiler speed, bugs, profiler, debugging, PTX improvements, compatibility.
3. Time for how to program NVIDIA chips best: algorithm improvement suggestions, optimizations, do's and don'ts, cache usage, pitfalls, etc.
4. And only lastly... interesting applications of GPU/CUDA technology and the companies using it.

I think this is what developers would like best. It doesn't have to be super detailed... just a high-level overview, and where time allows, dive deeper into it... and where time does not allow, link towards other presentations.

Bye for now,
Skybuck :)