
View Full Version : GPU Supercomputer - 300 TFLOPs


AirRaid
April 22nd 08, 11:00 PM
(Google Translated)

In France, the Grand Équipement National de Calcul Intensif (GENCI) is
making a splash with its first supercomputer, one of the first machines
of its kind to use GPUs for computation.

According to our information, the 1,068 processors are indeed eight-core
Nehalems, and the 48 GPU modules are NVIDIA Tesla solutions. These will
be a new generation of Tesla, very likely based on the GT200 chip,
perhaps even with several chips per module (we would guess two, as in
NVIDIA's current Tesla solutions).

In this machine, the CPUs together are expected to deliver a theoretical
103 Tflops, against 192 Tflops for the GPUs. With 96 GPUs against
1,068 CPUs, the GPU's dominance over the CPU in intensive computing is
already clear, thanks to its many parallel stream processors.

http://www.google.com/translate?u=http%3A%2F%2Fwww.pcinpact.com%2Factu%2Fnews%2F43165-premier-supercalculateur-GPU-France-Tesla.htm&langpair=fr%7Cen&hl=en&ie=UTF8
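For what it's worth, the article's aggregate figures can be broken down
per compute unit. A quick back-of-the-envelope sketch in Python; the
per-core and per-GPU breakdowns are my own arithmetic, not from the
article, and assume the stated 96 GPUs really are 48 modules x 2:

```python
# Sanity-check the article's theoretical peak figures.
# Aggregate numbers are from the article; per-unit breakdowns are mine.

cpu_count = 1068        # eight-core Nehalem processors
cores_per_cpu = 8
cpu_peak_tflops = 103   # theoretical aggregate CPU peak, per the article

gpus = 96               # 48 Tesla modules, assumed 2 GPUs each
gpu_peak_tflops = 192   # theoretical aggregate GPU peak, per the article

per_core_gflops = cpu_peak_tflops * 1e3 / (cpu_count * cores_per_cpu)
per_gpu_tflops = gpu_peak_tflops / gpus

print(f"~{per_core_gflops:.1f} Gflops per CPU core")  # ~12.1 Gflops per CPU core
print(f"~{per_gpu_tflops:.1f} Tflops per GPU")        # ~2.0 Tflops per GPU
```

The ~12 Gflops/core figure is at least plausible for the CPU side, and
the implied 2 Tflops/GPU gives a rough idea of the per-module claim
being made for the new Tesla parts.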

whygee
April 23rd 08, 12:10 AM
AirRaid wrote:
> [quoted article snipped]

1) I was indeed invited to the press conference and can clarify a few details.
Oh, and I have taken a lot of interesting pictures :-)
[to be published in an upcoming French Linux Magazine]

2) the translation is awful.

3) what a dirty crosspost.

4) see my post on comp.arch dated 19/04/2008
titled "The next french "supercomputer" will have both CPUs and GPUs"

5) let's continue this discussion on comp.arch?

regards,
yg from f-cpu.org or ygdes.com

Phil Weldon
April 23rd 08, 02:09 AM
'AirRaid' wrote, in part:
> [quoted article snipped]
_____

This and similar posts are a good argument for filtering out Google Groups
postings.

Why do you continue to post this kind of article without proper attribution?
And crosspost to boot? And never have anything of your own to contribute?
And, for this particular post, use a perfectly terrible translation?
Considering you had to use a Google translation, how did you ever decide
whether the original article was WORTH posting?

If you can't add value, don't post. Especially don't crosspost.

Phil Weldon

"AirRaid" > wrote in message
...
>
> (Google Translated)
>
> In France, the Grand National Equipment Intensive Computing (Genci)
> striking hard since his first supercomputer, one of the first machine
> of its kind to use the GPU processing for calculation.
>
> According to our information, the 1068 processors are indeed Nehalem
> eight of hearts, and 48 modules are GPU solutions from NVIDIA Tesla.
> This will be in addition to a new generation of Tesla, with a very
> high probability of the GT200 chip controllers, and even several for
> each module (we imagine two, as in the current solutions Tesla
> NVIDIA).
>
> In this machine, all the CPU is supposed to produce a powerful
> theoretical 103 Tflops, up from 192 Tflops for GPU. With 96 against
> GPU 1068 CPU, it is already clear domination of the GPU on the CPU
> intensive computing, thanks to its multiple parallel processors flow.
>
> http://www.google.com/translate?u=http%3A%2F%2Fwww.pcinpact.com%2Factu%2 Fnews%2F43165-premier-supercalculateur-GPU-France-Tesla.htm&langpair=fr%7Cen&hl=en&ie=UTF8

Phil Weldon
April 23rd 08, 02:12 AM
'whygee' wrote, in part:
> 1) I was indeed invited at the press conference and could clarify a few
> details.
> Oh, and i have taken a lot of interesting pictures :-)
> [to be published in an upcoming french Linux Magazine]
>
> 2) the translation is awful.
>
> 3) what a dirty crosspost.
>
> 4) see my post on comp.arch dated 19/04/2008
> titled "The next french "supercomputer" will have both CPUs and GPUs"
_____

Hey, it could be worse - it could have been a post from 'Skybuck' (if you do
not recognize the sig 'Skybuck', consider yourself fortunate.)

Phil Weldon


"whygee" > wrote in message
...
> AirRaid wrote:
>> (Google Translated)
>>
>> In France, the Grand National Equipment Intensive Computing (Genci)
>> striking hard since his first supercomputer, one of the first machine
>> of its kind to use the GPU processing for calculation.
>>
>> According to our information, the 1068 processors are indeed Nehalem
>> eight of hearts, and 48 modules are GPU solutions from NVIDIA Tesla.
>> This will be in addition to a new generation of Tesla, with a very
>> high probability of the GT200 chip controllers, and even several for
>> each module (we imagine two, as in the current solutions Tesla
>> NVIDIA).
>>
>> In this machine, all the CPU is supposed to produce a powerful
>> theoretical 103 Tflops, up from 192 Tflops for GPU. With 96 against
>> GPU 1068 CPU, it is already clear domination of the GPU on the CPU
>> intensive computing, thanks to its multiple parallel processors flow.
>>
>> http://www.google.com/translate?u=http%3A%2F%2Fwww.pcinpact.com%2Factu%2 Fnews%2F43165-premier-supercalculateur-GPU-France-Tesla.htm&langpair=fr%7Cen&hl=en&ie=UTF8
>
> 1) I was indeed invited at the press conference and could clarify a few
> details.
> Oh, and i have taken a lot of interesting pictures :-)
> [to be published in an upcoming french Linux Magazine]
>
> 2) the translation is awful.
>
> 3) what a dirty crosspost.
>
> 4) see my post on comp.arch dated 19/04/2008
> titled "The next french "supercomputer" will have both CPUs and GPUs"
>
> 5) let's continue this discussion on comp.arch ?
>
> regards,
> yg from f-cpu.org or ygdes.com

Jack Klein
April 23rd 08, 02:31 AM
On Tue, 22 Apr 2008 21:09:15 -0400, "Phil Weldon"
> wrote in comp.arch.embedded:

[snip spam]

> This and similar posts are a good argument for filtering out Googlegroups
> postings.

Yes, and it's an even better argument for NOT RESPONDING to usenet
spam, and ESPECIALLY NOT QUOTING THE ENTIRE SPAM PAYLOAD in all of the
cross-posted groups.

Spammers rarely read the groups they abuse, and even if this one does,
he/she/it will certainly ignore your rant. In the meantime you have
DOUBLED his exposure.

In fact, this idiot was caught by my filters, and I would never have
seen his drivel at all had you not replied and quoted it.

Phil Weldon
April 23rd 08, 02:46 AM
'Jack Klein' wrote, in part:
> Yes, and it's an even better argument for NOT RESPONDING to usenet
> spam, and ESPECIALLY NOT QUOTING THE ENTIRE SPAM PAYLOAD in all of the
> cross-posted groups.
_____

'AirRaid' is not a spammer, and the post is not spam. The transgression is
NOT that he posted the URL of the poorly translated webpage (see my original
post.)

I quoted because it illustrates the problem 'AirRaid' presents. And how is
THAT different from your post that lengthens the thread? Sometimes you just
have to describe a problem to get it fixed.

Phil Weldon

"Jack Klein" > wrote in message
...
> On Tue, 22 Apr 2008 21:09:15 -0400, "Phil Weldon"
> > wrote in comp.arch.embedded:
>
> [snip spam]
>
>> This and similar posts are a good argument for filtering out Googlegroups
>> postings.
>
> Yes, and it's an even better argument for NOT RESPONDING to usenet
> spam, and ESPECIALLY NOT QUOTING THE ENTIRE SPAM PAYLOAD in all of the
> cross-posted groups.
>
> Spammers rarely read the groups they abuse, and even if this one does,
> he/she/it will certainly ignore your rant. In the meantime you have
> DOUBLED his exposure.
>
> In fact, this idiot was caught by my filters, and I would never have
> seen his drivel at all had you not replied and quoted it.