  #8  
Old May 25th 18, 07:26 PM posted to alt.comp.hardware.pc-homebuilt,alt.comp.hardware
Rene Lamontagne
Intel onboard GPU conflicting with CPU power

On 05/25/2018 12:06 PM, Jimmy Wilkinson Knife wrote:
On Fri, 25 May 2018 17:46:49 +0100, Paul wrote:

Jimmy Wilkinson Knife wrote:
On Fri, 25 May 2018 16:55:29 +0100, John McGaw wrote:

On 5/25/2018 8:48 AM, Jimmy Wilkinson Knife wrote:
Is it true that if you max out the GPU part of an Intel processor at the
same time as all the normal cores, it can't do them all at once? I run
BOINC, which can do calculations on both at once. And if I use all the
CPU cores, the GPU part goes at about a fifth of the speed. I guess the
only other place this could happen is a game that multithreads on all
the cores and also uses the graphics.

A question probably better asked in a different forum. No doubt someone
here will have a definitive answer:

https://boinc.berkeley.edu/dev/forum_index.php

I asked there too. The consensus appears to be, with most projects on
most CPUs, that an onboard GPU is faster (5 times faster in my case)
than the one core you have to tell BOINC to stop using, so it's worth
using it. If I don't tell it to release a core, the GPU part is
throttled severely.

But I asked here because I wanted to know why it can't run everything at
once (it's not overheating or anywhere near it) - a Google search says
throttling should only happen when it's too hot.

And this is the case on a couple of computers - i5-3570K and i5-8600K.
I didn't bother testing the old Celerons.

I run BOINC on multiple machines and always understood that it would
only
use external (non-cpu) graphics hardware so I can't contribute anything
from personal experience.

It depends on the project. They all run on CPUs. Several run on Nvidia
cards. Several others run on AMD cards (or both). Only three have
Intel graphics support - Einstein, SETI, and I think Collatz.


Modern processors have power limiting designed into VCore. There
could be a signal coming from VCore to the processor or PCH,
indicating the power status of VCore.

Any resource used inside the CPU counts towards power usage,
and eventually the overall power "bumps against TDP". If you
stop railing the GPU portion, it leaves more headroom for
the CPU cores.

Check the BIOS, to see if there is a setting to
disable the power limiter.
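As a toy illustration of the budget sharing Paul describes - a fixed
package power limit split between the cores and the iGPU - here is a
minimal Python sketch. All the wattage figures are invented for
illustration; they are not real Intel numbers.

```python
# Toy model of a shared package power budget (TDP): when every core is
# busy, less of the budget is left for the integrated GPU, so it slows
# down even though nothing is overheating. All numbers are invented.

TDP_W = 65.0        # assumed package power limit
CORE_FULL_W = 12.0  # assumed per-core draw at full load
GPU_FULL_W = 20.0   # assumed iGPU draw when unthrottled

def gpu_budget_w(active_cores: int) -> float:
    """Power left over for the iGPU after the busy cores take their share."""
    return max(0.0, TDP_W - active_cores * CORE_FULL_W)

def gpu_speed_fraction(active_cores: int) -> float:
    """Crude speed estimate: iGPU throughput scales with its power share."""
    return min(1.0, gpu_budget_w(active_cores) / GPU_FULL_W)

print(gpu_speed_fraction(3))  # 1.0  -> 65 - 3*12 = 29 W, full 20 W available
print(gpu_speed_fraction(4))  # 0.85 -> 65 - 4*12 = 17 W, iGPU throttles
```

Under these made-up numbers, freeing one core (running 3 CPU tasks
instead of 4) restores the iGPU's full budget, which matches the
behaviour described in the thread.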


Not sure I want to risk that!

Stick a finger on VCore and see if it's getting too hot.


I've done that (and software monitoring) and the temperature is fine,
but even if it doesn't overheat, I wouldn't like to allow more power
than it's designed to get. A tiny portion of the CPU might overload and
melt somewhere without the whole thing being too hot.

In the old days, a MOSFET could go into thermal runaway
(channel resistance goes up, I^2R goes up, MOSFET gets hotter,
and so on). With the new design concepts, they seem to be
happy running the MOSFETs at 65C when the CPU is busy.
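The runaway loop Paul mentions can be sketched numerically. The
current, on-resistance, thermal resistance, and temperature coefficient
below are assumed ballpark values, not data for any particular part.

```python
# I^2*R self-heating in one VRM MOSFET, iterated to a steady state.
# R_on rises with temperature (~0.4 %/degC assumed); the extra heat
# raises the temperature further. With these numbers the loop converges
# rather than running away; a higher THETA or TC could diverge.

I_A = 25.0       # assumed phase current, amps
R25_OHM = 0.004  # assumed on-resistance at 25 degC (4 milliohm)
THETA = 20.0     # assumed junction-to-ambient thermal resistance, degC/W
TC = 0.004       # assumed R_on temperature coefficient per degC

t = 25.0
for _ in range(50):
    r = R25_OHM * (1 + TC * (t - 25.0))  # hotter channel -> higher R_on
    t = 25.0 + THETA * I_A**2 * r        # higher I^2*R -> hotter channel

print(round(t, 1))  # settles at 87.5 degC
```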

CPUs didn't always have a TDP limiter. In the past, VCore
may have had overcurrent protection (OCP), but it should be
set high enough that it's not triggered in normal usage.
If VCore was rated at 100A, you'd expect OCP to be
set at 130A or more. It's a good idea to have some
protection there, as the area around the CPU socket
can become charred by a plane-to-plane short circuit
without it. There is one picture out there of a
motherboard damaged that way.


It amazes me how they can get that much current in through thin
motherboard tracks - consider how thick the cable supplying your
electric shower or cooker is, for example, and that carries only a
third as much current.
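A rough cross-section comparison suggests why it works: the VCore
current travels through wide copper planes, often on several layers in
parallel, and only for a few centimetres. The foil thickness, plane
width, and layer count below are assumptions for illustration, not
measurements of any board.

```python
# Effective copper cross-section of a VCore power plane vs. a cable.
# A 6 mm^2 cooker cable carries ~32 A over many metres; the figures
# below are illustrative assumptions.

FOIL_MM = 0.070  # ~2 oz/ft^2 copper foil thickness, mm
WIDTH_MM = 40.0  # assumed width of the VCore plane, mm
LAYERS = 2       # assumed layers sharing the current

area_mm2 = FOIL_MM * WIDTH_MM * LAYERS
print(round(area_mm2, 2))  # 5.6 mm^2 - comparable to the cooker cable
# The run is only a few cm, so resistive loss and heating stay small
# even at a higher current density than a long bundled cable allows.
```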


YIKES. Someone Protect us from Electric showers, :-)


Rene