Phil Weldon - December 10th 08, 03:07 PM
Posted to alt.comp.hardware.overclocking, alt.electronics
Subject: Non-deterministic CPU's

'omattos' wrote, in part:

The first rule of digital electronics is that circuits should be predictable -
i.e. when the same data is fed into a circuit twice, the output should
be the same every time.

I'm trying to do a thought experiment to see what could happen if this
restriction were relaxed for CPUs.

_____

Predictable performance is not the same as perfect performance. The outcome
of many spins of a roulette wheel is predictable; the outcome of any one
spin is not. The smaller components become physically, and the fewer
electrons involved in a calculation, the further the chimera of '100%
accuracy' recedes: as the numbers shrink, the output becomes more granular.
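A toy simulation makes the roulette-wheel point concrete (a minimal Python
sketch for illustration, not something from the original posts): any single
spin is unpredictable, but the average over many spins converges to a
predictable value.

import random

random.seed(0)

def spin():
    """One spin of a European roulette wheel: pockets 0-36."""
    return random.randint(0, 36)

# Any single spin is unpredictable...
print("one spin:", spin())

# ...but the mean over many spins is predictable (it approaches 18.0).
for n in (100, 10_000, 1_000_000):
    mean = sum(spin() for _ in range(n)) / n
    print(f"mean of {n:>9} spins: {mean:.3f}")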

Think a little more before engaging in your 'thought experiment'.



Redundant computer systems already exist for critical real-time
applications: for example, three systems perform the same calculation and,
if the results differ, the majority result is taken as correct. In less
time-critical applications the same calculation can be run three times
serially. This is most useful in an environment where random events may
corrupt calculation outputs; high-energy ionizing radiation, for example.
At the component level, parity and error-correcting codes in RAM and caches
are examples of the same idea (a small sketch of the voting scheme follows
below).
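A minimal sketch of that majority-voting idea in Python (the function names
and the 'flaky' example calculation are illustrative assumptions, not any
real system's interface):

import random
from collections import Counter

def majority_vote(results):
    """Return the value produced by a strict majority of replicas,
    or None if the replicas fail to agree."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

def run_redundant(calc, x, replicas=3):
    """Run the same calculation on several possibly unreliable units
    and take the majority answer."""
    return majority_vote([calc(x) for _ in range(replicas)])

def flaky_square(x, error_rate=0.001):
    """A calculation that is occasionally wrong, standing in for an
    unreliable execution unit."""
    result = x * x
    return result + 1 if random.random() < error_rate else result

print(run_redundant(flaky_square, 12))   # almost always prints 144

In hardware (triple modular redundancy) the voter is a circuit rather than
software, but the principle is the same.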

This kind of redundancy is always more expensive in time and material
than reducing the error rate by merely operating components 'in spec'
(rough numbers on the trade-off are sketched below).
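To put back-of-the-envelope numbers on that trade-off, assuming independent
and identical errors (an assumption for illustration, not a claim from the
posts): with a per-run error probability p, triplicating the work costs
roughly three times as much, while the vote can only go wrong when at least
two replicas fail in the same run - about 3*p^2 for small p.

from math import comb

def voted_error_rate(p, n=3):
    """Probability that a majority of n independent replicas are wrong in
    the same run, each with per-run error probability p. This is an upper
    bound on the chance of a wrong voted answer, since two wrong replicas
    may still disagree with each other and be detected instead."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

p = 1e-3   # the one-in-1000 rate from the original post
print(f"single unit : {p:.2e} errors per run")
print(f"2-of-3 vote : {voted_error_rate(p):.2e} errors per run, at ~3x the cost")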

Now, if by operating systems in 'non-deterministic' regimes you mean
systems where quantum states are collapsed to obtain output, then that's a
horse of an entirely different color.

Finally, you will be running your thought experiment on a system that is
already 'non-deterministic' to some extent, prone to errors, and 'just good
enough' rather than perfect.

Phil Weldon




"omattos" wrote in message
...
The first rule of digital electronics is that circuits should be predictable -
i.e. when the same data is fed into a circuit twice, the output should
be the same every time.

I'm trying to do a thought experiment to see what could happen if this
restriction were relaxed for CPUs.

Current CPUs should process every instruction with 100% accuracy.
What if I remove that restriction and say an error may be made in one
in every 1000 instructions? The error could affect anything, from
a simple incorrect result of an arithmetic operation to incorrect
branching in the flow of control. After an "incorrect" instruction, I
understand that any further instructions executed could depend on the
"faulty" one, and therefore also produce unexpected results.

My question is what optimizations and speedups could be applied to CPU
design if they were allowed to occasionally produce "wrong" results?

For example, I suspect a higher clock speed would be OK, a higher
operating temperature range would be possible, and on-die defects would be
acceptable (provided they only affect a few operations). Because more
defects would be tolerated, feature size could be reduced with the same
manufacturing process, so clock speeds could be increased further; and
finally I'm guessing operating voltage could be reduced, cutting
power consumption.

My main question is: could a CPU design expert make an "out of the air"
estimate of how much faster a CPU could be made if it only had to produce
mostly-correct results, rather than perfect results?


Before anyone asks why I want such a CPU, it's just a thought
experiment to see how an algorithm that can't be multi-threaded could
be run fastest. By having multiple CPUs run the same code and
data and be "synched" every 100 instructions or so, the
current state of each processor could be compared and a majority
decision taken. Any CPU which isn't in the correct state would be
reset by copying the state of another CPU, and execution would
continue.
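A toy model of that sync-and-vote scheme in Python (the constants, the
single-'instruction' step function, and the recovery policy are
illustrative assumptions, not an actual CPU design):

import random
from collections import Counter

random.seed(1)

ERROR_RATE = 1e-3    # roughly one faulty instruction in 1000
SYNC_EVERY = 100     # compare CPU states every 100 instructions
N_CPUS = 3
N_INSTRUCTIONS = 10_000

def step(state, i):
    """One 'instruction' of a toy single-threaded algorithm (here it just
    accumulates i into the state), occasionally corrupted by a bit flip."""
    result = state + i
    if random.random() < ERROR_RATE:
        result ^= 1 << random.randint(0, 15)
    return result

checkpoint = 0                        # last state all CPUs agreed on
i = 1
while i <= N_INSTRUCTIONS:
    # run one block of instructions on every CPU, starting from the checkpoint
    states = [checkpoint] * N_CPUS
    for j in range(i, min(i + SYNC_EVERY, N_INSTRUCTIONS + 1)):
        states = [step(s, j) for s in states]
    majority, votes = Counter(states).most_common(1)[0]
    if votes > N_CPUS // 2:
        checkpoint = majority         # disagreeing CPUs are reset to this state
        i += SYNC_EVERY
    # else: no majority, so re-run the same block from the checkpoint

print("final state:", checkpoint,
      "expected:", N_INSTRUCTIONS * (N_INSTRUCTIONS + 1) // 2)

With three CPUs the vote can only be fooled if two of them corrupt their
state in exactly the same way, which is what makes the majority a
reasonable stand-in for "correct".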