omattos -- December 11th 08, 07:14 PM
Posted to alt.comp.hardware.overclocking, alt.electronics
Subject: Non-deterministic CPUs


> Can you fill in a blank please? What's the application?


The main one is experimentation: taking a fresh look at the design of
modern high-performance digital electronics. One example of a mechanism
that is already in use and allowed to be "wrong" is branch prediction
in modern CPUs. In most cases it predicts the branch correctly and
performance improves; when it mispredicts, performance suffers, but the
net effect is an increase in average performance.
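
To make the averaging concrete, here's a toy back-of-envelope model in
Python. The cycle counts are made-up numbers, not measurements; they
just show the shape of the trade-off:

HIT_CYCLES = 1      # assumed cost of a correctly predicted branch
MISS_PENALTY = 15   # assumed pipeline-flush cost of a misprediction

def avg_branch_cost(accuracy):
    """Expected cycles per branch at a given prediction accuracy."""
    return accuracy * HIT_CYCLES + (1 - accuracy) * (HIT_CYCLES + MISS_PENALTY)

print(avg_branch_cost(0.90))   # ~2.5 cycles per branch
print(avg_branch_cost(0.99))   # ~1.15 cycles per branch

Even a predictor that's only 90% right averages about 2.5 cycles, far
better than paying the full stall on every branch, which is why being
occasionally "wrong" is worth it.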

My suggestion is to take this to the next logical step: since the
outcome of the branch prediction affects only performance, not the
result of the computation, it doesn't have to be 100% deterministic.
The same can be applied to other parts of the CPU provided that, as
with branch prediction, faults can be retrospectively detected and
corrected. The key to this system would be effective fault detection
(which is easy if you think about it, since checking the execution
history of a core becomes a parallel problem rather than a serial one)
and effective fault recovery. Recovery is harder, since you effectively
need a way to roll back to the state just before the error occurred,
which could include rolling back main memory, but I believe it's still
possible.
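
In software terms the detect-and-recover pattern I have in mind looks
roughly like this (a toy Python sketch; fast_unit, slow_exact_unit and
the injected fault rate are all made-up stand-ins for hardware):

import random

def fast_unit(a, b):
    """Stand-in for a fast functional unit that is occasionally wrong."""
    result = a + b
    if random.random() < 0.01:   # injected fault: 1% of answers corrupted
        result ^= 1              # flip the low bit to simulate an error
    return result

def slow_exact_unit(a, b):
    """Stand-in for the slow but always-correct failsafe path."""
    return a + b

def checked_add(a, b):
    # Speculate with the fast unit and retire the result provisionally.
    speculative = fast_unit(a, b)
    # The check here reruns the slow unit inline, which obviously gives
    # away the speedup; in hardware the checker would run in parallel,
    # behind retirement, scanning the execution history off the
    # critical path.
    if speculative != slow_exact_unit(a, b):
        # Fault detected: roll back and redo on the failsafe path.
        speculative = slow_exact_unit(a, b)
    return speculative

The point of the sketch is the structure, not the arithmetic: the fast
path is allowed to be wrong because every result is independently
checkable after the fact.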

The same technique would also reduce CPU testing and development time:
it wouldn't matter if you happened to introduce a "pentium divide bug"
(http://www.google.co.uk/search?q=pentium+divide+bug), since the
detection logic would catch the error and recalculate the result using
a slower failsafe mechanism. Equally, bugs could be introduced on
purpose to increase speed for the majority of cases; in effect, turning
on optimizations that don't work for corner cases.
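
Division is actually a nice concrete case, because checking a quotient
is far cheaper than producing one: q and r can be verified with a
single multiply and a compare. A toy Python sketch (buggy_divider and
its fault rate are invented for illustration):

import random

def buggy_divider(a, b):
    """Stand-in for fast divide hardware with a rare table fault."""
    q, r = divmod(a, b)
    if random.random() < 0.001:   # injected fault: occasionally off by one
        q += 1
    return q, r

def checked_div(a, b):
    """Divide fast, verify cheaply, fall back to a trusted slow path.
    Assumes a >= 0 and b > 0 to keep the check simple."""
    q, r = buggy_divider(a, b)
    # One multiply and a range check verify the whole division.
    if q * b + r == a and 0 <= r < b:
        return q, r
    # Wrong answer detected: redo it on the slow failsafe path.
    return divmod(a, b)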

I'm really looking to find out whether this idea has been investigated
before, and whether it has already been shown not to add significantly
to performance.

If there isn't a consensus that it simply won't work, I might have a go
myself: build a simple CPU on an FPGA, re-make it with error detection
and recovery, and intentionally introduce both random and systematic
errors into the processing core to see how it behaves.
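
Before touching the FPGA I'd probably model the trade-off in software
first, with something like this toy harness, where every cost parameter
is a guess rather than a measurement:

import random

def run_workload(n_ops, fault_rate, recovery_cost, op_cost, check_cost):
    """Total cycles for n_ops on a core with random faults and rollback."""
    cycles = 0.0
    for _ in range(n_ops):
        cycles += op_cost + check_cost
        if random.random() < fault_rate:
            cycles += recovery_cost   # roll back and redo on the safe path
    return cycles

random.seed(0)
# Safe, deterministic core: 1 cycle per op, no checking, never faults.
baseline = run_workload(100_000, 0.0, 0, op_cost=1.0, check_cost=0.0)
# Faster core that's allowed to be wrong, plus checking and recovery.
speculative = run_workload(100_000, 0.001, 50, op_cost=0.8, check_cost=0.05)
print(speculative / baseline)   # ~0.9: the faulty-but-fast core wins

Whether the real numbers come out that way is exactly what the FPGA
experiment would be for.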