HardwareBanter forum » Processors » Intel

Different numerical behaviour on i86 vs other CPUs

  #1  
Old August 8th 03, 04:12 PM
Matthieu Dazy

Hi,

I recently came across a problem with computations involving float operands
and a double result.

With the test code below, i86 CPUs behave consistently across OSes and
compilers (tested on Linux + gcc 3.2 and Windows + VS6), while other CPU
types (Sparc, MIPS and Intel ia64) give different results, though still
consistent across OSes and compilers (tested on Solaris + Sun CC 5.2, Irix +
MIPSpro CC 7.3, Linux64 + gcc 3.2, Linux64 + Intel CC 7.1).

It looks like i86 CPUs implicitly promote individual float operands to
double (or wider), thus preventing the potential float underflows/overflows
that can change the final result.

This is quite problematic for some of our code, which is required to behave
strictly identically on all platforms. I have tried changing the
precision setting of the i86 FPU, but the three computations in the test
code still give the same single result, whereas the other CPUs give three
slightly different results.

Is there a solution, either at the CPU/FPU or compiler level, that would
spare us the trouble of identifying and manually fixing all potentially
problematic constructs in our (rather large) code?

Thanks a lot,


--+-- Test code

#include <iostream>
using namespace std;

int main(int argc, char** argv) {
    float x = 41200.0f;
    float y = 0.0f;
    float z = 1257.53f;

    // this evaluates to 1699021440 on Sparc, MIPS and ia64 CPUs
    double a = x * x + y * y + z * z;

    // this evaluates to 1699021381.75 on Sparc, MIPS and ia64 CPUs
    double b = double(x * x) + double(y * y) + double(z * z);

    // this evaluates to 1699021381.774583 on Sparc, MIPS and ia64 CPUs
    double c = double(x) * double(x) + double(y) * double(y) +
               double(z) * double(z);

    // on i86 CPUs, all three expressions evaluate to 1699021381.774583

    cout.precision(16);
    cout << "a : " << a << endl;
    cout << "b : " << b << endl;
    cout << "c : " << c << endl;

    return 0;
}


--
Matthieu Dazy -- tel. +33 (0)3 83 67 66 19
Earth Decision Sciences, Nancy -- http://www.earthdecision.com
  #2  
Old August 8th 03, 08:22 PM
David Schwartz


"Matthieu Dazy" wrote in message
m...

This is quite problematic for some of our code that is required to have
strictly identical behaviour on all platforms.


This is an essentially impossible requirement. You would have to code
your own math routines.

DS


  #3  
Old August 10th 03, 07:49 AM
Yousuf Khan

"Oliver S." wrote in message
...
Is there a solution either at the CPU/FPU or compiler level that would
avoid us the trouble of identifying and manually fixing all potentially
problematic constructs in our (rather large) code ?


With the MS-Compiler there's an option called "improve floating-pt
consistency" (/Op[-]) and with the Intel-compilers there's something
similar.


You mean that there is actually a switch to decrease precision?

Yousuf Khan
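
For a concrete picture of what such a consistency switch does, here is a
minimal hand-written equivalent of the OP's test: every float
subexpression is rounded back to float by storing it in a named variable.
This is a sketch of the effect, not the compilers' documented mechanism;
gcc's -ffloat-store works along these lines, and on older x87 compilers
declaring the temporaries volatile makes the rounding stores unavoidable.

--+-- Sketch

#include <iostream>
using namespace std;

int main() {
    float x = 41200.0f;
    float y = 0.0f;
    float z = 1257.53f;

    // Round each intermediate back to float before continuing.
    float xx = x * x;
    float yy = y * y;
    float zz = z * z;
    float s  = (xx + yy) + zz;

    double a = s;   // should give 1699021440, the strict-float result
                    // that Sparc/MIPS/ia64 produce for expression 'a'

    cout.precision(16);
    cout << "a : " << a << endl;
    return 0;
}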


  #4  
Old August 18th 03, 08:18 AM
Glen Herrmannsfeldt


"Oliver S." wrote in message
...
You're close, but it's actually more than just a double. They cast it
to what they call "extended precision", which is an 80-bit float. So
it's actually more precise than even a double which is only a 64-bit
float.


This isn't true for most runtime-libraries because the startup-code of
all CRTs I've seen so far sets the x86 fpu-control-word to a precision
of 64 bits.


Even with 64 bits precision, doesn't it still maintain the 16 bit exponent?
If so, overflow properties would be different.

-- glen


  #5  
Old August 18th 03, 07:57 PM
Yousuf Khan

"Glen Herrmannsfeldt" wrote in message news:L4%%a.178579$uu5.33564@sccrnsc04...
"Oliver S." wrote in message
...
You're close, but it's actually more than just a double. They cast it
to what they call "extended precision", which is an 80-bit float. So
it's actually more precise than even a double which is only a 64-bit
float.


This isn't true for most runtime-libraries because the startup-code of
all CRTs I've seen so far sets the x86 fpu-control-word to a precision
of 64 bits.


Even with 64 bits precision, doesn't it still maintain the 16 bit exponent?
If so, overflow properties would be different.


Here's the formatting differences between single and double precision
floats:

http://www.psc.edu/general/software/...ieee/ieee.html

Basically, the single-precision exponents are 8-bits long, and in
double-precision they are 11-bits long.

Yousuf Khan
  #6  
Old August 19th 03, 05:42 AM
Glen Herrmannsfeldt


"Yousuf Khan" wrote in message
m...
"Glen Herrmannsfeldt" wrote in message

news:L4%%a.178579$uu5.33564@sccrnsc04...
"Oliver S." wrote in message
...
You're close, but it's actually more than just a double. They cast

it
to what they call "extended precision", which is an 80-bit float. So
it's actually more precise than even a double which is only a 64-bit
float.

This isn't true for most runtime-libraries because the startup-code

of
all CRTs I've seen so far sets the x86 fpu-control-word to a precision
of 64 bits.


Even with 64 bits precision, doesn't it still maintain the 16 bit

exponent?
If so, overflow properties would be different.


Here's the formatting differences between single and double precision
floats:

http://www.psc.edu/general/software/...ieee/ieee.html

Basically, the single-precision exponents are 8-bits long, and in
double-precision they are 11-bits long.


Yes, but the internal format has a 16 bit exponent. When you lower the
precision, I don't think it also decreases the exponent bits. The
properties of overflow will not be the same as if the result was stored and
then reloaded.

Actually, I haven't looked at them for a while. Wouldn't the precision
attribute specify the number of mantissa bits? So it would be 53 if you
wanted to match double, and 24 if you wanted to match float?

-- glen
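
For reference, a minimal sketch of changing the x87 precision control
field, assuming the MSVC CRT's _controlfp() from <float.h> (gcc targets
would need to load the control word with fldcw via inline assembly
instead). As discussed above, this narrows only the mantissa; the
internal 15-bit exponent is untouched, so overflow and underflow still
behave differently than with true float or double arithmetic.

--+-- Sketch

#include <float.h>   // _controlfp, _PC_24, _PC_53, _MCW_PC (MSVC CRT)

int main() {
    // Round x87 results to a 53-bit mantissa (double-style) from here
    // on; _PC_24 would select a 24-bit mantissa (float-style).
    _controlfp(_PC_53, _MCW_PC);

    // ... floating-point computations ...
    return 0;
}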


  #7  
Old August 20th 03, 03:38 AM
Yousuf Khan

"Glen Herrmannsfeldt" wrote in message
news:PUh0b.196659$Ho3.25901@sccrnsc03...
Even with 64 bits precision, doesn't it still maintain the 16 bit

exponent?
If so, overflow properties would be different.


Here's the formatting differences between single and double precision
floats:

http://www.psc.edu/general/software/...ieee/ieee.html

Basically, the single-precision exponents are 8-bits long, and in
double-precision they are 11-bits long.


Yes, but the internal format has a 16 bit exponent. When you lower the
precision, I don't think it also decreases the exponent bits. The
properties of overflow will not be the same as if the result was stored

and
then reloaded.


Yes, you're right, I misunderstood you, it does convert everything to the
internal "extended precision" format, which has a 16-bit exponent. The
process of loading it or storing it from either single- or double-precision
format to extended is done internally by the FPU. There is no optioning that
a compiler has in this conversion process, it is done completely internally.

Actually, I haven't looked at them for a while. Wouldn't the precision
attribute specify the number of mantissa bits? So it would be 53 if you
wanted to match double, and 24 if you wanted to match float?


Well, actually the mantissa would actually be 52- and 23-bits, respectively,
because one bit is reserved for the sign of the number.

An interesting thing about floating point is that there doesn't seem to be a
standard extended precision format, or if there is a standard it's quite ad
hoc. The single- and double-precision formats are locked tight and
well-defined, but not the extended one. If you'll notice, there are two
separate extended precision formats available, the Intel format and the
Sparc/PowerPC format. The Intel format has an 80-bit structure with a 16-bit
exponent and a 63-bit mantissa. The other format has an 128-bit structure
with a 111-bit mantissa, but surprisingly still a 16-bit exponent like the
Intel extended format, despite the fact that it's got 48 extra bits. It
seems as if the split up between exponent bits and mantissa bits is entirely
arbitrarily chosen.

http://www.dcs.ed.ac.uk/home/SUNWspr...h.doc.html#866

or

http://tinyurl.com/kk5c

Yousuf Khan
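
The platform dependence described above is easy to observe from C++; a
small sketch using numeric_limits. On x86 gcc, long double is typically
the 80-bit x87 format and reports 64 mantissa bits; a Sparc quad reports
113 (counting the hidden bit); VC6 maps long double to plain 64-bit
double and reports 53.

--+-- Sketch

#include <iostream>
#include <limits>
using namespace std;

int main() {
    // Report the size and shape of this platform's long double.
    cout << "sizeof(long double): " << sizeof(long double) << endl;
    cout << "mantissa bits      : "
         << numeric_limits<long double>::digits << endl;
    cout << "max base-2 exponent: "
         << numeric_limits<long double>::max_exponent << endl;
    return 0;
}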


  #8  
Old August 20th 03, 06:38 AM
Glen Herrmannsfeldt


"Yousuf Khan" wrote in message
able.rogers.com...
"Glen Herrmannsfeldt" wrote in message
news:PUh0b.196659$Ho3.25901@sccrnsc03...
Even with 64 bits precision, doesn't it still maintain the 16 bit

exponent?
If so, overflow properties would be different.

Here's the formatting differences between single and double precision
floats:

http://www.psc.edu/general/software/...ieee/ieee.html

Basically, the single-precision exponents are 8-bits long, and in
double-precision they are 11-bits long.


Yes, but the internal format has a 16 bit exponent. When you lower the
precision, I don't think it also decreases the exponent bits. The
properties of overflow will not be the same as if the result was stored

and
then reloaded.


Yes, you're right, I misunderstood you, it does convert everything to the
internal "extended precision" format, which has a 16-bit exponent. The
process of loading it or storing it from either single- or

double-precision
format to extended is done internally by the FPU. There is no optioning

that
a compiler has in this conversion process, it is done completely

internally.

Actually, I haven't looked at them for a while. Wouldn't the precision
attribute specify the number of mantissa bits? So it would be 53 if you
wanted to match double, and 24 if you wanted to match float?


I did check this one. The options are 64, 53, and 24.

Well, actually the mantissa would actually be 52- and 23-bits,

respectively,
because one bit is reserved for the sign of the number.


Yes, but the 32 bit and 64 bit formats have a hidden one. Since a
normalized binary floating point non-zero number must have the most
significant mantissa bit a one, they don't need to store it. The 80 bit
format does store it, and I believe allows unnormalized numbers.

An interesting thing about floating point is that there doesn't seem to be

a
standard extended precision format, or if there is a standard it's quite

ad
hoc. The single- and double-precision formats are locked tight and
well-defined, but not the extended one. If you'll notice, there are two
separate extended precision formats available, the Intel format and the
Sparc/PowerPC format. The Intel format has an 80-bit structure with a

16-bit
exponent and a 63-bit mantissa. The other format has an 128-bit structure
with a 111-bit mantissa, but surprisingly still a 16-bit exponent like the
Intel extended format, despite the fact that it's got 48 extra bits. It
seems as if the split up between exponent bits and mantissa bits is

entirely
arbitrarily chosen.


16 bits allows numbers bigger and smaller than most uses for such numbers.
There are a few algorithms where a large exponent is useful, but not so
many. The Intel 80 bit format started on the 8087 when they were somewhat
limited in what they could do. I might even say that 16 is too many. One
that I saw once explained that the volume of the universe in cubic Fermis,
(about the volume of an atomic nucleus) is 1e136. The actual number of
protons and neutrons in the universe is about 1e80. Why would anyone need
numbers larger than that?

The IBM extended precision format, starting with the 360/85, is base 16 with
a 112 bit mantissa, and 7 bit base 16 exponent. (Single, Double, and
Extended all have the same 7 bit base 16 exponent.)

-- glen
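
A small sketch of the exponent-range point: squaring a large float
overflows to infinity when the multiply is really carried out in float,
but survives when it is carried out with the wider exponent of double or
x87 extended precision, which is exactly the cross-platform discrepancy
the original post describes.

--+-- Sketch

#include <iostream>
using namespace std;

int main() {
    float x = 3e38f;                   // close to FLT_MAX (~3.4e38)

    double a = double(x) * double(x);  // ~9e76 fits easily in double
    double b = x * x;                  // strict float math: +inf;
                                       // x87 excess precision: ~9e76

    cout << "a : " << a << endl;
    cout << "b : " << b << endl;
    return 0;
}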


  #9  
Old August 21st 03, 11:50 PM
bill davidsen

In article wPD0b.204709$Ho3.27468@sccrnsc03,
Glen Herrmannsfeldt wrote:

| 15 bits allows numbers bigger and smaller than most uses for such
| numbers. There are a few algorithms where a large exponent is useful,
| but not so many. The Intel 80-bit format started on the 8087, when they
| were somewhat limited in what they could do. I might even say that 15
| is too many. One explanation I saw once was that the volume of the
| universe in cubic fermis (a cubic fermi is about the volume of an
| atomic nucleus) is 1e136. The actual number of protons and neutrons in
| the universe is about 1e80. Why would anyone need numbers larger than
| that?

Think small... sometimes you want very small numbers not to go to zero.
If they do, your calculations will not converge, or will do so slowly.

| The IBM extended-precision format, starting with the 360/85, is base
| 16, with a 112-bit mantissa and a 7-bit base-16 exponent. (Single,
| Double, and Extended all have the same 7-bit base-16 exponent.)

I thought the 80-bit format was part of the IEEE standard, called
something like "intermediate result" or some such. gcc calls it long
double, IIRC.

--
Bill Davidsen CTO, TMR Associates
As we enjoy great advantages from inventions of others, we should be
glad of an opportunity to serve others by any invention of ours; and
this we should do freely and generously.
-Benjamin Franklin (who would have liked open source)
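
The same effect in the other direction, as a sketch: a tiny intermediate
flushes to zero in float but survives in double.

--+-- Sketch

#include <iostream>
using namespace std;

int main() {
    float tiny = 1e-30f;

    float f  = tiny * tiny;                  // ~1e-60 underflows float,
                                             // the result is 0
    double d = double(tiny) * double(tiny);  // ~1e-60 is fine in double

    cout << "float : " << f << endl;
    cout << "double: " << d << endl;
    return 0;
}
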
  #10  
Old August 27th 03, 02:38 AM
Glen Herrmannsfeldt


"bill davidsen" wrote in message
m...
In article wPD0b.204709$Ho3.27468@sccrnsc03,


(snip of various IEEE format discussions)

I thought the 80 bit format was part of the IEEE standard, called
something like intermediate result or some such. gcc calls it long
double IIRC.


I haven't looked into it for a while, but I beleive that it allows a variety
of extended formats.

-- glen


 



