#1
Floating point format for Intel math coprocessors
On Fri, 27 Jun 2003 14:23:22 GMT, Jack Crenshaw wrote:

> I've run across a peculiarity concerning the format used inside the
> Intel math coprocessor. I have always thought that the format used
> was in accordance with the IEEE 794

Unless IEEE 794 is something new, I think you mean IEEE 754.

> standard, and every reference I've seen on the web seems to imply
> that. But, as nearly as I can tell, it's not the same. The IEEE
> standard for 32-bit floats says the format should be
>
>   sign     -- 1 bit
>   exponent -- 8 bits, power of 2, split on 127

With the proviso that the values of 0 and 255 for the exponent are
special cases reserved for 0, Inf, denormals, and NaN.

>   mantissa -- 23 bits + phantom bit in bit 24.
>
> The Intel processor seems to use the following:
>
>   sign     -- 1 bit
>   exponent -- _SEVEN_ bits, power of _FOUR_
>   mantissa -- sometimes 23 bits, sometimes 24. Sometimes phantom
>   bit, sometimes not. When it's there, it's in bit _TWENTY_THREE_!

I don't think so. Are you mistaking the lsb of the exponent for the
"visible" phantom bit?

[...]

> You'll see that
>
>   1 -- 3f800000 (high bit is visible)

  seee eeee emmm mmmm mmmm mmmm mmmm mmmm
  0011 1111 1000 0000 0000 0000 0000 0000
  s = 0, e = 127, m = 0
  (-1)^s * 2^(e-127) * (1 + m/2^23) = 1 * 1 * 1 = 1.0

> but
>
>   2 -- 40000000 (high bit is not)

  seee eeee emmm mmmm mmmm mmmm mmmm mmmm
  0100 0000 0000 0000 0000 0000 0000 0000
  s = 0, e = 128, m = 0
  (-1)^s * 2^(e-127) * (1 + m/2^23) = 1 * 2 * 1 = 2.0

> Try a few others and see what you get. Some will surprise you.

I'm not finding any surprises.

  1.5 -- 3fc00000
  seee eeee emmm mmmm mmmm mmmm mmmm mmmm
  0011 1111 1100 0000 0000 0000 0000 0000
  s = 0, e = 127, m = 0x400000
  (-1)^s * 2^(e-127) * (1 + m/2^23) = 1 * 1 * 1.5 = 1.5

  2.5 -- 40200000
  seee eeee emmm mmmm mmmm mmmm mmmm mmmm
  0100 0000 0010 0000 0000 0000 0000 0000
  s = 0, e = 128, m = 0x200000
  (-1)^s * 2^(e-127) * (1 + m/2^23) = 1 * 2 * 1.25 = 2.5

Perhaps I'm misunderstanding your point?

Regards,
-=Dave
--
Change is inevitable, progress is not.
#2
In article , Jack Crenshaw wrote:

> I've run across a peculiarity concerning the format used inside the
> Intel math coprocessor.

If you're talking about the format in which IA32 FPUs store values to
memory, then I doubt it. If you're talking about the 80-bit internal
format, I don't know; I've never tried to use that format externally.
I've been exchanging float data between IA32 systems and at least a
half-dozen other architectures since the 8086/8087 days. I never saw
any format problems.

> I have always thought that the format used was in accordance with
> the IEEE 794 standard,

It is IEEE something, though 794 doesn't sound right...

--
Grant Edwards                   grante at visi.com
Yow! ...this must be what it's like to be a COLLEGE GRADUATE!!
#3
Jack Crenshaw wrote:

> ... snip ...
> The IEEE standard for 32-bit floats says the format should be
>
>   sign     -- 1 bit
>   exponent -- 8 bits, power of 2, split on 127
>   mantissa -- 23 bits + phantom bit in bit 24.
>
> The Intel processor seems to use the following:
>
>   sign     -- 1 bit
>   exponent -- _SEVEN_ bits, power of _FOUR_
>   mantissa -- sometimes 23 bits, sometimes 24. Sometimes phantom
>   bit, sometimes not. When it's there, it's in bit _TWENTY_THREE_!
>
> ... snip ...
>
> You'll see that
>
>   1 -- 3f800000 (high bit is visible)
>
> but
>
>   2 -- 40000000 (high bit is not)
>
> Try a few others and see what you get. Some will surprise you. I
> know that there must be people out there to whom this is old, old
> news. Even so, I've never seen a word about it, and didn't find
> anything in a Google search. I'd appreciate any comments.

What chips does this format appear in? I expect the presence or
absence of normalization depends on the oddness of the exponent byte.
It makes sense for byte-addressed, memory-based systems, since zero
(ignoring denormalization) can be detected in a single byte.

--
Chuck F
Available for consulting/temporary embedded and systems.
http://cbfalconer.home.att.net  USE worldnet address!
#4
On Fri, 27 Jun 2003 17:58:50 GMT, Jonathan Kirwan wrote:

Hmm. Are you *THE* Jack Crenshaw? The "Let's Build A Compiler" and
"Math Toolkit for Real-Time Programming" Jack _W._ Crenshaw?

Actually, I think I've answered my own question, here. You really
*are* that Jack. What clues me in is your use of "phantom" here:

> mantissa -- 23 bits + phantom bit in bit 24.

The same term used on page 50 in "Math Toolkit..." In my own
experience, even that predating the Intel 8087 or the IEEE
standardization, it was called a "hidden bit" notation. I don't know
where "phantom" comes from, as my own reading managed to completely
miss it.

So, a hearty "Hello" from me!

Jon
#5
Jonathan Kirwan wrote:

> Hmm. Are you *THE* Jack Crenshaw? The "Let's Build A Compiler" and
> "Math Toolkit for Real-Time Programming" Jack _W._ Crenshaw?
> Actually, I think I've answered my own question, here. You really
> *are* that Jack.

Grin! Yep, I really am.

> What clues me in is your use of "phantom" here:
>> mantissa -- 23 bits + phantom bit in bit 24.
> The same term used on page 50 in "Math Toolkit..." In my own
> experience, even that predating the Intel 8087 or the IEEE
> standardization, it was called a "hidden bit" notation. I don't
> know where "phantom" comes from, as my own reading managed to
> completely miss it. So, a hearty "Hello" from me!

Hello. Re the term "phantom bit": I've been using that term since I
can remember -- and that's a looooonnnngggg time. Then again, I still
sometimes catch myself saying "cycles" or "kilocycles," or "B+". I
first heard the term in 1975. Not sure when it became Politically
Incorrect. Maybe someone objected to the implied occult nature of the
term "phantom"? Who knows? But as far as I'm concerned, the term
"hidden bit" is a Johnny-come-lately on the scene.

Back to the point. I want to thank you and everyone else who responded
(except the guy who said "stop it") for helping to straighten out my
warped brain. It's nice that you have my book. Thanks for buying it.

As a matter of fact, I first ran across this "peculiarity" three years
ago, when I was writing it. I needed to twiddle the components of the
floating-point number -- separate the exponent from the mantissa -- to
write the fp_hack structure for the square root algorithm. I looked at
the formats for float, double, and long double, and found the second
two formats easy enough to grok. But when I looked at the format for
floats, I sort of went, "Gag!" and quickly decided to use doubles for
the book.

It's funny how an idea, once formed, can persist. Lo those many years
ago, I didn't have a lot of time to think about it -- had to get the
chapter done. I just managed to convince myself that the format used
this peculiar convention, what with base-4 exponents and all. I had no
more need of it at the time, so never went back and revisited the
impression. It's persisted ever since.

All of the folks who responded are absolutely right. Once I got my
head screwed on straight, it was quite obvious that the format has no
mysteries. It is indeed the IEEE 754 format, plain and simple. The
thing that had me confused was the exponents: 3f8, 400, 408, etc. With
one bit for the sign and eight for the exponent, it's perfectly
obvious that the exponent has to bleed down one bit into the next
lower hex digit. That's what I was seeing, but somehow, in my haste, I
didn't recognize it as such, and formed this "theory" that it was
using a base-4 exponent.

Wanna hear the funny part? After tinkering with it for awhile, I
worked out the rules for my imagined format, and they worked just
fine. At work, I've got a Mathcad file that takes the hex number,
shifts it two bits at a time, diddles the "phantom" bit, and produces
the right results. I can go from integer to float and back nicely,
using this cockamamie scheme. Needless to say, the conversion is a
whole lot easier if one uses the real format! My Mathcad file just got
a lot shorter.

Thanks again to everyone who responded, and my apologies for bothering
y'all with this imaginary problem.

Jack
#6
On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw wrote:

> Grin! Yep, I really am.

Hehe. Nice to know one of my antennas is still sharp.

> Hello. Re the term "phantom bit": I've been using that term since I
> can remember -- and that's a looooonnnngggg time.

I think my first exposure to "hidden bit" as a term dates to about
1974. But I could be off by a year, either way.

> Then again, I still sometimes catch myself saying "cycles" or
> "kilocycles," or "B+".

Hehe. Now those terms aren't so "hidden" to me. I learned my early
electronics on tube design manuals. One sticking point I remember
bugging me for a long time was exactly, "How do they size those darned
grid leak resistors?" I just couldn't figure out where they got the
current from which to figure their magnitude. So even B+ is old hat to
me.

> I first heard the term in 1975.

Well, that's about the time for "hidden bit," too. Probably, at that
time, the term was still in a state of flux. I just got my hands on
different docs, I imagine.

> Not sure when it became Politically Incorrect.

Oh, it's fine to me, anyway. I knew what was meant the moment I saw
the term. It's pretty clear. I just think time has settled more on one
term than another. But to take your allusion and run with it a bit...
I don't know of anyone who was part of some conspiracy to set the term
-- in any case, setting terms usually is propagandistic, designed for
setting agendas in people's minds, and here is a case where everyone
would want the same agenda.

> Maybe someone objected to the implied occult nature of the term
> "phantom"?

Oh, geez. I've never known a geek to care about such things. I suppose
they must exist, somewhere. I've just never met one willing to let me
know they thought like that. But that's an interesting thought. It
would fit the weird times in the US we live in, with about 30%
aligning themselves as fundamentalists. Nah... it just can't be.

> Who knows?

I really think it was more the IEEE settling on a term. But then, this
isn't my area, so I could be dead wrong about that -- I'm only
guessing.

> But as far as I'm concerned, the term "hidden bit" is a
> Johnny-come-lately on the scene.

Hehe. I've no problem if that's true.

> Back to the point. I want to thank you and everyone else who
> responded (except the guy who said "stop it") for helping to
> straighten out my warped brain.

No problem. It was really pretty easy to recall the details. Like
learning to ride a bicycle, I suppose.

> It's nice that you have my book. Thanks for buying it.

Oh, there was no question. I've a kindred interest in physics and
engineering, I imagine. I'm currently struggling through Robert
Gilmore's books, one on Lie groups and algebras and the other on
catastrophe theory for engineers, as well as polytropes, packing
spheres, and other delights. There were some nice insights in your
book, which helped wind me on just enough of a different path to
stretch me without losing me.

By the way!! I completely agree with you about Mathcad! What a piece
of *&!@&$^%$^ it is, now. I went through several iterations, loved at
first the slant or approach in using it, but absolutely hate it now
because, frankly, I can't run it for more than an hour before I don't
have any memory left and it crashes out. Reboot time every hour is not
my idea of a good thing. And that's only if I don't type and change
things too fast. When I work quickly on it, I can go through what's
left with Win98 on a 256MB RAM machine in a half hour! No help from
them, and two versions later I've simply stopped using it. I don't
even want to hear from them again. Hopefully, I'll be able to find an
old version somewhere. For now, I'm doing without.

> As a matter of fact, I first ran across this "peculiarity" three
> years ago, when I was writing it. [...] But when I looked at the
> format for floats, I sort of went, "Gag!" and quickly decided to
> use doubles for the book.

Yes. But that's fine, I suspect. I've taught undergrad classes, and
most folks just go "barf" when confronted with learning floating
point. In class evaluations, I think having to learn floating point
was the bigger source of complaints about the classes. You probably
addressed everything anyone "normal" could reasonably care about, and
more.

> It's funny how an idea, once formed, can persist. [...] It's
> persisted ever since.

No problem.

> All of the folks who responded are absolutely right. [...] That's
> what I was seeing, but somehow, in my haste, I didn't recognize it
> as such, and formed this "theory" that it was using a base-4
> exponent.

In any case, it's clear that your imagination is able to work
overtime, here! Maybe that's a good thing.

> Wanna hear the funny part? After tinkering with it for awhile, I
> worked out the rules for my imagined format, and they worked just
> fine. [...] I can go from integer to float and back nicely, using
> this cockamamie scheme.

Hmm. Then you should be able to construct a function to map between
these, proving the consistent results. I've a hard time believing
there is one. But who knows? Maybe this is the beginning of a new
facet of mathematics, like the investigation into fractals or
something!

> Needless to say, the conversion is a whole lot easier if one uses
> the real format! My Mathcad file just got a lot shorter.

Hehe!! When you get things right, they *do* tend to become a little
more prosaic, too. Good thing for those of us with feeble minds, too.

> Thanks again to everyone who responded, and my apologies for
> bothering y'all with this imaginary problem.

Hehe. Best of luck. In the process, I did notice that you are
entertaining thoughts on a revised "Let's Build a Compiler." Best of
luck on that, and if you feel the desire for unloading some of the
work, I might could help a little. I've written a toy C compiler
before, an assembler, several linkers, and a not-so-toy BASIC
interpreter. I can, at least, be a little bit dangerous. Might be able
to shoulder something, if it helps.

Jon
#7
Jonathan Kirwan wrote:

> On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw wrote:
> [snip]
> I think my first exposure to "hidden bit" as a term dates to about
> 1974. But I could be off by a year, either way.
> [...]
> Hehe. Now those terms aren't so "hidden" to me. I learned my early
> electronics on tube design manuals. One sticking point I remember
> bugging me for a long time was exactly, "How do they size those
> darned grid leak resistors?" I just couldn't figure out where they
> got the current from which to figure their magnitude. So even B+ is
> old hat to me.

Then you definitely ain't one of the young punks, are you? <g>

Re grid leak: I think it must be pretty much trial and error. No doubt
_SOMEONE_ has a theory for it, but I would think the grid current must
vary a lot from tube to tube. FWIW, I sit here surrounded by old
Heathkit tube electronics. I collect them. Once I started buying them,
I realized I couldn't just drive down to the local drugstore and test
the tubes. Had to buy a tube tester, VTVM, and all the other
accoutrements to be able to work on them.

>> Maybe someone objected to the implied occult nature of the term
>> "phantom"?
> Oh, geez. I've never known a geek to care about such things. I
> suppose they must exist, somewhere. [...] It would fit the weird
> times in the US we live in, with about 30% aligning themselves as
> fundamentalists. Nah... it just can't be.

I agree; I was mostly kidding about the PC aspects. One never knows,
tho. FYI, I have been known to be called a "fundie" on talk.origins
and others of those insightful and respectful sites. I'm not, but they
are not noted for their discernment or subtleties of observation. One
of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
Dictionary" fame. Among his many myriad talents, he's one of the
world's leading experts on matters religious. He and I have had some
most stimulating and rewarding discussions, on the rare occasions when
we get together. The trick is a little thing called mutual respect.
Most modern denizens of the 'net don't get the notion of respecting a
person's opinion, even while disagreeing with it.

> Oh, there was no question. I've a kindred interest in physics and
> engineering, I imagine. [...] There were some nice insights in your
> book, which helped wind me on just enough of a different path to
> stretch me without losing me.

Glad to help.

> By the way!! I completely agree with you about Mathcad! What a
> piece of *&!@&$^%$^ it is, now. [...] I can't run it for more than
> an hour before I don't have any memory left and it crashes out.

Don't get me started on Mathcad! As some old-time readers might know,
I used to recommend Mathcad to everyone. In my conference papers, I'd
say, "If you are doing engineering and don't have Mathcad, you're
limiting your career." After Version 7 came out, I had to say, "Don't
buy Mathcad at any price; it's broken." Here at home I've stuck at
Version 6. Even 6 has its problems -- 5 was more stable -- but it's
the oldest I could get (from RecycledSoftware, a great source). The
main reason I talked my company into getting Matlab was as a refuge
from Mathcad.

Having said that, truth in advertising also requires me to say that I
use it almost every day. The reason is simple: it's the only game in
town. It's the only Windows program that lets you write both math
equations and text, lets you generate graphics, and also does symbolic
algebra, in a WYSIWYG interface. Pity it's so unstable. Come to that,
my relationship with Mathcad is very consistent, and much the same as
my relationship with Microsoft Office and Windows: I use it every day,
and curse it every day. I've learned to save early and often. Even
that doesn't always help, but it's the best policy. I had one case
where saving broke the file, but the Mathcad support people (who can
be really nice, sometimes) managed to restore it.

I stay in pretty constant contact with the Mathcad people. As near as
I can tell, they are trying hard to get the thing under control. Their
goal is to get the program to such a point that it's reasonable to use
as an enterprise-level utility, and a means of sharing data across
organizations. I'm also in their power users' group, and theoretically
supposed to be telling them where things aren't working. Even so, when
I report problems, which is often, the two most common responses I get
are 1) "It's not a bug, it's a feature," and 2) "Sorry, we can't
reproduce that problem." I think Mathsoft went through a period where
all the original authors were replaced by maintenance programmers --
programmers with more confidence than ability. They seemingly had no
qualms about changing things around and redefining user interfaces,
with little regard for what they might break. Mathsoft is trying to
turn things around now, but it's not going to be easy, IMO.

> Reboot time every hour is not my idea of a good thing. [...]
> Hopefully, I'll be able to find an old version somewhere. For now,
> I'm doing without.

See RecycledSoftware, as mentioned above. BTW, have you _TOLD_
Mathsoft how you feel? Sometimes I think I'm the only one complaining.
I'm using Version 11 with all the upgrades, and it's still thoroughly
broken. Much less stable than versions 7, 8, etc.

> Yes. But that's fine, I suspect. I've taught undergrad classes, and
> most folks just go "barf" when confronted with learning floating
> point. [...] You probably addressed everything anyone "normal"
> could reasonably care about, and more.

F.P. is going to be in my next book. I have a format called "short
float" which uses a 24-bit form factor; 16-bit mantissa. I first used
it back in '76 for an embedded 8080 problem (Kalman filter on an
8080!). Used it again, 20 years later, on a '486. Needless to say,
it's not very accurate, but 16 bits is about all we can get out of an
A/D converter anyway, so it's reasonable for embedded use.

> Hmm. Then you should be able to construct a function to map between
> these, proving the consistent results. I've a hard time believing
> there is one. But who knows? Maybe this is the beginning of a new
> facet of mathematics, like the investigation into fractals or
> something!

Grin! I don't know about that, but there is indeed a connection. I
suppose that, with enough effort, I could work out a scheme for using
base 16 and still get the same bit patterns. Epicycles upon epicycles,
don'cha know.

> Hehe. Best of luck. In the process, I did notice that you are
> entertaining thoughts on a revised "Let's Build a Compiler." [...]
> Might be able to shoulder something, if it helps.

Thanks for the offer. I'm thinking that perhaps an open-source sort of
approach might be useful. Several people have offered to help. My
intent is to use Delphi, and there are lots of folks out there who
know it better than I. Of course, I'll still have to do the prose, but
help with the software is always welcome.

Jack
#8
Hi Jack!

I've always used the term "implied bit". I think I saw it in the 80186
programmer's reference, in the section on using an 80187 coprocessor.

BTW: thanks for the articles (and the Math Toolkit book) on
interpolating functions -- I use that stuff over and over again. In
fact, I'm simulating an antilog interpolation routine (4 terms by 4
indices) right now that will eventually run on a PIC (no hardware
multiply; not even an add-with-carry instruction). Those forward and
backward difference operators make it all pretty easy! With no
hardware support from the PIC, it will end up taking nearly 10 ms from
24-bit ADC to final LED output, but it will be better than 14-bit
accurate over the 23-bit output dynamic range.

all the best,
Bob
#9
Bob wrote:

> Hi Jack! I've always used the term "implied bit". I think I saw it
> in the 80186 programmer's reference, in the section on using an
> 80187 coprocessor.

I think I got the term "phantom bit" from Intel's f.p. library for the
8080, ca. 1975. Then again, I've been doing my own ASCII-binary and
binary-ASCII conversions since way before that, on big IBM iron. We
pretty much had to, since the old Fortran I/O routines were so
incredibly confining. It had to be around 1960-62. But the old 7094
format didn't use the "phantom" bit, AIR. You just haven't lived until
you've twiddled F.P. bits in Fortran <g>.

> BTW: thanks for the articles (and the Math Toolkit book) on
> interpolating functions -- I use that stuff over and over again.
> [...]

Sounds neat. I'm glad I could help. FWIW, there's a fellow in my
office who has a Friden calculator sitting on his credenza. He's
restoring it.

Jack
#10
Jack Crenshaw writes:

> You just haven't lived until you've twiddled F.P. bits in Fortran
> <g>.

Or any other HLL, for that matter!

> FWIW, there's a fellow in my office who has a Friden calculator
> sitting on his credenza. He's restoring it.

He has a slide rule for backup?

Speaking of hardware math mysteries, Dr. Crenshaw, et al., does anyone
know how the (very few) computers that have the capability perform BCD
multiply and divide? Surely there's a better way than repeated
adding/subtracting n times per multiplier/divisor digit. Converting
arbitrary-precision BCD to binary, performing the operation, and then
converting back to BCD wouldn't seem to be the way to go (in
hardware).