A computer components & hardware forum. HardwareBanter

Floating point format for Intel math coprocessors



 
 
  #11  
July 2nd 03, 12:51 AM
Bob

Hi Jack!

I've always used the term "implied bit". I think I saw it in the 80186
programmer's reference section on using an 80187 coprocessor.

BTW: thanks for the articles (and the Math Toolkit book) on interpolating
functions - I use that stuff over and over again. In fact, I'm simulating an
antilog interpolation routine (4 terms by 4 indices) right now that will
eventually run on a PIC (no hardware multiply; not even an add-with-carry
instruction). Those forward and backward difference operators make it all
pretty easy! With no hardware support from the PIC, it will end up taking
nearly 10 ms from 24-bit ADC to final LED output, but it will be better than
14-bit accurate over the 23-bit output dynamic range.
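
To show why those difference operators kill the multiplies, here is a
minimal C sketch of the stepping trick in one dimension only. The table
values are made up, and the real routine would be fixed-point with scaled
differences for sub-interval steps; but once the forward differences of
four equally spaced entries are formed, each further point of the fitted
cubic costs just three additions:

#include <stdio.h>

int main(void)
{
    /* Four consecutive table entries (hypothetical values). */
    long y[4] = { 1000, 1122, 1259, 1413 };

    /* Forward differences: d1 = delta y, d2 = delta^2 y, d3 = delta^3 y. */
    long d1 = y[1] - y[0];
    long d2 = (y[2] - y[1]) - (y[1] - y[0]);
    long d3 = ((y[3] - y[2]) - (y[2] - y[1])) - d2;

    /* Step the interpolating cubic forward using additions only. */
    long v = y[0];
    for (int i = 0; i < 4; i++) {
        printf("p(%d) = %ld\n", i, v);   /* reproduces y[0]..y[3] */
        v  += d1;                        /* advance the value          */
        d1 += d2;                        /* advance the 1st difference */
        d2 += d3;                        /* advance the 2nd difference */
    }
    return 0;
}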

all the best,
Bob


  #12  
July 2nd 03, 04:39 AM
Yousuf Khan

"Jack Crenshaw" wrote in message
...
Back to the point. I want to thank you and everyone else who responded
(except the guy who said "stop it") for helping to straighten out my
warped brain.


Yeah, sometimes the ones who have the most details about a subject fail
to see the overall big picture (i.e. the old forest-for-the-trees
argument). I've sometimes had revelations when I go to teach someone
something I thought I already knew, and come away understanding it
better. :-)

Yousuf Khan


  #13  
July 2nd 03, 04:52 AM
Jonathan Kirwan

On Wed, 02 Jul 2003 03:39:51 GMT, "Yousuf Khan" wrote:

"Jack Crenshaw" wrote in message
...
Back to the point. I want to thank you and everyone else who responded
(except the guy who said "stop it") for helping to straighten out my
warped brain.


Yeah, sometimes the ones who have the most details about a subject fail
to see the overall big picture (i.e. the old forest-for-the-trees
argument). I've sometimes had revelations when I go to teach someone
something I thought I already knew, and come away understanding it
better. :-)


Teaching *is* one of the better ways to learn something.

Jon

  #14  
July 2nd 03, 07:11 AM
Paul Keinanen

On Fri, 27 Jun 2003 19:14:09 GMT, Jonathan Kirwan wrote:


In my own experience, even experience predating the Intel 8087 or the
IEEE standardization, it was called "hidden bit" notation. I don't know
where "phantom" comes from, as my own reading managed to miss it
completely.


I have never heard of phantom bits before, but the PDP-11 processor
handbooks talked about hidden-bit normalisation in describing the
floating point processor (FPP) instructions in the mid-70s. The term
might be even older, since the same format was used by the FIS
instruction set extensions on some early PDP-11s.
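
Since the whole thread turns on that one bit, here is a minimal C sketch
of the idea against today's IEEE 754 single format (not the PDP-11
encoding, though the trick is the same): only 23 significand bits are
stored, and decoding a normal number means putting the implied leading 1
back in.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 6.5f;                    /* 1.101b * 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);    /* portable way to view the bits */

    uint32_t sign        = bits >> 31;
    uint32_t efield      = (bits >> 23) & 0xFF;
    int32_t  exponent    = (int32_t)efield - 127;  /* remove the bias */
    uint32_t significand = bits & 0x7FFFFF;        /* 23 stored bits  */

    if (efield != 0)                   /* normal number, not denormal: */
        significand |= 1u << 23;       /* restore the hidden bit       */

    printf("sign=%u exp=%d significand=0x%06X\n",
           (unsigned)sign, (int)exponent, (unsigned)significand);
    /* prints: sign=0 exp=2 significand=0xD00000 */
    return 0;
}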

Paul

  #15  
July 2nd 03, 01:58 PM
Jack Crenshaw

Bob wrote:

Hi Jack!

I've always used the term "implied bit". I think I saw it in the 80186
programmer's reference section on using an 80187 coprocessor.


I think I got the term "phantom bit" from Intel's f.p. library for the
8080, ca. 1975. Then again, I've been doing my own ASCII-binary and
binary-ASCII conversions since way before that, on big IBM iron. We
pretty much had to, since the old Fortran I/O routines were so
incredibly confining. It had to be around 1960-62. But the old 7094
format didn't use the "phantom" bit, AIR.

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

BTW: thanks for the articles (and the Math Toolkit book) on interpolating
functions - I use that stuff over and over again. In fact, I'm simulating an
antilog interpolation routine (4 terms by 4 indices) right now that will
eventually run on a PIC (no hardware multiply; not even an add-with-carry
instruction). Those forward and backward difference operators make it all
pretty easy! With no hardware support from the PIC, it will end up taking
nearly 10 ms from 24-bit ADC to final LED output, but it will be better than
14-bit accurate over the 23-bit output dynamic range.


Sounds neat. I'm glad I could help.

FWIW, there's a fellow in my office who has a Friden calculator sitting
on his credenza. He's restoring it.

Jack
  #16  
July 2nd 03, 02:31 PM
Jack Crenshaw

Jonathan Kirwan wrote:

On Tue, 01 Jul 2003 13:03:19 GMT, Jack Crenshaw wrote:

snip
Hello. Re the term "phantom bit": I've been using it since I can
remember -- and that's a looooonnnngggg time.


I think my first exposure to "hidden bit" as a term dates to about
1974. But I could be off by a year, either way.

Then again, I still sometimes catch myself saying "cycles" or
"kilocycles," or "B+".


Hehe. Now those terms aren't so "hidden" to me. I learned my
early electronics on tube design manuals. One sticking point I
remember bugging me for a long time was exactly, "How do they
size those darned grid leak resistors?" I just couldn't figure
out where they got the current from which to figure their
magnitude. So even B+ is old hat to me.


Then you definitely ain't one of the young punks, are you? <g>

Re grid leak: I think it must be pretty much trial and error. No doubt
_SOMEONE_ has a theory for it, but I would think the grid current must
vary a lot from tube to tube.

FWIW, I sit here surrounded by old Heathkit tube electronics. I collect
them. Once I started buying them, I realized I couldn't just drive down
to the local drugstore and test the tubes. Had to buy a tube tester,
VTVM, and all the other accoutrements to be able to work on them.

Maybe someone objected to the implied occult nature of the term, "phantom"?


Oh, geez. I've never known a geek to care about such things. I suppose
they must exist, somewhere. I've just never met one willing to let me
know they thought like that. But that's an interesting thought. It
would fit the weird times in the US we live in, with about 30% aligning
themselves as fundamentalists.

Nah... it just can't be.


I agree; I was mostly kidding about the PC aspects. One never knows,
tho. FYI, I have been known to be called a "fundie" on talk.origins and
others of those insightful and respectful sites. I'm not, but they are
not noted for their discernment or subtleties of observation.

One of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
Dictionary" fame. Among his myriad talents, he's one of the world's
leading experts on matters religious. He and I have had some most
stimulating and rewarding discussions, on the rare occasions when we
get together. The trick is a little thing called mutual respect. Most
modern denizens of the 'net don't get the notion of respecting a
person's opinion, even while disagreeing with it.

Oh, there was no question. I've a kindred interest in physics and
engineering, I imagine. I'm currently struggling through Robert
Gilmore's books, one on Lie groups and algebras and the other on
catastrophe theory for engineers, as well as polytropes, packing
spheres, and other delights. There were some nice insights in your
book, which helped wind me on just enough of a different path to
stretch me without losing me.


Glad to help.

By the way!! I completely agree with you about MathCad! What a piece of
*&!@&$^%$^ it is, now. I went through several iterations, and at first
I loved its slant, its whole approach, but I absolutely hate it now
because, frankly, I can't run it for more than an hour before I don't
have any memory left and it crashes out.


Don't get me started on Mathcad!

As some old-time readers might know, I used to recommend Mathcad to
everyone. In my conference papers, I'd say, "If you are doing
engineering and don't have Mathcad, you're limiting your career." After
Version 7 came out, I had to say, "Don't buy Mathcad at any price; it's
broken." Here at home I've stuck at Version 6. Even 6 has its problems
-- 5 was more stable -- but it's the oldest I could get (from
RecycledSoftware, a great source). The main reason I talked my company
into getting Matlab was as a refuge from Mathcad.

Having said that, truth in advertising also requires me to say that I
use it almost every day. The reason is simple: it's the only game in
town. It's the only Windows program that lets you write both math
equations and text, lets you generate graphics, and also does symbolic
algebra, in a WYSIWYG interface. Pity it's so unstable.

Come to that, my relationship with Mathcad is very consistent, and much
the same as my relationship with Microsoft Office and Windows. I use it
every day, and curse it every day. I've learned to save early and
often. Even that doesn't always help, but it's the best policy. I had
one case where saving broke the file, but the Mathcad support people
(who can be really nice, sometimes) managed to restore it.

I stay in pretty constant contact with the Mathcad people. As near as I
can tell, they are trying hard to get the thing under control. Their
goal is to get the program to such a point that it's reasonable to use
as an Enterprise-level utility, and a means of sharing data across
organizations. I'm also in their power users' group, and theoretically
supposed to be telling them where things aren't working.

Even so, when I report problems, which is often, the two most common
responses I get are:

1) It's not a bug, it's a feature, and
2) Sorry, we can't reproduce that problem.

I think Mathsoft went through a period where all the original authors
were replaced by maintenance programmers -- programmers with more
confidence than ability. They seemingly had no qualms about changing
things around and redefining user interfaces, with little regard for
what they might break. Mathsoft is trying to turn things around now,
but it's not going to be easy, IMO.

Reboot time every hour is not my idea of a good thing. And that's only
if I don't type and change things too fast. When I work quickly on it,
I can go through what memory is left under Win98 on a 256 MB machine in
half an hour! No help from them, and two versions later I've simply
stopped using it. I don't even want to hear from them again. Hopefully,
I'll be able to find an old version somewhere. For now, I'm doing
without.


See RecycledSoftware, as mentioned above. BTW, have you _TOLD_ Mathsoft
how you feel? Sometimes I think I'm the only one complaining.

I'm using Version 11 with all the upgrades, and it's still thoroughly
broken. Much less stable than versions 7, 8, etc.

Yes. But that's fine, I suspect. I've taught undergrad classes, and
most folks just go "barf" when confronted with learning floating point.
In class evaluations, I think having to learn floating point was the
biggest source of complaints about the classes. You probably addressed
everything anyone "normal" could reasonably care about, and more.


F.P. is going to be in my next book. I have a format called "short
float" which uses a 24-bit form factor with a 16-bit mantissa. I first
used it back in '76 for an embedded 8080 problem (a Kalman filter on an
8080!). Used it again, 20 years later, on a '486. Needless to say, it's
not very accurate, but 16 bits is about all we can get out of an A/D
converter anyway, so it's reasonable for embedded use.
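
Jack gives only the sizes, so the field split below is strictly a guess:
a C sketch assuming 1 sign bit, a 7-bit exponent with an assumed bias of
63, and a 16-bit stored mantissa with a hidden leading 1. The point is
just how little code a 24-bit float needs to unpack.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define BIAS 63   /* assumed bias for the assumed 7-bit exponent */

/* Unpack the low 24 bits of sf as a "short float" under the guessed
 * layout: sign(1) | exponent(7) | mantissa(16), hidden bit implied. */
double short_float_to_double(uint32_t sf)
{
    uint32_t sign = (sf >> 23) & 1;
    int      exp  = (int)((sf >> 16) & 0x7F) - BIAS;
    uint32_t man  = (sf & 0xFFFF) | 0x10000;       /* restore hidden bit */

    double value = ldexp((double)man, exp - 16);   /* man * 2^(exp-16) */
    return sign ? -value : value;
}

int main(void)
{
    /* 1.1b * 2^1 = 3.0: exponent field 64, stored mantissa 0x8000 */
    uint32_t sf = (0u << 23) | ((uint32_t)(1 + BIAS) << 16) | 0x8000;
    printf("%g\n", short_float_to_double(sf));     /* prints 3 */
    return 0;
}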

Wanna hear the funny part? After tinkering with it for a while, I
worked out the rules for my imagined format, and they worked just fine.
At work, I've got a Mathcad file that takes the hex number, shifts it
two bits at a time, diddles the "phantom" bit, and produces the right
results. I can go from integer to float and back nicely, using this
cockamamie scheme.
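
For the integer-to-float direction, the plain base-2 version of the game
looks like this in C. It is a sketch only: the function name is mine,
the stepping is one bit at a time rather than Jack's two, and real code
would round the bits shifted out instead of truncating.

#include <stdint.h>
#include <stdio.h>

/* Pack an unsigned integer into IEEE-single-style fields. */
uint32_t uint_to_float_bits(uint32_t n)
{
    if (n == 0) return 0;

    int exp = 31;
    while (!(n & 0x80000000u)) {    /* normalize: leading 1 up to bit 31 */
        n <<= 1;
        exp--;
    }
    /* Bit 31 is now the phantom bit: drop it by masking, and keep the
     * next 23 bits (truncating what falls off the bottom). */
    uint32_t mantissa = (n >> 8) & 0x7FFFFF;
    return ((uint32_t)(exp + 127) << 23) | mantissa;
}

int main(void)
{
    printf("0x%08X\n", uint_to_float_bits(6));   /* 6.0f = 0x40C00000 */
    return 0;
}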


Hmm. Then you should be able to construct a function to map
between these, proving the consistent results. I've a hard time
believing there is one. But who knows? Maybe this is the
beginning of a new facet of mathematics, like the investigation
into fractals or something!


Grin! I don't know about that, but there is indeed a connection. I
suppose that, with enough effort, I could work out a scheme for using
base 16, and still get the same bit patterns. Epicycles upon epicycles,
don'cha know.
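
For contrast, here is what base 16 looks like, sketched in C along the
lines of the IBM System/360 single format: the exponent is a power of
16, so the leading fraction digit can be anything from 1 to 15, and
there is no single bit to hide. That is one reason the phantom-bit trick
belongs to base-2 formats.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode an IBM/360-style hex float: sign(1) | excess-64 base-16
 * exponent(7) | fraction(24); value = fraction/16^6 * 16^(exp-64). */
double ibm_single_to_double(uint32_t w)
{
    int      sign = (int)(w >> 31) & 1;
    int      exp  = (int)((w >> 24) & 0x7F) - 64;
    uint32_t frac = w & 0xFFFFFF;

    double value = (double)frac * pow(16.0, exp - 6);
    return sign ? -value : value;
}

int main(void)
{
    /* 1.0 = 0x41100000: exponent 65, fraction 0x100000 = 16^5 */
    printf("%g\n", ibm_single_to_double(0x41100000));  /* prints 1 */
    return 0;
}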

hehe. Best of luck. In the process, I did notice that you are
entertaining thoughts on a revised "Let's build a compiler."
Best of luck on that and if you feel the desire for unloading
some of the work, I might could help a little. I've written a
toy C compiler before, an assembler, several linkers, and a
not-so-toy BASIC interpreter. I can, at least, be a little bit
dangerous. Might be able to shoulder something, if it helps.


Thanks for the offer. I'm thinking that perhaps an open-source sort of
approach might be useful. Several people have offered to help. My
intent is to use Delphi, and there are lots of folks out there who know
it better than I. Of course, I'll still have to do the prose, but help
with the software is always welcome.
Jack
  #17  
July 2nd 03, 04:48 PM
Everett M. Greene

Jack Crenshaw writes:

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.


Or any other HLL, for that matter!

FWIW, there's a fellow in my office who has a Friden calculator sitting
on his credenza. He's restoring it.


He has a slide rule for backup?


Speaking of hardware math mysteries, Dr. Crenshaw, et al.: does anyone
know how the (very few) computers that have the capability perform BCD
multiply and divide? Surely there's a better way than repeatedly
adding/subtracting n times per multiplier/divisor digit. Converting
arbitrary-precision BCD to binary, performing the operation, and then
converting back to BCD wouldn't seem to be the way to go (in hardware).
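
One software answer, sketched in C (not a claim about any particular
machine's hardware): do longhand multiplication with one small binary
product per digit pair, then a single decimal carry-propagation pass.
That is one addition per partial product instead of n repeated additions
per multiplier digit; hardware often goes further, with tables of
precomputed multiples of the multiplicand.

#include <stdio.h>
#include <stdint.h>

#define DIGITS 8   /* operands: up to 8 BCD digits packed in a uint32_t */

uint64_t bcd_mul(uint32_t a, uint32_t b)    /* returns packed BCD product */
{
    int prod[2 * DIGITS] = { 0 };

    for (int i = 0; i < DIGITS; i++) {
        int da = (a >> (4 * i)) & 0xF;      /* i-th digit of a */
        for (int j = 0; j < DIGITS; j++) {
            int db = (b >> (4 * j)) & 0xF;  /* j-th digit of b */
            prod[i + j] += da * db;         /* one binary 4x4 multiply */
        }
    }
    /* Decimal carry propagation ("decimal adjust"). */
    uint64_t out = 0;
    int carry = 0;
    for (int k = 0; k < 2 * DIGITS; k++) {
        int d = prod[k] + carry;
        carry = d / 10;
        out |= (uint64_t)(d % 10) << (4 * k);
    }
    return out;
}

int main(void)
{
    /* 1234 * 5678 = 7006652, all in packed BCD */
    printf("%llX\n", (unsigned long long)bcd_mul(0x1234, 0x5678));
    return 0;
}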
  #18  
July 2nd 03, 06:16 PM
Jonathan Kirwan

On Wed, 02 Jul 2003 09:11:38 +0300, Paul Keinanen wrote:

On Fri, 27 Jun 2003 19:14:09 GMT, Jonathan Kirwan wrote:

In my own experience, even experience predating the Intel 8087 or the
IEEE standardization, it was called "hidden bit" notation. I don't know
where "phantom" comes from, as my own reading managed to miss it
completely.


I have never heard of phantom bits before, but the PDP-11 processor
handbooks talked about hidden-bit normalisation in describing the
floating point processor (FPP) instructions in the mid-70s. The term
might be even older, since the same format was used by the FIS
instruction set extensions on some early PDP-11s.


Thanks for that. I think I still have a PDP-11 book or two around
here... yes! There it is: the 1976 PDP-11/70 Processor Handbook, and
yes... they talk about the hidden bit.

Yup, I was working on PDP-11s (and PDP-8s as well) from about 1972 on.
PDP-8s first, though. So I'm pretty sure that's where I got it, and it
probably *was* circa 1974, at a guess.

Damn, my memory is good in spots!

Thanks,
Jon

 



