A computer components & hardware forum. HardwareBanter



Floating point format for Intel math coprocessors



 
 
  #21  
Old July 4th 03, 03:49 AM
Morris Dovey

Jonathan Kirwan wrote:

Oh, cripes. It's that goto process which reminded me of Fortran
II days on the IBM 1130. Now some more brain cells have been
restored. Bad news, as I have now probably forgotten yet
another important something.


That's what happens when you punch "//JOB T"
--
Morris Dovey
West Des Moines, Iowa USA
C links at http://www.iedu.com/c

  #22  
Old July 4th 03, 04:38 PM
Everett M. Greene

Jack Crenshaw writes:
"Everett M. Greene" wrote:
Jack Crenshaw writes:

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.


Or any other HLL, for that matter!

FWIW, there's a fellow in my office who has a Friden calculator sitting
on his credenza. He's restoring it.


He has a sliderule for backup?


Funny you should mention that. He has a whole _WALL_ covered with slide
rules, of all shapes, sizes, and descriptions. He has long ones, short
ones, circular ones, spiral ones. He has Pascal-based adding machines.
He has others of which I don't know the origin. He has Curta "pepper
grinders" like we used to use in sports car rallies. The guy's a collector.

But I presume you mean that the Friden won't be reliable. You may be
right, since he's the one doing the restoring. But IMX, I have never,
ever seen one fail.


I was just making a light-hearted comment. I just guessed that
someone who has an interest in older technology would have some
even older things. Does he have an abacus or two -- just in case?

Speaking of hardware math mysteries, Dr. Crenshaw, et al,
does anyone know how the (very few) computers that have
the capability perform BCD multiply and divide? Surely
there's a better way than repeated adding/subtracting n
times per multiplier/divisor digit. Converting arbitrary
precision BCD to binary, performing the operation, and
then converting back to BCD wouldn't seem to be the way
to go (in hardware).


Here, I'm not too clear as to whether you mean mechanical or electronic
calculators. The Friden and Monroe mechanical calculators did indeed
do successive subtraction. But they moved the carriage so that you
always got one digit of result after, on average, 5 subtractions. 50
for 10 digits. That's not too bad.
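The carriage-shift scheme described here is easy to model. Below is an illustrative Python sketch (the function name is mine, and this is the arithmetic idea only, not the Friden's actual mechanism): one quotient digit per shift position, each found by repeated subtraction.

```python
def decimal_divide(dividend, divisor, digits=10):
    """Shift-and-subtract decimal division: one quotient digit per
    'carriage' position, about 5 subtractions per digit on average."""
    quotient = []
    remainder = dividend
    scale = 10 ** (digits - 1)         # divisor shifted to the leftmost position
    for _ in range(digits):
        digit = 0
        while remainder >= divisor * scale:
            remainder -= divisor * scale   # repeated subtraction
            digit += 1
        quotient.append(digit)
        scale //= 10                       # shift the carriage one place right
    return quotient, remainder

# 1000000 / 7 gives the digit sequence 0,1,4,2,8,5,7 with remainder 1
```

Each digit costs at most 9 subtractions, about 5 on average, which is where the "50 for 10 digits" figure comes from.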


Ah, yes. I'd forgotten about the ol' divide by zero on the
electro-mechanical calculators. And the only "reset" was to
pull the plug.

Most electronic calculators used decimal arithmetic internally,
though I think the newer ones are binary.
As someone else said, they used multiplication tables stored in ROM.


I'll have to think about the use of multiplication tables,
especially for the case of packed values.

Since the advent of the microprocessor, ca. 1972, Intel and other mfrs
have included "decimal adjust" operations that correct addition and
subtraction operations if you want them to be decimal. Adding
numbers in decimal is the same as adding them in binary (or hex),
except that you must add six if the result is bigger than 9. They
use auxiliary carry bits to allow the addition of more than one
digit at a time.
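The add-six rule can be shown in a few lines. Here is a Python sketch of what a DAA-style adjust does to one packed-BCD byte (two digits); the helper name is mine, not Intel's:

```python
def bcd_add_byte(a, b):
    """Add two packed-BCD bytes; return (packed-BCD sum, decimal carry out).
    Mirrors the decimal-adjust rule: binary add, then add 6 to any digit
    that went past 9."""
    s = a + b
    # Low digit: adjust if it exceeds 9, or if it carried into the high
    # nibble (the "auxiliary carry").
    if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        s += 0x06
    carry = 0
    # High digit: same rule; an adjust here produces the decimal carry out.
    if (s >> 4) > 9:
        s += 0x60
        carry = 1
    return s & 0xFF, carry

# bcd_add_byte(0x19, 0x28) -> (0x47, 0), i.e. 19 + 28 = 47 in decimal
```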


Addition and subtraction as you say is accommodated by the DAA
instruction. Most of the micros only have a decimal-adjust for
addition, so you have to learn how to subtract by adding the
tens-complement of the value being subtracted.
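The tens-complement trick reduces subtraction to ordinary addition with the top carry discarded. A sketch of the arithmetic in Python (plain integers standing in for the BCD registers; assumes a >= b):

```python
def tens_complement_sub(a, b, digits=4):
    """Compute a - b (with a >= b) using only addition, the way you
    would on a CPU whose decimal adjust works only after an add."""
    modulus = 10 ** digits
    complement = modulus - b       # nines complement of b, plus one
    total = a + complement
    return total % modulus         # discard the carry out of the top digit

# tens_complement_sub(4321, 1234) -> 3087
```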
  #23  
Old July 5th 03, 04:00 PM
Jack Crenshaw

Jonathan Kirwan wrote:

On Thu, 03 Jul 2003 17:28:20 GMT, "Tauno Voipio"
wrote:

"Jack Crenshaw" wrote in message
...

Paul Keinanen wrote:

On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
Greene) wrote:

Jack Crenshaw writes:

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

What is the problem ?

IIRC the IAND and IOR are standard functions in FORTRAN and in many
implementations .AND. and .OR. operators between integers actually
produced bitwise and and bitwise or results.

Hmphh! In Fortran II, we were lucky to get add and subtract. No such
thing as .AND. and .OR. there.


Right. The logic operations were performed by strings of three-way GOTO's.

IIRC, the IBM 1620 did not have the logical operations in machine code,
either.


Oh, cripes. It's that goto process which reminded me of Fortran
II days on the IBM 1130. Now some more brain cells have been
restored. Bad news, as I have now probably forgotten yet
another important something.


Grin!! I did a _LOT_ of work on the 1130. That's where I learned a lot
of my Fortran skills. IMO the 1130 was one of IBM's very few really
good computers. Ours had 16k of RAM (!) and one 512k removable HD.
And it supported 100 engineers, plus the accounting dept.

The 1130 OS provided all kinds of neat tricks (remember LOCAL?) to save
RAM. I was generating trajectories to the Moon and Mars on it. Its
Fortran compiler was designed to be fully functional with 8k of RAM,
total. Let's see Bill Gates try _THAT_!!!

Jack
  #24  
Old July 5th 03, 04:55 PM
Jack Crenshaw

"Everett M. Greene" wrote:

Jack Crenshaw writes:
"Everett M. Greene" wrote:
Jack Crenshaw writes:

You just haven't lived until you've twiddled F.P. bits in Fortran <g>.

Or any other HLL, for that matter!

FWIW, there's a fellow in my office who has a Friden calculator sitting
on his credenza. He's restoring it.

He has a sliderule for backup?


Funny you should mention that. He has a whole _WALL_ covered with slide
rules, of all shapes, sizes, and descriptions. He has long ones, short
ones, circular ones, spiral ones. He has Pascal-based adding machines.
He has others of which I don't know the origin. He has Curta "pepper
grinders" like we used to use in sports car rallies. The guy's a collector.

But I presume you mean that the Friden won't be reliable. You may be
right, since he's the one doing the restoring. But IMX, I have never,
ever seen one fail.


I was just making a light-hearted comment. I just guessed that
someone who has an interest in older technology would have some
even older things. Does he have an abacus or two -- just in case?


Actually, he does -- several. He seems to be collecting every possible
example of mechanical ways to do math. His office wall is a work of art.

Speaking of hardware math mysteries, Dr. Crenshaw, et al,
does anyone know how the (very few) computers that have
the capability perform BCD multiply and divide? Surely
there's a better way than repeated adding/subtracting n
times per multiplier/divisor digit. Converting arbitrary
precision BCD to binary, performing the operation, and
then converting back to BCD wouldn't seem to be the way
to go (in hardware).


Here, I'm not too clear as to whether you mean mechanical or electronic
calculators. The Friden and Monroe mechanical calculators did indeed
do successive subtraction. But they moved the carriage so that you
always got one digit of result after, on average, 5 subtractions. 50
for 10 digits. That's not too bad.


Ah, yes. I'd forgotten about the ol' divide by zero on the
electro-mechanical calculators. And the only "reset" was to
pull the plug.

Most electronic calculators used decimal arithmetic internally,
though I think the newer ones are binary.
As someone else said, they used multiplication tables stored in ROM.


I'll have to think about the use of multiplication tables,
especially for the case of packed values.

Since the advent of the microprocessor, ca. 1972, Intel and other mfrs
have included "decimal adjust" operations that correct addition and
subtraction operations if you want them to be decimal. Adding
numbers in decimal is the same as adding them in binary (or hex),
except that you must add six if the result is bigger than 9. They
use auxiliary carry bits to allow the addition of more than one
digit at a time.


Addition and subtraction as you say is accommodated by the DAA
instruction. Most of the micros only have a decimal-adjust for
addition, so you have to learn how to subtract by adding the
tens-complement of the value being subtracted.


Agreed, although some (I think the Z80 was one) worked for subtraction
as well. As for multiplication and division, forget it. But if you do
the Friden trick of shifting, successive subtraction works pretty well.
Still not nearly as fast as binary, of course, but if you have to
convert back & forth for I/O anyway, it may still be more efficient to
do it in BCD. Plus, you don't have to deal with the bother of a 1 that
becomes 0.99999999.

Calculators do everything completely differently. In a calculator (at
least the old ones -- modern things like PDA's have RAM to burn), time
isn't an issue. The CPU is always waiting for the user anyway, so
efficiency of computation isn't required. Saving ROM space is lots
more important.

Jack
  #25  
Old July 5th 03, 07:34 PM
Jonathan Kirwan

On Sat, 05 Jul 2003 15:47:56 GMT, Jack Crenshaw
wrote:

Jonathan Kirwan wrote:

snip
Re grid leak: I think it must be pretty much trial and error. No doubt
_SOMEONE_ has a theory for it, but I would think the grid current must
vary a lot from tube to tube.


I'll tell you, I sure beat myself to death on that one. I
didn't have anyone smart enough to ask this, so it was just me
and the libraries. Finally, it did dawn on me about the
practical knowledge one simply needed to have. And I relaxed a
little. But then I started looking for physics models of vacuum
tubes.

I've only recently realized that the full understanding
(allowing for rarefied gases in the tube) requires at least 2
spatial and 1 time dimension of PDEs coupled to at least
6-dimensional ODEs to understand, along with radiation transport
and atomic interactions. I've lost some hubris and gained some
humility from such realizations. This stuff ain't just for
anyone!


I think you'll appreciate this story, which came from my major prof in
college. He was a wonder -- designed and built the Calutrons for
separating uranium at Oak Ridge, later went on to design analog
fire-control systems for the Navy. Nobody knew vacuum tubes like him.

You remember photomultiplier tubes? You have a bunch of electrodes all
curved and arranged in strange patterns. Each is held at a different
potential. The idea is that once an electron is released at the first
sensor, they cascade down from plate to plate, increasing the current
each time.


I use PMTs in my work, so yes.

Dr. Carr asked me if I could guess how they designed the shape of those
electrodes? Or the potentials. I assumed, as you did, that they used
PDE's and lots of math. Not so.

He said they'd build a large mechanical analog of the tube, with huge
models of each electrode. The height of each model was set at the
potential it would be at.

Then they'd stretch a sheet of rubber over the whole thing, and roll
marbles down it!!!! They'd play with shapes and heights (potentials)
until they got most marbles rolling the right way. Neat, eh?


Excellent. Of course, there might be some subtlety that they
didn't include in the physical model which becomes dominant in
the real McCoy. But odds are the physical approximation will
get you far, if you are wise about modeling the dimensions (and
I don't just mean length here.)

I just like having the theory from which to make such deductions
to the specific, as well. Sometimes, interesting ideas can
suggest themselves from that. More, it's hard to apply your
dimensional analysis to the physical modeling without at least
some theory as your guide.

But one uses all the tools available at reasonable cost, I
imagine. I'm sure they did, too.

FWIW, I sit here surrounded by old Heathkit tube electronics. I collect
them. Once I started buying them, I realized I couldn't just drive down to the
local drugstore and test the tubes. Had to buy a tube tester, VTVM, and all
the other accoutrements to be able to work on them.


I collected boxes and boxes of tubes for the longest time. From
VR-150's I removed from radar systems of WW II, to 12AX7s, to
whatever. Finally, I selected a radio club and just gave the
entire lot of them away. I had to balance my memories of
thinking about these things against my hoarding a collection of
them and I decided to find some folks who would make some honest
use of them. I had no business keeping them.

Took me a while to let them go, though. And there are times of
slight regret, when I want to play with one in a circuit again.


Yep, I know. We had a junk room at the Physics dept, with all kinds of
tubes in it. I raided it and had a big box of tubes which I carried
around from house to house, and job to job. I might have used, like, 4.
Like you, I finally ended up junking them.

But now I have more. Here's a box right here with the cream of the crop
-- 5881's, KT66's, 6V6's, etc. Somewhere else I've got the 12AX7's,
etc. Tubes are coming back into style, and tube audio amps bring $100's
on eBay.


It wasn't about money, for me. So that's no incentive for me.
I just enjoyed the learning experience.

Maybe someone objected to the implied occult nature of the term,
"phantom"?

Oh, geez. I've never known a geek to care about such things. I
suppose they must exist, somewhere. I've just never met one
willing to let me know they thought like that. But that's an
interesting thought. It would fit the weird times in the US we
live in, with about 30% aligning themselves as fundamentalists.

Nah... it just can't be.

I agree; I was mostly kidding about the PC aspects. One never knows,
tho. FYI, I have been known to be called a "fundie" on talk.origins and
others of those insightful and respectful sites. I'm not, but they are not
noted for their discernment or subtleties of observation.


Hehe. I was raised Catholic and I attended both Catholic and
public schools -- including a few years at the University of
Portland, which is a Catholic university. I'm an atheist (a
conclusion I've come to out of what I feel is being honest about
the preponderance of the evidence), but I've quite a collection
of religious materials -- including "parallels" and facsimiles
of various fragments and as much raw source materials as I can
muster. Fills many shelves in my library. So, as you might
imagine, I can manage some debate on the subject.

One of my favorite atheists is Stan Kelly-Bootle of "Devil's DP
Dictionary" fame. Among others of his many myriad talents, he's one of
the world's leading experts on matters religious. He and I have had
some most stimulating and rewarding discussions, on the rare occasions
when we get together. The trick is a little thing called mutual respect.
Most modern denizens of the 'net don't get the notion of respecting a
person's opinion, even while disagreeing with it.


Frankly, I think this is one area of physics and physics
training that most lay people simply do NOT understand well. I
can stand up in front of a group and propose my thoughts on the
board. They can tear into me like a pack of lions (in fact,
there would be something really, really wrong if they didn't)
and abuse me with all manner of criticism. Any outsider looking
in would imagine that I must feel pretty darned bad after such
attacks. But only a half hour later we are out in the lunch
room talking like the best of friends, which we are.

I *need* criticism. Without it, I simply don't get better.
This is such an important process and most people I know don't
seem to understand how one's friends can appear to go for the
jugular in one moment and try and make you feel like a worm, it
seems, and in the next be your great pal. They seem to think
that friends support friends in their ideas.

But there is an opposing facet in physics. One where peoples'
respect is demonstrated by their willingness to put in time and
criticize. And the critiquing process is fundamental for all
of us becoming better. And when each of us is better, we are
all made better.

Underlying this, as you say, is *respect* and a willingness to
deal with the objections made, and not just ignore them.

Sadly, this is something which most outsiders do not well fathom
or appreciate. And personally, I think we'd all be better off
if people would learn to separate the concepts of challenge from
respect. One can be quite respectful, while challenging the
hell out of another. And just a few minutes later be the best
of pals. And that's a healthy way to be, in my mind.

In personal and business relationships, I have to be careful to
continually reinforce my respect and appreciation whenever I'm
also challenging ideas, to diffuse natural reactions. It feels
almost smarmy and fawning to me, but it seems what's expected in
"polite society." But among physicists? Nah. There's no need
for such blandishments -- it's assumed. And the respect is
demonstrated all the more by the heat and energy which goes into
the debate.

A final note about this, though. Respect is *earned*, not given
away. One earns this through due diligence and mental effort.
It's not some freebie. Perhaps this is what bothers
non-scientists more -- the work that is required in order to
earn the respect which endures through any processes of
challenge. No one is going to think the less of you, if you've
done your work and applied yourself well -- even if you are
wrong. And winnowing away wrong ideas is part and parcel,
anyway. It's what science is largely about. The better of us
aren't judged so much by the conclusions reached, as by the
careful application of intellect and diligence.


To which, all I can say is, "Amen." Sharing ideas with people who don't
agree is how we learn. Calling them nasty names is not helping.


Yes, in general.

I think there may be a place for name calling, if other
avenues of goading fail and it's perceived that there might be
some hope with a dash of cold water in the face. Sometimes, it
just takes a slap to get a response. If it's done with respect,
even if they don't really realize you honestly care about them
caring about themselves, then it may work out for the better.

Of course, there's always the risk of a broken relationship as a
result. But sometimes it's already broken by that time and this
is the only remaining possibility for restoring it. One takes
one's chances.

Oh, there was no question. I've a kindred interest in physics
and engineering, I imagine. I'm currently struggling through
Robert Gilmore's books, one on Lie groups and algebras and the
other on catastrophe theory for engineers as well as polytropes,
packing spheres, and other delights. There were some nice
insights in your book, which helped wind me on just enough of a
different path to stretch me without losing me.

Glad to help.

By the way!! I completely agree with you about MathCad! What a
piece of *&!@&$^%$^ it is, now. I went through several
iterations, loved at first the slant or approach in using it,
but absolutely hate it now because, frankly, I can't run it for
more than an hour before I don't have any memory left and it
crashes out.

Don't get me started on Mathcad!

As some old-time readers might know, I used to recommend Mathcad to
everyone. In my conference papers, I'd say, "If you are doing engineering and
don't have Mathcad, you're limiting your career." After Version 7 came out, I had
to say, "Don't buy Mathcad at any price; it's broken." Here at home I've stuck
at Version 6. Even 6 has its problems -- 5 was more stable -- but it's the
oldest I could get (from RecycledSoftware, a great source). The main reason I
talked my company into getting Matlab was as a refuge from Mathcad.


You are in Arizona, now? I seem to recall you worked at some
business in Florida -- perhaps even one related to the area I
work in. But my memory is fading.


Yep. I used to work at ATK in Clearwater. Now I'm with Spectrum Astro
in the Phoenix area (111 degrees, yesterday!). Good memory. Both jobs
require a lot of simulation skills.


Hehe. Looks like very interesting work with very interesting
people. Excellent!

(I remember you mentioning a Kaypro somewhere. My first
personal purchase of a PC was the Kaypro 286i, which was the
first truly IBM PC compatible. Before it, they were "90%"
compatible, or so.)


Right again. I still have my two original machines (the pre-DOS, CP/M
boxes), plus five or six more I got from eBay. Nice little boxes, in
their day.


Yes, Kaypro did good and with reasonable pricing at the time.

FWIW, I still miss the reliability of the CP/M machines. Crude? Yes,
indeed. Limited? For sure. _BUT_ I could edit a file without fear of
crashing, and I never found the need for ScanDisk or Defrag. Certainly
not Norton Disk Doctor!! I did my columns and stuff in Wordstar for
years; also programming in Turbo Pascal. During that time, I had
exactly _ZERO_ crashes, or bugs of any kind (except when a solder joint
failed once, because of thermal stress). Wordstar and TP simply did
what I told them, when I told them, every time.

Someone recently asked me how many times I had a disk failure with
CP/M. It was exactly once, and that was when a floppy got so worn, the
oxide began flaking off the vinyl.


I used CP/M a fair amount, too. In general, my only problems
were with PerSci floppies. Voice coil drive, fast, and I had a
few fail on me. The Shugarts just kept working.

I rely on DOS, similarly. Of course, if you've done *any*
assembly programming on a DOS x86 with .COM files or have
programmed with the early DOS 1.0 function calls, you *know*
about the many similarities (identical, sometimes) with CP/M.
But there are times when Windows won't boot and that DOS is
still ticking away just fine, that I can jump in and use it to
get Windows restarted. Another reason I'm still on Win98, by
the way.

Having said that, truth in advertising also requires me to say that I
use it almost every day. The reason is simple: It's the only game in town.
It's the only Windows program that lets you write both math equations and text, lets
you generate graphics, and also does symbolic algebra, in a WYSIWYG interface. Pity
it's so unstable.


Yes. I really need to get an older version, I guess, or else
put myself into deep freeze to be awakened when they "get it
right."


Whatever you do, do _NOT_ get Version 11. It's even worse than the
predecessors.


Okay. It just makes me angry with them, having paid them well
and received absolutely NOTHING of value for it. And I keep
seeing that carrot in front of my face that I can't quite get.
I just have to close my eyes, I guess.

Come to that, my relationship with Mathcad is very consistent, and much
the same as my relationship with Microsoft Office and Windows. I use it every day,
and curse it every day. I've learned to save early and often. Even that doesn't
always help, but it's the best policy. I had one case where saving broke the file, but
the Mathcad support people (who can be really nice, sometimes) managed to restore
it.


Well, I can't compare my experiences with Mathcad with Windows
or Office. Frankly, I've never seen a single product so
potentially useful and at the same time so totally useless as
Mathcad!


There you go. It's a great pity, and I do hope they manage to get it
back on track.
If ever there were a need for an open source version, this is it.


Hehe. Noted. I'll remember this example when others rail at
the idea of open source and stuff this example into their face.
It's a classic, for sure.

I stay in pretty constant contact with the Mathcad people. As near as I
can tell, they are trying hard to get the thing under control. Their goal is to get the
program to such a point that it's reasonable to use as an Enterprise-level utility,
and a means of sharing data across organizations. I'm also on their power users'
group, and theoretically supposed to be telling them where things aren't working.


Well, I talked to them at length and even told several people
there to look at your book to read about someone's experiences
in print.

Even so, when I report problems, which is often, the two most common
responses I get are:

1) It's not a bug, it's a feature, and
2) Sorry, we can't reproduce that problem.


Yes, that's a good summary!

I think Mathsoft went through a period where all the original authors
were replaced by maintenance programmers -- programmers with more confidence than
ability. They seemingly had no qualms about changing things around and redefining user
interfaces, with little regard for what they might break. Mathsoft is trying to
turn things around now, but it's not going to be easy. IMO.


That would fit the facts. The top-dogs in the business probably
just thought that the technical resources were "replaceable"
when they weren't, really. Of course, the top-dogs also think
they are *not* replaceable. Hypocrisy in action.


I think that's exactly right. Someone made decisions based on profit
rather than quality.


Like that's anything new. In the US, at least, it's not just
profit but short-term-profit-in-the-next-three-months which
drives almost all decisions.

Reboot time every hour is not my idea of a good
thing. And that's only if I don't type and change things too
fast. When I work quick on it, I can go through what's left
with Win98 on a 256Mb RAM machine in a half hour! No help from
them and two versions later I've simply stopped using it. I
don't even want to hear from them, again. Hopefully, I'll be
able to find an old version somewhere. For now, I'm doing
without.

See RecycledSoftware as mentioned above. BTW, have you _TOLD_ Mathsoft
how you feel? Sometimes I think I'm the only one complaining.


Oh, yes. I've told them on numerous occasions and in no
uncertain terms. I'll look at RecycledSoftware and elsewhere.
But it's my opinion that Mathsoft should *give* me a copy of
version 5 and be done with it. It's not like I haven't paid
them their due, and I have found their later products unusable.
They should make every attempt within their reasonable power to
satisfy their customers, and providing version 5 is reasonable,
in my estimation. I've come to the conclusion that they aren't
interested enough in a satisfied customer base.


That's absolutely true, IMO. There was a time, around versions 7-8,
when they were changing the user interface radically, with each
version. We "power users" complained mightily, to no avail. We finally
figured out, they were responding to neophyte users who couldn't figure
out the interface, and called in because it didn't behave like Word.
Instead of saying RTFM, they set out to make the interface more
Wordlike, thereby blowing away all of us who _HAD_ learned the
interface.

Here's one you'll appreciate. It's a minor nit, but indicative, I
think, of something-or-other.

All recent versions of the interface have a function called
insert/delete lines. With older versions, it used to be the _ONLY_ way
to deal with white space. In newer versions, you can just enter or
delete carriage returns, but the insert/delete is still there. I use
the function a lot when I want to insert extra prose or math in the
middle of a file. I just make lots of white space, make my inserts,
then delete what white space is left over.

I'd do something like insert 500 lines, enter my new stuff, then delete
500 lines. It wouldn't let me delete all of them, but through version
6, it would change my 500 to, say, 367, and give me the chance to
approve that number.

After version 6, it won't do that any more. It says, "please enter a
number between 1 and 367." Then I have to type the 367 manually.

Now, why on earth would you _CHANGE_ a thing like that? Clearly the
routine knows how many lines can be deleted; it's telling me so. But
it won't just put the correct limit in, even though it used to do
exactly that.

Why would someone take out working code and replace it with something
dumber? And why, oh why, have they let the software progress through
six more major releases, plus countless updates, without fixing it?


What can I say?? I'm as frustrated as you, and I've not even been
exposed to it like you have. It all just baffles me.

Hehe. Kalman!! And continuous time Kalman-Bucy. Oh, the joys.
Now there's a subject which could use a good writer for the
embedded programming masses!


I know; I'm working on it. Just about the time I think I'm ready to
write that article, someone else already has written their own version.


hehe. Well, keep at it! I used it for sensor fusion with
various phased-array radar and other sensor systems, all with
varying characteristics to model. It's worth knowing about!

Used it again, 20 years later,
on a '486. Needless to say, it's not very accurate, but 16 bits is
about all we can get out of an A/D converter anyway, so it's reasonable
for embedded use.


I tend to look very closely at folks who just apply floating
point without thinking. Even in cases where the dynamic range
is crucial (and I've worked on systems measuring current, for
example, over 12 orders of magnitude), I usually find far fewer
stumbling blocks to working code without using floating point.
Most programmers just do NOT understand its limitations and
pitfalls. And they will use it unwisely. It's a testimonial to
the design of floating point that they can get away with it so
often without really knowing why. But they are still largely
ignorant when applying it.

So I sometimes just summarize this to another programmer by
saying, "Well, you get your data from an integer ADC and you put
out your results on an integer DAC, so why are you using
floating point?" Sometimes, that's enough of a prod at least to
get them to think about it.


Well, let me give you a counterpoint. The huge advantage to f.p. is
the dynamic range.


Of course. I gave a swipe at that point above, in fact. But I
added that even then, I've often found better ways -- even in
the face of 12 orders of mag. The key to my point isn't that
floating point should be entirely avoided. It's that it should
be applied with understanding -- and more particularly, in the
case of most embedded systems. If Excel crashes out, "Oh, gee.
I guess I'll just reboot." You live with it. But in an
embedded system, subtle errors crop up if you aren't careful.

It's virtually impossible to write a Kalman filter any other way, for
that very reason (hence our use of f.p. in that 8080 version). In
integer arithmetic, you have to check every single operation for
overflow. I can't tell you how many times I've used a debugger to
read a fixed-point value in hex, and convert it to real using my
trusty Sharp calculator.


Or you can do the analysis to verify that it is impossible for
overflow to occur. Which is what I've done in many cases. One
should be careful, no matter, I suppose.
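The per-operation checking being discussed can be sketched briefly. Here is an illustrative Q15 (16-bit fractional fixed-point) multiply in Python that traps overflow instead of silently wrapping negative; the names are mine, and a real embedded version would more likely saturate or rely on up-front range analysis:

```python
Q = 15                         # Q15 format: one sign bit, 15 fraction bits
MAX16, MIN16 = 0x7FFF, -0x8000

def q15_mul(a, b):
    """Multiply two Q15 fixed-point values, trapping on overflow."""
    result = (a * b) >> Q
    if not (MIN16 <= result <= MAX16):
        # for in-range Q15 inputs, only (-1.0) * (-1.0) can land here
        raise OverflowError(f"q15_mul overflow: {a:#x} * {b:#x}")
    return result

# q15_mul(0x4000, 0x4000) -> 0x2000   (0.5 * 0.5 = 0.25)
```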

It's a very, very tedious process, and fraught with the possibility of
error. One of the Mars missions was lost for this very reason --
overflow in a fixed-point computation.

Of course, these things are supposed to be caught during V & V, but
clearly they are not, always.

Sometimes it can be pretty scary, realizing that all those nuke-tipped
missiles sitting in silos and other places, have software written in a
way that multiplies can overflow and go negative. Brrrr!!! ("Return to
sender, address unknown...")

If nothing else, floating point can eliminate that worry. I tell my
readers, if your CPU has f.p. capability, USE IT! It may be a crutch,
as you suggest, but it sure speeds up the development process. Makes
the software more robust, too.


Well, you've given me a story early on, about modeling PMTs.
Let me tell you one.

Calculating a standard deviation is often done by "smarty pants"
programmers with a modified version of the "standard equation"
where only one pass through the data is required. You know the
one, where you accumulate both a SUM(x) and SUM(x^2). At the
end, a difference calculation is used. But in this case, the
magnitudes of the two parts are often quite similar, leaving only
the least significant bits in the result.

When this happens, preserving those bits during accumulation can
be very important. For example, what often isn't realized is
that it is important to pre-sort the data before accumulation so
that the smaller numbers can accumulate to larger values
*before* getting swamped out by the accumulation of one of the
larger values. If the largest value, for example, were added
first, the smaller values might very well truncate out
completely as they are accumulated and never get a chance to
impact the least significant bits in the summed result, before
the difference is taken and they become crucial for the final
calculation.
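The cancellation described above is easy to demonstrate. The one-pass "smarty pants" formula subtracts two nearly equal sums and can lose every significant bit, while Welford's online update (one stable single-pass alternative, named here for illustration — the posts don't mention it) avoids the subtraction entirely:

```python
# Population variance two ways, on data with a large common offset.
# The naive one-pass formula var = E[x^2] - E[x]^2 cancels
# catastrophically; Welford's recurrence does not.

def variance_one_pass(xs):
    """Naive single pass: accumulate SUM(x) and SUM(x^2), then subtract."""
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    n = len(xs)
    mean = s / n
    return sq / n - mean * mean   # difference of two near-equal terms

def variance_welford(xs):
    """Welford's single-pass update: numerically stable."""
    mean = m2 = 0.0
    for n, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # uses the freshly updated mean
    return m2 / len(xs)

data = [1e8 + 1, 1e8 + 2, 1e8 + 3]   # true population variance: 2/3
print(variance_one_pass(data))        # catastrophic cancellation: wildly wrong
print(variance_welford(data))         # close to 0.6666...
```

With a 1e8 offset, the squares occupy ~16 decimal digits — the full width of an IEEE double — so the subtraction leaves essentially nothing of the true answer.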

This is only one of many subtle examples. And the analysis is
sometimes rather difficult to shepherd well, without training
and practice. On the other hand, analyzing integer math is, by
comparison, much more of an "undergrad" kind of thing. The
issues are more tractable to more people, as a rule.

And in the end, it *is* helpful to remember that it's integer
in, integer out, for many embedded systems and it's worth doing
to analyze the data flows throughout. My belief is that
floating point should be justified by the proponents. But so
should integer.

In other words, someone should be paying attention and it should
be clear from the record why either integer or floating point is
chosen for a particular application. But to be honest, the
issues of floating point are more subtle and the skills required
to properly analyze it are greater, I think.

In any case, it's good to question someone and make them think
about it.

The other scary part is, as nearly as I can tell, desk checking and
hand checking have become a lost art. With all too many programmers
nowadays, if it compiles, it's ready to ship.


Tell me about it. It's quite common for me to prepare a 10 or
20 page analysis, complete with timing diagrams and mathematical
derivations. I'll include error budgets/tolerances in that
analysis and show how I got them. Sometimes, people just want
me to roll up my sleeves and get the task out. But I need
confidence, even if they don't. So I do the work, anyway.

Originally, I'd hoped that others would actually take the chance
to point out my errors and help me improve the documents. But
most of my target readers just ignore them, assuming I am
getting things right, or unable to challenge my points, or
unwilling to put in the time. No matter. Now, I just do it
mostly for my own sake -- just to help me be sure that I've
covered the issues and to provide myself with something to look
back on at a later time. It's turned out to help me a lot to
get back into the right mindset, when having to return to a
project.

So the point often isn't anymore to get input from others. It's
more for me. I can live with that.

Sadly, too few programmers have learned numerical methods for
analysis -- for example, power functions applied to recurrences.
Who today reads through each page of Knuth's 3-vol set, as I did
when it came out? (Or his "Concrete Mathematics," published
recently, or Chapra and Canale's "Numerical Methods for
Engineers" or your own book or a host of others worth studying.)

Times have changed, I suppose.

Jon

  #26  
Old July 5th 03, 07:38 PM
Jonathan Kirwan
external usenet poster
 
Posts: n/a
Default

On Sat, 05 Jul 2003 15:00:37 GMT, Jack Crenshaw
wrote:

Jonathan Kirwan wrote:

On Thu, 03 Jul 2003 17:28:20 GMT, "Tauno Voipio"
wrote:

"Jack Crenshaw" wrote in message
...

Paul Keinanen wrote:

On Wed, 2 Jul 2003 07:48:49 PST, (Everett M.
Greene) wrote:

Jack Crenshaw writes:

You just haven't lived until you've twiddled F.P. bits in Fortran
g.

What is the problem ?

IIRC, IAND and IOR are standard functions in FORTRAN, and in many
implementations the .AND. and .OR. operators between integers
actually produced bitwise AND and OR results.

Hmphh! In Fortran II, we were lucky to get add and subtract. No
such thing as .AND. and .OR. there.

Right. The logic operations were performed by strings of three-way GOTO's.

IIRC, the IBM 1620 did not have the logical operations in machine code,
either.


Oh, cripes. It's that goto process which reminded me of Fortran
II days on the IBM 1130. Now some more brain cells have been
restored. Bad news, as I have now probably forgotten yet
another important something.


Grin!! I did a _LOT_ of work on the 1130. That's where I learned a
lot of my Fortran skills. IMO the 1130 was one of IBM's very few
really good computers. Ours had 16k of RAM!, and one 512k removable
HD. And it supported 100 engineers, plus the accounting dept.

The 1130 OS provided all kinds of neat tricks (remember LOCAL?) to
save RAM. I was generating trajectories to the Moon and Mars on it.
Its Fortran compiler was designed to be fully functional with 8k of
RAM, total. Let's see Bill Gates try _THAT_!!!


hehe. We had a 16k system, too! The timesharing system I wrote
provided timeshared BASIC for 32 users, by the way, and lived in
16k RAM -- 6k for the interpreter and 10k for the swapped user
page. Included Chebyshev and mini-max methods for the
transcendentals -- something Intel failed to use for their x87
floating point units until the advent of the Pentium, many years
later.
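The Chebyshev technique Jon mentions can be sketched in a few lines. This is an illustrative reconstruction, not code from that timesharing system: it builds a degree-7 Chebyshev interpolant of sin on [-1, 1] (the classic "chebft" construction, evaluated with Clenshaw's recurrence) and compares its worst-case error against the truncated Taylor series of the same degree.

```python
# Chebyshev interpolation vs. truncated Taylor series for sin(x)
# on [-1, 1].  Same polynomial degree, very different worst-case
# error -- the point of using Chebyshev/mini-max methods for
# transcendentals.  Degree and domain are chosen for illustration.
import math

def cheb_coeffs(f, deg):
    """Chebyshev-series coefficients of f on [-1, 1], by sampling
    f at the Chebyshev nodes (the classic 'chebft' construction)."""
    n = deg + 1
    thetas = [math.pi * (k + 0.5) / n for k in range(n)]
    fvals = [f(math.cos(t)) for t in thetas]
    return [(2.0 / n) * sum(fv * math.cos(j * t)
                            for fv, t in zip(fvals, thetas))
            for j in range(n)]

def cheb_eval(c, x):
    """Clenshaw recurrence; note the conventional halving of c[0]."""
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + 0.5 * c[0]

def taylor_sin(x):
    """Taylor series for sin, truncated after the x^7 term."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

c = cheb_coeffs(math.sin, 7)
xs = [i / 500.0 - 1.0 for i in range(1001)]
err_cheb = max(abs(cheb_eval(c, x) - math.sin(x)) for x in xs)
err_taylor = max(abs(taylor_sin(x) - math.sin(x)) for x in xs)
print(err_cheb, err_taylor)  # Chebyshev error is markedly smaller
```

The Taylor error piles up at the interval ends, while the Chebyshev fit spreads it nearly evenly (near-equiripple) -- which is why, for the same degree, it buys several extra bits of accuracy.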

Oh, well.

Jon

  #27  
Old July 6th 03, 10:22 AM
Cameron Laird
external usenet poster
 
Posts: n/a
Default

In article ,
Jack Crenshaw wrote:
 



