Re: IEEE 754 support and implications

From: joel garry <joel-garry_at_home.com>
Date: Mon, 22 Aug 2011 09:35:25 -0700 (PDT)
Message-ID: <d750eedb-7509-4550-97ab-bb5c139979d3_at_g8g2000prn.googlegroups.com>



On Aug 22, 8:11 am, Richard <richard.ran..._at_ieee.org> wrote:
> On Aug 22, 9:40 am, John Hurley <hurleyjo..._at_yahoo.com> wrote:
>
>
>
> > On Aug 21, 8:47 pm, Richard <richard.ran..._at_ieee.org> wrote:
>
> > > IEEE Standard 754 floating point is the most common representation
> > > in use today for real numbers on computers, including Intel-based
> > > PCs, Macintoshes, and most Unix platforms. It is binary, unlike the
> > > other Oracle numeric datatypes. Math with this type is much faster,
> > > and using it from C, Java, Ada, etc. requires no conversion to or
> > > from Oracle's representation. It's stored like this:
>
> > > A number of this format consists of a sign bit, an exponent and a
> > > mantissa. The mantissa consists of an implicit leading bit, a
> > > binary point and then the fraction bits (ƒ), giving a value of 1.ƒ.
>
> > >                     Sign      Exponent       Fraction      Bias
> > > Single Precision    1 [31]     8 [30-23]     23 [22-00]     127
> > > Double Precision    1 [63]    11 [62-52]     52 [51-00]    1023
>
> > > Single precision is 32 bits; double is 64 bits. The high bit is
> > > the sign, and the table above gives each field's width in bits
> > > with its bit positions in brackets. Since the exponent can be + or
> > > -, the bias is added to the actual exponent to get the stored
> > > exponent. Thus for single precision, a stored exponent of 127
> > > means an exponent of zero and a stored exponent of 200 means an
> > > exponent of 73.
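
For the curious, that layout can be checked directly from C. A minimal
sketch (untested; assumes a 64-bit IEEE 754 double, which all the
platforms mentioned above use) that pulls a double apart into its
fields and undoes the bias:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double d = 1.0;
    uint64_t bits;

    /* memcpy is the portable way to view a double's bit pattern */
    memcpy(&bits, &d, sizeof bits);

    uint64_t sign     = bits >> 63;                 /* bit 63     */
    uint64_t exponent = (bits >> 52) & 0x7FF;       /* bits 62-52 */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;  /* bits 51-00 */

    /* subtract the bias (1023) to recover the true exponent */
    printf("sign=%llu stored_exp=%llu true_exp=%lld fraction=%llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (long long)exponent - 1023,
           (unsigned long long)fraction);
    return 0;   /* for 1.0: sign=0 stored_exp=1023 true_exp=0 fraction=0 */
}
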
>
> > > An exponent field of all zeros and a fraction field of all zeros
> > > denotes a value of zero; if the exponent field is all 1's and the
> > > fraction field is all 0's, the value is infinity. Because of the
> > > sign bit, both can be + or -.
>
> > > If the exponent field is all 0's but the fraction field is
> > > non-zero, the value is a denormalized number. In this case the
> > > leading bit before the binary point is not assumed to be 1.
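
Those special cases map straight onto C99's fpclassify(); a small
sketch, assuming IEEE 754 arithmetic with default (non-trapping)
settings:

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    double zero   = 0.0;
    double inf    = 1.0 / zero;      /* exponent all 1's, fraction all 0's */
    double denorm = DBL_MIN / 2.0;   /* exponent all 0's, fraction non-zero */

    printf("%d %d %d\n",
           fpclassify(zero)   == FP_ZERO,
           fpclassify(inf)    == FP_INFINITE,
           fpclassify(denorm) == FP_SUBNORMAL);   /* prints: 1 1 1 */
    return 0;
}
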
>
> > > An exponent of all 1's and a non-zero fraction is Not a Number
> > > (NaN). There are two kinds of NaN:
>
> > > QNaN, in which the most significant fraction bit is set, and SNaN,
> > > in which the most significant fraction bit is clear. A QNaN (or
> > > Quiet NaN) propagates freely through arithmetic operations and is
> > > the result of operations whose result is undefined (indeterminate).
> > > An SNaN (or Signalling NaN) signals an invalid-operation exception
> > > when used in arithmetic; only the latter throws exceptions.
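
That quiet-NaN behaviour is easy to see for yourself: an indeterminate
operation hands back a QNaN that then poisons everything downstream
without raising anything. A sketch, again assuming IEEE 754 defaults:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double zero = 0.0;
    double qnan = zero / zero;            /* indeterminate -> quiet NaN */

    double poisoned = qnan * 2.0 + 1.0;   /* propagates, no exception   */

    /* a NaN compares unequal to everything, including itself */
    printf("isnan=%d self_equal=%d\n",
           isnan(poisoned), poisoned == poisoned);  /* isnan=1 self_equal=0 */
    return 0;
}
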
>
> > > On Aug 20, 5:33 pm, John Hurley <hurleyjo..._at_yahoo.com> wrote:
>
> > > > Richard:
>
> > > > # I would like to know more about the purpose, use and misuse of these
> > > > datatypes.
>
> > > > ...
>
> > > > Most of the Oracle world or at least the part that I have seen uses
> > > > NUMBER in some manner for tables.
>
> > > > Probably the other ones were put in for standards compatibility.
>
> > > > Have you tried searching for stuff related to this at asktom?
>
> > What does that have to do with the pretty well established idea that
> > people use the Oracle NUMBER datatype most of the time?
>
> Who said it did? It IS the most widely used datatype in the world. If
> you want to work directly with mainstream languages or hardware
> without conversion you may want to use this type. It is faster. The
> double type can hold a far larger range of floating point values than
> the NUMBER datatype (though NUMBER carries more decimal digits of
> precision). If you're doing science or engineering you will need to
> know this type. Oracle didn't put it there for no reason. The design
> and its widespread use reflect its utility. You just need to know
> what you're doing.
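
To put numbers on that range claim: <float.h> spells out what a double
carries. A double's range runs to about 1.8e308 where Oracle documents
NUMBER as stopping just short of 1e126, but NUMBER holds 38 significant
decimal digits where a double only guarantees 15. Quick check:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* range favors the double; decimal precision favors NUMBER */
    printf("DBL_MAX = %g, DBL_DIG = %d\n", DBL_MAX, DBL_DIG);
    return 0;   /* prints: DBL_MAX = 1.79769e+308, DBL_DIG = 15 */
}
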
>
> There is and always has been a trade-off between power and
> versatility on the one hand and ease of use and protection from
> coding errors on the other. Bjarne Stroustrup, who created C++, said
> "C makes it easy to shoot yourself in the foot; C++ makes it harder,
> but when you do it blows your whole leg off." He also said "I do not
> think that safety should be bought at the cost of complicating the
> expression of good solutions to real-life problems." Incidentally C
> is the most widely used language in the world, more than Java and far
> more than C++.

C may be the most widely used language in the world, but that doesn't mean it is the best idea for application coding. In fact, it is horrible for application coding. It assumes good and rigorous programming practices, which don't happen. Note that the Yorktown incident was an NT application: http://catless.ncl.ac.uk/Risks/19.88.html#subj1 This came directly from the military using COTS applications, and the risks were pointed out before the problems appeared. The Risks digest has long shown examples of C problems, particularly buffer overflows.
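
The Yorktown failure is usually described as a divide-by-zero from a
bad database entry, and the contrast with IEEE 754 arithmetic is worth
a sketch (hypothetical code, nothing to do with the ship's actual
software): integer division by zero in C is undefined behavior and
typically traps, while floating point hands back infinity and sails on:

#include <stdio.h>

int main(void)
{
    double zero_d = 0.0;
    int    zero_i = 0;

    /* IEEE 754: division by zero yields infinity, execution continues */
    printf("%g\n", 42.0 / zero_d);    /* prints: inf */

    /* integer division by zero is undefined behavior in C and usually
       dies with SIGFPE on Unix -- uncomment to watch it crash:        */
    /* printf("%d\n", 42 / zero_i); */
    (void)zero_i;   /* silence the unused-variable warning */

    return 0;
}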

Oracle has a lot of stuff that customers have asked for. That merely shows that customers ask for things, not that customers ask for good things. Universities put out large numbers of C programmers, so there is a large ecosystem of those programmers and derived languages. Again, that doesn't mean that the current situation is the best, or even good. Some would argue it is bad.

These datatypes exist for historical reasons. There was a time when whether you had the hardware floating-point accelerator option on your VAX made a difference to which type you should use and how you should code. So what? If you are gathering physics data at CERN your requirements are different than if you are gathering SEO information at Google, and neither has much to do with the vast majority of Oracle installations, or with nuke plants controlled by PDP-11s for that matter. So what's your point?

jg

--
_at_home.com is bogus.
http://www.dailykos.com/story/2011/08/22/1009142/-Laser-Enrichment-of-U-235
