Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US

Re: Q: float data type

From: Prakash V <venkatprakash_at_hotmail.com>
Date: Thu, 02 Sep 1999 23:54:22 GMT
Message-ID: <19990902235423.32476.qmail@hotmail.com>


There is a major difference between NUMBER and FLOAT.

We have already discussed this issue earlier. Here is the documentation describing the difference between NUMBER and FLOAT.

NUMBER Datatype

The NUMBER datatype stores zero, positive, and negative fixed and floating-point numbers with
magnitudes between 1.0 x 10^-130 and 9.9...9 x 10^125 (38 nines followed by 88 zeroes) with 38 digits
of precision. If you specify an arithmetic expression whose value has a magnitude greater than or
equal to 1.0 x 10^126, Oracle returns an error.

Specify a fixed-point number using the following form:

NUMBER(p,s)

where:

p

    is the precision, or the total number of digits. The precision can range from 1 to 38.

s

    is the scale, or the number of digits to the right of the decimal point. The scale can range from -84 to 127.

Specify an integer using the following form:

NUMBER(p)

    is a fixed-point number with precision p and scale 0. This is equivalent to NUMBER(p,0).
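The fixed-point rounding that NUMBER(p,s) implies can be sketched with Python's decimal module. This is only an illustration of the semantics, not Oracle itself; the helper name number_p_s and the sample values are made up here:

```python
from decimal import Decimal, ROUND_HALF_UP

def number_p_s(value, p, s):
    """Sketch of how a value lands in an Oracle NUMBER(p,s) column:
    rounded to s digits after the decimal point, with at most
    p significant digits overall."""
    # Round to s decimal places, as Oracle does on insert.
    q = Decimal(value).quantize(Decimal(1).scaleb(-s), rounding=ROUND_HALF_UP)
    # More than p significant digits would raise an error in Oracle.
    if len(q.as_tuple().digits) > p:
        raise ValueError("value exceeds precision p")
    return q

# NUMBER(7,2): 123.456 is rounded to two decimal places.
print(number_p_s("123.456", 7, 2))   # 123.46
# NUMBER(5), i.e. scale 0: 42.7 is rounded to an integer.
print(number_p_s("42.7", 5, 0))      # 43
```

Note that Oracle rounds, rather than truncates, a value whose fractional digits exceed the scale.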

Specify a floating-point number using the following form:

NUMBER

    is a floating-point number with decimal precision 38. Note that a scale value is not applicable for floating-point numbers.

Oracle allows you to specify floating-point numbers, which can have a decimal point anywhere from the first to the last digit or can have no decimal point at all. A scale value is not applicable to floating-point numbers, because the number of digits that can appear after the decimal point is not restricted.
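The "38 significant digits, decimal point anywhere" behavior can be mimicked with a Python decimal context of precision 38. Again, this is a sketch of the semantics with made-up values, not Oracle's internal representation:

```python
from decimal import Context, Decimal

# A context limited to 38 significant digits, like floating-point NUMBER.
ctx = Context(prec=38)

# A 40-digit literal is silently rounded to 38 significant digits:
x = ctx.create_decimal("1234567890123456789012345678901234567890")
print(len(x.as_tuple().digits))   # 38

# A small value with few significant digits is kept exactly,
# no matter where the decimal point falls -- there is no scale limit:
y = ctx.create_decimal("0.00012345")
print(y)
```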

You can specify floating-point numbers with the form discussed in "NUMBER Datatype". Oracle also supports the ANSI datatype FLOAT. You can specify this datatype using one of these syntactic forms:

FLOAT specifies a floating-point number with decimal precision 38, or binary precision 126.
FLOAT(b) specifies a floating-point number with binary precision b. The precision b can range from 1 to 126. To convert from binary to decimal precision, multiply b by 0.30103. To convert from decimal to binary precision, multiply the decimal precision by 3.32193.

     The maximum of 126 digits of binary precision is roughly equivalent to 38 digits of decimal precision.

Hope this will help you.

Thanks

V Prakash


Received on Thu Sep 02 1999 - 18:54:22 CDT
