Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.tools -> Re: VARCHAR: characters or bytes?

Re: VARCHAR: characters or bytes?

From: Bob Fazio <rfazio_at_home.com.nospam>
Date: 2000/05/10
Message-ID: <AsbS4.181986$Tn4.1407993@news1.rdc2.pa.home.com>#1/1

I don't work with other character sets much, and I'm not going to look it up, but I am fairly confident that it is characters: characters of the character set that was chosen when the database was created.

SQLWKS> select * from v$nls_parameters;

PARAMETER                      VALUE
------------------------------ ----------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                MM/DD/RRRR HH:MI:SSAM
NLS_DATE_LANGUAGE              AMERICAN
NLS_CHARACTERSET               WE8ISO8859P1
NLS_SORT                       BINARY
NLS_NCHAR_CHARACTERSET         WE8ISO8859P1

11 rows selected.

The character set was chosen when the database was created.
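In a single-byte character set such as the WE8ISO8859P1 shown above, the byte-versus-character distinction disappears: every character occupies exactly one byte, so a 24-byte column and a 24-character column hold the same data. A small Python check (not part of the original post, using Python's latin-1 codec, which matches ISO 8859-1) illustrates this:

```python
# In ISO 8859-1 (Oracle's WE8ISO8859P1), every representable character
# is exactly one byte, so character length and byte length always agree.
text = "café crème"  # accented characters are still single-byte here

chars = len(text)                     # number of characters
nbytes = len(text.encode("latin-1"))  # number of bytes in ISO 8859-1

assert chars == nbytes  # the distinction only matters in multi-byte sets
print(chars, "characters,", nbytes, "bytes")
```

The same string encoded as UTF-8 would be longer, which is exactly why the question below matters for Unicode migrations.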

--
Robert Fazio, Oracle DBA
rfazio_at_home.com
remove nospam from reply address
http://24.8.218.197/
"Art Decco" <pleasedont_at_email.com> wrote in message
news:8eo63d$fev$1_at_goodnews.macromedia.com...

> When you define a field as a varchar(24), does that give it room for up
> to 24 bytes of text data, or does it give it room for up to 24 characters
> of data? After all, not all encodings are single-byte.
>
> In particular, we're thinking of using a UTF-8 (Unicode) database to hold
> global customer data. We're converting the data from a previous,
> non-Unicode encoding. (Actually several databases, and several different
> encodings.) We don't want the data to overflow the field when converted,
> nor do we want the new field to be too narrow for new data. If a column
> has type varchar with a width of 24 and we convert it to UTF-8 using the
> same schema, that would only be enough room for about eight Japanese
> kanji in UTF-8 if varchar is measured in bytes, but it would be plenty if
> 24 referred to characters instead of bytes. Does anybody know for sure
> which it is? I've heard lots of opinions, but nobody seems to really
> know.
>
> Thanks.
>
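The arithmetic in the question above (about eight kanji fitting in 24 bytes) is easy to verify outside the database. A minimal Python sketch, not part of the original exchange, that counts characters versus UTF-8 bytes:

```python
# Compare character count vs. UTF-8 byte count for sample strings.
samples = {
    "ascii": "customer name",    # single-byte characters in UTF-8
    "kanji": "日本語日本語日本",  # 8 kanji, 3 bytes each in UTF-8
}

for label, text in samples.items():
    chars = len(text)                   # characters
    nbytes = len(text.encode("utf-8"))  # bytes after UTF-8 encoding
    print(f"{label}: {chars} characters, {nbytes} bytes")

# Eight kanji already fill a 24-byte column, while a 24-character
# column would hold 24 kanji (72 bytes in UTF-8).
```

So if the 24 is measured in bytes, the questioner's estimate of roughly eight kanji per column is exactly right.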
Received on Wed May 10 2000 - 00:00:00 CDT