
Unicode vs. Multibyte for Japanese chars..

From: NetComrade <andreyNSPAM_at_bookexchange.net>
Date: Mon, 26 Nov 2001 21:24:26 GMT
Message-ID: <3c02b124.2330733016@news.globix.com>


I have a database which is basically a registration system. We plan to market our stuff on the Japanese market. The database isn't too big -- 10 Gigs or so -- and other 'product-specific' DBs talk to it via DB links.

In order for the registration system to be able to take Japanese characters, I have to convert it either to two-byte chars or to Unicode. AFAIK, Unicode only stores a char in more than one byte if the character requires it, but there is some overhead in figuring out how each character should be stored (a rough byte-count sketch follows the lists below). So the question is, should I store everything as two-byte or as Unicode:

two-byte:
increase in DB size
possibly an increase in network traffic
the entire DB is converted to two-byte chars, when only a small portion is actually non-ASCII

Unicode:
CPU overhead
not as widely supported?
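To make the size argument concrete, here is a rough sketch of the trade-off. It is only an illustration, not part of the registration system: Python and the sample strings are made up, UTF-8 is assumed as the Unicode encoding, and the "two-byte" side is taken literally as a flat two bytes per character, the way the question frames it.

# Rough byte-count comparison (Python for illustration only; names are hypothetical).
ascii_name = "John Smith"
japanese_name = "山田太郎"   # made-up Japanese registration value

for label, text in [("ASCII", ascii_name), ("Japanese", japanese_name)]:
    utf8_len = len(text.encode("utf-8"))   # variable-width Unicode encoding
    fixed_len = 2 * len(text)              # flat two bytes per character
    print(f"{label:8s}: {len(text)} chars, UTF-8 = {utf8_len} bytes, fixed two-byte = {fixed_len} bytes")

The output shows the shape of the trade-off: the ASCII name stays at 10 bytes in UTF-8 but doubles to 20 in a fixed two-byte scheme, while the Japanese name costs 12 bytes in UTF-8 versus 8 bytes at two bytes per char. In other words, Unicode only penalizes the rows that actually contain Japanese text, whereas a fixed two-byte character set doubles the mostly-ASCII bulk of the database.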

Any help greatly appreciated.

thnx.
.......
We use Oracle 8.1.6-8.1.7 on Solaris 2.6, 2.7 boxes

Andrey Dmitriev	 eFax: (978) 383-5892  Daytime: (917) 750-3630
AOL: NetComrade	 ICQ: 11340726 remove NSPAM to email
