Re: Characterset problem with Oracle
"Peter Sylvester" <peters_no_spam_please_at_mitre.org> wrote in message
news:ci4noi$5d3$1_at_newslocal.mitre.org...
[...]
> Would you care to expand upon the overhead issue associated with UTF8.
Hi Peter
Unicode character sets need more space to store data, so all string operations are a little slower.
Oracle itself clearly documents this overhead. At
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96529/ch5.htm#1013253
you can read:
"If performance is your main concern, then consider using a single-byte
database character set and storing Unicode data in the SQL NCHAR datatypes.
Databases that use a multibyte database character set such as UTF8 have a
performance overhead."
Just as you would never store short strings in a CLOB instead of a VARCHAR2, you should not use a Unicode character set if you don't need it.
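The storage overhead is easy to see outside Oracle, too. Here is a small illustrative sketch (in Python, and not Oracle-specific) comparing how many bytes the same text needs in a single-byte character set versus UTF-8; the sample string and encoding names are my own choices, not from the thread:

```python
# Illustrative only: a multibyte encoding such as UTF-8 uses more bytes
# than a single-byte encoding for any non-ASCII text, which is where the
# storage (and hence processing) overhead comes from.

text = "Grüße aus Zürich"  # hypothetical sample with accented characters

single_byte = text.encode("latin-1")  # single-byte charset (think WE8ISO8859P1)
multi_byte = text.encode("utf-8")     # multibyte Unicode charset (think UTF8/AL32UTF8)

print(len(text), len(single_byte), len(multi_byte))  # 16 16 19

# UTF-8 spends an extra byte on each accented character.
assert len(multi_byte) > len(single_byte)
```

For plain ASCII the two encodings are byte-for-byte identical; the penalty only appears once accented or non-Latin characters show up, but the database engine still has to treat every string as potentially multibyte.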
> I would expect that with Java and other apps that deal with UTF natively
> that there might be less overhead, overall (client + server).
There are many Unicode encodings... Java, AFAIK, uses UCS-2 internally, i.e. a completely different encoding from UTF8!
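The point can be shown concretely. A sketch (again in Python for brevity) of how the same characters produce different byte sequences under UTF-8 and UCS-2/UTF-16-BE, which is why a Java client talking to a UTF8 database still converts on every round trip; the sample string is my own:

```python
# The same text, two different Unicode encodings: the byte streams do not
# match, so a driver must always transcode between them.

text = "Zürich"

utf8 = text.encode("utf-8")      # multibyte: 1 byte for ASCII, 2+ otherwise
ucs2 = text.encode("utf-16-be")  # fixed 2 bytes per character (for the BMP)

print(utf8.hex())  # 5ac3bc72696368           -> "ü" becomes c3 bc
print(ucs2.hex())  # 005a00fc0072006900630068 -> every char is 2 bytes

assert utf8 != ucs2
```

So even a "Unicode-native" client does not read UTF8 data for free: every fetch and bind crosses an encoding boundary.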
Chris
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=