Re: Oracle 11 Server and Unicode UTF-8
Date: Fri, 28 May 2010 16:38:29 +0200
"Arne Ortlinghaus" <Arne.Ortlinghaus_at_acs.it> writes:
> Hi Walt,
> yes, every field with too many characters (more than 4000 bytes in UTF-8) cannot
> be converted without data loss. It may be safer to add new Unicode
> columns if you already have a database with data in it, and then convert.
> Arne Ortlinghaus
> ACS Data Systems
It's not only the 4000-byte limit that causes trouble. I recently tried to import
an export file from a database using ISO 8859 into a database using UTF-8.
I had some columns of type VARCHAR2(3), which meant VARCHAR2(3 BYTE). The columns in the new database were also VARCHAR2(3 BYTE), and some data contained German umlauts, which are stored as two bytes in UTF-8. So those rows were not imported.
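To illustrate the byte-versus-character mismatch, here is a small sketch (the table and column names are my own, not from the original databases). In an AL32UTF8 database, LENGTHB returns the byte count that must fit under BYTE semantics, while LENGTH counts characters:

```sql
-- In a database with NLS_CHARACTERSET = AL32UTF8:
-- 'Grün' is 4 characters but 5 bytes (the umlaut ü takes 2 bytes in UTF-8).
SELECT LENGTH('Grün')  AS char_count,   -- character count: 4
       LENGTHB('Grün') AS byte_count    -- byte count: 5
FROM dual;

-- So inserting it into a column sized in bytes fails,
-- even though the value is only 4 characters long:
CREATE TABLE t_demo (v VARCHAR2(4 BYTE));
INSERT INTO t_demo VALUES ('Grün');  -- ORA-12899: value too large for column
```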
IIRC there is a syntax like VARCHAR2(3 CHAR) when defining columns. That should allow 3 UTF-8 characters to be stored in the column regardless of how many bytes each takes. I think it's a good idea to check this, and possibly convert columns to allow a maximum number of characters rather than bytes, before doing an export/import involving multibyte character sets.
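A sketch of what that could look like (table and column names are illustrative): define the column with CHAR length semantics, or modify an existing column in place before the import; NLS_LENGTH_SEMANTICS sets the default semantics for a session:

```sql
-- Column sized in characters: holds 3 characters whatever their
-- byte length in UTF-8 (still capped by the 4000-byte column limit).
CREATE TABLE names (code VARCHAR2(3 CHAR));

-- Or widen an existing BYTE-sized column before the import:
ALTER TABLE names MODIFY (code VARCHAR2(3 CHAR));

-- Make CHAR semantics the default for columns created in this session:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
```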
> "Walt" <walt_askier_at_SHOESyahoo.com> schrieb im Newsbeitrag
>> We're running Oracle 10g on Windows using ISO 8859 as the character
>> set. We're exploring the idea of converting to unicode (UTF8) along with the
>> upcoming upgrade to v11.
>> I haven't found a good reference for how to best accomplish the conversion
>> and what pitfalls to watch out for. Any suggestions?
>> One thing I expect to be a problem is that we have about 200 columns that
>> are defined as Varchar2(4000). My understanding is that this limit is 4000
>> *bytes*, so some of our data may not "fit" if it contains enough characters
>> with code points above 127 (i.e., outside the ASCII range).
>> Anyone been through this before who'd like to offer sage advice?
--
Lothar Armbrüster | lothar.armbruester_at_t-online.de
Hauptstr. 26 | 65346 Eltville
Received on Fri May 28 2010 - 09:38:29 CDT