Re: NCLOB fields and CLOB.createTemporary does not work
On Thu, 28 Apr 2005 21:30:59 +0800, John Hendrikx wrote
(in article <4270e597$0$157$e4fe514c_at_news.xs4all.nl>):
>
>>>>> I'm using Oracle 9i, using the latest thin JDBC Java driver. I'm trying
>>>>> to insert an NCLOB into the database. The recommended way to do this is
>>>>> to use CLOB.createTemporary() as you can see in the code below. This
>>>>> code works fine for CLOB fields, but fails with an 'ORA-12704: character
>>>>> set mismatch' when the field is NCLOB.
>>>>
>>>> I am sure you will not catch an ugly disease from posting
>>>> the character sets in question?
>>>
>>> Java uses its standard UCS2 character set for all strings, and our data is
>>> in that format.
>>>
>>> The databases we've tested on are:
>>>
>>> Oracle 9i, with a database using the WE8MSWIN1252 charset. We found this
>>> setup is useless, as all international characters will be discarded
>>> during the translation process, despite using NVARCHAR2/NCLOB fields.
>>
>> Still, that is not a client setting.
>> The MSWIN1252 means nothing; NCLOB/NVARCHAR fields would be either
>> UTF8, or AL16UTF16 (default).
>> Besides, I do not understand why you are using NVARCHAR/NCLOB fields, when
>> the standard counterparts would do, too. The reference to WE8MSWIN1252
>> makes me believe you do not need to store multibyte characters, like
>> Japanese, Chinese or Korean.
>>> Oracle 9i, with a database using the AL32UTF8 charset. This works:
>>> international characters are retained in both NVARCHAR2 and NCLOB
>>> fields. NCLOB fields, however, only work with small sizes. Above 32K
>>> we're forced to use the CLOB.createTemporary() construction (instead of
>>> setString() from PreparedStatement), but that results in the problem
>>> detailed in my initial post.
>>
>> Strings in Oracle go up to 32K, and that only within PL/SQL. String
>> storage is 4000 bytes (!) max, or 4 GB if you use LOBs.
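
For reference, here is a minimal sketch of the createTemporary route discussed above, written against the 9i-era Oracle JDBC classes (oracle.sql.CLOB, oracle.jdbc.OraclePreparedStatement). The setFormOfUse(..., FORM_NCHAR) call is only an assumption about what the thin driver may need in order to bind into an NCLOB column without an ORA-12704; it is not confirmed anywhere in this thread, and the table and column names are invented for illustration.

import java.io.IOException;
import java.io.Writer;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import oracle.jdbc.OraclePreparedStatement;
import oracle.sql.CLOB;

public class NClobInsertSketch {

    // Hypothetical table: CREATE TABLE docs (id NUMBER, body NCLOB)
    public static void insertLargeText(Connection conn, int id, String text)
            throws SQLException, IOException {

        // Create a session-duration temporary LOB and fill it through a
        // character stream; this is the route used above for data over 32K.
        CLOB temp = CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
        Writer w = temp.getCharacterOutputStream();
        try {
            w.write(text);
        } finally {
            w.close();
        }

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO docs (id, body) VALUES (?, ?)");
        try {
            ps.setInt(1, id);
            // Assumption: declaring the bind as NCHAR-typed may avoid the
            // ORA-12704 character set mismatch reported in the thread.
            ((OraclePreparedStatement) ps).setFormOfUse(2, OraclePreparedStatement.FORM_NCHAR);
            ps.setClob(2, temp);
            ps.executeUpdate();
        } finally {
            ps.close();
            CLOB.freeTemporary(temp);
        }
    }
}

As the thread notes, setString() on the PreparedStatement is fine for small values; the temporary LOB is only needed above the 32K mark.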
>>> Yes, but I do not see how it is relevant what Java supports, when the
>>> Oracle database is the one that is unable to convert the standard Java
>>> character set to one of its own.
>>
>> I could just as easily state that you do not seem to grasp the concept of
>> character sets within Oracle, and what character set is active in
>> what case.
>> As I am not very much into Java, I will not.
>> Here's what I think happens:
>> - you install, and create a database, using 9i with the WE8MSWIN1252
>> character set.
>> Without knowing the details, I am guessing from here on.
>> - the environment settings for the client processes (e.g. sqlplus) are
>> set to WE8MSWIN1252. This is done by the installer.
>> - the consequence is: any process connecting will have to be converted TO
>> WE8MSWIN1252, a character set *NOT* supported by Java.
>> - If the above scenario is correct, that would also explain why the
>> experiments with AL32UTF8, in either 9i or 10g, work out OK.
>> Still, you have to understand that NVARCHAR, NCLOB and NCHAR fields
>> obey the character set as defined in the 'create database ... national
>> character set xxxxxx' command. All the other character fields will obey
>> the 'character set yyyyy' part of the 'create database ... character
>> set yyyyy national character set xxxxxx' command.
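
As a quick way to see which character set governs which column type, the database character set and the national character set can be read from NLS_DATABASE_PARAMETERS. A minimal JDBC sketch, assuming the connecting account is allowed to select from that view:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CharsetCheck {

    // Prints the database character set (used by CHAR/VARCHAR2/CLOB) and the
    // national character set (used by NCHAR/NVARCHAR2/NCLOB).
    public static void printCharsets(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        try {
            ResultSet rs = stmt.executeQuery(
                "SELECT parameter, value FROM nls_database_parameters " +
                "WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
            rs.close();
        } finally {
            stmt.close();
        }
    }
}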
>> And I would use the CLOB handling routines when working with CLOBs, and
>> the string handling routines when handling VARCHARs.
>> Maybe an experienced Java/Oracle programmer can comment on that?
>> :-)
So why not just use a BLOB in the Java code, and let the client deal with it?
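
A minimal sketch of that BLOB route, assuming a hypothetical DOCS_RAW table with a BLOB column: the text is encoded to UTF-8 bytes in Java before the insert and decoded again after the select, so the database never applies a character set conversion. For very large values a streaming bind or a temporary BLOB would be needed, much as with the CLOB case above.

import java.io.UnsupportedEncodingException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobTextSketch {

    // Hypothetical table: CREATE TABLE docs_raw (id NUMBER, body BLOB)

    // Encode the text to UTF-8 in Java and store the raw bytes; the database
    // treats a BLOB as opaque binary data.
    public static void write(Connection conn, int id, String text)
            throws SQLException, UnsupportedEncodingException {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO docs_raw (id, body) VALUES (?, ?)");
        try {
            ps.setInt(1, id);
            ps.setBytes(2, text.getBytes("UTF-8"));
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }

    // Read the bytes back and decode them with the same encoding in Java.
    public static String read(Connection conn, int id)
            throws SQLException, UnsupportedEncodingException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT body FROM docs_raw WHERE id = ?");
        try {
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            String result = null;
            if (rs.next()) {
                byte[] raw = rs.getBytes(1);
                result = (raw == null) ? null : new String(raw, "UTF-8");
            }
            rs.close();
            return result;
        } finally {
            ps.close();
        }
    }
}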
If the data field is not going to be indexed by the Oracle database text services, why even bother converting the database to a multibyte character set?

Received on Fri May 06 2005 - 05:19:56 CDT