UTF8, ODBC and Japanese (c.d.o.server)
I have created an Oracle 8.0.4 database with a character set of UTF8. I'm trying to store Japanese Unicode characters in the database and select them for display on my PC.
I'm inserting the characters using the chr() function:
insert into mytable values (chr(15123368));
where 15123368 is what I computed as the UTF8 encoding of the Katakana character whose Unicode code point is 30E8. When I dump the contents of that column:
select dump(string, 1016) from mytable;
I can see that it has indeed stored a multibyte character:
Typ=96 Len=10 CharacterSet=UTF8: e6,c3,a8,20,20,20,20,20,20,20
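As a sanity check on the chr() argument (Oracle's chr(n) stores the character whose bytes are the binary value of n in the database character set), here is a quick Python sketch, outside the database, that derives the integer for code point 30E8 — the variable names are just illustrative:

```python
# Compute the integer to pass to Oracle's chr() for a UTF-8 database:
# encode the code point as UTF-8, then read those bytes as one
# big-endian integer.
codepoint = 0x30E8                       # Katakana letter YO
utf8_bytes = chr(codepoint).encode("utf-8")
n = int.from_bytes(utf8_bytes, "big")
print(utf8_bytes.hex(), n)               # e383a8 14910376
```

Interestingly, that gives 14910376 (bytes e3,83,a8), not the 15123368 (e6,c3,a8) I used above — and e6,c3,a8 is not a valid UTF-8 sequence — so the stored value itself may be part of the problem.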
However, I have yet to find any way to 'select * from mytable' and see the actual Japanese character. We've tried a client-side test program with an Oracle ODBC driver, but the values come back as single-byte characters. We've tried downloading Japanese fonts for Exceed and selecting against our Unix database server. We've also tried building a Web application with Oracle's WebDB to view the data directly in IE. Of course, we've set NLS_LANG and all the other language parameters we can think of.
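For the record, this is the sort of client environment we've been setting before connecting — the specific values here are illustrative examples of the LANGUAGE_TERRITORY.CHARSET format, not necessarily what's right for our setup:

```shell
# Illustrative NLS settings for a UTF8 client session (not a known fix):
export NLS_LANG=JAPANESE_JAPAN.UTF8
```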
It just can't be this complicated to read multibyte data from a UTF8 database. Does anybody have any documentation, an example, or solutions to make this work?
Thanks in advance!
Niels Bauer
Received on Fri Aug 04 2000 - 00:00:00 CDT