Re: Oracle ODBC / UNICODE Non Conformance
Joe - thanks for the update.
Let's go again for simplicity and pretend there is no NCHAR, since it's not clear to me what it's for. In addition, you are saying that regular old CHAR can store double-byte Unicode, which is nice. That means we don't have the duality of MS SQL Server, which has CHAR for single-byte and NCHAR for double-byte data. So we also don't need to wait for Oracle 9.
On point 4, this is the SQL column type, not the C type, so the choices are SQL_CHAR or SQL_UNICODE_CHAR, not SQL_C_CHAR or SQL_C_WCHAR. I assume from the other information you have provided that SQL_CHAR should be fine for binding Unicode data.
To be quite honest, I don't know why the application has to provide this
value on a SQLBindParam call - surely the database knows the column type
better than the application does.
On the same subject is the very problematic ColumnSize argument to
SQLBindParam, whose description in the ODBC documentation is totally
ambiguous and whose implementation seems to vary from driver to driver
and database to database.
Microsoft's ODBC drivers for Access and SQL Server behave totally
differently with respect to this problematic parameter. Once you start
talking about binding INOUT parameters (e.g. stored procedures) with
SQLBindParam, its usage is a nightmare.
Is there a coherent description of Oracle's use of this parameter?
DM
In article <8s0b1s$2kt$1_at_nnrp1.deja.com>,
David Murphy <DavidMurphy_at_j-netdirect.com> wrote:
> Joe - thanks for the explanation - this is good info..
>
> For simplicity let's assume that all character data handled by the app
> is Unicode double byte. The app always needs to deal with non-ANSI
> data. Therefore all binding must be done with SQL_C_WCHAR.
OK, so presumably the application stores all its character data in two-byte character strings (i.e. TCHAR or WCHAR)?
>
> We need to insert into columns (either CHAR or NCHAR) using
> SQLBindParam. We need to read data (CHAR & NCHAR) using SQLBindCol.
> Trouble is we don't know the column type before the insert, since the
> underlying db column types are unknown to us due to the nature of the
> app.
>
> 1. My first question is what is the difference between NCHAR and CHAR
> in Oracle? I assumed NCHAR was the only one allowing double byte chars
> like UNICODE
I believe this will become a true statement with Oracle 9, but not at the moment. When NCHAR was introduced originally, it was for 'National character' data, essentially to allow you to have a second language defined. I don't know the history, but I assume that this designation was done before Unicode came out as a standard, or before it was widely embraced. Otherwise, it's a pretty pathetic data type.
I've never seen anyone using NCHAR for any sort of benefit.
If someone on the newsgroup has a little more background on NCHAR, or some idea for where it might be useful, I'd be interested.
> 2. I assume that the ODBC driver hides our app from having to know the
> encoding of char data in the database - is that correct? So what is the
> format (encoding?) of character data that our app must provide when we
> bind (params or columns) with SQL_C_WCHAR? I assume the UCS-2 format
> you described - correct?
Correct. Everything we pass around is UCS-2, which is pretty standard.
> 3. I guess in summary what we want is that we can write char data
> bound with SQL_C_WCHAR and read it back bound with SQL_C_WCHAR and get
> the same data back, no matter whether the underlying column is CHAR or
> NCHAR, and no matter the database encoding scheme. Is this possible?
When you read data back, bound as SQL_C_WCHAR, you will get back Unicode data (i.e. every second byte is 0x00 if the actual data is English). If you bind the data as SQL_C_CHAR, you'll get back data in the local code page (i.e. ANSI 1-byte characters), but there may be conversion issues if the data was inserted as Unicode (i.e. some Unicode characters have no mapping to ANSI).
> 4. What should the SQL column type be for the SQLBindParameter for
> CHAR and NCHAR? SQL_CHAR or SQL_UNICODE_CHAR?
If you have Unicode data, and want Unicode data back, binding as SQL_C_WCHAR is the way to go. Of course, you'll want to take a look at the various length parameters to see which ask for number of characters & which ask for number of bytes.
> 5. What does 'character set mismatch' mean when trying to update NCHAR
> data?
I assume that there's a problem converting the data to whatever national code page the database is assuming. As I said before, however, NCHAR is a rather useless data type, so I've never spent any time trying to understand how to use it.
> 6. On an unrelated matter - can we tell clients to use the latest ODBC
> driver for 8.1.5 on an 8.0.x database, since it fixed a problem we
> needed fixed for them?
I doubt this will work. Assuming this is the unrelated matter I'm
thinking of, I sent mail to you earlier today on this point.
Basically, when we release new driver versions, we release them on each
supported Oracle client platform (currently 8.0.5, 8.0.6, 8.1.5, 8.1.6,
and (soon to be) 8.1.7). Thus, the most recent drivers for each platform
(8.0.5.9, 8.0.6.1, 8.1.5.6, and 8.1.6.1) are all identical code bases,
built against different Oracle client DLL's. You should be able to fix
the customer's problem by installing the 8.0.5.9 driver.
Justin Cave
Oracle ODBC Development
Received on Tue Oct 10 2000 - 23:01:02 CDT