Oracle9i Database Globalization Support Guide
Release 2 (9.2)

Part Number A96529-01
6
Programming with Unicode

This chapter describes how to use Oracle's database access products with Unicode. It contains the following topics:

Overview of Programming with Unicode

Oracle9i offers several database access products for inserting and retrieving Unicode data. Oracle offers database access products for commonly used programming environments such as Java and C/C++. Data is transparently converted between the database and client programs, which ensures that client programs are independent of the database character set and national character set. In addition, client programs are sometimes even independent of the character datatype, such as NCHAR or CHAR, used in the database.

To avoid overloading the database server with data conversion operations, Oracle9i always tries to move them to the client-side database access products. In a few cases, data must be converted in the database, which affects performance. This chapter discusses details of the data conversion paths.

Database Access Product Stack and Unicode

Oracle Corporation offers a comprehensive set of database access products that allow programs from different development environments to access Unicode data stored in the database. These products are listed in Table 6-1.

Table 6-1 Oracle Database Access Products

Programming Environment   Oracle Database Access Products
C/C++                     Oracle Call Interface (OCI), Oracle Pro*C/C++, Oracle ODBC Driver, Oracle OLE DB Driver
Visual Basic              Oracle ODBC Driver, Oracle OLE DB Driver
Java                      Oracle JDBC OCI or thin driver, Oracle SQLJ
PL/SQL                    Oracle PL/SQL and SQL

Figure 6-1 shows how the database access products can access the database.

Figure 6-1 Oracle Database Access Products



The Oracle Call Interface (OCI) is the lowest level API that the rest of the client-side database access products use. It provides a flexible way for C/C++ programs to access Unicode data stored in SQL CHAR and NCHAR datatypes. Using OCI, you can programmatically specify the character set (UTF-8, UTF-16, and others) for the data to be inserted or retrieved. It accesses the database through Oracle Net.

Oracle Pro*C/C++ enables you to embed SQL and PL/SQL in your programs. It uses OCI's Unicode capabilities to provide UTF-16 and UTF-8 data access for SQL CHAR and NCHAR datatypes.

The Oracle ODBC driver enables C/C++, Visual Basic, and VBScript programs running on Windows platforms to access Unicode data stored in SQL CHAR and NCHAR datatypes of the database. It provides UTF-16 data access by implementing the SQLWCHAR interface specified in the ODBC standard specification.

The Oracle OLE DB driver enables C/C++, Visual Basic, and VBScript programs running on Windows platforms to access Unicode data stored in SQL CHAR and NCHAR datatypes. It provides UTF-16 data access through wide string OLE DB datatypes.

Oracle JDBC drivers are the primary Java programmatic interface for accessing an Oracle9i database. Oracle provides two client-side JDBC drivers: the JDBC OCI driver, which uses the OCI client libraries, and the JDBC thin driver, which is written entirely in Java.

Both drivers support Unicode data access to SQL CHAR and NCHAR datatypes in the database.

Oracle SQLJ acts like a preprocessor that translates embedded SQL in a Java program into a Java source file with JDBC calls. It offers you a higher level programmatic interface to access databases. Like JDBC, SQLJ provides Unicode data access to SQL CHAR and NCHAR datatypes in the database.

The PL/SQL and SQL engines process PL/SQL programs and SQL statements on behalf of client-side programs such as OCI and server-side PL/SQL stored procedures. They allow PL/SQL programs to declare NCHAR and NVARCHAR2 variables and access SQL NCHAR datatypes in the database.

The following sections describe how each of the database access products supports Unicode data access to an Oracle9i database and offer examples for using those products:

SQL and PL/SQL Programming with Unicode

SQL is the fundamental language with which all programs and users access data in an Oracle database either directly or indirectly. PL/SQL is a procedural language that combines the data manipulating power of SQL with the data processing power of procedural languages. Both SQL and PL/SQL can be embedded in other programming languages. This section describes Unicode-related features in SQL and PL/SQL that you can deploy for multilingual applications.

This section contains the following topics:

SQL NCHAR Datatypes

There are three SQL NCHAR datatypes: NCHAR, NVARCHAR2, and NCLOB. They are described in the following sections.

The NCHAR Datatype

When you define a table column or a PL/SQL variable as the NCHAR datatype, the length is always specified as the number of characters. For example, the statement

CREATE TABLE table1 (column1 NCHAR(30)); 

creates a column with a maximum length of 30 characters. The maximum number of bytes for the column is determined as follows:

maximum number of bytes = (maximum number of characters) x (maximum number of bytes per character)

For example, if the national character set is UTF8, the maximum byte length is 30 characters times 3 bytes per character, or 90 bytes.

The national character set, which is used for all NCHAR datatypes, is defined when the database is created. In Oracle9i, the national character set can be either UTF8 or AL16UTF16. The default is AL16UTF16.

The maximum column size allowed is 2000 characters when the national character set is UTF8 and 1000 characters when it is AL16UTF16. The actual data is subject to the maximum byte limit of 2000 bytes; the two size constraints must be satisfied at the same time.

In PL/SQL, the maximum length of NCHAR data is 32767 bytes. You can define an NCHAR variable of up to 32767 characters, but the actual data cannot exceed 32767 bytes.

If you insert a value that is shorter than the column length, Oracle pads the value with blanks to whichever length is smaller: the maximum character length or the maximum byte length.


Note:

UTF8 may affect performance because it is a variable-width character set. Excessive blank padding of NCHAR fields decreases performance. Consider using the NVARCHAR2 datatype or changing to the AL16UTF16 character set for the NCHAR datatype.


The NVARCHAR2 Datatype

The NVARCHAR2 datatype specifies a variable length character string that uses the national character set. When you create a table with an NVARCHAR2 column, you specify the maximum number of characters for the column. Lengths for NVARCHAR2 are always in units of characters, just as for NCHAR. Oracle subsequently stores each value in the column exactly as you specify it, if the value does not exceed the column's maximum length. Oracle does not pad the string value to the maximum length.

The maximum column size allowed is 4000 characters when the national character set is UTF8 and 2000 when it is AL16UTF16. The maximum length of an NVARCHAR2 column in bytes is 4000. Both the byte limit and the character limit must be met, so the maximum number of characters that is actually allowed in an NVARCHAR2 column is the number of characters that can be written in 4000 bytes.

In PL/SQL, the maximum length for an NVARCHAR2 variable is 32767 bytes. You can define NVARCHAR2 variables up to 32767 characters, but the actual data cannot exceed 32767 bytes.

The following CREATE TABLE statement creates a table with one NVARCHAR2 column with a maximum length of 2000 characters. If the national character set is UTF8, the statement creates a column with a maximum character length of 2000 and a maximum byte length of 4000.

CREATE TABLE table2 (column2 NVARCHAR2(2000)); 

The NCLOB Datatype

NCLOB is a character large object containing multibyte characters, with a maximum size of 4 gigabytes. Unlike BFILEs, NCLOBs have full transactional support: changes made through SQL, the DBMS_LOB package, or OCI participate fully in transactions, and manipulations of NCLOB values can be committed and rolled back. Note, however, that you cannot save an NCLOB locator in a PL/SQL or OCI variable in one transaction and then use it in another transaction or session.

NCLOB values are stored in the database using the fixed-width AL16UTF16 character set, regardless of the national character set. Oracle translates the stored Unicode value to the character set requested on the client or on the server, which can be fixed-width or variable-width. When you insert data into an NCLOB column using a variable-width character set, Oracle converts the data into AL16UTF16 before storing it in the database.

See Also:

Oracle9i Application Developer's Guide - Large Objects (LOBs) for more information about NCLOBs

Implicit Datatype Conversion Between NCHAR and Other Datatypes

Oracle supports implicit conversions between SQL NCHAR datatypes and other Oracle datatypes, such as CHAR, VARCHAR2, NUMBER, DATE, ROWID, and CLOB. Any implicit conversions for CHAR and VARCHAR2 datatypes are also supported for SQL NCHAR datatypes. You can use SQL NCHAR datatypes the same way as SQL CHAR datatypes.

Keep in mind these points about implicit conversions:

Exception Handling for Data Loss During Datatype Conversion

Data loss can occur during datatype conversion when character set conversion is necessary. If a character in the source character set is not defined in the target character set, then a replacement character is used in its place. For example, if you try to insert NCHAR data into a regular CHAR column and the character data in NCHAR (Unicode) form cannot be converted to the database character set, then the character is replaced by a replacement character defined by the database character set.

The NLS_NCHAR_CONV_EXCP initialization parameter controls the behavior of data loss during character type conversion. When this parameter is set to TRUE, any SQL statement that results in data loss returns an ORA-12713 error and the corresponding operation is aborted. When this parameter is set to FALSE, data loss is not reported and the unconvertible characters are replaced with replacement characters. The default value is TRUE. This parameter works for both implicit and explicit conversion.

In PL/SQL, when data loss occurs during conversion of SQL CHAR and NCHAR datatypes, the LOSSY_CHARSET_CONVERSION exception is raised for both implicit and explicit conversion.

Rules for Implicit Datatype Conversion

In some cases, conversion between datatypes is possible in only one direction. In other cases, conversion in both directions is possible. Oracle defines a set of rules for conversion between datatypes. Table 6-2 contains the rules for conversion between datatypes.

Table 6-2 Rules for Conversion Between Datatypes

INSERT/UPDATE statement: Values are converted to the datatype of the target database column.

SELECT INTO statement: Data from the database is converted to the datatype of the target variable.

Variable assignments: Values on the right of the equal sign are converted to the datatype of the target variable on the left of the equal sign.

Parameters in SQL and PL/SQL functions: CHAR, VARCHAR2, NCHAR, and NVARCHAR2 are loaded the same way. An argument with a CHAR, VARCHAR2, NCHAR, or NVARCHAR2 datatype is compared to a formal parameter of any of the CHAR, VARCHAR2, NCHAR, or NVARCHAR2 datatypes. If the argument and formal parameter datatypes do not match exactly, then implicit conversions are introduced when data is copied into the parameter on function entry and copied out to the argument on function exit.

Concatenation || operation or CONCAT function: If one operand is a SQL CHAR or NCHAR datatype and the other operand is a NUMBER or other non-character datatype, then the other datatype is converted to VARCHAR2 or NVARCHAR2. For concatenation between character datatypes, see "SQL NCHAR datatypes and SQL CHAR datatypes".

SQL CHAR or NCHAR datatypes and NUMBER datatype: Character values are converted to the NUMBER datatype.

SQL CHAR or NCHAR datatypes and DATE datatype: Character values are converted to the DATE datatype.

SQL CHAR or NCHAR datatypes and ROWID datatype: Character values are converted to the ROWID datatype.


SQL NCHAR datatypes and SQL CHAR datatypes

Comparisons between SQL NCHAR datatypes and SQL CHAR datatypes are more complex because they can be encoded in different character sets.

When CHAR and VARCHAR2 values are compared, the CHAR values are converted to VARCHAR2 values.

When NCHAR and NVARCHAR2 values are compared, the NCHAR values are converted to NVARCHAR2 values.

When there is comparison between SQL NCHAR datatypes and SQL CHAR datatypes, character set conversion occurs if they are encoded in different character sets. The character set for SQL NCHAR datatypes is always Unicode and can be either UTF8 or AL16UTF16 encoding, which have the same character repertoires but are different encodings of the Unicode standard. SQL CHAR datatypes use the database character set, which can be any character set that Oracle supports. Unicode is a superset of any character set supported by Oracle, so SQL CHAR datatypes can always be converted to SQL NCHAR datatypes without data loss.

SQL Functions for Unicode Datatypes

SQL NCHAR datatypes can be converted to and from SQL CHAR datatypes and other datatypes using explicit conversion functions. The examples in this section use the table created by the following statement:

CREATE TABLE customers 
  (id NUMBER, name NVARCHAR2(50), address NVARCHAR2(200), birthdate DATE);

Example 6-1 Populating the Customer Table Using the TO_NCHAR Function

The TO_NCHAR function converts the data at run time, while the N function converts the data at compilation time.

INSERT INTO customers VALUES (1000, 
  TO_NCHAR('John Smith'),N'500 Oracle Parkway',sysdate);

Example 6-2 Selecting from the Customer Table Using the TO_CHAR Function

The following statement converts the values of name from characters in the national character set to characters in the database character set before selecting them according to the LIKE clause:

SELECT name FROM customers WHERE TO_CHAR(name) LIKE '%Sm%';

You should see the following output:

NAME
--------------------------------------
John Smith

Example 6-3 Selecting from the Customer Table Using the TO_DATE Function

Using the N function shows that either NCHAR or CHAR data can be passed as parameters to the TO_DATE function. The datatypes can be mixed because they are converted at run time.

DECLARE
  ndstr NVARCHAR2(50);
  ndatestring NVARCHAR2(20) := N'12-SEP-1975';
BEGIN
  SELECT name INTO ndstr FROM customers
  WHERE birthdate > TO_DATE(ndatestring, 'DD-MON-YYYY',
                            N'NLS_DATE_LANGUAGE = AMERICAN');
END;

As demonstrated in Example 6-3, SQL NCHAR data can be passed to explicit conversion functions. SQL CHAR and NCHAR data can be mixed together when using multiple string parameters.

See Also:

Oracle9i SQL Reference for more information about explicit conversion functions for SQL NCHAR datatypes

Other SQL Functions

Most SQL functions can take arguments of SQL NCHAR datatypes as well as mixed character datatypes. The return datatype is based on the type of the first argument. If a non-string datatype like NUMBER or DATE is passed to these functions, it is converted to VARCHAR2. The following examples use the customers table created in "SQL Functions for Unicode Datatypes".

Example 6-4 INSTR Function

SELECT INSTR(name, N'Sm', 1, 1) FROM customers;

Example 6-5 CONCAT Function

SELECT CONCAT(name,id) FROM customers;

id is converted to NVARCHAR2 and then concatenated with name.

Example 6-6 RPAD Function

SELECT RPAD(name,100,' ') FROM customers;

The following output results:

RPAD(NAME,100,' ')
------------------------------------------
John Smith

The space character ' ' is converted to the corresponding character in the NCHAR character set and then padded to the right of name until the total display length reaches 100.

See Also:

Oracle9i SQL Reference

Unicode String Literals

You can input Unicode string literals in SQL and PL/SQL as follows:

Put the letter N before a string literal that is enclosed with single quotation marks. This explicitly tells Oracle that the literal is in the national character set.

Use the NCHR(n) SQL function, which returns the character whose binary equivalent is n in the national character set.

Use the UNISTR('string') SQL function, which converts a string to the national character set.

The last two methods can be used to encode any Unicode string literals.

Using the UTL_FILE Package with NCHAR Data

The UTL_FILE package has been enhanced in Oracle9i to handle Unicode national character set data. The following functions and procedures have been added: FOPEN_NCHAR, GET_LINE_NCHAR, PUT_NCHAR, PUT_LINE_NCHAR, and PUTF_NCHAR.

OCI Programming with Unicode

OCI is the lowest-level API for accessing a database, so it offers the best possible performance. When using Unicode with OCI, consider these topics:

OCIEnvNlsCreate() Function for Unicode Programming

The OCIEnvNlsCreate() function is used to specify a SQL CHAR character set and a SQL NCHAR character set when the OCI environment is created. It is an enhanced version of the OCIEnvCreate() function and has extended arguments for two character set IDs. The OCI_UTF16ID UTF-16 character set ID replaces the Unicode mode introduced in Oracle9i release 1 (9.0.1). For example:

OCIEnv *envhp;
status = OCIEnvNlsCreate((OCIEnv **)&envhp,
                         (ub4)0,
                         (void *)0,
                         (void *(*)())0,
                         (void *(*)())0,
                         (void (*)())0,
                         (size_t)0,
                         (void **)0,
                         (ub2)OCI_UTF16ID, /* Metadata and SQL CHAR character set */
                         (ub2)OCI_UTF16ID  /* SQL NCHAR character set */);

The Unicode mode, in which the OCI_UTF16 flag is used with the OCIEnvCreate() function, is deprecated.

When OCI_UTF16ID is specified for both SQL CHAR and SQL NCHAR character sets, all metadata and bound and defined data are encoded in UTF-16. Metadata includes SQL statements, user names, error messages, and column names. Thus, all inherited operations are independent of the NLS_LANG setting, and all metatext data parameters (text*) are assumed to be Unicode text datatypes (utext*) in UTF-16 encoding.

To prepare the SQL statement when the OCI environment has been initialized with the OCI_UTF16ID character set ID, call the OCIStmtPrepare() function with a (utext*) string. The following example runs on the Windows platform only. You may need to change the wchar_t datatype for other platforms.

const wchar_t sqlstr[] = L"SELECT ename FROM emp WHERE ename = :ename";
...
OCIStmt *stmthp;
sts = OCIHandleAlloc(envh, (void **)&stmthp, OCI_HTYPE_STMT, 0, NULL);
status = OCIStmtPrepare(stmthp, errhp, (const text *)sqlstr,
                        wcslen(sqlstr), OCI_NTV_SYNTAX, OCI_DEFAULT);

To bind and define data, you do not have to set the OCI_ATTR_CHARSET_ID attribute because the OCI environment has already been initialized with UTF-16 character set IDs. The bind variable names must also be UTF-16 strings.

/* Inserting Unicode data */
OCIBindByName(stmthp1, &bnd1p, errhp, (const text *)L":ename",
              (sb4)wcslen(L":ename"), (void *)ename, sizeof(ename),
              SQLT_STR, (void *)&insname_ind, (ub2 *)0, (ub2 *)0,
              (ub4)0, (ub4 *)0, OCI_DEFAULT);
OCIAttrSet((void *)bnd1p, (ub4)OCI_HTYPE_BIND, (void *)&ename_col_len,
           (ub4)0, (ub4)OCI_ATTR_MAXDATA_SIZE, errhp);
...
/* Retrieving Unicode data */
OCIDefineByPos(stmthp2, &dfn1p, errhp, (ub4)1, (void *)ename,
               (sb4)sizeof(ename), SQLT_STR, (void *)0, (ub2 *)0,
               (ub2 *)0, (ub4)OCI_DEFAULT);

The OCIStmtExecute() function performs the operation.

See Also:

"OCIEnvNlsCreate()"

OCI Unicode Code Conversion

Unicode character set conversions take place between an OCI client and the database server if the client and server character sets are different. The conversion occurs on either the client or the server depending on the circumstances, but usually on the client side.

Data Integrity

You can lose data during conversion if you call an OCI API inappropriately. If the server and client character sets are different, you can lose data when the destination character set is a smaller set than the source character set. You can avoid this potential problem if both character sets are Unicode character sets (for example, UTF8 and AL16UTF16).

When you bind or define SQL NCHAR datatypes, you should set the OCI_ATTR_CHARSET_FORM attribute to SQLCS_NCHAR. Otherwise, you can lose data because the data is converted to the database character set before converting to or from the national character set. This occurs only if the database character set is not Unicode.

OCI Performance Implications When Using Unicode

Redundant data conversions can cause performance degradation in your OCI applications. These conversions occur in two cases: when the OCI_ATTR_CHARSET_FORM attribute is set to SQLCS_IMPLICIT but the target column is a SQL NCHAR datatype, and when it is set to SQLCS_NCHAR but the target column is a SQL CHAR datatype.

To avoid performance problems, you should always set OCI_ATTR_CHARSET_FORM correctly, based on the datatype of the target columns. If you do not know the target datatype, you should set the OCI_ATTR_CHARSET_FORM attribute to SQLCS_NCHAR when binding and defining.

Table 6-3 contains information about OCI character set conversions.

Table 6-3 OCI Character Set Conversions

utext buffer, SQLCS_IMPLICIT, target columns CHAR, VARCHAR2, or CLOB:
Conversion between UTF-16 and the database character set in OCI. No unexpected data loss.

utext buffer, SQLCS_NCHAR, target columns NCHAR, NVARCHAR2, or NCLOB:
Conversion between UTF-16 and the national character set in OCI. No unexpected data loss.

utext buffer, SQLCS_NCHAR, target columns CHAR, VARCHAR2, or CLOB:
Conversion between UTF-16 and the national character set in OCI, then between the national character set and the database character set in the database server. No unexpected data loss, but performance may degrade because the conversion goes through the national character set.

utext buffer, SQLCS_IMPLICIT, target columns NCHAR, NVARCHAR2, or NCLOB:
Conversion between UTF-16 and the database character set in OCI, then between the database character set and the national character set in the database server. Data loss may occur if the database character set is not Unicode.

text buffer, SQLCS_IMPLICIT, target columns CHAR, VARCHAR2, or CLOB:
Conversion between the NLS_LANG character set and the database character set in OCI. No unexpected data loss.

text buffer, SQLCS_NCHAR, target columns NCHAR, NVARCHAR2, or NCLOB:
Conversion between the NLS_LANG character set and the national character set in OCI. No unexpected data loss.

text buffer, SQLCS_NCHAR, target columns CHAR, VARCHAR2, or CLOB:
Conversion between the NLS_LANG character set and the national character set in OCI, then between the national character set and the database character set in the database server. No unexpected data loss, but performance may degrade because the conversion goes through the national character set.

text buffer, SQLCS_IMPLICIT, target columns NCHAR, NVARCHAR2, or NCLOB:
Conversion between the NLS_LANG character set and the database character set in OCI, then between the database character set and the national character set in the database server. Data loss may occur because the conversion goes through the database character set.

OCI Unicode Data Expansion

Data conversion can result in data expansion, which can cause a buffer to overflow. For binding operations, you need to set the OCI_ATTR_MAXDATA_SIZE attribute to a large enough size to hold the expanded data on the server. If this is difficult to do, you need to consider changing the table schema. For defining operations, client applications need to allocate enough buffer space for the expanded data. The size of the buffer should be the maximum length of the expanded data. You can estimate the maximum buffer length with the following calculation:

  1. Get the column data byte size.
  2. Multiply it by the maximum number of bytes per character in the client character set.

This method is the simplest and quickest way, but it may not be accurate and can waste memory. It is applicable to any character set combination. For example, for UTF-16 data binding and defining, the following example calculates the required client buffer size:

ub2 csid = OCI_UTF16ID;
oratext *selstmt = "SELECT ename FROM emp";
counter = 1;
... 
OCIStmtPrepare(stmthp, errhp, selstmt, (ub4)strlen((char*)selstmt),
               OCI_NTV_SYNTAX, OCI_DEFAULT);
OCIStmtExecute ( svchp, stmthp, errhp, (ub4)0, (ub4)0,
                 (CONST OCISnapshot*)0, (OCISnapshot*)0,
                 OCI_DESCRIBE_ONLY);
OCIParamGet(stmthp, OCI_HTYPE_STMT, errhp, &myparam, (ub4)counter);
OCIAttrGet((void*)myparam, (ub4)OCI_DTYPE_PARAM, (void*)&col_width,
           (ub4*)0, (ub4)OCI_ATTR_DATA_SIZE, errhp);
... 
maxenamelen = (col_width + 1) * sizeof(utext);
cbuf = (utext*)malloc(maxenamelen);
...
OCIDefineByPos(stmthp, &dfnp, errhp, (ub4)1, (void *)cbuf,
                (sb4)maxenamelen, SQLT_STR, (void *)0, (ub2 *)0,
                (ub2*)0, (ub4)OCI_DEFAULT);
OCIAttrSet((void *) dfnp, (ub4) OCI_HTYPE_DEFINE, (void *) &csid,
           (ub4) 0, (ub4)OCI_ATTR_CHARSET_ID, errhp);
OCIStmtFetch(stmthp, errhp, 1, OCI_FETCH_NEXT, OCI_DEFAULT);
...

When the NLS_LANG Character Set is UTF8 or AL32UTF8 in OCI

You can use UTF8 and AL32UTF8 by setting NLS_LANG for OCI client applications. If you do not need supplementary characters, then it does not matter whether you choose UTF8 or AL32UTF8. However, if your OCI applications might handle supplementary characters, then you need to make a decision. Because UTF8 can require up to three bytes for each character, one supplementary character is represented in two code points, totalling six bytes. In AL32UTF8, one supplementary character is represented in one code point, totalling four bytes.

Do not set NLS_LANG to AL16UTF16, because AL16UTF16 is the national character set for the server. If you need to use UTF-16, then you should specify the client character set to OCI_UTF16ID, using the OCIAttrSet() function when binding or defining data.

Binding and Defining SQL CHAR Datatypes in OCI

To specify a Unicode character set for binding and defining data with SQL CHAR datatypes, you may need to call the OCIAttrSet() function to set the appropriate character set ID after OCIBind() or OCIDefine() APIs. There are two typical cases:

Binding and Defining SQL NCHAR Datatypes in OCI

Oracle Corporation recommends that you access SQL NCHAR datatypes using UTF-16 binding or defining when using OCI. Beginning with Oracle9i, SQL NCHAR datatypes are Unicode datatypes with an encoding of either UTF8 or AL16UTF16. To access data in SQL NCHAR datatypes, set the OCI_ATTR_CHARSET_FORM attribute to SQLCS_NCHAR after binding or defining and before execution so that the appropriate data conversion is performed without data loss. The length of data in SQL NCHAR datatypes is always expressed as a number of Unicode code points.

The following program is a typical example of inserting and fetching data against an NCHAR data column:

...
ub2 csid = OCI_UTF16ID;
ub1 cform = SQLCS_NCHAR;
utext ename[100]; /* enough buffer for ENAME */
... 
/* Inserting Unicode data */ 
OCIBindByName(stmthp1, &bnd1p, errhp, (oratext*)":ENAME",
              (sb4)strlen((char *)":ENAME"), (void *) ename,
              sizeof(ename), SQLT_STR, (void *)&insname_ind, (ub2 *) 0,
              (ub2 *) 0, (ub4) 0, (ub4 *)0, OCI_DEFAULT); 
OCIAttrSet((void *) bnd1p, (ub4) OCI_HTYPE_BIND, (void *) &cform, (ub4) 0,
           (ub4)OCI_ATTR_CHARSET_FORM, errhp); 
OCIAttrSet((void *) bnd1p, (ub4) OCI_HTYPE_BIND, (void *) &csid, (ub4) 0,
           (ub4)OCI_ATTR_CHARSET_ID, errhp);
OCIAttrSet((void *) bnd1p, (ub4) OCI_HTYPE_BIND, (void *) &ename_col_len,
           (ub4) 0, (ub4)OCI_ATTR_MAXDATA_SIZE, errhp); 
... 
/* Retrieving Unicode data */
OCIDefineByPos (stmthp2, &dfn1p, errhp, (ub4)1, (void *)ename,
                (sb4)sizeof(ename), SQLT_STR, (void *)0, (ub2 *)0, (ub2*)0,
                (ub4)OCI_DEFAULT); 
OCIAttrSet((void *) dfn1p, (ub4) OCI_HTYPE_DEFINE, (void *) &csid, (ub4) 0,
           (ub4)OCI_ATTR_CHARSET_ID, errhp); 
OCIAttrSet((void *) dfn1p, (ub4) OCI_HTYPE_DEFINE, (void *) &cform, (ub4) 0,
           (ub4)OCI_ATTR_CHARSET_FORM, errhp); 
...

Binding and Defining CLOB and NCLOB Unicode Data in OCI

To write (bind) and read (define) UTF-16 data for CLOB or NCLOB columns, the UTF-16 character set ID must be specified in the OCILobWrite() and OCILobRead() calls. When you write UTF-16 data into a CLOB column, call OCILobWrite() as follows:

...
ub2 csid = OCI_UTF16ID;
err = OCILobWrite (ctx->svchp, ctx->errhp, lobp, &amtp, offset, (void *) buf,
                   (ub4) BUFSIZE, OCI_ONE_PIECE, (void *)0,
                   (sb4 (*)()) 0, (ub2) csid, (ub1) SQLCS_IMPLICIT); 

The amtp parameter is the data length in number of Unicode code points. The offset parameter indicates the offset of data from the beginning of the data column. The csid parameter must be set for UTF-16 data.

To read UTF-16 data from CLOB columns, call OCILobRead() as follows:

...
ub2 csid = OCI_UTF16ID;
err = OCILobRead(ctx->svchp, ctx->errhp, lobp, &amtp, offset, (void *) buf,
                 (ub4)BUFSIZE , (void *) 0, (sb4 (*)()) 0, (ub2)csid,
                 (ub1) SQLCS_IMPLICIT);

The data length is always represented as a number of Unicode code points. Note that one Unicode supplementary character is counted as two code points, because the encoding is UTF-16. After binding or defining a LOB column, you can measure the data length stored in the LOB column using OCILobGetLength(). The return value is the data length in code points if you bind or define as UTF-16.

err = OCILobGetLength(ctx->svchp, ctx->errhp, lobp, &lenp);

If you are using an NCLOB, you must set OCI_ATTR_CHARSET_FORM to SQLCS_NCHAR.

Pro*C/C++ Programming with Unicode

Pro*C/C++ provides the following ways to insert or retrieve Unicode data into or from the database: the VARCHAR datatype, the NVARCHAR datatype, and the UVARCHAR datatype, each of which is described in this section.

Pro*C/C++ does not use the Unicode OCI API for SQL text. As a result, embedded SQL text must be encoded in the character set specified in the NLS_LANG environment variable.

This section contains the following topics:

Pro*C/C++ Data Conversion in Unicode

Data conversion occurs in the OCI layer, but it is the Pro*C/C++ preprocessor that instructs OCI which conversion path should be taken based on the datatypes used in a Pro*C/C++ program. Table 6-4 illustrates the conversion paths:

Table 6-4 Pro*C/C++ Bind and Define Data Conversion

VARCHAR or text with a CHAR column: Conversion between the NLS_LANG character set and the database character set happens in OCI.

VARCHAR or text with an NCHAR column: Conversion between the NLS_LANG character set and the database character set happens in OCI; conversion between the database character set and the national character set happens in the database server.

NVARCHAR with an NCHAR column: Conversion between the NLS_LANG character set and the national character set happens in OCI.

NVARCHAR with a CHAR column: Conversion between the NLS_LANG character set and the national character set happens in OCI; conversion between the national character set and the database character set happens in the database server.

UVARCHAR or utext with an NCHAR column: Conversion between UTF-16 and the national character set happens in OCI.

UVARCHAR or utext with a CHAR column: Conversion between UTF-16 and the national character set happens in OCI; conversion between the national character set and the database character set happens in the database server.

Using the VARCHAR Datatype in Pro*C/C++

The Pro*C/C++ VARCHAR datatype is preprocessed to a struct with a length field and text buffer field. The following example uses the C/C++ text native datatype and the VARCHAR Pro*C/C++ datatypes to bind and define table columns.

#include <sqlca.h> 
main() 
{ 
   ... 
   /* Change to STRING datatype:    */ 
   EXEC ORACLE OPTION (CHAR_MAP=STRING) ; 
   text ename[20] ;                  /* unsigned char type */ 
   varchar address[50] ;             /* Pro*C/C++ varchar type */ 

   EXEC SQL SELECT ename, address INTO :ename, :address FROM emp; 
   /* ename is NULL-terminated */ 
   printf("ENAME = %s, ADDRESS = %.*s\n", ename, address.len, address.arr); 
   ... 
} 

When you use the VARCHAR datatype or native text datatype in a Pro*C/C++ program, the preprocessor assumes that the program intends to access columns of SQL CHAR datatypes instead of SQL NCHAR datatypes in the database. The preprocessor generates C/C++ code to reflect this fact by doing a bind or define using the SQLCS_IMPLICIT value for the OCI_ATTR_CHARSET_FORM attribute. As a result, if a bind or define variable is bound to a column of a SQL NCHAR datatype in the database, implicit conversion happens in the database server to convert the data from the database character set to the national character set and vice versa. During the conversion, data loss occurs when the database character set is a smaller set than the national character set.

Using the NVARCHAR Datatype in Pro*C/C++

The Pro*C/C++ NVARCHAR datatype is similar to the Pro*C/C++ VARCHAR datatype. Use it to access SQL NCHAR datatypes in the database. It tells the Pro*C/C++ preprocessor to bind or define a text buffer to a column of SQL NCHAR datatypes. The preprocessor specifies the SQLCS_NCHAR value for the OCI_ATTR_CHARSET_FORM attribute of the bind or define variable. As a result, no implicit conversion occurs in the database server.

If the NVARCHAR buffer is bound against columns of SQL CHAR datatypes, the data in the NVARCHAR buffer (encoded in the NLS_LANG character set) is converted to or from the national character set in OCI, and the data is then converted to the database character set in the database server. Data can be lost when the NLS_LANG character set is a larger set than the database character set.

Using the UVARCHAR Datatype in Pro*C/C++

The UVARCHAR datatype is preprocessed to a struct with a length field and a utext buffer field. The following example code contains two host variables, ename and address. The ename host variable is declared as a utext buffer containing 20 Unicode characters. The address host variable is declared as a uvarchar buffer containing 50 Unicode characters; the len and arr fields are accessible as fields of the struct.

#include <sqlca.h> 
#include <sqlucs2.h> 

main() 
{ 
   ... 
   /* Change to STRING datatype:    */ 
   EXEC ORACLE OPTION (CHAR_MAP=STRING) ; 
   utext ename[20] ;                  /* unsigned short type */ 
   uvarchar address[50] ;             /* Pro*C/C++ uvarchar type */ 

   EXEC SQL SELECT ename, address INTO :ename, :address FROM emp; 
   /* ename is NULL-terminated */ 
   wprintf(L"ENAME = %s, ADDRESS = %.*s\n", ename, address.len, address.arr); 
   ... 
} 

When you use the UVARCHAR datatype or the native utext datatype in Pro*C/C++ programs, the preprocessor assumes that the program intends to access SQL NCHAR datatypes. The preprocessor generates C/C++ code that binds or defines using the SQLCS_NCHAR value for the OCI_ATTR_CHARSET_FORM attribute. As a result, if a bind or define variable is bound to a column of a SQL CHAR datatype, an implicit conversion of the data from the national character set to the database character set occurs in the database server. However, no data is lost in this scenario because the national character set is always a larger set than the database character set.

JDBC and SQLJ Programming with Unicode

Oracle provides three JDBC drivers for Java programs to access Unicode data in the database:

  • the JDBC OCI driver
  • the JDBC thin driver
  • the JDBC server-side internal driver

Java programs can insert or retrieve Unicode data to and from columns of SQL CHAR and NCHAR datatypes. Specifically, JDBC enables Java programs to bind or define Java strings to SQL CHAR and NCHAR datatypes. Because Java's string datatype is UTF-16 encoded, data retrieved from or inserted into the database must be converted from UTF-16 to the database character set or the national character set and vice versa. The SQLJ preprocessor enables Java programs to embed SQL statements to simplify database access code. It translates the embedded SQL statements of a Java program to the corresponding JDBC calls. Similar to JDBC, SQLJ enables programs to bind or define Java strings to a SQL CHAR or NCHAR column. JDBC and SQLJ also allow you to specify the PL/SQL and SQL statements in Java strings so that any non-ASCII schema object names can be referenced in Java programs.
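The UTF-16 encoding of Java strings can be observed with the standard library alone. This minimal sketch (no database required) shows that length() counts UTF-16 code units rather than characters:

```java
import java.nio.charset.StandardCharsets;

public class Utf16Demo {
    public static void main(String[] args) {
        // A Java String is a sequence of UTF-16 code units.
        String bmp = "\uFF2A\uFF4F\uFF45";         // full-width "Joe"
        System.out.println(bmp.length());          // 3 code units
        System.out.println(bmp.getBytes(StandardCharsets.UTF_16BE).length); // 6 bytes

        // A supplementary character occupies two code units (a
        // surrogate pair), so length() and codePointCount() differ.
        String supp = new String(Character.toChars(0x2070E));
        System.out.println(supp.length());                           // 2
        System.out.println(supp.codePointCount(0, supp.length()));   // 1
    }
}
```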

This section contains the following topics:

  • Binding and Defining Java Strings in Unicode
  • Java Data Conversion in Unicode

Binding and Defining Java Strings in Unicode

Oracle JDBC drivers allow you to access SQL CHAR datatypes in the database using Java string bind or define variables. The following code illustrates how to bind or define a Java string to a CHAR column:

int empno = 12345;
String ename = "Joe";
PreparedStatement pstmt = conn.prepareStatement("INSERT INTO" +
    "emp (ename, empno) VALUES (?, ?)");
pstmt.setString(1, ename);
pstmt.setInt(2, empno);
pstmt.execute();                 /* execute to insert into first row */
empno += 1;                      /* next employee number */
ename = "\uFF2A\uFF4F\uFF45";    /* Unicode characters in name */
pstmt.setString(1, ename);
pstmt.setInt(2, empno);
pstmt.execute();                 /* execute to insert into second row */

For binding or defining Java string variables to SQL NCHAR datatypes, Oracle extends the JDBC specification to add the PreparedStatement.setFormOfUse() method through which you can explicitly specify the target column of a bind variable to be a SQL NCHAR datatype. The following code illustrates how to bind a Java string to an NCHAR column:

int empno = 12345;
String ename = "Joe";
oracle.jdbc.OraclePreparedStatement pstmt =
    (oracle.jdbc.OraclePreparedStatement)
    conn.prepareStatement("INSERT INTO emp (ename, empno) VALUES (?, ?)");
pstmt.setFormOfUse(1, oracle.jdbc.OraclePreparedStatement.FORM_NCHAR);
pstmt.setString(1, ename);
pstmt.setInt(2, empno);
pstmt.execute();                 /* execute to insert into first row */
empno += 1;                      /* next employee number */
ename = "\uFF2A\uFF4F\uFF45";    /* Unicode characters in name */
pstmt.setString(1, ename);
pstmt.setInt(2, empno);
pstmt.execute();                 /* execute to insert into second row */

You can bind or define a Java string against an NCHAR column without explicitly specifying the form of use argument. This implies the following:

  • The Java string is converted to the database character set in the JDBC driver and then to the national character set in the database server.
  • Data can be lost if the database character set does not include all of the characters in the Java string.

In addition, if you bind or define a Java string for a column of SQL CHAR datatypes but specify the form of use argument, performance of the database will be degraded. However, data should not be lost because the national character set is always a larger set than the database character set.
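The contrast with the previous case can be sketched with standard Java charsets, where UTF-8 merely stands in for a larger, Unicode-based national character set such as AL16UTF16: routing data through a larger set adds a conversion step but loses nothing.

```java
import java.nio.charset.StandardCharsets;

public class LargerSetRoundTrip {
    public static void main(String[] args) {
        // Latin-1 text, as might live in a column of a small
        // single-byte database character set.
        String city = "M\u00FCnchen";  // "München"

        // The extra hop through a larger Unicode set costs a
        // conversion but round-trips losslessly.
        byte[] viaLargerSet = city.getBytes(StandardCharsets.UTF_8);
        String back = new String(viaLargerSet, StandardCharsets.UTF_8);

        System.out.println(city.equals(back));  // true: no data loss
    }
}
```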

Java Data Conversion in Unicode

Because Java strings are always encoded in UTF-16, JDBC drivers transparently convert data between the database character set or the national character set and UTF-16. The conversion paths taken differ for the three JDBC drivers:

Data Conversion for the OCI Driver

For the OCI driver, SQL statements are always converted to the database character set by the driver before they are sent to the database for processing. For Java string bind or define variables, Table 6-5 summarizes the conversion paths taken for different scenarios:

Table 6-5 OCI Driver Conversion Path

Form of Use | SQL Datatype | Conversion Path
Const.CHAR (Default) | CHAR | Java String to and from database character set happens in the JDBC driver
Const.CHAR (Default) | NCHAR | Java String to and from database character set happens in the JDBC driver. Data in the database character set to and from national character set happens in the database server
Const.NCHAR | NCHAR | Java String to and from national character set happens in the JDBC driver
Const.NCHAR | CHAR | Java String to and from national character set happens in the JDBC driver. Data in national character set to and from database character set happens in the database server

Data Conversion for the Thin Driver

For the thin driver, SQL statements are always converted to either the database character set or to UTF-8 by the driver before they are sent to the database for processing. The thin driver also notifies the database that a SQL statement requires further conversion before being processed. The database, in turn, converts the SQL statement to the database character set. For Java string bind and define variables, the conversion paths shown in Table 6-6 are taken for the thin driver:

Table 6-6 Thin Driver Conversion Path

Form of Use | SQL Datatype | Database Character Set | Conversion Path
Const.CHAR (Default) | CHAR | US7ASCII or WE8ISO8859P1 | Java String to and from the database character set happens in the thin driver
Const.CHAR (Default) | NCHAR | US7ASCII or WE8ISO8859P1 | Java String to and from the database character set happens in the thin driver. Data in the database character set to and from the national character set happens in the database server
Const.CHAR (Default) | CHAR | non-ASCII and non-WE8ISO8859P1 | Java String to and from UTF-8 happens in the thin driver. Data in UTF-8 to and from the database character set happens in the database server
Const.CHAR (Default) | NCHAR | non-ASCII and non-WE8ISO8859P1 | Java String to and from UTF-8 happens in the thin driver. Data in UTF-8 to and from the national character set happens in the database server
Const.NCHAR | CHAR | Any | Java String to and from the national character set happens in the thin driver. Data in the national character set to and from the database character set happens in the database server
Const.NCHAR | NCHAR | Any | Java String to and from the national character set happens in the thin driver
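The thin driver's conversion of SQL statements to UTF-8 can be illustrated with the standard library. The Japanese column name below is a hypothetical identifier; ASCII characters encode to one byte each in UTF-8, while each of the two Japanese characters takes three bytes:

```java
import java.nio.charset.StandardCharsets;

public class StatementEncodingDemo {
    public static void main(String[] args) {
        // A SQL statement referencing a non-ASCII (hypothetical)
        // schema object name, held in a UTF-16 Java string.
        String sql = "SELECT \u540D\u524D FROM emp";

        byte[] utf8 = sql.getBytes(StandardCharsets.UTF_8);
        System.out.println(sql.length());  // 18 UTF-16 code units
        System.out.println(utf8.length);   // 22 bytes: 16 ASCII + 2 x 3
    }
}
```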

Data Conversion for the Server-Side Internal Driver

The JDBC server-side internal driver runs in the database server, so all conversions are done in the database server. SQL statements specified as Java strings are converted to the database character set. Java string bind or define variables are converted to the database character set if the form of use argument is not specified. Otherwise, they are converted to the national character set.

ODBC and OLE DB Programming with Unicode

You should use Oracle's ODBC and OLE DB drivers to access Oracle9i when using a Windows platform. This section describes how these drivers support Unicode. It includes the following topics:

  • Unicode-Enabled Drivers in ODBC and OLE DB
  • OCI Dependency in Unicode
  • ODBC and OLE DB Code Conversion in Unicode
  • ODBC Unicode Datatypes
  • OLE DB Unicode Datatypes
  • ADO Access

Unicode-Enabled Drivers in ODBC and OLE DB

Oracle's ODBC and OLE DB drivers can handle Unicode data properly without data loss. For example, you can run a Unicode ODBC application containing Japanese data on English Windows if you install Japanese fonts and an input method editor for entering Japanese characters.

In Oracle9i, Oracle provides Windows platform-specific ODBC and OLE DB drivers only. For Unix platforms, contact your vendor.

OCI Dependency in Unicode

OCI Unicode binding and defining features are used by the ODBC and OLE DB drivers to handle Unicode data. These features are independent of NLS_LANG, which means that Unicode data is handled properly, irrespective of the NLS_LANG setting on the platform.

See Also:

"OCI Programming with Unicode"

ODBC and OLE DB Code Conversion in Unicode

In general, no redundant data conversion occurs unless you specify a client datatype that differs from the server datatype. If you bind a Unicode buffer (SQL_C_WCHAR) to a Unicode data column such as NCHAR, for example, the ODBC and OLE DB drivers pass the data between the application and the OCI layer without conversion.

If you do not specify datatypes before fetching, but call SQLGetData with the client datatypes instead, then the conversions in Table 6-7 occur.

Table 6-7 ODBC Implicit Binding Code Conversions

Datatypes of ODBC Client Buffer | Datatypes of the Target Column in the Database | Fetch Conversions | Comments
SQL_C_WCHAR | CHAR, VARCHAR2, CLOB | If the database character set is a subset of the NLS_LANG character set, then the conversions occur in the following order: database character set, NLS_LANG, UTF-16 in OCI, UTF-16 in ODBC | No unexpected data loss. May degrade performance if database character set is a subset of the NLS_LANG character set
SQL_C_CHAR | CHAR, VARCHAR2, CLOB | If the database character set is a subset of the NLS_LANG character set: database character set to NLS_LANG in OCI. If the database character set is not a subset of the NLS_LANG character set: database character set, UTF-16, to NLS_LANG character set in OCI and ODBC | No unexpected data loss. May degrade performance if database character set is not a subset of NLS_LANG character set

You must specify the datatype for inserting and updating operations.

The datatype of the ODBC client buffer is specified when you call SQLGetData, not at fetch time. Hence, SQLFetch does not have the information.

Because the ODBC driver guarantees data integrity, implicit bindings can result in redundant conversions and therefore performance degradation. Your choice is a trade-off between the performance of explicit binding and the usability of implicit binding.

OLE DB Code Conversions

Unlike ODBC, OLE DB only enables you to perform implicit bindings for inserting, updating, and fetching data. The conversion algorithm for determining the intermediate character set is the same as the implicit binding cases of ODBC.

Table 6-8 OLE DB Implicit Bindings

Datatypes of OLE_DB Client Buffer | Datatypes of the Target Column in the Database | In-Binding and Out-Binding Conversions | Comments
DBTYPE_WCHAR | CHAR, VARCHAR2, CLOB | If the database character set is a subset of the NLS_LANG character set: database character set to and from NLS_LANG character set in OCI, then NLS_LANG character set to UTF-16 in OLE DB. If the database character set is not a subset of the NLS_LANG character set: database character set to and from UTF-16 in OCI | No unexpected data loss. May degrade performance if database character set is a subset of the NLS_LANG character set
DBTYPE_CHAR | CHAR, VARCHAR2, CLOB | If the database character set is a subset of the NLS_LANG character set: database character set to and from NLS_LANG in OCI. If the database character set is not a subset of the NLS_LANG character set: database character set to and from UTF-16 in OCI, then UTF-16 to NLS_LANG character set in OLE DB | No unexpected data loss. May degrade performance if database character set is not a subset of NLS_LANG character set

ODBC Unicode Datatypes

In ODBC Unicode applications, use SQLWCHAR to store Unicode data. All standard Windows Unicode functions can be used for SQLWCHAR data manipulations. For example, wcslen counts the number of characters of SQLWCHAR data:

SQLWCHAR sqlStmt[] = L"select ename from emp";
len = wcslen(sqlStmt);

Microsoft's ODBC 3.5 specification defines three Unicode datatype identifiers for clients, SQL_C_WCHAR, SQL_C_WVARCHAR, and SQL_C_WLONGVARCHAR, and three Unicode datatype identifiers for servers, SQL_WCHAR, SQL_WVARCHAR, and SQL_WLONGVARCHAR.

For binding operations, specify datatypes for both client and server using SQLBindParameter. The following is an example of Unicode binding, where the client buffer Name indicates that Unicode data (SQL_C_WCHAR) is bound to the first bind variable associated with the Unicode column (SQL_WCHAR):

SQLBindParameter(StatementHandle, 1, SQL_PARAM_INPUT, SQL_C_WCHAR,
                 SQL_WCHAR, NameLen, 0, (SQLPOINTER)Name, 0, &NameLen);

Table 6-9 represents the datatype mappings of the ODBC Unicode datatypes for the server against SQL NCHAR datatypes.

Table 6-9 Server ODBC Unicode Datatype Mapping

ODBC Datatype | Oracle Datatype
SQL_WCHAR | NCHAR
SQL_WVARCHAR | NVARCHAR2
SQL_WLONGVARCHAR | NCLOB

According to ODBC specifications, SQL_WCHAR, SQL_WVARCHAR, and SQL_WLONGVARCHAR are treated as Unicode data, and are therefore measured in the number of characters instead of the number of bytes.
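The character-versus-byte distinction matters because the byte length of the same three characters varies with the encoding. A quick Java sketch (a Java string standing in for a SQLWCHAR buffer):

```java
import java.nio.charset.StandardCharsets;

public class CharacterVsByteDemo {
    public static void main(String[] args) {
        String data = "\uFF2A\uFF4F\uFF45";  // 3 characters

        // Character count is fixed; byte count depends on encoding.
        System.out.println(data.length());                                   // 3
        System.out.println(data.getBytes(StandardCharsets.UTF_16BE).length); // 6
        System.out.println(data.getBytes(StandardCharsets.UTF_8).length);    // 9
    }
}
```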

OLE DB Unicode Datatypes

OLE DB offers the wchar_t *, BSTR, and OLESTR datatypes for the Unicode client C datatype. In practice, wchar_t is the most common datatype and the others are for specific purposes. The following example assigns a static SQL statement:

wchar_t *sqlStmt = OLESTR("SELECT ename FROM emp");

The OLESTR macro works like the "L" modifier to indicate a Unicode string. If you need to allocate a Unicode data buffer dynamically using OLESTR, use the IMalloc allocator (for example, CoTaskMemAlloc). However, using OLESTR is not the normal method for variable-length data; use wchar_t* instead for generic string types. BSTR is similar: it is a string with a length prefix in the memory location preceding the string. Some functions and methods accept only BSTR Unicode datatypes, so BSTR Unicode strings must be manipulated with special functions such as SysAllocString for allocation and SysFreeString for freeing memory.

Unlike ODBC, OLE DB does not allow you to specify the server datatype explicitly. When you set the client datatype, the OLE DB driver automatically performs data conversion if necessary.

Table 6-10 illustrates OLE DB datatype mapping.

Table 6-10 OLE DB Datatype Mapping

OLE DB Datatype | Oracle Datatype
DBTYPE_WCHAR | NCHAR or NVARCHAR2

If DBTYPE_BSTR is specified, it is assumed to be DBTYPE_WCHAR because both are Unicode strings.

ADO Access

ADO is a high-level API for accessing databases through the OLE DB and ODBC drivers. Most database application developers use the ADO interface on Windows because it is easily accessible from Visual Basic, the primary scripting language for Active Server Pages (ASP) on Internet Information Server (IIS). To the OLE DB and ODBC drivers, ADO is simply an OLE DB consumer or an ODBC application. ADO assumes that OLE DB and ODBC drivers are Unicode-aware components; hence, it always attempts to manipulate Unicode data.


Copyright © 1996, 2002 Oracle Corporation.

All Rights Reserved.