RE: RE: bytes vs chars

From: Neil Chandler <neil_chandler_at_hotmail.com>
Date: Mon, 14 Mar 2016 17:57:19 +0000
Message-ID: <DUB113-W136A318B9D83145F31E6A9285880_at_phx.gbl>



Sometimes I wonder if these emails are sent just to wind me up, but there is an important reason why you should not just define VARCHAR2 columns really large "just in case".

Should the defined (maximum) row size of a table approach or exceed the size of your Oracle block (in most cases around 8K), Oracle will no longer be able to perform set-based operations against the table. This significantly increases the redo requirement and causes performance problems.

This becomes even easier to do when correctly using Unicode character sets such as AL32UTF8 with CHAR semantics.
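To see why the semantics matter, here is a minimal sketch (the table name and the euro-sign test value are purely illustrative, and it assumes an AL32UTF8 database):

  create table t_semantics
  ( c_byte varchar2(4 BYTE),
    c_char varchar2(4 CHAR) );

  -- the euro sign takes 3 bytes in AL32UTF8, so four of them = 4 characters / 12 bytes
  insert into t_semantics (c_char) values ('€€€€');  -- succeeds: the limit is 4 characters
  insert into t_semantics (c_byte) values ('€€€€');  -- fails: ORA-12899 value too large for column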

For example, if you use the following table:

  create table test_tab1
  ( col1 date,
    col2 varchar2(1000 CHAR),
    col3 varchar2(1000 CHAR),
    col4 varchar2(1000 CHAR) );

This table will have a defined row size of over 12,000 bytes, as each varchar2(1000 CHAR) column converts to a varchar2(4000) under the covers to cope with the fact that it takes between 1 and 4 bytes to store each character.
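You can see this in the data dictionary (a minimal sketch against the example table above, again assuming an AL32UTF8 database):

  select column_name, data_length, char_length, char_used
  from   user_tab_columns
  where  table_name = 'TEST_TAB1';

  -- expect DATA_LENGTH = 4000 (with CHAR_LENGTH = 1000 and CHAR_USED = 'C')
  -- for COL2..COL4, plus 7 bytes for the DATE column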

Making columns large "just in case" has real world consequences. Having very "wide" tables tends to hit the same problem (as well as a number of others).
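A quick way to spot tables whose declared width has crept past the block size (a sketch only; it sums the declared column lengths from the dictionary, not the actual data, and assumes an 8K block):

  select table_name, sum(data_length) as declared_row_bytes
  from   user_tab_columns
  group by table_name
  having sum(data_length) > 8192
  order by 2 desc;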

Aside: If you do a lot of string manipulation in your application, you can suffer performance problems using AL32UTF8 due to the variable character length. Use AL16UTF16 instead, as it uses a fixed 2 bytes per character (only the rare supplementary characters take 4). It is the default character set for NVARCHAR2s, and it is also what Oracle uses internally to store CLOBs in a multi-byte database.
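If you are not sure which character sets your database is using, the standard dictionary query is (the values shown in the comment are just an example):

  select parameter, value
  from   nls_database_parameters
  where  parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

  -- e.g. NLS_CHARACTERSET = AL32UTF8 (VARCHAR2/CHAR/CLOB)
  --      NLS_NCHAR_CHARACTERSET = AL16UTF16 (NVARCHAR2/NCHAR/NCLOB)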

regards

Neil Chandler

> To: oracle-l_at_freelists.org
>
> Why are you even worrying about it.
>
> Make it VARCHAR2(255) or VARCHAR2(1000) and save yourself resizing it AGAIN in the future.
>
> IMHO
> Dave
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Mar 14 2016 - 18:57:19 CET