Re: RE: DWH varchar2(4000)

From: <l.flatz_at_bluewin.ch>
Date: Mon, 22 Dec 2014 21:35:19 +0000 (GMT)
Message-ID: <10724368.44614.1419284119698.JavaMail.webmail_at_bluewin.ch>



I remember back in the '90s we did such stuff because our IDMS/R database was a pain to reorganize. I never did that with Oracle; there was no need for such a precaution, so I have a sense of déjà vu. It is certainly not a very modern design. Generally my reasoning is: make the columns as long as they need to be. If you are wrong, it is no big deal.

----Original Message----
From: jonathan_at_jlcomp.demon.co.uk
Date: 22/12/2014 - 18:53 (GMT)
To: oracle-l_at_freelists.org
Subject: RE: DWH varchar2(4000)
That may be version dependent; do you have any links to demonstrations? As far as sorting is concerned, I've just done a quick test that showed the memory requirement was dictated by the used length, not the declared length (in 12c).
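A minimal sketch of that kind of test (my own illustration, not the actual script; table and comment names are invented) would load identical data into a column declared short and one declared varchar2(4000), sort both, and compare the workarea statistics:

```sql
-- Same data, different declared lengths.
create table t_short (v varchar2(10));
create table t_long  (v varchar2(4000));

insert into t_short select rpad('x', 10, 'x') from dual connect by level <= 100000;
insert into t_long  select rpad('x', 10, 'x') from dual connect by level <= 100000;
commit;

select /* sort_test_short */ v from t_short order by v;
select /* sort_test_long  */ v from t_long  order by v;

-- If sort memory were driven by the declared length, the second cursor
-- would report a much larger workarea here.
select sql_id, operation_type, last_memory_used
from   v$sql_workarea
where  operation_type like 'SORT%';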

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle
From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of Xiang Rao [xiang_rao_at_hotmail.com] Sent: 22 December 2014 17:42
To: ricard.martinez_at_gmail.com; Martin Preiss
Cc: oracle-l_at_freelists.org
Subject: RE: DWH varchar2(4000)
One issue with varchar2(4000) columns is that when these columns are included in large aggregations, sorts, or even hash joins, you can run into problems with memory usage and temp space usage.

Sent from my Windows Phone
From: Ricard Martinez
Sent: 12/22/2014 12:04 PM
To: Martin Preiss
Cc: oracle-l_at_freelists.org
Subject: Re: DWH varchar2(4000)
Thanks for that Martin.
Pretty interesting and disturbing at the same time.

Regards
On Mon, Dec 22, 2014 at 4:44 PM, Martin Preiss <mtnpreiss_at_gmx.de> wrote:
Ricard,
another point that could result in difficulties has been mentioned by Nikolay Savvinov recently: rows with a potential row length (i.e. the sum of the data type sizes of the different columns) that is bigger than the block size result in row-by-row processing and suppress bulk operations:
http://savvinov.com/2014/12/15/4k-bug-is-not-a-bug/

Regards
Martin Preiss
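To illustrate the point above (a sketch of my own, assuming an 8K block size; table names are invented): three varchar2(4000) columns give a potential row length of roughly 12000 bytes, larger than the block, even if the rows actually inserted are tiny.

```sql
-- Potential row length ~12000 bytes > 8K block size,
-- regardless of the actual data inserted.
create table t_wide (
  c1 varchar2(4000),
  c2 varchar2(4000),
  c3 varchar2(4000)
);

-- Per Savvinov's post, a bulk insert into such a table may fall back to
-- row-by-row processing; compare redo generation and elapsed time against
-- the same load into a table declared with realistic column lengths.
insert into t_wide
select rpad('a', 10, 'a'), rpad('b', 10, 'b'), rpad('c', 10, 'c')
from   dual connect by level <= 100000;
```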
On 22.12.2014 16:01, Ricard Martinez wrote:
Hi list,
We have a DWH database on 11.2.0.4.
The developers/data modelers are creating all the tables with varchar2(4000) as the default standard. They don't care whether they insert 10 bytes, 200, or nothing at all. When asked why this standard is in place, the answer is simply: "Just in case we need it." From my point of view this is not good practice, as it can cause memory waste in client applications, as well as data garbage or errors like ORA-01450: maximum key length (3218) exceeded. But as I'm not a DWH expert, maybe I'm missing information that proves this really is good practice, and not just a lazy way of working.
I would be glad to hear your opinions, or to be pointed to any doc/test that proves me wrong, so that I can learn about this. Thanks
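The ORA-01450 risk mentioned above is easy to reproduce (a hypothetical sketch; the exact key-length limit depends on block size, e.g. 3218 bytes for 4K blocks and 6398 for 8K):

```sql
-- With blanket varchar2(4000) declarations, any composite index on two
-- such columns already exceeds the maximum key length at creation time.
create table t_keys (
  k1 varchar2(4000),
  k2 varchar2(4000)
);

create index t_keys_ix on t_keys (k1, k2);
-- fails with ORA-01450: maximum key length (...) exceeded,
-- even if the columns only ever hold a few bytes
```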

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Dec 22 2014 - 22:35:19 CET
