RE: DWH varchar2(4000)

From: Xiang Rao <>
Date: Mon, 22 Dec 2014 12:42:51 -0500
Message-ID: <BLU405-EAS2649AB3E854A606F59E204995560_at_phx.gbl>

One issue with varchar2(4000) columns is that when they are included in large aggregations, sorts, or even hash joins, you can run into problems with memory and temp space usage, since work areas are sized from the declared column width rather than the actual data.
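As a rough illustration (the table and column names below are invented for the example), a sort or group-by over a column declared at the maximum width can behave very differently from the same data in a narrower column:

```sql
-- Hypothetical table: column declared far wider than the data it holds.
CREATE TABLE dwh_events (
  event_id   NUMBER,
  event_desc VARCHAR2(4000)   -- actual values average only a few dozen bytes
);

-- A large sort or aggregation over event_desc provisions its sort/hash
-- work area from the declared 4000-byte width, so it can spill to temp
-- far sooner than the same data in a VARCHAR2(100) column would.
SELECT   event_desc, COUNT(*)
FROM     dwh_events
GROUP BY event_desc
ORDER BY event_desc;
```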

Sent from my Windows Phone

From: Ricard Martinez <> Sent: 12/22/2014 12:04 PM
To: Martin Preiss <> Cc: <> Subject: Re: DWH varchar2(4000)

Thanks for that Martin.
Pretty interesting and disturbing at the same time.


On Mon, Dec 22, 2014 at 4:44 PM, Martin Preiss <> wrote:

> Ricard,
> another point that could result in difficulties was mentioned by
> Nikolay Savvinov recently: rows with a potential row length (i.e. the
> sum of the data type sizes of the columns) that is bigger than
> the block size are processed row by row, which suppresses bulk
> operations.
> Regards
> Martin Preiss
> On 22.12.2014 16:01, Ricard Martinez wrote:
> Hi list,
> We have a DWH database on
> The developers/data modelers are creating all the tables with
> varchar2(4000) columns as the default standard. They don't care whether
> they insert 10 bytes, 200, or nothing at all.
> When asked why this standard is in place, the answer is just: "Just
> in case we need it".
> From my point of view this is not a good practice, as it can cause memory
> waste in client applications, as well as garbage data or errors like ORA-01450:
> maximum key length (3218) exceeded.
> But as I'm not a DWH expert, maybe I'm missing information that proves
> that this really is a good practice rather than just a lazy way of
> working.
> I would be glad to hear your opinions, or to be pointed to any doc/test
> that proves me wrong, so I can learn about this.
> Thanks
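For reference, the ORA-01450 error Ricard mentions is straightforward to hit with this "just in case" standard (the names below are invented for illustration). The maximum index key length depends on the block size; the 3218 bytes quoted in the original message corresponds to a 4K block, while an 8K block allows roughly 6398 bytes, so a composite index over two VARCHAR2(4000) columns exceeds the limit regardless of how little data the columns actually hold:

```sql
-- Hypothetical reproduction: two "just in case" VARCHAR2(4000) columns.
CREATE TABLE dwh_customers (
  cust_name VARCHAR2(4000),
  cust_city VARCHAR2(4000)
);

-- On an 8K block tablespace the potential key (~8000 bytes) exceeds the
-- ~6398-byte limit, so on most configurations this fails at creation time:
CREATE INDEX ix_cust ON dwh_customers (cust_name, cust_city);
-- ORA-01450: maximum key length exceeded
```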

Received on Mon Dec 22 2014 - 18:42:51 CET
