Re: DWH varchar2(4000)

From: Martin Preiss <>
Date: Mon, 22 Dec 2014 17:44:08 +0100
Message-ID: <>


Another point that could result in difficulties was mentioned by Nicolay Savvinov recently: rows with a potential row length (i.e. the sum of the data type sizes of the different columns) that is bigger than the block size result in row-by-row processing and suppress bulk operations:
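A quick back-of-the-envelope check of that threshold, as a sketch: the column sizes below are made-up illustration values, the block size is assumed to be the common 8 KB, and length bytes and row overhead are ignored.

```python
# Does the *potential* row length (sum of declared column sizes) exceed
# the block size? Three varchar2(4000) columns are already enough.
BLOCK_SIZE = 8192                 # assumed 8 KB block size

declared_sizes = [4000, 4000, 4000]   # illustrative column declarations

potential_row_length = sum(declared_sizes)
print(potential_row_length)                # 12000
print(potential_row_length > BLOCK_SIZE)   # True -> row-by-row processing
```

Note that this depends only on the declared sizes, not on the bytes actually stored, which is why "just in case" varchar2(4000) columns trip it even for short data.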


Martin Preiss

On 22.12.2014 16:01, Ricard Martinez wrote:
> Hi list,
> We have a DWH database on
> The developers/data modelers are creating all the tables with
> varchar2(4000) as the default standard. They don't care whether they
> insert 10 bytes, 200, or nothing.
> When asked about why this standard is in place, the answer is just:
> "Just in case we need it".
> From my point of view this is not good practice, as it can cause memory
> waste in client applications, as well as data garbage or errors like
> ORA-01450: maximum key length (3218) exceeded.
> But as I'm not a DWH expert, maybe I'm missing information that proves
> that this really is good practice, and not just a lazy way of
> working.
> I would be glad to hear your opinion, or to be pointed to any doc/test
> that proves me wrong, so I can learn about this.
> Thanks
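The memory-waste point in the quoted message can be put into rough numbers. This is only a sketch: many client drivers size their fetch buffers from the declared column width rather than the actual data, and the array size, column count, and width below are made-up illustration values.

```python
# Rough client-side fetch-buffer demand when every column is declared
# varchar2(4000) and the driver allocates buffers from declared widths.
ARRAY_SIZE = 100       # rows fetched per round trip (illustrative)
NUM_COLS = 50          # columns, all declared varchar2(4000) (illustrative)
DECLARED_BYTES = 4000  # bytes reserved per value from the declaration

buffer_bytes = ARRAY_SIZE * NUM_COLS * DECLARED_BYTES
print(buffer_bytes // (1024 * 1024))   # 19 -> about 19 MB per open cursor
```

Even if every row actually holds only a few bytes per column, the buffer allocation is driven by the declaration, which is the "memory waste on client applications" concern above.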

Received on Mon Dec 22 2014 - 17:44:08 CET
