Re: partitioning join table and normalization

From: <chmanu_at_gmail.com>
Date: Tue, 8 Apr 2008 13:35:33 -0700 (PDT)
Message-ID: <576aa5ab-b42e-431e-b3d2-985e268614ac@k1g2000prb.googlegroups.com>

> > > Performance may or may not improve but maintainability and the
> > > ability to use it for something other than a data-dumpster will.
> > > 10M rows is not a particularly large table. But 180 columns is,
> > > in the vast majority of cases, a really bad idea.

> > 180 columns smacks of bad design.
> > In a really horrible design, 180 indexes on these 180 columns would
> > exist as well.

I know, and I am conscious of that; I agree with you. For context, this table is read-only (select only). The only fields that will ever be modified (and only slightly) are the ones I want to normalize. In fact, among the 180 columns are the 20 I want to store in the second table, but each of those also has a date, a user, and one other piece of information associated with it (so 20 * 4 = 80 columns; the main table would shrink to about 100 columns).
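To make the split concrete, here is a minimal sketch of the design described above: the 20 updatable fields, each with its own date/user/extra columns, move to a child table with one row per field, leaving the ~100 read-only columns in the main table. All table and column names are invented for illustration, and sqlite3 stands in for Oracle:

```python
import sqlite3

# Hypothetical schema for the proposed normalization; names are
# invented, and sqlite3 is used here only so the sketch can run.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE main_table (
    id    INTEGER PRIMARY KEY,
    col_a TEXT,   -- stands in for the ~100 remaining read-only columns
    col_b TEXT
);

-- One row per updatable field, replacing 20 * 4 = 80 wide columns.
CREATE TABLE main_table_attr (
    main_id       INTEGER REFERENCES main_table(id),
    attr_name     TEXT,    -- which of the 20 fields this row holds
    attr_value    TEXT,
    modified_date TEXT,
    modified_by   TEXT,
    extra_info    TEXT,
    PRIMARY KEY (main_id, attr_name)
);
""")

conn.execute("INSERT INTO main_table VALUES (1, 'x', 'y')")
conn.execute(
    "INSERT INTO main_table_attr VALUES "
    "(1, 'status', 'open', '2008-04-08', 'chmanu', NULL)"
)

# Reads join the two tables; updates touch only the narrow child table.
row = conn.execute("""
    SELECT m.col_a, a.attr_value, a.modified_by
    FROM main_table m
    JOIN main_table_attr a ON a.main_id = m.id
    WHERE m.id = 1 AND a.attr_name = 'status'
""").fetchone()
print(row)  # ('x', 'open', 'chmanu')
```

The trade-off is exactly the one discussed in this thread: selects now need a join (or a view over it), but updates hit a narrow table instead of a 180-column one.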

> > To explore this argument further, if you did have this sort of bad
> > structure, would the Oracle CBO still choose the index that would give
> > the best execution plan to satisfy the query? Would it still choose
> > the index that would give the best execution plan for a table with
> > 1000 columns (the limit in 10.2)?
>
> > I don't know the answer. Does anyone? (at some point, surely the cost
> > of determining the cost could be larger that the cost for executing
> > the query)
>
> I don't know the answer either, but I'll make a couple of
> observations.
>
> You can adjust how deep the optimizer will look for various plans.
> With lots of indices, there may be some issue with wasting time
> evaluating those that will never be right. Since the cost is what
> finally determines which plan, it becomes possible to have way-off
> costs coupled with perhaps never getting to the correct plan.
>
> http://jonathanlewis.wordpress.com/2006/12/09/how-many-ways/
> http://groups.google.com/group/comp.databases.oracle.server/browse_th...

Is it possible to store the execution plan persistently, or do I need to put hints in the view?

> Investigate histograms. With a properly normalized system, you can
> say "the more info you give to the optimizer, the better it is likely
> to make a correct decision for costing plans." With too much data,
> you can't say that. The general purpose of histograms is to give the
> optimizer more information on data distribution. So what happens when
> there are too many irrelevant histograms?
>
> Even if it does generally work, performance problems arise where it
> doesn't. This isn't anything different, I just think it could be much
> worse. http://oracledoug.com/serendipity/index.php?/archives/820-Topsy-Turvy...
> **(number of denormalizations)
>
> "...it's hard to know if there are even enough hints (yet) to describe
> every path we might want." - Jonathan Lewis on his Scratchpad.
>
> If you find something wrong, you have to give a simple case to Oracle
> support to get it fixed. It may go to the bottom of the pile if it is
> way out in left field.
>
> As far as performance with the view, note that recent and even
> unreleased patches are still dealing with performance and wrong
> results issues for certain obscure view constructs. So test and let
> us know.

Thank you for the advice; I will test with some data.

I haven't yet read all your links (there are some words I don't understand, so I need a good translator colleague).

Another possibility is to create more tables to join to the first one (but I don't like that option).

Regards,

Chmanu

Received on Tue Apr 08 2008 - 15:35:33 CDT