Re: partitioning join table and normalization

From: joel garry <joel-garry_at_home.com>
Date: Tue, 8 Apr 2008 11:49:22 -0700 (PDT)
Message-ID: <67dce61d-e4e9-4c41-918f-7bd52e2488f6@s33g2000pri.googlegroups.com>


On Apr 8, 9:08 am, Barry Bulsara <bbulsar..._at_hotmail.com> wrote:
> On Apr 8, 4:41 pm, DA Morgan <damor..._at_psoug.org> wrote:
>
> > chm..._at_gmail.com wrote:
> > > Hello,
>
> > > I'm working on a database which contains a big table TD (10 million
> > > rows) with many columns (180).
>
> > > I wonder whether performance would be better if I normalized some
> > > of the columns (about 20).
>
> > > So this would be the new table NORM_FIELD:
> > > ID_DATA (ID reference to the original table)
> > > FIELD_ID
> > > FIELD_DATA
> > > + 3-4 fields for update tracking (date, user, ...)
>
> > > TD is partitioned on 2 columns.
> > > Is it possible to partition NORM_FIELD with the same criteria as
> > > TD?
> > > Even if not, which is the better solution:
> > > - copy the partitioning columns into NORM_FIELD so rows land in the
> > > same partition (and use them in the join condition)
> > > - create a new partitioning criterion for NORM_FIELD (like the last
> > > digit of the ID, to get the best distribution)
>
> > > I don't want to change the application code, so I will rename my table
> > > and create a view with the old name, which will be a join with another
> > > view built using this method:
> > >http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/d...
>
> > > Will performance be OK if I make this change?
> > > What is your experience with this?
>
> > > Thanks for your answers.
>
> > > Chmanu
>
> > > PS: It's possible I haven't been clear because of my poor English; in
> > > that case, please ask me to explain.
>
> > Performance may or may not improve but maintainability and the
> > ability to use it for something other than a data-dumpster will.
>
> > 10M rows is not a particularly large table. But 180 columns is,
> > in the vast majority of cases, a really bad idea.
> > --
> > Daniel A. Morgan
> > Oracle Ace Director & Instructor
> > University of Washington
> > damor..._at_x.washington.edu (replace x with u to respond)
> > Puget Sound Oracle Users Group www.psoug.org
>
> Daniel
>
> > 10M rows is not a particularly large table. But 180 columns is,
> > in the vast majority of cases, a really bad idea.
>
> 180 columns smacks of bad design.
>
> In a really horrible design, 180 indexes on these 180 columns would
> exist as well.
>
> To explore this argument further, if you did have this sort of bad
> structure, would the Oracle CBO still choose the index that would give
> the best execution plan to satisfy the query? Would it still choose
> the index that would give the best execution plan for a table with
> 1000 columns (the limit in 10.2)?
>
> I don't know the answer. Does anyone? (At some point, surely the cost
> of determining the cost could be larger than the cost of executing
> the query.)

I don't know the answer either, but I'll make a couple of observations.

You can adjust how deep the optimizer will look for various plans. With lots of indexes, there may be some issue with wasting time evaluating ones that will never be right. Since cost is what finally determines which plan gets chosen, it becomes possible to have way-off costs coupled with perhaps never getting to the correct plan.
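
For what it's worth, a rough sketch of what I mean. These are the standard tools, but treat the details as from memory: the 10053 event is the usual way to see what the optimizer actually evaluated, and the permutation cap became the hidden parameter "_optimizer_max_permutations" in 10g, so it is normally only touched at Support's direction.

  -- Dump an optimizer trace for one problem query, then read the trace
  -- file in user_dump_dest to see which join orders and indexes were
  -- actually costed:
  ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
  -- run the problem query here
  ALTER SESSION SET EVENTS '10053 trace name context off';

  -- The depth of the search itself is capped by optimizer_max_permutations
  -- (9i) / "_optimizer_max_permutations" (10g, hidden), for example:
  ALTER SESSION SET "_optimizer_max_permutations" = 2000;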

http://jonathanlewis.wordpress.com/2006/12/09/how-many-ways/
http://groups.google.com/group/comp.databases.oracle.server/browse_thread/thread/d91f5c65ee5e2355/91b269133fa1ce18?lnk=st&q=#91b269133fa1ce18

Investigate histograms. With a properly normalized system, you can say "the more info you give to the optimizer, the more likely it is to make a correct decision when costing plans." With too much data, you can't say that. The general purpose of histograms is to give the optimizer more information on data distribution. So what happens when there are too many irrelevant histograms?
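
To make that concrete, here's a rough sketch with DBMS_STATS (TD is the poster's table, but FIELD_A and FIELD_B are made-up names standing in for the handful of columns that really show up in skewed predicates):

  -- Build histograms only on the columns that matter, not on all 180:
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
      ownname    => USER,
      tabname    => 'TD',
      method_opt => 'FOR COLUMNS SIZE 254 FIELD_A FIELD_B',
      cascade    => TRUE);
  END;
  /

  -- Compare that against 'FOR ALL COLUMNS SIZE 254', which would put a
  -- histogram on every one of the 180 columns whether relevant or not,
  -- or 'FOR ALL COLUMNS SIZE AUTO', which lets Oracle decide from
  -- recorded column usage and data skew.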

Even if it does generally work, performance problems arise where it doesn't. This isn't anything different; I just think it could be much worse, multiplied by the number of denormalizations. http://oracledoug.com/serendipity/index.php?/archives/820-Topsy-Turvy.html

"...it's hard to know if there are even enough hints (yet) to describe every path we might want." - Jonathan Lewis on his Scratchpad.

If you find something wrong, you have to give a simple case to Oracle support to get it fixed. It may go to the bottom of the pile if it is way out in left field.

As far as performance with the view goes, note that recent and even unreleased patches are still dealing with performance and wrong-results issues for certain obscure view constructs. So test and let us know.
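
For anyone who wants to test, here is a bare-bones sketch of the rename-plus-view idea the original poster describes, using his first option (copying the partition key into NORM_FIELD). All the names, the data types, the single-column range partitioning, and the two pivoted fields are made up for illustration; the real table is partitioned on two columns and has far more fields.

  ALTER TABLE td RENAME TO td_base;

  CREATE TABLE norm_field (
    id_data    NUMBER         NOT NULL,   -- reference back to td_base
    part_key   DATE           NOT NULL,   -- copy of td_base's partition key
    field_id   NUMBER         NOT NULL,
    field_data VARCHAR2(4000),
    upd_date   DATE,
    upd_user   VARCHAR2(30)
  )
  PARTITION BY RANGE (part_key)
  ( PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE) );

  -- Keep the old name as a view so the application is untouched; joining
  -- on the partition key as well as the id gives partition pruning and
  -- partition-wise joins a chance.
  CREATE OR REPLACE VIEW td AS
  SELECT b.id_data,
         b.part_key,
         -- ... the columns left in td_base ...
         f1.field_data AS field_1,
         f2.field_data AS field_2
  FROM   td_base b
  LEFT   JOIN norm_field f1
         ON  f1.id_data  = b.id_data
         AND f1.part_key = b.part_key
         AND f1.field_id = 1
  LEFT   JOIN norm_field f2
         ON  f2.id_data  = b.id_data
         AND f2.part_key = b.part_key
         AND f2.field_id = 2;

Also note the view is effectively read-only as far as the application is concerned; if the application inserts or updates through the old name as well, you'd need INSTEAD OF triggers on it, which is more code to test.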

jg

--
@home.com is bogus.
http://www.popjudaica.com/chosen-product.php?model=doyaandta