
Re: Opinion on table structure and accessibility

From: joel garry <joel-garry_at_home.com>
Date: 19 Mar 2007 13:37:54 -0700
Message-ID: <1174336674.836173.255440@p15g2000hsd.googlegroups.com>


On Mar 17, 4:42 pm, "Oracle Data Miner" <shyamva..._at_gmail.com> wrote:
> On Mar 16, 5:20 pm, "joel garry" <joel-ga..._at_home.com> wrote:
>
> > On Mar 16, 7:45 am, "gl" <stin..._at_pd.jaring.my> wrote:
>
> > > Hey guys, need expert opinion on table design and access. For a loan
> > > record which is going to store 300 months of principal outstanding and
> > > interest outstanding, which is a better choice?
>
> > > 1) Create a table with data structure as
> > > loan_no, month_no, prin_os, int_os
>
> > > 2) create a table with
> > > loan_no, p_mth1, p_mth2, p_mth3 ... p_mth300.
>
> > > The first method will have millions of records, which will cause I/O
> > > contention; the second method will have issues with selecting data
> > > randomly. Is there any other way to approach this design? Thank you.
>
> > There is an option in Oracle called partitioning. When you access the
> > data, the optimizer can do "partition pruning" to cull out the data
> > you are not interested in. Also, you can use it for rapid data loads
> > or truncation if you are using a 300 month window. That is one reason
> > to go with the first method.
>
> > The only reason to go with the second method is to have people laugh
> > at you.
>
> > There may be other ways, but there is insufficient problem definition
> > to say much else.
>
> > jg
> > --
> > @home.com is bogus.
> > "The board's greatest contribution in 2006 was introducing the word
> > 'pretexting' into the American lexicon." http://www.signonsandiego.com/uniontrib/20070315/news_1b15hp.html
> > Yeah, we used to call it Social Engineering.
>
> Also, you need to consider whether it is for OLTP (transaction
> processing) or for a data warehouse (DW). In a DW scenario,
> sometimes de-normalized structures win due to querying needs.
> See http://OracleBIWA.org for some related webcasts.
> Thanks
> Shyam
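
For anyone following along, here is a rough sketch of the two structures under discussion, plus a range-partitioned version of the first to show where partition pruning fits in. Table, column, datatype, and partition names are my own illustrative assumptions, not from the original post:

-- Option 1: one row per loan per month (normalized)
CREATE TABLE loan_schedule (
    loan_no   NUMBER       NOT NULL,
    month_no  NUMBER(3)    NOT NULL,   -- 1..300
    prin_os   NUMBER(15,2),            -- principal outstanding
    int_os    NUMBER(15,2),            -- interest outstanding
    CONSTRAINT loan_schedule_pk PRIMARY KEY (loan_no, month_no)
)
-- Range partitioning on month_no lets the optimizer prune partitions
-- and supports rolling loads/truncates over a 300-month window.
PARTITION BY RANGE (month_no)
(
    PARTITION p_m060 VALUES LESS THAN (61),
    PARTITION p_m120 VALUES LESS THAN (121),
    PARTITION p_m180 VALUES LESS THAN (181),
    PARTITION p_m240 VALUES LESS THAN (241),
    PARTITION p_m300 VALUES LESS THAN (MAXVALUE)
);

-- Option 2: one wide row per loan, one repeating column per month (truncated)
CREATE TABLE loan_wide (
    loan_no NUMBER NOT NULL PRIMARY KEY,
    p_mth1  NUMBER(15,2),
    p_mth2  NUMBER(15,2),
    p_mth3  NUMBER(15,2)
    -- ... p_mth300, plus the matching interest-outstanding columns
);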

Shyam's point would be true, but the OP's statement that "the second method will have issues with selecting data randomly" implies that particular denormalization would not be a win. I have no trouble with denormalization for specific performance issues or for a formal DW approach (indeed, I've had great success with the former on certain intractable problems), but the small amount of info in the OP doesn't point there. Of course, there is some implication of possible data mining simply in the words "loan record." My assumption is that he is talking about OLTP ("randomly" being the clue) and that DW would come later.

The OP's assertion that having millions of records will cause I/O contention suggests he doesn't really understand normalization or how Oracle works, unless he knows something he's not saying. Even if it is a DW where he will be doing queries like "how many people are 3 months late," that sort of denormalization would likely be wrong; he would need to know more about DW design.
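
To make the "3 months late" example concrete: against the row-per-month structure it is a plain aggregate, and the month predicate is exactly what lets the optimizer prune partitions. A minimal sketch, using the same assumed names as above and assuming "late" means interest still outstanding in each of the last three scheduled months:

SELECT COUNT(*) AS loans_behind
FROM  (SELECT loan_no
       FROM   loan_schedule
       WHERE  month_no BETWEEN :current_month - 2 AND :current_month
       AND    int_os > 0
       GROUP  BY loan_no
       HAVING COUNT(*) = 3);

-- The month_no predicate is what enables partition pruning.  Against the
-- 300-column table, the same question needs hand-written logic across the
-- repeating columns, which is why that structure works against ad-hoc queries.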

Thanks for the link, though; it will be useful to many.

jg

--
@home.com is bogus.
http://www.signonsandiego.com/news/nation/20070319-1144-europe-a380toamerica.html
Received on Mon Mar 19 2007 - 15:37:54 CDT
