Re: 90GB table on Windows 2000

From: Richard Foote <richard.foote_at_bigpond.com>
Date: Wed, 9 Oct 2002 22:53:48 +1000
Message-ID: <ZtVo9.49475$g9.141726@newsfeeds.bigpond.com>


Hi Jeff,

You describe the "alternative" way of partitioning a table, common before Oracle introduced partitioning.

The problem of course is that with the view approach the optimizer is not so smart at partition pruning. With a partitioned table, Oracle will simply prune the partitions and access only those that are relevant. With a complex view, all the data needs to be unioned, which is not going to be as efficient. Also, Oracle is able to perform certain parallel operations on partitioned tables which it can't on manually partitioned tables (version dependent). Also, joins to other like-partitioned tables are incredibly efficient as the optimizer knows to perform partition-wise joins. And of course you have all the necessary maintenance operations (such as add and drop/exchange partition) for maintaining a window of data.
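
For example, something along these lines (table, column and partition names are all just made up to illustrate, and the staging table for the exchange is assumed to already exist with a matching structure):

    create table sales (
      sale_date   date    not null,
      txn_key     number  not null,
      store_id    number,
      product_id  number,
      qty         number,
      amount      number
    )
    partition by range (sale_date) (
      partition p2002_q1 values less than (to_date('2002-04-01','YYYY-MM-DD')),
      partition p2002_q2 values less than (to_date('2002-07-01','YYYY-MM-DD')),
      partition p2002_q3 values less than (to_date('2002-10-01','YYYY-MM-DD')),
      partition p2002_q4 values less than (to_date('2003-01-01','YYYY-MM-DD'))
    );

    -- The optimizer prunes this to p2002_q3 only; the other partitions
    -- are never touched.
    select sum(amount)
    from   sales
    where  sale_date >= to_date('2002-08-01','YYYY-MM-DD')
    and    sale_date <  to_date('2002-09-01','YYYY-MM-DD');

    -- Rolling window maintenance: both are quick dictionary operations,
    -- not row-by-row deletes and inserts.
    alter table sales exchange partition p2002_q4 with table sales_q4_stage;
    alter table sales drop partition p2002_q1;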

I totally agree that maintaining global indexes can be a bit of a pain, hence the advantages of local indexes.

I think that partitioning sounds the go for the OP. By appropriately partitioning the data, tuning the application and creating the right mix of local/global indexes, there are good grounds for improved performance.
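
For instance (made-up names again, carrying on from the sketch above):

    -- Local index: equi-partitioned with the table, so each partition's
    -- slice of the index can be built, rebuilt or dropped on its own,
    -- and drop/exchange partition doesn't touch the rest of it.
    create index sales_dt_txn_li on sales (sale_date, txn_key) local;

    -- Global index: one structure across all partitions. Handy for
    -- lookups that don't include the partition key, but partition
    -- maintenance marks it UNUSABLE unless you rebuild it (or, in 9i,
    -- add UPDATE GLOBAL INDEXES to the maintenance DDL).
    create index sales_store_gi on sales (store_id);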

Cheers

Richard
"Jeff Kommers" <kommers_at_ll.mit.edu> wrote in message news:76Eo9.93$I7.6228_at_llslave.llan.ll.mit.edu...
> The index creation is an interesting problem that I've dealt with
> before in a very similar situation. Basically we had point-of-sale
> records and we had indexed on the date/time column and a
> transaction key column. It is a huge pain to have to re-index
> 3 billion rows, and the operation tends to fail owing to out-of-temp
> space problems.
>
> Here's a suggestion that works if your data updates are incremental
> and don't affect any existing data (e.g. loading a new day of sales
> records in). Partition your table and indexes. You can use
> built-in features for this or "do-it-yourself" in the following way.
> Create a set of tables with identical schemas for each time period,
> e.g. "create table week1 (...); create table week2 (...);" etc.
> Then index each table separately. "create index idxweek1 on
> week1(...); create index idxweek2 on week2(...);" etc. Access your
> data through a view that is the union of your tables: "create
> view mydata as select * from week1 union all
> select * from week2 union all ..." etc.
> It's pretty quick for the optimizer to find which table has the data
> relevant to a given query as long as it can use the index lookups. Now
> you can drop/create/index/whatever your data in weekly chunks that
> are much easier to manage. In particular, if you bulk-load a new
> day's worth of data, then re-indexing the current week (maybe
> 60 million rows) of your 3 billion rows of data is much faster than
> re-indexing *everything* just because you loaded one more day's
> data into your database.
>
> Good luck
>
> Jeff
>
>
> "Jim Kennedy" <kennedy-family_at_attbi.com> wrote in message
> news:fQBo9.11290$ST4.26543_at_rwcrnsc53...
> > 1. Post the query and the explain plan and the tkprof.
> > 2. More disks would probably speed it up.
> > 3. Try to compress the index: alter index blah rebuild compress;
> > (it will rebuild it, but redundant entries will be compressed and
> > faster for use by queries; inserts might slow down a little.)
> > Jim
> >
> > "Andras Kovacs" <andkovacs_at_yahoo.com> wrote in message
> > news:412ebb69.0210080408.1777a9f2_at_posting.google.com...
> > > I am having problems maintaining a very large database. In fact the
> > > product has just been installed and we are trying to fine-tune the
> > > application. All the transactions are recorded into a single table
> > > that can't be split (but can be partitioned).
> > >
> > > Here are the facts :
> > > - This table has 6 attributes and 1 index on 2 of the attributes;
> > > the structure is very simple.
> > > - No more than 10 concurrent users on the system to query data
> > > - Table records 6000 rows/min and it grows 246 MB/day
> > > - Its size by the end of the year is expected to be 90 GB and
> > > it will hold 3 153 600 000 rows ...
> > > - At the end of the year all data older than 1 year will be archived
> > > in another table
> > > - So this table will not grow beyond 90GB
> > > - We are going to upgrade Oracle to the Enterprise edition
> > > for partitioning and bitmap indexes
> > > - The system runs on Windows 2000 Advanced Server
> > > - We have Compaq hardware with two 1266 MHz CPUs
> > > - 2.4 GB RAM
> > > - Controller N1: (Disk C 17 GB Raid 1: holds OS),
> > > (Disk D 17 GB Raid 0: holds temp space)
> > > (Disk E 17 GB Raid 1: holds Oracle )
> > > - Controller N2: (Disk F 34 GB Raid 5: holds indexes)
> > > (Disk H 101GB Raid 0+1: holds data)
> > >
> > >
> > > My questions are :
> > >
> > > 1. What type of backup should we use? (We are thinking about
> > > replication and incremental backups, or maybe a third machine.)
> > >
> > > 2. Our write performance is very good. However we have some problems
> > > with reads (at present we have 15 GB of data and 320 000 000 rows).
> > > The first read for a given query takes 52 seconds. The second time,
> > > the query runs in 12 seconds with a 100% cache hit ratio. What type
> > > of hardware (controllers and disks) should we use to improve the
> > > 52-second first read? Is there anything we can do to reduce the
> > > 12-second cached reads?
> > >
> > > 3. I have tried to rebuild the index on the table after having
> > > dropped it. It is still running ... I had to configure a 15 GB
> > > temporary tablespace. Any advice to speed up the index
> > > reconstruction?
> > >
> > > 4. What would be the benefit of switching from NT to Unix?
> > >
> > > 5. If somebody has a similar-sized system, could you tell us what
> > > type of hardware you have?
> > >
> > > Thanks for your time, Andras
> >
> >
>
>
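
A minimal sketch of what the daily maintenance could look like with built-in partitioning, reusing the made-up names from the earlier examples (worth verifying the syntax against your version): after a bulk load into the current partition, only that partition's slice of the local index needs rebuilding, not all 3 billion rows.

    -- Rebuild just the local index partition touched by the load.
    alter index sales_dt_txn_li rebuild partition p2002_q4 nologging;

    -- Or let Oracle rebuild whatever local index partitions the load
    -- left unusable on that table partition.
    alter table sales modify partition p2002_q4 rebuild unusable local indexes;
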
Received on Wed Oct 09 2002 - 07:53:48 CDT
