Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: Is creating a table with 1000 partitions a stupid thing to do?

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 16 Jun 1999 17:12:44 +0100
Message-ID: <929550911.936.0.nnrp-13.9e984b29@news.demon.co.uk>


One of the overheads of partitioning is that a parallel query executing against a partitioned object issues a segment checkpoint against every segment that may be involved in the query - including segments that are, to the human eye, clearly irrelevant.

At 1,000 data partitions you may find that a query that should execute very rapidly will actually spend a LONG time handling segment checkpoints - on a recent test (version I can't remember) a 3 second query was preceded by 40 seconds of checkpointing on 700 partitions.

The duration of the checkpoint is also affected by the checkpointing strategy in the Oracle kernel, which I believe may have changed dramatically in the 8.1 release, and by the size of db_block_buffers.

The way to reduce the cost of segment checkpoints is to move as many partitions as possible into read-only tablespaces, but this does not apply in your case.
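For anyone to whom the read-only technique does apply, it might be sketched as follows (hypothetical example; the table, partition, index, and tablespace names are invented): move a partition holding static data into its own tablespace, rebuild its local index partition, then mark the tablespace read-only so it drops out of segment checkpointing.

```sql
-- Hypothetical sketch: isolate a static partition in its own
-- tablespace, then mark that tablespace read-only
-- (all object names here are invented for illustration).
ALTER TABLE sales MOVE PARTITION sales_1998
  TABLESPACE ts_sales_1998;

-- Moving a partition invalidates its local index partitions,
-- so rebuild them before use.
ALTER INDEX sales_date_idx REBUILD PARTITION sales_1998
  TABLESPACE ts_sales_1998_idx;

ALTER TABLESPACE ts_sales_1998 READ ONLY;
```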

Personally, I would look at partitioning into a (fairly arbitrary) 100 partitions to reduce each partition to a conveniently manageable 1Gb, then put one partition per tablespace for the convenience and performance benefit when doing hot backups.
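A layout along those lines might look like this (illustrative only - the column names, partition boundaries, and tablespace names are invented): range-partition on customer id in bands of 10 customers, with one tablespace per partition, plus a local index for the date-range queries.

```sql
-- Hypothetical sketch: 100 range partitions on customer_id,
-- 10 customers per partition, one tablespace per partition
-- (names p001..p100 and ts_cust_p001..ts_cust_p100 are invented).
CREATE TABLE customer_data (
  customer_id  NUMBER(4)     NOT NULL,
  event_date   DATE          NOT NULL,
  payload      VARCHAR2(200)
)
PARTITION BY RANGE (customer_id) (
  PARTITION p001 VALUES LESS THAN (11)       TABLESPACE ts_cust_p001,
  PARTITION p002 VALUES LESS THAN (21)       TABLESPACE ts_cust_p002,
  -- ... partitions p003 to p099 follow the same pattern ...
  PARTITION p100 VALUES LESS THAN (MAXVALUE) TABLESPACE ts_cust_p100
);

-- A local index supports the queries that also specify a date range.
CREATE INDEX customer_data_dt ON customer_data (customer_id, event_date)
  LOCAL;
```

Because every query specifies the customer id, the optimizer can prune to a single partition, and the LOCAL index keeps each index partition equi-partitioned with its data partition.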

--

Jonathan Lewis
Yet another Oracle-related web site: www.jlcomp.demon.co.uk

Mike Streeton wrote in message <376769ff_at_kaos.fastnet.co.uk>...
>We are currently looking to create a very large table (>100GB,
>100,000,000 rows). The only key that is always specified in every query
>is a customer id, which ranges from 1 to 1000. We are looking to
>partition the table based around this key. Most of the queries will be
>trawling the whole of a customer's data, although some will also specify
>a date range, which we can index locally. Not all partitions will be in
>separate tablespaces. This is a reporting database with a large number
>of online connected users running canned queries.
>
>Questions:
>Is there a better way of doing this?
>What is the overhead of having a large number of partitions?
>Would standard index/clusters work better?
Received on Wed Jun 16 1999 - 11:12:44 CDT
