Re: Pros/cons of using as many table partitions as possible

From: Jonathan Lewis <>
Date: Sun, 20 Apr 2008 08:10:59 +0100
Message-ID: <>

"Paulukas" <> wrote in message
> Oracle 9i had a limit of 64K-1 partitions per table. 10g increased
> that to over a million (1024K-1). Is it practical to approach that
> limit (>500,000 partitions)? What are the pitfalls?
> I'm very aware of partition-elimination and partition-wise joins.
> Assume all queries will appropriately specify the partition-keys.
> What concerns me:
> a) Query parsing.
> b) Huge amount of system info (objects, segments, extents,
> c) Other things I do not know about!! :-)
> I welcome other suggestions for managing such a large table, but
> please try to address the original question about using a very high
> number of partitions.
> Thanks.
> - Paul

I would approach with extreme caution - any bugs, and any flaws in the optimizer, are much more likely to show up as you approach boundary conditions like this.

> b) Each row will be partitioned on a year-field for LIST-partitions
> and a surrogate-key (integer) for HASH-subpartitions.

Have you worked out what that surrogate key implementation means in terms of querying the "public" values for the data? How do you get a key value from the "real" values?

> c) I'd like to use composite list/hash partitioning to support:
> - Easy removal of old data (dropping partitions based on year)
> - Minimal size of subpartitions due to large hash count.

How many years are you keeping by the time you have 500,000 partitions? If it's about 8, then you're looking at 65,536 hash subpartitions to drop, add, or query for any single year.
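The arithmetic behind those numbers can be checked with a short calculation (a sketch: the 500,000 target and the 8-year figure come from the thread; rounding the hash count up to a power of two is the usual recommendation for Oracle hash partitioning, not something stated here):

```python
# Back-of-envelope check on the subpartition count.
# Assumptions: ~500,000 subpartitions in total, 8 years of data retained,
# and the per-year hash subpartition count rounded up to a power of two
# (the usual recommendation for Oracle hash partitioning).

def hash_subpartitions_per_year(total_target, years):
    per_year = total_target / years
    power = 1
    while power < per_year:
        power *= 2
    return power

per_year = hash_subpartitions_per_year(500_000, 8)
total = per_year * 8
print(per_year, total)   # 65536 per year, 524288 in total
```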

Simple test - create a tablespace with a uniform extent size of 4 blocks, then
create an empty table, then add one partition and drop another - see what happens.
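Even the empty table is instructive: with a uniform extent size of 4 blocks, every subpartition segment costs at least one extent. A rough sizing (a sketch; the 8KB block size is an illustrative assumption, not from the thread):

```python
# Space consumed by 500,000 *empty* subpartition segments when each
# segment takes a minimum of one uniform extent of 4 blocks.
# Assumption: an 8KB database block size (not stated in the thread).

block_size = 8 * 1024          # bytes per block, assumed
extent_blocks = 4              # uniform extent size from the test above
subpartitions = 500_000

empty_size_gb = subpartitions * extent_blocks * block_size / 1024**3
print(round(empty_size_gb, 1))  # roughly 15 GB of extents before any data arrives
```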

> d) With small-enough subpartition size, I won't need any indexes,
> since a full scan of an individual subpartition should be very
> reasonable. This will save an enormous amount of space (no indexes).

You suggested about 3TB of data - which would be 6MB of data per partition. To get partition elimination, every query would have to be YEAR = {constant} and ID = {constant} - how many rows per partition do you expect for a single ID, and is it reasonable to spend (a minimum of) one second of physical disk reads per ID? Because that's what you'd get. If your partitioning is basically trying to get one ID per partition, this might be a viable strategy - if your IDs are close to unique within a partition then it's almost certainly not.
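The scan-time claim can be reconstructed with the same sort of arithmetic (a sketch: the 3TB and 500,000 figures come from the thread; the effective multiblock read rate is an illustrative assumption, not a measurement):

```python
# Per-partition size and full-scan time at 500,000 partitions.
# The 3TB total comes from the thread; the effective read rate of
# 6 MB/s for physical multiblock reads is an illustrative assumption.

total_bytes = 3 * 1024**4      # 3TB, from the thread
partitions = 500_000
read_rate = 6 * 1024**2        # bytes per second, assumed

mb_per_partition = total_bytes / partitions / 1024**2
scan_seconds = total_bytes / partitions / read_rate
print(round(mb_per_partition, 1), round(scan_seconds, 2))
# ~6.3 MB per partition -> about a second of physical reads per lookup
```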

> e) Small subpartition size allows easier replication via partition swapping.

Try some swapping after creating the table. In the absence of uniqueness constraints and indexes - or with a little experimentation - I would expect you could get it working perfectly reasonably for any one subpartition.

> What concerns me:
> c) Other things I do not know about!! :-)

The first two on your list would concern me, too. But they can be tested very easily without generating a huge amount of data. It's this last one where I'd bet on some really nasty surprises.


Jonathan Lewis

Author: Cost Based Oracle: Fundamentals

The Co-operative Oracle Users' FAQ
Received on Sun Apr 20 2008 - 02:10:59 CDT
