Re: Pros/cons of using as many table partitions as possible

From: Michael Austin <maustin_at_firstdbasource.com>
Date: Sat, 19 Apr 2008 13:21:35 -0500
Message-ID: <hJqOj.8856$V14.8252@nlpi070.nbdc.sbc.com>


Noons wrote:
> Paulukas wrote, on my timestamp of 19/04/2008 7:30 AM:

>> Oracle 9i had a limit of 64K-1 partitions per table.  10g increased
>> that to over a million (1024K-1).  Is it practical to approach that
>> limit (>500,000 partitions)?  What are the pitfalls?
>>
>> I'm very aware of partition-elimination and partition-wise joins.
>> Assume all queries will appropriately specify the partition-keys.
>>
>> Here are some of the requirements and goals:
>>   a) I'm trying to manage a table that will have about 50 billion rows
>> and about 60 bytes/row.
>>   b) Each row will be partitioned on a year-field for LIST-partitions
>> and a surrogate-key (integer) for HASH-subpartitions.
>>   c) I'd like to use composite list/hash partitioning to support:
>>        - Easy removal of old data (dropping partitions based on year)
>>        - Minimal size of subpartitions due to large hash count.
>>   d) With small-enough subpartition size, I won't need any indexes,
>> since a full scan of an individual subpartition should be very
>> reasonable.  This will save an enormous amount of space (no indexes).
>>   e) Small subpartition size allows easier replication via partition-
>> swapping.
>>   f) Subpartitions for earlier years can also be compressed and their
>> tablespaces can be made readonly.
>>
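A minimal DDL sketch of the composite scheme described above (table and column names are hypothetical, not from the thread; the subpartition count is purely illustrative and would be tuned to hit the target segment size):

```sql
-- Hypothetical composite LIST(year) / HASH(surrogate key) layout.
-- Oracle hash partitioning distributes most evenly when the
-- subpartition count is a power of two.
CREATE TABLE big_fact (
  load_year    NUMBER(4)   NOT NULL,
  surrogate_id NUMBER(12)  NOT NULL,
  payload      VARCHAR2(48)         -- rounds out the ~60 bytes/row
)
PARTITION BY LIST (load_year)
SUBPARTITION BY HASH (surrogate_id)
SUBPARTITIONS 8192
(
  PARTITION p2006 VALUES (2006),
  PARTITION p2007 VALUES (2007),
  PARTITION p2008 VALUES (2008)
);

-- Requirement (c): removing an old year is a single DDL statement.
ALTER TABLE big_fact DROP PARTITION p2006;
```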
>> What concerns me:
>>   a) Query parsing.
>>   b) Huge amount of system info (objects, segments, extents,
>> DBA_TAB_PARTITIONS, etc.).
>>   c) Other things I do not know about!!  :-)
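On concern (b), one quick way to gauge how much dictionary metadata a scheme like this generates is to count the subpartition rows directly (a sketch; assumes DBA-level access to the dictionary views):

```sql
-- How many subpartition metadata rows does each table carry?
SELECT table_owner, table_name, COUNT(*) AS subpartition_count
FROM   dba_tab_subpartitions
GROUP  BY table_owner, table_name
ORDER  BY subpartition_count DESC;
```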
>>
>> I welcome other suggestions for managing such a large table, but
>> please try to address the original question about using a very high
>> number of partitions.
>>

>
>
> Interesting. Is this to keep RFID data
> by any chance?
>
> Jonathan mentioned here a few years ago
> about using thousands of partitions
> in a table without any major problems.
> That was wa-a-a-a-y back in the 9i days.
> With 10g and 11g you'd have much better luck,
> at a guess.
>
> Let's say that at 50 billion rows, partitioning
> into 100 thousand pieces brings the average
> number of rows per partition to 500,000, or
> about 30MB at the 60-byte row size you quote.
> That is quite manageable for a partition scan
> instead of indexing.
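The arithmetic behind those figures can be sanity-checked in one query (decimal megabytes assumed):

```sql
-- 50e9 rows over 100,000 partitions at ~60 bytes/row
SELECT 50000000000 / 100000             AS rows_per_partition,  -- 500,000
       50000000000 / 100000 * 60 / 1e6  AS mb_per_partition     -- 30 MB
FROM dual;
```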
>
> Interesting concept, and quite relevant in these
> days of tens of billion row tables: traditional
> concepts of indexing are not quite applicable
> anymore, are they? Good lateral thinking.

Just make sure you investigate all of the patches for known bugs. One involves processes not releasing locks properly when doing a rollback against a partitioned index (IIRC). Depending on your db design, this can cause LOTS of problems in a very busy OLTP db.
