Sane number of table partitions in a DWH

From: Milen Kulev <>
Date: Mon, 30 Mar 2009 18:47:55 +0200
Message-ID: <>

Hallo Listers,
I am on site by a pretty creative customer. Developers want to partition a (fact) table(s) to the smallest slice possible. The number of calculated/estimated partitions is 200 000 - 300 000 (Yep, this is NOT a typing error !). My questions:
1) What is the "manageable" number of partitions (from a performance point of view, i.e. parse times) that you have seen at client sites?
2) Any parameters (including underscore ones ...) to reduce the hard-parse times caused by the enormous number of partitions?
3) With so many partitions, what is the influence on fragmentation of the shared pool? Any experiences?
4) Is multi-column partitioning a safe bet (bugs)?
Is it running stably?
Are any special one-off patches needed on top?

The fact tables are ~ 700 - 1000 GB in size.
At the moment the partitioning scheme is range(date)/hash(location_id). No partition exchange is used, and none will be. Data is relatively volatile (UPDATEs, DELETEs) plus the usual INSERTs (some kind of operational DWH).
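For reference, the current range(date)/hash(location_id) scheme would look roughly like the sketch below. Table and column names, the quarterly range granularity, and the subpartition count are all invented for illustration; the point is only to show how the composite partition count multiplies (range partitions x hash subpartitions), which is how the proposed design reaches six figures.

```sql
-- Hypothetical DDL sketch of the composite scheme described above.
-- With e.g. daily range partitions over 3 years (~1100 partitions)
-- and ~256 hash subpartitions each, the total segment count lands
-- in the 200,000-300,000 range the developers are proposing.
CREATE TABLE sales_fact (
    sale_date    DATE          NOT NULL,
    location_id  NUMBER        NOT NULL,
    amount       NUMBER(12,2)
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (location_id) SUBPARTITIONS 8
(
    PARTITION p_2009_q1 VALUES LESS THAN (TO_DATE('2009-04-01','YYYY-MM-DD')),
    PARTITION p_2009_q2 VALUES LESS THAN (TO_DATE('2009-07-01','YYYY-MM-DD'))
);
```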

Application type = DWH (~ 4.5 TB)
The application sends SQL with literals (static partition pruning).
Oracle Version =
OS = SuSE 9 SP4

I will be grateful for any links, presentations, experiences, relevant Metalink notes, etc.

Best Regards

