
re "About having a large SGA for a DataWarehouse"

From: Hemant K Chitale <hkchital_at_singnet.com.sg>
Date: Tue, 20 Jul 2004 22:40:48 +0800
Message-Id: <6.1.0.6.0.20040720223853.02457c88@pop.singnet.com.sg>

 

My thanks to all who responded.
I have combined the responses and will use them at my workplace.

My query to ORACLE-L :

At my site we'll be migrating a [currently very small 20GB] Corporate DataWarehouse
from 32-bit 8.1.7 Solaris to 32-bit 9i Linux. It will grow rapidly with Manufacturing Datamarts
to probably 500GB in a year.

Some of the DataMart teams are pushing for a very large SGA, while I would rather have a
large PGA_AGGREGATE_TARGET [we are not using 10g and its Automatic Memory Management].
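For context, the split I have in mind looks something like this (a minimal init.ora
sketch -- the sizes are illustrative placeholders, not a recommendation):

   # keep the buffer cache moderate; let sort/hash work areas grow in the PGA
   db_cache_size        = 1024M
   shared_pool_size     = 400M
   pga_aggregate_target = 2048M
   workarea_size_policy = AUTO    # 9i: work areas governed by the PGA target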

My rebuttal was :
We must consider how large we really need the SGA to be. An overly large SGA
[even assuming it is not so large as to cause excessive paging and swapping
at the OS level]
can result in (a query sketch for spotting these symptoms follows the list):

  1. Performance impact on checkpoints -- more dirty buffers to write, so longer checkpoints and longer DBWR write times
  2. Performance impact on free buffer scans -- longer LRU chains to search for a free buffer
  3. Performance impact on the cache buffers chains latch -- more buffers per latch means each latch may be held more frequently
  4. Delayed block cleanouts -- modified blocks remaining in memory still need cleanout later and can contribute to ORA-01555 errors
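
A rough way to check for these symptoms (a sketch against the 9i V$ views; run
before and after any resize):

   REM free buffer / write complete waits show whether DBWR is keeping up
   SELECT event, total_waits, time_waited
   FROM   v$system_event
   WHERE  event IN ('free buffer waits', 'write complete waits', 'buffer busy waits');

   REM misses and sleeps here point at contended buffer chains
   SELECT name, gets, misses, sleeps
   FROM   v$latch
   WHERE  name = 'cache buffers chains';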

Comments / Recommendations / Corrections from this list, please ?



Response from Dennis Williams :

   I agree with you. The theory behind a large buffer cache for an OLTP database is that certain database blocks will be used quite often, and if those blocks are already in memory, performance is much better.

   With a DW, the situation may be different, depending on how the DW is designed. A DW query usually spans a lot of blocks, so the chance of a particular block still being in memory when it is needed again drops rapidly. Because many operations are aggregations, full table scans are the rule; that is why the Partitioning option is often worth the additional cost. In a full table scan, Oracle doesn't even try to keep the blocks in the buffer cache.
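
   To illustrate what partitioning buys (a sketch -- the table and partition names are made up), a range-partitioned fact table lets those full scans touch only the partitions a query actually needs:

      CREATE TABLE sales_fact (
        sale_date  DATE,
        store_id   NUMBER,
        amount     NUMBER
      )
      PARTITION BY RANGE (sale_date) (
        PARTITION p2004q1 VALUES LESS THAN (TO_DATE('2004-04-01','YYYY-MM-DD')),
        PARTITION p2004q2 VALUES LESS THAN (TO_DATE('2004-07-01','YYYY-MM-DD')),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
      );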

   There may be exceptions. You may have some smaller tables that are frequently referenced. Consider creating a KEEP pool.
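
   For example (the size and table name here are hypothetical), on 9i the KEEP cache is sized separately and tables are assigned to it:

      # init.ora / spfile: carve out a separate KEEP cache
      db_keep_cache_size = 200M

      -- then assign the small, hot tables to it
      ALTER TABLE dim_product STORAGE (BUFFER_POOL KEEP);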

   Best idea -- test! Don't speculate; conduct tests to see what yields the best performance.



Response from Thomas Mercadante :
We currently have a 1-terabyte warehouse. Its SGA is currently set at 4GB, with the PGA target at 2500M. I'm not sure how "big" your users want your SGA to be. The real answer (in my opinion) is that it needs to be as large as it needs to be. Picking a number out of a hat to start with might be the wrong approach, but you have to start somewhere.
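On 9i, one way to take the hat out of the picture is the PGA advice view -- a sketch, to be run after a representative workload has gone through the instance:

   SELECT ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
          estd_pga_cache_hit_percentage,
          estd_overalloc_count
   FROM   v$pga_target_advice
   ORDER  BY pga_target_for_estimate;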

If you can post how many users you expect to be querying the warehouse during the day, we all might be able to help.



Response from Khedr Waleed :
Data warehouses usually require PQO (the Parallel Query Option), which uses direct path reads that gain nothing from a big buffer cache.
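
This is easy to demonstrate (a sketch -- the table name is hypothetical): run a parallel full scan and watch the direct-read statistic climb while the buffer cache stays cold:

   -- PX slaves read the segment with direct path I/O, bypassing the cache
   SELECT /*+ FULL(s) PARALLEL(s, 8) */ COUNT(*) FROM sales_fact s;

   SELECT name, value
   FROM   v$sysstat
   WHERE  name = 'physical reads direct';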

Response from Mladen Gogala :

They will also not benefit much from MTS, so allocating large chunks of memory
to the large pool is also pointless. Hash and sort area sizes will need to be
increased dramatically, typically to values like 64M per process.
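
Note that on 9i those per-process sizes only take effect under the manual
work-area policy; with PGA_AGGREGATE_TARGET and the automatic policy they are
ignored for dedicated-server sessions. A sketch of the manual route:

   -- honoured only when the work-area policy is MANUAL
   ALTER SESSION SET workarea_size_policy = MANUAL;
   ALTER SESSION SET sort_area_size = 67108864;   -- 64M
   ALTER SESSION SET hash_area_size = 67108864;   -- 64M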

Hemant K Chitale
Oracle 9i Database Administrator Certified Professional http://web.singnet.com.sg/~hkchital
[1]"A man's reputation is what other people think of him; his character is
what he really is."

Received on Tue Jul 20 2004 - 09:44:41 CDT

