
Re: large PGA_AGGREGATE_TARGET value problems

From: Mark Bole <makbo_at_pacbell.net>
Date: Wed, 28 Jan 2004 04:15:20 GMT
Message-ID: <sHGRb.18708$gz7.13133@newssvr25.news.prodigy.com>


Telemachus wrote:

> Perhaps it thinks it can fit all of one table in memory and switches access
> plan accordingly?
> 
> Does use of SQL*Analyze show a difference when you play around with the
> value? Although I've a feeling you might need to set WORKAREA_SIZE_POLICY
> = MANUAL and watch the numbers yourselves for S_A_S, H_A_S, B_A_S.
> 
> Alternatively there's always 10053 output to diagnose the optimizer choices,
> although the reading is a little abstruse.
> 
> 
> "James McCudden" <james.mccudden_at_mckesson.com> wrote in message
> news:5b8af1c7.0401270651.4ddd395f_at_posting.google.com...
> 

>>We are currently experiencing problems on systems where we set the
>>PGA_AGGREGATE_TARGET up to a higher value (800MB or more). We ported
>>a decision support (Data Warehouse) system from Sybase 1.5 years ago.
>>Things have generally gone well. In development and early customer
>>use, the P_A_T value of 100-200MB seemed to give adequate performance.
>> More research and large real-life loads have made us increase this up
>>to the recommended max of 40% of physical memory. Now, however, we
>>are starting to see problems where the oracle shadow processes will,
>>when processing a large multi-table join with lots of table/index
>>scans and hash joins, just start running at 100% CPU and never
>>return. If the P_A_T is set lower, or if we use manual memory
>>management, the problem does not happen.
>>
>>I'm working with Oracle on an open TAR for this, but thought I'd throw
>>it out to the community to see if anyone else has experienced this.
>>Our systems are generally:
>>
>>Tru64 unix, ver. 5.1A
>>4-8GB physical memory, plenty of swap
>>2-4 cpu systems
>>Oracle 9.2.0.2 (also happens in 9.2.0.4)
>>(doesn't/hasn't yet happened on HP-UX 11.11 installs)
>>
>>Our applications are all C code using the OCI calls. I have a feeling
>>it's an issue with the P_A_T auto memory management under Tru64 for
>>larger P_A_T values.
>>
>>As an aside, out of curiosity, how many other people run a moderate
>>(100-400GB) data warehouse on Tru64 using essentially only OCI calls
>>in C for the applications? I'm starting to wonder if we are the only
>>ones...
>>
>>Thanks.
>>
>>James

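To try the per-session settings and the 10053 trace suggested above, something along these lines should work on 9.2 (the sizes are only placeholders, not recommendations):

   ALTER SESSION SET workarea_size_policy = MANUAL;
   -- 100 MB per sort/hash workarea, per process (placeholder values only)
   ALTER SESSION SET sort_area_size = 104857600;
   ALTER SESSION SET hash_area_size = 104857600;

   -- optimizer trace for one problem query; the trace file lands in user_dump_dest
   ALTER SESSION SET events '10053 trace name context forever, level 1';
   -- ... run the problem query here ...
   ALTER SESSION SET events '10053 trace name context off';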

If you truly (no pun intended) believe that it is a Tru64-specific issue, then set up the same environment (memory, CPU, version, data) on another platform and show that it works! (Theoretical, of course, since getting the resources to do this is not often easy.)

As mentioned, your best approach is to first look at the execution plan for your worst queries (the large multi-table hash joins, for example) and tune them. Many factors, including memory, number of CPUs, optimizer statistics, and so on, can cause queries that worked fine in "early development and customer use" to stop working well in production.
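A quick way to capture a plan for comparison, assuming PLAN_TABLE exists (created by utlxplan.sql) and using made-up table names purely for illustration, is something like:

   -- fact_sales / dim_customer are invented names; substitute one of your real joins
   EXPLAIN PLAN FOR
     SELECT c.region, SUM(s.amount)
       FROM fact_sales s, dim_customer c
      WHERE s.cust_id = c.cust_id
      GROUP BY c.region;

   -- DBMS_XPLAN is available from 9.2 onward
   SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Run it once with your large P_A_T and once with the lower value (or manual policy) and compare the join methods and orders the optimizer picks.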

Hints and stored outlines are Oracle tools for stabilizing query execution plans (and hence relative execution times) from one environment to the next.
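For example (the table, outline, and category names below are invented), a hint pins a join method in a single statement, while a stored outline captures a whole plan for reuse:

   -- hint: ask for the join order and hash join you saw on the healthy system
   SELECT /*+ ORDERED USE_HASH(c) */ c.region, SUM(s.amount)
     FROM fact_sales s, dim_customer c
    WHERE s.cust_id = c.cust_id
    GROUP BY c.region;

   -- stored outline: record the current plan (needs the CREATE ANY OUTLINE privilege)
   CREATE OR REPLACE OUTLINE sales_by_region_ol FOR CATEGORY dw_outlines ON
     SELECT c.region, SUM(s.amount)
       FROM fact_sales s, dim_customer c
      WHERE s.cust_id = c.cust_id
      GROUP BY c.region;

   -- then have sessions pick up outlines from that category
   ALTER SESSION SET use_stored_outlines = dw_outlines;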

Also, the basic Enterprise Manager tool (OEM) has a nice graphic feature that helps you tune your P_A_T along with other memory settings, so see how that compares across databases.
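The same advice OEM graphs is exposed in V$PGA_TARGET_ADVICE (populated when PGA_AGGREGATE_TARGET is set and statistics collection is on), so a query along these lines gives you numbers you can compare between the Tru64 and HP-UX boxes:

   SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
          estd_pga_cache_hit_percentage AS est_hit_pct,
          estd_overalloc_count
     FROM v$pga_target_advice
    ORDER BY pga_target_for_estimate;

   -- v$pgastat shows how the current target is actually being consumed
   SELECT name, value FROM v$pgastat;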

--Mark Bole

Received on Tue Jan 27 2004 - 22:15:20 CST

