Re: DSS System -- db_cache_size and pga_aggregate_target

From: Hemant K Chitale <>
Date: Sun, 20 Jul 2008 15:22:34 +0800
Message-Id: <>

No, the vendor hasn't explicitly put in any parallelism (i.e. no DEGREE at the table level and no use of hints). Partitioning is used, and I am given to understand that only the last partition (i.e. the one for the current month) is actively used in each month's run.
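As a side note, the absence of table-level parallelism can be confirmed from the data dictionary; a minimal check might look like the following (the schema name VENDOR_SCHEMA is a placeholder, not taken from the thread):

```sql
-- List any tables in the vendor schema that carry an explicit DEGREE
-- (DBA_TABLES.DEGREE is a padded VARCHAR2, hence the TRIM)
SELECT table_name, TRIM(degree) AS degree
FROM   dba_tables
WHERE  owner = 'VENDOR_SCHEMA'
AND    TRIM(degree) NOT IN ('0', '1', 'DEFAULT');
```

An empty result would be consistent with the statement that no DEGREE has been set at the table level.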

However, from the AWR report given to me I do see 'db file sequential read' waits (although fewer of them than 'db file scattered read' waits).
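Outside of an AWR report, the instance-wide totals for these two wait events can also be pulled directly, for example:

```sql
-- Compare single-block (sequential) vs multiblock (scattered) read waits
-- since instance startup; TIME_WAITED is in centiseconds
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read');
```

A high proportion of 'db file sequential read' in a DSS workload often points at index-driven access (e.g. nested loop joins) rather than the full or partition scans one might expect.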

At 02:22 PM Sunday, Greg Rahn wrote:
>This depends more on the detail of what you mean by DSS in terms of
>workload. Are you using Parallel Query? Is the workload partition
>scans and hash joins?
>"With the exception of parallel update and delete, parallel operations
>do not generally benefit from larger buffer cache sizes. Other
>parallel operations can benefit only if you increase the size of the
>buffer pool and thereby accommodate the inner table or index for a
>nested loop join."
>On Sat, Jul 19, 2008 at 9:34 PM, Hemant K Chitale
><> wrote:
> >
> > I have been informed that advice to increase the db_cache_size makes sense
> > for an OLTP system but not a DSS system.
> > That for a DSS system, the db_cache_size should not be large but the
> > pga_aggregate_target should be large.
> >
> > Are there any "best practices", "test cases", "white papers", or
> > "benchmarks" that indicate this?
>Greg Rahn
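For what it's worth, the sizing advice under discussion amounts to something like the following; the values here are placeholders for illustration only, not recommendations from this thread:

```sql
-- Illustrative only: favour a large PGA target over a large buffer cache
-- for a hash-join / sort-heavy DSS workload. Sizes are made-up examples.
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = BOTH;
ALTER SYSTEM SET db_cache_size       = 2G SCOPE = BOTH;
```

The rationale is that parallel and direct-path reads bypass the buffer cache, while hash joins and sorts spill to disk when PGA work areas are too small, so PGA memory is usually the more productive place to spend RAM in that workload profile.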

Hemant K Chitale

Received on Sun Jul 20 2008 - 02:22:34 CDT
