RE: Temp Space performance

From: Matt <>
Date: Thu, 28 May 2015 22:08:55 -0400
Message-ID: <03f801d099b4$6a81d140$3f8573c0$_at_net>

I am running with the following in this 11.2 database. I wish I could run a higher PGA, but I would run out of memory with the other DBs on the box.

alter system set optimizer_dynamic_sampling=7;

alter system set pga_aggregate_target=32G;

I am using HCC compression QUERY HIGH since I am on Exadata.
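For reference, declaring a table with that compression level looks like this; the table and column names are made up for illustration:

```sql
-- Hybrid Columnar Compression, QUERY HIGH level (Exadata and a few
-- other Oracle storage platforms only):
create table sales_hist (
  sale_id  number,
  cust_id  number,
  amount   number
)
compress for query high;
```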

I have some really good results with the partition-wise joins that I will share later this week.


From: [] On Behalf Of Juan Carlos Reyes Pacheco
Sent: Thursday, May 28, 2015 5:44 PM
Cc: ; ORACLE-L
Subject: Re: Temp Space performance

Hello Matt, when you increase the PGA the Oracle optimizer can improve the join path.

There are other variables you can include in your analysis in a test case:

alter system set optimizer_dynamic_sampling=4 scope=both;
alter system set statistics_level=all scope=both;
alter system set session_cached_cursors=2000 scope=spfile;
alter system set temp_undo_enabled=true scope=both;
alter system set memory_target=2000m scope=spfile;
alter system set pga_aggregate_target=800m scope=spfile;

And if you have the licence you can include compressing tables, and indexes in 12c (I haven't tested that yet); before 12c compressing indexes is not a good idea.

2015-05-26 20:21 GMT-04:00 Matt <>:

Thank you Lothar and Randolf for the responses....

I fully agree with tuning temp tablespace usage first. I have increased the PGA, gone manual, and have even played with the underscore settings for parallel query using auto pga_aggregate_target. I was just trying to get every ounce of performance out of the temp tablespaces....

I am running into the Hash Join Buffered sorts that you mentioned in your article and in your presentation.

Today I tested the partition wise join methodology and have seen pga usage go way down.

Thanks, Matt
-----Original Message-----
From: []

On Behalf Of Randolf Geist
Sent: Wednesday, May 20, 2015 2:56 PM
Subject: Re: Temp Space performance

Hi Matt,

+1 to what Lothar mentioned - rather than trying to tune temp tablespaces, I would focus more on trying to avoid / minimize temp space usage.
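Before tuning anything, it is worth measuring who is actually consuming temp space. A minimal query against the standard V$TEMPSEG_USAGE and V$SESSION views (the 8192 block size is an assumption - substitute your temp tablespace's block size):

```sql
-- Current temp segment usage per session, largest consumers first.
-- SEGTYPE distinguishes SORT, HASH, LOB_DATA etc.
select s.sid,
       u.tablespace,
       u.segtype,
       round(u.blocks * 8192 / 1024 / 1024) as mb_used  -- assumes 8K blocks
from   v$tempseg_usage u
       join v$session s on s.saddr = u.session_addr
order  by u.blocks desc;
```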

In case of parallel operations there are two common scenarios for increased temp space usage: the required buffering (mostly HASH JOIN BUFFERED, but it could also be additional BUFFER SORTs), and inappropriate distribution methods, like BROADCASTing a row source of significantly underestimated size, or HASH distribution of a row source of significantly overestimated size, leading to additional BUFFERing requirements (see e.g. ).

Therefore I would check what causes the temp space usage. If it's the buffering (most likely), the most efficient way to address this is making use of full or at least partial partition-wise joins. This of course means the data needs to be partitioned in a suitable way, at least some of the tables, so you need the Partitioning option and the corresponding physical design, which isn't something that can usually be done as a "tuning exercise".
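For a full partition-wise join, both row sources have to be equi-partitioned on the join key. A minimal sketch of the required physical design (table names, columns, and partition count are invented for illustration):

```sql
-- Both tables hash-partitioned on the join key, same partition count:
create table cust (
  cust_id number,
  name    varchar2(100)
)
partition by hash (cust_id) partitions 16;

create table sales (
  sale_id number,
  cust_id number,
  amount  number
)
partition by hash (cust_id) partitions 16;

-- With matching partitioning, a parallel join on cust_id can be
-- performed partition-wise, pairing partition N of cust with
-- partition N of sales - no redistribution, no HASH JOIN BUFFERED:
select /*+ parallel(8) */ c.name, sum(s.amount)
from   cust c
       join sales s on s.cust_id = c.cust_id
group  by c.name;
```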

You can also try to increase the available PGA and/or use BROADCAST instead of HASH distribution, but for large joins this is of rather limited use.
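The distribution method can be forced per join with the PQ_DISTRIBUTE hint, e.g. to broadcast a genuinely small row source instead of HASH-distributing both sides (the tables and aliases here are hypothetical):

```sql
-- pq_distribute(<alias> <outer-distribution> <inner-distribution>):
-- BROADCAST NONE sends every row of d to all PX slaves, so the large
-- fact table f is not redistributed (and not buffered) at all.
select /*+ parallel(8) pq_distribute(d broadcast none) */
       d.attr, sum(f.amount)
from   dim d
       join fact f on f.dim_id = d.dim_id
group  by d.attr;
```

As noted above, this only helps while the broadcast side really is small; broadcasting an underestimated row source is exactly one of the ways temp usage blows up.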

I've talked about the impact of buffering and the dramatic differences the distribution / partitioning method can make, especially when it comes to Exadata / In-Memory, in a webinar I did for AllThingsOracle/RedGate some time ago:


> We are running queries with parallel 8 and outer joining 10 tables.
> Quite a few of the tables are a billion rows and are using HCC query high
> compression. I have tuned the queries and we are good there. The slowness
> we are seeing is in the reading and writing to temp.







Received on Fri May 29 2015 - 04:08:55 CEST
