RE: Oracle RAC TEMP / TEMP GROUP tablespace performance question

From: <Christopher.Taylor2_at_parallon.net>
Date: Tue, 19 Feb 2013 11:22:29 -0600
Message-ID: <F05D8DF1FB25F44085DB74CB916678E885823B7BB1_at_NADCWPMSGCMS10.hca.corpad.net>



Like I said, I'm not trying to solve a problem :) It's basically an intellectual discussion - sure, temp is not "bad" as-is, but can it be better? That's the goal of this discussion. Since RAC shares tempfiles between the instances, when is it better to have multiple temp tablespaces with a few small tempfiles each versus a couple of temp tablespaces with largish tempfiles? That's the question I'm curious about.

Chris

(From earlier email: My gut 'instinct' says instead of having 2 TEMP tablespaces with 5 tempfiles each, I could have 3 (or more) TEMP tablespaces with 3 tempfiles each (3-node RAC) and 1 group (perhaps more than 1 group?), and get better distribution of sorting and spread the I/O out across more devices if done properly - though I haven't thought too far along that path yet.)
From: Kellyn Pot'vin [mailto:kellyn.potvin_at_ymail.com]
Sent: Tuesday, February 19, 2013 11:11 AM
To: Taylor Christopher - Nashville; gajav_at_yahoo.com; mwf_at_rsiz.com
Cc: oracle-l_at_freelists.org
Subject: Re: Oracle RAC TEMP / TEMP GROUP tablespace performance question

So if temp waits are WAY down on the list, why do you think temp is the issue and not the design or use of your GTT's? I know, I ask those annoying questions... :)

Kellyn Pot'Vin
Senior Technical Consultant
Enkitec
DBAKevlar.com
RMOUG Director of Training Days 2013



From: "Christopher.Taylor2_at_parallon.net<mailto:Christopher.Taylor2_at_parallon.net><mailto:Christopher.Taylor2_at_parallon.net<mailto:Christopher.Taylor2_at_parallon.net>>" <Christopher.Taylor2_at_parallon.net<mailto:Christopher.Taylor2_at_parallon.net><mailto:Christopher.Taylor2_at_parallon.net<mailto:Christopher.Taylor2_at_parallon.net>>> To: kellyn.potvin_at_ymail.com<mailto:kellyn.potvin_at_ymail.com><mailto:kellyn.potvin_at_ymail.com<mailto:kellyn.potvin_at_ymail.com>>; gajav_at_yahoo.com<mailto:gajav_at_yahoo.com><mailto:gajav_at_yahoo.com<mailto:gajav_at_yahoo.com>>; mwf_at_rsiz.com<mailto:mwf_at_rsiz.com><mailto:mwf_at_rsiz.com<mailto:mwf_at_rsiz.com>> Cc: oracle-l_at_freelists.org<mailto:oracle-l_at_freelists.org><mailto:oracle-l_at_freelists.org<mailto:oracle-l_at_freelists.org>> Sent: Tuesday, February 19, 2013 7:11 AM Subject: RE: Oracle RAC TEMP / TEMP GROUP tablespace performance question

I realize now (after receiving a few emails) that my original left something to be desired.

I'm not trying to solve a performance problem really. After reading Riyaj's post http://orainternals.wordpress.com/2012/02/13/temporary-tablespaces-in-rac/ (thanks Gaja), I'm seeing similar behavior, where usage doesn't seem to be balanced across the TEMP tablespaces. (Remember, both my TEMP tablespaces make up 1 tablespace group, TEMPGROUP, and each TEMP tablespace has 5 tempfiles in a 3-node RAC.)
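Something like this shows the spread (untested sketch, off the top of my head - GV$SORT_SEGMENT has one row per instance per temp tablespace, so lopsided USED_BLOCKS across the rows is the imbalance I mean):

-- one row per instance per temp tablespace; uneven used_blocks
-- across tablespaces/instances = the imbalance described above
SELECT inst_id, tablespace_name,
       total_blocks, used_blocks, max_sort_blocks
FROM   gv$sort_segment
ORDER  BY inst_id, tablespace_name;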

To make things more interesting, we have mostly JDBC connections, which leave temp segments behind in the TEMP tablespaces (Metalink notes 802897.1, 1384829.1).
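You can see those leftover segments per session with something like this (again just a sketch, not tested here - it joins GV$TEMPSEG_USAGE back to GV$SESSION so the JDBC programs show up):

-- temp segments still held, grouped by session program;
-- the lingering JDBC sessions from the Metalink notes show up here
SELECT u.inst_id, u.tablespace, s.program, u.segtype,
       SUM(u.blocks) AS blocks
FROM   gv$tempseg_usage u
JOIN   gv$session s
  ON   s.inst_id = u.inst_id
 AND   s.saddr   = u.session_addr
GROUP  BY u.inst_id, u.tablespace, s.program, u.segtype
ORDER  BY blocks DESC;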

My gut 'instinct' says instead of having 2 TEMP tablespaces with 5 tempfiles each, I could have 3 (or more) TEMP tablespaces with 3 tempfiles each (3-node RAC) and 1 group (perhaps more than 1 group?), and get better distribution of sorting and spread the I/O out across more devices if done properly - though I haven't thought too far along that path yet.

So, I guess my question really is this: Does anyone know (test cases/whitepapers, etc.) whether it makes any difference having many temp tablespaces with a few tempfiles each rather than a few temp tablespaces with many tempfiles each?

Current:
TEMPGROUP = 2 TEMP tablespaces = 10 largish tempfiles (2x5)

Thinking about:
TEMPGROUP = 4 TEMP tablespaces = 12 smallish tempfiles (4x3); or could go 5 temp tablespaces with 15 smallish tempfiles and split them across more than 1 tempgroup?
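For clarity, the DDL I have in mind is something like this - names, sizes and paths are made up, and '+DATA' is just a placeholder disk group:

-- one of the 4 tablespaces; repeat for TEMP2..TEMP4
CREATE TEMPORARY TABLESPACE temp1
  TEMPFILE '+DATA' SIZE 8G,
           '+DATA' SIZE 8G,
           '+DATA' SIZE 8G
  TABLESPACE GROUP tempgroup;

-- then point users (or the database default) at the group
ALTER USER app_user TEMPORARY TABLESPACE tempgroup;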

Chris

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Feb 19 2013 - 18:22:29 CET
