RE: Optimizing Big Index Build on Standard Edition

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Tue, 4 Feb 2014 01:47:53 -0500
Message-ID: <220601cf2175$086990a0$193cb1e0$_at_rsiz.com>



JL is, as usual, correct. Pumping memory up larger and larger is only useful if it keeps the sort entirely in memory, which does not seem to apply in your case.

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Monday, February 03, 2014 6:15 PM
To: Mark W. Farnham; charlottejanehammond_at_yahoo.com; 'ORACLE-L'
Subject: RE: Optimizing Big Index Build on Standard Edition

Back-of-the-envelope calculation coming up:

Once you're into a one-pass sort, increasing the memory doesn't make the sort faster and may actually make it slower. (More CPU to "place" the next row as the sorted set increases in size.)  

For 60GB I think a figure in the order of 300MB would probably be about the minimum memory needed to do a one-pass sort. That would give you roughly 300 sort runs of 200MB which you could merge in one pass at 1MB reads per run. (I would be interested in doing the experiments if anyone had a machine I could borrow - but I wouldn't make it a firm suggestion to the OP since there's no great need to conserve memory or CPU.)
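A rough sketch of that arithmetic, in case it helps (the 60GB sort size and 1MB per-run read size are just the figures above, and it assumes runs of roughly the workarea size): the merge needs one read buffer per run, so the memory M has to satisfy (sort_size / M) * read_size <= M, i.e. M is roughly sqrt(sort_size * read_size).

  -- back-of-envelope only: minimum memory for a one-pass merge of ~60GB
  -- with 1MB reads per run; figures come from this thread, not measurements
  select round(sqrt(60 * power(1024, 3) * power(1024, 2)) / power(1024, 2)) as approx_one_pass_mb
  from dual;
  -- gives roughly 248 (MB), i.e. "in the order of 300MB"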

Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle


From: Mark W. Farnham [mwf_at_rsiz.com]
Sent: 03 February 2014 18:33
To: charlottejanehammond_at_yahoo.com; Jonathan Lewis; 'ORACLE-L'
Subject: RE: Optimizing Big Index Build on Standard Edition

I would focus at this point on trying to get your in-memory sort much, much larger so there is as little temp disk I/O of any kind as possible. JL's writings may well be the best source of information on the ins and outs of that and what the limiting factors are. If someone has a better reference, please pony up.
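If it helps, one common way to do that for a one-off build is to switch the session to manual workarea sizing so the sort isn't capped by the automatic PGA policy - a sketch only, the values are illustrative and the practical per-workarea limits vary by version and platform:

  alter session set workarea_size_policy = manual;
  alter session set sort_area_size = 1073741824;          -- 1GB, illustrative figure only
  alter session set sort_area_retained_size = 1073741824; -- memory retained after the sort completes
  -- then run the create index / alter index ... rebuild in this same session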

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Feb 04 2014 - 07:47:53 CET
