RE: Optimizing Big Index Build on Standard Edition

From: Jonathan Lewis <>
Date: Mon, 3 Feb 2014 23:15:25 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D901DDA31B@exmbx05.thus.corp>

Back of an envelope calculation coming up:

Once you're into a one-pass sort, increasing the memory doesn't make the sort faster and may actually make it slower. (More CPU to "place" the next row as the sorted set increases in size.)

For 60GB I think a figure in the order of 300MB would probably be about the right amount of memory to do a one-pass sort. That would give you roughly 300 sort runs of 200MB, which you could merge in one pass at 1MB reads per run. (I would be interested in doing the experiments if anyone had a machine I could borrow - but I would make it a firm suggestion to the OP since there's no great need to conserve memory or CPU.)
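The back-of-envelope figure follows from the one-pass merge constraint: the workarea M produces roughly (data / M) runs, and merging them in a single pass needs one read buffer per run, so (data / M) * read_buffer <= M, giving M >= sqrt(data * read_buffer). A minimal sketch of that arithmetic (the function name is mine, not an Oracle API):

```python
import math

def one_pass_sort_memory_mb(data_mb, read_buffer_mb=1.0):
    """Rough minimum workarea (MB) for a one-pass sort of data_mb.

    A one-pass sort writes runs of roughly the workarea size, then
    merges all runs in a single pass with one read buffer per run,
    so the workarea M must satisfy (data / M) * read_buffer <= M,
    i.e. M >= sqrt(data * read_buffer).
    """
    return math.sqrt(data_mb * read_buffer_mb)

# 60GB of sort data with 1MB merge reads needs roughly 250MB,
# consistent with the ~300MB figure above.
print(one_pass_sort_memory_mb(60 * 1024))
```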

Jonathan Lewis

From: Mark W. Farnham []
Sent: 03 February 2014 18:33
To:; Jonathan Lewis; 'ORACLE-L'
Subject: RE: Optimizing Big Index Build on Standard Edition

I would focus at this point on trying to get your in-memory sort much, much larger, so there is as little temp disk I/O of any kind as possible. JL's writings may well be the best source of information on the ins and outs of that and what the limiting factors are. If someone has a better reference, please pony up.

Received on Tue Feb 04 2014 - 00:15:25 CET
