Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: INSERT...SELECT pegs CPU, but is waiting on scattered read?

From: Daniel Fink <Daniel.Fink_at_Sun.COM>
Date: Thu, 29 Apr 2004 11:13:25 -0600
Message-id: <>

WAITs (or timed events) are emitted into the trace file upon completion. So the db file scattered read has already completed (with an elapsed time of 0, which makes me wonder if timed_statistics is turned on). The logical next step is that the data returned by the scattered read (8 blocks' worth) is processed: probably logical I/Os, sorting, hash joins, etc., all CPU-intensive. Once the processing is completed, another physical I/O call is made and the process repeats. If the between-I/O processing is the real problem, you will see the waits separated by seconds or minutes.
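Since an ela= 0 on every wait suggests timing may be off, a quick sanity check from the session before re-tracing might look like this (standard Oracle commands; run them in your own session, and note that level 8 is the usual "with waits" trace level):

```sql
-- Check whether timed statistics are currently enabled
SELECT value FROM v$parameter WHERE name = 'timed_statistics';

-- Turn timing on so the ela= values in the trace are meaningful
ALTER SESSION SET timed_statistics = TRUE;

-- Re-enable extended SQL tracing with wait events (level 8)
ALTER SESSION SET events '10046 trace name context forever, level 8';
```

With timing on, the gaps between WAIT lines (and the ela= values themselves) tell you whether the time is going to I/O or to the processing in between.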

One system I've worked on performed hash joins in a CPU-starved environment. The execution plan w/hash join took 10 minutes. The full table scans of each table in the query took about 2 minutes each, for a grand total of 4 minutes of physical I/O processing. Last I checked, 4 != 10 (even with a weak dollar). Once we changed the execution plan to use nested loops w/indexes, the processing time dropped to less than 5 minutes. This is one of the blind spots of the CBO w/out system stats.
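For what it's worth, steering the optimizer from a hash join to nested loops can be as simple as a couple of hints. A minimal sketch (table, alias, and index names here are all hypothetical, not from the original query):

```sql
-- Force a nested-loops join driven by the row source order in the FROM
-- clause, probing a (hypothetical) index on the detail table, instead of
-- letting the CBO pick a CPU-hungry hash join.
SELECT /*+ ORDERED USE_NL(d) INDEX(d order_dtl_idx) */
       h.order_id, d.amount
  FROM order_hdr h, order_dtl d
 WHERE d.order_id = h.order_id;
```

Whether this actually wins depends on selectivity and how starved the CPU is; the point is only that the plan change is cheap to test.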

If the system is cpu-starved, I'd avoid cpu-intensive operations such as sorting and hash joins. Also look at the number of indexes on the target table and see if they can be reduced. Without seeing the whole trace file, this is where I'd start.
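Checking the index overhead on the target table is a one-liner against the data dictionary (table name below is a placeholder; USER_INDEXES is the standard view):

```sql
-- List the indexes that must be maintained on every inserted row
SELECT index_name, uniqueness
  FROM user_indexes
 WHERE table_name = 'MY_TARGET_TABLE';
```

Every index listed is extra CPU and I/O on each row of the INSERT...SELECT, so pruning unused ones can pay off directly.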

Now, for the Mladen version....
The Goody's Headache Powder Miller Genuine Draft Mobil Synthetic Oil Ebay Google Digital Equipment Corporation VBC42 had great pit stops to get the data from the files, but we really had problems getting through all the traffic on the CPU. The stops and starts made it impossible to get up to full speed.


"Jesse, Rich" wrote:

> Hey all,
> I'm trying to help our folks get a procedure down from 18 hours
> without a complete redesign (which it desperately needs), so I run a 10046
> trace on it from the test system. It merrily hops along the trail of
> INSERT...SELECTs (five of them, looped five times to create five tables),
> until the last INSERT...SELECT on the first loop. The trace shows:
> WAIT #26: nam='db file scattered read' ela= 0 p1=7 p2=57145 p3=8
> ad nauseam, but each of these lines in the trace takes at least a couple
> of seconds to show while viewing the trace using "tail -f". Previous data
> in the trace showed up with a typical "tail -f" batching of several lines or
> pages per second. While in this "state", HP-UX's GlancePlus ("gpm") shows
> the server process pegging the single CPU on the test system, while doing
> little or no I/O.
> Sometimes, the process breaks loose and continues on its I/O thrashing, then
> back to the CPU-hogging super slow mode. My knee-jerk is that the data's
> buffered during the CPU-hogging, but I'm at a loss as to what steps to take
> next to determine what's really going on.
> As this process is creating a mini-DM, there's going to be significant I/O.
> I plan on reducing the 25 FTSs (five loops of five INSERT...SELECTs) to 1 to
> eliminate most of the I/O, but I'm curious as to the troubleshooting of this
> from a systems perspective -- as though I didn't have access to the SQL.
> I know, I know. "Finish with Cary's book." My six-month-old isn't enjoying
> the "Method R" bedtime stories anymore, so the book's been relegated to my
> "library". <sigh>
> TIA,
> Rich
> Rich Jesse System/Database Administrator
> QuadTech, Sussex, WI USA

Received on Thu Apr 29 2004 - 12:13:39 CDT
