Re: db file sequential read waits on INSERT into non-indexed partitioned table

From: Greg Rahn <greg_at_structureddata.org>
Date: Mon, 26 Sep 2011 20:16:44 -0700
Message-ID: <CAGXkmitW+NLdZ5WpRpOoA_b5QSH_BD2TugfuJF8WSazrVMcK0Q_at_mail.gmail.com>



To elaborate on these points:

I'd wager the reason these wait events are showing up is that the next extent size is relatively small for the hash subpartition segments. Perhaps this is the first insert into an empty range partition, so the initial extent size is just 64K, the default for ASSM. If you are putting 50,000 rows in per array insert, they most likely span all 128 hash subpartition segments, and that is where your concurrency comes from: all 128 segments are growing and allocating new extents at the same time. If you increase your initial extent size (say to 8M, which is what 11.2.0.2 defaults to for partitioned segments via _partition_large_extents), these waits will likely disappear.
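As a rough illustration (the table and column names here are mine, not Wayne's actual DDL), pre-sizing the initial extent in the partition-level storage clause looks something like this:

```sql
-- Hypothetical sketch: start every one of the 128 hash subpartitions
-- at 8M instead of the 64K ASSM default, so a large array insert is
-- not forcing all 128 segments to allocate extents concurrently.
CREATE TABLE stage_trans_demo (
  trans_id   NUMBER,
  trans_date DATE,
  payload    VARCHAR2(4000)
)
PARTITION BY RANGE (trans_date)
SUBPARTITION BY HASH (trans_id) SUBPARTITIONS 128
( PARTITION p2011q3 VALUES LESS THAN (DATE '2011-10-01')
    STORAGE (INITIAL 8M) );
```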

On Mon, Sep 26, 2011 at 4:16 PM, Gaja Krishna Vaidyanatha <gajav_at_yahoo.com> wrote:
> Hi Wayne,
> A few clarifications :
>
> 1) The occurrence of db file sequential read does NOT always imply the existence or use of an index. Although that event does show up when an index is present, it can occur even when there are no indexes in the picture. It simply implies that the server process made a single-block I/O call and encountered a wait. This can happen even on a full table scan or index fast full scan, if the number of blocks remaining to be read (before an extent boundary) is less than the I/O chunk size (db_file_multiblock_read_count). I have observed, on many occasions, an FTS encountering the db file sequential read wait for this very reason.
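One way to see that mix for yourself (the SID value below is a placeholder, not anything from Wayne's system) is to watch the session's wait events while the statement runs:

```sql
-- Hypothetical check: even a "full scan" session can post
-- single-block (sequential) reads alongside multiblock ones.
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = 123  -- placeholder SID
AND    event IN ('db file sequential read',
                 'db file scattered read');
```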
>
> 2) The table STAGE_TRANS_WAYNE is set up as a composite range-hash partitioned table. Thus, I am not surprised to see the occurrence of db file sequential read, especially for the sub-partitions, which are hashed. Other factors may cause this event as well: varying amounts of freespace in the blocks, the hashing being done across the sub-partitions, and the blocks available on the freelist.
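If you want to see how the rows are actually landing across those hash sub-partitions, a quick check after gathering stats (table name taken from the thread; the dictionary view choice is mine) would be:

```sql
-- Hypothetical sketch: row distribution per hash subpartition.
-- NUM_ROWS is only populated after statistics have been gathered.
SELECT partition_name, subpartition_name, num_rows
FROM   user_tab_subpartitions
WHERE  table_name = 'STAGE_TRANS_WAYNE'
ORDER  BY partition_name, subpartition_position;
```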
>
> 3) gc current grant 2-way is surfacing as part of the messaging with the "Table Master Node". It is possible that the node you are currently connected to, where the INSERT is being run (and the blocks are being modified), is NOT the master node for those modified blocks of STAGE_TRANS_WAYNE. Thus, it will have to communicate with the master node that covers the blocks in question before writing to those blocks.
>
> 4) enq: HW contention usually happens when concurrent/parallel INSERT operations occur. You mention yours being a single-threaded operation. Do you have any undesired parallelism happening unbeknownst to you -- for example, a table-level PARALLEL degree set at creation time? Given that it is 0.01 seconds of the total elapsed time, I am not sure I'd spend too much time on it, unless your quest is purely academic :)
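A quick way to rule that out -- the statements below are just a sketch, with the table name taken from the thread -- is to check the dictionary DEGREE and reset it if it was left on unintentionally:

```sql
-- Hypothetical check: anything other than DEGREE = 1 means a
-- serial-looking statement may still be eligible for parallelism.
SELECT table_name, degree, instances
FROM   user_tables
WHERE  table_name = 'STAGE_TRANS_WAYNE';

-- Turn it off if it was set by accident at create time:
ALTER TABLE stage_trans_wayne NOPARALLEL;
```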
>
> Finally, I am just curious. It seems like we are focused on trying to account for 5.97 seconds of waits, out of a total elapsed time of 20.48 seconds. I'd ask the bigger question of why we have 14.51 seconds of CPU time for an INSERT. From past observations, I have seen INSERT statements with scores of bind variables cause all sorts of CPU spikes on systems. The workaround, where feasible, is to populate, manipulate and process using a Global Temporary Table. It works magic on the system in terms of CPU reduction :)
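For what it's worth, a minimal sketch of that GTT approach (all object names and columns below are invented for illustration, not from Wayne's schema) might look like:

```sql
-- Hypothetical sketch: stage and massage the rows in a temporary
-- segment, then load the real table in one direct-path pass.
CREATE GLOBAL TEMPORARY TABLE stage_trans_gtt (
  trans_id   NUMBER,
  trans_date DATE,
  payload    VARCHAR2(4000)
) ON COMMIT DELETE ROWS;

-- ... populate and manipulate stage_trans_gtt here ...

INSERT /*+ APPEND */ INTO stage_trans_wayne
SELECT trans_id, trans_date, payload
FROM   stage_trans_gtt;
```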

-- 
Regards,
Greg Rahn
http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 26 2011 - 22:16:44 CDT
