Re: Preventing delayed block cleanout due to loading a datawarehouse

From: Randolf Geist <>
Date: Tue, 12 Apr 2011 01:00:37 -0700 (PDT)
Message-ID: <>

On Apr 8, 11:02 pm, (Jaap W. van Dijk) wrote:
> If I would load the tables with
> would the blocks be written pre-cleaned out, thus preventing the
> ORA-01555 from happening?

Yes, I think CTAS is the only way of creating "clean" blocks, since the commit SCN is already known as part of the DDL operation. All other DML operations will generate blocks that need to be cleaned out afterwards.
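For example, a CTAS load might look like the following sketch (table and column names are placeholders, and NOLOGGING is optional depending on your recovery requirements):

```sql
-- CTAS: the blocks are written with the commit SCN already known,
-- so no delayed block cleanout is needed afterwards.
-- Table names are illustrative only.
CREATE TABLE sales_fact_new
NOLOGGING
AS
SELECT *
FROM   sales_fact_stage;
```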

If you want to stick to the INSERT approach: what do you do with the tables after loading - is there a gather-statistics job running? (I hope so...)

If yes, and you can afford it, try to gather the statistics using DBMS_STATS serially (I don't know if ANALYZE does a block cleanout - I guess it does/has to, but I haven't checked that recently), which might pose a problem in terms of runtime with that data volume. If you gather them in parallel, the direct path reads performed by the parallel slaves can *not* do the block cleanout persistently: although the cleanout happens while the data is processed in the PGA of the parallel slave, the modified block won't be written back to disk. If you run DBMS_STATS serially, the SELECT it performs will do the delayed block cleanout.
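A serial gather could be sketched like this (owner and table names are placeholders; DEGREE => 1 forces serial execution so the reads go through the buffer cache):

```sql
-- Serial statistics gathering: the conventional (buffered) reads
-- performed by the serial scan clean out the blocks persistently.
-- Owner/table names are illustrative only.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DWH',
    tabname          => 'SALES_FACT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 1,       -- serial execution
    cascade          => TRUE     -- include indexes
  );
END;
/
```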

If gathering statistics serially is too expensive, then executing a serial dummy "SELECT ... FROM TAB" after the load is another way to force the delayed block cleanout.
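Such a dummy scan could be as simple as the following (the FULL hint is just there to make sure every block gets visited; the table name is a placeholder):

```sql
-- Serial full table scan through the buffer cache:
-- visits every block and persists the delayed block cleanout
-- as a side effect. Table name is illustrative only.
SELECT /*+ FULL(t) */ COUNT(*)
FROM   sales_fact t;
```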

Note that all this also applies to the new adaptive serial direct path reads introduced in 11.1. Since you're still on 9.2 it doesn't matter here, but it is interesting to know that when a serial direct read is used, the delayed block cleanout is not written back to disk and has to be repeated by subsequent queries.

See e.g.


Oracle related stuff blog:

Co-author of the "OakTable Expert Oracle Practices" book:

Received on Tue Apr 12 2011 - 03:00:37 CDT