
RE: Do selects block truncates?

From: Mark W. Farnham <>
Date: Fri, 21 Dec 2007 14:07:08 -0500
Message-ID: <001e01c84404$afd97080$>

I believe the actual guarantee is read consistency IF a result set is delivered without error. I do know that the Oracle VLDB group's actual vote in requesting TRUNCATE was to leave it up to Oracle whether or not to deliver an error on a par with "snapshot too old" in the eventuality a select was in progress at the time of the TRUNCATE. There was also discussion of TRUNCATE IMMEDIATE versus naked TRUNCATE, precisely because of the issue you raise as well as the mutating-structure issue. I didn't think they ever implemented the distinction between TRUNCATE and TRUNCATE IMMEDIATE, but I've been wrong before. I *think* fully testing this requires true multiprocessors and observation of the timing of the parse, but Jared's results look correct to me for the case he is testing.
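For anyone who wants to reproduce this, a minimal two-session sketch of the kind of test I assume Jared ran (table name is made up, and the exact outcome varies by version):

```sql
-- Session 1: start a long-running query against the table.
-- Any sufficiently slow full scan will do.
SELECT /*+ FULL(t) */ COUNT(*)
  FROM big_table t;

-- Session 2: while session 1 is still fetching, try to truncate.
TRUNCATE TABLE big_table;

-- Possible outcomes, depending on release and timing:
--  * the TRUNCATE proceeds (breaking the parse lock) and the
--    select later fails, e.g. ORA-08103 "object no longer exists";
--  * the TRUNCATE waits or errors out instead.
```

The interesting variable is whether the select's parse lock holds off the DDL or gets broken by it.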

Further complicating the matter is the truncation of a cluster-stored object, including single-table clusters. Even though you might think they could optimize that case out, they really haven't in any version I've tested: in that eventuality you essentially have to execute an unfiltered row delete on the table in the cluster and flush all the dirty blocks.
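To illustrate why the cluster case is the expensive one (object names here are hypothetical): you can't truncate a clustered table directly at all, so emptying it means either a full delete or truncating the whole cluster.

```sql
CREATE CLUSTER emp_cluster (deptno NUMBER(2));

CREATE TABLE emp_clu (
  empno  NUMBER(4),
  deptno NUMBER(2)
) CLUSTER emp_cluster (deptno);

-- Truncating the table directly fails:
TRUNCATE TABLE emp_clu;
-- ORA-03292: Table to be truncated is part of a cluster

-- So you either delete the rows (full undo/redo, dirty blocks
-- to flush) or truncate the entire cluster:
DELETE FROM emp_clu;
-- or
TRUNCATE CLUSTER emp_cluster;
```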

The reason there was a lot of pressure for TRUNCATE to be implemented as quickly as possible at the time was partly rooted in the old dictionary-managed DROP TABLE problem: freeing many extents with the tablespace online used an extraordinarily costly algorithm. Since TRUNCATE (keeping storage) didn't free those extents, having many extents wasn't much of a problem and you could quickly empty out a table. Plus there was the convenience of not having to redo grants and the like. Those assembled thought of TRUNCATE as an AXE to be wielded for time-critical operations, so I really thought they were going to allow breaking of parse locks by TRUNCATE, which seems to be what Jared's test shows. There are additional side issues, but I'm also wondering whether TRUNCATE behaves differently when ASSM bitmap storage is involved.
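For reference, the "keeping storage" variant mentioned above is spelled out in the SQL syntax itself (table name hypothetical; the default storage clause behavior can differ by version):

```sql
-- Empty the table but keep its allocated extents, sidestepping
-- the costly extent-freeing in a dictionary-managed tablespace:
TRUNCATE TABLE big_table REUSE STORAGE;

-- The alternative deallocates extents back to the tablespace:
TRUNCATE TABLE big_table DROP STORAGE;
```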

Some releases of Oracle apparently also block TRUNCATE while buffers in cache are dumped, so being paired with a big select that loads lots of blocks into cache might delay the TRUNCATE (although, aside from the cluster case, I've never been able to figure out why Oracle would need to write those buffers at all and couldn't delay marking them free until after the TRUNCATE dictionary operations were complete). Hmm - RAC and cache fusion, anyone?

If someone from Oracle kernel development is lurking and is allowed to spill the gory details from looking at the code, that will save enumerating what I think are some pretty hairy cases in generating a comprehensive and accurate answer to the OP's question.

If any of all y'all get around to case-by-case testing before I do (or someone can post exactly what the code is intended to do, or GASP! finds it clearly written out in documentation somewhere), please let us know.

If I've made some silly error in this analysis, please clue me in.



Received on Fri Dec 21 2007 - 13:07:08 CST
