RE: Oracle Read Consistent Overhead

From: Matt McClernon <mccmx_at_hotmail.com>
Date: Mon, 18 Apr 2011 00:59:47 +0000
Message-ID: <COL117-W2778C91A51483DBABED0E7B7910_at_phx.gbl>


>
> How large was the "medium-sized" transaction?

I just repeated the test and the medium-sized transaction was 100,000 rows (an update of one column).
> Read-consistency costs are largely about the number of undo records applied, not
> about the number of blocks in the underlying object, and the number of undo
> records is (generally) related to the number of changes, which often means
> number of rows.

Interesting. I did suspect that the multi-versioning might be row based rather than block based, but I ruled it out because it seemed too inefficient. In my re-run of the simulation today I saw 200,000 consistent reads in the second session, which is 2 CR blocks per row updated. That still seems a little high. More interesting, though, is that when I repeated the test with an index on the MV log the CR count was 3.8 million! That is 38 CR blocks per row updated, which is highly suspicious.
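
For anyone who wants to try something similar, a minimal sketch of the measurement side is below. The table, column and index names are placeholders rather than my actual script, and the MV log part is simplified (MLOG$_T and SNAPTIME$$ are just the Oracle defaults), so don't expect the exact numbers above.

-- Setup: base table plus an MV log so the update also writes log rows
create table t (id number primary key, val number, pad varchar2(100));

insert into t
select level, 0, rpad('x', 100, 'x')
from dual
connect by level <= 100000;
commit;

create materialized view log on t with primary key;

-- Session 1: the "medium-sized" transaction - update one column of
-- 100,000 rows and leave it uncommitted
update t set val = val + 1;

-- Session 2: query the table while session 1 is still open, then look
-- at this session's statistics (snapshot them before and after the
-- query to isolate its cost)
select count(*) from t;

select sn.name, ms.value
from v$mystat ms
join v$statname sn on sn.statistic# = ms.statistic#
where sn.name in ('consistent gets',
                  'data blocks consistent reads - undo records applied');

-- The variation mentioned above added an index to the MV log table
-- before re-running; the column choice here is only illustrative
create index mlog_t_ix on mlog$_t (snaptime$$);
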

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Apr 17 2011 - 19:59:47 CDT
