Re: How to read parent session data - without forcing a commit in the parent

From: ddf <oratune_at_msn.com>
Date: Tue, 21 Oct 2008 08:28:48 -0700 (PDT)
Message-ID: <859c423b-e6da-488b-9732-7ce16c361436@v72g2000hsv.googlegroups.com>


Comments embedded.
On Oct 21, 8:18 am, nis..._at_gmail.com wrote:
> Hi Folks,

> I have an interesting problem. Basically, I have an oracle job that I
> need to speed up as we are exceeding our allocated time window. The
> oracle job consists of a large number of small changes across multiple
> 'source' tables and then one of my functions is called to try and
> process this source data in the fastest possible time - in order to
> create a single huge table from the uncommitted data. If there is an
> error in my processing, I simply raise an exception that causes a
> rollback of the source data. So far so good.

> Now here's the problem. The 'huge' table that I am building consists
> of 2 distinct sets of SELECT queries on the uncommitted source data
> tables. There is no problem in doing this serially - other than it
> takes us 4 hours which is too long. To speed things up, I wanted to
> run the 2 distinct SELECT queries in parallel - and therein lies the
> problem. I cannot get parallel SELECT functions to run without them
> forcing the source data to become committed. Is there any way to get
> around this??

No.

> Below is a cut-down version of the top-level function
> that I use to kick off the parallel functions that build the huge
> table from the source tables. It works - but we cannot roll back
> because the parallel functions force the source data to become
> committed! Is there no way for a child session to have access to its
> parent's uncommitted data?

No. This is how Oracle's read-consistency model works: readers don't block writers, and writers don't block readers. You cannot 'see' uncommitted changes from another session, because the image of the data presented to your session is built from the committed data plus the undo images of the data undergoing change; other sessions cannot access those changes until they are committed. I suppose you could commit the changes so the child sessions can 'see' them, then use LogMiner to generate the UNDO statements needed to reverse those committed changes should your processing fail:

--
-- Add every redo log to the 'mix' so LogMiner can
-- use them; turn off headings and feedback so the
-- spooled file is a clean, runnable script
--
set heading off feedback off pagesize 0
select 'exec dbms_logmnr.add_logfile('''||member||''')'
from v$logfile

spool add_logfiles.sql
/
spool off

@add_logfiles

--
-- Fire up LogMiner; reading the dictionary from the
-- online catalog makes SQL_UNDO contain readable
-- object and column names instead of object numbers
--
exec dbms_logmnr.start_logmnr(options => -
  DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG)

--
-- Prepare the environment for the output
--
set linesize 5000 trimspool on

--
-- Retrieve the SQL statements to 'undo' the
-- committed changes
--
select sql_undo
from v$logmnr_contents
where seg_owner = upper('&1')

spool undo_committed_changes.sql
/
spool off

--
-- Shut down LogMiner
--
exec dbms_logmnr.end_logmnr

I would carefully check the statements generated by this method and
'weed out' any changes made by other sessions connected as this same
user which you do not want undone.
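As a sketch of that 'weeding out', you could filter V$LOGMNR_CONTENTS on the session that made the changes rather than on the schema owner alone; the SESSION# and SERIAL# values below are hypothetical and would have to be captured from V$SESSION while your batch job is running:

--
-- Restrict the generated undo to changes made by one known
-- session; 1234/5678 are illustrative values only
--
select sql_undo
  from v$logmnr_contents
 where seg_owner = upper('&1')
   and operation in ('INSERT','UPDATE','DELETE')
   and session# = 1234
   and serial#  = 5678
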


> Thanks folks..
> Nis.

> PROCEDURE testParallel
> IS
>
>   p_ba varchar2(100) := 'CSR';
>   p_no number := 100;
>
>   v_command1 varchar2(200) := 'nis_test_parallel.func1(' || '''' || '''' || p_ba || '''' || '''' || ',' || p_no || ')';
>   v_command2 varchar2(200) := 'nis_test_parallel.func2(' || '''' || '''' || p_ba || '''' || '''' || ',' || p_no || ')';
>
> BEGIN
>
>   -- When this function is called we have uncommitted data in the source tables. These tables
>   -- need to be read by
>
>   -- The start_background_job command uses the dbms_scheduler.create_job command to kick off the thread
>   start_background_job( 'BUILD_TABLE_PART1',
>                         v_command1,
>                         'Background Job to build part1 of the results table');
>
>   start_background_job( 'BUILD_TABLE_PART2',
>                         v_command2,
>                         'Background Job to build part2 of the results table');
>
> END testParallel;
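For what it's worth, the behaviour you're seeing is expected: DBMS_SCHEDULER.CREATE_JOB performs an implicit commit in the calling session, and each job then runs in its own session with its own transaction. A minimal illustration (the table and job names here are made up):

--
-- DBMS_SCHEDULER.CREATE_JOB commits the caller's transaction,
-- so the uncommitted source rows become visible to the job
-- sessions and can no longer be rolled back
--
insert into source_tab values (1, 'pending');  -- not yet committed

begin
  dbms_scheduler.create_job(
    job_name   => 'BUILD_TABLE_PART1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin nis_test_parallel.func1(''CSR'',100); end;',
    enabled    => TRUE);
  -- an implicit COMMIT has already happened here;
  -- a ROLLBACK now cannot undo the INSERT above
end;
/
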
David Fitzjarrell
Received on Tue Oct 21 2008 - 10:28:48 CDT
