Re: How to read parent session data - without forcing a commit in the parent
Date: Wed, 22 Oct 2008 02:35:53 -0700 (PDT)
Message-ID: <3f61bfdd-6b04-44bf-b79b-84c26c67f73c@u46g2000hsc.googlegroups.com>
On 21 Oct, 19:18, Ed Prochak <edproc..._at_gmail.com> wrote:
> On Oct 21, 8:18 am, nis..._at_gmail.com wrote:
>
>
>
> > Hi Folks,
>
> > I have an interesting problem. Basically, I have an oracle job that I
> > need to speed up as we are exceeding our allocated time window. The
> > oracle job consists of a large number of small changes across multiple
> > 'source' tables and then one of my functions is called to try and
> > process this source data in the fastest possible time - in order to
> > create a single huge table from the uncommitted data. If there is an
> > error in my processing, I simply raise an exception that causes a
> > rollback of the source data. So far so good.
>
> > Now here's the problem. The 'huge' table that I am building consists
> > of 2 distinct sets of SELECT queries on the uncommitted source data
> > tables. There is no problem in doing this serially - other than it
> > takes us 4 hours, which is too long. To speed things up, I wanted to
> > run the 2 distinct SELECT queries in parallel - and therein lies the
> > problem. I cannot get parallel SELECT functions to run without them
> > forcing the source data to become committed. Is there any way to get
> > around this?? Below is a cut-down version of the top-level function
> > that I used to kick off the parallel functions that build the huge
> > table from the source tables. It works - but we cannot roll back
> > because the parallel functions force the source data to become
> > committed!!! Is there no way for a child session to have access to its
> > parent's uncommitted data?
>
> > Thanks folks..
> > Nis.
>
> > PROCEDURE testParallel
> > IS
>
> > p_ba varchar2(100) :='CSR';
> > p_no number :=100;
>
> > v_command1 varchar2(200):= 'nis_test_parallel.func1(' || '''' ||
> > '''' || p_ba || '''' || '''' || ',' || p_no || ')';
> > v_command2 varchar2(200):= 'nis_test_parallel.func2(' || '''' ||
> > '''' || p_ba || '''' || '''' || ',' || p_no || ')';
>
> > BEGIN
>
> > -- When this function is called we already have uncommitted data in
> > -- the source tables; these tables need to be read by the jobs below.
>
> > -- The start_background_job command uses the
> > -- dbms_scheduler.create_job command to kick off the thread
> > start_background_job( 'BUILD_TABLE_PART1',
> >                       v_command1,
> >                       'Background Job to build part1 of the results table');
> >
> > start_background_job( 'BUILD_TABLE_PART2',
> >                       v_command2,
> >                       'Background Job to build part2 of the results table');
>
> > END testParallel;
>
> Have you considered using some staging tables?
> 0. truncate staging tables.
> 1. parent commits the tentative data to the staging tables
> 2. parallel jobs create the results data based on the staging table
> data
> 3. if all looks good, the staging table data is copied to the real
> tables and the results and stage data are committed.
>
> Something like that should work.
> Ed
Hi Ed, thanks for your input. There's a 'parallel' discussion (no pun intended) on this problem in comp.databases.oracle.server. Certainly your idea would have been fine if there were just a couple of tables; however, I'm working on a data warehouse with daily data feeds into potentially 100 different tables. Each table can contain millions of records, so I don't think it would be acceptable for me to duplicate that structure using temporary tables. It's a real shame that I can't get the source data recognised by processes descended from it. Oh well...

Thanks!
Nis.

Received on Wed Oct 22 2008 - 04:35:53 CDT
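PS for anyone finding this thread later: start_background_job isn't shown in the original post, but a wrapper along these lines (parameter choices are my guess) would behave exactly as described. Note in particular that dbms_scheduler.create_job performs an implicit commit in the calling session (unlike the old dbms_job.submit), and the job itself runs in a brand-new session - both of which explain why the "uncommitted" source data ends up committed and cannot be rolled back:

    PROCEDURE start_background_job( p_job_name  IN varchar2,
                                    p_command   IN varchar2,
                                    p_comments  IN varchar2 )
    IS
    BEGIN
       -- create_job commits the calling transaction, and the job runs
       -- the command in a separate session that sees only committed data.
       dbms_scheduler.create_job(
          job_name   => p_job_name,
          job_type   => 'PLSQL_BLOCK',
          job_action => 'BEGIN ' || p_command || '; END;',
          enabled    => TRUE,
          comments   => p_comments );
    END start_background_job;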
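PPS - for the record, Ed's staging flow spelled out with made-up names (stg_src, daily_feed and real_src are placeholders, one staging copy per source table):

    -- 0. Start clean.
    TRUNCATE TABLE stg_src;

    -- 1. Apply the daily feed to the staging copy instead of the real
    --    source table, then commit - the real table is untouched, so
    --    no rollback of it is ever needed.
    INSERT INTO stg_src SELECT * FROM daily_feed;
    COMMIT;

    -- 2. The two scheduler jobs build their halves of the results
    --    table from stg_src; being committed, it is visible to them.

    -- 3. If both jobs succeed, copy staging into the real source
    --    table and commit; on any error, just truncate stg_src.
    INSERT INTO real_src SELECT * FROM stg_src;
    COMMIT;

The sticking point, as I said, is that this roughly doubles the storage for ~100 multi-million-row tables.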