Re: How to read parent session data - without forcing a commit in the parent

From: joel garry <joel-garry_at_home.com>
Date: Wed, 22 Oct 2008 10:37:34 -0700 (PDT)
Message-ID: <86f853a3-f43c-41ce-b7e6-956584e9f06f@31g2000prz.googlegroups.com>


On Oct 22, 2:26 am, nis..._at_gmail.com wrote:
> On 21 Oct, 19:04, joel garry <joel-ga..._at_home.com> wrote:
>
> > On Oct 21, 6:26 am, nis..._at_gmail.com wrote:
>
> > > Hi Folks,
>
> > > I have an interesting problem. Basically, I have an Oracle job that I
> > > need to speed up as we are exceeding our allocated time window. The
> > > Oracle job consists of a large number of small changes across multiple
> > > 'source' tables, and then one of my functions is called to try to
> > > process this source data in the fastest possible time - in order to
> > > create a single huge table from the uncommitted data. If there is an
> > > error in my processing, I simply raise an exception that causes a
> > > rollback of the source data. So far so good.
>
> > > Now here's the problem. The 'huge' table that I am building consists
> > > of 2 distinct sets of SELECT queries on the uncommitted source data
> > > tables. There is no problem in doing this serially - other than that
> > > it takes us 4 hours, which is too long. To speed things up, I wanted
> > > to run the 2 distinct SELECT queries in parallel - and therein lies
> > > the problem. I cannot get the parallel SELECT functions to run without
> > > them forcing the source data to become committed. Is there any way to
> > > get around this? Below is a cut-down version of the top level function
> > > that I used to kick off the parallel functions that build the huge
> > > table from the source tables. It works - but we cannot roll back,
> > > because the parallel functions force the source data to become
> > > committed! Is there no way for a child session to have access to its
> > > parent's uncommitted data?
>
> > > Thanks folks..
> > > Nis.
>
> > > PROCEDURE testParallel
> > > IS
> > >   p_ba varchar2(100) := 'CSR';
> > >   p_no number        := 100;
> > >
> > >   -- Each '''' is a one-character string holding a single quote, so
> > >   -- v_command1 evaluates to: nis_test_parallel.func1(''CSR'',100)
> > >   v_command1 varchar2(200) := 'nis_test_parallel.func1(' || '''' || '''' ||
> > >                               p_ba || '''' || '''' || ',' || p_no || ')';
> > >   v_command2 varchar2(200) := 'nis_test_parallel.func2(' || '''' || '''' ||
> > >                               p_ba || '''' || '''' || ',' || p_no || ')';
> > > BEGIN
> > >   -- When this procedure is called we already have uncommitted data in
> > >   -- the source tables, which needs to be read by the parallel functions.
> > >
> > >   -- start_background_job uses dbms_scheduler.create_job to kick off
> > >   -- the thread
> > >   start_background_job('BUILD_TABLE_PART1', v_command1,
> > >                        'Background Job to build part1 of the results table');
> > >
> > >   start_background_job('BUILD_TABLE_PART2', v_command2,
> > >                        'Background Job to build part2 of the results table');
> > > END testParallel;
>
> > Have you looked to see _why_ it is so slow?  The hot button phrase
> > "select uncommitted data" is often code for "not understanding how
> > Oracle does read consistency."  It may be that other things are
> > happening to that data, or maybe even to other data not related to
> > it.  Since that leads more gently into what Sybrand says, I don't
> > even want to suggest other possibilities, like splitting the data
> > into another transaction that... I'd say go ask Tom, as well as
> > being more specific about the code you are using.  There's a reason
> > most performance problems turn out to be the code.
>
> > jg
> > --
> > @home.com is bogus. http://forums.oracle.com/ECVtest.html
>
> Hi jg,
>
> The SELECT operations themselves are not actually slow - but there's
> other work going on in the functions both before and after the select
> statement, so the totality of the individual operations in each
> function is slow (2 hrs per function, so 4 hours in total if run
> serially). Since the operations in each function are not dependent on
> each other - being only dependent on the common uncommitted 'source'
> data - it looked like a perfect candidate for parallel operations.
>
> In the past I've worked on multi-threaded C programs where a child
> thread is just forked off from the parent and each individual thread
> can see the current state of data in their common ancestor - as well
> as having its own stack. Looks like Oracle does things differently.
>
> I've looked at other options such as using the PARALLEL clause and
> hints (e.g. nologging, append) when running each query, but that
> won't have much effect, as the other 'non select' operations in each
> function are taking the time and the collective total is high. I
> don't like the idea of trying to use flashback queries to restore the
> database if something goes wrong - it seems long winded and possibly
> leaves more room for problems down the line.
>
> The system I'm working on is a data warehouse, and basically the
> folks in charge of the data can't risk data being committed to the
> source tables unless my operations on them are also successful. Hope
> that explains the background a bit. Thanks for all the input!
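
To answer the actual question first: no. A dbms_scheduler job runs in a
brand new session with its own transaction, so it can never see the
parent's uncommitted data - and create_job also issues an implicit
commit in the session that calls it, which is why your source data ends
up committed the moment you kick the jobs off. You didn't post
start_background_job, but a wrapper around dbms_scheduler.create_job
presumably looks something like the sketch below (the body is my guess;
only the name and the three arguments come from your post - and the
doubled-up quotes in your v_command strings suggest the real wrapper
nests the command inside yet another string literal):

PROCEDURE start_background_job( p_job_name IN varchar2,
                                p_command  IN varchar2,
                                p_comments IN varchar2 )
IS
BEGIN
  -- create_job commits the calling transaction as a side effect, and
  -- the job body executes later in its own session and transaction.
  dbms_scheduler.create_job(
      job_name   => p_job_name,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN ' || p_command || '; END;',
      enabled    => TRUE,   -- runs as soon as a job slave picks it up
      comments   => p_comments );
END start_background_job;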

Fire off autonomous transactions which write to a file when they complete successfully, then have the controlling transaction check the file near the end of the window and tell them whether to roll back or commit?
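
Roughly, a sketch - assuming a directory object FLAG_DIR that the jobs
can write to; the procedure names and file names here are made up:

-- Each worker calls this as its final step.  The autonomous transaction
-- lets the flag write commit independently of anything else going on.
PROCEDURE flag_success( p_job_name IN varchar2 )
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_file utl_file.file_type;
BEGIN
  v_file := utl_file.fopen('FLAG_DIR', p_job_name || '.done', 'w');
  utl_file.put_line(v_file, 'OK ' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'));
  utl_file.fclose(v_file);
  COMMIT;
END flag_success;

-- Near the end of the window, the controlling session polls for the
-- flags and decides whether the whole batch commits or rolls back.
-- fopen on a file that isn't there raises invalid_operation.
FUNCTION all_parts_done RETURN boolean
IS
  v_file utl_file.file_type;
BEGIN
  v_file := utl_file.fopen('FLAG_DIR', 'BUILD_TABLE_PART1.done', 'r');
  utl_file.fclose(v_file);
  v_file := utl_file.fopen('FLAG_DIR', 'BUILD_TABLE_PART2.done', 'r');
  utl_file.fclose(v_file);
  RETURN true;
EXCEPTION
  WHEN utl_file.invalid_operation OR utl_file.invalid_path THEN
    RETURN false;
END all_parts_done;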

Again, it depends on what is slowing them down. If they are choking on CPU, and this lets you distribute the work across CPUs, it may help. If the other work isn't really all that much, and the serial run is near I/O saturation, serial may work better. Ya gotta watch where you're squeezing the balloons.

jg

--
@home.com is bogus.
A vendor support person is trying to tell me _OPTIM_PEEK_USER_BINDS is
a silver bullet that will fix their CPU-choking problem, which is in
their own code where it isn't even talking to Oracle - in fact Oracle
is bored.  Sigh.  But he's a good guy, he'll come around.
Received on Wed Oct 22 2008 - 12:37:34 CDT
