Re: Using DBMS_JOB and DBMS_PIPE to mimic multithreaded PL/SQL application
From: Jan Korecki <Jan.Korecki_at_contactor.se>
Date: Wed, 19 Nov 2003 02:05:06 +0100
Message-ID: <3fbac1f5$0$97834$57c3e1d3_at_news3.bahnhof.se>
Hi!
I might have missed something about why you want to use DBMS_PIPE, but you could parallelize the batch like this:
Start up to 36 parallel job processes (up to 1000 in 9i):

  dbms_job.submit( l_job, 'process(p_batchnumber);' );
  dbms_job.submit( l_job, 'process(p_batchnumber);' );
  dbms_job.submit( l_job, 'process(p_batchnumber);' );
  ...

Then poll until everything is done:

  loop
    if (any available job processes AND any more batches left to be run) then
      start up x number of new jobs;
    elsif (all batches are processed) then
      exit;
    end if;
    dbms_lock.sleep(60);  -- sleep for a minute or so
  end loop;
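A minimal sketch of the submit side (the PROCESS procedure and the BATCH_WORK table are assumptions from this thread, not tested code):

  DECLARE
    l_job BINARY_INTEGER;
  BEGIN
    -- submit one job per batch; each job calls PROCESS with its own batch number
    FOR r IN (SELECT DISTINCT batch_number FROM batch_work ORDER BY batch_number)
    LOOP
      dbms_job.submit( l_job, 'process(' || r.batch_number || ');' );
    END LOOP;
    COMMIT;  -- DBMS_JOB jobs do not start until the submitting session commits
  END;
  /

Note the COMMIT: jobs queued with dbms_job.submit only become visible to the job queue processes after the transaction commits.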
To check for available batches to run while in the loop:
- when the procedure process starts, it updates a parameter table to mark that batchid as running;
- when the procedure process is done, it updates the parameter table to mark that batchid as finished.
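The bookkeeping could be as simple as this (table and column names are my assumptions, just to illustrate the idea):

  -- one row per batch; the controlling loop polls this table
  CREATE TABLE batch_status (
    batch_number NUMBER PRIMARY KEY,
    status       VARCHAR2(10) DEFAULT 'PENDING'  -- PENDING / RUNNING / DONE
  );

  -- at the top of PROCESS:
  UPDATE batch_status SET status = 'RUNNING'
   WHERE batch_number = p_batchnumber;
  COMMIT;

  -- at the end of PROCESS:
  UPDATE batch_status SET status = 'DONE'
   WHERE batch_number = p_batchnumber;
  COMMIT;

The controlling session can then count PENDING/RUNNING rows to decide whether to submit more jobs or exit.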
Regards,
Janne!
Paul Jones wrote:
>> BTW, why are you trying to do this?
>
>
> We currently have a batch process that takes 5 hours to process around
> 5000 records. Not very efficient (CPU not used effectively), but the
> processing logic is very complex and we've already made a lot of gain
> by fixing the logic and SQL. However, we are performing the same
> logic on all 5000 records, so it stands to reason that I can increase
> performance if I can process more than one at a time.
>
> The way I will do it is:
> - Select 5000 records using complex SQL
> - Insert these records into a table with the addition of a batch
> number (ie. records 1-100, batch number is 1000; records 101-200,
> batch number is 1001; etc)
> - Spawn a child thread or job to process one batch number of 100
> records.
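
(For what it's worth, the batch-number assignment in the second step quoted above can be done in the INSERT itself; the staging table and column names here are assumptions:

  -- assign batch numbers 1000, 1001, ... in groups of 100 rows
  INSERT INTO batch_work (batch_number, record_id)
  SELECT 1000 + TRUNC((ROWNUM - 1) / 100), record_id
  FROM ( /* your complex SELECT of the 5000 records */ your_complex_query );

so no second pass over the 5000 rows is needed.)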
Received on Wed Nov 19 2003 - 02:05:06 CET