Re: Processing batches, requesting comments with sqlldr and prepared UPDATEs.

From: Brian Peasland <dba_at_nospam.peasland.net>
Date: Tue, 23 Oct 2007 12:02:47 -0500
Message-ID: <471e1d50$0$26403$88260bb3@free.teranews.com>


Brian Tkatch wrote:
> Oracle 9.2
>
> We process a few thousand batches nightly, each with a few hundred
> lines, where the data is stored in a TABLE and an UPDATE is executed
> using the data from another (non-DB) server.
>
> 1) We are in the process of switching to sqlldr to get the files into
> the database. Tests show a significant speed up over the current
> record-by-record approach (as was expected) and we hope to implement
> that soon.
>
> In the current system the data is INSERTed INTO a TABLE and processed
> from there by a subsequent process. It is my understanding that if we
> load directly into that TABLE it will LOCK it for the duration of the
> LOAD. Because we want to allow the load and the process to run
> concurrently (on different batches) we would like to avoid TABLE
> LOCKs.

There will be two different types of locks at play here. One, the newly INSERTed rows will be locked so that no one can change them until the INSERT is committed. Other applications can still see the table's contents, but they will not see the newly inserted rows until the COMMIT. Two, there is a DDL lock so that no one can alter the physical structure of the table until the transaction is complete. This should not cause any application problems since no one should be altering the structure of the table during the load. With both types of locks, other users can still access the committed data in the table.

Why do you want to avoid "TABLE LOCKs"? Oracle DML locks only the affected rows...and this is a good thing. Any lock on the table just stops DDL from occurring on the table...again, a good thing. Oracle *does not* lock the table to stop others from reading committed data. And Oracle *does not* lock the entire table to stop other DML from occurring.
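
For example (the session labels and the table name are made up for illustration), an uncommitted insert in one session does not block readers or other DML in another session:

   -- Session 1: insert a row for a new batch but do not commit yet
   INSERT INTO batch_staging (batch_id, line_no, payload)
   VALUES (101, 1, 'some data');

   -- Session 2: not blocked; it simply does not see the uncommitted row
   SELECT COUNT(*) FROM batch_staging WHERE batch_id = 101;
   -- returns 0 until Session 1 issues COMMIT

   -- Session 2: DML against other rows proceeds as well
   UPDATE batch_staging SET payload = 'revised' WHERE batch_id = 100;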

> The proposed solution would be to load the data into a new TABLE
> first, then INSERT the data into the current TABLE and DELETE it from
> this new first TABLE.
>
> Originally, i thought that to allow sqlldr to run from different
> servers, a GLOBAL TEMPORARY TABLE could be used for the first TABLE
> (and then INSERT it from there INTO the "real" TABLE). But, it seems
> that sqlldr cannot use this approach due to the COMMIT at the end.

Have you looked at External Tables? This is available in Oracle 9i. External Tables use the same SQL*Loader engine so you get the same speed. And you can do a simple INSERT..SELECT to read the External Table's contents and populate your (internal) table.
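
A rough sketch of what that looks like (the directory, file name, and columns are placeholders, not your actual layout):

   -- One-time setup: a directory object pointing at where the files land
   CREATE OR REPLACE DIRECTORY batch_dir AS '/u01/app/loads';

   -- The external table definition takes the same access parameters
   -- you would otherwise put in a SQL*Loader control file
   CREATE TABLE batch_ext (
     batch_id   NUMBER,
     item_code  VARCHAR2(30),
     qty        NUMBER
   )
   ORGANIZATION EXTERNAL (
     TYPE ORACLE_LOADER
     DEFAULT DIRECTORY batch_dir
     ACCESS PARAMETERS (
       RECORDS DELIMITED BY NEWLINE
       FIELDS TERMINATED BY ','
       MISSING FIELD VALUES ARE NULL
       (batch_id, item_code, qty)
     )
     LOCATION ('batch_20071023.dat')
   )
   REJECT LIMIT UNLIMITED;

   -- The load into your real table is then a plain INSERT..SELECT,
   -- holding only ordinary row locks while it runs
   INSERT INTO batch_lines (batch_id, item_code, qty)
   SELECT batch_id, item_code, qty
   FROM   batch_ext;
   COMMIT;

You can point LOCATION at a different file for each run with ALTER TABLE ... LOCATION, so one external table definition serves every nightly batch.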

> 2) After the data is INSERTed, an external process SELECTs a COLUMN
> FROM the TABLE and queries another (non-DB) server for more
> information (which, i have been told, policy states we are not
> supposed to cache), which then uses the data in a conditional UPDATE
> statement. This statement is executed once per unique data in the
> present batches.

Any block of data read by any user in an Oracle database must be stored in the Buffer Cache. You cannot avoid this. Or do you mean you can't "cache" the data by storing it in a local table?

> I was wondering about saving trips to the DB by making one large
> UPDATE with a CASE statement listing the tens or hundreds of different
> values, or perhaps by CREATEing a PROCEDURE to PREPARE an INSERT, get
> passed an array and do a FOR ALL or the like. Currently, the java code
> uses its own INSERT, once per new value.
>
> Any thoughts?
>
> B.

Again...an External Table may be able to help you here. You can craft any SQL statement against the External Table, so you can use CASE expressions there too. Why not write a SELECT statement that reads data from the External Table and folds in the information from the "(non-DB) server"? That SELECT can then serve as the basis for your INSERT..SELECT to push the final results into your table.
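
Something along these lines (all names and the CASE mapping are invented; the mapped values would come from your lookup against the other server):

   -- Fold the per-value lookups into one statement instead of issuing
   -- one UPDATE or INSERT per distinct value
   INSERT INTO batch_lines (batch_id, item_code, qty, status)
   SELECT e.batch_id,
          e.item_code,
          e.qty,
          CASE e.item_code            -- values obtained from the non-DB server
            WHEN 'A100' THEN 'PRIORITY'
            WHEN 'B200' THEN 'STANDARD'
            ELSE 'UNKNOWN'
          END
   FROM   batch_ext e;

One round trip to the database handles the whole batch, rather than one statement per value.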

HTH,
Brian

-- 
===================================================================

Brian Peasland
dba_at_nospam.peasland.net
http://www.peasland.net

Remove the "nospam." from the email address to email me.


"I can give it to you cheap, quick, and good.
Now pick two out of the three" - Unknown
