
Re: Optimising PL/SQL Script. Takes 2 days to run.

From: Rauf Sarwar <rs_arwar_at_hotmail.com>
Date: 23 Jan 2003 15:01:54 -0800
Message-ID: <92eeeff0.0301231501.6f2591a0@posting.google.com>


DA Morgan <damorgan_at_exesolutions.com> wrote in message news:<3E2CB263.8814A730_at_exesolutions.com>...
> Just a few quick comments.
>
> I agree with error handling inside a loop but it should not just ignore errors ... it should log them
>
> I disagree with your suggestion about incremental commits. Can't think of a better way to kill performance and
> generate ORA-01555.
>
> Daniel Morgan

I agree with both your points...only because there is always more than one way to do this sort of thing (depending on one's priorities). It's not an exact science.

One constant in all this that I believe we can both agree on is that this sort of query should not be run when everybody + their kitchen sink is logged into the database. There are arguments on both sides of (large rollback segments and fewer commits) versus (small rollback segments and frequent commits). If one is moving 1,000,000 rows, then it can go either way; however, if one is moving 1/2 billion rows, then it's time to take a closer look.
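
To make the trade-off concrete, here is a minimal sketch of the frequent-commit style (table and column names are hypothetical, not from the original script). Note that committing while the driving cursor is still open is exactly what exposes it to ORA-01555, since the undo its read-consistent view depends on may get overwritten:

    DECLARE
        CURSOR c_src IS
            SELECT item, location FROM item_locations;  -- driving cursor
        v_count PLS_INTEGER := 0;
    BEGIN
        FOR r IN c_src LOOP
            INSERT INTO item_locations_new (item, location)
            VALUES (r.item, r.location);

            v_count := v_count + 1;
            IF MOD(v_count, 10000) = 0 THEN
                COMMIT;  -- small undo footprint, but risks ORA-01555 on c_src
            END IF;
        END LOOP;
        COMMIT;
    END;
    /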

If this is a one-time thing, then none of this matters. Speed is a non-factor
because one can take the time to do it properly.

However, if this is a weekly or monthly load, then you have to take a balanced approach. You do not want to sacrifice speed for restartability and error logging, or vice versa. A few years back I had an opportunity to work as a software engineer in an Oracle 7 data warehouse environment. My first project (a boring one, though) was to take all the existing data mart load shell/SQL scripts and modify/rewrite them to be restartable with error logging. The emphasis was that it took us a full 3 days each month to build the data marts, and we wanted to make sure the process was restartable whenever an error occurred.

In this scenario, there are 1/2 billion rows being processed, so naturally one has to take a balanced approach. IMHO, restartability/error logging should definitely be incorporated, even at the cost of some speed. It will make the script more robust. I would also suggest looking into partitioning the table and moving one partition at a time.
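
As a rough illustration of the partition-at-a-time idea (assuming the source table were partitioned; the partition and target table names here are hypothetical), each partition could be moved in its own transaction, so a failure only means redoing one partition:

    -- Hypothetical: one partition per transaction, restart at the failed one
    INSERT /*+ APPEND */ INTO item_locations_new
    SELECT item, location
    FROM   item_locations PARTITION (p_jan_2003);
    COMMIT;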

One of the ways I can think of (if possible) to handle restartability and error logging is to add a flag column to the items_locations table. On every successful insert, update the row's flag to 1, and on any failure update it to 0. The cursor query would be something like:

    select ITEM, LOCATION from ITEM_LOCATIONS where nvl(flag, 0) = 0;

To make it fancier, one can also add an err_msg column to log any error messages. Of course this will have an impact on performance, but then again, it's one's priorities and business rules that matter the most.
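
A minimal sketch of that flag/err_msg approach (the target table item_locations_new is hypothetical, and this is just one way to wire it up):

    DECLARE
        CURSOR c_pending IS
            SELECT rowid AS rid, item, location
            FROM   item_locations
            WHERE  NVL(flag, 0) = 0;  -- only rows not yet moved
    BEGIN
        FOR r IN c_pending LOOP
            BEGIN
                INSERT INTO item_locations_new (item, location)
                VALUES (r.item, r.location);

                UPDATE item_locations
                SET    flag = 1                   -- mark success
                WHERE  rowid = r.rid;
            EXCEPTION
                WHEN OTHERS THEN
                    UPDATE item_locations
                    SET    flag    = 0,
                           err_msg = SUBSTR(SQLERRM, 1, 200)  -- log the error
                    WHERE  rowid = r.rid;
            END;
        END LOOP;
        COMMIT;
    END;
    /

On a rerun, the cursor picks up only the flag = 0 rows, which is what makes the script restartable.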

Regards
/Rauf Sarwar Received on Thu Jan 23 2003 - 17:01:54 CST
