Re: Executing Multiple Scripts.

From: Ed prochak <ed.prochak_at_magicinterface.com>
Date: 5 May 2004 11:59:14 -0700
Message-ID: <4b5394b2.0405051059.4edfaf1f_at_posting.google.com>


rjwerning_at_yahoo.com (RJ) wrote in message news:<570dd773.0404301014.738703a_at_posting.google.com>...
> We currently send product releases to our customers, and often have to
> include scripts that need to be run on the Oracle databases (we also do
> this for SQL Server customers, but we use a different set of scripts for
> that). Some of the SQL scripts can be quite long, so we break
> them out into their own script files and call them all from one main
> .sql file, using sqlplus to execute it all.
>
>
> The admin would run the file like:

[]
>
> The problem that I have with this is that there is no error trapping,
> and no sanity check to see if a script has already been run. At any
> given release, there may be several dozen scripts, some of them
> running against millions of rows, so I'd like to record when a script
> has been run against a specific instance, and only run the script if
> it hasn't been run yet. This would help the process if an error does
> occur and the admins need to restart the execution.
>
> What I was thinking of doing was creating a table that I would use to
> record which scripts are run, and when. Call it SQL_UPDATE, with fields
> for SQL_FILE, DATE_EXECUTED and UPDATE_VERSION. Before I run a SQL
> file, I'd like to check against SQL_UPDATE to see if that file name
> exists; if it does, skip running that file, if not, run it and, if it
> is successful, insert a row into SQL_UPDATE.
>
>
> What I'm wondering is whether this concept makes sense, and how I would
> go about putting the code that checks against SQL_UPDATE in the main
> execution file. Can I use PL/SQL to do the checks? Ultimately, I'd
> like to have one procedure that I pass the command to; it does the check
> against the SQL_UPDATE table, runs the command if needed, checks for
> errors, and updates SQL_UPDATE if successful.
>
>
> Sorry for the long-winded description, and thanks for any advice you
> can offer.
>
> - Rich Werning

There are two issues here:
1. error trapping/logging/reporting
2. auditing (to answer: when was update0427.sql executed?)

Re: 1. error trapping/logging/reporting
You can approach this at multiple levels. You say you are using simple SQL scripts. Do you include error-handling directives (e.g. WHENEVER SQLERROR) in your SQL*Plus scripts? You can trap errors and deal with them accordingly.
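
For example, a minimal SQL*Plus sketch of a master script that stops on the first failure and hands an exit code back to whoever launched sqlplus (the script and spool-file names below are just placeholders, not anything from the original post):

        -- master release script (sketch only; file names are placeholders)
        WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
        WHENEVER OSERROR  EXIT FAILURE ROLLBACK
        SPOOL release_run.log

        @@update0427.sql
        @@update0428.sql

        SPOOL OFF
        EXIT SUCCESS

The calling environment can then inspect the sqlplus exit status to decide whether to continue.
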
You do also send backout/recovery scripts, don't you? I.e., if the update script has ANY problem whatsoever, the backout script should put the data back to its initial state WITHOUT the DBA having to restore the whole database. (A restore is a LAST-resort option, so you do run these updates after a clean backup, right??)

PL/SQL is certainly a more precise method of controlling the error handling. It can give you enough flexibility to have checkpoints for continuing an update when an error does happen. The control is more fine-grained; e.g., instead of doing this in SQL:

        UPDATE tableA ...<all affected rows>
        UPDATE tableB ...<all affected child rows>
where all the updates on tableA must succeed before the updates to tableB start, in PL/SQL you can link the parent and child in a cursor loop and do:
        UPDATE tableA ...<current row>
        UPDATE tableB ...<children of current row>
        EXCEPTION
            WHEN... deal with this row

and now one row failing to update in tableA can be logged and the rest of the updates to both tableA and tableB can continue.
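
A minimal runnable sketch of that idea, assuming made-up names (tableA/tableB keyed on a_id, a status column, and an update_errors log table; none of these names come from the original post):

        -- Sketch only: tableA, tableB, a_id, status and update_errors
        -- are placeholder names for illustration.
        DECLARE
            v_errmsg  VARCHAR2(200);
        BEGIN
            FOR parent IN (SELECT a_id FROM tableA) LOOP
                BEGIN
                    UPDATE tableA SET status = 'MIGRATED'
                     WHERE a_id = parent.a_id;
                    UPDATE tableB SET status = 'MIGRATED'
                     WHERE a_id = parent.a_id;   -- children of the current row
                EXCEPTION
                    WHEN OTHERS THEN
                        -- log the failing row and carry on with the rest
                        v_errmsg := SUBSTR(SQLERRM, 1, 200);
                        INSERT INTO update_errors (a_id, err_msg, err_time)
                        VALUES (parent.a_id, v_errmsg, SYSDATE);
                END;
            END LOOP;
            COMMIT;
        END;
        /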

The PL/SQL approach is obviously stronger. If these updates are fairly repeatable (same tables and the same kinds of DML data changes, as opposed to DDL table changes), then building a change control package in PL/SQL is a good solution. (I've done this for previous clients and it is especially well received by the DBAs.)
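
As one rough way to implement the SQL_UPDATE idea from the original post (the package and subprogram names change_ctl, has_run and record_run are placeholders, not a finished design; the table and columns are the ones the OP described):

        -- Sketch only: change_ctl, has_run and record_run are placeholder names.
        CREATE TABLE sql_update (
            sql_file        VARCHAR2(100) NOT NULL,
            update_version  VARCHAR2(20)  NOT NULL,
            date_executed   DATE          NOT NULL,
            CONSTRAINT sql_update_pk PRIMARY KEY (sql_file, update_version)
        );

        CREATE OR REPLACE PACKAGE change_ctl AS
            FUNCTION  has_run    (p_file IN VARCHAR2, p_version IN VARCHAR2)
                RETURN BOOLEAN;
            PROCEDURE record_run (p_file IN VARCHAR2, p_version IN VARCHAR2);
        END change_ctl;
        /

        CREATE OR REPLACE PACKAGE BODY change_ctl AS
            FUNCTION has_run (p_file IN VARCHAR2, p_version IN VARCHAR2)
                RETURN BOOLEAN IS
                v_cnt  PLS_INTEGER;
            BEGIN
                SELECT COUNT(*) INTO v_cnt
                  FROM sql_update
                 WHERE sql_file = p_file
                   AND update_version = p_version;
                RETURN v_cnt > 0;
            END has_run;

            PROCEDURE record_run (p_file IN VARCHAR2, p_version IN VARCHAR2) IS
            BEGIN
                INSERT INTO sql_update (sql_file, update_version, date_executed)
                VALUES (p_file, p_version, SYSDATE);
            END record_run;
        END change_ctl;
        /

Each release script would then call change_ctl.has_run before doing its work and change_ctl.record_run once it finishes cleanly; whether the skip happens inside the script or in a driving wrapper is a matter of taste.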

But even just wrapping your SQL in OS shell scripts can add some success/failure reporting capability.

Re: 2. auditing (to answer: when was update0427.sql executed?)
Again, there are many solutions, from OS/file-system approaches (as simple as generating log files) to audit tables down to the attribute level. At the very least, your script should generate an unambiguous GOOD/BAD report so the person running it knows whether to proceed to the next task. Summary reports of rows updated, the number expected to be updated (derived from testing), the number of failed updates, and go/no-go limits can make the person running the scripts more confident. The audit trail should be able to identify at least the rows touched and when it happened.
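
For the go/no-go idea, one hedged sketch (the table, column and expected count are invented for illustration; the expected figure would come from testing) compares SQL%ROWCOUNT against the number of rows you expected to touch:

        SET SERVEROUTPUT ON
        DECLARE
            -- Sketch only: table/column names and the expected count are invented.
            v_expected  CONSTANT PLS_INTEGER := 1500;
            v_updated   PLS_INTEGER;
        BEGIN
            UPDATE tableA SET status = 'MIGRATED'
             WHERE status = 'PENDING';
            v_updated := SQL%ROWCOUNT;

            IF v_updated = v_expected THEN
                DBMS_OUTPUT.PUT_LINE('GOOD: ' || v_updated || ' rows updated');
                COMMIT;
            ELSE
                DBMS_OUTPUT.PUT_LINE('BAD: expected ' || v_expected ||
                                     ' rows, updated ' || v_updated ||
                                     '; rolling back');
                ROLLBACK;
            END IF;
        END;
        /

Spooled together with a log file, a report like that gives the person running the release an unambiguous answer.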

Usually a combination of log files and DB tables is used. Again, consider all the possible tools: SQL, PL/SQL, shell scripts, and other languages/tools.
Perl, for example, is a great language for programming some updates. With the Perl DBI module, you can connect to multiple databases and work such updates from one program (as opposed to dumping an update data file and copying it to another server to run; the time delay may mean working with stale data).

I would suggest avoiding the one-do-all procedure in favor of a PL/SQL package with appropriate functions and procedures. Don't forget to include an ID() function that just returns the revision of the package. You WILL need it. It's a lot easier to

             SELECT package.id() FROM dual;

than to dump the source from the database and do a diff against your source files.
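
A sketch of what that might look like (the package name release_util and the revision string are placeholders; in practice the function would live in your change control package and the string would be bumped with every change to the source):

        CREATE OR REPLACE PACKAGE release_util AS
            FUNCTION id RETURN VARCHAR2;
        END release_util;
        /
        CREATE OR REPLACE PACKAGE BODY release_util AS
            FUNCTION id RETURN VARCHAR2 IS
            BEGIN
                RETURN 'release_util rev 1.7, 2004-05-05';  -- placeholder revision string
            END id;
        END release_util;
        /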

So there are lots of ways to do this; my key suggestion is to keep in mind that there are two separate issues here.

HTH,
   Ed
