Oracle FAQ Your Portal to the Oracle Knowledge Grid
 

Newsgroup: comp.databases.oracle.server

Re: Best way for updating EVERY row in a large database?

From: Dave Wotton <Dave.Wotton_at_it.camcnty.no-spam.gov.uk>
Date: 24 Apr 1998 09:03:21 GMT
Message-ID: <6hpkgp$mvd@dns.camcnty.gov.uk>


"Jim Morgan" <jimmorgan_at_csi.com> wrote:
>I've got an application that is run once every several months which resets
>several columns to zero in a table that contains > 3,000,000 rows.

   [snip]

>This program is taking over 9 hours to run, and I am wondering if there is
>something I can do about that.

Hi,

I haven't got any direct experience of updating this many rows but, from reading the text-books, here are two very different approaches which should cut your running time to an hour or two (guesswork).

Approach 1.


  1. Write a small program which simply reads your entire table and writes it out to a file outside the database, resetting the counter columns as it goes.
  2. Truncate the table, drop the indexes, and disable any constraints.
  3. Use SQL*Loader with the DIRECT PATH option to load the data back in.
  4. Rebuild the indexes using the UNRECOVERABLE option, to prevent the generation of redo log entries and speed up the process.
  5. Re-enable the constraints.
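The cycle might look something like this in SQL*Plus (the table, index, constraint and column names are all invented for illustration; check them against your own schema):

```sql
-- Step 1: unload the table to a flat file, zeroing the counter
-- columns as we go (here counter_a and counter_b stand in for
-- the columns being reset).
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 200
SPOOL /tmp/big_table.dat
SELECT id || ',' || name || ',0,0' FROM big_table;
SPOOL OFF

-- Step 2: empty the table and remove the overhead objects.
TRUNCATE TABLE big_table;
DROP INDEX big_table_ix1;
ALTER TABLE big_table DISABLE CONSTRAINT big_table_fk1;

-- Step 3 runs from the OS prompt, e.g.
--   sqlldr userid=scott/tiger control=big_table.ctl direct=true

-- Step 4: rebuild the index without redo logging.
CREATE INDEX big_table_ix1 ON big_table (id) UNRECOVERABLE;

-- Step 5: re-enable the constraint.
ALTER TABLE big_table ENABLE CONSTRAINT big_table_fk1;
```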

Approach 1a


    As above, but modify step (1) to split the output into approximately four separate files, preferably on different discs. Then use multiple SQL*Loader processes with the DIRECT PATH and PARALLEL options to speed up the loading even more.

    Also use the PARALLEL option on the CREATE INDEX statements to speed them up too.
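For the parallel variant, the loader sessions and index build might be invoked along these lines (control file names and parallel degree are made up; parallel direct loads need Oracle 7.3+):

```sql
-- Four concurrent loader sessions, one per data file, each
-- started from its own OS prompt:
--   sqlldr userid=scott/tiger control=load1.ctl direct=true parallel=true
--   sqlldr userid=scott/tiger control=load2.ctl direct=true parallel=true
--   sqlldr userid=scott/tiger control=load3.ctl direct=true parallel=true
--   sqlldr userid=scott/tiger control=load4.ctl direct=true parallel=true

-- Parallel, non-logged index rebuild:
CREATE INDEX big_table_ix1 ON big_table (id)
  UNRECOVERABLE PARALLEL (DEGREE 4);
```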

Approach 2


  1. Use CREATE TABLE ... AS SELECT ... UNRECOVERABLE, where the SELECT clause copies across all your columns unchanged, except the counter columns, which you replace with zeros. The UNRECOVERABLE option prevents the generation of redo log entries and (so the books say) rollback entries, so you won't need big rollback segs. This method can also use the PARALLEL option to speed it up even more. You'll get the best performance if the new table is on a different disc from the old one.
  2. Drop the old table, rename the new one, and rebuild the indexes (again, using the UNRECOVERABLE and PARALLEL options).
  3. Recreate constraints, grants etc.
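A sketch of the copy-and-swap, again with invented object and column names (counter_a and counter_b are the columns being reset):

```sql
-- Copy the table, zeroing the counter columns, with no redo
-- logging and four parallel query slaves:
CREATE TABLE big_table_new
  UNRECOVERABLE
  PARALLEL (DEGREE 4)
  AS SELECT id, name, 0 AS counter_a, 0 AS counter_b
     FROM big_table;

-- Swap the tables and rebuild the supporting objects:
DROP TABLE big_table;
RENAME big_table_new TO big_table;
CREATE INDEX big_table_ix1 ON big_table (id)
  UNRECOVERABLE PARALLEL (DEGREE 4);
ALTER TABLE big_table ADD CONSTRAINT big_table_fk1
  FOREIGN KEY (dept_id) REFERENCES dept (id);
GRANT SELECT ON big_table TO app_role;
```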

Neither option maintains read-consistency, so you'd have to run them when no-one else was using your table. Approach 1 and 1a require sufficient disc space outside the database to hold the unloaded table. Approach 2 requires sufficient spare space within your database to hold two copies of the table. Both approaches need enough TEMP space to rebuild your indexes.

All approaches benefit from the UNRECOVERABLE options available in Oracle 7.3 and later, and benefit even more from the parallel query options, which also require Oracle 7.3 or later.

As I said, I've no direct experience of these techniques, but they seem feasible. Has anyone tried them and have practical experience of the performance gains and any problems?

HTH, Dave.

--

To reply by email, remove the "no-spam" bit from my email address.

Received on Fri Apr 24 1998 - 04:03:21 CDT

