
Re: Large set-based UPDATE

From: <ctcgag_at_hotmail.com>
Date: 12 Feb 2003 21:11:03 GMT
Message-ID: <20030212161103.540$nd@newsreader.com>


f1fteen_at_hotmail.com (Paul) wrote:
> Hi all,
>
> I am reviewing an Oracle stored procedure (BULK_UPDATE) that makes a
> large set-based update to a large table (ACCOUNTS, 5 million rows).
>
> The problem we are encountering with this is that, because the UPDATE
> statement applies changes to a large number of rows at a time to the
> ACCOUNTS table (around 150,000), a web layer that also allows updates
> to this table is being regularly locked out for an unacceptable length
> of time.

What length of time would be acceptable? Any hope of tuning the bulk update to execute in under this amount of time? (And what is a set-based update?)

> To summarise the code:

> An UPDATE statement is constructed within a stored procedure and
> executed using EXECUTE IMMEDIATE
>
> This UPDATE statement affects around 150,000 rows of a 5,000,000 row
> table

Based on what criteria? Can the web-layer users change these criteria out from under you, or are they only used by the batch processes?

>
> I was thinking a possible solution to this is to break the 150,000 row
> set based update into 150,000 updates to single rows by processing
> them through a cursor. I would then COMMIT within the cursor every
> 1000 rows.

Isn't it illegal to commit within a cursor (and then continue to use the cursor)? I seem to remember trying that before with no joy.
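For the record, here's roughly the pattern I remember failing (the table layout, the status column, and its values are just made-up stand-ins for your real criteria). It compiles, but the COMMIT releases the row locks the FOR UPDATE cursor is holding, and the very next FETCH dies with ORA-01002 "fetch out of sequence":

  DECLARE
    CURSOR c IS
      SELECT account_id
        FROM accounts
       WHERE status = 'PENDING'        -- hypothetical criteria
         FOR UPDATE;
    v_id accounts.account_id%TYPE;
    n    PLS_INTEGER := 0;
  BEGIN
    OPEN c;
    LOOP
      FETCH c INTO v_id;
      EXIT WHEN c%NOTFOUND;
      UPDATE accounts
         SET status = 'PROCESSED'      -- hypothetical change
       WHERE CURRENT OF c;
      n := n + 1;
      IF MOD(n, 1000) = 0 THEN
        COMMIT;                        -- releases the cursor's locks;
      END IF;                          -- the next FETCH -> ORA-01002
    END LOOP;
    CLOSE c;
  END;
  /

(Without the FOR UPDATE you can fetch across the commit, but then you aren't holding the locks you wanted, and a long-running cursor risks ORA-01555 "snapshot too old" instead.)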

> Is this a good way to go?

Probably not, and possibly definitely not. What if it dies during processing? Would you have some way to know which rows were already updated and which weren't, just by inspecting the data? And what happens when the web-layer users see data from these 150,000 rows where some have been updated and some haven't?

> Am I right in thinking that only 1000 rows at a time will be locked
> using this method?

Yeah, probably, depending on exactly how you implement it.

> Is this likely to be much slower given that I am now updating 1 row at
> a time through a cursor?

Again, I'm not sure you can even do it exactly that way. But yes, I'd expect whatever is analogous to it to be much slower, since you'd be trading one set-based statement for 150,000 single-row updates.

> All help is gratefully received.

I've never had to do this as a recurring thing on a live production box, but I have had to do it live as a one-time occurrence. I usually do something like this:

create table auxiliary_table as
  select rowid as the_rowid from main_table where <whatever>;

Then, execute something like this:
  start a transaction
  select the_rowid from auxiliary_table where rownum = 1
  verify that that row in main_table still meets the criteria
  update that row in main_table
  delete that rowid from auxiliary_table
  repeat 1000 times (or whatever)
  commit;
repeat the whole transaction until auxiliary_table is empty.

This assumes the updates are of such a nature that the web-layer users can see them partially done without causing problems.
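In PL/SQL the loop might look something like this (again, the status column and its values are made-up stand-ins for your real criteria and your real change; the point is just the fetch-a-batch / update / dequeue / commit shape):

  DECLARE
    TYPE rid_tab IS TABLE OF ROWID;
    v_rids rid_tab;
  BEGIN
    LOOP
      -- grab the next batch of queued rowids
      SELECT the_rowid
        BULK COLLECT INTO v_rids
        FROM auxiliary_table
       WHERE ROWNUM <= 1000;
      EXIT WHEN v_rids.COUNT = 0;      -- auxiliary_table is empty

      -- update those rows, re-checking that each one still meets
      -- the criteria (the web layer may have changed it meanwhile)
      FORALL i IN 1 .. v_rids.COUNT
        UPDATE main_table
           SET status = 'PROCESSED'    -- hypothetical change
         WHERE rowid = v_rids(i)
           AND status = 'PENDING';     -- hypothetical criteria

      -- dequeue the batch we just handled
      FORALL i IN 1 .. v_rids.COUNT
        DELETE FROM auxiliary_table
         WHERE the_rowid = v_rids(i);

      COMMIT;                          -- locks held only per batch
    END LOOP;
  END;
  /

Since each pass locks and commits only 1000 rows, the web layer should never wait long. And if the job dies partway through, you can just rerun it: whatever is left in auxiliary_table is exactly what hasn't been done yet.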

Xho
