Re: Need help performing large updates...

From: Jared Still <jared_at_valleynet.com>
Date: 1996/04/15
Message-ID: <4ktn48$92u_at_alpine.valleynet.com>#1/1


sahmad_at_mfa.com (Saad Ahmad) wrote:

>rollback segments are essential for maintaining a read-consistent database.
>There is no way "around" that.
 

>Either you have to divide your job into smaller units of work and commit, or
>you can make a big rollback segment and use it with SET TRANSACTION USE
>ROLLBACK SEGMENT.
 
>I think you should divide work, that would also improve performance.
>If you have a sequenced/indexed primary key, try going for x number of
>spans of the primary key. Or you can think of some other intelligent way
>of dividing work.

>C. Eric Ladner (clad_at_chevron.com) wrote:
>> I have a couple of huge tables (>500,000 rows) that need to be updated.
>> The nature of the update causes every row to be changed. The problem
>> I'm running into is that I keep running out of extents in the rollback
>> segments. This is using just a regular SQL statement like:
>>
>> update foo set bar = 'NEW' where bar = 'OLD';

Here is one way to get around the problem. Keep in mind that if you try to update too many rows before a commit, you could run into a problem with 'Snapshot too old' if there is any other activity on the table you are updating.

I'm assuming in this instance that you want to move all rows in the table too_big that have 'CA' as the state from the eastern division to the western division, where they belong.

-- CUT HERE --------------------------
create or replace procedure divide_and_conquer is
	cursor too_big_cur
	is
	select * 
	from too_big
	where division = 'EAST' and state = 'CA';

	/* 
	|| this cursor needed for the update process 
	|| pass it the primary key
	*/
	cursor too_big_upd_cur ( pk_upd in number )
	is 
	select * 
	from too_big where pk = pk_upd
	for update;

	too_big_upd_rec too_big%rowtype;

	commit_count integer := 0;

	max_trans_count integer := 50;

	begin

		for too_big_rec in too_big_cur
		loop
			if too_big_upd_cur%isopen
			then
				close too_big_upd_cur;
			end if;

			open too_big_upd_cur(too_big_rec.pk);
			fetch too_big_upd_cur into too_big_upd_rec;
			
			if too_big_upd_cur%notfound
			then
				raise_application_error(-20001,'unable to find Pk in too_big');
			end if;

			update too_big
			set division = 'WEST'
			where current of too_big_upd_cur;
			
			close too_big_upd_cur;

			commit_count := commit_count + 1;

			if commit_count >= max_trans_count
			then
				commit_count := 0;
				commit;
			end if;		

		end loop;

		commit;  -- pick up any rows updated since the last batch commit
	end;

/

show errors

-- CUT HERE -------------------------
invoke by:

        exec divide_and_conquer

or

	begin
		divide_and_conquer;
	end;
	/


In this example, your update will be committed every 50 rows. You may want to commit more or less often depending on your system.
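If your table has a numeric, reasonably dense primary key, the key-span approach Saad mentions is another option. Here's a rough sketch (I'm assuming a numeric column named pk; adjust the batch size to taste):

```sql
declare
	batch_size constant integer := 10000;
	min_pk number;
	max_pk number;
begin
	select min(pk), max(pk) into min_pk, max_pk from too_big;

	-- walk the key range one span at a time, committing each span
	for i in 0 .. trunc((max_pk - min_pk) / batch_size)
	loop
		update too_big
		set division = 'WEST'
		where division = 'EAST' and state = 'CA'
		and pk between min_pk + i * batch_size
		           and min_pk + (i + 1) * batch_size - 1;
		commit;
	end loop;
end;
/
```

This avoids holding a cursor open across commits, so it sidesteps the 'Snapshot too old' problem entirely.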

This entire process can also be generalized by a procedure that uses dynamic SQL to create the above process.
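For instance, here's an untested sketch of such a generalized procedure using the DBMS_SQL package (the name batch_update and its parameters are mine). Note the where clause you pass in must exclude already-updated rows (as 'OLD'/'EAST' do above), or the loop will never terminate:

```sql
create or replace procedure batch_update (
	p_table in varchar2,  -- e.g. 'too_big'
	p_set   in varchar2,  -- e.g. 'division = ''WEST'''
	p_where in varchar2,  -- e.g. 'division = ''EAST'' and state = ''CA'''
	p_batch in integer default 50
) is
	cur  integer;
	rows integer;
begin
	cur := dbms_sql.open_cursor;

	-- rownum limits each execution to one batch
	dbms_sql.parse(cur,
		'update ' || p_table ||
		' set '   || p_set ||
		' where ' || p_where ||
		' and rownum <= ' || p_batch,
		dbms_sql.v7);

	-- re-execute the same statement until no rows remain
	loop
		rows := dbms_sql.execute(cur);
		commit;
		exit when rows = 0;
	end loop;

	dbms_sql.close_cursor(cur);
end;
/
```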

Jared Still, Oracle DBA
RxNet, Division of Value Health
"All opinions are mine, not my employers" jared_at_valleynet.com

Received on Mon Apr 15 1996 - 00:00:00 CEST
