
RE: Update of Clobs *Performance*

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Wed, 3 Nov 2004 16:18:10 -0500
Message-ID: <KNEIIDHFLNJDHOOCFCDKGEHLFJAA.mwf@rsiz.com>

-----Original Message-----
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Anthony Molinaro
Sent: Wednesday, November 03, 2004 3:35 PM
To: Mark W. Farnham; oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*

Mark,

> there is a good chance it will blow up later as the tables grow.

Kidding, right?

<< not kidding at all. I've seen plenty of toy monolith designs blow up
<< when unleashed on big or growing data. "But we haven't changed the
<< programs at all!"

You bring up a good point, but I disagree. My reasons are:

  1. one commit will outperform the counter.
    << That is something you can measure, but you can't possibly know the
    << answer without testing.
    << There is a cost to accumulation of UNDO, and generally you reach a
    << plateau where the cost of extra commits is balanced against the cost
    << overhead of accumulated UNDO, and then if you continue adding to the
    << size of the update between commits, performance will degrade.
    << Sometimes it is even worthwhile to measure to make sure you pick a
    << commit frequency from the plateau.
  2. if the table is so huge that a single commit is a problem, then he shouldn't be looping one by one in the first place
    << Feel free to send him a set update example; see the sketch after this list.
  3. undo is a little different here, these are lobs - pctversion
    << Unrestricted accumulation of uncommitted UNDO is still the issue.
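
<< For what it's worth, a set update along those lines might look something
<< like this. A sketch only, untested, using the Table_A/Table_B names from
<< the procedure quoted at the bottom, and assuming each tabA_char value
<< matches at most one Table_A row:

update Table_B b
   set b.tabB_clob = (select a.tabA_clob        -- assumes one Table_A row per key
                        from Table_A a
                       where a.tabA_char = to_char(b.tabB_num))
 where exists (select null                      -- only touch rows that have a match
                 from Table_A a
                where a.tabA_char = to_char(b.tabB_num));

commit;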

The idea of a counter is useful if you want "partial loads". If half get in and the rest fail, and that's what you want, and you have a reliable point where you can restart, then cool. Otherwise, I really disagree.

<< The idea of a counter is useful if you want a job that will work
<< regardless of the growth of your data. Clearly his process is
<< restartable, but it would indeed do the entire job again. The data
<< model presented, however, does not lend itself to having a restart
<< point unless you would trust select to always return rows in the same
<< order. Then I suppose you could scan off the n rows previously updated
<< before entering the update loop. Of course someone may have added a
<< row since the previous update was attempted and ASSM may have stuck it
<< anywhere.
<< Of course you're free to disagree, but you might find it enlightening
<< to try some timed tests of monolithic commits versus commit counters.
<< Of course you'll need to be sure to test array updates versus array
<< updates, or row by row versus row by row, to get valid results.
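<< For instance, something like this, run once per commit interval you want
<< to compare. A sketch only; it uses the Table_A/Table_B names from the
<< procedure at the bottom and assumes tabA_char converts cleanly with
<< to_number:

set serveroutput on

declare
  l_commit_every constant pls_integer := 1000;  -- vary per run: 1, 100, 1000, ...
  l_rows pls_integer := 0;
  l_t0   number := dbms_utility.get_time;       -- elapsed time in centiseconds
begin
  for r in (select tabA_char, tabA_clob from Table_A) loop
    update Table_B
       set tabB_clob = r.tabA_clob
     where tabB_num = to_number(r.tabA_char);
    l_rows := l_rows + 1;
    if mod(l_rows, l_commit_every) = 0 then
      commit;
    end if;
  end loop;
  commit;  -- pick up the final partial batch
  dbms_output.put_line(l_rows || ' rows in ' ||
                       (dbms_utility.get_time - l_t0) || ' centiseconds');
end;
/

<< Set l_commit_every higher than the row count to time the monolithic case.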
<<
<< Regards,
<< mwf

-----Original Message-----
From: Mark W. Farnham [mailto:mwf_at_rsiz.com]
Sent: Wednesday, November 03, 2004 3:17 PM
To: Anthony Molinaro; oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*

Putting in a counter, picking a reasonable size, commit'n and reset'n the counter when you hit the limit is usually useful.

Even if a single commit for the whole table will work for him now, there is a good chance it will blow up later as the tables grow. It's pretty likely committing each row is unreasonable, but committing monoliths is a recipe for future problems and for driving UNDO out of cache without need. I recommend avoiding monolithic commits unless there is a hard requirement for reversibility (rollback), and avoiding a program architecture that drives a need for monolithic commits is up there with the golden mean as far as I'm concerned.
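
Something like the following is what I have in mind. A sketch only, using the table names from the procedure quoted below and the where clause suggested in my earlier note, with the limit of 1000 picked out of the air (measure to find your plateau):

declare
  v_limit   constant pls_integer := 1000;  -- the "reasonable size": worth measuring
  v_counter pls_integer := 0;
begin
  for r in (select tabA_char, tabA_clob from Table_A) loop
    update Table_B
       set tabB_clob = r.tabA_clob
     where tabB_num = to_number(r.tabA_char);  -- assumes tabA_char is numeric

    v_counter := v_counter + 1;
    if v_counter >= v_limit then
      commit;          -- commit'n ...
      v_counter := 0;  -- ... and reset'n when you hit the limit
    end if;
  end loop;
  commit;  -- final commit for the last partial batch
end;
/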

mwf

-----Original Message-----
From: Anthony Molinaro [mailto:amolinaro_at_wgen.net]
Sent: Wednesday, November 03, 2004 3:05 PM
To: mwf_at_rsiz.com; oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*

In regard to: >>>>>>>> Even better, just commit once at the end...

-----Original Message-----
From: Mark W. Farnham [mailto:mwf_at_rsiz.com]
Sent: Wednesday, November 03, 2004 3:00 PM
To: oracle-l_at_freelists.org
Subject: RE: Update of Clobs *Performance*

create index why_full_scan_my_clobs_per_row on Table_B(tabB_num);

change your where clause to: where tabB_num = to_number(v_id)

Think about a commit counter within the loop, committing at an interval smaller than the entire table. Maybe every 1000 or 10000 rows?

Regards,

mwf

<snip>

Procedure:

declare
  v_clob varchar2(32500);
  v_id   varchar2(10);

  cursor cont_rep_clob is
    select tabA_char, tabA_clob
      from Table_A;

begin
  open cont_rep_clob;
  loop
    fetch cont_rep_clob into v_id, v_clob;

    exit when cont_rep_clob%NOTFOUND;

    update Table_B
       set tabB_clob = v_clob
     where to_char(tabB_num) = v_id;

    commit;

  end loop;
  close cont_rep_clob;
end;
/



--
http://www.freelists.org/webpage/oracle-l
Received on Wed Nov 03 2004 - 15:16:35 CST
