Re: large delete

From: Robert Klemme <bob.news_at_gmx.net>
Date: Thu, 22 Jun 2006 11:02:19 +0200
Message-ID: <4fv4krF1ksnmdU1@individual.net>


Sybrand Bakker wrote:
> On Wed, 21 Jun 2006 12:53:16 +0200, Robert Klemme <bob.news_at_gmx.net>
> wrote:
>

>> Sybrand Bakker wrote:
>>> On Tue, 20 Jun 2006 16:21:27 -0700, Akimeu <Nospam_at_Email.com> wrote:
>>>
>>>> hello,
>>>>
>>>> i have done some looking around for this, but haven't really found a 
>>>> solid answer.  we have an sp, to which we're passing in a table name, 
>>>> after which we execute immediate a delete statement.  is there any way 
>>>> of committing within this particular delete?
>>>>
>>>> any insights/suggestions are appreciated.  thanks.
>>>>
>>>> this is what i was thinking of doing:
>>>>
>>>>   loop
>>>>     vsql := 'delete from '||vschem||'.'||vtablename||
>>>>             ' where '||vcol||'='''||vid||''' and rownum <= 10000';
>>>>     execute immediate vsql;
>>>>     exit when sql%rowcount = 0;
>>>>     commit;
>>>>   end loop;
>>> do NOT commit every n records. You will be hitting 'snapshot too old'
>>> because of this.
>> Could you elaborate on this a bit?  Your answer isn't clear to me.  It 
>> seems that Tom suggests doing exactly this (i.e. working only on a 
>> subset of the data) to *avoid* ORA-01555 "Snapshot too old":
>>
>> "3.  Run the processing against a range of data rather than the whole 
>> table. (Same reason as 1)."
>>
>> http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:275215756923
>>
>> Kind regards
>>
>> 	robert

>
>
> Actually if you do read his article carefully
> /* quote */
> and in the rare cases where ORA-01555 is returned one of the
> solutions below must be used.
>
> CASE 1 - ROLLBACK OVERWRITTEN
>
> 1. Increase size of rollback segment which will reduce the likelihood
> of overwriting rollback information that is needed.
>
> 2. Reduce the number of commits (same reason as 1).
>
> 3. Run the processing against a range of data rather than the whole
> table. (Same reason as 1).
>
> 4. Add additional rollback segments. This will allow the updates etc.
> to be spread across more rollback segments thereby reducing the
> chances of overwriting required rollback information.
>
> 5. If fetching across commits, the code can be changed so that this
> is not done.
>
> 6. Ensure that the outer select does not revisit the same block at
> different times during the processing. This can be achieved by:
>
> - Using a full table scan rather than an index lookup
> - Introducing a dummy sort so that we retrieve all the data, sort it
> and then sequentially visit these data blocks.
>
> /*unquote*/
>
> So he recommends using *one of the solutions* below.
> Why should that necessarily be solution 3, according to you?

Um, I don't think anybody claimed it *should* be option 3, but since that 
option *can* be a solution I was curious why you made your suggestion. 
Considering the rest of your posting, I assume you meant "do not favor 
committing every n records over increasing undo space, which is cleaner 
and safer".  I didn't get that from your first posting.  Thanks for 
clarifying!
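
As an aside, a self-contained sketch of the kind of procedure being 
discussed could look like the following.  The procedure and parameter 
names are made up for illustration, binds and dbms_assert are used 
instead of concatenating the value into the statement, and the 
per-batch commit is exactly the point under debate here:

  create or replace procedure purge_in_batches (
    p_schema     in varchar2,
    p_table      in varchar2,
    p_column     in varchar2,
    p_id         in varchar2,
    p_batch_size in pls_integer default 10000
  ) as
    v_sql varchar2(4000);
  begin
    -- validate the identifiers before putting them into dynamic SQL
    v_sql := 'delete from '
          || dbms_assert.simple_sql_name(p_schema) || '.'
          || dbms_assert.simple_sql_name(p_table)
          || ' where ' || dbms_assert.simple_sql_name(p_column)
          || ' = :id and rownum <= :n';
    loop
      execute immediate v_sql using p_id, p_batch_size;
      exit when sql%rowcount = 0;
      commit;  -- intermediate commit after each batch
    end loop;
  end purge_in_batches;
  /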

> Actually the OP is already deleting a range of data (look at the first
> predicate in his delete statement), and as far as I am concerned he
> should just bite the bullet and accept his undo segments are going to
> grow.

Yes, that's definitely the cleanest and probably also the most efficient solution.
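
For contrast, the "bite the bullet" variant is one statement in one 
transaction.  A minimal anonymous-block sketch, reusing the OP's 
variable names with placeholder values and passing the id as a bind 
rather than a concatenated literal:

  declare
    -- placeholder values; only the shape of the statement matters here
    vschem     varchar2(30)  := 'SOME_SCHEMA';
    vtablename varchar2(30)  := 'SOME_TABLE';
    vcol       varchar2(30)  := 'SOME_COL';
    vid        varchar2(100) := 'SOME_ID';
  begin
    -- one delete, one commit: either every matching row is gone or
    -- none is, at the cost of undo sized for the whole delete
    execute immediate
      'delete from ' || vschem || '.' || vtablename ||
      ' where ' || vcol || ' = :id'
      using vid;
    commit;
  end;
  /

If this fails halfway, the whole delete rolls back and the table stays 
logically consistent, which is exactly the point made below.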

> Furthermore: the OP seems to split a *logical* transaction into
> several smaller physical ones. This is invariably a bad idea.
> In what state will the table be if his program crashes? At best the
> situation will be hybrid: m records would have been deleted, and
> the delete of n records would have been undone. The OP's table is no
> longer logically consistent, and it is unclear what has been deleted
> and what hasn't.

Yes, of course. Completely agree.

Kind regards

        robert
