
Re: Select Distinct over Database Link

From: <Kenneth>
Date: Wed, 21 Aug 2002 10:41:44 GMT
Message-ID: <3d6368e5.10696951@news.capgemini.se>


On Wed, 21 Aug 2002 12:31:17 +0400, "Vladimir M. Zakharychev" <bob_at_dpsp-yes.com> wrote:

><Kenneth Koenraadt> wrote in message news:3d6343c3.1191052_at_news.capgemini.se...
>> The advantages of committing with a certain frequency :
>>
>> 1) Prevents rollback segment from growing enormously. With inserts, as
>> in this case, that is not the biggest problem.
>
>In my book, committing inside a loop over a cursor in Oracle (and I should
>emphasize this, in Oracle), is a very straightforward way of getting
>ORA-1555. By committing in a loop you allow rollback blocks needed for
>maintaining read-consistent view for the cursor to be overwritten. This
>alone makes such intermediate commits a bad practice.

Nope. When batch updating/inserting millions of rows in a table, a read-consistent view will very often (though not always) do more harm than good, as that view is completely out of date... Anyway, you brought in another (irrelevant) issue here. In the actual case it was a whole new table, so no one needed a read-consistent view, did they?!

In general, of course, more frequent commits will increase the possibility of ORA-1555.
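For reference, the pattern under debate looks roughly like this - a minimal sketch, with hypothetical table names:

BEGIN
  FOR r IN (SELECT id, payload FROM src_table) LOOP  -- src_table is hypothetical
    INSERT INTO dest_table (id, payload) VALUES (r.id, r.payload);
    COMMIT;  -- the intermediate commit inside the cursor loop
  END LOOP;
END;
/

The open cursor still needs the rollback (undo) blocks for its read-consistent view, but each COMMIT makes them eligible for reuse - that is the mechanism behind the ORA-1555 risk described above.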

>
>> 2) The possibility of easily auditing the job from another session,
>> i.e. counting the number of records currently inserted by the job and
>> from that estimate, e.g., the time of job completion.
>
>There is v$session_longops view and dbms_application_info.set_session_longops()
>for this purpose. Your transaction sets its target, whatever it may be, row count or
>anything, and then periodically updates status using this procedure, and another
>session can query the view to see your transaction progress.

There is always another way to do things... The one you mention is surely sufficient, but not as straightforward as 'select count(*) from mytable'.
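For completeness, a minimal sketch of the longops approach - the loop body, label, and totals are hypothetical; set_session_longops and v$session_longops are the interfaces Vladimir names:

DECLARE
  l_rindex BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  NUMBER := 1000000;  -- hypothetical total amount of work
BEGIN
  FOR i IN 1 .. l_total LOOP
    NULL;  -- one unit of work (e.g. one insert) would go here
    IF MOD(i, 10000) = 0 THEN  -- update progress every 10,000 rows
      dbms_application_info.set_session_longops(
        rindex    => l_rindex,
        slno      => l_slno,
        op_name   => 'bulk insert',  -- free-text label for the operation
        sofar     => i,
        totalwork => l_total,
        units     => 'rows');
    END IF;
  END LOOP;
END;
/

Another session can then watch progress with something like
select opname, sofar, totalwork, time_remaining from v$session_longops.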

>>
>> 3) Less rollback is needed in case of job failure.
>>
>
>Did not understand this one. Do you mean less i/o is needed to roll back
>a failed transaction?

Yep.

>Or less rollback space?
>
>> 4) The commit frequency can be varied. In the actual case, you could
>> rewrite the block as a named procedure which takes commit_frequency as
>> a parameter. That parameter could even have a default of, say, -1,
>> which would mean "no commit before all inserts are done".
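For concreteness, such a procedure might look roughly like this - all names are hypothetical, and the intermediate-commit branch of course carries the ORA-1555 risk discussed above:

CREATE OR REPLACE PROCEDURE copy_rows (p_commit_frequency IN PLS_INTEGER DEFAULT -1) IS
  l_count PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT id, payload FROM src_table) LOOP  -- src_table is hypothetical
    INSERT INTO dest_table (id, payload) VALUES (r.id, r.payload);
    l_count := l_count + 1;
    IF p_commit_frequency > 0 AND MOD(l_count, p_commit_frequency) = 0 THEN
      COMMIT;  -- intermediate commit every p_commit_frequency rows
    END IF;
  END LOOP;
  COMMIT;  -- with the default of -1 this is the only commit
END;
/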
>
>In Oracle, commit frequency should not be varied. You should only commit
>when necessary (that is, at the end of the transaction) and not any more often.
>It doesn't matter if you insert 1 or 1 million rows - you should commit when
>the whole transaction is done (or roll back). Any issues with lack of rollback
>space I would attribute to bad rollback sizing, which is what the DBA is
>responsible for. And you can always allocate a huge rollback segment for an
>enormously big transaction if need be (that is, your system does not normally
>run such big transactions and this was not planned, but all of a sudden you
>need to).
>If you commit a transaction in intermediate chunks, you face a serious risk of ending up with
>hardly recoverable errors. For example, you need to insert 1 million rows into a
>table. You commit after each 100,000 rows. After inserting, say, 500,000 rows
>(and committing!) you hit some error that causes your process to stop. Now you
>have a problem - your tx is semi-finished. You either need to delete all those rows
>you inserted and start anew, or find a way to detect which rows were already
>inserted and continue from where the process stopped.

Again, that problem does not arise in this case.

>Either way, you should
>carefully plan for failures and program a workaround for them so that you can
>continue or restart after error.

Agreed!
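One simple workaround of the kind he describes - detect which rows were already inserted and continue from there - might look like this (names hypothetical, and it assumes id identifies a row):

INSERT INTO dest_table (id, payload)
  SELECT s.id, s.payload
    FROM src_table s
   WHERE NOT EXISTS (SELECT 1 FROM dest_table d WHERE d.id = s.id);
COMMIT;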

>If you simply insert the whole 1m rows and
>commit or rollback, your data is pretty consistent whatever the outcome - you
>either inserted all rows or none, so you can easily correct the problem and
>restart in case of error.
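A minimal sketch of that all-or-nothing approach, tied back to the original problem - the table names and the database link are hypothetical:

BEGIN
  INSERT INTO dest_table (id, payload)
    SELECT DISTINCT id, payload FROM src_table@remote_link;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- nothing was persisted, so a restart begins from a clean slate
    RAISE;
END;
/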
>
>>My statement regards the processing of a large number of records.
>>"Large number" is relative in the sense that 1M inserts/updates may be
>>a considerable number on a small system, but no problem in a larger
>>system.
>
>"Small systems" (do these refer to systems with very limited disk space?)
>shouldn't be running large transactions - they are just not sized appropriately.

Welcome to the real world... Many systems are "under-sized" in the sense that *sometimes* you need to run jobs that take a long time and must be split into smaller pieces. You just can't buy new hardware every time you need to run a single, large batch job when the system runs fine the other 99.5% of the time.

>Using tricky and dangerous ways to overcome system limits is not what I
>would ever recommend...

Neither would I. And I didn't.
>
>Corrections and additions welcome.

Thanks for the debate ;-)
>--
>Vladimir Zakharychev (bob@dpsp-yes.com) http://www.dpsp-yes.com
>Dynamic PSP(tm) - the first true RAD toolkit for Oracle-based internet applications.
>All opinions are mine and do not necessarily go in line with those of my employer.
>
>
Received on Wed Aug 21 2002 - 05:41:44 CDT
