RE: Inserting with billion of rows faster

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Sat, 3 Apr 2021 12:04:15 -0400
Message-ID: <179601d728a2$ff3c64c0$fdb52e40$_at_rsiz.com>



By the way, how many indexes? If there is exactly one, and the new data sorts above the current rows in that index's order, it is probably worth ordering your input data. Possibly even worth adding a date-time prefix to the existing index to make that true.
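As a rough illustration (the names here are hypothetical, not from the thread): assume a stage table txn_stage whose single index is on (txn_date, txn_id). Sorting the incoming rows by those columns means the index grows only at its right-hand edge, so each leaf block is filled once rather than revisited:

    -- txn_stage / txn_source are illustrative names; the single index is on (txn_date, txn_id)
    insert /*+ append */ into txn_stage
    select src.*
    from   txn_source src
    order  by src.txn_date, src.txn_id;
    commit;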

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Lok P
Sent: Saturday, April 03, 2021 7:38 AM
To: Jonathan Lewis
Cc: Oracle L
Subject: Re: Inserting with billion of rows faster  

Basically my thought was: in a normal data load scenario where we have to move TBs of data (say 1TB+), and given that post-load indexing will take the same amount of time in either case, is it better to load the data in chunks with the INSERT APPEND method, or to do a one-shot load using CTAS? In both cases there will be almost zero UNDO, so I was wondering whether there is any limit on a one-shot data load, such that beyond a certain data size we should not take the CTAS approach but have to load in chunks only (maybe using the INSERT ... APPEND method, as sketched below)?
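To make the "in chunks" option concrete, here is a minimal sketch assuming a hypothetical driving column part_key on the source table (none of these names come from the thread). Each direct-path insert must be committed before the session can run further DML or queries against the table, hence the commit per chunk:

    begin
      for r in (select distinct part_key from big_source order by part_key) loop
        insert /*+ append */ into stage_tab
        select * from big_source where part_key = r.part_key;
        commit;  -- direct-path insert must be committed before the table is touched again
      end loop;
    end;
    /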

Regards

Lok  

On Thu, Apr 1, 2021 at 6:49 PM Lok P <loknath.73_at_gmail.com> wrote:

Thanks much, Jonathan. It worked after shrinking the top segments. Thanks again.  

Out of curiosity, and since we have almost decided to attempt this one-time load: considering ~18 billion rows (~1TB+ in size) to be moved in one shot, is it sensible to go for the "insert /*+ append */ into stage table ... select ... from source table" approach, or "CTAS stage table ... select from source table" (see the sketch below)? I believe in both cases UNDO will be almost zero, as there is no index on the stage table, so I am wondering what other DB resources it will consume?
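For comparison, a sketch of the one-shot CTAS under the same illustrative names; NOLOGGING and PARALLEL are optional extras, and only appropriate if the backup/standby strategy tolerates unrecoverable operations:

    -- one-shot load: minimal undo for the table data; minimal redo too if NOLOGGING is allowed
    create table stage_tab
    nologging
    parallel 8
    as
    select /*+ parallel(s 8) */ *
    from   big_source s;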

On Thu, Apr 1, 2021 at 6:19 PM Lok P <loknath.73_at_gmail.com> wrote:

Thank you so much, Jonathan. As you rightly mentioned, shrinking the specific rollback segment seems to be the best workaround in this situation. I'm going to try it now. Thanks.

On Thu, Apr 1, 2021 at 5:07 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:    

Follow-up to my previous comments about dropping or affecting the size of undo segments: read MOS note 1580182.1.

The basic comment is that you can hit this problem if you have very long-running transactions and a very large undo retention - and it probably needs a reasonable level of concurrency as well, but that's not stressed.

It is possible to set a debug parameter that allows you to shrink undo segments by name - details in the note.
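For illustration only (check the note for the exact, version-specific syntax before trying this): the mechanism usually quoted is the hidden parameter _smu_debug_mode, which, when set to 4, allows a named system-managed undo segment to be shrunk. The segment name below is made up:

    -- find the oversized undo segments first
    select segment_name, bytes
    from   dba_segments
    where  tablespace_name = 'UNDOTBS1'
    order  by bytes desc;

    -- hidden parameter; use only as the MOS note directs
    alter session set "_smu_debug_mode" = 4;
    alter rollback segment "_SYSSMU9_1234567890$" shrink;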

It seems to work.    

Regards

Jonathan Lewis      

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Apr 03 2021 - 18:04:15 CEST
