Re: Inserting billions of rows faster

From: Jonathan Lewis <jlewisoracle_at_gmail.com>
Date: Thu, 1 Apr 2021 11:51:47 +0100
Message-ID: <CAGtsp8ni5_HwgoP_BRoORLAEh3i4phKHTeT4XpB+4i6=Trv+iA_at_mail.gmail.com>



Your comments about the partition approach are exactly what I had in mind.
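
For the record, the sort of loop I have in mind looks like the sketch below. It assumes the stage table was created with exactly the same subpartition names as the real table, and the names stage_table and actual_table are only placeholders for your own objects:

    -- loop over the stage table's hash subpartitions and direct-path
    -- insert each one into the matching subpartition of the real table
    begin
      for sp in (
        select subpartition_name
        from   user_tab_subpartitions
        where  table_name = 'STAGE_TABLE'
        order  by subpartition_name
      ) loop
        execute immediate
          'insert /*+ append */ into actual_table subpartition (' ||
          sp.subpartition_name ||
          ') select * from stage_table subpartition (' ||
          sp.subpartition_name || ')';
        -- a direct-path insert must be committed before the session
        -- can do anything else with the table
        commit;
      end loop;
    end;
    /

Empty subpartitions simply insert no rows, so there's no real need to pick out the non-empty ones in advance.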

As far as keeping the current undo tablespace goes: without having access to the system and being able to observe its behaviour in a few large-scale experiments it's very hard to predict what will happen and why; but I suspect that once this ORA-01628 error starts to appear it's going to be hard to get rid of it. It depends on how Oracle handles stealing extents in your version. My thinking is as follows:

If you have an undo segment which has managed to get to 32,765 extents and you start this very big transaction in a different segment, that segment will grow and use up any free space in the undo tablespace fairly quickly; then it will start stealing extents from other segments. If it starts stealing extents from the big segment that hit the problem, will it steal one extent at a time, or will it try to steal (e.g.) 1MB at a time? I suspect it will HAVE to steal one extent at a time, which means it's going to keep stealing lots of little extents and, as a consequence, will itself reach 32,765 extents.
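
You can watch how close each segment gets to that limit with a simple query against dba_undo_extents (run as a suitably privileged user), something like:

    -- count extents per undo segment; the hard limit is 32,765
    select segment_name,
           count(*)           as extent_count,
           sum(bytes)/1048576 as size_mb
    from   dba_undo_extents
    group  by segment_name
    order  by extent_count desc;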

If there's a way to force Oracle to drop undo segments, allowing lots of little extents to be freed and (in effect) coalesced, then you might rescue yourself without creating a new tablespace - but I don't think there is.
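
The nearest thing to an escape route that I know of is the one you mention below: create a fresh undo tablespace, switch the instance to it, and drop the old one once its active and unexpired undo has aged out. As a sketch (assuming the current tablespace is called undotbs1; the file name and sizes are only placeholders):

    -- create a replacement undo tablespace
    create undo tablespace undotbs2
      datafile '/u01/oradata/ORCL/undotbs2_01.dbf'
      size 10g autoextend on;

    -- point the instance at it; new transactions will use it
    alter system set undo_tablespace = 'UNDOTBS2' scope=both;

    -- later, once no active or unexpired undo remains in the old one:
    drop tablespace undotbs1 including contents and datafiles;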

Regards
Jonathan Lewis

On Wed, 31 Mar 2021 at 06:20, Lok P <loknath.73_at_gmail.com> wrote:

> Thank you Jonathan.
>
> Yes, all the indexes are local in this case. We are not very sure how
> the data movement happens across active/inactive partitions; we will
> try to check with the dev team and, if possible, will engage experts
> to do a detailed analysis of this.
>
> I was initially not able to understand your point fully, but having
> re-read it: your suggestion is to load just those ~700 million rows
> into a blank stage table with exactly the same partitioning structure
> but without any indexes, and then loop through each of the non-empty
> hash subpartitions of the stage table doing an "insert /*+ append */
> into actual_table partition (...) select ... from stage_table
> partition (...);". We will try this option. But, as I mentioned in the
> past, when we tried making the data load happen in chunks of 1 million
> rows through a FORALL insert, the inserts were very slow and the waits
> (cell single block physical read) were all around the indexes, so it
> might be that the order of the incoming data really makes a
> difference. I think your point is that this subpartition-to-
> subpartition load may go faster because it improves the ordering of
> the data. Correct me if wrong.
>
> Creating a new UNDO tablespace, or dropping and recreating the
> existing undo tablespace, should help us get rid of those large
> numbers of small extents. We are evaluating that option too. But
> before that we also want to have one final try at loading those ~700
> million rows in one shot (using insert into.. select from..), just to
> see if we are lucky enough to get a rollback segment for this
> transaction that does not already have a large number of small
> extents in it.
>
>
> Regards
> Lok
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Apr 01 2021 - 12:51:47 CEST
