Re: Huge Table Insert - take long time?

From: Woody McKay <woody.mckay_at_gmail.com>
Date: Sat, 28 Jul 2018 20:43:29 -0400
Message-ID: <CAAxONsTF+PZ=dCcXKqbZq6o5WkBmkA5pKPJTh+yRmtDR=RJhzw_at_mail.gmail.com>



Thank you, Mark! I appreciate your thoughts and insight. Going to incorporate this as much as possible to see what it buys me.

On Fri, Jul 27, 2018 at 1:04 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

> You are probably correct. An insert DOP of 1 might even be best, so there
> is a single insert point for each table and thus no contention for the
> blocks above the high water mark (which is presumably the beginning, since
> you're using a new table for each insert target).
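>
> A minimal sketch of one such load (names hypothetical; with parallel
> DML left disabled, only the SELECT runs in parallel, so each staging
> table gets a single serial direct-path insert point):
>
> INSERT /*+ APPEND */ INTO stage_p01
> SELECT /*+ PARALLEL(o, 4) */ * FROM main_old PARTITION (p01) o;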
>
>
>
> Unlike an online move, in this model your original data source still
> exists, so you may persuade your administrators to give you new
> time-period-based tablespaces that are initially NOT force logged, and to
> back them up and ship them well before the source goes away. Remember,
> since exchange doesn't move the data, these new tablespaces are where the
> data will remain. If you're appending to only one table per tablespace at
> a time, then dbwr is going to have contiguous blocks or compression units
> to write. I don't know how much that might speed things up on an Exadata.
> On older disk tech it can be a lot.
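>
> Something along these lines (all names hypothetical; note that a
> database-level FORCE LOGGING setting overrides the tablespace
> attribute, so the administrators would have to manage logging at the
> tablespace level instead):
>
> CREATE TABLESPACE ts_2018q3 DATAFILE '+DATA' SIZE 100G;
> ALTER TABLESPACE ts_2018q3 NO FORCE LOGGING;
>
> CREATE TABLE stage_2018q3
>   TABLESPACE ts_2018q3
>   NOLOGGING
>   COMPRESS FOR QUERY HIGH
> AS SELECT * FROM main_old WHERE 1 = 0;
>
> -- after loading and backing up: ALTER TABLESPACE ts_2018q3 FORCE LOGGING;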
>
>
>
> If there is a known best order for both compression and query block
> clustering (or even one or the other), this is the time to impose it. With
> the way HCC works, this can save a lot of space if commonly duplicated
> column values gain a high fan-in per compression unit. With zone maps, this
> can dramatically reduce the number of compression units that need to be
> unpacked and delivered at query time. You get that forever for any
> partitions that have become quiescent with age.
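>
> In other words, pay for one sort at load time (the clustering columns
> here are hypothetical):
>
> INSERT /*+ APPEND */ INTO stage_2018q3
> SELECT /*+ PARALLEL(o, 4) */ *
> FROM   main_old PARTITION (p_2018q3) o
> ORDER  BY region_id, account_id;  -- best order for HCC fan-in and
>                                   -- zone map pruning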
>
>
>
> Good luck!
>
>
>
> mwf
>
>
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] *On Behalf Of *Woody McKay
> *Sent:* Thursday, July 26, 2018 7:58 PM
> *To:* keyantech_at_gmail.com
> *Cc:* ORACLE-L
> *Subject:* Re: Huge Table Insert - take long time?
>
>
>
> Ya got me thinking about the "load data by partition" idea...
>
>
>
> I wonder, instead of an insert DOP 16 from a select DOP 16, what if I
> have, say, 50 nonpartitioned tables (one for every partition) and run a
> DOP 4 insert into each of them from *_old partition () all at the same
> time, then do a partition exchange from the 50 nonpartitioned tables into
> the main fully partitioned table? The exchange is quick since it is just a
> data dictionary change.
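>
> One slice of that would look roughly like this (names hypothetical; 50
> of these would run concurrently, one per target partition):
>
> ALTER SESSION ENABLE PARALLEL DML;
>
> INSERT /*+ APPEND PARALLEL(4) */ INTO stage_p01
> SELECT /*+ PARALLEL(o, 4) */ * FROM main_old PARTITION (p01) o;
> COMMIT;
>
> ALTER TABLE new_main
>   EXCHANGE PARTITION p01 WITH TABLE stage_p01
>   WITHOUT VALIDATION;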
>
>
>
> This may be faster because the aggregate DOP is higher. Maybe fewer
> exchange waits? I'm assuming that DOP 16 can't work on all 50 partitions
> together as efficiently as loading all 50 nonpartitioned tables?
>
>
>
> Got me thinking...
>
>
>
>
>
>
>
>
>
> On Mon, Jul 23, 2018 at 8:15 PM Karthikeyan Panchanathan <
> keyantech_at_gmail.com> wrote:
>
> What is the partition type involved here?
>
>
>
> One option: you can load the data by partition into an intermediary table
> (with the new columns) and exchange. This way you can run parallel
> sessions for each partition, and before the exchange you can build the new
> table's indexes at the partition level.
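>
> For example (a sketch with hypothetical names; the index is built on
> the staging table first, and must match a local index on the
> partitioned table for INCLUDING INDEXES to apply):
>
> CREATE INDEX stage_p01_ix ON stage_p01 (id) NOLOGGING PARALLEL 4;
>
> ALTER TABLE new_main
>   EXCHANGE PARTITION p01 WITH TABLE stage_p01
>   INCLUDING INDEXES WITHOUT VALIDATION;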
>
>
>
> You mentioned another DB. Have you thought about moving the data from that
> DB to prod using Data Pump?
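>
> For example, a sketch (link and object names hypothetical; with
> network_link, impdp pulls straight from the source DB over a database
> link, with no intermediate dump file):
>
> impdp app_user@prod \
>   network_link=refactor_db \
>   tables=app.new_main \
>   parallel=16 \
>   logfile=imp_new_main.log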
>
>
>
> Karth
>
> Sent from my iPhone
>
>
> On Jul 23, 2018, at 7:52 PM, Woody McKay <woody.mckay_at_gmail.com> wrote:
>
> Hi gurus, I'm fishing for creative thoughts here...
>
>
>
> 12.1.0.2 RAC on Exadata with forced logging for standby
>
> There's a refactoring process (subpartitioned tables with millions to a
> few billion rows):
>
> 1. rename the main table to main_old
> 2. create the new_main table (no indexes or constraints at this point)
> 3. set table prefs for concurrent incremental stats
> 4. enable parallel dml
> 5. insert into new_main from main_old using APPEND PARALLEL 16 (note: most
> tables are compressed; sketched below)
> 6. create a PK or an index (as needed) on new_main
> 7. gather table stats on new_main
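>
> Roughly, steps 3 to 5 were (owner and column lists elided; SELECT * is
> just shorthand here, since new_main has extra columns):
>
> EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'NEW_MAIN', 'INCREMENTAL', 'TRUE');
>
> ALTER SESSION ENABLE PARALLEL DML;
>
> INSERT /*+ APPEND PARALLEL(16) */ INTO new_main
> SELECT /*+ PARALLEL(16) */ * FROM main_old;
> COMMIT;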
>
> The insert in Step 5 took 9.5 hours... Total refactor process time was
> about 13 hours.
>
> Is there anything you can think of to make this process any faster?
>
> I assume the APPEND hint isn't doing much because the DB has FORCE
> LOGGING. I'm guessing alter table nologging wouldn't do much either. I
> can't exchange from old to new since new most likely has columns added.
>
> I've mentioned using alter table if we are only adding a few columns, but
> the production DBAs are worried about chained rows, fragmentation,
> possible explain plan changes, etc. I'm not sure their concerns are
> justified.
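>
> For reference, the in-place alternative is just the following
> (hypothetical column; adding a nullable column with no default is a
> dictionary-only change, so chained rows would only appear later as rows
> are updated):
>
> ALTER TABLE main ADD (load_batch_id NUMBER);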
>
>
>
> I've thought of doing the refactor in a diff DB (where I can truly have
> nologging) and use transportable tablespace, but don't think that would be
> any better.
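>
> For the record, the TTS flow would be roughly (all names hypothetical;
> the load runs NOLOGGING in the scratch DB, then the datafiles are
> shipped):
>
> ALTER TABLESPACE refactor_ts READ ONLY;
>
> expdp system directory=dp_dir dumpfile=refactor_ts_meta.dmp \
>   transport_tablespaces=refactor_ts
>
> then copy the datafiles to prod and plug them in:
>
> impdp system directory=dp_dir dumpfile=refactor_ts_meta.dmp \
>   transport_datafiles='/path/to/refactor_ts_01.dbf'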
>
> Any thoughts or ideas to try?
>
>
>
> Thanks, as always - Woody
>
>
>
>
>
>
>
>
> --
>
> Sincerely,
>
> WoodyMcKay
>

-- 
Sincerely,

WoodyMcKay

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Jul 29 2018 - 02:43:29 CEST
