Re: Huge Table Insert - take long time?

From: Brent Day <coloradodba_at_gmail.com>
Date: Mon, 23 Jul 2018 18:10:23 -0600
Message-ID: <CAEz8shwMCJ6BOqs6RcDL5rZm8u+ZRSWv75RF5c_dDfdHyemJsQ_at_mail.gmail.com>



Have you looked at DBMS_REDEFINITION? You could possibly reorg while the app remains up.
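
A bare-bones sketch of the online redefinition flow (schema and table names
here are placeholders, and the interim table would be pre-created with the
new column layout):

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- raises an error if the table cannot be redefined by primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'MAIN',
      options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- start copying rows into the pre-created interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'MAIN', 'MAIN_INTERIM',
      options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- clone indexes, constraints, triggers and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'MAIN', 'MAIN_INTERIM',
      copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      num_errors   => num_errors);

  -- apply changes made during the copy, then swap the two tables
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'MAIN', 'MAIN_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'MAIN', 'MAIN_INTERIM');
END;
/

The initial copy will still take a while on tables that size, but the
application stays up while it runs.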

On Mon, Jul 23, 2018, 5:53 PM Woody McKay <woody.mckay_at_gmail.com> wrote:

> Hi gurus, I'm fishing for creative thoughts here...
>
> 12.1.0.2 RAC on Exadata with forced logging for standby
>
> There's a refactoring process (for subpartitioned tables with millions to a
> few billion rows):
>
> 1. rename the main table to main_old
> 2. create the new_main table (no indexes or constraints at this point)
> 3. set table prefs for concurrent incremental stats
> 4. enable parallel dml
> 5. insert into new_main from main_old using APPEND PARALLEL 16 (note: most
> tables are compressed; see the simplified statement below this list)
> 6. create a PK or an index (as needed) on new_main
> 7. gather table stats on new_main
>
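> For reference, steps 4 and 5 are essentially this (a simplified sketch;
> the real statement is against the actual column lists):
>
>   alter session enable parallel dml;
>
>   insert /*+ APPEND PARALLEL(new_main, 16) */ into new_main
>   select /*+ PARALLEL(main_old, 16) */ *
>   from main_old;
>
>   commit;
>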
> The insert in Step 5 took 9.5 hours... Total refactor process time was
> about 13 hours.
>
> Is there anything you can think of to make this process any faster?
>
> I assume the APPEND hint isn't doing much because the DB has FORCE
> LOGGING. Guessing alter table nologging wouldn't do much either. Can't
> exchange from old to new since new most likely has columns added.
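>
> Force logging at the database and tablespace level can be confirmed with
> something like:
>
>   select force_logging from v$database;
>   select tablespace_name, force_logging from dba_tablespaces;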
>
> I've mentioned using alter table to add the columns in place if we are only
> adding a few, but the production DBAs are worried about chained rows,
> fragmentation, possible explain plan changes, etc. I'm not sure those
> concerns are justified.
>
> I've thought of doing the refactor in a different DB (where I can truly
> have nologging) and using transportable tablespaces, but I don't think that
> would be any better.
>
> Any thoughts or ideas to try?
>
> Thanks, as always - Woody
>
>
>

--
http://www.freelists.org/webpage/oracle-l