Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Idea for database migration using parallelism at the table level

From: Vincent G. Poore <vincepoore_at_excite.com>
Date: 21 Mar 2004 07:23:43 -0800
Message-ID: <9f69adb7.0403210723.24019fd3@posting.google.com>

Since you didn't mention #2, I assumed your suggestion of using an indexfile was meant to accomplish #1. If everything balanced perfectly, the total Phase 2 time would be the time to move the largest table and build its indexes. I suggested 'altering the parallel degree of the indexes before starting the migration' as a simple way to parallelize index creation within a single table. This would only be done for the top two or three tables in the database.
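For what it's worth, 'altering the parallel degree of the indexes' can be generated ahead of time. A minimal sketch (the index list, file names, and degree 8 are all illustrative; a real run would spool the list from dba_indexes in the source database and execute the output in SQL*Plus):

```shell
#!/bin/sh
# Sketch: pre-generate DDL to raise the parallel degree of the biggest
# tables' indexes before the migration, plus a reset script for afterwards.
# The sample index names, file names, and DEGREE=8 are illustrative.
DEGREE=8
printf 'SCOTT.EMP_PK\nSCOTT.DEPT_PK\n' > big_indexes.txt   # sample input
: > alter_parallel.sql
: > reset_parallel.sql
while read idx; do
  echo "ALTER INDEX $idx PARALLEL $DEGREE;" >> alter_parallel.sql
  echo "ALTER INDEX $idx NOPARALLEL;"       >> reset_parallel.sql
done < big_indexes.txt
```

Running alter_parallel.sql before Phase 2 and reset_parallel.sql after keeps the change reversible.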

All in all, I think you've helped me tweak the plan to about as fast as it's going to get, treating it as a logical division-of-labor problem, without delving into something too complicated to design a scripted solution for. I have 80 databases to upgrade, so the approach has to be fast and yet simple to implement.

"koert54" <nospam_at_spam.com> wrote in message news:<rxk6c.86184$iA2.11815_at_news.easynews.com>...
> - i'm not sure what you mean with 'altering the parallel degree of the
> indexes before starting the migration' - i don't think using pq slaves
> to speed up the index creation is a good idea (i think it would blow up your
> runqueue) - i think chopping up the index file and doing
> parallel runs will be faster (just my gut feeling talking here)
> - you could always do an 'enable novalidate' for the constraints (create a
> script in your source db to generate the ddl)
> if you want to avoid the FTS
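Generating the 'enable novalidate' DDL koert describes could look roughly like this (a sketch; the sample constraint list is illustrative, and in practice it would be spooled from dba_constraints in the source database before the migration):

```shell
#!/bin/sh
# Sketch: turn a list of OWNER.TABLE:CONSTRAINT entries into
# ENABLE NOVALIDATE DDL, avoiding the full-table scans and parent
# index lookups that validating the constraints would require.
# The sample entries are illustrative stand-ins for a dba_constraints spool.
printf 'SCOTT.EMP:EMP_DEPT_FK\nSCOTT.DEPT:DEPT_PK\n' > cons_list.txt
: > enable_novalidate.sql
while IFS=: read tab cons; do
  echo "ALTER TABLE $tab ENABLE NOVALIDATE CONSTRAINT $cons;" >> enable_novalidate.sql
done < cons_list.txt
```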
>
> "Vincent G. Poore" <vincepoore_at_excite.com> wrote in message
> news:9f69adb7.0403180750.451cf9dc_at_posting.google.com...
> > Thank you for the tip.
> >
> > Originally, my thinking was to do as many tables in a single
> > imp pass (data,index,constraints) as possible. It would reduce
> > the number of imp logs to review, but I should just automate
> > the review in the script.
> >
> > As far as going with INDEXES=y or INDEXFILE=index.sql, I don't
> > see much difference between the two. Either way the indexes are
> > created after the table is loaded and I can always alter the
> > parallel degree of the indexes before starting the migration.
> > Isn't that simpler than messing around with hundreds of
> > INDEXFILEs or breaking up a single big INDEXFILE into parallel
> > chunks?
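For comparison, koert's chop-up approach is only a few lines of awk when each statement sits on one line. A sketch (the sample statements are illustrative; note that real imp INDEXFILE output can wrap long statements across lines, so a production version would need to join those first):

```shell
#!/bin/sh
# Sketch: deal CREATE INDEX statements round-robin into N chunk files
# that can then be run in parallel SQL*Plus sessions.
# Assumes one statement per line; imp's INDEXFILE output does not
# guarantee this, so multi-line statements need rejoining first.
N=4
printf 'CREATE INDEX i1 ON t1(c1);\nCREATE INDEX i2 ON t2(c1);\nCREATE INDEX i3 ON t3(c1);\n' > index.sql
awk -v n=$N '{ print > ("chunk_" NR % n ".sql") }' index.sql
```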
> >
> > Enabling table constraints will involve full scans of the
> > tables and parent index lookups, so I don't think your
> > suggestion of enabling them via a full import is a good idea,
> > since it won't be parallel.
> >
> > Even though I disagree with a couple of your points, I do agree
> > my first attempt was needlessly complex. I'm not even sure Phase
> > 4 in the new plan below is really needed. The table level imports
> > will enable table triggers once the data and indexes are done.
> > The exp documentation doesn't say whether TRIGGERS=n excludes
> > only table triggers or all triggers. Unless someone knows for
> > sure, it should be simple to check once I start testing.
> >
> >
> > Phase 1 - a. Precreate Tables
> > b. Get Constraints/Triggers not in first exp/imp
> > =================================================================
> > exp FULL=y ROWS=n INDEXES=n CONSTRAINTS=n TRIGGERS=n| \
> > imp FULL=y ROWS=n
> >
> > exp FILE=postActions.dmp FULL=y ROWS=n INDEXES=n CONSTRAINTS=y \
> > TRIGGERS=y
> >
> > Phase 2 and 3 will use a script to generate the table list and
> > then spawn N processes to process the list in parallel.
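The fan-out for Phases 2 and 3 could be sketched with xargs (the table list and worker command are illustrative; in a real run the echo would be replaced by a hypothetical per-table wrapper around the exp|imp pipe shown below):

```shell
#!/bin/sh
# Sketch: process a table list with at most N workers at a time.
# The sample table names are illustrative, and echo stands in for a
# per-table migration wrapper script (exp ... | imp ...).
N=4
printf 'SCOTT.EMP\nSCOTT.DEPT\nSCOTT.BONUS\n' > tables.txt
: > done.log
xargs -n 1 -P $N -I {} sh -c 'echo "migrating {}" >> done.log' < tables.txt
```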
> >
> > Phase 2 - All Tables and Indexes
> > =================================================================
> > exp TABLES=(schemaname.tablename:partitionname) ROWS=y \
> > INDEXES=y CONSTRAINTS=n|imp FROMUSER=schemaname \
> > TABLES=(tablename) IGNORE=y
> >
> > Phase 3 - All Table Constraints
> > =================================================================
> > imp FILE=postActions.dmp ROWS=n INDEXES=n CONSTRAINTS=Y
> > FROMUSER=schemaname TABLES=(tablename) IGNORE=y
> >
> > Phase 4 - Non-Table Triggers (anything else missed)
> > =================================================================
> > imp FILE=postActions.dmp FULL=y ROWS=n INDEXES=y CONSTRAINTS=y \
> > TRIGGERS=y IGNORE=y
> >
> > "koert54" <nospam_at_spam.com> wrote in message
> > news:<Q4f6c.10918688$Id.1831959_at_news.easynews.com>...
> > > I think you make it too complex - deal with the data & indexes first -
> > > do everything else last
> > > I would :
> > > - precreate users/tables
> > > - parallel import data (just the data)
> > > - break up indexfile -> parallel create indexes
> > > - full import (no data, no index, all the rest - to enable constraints,
> > > grants, views, etc)
> > >
> > > maybe http://pepi.sourceforge.net is something for you ?
> > >
Received on Sun Mar 21 2004 - 09:23:43 CST
