RE: Migrating 9i to 10g performance issues

From: Hand, Michael T
Date: Tue, 25 Mar 2008 11:21:47 -0400


Just to clarify: do you export statistics in this scenario, or recreate them afterwards?

Mike Hand

	From: Andrew Kerber
	Sent: Tuesday, March 25, 2008 10:20 AM
	To: Howard Latham
	Cc: oracle-l
	Subject: Re: Migrating 9i to 10g performance issues
	I have found that large table exports are much easier than imports: set direct=y, do the export on the local server, then copy the dump file to your destination server for the import. Also, building the indexes as a separate import job works better.

        For best performance, here is the sequence I found works best; do all of it with multiple streams (a sketch follows the list):

	1. Export the tables (rows=n) to get your table definitions.
	2. Export with rows=y, direct=y to get the data.
	3. Import the table definitions (rows=n, constraints=n, indexes=n).
	4. Import the data (rows=y, constraints=n, indexes=n).
	5. Import again with rows=n, indexes=y, constraints=y.
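
	A minimal sketch of those five steps as classic exp/imp commands, assuming a hypothetical SCOTT schema, SYSTEM credentials, and the dump file names shown (all of those are assumptions, not from the thread):

	    # 1. definitions only, no rows
	    exp userid=system/manager owner=scott rows=n file=defs.dmp log=defs_exp.log
	    # 2. the data, direct path
	    exp userid=system/manager owner=scott rows=y direct=y file=data.dmp log=data_exp.log
	    # 3. create the tables without rows, constraints, or indexes
	    imp userid=system/manager fromuser=scott touser=scott rows=n constraints=n indexes=n file=defs.dmp log=defs_imp.log
	    # 4. load the rows
	    imp userid=system/manager fromuser=scott touser=scott rows=y constraints=n indexes=n ignore=y file=data.dmp log=data_imp.log
	    # 5. build indexes and enable constraints last
	    imp userid=system/manager fromuser=scott touser=scott rows=n indexes=y constraints=y ignore=y file=defs.dmp log=cons_imp.log

	"Multiple streams" here just means running several such export/import pairs concurrently, each scoped to its own tables= list.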
	On Tue, Mar 25, 2008 at 9:15 AM, Howard Latham wrote:

                I found that large tables and exp are difficult. Also, exp/imp means you have to rebuild the indexes - that ain't 'arf slow, even for a small db!

                On 25/03/2008, Andrew Kerber wrote:

                        It can be done with export/import. If you use export/import, use multiple streams, and disable all constraints before the import; enable the constraints when all imports are complete. That includes primary key constraints. It takes planning and practice to use that method, but it will work. The advantage is that most DBAs have a good understanding of exp/imp.
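
                        A hedged sketch of the disable step, generating the DDL from dba_constraints in SQL*Plus (the schema name and spool file name are assumptions):

                            sqlplus -s system/manager <<'EOF'
                            set pagesize 0 feedback off
                            spool disable_cons.sql
                            -- disable foreign keys first, then primary/unique keys
                            select 'alter table '||owner||'.'||table_name||
                                   ' disable constraint '||constraint_name||';'
                              from dba_constraints
                             where owner = 'SCOTT'
                             order by decode(constraint_type, 'R', 0, 1);
                            spool off
                            @disable_cons.sql
                            EOF

                        Enabling afterwards is the same query with "enable" and the decode reversed, so primary/unique keys come back before the foreign keys that reference them.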

                        Datapump will also work. Datapump will run in parallel mode, but it has some bugs in 10g.                         
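
                        For comparison, a Data Pump run with parallel workers might look like the sketch below; the directory object and the degree of 4 are assumptions, PARALLEL wants a %U wildcard in the dump file name so each worker writes its own file, and as far as I know a degree above 1 needs Enterprise Edition:

                            expdp system/manager schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp parallel=4 logfile=scott_exp.log
                            impdp system/manager schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp parallel=4 logfile=scott_imp.log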

                        If you have blobs/clobs, they will be your single largest time consumer. If you use export/import, do not commit until all the blobs/clobs are imported. I have done this a few times, and at one point I wrote something for a mass delete using an export (basically, export the rows to keep, truncate the table, and import). It was nice since it rebuilt the table and indexes at the same time it did the export/import.
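
                        In classic imp, if I recall the defaults correctly, that commit behavior corresponds to leaving COMMIT at its default of N, e.g.:

                            # commit=n (the default) commits once per table;
                            # commit=y would commit per array insert, and per row for LOB columns
                            imp userid=system/manager fromuser=scott touser=scott file=data.dmp commit=n ignore=y log=lob_imp.log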

                        On Tue, Mar 25, 2008 at 6:56 AM, Sandra Becker wrote:

                                Source DB: Dell server, RHEL4, 12G RAM, Oracle 64 bit SE1, little endian

                                Target DB: IBM VM on series z9, SLES10, 4G RAM, Oracle EE, big endian

                                Database is just shy of 1 terabyte in size--70 percent of data is in a single table; total of 212 tables.

                                12 very large tables, including the monster--most have no column, like a static date, that can be used to logically break the data into "partitions".

                                I'm working on procedures to move our production database from the Dell to the IBM. My tests so far indicate that I'm going to need more than 24 hours. Management, in their infinite wisdom, is insisting that it be done in less than 8. It will take as long as it takes, but I'm wondering what I can do to speed things up. So far I've done the following:                                  

  1. exp/imp - too slow overall
  2. PL/SQL that commits at regular intervals, depending on the size of the table - works very well for all tables under 1M; can load 111 tables in under 2 hours using 2 concurrent sessions. Works for larger tables, but obviously takes much longer. I had 2 sessions doing tables under 1M and 2 doing tables between 1M and 100M concurrently. Didn't try it for the 12 tables over 100M. (Both this and item 3 are sketched after the list.)
  3. Direct-path insert - used on the table holding 70 percent of the data. Four months ago I insisted this table have a static date column added, so I can logically break the data loads down by date--they want the most current data loaded first, and the remainder can be done over a period of days. This is working reasonably well, but having done this same thing once before on this table, I know it will take about a month to move all the data under the constraints I'm working with--can't be done during core business hours, etc.
  4. I put the target database in noarchivelog mode for my testing. Is this a wise move when migrating production during go-live?
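
  For items 2 and 3, minimal sketches of the interval-commit loop and the date-sliced direct-path insert, assuming a database link named SRC_LINK and a table BIGTAB with the static LOAD_DATE column (link, schema, table, column, and batch size are all assumptions):

      sqlplus -s system/manager <<'EOF'
      -- item 2: pull rows over the link, committing every 10,000
      declare
        n pls_integer := 0;
      begin
        for r in (select * from scott.bigtab@src_link) loop
          insert into scott.bigtab values r;
          n := n + 1;
          if mod(n, 10000) = 0 then commit; end if;
        end loop;
        commit;
      end;
      /
      -- item 3: direct-path load of a single date slice
      insert /*+ append */ into scott.bigtab
        select * from scott.bigtab@src_link
         where load_date = date '2008-03-01';
      commit;
      EOF

  Note that direct-path (APPEND) inserts take an exclusive lock on the target table, so concurrent slices into the same table will serialize; concurrent sessions buy the most when they load different tables.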

                                Management has suggested that I leave off old data and load it later. That doesn't work for 95 percent of the tables because of their structure and foreign key constraints. They also suggested I read from both the primary and the standby databases. No way to test this until I go live--constraints again--although this actually was part of my plan from the beginning. Will too many concurrent sessions loading data slow things down too much? What would I look at to determine this? 10g is new to me, so I'm not familiar with all the features yet and may be missing something significant.

                                Any suggestions are appreciated, other than telling management what they can do with their time constraint. (Already did that.)                                  




	Andrew W. Kerber
	'If at first you don't succeed, don't take up skydiving.'

Received on Tue Mar 25 2008 - 10:21:47 CDT
