RE: Migrating 9i to 10g performance issues

From: Hand, Michael T <HANDM_at_polaroid.com>
Date: Tue, 25 Mar 2008 11:21:47 -0400
Message-ID: <96FB7D6574516C45922A76B354B684FD045120ED@USHQEXVS.polanet.polaroid.com>


Andrew,  

Just to clarify, do you export the statistics in this scenario, or recreate them afterwards?
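
(For context, I'm weighing the two obvious routes. A minimal DBMS_STATS sketch of what I mean -- the schema name APP and the staging table STATS_STAGE are just placeholders:)

    -- on the source 9i database: stage and export the schema statistics
    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_STAGE');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');
    -- move STATS_STAGE across with the rest of the export, then on the 10g target:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');
    -- ...or simply recreate them afterwards:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP', cascade => TRUE);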

Mike Hand


        From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Andrew Kerber

	Sent: Tuesday, March 25, 2008 10:20 AM
	To: Howard Latham
	Cc: sbecker6925_at_gmail.com; oracle-l
	Subject: Re: Migrating 9i to 10g performance issues
	
	
	I have found that large table exports are much easier than imports: set direct=y, do the export on the local server, then copy the export file to your destination server to do the import. Also, if you build the indexes as a separate import job it works better.

        For best performance, here is what I found works well. Do all of this with multiple streams:

	Export the tables with rows=n to get your table definitions.
	Export with rows=y, direct=y to get the data.
	Import the table definitions (rows=n, constraints=n, indexes=n).
	Import the tables with rows=y, constraints=n, indexes=n.
	Then import again with rows=n, indexes=y, constraints=y.
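
	Roughly, as command lines, that looks like this (user/password, file names, and the APP schema are placeholders -- adjust to your own environment and exp/imp release):

	exp system/manager owner=APP file=app_ddl.dmp  rows=n
	exp system/manager owner=APP file=app_data.dmp rows=y direct=y

	imp system/manager fromuser=APP touser=APP file=app_ddl.dmp  rows=n constraints=n indexes=n
	imp system/manager fromuser=APP touser=APP file=app_data.dmp rows=y constraints=n indexes=n ignore=y
	imp system/manager fromuser=APP touser=APP file=app_ddl.dmp  rows=n constraints=y indexes=y ignore=y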
	
	
	On Tue, Mar 25, 2008 at 9:15 AM, Howard Latham <howard.latham_at_gmail.com> wrote:
        

                I found that exp is difficult with large tables. Also, exp/imp means you have to rebuild the indexes - that ain't half slow - even for a small DB!

                On 25/03/2008, Andrew Kerber <andrew.kerber_at_gmail.com> wrote:

                        It can be done with export/import. If you use export/import, use multiple streams and disable all constraints before the import. Enable the constraints when all imports are complete. That includes primary key constraints. It takes planning and practice to use that method, but it will work. The advantage is that most DBAs have a good understanding of exp/imp.
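
                        Something along these lines generates the disable/enable statements (the APP schema is a placeholder; disable the foreign keys before the primary keys, and enable them in the reverse order):

                        -- generate the DISABLE statements (FKs first, then PK/UK)
                        SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
                               ' DISABLE CONSTRAINT ' || constraint_name || ';'
                          FROM dba_constraints
                         WHERE owner = 'APP'
                           AND constraint_type IN ('R', 'P', 'U')
                         ORDER BY DECODE(constraint_type, 'R', 1, 2);

                        -- once every import is done, generate the ENABLE statements
                        -- (PK/UK first, then the FKs that reference them)
                        SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
                               ' ENABLE CONSTRAINT ' || constraint_name || ';'
                          FROM dba_constraints
                         WHERE owner = 'APP'
                           AND constraint_type IN ('R', 'P', 'U')
                         ORDER BY DECODE(constraint_type, 'R', 2, 1);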

                        Data Pump will also work; it can run in parallel mode, but it has some bugs in 10g.
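
                        For example, something like this (directory object, dump file names, and schema are placeholders; parallel workers each need their own dump file, hence the %U template):

                        expdp system/manager schemas=APP directory=DATA_PUMP_DIR dumpfile=app_%U.dmp logfile=app_exp.log parallel=4
                        impdp system/manager schemas=APP directory=DATA_PUMP_DIR dumpfile=app_%U.dmp logfile=app_imp.log parallel=4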

                        If you have BLOBs/CLOBs they will be your single largest time consumer. If you use export/import, do not commit until all the BLOBs/CLOBs are imported. I have done this a few times, and at one point I wrote something for a mass delete using an export (basically, export the rows to keep, truncate the table, and import). It was nice since it rebuilt the table and indexes at the same time it did the export/import.
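
                        In imp terms, that means leaving commit at its default of commit=n (one commit per table) rather than commit=y, which with LOB columns ends up committing row by row. A sketch, placeholders as before:

                        imp system/manager fromuser=APP touser=APP file=app_data.dmp rows=y commit=n buffer=10485760 constraints=n indexes=n ignore=y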

                        On Tue, Mar 25, 2008 at 6:56 AM, Sandra Becker <sbecker6925_at_gmail.com> wrote:
                        

                                Source DB: Dell server, RHEL4, 12G RAM, Oracle 64 bit 9.2.0.8 SE1, little endian

                                Target DB: IBM VM on series z9, SLES10, 4G RAM, Oracle 10.2.0.3 EE, big endian

                                Database is just shy of 1 terabyte in size--70 percent of data is in a single table; total of 212 tables.

                                12 very large tables, including the monster--most have no column, like a static date, that can be used to logically break the data into "partitions".

                                I'm working on procedures to move our production database from the Dell to the IBM. My tests so far indicate that I'm going to need more than 24 hours. Management, in their infinite wisdom, is insisting that it be done in less than 8. It will take as long as it takes, but I'm wondering what I can do to speed things up. So far I've done the following:                                  

  1. exp/imp - too slow overall
  2. PL/SQL that commits at regular intervals, depending on the size of the table - works very well for all tables under 1M; can load 111 tables in under 2 hours using 2 concurrent sessions. Works for larger tables, but obviously takes much longer. I had 2 sessions doing tables under 1M and 2 doing tables between 1M and 100M concurrently. Didn't try the 12 tables over 100M.
  3. Direct-path insert - used on the table holding 70 percent of the data. Four months ago I insisted this table have a static date column added, so I can logically break the data loads down by date--they want the most current data loaded first; the remainder can be done over a period of days. This is working reasonably well, but having done this same thing once before on this table, I know it will take about a month to get all the data moved given the constraints I'm working under--can't be done during core business hours, etc. (A rough sketch of this load follows the list.)
  4. I put the target database in noarchivelog mode for my testing. Is this a wise move for migrating production during the go live?
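
                                For what it's worth, the direct-path loads look roughly like this (the database link SOURCE_9I, the table BIG_TABLE, and its LOAD_DATE column are placeholder names; each APPEND insert needs its own commit before the next one can touch the table):

                                -- load the monster table one day at a time, newest first
                                DECLARE
                                  v_day DATE := TRUNC(SYSDATE);          -- most current data first
                                BEGIN
                                  WHILE v_day >= DATE '2008-01-01' LOOP  -- placeholder cutoff date
                                    INSERT /*+ APPEND */ INTO big_table
                                      SELECT * FROM big_table@source_9i
                                       WHERE load_date >= v_day
                                         AND load_date <  v_day + 1;
                                    COMMIT;   -- required before the next direct-path insert
                                    v_day := v_day - 1;
                                  END LOOP;
                                END;
                                /

                                With the target in noarchivelog mode the direct-path inserts should generate very little redo, which is the main reason the big loads are tolerable at all.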

                                Management has suggested that I leave off old data and load it later. That doesn't work with 95 percent of the tables because of their structure and foreign key constraints. They also suggested I use both the primary and the standby databases to read from. No way to test this until I go live--constraints again--although this actually was part of my plan from the beginning. Will too many concurrent sessions loading data slow things down too much? What would I look at to determine this? 10g is new to me, so I'm not familiar with all the features yet and may be missing something significant.

                                Any suggestions are appreciated, other than telling management what they can do with their time constraint. (Already did that.)                                  

                                Thanks.                                  

                                Sandy

			-- 
			Andrew W. Kerber
			
			'If at first you don't succeed, don't take up skydiving.'
		-- 
		Howard A. Latham
		




	-- 
	Andrew W. Kerber
	
	'If at first you don't succeed, don't take up skydiving.'


--
http://www.freelists.org/webpage/oracle-l
Received on Tue Mar 25 2008 - 10:21:47 CDT
