Re: Fwd: RE: convert big endian to medium endian

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 9 Feb 2020 16:00:06 +0000
Message-ID: <LNXP265MB15622490E65A42BF0FBABE2FA51E0_at_LNXP265MB1562.GBRP265.PROD.OUTLOOK.COM>


I think a lot of people would say that "expdp is slow" (meaning "much slower than it reasonably ought to be") IS a bug.

Be cautious about pinning your hopes on an upgrade. The document reference I posted reports the following for EXPDP for 12.2.0.1 and 18.3.0.0:

Unpublished Bug 26736110 - DATAPUMP METADATA EXPORT IS SLOW FOR INDEXES WITH A HIGH PARTITION COUNT

However it also reports

 MLR Patch 30498473 released on top of 12.2.0.1 contains the fixes for the bugs:

     26736110 ....

So you may be okay so long as you're also able to apply that patch. The patch matrix also reports patches for various releases of 18 up to 18.9, and there's a fix built into 19.1.

Regards
Jonathan Lewis



From: Ahmed Fikri <gherrami_at_gmail.com>
Sent: 09 February 2020 14:46
To: Mark J. Bobak
Cc: Tanel Poder; Jonathan Lewis; ORACLE-L
Subject: Re: Fwd: RE: convert big endian to medium endian

:-)
At the beginning I was not convinced that the export was slow because of a bug; I thought it was because of the large number of partitions. (That's why I ruled out the idea that an upgrade of the source would help, and completely forgot this option.) Thanks for this simple idea. I will see if we can realize it.

On Sun, 9 Feb 2020 at 15:04, Mark J. Bobak <mark_at_bobak.net> wrote:

Getting into this late, but you're going from 11g on AIX to 12c on Linux, correct? So, to avoid the 11g bug, couldn't you upgrade the source database to 12c first, then do the export? Presuming that bug is fixed in 12c?

-Mark

On Sun, Feb 9, 2020, 06:57 Ahmed Fikri <gherrami_at_gmail.com> wrote:

I think what would be problematic for us is the import of the metadata (I have no idea how long it will take, but given the export time, I expect that it will take a long time).

But I think the idea is also a good option for us, and we will find a way to synchronize both DBs after the migration.

I will report which option we have chosen and how the migration went (if we do it).

Thanks and regards
Ahmed

On Sat, 8 Feb 2020 at 23:32, Tanel Poder <tanel_at_tanelpoder.com> wrote:

If the metadata export takes most of the time, you can do the export in advance, ensure that no schema (and other metadata) changes happen after you've started your metadata export, and later on load the data separately.

You'd need to worry about metadata that changes implicitly with your application workload - like sequence numbers increasing. The sequences can be altered just before switchover to continue from where they left off in the source.

You can even copy historical partitions (that don't change) over way before the final migration to cut down the time further. But if XTTS speed is enough for you for bulk data transfer, then no need to complicate the process.

Years (and decades) ago, when there was no XTTS and even no RMAN, this is how you performed large migrations. Copy everything you can over in advance and only move/refresh what you absolutely need to during the downtime.

--

Tanel Poder
https://tanelpoder.com/seminar

On Sat, Feb 8, 2020 at 10:38 AM Ahmed Fikri <gherrami_at_gmail.com> wrote:

Hello Jonathan,

Sorry for the confusion.

The production DB is about 16 TB (11.2.0.4 on AIX) and has about 4.5 million partitions, with about 1,000 new partitions added every day. The metadata export takes 3 days and 4 hours. The DBA, who I think is very experienced, had already found out that the export is slow because of a known bug in 11.2.0.4 (he sent me the MOS ID and mentioned that the problem is related to an x$k... view; on Monday I will send the exact information). Unfortunately I didn't show much interest in this information (big mistake), because I thought the problem came from our application design (and this is only my opinion as a developer), and also because I thought it should somehow be possible to convert the whole DB without needing to use Data Pump for the metadata (in theory I think it is possible, but as I now realize, in practice it is tough).

And to check my assumption that we can convert all DB datafiles (including the datafiles for the metadata) using C or C++, I am using a 10 GB DB.
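For illustration, a minimal sketch of the byte-order swapping such a C converter would have to apply to every fixed-width integer field; the helper names and the sample field below are hypothetical, and the actual layout of Oracle datafile blocks (which byte ranges are multi-byte fields and which are raw data) is not shown here.

#include <stdint.h>
#include <stdio.h>

/* Read a 4-byte big-endian field from a buffer into host order.
   Works regardless of the endianness of the machine running it. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

/* Write a 32-bit value back into a buffer in little-endian order. */
static void write_le32(unsigned char *p, uint32_t v)
{
    p[0] = (unsigned char)( v        & 0xFF);
    p[1] = (unsigned char)((v >>  8) & 0xFF);
    p[2] = (unsigned char)((v >> 16) & 0xFF);
    p[3] = (unsigned char)((v >> 24) & 0xFF);
}

int main(void)
{
    /* Hypothetical 4-byte big-endian field holding the value 2604. */
    unsigned char field[4] = { 0x00, 0x00, 0x0A, 0x2C };

    uint32_t v = read_be32(field);   /* v == 2604 */
    write_le32(field, v);            /* field is now 2C 0A 00 00 */

    printf("value = %u, bytes = %02X %02X %02X %02X\n",
           v, field[0], field[1], field[2], field[3]);
    return 0;
}

Swapping the fields themselves is the easy part; knowing where those fields sit inside each block, and leaving the raw data alone, is the hard part.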

Thanks and regards
Ahmed

--

http://www.freelists.org/webpage/oracle-l

Received on Sun Feb 09 2020 - 17:00:06 CET
