Re: Fwd: RE: convert big endian to medium endian

From: Tanel Poder <tanel_at_tanelpoder.com>
Date: Sat, 8 Feb 2020 17:32:42 -0500
Message-ID: <CAMHX9JKRG_vKaZqOOejB5Ybung=gkpv0KkgizG2wBgfKUxfQMA_at_mail.gmail.com>



If the metadata export takes most of the time, you can do the export in advance, ensure that no schema (or other metadata) changes happen after you've started your metadata export, and then load the data separately later on.

You'd need to worry about metadata that changes implicitly with your application workload - like increasing sequence numbers. The sequences can be altered just before the switchover so that they continue from where they left off in the source.

You can even copy historical partitions (that don't change) over way before the final migration to cut down the time further. But if XTTS speed is enough for you for bulk data transfer, then no need to complicate the process.

Years (and decades) ago, when there was no XTTS and even no RMAN, this is how you performed large migrations. Copy everything you can over in advance and only move/refresh what you absolutely need to during the downtime.

--
Tanel Poder
https://tanelpoder.com/seminar


On Sat, Feb 8, 2020 at 10:38 AM Ahmed Fikri <gherrami_at_gmail.com> wrote:


> Hello Jonathan,
>
> sorry for the confusion.
>
> the production db is about 16 TB (11.2.0.4 on AIX) and has about 4.5 million
> partitions, with roughly 1000 new partitions added every day. The metadata
> export takes 3 days and 4 hours. The DBA, who I think is very experienced, had
> already found out that the export is slow because of a known bug in 11.2.0.4
> (he sent me the MOS ID and mentioned that the problem is related to an
> x$k... view - on Monday I will send the exact information). Unfortunately I
> hadn't shown much interest in this information (big mistake), because I
> thought the problem was caused by our application design (and this is
> only my opinion as a developer), and I also thought it should somehow be
> possible to convert the whole db without having to use
> Data Pump for the metadata (in theory I think it is possible - but as
> I realize now, it is tough in practice).
>
> And to check my assumption that we can convert all db datafiles (the
> datafiles containing the metadata too) using C or C++, I am using a 10 GB test DB.
>
> Thanks and regards
> Ahmed
>
-- http://www.freelists.org/webpage/oracle-l
Received on Sat Feb 08 2020 - 23:32:42 CET
