Re: Fwd: RE: convert big endian to medium endian

From: Ahmed Fikri <gherrami_at_gmail.com>
Date: Sun, 9 Feb 2020 12:55:45 +0100
Message-ID: <CANkb5P22UcrOBfaPNMTHvhHqkLciT7yK2DM7SZFAm-nD9k3-5w_at_mail.gmail.com>



 I think the problematic part for us would be the metadata import (I have no idea how long it will take, but given the export time, I expect it to take a long time).

Still, I think this idea is a good option for us; we will find a way to synchronize both DBs after the migration.

I will report back on which option we chose and how the migration went (if we do it).

Thanks and regards
Ahmed

On Sat, Feb 8, 2020 at 23:32, Tanel Poder <tanel_at_tanelpoder.com> wrote:

> If the metadata export takes most of the time, you can do the export in
> advance, ensure that no schema (and other metadata) changes happen after
> you've started your metadata export and later on load the data separately.
>
> You'd need to worry about metadata that changes implicitly with your
> application workload - like sequence numbers increasing. The sequences can
> be altered just before switchover to continue from where they left off in
> the source.
>
> You can even copy historical partitions (that don't change) over way
> before the final migration to cut down the time further. But if XTTS speed
> is enough for you for bulk data transfer, then no need to complicate the
> process.
>
> Years (and decades) ago, when there was no XTTS and even no RMAN, this is
> how you performed large migrations. Copy everything you can over in advance
> and only move/refresh what you absolutely need to during the downtime.
>
> --
> Tanel Poder
> https://tanelpoder.com/seminar
>
>
> On Sat, Feb 8, 2020 at 10:38 AM Ahmed Fikri <gherrami_at_gmail.com> wrote:
>
>> Hello Jonathan,
>>
>> sorry for the confusion.
>>
>> The production DB is about 16 TB (11.2.0.4 on AIX) and has about 4.5
>> million partitions, with roughly 1,000 new partitions added every day. The
>> metadata export takes 3 days and 4 hours. The DBA, who I think is very
>> experienced, had already found out that the export is slow because of a
>> known bug in 11.2.0.4 (he sent me the MOS ID and mentioned that the
>> problem is related to an x$k... view - on Monday I will send the exact
>> information). Unfortunately I didn't show much interest in this
>> information (big mistake), because I thought the problem was due to our
>> application design (though that is only my opinion as a developer), and
>> also because I thought it should somehow be possible to convert the whole
>> DB without using Data Pump for the metadata (in theory I think it is
>> possible - but as I now realize, in practice it is tough).
>>
>> And to check my assumption that we can convert all DB datafiles
>> (including the datafiles for the metadata) using C or C++, I am using a
>> 10 GB test DB.
>>
>> Thanks and regards
>> Ahmed
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Feb 09 2020 - 12:55:45 CET