Re: Fwd: RE: convert big endian to medium endian
From: Tanel Poder <tanel_at_tanelpoder.com>
Date: Sat, 8 Feb 2020 17:32:42 -0500
Message-ID: <CAMHX9JKRG_vKaZqOOejB5Ybung=gkpv0KkgizG2wBgfKUxfQMA_at_mail.gmail.com>
--
Tanel Poder
https://tanelpoder.com/seminar

Received on Sat Feb 08 2020 - 23:32:42 CET

On Sat, Feb 8, 2020 at 10:38 AM Ahmed Fikri <gherrami_at_gmail.com> wrote:
> Hello Jonathan,
>
> Sorry for the confusion.
>
> The production DB is about 16 TB (11.2.0.4 on AIX) and has about 4.5
> million partitions, with roughly 1,000 new partitions added every day.
> The metadata export takes 3 days and 4 hours. The DBA, who I think is
> very experienced, had already found out that the export is slow because
> of a known bug in 11.2.0.4 (he sent me the MOS note ID and mentioned
> that the problem is related to an x$k... view - on Monday I will send
> the exact information). Unfortunately I didn't show much interest in
> this information (big mistake), because I thought the problem was caused
> by our application design (though that is only my opinion as a
> developer), and I also thought it should somehow be possible to convert
> the whole DB without needing Data Pump for the metadata (in theory I
> think it is possible - but as I have now realized, it is tough in
> practice).
>
> And to check my assumption that we can convert all the DB's datafiles
> (including the datafiles holding the metadata) using C or C++, I am
> using a 10 GB database.
>
> Thanks and regards
> Ahmed
>
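[The core of the conversion Ahmed describes is byte-swapping fixed-width fields. As a rough illustration only - Oracle's actual datafile block layout mixes field widths and includes checksums, and the supported cross-endian migration path is RMAN CONVERT - a minimal C sketch of a 32-bit byte swap over a buffer might look like the following; swap32 and swap32_buffer are hypothetical helper names, not anything from Oracle:]

```c
#include <stdint.h>
#include <stddef.h>

/* Reverse the byte order of a single 32-bit value
 * (big-endian <-> little-endian). */
static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}

/* Byte-swap every aligned 32-bit word in a buffer in place.
 * 'words' is the number of 32-bit words, so the buffer length
 * in bytes must be a multiple of 4. */
static void swap32_buffer(uint32_t *buf, size_t words)
{
    for (size_t i = 0; i < words; i++)
        buf[i] = swap32(buf[i]);
}
```

[This naive whole-buffer swap would only be correct for a file that is uniformly an array of 32-bit big-endian words; real datafiles would need per-field handling driven by the block format.]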
-- http://www.freelists.org/webpage/oracle-l