Re: Minimizing downtime for 9i to 10g upgrade

From: Chuck <chuckh1958_nospam_at_gmail.com>
Date: Fri, 12 Sep 2008 16:41:12 GMT
Message-ID: <IYwyk.17$1a2.5@trnddc04>


DA Morgan wrote:
> Chuck wrote:

>> I need to upgrade a 9i database to 10g but the business unit cannot
>> tolerate any downtime. I've advised them the cleanest way is to upgrade
>> the db in place which takes about 90 minutes. They are uncomfortable
>> with that much down time. An exp/imp to a new 10g database would take
>> even longer.

>
> Why? You can do TB systems in seconds.
>
> First set up your 10g database.
> Then copy over all of the rows except for the last 24 hours.
>
> Then copy over all of the information not previously copied except for
> the last hour.
>
> Repeat for everything except the previous 2-5 minutes.
>
> Then switch them over. That switchover is the few seconds of downtime.
>
> Then bring in that last five minutes worth of transactions which likely
> will never be missed in the minute or two it will take to capture them.
>
> This technique is aided by the use of partitioning and I highly
> recommend several practice runs to perfect your technique.
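If I'm reading the steps right, each pass would be a timestamp-filtered copy over a database link. Something like this, per table (the ORDERS table, the LAST_UPDATED column, and the DB10G link are all made up here, just to illustrate what I think you mean):

```sql
-- Hypothetical sketch of one incremental pass, assuming the table
-- has a reliable LAST_UPDATED timestamp and DB10G is a database
-- link pointing at the new 10g instance.
MERGE INTO orders@db10g t
USING (SELECT order_id, amount, last_updated
       FROM   orders
       WHERE  last_updated >= SYSDATE - 1/24) s  -- rows changed in the last hour
ON (t.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET t.amount = s.amount, t.last_updated = s.last_updated
WHEN NOT MATCHED THEN
  INSERT (order_id, amount, last_updated)
  VALUES (s.order_id, s.amount, s.last_updated);
```

That only works if every table carries a trustworthy timestamp column, which ours don't.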

I think by "TB" you must mean "terabyte". At first I thought you were talking about some software product from a company called "TB Systems" :-).

How do you automate the copy of rows to know which ones were updated in previous hour, 2-5 minutes, etc.? Are you suggesting converting every table in the database to a materialized view, with an mview log, and refreshing periodically? Is this supported across different major releases?
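In other words, is the setup you have in mind something like this? (Again, the ORDERS table and the DB9I link are hypothetical; this is just my guess at the mechanics.)

```sql
-- On the 9i source: a log to capture changed rows for fast refresh.
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On the 10g target: one mview per table, pulling across a db link.
CREATE MATERIALIZED VIEW orders
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders@db9i;

-- Periodic fast refreshes then apply only the logged changes.
EXEC DBMS_MVIEW.REFRESH('ORDERS', 'F');
```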

I would think whatever you're suggesting needs to be SQL-based, as I don't believe log shipping is supported between major releases.

Can you please clarify? Thanks.