Re: Minimizing downtime for 9i to 10g upgrade

From: Chuck <>
Date: Mon, 15 Sep 2008 15:16:10 GMT
Message-ID: <__uzk.265$8v5.245@trnddc01>

DA Morgan wrote:

> Chuck wrote:

>> DA Morgan wrote:
>>> Chuck wrote:
>>>> I need to upgrade a 9i database to 10g but the business unit cannot
>>>> tolerate any downtime. I've advised them the cleanest way is to upgrade
>>>> the db in place which takes about 90 minutes. They are uncomfortable
>>>> with that much down time. An exp/imp to a new 10g database would take
>>>> even longer.
>>> Why? You can do TB-sized systems in seconds.
>>> First set up your 10g database.
>>> Then copy over all of the rows except for the last 24 hours.
>>> Then copy over all of the information not previously copied except for
>>> the last hour.
>>> Repeat for everything except the previous 2-5 minutes.
>>> Then switch them over; this is where the few seconds of downtime occur.
>>> Then bring in that last few minutes' worth of transactions, which will
>>> likely never be missed in the minute or two it takes to capture them.
>>> This technique is aided by the use of partitioning and I highly
>>> recommend several practice runs to perfect your technique.
>> I think by "TB" you must mean "terabyte". At first I thought you were
>> talking about some software product by a company called "TB Systems" :-).
>> How do you automate the copy of rows to know which ones were updated in
>> previous hour, 2-5 minutes, etc.? Are you suggesting converting every
>> table in the database to a materialized view, with an mview log, and
>> refreshing periodically? Is this supported across different major
>> releases?
>> I would think whatever you're talking about needs to be SQL-based, as I
>> don't believe log shipping is supported between major releases.
>> Can you please clarify? Thanks.
> One way is to add a column to each table. Not pretty. Another is the
> temporary use partitioning since the database will be blown away in
> a week or less. A third way is with DBMS_WM (workspace management).
> Give me a minute or two and I can likely come up with a few more.
> Again it all depends on what is going on, the size, dependencies, etc.
> None of which were stated by the OP.
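The staged copy described above could be sketched with a timestamp column and a database link. This is only a sketch under loud assumptions: every table has (or temporarily gets) a LAST_MODIFIED column, the link name OLD9I and the table/column names are hypothetical, and updated rows would really need a MERGE rather than a plain INSERT.

```sql
-- Pass 1: bulk-copy everything older than 24 hours from the 9i source.
INSERT /*+ APPEND */ INTO orders
SELECT * FROM orders@old9i
WHERE  last_modified < SYSDATE - 1;
COMMIT;

-- Pass 2 (later): rows changed since pass 1, except the last hour.
-- NOT EXISTS skips rows already copied; true updates need a MERGE instead.
INSERT INTO orders
SELECT * FROM orders@old9i o
WHERE  o.last_modified >= SYSDATE - 1
AND    o.last_modified <  SYSDATE - 1/24
AND    NOT EXISTS (SELECT 1 FROM orders n WHERE n.order_id = o.order_id);
COMMIT;

-- Final pass during the brief cutover: only the last few minutes remain.
INSERT INTO orders
SELECT * FROM orders@old9i o
WHERE  o.last_modified >= SYSDATE - 1/24
AND    NOT EXISTS (SELECT 1 FROM orders n WHERE n.order_id = o.order_id);
COMMIT;
```

Each pass shrinks the window of uncopied data, so the final pass during cutover touches only minutes of changes.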

I am the OP, and that sounds pretty much like what I expected: there is no magic-bullet solution. Whatever is done to minimize downtime must be done above the RDBMS layer.

The database is not extremely large - perhaps 35 GB. However, there are approximately 500 tables, 1,000 indexes, 25 LOBs, and 100 sequences, but the real pain is the 300+ foreign key constraints.
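For what it's worth, the foreign-key pain during an exp/imp can be reduced by disabling the constraints up front so they don't dictate load order, then re-enabling afterwards. A minimal sketch that generates the DDL from the dictionary (the schema name APP_OWNER is hypothetical):

```sql
-- Spool DISABLE statements for every referential constraint in the schema;
-- run the spooled file before the import, and a matching ENABLE script after.
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 200
SPOOL disable_fks.sql
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';'
FROM   dba_constraints
WHERE  constraint_type = 'R'          -- 'R' = referential (foreign key)
AND    owner = 'APP_OWNER';
SPOOL OFF
```

Re-enabling 300+ constraints with validation takes time, so some shops re-enable with NOVALIDATE when they trust the source data.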

I'm not sure the effort and risk are worth saving perhaps 30 minutes of downtime.

Are transportable tablespaces compatible from 9i to 10g (same O/S and hardware)?
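As I understand it, transporting a tablespace from 9i into a 10g database on the same platform is supported (it's cross-platform transport that requires a 10g source). A rough sketch of the steps, with a hypothetical tablespace APP_DATA; this would still leave sequences, PL/SQL, and other non-segment objects to move separately:

```sql
-- On the 9i source: verify the tablespace set is self-contained,
-- then make it read-only for the copy.
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA', TRUE);
SELECT * FROM transport_set_violations;   -- must return no rows
ALTER TABLESPACE app_data READ ONLY;

-- From the OS shell on the 9i source, export only the metadata:
--   exp userid='/ as sysdba' transport_tablespace=y tablespaces=app_data file=tts.dmp
-- Copy tts.dmp and the datafiles to the 10g host, then plug in:
--   imp userid='/ as sysdba' transport_tablespace=y file=tts.dmp \
--       datafiles='/u01/oradata/app_data01.dbf'

-- On the 10g target, once plugged in:
ALTER TABLESPACE app_data READ WRITE;
```

The tablespace stays read-only only for the metadata export and file copy, which may beat 90 minutes of full downtime.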
