Re: Manual update / replication on a live database?

From: tcp100 <tcp277_at_yahoo.com>
Date: 3 Jan 2007 09:49:22 -0800
Message-ID: <1167846562.289019.138040@42g2000cwt.googlegroups.com>


Ok folks. Let me see if I can explain this in no uncertain terms.

Hiring is not a possibility. Changing the requirements is not a possibility. Man, I wish I could do both.

I'm not going to get too deep into this, because, well, I can't. However, I imagine most of you folks work in private industry, where things such as hardware and human resources are somewhat "flexible".

If you couldn't tell from how cryptic I'm being already, we're dealing with a bureaucratic, governmental situation here.

There is no headroom and no ability to bend rules or hire people, at least not without a good year of red tape. Budgets are set, hardware is allocated, and security policies are in place; it would take acts and laws to change them. So again, hiring an expert to analyze the situation isn't an option, nor is installing connectivity or bending the uptime rules. We're trying to get a remote, 24/7, yet unconnected office running with some semblance of currency under these unfortunate constraints.

So, with that out of the way, I appreciate the help that has been given; I realize that from an outside glance, folks are saying "why in the world would you do things this way??" Trust me, we were scratching our heads pretty hard when we got this requirement.

Regardless, Mr. Hinsdale's option is complete, from what I see, and will keep us up. In the interim, "complex" as it may be, it makes sense for the situation. The only thing that I'd really like to know is whether there's a way to do this incrementally; right now the database size is manageable, but I'm guessing that it -could- grow unwieldy for this approach at some point soon, since this is a relatively new database.
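For what it's worth, the incremental route I keep circling back to is RMAN incremental backups hauled over on media and applied to a copy of the database at the remote end. This is only a rough sketch of what I have in mind, assuming 10g and a made-up staging path /stage whose contents travel on removable media:

    # connected side: one level 0 up front, then periodic level 1s into /stage
    rman target /
      BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/stage/lvl0_%U';
      BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/stage/lvl1_%U';

    # remote side: copy restored from the level 0, mounted (not open);
    # after each delivery, register the new pieces and roll the copy forward
    rman target /
      CATALOG START WITH '/stage/';
      RECOVER DATABASE NOREDO;

The obvious catch is that the remote copy can't be open while the incrementals apply (and hot level 1s would also want the matching archived logs before a clean open), so on its own this is a low-downtime idea rather than a zero-downtime one - but the pieces shipped each cycle would only be as big as what actually changed.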

John, I understand what you've explained about the hardware match. You make a good point - I cannot guarantee hardware will match on either side, so it sounds like the Data Guard idea, although it looks more straightforward, is not actually as simple an option once that's considered.

However -- and I'm not sure how this would work out -- I was considering running both sides in a VMware VM or a Solaris Container, which would at least give me some freedom in configuring the environment, provided the processors on both machines were relatively similar; that could be a workaround there.
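To be concrete about the container idea, I was picturing a minimal Solaris 10 zone on each box so the OS-level environment at least looks identical on both sides. Just a sketch; the zone name and paths are made up, and of course a zone does nothing to hide a CPU architecture difference:

    # on each host, as root
    zonecfg -z oradb
      create
      set zonepath=/zones/oradb
      set autoboot=true
      add fs
        set dir=/u01
        set special=/export/u01
        set type=lofs
      end
      commit
      exit
    zoneadm -z oradb install
    zoneadm -z oradb boot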

I'm definitely going to start doing some research on Data Guard and the other possibilities; unfortunately the time constraints are tough, which is what brought me here. If anything, I'm looking for an interim solution.

hpuxrac, what do you find unrealistic in Mr. Hinsdale's approach? I'm not asking out of criticism; I'd actually like to know, since to me it makes sense and seems doable. The only thing of any concern is the schema "switch" for a few seconds, which I think could be scripted to be relatively painless.
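On the schema "switch" itself, what I had in mind is loading each refresh into a second schema and then repointing synonyms in one quick pass. Sketch only; APP_STAGE is a made-up name, and it assumes clients reach the tables through public synonyms:

    -- run as a DBA once the refreshed copy is loaded into APP_STAGE
    BEGIN
      FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'APP_STAGE') LOOP
        EXECUTE IMMEDIATE 'CREATE OR REPLACE PUBLIC SYNONYM ' || t.table_name
                       || ' FOR APP_STAGE.' || t.table_name;
      END LOOP;
    END;
    /

Grants on the new schema's objects would have to be in place before the flip, obviously, but the synonym swap itself should only take a second or two.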

Again, if there's an easier option, I'm definitely still open to it -- but please try and ignore the oddness of the requirements; I know they're weird, but they're inflexible. That's why I came here for some help.

hpuxrac wrote:
> tcp100 wrote:
> > Oh.. And for details, I think this will work.. We're dealing only with
> > about half a million rows - the database size is about 700mb. Not too
> > crazy at all.
> >
> > Most of these are inserts, I will wager that there are no deletes save
> > for the occasional development / administrative one, and few updates.
> >
> > I think this process will work - of course incremental would be better,
> > but from what I'm hearing, copying a redo log over and restoring it
> > won't meet my zero downtime requirement -- but it could meet a "low"
> > downtime requirement if reasonable.. (Unfortunately that's not where I
> > stand.)
> >
> > Thanks again for the advice...
> >
>
> The advice Mr. Hinsdale sent in doesn't make a lot of sense to me and
> is highly complicated and full of unrealistic client connection
> assumptions.
>
> If I were you I would spend time reading the oracle concepts
> documentation first available at http://tahiti.oracle.com and then read
> up specifically on data guard.
>
> Matt Hart has a book that might be worth looking at called Oracle
> Database 10g High Availability specifically for you probably the
> chapters on Data Guard and rman.
>
> It might be worth your time doing some research and reading and think
> about hiring for at least a couple of days an oracle consultant
> experienced in high availability designs. While you are pursuing that
> work on clarifying the management expectations and business drivers for
> this whole esoteric setup.
>
> If the situation ( as you already stated ) really is that the database
> is only 700 meg then you can probably patch something together using
> cd's for archive logs ( not ONLINE redo logs ... archived copies of
> them instead ). Kludgy and unsophisticated but perhaps workable with
> such a small database.
>
> Why approach the whole situation like this in the first place with such
> a small database though?
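For the archived-log-on-media idea above, my understanding of the mechanics is a manual standby that gets rolled forward by hand each time a disc arrives - roughly like this, assuming both sides run the same Oracle version and the standby was originally built from a backup of the primary plus a standby controlfile:

    # primary: force a log switch, then burn the new archived logs to disc
    sqlplus / as sysdba
      ALTER SYSTEM ARCHIVE LOG CURRENT;

    # standby: copy the logs from the disc into its archive destination,
    # then apply them (RECOVER prompts for each log it wants)
    sqlplus / as sysdba
      STARTUP NOMOUNT
      ALTER DATABASE MOUNT STANDBY DATABASE;
      RECOVER STANDBY DATABASE;

Again, the standby isn't usable while it's recovering, so this too is "low downtime" rather than the zero downtime I've been told to hit.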
Received on Wed Jan 03 2007 - 11:49:22 CST
