Been racking my brain to see if there is a more... elegant architecture for
the following:
- Largish Oracle-based web app... fairly heavy traffic, up to 200GB of
application data. 10g, pretty fast storage, etc.
- Users query data from this application all the live long day, downtime not
possible
- Daily (or more often) we load/delete data into a "pre-prod" copy of the
current production (say, a couple of hours of sqlldr and pl/sql processing),
customer validates and authorizes it for production, then we must switch the
"pre-prod" environment (aka a 200GB schema) into production, then turn the
old production environment into the next "pre-prod" environment.
We have this in place, and it works, but it's not very elegant and requires
more DBA supervision than we'd like:
- Schema XYZ1 and XYZ2 in the same database. XYZ1 = production, XYZ2 is
pre-production, both are identical in the morning. Current production app
hitting XYZ1 schema via synonyms.
- We load data into XYZ2. Customer verifies and approves through an
instance of the web app hitting XYZ2 tables via synonyms.
- We swap the synonyms... now production web site connections are hitting
XYZ2 and pre-prod web site connections are hitting XYZ1.
- We use expdp/impdp to empty XYZ1 and refresh it with the full contents of
XYZ2. It's scripted, but dangerous, so DBAs run this manually at least
daily. We thought about rman, transportable tablespaces, etc., but those
required even greater privileged access. At least the expdp/impdp process
can run as a regular user and therefore be handed off to a
developer/operator/etc in some packaged form... someday.
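For reference, the expdp/impdp refresh described above can be boiled down to roughly the following pair of commands. This is a sketch, not our actual script: the account name, directory object, and dumpfile names are placeholders, though REMAP_SCHEMA and TABLE_EXISTS_ACTION are the standard Data Pump parameters for this kind of schema-to-schema refresh:

```sql
-- Illustrative Data Pump refresh: export XYZ2, then import it over XYZ1.
-- host> expdp ops_user schemas=XYZ2 directory=DATA_PUMP_DIR
--        dumpfile=xyz2.dmp logfile=exp_xyz2.log
-- host> impdp ops_user remap_schema=XYZ2:XYZ1 directory=DATA_PUMP_DIR
--        dumpfile=xyz2.dmp logfile=imp_xyz1.log table_exists_action=replace
```

Because Data Pump runs as an ordinary database user with the appropriate directory grants, this is the piece that could eventually be packaged and handed to an operator.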
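The synonym flip in the step above can be scripted so the same helper generates the DDL for whichever direction the swap goes. A minimal sketch, assuming an APP schema owns the synonyms the web app resolves through, and with a made-up table list standing in for the real one:

```shell
# Hypothetical helper: emit the CREATE OR REPLACE SYNONYM statements that
# repoint the APP schema's synonyms at a target schema (XYZ1 or XYZ2).
# The APP owner and the table names here are illustrative assumptions.
emit_swap_ddl() {
  target="$1"
  for t in orders customers line_items; do
    printf 'CREATE OR REPLACE SYNONYM app.%s FOR %s.%s;\n' "$t" "$target" "$t"
  done
}

# Point production connections at the freshly approved schema:
emit_swap_ddl XYZ2
```

Piping the output into sqlplus (or spooling it for review first) keeps the swap itself a single, auditable step.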
That's the system in a nutshell. I guess my question is: is there a better
way to architect a solution like this, so as to accomplish the prod/pre-prod
swap without doing anything as invasive as cloning schemas or tablespaces?
Thoughts appreciated!
--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jan 24 2006 - 18:47:38 CST