Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: EXPORT/IMPORT and archiving data

From: Mark D Powell <mark.powell_at_eds.com>
Date: 18 Dec 2001 06:16:25 -0800
Message-ID: <178d2795.0112180616.a02b6fb@posting.google.com>


raghu1ra_at_rediffmail.com (raghu) wrote in message news:<d11b9387.0112172059.33cdee4f_at_posting.google.com>...
> Richard Novak <rnovak_at_yet2.com> wrote in message news:<3C1E654D.F426D824_at_yet2.com>...
> > Here's my situation that I hope someone can help me with:
> >
> > I've set up an archival schema which mirrors the production schema.
> > Also I've created some stored procedures that will take older data from
> > the production schema and move it to the archival schema. based on the
> > age of the production data.
> >
> > With the volume of data already in production, these routines would have
> > to run for a very long time to "catch up". Since we are a 24 by 7 shop
> > I need some way to move the data in a bulk fashion from the production
> > schema to the archive schema without wiping out the data already in the
> > archival schema.
> >
> > Would the EXPORT/IMPORT utilities work for this? If so, what parameters
> > do you suggest that I use which will ensure that the existing archival
> > data doesn't get wiped out?
> >
> > Is there another way to accomplish this? If so, how?
> >
> > Thanks in advance!
>
> hi richard
>
> One possible way using the export and import utilities is to take
> incremental exports. This method covers all schemas that have changed
> since the most recent export, but note that the whole table is
> exported even if only a small part of it has changed, not just the
> changes themselves. Alternatively, you can use RMAN for taking
> backups; it can satisfy your requirement of backing up a 24/7
> database.

The problem with incremental exports is that if a single row in a table is changed then the entire table is exported, not just the changed data!

The exp utility does not affect the original data in any way. If you are talking about exporting the base table and using imp to add the rows to the archive table, the main problem is that if any of the exported data already exists in the archive schema, your imp will get duplicate key errors on the unique indexes.

If none of the original data has been purged yet, then you could truncate the archive versions and lay a full export of the original tables into them, since you would be replacing all the data. If the data already in the archive schema has been deleted from production, then it will not be present in the exp file to cause duplicate key errors on import, so this method should work just fine. I would suggest adding indexes=n, constraints=n, and maybe grants=n to the exp step, since your archive version of the tables may not need the same indexes as production.
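A rough sketch of that flow, assuming you replace the archive copies wholesale. All user names, passwords, table names, and file names below are invented for illustration; adjust them to your own schemas.

```shell
# Sketch only -- prod_user, arch_user, the ORDERS tables, and the dump
# file name are hypothetical.

# 1. Export the production tables, skipping index/constraint/grant DDL:
exp prod_user/prod_pw file=prod_tables.dmp tables=(ORDERS,ORDER_LINES) \
    indexes=n constraints=n grants=n

# 2. Empty the archive copies first (safe only because every row is
#    being replaced).  Run in SQL*Plus as the archive owner:
#      TRUNCATE TABLE orders;
#      TRUNCATE TABLE order_lines;

# 3. Load the export into the archive schema.  ignore=y tells imp to
#    load rows into the pre-existing tables instead of failing on
#    "table already exists" errors:
imp arch_user/arch_pw file=prod_tables.dmp fromuser=PROD_USER \
    touser=ARCH_USER ignore=y
```

Note that ignore=y only suppresses the object-creation errors; rows that duplicate an existing unique key would still be rejected, which is why truncating first (or only archiving already-purged data) matters.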

If you have a large enough rollback segment available, then an INSERT ... SELECT may be a faster way to move the data.
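As a minimal sketch of that approach (schema names, table names, and the 90-day cutoff are all invented here):

```shell
# Sketch only -- moves production rows older than 90 days into the
# archive copy, then deletes them from production.  The single COMMIT
# at the end keeps the insert and delete in one transaction, which is
# exactly why a large rollback segment is needed.
sqlplus -s prod_user/prod_pw <<'EOF'
WHENEVER SQLERROR EXIT ROLLBACK

INSERT INTO arch_user.orders
  SELECT * FROM orders
  WHERE order_date < SYSDATE - 90;

DELETE FROM orders
  WHERE order_date < SYSDATE - 90;

COMMIT;
EOF
```

The WHENEVER SQLERROR line makes SQL*Plus roll back both statements if either fails, so a mid-run error does not leave rows copied but not deleted.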

If you are ultimately moving the archive schema data to tape or to flat files, then you should consider just extracting the data into delimited files with the sqlldr control cards placed at the top of each file for future retrieval. If you ever alter the table structure, old imp dump files do not work very well.
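One way that file could be built, as a sketch: write the control cards first, then append spooled, delimited rows below a BEGINDATA marker. The table, columns, and file names here are hypothetical.

```shell
# Sketch only -- produces a single self-describing file: sqlldr control
# cards on top, pipe-delimited data underneath BEGINDATA.
cat > orders_arch.dat <<'EOF'
LOAD DATA
INFILE *
APPEND INTO TABLE orders
FIELDS TERMINATED BY '|'
(order_id, customer_id, order_date DATE "YYYY-MM-DD")
BEGINDATA
EOF

# Spool the rows in the same delimited layout:
sqlplus -s prod_user/prod_pw <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON LINESIZE 200
SPOOL /tmp/orders_rows.dat
SELECT order_id || '|' || customer_id || '|' ||
       TO_CHAR(order_date, 'YYYY-MM-DD')
FROM orders;
SPOOL OFF
EOF

cat /tmp/orders_rows.dat >> orders_arch.dat

# Years later, reloading needs only:
#   sqlldr arch_user/arch_pw control=orders_arch.dat
```

Because the control cards travel with the data, a later column change only requires editing the field list at the top of the file rather than regenerating a dump.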

Received on Tue Dec 18 2001 - 08:16:25 CST