
Deleting Large Volumes of Data

From: Otto Norse <ottonorse_at_yahoo.com>
Date: 29 Nov 2001 10:12:50 -0800
Message-ID: <5330c896.0111291012.53eb4bb6@posting.google.com>


RE: http://www.oracle.com/oramag/webcolumns/2000/index.html?DeletingData_Hart.html

I saw this article after I had created a script that does something similar, and I wondered how the two methods would compare in performance and final results (my step 5 versus the exp, and no rename). I have not fully tested this script, but it seems to be working so far.

My steps:

  1. disable constraints
  2. create table temp_table as (select * from keep_table where <keep clause>)
    - just a place to hold the data I want to preserve
  3. truncate table keep_table
    - get rid of the huge amount of data I want to delete
  4. insert into keep_table (select * from temp_table)
    - re-populate the table that has the indexes, constraints, etc.
  5. drop table temp_table
  6. repeat for each generation of child tables (I have a loop that starts with a seed table, does that delete, then handles the children found via user_constraints -- see the sketch after this list)
  7. exp the user, then re-import
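
To make the steps concrete, here is a rough SQL sketch of steps 1 through 6. The child table name, the FK constraint name, and the keep predicate are all placeholder examples, not my real schema:

    -- step 1: disable the FK constraints pointing at the parent
    -- (shown for one hypothetical child; in practice this is looped)
    alter table child_table disable constraint fk_child_keep;

    -- step 2: stash the rows to preserve (example keep clause)
    create table temp_table as
      select * from keep_table
      where created_date >= sysdate - 30;

    -- step 3: throw away the bulk of the data cheaply
    truncate table keep_table;

    -- step 4: re-populate the table that still has its indexes,
    -- constraints, grants, etc.
    insert into keep_table (select * from temp_table);
    commit;

    -- step 5: clean up the holding table
    drop table temp_table;

    -- re-enable what step 1 disabled (implied, though not listed
    -- as a numbered step above)
    alter table child_table enable constraint fk_child_keep;

    -- step 6: find the next generation of children to repeat on;
    -- a query along these lines drives the loop
    select c.table_name, c.constraint_name
    from   user_constraints c, user_constraints p
    where  c.r_constraint_name = p.constraint_name
    and    c.constraint_type = 'R'
    and    p.table_name = 'KEEP_TABLE';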

===

I'm doing this to create a small version of a very large database, i.e. something with n GB of data reduced by 80% or so. This gives me a small test database that is still a pretty good representation of the production data.

My other issue was how to actually make the database smaller on disk after the exp / re-imp. So far, I've been running imp with SHOW=Y INDEXFILE=foo.sql, editing foo.sql's INITIAL / NEXT extents, and then doing the real imp. I need to look at this some more.
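
For reference, the round trip looks roughly like this. The username, password, and dump file name are placeholders (SHOW=Y and INDEXFILE are imp parameters, so the DDL extraction happens on the import side):

    # 1. export the (now shrunken) schema
    exp scott/tiger OWNER=scott FILE=scott.dmp

    # 2. pull the DDL out without importing any rows; imp writes the
    #    CREATE statements to foo.sql (tables commented out, indexes not)
    imp scott/tiger FILE=scott.dmp FULL=Y SHOW=Y INDEXFILE=foo.sql

    # 3. edit foo.sql: uncomment the table DDL and shrink the
    #    storage clauses, e.g.
    #      STORAGE (INITIAL 500M NEXT 100M ...)
    #    becomes
    #      STORAGE (INITIAL 1M NEXT 1M ...)

    # 4. run the edited DDL, then import the data into the
    #    pre-built objects (IGNORE=Y skips "already exists" errors)
    sqlplus scott/tiger @foo.sql
    imp scott/tiger FILE=scott.dmp FULL=Y IGNORE=Y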

Thanks for any feedback,
ON

Received on Thu Nov 29 2001 - 12:12:50 CST

