Re: Need alternate way of eliminating duplicates from 93K table

From: Ken Denny <kedenny_at_mail.concentric.net>
Date: 1996/04/08
Message-ID: <4kbug3$s7o_at_tribune.concentric.net>


jp_at_vllyoak.resun.com (Jeff Perry) wrote:
>Here's my situation; a user discovered their table
>with 93K rows has duplicated data, and they want
>the duplicates deleted.
>
>First, I attempted a correlated subquery as follows:
> delete from ciim86_oeorder_history A
> where rowid>
> (select min(rowid)
> from ciim86_oeorder_history
> where field1 = A.field1
> and field2 = A.field2
> );
>Thirty six (36) hours later, this query was still
>running when the system was rebooted.
>
>Next, wishing to avoid the correlated subquery method,
>I created a duplicate table with a unique constraint.
>Using PL/SQL, I built a loop to insert only
>unique records from the original table into this
>new temporary table, HOWEVER, I cannot figure out
>a way to resume the insert where it left off when
>it encounters its first duplicate record.
>
>If you know of some alternate method of removing these
>duplicate records, please share with me your ideas.
>
>TIA
>
>--
>Jeff Perry
>jp%vllyoak.resun.com%blkhole_at_sdd.hp.com

declare

   cursor c_coh is select * from ciim86_oeorder_history;
   r_coh ciim86_oeorder_history%rowtype;

begin

   for r_coh in c_coh loop

      begin
         insert into alt_coh (<list of fields>) values
            (<list of fields all preceded with r_coh.>);
      exception
         when dup_val_on_index then
            null;
      end;

   end loop;
end;

By putting the begin ... exception ... end; block inside the loop, the duplicate-key error is handled for that one insert only, so the loop carries on with the next row instead of terminating.
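For completeness, here is the same idea spelled out end to end. This is only a sketch: it assumes, purely for illustration, that field1 and field2 (the columns from Jeff's correlated subquery) are the only columns and the ones that define a duplicate. With the real table you would list all of its columns in the insert and put the unique index on whatever combination must be unique.

-- Illustrative only: field1/field2 stand in for the real column list.
create table alt_coh as
   select * from ciim86_oeorder_history
   where 1 = 0;

create unique index alt_coh_uk on alt_coh (field1, field2);

declare
   cursor c_coh is select * from ciim86_oeorder_history;
begin
   for r_coh in c_coh loop
      begin
         insert into alt_coh (field1, field2)
            values (r_coh.field1, r_coh.field2);
      exception
         when dup_val_on_index then
            null;   -- row already copied; skip it and keep looping
      end;
   end loop;
   commit;
end;
/

Once the copy has been checked, the original table can be dropped (or emptied and reloaded from alt_coh), whichever fits the application.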

Ken Denny

-- 

 /|  /
/ | /   _        _
  |<   / \    /|/ \
  | \  \ /   / |   \
  |  \__X___/  |    \___
Received on Mon Apr 08 1996 - 00:00:00 CEST
