Re: Need alternate way of eliminating duplicates from 93K table

From: Bhasker Ravikanti <bhasker_at_ix.netcom.com>
Date: 1996/04/09
Message-ID: <4kcq17$q2b_at_cloner2.ix.netcom.com>


In <6RD3LD1w165w_at_vllyoak.resun.com> jp_at_vllyoak.resun.com (Jeff Perry) writes:
>
>Here's my situation: a user discovered that their
>93K-row table contains duplicate rows, and they want
>the duplicates deleted.
>
>First, I attempted a correlated subquery as follows:
>   delete from ciim86_oeorder_history A
>    where rowid > (select min(rowid)
>                     from ciim86_oeorder_history
>                    where field1 = A.field1
>                      and field2 = A.field2);
>Thirty-six (36) hours later, this query was still
>running when the system was rebooted.
>
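
The correlated-subquery delete is the textbook approach, but with
no index on (field1, field2) the subquery has to scan the whole
table once for every one of the 93K rows, which goes a long way
toward explaining the 36 hours.  Building a composite index first
usually makes the same delete finish in a reasonable time -- a
sketch, with a made-up index name:

  create index oeorder_hist_dup_ix          -- hypothetical name
      on ciim86_oeorder_history (field1, field2);
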
>Next, wishing to avoid the correlated subquery method,
>I created a duplicate table with a unique constraint.
>Using PL/SQL, I built a loop to insert only
>unique records from the original table into this
>new temporary table. HOWEVER, I cannot figure out
>a way to resume the insert where it left off when
>it encounters its first duplicate record.
>
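
For what it's worth, the loop does not have to stop at the first
duplicate: trap the DUP_VAL_ON_INDEX exception on each insert and
simply move on to the next row.  A rough sketch, assuming the new
table is called oeorder_history_uniq (a made-up name) and already
has its unique constraint on (field1, field2):

  begin
      for r in (select rowid rid from ciim86_oeorder_history) loop
          begin
              insert into oeorder_history_uniq   -- hypothetical target table
                  select * from ciim86_oeorder_history
                   where rowid = r.rid;
          exception
              when dup_val_on_index then null;   -- duplicate key: skip it, keep going
          end;
      end loop;
      commit;
  end;
  /

Row-by-row, so not fast, but it should still beat the 36-hour
delete by a wide margin.
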
>If you know of some alternate method of removing these
>duplicate records, please share with me your ideas.
>
>TIA
>
>--
>Jeff Perry
>jp%vllyoak.resun.com%blkhole_at_sdd.hp.com

How about:

  • export the table
  • drop the table
  • re-create the table with a unique constraint on the duplicated columns
  • import the table data
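
Roughly, treating scott/tiger, the dump file name, the constraint
name, and the (field1, field2) key purely as placeholders:

  $ exp scott/tiger tables=ciim86_oeorder_history file=oehist.dmp

  SQL> drop table ciim86_oeorder_history;
  SQL> create table ciim86_oeorder_history ( ... );    -- same columns as before
  SQL> alter table ciim86_oeorder_history
         add constraint oeorder_hist_uk unique (field1, field2);

  $ imp scott/tiger tables=ciim86_oeorder_history file=oehist.dmp ignore=y

With ignore=y, imp ignores the "table already exists" error and
loads the rows into the existing table; rows that hit the unique
constraint should be rejected and reported, and the rest load clean.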

Bhasker Ravikanti.
