Re: Need alternate way of eliminating duplicates from 93K table

From: Chuck Hamilton <chuckh_at_ix.netcom.com>
Date: 1996/04/11
Message-ID: <316d1254.5747132_at_news2.ios.com>


jp_at_vllyoak.resun.com (Jeff Perry) wrote:

>Here's my situation; a user discovered their table
>with 93K rows has duplicated data, and they want
>the duplicates deleted.

[snipped]

Here's a script I use to delete duplicates. It creates a holding table (dedupe$) containing just the duplicated key values and the rowid of the one row to keep. You can drop it at the end if you wish, but I like to leave it around so I can see exactly what was deleted.

------dedupe.sql--------
accept tabnam prompt 'Enter table name to dedupe:'
accept pk_cols prompt 'Enter primary key columns separated by commas:'
set verify off
drop table dedupe$
/
create table dedupe$
tablespace temp
as (select '&&tabnam' table_name, &&pk_cols, min(rowid) min_rowid
    from &&tabnam
    group by &&pk_cols
    having count(*) > 1)
/
delete from &&tabnam
where (&&pk_cols) in (select &&pk_cols from dedupe$)
and rowid not in (select min_rowid from dedupe$)
/
--------end of dedupe.sql--------
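
For illustration, here's roughly what the script expands to once the substitution variables are filled in. The table name EMP_COPY and key columns EMPNO, HIREDATE below are made up for the example; the real names come from the accept prompts.

--------example expansion (hypothetical)--------
-- Collect each duplicated key plus the rowid of the one row to keep.
create table dedupe$
tablespace temp
as (select 'EMP_COPY' table_name, EMPNO, HIREDATE, min(rowid) min_rowid
    from EMP_COPY
    group by EMPNO, HIREDATE
    having count(*) > 1)
/
-- Delete every row in a duplicated group except the one whose rowid we kept.
delete from EMP_COPY
where (EMPNO, HIREDATE) in (select EMPNO, HIREDATE from dedupe$)
and rowid not in (select min_rowid from dedupe$)
/
--------end of example--------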

--
Chuck Hamilton
chuckh_at_dvol.com

Never share a foxhole with anyone braver than yourself!