Oracle FAQ: Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: how to rebuild index after direct path load?

Re: how to rebuild index after direct path load?

From: <hkpang_at_my-deja.com>
Date: Fri, 26 Nov 1999 06:10:22 GMT
Message-ID: <81l88d$pm3$1@nnrp1.deja.com>


Using the conventional path is about 10-20 times slower, so I don't think I can do that.

For the second method, using an exceptions table: as I already noted, both the new record and the old record are written to the exceptions table. I would have to run a query that, for each key, deletes the row whose rowid is smaller, and that is also very slow.

I am puzzled why Oracle writes both the old and the new rows to the exceptions table. I would expect it to skip the first one.
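For what it's worth, the "both rows end up in the exceptions table" behavior can still be made to work. A minimal sketch follows; all object names (big_table, big_table_pk, pk_col) are made up, and it assumes an EXCEPTIONS table created with the standard utlexcpt.sql script. It also leans on the assumption that a direct path load appends above the high-water mark, so the newly loaded duplicates normally have larger rowids than the pre-existing rows; verify that holds for your data before trusting it.

```sql
-- 1) Try to re-enable the PK. With duplicates present this ALTER fails
--    (ORA-02437), but it still populates EXCEPTIONS with the rowids of
--    every row involved in a violation -- both old and new copies.
ALTER TABLE big_table
  ENABLE VALIDATE CONSTRAINT big_table_pk
  EXCEPTIONS INTO exceptions;

-- 2) Among the offending rows, keep only the one with the highest rowid
--    per key (assumed to be the newly loaded copy) and delete the rest.
--    This touches only the violating rows, not the whole table.
DELETE FROM big_table t
 WHERE t.rowid IN (SELECT row_id FROM exceptions)
   AND t.rowid NOT IN (
         SELECT MAX(t2.rowid)
           FROM big_table t2
          WHERE t2.rowid IN (SELECT row_id FROM exceptions)
          GROUP BY t2.pk_col);

-- 3) Re-enable the constraint for real; this rebuilds the unique index.
ALTER TABLE big_table
  ENABLE VALIDATE CONSTRAINT big_table_pk;
```

Because the DELETE is driven entirely by rowids from the exceptions table, it avoids the full-table anti-join that makes a generic duplicate-elimination DELETE so slow on millions of rows.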

In article <81k3qt$spu$1_at_news6.svr.pol.co.uk>,
"-=< a q u a b u b b l e >=-" <aquabubble_at_Remove.No.Parma.Ham.Remove.geocities.com> wrote:
> <hkpang_at_my-deja.com> wrote in message news:81hbld$6nu$1_at_nnrp1.deja.com...
> > Hi,
> >
> > I need to upload a huge dataset into an existing table. If the dataset
> > contains duplicate keys, I want to delete the old ones already in the
> > table.
> >
> > My current approach uses sqlldr direct path. Since the data contains
> > duplicate keys, the primary key index is violated and left in direct
> > load state afterward.
> >
> > Now my problem is how to rebuild the index *and* eliminate the
> > duplicates?
> >
> > 1) Use a DELETE command to eliminate the duplicates. This kind of
> > works, but the performance is unacceptable; I potentially have millions
> > of rows to delete.
> >
> > 2) Drop the index before the upload, then re-enable the constraint with
> > exceptions going to an exceptions table. The problem is that both the
> > old rows and the new rows are in the table, and there is no way I can
> > figure out which one to eliminate.
> >
> > Any suggestions???
> >
> > Many thanks for making a happy thanksgiving possible. :-)
>
> Okay, I have two suggestions. The simplest is to have your unique/primary
> key constraint enabled when you do your SQL*Load (I think for this you
> have to use the conventional path). Here, the violating rows won't be
> loaded, and the rejected records will be written to a bad file. Oh, you
> have to make sure that the maximum error count is set very large in your
> SQL*Loader control file, since the default is only 50.
>
> Secondly, if you use method (2), you can determine the rowids of all rows
> that cause an exception from the EXCEPTIONS table. It would be a very
> efficient DELETE to delete from your table where rowid = ... I'd opt for
> the latter, since I would think it is faster, and you also have more
> control over what is/isn't deleted.
>
> HTH
>
>
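The first suggestion (conventional path with the constraint enabled) can be sketched as a control file like the one below. File and table names are invented; the essential part is the OPTIONS (ERRORS=n) clause, because the conventional-path default of 50 errors would abort a load with millions of duplicates almost immediately.

```sql
-- Hypothetical SQL*Loader control file: with the PK constraint enabled,
-- each duplicate key is rejected at insert time and the whole record is
-- written to the bad file for inspection.
OPTIONS (ERRORS=99999999)
LOAD DATA
INFILE  'big_table.dat'
BADFILE 'big_table.bad'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ','
(pk_col, payload_col)
```

Note the trade-off the thread is circling: this keeps the *old* rows and rejects the *new* ones, which is the opposite of what the original poster wants, so the rejected records in the bad file would then have to be applied as updates in a second pass.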

Received on Fri Nov 26 1999 - 00:10:22 CST

