Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Performance Brainstorming

From: ls cheng <g-e-n-o_at_europe.com>
Date: 28 Nov 2002 01:02:43 -0800
Message-ID: <b352e1b3.0211280102.1145d16a@posting.google.com>


I bet it's overindexed

We have a 50-million-row table, and deleting 15 million rows from it takes 4 days! (The table has 82 columns and 18 indexes; we delete 750,000 rows every 4 hours.) I wouldn't expect an insert to take as long as my process, since insert is faster than delete, but in my opinion the table must be overindexed.

The only solution that occurred to me was to partition the table by some date column, convert all the indexes into global indexes (including the index on the partition column), and then drop the corresponding partitions.
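For reference, a rough sketch of that partition-and-drop approach (table, column, and partition names here are invented). One caveat: dropping a partition marks GLOBAL indexes unusable, whereas LOCAL indexes are dropped along with the partition, so the sketch uses local indexes to keep the purge cheap:

```sql
-- Hypothetical range-partitioned version of the big table,
-- partitioned by the date column used for purging.
CREATE TABLE big_table_part (
  load_date  DATE NOT NULL,
  col1       NUMBER
  -- ... remaining 80-odd columns ...
)
PARTITION BY RANGE (load_date) (
  PARTITION p_2002_10 VALUES LESS THAN (TO_DATE('2002-11-01','YYYY-MM-DD')),
  PARTITION p_2002_11 VALUES LESS THAN (TO_DATE('2002-12-01','YYYY-MM-DD'))
);

-- A LOCAL index is partitioned the same way as the table, so its
-- partition disappears together with the table partition.
CREATE INDEX big_table_part_ix1 ON big_table_part (col1) LOCAL;

-- The purge becomes a near-instant DDL instead of a multi-day DELETE:
ALTER TABLE big_table_part DROP PARTITION p_2002_10;
```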

nsouto_at_optushome.com.au (Nuno Souto) wrote in message news:<dd5cc559.0211271433.13059276_at_posting.google.com>...
> davidosborne_at_comcast.net (David Osborne) wrote in message news:<193f1e93.0211250729.a16590a_at_posting.google.com>...
> >
> > The table involved has 75 million rows, and is non-partitioned. Each
> > night we add about a million rows and it takes several hours (4+).
>
> Hmmmmm.... Do you ever delete rows? There might be something
> wrong with the setup of the table: 4 hours for a million rows
> is waaaaaay too much, unless you have a very slow Sun box.
> Are you using the array interface for row insertion?
> How many indexes?
>
> My desktop PC can insert a million rows in less than 10 minutes.
> How often do you re-build the indexes? Why haven't you
> considered partitioning? Sounds like you'd have a
> perfect fit for it. You didn't say, but I'm assuming you
> have 8.0 or later?
>
> > Our goal is under two hours (is this reasonable?) The table and
> > indexes are on different tablespaces; which are on different disks and
> > controllers (although I'm hearing that this is no longer an issue).
>
> It might or might not be. Depends on too many things.
> In general, if you have a SAN or RAID-type disk architecture,
> that would tend to be true. Even with native disks and
> controllers it can also be true if you're using a LVM.
> In your case, I'd say putting the redo logs in separate disks
> and/or controllers would gain you more.
>
> >
> > I have divided the load up into 16 separate jobs (using DBMS_JOB) and
> > set the freelists on the table and indexes to 16.
>
> Probably too much. I'd say you shouldn't set freelists higher
> than the number of db-writer processes/threads you have configured.
> Pure rule of thumb, but it seems to work fine for me.
>
> >
> > When the job is running (all 16), TOP shows that the CPU is 0-3% idle
> > and the IO wait varies from about 5% to 20% so it appears that we are
> > fully utilizing the server.
>
> Sounds OK, although I'd expect I/O saturation for that many
> concurrent insert jobs. What size are the redo logs? Are you
> positive you're using array processing?
>
> >
> > One idea is to load the new rows into a new table and then insert them
> > with one "insert into big_table select * from new_small_table" with
> > the append hint. The big table has pctfree at 10%.
>
> Dunno why that would be so much better. What are you using
> now, SQL*Loader? If you're using a programmatic loop with an
> INSERT for every row without even array processing thrown in,
> there lies your problem.
> Make sure you're using arrays for the INSERT. That will
> gain you at least an order of magnitude in processing time.
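A rough sketch of what array (bulk) insertion looks like in PL/SQL — the table and column names are invented, and in OCI or Pro*C the equivalent is an array bind, while SQL*Loader batches rows automatically:

```sql
DECLARE
  TYPE t_ids IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  l_ids t_ids;
BEGIN
  -- In a real load these would come from the staging source;
  -- here we just fill a 1000-element array.
  FOR i IN 1 .. 1000 LOOP
    l_ids(i) := i;
  END LOOP;

  -- One statement execution for the whole array, instead of
  -- 1000 single-row INSERTs with 1000 round trips:
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO big_table (id) VALUES (l_ids(i));

  COMMIT;
END;
/
```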
>
> If you're going to the trouble of the above, I'd investigate
> partitioning instead. A much better way to handle this type
> of load.
>
> HTH
> Cheers
> Nuno Souto
> nsouto_at_optusnet.com.au.nospam
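For reference, the "load into a small table, then one big insert with the append hint" idea mentioned earlier in the thread would look roughly like this (table names as given above; the APPEND hint requests a direct-path insert above the high-water mark, bypassing the buffer cache and freelist management):

```sql
-- Direct-path insert of the staged rows into the big table.
INSERT /*+ APPEND */ INTO big_table
SELECT * FROM new_small_table;

-- Direct-path loaded data is not visible (even to the loading
-- session) until the transaction commits.
COMMIT;
```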
Received on Thu Nov 28 2002 - 03:02:43 CST

