Re: weird performance problem

From: Dba DBA <oracledbaquestions_at_gmail.com>
Date: Thu, 10 Jul 2008 16:55:38 -0400
Message-ID: <f30139790807101355m26531369n6b0b97c131b0b110@mail.gmail.com>


It had to be fragmentation.

I am running alter table <tablename> shrink space.

It is still running; it has been going for a few hours.

My select count(*) is now down to 43 consistent gets, from 125,000.

Thanks, guys!
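For the archives, the full shrink sequence looks roughly like this. This is a sketch: the table name is a placeholder, shrink requires row movement to be enabled, and the segment must live in an ASSM (automatic segment space management) tablespace.

```sql
-- Shrink requires row movement (rowids change during compaction).
alter table mybigtable enable row movement;

-- Compact the segment and lower the high-water mark.
-- "cascade" would also shrink dependent segments, but we have
-- no indexes on this table at the moment.
alter table mybigtable shrink space;

-- Re-check how much work a full count does afterwards (SQL*Plus):
set autotrace traceonly statistics
select count(*) from mybigtable;
```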

On Thu, Jul 10, 2008 at 3:32 PM, Dba DBA <oracledbaquestions_at_gmail.com> wrote:

> I am going to try to respond to several emails in one post.
>
>
> 1. We don't have any indexes because I just inserted 100 GB of data and did
> not want indexes to slow that down. They will go back on in the future.
> 2. The select of one row was only to pull sample data to look at, and it
> was taking a long time. This is not normal functionality.
> 3. We are using locally managed tablespaces with uniform extent sizes. All
> extents in the tablespace are 20 MB.
> 4. We did do a few million deletes, but that is a relatively small number
> of deletes relative to the size of the table.
> 5. I ran large insert-selects across a db link that died several times and
> rolled back. I think that may be the problem: lots of fragmentation. I lost
> the connection across the database link after pulling many GB of data, so I
> made my pulls smaller (got less data with each call).
> 6. I am currently running an alter table shrink space operation to see if
> this fixes the problem.
> 7. The table may grow to 2 TB, so these types of solutions are problematic
> long term.
> 8. Most of the time we will not be pulling large amounts of data; inserts
> should be a few megabytes at a time. It just happens a lot, and we have to
> keep data for a really long time, so the table will get very big.
> 9. There should not be any updates or deletes. We are in the early
> deployment stage, so I had to clean stuff up with deletes.
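One way to confirm the fragmentation theory before (or after) the shrink is to look at how full the blocks below the high-water mark actually are. A sketch using DBMS_SPACE.SPACE_USAGE, assuming an ASSM tablespace; the owner and table names are placeholders:

```sql
set serveroutput on
declare
  l_unf  number; l_unf_b  number;
  l_fs1  number; l_fs1_b  number;
  l_fs2  number; l_fs2_b  number;
  l_fs3  number; l_fs3_b  number;
  l_fs4  number; l_fs4_b  number;
  l_full number; l_full_b number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'MYBIGTABLE',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1_b,   -- 0-25% free space
    fs2_blocks => l_fs2, fs2_bytes => l_fs2_b,   -- 25-50% free
    fs3_blocks => l_fs3, fs3_bytes => l_fs3_b,   -- 50-75% free
    fs4_blocks => l_fs4, fs4_bytes => l_fs4_b,   -- 75-100% free
    full_blocks => l_full, full_bytes => l_full_b);
  -- A large fs4/fs3 count relative to full blocks suggests the
  -- segment is mostly empty space left behind by the rollbacks.
  dbms_output.put_line('full=' || l_full || ' mostly_empty=' || l_fs4);
end;
/
```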
>
> @Mark W. Farnham: What is a "Gorman"? I assume you are referring to Tim
> Gorman. By "reorganize" do you mean using dbms_redefinition on the table?
> Is that better than a shrink space?
>
> What do you mean by
> "For a partitioned table just copying the problem partition and doing a
> Gorman is probably the cheapest way to fix the problem. Since you're
> partitioning by day, it seems unlikely the problem will recur for that
> partition, and you **may** also benefit by copying the rows to the new
> physical location in a single threaded order that matches your predominant
> retrospective access path, if any such access path or ties being at least
> 25% of queries exist.)"
>
> Are you talking about co-locating data by using an ORDER BY? I am not sure
> how that has anything to do with my problem of getting just one row back.
>
> I thought about doing a CTAS to a new table and then renaming it back, but
> that is not practical at 2 TB, so it does not scale.
>
> I really do not think I should have to scan that many blocks to get just a
> few rows back; I think that statement is incorrect for a properly
> functioning table. (I forget who posted it.) I tested this on another table
> that is about 11 GB and it did a total of 950 physical reads. That table is
> about 15 times smaller than my problem table, but it takes thousands of
> times fewer physical IOs.
>
> Another problem:
> Looks like I am having problems with partition elimination. I did a test
> query as follows:
> select count(*)
> from myBigTable
> where mypartitioncolumn < to_date (date in the past)
> and rownum < 2
>
> where "date in the past" is earlier than my earliest piece of data, so the
> query should only scan empty partitions. (I have empty partitions with
> earlier dates than I have data, because I thought I was going to have to go
> back and get earlier data than I needed; it is about 20 empty partitions at
> 20 MB each, so about 400 MB of space.)
>
> The query above ran for 15 minutes before I killed it. It is doing tons of
> scattered reads.
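One way to check whether pruning is happening at all is to look at the Pstart/Pstop columns in the plan. A sketch; the date literal is made up, and the table/column names are the ones from the query above:

```sql
explain plan for
select count(*)
from myBigTable
where mypartitioncolumn < to_date('2008-01-01', 'YYYY-MM-DD')
  and rownum < 2;

-- Pstart/Pstop show which partitions Oracle will actually touch.
-- With a literal predicate on the partition key, the range should
-- be limited to partitions below that date; if Pstart/Pstop show
-- the full partition range, elimination is not happening.
select * from table(dbms_xplan.display);
```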
>
> Thank you to everyone who responded. If I missed some of the responses, I
> apologize. I'll read through the emails again.
>
>
> PS -- We are considering compressing the data as follows: we insert data
> into the table, and every night a job copies data from a partition that is,
> say, 2 days old into a new compressed table, and then we exchange the
> partitions. I noticed that this reduces the space to almost 1/6 the size.
> This should also solve any fragmentation issues, since I am re-copying the
> data into a new table and then exchanging it, correct?
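The nightly compress-and-exchange step described in the PS could be sketched like this. The partition name is hypothetical, and the staging table must match the partitioned table's column list (and any indexes/constraints, if present) for the exchange to succeed:

```sql
-- Copy the day-old partition's rows into a compressed staging table.
-- The CTAS rewrites the rows densely, which also removes any
-- fragmentation carried over from the original partition.
create table stage_compressed compress
as select * from myBigTable partition (p20080708);

-- Swap the compressed segment into the partitioned table.
-- The exchange is a metadata-only operation, so it is fast even
-- when the partition is large.
alter table myBigTable
  exchange partition p20080708 with table stage_compressed
  without validation;

-- The staging table now holds the old, uncompressed segment.
drop table stage_compressed;
```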
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Jul 10 2008 - 15:55:38 CDT
