
Re: Please help me solve this Oracle mystery...

From: <mpir_at_compuserve.com>
Date: Fri, 11 Jun 1999 13:11:33 GMT
Message-ID: <7jr1tu$f80$1@nnrp1.deja.com>


The original question was how a 45-row table could slow the query down so badly.

Without seeing the SQL statements and table definitions, the thing that could do it fastest (slowest?) would be a join condition that disabled the indexes, forcing a full table scan. The most common example is a field in table a that is a NUMBER joined to a field in table b that is defined as a CHAR but contains only numbers. Even if both fields are indexed, the condition a.field = b.field can result in full table scans, because Oracle automatically applies a conversion function to one side of the comparison, which disables the index on that column. The solution might be to make the conversion explicit: a.field = to_number(b.field).
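
As a rough sketch (the table and column names are invented here, since the actual SQL wasn't posted), the pattern looks something like this:

  -- orders.cust_id is a NUMBER; cust_xref.cust_id is a CHAR that happens
  -- to hold only digits.

  -- The implicit conversion in this join can keep an index from being
  -- used, so a full table scan may result:
  SELECT o.order_no, x.region
    FROM orders o, cust_xref x
   WHERE o.cust_id = x.cust_id;

  -- The rewrite suggested above, with the conversion made explicit:
  SELECT o.order_no, x.region
    FROM orders o, cust_xref x
   WHERE o.cust_id = TO_NUMBER(x.cust_id);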

In article <375CA6EB.59411BF8_at_home.com>,
  Austin Durbin <adurbin_at_home.com> wrote:
> What could be happening is called the 'high water mark'. As rows are
> inserted into a table, Oracle maintains a high water mark. Later, if rows
> are deleted, the high water mark remains. When you query the table, Oracle
> reads it all the way up to the high water mark. Most good Oracle tuning
> books will have a good discussion of this. The only way to lower the high
> water mark is either to reorg the table or to truncate it (if you are
> purging all the rows).
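
To make the high-water-mark point concrete (the table name below is made up), the difference between deleting all the rows and truncating is roughly:

  -- DELETE removes the rows but leaves the high water mark where it was;
  -- a later full scan still reads every block up to that mark:
  DELETE FROM purge_test;
  SELECT COUNT(*) FROM purge_test;   -- 0 rows, but still slow on a big table

  -- TRUNCATE (only if you are purging all the rows) resets the high water mark:
  TRUNCATE TABLE purge_test;
  SELECT COUNT(*) FROM purge_test;   -- 0 rows, and fast again
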
>
> John Seitz wrote:
>
> > I was having the same problem at my work. What was happening was that I
> > had been testing data-purging code for about a week with a table that
> > had 1 million rows. I would fill the table up with data and then purge
> > it. After about a week, I finished my testing. Then I did a count on
> > the table. It took about 30 seconds, even though the table had no
> > records.
> >
> > So what I had to do was export the entire user, drop the tablespace,
> > delete the datafile, and start fresh with a new tablespace. Then I
> > imported the data. Now it rocks, even with 1 million rows.
> >
> > John
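
For reference, the rebuild John describes would look roughly like this (all names and paths are placeholders):

  -- Export the owning user first (Oracle 8 exp utility, run from the OS shell):
  --   exp userid=owner/password owner=owner file=owner.dmp

  DROP TABLESPACE users_data INCLUDING CONTENTS;
  -- remove the old datafile at the OS level, then:
  CREATE TABLESPACE users_data
    DATAFILE '/u01/oradata/users_data01.dbf' SIZE 500M;

  -- Re-import the user's objects:
  --   imp userid=owner/password fromuser=owner touser=owner file=owner.dmp
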
> >
> > NNOOR wrote:
> >
> > > Hi,
> > >
> > > We are running Oracle 8.0.5. The machine is a dual Pentium Pro 200
> > > with 512MB RAM and RAID level-3 hard drives.
> > >
> > > In a query, a table with only 45 records was part of a join. The
> > > other two tables had about 100,000 records each. The query ran very
> > > slowly. When the smaller table (only 45 records, indexed; two
> > > fields--one varchar(10), the other varchar(35); joined on the smaller
> > > field with fields of the same kind) was dropped from the join, the
> > > query was lightning fast. Nothing else changed in the query! When we
> > > dropped and recreated the smaller table and made it part of the join
> > > again, the query was very fast. So dropping the table and recreating
> > > it fixed the problem. The same thing happened in another query with
> > > another table. Since we were now "experienced", we dropped and
> > > recreated that table and the problem fixed itself.
> > >
> > > The question: what do you think was causing such behaviour? Can the
> > > data in Oracle go corrupt? Fragmented? To a degree that a table of
> > > only 45 records could bring a query to its knees? Are there any tools
> > > we can use to check the integrity of the rest of the data? Where
> > > would you start?
> > >
> > > Thanks very much for your help.
> > >
> > > Regards,
> > > Nasir Noor (nnoor_at_cris.com)

--
Joseph R.P. Maloney, CCP,CSP,CDP
MPiR, Inc.
502-451-7404
some witty phrase goes here, I think.

Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

Received on Fri Jun 11 1999 - 08:11:33 CDT

