
Re: bad performing query when joining long table to temp table

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 28 Nov 2004 10:22:49 +0000 (UTC)
Message-ID: <coc8tp$29h$1@titan.btinternet.com>

Mixing global temporary tables (GTTs) with permanent tables usually causes the CBO some grief: it has no information about the number of rows in the GTT, and therefore guesses (badly).

Even if you run

    analyze table ..
or

    dbms_stats.gather_table_stats()

you don't get stats on the temporary table.
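A quick way to see this for yourself (the table name here is hypothetical) is to analyze the GTT and then query the dictionary - based on the behaviour described above, the stats columns stay empty:

    analyze table temptable compute statistics;

    select num_rows, blocks, avg_row_len
    from   user_tables
    where  table_name = 'TEMPTABLE';

    -- num_rows, blocks and avg_row_len remain null for the GTT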

You'll notice that the CARDINALITY figure on the real table (the INDEX FULL SCAN of pk_c) is just ONE - which allows Oracle to decide that an indexed access path would work well.

With the GTT, the cardinality is 2K - so Oracle has probably worked out that the cost of performing the nested loop 2000 times exceeds the cost of doing the hash join.

    (Note - the cost of the one-row NL is 6, so the cost of forcing the CBO to do the NL with 2,000 rows will probably be about 12,000 - which is far more than the 6,389 quoted for the hash join.)
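To check that arithmetic, one (hypothetical) test is to hint the original query into the nested-loop plan and compare the reported cost against the hash join's 6,389:

    select /*+ ordered use_nl(a b) */
           tmp.id, a.id, b.id
    from   temptable tmp, tablea a, tableb b
    where  tmp.id = a.id1
    and    a.id2  = b.id
    and    b.field1 = 'a';

The ordered and use_nl hints exist in 8i; this is just a sketch to make the optimizer show you the cost of the path it rejected.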

For 8i, one of the work-arounds - which may not be viable - is to use the procedures

    dbms_stats.get_table_stats()
    dbms_stats.set_table_stats()
to copy REPRESENTATIVE figures from a real table to the GTT, so that the numbers end up on the dictionary definition, and get used every time someone uses the GTT. (There are similar procedures for getting and setting index stats).
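As a sketch (table names taken from the example below - pick a permanent table whose figures are genuinely representative of the GTT's typical contents), copying stats from a real table onto the GTT might look like this:

    declare
        m_numrows    number;
        m_numblks    number;
        m_avgrlen    number;
    begin
        -- read the stats from a representative permanent table ...
        dbms_stats.get_table_stats(
            ownname => user,      tabname => 'TABLEA',
            numrows => m_numrows, numblks => m_numblks, avgrlen => m_avgrlen
        );
        -- ... and stamp them onto the GTT's dictionary definition,
        -- where they will be used by everyone who queries the GTT
        dbms_stats.set_table_stats(
            ownname => user,      tabname => 'TEMPTABLE',
            numrows => m_numrows, numblks => m_numblks, avgrlen => m_avgrlen
        );
    end;
    /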

-- 
Regards

Jonathan Lewis

http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ

http://www.jlcomp.demon.co.uk/seminar.html
Optimising Oracle Seminar - schedule updated Sept 19th

"Vincent" <v_c_at_yahoo.com> wrote in message 
news:23edd8ed.0411251744.7815d783_at_posting.google.com...

> Hello Everyone,
> Maybe you can help me with this. This is Oracle 8.1.4
>
> I use temp tables to filter data in queries. This is fine for tables
> that are several thousand rows long.
>
> However, when I use temp tables to filter down tables that are
> hundreds of thousands of rows long, it takes a long time and produces
> a bad plan. This is true even if the temp table is empty. If I swap
> the temp table for a real empty table, I get the no-row result very
> quickly.
>
> TABLE DESCRIPTION:
> tmp is the empty global temporary table (on commit delete rows)
> tmp has a compound primary key leading with id
> a is 357k rows
> b is 502k rows
>
> QUERY:
> SELECT
> tmp.id
> ,a.id
> ,b.id
> FROM
> temptable tmp
> ,tablea a
> ,tableb b
> WHERE
> tmp.id = a.id1
> AND a.id2 = b.id
> AND b.field1 = 'a'
>
> PLAN:
> Operation                                 Object Name    Rows  Bytes  Cost
> SELECT STATEMENT Optimizer Mode=CHOOSE                    4 M         6389
>   HASH JOIN                                               4 M  131 M  6389
>     INDEX FAST FULL SCAN                  tmp             2 K   25 K     1
>     MERGE JOIN                                          219 K    3 M  4123
>       SORT JOIN                                         357 K    3 M
>         TABLE ACCESS FULL                 a             357 K    3 M    83
>       SORT JOIN                                         167 K    1 M  3486
>         TABLE ACCESS FULL                 b             167 K    1 M   554
>
> Now SWAP the TMP with a real empty table
>
> TABLE DESCRIPTION:
> c is 0 rows
> a is 357k rows
> b is 502k rows
> idx_a is nonunique index on id1
> pk_c is primary key on c.id
> pk_b is primary key on b.id
>
> QUERY:
> SELECT
> c.id
> ,a.id
> ,b.id
> FROM
> tablec c
> ,tablea a
> ,tableb b
> WHERE
> c.id = a.id1
> AND a.id2 = b.id
> AND b.field1 = 'a'
>
> PLAN:
> Operation                                 Object Name    Rows  Bytes  Cost
> SELECT STATEMENT Optimizer Mode=CHOOSE                     20            6
>   NESTED LOOPS                                             20    620     6
>     NESTED LOOPS                                           32    736     2
>       INDEX FULL SCAN                     pk_c              1     13
>       TABLE ACCESS BY INDEX ROWID         a             357 K    3 M     1
>         INDEX RANGE SCAN                  idx_a         357 K
>     TABLE ACCESS BY INDEX ROWID           b             167 K    1 M     1
>       INDEX UNIQUE SCAN                   pk_b          167 K
>
> Thanks for any responses
> Vincent
Received on Sun Nov 28 2004 - 04:22:49 CST
