Re: GLOBAL TEMPORARY table - serious join performance problem
From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Thu, 23 Sep 2004 00:50:44 +0000 (UTC)
Message-ID: <cit6l4$8le$1_at_titan.btinternet.com>
Read about dynamic sampling.
Use a level where all tables without
statistics are sampled at run time, and
your problem will go away.
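
For example (a sketch only - table names and the join column here are hypothetical, and the exact levels available depend on your version, 9iR2 onwards): level 2 dynamic sampling tells the optimizer to sample every table that has no statistics, which covers the global temporary table.

```sql
-- Session-wide: sample all unanalysed tables at parse time
alter session set optimizer_dynamic_sampling = 2;

-- Or per-query, via the hint on the GTT's alias
-- (gtt and big_table are made-up names for illustration)
select /*+ dynamic_sampling(g 2) */ b.*
from   gtt g, big_table b
where  b.join_col = g.join_col;
```

With the single row visible to the sample, the optimizer should switch from the 4000-row guess to the nested-loop/index plan you see with the real table.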
--
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/seminar.html
Optimising Oracle Seminar - schedule updated Sept 19th

"Jim Garrison" <jhg_at_athensgroup.com> wrote in message
news:GfednS5GOq_UaszcRVn-iA_at_giganews.com...

Received on Thu Sep 23 2004 - 02:50:44 CEST
> Scenario:
>
> 1) Create a GLOBAL TEMPORARY table and populate it with
> one (1) row.
>
> 2) Join that table to another with about 1 million rows.
> The join condition selects a few hundred rows.
>
> Performance: 4 seconds, the system is doing a full-table
> scan of the second table, and the Explain Plan output
> indicates that Oracle thinks the first table has 4000
> rows.
>
> Now replace the GLOBAL TEMPORARY table with a real table
> and repeat exactly the same query. This runs in 94 milliseconds
> and the Explain Plan shows the correct number of rows for
> the driving table and an index scan on the second table, as
> I would have expected.
>
> Can anyone suggest a solution that will make the GLOBAL TEMPORARY
> implementation as fast as the real table version?
>
> BTW, why are there two sets of parallel groups under both
> comp.database.oracle and comp.databases.oracle?
>
> Jim Garrison
> jhg_at_athensgroup.com