RE: NULLS and query plan.

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Sat, 12 May 2012 18:39:00 -0400
Message-ID: <00f501cd3090$0751aef0$15f50cd0$_at_rsiz.com>



Of course the single-column index on a column containing NULLs (as you explain in your blog and as I've explained in "Managing Transactions with Disappearing Indexes") is smaller, because the NULL rows have no entries in it, and it CAN be used if you simply include an IS NOT NULL predicate. If the column previously had few NULL values, it *may* be beneficial to rebuild the index to shrink it.
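
For illustration, a minimal sketch of that point (the table and column names here are hypothetical, not taken from the actual query):

create index big_table_join_col_ix on big_table (join_col);
-- A single-column B-tree index has no entries for rows where join_col is
-- NULL, which is why it is smaller than the table's row count would suggest.

select join_col, count(*)
  from big_table
 where join_col is not null    -- without this the NULL group would be needed,
 group by join_col;            -- and the index alone could not answer the query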

However, I think you are probing in the other direction: you're getting an equijoin key value from the table producing the 5000 rows, and that probe is what potentially uses the index (using it on one system and not on the other). Without seeing the details of the query and the statistics I'm still confused. It could, for example, just be that the clustering factor is very different on the two systems.
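
For what it's worth, one way to compare what the optimizer is seeing on the two sites is to pull the column and index statistics on each; a rough sketch (owner, table and column names are placeholders):

select num_distinct, num_nulls, density, histogram, last_analyzed
  from dba_tab_col_statistics
 where owner = 'APP'
   and table_name = 'BIG_TABLE'
   and column_name = 'JOIN_COL';

select index_name, clustering_factor, leaf_blocks, num_rows, last_analyzed
  from dba_indexes
 where owner = 'APP'
   and table_name = 'BIG_TABLE';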

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Fitzjarrell
Sent: Saturday, May 12, 2012 5:24 PM
To: ftilly_at_btinternet.com; oracle-l_at_freelists.org
Subject: Re: NULLS and query plan.

If the index in question is a single-column index, it might be beneficial to recreate it with either a second, NOT NULL column or a constant value. You might want to take a look at one of my older blog posts regarding a similar situation:
 

http://oratips-ddf.blogspot.com/2008/04/tale-of-two-indexes.html
 

In it I create an index on the column of interest (which is nullable) and run some queries against the table; I then recreate the index using the column of interest and a constant, 0, to ensure the NULL values end up in the index. Using the second index provides an index scan without resorting to a 'where col is not null' condition.
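
A minimal sketch of the idea, using hypothetical table and column names rather than the ones from the post:

create index big_table_join_col_ix2 on big_table (join_col, 0);
-- The trailing constant 0 is never NULL, so every row, including the rows
-- where join_col is NULL, gets an entry in this composite index.

select count(*)
  from big_table
 where join_col is null;   -- can now be satisfied by a scan of the new index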
 

Possibly this could help your situation.

David Fitzjarrell



From: Fred Tilly <ftilly_at_btinternet.com>
To: "oracle-l_at_freelists.org" <oracle-l_at_freelists.org>
Sent: Saturday, May 12, 2012 10:32 AM
Subject: NULLS and query plan.

All,

I'm looking at an issue with a particular SQL query which runs fine on other customer sites, but one customer is having a problem with it. On most customer sites, when it joins to a particular table it uses an index, but on this site it just does a hash join to the table. The major difference between the sites is not the volume of data: the customer with the bad performance has more NULL values in the join column; in fact nearly half the rows in the table have NULLs in the join column.

The table has 2,128,094 rows, and the join column has 1,132,709 distinct values and 995,385 NULLs.

The table feeding the join passes in approximately 5,000 rows, and it's a one-to-one match, so there would be about 5,000 rows coming out of the join.

So would the fact that half the rows in the table have NULL values in the join column lead the optimizer to choose a hash join to the table rather than using an index?

Thanks

Fred



--
http://www.freelists.org/webpage/oracle-l
Received on Sat May 12 2012 - 17:39:00 CDT
