Re: NULLS and query plan.

From: David Fitzjarrell <>
Date: Sat, 12 May 2012 14:23:30 -0700 (PDT)
Message-ID: <>

If the index in question is a single-column index, it might be beneficial to recreate it as a composite index with either a second, non-null column or a constant value. You might want to take a look at one of my older blog posts covering a similar situation:

In it I create an index on the column of interest (which is nullable) and run some queries against the table; I then recreate the index using the column of interest plus a constant, 0, to ensure the rows with NULL values end up in the index. Using the second index provides an index scan without resorting to a 'where col is not null' condition.
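A minimal sketch of that approach, assuming a hypothetical table t with a nullable column col (the names here are illustrative, not taken from the original post):

```sql
-- A single-column B-tree index does not store rows where col IS NULL,
-- so the optimizer cannot use it to find those rows.
CREATE INDEX t_col_idx ON t (col);

-- Recreating the index with a constant as a second key column gives every
-- row a non-null index key (the literal 0 is never NULL), so rows with
-- NULL in col are now present in the index and can be found by a scan.
DROP INDEX t_col_idx;
CREATE INDEX t_col_idx ON t (col, 0);
```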

Possibly this could help your situation.

David Fitzjarrell

From: Fred Tilly <> To: "" <> Sent: Saturday, May 12, 2012 10:32 AM
Subject: NULLS and query plan.


I am looking at an issue with a particular SQL query which runs fine on other customer sites, but one customer is having a problem with it. On most customer sites, when the query joins to a particular table it uses an index, but on this site it just does a hash join to the table. The major difference is not the volume of data between the sites: the customer with the bad performance has more NULL values in the join column; in fact, nearly half the rows in the table have NULL in the join column.

The table has 2128094 rows, and the join column has 1132709 distinct values and 995385 nulls. 

The table feeding the join is passing in approximately 5000 rows, and it's a one-to-one match, so there would be about 5000 rows coming out of the join.

So would the fact that half the rows in the table have NULL values in the join column lead the optimizer to choose a hash join to the table rather than using an index?
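As a back-of-the-envelope check on the statistics quoted above (this is rough arithmetic, not Oracle's actual costing model): since NULLs are excluded from the distinct-value count, the expected number of matching rows per join key works out to about one, which is consistent with the one-to-one match described.

```python
# Statistics quoted in the post above.
num_rows = 2_128_094      # total rows in the table
num_nulls = 995_385       # rows with NULL in the join column
num_distinct = 1_132_709  # distinct non-NULL values in the join column
probe_rows = 5_000        # approximate rows arriving from the feeding table

# NULLs never satisfy an equality join, so only non-NULL rows can match.
rows_per_key = (num_rows - num_nulls) / num_distinct
est_join_rows = probe_rows * rows_per_key

print(rows_per_key)    # → 1.0
print(est_join_rows)   # → 5000.0
```

On these numbers the per-key estimate alone does not obviously favour a hash join, which suggests other factors (such as the index clustering factor or histogram data on that site) may also be influencing the plan.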



Received on Sat May 12 2012 - 16:23:30 CDT
