Home -> Community -> Usenet -> c.d.o.server -> Re: Can joins be less efficient?
"NoSpam" <NoSpam_at_NoSpam.com> wrote:
> I'm working on code written by somebody else. It runs a query against the
> database to obtain a resultset, and then, for each row of that resultset,
> it runs a second query, using the key field from the first resultset as
> the lookup key for the second. It is obvious that the two queries could be
> combined with a simple join, but this person insisted that there are times
> when cascaded reading of the resultsets can be faster than a simple join,
> though he couldn't give me an example. I just wonder whether that can
> really be true, and under what circumstances reading two resultsets can be
> faster than a single resultset with a join when a common key field exists
> in both tables.
You don't mention which database backend you are referring to. Nowadays, any relational database worth its salt uses an efficient cost-based optimiser, which will attempt to process joins in the most efficient manner possible. Incompetent data modelling with poor index design may throw it off sometimes, but in those situations a good optimiser allows the developer some latitude with override options.

Also consider that if this row-at-a-time selection is happening between a client process and the database server, the round-trip overhead involved is proportional to the number of rows being joined.

The simplest way to test, of course, is to recode the query as a join and run some comparative tests using the same datasets.
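To make the comparison concrete, here is a minimal sketch of the two approaches using in-memory SQLite (the table and column names are invented for illustration, not taken from the original poster's code). The loop reproduces the row-at-a-time "query per row" pattern described in the question; the second query is the equivalent single join. Both return the same rows, but the loop costs one round trip per parent row, while the join is one statement whose execution plan the optimiser chooses.

```python
import sqlite3

# Hypothetical schema: parent table "orders", child table "items",
# joined on the common key field "order_id".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items  (order_id INTEGER, product TEXT);
    INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO items  VALUES (1, 'pen'), (1, 'ink'), (2, 'pad');
""")

# Row-at-a-time pattern: one extra query for every row of the first
# resultset ("N+1 queries").
row_at_a_time = []
for order_id, customer in conn.execute(
        "SELECT order_id, customer FROM orders"):
    for (product,) in conn.execute(
            "SELECT product FROM items WHERE order_id = ?", (order_id,)):
        row_at_a_time.append((customer, product))

# Equivalent single join: one statement, one round trip.
joined = conn.execute("""
    SELECT o.customer, i.product
    FROM orders o JOIN items i ON i.order_id = o.order_id
""").fetchall()

# Same rows either way; only the number of round trips differs.
print(sorted(row_at_a_time) == sorted(joined))
```

With an in-process database like SQLite the per-query overhead is small; over a network connection to a server, each iteration of the loop adds a full round trip, which is the proportional cost noted above.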
-am

Received on Fri Jan 25 2002 - 03:37:08 CST