
Re: Oracle Object Types and Java (migrating from mod_plsql)

From: Galen Boyer <galenboyer_at_hotpop.com>
Date: 22 Feb 2005 21:32:10 -0600
Message-ID: <u4qg3kbj9.fsf@hotpop.com>


On Mon, 21 Feb 2005, connor_mcdonald_at_yahoo.com wrote:
> Thomas Kyte wrote:
>
> [snip]
>
> With regard to:
>
> "the fastest way to paginate through a result set is NOT to pull the
> entire result set to the middle tier, sort it and display it - it is
> to ask the database for the first 10 rows, then rows 10-20 and so on."
>
> we've been having problems with the "too successful" search, ie lots
> of rows in the result, eg, the user enters criteria like (say) "person's
> age < 100 years". Now, when you pump that through the typical pagination
> query:
>
> select * from
> ( select * from ...
> where age < 100
> order by some_sort_key )
> where rownum < 10, 20, 30, 40, etc...
>
> you get killed on performance, all in the sorting (even with the SORT
> STOPKEY optimization). We've been canvassing an alternative solution
> (still in the database) which is:
>
> open cursor for
> select *, some_sort_key
> from ...
> where <criteria>
> and rownum <= 200
>
> fetch bulk collect 200
>
> if rows fetched = 200, then
> "warning - give me some decent criteria"
> else
> bulk bind insert into a gtt
> now do std pagination queries on the gtt
> end if;
>
> now, all that is straightforward enough, but the next phase is to allow
> the user to override the warning, and continue on up to a higher
> ceiling (say 2,000 rows). So, now we're at
>
> open cursor for
> select *, some_sort_key
> from ...
> where <criteria>
> and rownum <= 2000
>
> fetch bulk collect 2000;
>
> if rows fetched > 200, then
> show warning
> if warning_overridden then
> goto <keep going>
> end if;
> else
> <keep going>
> bulk bind insert into a gtt
> now do std pagination queries on the gtt
> end if;
>
> Threats:
> - lots of users bulk collecting 2000 rows could be a lot of memory
> - how long do we let the cursor stay open (ie, waiting for the user to
> say 'keep going' or 'cancel')?
>
> but can anyone suggest a better option? I played a little with the
> estimated execution time with resource manager, but it's nowhere near
> accurate enough. I also tried using profiles to cancel the queries
> after 'n' seconds, but in our case, there's little correlation between
> run time and rows returned.
>
> Thoughts?
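
For reference, I'm reading "std pagination queries on the gtt" as the usual
two-level ROWNUM pattern, something along these lines (just a sketch, and the
table, column, and bind names here are made up):

    select *
      from ( select a.*, rownum rnum
               from ( select *
                        from results_gtt
                       order by some_sort_key ) a
              where rownum <= :max_row )
     where rnum >= :min_row;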

The most performant way I could come up with was to preload a table on the
first page hit by running the exact query, but returning only the ids and
rownum, and storing those. Then the pagination queries joined that table to
the base tables where rownbr was between the start and end rows.
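
Something like this is what I mean -- only a sketch, and all the table,
column, and bind names are made up:

    -- on the first page hit, run the real query once but keep only the
    -- ids and their ordered row numbers
    insert into search_results (session_id, rownbr, person_id)
    select :session_id, rownum, person_id
      from ( select person_id
               from persons
              where age < 100
              order by some_sort_key );

    -- each page is then a cheap join on the stored row numbers
    select p.*
      from search_results sr, persons p
     where sr.session_id = :session_id
       and sr.rownbr between :start_row and :end_row
       and p.person_id = sr.person_id
     order by sr.rownbr;

That way the big sort only gets paid once, on the first page hit; after that
every page is just an index range scan on the results table plus a join back
to the base tables.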

The problem with this is that the underlying data can change during pagination but ...

-- 
Galen deForest Boyer
Received on Tue Feb 22 2005 - 21:32:10 CST
