Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.misc -> Re: Question and suggestion regarding rownum

Re: Question and suggestion regarding rownum

From: mcstock <mcstockspamplug_at_spamdamenquery.com>
Date: Wed, 19 Nov 2003 22:13:59 -0500
Message-ID: <296dnYAYW5YarCGi4p2dnA@comcast.com>

"Joel Garry" <joel-garry_at_home.com> wrote in message news:91884734.0311191705.75507a8d_at_posting.google.com...
| "ketan Parekh" <ketanparekh_at_sbcglobal.net> wrote in message
news:<dOZsb.55$VX4.0_at_newssvr13.news.prodigy.com>...
| > Lakshmi - but again, with an ORDER BY clause and GROUP BY all this goes for a toss.
| > For a clearer picture, imagine you do a search on google.com and you have 2
| > million hits, and it displays only 10 at a time.
| > But before paging through 2, 3, 4, 5, next - you click on page 4 directly,
| > which should display rows 31 to 40 - this is what I have to accomplish.
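[For reference, the paging requirement described above is usually handled in Oracle with a nested ROWNUM filter. A minimal sketch, assuming a hypothetical search_results table ranked by a relevance column - the table and column names are made up, but the pattern is standard:]

```sql
-- Fetch "page 4" (rows 31 to 40) of an ordered result set.
-- The ORDER BY must sit in the innermost query; ROWNUM is assigned
-- to rows as they are fetched, so it is filtered one level out.
SELECT *
  FROM (SELECT q.*, ROWNUM AS rn
          FROM (SELECT doc_id, title
                  FROM search_results
                 ORDER BY relevance DESC) q
         WHERE ROWNUM <= 40)   -- stop after the last row wanted
 WHERE rn >= 31;               -- discard rows before the first row wanted
```

[Note that `WHERE ROWNUM >= 31` alone would return no rows, because ROWNUM is assigned at fetch time - the first candidate row always gets ROWNUM 1 and is immediately rejected. Hence the two-level nesting.]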
|
| I hate to say it, but google doesn't always work correctly. It lacks
| consistency: you wind up seeing things like the count not matching how
| many things are found, so displaying 31 to 40 may show the same things
| as 41 to 50, and the last page may have nothing at all. Sometimes it
| doesn't find things that it finds at other times. Sometimes things
| will come and go as you page forward and backward. That's not even
| counting new things coming into the possible result set during the
| session. Rather than reinventing the database wheel, they've given
| transaction processing a flat tire. But that's ok, it's just usenet,
| right?
|
| 500K table isn't so big. I routinely use a proprietary generic QBF to
| peruse several million row tables - it may hourglass a couple minutes
| for a non-key inquiry when the system is loaded.

you've always got to consider how many users are doing the same type of query -- a couple of minutes per concurrent user can add up quickly. this type of functionality needs to be approached with efficiency on the front burner.

| Maybe you can use an autonomous transaction: get the first few pages
| quickly, while in the background the full set is being generated.
|

autonomous transactions simply allow a sub-transaction to be committed without affecting the main transaction (which waits for the autonomous transaction to complete). it sounds like you're looking for a way to spawn a separate query process, but the issues remain the same -- the rowset needs to be identified once, and records need to be fetched only as needed (not fetched and thrown away as the user pages forward and backward).

Received on Wed Nov 19 2003 - 21:13:59 CST
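[For reference, the autonomous-transaction behavior described above can be sketched in PL/SQL. The message_log table and procedure name here are hypothetical; the point is the PRAGMA and the independent COMMIT:]

```sql
-- A routine declared AUTONOMOUS_TRANSACTION runs in its own transaction:
-- its COMMIT is independent of the caller, and the caller's uncommitted
-- changes are not committed (or even visible) here.
CREATE OR REPLACE PROCEDURE log_message (p_text IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO message_log (logged_at, text)
  VALUES (SYSDATE, p_text);
  COMMIT;  -- commits only this sub-transaction; the main transaction
           -- resumes afterward, still uncommitted
END log_message;
/
```

[As noted above, this gives you an independently committed unit of work inside one session -- it does not give you a second query process running in parallel.]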
