Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: limiting result set size

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: 2000/04/02
Message-ID: <954660610.26610.1.nnrp-07.9e984b29@news.demon.co.uk>#1/1

One of the ways to work around
your specific problem is to have
an indexed meaningless sequence number
on the rows and a control table with a
single row holding:

    'start_value' 'end_value'
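
A minimal sketch of such a control table (the table and column names here are illustrative assumptions, not from the post):

    -- Illustrative DDL; names are assumptions for the sketch below.
    create table range_control (
        start_value  number not null,
        end_value    number not null
    );

    -- one row, seeded with an empty range
    insert into range_control values (1, 0);
    commit;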

Write a procedure which does:

    loop

        lock the control row with NOWAIT
        if error ORA-00054 (resource busy)
            dbms_lock.sleep(0.01)
        else
            update the start and end of the range
            commit
            exit the loop
        end if

    end loop;
    pass the new start and end of the range back to the caller.
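
The loop above might be sketched in PL/SQL roughly as follows. The table and column names (range_control, start_value, end_value) and the batch-size parameter are assumptions for illustration, not from the original post:

    -- A minimal sketch, assuming a one-row range_control table.
    create or replace procedure claim_range (
        p_batch_size in  number,
        p_start      out number,
        p_end        out number
    ) as
        resource_busy exception;
        pragma exception_init(resource_busy, -54);  -- ORA-00054
    begin
        loop
            begin
                -- lock the single control row without waiting
                select start_value, end_value
                into   p_start, p_end
                from   range_control
                for update nowait;

                -- advance the range and publish it
                p_start := p_end + 1;
                p_end   := p_end + p_batch_size;
                update range_control
                set    start_value = p_start,
                       end_value   = p_end;
                commit;
                exit;                               -- got our range
            exception
                when resource_busy then
                    dbms_lock.sleep(0.01);          -- back off 1/100 s
            end;
        end loop;
    end claim_range;
    /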

Process then does:

    select from work table
    where seq_id between start_value and end_value

Processes will collide on the control table from time to time, but the 1/100-second sleep should keep the cost of each collision low.

To be safe you will also need some process that wakes up from time to time and checks whether dying processes have left old rows behind.
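
One way to sketch that clean-up check (again, table and column names, including the queued_at timestamp, are assumptions for illustration): any work row whose seq_id falls below the last range handed out should already have been deleted by its worker, so if it is still present after a grace period, its worker probably died.

    -- Hypothetical janitor query, assuming a queued_at column on the
    -- work table; rows it returns can be re-queued or handled manually.
    select w.seq_id
    from   work_queue w, range_control c
    where  w.seq_id   <= c.end_value
    and    w.queued_at < sysdate - 1/24;   -- e.g. older than one hour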

--

Jonathan Lewis
Yet another Oracle-related web site:  http://www.jlcomp.demon.co.uk

Brian Beuning wrote in message <38E6C8A5.D50450F9_at_mindspring.com>...

>We want the same thing. We have 25 machines running the same application
>against one Oracle server. Each machine will be doing this:
>1. Begin trans
>2. select N (e.g. 16) rows from "work" table
>3. Apply the work to the affected (other) tables
>4. delete the N rows from the work table
>5. Commit the transaction
>
>We have two reasons for wanting to do this:
>1. The transaction creating the work is real time sensitive, and inserting
>into
> the work table is quicker than "doing the work"
>2. We have a lock contention problem on some rows the "work" applies to
> and by batching N changes to one row like this we get less contention.
> For this to be successful, the "select work" query should get as many
> changes as it can for the same rows. The work table is indexed on the
> column that identifies the rows the work applies against.
>
>We have two problems:
>1. Getting just N "work" rows
>2. Not blocking the other machines when one machines has the first N "work"
> rows locked.
>
>Any tips are greatly appreciated!
>
>Brian Beuning
>
>
>Matt Brown wrote:
>
>> This might be a FAQ, but so far my research hasn't turned up an answer.
>>
>> My situation is this. I plan on having a scalable java application
>> which will process rows from a table. 1..n of these applications could
>> be deployed to distribute the processing.
>>
>> Each application will need to execute a query or call a stored procedure
>> to get a set of rows to work on. What's the best way to accomplish
>> this?
>>
>> Is there a way to limit the number of rows returned in a result set?
>>
>> Is it better to have each application invoke a SELECT...FOR UPDATE query
>> (itself or via a stored procedure) or have them talk to some other
>> application which is front-ending the database and doling out x number of
>> rows from the total result set to each requestor?
>>
>> I'm assuming this isn't an uncommon problem, so there are probably good
>> solutions.
>>
>> Any help appreciated.
>>
>> Matt
>
Received on Sun Apr 02 2000 - 00:00:00 CST

