Re: Retrieving multiple data sets

From: Michael Cunningham <napacunningham_at_gmail.com>
Date: Thu, 29 Jan 2015 11:32:22 -0800
Message-ID: <CAPt39tuZ_96o+KmMeTqpQic_2AkmdBk6mgSpTiWu8nVyQYLPhg_at_mail.gmail.com>



You may also want to look at a REF CURSOR. I've used this in the past to return result sets, and it was the fastest approach I could get. The test was over a 28.8k modem (does that date me?), with the connection to the database going through that modem across the country.

I'm going to assume (hopefully correctly) that there is a web server on the same network as the database. In that case, I think your slowest step will be sending the data from the web server to the user's browser. If I'm right about that, and that is the bottleneck, then I would go with the simpler method, as mentioned, of using straight SELECT statements in the web server code.
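
Just to make that concrete, here is a rough sketch of the kind of straight SELECT I have in mind for your alarm case (uncleared alarms plus anything updated in the last five minutes). All table and column names here are invented for illustration; substitute the real alarm schema.

  -- Hypothetical tables/columns, for illustration only.
  SELECT a.alarm_id, a.severity, a.status, a.last_update
  FROM   current_alarms a
  WHERE  a.status = 'UNCLEARED'
     OR  a.last_update >= SYSDATE - INTERVAL '5' MINUTE
  UNION ALL
  -- "Morphing" rows from a second table into the same column shape.
  SELECT h.alarm_id, h.severity, h.status, h.last_update
  FROM   alarm_history h
  WHERE  h.last_update >= SYSDATE - INTERVAL '5' MINUTE;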

Just in case, Tim Hall gives great examples of using REF CURSORs from different types of code (PL/SQL, Java, ADO):
http://oracle-base.com/articles/misc/using-ref-cursors-to-return-recordsets.php
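
And a minimal sketch of the stored-procedure-plus-REF-CURSOR version, along the same lines as that article. The package and table names are made up here; the real query would be whatever your alarm retrieval ends up being.

  -- Hypothetical package and table names, for illustration only.
  CREATE OR REPLACE PACKAGE alarm_api AS
    PROCEDURE get_recent_alarms (p_alarms OUT SYS_REFCURSOR);
  END alarm_api;
  /

  CREATE OR REPLACE PACKAGE BODY alarm_api AS
    PROCEDURE get_recent_alarms (p_alarms OUT SYS_REFCURSOR) IS
    BEGIN
      -- The cursor is opened here and fetched by the caller
      -- (PL/SQL, JDBC, ODP.NET, etc.); no extra copy of the data is made.
      OPEN p_alarms FOR
        SELECT alarm_id, severity, status, last_update
        FROM   current_alarms
        WHERE  status = 'UNCLEARED'
           OR  last_update >= SYSDATE - INTERVAL '5' MINUTE;
    END get_recent_alarms;
  END alarm_api;
  /

The web server code just binds the OUT cursor and fetches rows exactly as it would for a plain SELECT, so the network cost should be roughly the same either way.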

Michael

On Thu, Jan 29, 2015 at 10:21 AM, <avramil_at_avramil.cnc.net> wrote:

> Hi folks,
>
>
>
> A new project has come our way that will require a decent amount of SQL
> testing and tuning. We thought we would ask the community for some
> suggestions on best approaches/solutions.
>
> Among other things, it’s looking like we will have to send result sets of
> several thousand records every 30 seconds to user browsers. Those few
> thousand records will be collected from multiple tables via UNIONs and
> JOINs.
>
>
>
> We are considering several approaches:
>
>
>
> Straight SELECT statements
>
> Using stored procedures to return result sets
>
> PIPELINED functions
>
> Bulk operations
>
> A "regular view / views"
>
> Materialized views
>
> Possibly compressing the result set for network transmission (if this
> isn’t already done automatically)
>
> Other ideas…
>
>
>
> Further background: It’s an 11.2.0.3 database with over 400 million
> records spread out over several schemas. It receives 3.5 million updates
> per day. Most records contain alarm data. The primary goal will be to
> retrieve uncleared alarms, plus alarms that were updated in the last 5
> minutes. Right now, there are 41,000 uncleared alarms, which are part of
> the 240,000 alarms that were updated in the last five minutes. We would be
> returning subsets of this data.
>
>
>
> The developers (and we) are looking for a straightforward way to retrieve
> everything in one shot – though that can mean, on our side, morphing
> records from one table to look like those in another.
>
> Thanks,
> Lou Avrami
>
>

-- 
Michael Cunningham

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Jan 29 2015 - 20:32:22 CET
