RE: Large table query with cpu throttling

From: Elliott, Patrick <>
Date: Mon, 18 Feb 2008 09:25:09 -0600
Message-ID: <>

A table with a billion records really must be partitioned. Then you should include ranges of partition-key values in your WHERE clause to force partition pruning. Partitioned or not, you should not be issuing the query multiple times if you are displaying 100 rows each time. Issue the query once and then fetch 100 rows at a time. Then you don't have to worry about ROWNUMs or ROWIDs, and you will have the minimum impact on the server.
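A sketch of the open-once, fetch-in-batches approach in PL/SQL, assuming a hypothetical table BIG_TABLE range-partitioned on CREATED_DATE (all names here are illustrative):

```sql
-- Sketch only: BIG_TABLE, ID, PAYLOAD and CREATED_DATE are hypothetical.
DECLARE
  CURSOR c IS
    SELECT id, payload
    FROM   big_table
    WHERE  created_date >= DATE '2008-01-01'
    AND    created_date <  DATE '2008-02-01';  -- range on the partition key => pruning
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  TYPE t_pay IS TABLE OF big_table.payload%TYPE;
  l_ids t_ids;
  l_pay t_pay;
BEGIN
  OPEN c;                                      -- one parse, one execution
  LOOP
    FETCH c BULK COLLECT INTO l_ids, l_pay LIMIT 100;  -- 100 rows per fetch
    EXIT WHEN l_ids.COUNT = 0;
    -- display / process the 100-row batch here
  END LOOP;
  CLOSE c;
END;
/
```

The cursor is opened once, so the server does one consistent scan instead of re-running the query for every page.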


-----Original Message-----
From: [] On Behalf Of Peter Teoh
Sent: Monday, February 18, 2008 12:02 AM
To:
Subject: Large table query with cpu throttling

My requirements:

  1. I have a large table, in the range of 1 billion records.
  2. But the machine is a relatively slow one. Therefore I would not like to clog the machine when querying this large table.

My question:

  1. Suppose I need to extract two fields of ALL the records - but the full table scan can afford to take its time. How do I do a full table scan with CPU throttling (e.g., sleeping or slowing down once in a while) so as to let other jobs continue? (Are any SQL hints available for this?)

Ideally I thought it should not involve any indexes, as those would incur additional lookups and possibly some sorting, because the target is an FTS anyway - is my analysis correct?
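There is no SQL hint for throttling, but one common way to approximate it is to fetch in batches and sleep between batches. A sketch, assuming EXECUTE privilege on DBMS_LOCK (table and column names are hypothetical):

```sql
-- Sketch: pace a full scan by yielding the CPU between batches.
DECLARE
  CURSOR c IS
    SELECT /*+ FULL(t) */ col1, col2 FROM a_large_table t;
  TYPE t_c1 IS TABLE OF a_large_table.col1%TYPE;
  TYPE t_c2 IS TABLE OF a_large_table.col2%TYPE;
  l_c1 t_c1;
  l_c2 t_c2;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_c1, l_c2 LIMIT 1000;  -- one batch
    EXIT WHEN l_c1.COUNT = 0;
    -- write the batch out here
    DBMS_LOCK.SLEEP(0.5);  -- pause half a second so other jobs get the CPU
  END LOOP;
  CLOSE c;
END;
/
```

The sleep only paces the session's CPU use; the cursor stays open, so the scan still completes in a single pass.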

B. Suppose there is no solution to the above; the alternative is to do slicing. My method is to find the min and max of the ROWID, and then do a direct ROWID access:

select * from table where rowid = 'xxxx'.

where xxxx corresponds to an automatically generated ROWID. This statement does not require any temporary sorting, nor does it depend on any prior indexes.

To do this, I did a simple experiment:

select rowid from a_large_table;

and the character range of the rowid seemed to be (in this order):
[A-Za-z0-9+/] and that's all, and a total of 17 characters. This is
verified in Ora10gR2 and Ora9iR2 (both Linux). Is my analysis correct? Have you seen any other documentation with this info?
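Rather than reverse-engineering the text encoding, the pieces of a ROWID can be read directly with the supplied DBMS_ROWID package; a sketch against the same (hypothetical) table:

```sql
-- Decode ROWIDs into their components instead of parsing the
-- base-64-style text form by hand.
SELECT rowid,
       DBMS_ROWID.ROWID_OBJECT(rowid)       AS object_id,
       DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file_no,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block_no,
       DBMS_ROWID.ROWID_ROW_NUMBER(rowid)   AS row_no
FROM   a_large_table
WHERE  rownum <= 5;
```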

Therefore my plan is to construct the SQL with the ROWID following the above format, slowly increasing, so as to directly retrieve the data with minimal performance hit on the server. Is this the best way of slicing the table so as to do a full table scan while still meeting the requirements above?
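A safer variant of this slicing plan builds real ROWID ranges per extent with DBA_EXTENTS and DBMS_ROWID instead of guessing at the character encoding; a sketch, assuming SELECT access to the DBA views (owner and table names are hypothetical):

```sql
-- One low/high ROWID pair per extent of the table; each pair bounds
-- a slice that can be scanned independently.
SELECT DBMS_ROWID.ROWID_CREATE(1, o.data_object_id, e.file_id,
                               e.block_id, 0)                    AS lo_rid,
       DBMS_ROWID.ROWID_CREATE(1, o.data_object_id, e.file_id,
                               e.block_id + e.blocks - 1, 32767) AS hi_rid
FROM   dba_extents e
JOIN   dba_objects o
  ON   o.owner = e.owner
 AND   o.object_name = e.segment_name
WHERE  e.owner = 'SCOTT'
AND    e.segment_name = 'A_LARGE_TABLE';

-- Then, for each slice:
--   SELECT col1, col2 FROM a_large_table
--   WHERE  rowid BETWEEN :lo_rid AND :hi_rid;
```

Because each slice is a contiguous block range, the per-slice query reads only those blocks and no index is involved.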

C. Currently, I have found that "select rowid from a_large_table where rownum < 100 and rowid > 'xxxx'" returns almost immediately, whereas a "select count(*)" can take about 20 minutes or more to return just a number.

This therefore is a third way to get the data - 100 records at a time, similar in design to (B) above. Because the last row's ROWID is always the largest of the 100 rows, I will use that value as the input to the next query to get the next 100 rows, and so on. This way seems not to incur any performance hit at all, but it assumes that the select statement ALWAYS RETURNS THE ROWIDs in increasing order when just rownum and rowid are used in the where clause. Is this assumption reasonable?
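The batching step described above can be sketched as a single parameterized query (column names are hypothetical); note the caveat in the comment, which is exactly the assumption being questioned:

```sql
-- Fetch the next batch of up to 100 rows after the last ROWID seen.
-- CAVEAT: this relies on the scan returning ROWIDs in ascending order,
-- which Oracle does not guarantee without an ORDER BY.
SELECT rowid, col1, col2
FROM   a_large_table
WHERE  rowid > CHARTOROWID(:last_rowid)
AND    rownum <= 100;
```

After each batch, :last_rowid is rebound to the largest ROWID just returned, and the query is issued again for the next 100 rows.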



Received on Mon Feb 18 2008 - 09:25:09 CST
