[Q/H]: Large cursor selects and rollbacks

From: Neil Greene <neil_at_kynug.org>
Date: Wed, 9 Mar 1994 16:31:16 GMT
Message-ID: <1994Mar9.163116.16247_at_KYnug.org>


I have a cursor that may need to work on anywhere from 13,000 to 30,000 records at one time. Normal use of the system involves only small transactions, so there has been no need for large rollback segments. Our system has five rollback segments, one of which is defined as ROLL_BIG just in case (a few megabytes in size).
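
For reference, ROLL_BIG was created with something along these lines (the tablespace name and storage figures here are placeholders rather than our exact values):

    CREATE ROLLBACK SEGMENT roll_big
        TABLESPACE rbs                      -- placeholder tablespace name
        STORAGE (INITIAL    512K
                 NEXT       512K
                 MINEXTENTS 2               -- rollback segments need at least 2
                 MAXEXTENTS 121);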

The cursor fetches a record from a view and then performs the appropriate table inserts based on the values fetched. Each record results in at least one insert and possibly two, so a cursor working on 30,000 records may generate up to 60,000 inserts. A commit is issued after each insert to update the tables.
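
In outline, the loop looks something like the sketch below; the view, table, and column names are simplified stand-ins for the real ones:

    DECLARE
        CURSOR c_load IS
            SELECT key_val, type_flag, amount   -- stand-in column names
            FROM   load_view;                   -- the view being processed
    BEGIN
        FOR rec IN c_load LOOP
            -- at least one insert per fetched record
            INSERT INTO target_main (key_val, amount)
            VALUES (rec.key_val, rec.amount);
            COMMIT;

            -- possibly a second insert, depending on the fetched values
            IF rec.type_flag = 'X' THEN
                INSERT INTO target_detail (key_val, amount)
                VALUES (rec.key_val, rec.amount);
                COMMIT;
            END IF;
        END LOOP;
    END;
    /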

How will this cursor perform if I overextend my rollback segments? Am I extending my rollback segments by committing on each record? I do not necessarily want to create a rollback segment large enough to hold all 60,000 inserts, but things clearly do not work properly on large selects: the cursor runs for a few thousand records and then appears to simply lose its connection to the database, the system load average skyrockets, and everything stops working.
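
Is watching V$ROLLSTAT while the load runs the right way to see whether the segments are extending? I have been looking at something like the query below (using the standard V$ROLLSTAT and V$ROLLNAME views, as I understand them), but I am not sure how to read the results:

    SELECT n.name, s.extents, s.rssize, s.wraps, s.extends, s.waits
    FROM   v$rollstat s, v$rollname n
    WHERE  s.usn = n.usn;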

How can I fix this? I would really rather not break the loads up into smaller chunks, and would like this to work with large selects as well. Are there any startup parameters I can adjust? What about my rollback segments?

-- 
Neil Greene
benchMark Developments, Inc. [NeXT VAR]
2040 Regency Road, Suite C Lexington, KY 40503
Phone: 606-231-6599 / Fax: 606-254-4864
