Committing Large Amounts of Data [message #186596] Tue, 08 August 2006 12:19
tessdeveloper
Messages: 17
Registered: August 2006
Location: Ireland
Junior Member
Hi there,

I currently have a situation where one pl/sql function (a wrapper function) calls at least five others, which perform a lot of calculations based on the information that is on the database.

When the wrapper function completes, the user is prompted through the Oracle Forms side to commit or not. However, it does not commit properly even if the user says yes. If you then run the wrapper function a second time and agree to commit the changes, it works perfectly.

My understanding is that the amount of data may be too large and that this is why it does not commit properly the first time. Why it commits the second time, I really don't know.

Any help would be much appreciated.

If ye need any more info, just let me know.

thanks
Re: Committing Large Amounts of Data [message #186617 is a reply to message #186596] Tue, 08 August 2006 14:59
andrew again
Messages: 2577
Registered: March 2000
Senior Member
Sounds 99% like a bug in your code. Try adding an autonomous-transaction logging call at the point in your code where you do the commit. Capture the time, IP address, session ID, or whatever else you think useful.

CREATE OR REPLACE PROCEDURE put_debug_sql (i_seq IN INTEGER, i_msg IN VARCHAR2)
IS
   -- Commits independently of the calling transaction, so the log row
   -- survives even if the main transaction never commits.
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- Assumes a DEBUG_SQL table with (seq, log_date, msg) columns exists.
   INSERT INTO DEBUG_SQL
        VALUES (i_seq, SYSDATE, i_msg);
   COMMIT;
END;
/
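A minimal sketch of how the logger might be called around the commit point. This is illustrative only: the procedure name run_calculations and the messages are assumptions, not taken from the original poster's code.

```sql
-- Hypothetical call site inside the wrapper; names here are placeholders.
BEGIN
   run_calculations;                                  -- the worker functions
   put_debug_sql (1, 'before COMMIT, user confirmed'); -- logged autonomously
   COMMIT;
   put_debug_sql (2, 'after COMMIT');
END;
/
```

If a "before COMMIT" row appears in DEBUG_SQL without the matching "after COMMIT" row, the commit statement was never reached.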


Also try turning on SQL trace and then counting the actual commits in the trace file. The script below is from Oracle Magazine, circa 2000.
#!/usr/bin/ksh
# Script to calculate the number of commits and rollbacks from a trace file
# Takes the trace file name as its only argument
#
# Trace line format: XCTEND rlbk=%d, rd_only=%d
# ----------------------------------------------------------------------------
#  XCTEND      A transaction end marker.
#  rlbk        1 if a rollback was performed, 0 if no rollback (commit).
#  rd_only     1 if the transaction was read only, 0 if changes occurred.
grep XCTEND "$1" > /tmp/read.lst
commit=`grep "rlbk=0, rd_only=0" /tmp/read.lst | wc -l`
trans=`wc -l < /tmp/read.lst`
rollback=`grep "rlbk=1, rd_only=0" /tmp/read.lst | wc -l`
echo " "
echo "Transactions"
echo "~~~~~~~~~~~~"
echo "Transaction Ends  $trans"
echo "Real Commits      $commit"
echo "Rollbacks         $rollback"



You can also try "LOCK TABLE xyz IN EXCLUSIVE MODE" from another session to see whether there are any outstanding (uncommitted) deletes or updates against the table.
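Something like this from a second session, for example. NOWAIT is added here so the statement returns immediately instead of hanging; xyz is a placeholder for the actual table name.

```sql
-- Fails immediately with ORA-00054 if another session still holds
-- uncommitted changes (row locks) on the table.
LOCK TABLE xyz IN EXCLUSIVE MODE NOWAIT;

-- If the lock succeeded, release it again without changing anything.
ROLLBACK;
```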
Re: Committing Large Amounts of Data [message #186797 is a reply to message #186596] Wed, 09 August 2006 11:01
JRowbottom
Messages: 5933
Registered: June 2006
Location: Sunny North Yorkshire, ho...
Senior Member
The volume of data changed is irrelevant to a commit. By the time you issue a commit, you've already updated all the blocks with the new data; that's why rollbacks take so long, because you've got to go and undo all the changes that have taken place in the transaction.
All a commit needs to do (at a simple level) is mark the transaction as finished and free up the rollback space.

I'm with Andrew again on this: it sounds 99% certain to be a code bug.
Re: Committing Large Amounts of Data [message #186800 is a reply to message #186596] Wed, 09 August 2006 11:34
tessdeveloper
Messages: 17
Registered: August 2006
Location: Ireland
Junior Member
Thanks very much for that.

Both yourself and Andrew again were right: it actually was a bug in the code, so sorry if I was wasting yer time.

Ye did help me a lot though - I was able to narrow down where the error was occurring by inserting into a temp table at the point where the commit was called.

so, thanks again lads.