Re: ORA-00600: [xplSetRws:1] and a query with large number of nested ORs

From: Stefan Koehler <contact_at_soocs.de>
Date: Thu, 26 Mar 2015 12:06:45 +0100 (CET)
Message-ID: <997471891.236593.1427368005753.JavaMail.open-xchange_at_app09.ox.hosteurope.de>



Hi Jeff,

> is it reasonable to expect Oracle to be able to handle this load (after fixing the bug)? The queries could have a varying number of OR subqueries -
> anywhere from a few to hundreds, maybe up to a thousand.

This cannot be answered, as we don't know anything about your system, the varray content, or the frequency. However, be aware of the optimizer limitations with table functions: Oracle uses hard-coded default cardinality values if you are not using some (advanced) method to supply the correct cardinality. It is explained very well in this article (section "the extensible optimiser, table functions and variable in-lists"): http://www.oracle-developer.net/display.php?id=427
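For illustration, a minimal sketch of one such method, the (undocumented) CARDINALITY hint on the collection. The hint itself is real, but the bind name and row count below are just placeholders:

-- Without a hint the optimizer assumes a block-size based default
-- (often 8168 rows with an 8 KB block size) for a TABLE() collection,
-- which can push it towards bad join orders. The hint supplies the
-- expected number of elements instead.
SELECT /*+ CARDINALITY(t 10) */ t.column_value
FROM TABLE(:b1) t;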

So I would be more concerned about bad execution plans in your case, as the WHERE clause can become very large and complex.

> How does Oracle handle the VARRAY in terms of allocating memory? What happens if we have hundreds of these TABLE(:) variables being instantiated per
> query and a high query volume rate?

Bind variables are stored in heap 0 of the child cursor (KGLHD -> KGLOB -> heap 0 -> bind vars). So you need a corresponding amount of memory in the shared pool if you have a large WHERE clause with huge bindings. I am not quite sure how Java itself handles the varray via JDBC, as there are different ways to implement/handle them, but I would expect a similar amount of memory for handling the array values.
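If you want to get a rough feeling for that footprint, a simple sketch (the columns come from V$SQL; the SQL_ID is just a placeholder you would substitute):

-- sharable / persistent / runtime memory of the child cursors in the shared pool
SELECT sql_id, child_number, sharable_mem, persistent_mem, runtime_mem
FROM v$sql
WHERE sql_id = '&your_sql_id';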

Maybe in your case it would be better to handle these kinds of queries via temporary table sub-selects (using an identifier for each different subquery).
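A minimal sketch of what I mean (the table and column names are made up, and the application would have to fill the table per session/transaction before running the query):

CREATE GLOBAL TEMPORARY TABLE filter_values (
  set_id NUMBER,        -- identifier for each different subquery
  val    VARCHAR2(255)
) ON COMMIT DELETE ROWS;

-- the query fragment would then reference the sets instead of TABLE(:n), e.g.
OR (RCODE NOT IN (SELECT val FROM filter_values WHERE set_id = 4)
 OR (RCODE IN (SELECT val FROM filter_values WHERE set_id = 5)
     AND PID IN (SELECT val FROM filter_values WHERE set_id = 6)))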

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: _at_OracleSK

> Jeff Thomas <dbmangler_at_gmail.com> wrote on 25 March 2015 at 21:25:
>
> Yesterday we had a Java-based dynamic SQL that triggered the following:
>
> ORA-00600: internal error code, arguments: [xplSetRws:1], [0x43D1F3930]
>
> After that incident - we started experiencing shared memory issues across a 4-node cluster.
> Killing the query stopped the shared memory problems, but we continued to experience performance
> degradation until we bounced the instances.
>
> The problematic SQL was one recently redesigned to use bind variables and just put into production. For the
> SQL that caused the ORA-600, there were over 300 of the following nested "OR" subquery fragments,
> and over 1000 of these TABLE(:) bind variables.
>
> OR (START_DATE BETWEEN to_date('20150324', 'yyyymmdd') AND to_date('20180131', 'yyyymmdd')
> AND ((RCODE NOT IN(SELECT COLUMN_VALUE FROM TABLE(:4 ) ) )
> OR (RCODE IN (SELECT COLUMN_VALUE FROM TABLE(:5 ) )
> AND PID IN (SELECT COLUMN_VALUE FROM TABLE(:6 ) )))
>
> Checking further - I found the TABLE(:) bind was instantiated with the following type:
>
> TYPE V_VARCHAR_ARR_TYP IS VARRAY(100000) OF VARCHAR2(255);
>
> The following are rather basic questions, but I was wondering if other DBAs had seen similar code fragments and
> could provide some guidance, as this is a new scenario for our DBA team.
>
> a) I know there is a bug associated with the ORA-600, however - if we were to start seeing thousands of these queries
> every hour, is it reasonable to expect Oracle to be able to handle this load (after fixing the bug)? The queries
> could have a varying number of OR subqueries - anywhere from a few to hundreds, maybe up to a thousand.
>
> b) How does Oracle handle the VARRAY in terms of allocating memory? What happens if we have hundreds
> of these TABLE(:) variables being instantiated per query and a high query volume rate?
>
> The problem functionality has been disabled for now - wondered if there were more elegant ways to handle this
> type of situation.

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Mar 26 2015 - 12:06:45 CET