Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 1 day 1 hour ago

CONNECT_TIME with reset on action

Thu, 2025-09-18 11:19
We have an Oracle Forms / Database application and were asked to limit users' maximum online time to 15 minutes. Following this, we set the profile's CONNECT_TIME to 15, which works well. Is it possible to reset this limit whenever the user performs an action on the frontend? Maybe v$session.seconds_in_wait can help? Thanks in advance, Helmut
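A hedged sketch of an alternative worth considering: CONNECT_TIME measures total elapsed session time and is never reset by activity, whereas IDLE_TIME counts only inactive minutes, so any call from the frontend effectively restarts the clock (the profile name below is illustrative):

<code>
-- IDLE_TIME counts inactivity only; each frontend action resets it,
-- unlike CONNECT_TIME, which measures total elapsed time.
ALTER PROFILE app_users LIMIT
  CONNECT_TIME UNLIMITED
  IDLE_TIME    15;   -- minutes of inactivity before ORA-02396

-- Profile resource limits are enforced only when this is TRUE:
ALTER SYSTEM SET resource_limit = TRUE;
</code>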
Categories: DBA Blogs

PL/SQL package

Thu, 2025-09-18 11:19
Which PL/SQL package is primarily used for interacting with Generative AI services in Oracle Database 23ai? DBMS_AI or DBMS_ML or DBMS_VECTOR_CHAIN or DBMS_GENAI?
Categories: DBA Blogs

Agent OEM 24ai

Thu, 2025-09-18 11:19
Hi there! I'm having some trouble trying to upload data from the agent to the OMS server. Everything seems correct, but:

<code>emctl upload agent
Oracle Enterprise Manager 24ai Release 1
Copyright (c) 1996, 2024 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)</code>

The agent is running, and so is the OMS. telnet from the agent:

<code>telnet myserver.com 4903
Trying 10.8.0.65...
Connected to myserver.com.
Escape character is '^]'.</code>

curl output:

<code>curl -vk myserver.com:4903/empbs/upload
* Trying 10.8.0.65...
* TCP_NODELAY set
* Connected to myserver.com (10.8.0.**) port 4903 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=myserver.com
*  start date: Jul 10 16:57:40 2025 GMT
*  expire date: Jul 9 16:57:40 2035 GMT
*  issuer: O=EnterpriseManager on myserver.com; OU=EnterpriseManager on myserver.com; L=EnterpriseManager on myserver.com; ST=CA; C=US; CN=myserver.com
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> GET /empbs/upload HTTP/1.1
> Host: myserver.com:4903
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< X-ORCL-EM-APIGW-CONSOLIDATED-BY: apigateway
< X-ORCL-EM-APIGW-ERR: 0
< X-ORACLE-DMS-ECID: 21f06e98-2895-465a-b3f3-17be919babe9-00001673
< X-ORCL-EMOA: true
< X-ORCL-EM-APIGW-GUID: 6113d58d-fde2-8b19-f054-c5eee6216d13
< X-ORACLE-DMS-RID: 0
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< Content-Type: text/html;charset=utf-8
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Vary: Accept-Encoding
< Content-Length: 306
<
<HTML><HEAD><TITLE> Http XML File receiver </TITLE></HEAD><BODY bgcolor="#FFFFFF"> <H1>Http XML File receiver</H1> <H2> Http Receiver Servlet active!</h2> <H2> Product version is: 24ai </H2> <H2> Product release version is: 24.1.0.0.0 </H2> <H2> Core release version is: 24.1.0.0.0 </H2> </BODY></HTML>
* Connection #0 to host myserver.com left intact</code>

Thank you very much!
Categories: DBA Blogs

General Question for your thought or Experience with the Outbox Pattern and viable alternatives

Thu, 2025-09-18 11:19
Question: We're evaluating different approaches to implement the Outbox Pattern in Oracle 19c for reliable event publishing in our microservices architecture, but we're concerned about the significant I/O overhead and performance implications. Could you provide guidance on best practices and alternatives?

Current implementation options we're considering:

1. Traditional Polling Approach
Method: Standard outbox table with application polling using SELECT ... FOR UPDATE SKIP LOCKED
Concerns: Constant polling creates unnecessary database load; potential for high latency in event delivery; resource consumption even when no events exist

2. Change Data Capture (CDC) with Debezium
Method: Using Debezium to mine Oracle redo logs for outbox table changes
Concerns: Additional complexity in deployment and monitoring; dependency on external CDC infrastructure; potential log mining overhead on the database

3. Oracle Advanced Queuing (AQ) with Sharded Queues
Method: Leveraging Oracle's native messaging with 19c sharded queue improvements
Concerns: Learning curve for development teams familiar with table-based approaches; potential vendor lock-in; queue management complexity

Primary concerns:
I/O Impact: All approaches seem to significantly increase database I/O. Polling creates constant read operations; CDC requires continuous log scanning; queuing systems add their own storage and processing overhead.
Scalability: As our event volume grows, we're worried about database performance degradation, increased storage requirements for outbox/queue tables, and network bandwidth consumption.

Specific questions:
1. Performance Optimization: What Oracle 19c specific features or configurations can minimize the I/O overhead of outbox pattern implementations?
2. Alternative Architectures: Are there Oracle-native alternatives to the traditional outbox pattern that provide similar transactional guarantees with better performance characteristics?
3. Hybrid Approaches: Would a combination approach (e.g., AQ for high-priority events, polling for batch operations) be advisable?
4. Monitoring and Tuning: What specific metrics should we monitor, and what tuning parameters are most critical for outbox pattern performance in Oracle 19c?
5. Resource Planning: How should we size our database resources (I/O capacity, storage, memory) when implementing outbox patterns at scale?

Environment details: Oracle Database 19c Enterprise Edition; microservices architecture with moderate to high event volume; requirements for exactly-once delivery semantics; mixed OLTP and event-driven workloads.

Any insights on Oracle-specific optimizations, alternative patterns, or architectural recommendations would be greatly appreciated.
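As a hedged sketch of option 3: an AQ enqueue rides in the same transaction as the business change, which is exactly the atomicity guarantee an outbox table provides, without a polled table. All object and column names below are illustrative:

<code>
-- One-time setup: a queue with a RAW payload.
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table        => 'evt_qt',
                                queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE('evt_q', 'evt_qt');
  DBMS_AQADM.START_QUEUE('evt_q');
END;
/

-- In the application transaction:
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
BEGIN
  UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;   -- business change
  DBMS_AQ.ENQUEUE('evt_q', enq_opts, msg_props,
                  UTL_RAW.CAST_TO_RAW('{"order_id":42,"status":"SHIPPED"}'),
                  msg_id);
  COMMIT;   -- row change and event commit (or roll back) together
END;
/
</code>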
Categories: DBA Blogs

Remote Procedure Call (RPC) Dependency Management - where is remote timestamp stored?

Thu, 2025-09-18 11:19
Hi AskTom Team! According to https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/schema-object-dependency.html#GUID-B99E885E-900F-4F29-A188-A617A301FDCE : "Whenever a procedure is compiled, its time stamp is recorded in the data dictionary." Is it possible to see this recorded timestamp? Marcin
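A small sketch, assuming a procedure called MY_PROC (a hypothetical name): the recorded time stamp is exposed in the *_OBJECTS views.

<code>
-- TIMESTAMP (a VARCHAR2 in ISO-like format, distinct from the
-- LAST_DDL_TIME date) is the compile time stamp used for
-- remote-dependency checking.
SELECT object_name, object_type, timestamp, last_ddl_time
FROM   user_objects
WHERE  object_name = 'MY_PROC'
AND    object_type = 'PROCEDURE';
</code>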
Categories: DBA Blogs

Combine consecutive (start_val, end_val) ranges for an id into one record, starting a new record when the sequence breaks

Tue, 2025-09-16 23:17
Combine consecutive (start_val, end_val) ranges for an id into one record, starting a new record whenever the sequence of numbers breaks. https://livesql.oracle.com/next/?compressed_code=H4sIAAAAAAAACo3PwQqCQBCA4fvCvsPcVBjBXTWLboXHCoLOMupCgm2wO%252Fr8oXbo1jKn%252BflgmM4ZYgNM7WiAjeemX3YpYikAAIYeZnLdk5yOVZbgVj2T42amEa6Py6m%252Bf7Ox%252FRrBTq%252FWOCmSoxTbDNYbxzBYfv%252BcmWmcjI8hoggV5gv%252FDwvchcEKVRYmlUZVBtIdqkMY1RlqHUbzEvMqkO6xWN9K0%252Fp2%252FgB1bHIywAEAAA%253D%253D&code_language=PL_SQL&code_format=false <code>create table test_date ( id varchar2(10), start_val NUMBER, end_val number );</code> Input data: <code>insert into test_date values( 'a',1,3); insert into test_date values( 'a',4,6); insert into test_date values( 'a',7,10); insert into test_date values( 'a',12,15); insert into test_date values( 'a',16,19); insert into test_date values( 'a',20,22); insert into test_date values( 'a',35,37); insert into test_date values( 'a',38,40);</code> Output data: 'a', 1, 10 'a', 12, 19 'a', 35, 40
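One way to sketch this classic "gaps and islands" problem (assuming ranges do not overlap): a row starts a new island whenever its start_val is not the previous row's end_val + 1, and a running sum of those breaks labels each island.

<code>
SELECT id, MIN(start_val) AS start_val, MAX(end_val) AS end_val
FROM (
  SELECT t.*,
         SUM(CASE WHEN start_val = LAG(end_val)
                       OVER (PARTITION BY id ORDER BY start_val) + 1
                  THEN 0 ELSE 1 END)
           OVER (PARTITION BY id ORDER BY start_val) AS island
  FROM   test_date t
)
GROUP BY id, island
ORDER BY id, start_val;
</code>

Note that with the posted sample data, (20,22) continues (16,19), so a strict pass yields (a, 12, 22) for the middle island rather than the (a, 12, 19) shown in the expected output.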
Categories: DBA Blogs

Materialized View Staleness Changes to NEEDS_COMPILE after DML

Tue, 2025-09-16 23:17
Why does DML on a materialized view's master table cause the materialized view staleness to change from FRESH to NEEDS_COMPILE? I would have thought it would have changed to STALE. My understanding about NEEDS_COMPILE is that it is supposed to reflect structural changes (DDL), so I must have some gap in my understanding because I am getting NEEDS_COMPILE when only performing DML (not DDL). <code>SQL>column mview_name format a20 SQL>create table T ( id NUMBER 2 ) 3 / Table created. SQL>insert into T ( id ) values ( 1 ) 2 / 1 row created. SQL>create materialized view T_MV 2 ( id 3 ) 4 as select id 5 from T 6 / Materialized view created. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>insert into T ( id ) values ( 2 ) 2 / 1 row created. SQL>commit 2 / Commit complete. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV NEEDS_COMPILE NEEDS_COMPILE COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>begin 2 dbms_snapshot.refresh( list => user || '.T_MV' 3 , purge_option => 2 4 , atomic_refresh => false 5 , out_of_place => true 6 ) 7 ; 8 end; 9 / PL/SQL procedure successfully completed. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 2 SQL>insert into T ( id ) values ( 3 ) 2 / 1 row created. SQL>commit 2 / Commit complete. 
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV NEEDS_COMPILE NEEDS_COMPILE ...
Categories: DBA Blogs

Materialized View USING TRUSTED CONSTRAINTS and staleness=UNKNOWN

Tue, 2025-09-16 23:17
It appears USING TRUSTED CONSTRAINTS causes a materialized view to have staleness=UNKNOWN when it would otherwise be FRESH. Is it possible to have a materialized view with staleness=FRESH when USING TRUSTED CONSTRAINTS? If not, would the optimizer be less likely to consider a materialized view with staleness=UNKNOWN for query rewrite if query_rewrite_integrity=TRUSTED and query_rewrite_enabled=TRUE? How about if query_rewrite_integrity=ENFORCED? <code>SQL>column mview_name format a20 SQL>create table T ( id NUMBER 2 ) 3 / Table created. SQL>insert into T ( id ) values ( 1 ) 2 / 1 row created. SQL>create materialized view T_MV 2 ( id 3 ) 4 -- refresh using trusted constraints 5 as select id 6 from T 7 / Materialized view created. SQL>create materialized view T_trusted_MV 2 ( id 3 ) 4 refresh using trusted constraints 5 as select id 6 from T 7 / Materialized view created. SQL> SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE T_TRUSTED_MV UNKNOWN VALID COMPLETE SQL>begin 2 dbms_snapshot.refresh( list => user || '.T_MV' 3 , purge_option => 2 4 , atomic_refresh => false 5 , out_of_place => true 6 ) 7 ; 8 dbms_snapshot.refresh( list => user || '.T_TRUSTED_MV' 9 , purge_option => 2 10 , atomic_refresh => false 11 , out_of_place => true 12 ) 13 ; 14 end; 15 / PL/SQL procedure successfully completed. 
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE T_TRUSTED_MV UNKNOWN VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>select * from T_trusted_MV order by id 2 / ID ----------- 1 SQL>drop materialized view T_MV 2 / Materialized ...
Categories: DBA Blogs

Returning rows in the order they were inserted

Tue, 2025-09-16 23:17
Hi Tom. Is it possible to return rows from a table in the order that they were inserted? I have an old query, pre 8.1.6, that always returned rows in the order they were inserted. Now, in version 8.1.7 & 9 they are returned in a seemingly random order. Thanks
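Rows in a heap table have no guaranteed order; the pre-8.1.6 behavior was an accident of the access path. The usual fix is to record insertion order explicitly and ORDER BY it. A sketch in modern syntax (the DEFAULT-from-sequence form needs 12c or later; on 8i/9i the column would be populated from the sequence in the INSERT or by a trigger, and all names here are illustrative):

<code>
CREATE SEQUENCE t_seq;

CREATE TABLE t (
  ins_order NUMBER DEFAULT t_seq.NEXTVAL,  -- records arrival order
  payload   VARCHAR2(100)
);

SELECT payload FROM t ORDER BY ins_order;
</code>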
Categories: DBA Blogs

Viewing table partition compression

Tue, 2025-09-16 23:17
Hi Tom, I have a compressed table that also has partitions and subpartitions; you can see the CREATE DDL below. As you can see, each of my objects is either compressed or not compressed, but I can't see this information in ALL_TABLES: COMPRESSION and COMPRESS_FOR come back NULL. Why does the DDL show the table as compressed while ALL_TABLES does not? What am I missing here? <code>CREATE TABLE EFSSALES1 ( sale_id NUMBER, sale_date DATE, region VARCHAR2(10), amount NUMBER ) COMPRESS BASIC PARTITION BY RANGE (sale_date) SUBPARTITION BY HASH (region) ( PARTITION sales_2024 VALUES LESS THAN (TO_DATE('01-JAN-2025','DD-MON-YYYY')) ( SUBPARTITION sp_east1 COMPRESS FOR OLTP, SUBPARTITION sp_west1 NOCOMPRESS ), PARTITION sales_2025 VALUES LESS THAN (TO_DATE('01-JAN-2026','DD-MON-YYYY')) ( SUBPARTITION sp_east2 COMPRESS FOR OLTP, SUBPARTITION sp_west2 COMPRESS FOR OLTP ) ); SELECT compression, COMPRESS_FOR FROM all_tables WHERE table_name = 'EFSSALES1' AND owner='COPYCATLIVE'</code>
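For partitioned tables, the table-level COMPRESSION/COMPRESS_FOR columns in ALL_TABLES are NULL because compression is a segment attribute that can vary per partition. A sketch of where the settings actually appear:

<code>
-- Table-level default for a partitioned table:
SELECT def_compression, def_compress_for
FROM   all_part_tables
WHERE  table_name = 'EFSSALES1' AND owner = 'COPYCATLIVE';

-- Per-partition settings:
SELECT partition_name, compression, compress_for
FROM   all_tab_partitions
WHERE  table_name = 'EFSSALES1' AND table_owner = 'COPYCATLIVE';

-- Per-subpartition settings (where the storage really lives here):
SELECT subpartition_name, compression, compress_for
FROM   all_tab_subpartitions
WHERE  table_name = 'EFSSALES1' AND table_owner = 'COPYCATLIVE';
</code>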
Categories: DBA Blogs

RMAN backup setup

Tue, 2025-09-16 23:17
Hello Tom, I am wondering what is the benefit of using a recovery catalog over a control file? What are some of the decisions one should consider before going either way? Please and thank you
Categories: DBA Blogs

Unable to use nested tables on my system

Tue, 2025-09-16 23:17
I am using Windows 11 64-bit / Oracle 19c. Everything works fine except when I create a nested table and associate it with my table as a column. I can neither query the table nor drop it. When I query it, I get a message that the connection to the database was reset, and when I try to drop it I get: ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound], [78561], [], [], [], [], [], [], [], [], [], [] 00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]" *Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The first argument is the internal message number. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. How do I resolve this?
Categories: DBA Blogs

PL/SQL procedure freezes while fetching data over a dblink

Tue, 2025-09-16 23:17
I encountered inexplicable freezing of a PL/SQL procedure while trying to fetch and insert data into a table; however, the same operation runs successfully when called as a SQL statement from any IDE. I am building an ETL process in a data warehouse database. I need to load data from a remote database's table into a local one, and I have low privileges on the remote database, meaning I can't tune the network or create materialized view logs there. The target table is 'long': the daily data to transfer is about one million rows, 7 columns wide, with mostly numeric data and no large objects. I created a db link and a simple view that represents the remote table in the local database, and granted select privilege to the <code>'DATAW'</code> schema designed for data storage. In <code>'DATAW'</code> I created a procedure for merging data into an identical local table, something like this: <code> create procedure merge_t_data (p_days_offset in number) as begin merge /*+ APPEND */ into t_data_local tgt using ( select col_id, col2, ..., col7 from pdbadmin.v_data_remote where updated_at >= trunc(sysdate) - interval '1' day * p_days_offset ) src on (tgt.col_id = src.col_id) when matched then update set ... when not matched then insert ...; end; </code> When I run the procedure, the session acquires the wait event 'SQL*Net message from dblink' or 'SQL*Net more data from dblink', which stays the same forever. When I check incoming traffic on the server while the procedure is running, I see that it is not used at all. <b>HOWEVER</b>, when I run the same merge operation as a standalone query: <code> merge /*+ APPEND */ into t_data_local tgt using ( select col_id, col2, ..., col7 from pdbadmin.v_data_remote where updated_at >= trunc(sysdate) - interval '1' day * 3 ) src on (tgt.col_id = src.col_id) when matched then update set ... when not matched then insert ...; </code> it runs successfully: I see incoming traffic up to 2 MB, and the query finishes after about 10 minutes.

I am the only user of the database; nobody else works with it for now. I have also tried inserting the data into a temporary table, using fetch cursor bulk collect, and running execute immediate in the procedure; the result was the same: execution freezes. Also worth mentioning: I successfully built the ETL process for a second table that is much wider, with daily data of about 50k rows across more than 20 columns. I did it with a similar merge procedure that runs successfully, unlike the one discussed above. I want to know whether it is possible to get my merge procedure for the 'long' table to run, or what other solutions there might be.
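One hedged workaround sketch for cases like this: separate the network fetch from the merge by landing the remote rows in a local staging table first, so the dblink round-trips happen in a plain INSERT ... SELECT rather than inside a distributed MERGE plan. Column and table names follow the post and are illustrative:

<code>
-- One-time: an empty staging copy of the remote view's shape.
CREATE GLOBAL TEMPORARY TABLE t_data_stage
  ON COMMIT PRESERVE ROWS
  AS SELECT * FROM pdbadmin.v_data_remote WHERE 1 = 0;

-- Step 1: the only statement that touches the dblink.
INSERT INTO t_data_stage
SELECT col_id, col2, col3, col4, col5, col6, col7
FROM   pdbadmin.v_data_remote
WHERE  updated_at >= TRUNC(SYSDATE) - 3;

-- Step 2: a purely local merge.
MERGE INTO t_data_local tgt
USING t_data_stage src
ON (tgt.col_id = src.col_id)
WHEN MATCHED THEN UPDATE SET tgt.col2 = src.col2
WHEN NOT MATCHED THEN INSERT (col_id, col2) VALUES (src.col_id, src.col2);
</code>

(As an aside, the /*+ APPEND */ hint on a MERGE only affects the insert branch; it does not change how the remote rows are fetched.)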
Categories: DBA Blogs

implicitStatementCacheSize appears to leave open cursors

Tue, 2025-09-16 23:17
Hey Tom, I have a Java application that runs thousands of different types of queries against an Oracle database millions of times. I wanted to save the query preparation time by using the oracle.jdbc.implicitStatementCacheSize JDBC property to cache prepared queries. But I easily end up with an error ORA-01000: maximum open cursors exceeded, even when running a single query at a time and reading it to completion. In my mind, an open cursor represents a way to scroll through the results of a query via communication with the database server. I don't immediately see a correlation between a statement and a cursor beyond the idea that the statement yields a cursor when executed. But it appears to be deeper than that in the Oracle JDBC driver. See the following example code that can quickly reproduce what I am experiencing: <code> public class OracleCursorExhaustionThroughStatementCaching { public static void main(String[] args) { try { Class.forName("oracle.jdbc.driver.OracleDriver"); final Properties props = new Properties(); props.put("user", "scott"); props.put("password", "tiger"); // CURSORS on the remote system are set currently to 300. Just run more unique queries than there are cursors to reproduce. 
// This cache should only be holding statement information and cursors should only be used during an individual query props.put("oracle.jdbc.implicitStatementCacheSize", "1500"); // commenting/removing this line allows this test to run without error try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@someserver:1521/mydb", props)) { DatabaseMetaData meta = c.getMetaData(); System.out.println("Product: " + meta.getDatabaseProductName()); System.out.println("Version: " + meta.getDatabaseProductVersion()); System.out.println("Driver: " + meta.getDriverVersion()); System.out.println("JVM Version: " + System.getProperty("java.runtime.version")); for(int i = 0; i < 1000; i++) { // Each statement will be closed after executing try(PreparedStatement ps = c.prepareStatement("select " + i + " from dual")) // for demo a unique query against dual is enough { // Being explicit with closing the result set after execution because logically this is the end of the cursor. (Statement close closes it anyway) try(ResultSet rs = ps.executeQuery()) { while(rs.next()) ; // just read each result set fully, which should bring the cursor to its end } } } } } catch(Exception ex) { ex.printStackTrace(); } } } </code> So on my machine and database this code yields the following: Product: Oracle Version: Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production Version 19.22.0.0.0 Driver: 23.8.0.25.04 JVM Vers...
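The behavior described is expected: each statement held in the implicit statement cache keeps its server-side cursor open, so the cache size must stay below the OPEN_CURSORS limit (a cache of 1500 against a 300-cursor limit reproduces ORA-01000 exactly as shown). A sketch for observing this from another session while the test runs:

<code>
-- Counts open and cached cursors per session. The CURSOR_TYPE column
-- is available in recent releases; type names vary by version.
SELECT sid, cursor_type, COUNT(*) AS cnt
FROM   v$open_cursor
GROUP  BY sid, cursor_type
ORDER  BY cnt DESC;
</code>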
Categories: DBA Blogs

Creating a hierarchy using a parent-child table

Tue, 2025-06-10 22:38
Hello, I'm trying to create a hierarchy over a parent-child table for an analytic view, but I'm struggling. The steps: 1. Create an attribute dimension using the table (id, name, parent_id). I succeeded, but I don't know the correct way to model the parent-child relationship. 2. Create a hierarchy using the attribute dimension from the previous step. I succeeded in creating a regular (level-based) hierarchy, but not one based on parent-child. I don't know how to do it; I tried three chatbots and the Oracle docs with no answer. I'm doing this on Toad 19 with Oracle 19. Hope someone can help me. Thanks!
Categories: DBA Blogs

plsql gateway procedure on Oracle Autonomous Database

Tue, 2025-06-10 22:38
Ask Tom: I would like to migrate a PL/SQL Web Toolkit application from an on-prem database instance to the Oracle Autonomous Database. I see references to the PL/SQL Gateway via ORDS (I think) in documentation and administration screens but I'm struggling to connect the dots. Most documentation and articles I find focus on the ORDS APIs but not a simple HTTP to PL/SQL connection. I have created a User, CAAT_CODE, and can create a procedure through the web-based SQL Developer. I want to execute a procedure something like <code> create or replace procedure gateway_test as begin htp.print( '<html><body>Hello, World.</body></html>' ); end; / </code> with a URL that looks like https://g3958a9838612f9-caat.adb.us-chicago-1.oraclecloudapps.com/ords/caat_code/gateway_test So far, all I get is the HTTP 404, not found error. Please suggest documentation or a set of steps to configure the gateway in the Oracle Autonomous Database environment or let me know that what I'm trying to do is not supported. Thanks for many years of Ask Tom. I have found it to be valuable in many Oracle development cases. John Jeunnette johnjeunnette@prairiesystemsgroup.com
Categories: DBA Blogs

Inability to Export/Import our APEX APP

Tue, 2025-06-10 22:38
Hi everyone, We're developing an APEX app that handles Excel files, and we've run into a problem when trying to export the app from one workspace and import it into another. <b>The main issue: when we export the app and attempt to import it into a different workspace, the import fails with various errors, and ultimately the app does not load.</b> Has anyone experienced similar issues or have any advice on how to resolve this? Being able to seamlessly transfer the app across workspaces would greatly improve our workflow. Additional details: We export the entire application using the default export options. During import, we select "Import as Application" in the Import Wizard. After some loading time, an error appears, although the exact error message varies. Examples include: - <b>Internal Server Error - Write</b> The server encountered an internal error or misconfiguration and was unable to complete your request. Reference #4.73a2f17.1748513046.f1d44d4 https://errors.edgesuite.net/4.73a2f17.1748513046.f1d44d4 - <b>Execution of the statement was unsuccessful. ORA-02091: transaction rolled back</b> <code> begin wwv_flow_imp.import_end(p_auto_install_sup_obj => nvl(wwv_flow_application_install.get_auto_install_sup_obj, false)); commit; end; </code> We encounter the same problem even when exporting and importing a single page into a blank app. Clicking "Install Page" triggers the same ORA-02091 error as above. Any guidance, tips, or workarounds would be highly appreciated. Thanks so much for your help!
Categories: DBA Blogs

Additional help with JSON objects - add/remove items from an array within a JSON object

Fri, 2025-06-06 08:26
I thought I could take what Chris showed me in my original question (https://asktom.oracle.com/ords/asktom.search?tag=looking-for-help-with-json-object). However, after 2 days, I have not made any progress. The JSON data - { "recall_control": [ { "item_id":"item1", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL3", "VAL5"] }, { "item_id":"item2", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2"] }, { "item_id":"item3", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL3"] }, { "item_id":"item4", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL4"] } ] } With Chris's help - I can get the item_id values <b><code>with jdata as ( select treat ( '{ "recall_control": [ { "item_id":"item1", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL3", "VAL5"] }, { "item_id":"item2", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2"] }, { "item_id":"item3", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL3"] }, { "item_id":"item4", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL4"] } ] }' as json ) j from dual ) select json_query ( j, '$.recall_control[*] ? ( exists ( @.recall_list[*] ? ( @ == "VAL3" ) ) ).item_id' with wrapper ) items from jdata;</code> ITEMS ----------------- ["item1","item3"]</b> I took what he gave me and over the last 2 days I have tried to figure out the second half of my issue. How to ADD/REMOVE items from the RECALL_LIST array based on the values obtained in the original query. 
I thought I could use the JSON_VALUE method to find where ITEM_ID = "item1" and create an update query to remove VAL3 from the array or add VAL6 to it. However, the query returns no rows updated. I tried to use Chris's search advice - <b>"You can do this by nesting search expressions, first to locate the recall_control. Then using the exists method to search the recall_list array. e.g.:"</b> That didn't work. What a...
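A hedged sketch of one way to do the modification, using JSON_TRANSFORM (introduced in 21c; check availability on your release). The table and column names are hypothetical, and the array positions are hard-coded because filter expressions inside modification paths are not universally supported:

<code>
-- Append VAL6 to item1's recall_list and remove its 2nd element (VAL3).
-- recall_control[0] and recall_list[1] are assumed positions.
UPDATE my_json_table t
SET    t.doc = JSON_TRANSFORM(t.doc,
         APPEND '$.recall_control[0].recall_list' = 'VAL6',
         REMOVE '$.recall_control[0].recall_list[1]');
</code>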
Categories: DBA Blogs

Deadlock while doing parallel operation

Fri, 2025-06-06 08:26
Hello, I am executing a package on multiple threads. Our package deletes data in bulk. Whenever I run the package on multiple threads I get a deadlock. On checking the trace file I found the delete statement that causes the deadlock, but the graph shows: Rows waited on: Session 38: no row Session 70: no row. So I do not understand the root cause of this deadlock. There are indexes on all the foreign key columns of the table being deleted, and the deletion is based on the primary key as well. Can you please help with what the probable issue could be?
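One common pattern sketch for this situation: give each thread a disjoint slice of the keys up front, e.g. by hashing the primary key, so two sessions can never queue on the same rows or index entries. All names here are illustrative:

<code>
-- Each of :thread_count workers passes its own :thread_no (0-based),
-- so the working sets never overlap.
DELETE FROM child_table
WHERE  parent_id IN (SELECT id
                     FROM   parents_to_delete
                     WHERE  MOD(ORA_HASH(id), :thread_count) = :thread_no);
</code>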
Categories: DBA Blogs

Oracle.DataAccess.Client.OracleException ORA-03135: connection lost contact

Tue, 2025-06-03 02:19
I am using IIS as the app server, with ODP.NET and Oracle 10.2.0. After 30 minutes of idle time I get the error ORA-03135: connection lost contact. The error is the same even if I use SQL*Plus. I asked the network group to increase the firewall timeout to 4 hours, which they did on port 1521; I am not sure whether timeout values can be set on the return ports. Since it's a web service on IIS, w3wp.exe opens an active connection, and with the default connection pooling settings one connection is maintained. After 30 minutes idle, on the next call I get the error. I tried the same with SQL*Plus and got the same error. Is there a way or a workaround for this? Appreciate your help.
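A common mitigation sketch: have the database server send keepalive probes so stateful firewalls see traffic on otherwise idle connections. In sqlnet.ora on the database server:

<code>
# sqlnet.ora (server side): probe idle connections every 10 minutes
# so firewall idle-timeout rules do not silently drop them.
SQLNET.EXPIRE_TIME = 10
</code>

The probe interval should be shorter than the firewall's idle timeout; it adds a small amount of network traffic per idle session.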
Categories: DBA Blogs
