Feed aggregator

SQL Developer and CBO

Jonathan Lewis - Wed, 2025-09-24 07:08
Conclusion

In recent posts I’ve started with the conclusion so that you can decide very quickly whether or not the article is likely to be relevant to your current requirements.

SQL Developer (I have been told) always uses the character data type to pass bind variables to the database and leaves it up to the database/optimizer to deal with any conversions it might need. If the code is then used with the correct data type for the bind variables, the optimizer will probably produce different cardinality estimates for exactly the same code, and changes in cardinality estimates may result in changes in execution plans.

Even when plans with the correct bind variable datatypes match plans with the character datatype they may still perform differently for “exactly the same” inputs. (On the plus side, the plans with the correct datatype are more likely to have the better performance.)

When checking execution plans from memory (dbms_xplan.display_cursor()) always check the Predicate Information section; there is also a peeked_binds option to the format call that makes differences in bind datatypes very obvious.

Preamble

At the POUG2025 (Polish Oracle User Group) conference earlier this month, I attended a presentation by Monika Lewandowska on the special treatment Oracle gives the nvl() operator. While showing us an execution plan that demonstrated an important point, Monika mentioned in passing that she hadn't yet worked out why one of the cardinality estimates (E-rows) showed the value it did.

I looked at the plan and, since we’d been given all the necessary details of the data, found that I could state very confidently that I didn’t know why the number had appeared either.

After getting home, building a smaller model of the dataset, and exchanging a couple of emails with Monika, I discovered the (horrible) answer. If you want to build the model, the script is at the end of this note; here I'm just going to show you the query I executed (SQL*Plus, 19.11 and 23.6) and two (different) execution plans:

select  /* PIDN */ 
        count(*), 
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

The table prod holds 10M rows; the value of column product_id is 1 for half the rows and a distinct even number for each row in the other half. There is a hybrid histogram on product_id, so the optimizer is (approximately, of course) aware of the data pattern, and product_id is declared not null.
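
If you want to see what the optimizer "knows" about that skew once stats have been gathered, a couple of simple dictionary queries are enough. This is just a sanity check of my own, not part of the original demo, and the exact bucket counts will vary with the histogram sampling:

select  column_name, histogram, num_buckets, num_distinct
from    user_tab_cols
where   table_name  = 'PROD'
and     column_name = 'PRODUCT_ID'
/

select  endpoint_value, endpoint_number, endpoint_repeat_count
from    user_tab_histograms
where   table_name  = 'PROD'
and     column_name = 'PRODUCT_ID'
order by
        endpoint_number
/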

You'll notice that the query includes a bind variable :nProdId, and a call to nvl() on that bind variable. Here's the plan, pulled from memory by a call to dbms_xplan.display_cursor(), when I supplied NULL for that variable:

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |      1 |        |      1 |00:00:08.25 |   29717 |
|   1 |  SORT AGGREGATE                         |                 |      1 |      1 |      1 |00:00:08.25 |   29717 |
|   2 |   VIEW                                  | VW_ORE_1807A7AF |      1 |     10M|     10M|00:00:07.09 |   29717 |
|   3 |    UNION-ALL                            |                 |      1 |     10M|     10M|00:00:05.54 |   29717 |
|*  4 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |       0 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| PROD            |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  6 |       INDEX RANGE SCAN                  | PROD_I1         |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  7 |     FILTER                              |                 |      1 |        |     10M|00:00:02.64 |   29717 |
|   8 |      TABLE ACCESS FULL                  | PROD            |      1 |     10M|     10M|00:00:01.13 |   29717 |
---------------------------------------------------------------------------------------------------------------------

(If you’re not familiar with the transformation that the optimizer has applied to the original query then you may want to read this note on OR-expansion of nvl() predicates, or this more recent note on the topic by Monika.)

The plan is exactly what I expected, which didn’t correspond to the cardinality estimate that Monika’s data had shown. So, after an exchange of email, I repeated the test but switched to “Monika-mode” to get the following plan (which is only slightly different):

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |      1 |        |      1 |00:00:10.04 |   29717 |
|   1 |  SORT AGGREGATE                         |                 |      1 |      1 |      1 |00:00:10.04 |   29717 |
|   2 |   VIEW                                  | VW_ORE_1807A7AF |      1 |   2478K|     10M|00:00:08.87 |   29717 |
|   3 |    UNION-ALL                            |                 |      1 |   2478K|     10M|00:00:07.33 |   29717 |
|*  4 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |       0 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| PROD            |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  6 |       INDEX RANGE SCAN                  | PROD_I1         |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  7 |     FILTER                              |                 |      1 |        |     10M|00:00:04.42 |   29717 |
|*  8 |      TABLE ACCESS FULL                  | PROD            |      1 |   2478K|     10M|00:00:02.89 |   29717 |
---------------------------------------------------------------------------------------------------------------------

Note how the cardinality estimate (E-Rows) for operation 8 has dropped to roughly a quarter of the estimate shown in the previous plan – and that 1/4 is the effect that Monika had shown in the original presentation. What’s going on?

The correct response you should give at this point is: “Where’s the Predicate Information?”

If you translated “product_id = nvl(:nprodid, product_id)” into a procedural form you might write it as:

if :nprodid is null then
        product_id = product_id
else
        product_id = :nprodid
end if

So my first thought about the difference between the two results was that Monika was running a version of Oracle that actually generated the product_id = product_id predicate and then did some arithmetic that behaved as if it were comparing two different columns. That would explain why we both saw the estimated cardinality as (very close to) 1/4 of the expected value.

You could check my claim about the 1/4 estimate by creating the table with a duplicate of the product_id column, including its histogram, and checking the estimate for the predicate product_id = product_id_dup. Roughly speaking: because the histogram tells the optimizer that half the data has the value 1 and the other half has no repeats, half the data from the "left hand table" will match half the data of the "right hand table" in the Cartesian product – and that "half of a half" is where the 1/4 drops out of the arithmetic.
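
Here's a minimal sketch of that check; the copy table and duplicate column names are my own invention (a variation on the idea, rather than a quote from either of our models), and the claim is that the estimate for the final query should come out at roughly 2.5M rows against the 10M-row copy:

create table prod_dup
as
select  product_id, product_id product_id_dup, product_name
from    prod
/

exec dbms_stats.gather_table_stats(user, 'prod_dup', method_opt => 'for all columns size skewonly')

explain plan for
select  count(*)
from    prod_dup
where   product_id = product_id_dup
/

select * from dbms_xplan.display()
/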

The emails we exchanged were about the Oracle version, the predicate information, and the client application.

  • We were using different versions of Oracle – both 19c but different RUs
  • We were using different applications – I was using SQL*Plus, she was using SQL Developer
  • The two sets of predicate information did not match; here they are (based on my sample data):
Predicate Information (identified by operation id):    (SQL*Plus)
---------------------------------------------------
   4 - filter(:NPRODID IS NOT NULL)
   6 - access("PRODUCT_ID"=:NPRODID)
   7 - filter(:NPRODID IS NULL)

Predicate Information (identified by operation id):    ("SQL Developer mode")
---------------------------------------------------
   4 - filter(:NPRODID IS NOT NULL)
   6 - access("PRODUCT_ID"=TO_NUMBER(:NPRODID))
   7 - filter(:NPRODID IS NULL)
   8 - filter("PRODUCT_ID"=TO_NUMBER(TO_CHAR("PRODUCT_ID")))

My guess wasn't right, but it was in the right ballpark. The problem wasn't a strange optimizer bug dependent on the version of Oracle, but it was about the arithmetic of a predicate very similar to "product_id = product_id".

That predicate had appeared because (I have been told) SQL Developer always uses a character datatype to supply bind variables to the server and lets the server sort out any problems of coercion; so the “Monika-mode” hack I had used from SQL*Plus was to declare variable nProdId varchar2(30) rather than variable nProdId number.

Because :nprodid was a character bind variable the mechanism the optimizer had to use to deal with the predicate product_id = nvl(:nprodid, product_id) was to convert the right-hand expression to a number, producing the predicate product_id = to_number(nvl(:nprodid, product_id)). But you can't apply to_number() to a number, so the optimizer also had to convert the numeric column inside the nvl() to a character, leading to the rewritten predicate product_id = to_number(nvl(:nprodid, to_char(product_id))) before the _or_expand_nvl_predicate transformation was applied.

If you want to confirm that Oracle goes through this intermediate step you can generate the CBO (10053) trace, and you will find in it the text (reformatted here for readability):

Transformed predicate 
        "PROD"."PRODUCT_ID"=TO_NUMBER(NVL(:B1,TO_CHAR("PROD"."PRODUCT_ID"))) 
to
           :B1 IS NOT NULL AND "PROD"."PRODUCT_ID"=TO_NUMBER(:B2) 
        OR :B3 IS NULL AND "PROD"."PRODUCT_ID"=TO_NUMBER(TO_CHAR("PROD"."PRODUCT_ID"))
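
In case you haven't generated a 10053 trace before, a minimal recipe from SQL*Plus looks like this; the statement of interest has to be hard-parsed between the two alter session calls, and the final query simply tells you where the trace file went:

alter session set events '10053 trace name context forever';

-- execute (or at least hard parse) the statement of interest here

alter session set events '10053 trace name context off';

select  value
from    v$diag_info
where   name = 'Default Trace File'
/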

As for the arithmetic, there are some functions where function(column) is treated as an "unknown, unpeekable value", leading to a fixed selectivity guess, and there are some functions where the selectivity of function(column) is assumed to be the same as the selectivity of the underlying column. In this case the optimizer does the predicate arithmetic for product_id = product_id without realising that it is (effectively) calculating for a predicate that it would normally eliminate.
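
To put some numbers on that, here is my reading of the arithmetic (an approximation of my own, not a quote from any trace file): treat the predicate as if it were a join of the column to a copy of itself and sum frequency squared over the histogram values:

        value 1            : frequency ~0.5        contribution ~0.5 * 0.5        = 0.25
        5M distinct evens  : frequency 1e-7 each   contribution 5e6 * (1e-7)^2    = 5e-8 (negligible)

        selectivity ~ 0.25, so E-Rows ~ 0.25 * 10M = 2.5M

The reported 2478K rather than a round 2500K is consistent with the hybrid histogram recording a frequency of just under 50% for the popular value (0.4978 squared is 0.2478).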

Consequences

Changes in cardinality estimates can lead to changes in execution plans. This was a very simple query with a very obvious change in estimate, but no change in plan; in production databases the variation in estimate may be far less obvious. If you use SQL Developer to create and test SQL and then embed the SQL in some other tool (with the correct bind declarations) for production execution you may have difficulty working out why "it works okay in test but picks the wrong plan in production".

There are variations on this theme; the two plans I've shown have the same plan hash value but do different amounts of work because of the calls to conversion functions. They would also appear as two separate child cursors if one session was using character binds and the other was using numeric binds. Have you ever heard the complaint: "This query runs faster for userX than for userY, even though the plan is the same and they're both using the same {some column name/description}"? Maybe they're using different tools to run the query.
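
If you want to check whether this is what's happening on your system, a couple of queries against long-standing dynamic performance views will show the mismatch; this is just a sketch, and the sql_id is a placeholder you have to supply:

select  child_number, bind_mismatch
from    v$sql_shared_cursor
where   sql_id = '&m_sql_id'
/

select  child_number, name, position, datatype_string
from    v$sql_bind_capture
where   sql_id = '&m_sql_id'
order by
        child_number, position
/

A child cursor with bind_mismatch = 'Y', or two children reporting NUMBER and VARCHAR2 for the same bind position, points straight at the client tools.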

This is just another reason for getting into the habit of "always check the Predicate Information". In fact this prompts me to add a comment about including the "peeked_binds" format option in calls to dbms_xplan.display_cursor() et al., which gives you a very clear indication of the source of the problem (when the peeked binds appear in the report). Here are the two sections from my demo:

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (NUMBER): (null)

Peeked Binds (identified by position):
--------------------------------------
   1 - :1 (VARCHAR2(30), CSID=873): (null)

Addendum

While SQL Developer makes it a little awkward to test your SQL with numeric bind variables, SQL*Plus has a similar limitation with date bind variables. Although, as we saw above, you need only write variable n1 number in SQL*Plus to create a numeric bind variable, the equivalent variable d1 date results in the following response (in 23.6):

Usage: VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
                    VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
                    NVARCHAR2 (n) | CLOB | NCLOB | BLOB | BFILE |
                    REFCURSOR | BINARY_FLOAT | BINARY_DOUBLE |
                    BOOLEAN | VECTOR] ]

If you want to test your code to generate execution plans with date bind variables (from SQL*Plus) you'll have to create a PL/SQL block that defines a variable, populates it, then does something that forces the statement to be parsed.
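
Something like the following is enough; the table and column names here are hypothetical, the point is only that the PL/SQL variable becomes a date bind when the embedded SQL is parsed, after which you can pull the plan from memory with dbms_xplan.display_cursor() for the relevant sql_id:

declare
        m_date  date := null;   -- or, e.g., to_date('01-Sep-2025','dd-Mon-yyyy')
        m_ct    number;
begin
        select  count(*)
        into    m_ct
        from    orders
        where   order_date = nvl(m_date, order_date);
end;
/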

Addendum 2

The link I posted to Bluesky publicising this note has generated a number of responses, covering SQL Developer for VS Code, "runscript", and SQLcl (so far). I'll draw together all the bits some time in the next few days; meanwhile, following an initial response about "Run Script (F5)", Monika sent me an email about that feature of SQL Developer and I'll just quote it here (with the reminder that SQL Developer is constantly being enhanced, so the following may not be true for every version):

If you run this:

variable nProdId number
exec :nProdId := null
 
select  /* PIDN */ 
        count(*), 
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

as a script (F5) in SQL Developer it sends the bind as a number. Unfortunately I work with SQL Developer for VS Code and here it doesn’t work that way.

The model

If you want to build the data set, here is the test script:
rem
rem     Script:         monika_nvl_3a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sept 2025
rem     Purpose:        
rem
rem     Last tested 
rem             23.6.0.0
rem             19.11.0.0

drop table prod purge;

set linesize 132 pagesize 120 trimspool on tab off

execute dbms_random.seed(0)

prompt  ====================================
prompt  create table PROD: 50 * 200,000 rows
prompt  Half the rows have product_id = 1
prompt  The rest hold distinct even numbers
prompt  ====================================

create table prod
as
with data as (
        select  rownum rn
        from    dual 
        connect by 
                level <= 2e5
),
products as (
        select 'Product ' || to_char(rownum,'FM000') product_name
        from    dual 
        connect by 
                level <= 50
)
select 
        case when mod(rownum,2) = 1 then 1 else rownum end product_id,
        prd.product_name
from 
        products        prd, 
        data            dat
/

alter table prod modify (product_id not null);
create index prod_i1 on prod(product_id);

spool monika_nvl_3a.lst

select  * 
from    (
        select 
                product_id,
                count(*)                product_count,
                sum(count(*)) over ()   total_rows
        from 
                prod 
        group by 
                product_id 
        order by 
                product_id
        )
where 
        rownum < 11
/

select count(*) from prod;

prompt  ======================
prompt  Collecting  histograms
prompt  ======================

exec dbms_stats.gather_table_stats(user, 'prod', method_opt  => 'for all columns size skewonly')

select  column_name, histogram, num_buckets
from    user_tab_cols
where   table_name = 'PROD'
/

alter system flush shared_pool;
alter session set statistics_level = all;

set timing on
set serveroutput off

column max_name format a32

prompt  =========================================================
prompt  Test for NULL (product not supplied): nProdId as number
prompt  E-Rows is 10M, time to run ca. 9 seconds.
prompt  Note the simple filter predicate, and the absence of a
prompt  predicate for op. 8 when product_id is declared not null.
prompt  If not so declared we'd see "product_id is not null"
prompt  ===========================================================

variable nProdId number
exec :nProdId := null

select  /* PIDN */ 
        count(*), 
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

select  *
from    dbms_xplan.display_cursor( format=> 'allstats last peeked_binds')
/

prompt  =========================================================
prompt  Test for NULL (product not supplied): nProdId as varchar2
prompt  E-Rows is ca. 2.5M, time to run ca. 11 seconds.
prompt  Note the more complex filter predicates, especially op. 8
prompt  =========================================================

variable nProdId varchar2(30)
exec :nProdId := null

-- alter session set events '10053 trace name context forever';

select  /* PIDN */ 
        count(*), 
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

-- alter session set events '10053 trace name context off';

select  *
from    dbms_xplan.display_cursor( format=> 'allstats last peeked_binds')
/

spool off

CONNECT_TIME with reset on action

Tom Kyte - Thu, 2025-09-18 11:19
We have an Oracle Forms / Database application and were asked to limit a user's maximum online time to 15 minutes. Following this, we set the profiles' CONNECT_TIME to 15 - this works well. How is it possible to reset this if the user does any action on the frontend? Maybe v$session.seconds_in_wait can help? Thanks in advance Helmut
Categories: DBA Blogs

PL/SQL package

Tom Kyte - Thu, 2025-09-18 11:19
Which PL/SQL package is primarily used for interacting with Generative AI services in Oracle Database 23ai? DBMS_AI or DBMS_ML or DBMS_VECTOR_CHAIN or DBMS_GENAI?
Categories: DBA Blogs

Agent OEM 24ai

Tom Kyte - Thu, 2025-09-18 11:19
Hi there! I've some trouble trying to upload data from agent to OMS server, everything seems correct but: <b>emctl upload agent Oracle Enterprise Manager 24ai Release 1 Copyright (c) 1996, 2024 Oracle Corporation. All rights reserved. --------------------------------------------------------------- EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)</b> agent is running, OMS too, telnet from agent is: telnet myserver.com 4903 Trying 10.8.0.65... Connected to myserver.com. Escape character is '^]'. curl output: curl -vk myserver.com:4903/empbs/upload * Trying 10.8.0.65... * TCP_NODELAY set * Connected to myserver.com (10.8.0.**) port 4903 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 * ALPN, server did not agree to a protocol * Server certificate: * subject: CN=myserver.com * start date: Jul 10 16:57:40 2025 GMT * expire date: Jul 9 16:57:40 2035 GMT * issuer: O=EnterpriseManager on myserver.com; OU=EnterpriseManager on myserver.com; L=EnterpriseManager on myserver.com; ST=CA; C=US; CN=myserver.com * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. > GET /empbs/upload HTTP/1.1 > Host: myserver.com:4903 > User-Agent: curl/7.61.1 > Accept: */* > < HTTP/1.1 200 OK < Date: Wed, 03 Sep 2025 18:53:47 GMT < X-ORCL-EM-APIGW-CONSOLIDATED-BY: apigateway < X-ORCL-EM-APIGW-ERR: 0 < X-ORACLE-DMS-ECID: 21f06e98-2895-465a-b3f3-17be919babe9-00001673 < X-ORCL-EMOA: true < X-ORCL-EM-APIGW-GUID: 6113d58d-fde2-8b19-f054-c5eee6216d13 < X-ORACLE-DMS-RID: 0 < Date: Wed, 03 Sep 2025 18:53:47 GMT < Content-Type: text/html;charset=utf-8 < X-Content-Type-Options: nosniff < X-XSS-Protection: 1; mode=block < Vary: Accept-Encoding < Content-Length: 306 < <HTML><HEAD><TITLE> Http XML File receiver </TITLE></HEAD><BODY bgcolor="#FFFFFF"> <H1>Http XML File receiver</H1> <H2> Http Receiver Servlet active!</h2> <H2> Product version is: 24ai </H2> <H2> Product release version is: 24.1.0.0.0 </H2> <H2> Core release version is: 24.1.0.0.0 </H2> </BODY></HTML> * Connection #0 to host myserver.com left intact thank you very much!
Categories: DBA Blogs

General Question for your thought or Experience with the Outbox Pattern and viable alternatives

Tom Kyte - Thu, 2025-09-18 11:19
Question We're evaluating different approaches to implement the Outbox Pattern in Oracle 19c for reliable event publishing in our microservices architecture, but we're concerned about the significant I/O overhead and performance implications. Could you provide guidance on the best practices and alternatives? Current Implementation Options We're Considering 1. Traditional Polling Approach Method: Standard outbox table with application polling using SELECT ... FOR UPDATE SKIP LOCKED Concerns: Constant polling creates unnecessary database load Potential for high latency in event delivery Resource consumption even when no events exist 2. Change Data Capture (CDC) with Debezium Method: Using Debezium to mine Oracle redo logs for outbox table changes Concerns: Additional complexity in deployment and monitoring Dependency on external CDC infrastructure Potential log mining overhead on the database 3. Oracle Advanced Queuing (AQ) with Sharded Queues Method: Leveraging Oracle's native messaging with 19c sharded queue improvements Concerns: Learning curve for development teams familiar with table-based approaches Potential vendor lock-in Queue management complexity Primary Concerns I/O Impact: All approaches seem to significantly increase database I/O: Polling creates constant read operations CDC requires continuous log scanning Queuing systems add their own storage and processing overhead Scalability: As our event volume grows, we're worried about: Database performance degradation Increased storage requirements for outbox/queue tables Network bandwidth consumption Specific Questions Performance Optimization: What Oracle 19c specific features or configurations can minimize the I/O overhead of outbox pattern implementations? Alternative Architectures: Are there Oracle-native alternatives to the traditional outbox pattern that provide similar transactional guarantees with better performance characteristics? Hybrid Approaches: Would a combination approach (e.g., AQ for high-priority events, polling for batch operations) be advisable? Monitoring and Tuning: What specific metrics should we monitor, and what tuning parameters are most critical for outbox pattern performance in Oracle 19c? Resource Planning: How should we size our database resources (I/O capacity, storage, memory) when implementing outbox patterns at scale? Environment Details Oracle Database 19c Enterprise Edition Microservices architecture with moderate to high event volume Requirements for exactly-once delivery semantics Mixed OLTP and event-driven workloads Any insights on Oracle-specific optimizations, alternative patterns, or architectural recommendations would be greatly appreciated.
Categories: DBA Blogs

Remote Procedure Call (RPC) Dependency Management - where is remote timestamp stored?

Tom Kyte - Thu, 2025-09-18 11:19
Hi AskTom Team! According to https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/schema-object-dependency.html#GUID-B99E885E-900F-4F29-A188-A617A301FDCE : "Whenever a procedure is compiled, its time stamp is recorded in the data dictionary." Is it possible to see this recorded timestamp ? Marcin
Categories: DBA Blogs

Errorhandling in Ansible

Yann Neuhaus - Thu, 2025-09-18 08:54
Errors in tasks – abort or ignore?

By default, if a task in a playbook fails, then the execution of the playbook is stopped for that host.

- name: PLAY1
  hosts: localhost
  gather_facts: no
  tasks:
    - name: let this shell-command fail
      ansible.builtin.shell: exit 1

    - name: let this shell-command complete
      ansible.builtin.shell: exit 0

If you run this, the 2nd task is not executed once the first one has failed. If you want to continue in such a case, the ignore_errors parameter is your friend:

    - name: let this shell-command fail
      ansible.builtin.shell: exit 1
      ignore_errors: true
Custom error conditions

By default, Ansible evaluates the exit code of the module; in the case of the shell module, that is the exit code of the last command.

But for some commands that is not adequate. Example: the Oracle command-line tool sqlplus, used to submit SQL commands, returns an exit code of 0 as long as it can connect to the database instance; the exit code says nothing about the result of your SQL commands. Error messages in Oracle are prefixed with ORA-.

So, if you want to check for application errors, you have to implement it yourself. For that, you can use the failed_when option.

    - name: check the result of a sqlplus call
      ansible.builtin.shell: |
        . ~/.profile
        echo "select count(*) from all_users;" | sqlplus / as sysdba
      register: number_of_users
      failed_when: "'ORA-' in number_of_users.stdout"

Caution: In this case the exit code of the shell is no longer evaluated. To also catch a failing sqlplus call (e.g., sqlplus cannot connect to the database, or the sqlplus binary is not found), you have to add this (default) condition:

      failed_when: "number_of_users.rc != 0 or 'ORA-' in number_of_users.stdout"

But caution! Not all modules will have an rc field.
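
One defensive variant (my own sketch, not from the examples above) is to default the fields when they are missing:

      failed_when: "(number_of_users.rc | default(0)) != 0 or 'ORA-' in (number_of_users.stdout | default(''))"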

Tuning: Perform several checks at once

Conceptually, Ansible is not the fastest tool. For each task, it will usually log in to the remote server with ssh. If you have to run several checks in the shell then, instead of running each in a separate task, you can run all these check commands in one shell task and evaluate the result afterwards.

    - name: run many check commands
      ansible.builtin.shell: |
        mount | grep ' on /u00 '  #check if /u00 is mounted
        rpm -q ksh 2>&1  #check if ksh is installed
        exit 0 # do not fail if rpm exit != 0; we will do our own errorhandling
      register: checks

    - name: fail if no /u00 is mounted
      fail:
        msg: "/u00 is not mounted"
      when: "' on /u00 ' not in checks.stdout"

    - name: No ksh found, try to install it
      yum:
        name: ksh
        state: present
      when: "'package ksh is not installed' in checks.stdout"

If you only want to throw an error, then you can do it directly in the shell-task:

when: "' on /u00 ' not in checks.stdout" or 'package ksh is not installed' in checks.stdout

But if you parse the output afterwards, you can run tasks to fix the error.

Sometimes it is difficult to parse the output if some commands return the same output, e.g. “OK”.
If your check-commands always return exactly 1 line, then you can directly parse the output of the command. The output of the 3rd command is in checks.stdout_lines[2].
In the above example that will not work because grep prints the matching line when it finds one (exit code 0) and prints nothing when it does not (exit code 1), so the number of output lines varies. So, expand it as: mount | grep ' on /u00 ' || echo error

Print error messages more readably

Do not let the task itself fail; the output is usually unreadable because all the information is printed on one line.

Instead, use ignore_errors: true and failed_when: false in the task. Do the error handling in a separate task with a customized error message. To print the multiline list of stdout_lines, use debug:; otherwise you can directly use ansible.builtin.fail: with a customized message:

    - name: force sqlplus to throw error because of missing environment
      ansible.builtin.shell: |
        /u00/app/oracle/product/19.7.0/bin/sqlplus -L / as sysdba @myscript.sql 2>&1
      register: checks
      ignore_errors: true
      failed_when: false

    - name: Check for errors of task before
      debug: var=checks.stdout_lines
      failed_when: checks.rc != 0 or 'ORA-' in checks.stdout
      when: checks.rc != 0 or 'ORA-' in checks.stdout
Re-run failed hosts

As an example of this scenario: a customer of mine does an offline backup of the development databases, and I have to run an Ansible playbook against all databases. If the playbook runs against a host at backup time it will fail because the database is down, but after some minutes the database will be restarted.

What can we do now?

Wait until the database is up again. That is possible, see the example of ansible.builtin.wait_for in my blog post Parallel execution of Ansible roles. But for this scenario it is a waste of time: the database may have been stopped for some other reason (not for a backup) and will not be restarted within the next few minutes.

Try again later. My playbook for all hosts (parallel, forks=5) takes about 1 hour. The idea is to remember each host with a stopped database and continue with the next host; after the play has finished for all hosts, restart the play for the remembered hosts.

  • The 1st play runs against all database hosts (a skeleton of both plays is sketched after this list) and:
      • gets the status of the databases on the host
      • assigns the database instances to an is_running and a not_open list
      • includes the role to run against the running databases
      • dynamically adds the host to the group re_run_if_not_open if there are not_open databases
  • The next play runs only for the re_run_if_not_open group and:
      • includes the role to run against the (hopefully now running) databases
  • If a database is still down at that point, we assume it is stopped permanently.
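
A skeleton of the two plays might look like this; the host group, role name, status command and the 'NOT OPEN' marker are all placeholders of my own, and I've used ansible.builtin.group_by as one way of implementing the "dynamically add the host to a group" step:

- name: PASS 1 - all database hosts
  hosts: db_servers
  gather_facts: no
  tasks:
    - name: get the status of the databases on this host
      ansible.builtin.shell: ./list_db_status.sh   # placeholder for the real status check
      register: db_status

    - name: run the role against the databases that are open
      ansible.builtin.include_role:
        name: my_db_role                           # placeholder role name

    - name: remember this host for a second pass if something was not open
      ansible.builtin.group_by:
        key: "{{ 're_run_if_not_open' if 'NOT OPEN' in db_status.stdout else 'nothing_to_rerun' }}"

- name: PASS 2 - only the remembered hosts
  hosts: re_run_if_not_open
  gather_facts: no
  tasks:
    - name: retry the role against the (hopefully now running) databases
      ansible.builtin.include_role:
        name: my_db_role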

The article Errorhandling in Ansible appeared first on the dbi Blog.

Bring the consecutive number (start_val, end_val) records based on id together as one record, and add a new record when the consecutive numbering breaks down

Tom Kyte - Tue, 2025-09-16 23:17
Bring the consecutive number (start_val, end_val) records based on id together as one record, and add a new record when the consecutive numbering breaks down. https://livesql.oracle.com/next/?compressed_code=H4sIAAAAAAAACo3PwQqCQBCA4fvCvsPcVBjBXTWLboXHCoLOMupCgm2wO%252Fr8oXbo1jKn%252BflgmM4ZYgNM7WiAjeemX3YpYikAAIYeZnLdk5yOVZbgVj2T42amEa6Py6m%252Bf7Ox%252FRrBTq%252FWOCmSoxTbDNYbxzBYfv%252BcmWmcjI8hoggV5gv%252FDwvchcEKVRYmlUZVBtIdqkMY1RlqHUbzEvMqkO6xWN9K0%252Fp2%252FgB1bHIywAEAAA%253D%253D&code_language=PL_SQL&code_format=false

create table test_date ( id varchar2(10), start_val NUMBER, end_val number );

input data:

insert into test_date values( 'a',1,3);
insert into test_date values( 'a',4,6);
insert into test_date values( 'a',7,10);
insert into test_date values( 'a',12,15);
insert into test_date values( 'a',16,19);
insert into test_date values( 'a',20,22);
insert into test_date values( 'a',35,37);
insert into test_date values( 'a',38,40);

output data:

'a' , 1, 10
'a' , 12, 19
'a' , 35 , 40
Categories: DBA Blogs

Materialized View Staleness Changes to NEEDS_COMPILE after DML

Tom Kyte - Tue, 2025-09-16 23:17
Why does DML on a materialized view's master table cause the materialized view staleness to change from FRESH to NEEDS_COMPILE? I would have thought it would have changed to STALE. My understanding about NEEDS_COMPILE is that it is supposed to reflect structural changes (DDL), so I must have some gap in my understanding because I am getting NEEDS_COMPILE when only performing DML (not DDL). <code>SQL>column mview_name format a20 SQL>create table T ( id NUMBER 2 ) 3 / Table created. SQL>insert into T ( id ) values ( 1 ) 2 / 1 row created. SQL>create materialized view T_MV 2 ( id 3 ) 4 as select id 5 from T 6 / Materialized view created. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>insert into T ( id ) values ( 2 ) 2 / 1 row created. SQL>commit 2 / Commit complete. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV NEEDS_COMPILE NEEDS_COMPILE COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>begin 2 dbms_snapshot.refresh( list => user || '.T_MV' 3 , purge_option => 2 4 , atomic_refresh => false 5 , out_of_place => true 6 ) 7 ; 8 end; 9 / PL/SQL procedure successfully completed. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 2 SQL>insert into T ( id ) values ( 3 ) 2 / 1 row created. SQL>commit 2 / Commit complete. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV' 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV NEEDS_COMPILE NEEDS_COMPILE ...
Categories: DBA Blogs

Materialized View USING TRUSTED CONSTRAINTS and staleness=UNKNOWN

Tom Kyte - Tue, 2025-09-16 23:17
It appears USING TRUSTED CONSTRAINTS causes a materialized view to have staleness=UNKNOWN when it would otherwise be FRESH. Is it possible to have a materialized view with staleness=FRESH when USING TRUSTED CONSTRAINTS? If not, would the optimizer be less likely to consider a materialized view with staleness=UNKNOWN for query rewrite if query_rewrite_integrity=TRUSTED and query_rewrite_enabled=TRUE? How about if query_rewrite_integrity=ENFORCED? <code>SQL>column mview_name format a20 SQL>create table T ( id NUMBER 2 ) 3 / Table created. SQL>insert into T ( id ) values ( 1 ) 2 / 1 row created. SQL>create materialized view T_MV 2 ( id 3 ) 4 -- refresh using trusted constraints 5 as select id 6 from T 7 / Materialized view created. SQL>create materialized view T_trusted_MV 2 ( id 3 ) 4 refresh using trusted constraints 5 as select id 6 from T 7 / Materialized view created. SQL> SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE T_TRUSTED_MV UNKNOWN VALID COMPLETE SQL>begin 2 dbms_snapshot.refresh( list => user || '.T_MV' 3 , purge_option => 2 4 , atomic_refresh => false 5 , out_of_place => true 6 ) 7 ; 8 dbms_snapshot.refresh( list => user || '.T_TRUSTED_MV' 9 , purge_option => 2 10 , atomic_refresh => false 11 , out_of_place => true 12 ) 13 ; 14 end; 15 / PL/SQL procedure successfully completed. SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1 2 / MVIEW_NAME STALENESS COMPILE_STATE LAST_REF -------------------- ------------------- ------------------- -------- T_MV FRESH VALID COMPLETE T_TRUSTED_MV UNKNOWN VALID COMPLETE SQL>select * from T_MV order by id 2 / ID ----------- 1 SQL>select * from T_trusted_MV order by id 2 / ID ----------- 1 SQL>drop materialized view T_MV 2 / Materialized ...
Categories: DBA Blogs

Returning rows in the order they were inserted

Tom Kyte - Tue, 2025-09-16 23:17
Hi Tom. Is it possible to return rows from a table in the order that they were inserted? I have an old query, pre 8.1.6, that always returned rows in the order they were inserted. Now, in version 8.1.7 & 9 they are returned in a seemingly random order. Thanks
Categories: DBA Blogs

Viewing table partition compression

Tom Kyte - Tue, 2025-09-16 23:17
Hi Tom, I have a table with compression that also has partitions and subpartitions; you can see the CREATE DDL below. As you can see, all of my subpartitions are explicitly compressed or nocompressed, but I can't see this information in the all_tables view: COMPRESSION and COMPRESS_FOR come back null. Why can I see that the table is compressed in the DDL but not there? What is the point I am missing on this issue?

CREATE TABLE EFSSALES1 (
  sale_id NUMBER,
  sale_date DATE,
  region VARCHAR2(10),
  amount NUMBER
)
COMPRESS BASIC
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (region)
(
  PARTITION sales_2024 VALUES LESS THAN (TO_DATE('01-JAN-2025','DD-MON-YYYY'))
  ( SUBPARTITION sp_east1 COMPRESS FOR OLTP,
    SUBPARTITION sp_west1 NOCOMPRESS ),
  PARTITION sales_2025 VALUES LESS THAN (TO_DATE('01-JAN-2026','DD-MON-YYYY'))
  ( SUBPARTITION sp_east2 COMPRESS FOR OLTP,
    SUBPARTITION sp_west2 COMPRESS FOR OLTP )
);

SELECT compression, COMPRESS_FOR
FROM all_tables
WHERE table_name = 'EFSSALES1'
AND owner='COPYCATLIVE'
Categories: DBA Blogs

Rman backup setup

Tom Kyte - Tue, 2025-09-16 23:17
Hello Tom, I am wondering what is the benefit of using a recovery catalog over a control file? What are some of the decisions one should consider before going either way? Please and thank you
Categories: DBA Blogs

Unable to use nested tables on my system

Tom Kyte - Tue, 2025-09-16 23:17
I am using Windows 11 64-bit / Oracle 19c. Everything works fine except when I create a nested table and associate it with my table as a column. I can neither query it nor drop the table. When I query it I get a message that the connection to the database was reset, and when I try to drop it I get a message such as: ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound], [78561], [], [], [], [], [], [], [], [], [], [] 00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]" *Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The first argument is the internal message number. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. How do I resolve this?
Categories: DBA Blogs

Pl/sql procedure freezes while fetching data from dblink

Tom Kyte - Tue, 2025-09-16 23:17
I encountered inexplicable freezing of pl/sql procedure while trying to fetch and insert data into table, however same operation runs successfully when called with sql query in any IDE. I am trying to realise ETL process in data warehouse database. I need to load data from a remote database's table into a local one, i have low privileges on the remote database meaning i cant tune network or create materialised view logs there. Target table is 'long' and daily data that must be transferred is about one million rows 7 columns wide with mostly numeric data and no large objects. I created db link and a simple view that represents remote table in local database and granted select privilege to <code>'DATAW'</code> schema designed for data storage. In the <code>'DATAW'</code> i created a procedure for merging data into identical local table. Something like this: <code> create procedure merge_t_data (p_days_offset in number) as begin merge /*+ APPEND */ into t_data_local tgt using ( select col_id, col2, ..., col7 from pdbadmin.v_data_remote where updated_at >= trunc(sysdate) - interval '1' day * p_days _offset ) src on (tgt.col_id = src.col_id) when matched then update set ... when not matched then insert ...; end; </code> When i run the procedure the session acquires wait event 'Sql*net message from dblink' or 'Sql*net more data from dblink' which stays the same forever. When i check incoming traffic on the server while the procedure is running i see that it is not used at all. <b>HOWEVER</b> When i run the same merge operatiion using query like: <code> merge /*+ APPEND */ into t_data_local tgt using ( select col_id, col2, ..., col7 from pdbadmin.v_data_remote where updated_at >= trunc(sysdate) - interval '1' day * 3 ) src on (tgt.col_id = src.col_id) when matched then update set ... when not matched then insert ...; </code> it runs successfully: i see incoming traffic up to 2Mb, and query finishes after +-10 minutes. I am the only user of the database, no other people works with it for now. I have also tried inserting the data into temporary table; using fetch cursor bulk collect; running execute immediate in the procedure, result was the same - execution freezes. Also worth mentioning that i also successfully realised ETL process for second table: it is much wider: daily data needed for transferring is about 50k rows and the number of collumns is more than 20. I did it with similar merge procedure that runs successfully unlike the previously discussed one. I want to know if it is possible to achieve success in running my merge procedure for the 'long' table or what might be other solutions to this problem
Categories: DBA Blogs

implicitStatementCacheSize appears to leave open cursors

Tom Kyte - Tue, 2025-09-16 23:17
Hey Tom, I have a Java application that runs thousands of different types of queries against an Oracle database millions of times. I wanted to save the query preparation time by using the oracle.jdbc.implicitStatementCacheSize JDBC property to cache prepared queries. But I easily end up with an error ORA-01000: maximum open cursors exceeded, even when running a single query at a time and reading it to completion. In my mind, an open cursor represents a way to scroll through the results of a query via communication with the database server. I don't immediately see a correlation between a statement and a cursor beyond the idea that the statement yields a cursor when executed. But it appears to be deeper than that in the Oracle JDBC driver. See the following example code that can quickly reproduce what I am experiencing: <code> public class OracleCursorExhaustionThroughStatementCaching { public static void main(String[] args) { try { Class.forName("oracle.jdbc.driver.OracleDriver"); final Properties props = new Properties(); props.put("user", "scott"); props.put("password", "tiger"); // CURSORS on the remote system are set currently to 300. Just run more unique queries than there are cursors to reproduce. // This cache should only be holding statement information and cursors should only be used during an individual query props.put("oracle.jdbc.implicitStatementCacheSize", "1500"); // commenting/removing this line allows this test to run without error try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@someserver:1521/mydb", props)) { DatabaseMetaData meta = c.getMetaData(); System.out.println("Product: " + meta.getDatabaseProductName()); System.out.println("Version: " + meta.getDatabaseProductVersion()); System.out.println("Driver: " + meta.getDriverVersion()); System.out.println("JVM Version: " + System.getProperty("java.runtime.version")); for(int i = 0; i < 1000; i++) { // Each statement will be closed after executing try(PreparedStatement ps = c.prepareStatement("select " + i + " from dual")) // for demo a unique query against dual is enough { // Being explicit with closing the result set after execution because logically this is the end of the cursor. (Statement close closes it anyway) try(ResultSet rs = ps.executeQuery()) { while(rs.next()) ; // just read each result set fully, which should bring the cursor to its end } } } } } catch(Exception ex) { ex.printStackTrace(); } } } </code> So on my machine and database this code yields the following: Product: Oracle Version: Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production Version 19.22.0.0.0 Driver: 23.8.0.25.04 JVM Vers...
Categories: DBA Blogs

Happy 21st Anniversary to my Oracle Security Blog

Pete Finnigan - Tue, 2025-09-16 23:17
The 21st anniversary of this blog is coming up on the 20th September 2025. I started this blog on the 20th of September 2004 and it has been doing well ever since. I write almost exclusively on the subject of....[Read More]

Posted by Pete On 10/09/25 At 11:55 AM

Categories: Security Blogs

Can we Block EXECUTE ANY PROCEDURE for our API?

Pete Finnigan - Tue, 2025-09-16 23:17
I did a five part series on the security of AUDSYS.AUD$UNIFIED and showed how it works at a functional level and how it might be secured by Oracle and how we might design a similar system using standard database license....[Read More]

Posted by Pete On 09/09/25 At 08:56 AM

Categories: Security Blogs

Testing a READONLY table and sometimes WRITE and DELETE

Pete Finnigan - Tue, 2025-09-16 23:17
This is the next part of the series looking at the AUDSYS schema and AUD$UNIFIED table that Oracle has created and protected. In the first part we explored what AUDSYS and AUD$UNIFIED looks like in terms of security; in part....[Read More]

Posted by Pete On 03/09/25 At 11:38 AM

Categories: Security Blogs

Implement a Test System to Create a Readonly and Sometimes Insert / Delete Table

Pete Finnigan - Tue, 2025-09-16 23:17
This is the next part (4th part) of the series exploring the AUDSYS schema and AUD$UNIFIED table that is READONLY with a lot of INSERTING and sometimes deleting. In the first part we explored the AUDSYS schema and the AUD$UNIFIED....[Read More]

Posted by Pete On 26/08/25 At 08:54 AM

Categories: Security Blogs
