Feed aggregator
SQL Developer and CBO
In recent posts I’ve started with the conclusion so that you can decide very quickly whether or not the article is likely to be relevant to your current requirements.
SQL Developer (I have been told) always uses the character data type to pass bind variables to the database and leaves it up to the database/optimizer to deal with any conversions it might need. If the code is then used with the correct data type for the bind variables the optimizer will probably produce different cardinality estimates for exactly the same code, and changes in cardinality estimates may result in changes in execution plans.
Even when plans with the correct bind variable datatypes match plans with the character datatype they may still perform differently for “exactly the same” inputs. (On the plus side, the plans with the correct datatype are more likely to have the better performance.)
When checking execution plans from memory (dbms_xplan.display_cursor()) always check the Predicate Information section; there is also a peeked_binds option to the format call that makes differences in bind datatypes very obvious.
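For example, the call used in the test script at the end of this note:

select * from dbms_xplan.display_cursor(format=> 'allstats last peeked_binds')
/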
Preamble
At the POUG2025 (Polish Oracle User Group) conference earlier this month, I attended a presentation by Monika Lewandowska on the special treatment Oracle gives the nvl() operator. While showing us an execution plan that demonstrated an important point, Monika mentioned in passing that she hadn’t yet worked out why one of the cardinality estimates (E-rows) showed the value it did.
I looked at the plan and, since we’d been given all the necessary details of the data, found that I could state very confidently that I didn’t know why the number had appeared either.
After getting home, building a smaller model of the dataset, and exchanging a couple of emails with Monika I discovered the (horrible) answer. If you want to build the model, the script is at the end of this note, I’m just going to show you the query I executed (SQL*Plus, 19.11 and 23.6) and two (different) execution plans:
select /* PIDN */
count(*),
max(product_name) max_name
from prod
where product_id = nvl(:nProdId, product_id)
/
The table prod holds 10M rows, the value of column product_id is 1 for half the rows, and unique even numbers for the other half. There is a hybrid histogram on product_id, so the optimizer is (approximately, of course) aware of the data pattern, and product_id is declared not null.
You’ll notice that the query includes a bind variable :nProdId, and a call to nvl() on that bind variable. Here’s the plan, pulled from memory by a call to dbms_xplan.display_cursor(), when I supplied NULL for that variable:
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:08.25 | 29717 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:08.25 | 29717 |
| 2 | VIEW | VW_ORE_1807A7AF | 1 | 10M| 10M|00:00:07.09 | 29717 |
| 3 | UNION-ALL | | 1 | 10M| 10M|00:00:05.54 | 29717 |
|* 4 | FILTER | | 1 | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID BATCHED| PROD | 0 | 1 | 0 |00:00:00.01 | 0 |
|* 6 | INDEX RANGE SCAN | PROD_I1 | 0 | 1 | 0 |00:00:00.01 | 0 |
|* 7 | FILTER | | 1 | | 10M|00:00:02.64 | 29717 |
| 8 | TABLE ACCESS FULL | PROD | 1 | 10M| 10M|00:00:01.13 | 29717 |
---------------------------------------------------------------------------------------------------------------------
(If you’re not familiar with the transformation that the optimizer has applied to the original query then you may want to read this note on OR-expansion of nvl() predicates, or this more recent note on the topic by Monika.)
The plan is exactly what I expected, which didn’t correspond to the cardinality estimate that Monika’s data had shown. So, after an exchange of email, I repeated the test but switched to “Monika-mode” to get the following plan (which is only slightly different):
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:10.04 | 29717 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:10.04 | 29717 |
| 2 | VIEW | VW_ORE_1807A7AF | 1 | 2478K| 10M|00:00:08.87 | 29717 |
| 3 | UNION-ALL | | 1 | 2478K| 10M|00:00:07.33 | 29717 |
|* 4 | FILTER | | 1 | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID BATCHED| PROD | 0 | 1 | 0 |00:00:00.01 | 0 |
|* 6 | INDEX RANGE SCAN | PROD_I1 | 0 | 1 | 0 |00:00:00.01 | 0 |
|* 7 | FILTER | | 1 | | 10M|00:00:04.42 | 29717 |
|* 8 | TABLE ACCESS FULL | PROD | 1 | 2478K| 10M|00:00:02.89 | 29717 |
---------------------------------------------------------------------------------------------------------------------
Note how the cardinality estimate (E-Rows) for operation 8 has dropped to roughly a quarter of the estimate shown in the previous plan – and that 1/4 is the effect that Monika had shown in the original presentation. What’s going on?
The correct response you should give at this point is: “Where’s the Predicate Information?”
If you translated “product_id = nvl(:nprodid, product_id)” into a procedural form you might write it as:
if :nprodid is null then
        product_id = product_id
else
        product_id = :nprodid
end if
So my first thought about the difference between the two results was that Monika was running a version of Oracle that actually generated the product_id = product_id predicate then did some arithmetic that behaved as if it was comparing two different columns. That would explain why we both saw the estimated cardinality as (very close to) 1/4 of the expected value.
You could check my claim about the 1/4 estimate by creating the table with a duplicate of the product_id column, including its histogram, and checking the estimate for the predicate product_id = product_id_dup. Roughly speaking: the histogram tells the optimizer that half the data holds the value 1 and the other half has no repeats, so in the Cartesian product half the data from the “left hand table” will match half the data of the “right hand table” – and that “half of a half” is where the 1/4 drops out of the arithmetic.
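If you want to try that experiment, here’s a minimal sketch (the column product_id_dup and these steps are my illustration, not part of the original test; run it against the prod table built by the script at the end of this note):

alter table prod add product_id_dup number;

update prod set product_id_dup = product_id;
commit;

exec dbms_stats.gather_table_stats(user, 'prod', method_opt => 'for all columns size skewonly')

explain plan for
select count(*) from prod where product_id = product_id_dup;

select * from dbms_xplan.display();

rem     expect E-Rows of roughly 2.5M - a quarter of the 10M rows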
The email we exchanged was about the Oracle version, the predicate information and the client application.
- We were using different versions of Oracle – both 19c but different RUs
- We were using different applications – I was using SQL*Plus, she was using SQL Developer
- The two sets of predicate information did not match; here they are (based on my sample data):
Predicate Information (identified by operation id): (SQL*Plus)
---------------------------------------------------
4 - filter(:NPRODID IS NOT NULL)
6 - access("PRODUCT_ID"=:NPRODID)
7 - filter(:NPRODID IS NULL)
Predicate Information (identified by operation id): ("SQL Developer mode")
---------------------------------------------------
4 - filter(:NPRODID IS NOT NULL)
6 - access("PRODUCT_ID"=TO_NUMBER(:NPRODID))
7 - filter(:NPRODID IS NULL)
8 - filter("PRODUCT_ID"=TO_NUMBER(TO_CHAR("PRODUCT_ID")))
My guess wasn’t right, but it was in the right ballpark. The problem wasn’t a strange optimizer bug dependent on the version of Oracle, but it was about the arithmetic of a predicate very similar to “product_id = product_id”.
That predicate had appeared because (I have been told) SQL Developer always uses a character datatype to supply bind variables to the server and lets the server sort out any problems of coercion; so the “Monika-mode” hack I had used from SQL*Plus was to declare variable nProdId varchar2(30) rather than variable nProdId number.
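In SQL*Plus terms the difference is just the declaration; both versions appear in the test script at the end of this note:

rem     the "correct" datatype
variable nProdId number
exec :nProdId := null

rem     "Monika-mode" - mimicking SQL Developer's character binds
variable nProdId varchar2(30)
exec :nProdId := null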
Because :nprodid was a character bind variable the mechanism the optimizer had to use to deal with the predicate product_id = nvl(:nprodid, product_id) was to convert the right-hand expression to a number, producing the predicate product_id = to_number(nvl(:nprodid, product_id)) – except you can’t apply to_number() to a number, so the optimizer had to convert the numeric column inside the nvl() to a character, leading to the rewritten predicate product_id = to_number(nvl(:nprodid, to_char(product_id))) before the _or_expand_nvl_predicate transformation is applied.
If you want to confirm that Oracle goes through this intermediate step you can generate the CBO (10053) trace, and you will find in it the text (reformatted here for readability):
Transformed predicate
"PROD"."PRODUCT_ID"=TO_NUMBER(NVL(:B1,TO_CHAR("PROD"."PRODUCT_ID")))
to
:B1 IS NOT NULL AND "PROD"."PRODUCT_ID"=TO_NUMBER(:B2)
OR :B3 IS NULL AND "PROD"."PRODUCT_ID"=TO_NUMBER(TO_CHAR("PROD"."PRODUCT_ID"))
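(To generate the trace I just un-commented the two event-setting lines that wrap the query in the test script:)

alter session set events '10053 trace name context forever';

rem     run (and hard-parse) the query here

alter session set events '10053 trace name context off';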
As for the arithmetic: there are some functions where function(column) is treated as an “unknown, unpeekable value”, leading to a fixed selectivity guess, and there are some functions where the selectivity of function(column) is assumed to be the same as the selectivity of the underlying column. In this case the optimizer does the predicate arithmetic of product_id = product_id without realising that it is (effectively) calculating for a predicate that it would normally eliminate.
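Putting rough numbers on that: the histogram says half the rows hold the value 1, so (as with the product_id = product_id_dup thought-experiment above) the estimate comes out as:

        10,000,000 * (1/2 * 1/2) = 2,500,000

which is close to the 2478K reported for operation 8; the difference is presumably down to the hybrid histogram being only an approximate picture of the data.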
Consequences
Changes in cardinality estimates can lead to changes in execution plans. This was a very simple query with a very obvious change in estimate, but no change in plan; in production databases the variation in estimate may be far less obvious. If you use SQL Developer to create and test SQL and then embed the SQL in some other tool (with the correct bind declarations) for production execution you may have difficulty working out why “it works okay in test but picks the wrong plan in production”.
There are variations on this theme: the two plans I’ve shown have the same plan hash value – but do different amounts of work because of the calls to conversion functions. They would also appear as two separate child cursors if one session was using character binds and the other was using numeric binds. Have you ever heard the complaint: “This query runs faster for userX than userY, even though the plan is the same and they’re both using the same {some column name/description}”? Maybe they’re using different tools to run the query.
This is just another reason for getting into the habit of “always check the Predicate Information”. In fact this prompts me to add a comment about including the “peeked_binds” format option in the call to dbms_xplan.display_cursor() et al., which gives you a very clear declaration of the source of the problem (if any binds appear). Here are the two sections from my demo:
Peeked Binds (identified by position):
--------------------------------------
1 - :1 (NUMBER): (null)
Peeked Binds (identified by position):
--------------------------------------
1 - :1 (VARCHAR2(30), CSID=873): (null)
While SQL Developer makes it a little awkward to test your SQL with numeric bind variables, SQL*Plus has a similar limitation with date bind variables. Although, as we saw above, you need only write variable n1 number in SQL*Plus to create a numeric bind variable, the equivalent variable d1 date results in the following response (in 23.6):
Usage: VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
NVARCHAR2 (n) | CLOB | NCLOB | BLOB | BFILE |
REFCURSOR | BINARY_FLOAT | BINARY_DOUBLE |
BOOLEAN | VECTOR] ]
If you want to test your code to generate execution plans with date bind variables (from SQL*Plus) you’ll have to create a PL/SQL block that defines a variable, populates it, then does something that forces the statement to be parsed.
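Here’s a minimal sketch of that workaround (the table orders and its column order_date are hypothetical; any statement referencing a true DATE bind will do):

declare
        d1              date := to_date('01-Sep-2025','dd-mon-yyyy');
        v_count         number;
begin
        -- executing the statement forces a parse with a genuine DATE bind (it appears as :B1)
        select  count(*)
        into    v_count
        from    orders                  -- hypothetical table
        where   order_date = d1;        -- hypothetical DATE column
end;
/

rem     then find the sql_id of the embedded statement in v$sql and call
rem     dbms_xplan.display_cursor('{sql_id}', {child_number}, 'allstats last peeked_binds')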
Addendum 2
The link I posted to Bluesky publicising this note has generated a number of responses, covering SQL Developer for VS Code, “runscript”, and SQLcl (so far). I’ll draw together all the bits some time in the next few days; meanwhile, following an initial response about “Run Script (F5)”, Monika sent me an email about that feature of SQL Developer and I’ll just quote it here (with the reminder that SQL Developer is constantly being enhanced, so the following may not be true for every version):
If you run this:
variable nProdId number
exec :nProdId := null
select /* PIDN */
count(*),
max(product_name) max_name
from prod
where product_id = nvl(:nProdId, product_id)
/
as a script (F5) in SQL Developer it sends the bind as a number. Unfortunately I work with SQL Developer for VS Code and here it doesn’t work that way.
The model
If you want to build the data set, here is the test script:

rem
rem     Script:         monika_nvl_3a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sept 2025
rem     Purpose:
rem
rem     Last tested
rem             23.6.0.0
rem             19.11.0.0

drop table prod purge;

set linesize 132 pagesize 120 trimspool on tab off

execute dbms_random.seed(0)

prompt ====================================
prompt create table PROD: 50 * 200,000 rows
prompt Half the rows have product_id = 1
prompt The rest hold distinct even numbers
prompt ====================================

create table prod as
with data as (
        select rownum rn from dual connect by level <= 2e5
),
products as (
        select 'Product ' || to_char(rownum,'FM000') product_name
        from dual connect by level <= 50
)
select
        case when mod(rownum,2) = 1 then 1 else rownum end product_id,
        prd.product_name
from
        products prd,
        data dat
/

alter table prod modify (product_id not null);
create index prod_i1 on prod(product_id);

spool monika_nvl_3a.lst

select * from (
        select
                product_id,
                count(*) product_count,
                sum(count(*)) over () total_rows
        from    prod
        group by product_id
        order by product_id
)
where rownum < 11
/

select count(*) from prod;

prompt ======================
prompt Collecting histograms
prompt ======================

exec dbms_stats.gather_table_stats(user, 'prod', method_opt => 'for all columns size skewonly')

select column_name, histogram, num_buckets
from    user_tab_cols
where   table_name = 'PROD'
/

alter system flush shared_pool;
alter session set statistics_level = all;

set timing on
set serveroutput off
column max_name format a32

prompt =========================================================
prompt Test for NULL (product not supplied): nProdId as number
prompt E-Rows is 10M, time to run ca. 9 seconds.
prompt Note the simple filter predicate, and the absence of a
prompt predicate for op. 8 when product_id is declared not null.
prompt If not so declared we'd see "product_id is not null"
prompt ===========================================================

variable nProdId number
exec :nProdId := null

select /* PIDN */
        count(*),
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

select * from dbms_xplan.display_cursor(format=> 'allstats last peeked_binds')
/

prompt =========================================================
prompt Test for NULL (product not supplied): nProdId as varchar2
prompt E-Rows is ca. 2.5M, time to run ca. 11 seconds.
prompt Note the more complex filter predicates, especially op. 8
prompt =========================================================

variable nProdId varchar2(30)
exec :nProdId := null

-- alter session set events '10053 trace name context forever';

select /* PIDN */
        count(*),
        max(product_name) max_name
from    prod
where   product_id = nvl(:nProdId, product_id)
/

-- alter session set events '10053 trace name context off';

select * from dbms_xplan.display_cursor(format=> 'allstats last peeked_binds')
/

spool off
CONNECT_TIME with reset on action
PL/SQL package
Agent OEM 24ai
General Question for your thought or Experience with the Outbox Pattern and viable alternatives
Remote Procedure Call (RPC) Dependency Management - where is remote timestamp stored?
Errorhandling in Ansible
By default, if a task in a playbook fails, then the execution of the playbook is stopped for that host.
- name: PLAY1
  hosts: localhost
  gather_facts: no
  tasks:
    - name: let this shell-command fail
      ansible.builtin.shell: exit 1
    - name: let this shell-command complete
      ansible.builtin.shell: exit 0

As you can see, the 2nd task is not executed. If you want to continue in such a case, the ignore_errors parameter is your friend:
- name: let this shell-command fail
  ansible.builtin.shell: exit 1
  ignore_errors: true

By default, Ansible evaluates the exit code of the module; in the case of the shell module, that is the exit code of the last command. But for some commands that is not adequate. For example, the Oracle command-line tool sqlplus, used to submit SQL commands, returns an exit code of 0 whenever it can connect to the database instance – the exit code is not related to the result of your SQL commands. Error messages in Oracle are prefixed by ORA-. So, if you want to check for application errors, you have to implement the check yourself. For that, you can use the failed_when option.
- name: let this shell-command fail
  ansible.builtin.shell: |
    . ~/.profile
    echo "select count(*) from all_users;" | sqlplus / as sysdba
  register: number_of_users
  failed_when: "'ORA-' in number_of_users.stdout"
Caution: in this case the exit code of the shell is no longer evaluated. To also catch the exit code of the sqlplus call (e.g. sqlplus cannot connect to the database, or the sqlplus binary is not found), you have to add the (default) condition back in:

failed_when: "number_of_users.rc != 0 or 'ORA-' in number_of_users.stdout"

But caution! Not all modules will have an rc field.
Conceptually, Ansible is not the fastest tool: for each task it will usually log in to the remote server via ssh. If you have to run several checks in the shell then, instead of running each in a separate task, you can run all the check commands in one shell task and evaluate the results afterwards.
- name: run many check commands
  ansible.builtin.shell: |
    mount | grep ' on /u00 '    # check if /u00 is mounted
    rpm -q ksh 2>&1             # check if ksh is installed
    exit 0                      # do not fail if rpm exit != 0; we will do our own errorhandling
  register: checks

- name: fail if no /u00 is mounted
  fail:
    msg: "/u00 is not mounted"
  when: "' on /u00 ' not in checks.stdout"

- name: No ksh found, try to install it
  yum:
    name: ksh
    state: present
  when: "'package ksh is not installed' in checks.stdout"
If you only want to throw an error, then you can do it directly in the shell task (note that the whole expression has to sit inside one quoted string):

failed_when: "' on /u00 ' not in checks.stdout or 'package ksh is not installed' in checks.stdout"
But if you parse the output afterwards, you can run tasks to fix the error.
Sometimes it is difficult to parse the output if several commands return the same output, e.g. “OK”. If your check commands always return exactly one line each, then you can directly parse the output of a specific command: the output of the 3rd command is in checks.stdout_lines[2]. In the example above that will not work, because grep returns either exit code 0 plus the matching line (found) or exit code 1 and no output at all (not found), so the line numbering shifts. To fix that, expand the command so that it always produces exactly one line: mount | grep ' on /u00 ' || echo error
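For example, a short sketch of the expanded variant (the fail task is my assumption about how you might consume the now-stable line numbering):

- name: run check commands, exactly one output line each
  ansible.builtin.shell: |
    mount | grep ' on /u00 ' || echo error    # line 0: mount info, or 'error'
    rpm -q ksh 2>&1                           # line 1: rpm -q always prints exactly one line
    exit 0                                    # errorhandling is done in the following tasks
  register: checks

- name: fail if /u00 is not mounted
  ansible.builtin.fail:
    msg: "/u00 is not mounted"
  when: checks.stdout_lines[0] == 'error'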
Do not let the check task itself fail: the result is usually unreadable, because all the information is printed on one line.

Instead, use ignore_errors: true and failed_when: false in the task, and do the error handling in a separate task with a customized error message. To print the multi-line list stdout_lines, use debug:; otherwise you can directly use ansible.builtin.fail: with a customized message:
- name: force sqlplus to throw error because of missing environment
  ansible.builtin.shell: |
    /u00/app/oracle/product/19.7.0/bin/sqlplus -L / as sysdba @myscript.sql 2>&1
  register: checks
  ignore_errors: true
  failed_when: false

- name: Check for errors of task before
  debug: var=checks.stdout_lines
  failed_when: checks.rc != 0 or 'ORA-' in checks.stdout
  when: checks.rc != 0 or 'ORA-' in checks.stdout

As an example of this scenario: a customer of mine does an offline backup of the development databases, and I have to run an Ansible playbook against all databases. If the playbook runs against a host at backup time, it will fail because the database is down – but after some minutes the database will be restarted.
What can we do now?
- Wait until the database is up again. That is possible – see the example of ansible.builtin.wait_for in my blog post Parallel execution of Ansible roles. But for this scenario it is a waste of time: the database may have been stopped for some other reason (not for backup) and will not be restarted within the next few minutes.
- Try again after a while. My playbook for all hosts (parallel, forks=5) takes about 1 hour. The idea is to remember each host with a stopped database and to continue with the next host; after the play has finished for all hosts, restart the play for the remembered hosts.
- The 1st play runs against all database hosts; it (see the sketch after this list):
  - gets the status of the databases on the host
  - assigns the database instances to an is_running and a not_open list
  - includes the role to run against the running databases
  - dynamically adds the host to the group re_run_if_not_open if there are not_open databases
- The next play only runs for the re_run_if_not_open group; it:
  - includes the role to run against the (hopefully now running) databases
  - if a database is still down, we assume it is stopped permanently.
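A much-reduced sketch of that two-play pattern (the inventory group db_hosts, the pmon check, and the role name my_db_role are all invented for illustration; the real status check would interrogate the instances properly):

- name: PLAY1 - run against all database hosts
  hosts: db_hosts                       # invented inventory group
  gather_facts: no
  tasks:
    - name: get the status of the database (invented check)
      ansible.builtin.shell: ps -ef | grep [o]ra_pmon || echo 'not_open'
      register: db_status

    - name: put each host into a group reflecting the database status
      ansible.builtin.group_by:
        key: "{{ 're_run_if_not_open' if 'not_open' in db_status.stdout else 'db_is_open' }}"

    - name: run the real work against the running databases
      ansible.builtin.include_role:
        name: my_db_role                # invented role name
      when: "'not_open' not in db_status.stdout"

- name: PLAY2 - retry the hosts whose database was down
  hosts: re_run_if_not_open
  gather_facts: no
  tasks:
    - name: run the real work again (the database is hopefully up again by now)
      ansible.builtin.include_role:
        name: my_db_role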
The article Errorhandling in Ansible appeared first on dbi Blog.
Bring the consecutive number(strat_val,end_val) record based on id as one record and add new record if the consecutive number is breaks down
Materialized View Staleness Changes to NEEDS_COMPILE after DML
Materialized View USING TRUSTED CONSTRAINTS and staleness=UNKNOWN
Returning rows in the order they were inserted
Viewing table partition compression
Rman backup setup
Unable to use nested tables on my system
Pl/sql procedure freezes while fetching data from dblink
implicitStatementCacheSize appears to leave open cursors
Happy 21st Anniversary to my Oracle Security Blog
Posted by Pete On 10/09/25 At 11:55 AM
Can we Block EXECUTE ANY PROCEDURE for our API?
Posted by Pete On 09/09/25 At 08:56 AM
Testing a READONLY table and sometimes WRITE and DELETE
Posted by Pete On 03/09/25 At 11:38 AM
Implement a Test System to Create a Readonly and Sometimes Insert / Delete Table
Posted by Pete On 26/08/25 At 08:54 AM