Feed aggregator

Oracle.DataAccess.Client.OracleException ORA-03135: connection lost contact

Tom Kyte - Tue, 2025-06-03 02:19
I am using IIS as the app server, with ODP.NET against Oracle 10.2.0. After 30 minutes of idle time I get the error ORA-03135: connection lost contact. The error is the same even if I use SQL*Plus. I asked the network group to increase the firewall idle timeout to 4 hours, which they did for port 1521; I am not sure whether timeout values can be set for the return ports. Since it's a web service on IIS, the w3wp.exe worker process opens an active connection, and with the default connection pooling settings one connection is maintained. After 30 minutes idle, the next call gets the error. I tried the same with SQL*Plus and I get the same error. Is there a way or a workaround for this? Appreciate your help.
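A common mitigation for this class of problem (not something the poster mentions) is to enable Dead Connection Detection on the database server, so that Oracle Net sends a probe packet before the firewall's idle limit expires. A minimal sketch of the server-side sqlnet.ora entry, assuming a 10-minute probe interval:

# server-side sqlnet.ora: send a keep-alive probe after 10 minutes of idle time,
# which also resets the firewall's idle-connection counter
SQLNET.EXPIRE_TIME = 10

On the client side, the ODP.NET connection string attributes Connection Lifetime and Min Pool Size can also be tuned so that pooled connections are recycled before the firewall drops them.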
Categories: DBA Blogs

With Oracle EBR, creating DML triggers when foreign keys are present

Tom Kyte - Tue, 2025-06-03 02:19
Dear Tom, I have a question related to Oracle EBR. We are using Oracle 19c (Enterprise Edition) and are now moving to be EBR compliant. Currently we have DML triggers defined on tables. These tables have foreign key constraints defined with the ON DELETE CASCADE option. For instance, ORDER_LINE_TABLE has a foreign key referring to ORDER_TABLE with cascade on delete (deleting a record in ORDER_TABLE deletes its order lines). As per the Oracle guidelines, the recommended approach is to define triggers on editioning views (EVs) instead of on the real tables, for many good reasons. But in this case it is not possible to create the triggers on EVs, because the cascade operation initiated by the FK constraint is performed at the real-table level and is therefore not visible to triggers on the EV. This limitation is described by Oren Nakdimon in this document: https://db-oriented.com/2025/01/08/ebr-part-13-the-trouble-with-foreign-keys-with-on-delete-clause-and-related-triggers/ As a workaround we have been testing defining the DML triggers on the real tables in scenarios like this. But we see some challenges, especially while using crossedition (XE) triggers during upgrades: since crossedition triggers operate at the real-table level, the DML triggers would see those DML operations happening on the real tables, so it is hard for us to achieve isolation during the upgrade period. Even though the above document says so, I doubt it is a rare combination to have FKs with cascade and DML triggers. Do you have a better approach for defining DML triggers in such scenarios, instead of the approach I mentioned above? Thank you & Kind Regards, Navinth
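For readers who want to reproduce the situation being described, a minimal sketch of the structure (table, view and trigger names here are illustrative, not from the original schema):

-- real tables behind the editioning views
create table order_table_t (
    order_id    number primary key,
    order_date  date
);

create table order_line_table_t (
    line_id     number primary key,
    order_id    number not null
                references order_table_t (order_id) on delete cascade,
    qty         number
);

-- editioning views exposed to the application
create or replace editioning view order_table as
    select order_id, order_date from order_table_t;

create or replace editioning view order_line_table as
    select line_id, order_id, qty from order_line_table_t;

-- A trigger defined on the editioning view would NOT fire for rows removed by the
-- cascading delete, because the cascade acts directly on order_line_table_t;
-- hence the workaround of putting the DML trigger on the real table instead.
create or replace trigger order_line_del_trg
after delete on order_line_table_t
for each row
begin
    null;   -- auditing / propagation logic would go here
end;
/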
Categories: DBA Blogs

Connection lost with ORA-01335 with ODA server

Tom Kyte - Tue, 2025-06-03 02:19
Hello, We have a migration project from Oracle Database 11g on an Exadata server to 19c on an ODA server. When we tested queries on the new server with client tools such as SQL Developer, SQL*Plus and SQLcl, the connection was lost after 1 hour of inactivity. The only difference between Exadata and ODA is the network architecture: in the traceroute, for Exadata we go through switch A, but for ODA we go first through a firewall and then switch B to reach the ODA. We increased the timeout in the firewall for port 1521 to 24h, but the issue remained. When we connect to the ODA directly, there is no timeout. I added this line, SQLNET.EXPIRE_TIME = 10, to my sqlnet.ora file and it seems to work perfectly. My question: is this the right solution, or is there another, more efficient one? And if it is the only solution, do I need extra configuration for connections that don't use sqlnet.ora, such as SSIS (which uses ODBC) and others that use JDBC? Thanks in advance
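One driver-independent complement (my suggestion, not from the original post) is to switch on the Oracle Net keepalive in the connect descriptor itself with (ENABLE=BROKEN), so that ODBC and JDBC connections using that descriptor benefit without touching each client's sqlnet.ora. A sketch of a tnsnames.ora-style entry with placeholder host and service names:

ODA19C =
  (DESCRIPTION =
    (ENABLE = BROKEN)
    (ADDRESS = (PROTOCOL = TCP)(HOST = oda-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb19c.example.com))
  )

Note that (ENABLE=BROKEN) relies on the operating system's TCP keepalive interval, which often defaults to two hours and may need tuning, whereas SQLNET.EXPIRE_TIME set on the database server sends probes from the server side regardless of which client or driver opened the connection.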
Categories: DBA Blogs

Database DDL trigger

Tom Kyte - Tue, 2025-06-03 02:19
We use OCI GoldenGate Microservices to replicate tables from an Oracle database to big data targets. I was tasked with creating a process to replicate any "ALTER TABLE ADD COLUMN" statements so that the big data team can modify their schema safely. A tested solution using a database trigger in a dedicated schema in the test environment was rejected because 1. triggers are evil, and 2. database patching can have unpredictable consequences when a database trigger is present. I wonder if that is true. I used a similar trigger to log my own DDL for journaling for 5 years in a database which passed through multiple patchings without any issues. Still, my experience is not statistically valid proof. My colleague suggested using the aud$unified table, but that involves creating a procedure executed by a frequently running scheduled job, because releases are not 100% regular. Is there a better approach? With regards, Alex

Here is the setup:

prompt create table utility.ddl_log
--drop table utility.ddl_log purge;
create table utility.ddl_log
(   ddl_time    timestamp(6)  default systimestamp not null
,   ddl_by      varchar2(30)  default sys_context('userenv', 'current_user')
,   os_user     varchar2(100) default sys_context('userenv','os_user')
,   host        varchar2(100) default sys_context('userenv','host')
,   ip_address  varchar2(100) default sys_context('userenv','ip_address')
,   module      varchar2(100) default sys_context('userenv','module')
,   terminal    varchar2(100) default sys_context('userenv','terminal')
,   operation   varchar2(100)
,   owner       varchar2(50)
,   object_name varchar2(50)
,   object_type varchar2(50)
,   sqltext     clob
);

create or replace trigger utility.after_gg_table_ddl
after alter on database
declare
    l_2run   binary_integer := 0;
    l_sql_id varchar2(16);
    l_sql    clob;
begin
    --p('ora_dict_obj_name = '||ora_dict_obj_name);
    select count(*)
    into   l_2run
    from   dba_users u
    where  1 = 1
    and    profile = 'APP_USER'          -- only application schemas, to exclude any other schemas
    and    ora_dict_obj_type = 'TABLE'
    and    exists (
               select 1
               from   all_log_groups a
               where  a.owner = u.username
               and    a.table_name = ora_dict_obj_name
               and    log_group_type is not null
           )
    and    ora_dict_obj_owner = u.username
    ;
    --p('l_2run = '||to_char(l_2run)||'; sid = '|| sys_context('userenv','sid'));
    if l_2run > 0 then
        select sql_id
        into   l_sql_id
        from   gv$session
        where  sid = sys_context('userenv','sid')
        and    rownum = 1
        ;
        select lower(sql_fulltext)
        into   l_sql
        from   gv$sql
        where  sql_id = l_sql_id
        and    rownum = 1;   -- to be on the safe side, as many sessions and children can have the same sql_id
        -- This part is not required
        insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext, prevsqltext)
        values (ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, ora...
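For comparison, here is a rough sketch of the colleague's suggested alternative: a scheduler job that polls the unified audit trail for ALTER TABLE statements instead of relying on a DDL trigger. The job name, interval and filters are illustrative assumptions, and a real implementation would track a high-water-mark timestamp rather than use a fixed look-back window:

-- assumes an audit policy already captures ALTER TABLE for the application schemas
begin
    dbms_scheduler.create_job(
        job_name        => 'UTILITY.POLL_ALTER_TABLE_DDL',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
            insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext)
            select action_name, object_schema, object_name, 'TABLE', sql_text
            from   unified_audit_trail
            where  action_name = 'ALTER TABLE'
            and    upper(sql_text) like '%ADD%'
            and    event_timestamp > systimestamp - interval '15' minute;
        ]',
        repeat_interval => 'freq=minutely; interval=15',
        enabled         => true);
end;
/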
Categories: DBA Blogs

User Defined Extensions in SQLDeveloper Classic – something you can’t do in VSCode (yet)

The Anti-Kyte - Sun, 2025-06-01 06:16

I can tell you from personal experience that, when you reach a certain point in your life, you start looking for synonyms to use in place of “old”.
If you're a venerable yet still useful Oracle IDE, for example, you may prefer the term "Classic".
One thing SQLDeveloper Classic isn't is obsolete. It still allows customisations that are not currently available in its shiny new successor – the SQLDeveloper extension for VSCode.
Fortunately, there’s no reason you can’t run both versions at the same time – unless your corporate IT has been overzealous and either packaged VSCode in an MSI that prohibits installation of extensions or has a policy preventing extensions running because “security”.
Either way, SQLDeveloper Classic is likely to be around for a while.
One particular area where Classic still has the edge over its shiny new successor is when it comes to user-defined extensions.
In this case – finding out the partition key and method of a table without having to wade through the DDL for that object…

Table Partitioning Meta Data

The following query should give us what we’re after – details of the partitioning and sub-partitioning methods used for a table, together with a list of the partition and (if applicable) sub-partition key columns :

with part_cols as
(
    select 
        owner,
        name,
        listagg(column_name, ', ') within group ( order by column_position) as partition_key_cols
    from all_part_key_columns
    group by owner, name
),
subpart_cols as
(
    select 
        owner,
        name,
        listagg(column_name, ', ') within group ( order by column_position) as subpartition_key_cols
    from all_subpart_key_columns
    group by owner, name
)
select 
    tab.owner, 
    tab.table_name, 
    tab.partitioning_type, 
    part.partition_key_cols,
    tab.subpartitioning_type,
    sp.subpartition_key_cols
from all_part_tables tab
inner join part_cols part    
    on part.owner = tab.owner 
    and part.name = tab.table_name 
left outer join subpart_cols sp 
    on sp.owner = tab.owner 
    and sp.name = tab.table_name
where tab.owner = 'SH'
and table_name = 'SALES'
order by 1,2
/

That’s quite a lot of code to type in – let alone remember – every time we want to check this metadata, so let’s just add an extra tab to the Table view in SQLDeveloper.

Using this query, I’ve created an xml file called table_partitioning.xml to add a tab called “Partition Keys” to the SQLDeveloper Tables view :

<items>
    <item type="editor" node="TableNode" vertical="true">
        <title><![CDATA[Partition Keys]]></title>
        <query>
            <sql>
                <![CDATA[
                    with part_cols as
                    (
                        select 
                            owner,
                            name,
                            listagg(column_name, ', ') within group ( order by column_position) as partition_key_cols
                        from all_part_key_columns
                        group by owner, name
                    ),
                    subpart_cols as
                    (
                        select 
                            owner,
                            name,
                            listagg(column_name, ', ') within group ( order by column_position) as subpartition_key_cols
                        from all_subpart_key_columns
                        group by owner, name
                    )
                    select 
                        tab.owner, 
                        tab.table_name, 
                        tab.partitioning_type, 
                        part.partition_key_cols,
                        tab.subpartitioning_type,
                        sp.subpartition_key_cols
                    from all_part_tables tab
                    inner join part_cols part    
                        on part.owner = tab.owner 
                        and part.name = tab.table_name 
                    left outer join subpart_cols sp 
                        on sp.owner = tab.owner 
                        and sp.name = tab.table_name
                    where tab.owner = :OBJECT_OWNER
                    and table_name = :OBJECT_NAME
                    order by 1,2
                ]]>
            </sql>
        </query>
    </item>
</items>

Note that we’re using the SQLDeveloper supplied ( and case-sensitive) variables :OBJECT_OWNER and :OBJECT_NAME so that the data returned is for the table that is in context when we open the tab.

If you are familiar with the process of adding User Defined Extensions to SQLDeveloper and want to get your hands on this one, just head over to the Github Repo where I’ve uploaded the relevant file.
You can also find instructions for adding the tab to SQLDeveloper as a user defined extension there.
They are…

Adding a User Defined Extension

In SQLDeveloper select the Tools Menu then Preferences.

Search for User Defined Extensions

Click the Add Row button then click in the Type field and select Editor from the drop-down list

In the Location field, enter the full path to the xml file containing the extension you want to add

Hit OK

Restart SQLDeveloper.
When you select an object of the type for which this extension is defined (Tables in this example), you will see that the new tab has been added.

The new tab will work like any other :

Useful Links

The documentation for Extensions has been re-organised in recent years, but here are some links you may find useful :

As you’d expect, Jeff Smith has published a few articles on this topic over the years. Of particular interest are :

The Oracle-Samples GitHub Repo contains lots of example code and some decent instructions.

I’ve also covered this topic once or twice over the years and there are a couple of posts that you may find helpful :

Looking for help with JSON object

Tom Kyte - Wed, 2025-05-28 08:04
Attempting to learn and make use of JSON data within a table. Stuck on the following and would really appreciate some help. Looking for help to create a list of ITEM_IDs where a match is found in the JSON object. Sample column data (column: recall_control_json):

{ "recall_control": [
    { "item_id":"item1", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL3", "VAL5"] },
    { "item_id":"item2", "item_cd1":"CA", "item_cd2":"AP", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2"] },
    { "item_id":"item3", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"CO", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL3"] },
    { "item_id":"item4", "item_cd1":"CA", "item_cd2":"TW", "item_cd3":"EX", "descr":"Description text here...", "recall_list": ["VAL1", "VAL2", "VAL4"] }
] }

I am trying to write a query that returns a list of ITEM_IDs where the RECALL_LIST contains VAL3, or to learn why it can't be done and/or how it could be done. So far, I have been able to return all the ITEM_ID values with this query:

select json_query(rt.recall_control_json, '$.recall_control.item_id' with wrapper) as recallList
from   recallTbl rt
where  recall_id = 502;

RETURNS -- ["item1","item2","item3","item4"]

But I have not been able to update the where clause so that I only get the ones I need.
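One common way to get there (my sketch, not from the original post) is to unnest the array with json_table, expose each recall_list value through a nested path, and then filter on that value:

-- item_ids whose recall_list contains the value 'VAL3'
select distinct jt.item_id
from   recallTbl rt,
       json_table(
           rt.recall_control_json, '$.recall_control[*]'
           columns (
               item_id varchar2(30) path '$.item_id',
               nested path '$.recall_list[*]'
                   columns ( recall_value varchar2(30) path '$' )
           )
       ) jt
where  rt.recall_id = 502
and    jt.recall_value = 'VAL3';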
Categories: DBA Blogs

Setup Apex 22.1 email with MS O365 Outlook SMTP

Tom Kyte - Wed, 2025-05-28 08:04
I am trying to integrate Microsoft Office 365 Outlook's SMTP with Oracle APEX 22.1 (running on Amazon RDS for Oracle 19c). I have used the following configuration in APEX:

SMTP server name: smtp.office365.com
SMTP port: 587
Username: <my-email-id@outlook.com>
Password: <generated app password after 2-factor authentication>
Use TLS: Yes

I have uploaded the correct MS root and intermediate certificates to the auto-login wallet and created the required network ACLs. When I try to send email using APEX (SQL*Plus), I do not get any error, but the email never reaches the inbox; instead the mail queue in APEX gets populated.

begin
    apex_util.set_security_group_id (p_security_group_id => 5140893825405459);
    APEX_MAIL.SEND(p_from => 'awsrahul2024@outlook.com',
                   p_to   => 'awsrahul2024@outlook.com',
                   p_subj => 'Test Mail2',
                   p_body => 'This is a test email');
    apex_mail.push_queue();
END;
/

PL/SQL procedure successfully completed.

When I try to validate the email configuration, I get the following error.

SQL> begin
  2      APEX_INSTANCE_ADMIN.validate_email_config;
  3  end;
  4  /
begin
*
ERROR at line 1:
ORA-29279: SMTP permanent error: 535 5.7.139 Authentication unsuccessful, basic authentication is disabled.
[BM1P287CA0002.INDP287.PROD.OUTLOOK.COM 2025-05-21T21:47:12.528Z 08DD989AFE00C8F6]
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 1284
ORA-06512: at "SYS.UTL_SMTP", line 57
ORA-06512: at "SYS.UTL_SMTP", line 142
ORA-06512: at "SYS.UTL_SMTP", line 446
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 1268
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 2588
ORA-06512: at "APEX_220100.WWV_FLOW_INSTANCE_ADMIN", line 1671
ORA-06512: at line 2

Kindly advise. Thank you!
Categories: DBA Blogs

unique index

Tom Kyte - Wed, 2025-05-28 08:04
From the Oracle 10g Concepts Guide: "Oracle recommends that unique indexes be created explicitly, and not through enabling a unique constraint on a table. Alternatively, you can define UNIQUE integrity constraints on the desired columns. Oracle enforces UNIQUE integrity constraints by automatically defining a unique index on the unique key. However, it is advisable that any index that exists for query performance, including unique indexes, be created explicitly." What's the difference between creating a unique constraint while creating the table (anyway the unique constraint will create a unique index to enforce integrity) and creating a unique index explicitly?
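For reference, a minimal sketch of the two approaches being compared (illustrative table names, not from the question):

-- 1) unique constraint declared on the table: Oracle creates (or reuses) an index to enforce it
create table emp_a (
    emp_id  number,
    email   varchar2(100),
    constraint emp_a_email_uk unique (email)
);

-- 2) explicit unique index, with the constraint added separately and told to use that index
create table emp_b (
    emp_id  number,
    email   varchar2(100)
);

create unique index emp_b_email_uk on emp_b (email);

alter table emp_b
    add constraint emp_b_email_uk unique (email)
    using index emp_b_email_uk;

One practical difference: in the second form the index is an independent object whose attributes (tablespace, compression and so on) you control directly, and dropping or disabling the constraint later does not automatically drop the index, whereas an index created implicitly by the constraint is dropped along with it.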
Categories: DBA Blogs

Pagination Cost – 2

Jonathan Lewis - Tue, 2025-05-27 10:36

This note is a follow-on to a note I published a couple of years ago, and was prompted by a question on the MOS community forum (needs an account) about the performance impact of using bind variables instead of literal values in a clause of the form: offset 20 rows fetch next 20 rows only

The issue on MOS may have been to do with the complexity of the view that was being queried, so I thought I’d take a look at what happened when I introduced bind variables to the simple tests from the previous article. Here’s the (cloned) script with the necessary modification:

rem
rem     Script:         fetch_first_offset_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2025
rem

create table t1 
as
select 
        * 
from 
        all_objects
where   rownum <= 50000
order by 
        dbms_random.value
/

create index t1_i1 on t1(object_name);

alter session set statistics_level = all;
set serveroutput off

column owner format a32
column object_type format a12
column object_name format a32

spool fetch_first_offset_3.lst

prompt  ===================================
prompt  SQL with literals (non-zero offset)
prompt  ===================================

select
        owner, object_type, object_name
from
        t1
order by
        object_name
offset 
        10 rows
fetch next 
        20 rows only
/

select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last peeked_binds'));


variable offset_size    number
variable fetch_size     number

begin
        :offset_size  := 10; :fetch_size := 20;
end;
/

prompt  ==============
prompt  SQL with binds
prompt  ==============

alter session set events '10053 trace name context forever';

select
        owner, object_type, object_name
from
        t1
order by
        object_name
offset 
        :offset_size rows
fetch next 
        :fetch_size rows only
/

alter session set events '10053 trace name context off';

select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last peeked_binds'));

I’ve created a simple data set by copying 50,000 rows from the view all_objects and creating an index on the object_name column then, using two different strategies, I’ve selected the 21st to 30th rows in order of object_name. The first strategy uses literal values in the offset and fetch first/next clauses to skip 10 rows then fetch 20 rows; the second strategy creates a couple of bind variables to specify the offset and fetch sizes.

Here’s the execution plan (pulled from memory, with the rowsource execution statistics enabled) for the example using literal values:

SQL_ID  d7tm0uhcmpwc4, child number 0
-------------------------------------
select  owner, object_type, object_name from  t1 order by  object_name
offset  10 rows fetch next  20 rows only

Plan hash value: 3254925009

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |    33 (100)|     20 |00:00:00.01 |      35 |    333 |
|*  1 |  VIEW                         |       |      1 |     30 |    33   (0)|     20 |00:00:00.01 |      35 |    333 |
|*  2 |   WINDOW NOSORT STOPKEY       |       |      1 |     30 |    33   (0)|     30 |00:00:00.01 |      35 |    333 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |  50000 |    33   (0)|     30 |00:00:00.01 |      35 |    333 |
|   4 |     INDEX FULL SCAN           | T1_I1 |      1 |     30 |     3   (0)|     30 |00:00:00.01 |       5 |     28 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=30 AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">10))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=30)

As you can see, the optimizer has used (started) an index full scan to access the rows in order of object_name, but the A-Rows column tells you that it has passed just 30 rowids (the 10 to be skipped plus the 20 to be fetched) up to its parent (table access) operation, and the table access operation has passed the required columns up to its parent (window nosort stopkey), which can conveniently discard the first 10 rows that arrive and pass the remaining 20 rows up and on to the client without actually doing any sorting.

You can also see in the Predicate Information that the window operation has used the row_number() function to limit itself to the first 30 (i.e. 10 + 20) rows, passing them up to its parent where the “30 rows” predicate is repeated with a further predicate that eliminates the first 10 of those rows, leaving only the 20 rows requested.

So what does the plan look like when we switch to bind variables:

SQL_ID  5f85rkjc8bv8a, child number 0
-------------------------------------
select  owner, object_type, object_name from  t1 order by  object_name
offset  :offset_size rows fetch next  :fetch_size rows only

Plan hash value: 1024497473

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |      1 |        |   933 (100)|     20 |00:00:00.03 |     993 |    990 |       |       |          |
|*  1 |  VIEW                    |      |      1 |  50000 |   933   (1)|     20 |00:00:00.03 |     993 |    990 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|      |      1 |  50000 |   933   (1)|     30 |00:00:00.03 |     993 |    990 | 11264 | 11264 |10240  (0)|
|*  3 |    FILTER                |      |      1 |        |            |  50000 |00:00:00.02 |     993 |    990 |       |       |          |
|   4 |     TABLE ACCESS FULL    | T1   |      1 |  50000 |   275   (1)|  50000 |00:00:00.01 |     993 |    990 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber" <= GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE AND
              "from$_subquery$_002"."rowlimit_$$_rownumber" > :OFFSET_SIZE))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME") <= GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE)
   3 - filter(:OFFSET_SIZE < GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:OFFSET_SIZE))),0)+:FETCH_SIZE)

This looks like bad news – we haven’t taken advantage of an index to avoid visiting and sorting all the rows in the table. Operation 4 shows us a table scan passing 50,000 rows through a filter up to the window sort at operation 2, which discards the 49,970 rows we definitely don’t want before passing the remaining 30 rows to the view operation that discards the first 10 that we needed to skip. Why don’t we see the far more efficient index scan?

You may have noticed a couple of oddities in the Predicate Information.

  • Wherever you see the :offset_size bind variable the optimizer has wrapped it in to_number(to_char()) – why?! My first thought about this was that the double conversion made it impossible for the optimizer to peek at the value and use it to get a better estimate of cost, but that’s (probably) not why the index full scan disappeared.
  • The offset and fetch first values are both supposed to be numeric (according to the tram-tracks in the manual) so it seems a little strange that Oracle subjects just one of them to a double conversion.
  • What is that filter() in operation 3 actually trying to achieve? If you tidy up the messy bits it’s just checking two bind variables to make sure that the offset is less than the offset plus fetch size. This is just an example of “conditional SQL”. In this case it’s following the pattern for “columnX between :bind1 and :bind2” – allowing Oracle to short-circuit the sub-plan if the value of bind2 is less than that of bind1. (It wasn’t needed for the example where we used literals because Oracle could do the arithmetic at parse time and see that 10 was – and always would be – less than 30.)
  • What are the checks actually saying about the optimizer’s (or developer’s) expectation for the way you might use the feature? The generated SQL actually allows for negative, non-integer values here. Negative offsets are replaced by zero, and negative fetch sizes result in the query short-circuiting and returning no data (in fact any fetch size strictly less than 1 will return no rows – see the quick demo after this list).
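A quick way to see that short-circuit in action (my demo, not part of the original article), reusing the t1 table and bind variables from the script above but with a fetch size of zero:

begin
        :offset_size := 10; :fetch_size := 0;
end;
/

select
        owner, object_type, object_name
from
        t1
order by
        object_name
offset
        :offset_size rows
fetch next
        :fetch_size rows only
/

-- no rows selected: the FILTER operation evaluates
-- :offset_size < greatest(floor(:offset_size),0) + :fetch_size, i.e. 10 < 10, which is false,
-- so the sub-plan beneath it is never started.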

Hoping to find further clues about the poor choice of plan, I took a look at the “UNPARSED QUERY” from the CBO (10053) trace, and cross-checked against the result from using the dbms_utility.expand_sql() procedure; the results were (logically, though not cosmetically) the same. Here, with a little extra cosmetic tidying is the SQL the optimizer actually works with:

select 
        a1.owner owner,
        a1.object_type object_type,
        a1.object_name object_name 
from    (
        select 
                a2.owner owner,
                a2.object_type object_type,
                a2.object_name object_name,
                a2.object_name rowlimit_$_0,
                row_number() over (order by a2.object_name) rowlimit_$$_rownumber 
        from
                test_user.t1 a2 
        where
                :b1 < greatest(floor(to_number(to_char(:b2))),0)+:b3
        ) a1 
where 
        a1.rowlimit_$$_rownumber <= greatest(floor(to_number(to_char(:b4))),0) + :b5 
and     a1.rowlimit_$$_rownumber >  :b6
order by 
        a1.rowlimit_$_0
;

It’s fascinating that the optimizer manages to expand the original two bind variables to six bind variables (lots of duplication) and then collapse them back to two named bind variables for the purposes of reporting the Predicate Information. For reference:

  • b1 = b3 = b5 = fetch_size
  • b2 = b4 = b6 = offset_size

Line 15, which I’ve highlighted, is clearly the source of the “conditional SQL” filter predicate at operation 3 of the previous execution plan, so I thought I’d try running this query (pre-defining all 6 bind variables correctly) to see if I could get the index-driven plan by modifying that line.

My first attempt was simply to remove the (highly suspect) to_number(to_char()) – but that didn’t help. Then I thought I’d make it really simple by getting rid of the greatest(floor()) functions – and that didn’t help either. Finally I decided to change what was now :b4 + :b5 to a single bind variable :b7 with the right values – and that’s when I got the plan I wanted:

--------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |      1 |        |     20 |00:00:00.01 |      35 |
|*  1 |  VIEW                          |       |      1 |     30 |     20 |00:00:00.01 |      35 |
|*  2 |   WINDOW NOSORT STOPKEY        |       |      1 |     30 |     30 |00:00:00.01 |      35 |
|*  3 |    FILTER                      |       |      1 |        |     30 |00:00:00.01 |      35 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1    |      1 |  50000 |     30 |00:00:00.01 |      35 |
|   5 |      INDEX FULL SCAN           | T1_I1 |      1 |     30 |     30 |00:00:00.01 |       5 |
--------------------------------------------------------------------------------------------------

Workaround

Of course this doesn’t help answer the question – how do I make the query faster – it just highlights where in the current transformation the performance problem appears. Maybe it’s a pointer to some Oracle developer that there’s some internal code that could be reviewed – possibly for a special (but potentially common) pattern. Perhaps there’s a point of interception where a fairly small, isolated piece of code could be modified to give the optimizer the simpler expression during optimisation.

As for addressing the problem of finding a “client-oriented” mechanism, I found two solutions for my model. First add the (incomplete, but currently adequate) hint /*+ index(t1) */ to the SQL to get:

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |      1 |        | 50311 (100)|     20 |00:00:00.01 |      25 |
|*  1 |  VIEW                          |       |      1 |  50000 | 50311   (1)|     20 |00:00:00.01 |      25 |
|*  2 |   WINDOW NOSORT STOPKEY        |       |      1 |  50000 | 50311   (1)|     20 |00:00:00.01 |      25 |
|*  3 |    FILTER                      |       |      1 |        |            |     20 |00:00:00.01 |      25 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1    |      1 |  50000 | 50311   (1)|     20 |00:00:00.01 |      25 |
|   5 |      INDEX FULL SCAN           | T1_I1 |      1 |  50000 |   339   (1)|     20 |00:00:00.01 |       5 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0
              )+:B2 AND "from$_subquery$_002"."rowlimit_$$_rownumber">:B1))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:
              B2)
   3 - filter(:B1<GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:B2)

As you can see we now do the index full scan, but it stops after only 20 rowids have been passed up the plan. This isn’t a good solution, of course, since (a) it’s specific to my model and (b) the estimates still show the optimizer working on the basis of handling and forwarding 50,000 rows (E-rows).

The alternative was to tell the optimizer that since we’re doing pagination queries we’re only expecting to fetch a little data each time we execute the query – let’s add the hint /*+ first_rows(30) */ which gives us:

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |      1 |        |    33 (100)|     20 |00:00:00.01 |      25 |
|*  1 |  VIEW                          |       |      1 |     30 |    33   (0)|     20 |00:00:00.01 |      25 |
|*  2 |   WINDOW NOSORT STOPKEY        |       |      1 |     30 |    33   (0)|     20 |00:00:00.01 |      25 |
|*  3 |    FILTER                      |       |      1 |        |            |     20 |00:00:00.01 |      25 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1    |      1 |  50000 |    33   (0)|     20 |00:00:00.01 |      25 |
|   5 |      INDEX FULL SCAN           | T1_I1 |      1 |     30 |     3   (0)|     20 |00:00:00.01 |       5 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0
              )+:B2 AND "from$_subquery$_002"."rowlimit_$$_rownumber">:B1))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:
              B2)
   3 - filter(:B1<GREATEST(FLOOR(TO_NUMBER(TO_CHAR(:B1))),0)+:B2)

This is likely to be a much better strategy than “micro-management” hinting; it may even be appropriate to set the optimizer_mode at the session level with a logon trigger – first_rows_10 or first_rows_100 could well be a generally acceptable choice if most of the queries tend to be about reporting the first few rows of a large result set. A key point to note is that both E-Rows and Cost are reasonably representative of the work done, while the corresponding figures when we hinted the use of the index were wildly inaccurate.
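As a rough illustration of that session-level approach (my sketch, not from the original note; the trigger name and the user it targets are assumptions):

create or replace trigger set_first_rows_mode
after logon on database
begin
        -- switch only the reporting application's sessions to a first_rows optimizer mode
        if sys_context('userenv', 'session_user') = 'REPORT_APP' then
                execute immediate 'alter session set optimizer_mode = first_rows_100';
        end if;
end;
/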

MacIntyre, Memory Eternal

Greg Pavlik - Fri, 2025-05-23 06:57

There are a handful of living thinkers who have made me re-think fundamental presuppositions that I held consciously (or not) for some time in my early life. Each, in his own way, a genius - but in particular a genius in re-shaping the conceptualization of an intellectual space for me. Until yesterday they were, in no particular order: Noam Chomsky, David Bentley Hart, John Milbank, Michael Hudson, Alain de Benoist and Alasdair MacIntyre. We recently lost Rene Girard. Now MacIntyre is no longer with us. His precise analysis, pulling insights from thinkers ranging from Aristotle to Marx, was rarely matched in the contemporary world. The hammer blow that After Virtue was to so many of my assumptions and beliefs is hard to describe - my entire view of the modern project, especially around ethics, was undone. But it was also his wisdom about the human animal and what really mattered in terms of being a human being that set him apart.

A sad day for humanity.

To the servant of God, Alasdair, Vichnaya Pamyat': may your memory be eternal!

SQL Server 2025 – ZSTD – A new compression algorithm for backups

Yann Neuhaus - Thu, 2025-05-22 18:43
Introduction

SQL Server 2025 introduces a new algorithm for backup compression: ZSTD. As a result, SQL Server 2025 now offers three solutions for backup compression:

  • MS_XPRESS
  • QAT
  • ZSTD

In this blog, we will compare MS_XPRESS and ZSTD.

Environment

To perform these tests, the following virtual machine was used:

  • OS: Windows Server 2022 Datacenter
  • SQL Server: 2025 Standard Developer
  • CPU: 8 cores
  • VM memory: 12 GB
  • (SQL) Max server memory: 4 GB

Additionally, I used the StackOverflow database to run the backup tests (reference: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/).

ZSTD usage

There are several ways to use the new ZSTD compression algorithm. Here are two methods:

  • Add the following terms to the SQL backup commands: WITH COMPRESSION (ALGORITHM = ZSTD)
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5
  • Change the compression algorithm at the instance level:
EXECUTE sp_configure 'backup compression algorithm', 3; 
RECONFIGURE;
The initial data

The StackOverflow database used has a size of approximately 165 GB. To perform an initial test using the MS_XPRESS algorithm, the commands below were executed:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION, STATS = 5;

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 290.145 seconds (563.626 MB/sec).
SQL Server Execution Times: CPU time = 11 482 ms,  elapsed time = 290 207 ms.

For the second test, we are using the ZSTD algorithm with the commands below:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 171.338 seconds (954.449 MB/sec).
CPU time = 10 750 ms,  elapsed time = 171 397 ms.

It should be noted that my storage system cannot sustain its maximum throughput for an extended period. In fact, when transferring large files (e.g., 100 GB), the throughput drops after about 15 seconds (for example, from 1.2 GB/s to 500 MB/s).

According to the initial figures, the CPU usage of MS_XPRESS and ZSTD is broadly the same. However, since ZSTD completes the backup more quickly (based on these tests), the total CPU time consumed is lower with the ZSTD algorithm: because the backup duration is reduced, the time the CPU spends executing backup-related instructions is also reduced.

Comparison table for elapsed time with percentage gain:

Test Number | Compression Type | Duration In Seconds
1           | MS_XPRESS        | 290
2           | ZSTD             | 171

Performance: approximately 41% faster with ZSTD
Comparison of captured data

During the tests, performance counters were set up to gain a more accurate view of the behavior of the two algorithms during a backup. For this, we used the following counters:

  • Backup throughput/sec (KB)
  • Disk Read KB/sec (in my case, Disk Read KB/sec is equal to the values of the Backup Throughput/sec (KB) counter). In fact, the “Backup throughput/sec (KB)” counter reflects the reading of data pages during the backup.
  • Disk Write KB/sec
  • Processor Time (%)

We observe that the throughput is higher with the ZSTD algorithm. The drop that appears is explained by the fact that ZSTD enabled the backup to be completed more quickly. As a result, the backup operation took less time, and the amount of data collected is lower compared to the other solution. Additionally, it should be noted that the database is hosted on volume (S) while the backups are stored on another volume (T).

We also observe that the write throughput is higher when using the ZSTD algorithm.

For the same observed period, the CPU load is generally the same; however, ZSTD allows the backup to be completed more quickly (in our case). As a result, the overall CPU load is generally lower.

We also observe that the backup ratio (on this database) is higher with the ZSTD algorithm. This indicates that the size occupied by the compressed backup is smaller with ZSTD.
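The ratio figures in the table below can be pulled from the backup history in msdb; a sketch of the kind of query involved (my assumption of how the numbers were gathered; the compression_algorithm column only exists in recent SQL Server versions):

-- compare compression ratios of recent StackOverflow backups
SELECT  database_name,
        type                                  AS backup_type,
        backup_size / compressed_backup_size  AS backup_ratio,
        compressed_backup_size,
        compression_algorithm
FROM    msdb.dbo.backupset
WHERE   database_name = 'StackOverflow'
ORDER BY backup_finish_date DESC;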

backup_ratio         | database_name | backup_type | compressed_backup_size (bytes) | compression_algorithm
3.410259900691847063 | StackOverflow | Full        | 50 283 256 836                 | MS_XPRESS
3.443440933211591093 | StackOverflow | Full        | 49 798 726 852                 | ZSTD

Conclusion

Based on the tests performed, we observe that the ZSTD algorithm allows:

  • Faster backup creation
  • Reduced CPU load because backups are produced more quickly
  • Reduced backup size

However, it should be noted that further testing is needed to confirm the points above.

Thank you, Amine Haloui.

The article SQL Server 2025 – ZSTD – A new compression algorithm for backups appeared first on the dbi Blog.

MYSQL Custom Function Return issue

Tom Kyte - Wed, 2025-05-21 06:43
With row-based replication (binlog_format = ROW), the function generates duplicate sequence values; in mixed mode (binlog_format = MIXED), no duplicates occur. Using AWS RDS MySQL 8.0.22.

CREATE DEFINER=`abc`@`%` FUNCTION nextval(seq_name varchar(100)) RETURNS bigint
BEGIN
    DECLARE cur_val bigint(20);

    SELECT sequence_cur_value
    INTO   cur_val
    FROM   sequence_data
    WHERE  sequence_name = seq_name;

    IF cur_val IS NOT NULL THEN
        UPDATE sequence_data
        SET    sequence_cur_value = IF(
                   (sequence_cur_value + sequence_increment) > sequence_max_value,
                   IF(sequence_cycle = TRUE, sequence_min_value, NULL),
                   sequence_cur_value + sequence_increment
               )
        WHERE  sequence_name = seq_name;
    END IF;

    RETURN cur_val;
END
Categories: DBA Blogs

ORA-12637: Packet receive failed

Tom Kyte - Wed, 2025-05-21 06:43
Hello, I'm having a rather strange problem: a single Windows user is unable to connect to a database hosted on the WINDOWSPROD server. The error is ORA-12637: Unable to receive packet; in the alert.log: ORA-00609: Could not attach to incoming connection, and ORA-12699: Native service internal error. The same user can connect to the database hosted on the WINDOWSPROD server. Both servers are in the same domain. Does anyone have any ideas?
Categories: DBA Blogs

Need to get match and unmatch output for all users having profile and one role

Tom Kyte - Wed, 2025-05-21 06:43
Dear Tom, I have a question. I have a profile, say MYPROFILE, in the dba_users view. I'm selecting the users that have this profile: select username from dba_users where profile = 'MYPROFILE'; which returns User1, User2, User3. Now I want to check whether all three users have the CONNECT role or not, from dba_role_privs. Suppose User1 and User2 are granted the CONNECT role; how do I check that in a single query, along with the unmatched one, which is User3? Thanks, Nite
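A sketch of one way to do this (my example, not from the original question): outer-join the profile's users to their role grants so that users without the role still appear, flagged accordingly.

select  u.username,
        case when rp.granted_role is null then 'NOT GRANTED' else 'GRANTED' end as connect_role
from    dba_users u
left join dba_role_privs rp
        on  rp.grantee      = u.username
        and rp.granted_role = 'CONNECT'
where   u.profile = 'MYPROFILE'
order by u.username;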
Categories: DBA Blogs
