Feed aggregator

Oracle Telegram Bot

XTended Oracle SQL - Mon, 2025-04-28 20:23

For the Oracle performance tuning and troubleshooting Telegram channel https://t.me/ora_perf, I developed a simple but helpful Telegram bot that streamlines common Oracle database lookup tasks directly within Telegram.

Here’s what the bot can do:

FAQ Management
  • /faq add: Reply with !faq add {description} to save a message.
  • /faq list: Lists all FAQ articles.
  • /faq list [search words]: Search FAQ by keywords.
  • /faq {N}: Shows FAQ article number N.
  • /faq replace {N}: Updates FAQ article N.
Oracle Error Lookup

/oerr: Shows details of Oracle errors.

/oerr ora 29024

29024, 00000, "Certificate validation failure"
// *Cause:  The certificate sent by the other side could not be validated. This may occur if
//          the certificate has expired, has been revoked, or is invalid for another reason.
// *Action: Check the certificate to determine whether it is valid. Obtain a new certificate,
//          alert the sender that the certificate has failed, or resend.

Oracle Parameter Search

/param: Finds Oracle parameters by mask.

/param %cnt%

  PAR_N  NAME                                 VAL_N  VALUE    IS_DEFAULT
-------  ---------------------------------  -------  -------  ------------
   5231  _approx_cnt_distinct_gby_pushdown        1  OFF      FALSE
   5231  _approx_cnt_distinct_gby_pushdown        2  FORCE    FALSE
   5231  _approx_cnt_distinct_gby_pushdown        3  CHOOSE   FALSE
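
The PAR_N / VAL_N / IS_DEFAULT columns closely resemble V$PARAMETER_VALID_VALUES (NUM, ORDINAL, ISDEFAULT), so a rough equivalent of this lookup might be the query below: a guess at the kind of query behind the command, not the bot's actual code.

-- Hidden (underscore) parameters such as the one above are generally
-- not exposed in this view and need the x$ views and a SYS connection.
select num       as par_n,
       name,
       ordinal   as val_n,
       value,
       isdefault as is_default
from   v$parameter_valid_values
where  name like '%cnt%'
order  by name, ordinal;
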
Oracle Hints

/hint: Lists Oracle hints by mask.

/hint 19
Oracle Fix Controls

/fix_control: Lists fix controls by keyword and version.

/fix_control count 8.1

   BUGNO    VALUE  SQL_FEATURE                      DESCRIPTION                                                       OFE         EVENT  DEF
--------  -------  -------------------------------  ----------------------------------------------------------------  --------  -------  -----
16954950        1  QKSFM_ACCESS_PATH_16954950       take into account empty partitions when prorating cost            12.1.0.2        0  Y
 3120429        1  QKSFM_ACCESS_PATH_3120429        account for join key sparsity in computing NL index access cost   10.1.0.3        0  Y
 6897034        1  QKSFM_ACCESS_PATH_6897034        index cardinality estimates not taking into account NULL rows     10.2.0.5        0  Y
 9456688        1  QKSFM_ACCESS_PATH_9456688        account for to_number/to_char cost after temp conversion          11.2.0.2        0  Y
14176203        1  QKSFM_CARDINALITY_14176203       Account for filter sel while computing join sel using histograms  11.2.0.4        0  Y
14254052        1  QKSFM_CARDINALITY_14254052       amend accounting for nulls in skip scan selectivity calculation   11.2.0.4        0  Y
16486095        1  QKSFM_CARDINALITY_16486095       Do not count predicate marked for no selectivity                  12.2.0.1        0  Y
23102649        1  QKSFM_CARDINALITY_23102649       correction to inlist element counting with constant expressions   12.2.0.1        0  Y
11843512        1  QKSFM_CBO_11843512               null value is not accounted in NVL rewrite                        11.2.0.3        0  Y
 1403283        1  QKSFM_CBO_1403283                CBO do not count 0 rows partitions                                8.1.6       10135  Y
22272439        1  QKSFM_CBO_22272439               correction to inlist element counting with bind variables         12.2.0.1        0  Y
25090203        1  QKSFM_CBO_25090203               account for selectivity of non sub subquery preds                 18.1.0          0  Y
 5483301        1  QKSFM_CBO_5483301                Use min repeat count in freq histogram to compute the density     10.2.0.4        0  Y
 5578791        1  QKSFM_CBO_5578791                do not discount branch io cost if inner table io cost is already  11.1.0.6        0  Y
 6694548        1  QKSFM_CBO_6694548                Account for chained rows when computing TA by ROWID from bitmap   10.2.0.4        0  Y
27500916        1  QKSFM_COMPILATION_27500916       only count one with clause reference from connect by              19.1.0          0  Y
10117760        1  QKSFM_CURSOR_SHARING_10117760    cardinality feedback should account for bloom filters             11.2.0.3        0  Y
 9841679        1  QKSFM_CVM_9841679                do not set col count for OPNTPLS                                  11.2.0.3        0  Y
26585420        1  QKSFM_DBMS_STATS_26585420        cap approx_count_distinct with non nulls                          18.1.0          0  Y
17760686        1  QKSFM_DYNAMIC_SAMPLING_17760686  Account for BMB blocks when dynamic sampling partitioned ASSM ta  12.1.0.2        0  Y
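
The columns here line up with V$SYSTEM_FIX_CONTROL, so a rough equivalent of the keyword lookup might be the query below: again a guess at the kind of query behind the command, not the bot's actual code (how the version argument is applied is not clear from the example, so it is left out).

select bugno,
       value,
       sql_feature,
       description,
       optimizer_feature_enable as ofe,
       event,
       is_default               as def
from   v$system_fix_control
where  lower(description) like '%count%'
order  by sql_feature, bugno;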

This bot helps streamline database maintenance and troubleshooting tasks. Join ora_perf to try it and share your feedback!

Categories: Development

How to configure proxy server between Golden Gate source and destination servers?

Tom Kyte - Mon, 2025-04-28 17:20
Environment Information: Database: Oracle Database 19c (Multi-Tenant Architecture), version 19.25.0.0.0. GoldenGate: Oracle GoldenGate Microservices 19c. Replication setup: data replication will be configured between the source, an on-premises database server (Server A), and the target, an Azure VM cloud database server (Server B). Operating system: Linux x86_64 on-premises, RHEL 8 in the cloud.

Security Concern: The security team has flagged the direct connection between the on-premises database server (Server A) and the cloud-based database server (Server B). They have mandated the use of a reverse proxy server to transfer GoldenGate trail files between the source and destination environments, to ensure secure data transfer and minimize exposure of sensitive database connections.

Challenge: The current GoldenGate configuration uses the WebUI to manage database connections and replication processes for both the source and destination. Trail files generated by the Extract process on the source (on-premises) need to be transferred securely to the target (cloud) via the reverse proxy server. The question is: how can we integrate a reverse proxy server into the GoldenGate replication setup to securely transfer trail files between the source and target environments?
Categories: DBA Blogs

Use of dbms_scheduler

Tom Kyte - Mon, 2025-04-28 17:20
After using dbms_scheduler since its introduction, I was surprised to learn that it is not the way application development should go. I was told that having scheduling outside of the database is better than having it inside the database, and that the replacements are crontab, a complicated self-made scheduler in Perl, Python, or another favorite language, a preferred third-party application, or refreshable materialized views. I never got an answer to my question of why using application stored code or other Oracle-supplied packages inside the database is good practice while using dbms_scheduler is bad. What is your view on that issue?
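
For reference, the in-database approach being debated is compact. Here is a minimal sketch of a scheduler job; the job, schedule, and procedure names are hypothetical, not from the question:

begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_PURGE',            -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'APP_OWNER.PURGE_OLD_ROWS', -- hypothetical procedure
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => true);
end;
/

Comparable cron-based setups need an external OS account, credential handling, and separate monitoring, which is part of the trade-off the question raises.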
Categories: DBA Blogs

Make non-reproducible baselines reproducible again

Tom Kyte - Thu, 2025-04-24 22:06
Hello, I have multiple baselines, and after migrating the application many of them became not reproducible. The reason was the renaming of some indexes during the migration. I have renamed those indexes back to their original names. My question is how I can get Oracle to set those baselines back to REPRODUCED = YES. None of the DBMS_SPM routines is able to do that. Thanks
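
A workaround sometimes suggested, shown here only as a hedged sketch, is to re-capture the plan from the cursor cache once the original index names are restored; note that this creates a new accepted plan rather than flipping the REPRODUCED flag on the old one:

-- '&sql_id' is a placeholder for the statement whose plan should be
-- re-captured after renaming the indexes back.
set serveroutput on
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '&sql_id');
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/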
Categories: DBA Blogs

Solving a logical problem using analytical functions

Tom Kyte - Thu, 2025-04-24 22:06
(https://livesql.oracle.com/ords/livesql/s/da34i74lkmxt2mqrtp0k4xsk6) (I was able to format better with screenshots on the Oracle forums - https://forums.oracle.com/ords/apexds/post/help-solving-a-logical-problem-using-analytical-query-3778)

Hi, I am trying to solve a problem using analytical functions but I am stuck.

1. I have a list of coupons that I can use. The usage sequence is in alphabetical order of the coupon name.

+---------+-------+
| Coupons | Value |
+---------+-------+
| A       |   100 |
| B       |    40 |
| C       |   120 |
| D       |    10 |
| E       |   200 |
+---------+-------+

2. There is a limit (cap) on the total value across all coupons that can be used in a day.

+----------+-----------+
| Cap Name | Cap Limit |
+----------+-----------+
| Cap 1    |       150 |
| Cap 2    |        70 |
+----------+-----------+

3. Each coupon is subject to 1 or 2 caps. If it is subject to 2 caps, there is a specified sequence in which to apply the caps.

+--------+--------------+----------+
| Coupon | Cap Sequence | Cap Name |
+--------+--------------+----------+
| A      |            1 | Cap 1    |
| A      |            2 | Cap 2    |
| B      |            1 | Cap 2    |
| C      |            1 | Cap 2    |
| C      |            2 | Cap 1    |
| D      |            1 | Cap 1    |
| E      |            1 | Cap 1    |
| E      |            2 | Cap 2    |
+--------+--------------+----------+

4. I now have to find how much coupon value could be utilized before my daily cap was reached, i.e. find "Coupon Usage" and "Cap Remaining" below. So, if I join the tables above:

+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| # | Coupon | Value | Cap Name | Cap Sequence | Cap Limit | Coupon Usage | Cap Remaining |
+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| 1 | A      |   100 | Cap 1    |            1 |       150 |          100 |            50 |
| 2 | A      |   100 | Cap 2    |            2 |        70 |            0 |            70 |
| 3 | B      |    40 | Cap 2    |            1 |        70 |           40 |            30 |
| 4 | C      |   120 | Cap 2    |            1 |        70 |           30 |             0 |
+---+--------+--...
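
One way to attack this is to abandon pure analytics for the sequential part and carry the running balances in a recursive WITH clause. The sketch below assumes tables coupons(coupon, value), caps(cap_name, cap_limit) and coupon_caps(coupon, cap_sequence, cap_name); these names are my assumptions, and the two caps are hardcoded:

with j as (
  select c.coupon, c.value, m.cap_sequence, m.cap_name, p.cap_limit,
         row_number() over (order by c.coupon, m.cap_sequence) rn
  from   coupons c
  join   coupon_caps m on m.coupon   = c.coupon
  join   caps        p on p.cap_name = m.cap_name
),
calc (rn, coupon, value, cap_name, cap_sequence, cap_limit,
      coupon_usage, cap1_rem, cap2_rem, coupon_rem) as (
  -- anchor: first row in usage order; both caps start at their limits
  select rn, coupon, value, cap_name, cap_sequence, cap_limit,
         least(value, cap_limit),
         case cap_name when 'Cap 1' then cap_limit - least(value, cap_limit)
              else (select cap_limit from caps where cap_name = 'Cap 1') end,
         case cap_name when 'Cap 2' then cap_limit - least(value, cap_limit)
              else (select cap_limit from caps where cap_name = 'Cap 2') end,
         value - least(value, cap_limit)
  from   j
  where  rn = 1
  union all
  -- each later row draws from the running cap balance and from whatever
  -- is left of the current coupon's value
  select j.rn, j.coupon, j.value, j.cap_name, j.cap_sequence, j.cap_limit,
         least(case when j.coupon = c.coupon then c.coupon_rem else j.value end,
               case j.cap_name when 'Cap 1' then c.cap1_rem else c.cap2_rem end),
         c.cap1_rem - case when j.cap_name = 'Cap 1'
                           then least(case when j.coupon = c.coupon
                                           then c.coupon_rem else j.value end,
                                      c.cap1_rem)
                           else 0 end,
         c.cap2_rem - case when j.cap_name = 'Cap 2'
                           then least(case when j.coupon = c.coupon
                                           then c.coupon_rem else j.value end,
                                      c.cap2_rem)
                           else 0 end,
         case when j.coupon = c.coupon then c.coupon_rem else j.value end
           - least(case when j.coupon = c.coupon then c.coupon_rem else j.value end,
                   case j.cap_name when 'Cap 1' then c.cap1_rem else c.cap2_rem end)
  from   calc c
  join   j on j.rn = c.rn + 1
)
select rn as "#", coupon, value, cap_name, cap_sequence, cap_limit,
       coupon_usage,
       case cap_name when 'Cap 1' then cap1_rem else cap2_rem end as cap_remaining
from   calc
order  by rn;

Traced by hand against the sample data, this reproduces the four result rows shown above.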
Categories: DBA Blogs

Fate and the Individual

Greg Pavlik - Thu, 2025-04-24 14:01

odacli create-appliance failed on an ODA HA

Yann Neuhaus - Thu, 2025-04-24 09:14

I recently had to install an Oracle Database Appliance X11 HA and it failed when creating the appliance:

[root@oak0 ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
...
[root@mynewoda ~]# odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e

Job details                                                      
----------------------------------------------------------------
                     ID:  88e4b5e3-3a73-4c18-9d9f-960151abc45e
            Description:  Provisioning service creation
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e --ecr" command)
                Created:  April 23, 2025 16:15:35 CEST
                Message:  DCS-10001:Internal error encountered: Failed to provision GI with RHP at the home: /u01/app/19.26.0.0/grid: DCS-10001:Internal error encountered: PRGH-1002 : Failed to copy files from /opt/oracle/rhp/RHPCheckpoints/rhptemp/grid8631129022929485455.rsp to /opt/oracle/rhp/RHPCheckpoints/wOraGrid192600
PRKC-1191 : Remote command execution setup check for node mynewoda using shell /usr/bin/ssh failed.
No ECDSA host key is known for mynewoda and you have requested strict checking.Host key verification failed...

In the past we occasionally got this “host key verification failed” error, and simply rerunning our “odacli create-appliance” command fixed it. This time, however, restarting was not possible:

[root@mynewoda ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
DCS-10047:Same job is already running: Provisioning FAILED in different request.

Following MOS Note “ODA Provisioning Fails to Create Appliance w/ Error: DCS-10047:Same Job is already running : Provisioning FAILED in different request. (Doc ID 2809836.1)” I cleaned up the ODA, updated the repository with the Grid Infrastructure clone and DB clone:

Stop the dcs agent on both nodes:

# systemctl stop initdcsagent

Then, run cleanup.pl on ODA node 0.

# /opt/oracle/oak/onecmd/cleanup.pl -f
...

If you get warnings that the cleanup cannot transfer the public key to node 1 or cannot set up SSH equivalence, then run the cleanup on node 1 as well.

At the end of the cleanup output you get these messages:

WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli create-appliance".

So, after the reboot I updated the repository with the GI and DB clones:

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 674f7c66-1615-450f-be27-4e4734abca97

Job details                                                      
----------------------------------------------------------------
                     ID:  674f7c66-1615-450f-be27-4e4734abca97
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:37:29 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 4299b124-1c93-4d22-bac4-44a65cbaac67

Job details                                                      
----------------------------------------------------------------
                     ID:  4299b124-1c93-4d22-bac4-44a65cbaac67
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:39:34 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...

Checked that the clones are available:

[root@oak0 patch]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak0 patch]# 

The same on node 1:

[root@oak1 ~]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak1 ~]# 

Before running the create-appliance again, you should first validate the storage topology on both nodes.

[root@oak0 ~]# odacli validate-storagetopology
INFO    : ODA Topology Verification         
INFO    : Running on Node0                  
INFO    : Check hardware type               
INFO    : Check for Environment(Bare Metal or Virtual Machine)
SUCCESS : Type of environment found : Bare Metal
INFO    : Check number of Controllers       
SUCCESS : Number of onboard OS disk found : 2
SUCCESS : Number of External SCSI controllers found : 2
INFO    : Check for Controllers correct PCIe slot address
SUCCESS : Internal RAID controller   : 
SUCCESS : External LSI SAS controller 0 : 61:00.0
SUCCESS : External LSI SAS controller 1 : e1:00.0
INFO    : Check for Controller Type in the System
SUCCESS : There are 2 SAS 38xx controller in the system
INFO    : Check if JBOD powered on          
SUCCESS : 1JBOD : Powered-on
INFO    : Check for correct number of EBODS(2 or 4)
SUCCESS : EBOD found : 2
INFO    : Check for External Controller 0   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for controller 0
INFO    : Check for External Controller 1   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for Controller 1
INFO    : Check for overall status of cable validation on Node0
SUCCESS : Overall Cable Validation on Node0
INFO    : Check Node Identification status  
SUCCESS : Node Identification
SUCCESS : Node name based on cable configuration found : NODE0
INFO    : The details for Storage Topology Validation can also be found in the log file=/opt/oracle/oak/diag/oak0/oak/storagetopology/StorageTopology-2025-04-23-14:42:34_70809_7141.log
[root@oak0 ~]# 

Validate the storage topology on node 1 as well. Skipping this validation may lead to the following error when creating the appliance again:

OAK-10011:Failure while running storage setup on the system. Cause: Node number set on host not matching node number returned by storage topology tool. Action: Node number on host not set correctly. For default storage shelf node number needs to be set by storage topology tool itself.

Afterwards, the “odacli create-appliance” should run through successfully.

Summary

If your “odacli create-appliance” fails on an ODA HA environment and you cannot restart it, run a cleanup, update the repository with the Grid Infrastructure and DB clones, and validate the storage topology before running the create-appliance again.


Bitmap indexes and BITMAP CONVERSION TO ROWIDS

Tom Kyte - Thu, 2025-04-24 04:03
Hello Chris, hello Connor, I'm struggling with a performance issue. We have a single big table (167 million rows, 118 columns). For ad hoc queries we have created a simple APEX application with the 15 most important columns as filters. To avoid long-running queries we run a simple count(*) query before running the main query. Depending on the result of the count(*) we warn the end user or even force better filtering.

select count(*)
from big_table
where filter_column_1 = filter_value_1
and filter_column_2 = filter_value_2
and filter_column_3 between filter_value_3 and filter_value_4;

To support the count(*) query I have created a bitmap index for each filter column. It works fine! Oracle combines the relevant bitmap indexes with a BITMAP AND or BITMAP MERGE. The response time is excellent: at most a few seconds, often under 1 second. Works as expected and I'm happy! But: sometimes the optimizer decides to quit the path of bitmap processing, converts all bitmap indexes with BITMAP CONVERSION TO ROWIDS, and then uses HASH JOINs. In this case the response time of the count(*) query is much worse, sometimes even minutes! My questions:

  • Why does the optimizer use BITMAP CONVERSION TO ROWIDS? My feeling is that it is more likely when the where clause is complex (many filters in use) or the count(*) returns a big number.
  • Are there any SQL hints to avoid BITMAP CONVERSION TO ROWIDS?
  • Are there any tips for my count(*) query (WITH clauses, subqueries, ...) to avoid BITMAP CONVERSION TO ROWIDS?

The DB version is 19.24.0.0.0; the big table and all indexes have fresh statistics. Thank you in advance! Christian
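
On the hint question: INDEX_COMBINE is the documented hint that asks the optimizer to use a boolean combination of bitmap indexes. A minimal sketch against the count(*) query above (bind names are placeholders):

-- index_combine requests a bitmap-combination access path on big_table
select /*+ index_combine(bt) */ count(*)
from   big_table bt
where  filter_column_1 = :v1
and    filter_column_2 = :v2
and    filter_column_3 between :v3 and :v4;

Whether it helps in a given case still depends on the optimizer's costing, so treat it as a starting point rather than a guaranteed fix.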
Categories: DBA Blogs

Virtualize, Anonymize, Validate: The Power of Delphix & OMrun

Yann Neuhaus - Wed, 2025-04-23 11:27
The Challenge: Modern Data Complexity

As businesses scale, so do their data environments. With hybrid cloud adoption, legacy system migrations, and stricter compliance requirements, IT teams must ensure test environments are:

  • Quickly available
  • Secure and compliant
  • Accurate mirrors of production environments
The Solution: Delphix – OMrun

Even for heterogeneous data storage technologies, Delphix and OMrun provide a seamless way to virtualize, anonymize, and validate your test data securely and quickly.

Virtualize with Delphix: Fast, Efficient, and Agile

Delphix replaces slow, storage-heavy physical test environments with virtualized data environments. Here’s what makes it a major advance:

Anonymize with Confidence: Built-in Data Masking

Data privacy isn’t optional, it’s critical. Delphix includes automated data masking to anonymize sensitive information. Whether it’s PII, PHI, or financial data, Delphix ensures:

  • Compliance with regulations (GDPR, CCPA, etc.)
  • Reduced risk of data leaks in non-production environments
  • Built-in masking templates and customization options
Validate with OMrun: Quality Assurance at Scale

OMrun brings powerful data validation and quality assurance capabilities into the mix. It’s tailor-made for data anonymization validation (ensuring data privacy), providing:

  • Automated script generation
  • Scalable validation (running parallel OMrun instances)
  • Transparent reporting and dashboards
Final Thoughts: A Future-Ready Data Strategy

Whether you’re planning a cloud migration, regulatory compliance initiative, or just looking to modernize your Dev/Test practices, Delphix & OMrun provide a future-proof foundation. This powerful combination helps businesses move faster, safer, and smarter – turning data from a bottleneck into a business accelerator.

Want to see it in action?

Watch the OMrun Video Tutorials at www.youtube.com/@Dbi-services or explore Delphix & OMrun Solutions at:
OMrun
dbi-services.com/products/omrun/
OMrun Online Manual
Delphix
Delphix Data Masking Software


Looking for a SQL query to get the output below

Tom Kyte - Wed, 2025-04-23 10:01
Can someone please provide an Oracle SQL query to get the output below?

Deptno  Dept Name  No. of emp
    10  a                  10
        b                  20
        c                  30
    20  d                  40
        e                  50
        f                  60
    30  g                  70
        h                  80
        i                  90
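
One common way to produce this shape (suppressing the repeated Deptno on all but the first row of each group) is ROW_NUMBER with a CASE. A minimal sketch, assuming a source table dept_emp(deptno, dept_name, emp_count), since the question does not show one; in SQL*Plus, BREAK ON deptno achieves the same effect client-side.

select case
         when row_number() over (partition by t.deptno order by t.dept_name) = 1
         then to_char(t.deptno)
       end          as deptno,
       t.dept_name,
       t.emp_count  as "No. of emp"
from   dept_emp t   -- hypothetical table
order  by t.deptno, t.dept_name;

The partition and order columns are guesses; adjust them to whatever ordering defines the first row per department.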
Categories: DBA Blogs
