Yann Neuhaus


odacli create-appliance failed on an ODA HA

Thu, 2025-04-24 09:14

I recently had to install an Oracle Database Appliance X11 HA and it failed when creating the appliance:

[root@oak0 ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
...
[root@mynewoda ~]# odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e

Job details                                                      
----------------------------------------------------------------
                     ID:  88e4b5e3-3a73-4c18-9d9f-960151abc45e
            Description:  Provisioning service creation
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e --ecr" command)
                Created:  April 23, 2025 16:15:35 CEST
                Message:  DCS-10001:Internal error encountered: Failed to provision GI with RHP at the home: /u01/app/19.26.0.0/grid: DCS-10001:Internal error encountered: PRGH-1002 : Failed to copy files from /opt/oracle/rhp/RHPCheckpoints/rhptemp/grid8631129022929485455.rsp to /opt/oracle/rhp/RHPCheckpoints/wOraGrid192600
PRKC-1191 : Remote command execution setup check for node mynewoda using shell /usr/bin/ssh failed.
No ECDSA host key is known for mynewoda and you have requested strict checking.Host key verification failed...

In the past, we occasionally got this "host key verification failed" error, and simply rerunning the "odacli create-appliance" command was enough. This time, however, restarting was not possible:

[root@mynewoda ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
DCS-10047:Same job is already running: Provisioning FAILED in different request.

Following MOS Note "ODA Provisioning Fails to Create Appliance w/ Error: DCS-10047:Same Job is already running : Provisioning FAILED in different request. (Doc ID 2809836.1)" I cleaned up the ODA and updated the repository with the Grid Infrastructure and DB clones:

Stop the dcs agent on both nodes:

# systemctl stop initdcsagent

Then, run cleanup.pl on ODA node 0.

# /opt/oracle/oak/onecmd/cleanup.pl -f
...

If you get warnings that the cleanup cannot transfer the public key to node 1 or cannot set up SSH equivalence, then run the cleanup on node 1 as well.
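In that case, the cleanup command is the same on the second node:

[root@oak1 ~]# /opt/oracle/oak/onecmd/cleanup.pl -f
...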

At the end of the cleanup output you get these messages:

WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli create-appliance".

So, after the reboot I updated the repository with the GI and DB clones:

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 674f7c66-1615-450f-be27-4e4734abca97

Job details                                                      
----------------------------------------------------------------
                     ID:  674f7c66-1615-450f-be27-4e4734abca97
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:37:29 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 4299b124-1c93-4d22-bac4-44a65cbaac67

Job details                                                      
----------------------------------------------------------------
                     ID:  4299b124-1c93-4d22-bac4-44a65cbaac67
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:39:34 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...

I checked that the clones are available:

[root@oak0 patch]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak0 patch]# 

The same on node 1:

[root@oak1 ~]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak1 ~]# 

Before running the create-appliance again, you should first validate the storage topology on both nodes.

[root@oak0 ~]# odacli validate-storagetopology
INFO    : ODA Topology Verification         
INFO    : Running on Node0                  
INFO    : Check hardware type               
INFO    : Check for Environment(Bare Metal or Virtual Machine)
SUCCESS : Type of environment found : Bare Metal
INFO    : Check number of Controllers       
SUCCESS : Number of onboard OS disk found : 2
SUCCESS : Number of External SCSI controllers found : 2
INFO    : Check for Controllers correct PCIe slot address
SUCCESS : Internal RAID controller   : 
SUCCESS : External LSI SAS controller 0 : 61:00.0
SUCCESS : External LSI SAS controller 1 : e1:00.0
INFO    : Check for Controller Type in the System
SUCCESS : There are 2 SAS 38xx controller in the system
INFO    : Check if JBOD powered on          
SUCCESS : 1JBOD : Powered-on
INFO    : Check for correct number of EBODS(2 or 4)
SUCCESS : EBOD found : 2
INFO    : Check for External Controller 0   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for controller 0
INFO    : Check for External Controller 1   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for Controller 1
INFO    : Check for overall status of cable validation on Node0
SUCCESS : Overall Cable Validation on Node0
INFO    : Check Node Identification status  
SUCCESS : Node Identification
SUCCESS : Node name based on cable configuration found : NODE0
INFO    : The details for Storage Topology Validation can also be found in the log file=/opt/oracle/oak/diag/oak0/oak/storagetopology/StorageTopology-2025-04-23-14:42:34_70809_7141.log
[root@oak0 ~]# 

Validate the storage topology on node 1 as well (same command, shown below). Skipping this validation may lead to the following error when creating the appliance again:

OAK-10011:Failure while running storage setup on the system. Cause: Node number set on host not matching node number returned by storage topology tool. Action: Node number on host not set correctly. For default storage shelf node number needs to be set by storage topology tool itself.
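On node 1, it is the same command; the output is analogous to node 0 and should end with NODE1 as the detected node name (expected output):

[root@oak1 ~]# odacli validate-storagetopology
INFO    : ODA Topology Verification
INFO    : Running on Node1
...
SUCCESS : Node name based on cable configuration found : NODE1
[root@oak1 ~]#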

Afterwards, the "odacli create-appliance" should run through.
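That is, rerun the initial command:

[root@oak0 ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
...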

Summary

If your "odacli create-appliance" fails on an ODA HA environment and you cannot restart it, run a cleanup, update the repository with the Grid Infrastructure and DB clones, and validate the storage topology before running the create-appliance again.
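As a quick reference, the full sequence with the paths from this example:

# systemctl stop initdcsagent                 (on both nodes)
# /opt/oracle/oak/onecmd/cleanup.pl -f        (on node 0, and on node 1 if SSH warnings appear)
# ... reboot ...
# odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
# odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
# odacli validate-storagetopology              (on both nodes)
# odacli create-appliance -r /u01/patch/my_new_oda.json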


Virtualize, Anonymize, Validate: The Power of Delphix & OMrun

Wed, 2025-04-23 11:27
The Challenge: Modern Data Complexity

As businesses scale, so do their data environments. With hybrid cloud adoption, legacy system migrations, and stricter compliance requirements, IT teams must ensure test environments are:

  • Quickly available
  • Secure and compliant
  • Accurate mirrors of production environments
The Solution: Delphix & OMrun

Even for heterogeneous data storage technologies, Delphix and OMrun provide a seamless way to virtualize, anonymize, and validate your test data securely and quickly.

Virtualize with Delphix: Fast, Efficient, and Agile

Delphix replaces slow, storage-heavy physical test environments with virtualized data environments, making it a major step forward.

Anonymize with Confidence: Built-in Data Masking

Data privacy isn’t optional, it’s critical. Delphix includes automated data masking to anonymize sensitive information. Whether it’s PII, PHI, or financial data, Delphix ensures:

  • Compliance with regulations (GDPR, CCPA, etc.)
  • Reduced risk of data leaks in non-production environments
  • Built-in masking templates and customization options
Validate with OMrun: Quality Assurance at Scale

OMrun brings powerful data validation and quality assurance capabilities into the mix. It is tailor-made for data anonymization validation (ensuring data privacy), providing:

  • Automated script generation
  • Scalable validation (running parallel OMrun instances)
  • Transparent reporting and dashboards
Final Thoughts: A Future-Ready Data Strategy

Whether you’re planning a cloud migration, regulatory compliance initiative, or just looking to modernize your Dev/Test practices, Delphix & OMrun provide a future-proof foundation. This powerful combination helps businesses move faster, safer, and smarter – turning data from a bottleneck into a business accelerator.

Want to see it in action?

Watch the OMrun Video Tutorials at www.youtube.com/@Dbi-services or explore Delphix & OMrun solutions at:

  • OMrun: dbi-services.com/products/omrun/
  • OMrun Online Manual
  • Delphix: Delphix Data Masking Software


Restore a database using Veeam RMAN plug-in on an ODA

Tue, 2025-04-22 16:07

I recently wrote a blog showing how to configure the Veeam RMAN plug-in to take database backups. As every DBA knows, configuring a backup is not complete without testing a restore. In this blog I will show how I tested my previous Veeam configuration and the backups taken with the Veeam RMAN plug-in on the same ODA. To verify that the Veeam backups are usable, we will create a new CVEEAMT container database on the ODA and restore the existing CDB1 container database into CVEEAMT using an existing Veeam backup taken after configuring the plug-in. The restore will be done through a duplicate.

Pay attention

As we will restore an existing production container database, named CDB1, hosting a PDB named PDB1, into the new CVEEAMT container database, we will end up with a duplicate PDB. Since each PDB registers a default service with the listener, both copies of PDB1 would be reachable through the same service name, which, if PDB1 is in use, could have dramatic consequences. Therefore, before doing the restore into the new container database, we will change the domain of the newly created one.
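To illustrate the risk: had we kept the same domain, both container databases would register the default PDB service under the same name, and the listener could route a connection to either one (hypothetical output for this setup):

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] lsnrctl status | grep -i pdb1
Service "pdb1.domain.ch" has 2 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...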

Create new container database CVEEAMT

With odacli we will create the new container database named CVEEAMT.

[root@ODA2 ~]# odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
3941f574-77bd-4f9e-a1f6-db2bb654f334     OraDB19000_home1     19.25.0.0.241015     SE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1     CONFIGURED
b922980f-cecd-4bf8-a688-eb41dd4b5b4b     OraDB19000_home2     19.25.0.0.241015     SE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2     CONFIGURED

[root@ODA2 ~]# odacli create-database -dh 3941f574-77bd-4f9e-a1f6-db2bb654f334 -n CVEEAMT -u CVEEAMT_SITE1 -cl OLTP -c -p VEEAMT -no-co -cs AL32UTF8 -ns UTF8 -l AMERICAN -dt AMERICA -s odb1 -r ACFS
Enter SYS, SYSTEM and PDB Admin user password:
Retype SYS, SYSTEM and PDB Admin user password:

Job details
----------------------------------------------------------------
                     ID:  7d99e795-31e8-4c96-af15-376405180978
            Description:  Database service creation with DB name: CVEEAMT
                 Status:  Created
                Created:  February 19, 2025 11:37:16 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------

[root@ODA2 ~]# odacli describe-job -i 7d99e795-31e8-4c96-af15-376405180978

Job details
----------------------------------------------------------------
                     ID:  7d99e795-31e8-4c96-af15-376405180978
            Description:  Database service creation with DB name: CVEEAMT
                 Status:  Success
                Created:  February 19, 2025 11:37:16 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Setting up SSH equivalence               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:20 CET           Success
Setting up SSH equivalence               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:20 CET           Success
Creating volume datCVEEAMT               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:35 CET           Success
Creating volume rdoCVEEAMT               February 19, 2025 11:37:35 CET           February 19, 2025 11:37:50 CET           Success
Creating ACFS filesystem for DATA        February 19, 2025 11:37:50 CET           February 19, 2025 11:38:14 CET           Success
Creating ACFS filesystem for RECO        February 19, 2025 11:38:14 CET           February 19, 2025 11:38:37 CET           Success
Database Service creation                February 19, 2025 11:38:38 CET           February 19, 2025 11:52:54 CET           Success
Database Creation by RHP                 February 19, 2025 11:38:38 CET           February 19, 2025 11:50:16 CET           Success
Change permission for xdb wallet files   February 19, 2025 11:50:16 CET           February 19, 2025 11:50:17 CET           Success
Add Startup Trigger to Open all PDBS     February 19, 2025 11:50:17 CET           February 19, 2025 11:50:18 CET           Success
Place SnapshotCtrlFile in sharedLoc      February 19, 2025 11:50:18 CET           February 19, 2025 11:50:21 CET           Success
SqlPatch upgrade                         February 19, 2025 11:51:35 CET           February 19, 2025 11:51:55 CET           Success
Running dbms_stats init_package          February 19, 2025 11:51:55 CET           February 19, 2025 11:51:56 CET           Success
Set log_archive_dest for Database        February 19, 2025 11:51:56 CET           February 19, 2025 11:51:58 CET           Success
Updating the Database version            February 19, 2025 11:51:58 CET           February 19, 2025 11:52:02 CET           Success
Create Users tablespace                  February 19, 2025 11:52:54 CET           February 19, 2025 11:52:57 CET           Success
Clear all listeners from Database        February 19, 2025 11:52:57 CET           February 19, 2025 11:52:58 CET           Success
Copy Pwfile to Shared Storage            February 19, 2025 11:53:00 CET           February 19, 2025 11:53:01 CET           Success

[root@ODA2 ~]#

Change the domain

As explained previously, for the newly created container database we will change the existing domain domain.ch to test.ch, so that the two PDB1 services do not conflict once the restore is done.

Existing listener registration for the new CVEEAMT container database and PDB:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

As we can see, the new CDB and the new PDB are registered with the listener using the existing ODA domain, domain.ch.

Let’s change it to test.ch.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 11:59:25 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show parameter domain

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      domain.ch

SQL> alter system set db_domain='test.ch' scope=spfile;

System altered.

We will restart the database for the changes to take effect.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] srvctl stop database -d CVEEAMT_SITE1
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] srvctl start database -d CVEEAMT_SITE1

We will check listener registration:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

As well as the db_domain instance parameter:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 12:02:23 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show parameter db_domain

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      test.ch

Listener configuration

As we will duplicate CDB1 to CVEEAMT, the database will be renamed during the duplicate. This implies a database restart, which is done through a listener connection. Therefore, for RMAN to connect remotely to a closed database, we need to add a static listener entry that will be used for the RMAN duplicate auxiliary connection.

Static registration:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CVEEAMT_SITE1.test.ch)
      (ORACLE_HOME   = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
      (SID_NAME      = CVEEAMT)
     )
  )

Back up the existing listener configuration on the ODA:

grid@ODA2:~/ [rdbms1900] grinf19
grid@ODA2:~/ [grinf19] cdt
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] ls -ltrh
total 28K
-rw-r--r-- 1 grid oinstall 1.5K Feb 14  2018 shrept.lst
drwxr-xr-x 2 grid oinstall 4.0K Apr 17  2019 samples
-rw-r--r-- 1 grid oinstall  266 Dec  3 17:33 listener.ora.bak.ODA2.grid
-rw-r--r-- 1 grid oinstall  504 Dec  3 17:34 listener.ora
-rw-r----- 1 grid oinstall  504 Dec  3 17:34 listener2412035PM3433.bak
-rw-r----- 1 grid oinstall  179 Dec  3 17:34 sqlnet.ora.20250204
-rw-r----- 1 grid oinstall  200 Feb  4 15:00 sqlnet.ora
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] mkdir history
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] cp -p listener.ora ./history/listener.ora.202502191205

Add listener static entry:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] vi listener.ora
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] diff listener.ora ./history/listener.ora.202502191205
7,15d6
<
< SID_LIST_LISTENER =
<   (SID_LIST =
<     (SID_DESC =
<       (GLOBAL_DBNAME = CVEEAMT_SITE1.test.ch)
<       (ORACLE_HOME   = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
<       (SID_NAME      = CVEEAMT)
<      )
<   )
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19]

Reload the listener:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 19-FEB-2025 13:01:10

Copyright (c) 1991, 2024, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
The command completed successfully

And check the running static registration, which we can recognize by its UNKNOWN status:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

Configure Oracle network connections

We will configure the appropriate tnsnames.ora entries that will be used to connect to the target and auxiliary databases.

We just need the auxiliary entry shown below. The target entry for the CDB1 connection already exists and permits connection to the existing CDB1 production container database.

tnsnames entry needed for the auxiliary connection:

CVEEAMT_SITE1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ODA2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CVEEAMT_SITE1.test.ch)
    )
  )

tnsnames.ora backup and configuration change. The entry for CVEEAMT_SITE1 already exists (it was created initially by odacli), so we only need to update its service domain:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] cdt
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] ls -ltrh
total 112K
-rw-r--r-- 1 oracle oinstall 1.5K Feb 14  2018 shrept.lst
drwxr-xr-x 2 oracle oinstall  20K Apr 17  2019 samples
drwxr-xr-x 2 oracle oinstall  20K Dec 18 14:01 history
-rw-r----- 1 oracle oinstall 2.6K Feb 19 11:45 tnsnames.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] cp -p tnsnames.ora ./history/tnsnames.ora.202502191305
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] vi tnsnames.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] diff tnsnames.ora ./history/tnsnames.ora.202502191305
115c115
<       (SERVICE_NAME = CVEEAMT_SITE1.test.ch)
---
>       (SERVICE_NAME = CVEEAMT_SITE1.domain.ch)
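As an optional sanity check (not part of the original procedure), tnsping confirms that the new entry resolves before we test with SQL*Plus:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] tnsping CVEEAMT_SITE1
...
OK (0 msec)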

Test target and auxiliary connections

Test connection to auxiliary database, CVEEAMT:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] sqlplus sys@CVEEAMT_SITE1 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:07:24 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set line 300
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
CVEEAMT          ODA2.domain.ch

SQL>

Test connection to target database, CDB1:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] sqlplus sys@CDB1_SITE1 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:08:53 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set line 300
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
CDB1            ODA2.domain.ch

SQL>

Delete CVEEAMT DB files

We will now delete the CVEEAMT database files before executing the restore.

We will first check the spfile:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] cdh
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/ [CVEEAMT (CDB$ROOT)] cd dbs
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh *CVEEAMT*
-rw-r----- 1 oracle asmadmin   24 Feb 19 11:39 lkCVEEAMT_SITE1
-rw-r----- 1 oracle asmadmin   24 Feb 19 11:40 lkCVEEAMT
-rw-r----- 1 oracle oinstall   69 Feb 19 11:48 initCVEEAMT.ora
-rw-rw---- 1 oracle asmadmin 1.6K Feb 19 12:01 hc_CVEEAMT.dat
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cat initCVEEAMT.ora
SPFILE='/u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora'
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] srvctl config database -d CVEEAMT_SITE1 | grep -i spfile
Spfile: /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

We will stop the database:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] srvctl stop database -d CVEEAMT_SITE1
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] CVEEAMT

 **************************
 INSTANCE_NAME   : CVEEAMT
 STATUS          : DOWN
 **************************
 Statustime: 2025-02-19 13:11:56

We will back up the spfile:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cp -p /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora.bak.202502191312
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 20K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312
-rw-r----- 1 oracle asmadmin 6.5K Feb 19 12:02 spfileCVEEAMT.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

We will drop CVEEAMT database:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:14:09 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount restrict
ORACLE instance started.

Total System Global Area 4294965864 bytes
Fixed Size                  9185896 bytes
Variable Size             855638016 bytes
Database Buffers         3388997632 bytes
Redo Buffers               41144320 bytes
Database mounted.

SQL> drop database;

Database dropped.

Disconnected from Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
SQL>

We will restore the spfile that was deleted with the drop database command:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 12K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cp -p /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora.bak.202502191312 /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 20K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora

Startup nomount auxiliary database

We will start up the auxiliary database, CVEEAMT, in nomount mode.

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:15:45 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 4294965864 bytes
Fixed Size                  9185896 bytes
Variable Size             855638016 bytes
Database Buffers         3388997632 bytes
Redo Buffers               41144320 bytes
SQL>

The database is started in nomount mode and the static registration is available:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status BLOCKED, has 1 handler(s) for this service...
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

Check CDB1 backups

We will check that the last automatic backups, which we configured in the crontab at the end of the Veeam RMAN plug-in configuration, completed successfully.

INC0 backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *inc0* | tail -n1
-rw-r--r-- 1 oracle oinstall 17K Feb 16 18:20 CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log                    #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-16_18:20:54                                  #
#**************************************************************************************************#

INC1 backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *inc1* | tail -n1
-rw-r--r-- 1 oracle oinstall 17K Feb 18 18:01 CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log                    #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-18_18:01:31                                  #
#**************************************************************************************************#

Archived log backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *arc_no_arc* | tail -n1
-rw-r--r-- 1 oracle oinstall 7.6K Feb 19 12:40 CDB1_bck_arc_no_arc_del_tape_20250219_124002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_arc_no_arc_del_tape_20250219_124002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_arc_no_arc_del_tape_20250219_124002.log                     #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-19_12:40:49                                  #
#**************************************************************************************************#

Create a new table in PDB1 on the target CDB1

In order to check some data content after the restore, we will create a TEST1 table in PDB1 in the existing target CDB1 container database.

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:07:35 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> alter session set container=PDB1;

Session altered.

SQL> create table TEST1 as select * from dba_users;

Table created.
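Once the duplicate is done, this table gives us a simple way to verify the restored data. A sketch of the check, run against the restored PDB1 in CVEEAMT after the restore completes:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqlplus sys@CVEEAMT_SITE1 as sysdba
SQL> alter session set container=PDB1;
SQL> select count(*) from TEST1;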

Archived log backup on CDB1

Let’s take a last archived log backup to record the latest transactions, including our TEST1 table creation.

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] /u01/app/oracle/local/dmk_ha/bin/check_primary.ksh CDB1 "/u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s CDB1 -t bck_arc_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/CDB1_SITE1/etc/rman.cfg"
2025-02-19_14:09:49::check_primary.ksh::SetOraEnv       ::INFO ==> Environment: CDB1 (/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Getting V$DATABASE.DB_ROLE for CDB1
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> CDB1 Database Role is: PRIMARY
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Program going ahead and starting requested command
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Script : /u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s CDB1 -t bck_arc_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/CDB1_SITE1/etc/rman.cfg

[OK]::EBL::RMAN::dmk_dbbackup::CDB1::bck_arc_no_arc_del_tape.rcv::RMAN_retCode::0
Logfile is : /u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/CDB1_bck_arc_no_arc_del_tape_20250219_140949.log


2025-02-19_14:10:37::check_primary.ksh::CleanExit       ::INFO ==> Program exited with ExitCode : 0
oracle@ODA2:~/ [CDB1 (CDB$ROOT)]

Duplicate CDB1 to CVEEAMT

Let’s run our Veeam RMAN plug-in test by restoring CDB1 to CVEEAMT using the duplicate from backup command.

The run block is the following. We will allocate an auxiliary channel using the Veeam RMAN plug-in library that was configured in the previous blog.

run {
ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
duplicate database CDB1 to CVEEAMT;
}

Check the auxiliary database files. We can see there is no OMF datafile directory yet.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] ls -lrh /u02/app/oracle/oradata/CVEEAMT_SITE1/
total 168K
drwx------ 2 root   root     64K Feb 19 11:38 lost+found
drwxr-x--- 2 oracle oinstall 20K Feb 19 13:15 dbs
drwxrwx--- 2 oracle oinstall 20K Feb 19 11:51 arc10
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)]

Restore the database using the Veeam backups. We will use only one target and one auxiliary channel, since we are running Oracle SE2 edition at the customer's site:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] rmanh

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Feb 19 14:13:30 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys@CDB1_SITE1
connect target *
target database Password:
connected to target database: CDB1 (DBID=756666048)

RMAN> connect auxiliary sys@CVEEAMT_SITE1
connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CVEEAMT (not mounted)

run {
run {
2> ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
3> duplicate database CDB1 to CVEEAMT;
duplicate database CDB1 to CVEEAMT;
4> }
}
using target database control file instead of recovery catalog
allocated channel: VeeamAgentChannel1
channel VeeamAgentChannel1: SID=16 device type=SBT_TAPE
channel VeeamAgentChannel1: Veeam Plug-in for Oracle RMAN

Starting Duplicate Db at 19-FEB-2025 14:15:08
current log archived
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''CDB1'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''CDB1'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes
allocated channel: VeeamAgentChannel1
channel VeeamAgentChannel1: SID=21 device type=SBT_TAPE
channel VeeamAgentChannel1: Veeam Plug-in for Oracle RMAN

Starting restore at 19-FEB-2025 14:15:33

channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: restoring control file
channel VeeamAgentChannel1: reading from backup piece c-756666048-20250219-09_RMAN_AUTOBACKUP.vab
channel VeeamAgentChannel1: piece handle=c-756666048-20250219-09_RMAN_AUTOBACKUP.vab tag=TAG20250219T141031
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
output file name=/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl
Finished restore at 19-FEB-2025 14:15:56

database mounted

contents of Memory Script:
{
   set until scn  13117839;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  15 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   set newname for clone datafile  18 to new;
   set newname for clone datafile  19 to new;
   set newname for clone datafile  20 to new;
   set newname for clone datafile  21 to new;
   set newname for clone datafile  22 to new;
   set newname for clone datafile  23 to new;
   set newname for clone datafile  24 to new;
   set newname for clone datafile  25 to new;
   set newname for clone datafile  26 to new;
   set newname for clone datafile  27 to new;
   set newname for clone datafile  28 to new;
   set newname for clone datafile  29 to new;
   set newname for clone datafile  30 to new;
   set newname for clone datafile  31 to new;
   set newname for clone datafile  32 to new;
   set newname for clone datafile  33 to new;
   set newname for clone datafile  34 to new;
   set newname for clone datafile  35 to new;
   set newname for clone datafile  36 to new;
   set newname for clone datafile  37 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 19-FEB-2025 14:16:01

channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00005 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00006 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00007 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab tag=INC0_20250204_133948
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 1 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 2 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 2
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 3 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 3
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00013 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00014 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00017 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00020 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00023 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00026 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00029 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00032 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00035 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00008 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00015 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00018 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00021 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00024 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00027 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00030 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00033 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00036 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00009 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00011 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00016 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00019 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00022 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00025 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00028 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00031 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00034 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00037 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00001 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00003 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00004 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00012 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
Finished restore at 19-FEB-2025 14:16:50

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=40 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=41 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=42 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=43 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_mvcpdmpq_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=44 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=45 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=46 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=47 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=48 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=49 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=50 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=51 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=52 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=53 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=54 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=55 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=56 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=57 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=58 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=59 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=60 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=61 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=62 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=63 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=64 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=65 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=66 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=67 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=68 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=69 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=70 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=71 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=72 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=73 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=74 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=75 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf

contents of Memory Script:
{
   set until scn  13117839;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 19-FEB-2025 14:16:51
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 1 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 2 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 2
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 3 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 3
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00013: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
destination for restore of datafile 00014: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
destination for restore of datafile 00017: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
destination for restore of datafile 00020: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
destination for restore of datafile 00023: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
destination for restore of datafile 00026: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
destination for restore of datafile 00029: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
destination for restore of datafile 00032: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
destination for restore of datafile 00035: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00008: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
destination for restore of datafile 00015: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
destination for restore of datafile 00018: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
destination for restore of datafile 00021: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
destination for restore of datafile 00024: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
destination for restore of datafile 00027: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
destination for restore of datafile 00030: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
destination for restore of datafile 00033: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
destination for restore of datafile 00036: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00009: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
destination for restore of datafile 00011: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
destination for restore of datafile 00016: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
destination for restore of datafile 00019: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
destination for restore of datafile 00022: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
destination for restore of datafile 00025: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
destination for restore of datafile 00028: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
destination for restore of datafile 00031: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
destination for restore of datafile 00034: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
destination for restore of datafile 00037: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
destination for restore of datafile 00003: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
destination for restore of datafile 00004: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
destination for restore of datafile 00012: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03

starting media recovery

archived log for thread 1 with sequence 236 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc
archived log for thread 1 with sequence 237 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_237_mv9rjr7s_.arc
archived log for thread 1 with sequence 238 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_238_mvb6lr1f_.arc
archived log for thread 1 with sequence 239 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_239_mvbnnqvg_.arc
archived log for thread 1 with sequence 240 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_240_mvc2pr6m_.arc
archived log for thread 1 with sequence 241 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_241_mvcjrr7v_.arc
archived log for thread 1 with sequence 242 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_242_mvcp13py_.arc
archived log for thread 1 with sequence 243 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_243_mvcpbw6z_.arc
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc thread=1 sequence=236
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_237_mv9rjr7s_.arc thread=1 sequence=237
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_238_mvb6lr1f_.arc thread=1 sequence=238
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_239_mvbnnqvg_.arc thread=1 sequence=239
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_240_mvc2pr6m_.arc thread=1 sequence=240
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_241_mvcjrr7v_.arc thread=1 sequence=241
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_242_mvcp13py_.arc thread=1 sequence=242
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_243_mvcpbw6z_.arc thread=1 sequence=243
media recovery complete, elapsed time: 00:00:03
Finished recover at 19-FEB-2025 14:17:17
released channel: VeeamAgentChannel1
Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''CVEEAMT'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''CVEEAMT'' comment= ''Reset to original value by RMAN'' scope=spfile
Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CVEEAMT" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 512 M ,
  GROUP     2  SIZE 512 M ,
  GROUP     3  SIZE 512 M
 DATAFILE
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf',
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_mvcpdmpq_.dbf',
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf'
 CHARACTER SET AL32UTF8


contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf RECID=1 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf RECID=2 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf RECID=3 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf RECID=4 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf RECID=5 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf RECID=6 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf RECID=7 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf RECID=8 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf RECID=9 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf RECID=10 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf RECID=11 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf RECID=12 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf RECID=13 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf RECID=14 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf RECID=15 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf RECID=16 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf RECID=17 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf RECID=18 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf RECID=19 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf RECID=20 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf RECID=21 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf RECID=22 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf RECID=23 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf RECID=24 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf RECID=25 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf RECID=26 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf RECID=27 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf RECID=28 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf RECID=29 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf RECID=30 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf RECID=31 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf RECID=32 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf RECID=33 STAMP=1193494662

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=5 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=6 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=7 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=8 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=11 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=12 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=13 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=14 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=15 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=16 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=17 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=18 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=19 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=20 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=21 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=22 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=23 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=24 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=25 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=26 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=27 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=28 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=29 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=30 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=31 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=32 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=33 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf
Reenabling controlfile options for auxiliary database
Executing: alter database force logging

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Finished Duplicate Db at 19-FEB-2025 14:17:48

We can see that RMAN used INC0 VEEAM backups:

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab tag=INC0_20250204_133948

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab tag=INC0_20250216_180002

We can see that RMAN used INC1 VEEAM backups:

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab tag=INC1_20250218_180002
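
To cross-check which backup pieces belong to a given tag, you can list them from an RMAN session connected to the target; a minimal sketch, using one of the tags shown in the log above:

RMAN> list backup tag 'INC1_20250218_180002';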

RMAN duplicate did not use any of the archived log backups, as the archived log files still existed in the FRA, which is fine for our tests. See media recovery messages like:

archived log for thread 1 with sequence 236 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc

RMAN duplicate applied all available archived log files, as we did not specify any UNTIL SCN or UNTIL TIME clause (the "set until scn 13117839" seen in the memory script was determined by RMAN itself from the most recent archived log).
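
Had we needed a point-in-time clone instead, the duplicate command accepts an explicit until clause; a minimal sketch (the time value is purely illustrative, and the same connection and channel setup as for the original duplicate would still be required):

RMAN> duplicate target database to CVEEAMT
        until time "to_date('19-FEB-2025 12:00:00','DD-MON-YYYY HH24:MI:SS')";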

Checks

We now have two PDB1 pluggable databases, one per CDB, each registered to the listener with its appropriate domain:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -iE veeam\|pdb1
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CDB1XDB.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_SITE1.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "PDB1_PRI.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)]
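
The different domains come from the db_domain parameter of each container database; a quick way to confirm it, run in each CDB root:

SQL> show parameter db_domain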

Check target container database CDB1:

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:46:06 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CDB1

SQL> set line 300
SQL> col name for a20
SQL> select NAME, GUID, total_size/1024/1024/1024 GB from v$pdbs;

NAME                 GUID                                     GB
-------------------- -------------------------------- ----------
PDB$SEED             2987BF93B6232B35E063425C210AC02A 1.09960938
PDB1                 2987D4B68CF25579E063425C210AB61B 46.3935547

SQL>

Check the auxiliary container database CVEEAMT. We will check PDB1, verify that our TEST1 table exists, and review the database files:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] CVEEAMT

 ******************************************************
 INSTANCE_NAME   : CVEEAMT
 DB_NAME         : CVEEAMT
 DB_UNIQUE_NAME  : CVEEAMT_SITE1
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : Normal: 0/0, Oracle-maintained: 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : NO
 FORCE_LOGGING   : YES
 VERSION         : 19.25.0.0.0
 NLS_LANG        : AMERICAN_AMERICA.AL32UTF8
 CDB_ENABLED     : YES
 PDBs            : PDB1  PDB$SEED
 ******************************************************

 PDB color: pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-02-19 14:42:03

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:42:05 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from test1;

  COUNT(*)
----------
        51

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CVEEAMT

SQL> @qdbstbssize.sql

Container                          Nb      Extent Segment     Alloc.      Space       Max.    Percent Block
name            Name            files Type Mgmnt  Mgmnt    size (GB)  free (GB)  size (GB)     used % size  Log Encrypt Compress
--------------- --------------- ----- ---- ------ ------- ---------- ---------- ---------- ---------- ----- --- ------- --------
PDB1            XXX_XXX_INDEXES     1 DATA LM-SYS AUTO          1.00        .90      10.00       1.00 8 KB  YES NO      NO
                XXX_XXX_TABLES      1 DATA LM-SYS AUTO          1.00        .91      10.00        .95 8 KB  YES NO      NO
                XXXX_SYSTEM         1 DATA LM-SYS AUTO          1.00        .70      10.00       3.01 8 KB  YES NO      NO
                XXXX_SYSTEM_IND     1 DATA LM-SYS AUTO          1.00        .86      10.00       1.36 8 KB  YES NO      NO
                EXES

                IDM                 1 DATA LM-SYS AUTO          1.00        .85      10.00       1.53 8 KB  YES NO      NO
                JOB                 1 DATA LM-SYS AUTO          1.00        .92      10.00        .83 8 KB  YES NO      NO
                JOB_INDEXES         1 DATA LM-SYS AUTO          1.00        .92      10.00        .83 8 KB  YES NO      NO
                LOG                 1 DATA LM-SYS AUTO          1.00        .88      10.00       1.24 8 KB  YES NO      NO
                LOG_INDEXES         1 DATA LM-SYS AUTO          1.00        .86      10.00       1.41 8 KB  YES NO      NO
                MAIN                1 DATA LM-SYS AUTO          1.00        .93      10.00        .74 8 KB  YES NO      NO
                XX_XXXXXXX          1 DATA LM-SYS AUTO          1.00        .92      10.00        .78 8 KB  YES NO      NO
                XX_XX_XXXXXXX_INDE     1 DATA LM-SYS AUTO          1.00        .92      10.00        .80 8 KB  YES NO      NO
                XES

                QUEUE_TABLES        1 DATA LM-SYS AUTO          1.00        .93      10.00        .75 8 KB  YES NO      NO
                XXXXXXX             1 DATA LM-SYS AUTO          1.00        .84      10.00       1.61 8 KB  YES NO      NO
                XXXXXXX_INDEXES     1 DATA LM-SYS AUTO          1.00        .88      10.00       1.20 8 KB  YES NO      NO
                SETUP               1 DATA LM-SYS AUTO          1.00        .91      10.00        .93 8 KB  YES NO      NO
                SETUP_INDEXES       1 DATA LM-SYS AUTO          1.00        .91      10.00        .88 8 KB  YES NO      NO
                STATISTIC           1 DATA LM-SYS AUTO          1.00        .71      10.00       2.85 8 KB  YES NO      NO
                STATSPACK           1 DATA LM-SYS AUTO           .98        .13       2.00      42.32 8 KB  YES NO      NO
                SYSAUX              1 DATA LM-SYS AUTO           .58        .04      10.00       5.32 8 KB  YES NO      NO
                SYSTEM              1 DATA LM-SYS MANUAL         .62        .05       4.00      14.21 8 KB  YES NO      NO
                TEMP                1 TEMP LM-UNI MANUAL         .22        .66      31.00      -1.40 8 KB  NO  NO      NO
                TEMPORARY_DATA      1 DATA LM-SYS AUTO          1.00        .93      10.00        .67 8 KB  YES NO      NO
                TEMPORARY_DATA_     1 DATA LM-SYS AUTO          1.00        .93      10.00        .66 8 KB  YES NO      NO
                INDEXES

                XXX                 1 DATA LM-SYS AUTO          1.00        .93      10.00        .68 8 KB  YES NO      NO
                UNDOTBS1            1 UNDO LM-SYS MANUAL       20.00      19.97      20.00        .13 8 KB  YES NO      NO
                XXXX                1 DATA LM-SYS AUTO          1.00        .92      10.00        .82 8 KB  YES NO      NO
                XXXX_INDEXES        1 DATA LM-SYS AUTO          1.00        .91      10.00        .94 8 KB  YES NO      NO
                USERS               1 DATA LM-SYS AUTO           .00        .00       2.00        .05 8 KB  YES NO      NO
                USER_DATA           1 DATA LM-SYS AUTO          1.00        .93      10.00        .66 8 KB  YES NO      NO
***************                 -----                     ---------- ---------- ----------
TOTAL                              30                          46.39      42.14     309.00

SQL> alter session set container=cdb$root;

Session altered.

SQL> set lines 300
SQL> col name for a20
SQL> select NAME, GUID, total_size/1024/1024/1024 GB from v$pdbs;

NAME                 GUID                                     GB
-------------------- -------------------------------- ----------
PDB$SEED             2987BF93B6232B35E063425C210AC02A 1.09960938
PDB1                 2987D4B68CF25579E063425C210AB61B 46.3935547

2 rows selected.

SQL> set lines 300 pages 500
SQL> col file_name for a150
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CVEEAMT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> select con_id, file_name from cdb_data_files;

    CON_ID FILE_NAME
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_xxx__mvcpf7lt_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_xxx__mvcpfgl2_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_sys_mvcpfost_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_sys_mvcpf7m4_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_mvcpfgls_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_mvcpf7nb_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xx_xxxxx_mvcpfgmz_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xx_xxxxx_mvcpfovr_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxxxxx_mvcpfgnp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxxxxx__mvcpfowc_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_mvcpfoxh_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_mvcpf7pp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_ind_mvcpfgph_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf

33 rows selected.

SQL>

As we can see, the restore from CDB1 into CVEEAMT with the RMAN duplicate command using VEEAM backups completed successfully.

Cleanup

Let’s clean up by deleting the CVEEAMT database.

[root@ODA2 ~]# odacli delete-database -n CVEEAMT
{
  "jobId" : "565aa4e3-9152-45f8-a739-dd7c53b22044",
  "status" : "Running",
  "message" : "",
  "reports" : [ {
    "taskId" : "TaskDcsJsonRpcExt_14309",
    "taskName" : "Validate DB 96122ad1-182a-4059-8a26-677300d93d71 for deletion",
    "nodeName" : "ODA2",
    "taskResult" : "",
    "startTime" : "February 19, 2025 14:55:01 CET",
    "endTime" : null,
    "duration" : "00:00:00.10",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_14307",
    "jobId" : "565aa4e3-9152-45f8-a739-dd7c53b22044",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 19, 2025 14:55:01 CET"
  } ],
  "createTimestamp" : "February 19, 2025 14:54:59 CET",
  "resourceList" : [ ],
  "description" : "Database service deletion with DB name: CVEEAMT with ID : 96122ad1-182a-4059-8a26-677300d93d71",
  "updatedTime" : "February 19, 2025 14:55:01 CET",
  "jobType" : null,
  "cpsMetadata" : null
}

[root@ODA2 ~]# odacli describe-job -i "565aa4e3-9152-45f8-a739-dd7c53b22044"

Job details
----------------------------------------------------------------
                     ID:  565aa4e3-9152-45f8-a739-dd7c53b22044
            Description:  Database service deletion with DB name: CVEEAMT with ID : 96122ad1-182a-4059-8a26-677300d93d71
                 Status:  Success
                Created:  February 19, 2025 14:54:59 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Validate DB                              February 19, 2025 14:55:01 CET           February 19, 2025 14:55:01 CET           Success
96122ad1-182a-4059-8a26-677300d93d71
for deletion
Deleting the RMAN logs                   February 19, 2025 14:55:01 CET           February 19, 2025 14:55:01 CET           Success
Database Deletion By RHP                 February 19, 2025 14:55:01 CET           February 19, 2025 14:56:07 CET           Success
Unregister DB From Cluster               February 19, 2025 14:56:07 CET           February 19, 2025 14:56:08 CET           Success
Kill PMON Process                        February 19, 2025 14:56:08 CET           February 19, 2025 14:56:08 CET           Success
Database Files Deletion                  February 19, 2025 14:56:08 CET           February 19, 2025 14:56:08 CET           Success
Deleting Volume                          February 19, 2025 14:56:13 CET           February 19, 2025 14:56:17 CET           Success
Deleting Volume                          February 19, 2025 14:56:23 CET           February 19, 2025 14:56:26 CET           Success

We then restored the initial listener.ora configuration file. As you might have noticed, there is a job on the appliance that already restores the initial listener configuration file regularly.
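
A minimal sketch of that restore, assuming a backup copy of the original file was taken before our tests (the file locations are assumptions, adjust them to your Grid Infrastructure home):

$ cp $GRID_HOME/network/admin/listener.ora.backup $GRID_HOME/network/admin/listener.ora
$ lsnrctl reload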

Finally, we deleted the TEST1 table we had created in the production PDB1.
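
A minimal sketch of that last cleanup step, run against the production CDB1:

SQL> alter session set container=PDB1;
SQL> drop table test1 purge;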

To wrap up…

We successfully restored CDB1 into CVEEAMT with the RMAN duplicate command using VEEAM backups. This validates both our earlier VEEAM RMAN plug-in configuration and the backups taken with that plug-in.

The article Restore a database using Veeam RMAN plug-in on an ODA appeared first on dbi Blog.

Integrate YaK into Red Hat Ansible Automation Platform

Tue, 2025-04-22 03:00
Introduction to YaK

YaK is an open-source automation project developed by dbi services. Built on Ansible playbooks, YaK streamlines the deployment process for various components across any platform. It ensures adherence to best practices, maintains deployment quality, and significantly reduces time-to-deploy.

Initially created in response to the growing demand from dbi services’ consultants and clients, YaK simplifies and accelerates deployments across multi-technology infrastructures. Whether targeting cloud environments or on-premises systems, YaK drastically cuts down deployment effort, optimizing the overall time-to-market.

Find more information on the YaK website

Why Integrate YaK into Red Hat Ansible Automation Platform (AAP)?

YaK Advantages:
  • User-Friendly Interface: YaK simplifies configuration and deployment through an intuitive user interface, allowing teams to quickly manage servers and applications deployments.
  • Centralized Metadata Database: It replaces traditional YAML configuration files with a centralized database to store deployment metadata, ensuring improved manageability and consistency.
  • Comprehensive Reporting: YaK provides capabilities for generating detailed reports on all deployments, offering insights for continuous improvement.
  • dbi services components: dbi services offers a range of subscription components readily deployable on any platform, further easing the accessibility and management of deployments. These components embed all of dbi services’ expertise.
  • Custom Application Integration: YaK supports creating custom components for your specific applications. Developers can easily add Ansible playbooks to deploy the application into the component template.
Why Red Hat Ansible Automation Platform (AAP) with YaK:
  • Expert-Crafted Packages: YaK provides expertly maintained Ansible packages, ensuring reliability and built-in support for a wide range of scenarios, fully compatible with AAP.
  • Unified Dynamic Inventory: A single dynamic Ansible inventory for all your infrastructures, supporting multi-platform environments.
  • Platform-Agnostic Deployments: Seamless deployment across various platforms, enabling true platform independence.
  • Deep Integration with AAP Features: Full integration with AAP’s scheduler, workflows, and other advanced features, simplifying the automation of servers, components (databases, applications, etc.), and complex multi-component infrastructures.
Integration Steps

Generate a YaK API Token

To start the integration, generate an API token from the YaK database pod. For this you need:

  1. Access to the Kubernetes cluster (RKE2, for example) on which your YaK instance is deployed, using the kubectl command
  2. To know the namespace in which your YaK instance is deployed

Once you have access, you only have to type this command (replace <yak-namespace> with the namespace in which your YaK instance is deployed):

$ kubectl -n <yak-namespace> exec -it deploy/yak-postgres -- psql -U postgres -d agoston -c 'select set_user_token(agoston_api.add_user()) as "token";'
                               token                               
-------------------------------------------------------------------
 <generated_token>
(1 row)

Store the generated YaK API token for the next steps.
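If you want to smoke-test the token before wiring it into AAP, you can send a trivial GraphQL query to the endpoint configured later in this post (<url to your yak instance>/data/graphql). This is only a sketch: it assumes the endpoint accepts the token as a Bearer header, and -k is only needed with a self-signed certificate:

$ curl -k -s -X POST "<url to your yak instance>/data/graphql" \
    -H "Authorization: Bearer <generated_token>" \
    -H "Content-Type: application/json" \
    -d '{"query":"{ __typename }"}'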

AAP Resources Configuration

Access the Ansible Automation Platform with an administrator role.

  • Execution Environment: Define a customized execution environment in AAP that includes YaK-specific dependencies and tools.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Execution Environments, then click on Create execution environment button
    Fill the form like this:
    Name: YaK EE
    Image: registry.gitlab.com/yak4all/yak_core:ee-stable
    Pull: Only pull the image if not present before running
    Registry credential: <empty> (YaK images are publicly available on GitLab repository)
    Description: Execution environment for YaK related jobs
    Organization: Default (or any other if you have a specific policy)
  • Job Settings: Update parameters to add persistence for YaK jobs.
    In the left menu, go to Settings ⟶ Job, then click on the Edit button
    Update the parameter Paths to expose to isolated jobs by adding these lines at the end:
- /data/yak/component_types:/workspace/yak/component_types
- /data/yak/tmp:/tmp
- /data/yak/uploads:/uploads
  • Credential Types: Create custom credential types to securely handle YaK-specific credentials.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Credential Types, then click on Create credential type button
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Credential Types, then click on Create credential type button
  1. YaK API:
    Name: YaK API
    Input configuration:
fields:
  - id: yak_ansible_transport_url
    type: string
    label: YaK API URL
  - id: yak_ansible_http_token
    type: string
    label: YaK API HTTP Token
    secret: true
  - id: yak_ansible_ssl_verify_certificate
    type: string
    label: Verify SSL certificate
    choices:
      - 'true'
      - 'false'
required:
  - yak_ansible_transport_url
  - yak_ansible_http_token
  - yak_ansible_ssl_verify_certificate

    Injector configuration:

env:
  YAK_ANSIBLE_DEBUG: 'false'
  YAK_ANSIBLE_HTTP_TOKEN: '{{ yak_ansible_http_token }}'
  YAK_ANSIBLE_TRANSPORT_URL: '{{ yak_ansible_transport_url }}'
  YAK_ANSIBLE_SSL_VERIFY_CERTIFICATE: '{{ yak_ansible_ssl_verify_certificate }}'
  2. YaK API With Component:
    Name: YaK API With Component
    Input configuration:
fields:
  - id: yak_ansible_transport_url
    type: string
    label: YaK API URL
  - id: yak_ansible_http_token
    type: string
    label: YaK API HTTP Token
    secret: true
  - id: yak_ansible_ssl_verify_certificate
    type: string
    label: Verify SSL certificate
    choices:
      - 'true'
      - 'false'
  - id: yak_core_component
    type: string
    label: YaK Core Component (used for component deployment)
required:
  - yak_ansible_transport_url
  - yak_ansible_http_token
  - yak_ansible_ssl_verify_certificate

    Injector configuration:

env:
  YAK_ANSIBLE_DEBUG: 'true'
  YAK_CORE_COMPONENT: '{{ yak_core_component }}'
  YAK_ANSIBLE_HTTP_TOKEN: '{{ yak_ansible_http_token }}'
  YAK_ANSIBLE_TRANSPORT_URL: '{{ yak_ansible_transport_url }}'
  YAK_ANSIBLE_SSL_VERIFY_CERTIFICATE: '{{ yak_ansible_ssl_verify_certificate }}'
  • Credentials: Set up credentials in AAP using the custom credential type to securely store and manage YaK API tokens.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Credential, then click on Create credential button
  1. YaK API Core:
    Name: YaK API Core
    Credential type: YaK API
    YaK API URL: <url to your yak instance>/data/graphql
    YaK API HTTP Token: <YaK API token generated previously>
    Verify SSL certificate: select true if your YaK URL has a valid SSL certificate, otherwise select false
  2. YaK API Component:
    Name: YaK API Component – <component name set in YaK>
    Credential type: YaK API With Component
    YaK API URL: <url to your yak instance>/data/graphql
    YaK API HTTP Token: <YaK API token generated previously>
    YaK Core Component (used for component deployment): <component name set in YaK>
    Verify SSL certificate: select true if your YaK URL has a valid SSL certificate, otherwise select false
  • Project: Create an AAP project pointing to your YaK repository containing playbooks.
    In the left menu, go to Automation Execution ⟶ Project, then click on Create project button
  1. YaK Core:
    Name: YaK Core
    Execution environment: YaK EE
    Source control type: Git
    Source control URL: https://gitlab.com/yak4all/yak_core.git
    Source control branch/tag/commit: <select the same release version as your deployed YaK>
    You can find the YaK release version at the bottom of the YaK left menu:
  2. YaK Component:
    Name: YaK <component type> Component
    Execution environment: YaK EE
    Source control type: Git
    Source control URL: <private git repository url to your component>
    Source control branch/tag/commit: main
    Source control credential: <the credential storing your authentication to the git repository>
  • Inventory: Configure the inventory, aligning it with YaK’s managed targets and deployment definitions.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Inventories, then click on Create inventory button
  1. YaK Inventory:
    Name: YaK Inventory

From the YaK Inventory, go to the Sources tab, then click on the Create source button
  - Name: YaK Core
  - Execution environment: YaK EE
  - Source: Sourced from a Project
  - Credential: YaK API Core
  - Project: YaK Core
  - Inventory file: inventory/yak.core.db.yml
  - Verbosity: 0
  - Options: Overwrite, Overwrite variables, Update on launch

  2. YaK Inventory for a component (you will need to create one inventory per component you want to manage from AAP):
    Name: YaK Inventory – <component name>
    Name: YaK Inventory – <component name>

From the YaK Inventory – <component name>, go to the Sources tab, then click on the Create source button
  - Name: YaK <component type>
  - Execution environment: YaK EE
  - Source: Sourced from a Project
  - Credential: YaK API Component – <component name>
  - Project: YaK <component type> Component
  - Inventory file: inventory/yak.core.db.yml
  - Verbosity: 0
  - Options: Overwrite, Overwrite variables, Update on launch

  • Template: Develop AAP templates leveraging YaK playbooks and workflows, enabling repeatable and consistent deployments.
    In the left menu, go to Automation Execution ⟶ Templates, then click on Create template button and select Create job template
  1. Server – Deploy:
    Name: Server – Deploy
    Job type: Run
    Inventory: YaK Inventory
    Project: YaK Core
    Playbook: servers/deploy.yml
    Execution environment: YaK EE
    Credentials: YaK API Core
    Extra variables: target: ''
    Select the checkbox Prompt on launch for the Extra variables section. This lets you set the server you want to deploy when you run the job.
  2. Your component – Deploy:
    Name: <component name> – Deploy
    Job type: Run
    Inventory: YaK Inventory – <component name>
    Project: YaK <component type> Component
    Playbook: <path to your component deployment playbook>
    Execution environment: YaK EE
    Credentials: YaK API Component – <component name>
Creating an AAP Workflow for Full-Stack Deployment

Leveraging AAP workflows enables structured, automated deployments. In this chapter we will deploy a server named redhat-demo and the attached PostgreSQL component named pg-demo. These resources have already been created in YaK using the UI.

  • In AAP, create a new workflow:
    In the left menu, go to Automation Execution ⟶ Templates, then click on Create template button and select Create workflow job template:
    Name: Deploy Server and PG using YaK
  • Add and connect job templates corresponding to each deployment stage using YaK inventories and playbooks; here is the complete workflow to create:
  1. YaK Core:
    After the Start, add a new step with the following settings:
    Node type: Inventory Source Sync
    Inventory source: YaK Core
    Convergence: Any
  2. Deploy redhat-demo server:
    After the YaK Core step, add a new step with the following settings:
    Node type: Job Template
    Job template: Server – Deploy
    Status: Run on success
    Convergence: Any
    Node alias: Deploy redhat-demo

    After clicking on the Next button, you will have to set the playbook extra variables:
    - Variables:

target: redhat-demo
  3. YaK Component inventory:
    After the Deploy redhat-demo step, add a new step with the following settings:
    Node type: Inventory Source Sync
    Inventory source: YaK PostgreSQL
    Status: Run on success
    Convergence: Any
  4. Deploy pg-demo component:
    After the YaK PostgreSQL step, add a new step with the following settings:
    Node type: Job Template
    Job template: PostgreSQL – Deploy PG demo
    Status: Run on success
    Convergence: Any
    Node alias: Deploy pg-demo
  • You can save your workflow template.
  • Initiate the workflow manually or configure scheduled runs for fully automated deployments.
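Once saved, the workflow can also be triggered from a terminal or a CI pipeline through the awx CLI. This is a sketch, assuming the awx CLI is installed and that the CONTROLLER_* environment variables (or their TOWER_* equivalents on older versions) point to your AAP controller:

$ export CONTROLLER_HOST=https://<your-aap-controller>
$ export CONTROLLER_USERNAME=admin
$ export CONTROLLER_PASSWORD='<password>'
$ awx workflow_job_templates launch "Deploy Server and PG using YaK" --monitor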

By integrating YaK into AAP workflows, teams can automate entire stack deployments efficiently, achieving unprecedented consistency and speed.

Conclusion

Integrating YaK with Red Hat Ansible Automation Platform combines YaK’s ease-of-use and powerful features with AAP’s comprehensive automation capabilities. This synergy ensures that deployment processes are more structured, faster, and consistently aligned with best practices, thus significantly enhancing overall efficiency and reducing time-to-market for businesses.


How to: Restore a Nutanix virtual machine to AWS using HYCU R-CLOUD

Tue, 2025-04-22 02:29

In this blog I will show you how to restore a Nutanix virtual machine in AWS using HYCU R-CLOUD, formerly HYCU Protege.

Context

Our HYCU setup is composed of multiple environments, on-premises in our datacenter and in the cloud in our AWS accounts. We have a HYCU instance deployed on our Nutanix cluster which is the “master” instance, meaning that all backups are created by this instance. The backups are saved in two environments: on our NAS server, and a copy of each backup is also transferred to an AWS S3 bucket in one of our AWS accounts. We also have a secondary HYCU instance in AWS; this one is in “Restore mode”, meaning that it can only restore instances and cannot do any backups.

We created this setup with two environments and two HYCU instances to be able to restore our environment in case we lose our whole Nutanix cluster. The schema above represents the infrastructure, with an example of a restored instance and all the temporary resources created by HYCU during the process.

HYCU setup

In the HYCU console, we have two parameters to configure so that the restore operations work.

First, we have to configure the Cloud Account:

We also have to configure the HYCU R-CLOUD account:

When this is done, we can start the restore operations.

Virtual machine Spin-up

First step: select the virtual machine we want to restore:

Then we must select the restore point and click on “SpinUp to cloud”:

Then we select our cloud provider:

In this window, we select the information about the AWS account we want to use and give some detail on the region and availability zone. The AWS account ID is gathered from the HYCU R-CLOUD configuration.

Then we have to give some more details about the virtual machine such as the shape:

We also have to give the virtual machine a network adapter, so from the previous panel we click on “Add Network Adapter” and fill the following form:

The machine needs internet access to communicate with R-CLOUD; it doesn’t need a public IP if you have a VPN and routing to the internet configured. In our case, we will give our test machine a public IP since our DR VPC does not have a running NAT gateway. Once we are done with the network setup, we click on “Add” and then on “SpinUp” in the previous window.

In the Jobs tab in our console, a new restore job has started:

From here you can follow every step of the restore, such as the creation of the temporary S3 bucket. To get more information, click on “View report” at the top right:

During the SpinUp, HYCU creates a temporary virtual machine that will orchestrate the cloud operations such as the creation of a temporary S3 bucket to store your virtual machine backup data. The SpinUp will also create a snapshot based on the temp S3 bucket data, then an AMI based on this snapshot and finally recreate the virtual machine based on the AMI.
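If you want to watch these temporary resources appear on the AWS side while the SpinUp runs, a couple of read-only AWS CLI calls are enough. This is just an observation sketch, assuming the AWS CLI is configured for the target account and region:

# Running instances (the temporary HYCU orchestrator VM shows up here during the SpinUp):
$ aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].[InstanceId,Tags[?Key=='Name']|[0].Value]" --output table

# AMIs owned by the account (the AMI built from the snapshot appears here):
$ aws ec2 describe-images --owners self \
    --query "Images[].[ImageId,Name,CreationDate]" --output table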

Here you can see the HYCU virtual machine and the temporary machine booted by the restore operation. The infrastructure schema would now look like this with all the temporary resources running:

After the SpinUp job is done, we can see in the AWS console that our virtual machine is there:

Note that the temporary HYCU instance is automatically deleted, as well as the S3 bucket created earlier. We noticed during our tests that one factor has a big impact on restore time: the temporary virtual machine shape. The bigger the shape, the shorter the restore time. Just note that only HYCU support can change this parameter for you, so if you want faster restores you should open a ticket with them.


Migrating an Oracle database to another server

Fri, 2025-04-11 10:40

There are several situations in which you have to migrate your Oracle databases to a new server. This could be due to hardware lifecycle reasons for on-prem systems, or because you need to upgrade your Operating System (OS) from Enterprise Linux 8 to Enterprise Linux 9. In this blog I want to talk about my recommended methods for such migrations, considering ease of use and reduced downtime. I do not cover migrations to the Oracle Cloud here, because the recommended way for that is to use Oracle’s Zero Downtime Migration tool.

For a migration to another server, we have different possibilities:

  • Data Pump expdp/impdp
  • Logical replication with e.g. Golden Gate
  • Setup of a Standby DB with Data Guard (or third party products like dbvisit standby for Standard Edition 2 DBs) and switchover during cutover
  • Using a refreshable PDB in case the multitenant architecture is already in use. During migration, stop the source PDB, do a final refresh, then stop refreshing and open the target PDB read/write.
  • Relocate a PDB
  • Unplug PDB, copy PDB-related files and Plug-In the PDB
  • RMAN backup and restore. To reduce downtime this can also be combined with incremental backups restored regularly on the target until cutover, when a last incremental backup is applied to the target DB.
  • RMAN duplicate
  • Data Pump Full Transportable, where you set your source tablespaces read only, export the metadata and physically move datafiles to the target, where you can import the metadata.
  • Transportable tablespaces. This can be combined with Incremental Backups to do a cross platform migration to a different endian as described in MOS Note “V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)”
  • Detaching ASM devices from the old server and attaching them to the new server.
    REMARK: This is kind of what happens when migrating to a new OS-version on the Oracle Database Appliance with Data Preserving Reprovisioning (DPR). See the blogs from my colleague Jérôme Duba on that: https://www.dbi-services.com/blog/author/jerome-dubar/
  • Just copy (e.g. with scp) all needed files to the new server

There are even more possibilities, but in the list above you should find a method which fits your needs. Some of the methods above require you to be on the same Operating System and hardware architecture (no endian change), and some of them are totally independent of platform, version or endianness (like the logical migrations with Data Pump or Golden Gate).

One of the best methods in my view is the possibility of refreshable PDBs, because

  • it is very easy to do
  • provides a short downtime during cutover
  • allows a fallback as the previous PDB is still available
  • allows migrating PDBs individually at different times
  • allows migrating non-CDBs to PDBs as well. I.e. I can refresh a non-CDB to a PDB.
  • it is available since 12.2 and can also be used with Standard Edition 2 (SE2) DBs
  • allows going to a different Release Update (RU)
  • even allows going to a different major release and run the PDB upgrade afterwards on the target CDB
  • if the source PDB is on SE2 then the target PDB can also be on Enterprise Edition (EE)
  • moving Transparent Data Encrypted PDBs is almost as easy as moving non-encrypted PDBs
  • the initial copy of the PDB can be done very fast, as Oracle uses a block-level copy mechanism when cloning a PDB, and parallelism is allowed as well on EE
  • we can use 3 PDBs per CDB since 19c without licensing the Multitenant Option. This provides some flexibility on which CDB to move the PDB to

You may check this blog with the steps to do when migrating through the refreshable PDB mechanism.

Can we migrate a 19c database to 23ai with refreshable PDBs? Yes, we can do that as shown below:

REMARK: The whole process described below can be done with the autoupgrade tool automatically. However, to see each step separately, I do this manually here.

1. Preparing the source CDB, which is on 19.22:

sys@CDB0> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES

sys@CDB0> create user c##refresh_pdbs identified by welcome1 container=all;

User created.

sys@CDB0> grant create session, create pluggable database to c##refresh_pdbs container=all;

Grant succeeded.

2. Create the refreshable PDB

To have a connection between the Oracle Cloud and my on-prem 19.22 DB, I used the method described here through an ssh tunnel:
https://www.ludovicocaldara.net/dba/push-pdb-to-cloud/

On the target server:

[oracle@db23aigi ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Apr 4 15:16:15 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai EE High Perf Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
SQL> exit
Disconnected from Oracle Database 23ai EE High Perf Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01
[oracle@db23aigi ~]$ cat clone_db.sh 
SRC_PDB=$1
TGT_PDB=$2
ALIAS=$3
 
export ORACLE_HOME=/u01/app/oracle/product/23.0.0.0/dbhome_1
export ORACLE_SID=DB23AIGI
 
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
        set timing on
        create database link prod_clone_link connect to c##refresh_pdbs
          identified by welcome1 using '$ALIAS';
        create pluggable database $2 from $1@prod_clone_link refresh mode manual;
        exec dbms_session.sleep(120);
        alter pluggable database $2 refresh;
        alter pluggable database $2 refresh mode none;
        exit
EOF
[oracle@db23aigi ~]$ 

On the source-server:

oracle@pm-DB-OEL8:~/Keys/dbi-OCI/dbi3oracle/DB-systems/db23aigi/ [cdb0 (CDB$ROOT)] ssh -i ./ssh-key-2025-04-04.key opc@<public-ip-OCI> -R 1522:pm-DB-OEL8:1521 "sudo -u oracle /home/oracle/clone_db.sh PROD PROD23AI localhost:1522/PROD_PRI"

Database link created.

Elapsed: 00:00:00.01

Pluggable database created.

Elapsed: 00:06:16.42

Pluggable database altered.

Elapsed: 00:00:14.99

Pluggable database altered.

Elapsed: 00:00:00.78
oracle@pm-DB-OEL8:~/Keys/dbi-OCI/dbi3oracle/DB-systems/db23aigi/ [cdb0 (CDB$ROOT)] 

3. Upgrade the PDB to 23ai on the target server

SQL> alter pluggable database PROD23AI open upgrade;

Pluggable database altered.

SQL> select name, open_mode, restricted from v$pdbs where name='PROD23AI';

NAME				 OPEN_MODE  RES
-------------------------------- ---------- ---
PROD23AI			 MIGRATE    YES

SQL> 

[oracle@db23aigi ~]$ $ORACLE_HOME/bin/dbupgrade -c "PROD23AI" -l /tmp
....
Upgrade Summary Report Located in:
/tmp/upg_summary.log

     Time: 673s For PDB(s)

Grand Total Time: 673s 

 LOG FILES: (/tmp/catupgrd*.log)


Grand Total Upgrade Time:    [0d:0h:11m:13s]
[oracle@db23aigi ~]$ 

REMARK: As mentioned initially I should have used autoupgrade for the whole process (or just the upgrade) here as $ORACLE_HOME/bin/dbupgrade has been desupported in 23ai, but for demonstration purposes of refreshable PDBs it is OK.
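For reference, a minimal autoupgrade config for upgrading a PDB that is already plugged into the 23ai CDB could look like the sketch below. File name, paths and the SID are illustrative and must be adapted; check the autoupgrade documentation for the authoritative parameter list:

global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.sid=DB23AIGI
upg1.source_home=/u01/app/oracle/product/23.0.0.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/23.0.0.0/dbhome_1
upg1.pdbs=PROD23AI

It would then be run with:

$ java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config prod23ai_upg.cfg -mode deploy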

4. Final steps after the upgrade

-- check the PDB_PLUG_IN_VIOLATIONS view for unresolved issues
SQL> alter session set container=PROD23AI;

Session altered.

SQL> select type, cause, message 
from PDB_PLUG_IN_VIOLATIONS 
where name='PROD23AI' and status != 'RESOLVED';  2    3  

TYPE		CAUSE			       MESSAGE
--------------- ------------------------------ ------------------------------------------------------------------------------------------
WARNING 	is encrypted tablespace?       Tablespace SYSTEM is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	is encrypted tablespace?       Tablespace SYSAUX is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	is encrypted tablespace?       Tablespace USERS is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	Traditional Audit	       Traditional Audit configuration mismatch between the PDB and CDB$ROOT

SQL> administer key management set key using tag 'new own key' force keystore identified by "<wallet password>" with backup;

keystore altered.

SQL> alter tablespace users encryption online  encrypt;

Tablespace altered.

SQL> alter tablespace sysaux encryption online  encrypt;

Tablespace altered.

SQL> alter tablespace system encryption online  encrypt;

Tablespace altered.

SQL> exec dbms_pdb.CLEAR_PLUGIN_VIOLATIONS;

PL/SQL procedure successfully completed.

SQL> select type, cause, message 
from PDB_PLUG_IN_VIOLATIONS 
where name='PROD23AI' and status != 'RESOLVED';

no rows selected


-- Recompile invalid objects using the utlrp.sql script:
SQL> alter session set container=PROD23AI;
 
Session altered.
 
SQL> @?/rdbms/admin/utlrp.sql
 
PL/SQL procedure successfully completed.

-- Downtime ends. Check the DBA_REGISTRY_SQLPATCH view:
SQL> alter session set container=PROD23AI;
 
Session altered.
 
SQL> select patch_id, patch_type, status, description, action_time from dba_registry_sqlpatch order by action_time desc;

  PATCH_ID PATCH_TYPE STATUS	 DESCRIPTION						      ACTION_TIME
---------- ---------- ---------- ------------------------------------------------------------ --------------------------------
  37366180 RU	      SUCCESS	 Database Release Update : 23.7.0.25.01 (37366180) Gold Image 04-APR-25 04.00.06.975353 PM

Summary:

If you haven’t done this yet, then I do recommend migrating to the multitenant architecture as soon as possible. It makes several DBA tasks so much easier. Especially the migration to a new server with refreshable PDBs is very easy to do, with low downtime, high flexibility and almost no impact on the source PDB during refreshes. On top of that, you do not lose your source PDB during the process and may go back to it in case tests show that the target is not working correctly.


What’s New in M-Files 25.3

Thu, 2025-04-10 10:20

I’m not a big fan of writing a post for each new release, but I think the last one is a big step towards what M-Files will become in the coming months.
M-Files 25.3 was released to the cloud on March 30th and has been available for download and auto-update since April 2nd. It brings a suite of powerful updates designed to improve document management efficiency and user experience.
Here’s a breakdown of the most notable features, improvements, and fixes.

New Features and Improvements

Admin Workflow State Changes in M-Files Web

System administrators can now override any workflow state directly from the context menu in M-Files Web using the new “Change state (Admin)” option. This allows for greater control and quicker resolution of workflow issues.

Zero-Click Metadata Filling

When users drag and drop new objects into specific views, required metadata fields can now be automatically prefilled without displaying the metadata card. This creates a seamless and efficient upload process.

Object-Based Hierarchies Support

Object-based hierarchies are now available on the metadata card in both M-Files Web and the new Desktop interface, providing more structured data representation.

Enhanced Keyboard Navigation

Improved keyboard shortcuts now allow users to jump quickly to key interface elements like the search bar and tabs, streamlining navigation for power users.

Document Renaming in Web and Desktop

Users can now rename files in M-Files Web and the new Desktop interface via the context menu or the F2 key, making file management more intuitive.

Default gRPC Port Update

The default gRPC port for new vault connections is now set to 443, improving compatibility with standard cloud environments and simplifying firewall configurations.

AutoCAD 2025 Support

The M-Files AutoCAD add-in is now compatible with AutoCAD 2025, ensuring continued integration with the latest CAD workflows.

Fixes and Performance Enhancements
  • Drag-and-Drop Upload Error Resolved: Fixed a bug that caused “Upload session not found” errors during file uploads.
  • Automatic Property Filling: Ensured property values now update correctly when source properties are modified.
  • Version-Specific Links: Resolved an issue where links pointed to the latest version rather than the correct historical version.
  • Anonymous User Permissions: Closed a loophole that allowed anonymous users to create and delete views.
  • Theme Display Consistency: Custom themes now persist correctly across multiple vault sessions.
  • Office Add-In Fixes: Resolved compatibility issues with merged cells in Excel documents.
  • Date & Time Accuracy: Fixed timezone issues that affected Date & Time metadata.
  • Metadata Card Configuration: Ensured proper application of workflow settings.
  • Annotation Display in Web: Annotations are now correctly tied to their document versions.
  • Improved Link Functionality: Object ID-based links now work as expected in the new Desktop client.
Conclusion

M-Files 25.3 introduces thoughtful improvements that empower both administrators and end-users. From seamless metadata handling to improved keyboard accessibility and robust error fixes, this release makes it easier than ever to manage documents effectively.

Stay tuned for more insights and tips on making the most of your M-Files solution with us!


PostgreSQL 18: Support for asynchronous I/O

Thu, 2025-04-10 03:09

This is maybe one of the biggest steps forward for PostgreSQL: PostgreSQL 18 will come with support for asynchronous I/O. Traditionally, PostgreSQL relies on the operating system to hide the latency of writing to disk, which is done synchronously and can lead to double buffering (PostgreSQL shared buffers and the OS file cache). This is most important for WAL writes, as PostgreSQL must make sure that changes are flushed to disk and needs to wait until this is confirmed.

Before we do some tests let’s see what’s new from a parameter perspective. One of the new parameters is io_method:

postgres=# show io_method;
 io_method 
-----------
 worker
(1 row)

The default is “worker”, and the maximum number of worker processes performing asynchronous I/O is controlled by io_workers:

postgres=# show io_workers;
 io_workers 
------------
 3
(1 row)

This can also be seen on the operating system:

postgres=# \! ps aux | grep "io worker" | grep -v grep
postgres   29732  0.0  0.1 224792  7052 ?        Ss   Apr08   0:00 postgres: pgdev: io worker 1
postgres   29733  0.0  0.2 224792  9884 ?        Ss   Apr08   0:00 postgres: pgdev: io worker 0
postgres   29734  0.0  0.1 224792  7384 ?        Ss   Apr08   0:00 postgres: pgdev: io worker 2

The other possible settings for io_method are:

  • io_uring: Asynchronous I/O using io_uring
  • sync: Synchronous I/O, the behavior before PostgreSQL 18

io_workers only has an effect if io_method is set to “worker”, which is the default configuration.
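A quick way to verify how these two parameters can be changed (the context column tells you whether a reload is enough or a restart is required) is to query pg_settings:

postgres=# select name, setting, context from pg_settings where name in ('io_method','io_workers');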

As usual: what follows are just some basic tests. Test for yourself, in your environment, with your specific workload, to get meaningful numbers. Especially if you test in a public cloud, be aware that the numbers might not show you the full truth.

We’ll do the tests on an AWS EC2 t3.large instance running Debian 12. The storage volume is gp3 with ext4 (default settings):

postgres@ip-10-0-1-209:~$ grep proc /proc/cpuinfo 
processor       : 0
processor       : 1
postgres@ip-10-0-1-209:~$ free -g
               total        used        free      shared  buff/cache   available
Mem:               7           0           4           0           3           7
Swap:              0           0           0
postgres@ip-10-0-1-209:~$ mount | grep 18
/dev/nvme1n1 on /u02/pgdata/18 type ext4 (rw,relatime)

PostgreSQL was initialized with the default settings:

postgres@ip-10-0-1-209:~$ /u01/app/postgres/product/18/db_0/bin/initdb --pgdata=/u02/pgdata/18/data/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "C.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are enabled.

fixing permissions on existing directory /u02/pgdata/18/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "autovacuum_worker_slots" ... 16
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /u01/app/postgres/product/18/db_0/bin/pg_ctl -D /u02/pgdata/18/data/ -l logfile start

The following settings have been changed:

postgres@ip-10-0-1-209:~$ echo "shared_buffers='2GB'" >> /u02/pgdata/18/data/postgresql.auto.conf 
postgres@ip-10-0-1-209:~$ echo "checkpoint_timeout='20min'" >> /u02/pgdata/18/data/postgresql.auto.conf 
postgres@ip-10-0-1-209:~$ echo "random_page_cost=1.1" >> /u02/pgdata/18/data/postgresql.auto.conf
postgres@ip-10-0-1-209:~$ echo "max_wal_size='8GB'" >> /u02/pgdata/18/data/postgresql.auto.conf  
postgres@ip-10-0-1-209:~$ /u01/app/postgres/product/18/db_0/bin/pg_ctl --pgdata=/u02/pgdata/18/data/ -l /dev/null start
postgres@ip-10-0-1-209:~$ /u01/app/postgres/product/18/db_0/bin/psql -c "select version()"
                                        version                                        
---------------------------------------------------------------------------------------
 PostgreSQL 18devel dbi services build on x86_64-linux, compiled by gcc-12.2.0, 64-bit
(1 row)
postgres@ip-10-0-1-209:~$ export PATH=/u01/app/postgres/product/18/db_0/bin/:$PATH

The first test is data loading, which produces a data set of around 1536MB. Let’s see how long that takes when io_method is set to “worker” (3 runs in a row):

postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 31.85 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 24.82 s, vacuum 0.35 s, primary keys 6.68 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 31.97 s (drop tables 0.24 s, create tables 0.00 s, client-side generate 25.44 s, vacuum 0.34 s, primary keys 5.93 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 30.72 s (drop tables 0.26 s, create tables 0.00 s, client-side generate 23.93 s, vacuum 0.55 s, primary keys 5.98 s).

The same test with “sync”:

postgres@ip-10-0-1-209:~$ psql -c "alter system set io_method='sync'"
ALTER SYSTEM
postgres@ip-10-0-1-209:~$ pg_ctl --pgdata=/u02/pgdata/18/data/ restart -l /dev/null
postgres@ip-10-0-1-209:~$ psql -c "show io_method"
 io_method 
-----------
 sync
(1 row)
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 20.89 s (drop tables 0.29 s, create tables 0.01 s, client-side generate 14.70 s, vacuum 0.45 s, primary keys 5.44 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 21.57 s (drop tables 0.20 s, create tables 0.00 s, client-side generate 16.13 s, vacuum 0.46 s, primary keys 4.77 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 21.44 s (drop tables 0.20 s, create tables 0.00 s, client-side generate 16.04 s, vacuum 0.52 s, primary keys 4.67 s).

… and finally “io_uring”:

postgres@ip-10-0-1-209:~$ psql -c "alter system set io_method='io_uring'"
ALTER SYSTEM
postgres@ip-10-0-1-209:~$ pg_ctl --pgdata=/u02/pgdata/18/data/ restart -l /dev/null
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
postgres@ip-10-0-1-209:~$ psql -c "show io_method"
 io_method 
-----------
 io_uring
(1 row)

postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 20.63 s (drop tables 0.35 s, create tables 0.01 s, client-side generate 14.92 s, vacuum 0.47 s, primary keys 4.88 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 20.81 s (drop tables 0.29 s, create tables 0.00 s, client-side generate 14.43 s, vacuum 0.46 s, primary keys 5.63 s).
postgres@ip-10-0-1-209:~$ pgbench -i -s 100
dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...                                                                                   
creating primary keys...
done in 21.11 s (drop tables 0.24 s, create tables 0.00 s, client-side generate 15.63 s, vacuum 0.53 s, primary keys 4.70 s).

There is not much difference between “sync” and “io_uring”, but “worker” is clearly slower for that type of workload.

Moving on, let’s see how that looks for a standard pgbench benchmark. We’ll start with “io_uring” as this is the current setting:

postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 567989
number of failed transactions: 0 (0.000%)
latency average = 2.113 ms
initial connection time = 8.996 ms
tps = 946.659673 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 557640
number of failed transactions: 0 (0.000%)
latency average = 2.152 ms
initial connection time = 6.994 ms
tps = 929.408406 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 563613
number of failed transactions: 0 (0.000%)
latency average = 2.129 ms
initial connection time = 16.351 ms
tps = 939.378627 (without initial connection time)

Same test with “worker”:

postgres@ip-10-0-1-209:~$ psql -c "alter system set io_method='worker'"
ALTER SYSTEM
postgres@ip-10-0-1-209:~$ pg_ctl --pgdata=/u02/pgdata/18/data/ restart -l /dev/null
waiting for server to shut down............. done
server stopped
waiting for server to start.... done
server started
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 549176
number of failed transactions: 0 (0.000%)
latency average = 2.185 ms
initial connection time = 7.189 ms
tps = 915.301403 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 564898
number of failed transactions: 0 (0.000%)
latency average = 2.124 ms
initial connection time = 11.332 ms
tps = 941.511304 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 563041
number of failed transactions: 0 (0.000%)
latency average = 2.131 ms
initial connection time = 9.120 ms
tps = 938.412979 (without initial connection time)

… and finally “sync”:

postgres@ip-10-0-1-209:~$ psql -c "alter system set io_method='sync'"
ALTER SYSTEM
postgres@ip-10-0-1-209:~$ pg_ctl --pgdata=/u02/pgdata/18/data/ restart -l /dev/null
waiting for server to shut down............ done
server stopped
waiting for server to start.... done
server started
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 560420
number of failed transactions: 0 (0.000%)
latency average = 2.141 ms
initial connection time = 12.000 ms
tps = 934.050237 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 560077
number of failed transactions: 0 (0.000%)
latency average = 2.143 ms
initial connection time = 7.204 ms
tps = 933.469665 (without initial connection time)
postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=2 --jobs=2
pgbench (18devel dbi services build)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 2
number of threads: 2
maximum number of tries: 1
duration: 600 s
number of transactions actually processed: 566150
number of failed transactions: 0 (0.000%)
latency average = 2.120 ms
initial connection time = 7.579 ms
tps = 943.591451 (without initial connection time)

As you see there is not much difference, no matter the io_method. Let’s stress the system a bit more (only putting the summaries here):

postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=10 --jobs=10
## sync
tps = 2552.785398 (without initial connection time)
tps = 2505.476064 (without initial connection time)
tps = 2542.419230 (without initial connection time)
## io_uring
tps = 2511.138931 (without initial connection time)
tps = 2529.705311 (without initial connection time)
tps = 2573.195751 (without initial connection time)
## worker
tps = 2531.657962 (without initial connection time)
tps = 2523.854335 (without initial connection time)
tps = 2515.490351 (without initial connection time)

Same picture: there is not much difference. One last test, hammering the system even more:

postgres@ip-10-0-1-209:~$ pgbench --time=600 --client=20 --jobs=20
## worker
tps = 2930.268033 (without initial connection time)
tps = 2799.499964 (without initial connection time)
tps = 3033.491153 (without initial connection time)
## io_uring
tps = 2942.542882 (without initial connection time)
tps = 3061.487286 (without initial connection time)
tps = 2995.175169 (without initial connection time)
## sync
tps = 2997.654084 (without initial connection time)
tps = 2924.269626 (without initial connection time)
tps = 2753.853272 (without initial connection time)

At least for these tests, there is not much difference between the three settings for io_method (“sync” seems to be a bit slower), but I think this is still a great result: for such a massive change, reaching the same performance as before is an achievement. Things in PostgreSQL improve all the time, and I am sure there will be a lot of improvements in this area as well.

Usually I would link to the commit here, but in this case that would be a whole bunch of commits. To everyone involved in this: a big thank you.


Use an expired SLES with openSUSE repositories

Wed, 2025-04-09 08:05

Last week, I hit a wall when my SUSE Linux Enterprise Server license expired, stopping all repository access. Needing PostgreSQL urgently, I couldn’t wait for SUSE to renew my license and had to act fast.
I chose to disable every SLES repository and switch to the openSUSE Leap repository. This worked flawlessly and made my system usable in a very short time. This is why I wanted to write a short blog about it:

# First, check what repos are active with:
$ sudo zypper repos

# In case you only have SLES repositories on your system, you can disable all of them at once. Otherwise you will be spammed with error messages when running zypper. To disable all repos in one shot, use:
$ sudo zypper modifyrepo --all --disable

# Now we come to the fun part. Depending on the minor version of release 15 you use, you need to change it inside the repository link. In my case I'm using 15.6:
$ slesver=15.6 && sudo zypper addrepo http://download.opensuse.org/distribution/leap/$slesver/repo/oss/ opensuse-leap-oss

# Now we only need to refresh the repositories and accept the gpg-keys:
$ sudo zypper refresh

# From now on we can install any packages we need without ever having to activate the system.
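As a quick check that the swap worked, you can now install what triggered the exercise in the first place. A sketch, assuming openSUSE Leap 15.6 package names (they may differ on other releases):

$ sudo zypper install postgresql16-server postgresql16-contrib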
Disclaimer

This guide shows you how to swap out the paid SUSE Linux Enterprise Server (SLES) repositories, which are professionally managed and maintained by SUSE, for the open-source openSUSE Leap repositories. A community of volunteers, with some help from SUSE, drives and supports the openSUSE Leap repositories, but they lack the enterprise-grade support, testing, and update guarantees provided by SLES. By following these steps, you will lose access to SUSE’s official updates, security patches, and support services tied to your expired SLES license. This process converts your system into a community-supported setup, which may not align with production or enterprise needs. Proceed at your own risk, and ensure you understand the implications, especially regarding security, stability, and compliance. To stay up to date with security announcements on openSUSE you can subscribe here.


PostgreSQL 18: Add function to report backend memory contexts

Wed, 2025-04-09 00:31

Another great feature was committed for PostgreSQL 18, which is interesting if you want to know how memory is used by a backend process. While you have been able to take a look at the memory contexts of your current session since PostgreSQL 14, there was no way to retrieve that information for another backend.
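PostgreSQL 18 closes this gap with a function that takes the PID of the target process. As a sketch only — the function name comes from the commit, and I assume here a signature of (pid, summary flag, timeout in seconds); check \df pg_get_process_memory_contexts on your build for the exact definition:

postgres=# select * from pg_get_process_memory_contexts(<pid of another backend>, false, 5) limit 5;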

Since PostgreSQL 14 there is the pg_backend_memory_contexts catalog view. This view displays the memory contexts of the server process attached to the current session, e.g.:

postgres=# select * from pg_backend_memory_contexts;
                      name                      |                     ident                      |    type    | level |         path          | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes 
------------------------------------------------+------------------------------------------------+------------+-------+-----------------------+-------------+---------------+------------+-------------+------------
 TopMemoryContext                               |                                                | AllocSet   |     1 | {1}                   |      174544 |             7 |      36152 |          20 |     138392
 Record information cache                       |                                                | AllocSet   |     2 | {1,2}                 |        8192 |             1 |       1640 |           0 |       6552
 RegexpCacheMemoryContext                       |                                                | AllocSet   |     2 | {1,3}                 |        1024 |             1 |        784 |           0 |        240
 collation cache                                |                                                | AllocSet   |     2 | {1,4}                 |        8192 |             1 |       6808 |           0 |       1384
 TableSpace cache                               |                                                | AllocSet   |     2 | {1,5}                 |        8192 |             1 |       2152 |           0 |       6040
 Map from relid to OID of cached composite type |                                                | AllocSet   |     2 | {1,6}                 |        8192 |             1 |       2544 |           0 |       5648
 Type information cache                         |                                                | AllocSet   |     2 | {1,7}                 |       24624 |             2 |       2672 |           0 |      21952
 Operator lookup cache                          |                                                | AllocSet   |     2 | {1,8}                 |       24576 |             2 |      10816 |           4 |      13760
 search_path processing cache                   |                                                | AllocSet   |     2 | {1,9}                 |        8192 |             1 |       5656 |           8 |       2536
 RowDescriptionContext                          |                                                | AllocSet   |     2 | {1,10}                |        8192 |             1 |       6920 |           0 |       1272
 MessageContext                                 |                                                | AllocSet   |     2 | {1,11}                |       32768 |             3 |       1632 |           0 |      31136
 Operator class cache                           |                                                | AllocSet   |     2 | {1,12}                |        8192 |             1 |        616 |           0 |       7576
 smgr relation table                            |                                                | AllocSet   |     2 | {1,13}                |       32768 |             3 |      16904 |           9 |      15864
 PgStat Shared Ref Hash                         |                                                | AllocSet   |     2 | {1,14}                |        9264 |             2 |        712 |           0 |       8552
 PgStat Shared Ref                              |                                                | AllocSet   |     2 | {1,15}                |        8192 |             4 |       3440 |           5 |       4752
 PgStat Pending                                 |                                                | AllocSet   |     2 | {1,16}                |       16384 |             5 |      15984 |          58 |        400
 TopTransactionContext                          |                                                | AllocSet   |     2 | {1,17}                |        8192 |             1 |       7776 |           0 |        416
 TransactionAbortContext                        |                                                | AllocSet   |     2 | {1,18}                |       32768 |             1 |      32528 |           0 |        240
 Portal hash                                    |                                                | AllocSet   |     2 | {1,19}                |        8192 |             1 |        616 |           0 |       7576
 TopPortalContext                               |                                                | AllocSet   |     2 | {1,20}                |        8192 |             1 |       7688 |           0 |        504
 Relcache by OID                                |                                                | AllocSet   |     2 | {1,21}                |       16384 |             2 |       3608 |           3 |      12776
 CacheMemoryContext                             |                                                | AllocSet   |     2 | {1,22}                |     8487056 |            14 |    3376568 |           3 |    5110488
 LOCALLOCK hash                                 |                                                | AllocSet   |     2 | {1,23}                |        8192 |             1 |        616 |           0 |       7576
 WAL record construction                        |                                                | AllocSet   |     2 | {1,24}                |       50200 |             2 |       6400 |           0 |      43800
 PrivateRefCount                                |                                                | AllocSet   |     2 | {1,25}                |        8192 |             1 |        608 |           0 |       7584
 MdSmgr                                         |                                                | AllocSet   |     2 | {1,26}                |        8192 |             1 |       7296 |           0 |        896
 GUCMemoryContext                               |                                                | AllocSet   |     2 | {1,27}                |       24576 |             2 |       8264 |           1 |      16312
 Timezones                                      |                                                | AllocSet   |     2 | {1,28}                |      104112 |             2 |       2672 |           0 |     101440
 ErrorContext                                   |                                                | AllocSet   |     2 | {1,29}                |        8192 |             1 |       7952 |           0 |        240
 RegexpMemoryContext                            | ^(.*memory.*)$                                 | AllocSet   |     3 | {1,3,30}              |       13360 |             5 |       6800 |           8 |       6560
 PortalContext                                  | <unnamed>                                      | AllocSet   |     3 | {1,20,31}             |        1024 |             1 |        608 |           0 |        416
 relation rules                                 | pg_backend_memory_contexts                     | AllocSet   |     3 | {1,22,32}             |        8192 |             4 |       3840 |           1 |       4352
 index info                                     | pg_toast_1255_index                            | AllocSet   |     3 | {1,22,33}             |        3072 |             2 |       1152 |           2 |       1920
 index info                                     | pg_toast_2619_index                            | AllocSet   |     3 | {1,22,34}             |        3072 |             2 |       1152 |           2 |       1920
 index info                                     | pg_constraint_conrelid_contypid_conname_index  | AllocSet   |     3 | {1,22,35}             |        3072 |             2 |       1016 |           1 |       2056
 index info                                     | pg_statistic_ext_relid_index                   | AllocSet   |     3 | {1,22,36}             |        2048 |             2 |        752 |           2 |       1296
 index info                                     | pg_index_indrelid_index                        | AllocSet   |     3 | {1,22,37}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_db_role_setting_databaseid_rol_index        | AllocSet   |     3 | {1,22,38}             |        3072 |             2 |       1120 |           1 |       1952
 index info                                     | pg_opclass_am_name_nsp_index                   | AllocSet   |     3 | {1,22,39}             |        3072 |             2 |       1048 |           1 |       2024
 index info                                     | pg_foreign_data_wrapper_name_index             | AllocSet   |     3 | {1,22,40}             |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_enum_oid_index                              | AllocSet   |     3 | {1,22,41}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_class_relname_nsp_index                     | AllocSet   |     3 | {1,22,42}             |        3072 |             2 |       1080 |           3 |       1992
 index info                                     | pg_foreign_server_oid_index                    | AllocSet   |     3 | {1,22,43}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_publication_pubname_index                   | AllocSet   |     3 | {1,22,44}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_statistic_relid_att_inh_index               | AllocSet   |     3 | {1,22,45}             |        3072 |             2 |        872 |           2 |       2200
 index info                                     | pg_cast_source_target_index                    | AllocSet   |     3 | {1,22,46}             |        3072 |             2 |       1080 |           3 |       1992
 index info                                     | pg_language_name_index                         | AllocSet   |     3 | {1,22,47}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_transform_oid_index                         | AllocSet   |     3 | {1,22,48}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_collation_oid_index                         | AllocSet   |     3 | {1,22,49}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_amop_fam_strat_index                        | AllocSet   |     3 | {1,22,50}             |        3248 |             3 |        840 |           0 |       2408
 index info                                     | pg_index_indexrelid_index                      | AllocSet   |     3 | {1,22,51}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_ts_template_tmplname_index                  | AllocSet   |     3 | {1,22,52}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_ts_config_map_index                         | AllocSet   |     3 | {1,22,53}             |        3072 |             2 |       1192 |           2 |       1880
 index info                                     | pg_opclass_oid_index                           | AllocSet   |     3 | {1,22,54}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_foreign_data_wrapper_oid_index              | AllocSet   |     3 | {1,22,55}             |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_publication_namespace_oid_index             | AllocSet   |     3 | {1,22,56}             |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_event_trigger_evtname_index                 | AllocSet   |     3 | {1,22,57}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_statistic_ext_name_index                    | AllocSet   |     3 | {1,22,58}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_publication_oid_index                       | AllocSet   |     3 | {1,22,59}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_ts_dict_oid_index                           | AllocSet   |     3 | {1,22,60}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_event_trigger_oid_index                     | AllocSet   |     3 | {1,22,61}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_conversion_default_index                    | AllocSet   |     3 | {1,22,62}             |        2224 |             2 |        216 |           0 |       2008
 index info                                     | pg_operator_oprname_l_r_n_index                | AllocSet   |     3 | {1,22,63}             |        3248 |             3 |        840 |           0 |       2408
 index info                                     | pg_trigger_tgrelid_tgname_index                | AllocSet   |     3 | {1,22,64}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_extension_oid_index                         | AllocSet   |     3 | {1,22,65}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_enum_typid_label_index                      | AllocSet   |     3 | {1,22,66}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_ts_config_oid_index                         | AllocSet   |     3 | {1,22,67}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_user_mapping_oid_index                      | AllocSet   |     3 | {1,22,68}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_opfamily_am_name_nsp_index                  | AllocSet   |     3 | {1,22,69}             |        3072 |             2 |       1192 |           2 |       1880
 index info                                     | pg_foreign_table_relid_index                   | AllocSet   |     3 | {1,22,70}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_type_oid_index                              | AllocSet   |     3 | {1,22,71}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_aggregate_fnoid_index                       | AllocSet   |     3 | {1,22,72}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_constraint_oid_index                        | AllocSet   |     3 | {1,22,73}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_rewrite_rel_rulename_index                  | AllocSet   |     3 | {1,22,74}             |        3072 |             2 |       1152 |           3 |       1920
 index info                                     | pg_ts_parser_prsname_index                     | AllocSet   |     3 | {1,22,75}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_ts_config_cfgname_index                     | AllocSet   |     3 | {1,22,76}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_ts_parser_oid_index                         | AllocSet   |     3 | {1,22,77}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_publication_rel_prrelid_prpubid_index       | AllocSet   |     3 | {1,22,78}             |        3072 |             2 |       1264 |           2 |       1808
 index info                                     | pg_operator_oid_index                          | AllocSet   |     3 | {1,22,79}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_namespace_nspname_index                     | AllocSet   |     3 | {1,22,80}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_ts_template_oid_index                       | AllocSet   |     3 | {1,22,81}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_amop_opr_fam_index                          | AllocSet   |     3 | {1,22,82}             |        3072 |             2 |        904 |           1 |       2168
 index info                                     | pg_default_acl_role_nsp_obj_index              | AllocSet   |     3 | {1,22,83}             |        3072 |             2 |       1160 |           2 |       1912
 index info                                     | pg_collation_name_enc_nsp_index                | AllocSet   |     3 | {1,22,84}             |        3072 |             2 |        904 |           1 |       2168
 index info                                     | pg_publication_rel_oid_index                   | AllocSet   |     3 | {1,22,85}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_range_rngtypid_index                        | AllocSet   |     3 | {1,22,86}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_ts_dict_dictname_index                      | AllocSet   |     3 | {1,22,87}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_type_typname_nsp_index                      | AllocSet   |     3 | {1,22,88}             |        3072 |             2 |       1080 |           3 |       1992
 index info                                     | pg_opfamily_oid_index                          | AllocSet   |     3 | {1,22,89}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_statistic_ext_oid_index                     | AllocSet   |     3 | {1,22,90}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_statistic_ext_data_stxoid_inh_index         | AllocSet   |     3 | {1,22,91}             |        3072 |             2 |       1264 |           2 |       1808
 index info                                     | pg_class_oid_index                             | AllocSet   |     3 | {1,22,92}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_proc_proname_args_nsp_index                 | AllocSet   |     3 | {1,22,93}             |        3072 |             2 |       1048 |           1 |       2024
 index info                                     | pg_partitioned_table_partrelid_index           | AllocSet   |     3 | {1,22,94}             |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_range_rngmultitypid_index                   | AllocSet   |     3 | {1,22,95}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_transform_type_lang_index                   | AllocSet   |     3 | {1,22,96}             |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_attribute_relid_attnum_index                | AllocSet   |     3 | {1,22,97}             |        3072 |             2 |       1080 |           3 |       1992
 index info                                     | pg_proc_oid_index                              | AllocSet   |     3 | {1,22,98}             |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_language_oid_index                          | AllocSet   |     3 | {1,22,99}             |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_namespace_oid_index                         | AllocSet   |     3 | {1,22,100}            |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_amproc_fam_proc_index                       | AllocSet   |     3 | {1,22,101}            |        3248 |             3 |        840 |           0 |       2408
 index info                                     | pg_foreign_server_name_index                   | AllocSet   |     3 | {1,22,102}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_attribute_relid_attnam_index                | AllocSet   |     3 | {1,22,103}            |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_publication_namespace_pnnspid_pnpubid_index | AllocSet   |     3 | {1,22,104}            |        3072 |             2 |       1264 |           2 |       1808
 index info                                     | pg_conversion_oid_index                        | AllocSet   |     3 | {1,22,105}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_user_mapping_user_server_index              | AllocSet   |     3 | {1,22,106}            |        3072 |             2 |       1264 |           2 |       1808
 index info                                     | pg_subscription_rel_srrelid_srsubid_index      | AllocSet   |     3 | {1,22,107}            |        3072 |             2 |       1264 |           2 |       1808
 index info                                     | pg_sequence_seqrelid_index                     | AllocSet   |     3 | {1,22,108}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_extension_name_index                        | AllocSet   |     3 | {1,22,109}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_conversion_name_nsp_index                   | AllocSet   |     3 | {1,22,110}            |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_authid_oid_index                            | AllocSet   |     3 | {1,22,111}            |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_auth_members_member_role_index              | AllocSet   |     3 | {1,22,112}            |        3072 |             2 |       1160 |           2 |       1912
 index info                                     | pg_subscription_oid_index                      | AllocSet   |     3 | {1,22,113}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_parameter_acl_oid_index                     | AllocSet   |     3 | {1,22,114}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_tablespace_oid_index                        | AllocSet   |     3 | {1,22,115}            |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_parameter_acl_parname_index                 | AllocSet   |     3 | {1,22,116}            |        2048 |             2 |        824 |           3 |       1224
 index info                                     | pg_shseclabel_object_index                     | AllocSet   |     3 | {1,22,117}            |        3072 |             2 |       1192 |           2 |       1880
 index info                                     | pg_replication_origin_roname_index             | AllocSet   |     3 | {1,22,118}            |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_database_datname_index                      | AllocSet   |     3 | {1,22,119}            |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_subscription_subname_index                  | AllocSet   |     3 | {1,22,120}            |        3072 |             2 |       1296 |           3 |       1776
 index info                                     | pg_replication_origin_roiident_index           | AllocSet   |     3 | {1,22,121}            |        2048 |             2 |        792 |           3 |       1256
 index info                                     | pg_auth_members_role_member_index              | AllocSet   |     3 | {1,22,122}            |        3072 |             2 |       1160 |           2 |       1912
 index info                                     | pg_database_oid_index                          | AllocSet   |     3 | {1,22,123}            |        2048 |             2 |        680 |           2 |       1368
 index info                                     | pg_authid_rolname_index                        | AllocSet   |     3 | {1,22,124}            |        2048 |             2 |        680 |           2 |       1368
 GUC hash table                                 |                                                | AllocSet   |     3 | {1,27,125}            |       32768 |             3 |      11696 |           6 |      21072
 ExecutorState                                  |                                                | AllocSet   |     4 | {1,20,31,126}         |       49200 |             4 |      13632 |           3 |      35568
 tuplestore tuples                              |                                                | Generation |     5 | {1,20,31,126,127}     |       32768 |             3 |      13360 |           0 |      19408
 printtup                                       |                                                | AllocSet   |     5 | {1,20,31,126,128}     |        8192 |             1 |       7952 |           0 |        240
 Table function arguments                       |                                                | AllocSet   |     5 | {1,20,31,126,129}     |        8192 |             1 |       7912 |           0 |        280
 ExprContext                                    |                                                | AllocSet   |     5 | {1,20,31,126,130}     |       32768 |             3 |       5656 |           4 |      27112
 pg_get_backend_memory_contexts                 |                                                | AllocSet   |     6 | {1,20,31,126,130,131} |       16384 |             2 |       5664 |           3 |      10720
(131 rows)

This is quite a lot of information but, as mentioned above, it is only available for the backend process attached to the current session.
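For the current backend, a quick way to turn this view into an overall figure is to simply aggregate it (a small sketch, the numbers will of course vary on your system):

postgres=# select count(*) as contexts,
                  pg_size_pretty(sum(total_bytes)) as total
           from pg_backend_memory_contexts;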

Starting with PostgreSQL 18 (assuming nothing changes before the release, as this is still the development branch), you can get those statistics for other backends as well. For this, a new function was added:

postgres=# \dfS pg_get_process_memory_contexts
                                                                                                                                                                                                    List of functions
   Schema   |              Name              | Result data type |                                                                                                                                                               Argument da>
------------+--------------------------------+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------->
 pg_catalog | pg_get_process_memory_contexts | SETOF record     | pid integer, summary boolean, retries double precision, OUT name text, OUT ident text, OUT type text, OUT path integer[], OUT level integer, OUT total_bytes bigint, OUT >
(1 row)

Let’s play a bit with this. Suppose we have a session which reports this backend process ID:

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.1, 64-bit
(1 row)

postgres=# select pg_backend_pid();
 pg_backend_pid 
----------------
          31291
(1 row)

In another session we can now ask for a summary of the memory contexts of the PID we got in the first session, like this (the second parameter turns the summary on, the third is the time in seconds to wait for updated statistics):

postgres=# select * from pg_get_process_memory_contexts(31291,true,2);
             name             | ident |   type   |  path  | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | num_agg_contexts |       stats_timestamp        
------------------------------+-------+----------+--------+-------+-------------+---------------+------------+-------------+------------+------------------+------------------------------
 TopMemoryContext             |       | AllocSet | {1}    |     1 |      141776 |             6 |       5624 |          11 |     136152 |                1 | 2025-04-08 13:37:38.63979+02
                              |       | ???      |        |     0 |           0 |             0 |          0 |           0 |          0 |                0 | 2025-04-08 13:37:38.63979+02
 search_path processing cache |       | AllocSet | {1,2}  |     2 |        8192 |             1 |       5656 |           8 |       2536 |                1 | 2025-04-08 13:37:38.63979+02
 RowDescriptionContext        |       | AllocSet | {1,3}  |     2 |        8192 |             1 |       6920 |           0 |       1272 |                1 | 2025-04-08 13:37:38.63979+02
 MessageContext               |       | AllocSet | {1,4}  |     2 |       16384 |             2 |       7880 |           2 |       8504 |                2 | 2025-04-08 13:37:38.63979+02
 Operator class cache         |       | AllocSet | {1,5}  |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:37:38.63979+02
 smgr relation table          |       | AllocSet | {1,6}  |     2 |       16384 |             2 |       4664 |           3 |      11720 |                1 | 2025-04-08 13:37:38.63979+02
 PgStat Shared Ref Hash       |       | AllocSet | {1,7}  |     2 |        9264 |             2 |        712 |           0 |       8552 |                1 | 2025-04-08 13:37:38.63979+02
 PgStat Shared Ref            |       | AllocSet | {1,8}  |     2 |        4096 |             3 |       1760 |           3 |       2336 |                1 | 2025-04-08 13:37:38.63979+02
 PgStat Pending               |       | AllocSet | {1,9}  |     2 |        8192 |             4 |       7832 |          28 |        360 |                1 | 2025-04-08 13:37:38.63979+02
 TopTransactionContext        |       | AllocSet | {1,10} |     2 |        8192 |             1 |       7952 |           0 |        240 |                1 | 2025-04-08 13:37:38.63979+02
 TransactionAbortContext      |       | AllocSet | {1,11} |     2 |       32768 |             1 |      32528 |           0 |        240 |                1 | 2025-04-08 13:37:38.63979+02
 Portal hash                  |       | AllocSet | {1,12} |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:37:38.63979+02
 TopPortalContext             |       | AllocSet | {1,13} |     2 |        8192 |             1 |       7952 |           1 |        240 |                1 | 2025-04-08 13:37:38.63979+02
 Relcache by OID              |       | AllocSet | {1,14} |     2 |       16384 |             2 |       3608 |           3 |      12776 |                1 | 2025-04-08 13:37:38.63979+02
 CacheMemoryContext           |       | AllocSet | {1,15} |     2 |      737984 |           182 |     183208 |         221 |     554776 |               88 | 2025-04-08 13:37:38.63979+02
 LOCALLOCK hash               |       | AllocSet | {1,16} |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:37:38.63979+02
 WAL record construction      |       | AllocSet | {1,17} |     2 |       50200 |             2 |       6400 |           0 |      43800 |                1 | 2025-04-08 13:37:38.63979+02
 PrivateRefCount              |       | AllocSet | {1,18} |     2 |        8192 |             1 |       2672 |           0 |       5520 |                1 | 2025-04-08 13:37:38.63979+02
 MdSmgr                       |       | AllocSet | {1,19} |     2 |        8192 |             1 |       7936 |           0 |        256 |                1 | 2025-04-08 13:37:38.63979+02
 GUCMemoryContext             |       | AllocSet | {1,20} |     2 |       57344 |             5 |      19960 |           7 |      37384 |                2 | 2025-04-08 13:37:38.63979+02
 Timezones                    |       | AllocSet | {1,21} |     2 |      104112 |             2 |       2672 |           0 |     101440 |                1 | 2025-04-08 13:37:38.63979+02

Turning off the summary gives you the full picture:

postgres=# select * from pg_get_process_memory_contexts(31291,false,2) order by level, name;
                 name                  |                     ident                      |   type   |    path    | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | num_agg_contexts |        stats_timestamp        
---------------------------------------+------------------------------------------------+----------+------------+-------+-------------+---------------+------------+-------------+------------+------------------+-------------------------------
 TopMemoryContext                      |                                                | AllocSet | {1}        |     1 |      141776 |             6 |       5624 |          11 |     136152 |                1 | 2025-04-08 13:38:02.508423+02
 CacheMemoryContext                    |                                                | AllocSet | {1,15}     |     2 |      524288 |             7 |     101280 |           1 |     423008 |                1 | 2025-04-08 13:38:02.508423+02
 ErrorContext                          |                                                | AllocSet | {1,22}     |     2 |        8192 |             1 |       7952 |           4 |        240 |                1 | 2025-04-08 13:38:02.508423+02
 GUCMemoryContext                      |                                                | AllocSet | {1,20}     |     2 |       24576 |             2 |       8264 |           1 |      16312 |                1 | 2025-04-08 13:38:02.508423+02
 LOCALLOCK hash                        |                                                | AllocSet | {1,16}     |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:38:02.508423+02
 MdSmgr                                |                                                | AllocSet | {1,19}     |     2 |        8192 |             1 |       7936 |           0 |        256 |                1 | 2025-04-08 13:38:02.508423+02
 MessageContext                        |                                                | AllocSet | {1,4}      |     2 |       16384 |             2 |       2664 |           4 |      13720 |                1 | 2025-04-08 13:38:02.508423+02
 Operator class cache                  |                                                | AllocSet | {1,5}      |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:38:02.508423+02
 PgStat Pending                        |                                                | AllocSet | {1,9}      |     2 |        8192 |             4 |       7832 |          28 |        360 |                1 | 2025-04-08 13:38:02.508423+02
 PgStat Shared Ref                     |                                                | AllocSet | {1,8}      |     2 |        4096 |             3 |       1760 |           3 |       2336 |                1 | 2025-04-08 13:38:02.508423+02
 PgStat Shared Ref Hash                |                                                | AllocSet | {1,7}      |     2 |        9264 |             2 |        712 |           0 |       8552 |                1 | 2025-04-08 13:38:02.508423+02
 Portal hash                           |                                                | AllocSet | {1,12}     |     2 |        8192 |             1 |        616 |           0 |       7576 |                1 | 2025-04-08 13:38:02.508423+02
 PrivateRefCount                       |                                                | AllocSet | {1,18}     |     2 |        8192 |             1 |       2672 |           0 |       5520 |                1 | 2025-04-08 13:38:02.508423+02
 Relcache by OID                       |                                                | AllocSet | {1,14}     |     2 |       16384 |             2 |       3608 |           3 |      12776 |                1 | 2025-04-08 13:38:02.508423+02
 RowDescriptionContext                 |                                                | AllocSet | {1,3}      |     2 |        8192 |             1 |       6920 |           0 |       1272 |                1 | 2025-04-08 13:38:02.508423+02
 search_path processing cache          |                                                | AllocSet | {1,2}      |     2 |        8192 |             1 |       5656 |           8 |       2536 |                1 | 2025-04-08 13:38:02.508423+02
 smgr relation table                   |                                                | AllocSet | {1,6}      |     2 |       16384 |             2 |       4664 |           3 |      11720 |                1 | 2025-04-08 13:38:02.508423+02
 Timezones                             |                                                | AllocSet | {1,21}     |     2 |      104112 |             2 |       2672 |           0 |     101440 |                1 | 2025-04-08 13:38:02.508423+02
 TopPortalContext                      |                                                | AllocSet | {1,13}     |     2 |        8192 |             1 |       7952 |           1 |        240 |                1 | 2025-04-08 13:38:02.508423+02
 TopTransactionContext                 |                                                | AllocSet | {1,10}     |     2 |        8192 |             1 |       7952 |           0 |        240 |                1 | 2025-04-08 13:38:02.508423+02
 TransactionAbortContext               |                                                | AllocSet | {1,11}     |     2 |       32768 |             1 |      32528 |           0 |        240 |                1 | 2025-04-08 13:38:02.508423+02
 WAL record construction               |                                                | AllocSet | {1,17}     |     2 |       50200 |             2 |       6400 |           0 |      43800 |                1 | 2025-04-08 13:38:02.508423+02
 GUC hash table                        |                                                | AllocSet | {1,20,111} |     3 |       32768 |             3 |      11696 |           6 |      21072 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_dict_oid_index                           | AllocSet | {1,15,46}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_event_trigger_oid_index                     | AllocSet | {1,15,47}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_conversion_default_index                    | AllocSet | {1,15,48}  |     3 |        2224 |             2 |        216 |           0 |       2008 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_operator_oprname_l_r_n_index                | AllocSet | {1,15,49}  |     3 |        2224 |             2 |        216 |           0 |       2008 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_trigger_tgrelid_tgname_index                | AllocSet | {1,15,50}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_extension_oid_index                         | AllocSet | {1,15,51}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_enum_typid_label_index                      | AllocSet | {1,15,52}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_config_oid_index                         | AllocSet | {1,15,53}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_user_mapping_oid_index                      | AllocSet | {1,15,54}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_opfamily_am_name_nsp_index                  | AllocSet | {1,15,55}  |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_foreign_table_relid_index                   | AllocSet | {1,15,56}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_type_oid_index                              | AllocSet | {1,15,57}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_aggregate_fnoid_index                       | AllocSet | {1,15,58}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_constraint_oid_index                        | AllocSet | {1,15,59}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_rewrite_rel_rulename_index                  | AllocSet | {1,15,60}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_parser_prsname_index                     | AllocSet | {1,15,61}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_config_cfgname_index                     | AllocSet | {1,15,62}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_parser_oid_index                         | AllocSet | {1,15,63}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_rel_prrelid_prpubid_index       | AllocSet | {1,15,64}  |     3 |        3072 |             2 |       1264 |           2 |       1808 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_operator_oid_index                          | AllocSet | {1,15,65}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_namespace_nspname_index                     | AllocSet | {1,15,66}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_template_oid_index                       | AllocSet | {1,15,67}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_amop_opr_fam_index                          | AllocSet | {1,15,68}  |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_default_acl_role_nsp_obj_index              | AllocSet | {1,15,69}  |     3 |        3072 |             2 |       1160 |           2 |       1912 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_collation_name_enc_nsp_index                | AllocSet | {1,15,70}  |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_rel_oid_index                   | AllocSet | {1,15,71}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_range_rngtypid_index                        | AllocSet | {1,15,72}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_dict_dictname_index                      | AllocSet | {1,15,73}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_type_typname_nsp_index                      | AllocSet | {1,15,74}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_opfamily_oid_index                          | AllocSet | {1,15,75}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_statistic_ext_oid_index                     | AllocSet | {1,15,76}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_statistic_ext_data_stxoid_inh_index         | AllocSet | {1,15,77}  |     3 |        3072 |             2 |       1264 |           2 |       1808 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_class_oid_index                             | AllocSet | {1,15,78}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_proc_proname_args_nsp_index                 | AllocSet | {1,15,79}  |     3 |        3072 |             2 |       1048 |           1 |       2024 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_partitioned_table_partrelid_index           | AllocSet | {1,15,80}  |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_range_rngmultitypid_index                   | AllocSet | {1,15,81}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_transform_type_lang_index                   | AllocSet | {1,15,82}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_attribute_relid_attnum_index                | AllocSet | {1,15,83}  |     3 |        3072 |             2 |       1080 |           3 |       1992 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_proc_oid_index                              | AllocSet | {1,15,84}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_language_oid_index                          | AllocSet | {1,15,85}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_namespace_oid_index                         | AllocSet | {1,15,86}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_amproc_fam_proc_index                       | AllocSet | {1,15,87}  |     3 |        3248 |             3 |        912 |           0 |       2336 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_foreign_server_name_index                   | AllocSet | {1,15,88}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_attribute_relid_attnam_index                | AllocSet | {1,15,89}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_namespace_pnnspid_pnpubid_index | AllocSet | {1,15,90}  |     3 |        3072 |             2 |       1264 |           2 |       1808 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_conversion_oid_index                        | AllocSet | {1,15,91}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_user_mapping_user_server_index              | AllocSet | {1,15,92}  |     3 |        3072 |             2 |       1264 |           2 |       1808 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_subscription_rel_srrelid_srsubid_index      | AllocSet | {1,15,93}  |     3 |        3072 |             2 |       1264 |           2 |       1808 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_sequence_seqrelid_index                     | AllocSet | {1,15,94}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_extension_name_index                        | AllocSet | {1,15,95}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_conversion_name_nsp_index                   | AllocSet | {1,15,96}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_authid_oid_index                            | AllocSet | {1,15,97}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_subscription_oid_index                      | AllocSet | {1,15,99}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_parameter_acl_oid_index                     | AllocSet | {1,15,100} |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_tablespace_oid_index                        | AllocSet | {1,15,101} |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_parameter_acl_parname_index                 | AllocSet | {1,15,102} |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_shseclabel_object_index                     | AllocSet | {1,15,103} |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_replication_origin_roname_index             | AllocSet | {1,15,104} |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_database_datname_index                      | AllocSet | {1,15,105} |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_subscription_subname_index                  | AllocSet | {1,15,106} |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_replication_origin_roiident_index           | AllocSet | {1,15,107} |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_auth_members_role_member_index              | AllocSet | {1,15,108} |     3 |        3072 |             2 |       1160 |           2 |       1912 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_database_oid_index                          | AllocSet | {1,15,109} |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_authid_rolname_index                        | AllocSet | {1,15,110} |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_auth_members_member_role_index              | AllocSet | {1,15,98}  |     3 |        3072 |             2 |       1160 |           2 |       1912 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_db_role_setting_databaseid_rol_index        | AllocSet | {1,15,24}  |     3 |        3072 |             2 |       1120 |           1 |       1952 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_opclass_am_name_nsp_index                   | AllocSet | {1,15,25}  |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_foreign_data_wrapper_name_index             | AllocSet | {1,15,26}  |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_enum_oid_index                              | AllocSet | {1,15,27}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_class_relname_nsp_index                     | AllocSet | {1,15,28}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_foreign_server_oid_index                    | AllocSet | {1,15,29}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_pubname_index                   | AllocSet | {1,15,30}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_statistic_relid_att_inh_index               | AllocSet | {1,15,31}  |     3 |        3072 |             2 |       1160 |           2 |       1912 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_cast_source_target_index                    | AllocSet | {1,15,32}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_language_name_index                         | AllocSet | {1,15,33}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_transform_oid_index                         | AllocSet | {1,15,34}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_collation_oid_index                         | AllocSet | {1,15,35}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_amop_fam_strat_index                        | AllocSet | {1,15,36}  |     3 |        2224 |             2 |        216 |           0 |       2008 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_index_indexrelid_index                      | AllocSet | {1,15,37}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_template_tmplname_index                  | AllocSet | {1,15,38}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_ts_config_map_index                         | AllocSet | {1,15,39}  |     3 |        3072 |             2 |       1192 |           2 |       1880 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_opclass_oid_index                           | AllocSet | {1,15,40}  |     3 |        2048 |             2 |        680 |           2 |       1368 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_foreign_data_wrapper_oid_index              | AllocSet | {1,15,41}  |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_namespace_oid_index             | AllocSet | {1,15,42}  |     3 |        2048 |             2 |        792 |           3 |       1256 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_event_trigger_evtname_index                 | AllocSet | {1,15,43}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_statistic_ext_name_index                    | AllocSet | {1,15,44}  |     3 |        3072 |             2 |       1296 |           3 |       1776 |                1 | 2025-04-08 13:38:02.508423+02
 index info                            | pg_publication_oid_index                       | AllocSet | {1,15,45}  |     3 |        2048 |             2 |        824 |           3 |       1224 |                1 | 2025-04-08 13:38:02.508423+02
 pg_get_remote_backend_memory_contexts |                                                | AllocSet | {1,4,23}   |     3 |       16384 |             2 |       6568 |           3 |       9816 |                1 | 2025-04-08 13:38:02.508423+02
(111 rows)
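Since the full listing is long, in practice you would usually sort it by size to spot the biggest consumers of the target backend (a sketch, reusing the PID from above; numbers will differ on your system):

postgres=# select name, ident, total_bytes, used_bytes
           from pg_get_process_memory_contexts(31291,false,2)
           order by total_bytes desc
           limit 5;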

As you can see in the full output, there are many entries named "index info" which are not directly visible in the summary view. The reason is the aggregation that happens in summary mode: all "index info" entries are aggregated under their parent, "CacheMemoryContext", and we can easily verify this:

postgres=# select count(*) from pg_get_process_memory_contexts(31291,false,2) where name = 'index info';
 count 
-------
    87
(1 row)
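The summary output above reported num_agg_contexts = 88 for "CacheMemoryContext", which we can also query directly (a sketch, reusing the PID from the first session):

postgres=# select name, num_agg_contexts
           from pg_get_process_memory_contexts(31291,true,2)
           where name = 'CacheMemoryContext';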

So the count of 87 is one short of those 88 aggregated contexts, because num_agg_contexts also counts the parent "CacheMemoryContext" itself. Excluding all the system/catalog indexes we get the following picture:

postgres=# select * from pg_get_process_memory_contexts(31291,false,2) where name = 'index info' and ident !~ 'pg_';
 name | ident | type | path | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | num_agg_contexts | stats_timestamp 
------+-------+------+------+-------+-------------+---------------+------------+-------------+------------+------------------+-----------------
(0 rows)

-- system/catalog indexes
postgres=# select count(*) from pg_get_process_memory_contexts(31291,false,2) where name = 'index info' and ident ~ 'pg_';
 count 
-------
    87
(1 row)

Creating a new table and an index on that table in the first session will change the picture to this:

-- first session
postgres=# create table t ( a int );
CREATE TABLE
postgres=# create index i on t(a);
CREATE INDEX
postgres=# 

-- second session
postgres=# select * from pg_get_process_memory_contexts(31291,false,2) where name = 'index info' and ident !~ 'pg_';
    name    | ident |   type   |   path    | level | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes | num_agg_contexts |        stats_timestamp        
------------+-------+----------+-----------+-------+-------------+---------------+------------+-------------+------------+------------------+-------------------------------
 index info | i     | AllocSet | {1,16,26} |     3 |        2048 |             2 |        776 |           3 |       1272 |                1 | 2025-04-08 13:44:55.496668+02
(1 row)

… and this will also increase the count we computed above:

postgres=# select count(*) from pg_get_process_memory_contexts(31291,false,2) where name = 'index info';
 count 
-------
    98
(1 row)

… but why did it go to 98 and not to 88? Because additional system indexes have also been loaded (I leave it as an exercise for you to find out which ones those are):

postgres=# select count(*) from pg_get_process_memory_contexts(31291,false,2) where name = 'index info' and ident ~ 'pg_';
 count 
-------
    97
(1 row)
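If you want to tackle that exercise, one approach is to snapshot the list of loaded index contexts before the DDL and diff it afterwards (a sketch; idx_before is just a scratch table introduced here for illustration):

-- in the second session, before creating the table and the index
postgres=# create temporary table idx_before as
             select ident from pg_get_process_memory_contexts(31291,false,2)
             where name = 'index info';

-- after the DDL in the first session: the newly loaded index contexts
postgres=# select ident from pg_get_process_memory_contexts(31291,false,2)
             where name = 'index info'
           except
           select ident from idx_before;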

You can go on and do additional tests for the other memory contexts to get an idea of how this works. Personally, I think this is a great new feature because you can now have a look at the memory contexts of problematic processes. Thanks to all involved, details here.

The article PostgreSQL 18: Add function to report backend memory contexts appeared first on dbi Blog.

PostgreSQL 18: Allow NOT NULL constraints to be added as NOT VALID

Tue, 2025-04-08 02:07

Before we take a look at what this new feature is about, let's see how PostgreSQL 17 (and earlier) handles "NOT NULL" constraints when they are created. As usual, we start with a simple table:

postgres=# select version();
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 17.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.2.1 20250110 (Red Hat 14.2.1-7), 64-bit
(1 row)
postgres=# create table t ( a int not null, b text );
CREATE TABLE
postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 

Trying to insert data into that table which violates the constraint of course will fail:

postgres=# insert into t select null,1 from generate_series(1,2);
ERROR:  null value in column "a" of relation "t" violates not-null constraint
DETAIL:  Failing row contains (null, 1).

Even though setting the column to “NOT NULL” again is valid syntax-wise, this will not disable the constraint:

postgres=# alter table t alter column a set not null;
ALTER TABLE
postgres=# insert into t select null,1 from generate_series(1,2);
ERROR:  null value in column "a" of relation "t" violates not-null constraint
DETAIL:  Failing row contains (null, 1).
postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 

The only option you have when you want to do this is to drop the constraint:

postgres=# alter table t alter column a drop not null;
ALTER TABLE
postgres=# insert into t select null,1 from generate_series(1,2);
INSERT 0 2
postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | text    |           |          | 

The use case for this is data loading: maybe you want to load data which you know violates the constraint, but you are fine with fixing that manually afterwards and then re-enabling the constraint like this:

postgres=# update t set a = 1;
UPDATE 2
postgres=# alter table t alter column a set not null;
ALTER TABLE
postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 

postgres=# insert into t select null,1 from generate_series(1,2);
ERROR:  null value in column "a" of relation "t" violates not-null constraint
DETAIL:  Failing row contains (null, 1).

This will change with PostgreSQL 18. From now on you have more options. The following still behaves as before:

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.1, 64-bit
(1 row)
postgres=# create table t ( a int, b text );
CREATE TABLE
postgres=# alter table t add constraint c1 not null a;
ALTER TABLE
postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 

postgres=# insert into t select null,1 from generate_series(1,2);
ERROR:  null value in column "a" of relation "t" violates not-null constraint
DETAIL:  Failing row contains (null, 1).

This, of course, leads to the same behavior as with PostgreSQL 17 above. But now you can do this:

postgres=# create table t ( a int, b text );
CREATE TABLE
postgres=# insert into t select null,1 from generate_series(1,2);
INSERT 0 2
postgres=# alter table t add constraint c1 not null a not valid;
ALTER TABLE

This gives us a “NOT NULL” constraint which is not validated against the existing rows when it is created. Doing the same in PostgreSQL 17 (and before) will scan the table and enforce the constraint:

postgres=# select version();
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 17.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.2.1 20250110 (Red Hat 14.2.1-7), 64-bit
(1 row)

postgres=# create table t ( a int, b text );
CREATE TABLE
postgres=# insert into t select null,1 from generate_series(1,2);
INSERT 0 2
postgres=# alter table t add constraint c1 not null a not valid;
ERROR:  syntax error at or near "not"
LINE 1: alter table t add constraint c1 not null a not valid;
                                        ^
postgres=# alter table t alter column a set not null;
ERROR:  column "a" of relation "t" contains null values

As you can see, the syntax is not supported in PostgreSQL 17, and adding a “NOT NULL” constraint will scan the table and enforce the constraint.

Back to the PostgreSQL 18 cluster. As we now have data which would violate the constraint:

postgres=# select * from t;
 a | b 
---+---
   | 1
   | 1
(2 rows)

postgres=# \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 

… we can fix that manually and then validate the constraint afterwards:

postgres=# update t set a = 1;
UPDATE 2
postgres=# alter table t validate constraint c1;
ALTER TABLE
postgres=# insert into t values (null, 'a');
ERROR:  null value in column "a" of relation "t" violates not-null constraint
DETAIL:  Failing row contains (null, a).
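
If you want to inspect the state of the constraint itself, pg_constraint can be queried. A small sketch; that not-null constraints show up in pg_constraint with contype 'n' is my assumption based on the recent catalog changes, not something shown above:

postgres=# select conname, contype, convalidated from pg_constraint where conrelid = 't'::regclass;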

Nice, thanks to all involved, details here.

L’article PostgreSQL 18: Allow NOT NULL constraints to be added as NOT VALID est apparu en premier sur dbi Blog.

Best Practices for Structuring Metadata in M-Files

Mon, 2025-04-07 09:30

In one of my previous posts, I talked about the importance of metadata: it is the skeleton of an efficient document management system, and particularly so in M-Files!

As I like to say, M-Files is like an empty shell that can be very easily customized to obtain the ideal solution for our business/customers.

Unlike traditional folder-based storage, M-Files leverages metadata to classify, search, and retrieve documents with ease.
Properly structuring metadata ensures better organization, improved searchability, and enhanced workflow automation.
In this blog post, we will explore best practices for structuring metadata in M-Files to maximize its potential.

The importance of Metadata in M-Files

Metadata in M-Files is used to describe documents, making them easier to find and categorize.
Instead of placing files in rigid folder structures, metadata allows documents to be dynamically organized based on properties such as document type, project, client, or status…
Metadata are then used to create views, which are simply predefined searches based on them.
Additionally, M-Files is able to create automatic relations between different objects.
For example, in a project “XXX” related to a customer “YYY”, all documents associated with this project are automatically related (and findable) through the customer.
Choosing a “Document Type” can also dictate the required approval workflows.

Another advantage: access can be restricted so that only the members working on the project can see the customer’s information and contacts…

Define Clear and Consistent Metadata Fields

When setting up metadata in M-Files, define clear fields that align with your business processes. Some essential metadata fields include:

  • Document Type (e.g., Invoice, Contract, Report)
  • Department (e.g., HR, Finance, Legal)
  • Project or Client Name
  • Status (e.g., Draft, Approved, Archived)

Ensure consistency by using standardized field names and avoiding duplicate or unnecessary fields.
When the possible values are known, like for Departments or Status, it is better to use value lists to ensure the accuracy of the data (no typos, no end-user creativity).

To make sure the company naming convention is respected, use automatic values on properties; this can be:

  • Automatic numbering
  • Customized Numbering
  • Concatenation of properties
  • Calculated value (script)
Utilize Metadata Card Configuration

M-Files provides customizable metadata cards, allowing users to input relevant data efficiently.
Often there are properties that are not relevant for the end user; we can use the metadata card configuration to hide them.
To improve readability, we can also create “sections” to logically group the properties.
And finally, the metadata card configuration can be used to set default values and provide tips (property description and/or tooltip).

Metadata card display
Leverage Automatic Metadata Population

Reduce manual entry and improve accuracy by setting up automatic metadata population.
M-Files, with the help of its intelligence service (previous post here), can suggest metadata from file properties, templates, or integrated systems, minimizing human error and saving time.

Review and Maintain Metadata Structure Regularly

M-Files is a living system and must evolve with the business needs.
It is important to periodically review metadata structures to ensure they remain relevant.
Refine metadata rules, and continuously train employees on best practices to keep your M-Files environment optimized.

Check your config regularly
Final Thoughts

A well-structured metadata system in M-Files enhances efficiency, improves document retrieval, and supports seamless automation. By implementing these best practices, organizations can create a smarter document management strategy that adapts to their needs.

Are you making the most of metadata in M-Files? Good news: it’s never too late with M-Files, so start optimizing your structure today!

If you feel a bit lost, we can help you!

L’article Best Practices for Structuring Metadata in M-Files est apparu en premier sur dbi Blog.

Working with Btrfs & Snapper

Tue, 2025-04-01 03:32

In this blog post I will try to give a purely technical cheat sheet for using Btrfs on any distribution. Additionally, I will explain how to use one of the most sophisticated backup/restore tools, called “snapper”.

What is Btrfs and how to set it up if not already installed?

Btrfs (aka Butter FS or B-Tree FS) is, like XFS and ext4, a filesystem which offers any Linux user many features to maintain and manage their storage. Usually Btrfs is used stand-alone, but it works with LVM2 too, without any additional configuration.

Key Features of Btrfs:

  • Copy-on-Write (CoW): Btrfs uses CoW, meaning it creates new copies of data instead of overwriting existing files.
  • Snapshots: You can create instant, space-efficient snapshots of your filesystem or specific directories, like with Snapper on SUSE. These are perfect for backups, rollbacks, or tracking changes (e.g., before/after updates).
  • Self-Healing: Btrfs supports data integrity with checksums and can detect and repair errors, especially with RAID configurations.
  • Flexible Storage: It handles multiple devices, RAID (0, 1, 5, 6, 10), and dynamic resizing, making it adaptable for growing storage needs.
  • Compression: Btrfs can compress files on the fly (e.g., using Zstandard or LZO), saving space without sacrificing performance (see the mount example after this list).
  • Subvolumes: Btrfs lets you create logical partitions (subvolumes) within the same filesystem, enabling fine-grained control. It is like having separate root, home, or snapshot subvolumes.
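
As mentioned in the compression bullet above, transparent compression is just a mount option away. A minimal sketch, assuming /dev/vdb is already formatted with Btrfs:

# Mount with transparent zstd compression (only data written after mounting gets compressed)
mount -o compress=zstd /dev/vdb /mnt/btrfs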

Btrfs is used by default on any SUSE Linux Enterprise Server (and openSUSE) and can be used on RHEL, OL and other distributions. To use Btrfs on any RPM-based distribution, just install the package “btrfs-progs”. With Debian and Ubuntu this is a bit more tricky, which is why we will keep this blog about RPM-based distributions only.

Create new filesystem and increase it with Btrfs
# Wipe any old filesystem on /dev/vdb (careful, data's toast!)
wipefs -a /dev/vdb

# Create a Btrfs filesystem on /dev/vdb
mkfs.btrfs /dev/vdb

# Make a mount point
mkdir /mnt/btrfs

# Mount it—basic setup, no fancy options yet
mount /dev/vdb /mnt/btrfs

# Check it’s there and Btrfs
df -h /mnt/btrfs

# Add to /etc/fstab for permanence (use your device UUID from blkid)
/dev/vdb  /mnt/btrfs  btrfs  defaults  0  2

# Test fstab
mount -a

# List all Btrfs Filesystems
btrfs filesystem show

# Add additional storage to existing Btrfs filesystem (in our case /)
btrfs device add /dev/vdd /

# In some cases it is smart to balance the storage between all the devices
btrfs balance start /

# If the space allows it you can remove devices from a Btrfs filesystem
btrfs device delete /dev/vdd /
Restore Btrfs filesystem to a specific point
# List all snapshots for the root config
snapper -c root list

# Pick a snapshot and check its diff (example with 5 & 6)
snapper -c root diff 5..6

# Roll back to snapshot 5 (--print-number prints the numbers of the snapshots created)
snapper -c root rollback 5 --print-number

# Reboot to activate the rolled-back state
reboot

# Verify after reboot—root’s now at snapshot 5
snapper -c root list
Btrfs Subvolume explained and used
# Mount the Btrfs root filesystem (not a subvolume yet)
mount /dev/vdb /mnt/butter

# Create a subvolume called ‘data’ (only possible inside an existing Btrfs volume)
btrfs subvolume create /mnt/butter/data

# List subvolumes
btrfs subvolume list /mnt/butter

# Make a mount point for the subvolume
mkdir /mnt/data

# Mount the subvolume explicitly
mount -o subvol=data /dev/vdb /mnt/data

# Check it’s mounted as a subvolume
df -h /mnt/data

# Create a new subvolume by creating a snapshot inside the btrfs volume
btrfs subvolume snapshot /mnt/data /mnt/butter/data-snap1

# Delete the snapshot which is a subvolume
btrfs subvolume delete /mnt/butter/data-snap1
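
If the ‘data’ subvolume should be mounted automatically at boot, an fstab entry like the following can be used (a sketch; replace the UUID with the one reported by blkid for /dev/vdb):

# /etc/fstab entry for the 'data' subvolume (fs_passno 0: btrfs is not checked by fsck at boot)
UUID=<uuid-of-vdb>  /mnt/data  btrfs  subvol=data  0  0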
Configuring Snapper for automatic snapshots
# Install Snapper if it’s not there
zypper install snapper

# Create a snapper config of filesystem
snapper -c <ConfigName> create-config <btrfs-mountpoint>

# Enable timeline snapshots (if not enabled by default)
echo "TIMELINE_CREATE=\"yes\"" >> /etc/snapper/configs/root

# Set snapshot limits (e.g., keep 10 hourly)
sed -i 's/TIMELINE_LIMIT_HOURLY=.*/TIMELINE_LIMIT_HOURLY="10"/' /etc/snapper/configs/root

# Start the Snapper timer (if not enabled by default)
systemctl enable snapper-timeline.timer
systemctl start snapper-timeline.timer

# Trigger a manual snapshot to test
snapper -c <ConfigName> create -d "Manual test snapshot"

# List snapshots to confirm
snapper -c <ConfigName> list

Here is a small overview of the most important settings to use within a snapper config file:

  • SPACE_LIMIT="0.5"
    • Sets the maximum fraction of the filesystem’s space that snapshots can occupy. 0.5 = 50%
  • FREE_LIMIT="0.2"
    • Ensures a minimum fraction of the filesystem stays free. 0.2 = 20%
  • ALLOW_USERS="admin dbi"
    • Lists the users allowed to manage this Snapper config.
  • ALLOW_GROUPS="admins"
    • A list of groups that are allowed to manage this config.
  • SYNC_ACL="no"
    • Syncs permissions from ALLOW_USERS and ALLOW_GROUPS to the .snapshots directory. If yes, Snapper updates the access control lists on /.snapshots to match ALLOW_USERS/ALLOW_GROUPS. With no, it skips this, and you manage permissions manually.
  • NUMBER_CLEANUP="yes"
    • When yes, Snapper deletes old numbered snapshots (manual and/or automated ones) when they exceed NUMBER_LIMIT or age past NUMBER_MIN_AGE.
  • NUMBER_MIN_AGE="1800"
    • Minimum age (in seconds) before a numbered snapshot can be deleted.
  • NUMBER_LIMIT="50"
    • Maximum number of numbered snapshots to keep.
  • NUMBER_LIMIT_IMPORTANT="10"
    • Maximum number of numbered snapshots marked as “important” to keep.
  • TIMELINE_CREATE="yes"
    • Enables automatic timeline snapshots.
  • TIMELINE_CLEANUP="yes"
    • Enables cleanup of timeline snapshots based on limits.
  • TIMELINE_LIMIT_*="10"
    • TIMELINE_LIMIT_HOURLY="10"
    • TIMELINE_LIMIT_DAILY="10"
    • TIMELINE_LIMIT_WEEKLY="0" (disabled)
    • TIMELINE_LIMIT_MONTHLY="10"
    • TIMELINE_LIMIT_YEARLY="10"
      • Controls how many snapshots Snapper retains over time. Keeps 10 hourly, 10 daily, 10 monthly, and 10 yearly snapshots, but skips weekly ones (limit 0 = disabled).

For further information about the settings, check out the SUSE documentation.

Btrfs RAID and Multi-Device management
# Format two disks (/dev/vdb, /dev/vdc) as Btrfs RAID1
mkfs.btrfs -d raid1 -m raid1 /dev/vdb /dev/vdc

# Mount it
mount /dev/vdb /mnt/btrfs-raid

# Check RAID status
btrfs filesystem show /mnt/btrfs-raid

# Add a third disk (/dev/vdd) to the array
btrfs device add /dev/vdd /mnt/btrfs-raid

# Rebalance to RAID1 across all three (dconvert sets the data RAID profile, mconvert the metadata RAID profile)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs-raid

# Check device stats for errors
btrfs device stats /mnt/btrfs-raid

# Remove a disk if needed
btrfs device delete /dev/vdc /mnt/btrfs-raid
Troubleshooting & using Btrfs and Snapper
# Check disk usage
btrfs filesystem df /mnt/btrfs

# Full filesystem and need more storage? Add an additional empty storage device to the Btrfs volume:
btrfs device add /dev/vde /mnt/btrfs

# If the storage device has grown (via LVM or virtually) one can resize the filesystem to the maximum:
btrfs filesystem resize max /mnt/btrfs

# Balance to free up space if it’s tight (dusage sets the threshold: only block groups with 50% usage or less get rebalanced)
btrfs balance start -dusage=50 /mnt/btrfs

# Too many snapshots? List them
snapper -c root list | wc -l

# Delete old snapshots (e.g., #10)
snapper -c root delete 10

# Check filesystem for corruption
btrfs check /dev/vdb

# Repair if it’s borked (careful—backup first!)
btrfs check --repair /dev/vdb

# Rollback stuck? Force it
snapper -c root rollback 5 --force
Hint

Something that needs to be pointed out: the snapper list is sorted from the oldest snapshot (starting at number 1) to the newest. BUT: at the top there is always the current state of the filesystem, with the number 0.
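
To see what exactly differs between the current state (number 0) and an older snapshot, snapper’s status command can be used. A quick sketch, reusing the snapshot numbers from the examples above:

# Show the files that differ between snapshot 5 and the current state (0)
snapper -c root status 5..0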

L’article Working with Btrfs & Snapper est apparu en premier sur dbi Blog.

Installing and configuring Veeam RMAN plug-in on an ODA

Mon, 2025-03-31 15:58

I recently had to install and configure the Veeam RMAN plug-in on an ODA, and would like to provide the steps in this article, as it might be helpful for many other people.

Create Veeam Linux OS user

We will create a Linux OS user on the ODA that will be used to authenticate on the Veeam Backup server. On the server, this user will need to have the Veeam Backup Administrator role, or the Veeam Backup Operator and Veeam Restore Operator roles.

Check that the intended UID and GID are not already in use:

[root@ODA02 ~]# grep 497 /etc/group
[root@ODA02 ~]# grep 54323 /etc/passwd

Create the group:

[root@ODA02 ~]# groupadd -g 497 veeam

Create the user:

[root@ODA02 ~]# useradd -g 497 -u 54323 -d /home/veeam -s /bin/bash oda_veeam

[root@ODA02 ~]# passwd oda_veeam
Changing password for user oda_veeam.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Installing Veeam RMAN plug-in

As we know, we need to be careful when installing new packages on an ODA, and we might need to remove them during patching in case of issues. We could easily have installed the plug-in here with an RPM, knowing no dependencies are needed, but I decided to be even less intrusive by installing the plug-in from the tar file.

I installed the agent with the root user.

The downloaded tar file:

[root@ODA02 ~]# ls -ltrh Veeam*.tar*
-rw-r--r-- 1 root root 62M Oct 31 15:08 VeeamPluginforOracleRMAN.tar.gz

I created the directory for the installation:

[root@ODA02 ~]# mkdir /opt/veeam

And uncompressed it in the newly created directory:

[root@ODA02 ~]# tar -xzvf VeeamPluginforOracleRMAN.tar.gz -C /opt/veeam
VeeamPluginforOracleRMAN/
VeeamPluginforOracleRMAN/oracleproxy
VeeamPluginforOracleRMAN/RMANPluginManager
VeeamPluginforOracleRMAN/OracleRMANConfigTool
VeeamPluginforOracleRMAN/3rdPartyNotices.txt
VeeamPluginforOracleRMAN/libOracleRMANPlugin.so
VeeamPluginforOracleRMAN/veeamagent

I checked the files:

[root@ODA02 ~]# ls -ltrh /opt/veeam/
total 4.0K
drwxr-xr-x 2 grid 1000 4.0K Aug 24 04:10 VeeamPluginforOracleRMAN

[root@ODA02 ~]# cd /opt/veeam/VeeamPluginforOracleRMAN/

[root@ODA02 VeeamPluginforOracleRMAN]# ls -ltrh
total 167M
-rwxr-xr-x 1 grid 1000  81M Aug 24 04:10 veeamagent
-rwxr-xr-x 1 grid 1000  37M Aug 24 04:10 RMANPluginManager
-rwxr-xr-x 1 grid 1000  35M Aug 24 04:10 OracleRMANConfigTool
-rwxr-xr-x 1 grid 1000 6.7M Aug 24 04:10 oracleproxy
-rwxr-xr-x 1 grid 1000 7.0M Aug 24 04:10 libOracleRMANPlugin.so
-r--r--r-- 1 grid 1000  65K Aug 24 04:10 3rdPartyNotices.txt

And changed the ownership to root:

[root@ODA02 veeam]# pwd
/opt/veeam

[root@ODA02 veeam]# ls -l
total 4
drwxr-xr-x 2 grid 1000 4096 Aug 24 04:10 VeeamPluginforOracleRMAN

[root@ODA02 veeam]# chown -R root: VeeamPluginforOracleRMAN/

[root@ODA02 veeam]# ls -l
total 4
drwxr-xr-x 2 root root 4096 Aug 24 04:10 VeeamPluginforOracleRMAN

[root@ODA02 veeam]# ls -l VeeamPluginforOracleRMAN/
total 170052
-r--r--r-- 1 root root    65542 Aug 24 04:10 3rdPartyNotices.txt
-rwxr-xr-x 1 root root  7251448 Aug 24 04:10 libOracleRMANPlugin.so
-rwxr-xr-x 1 root root  6968560 Aug 24 04:10 oracleproxy
-rwxr-xr-x 1 root root 36475936 Aug 24 04:10 OracleRMANConfigTool
-rwxr-xr-x 1 root root 38515744 Aug 24 04:10 RMANPluginManager
-rwxr-xr-x 1 root root 84837408 Aug 24 04:10 veeamagent

I gave write permission to others, so the oracle Linux user can write in the directory:

[root@ODA02 veeam]# pwd
/opt/veeam

[root@ODA02 veeam]# chmod o+w VeeamPluginforOracleRMAN/

[root@ODA02 veeam]# ls -l
total 4
drwxr-xrwx 2 root root 4096 Aug 24 04:10 VeeamPluginforOracleRMAN

Configure Veeam RMAN plug-in

With the oracle Linux user, we now need to configure the plug-in. Information like the backup repository names will come from the Veeam Backup server side. We are running SE2 databases, so there will be no parallelism, and the configuration script will not ask for any number of channels.

The following information can be requested:

  • DNS name or IP address of the Veeam Backup & Replication server
  • Port which will be used to communicate with the Veeam Backup & Replication server; the default port is 10006 (see the reachability check after this list)
  • OS user credentials to authenticate against the Veeam Backup & Replication server
  • The backup repository, to be selected from a list of available ones. For the duplexing functionality you can select up to 4 repositories.
  • The number of channels for backup parallelism. In our case, as we use SE2 databases, this option will not be requested.
  • Compression or no compression
  • The authentication method, either OS or database. We will use OS authentication with the oracle Linux user, as it is part of the DBA group (mandatory).
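
Before running the wizard, it can be worth verifying that the ODA can actually reach the Veeam Backup & Replication server on the default port. A minimal sketch using bash’s built-in /dev/tcp (the masked IP is the one used later in this post):

# Check TCP connectivity to the Veeam Backup & Replication server on port 10006
timeout 3 bash -c "</dev/tcp/X.X.X.41/10006" && echo "port 10006 reachable"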

Running the configuration command:

[oracle@ODA02 VeeamPluginforOracleRMAN]$ ./OracleRMANConfigTool --wizard
Enter backup server name or IP address: X.X.X.41
Enter backup server port number [10006]:
Enter username: oda_veeam
Enter password for oda_veeam:
Available backup repositories:
1. 
Select Veeam repository from the list by typing the repository number: 1
RMAN parallelism is not supported in Oracle Standard Edition.
Do you want to use Veeam compression? (Y/n): Y
Select the Oracle environment authentication method:
1. Operating system authentication
2. Database authentication
Enter [1]:

The current user is restricted and cannot read required OS information. Please re-run the following command with root rights: OracleRMANConfigTool --set-credentials

RMAN settings:
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE CHANNEL DEVICE TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so'
FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1;
CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F_RMAN_AUTOBACKUP.vab';

RMAN settings will be applied automatically to the following databases:
ORACLE_SID=INST1 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
ORACLE_SID=INST2 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
ORACLE_SID=INST3 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2
ORACLE_SID=INST4 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
ORACLE_SID=INST5 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2
ORACLE_SID=INST6 ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2

Channel definition for RMAN scripts:
ALLOCATE CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE
PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so'
FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';

Save configuration?
1. Apply configuration to the Oracle environment
2. Export configuration into a file for manual setup
3. Cancel without saving
Enter: 1

*** Database instance INST1 is configured ***

*** Database instance INST2 is configured ***

*** Database instance INST3 is configured ***

*** Database instance INST4 is configured ***

*** Database instance INST5 is configured ***

*** Database instance INST6 is configured ***

As root, we now need to run the following command to specify the credentials to connect to the Veeam Backup server, as the current oracle Linux user is restricted.

[root@ODA02 VeeamPluginforOracleRMAN]# pwd
/opt/veeam/VeeamPluginforOracleRMAN

[root@ODA02 VeeamPluginforOracleRMAN]# ./OracleRMANConfigTool --set-credentials 'oda_veeam' '***********'
[root@ODA02 VeeamPluginforOracleRMAN]#

RMAN configuration

We are using our own Perl solution, named DMK (https://www.dbi-services.com/fr/produits/dmk-management-kit/), to perform the backups.

The following ALLOCATE CHANNEL command had to be hard-coded in our rcv scripts, as a workaround for a known bug when passing the % character through a variable:

ALLOCATE CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';

rcv example for inc0 backup:

oracle@ODA02:/u01/app/oracle/local/dmk_dbbackup/rcv/oracle12/ [rdbms1900] cat bck_inc0_no_arc_del_tape.rcv
#
# RMAN template: Online full database backup
#
# $Author: marc.wagner@dbi-services.com $

CONFIGURE ARCHIVELOG DELETION POLICY TO ;

CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '_%F';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '_snapcf_.f';
CONFIGURE BACKUP OPTIMIZATION ON;

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF  DAYS;

show all;
run
{
   #
   ALLOCATE CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';

   backup  incremental level 0 section size  filesperset  database TAG 'INC0_';

   backup  filesperset  archivelog all TAG 'ARCH_';

   backup  current controlfile TAG 'CTRL_';

   sql "create pfile=''init_.ora'' from spfile";

   RELEASE CHANNEL VeeamAgentChannel1;
}

Test

I could successfully test a backup to the Veeam tape and confirm with the customer that the file was properly written on the server. We could also confirm the same with RMAN.

oracle@ODA02:~/ [DB1 (CDB$ROOT)] /u01/app/oracle/local/dmk_ha/bin/check_primary.ksh DB1 "/u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s DB1 -t bck_inc0_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/DB1_SITE1/etc/rman.cfg"
2025-02-04_11:12:48::check_primary.ksh::SetOraEnv       ::INFO ==> Environment: DB1 (/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2)
2025-02-04_11:12:48::check_primary.ksh::MainProgram     ::INFO ==> Getting V$DATABASE.DB_ROLE for DB1
2025-02-04_11:12:48::check_primary.ksh::MainProgram     ::INFO ==> DB1 Database Role is: PRIMARY
2025-02-04_11:12:48::check_primary.ksh::MainProgram     ::INFO ==> Program going ahead and starting requested command
2025-02-04_11:12:48::check_primary.ksh::MainProgram     ::INFO ==> Script : /u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s DB1 -t bck_inc0_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/DB1_SITE1/etc/rman.cfg

[OK]::customer::RMAN::dmk_dbbackup::DB1::bck_inc0_no_arc_del_tape.rcv::RMAN_retCode::0
Logfile is : /u01/app/odaorabase/oracle/admin/DB1_SITE1/log/DB1_bck_inc0_no_arc_del_tape_20250204_111249.log

2025-02-04_11:14:17::check_primary.ksh::CleanExit       ::INFO ==> Program exited with ExitCode : 0

To wrap up

Using the Veeam RMAN plug-in on an ODA works fine. I hope this article will help you configure it. In a next article I will test the backups by restoring them into a new instance.

L’article Installing and configuring Veeam RMAN plug-in on an ODA est apparu en premier sur dbi Blog.

PostgreSQL 18: “swap” mode for pg_upgrade

Fri, 2025-03-28 09:15

When you want to upgrade from one major version of PostgreSQL to another you probably want to go with pg_upgrade (or logical replication). There are already several modes of operation for this:

  • --copy: Copy the data files from the old to the new cluster
  • --clone: Clone, instead of copying (when the file system supports it)
  • --copy-file-range: Use the copy_file_range system call for efficient copying, if the file system supports it
  • --link: Use hard links instead of copying files

What is best for you depends on the requirements. We usually go with “--link” as this is pretty fast, but you can only do that if the old and the new cluster are on the same file system. The downside is that you can no longer use the old cluster once the new cluster has been started.
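
Whether “--link” is an option at all is easy to verify, as both data directories must live on the same file system. A trivial sketch, using the directories created later in this post:

# Both data directories must report the same underlying filesystem for --link to work
df /var/tmp/dummy/17.2_1 /var/tmp/dummy/18link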

With PostgreSQL 18 there will probably be a new option called “--swap”. This mode, instead of linking or copying the files, moves the files from the old to the new cluster and then replaces the catalog files with the ones from the new cluster. The reason for this additional mode (see the link to the commit at the end of this post) is that it might outperform even “--link” mode (and the others) when a cluster contains many relations.

Let’s see if we can prove this by creating two new PostgreSQL 17 clusters with many relations:

postgres@pgbox:/home/postgres/ [172] initdb --version
initdb (PostgreSQL) 17.2 
postgres@pgbox:/home/postgres/ [172] initdb -D /var/tmp/dummy/17.2_1 --data-checksums
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
...

Success. You can now start the database server using:

  pg_ctl -D /var/tmp/dummy/17.2_1 -l logfile start

postgres@pgbox:/home/postgres/ [172] echo "port=8888" >> /var/tmp/dummy/17.2_1/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [172] pg_ctl --pgdata=/var/tmp/dummy/17.2_1 start -l /dev/null
waiting for server to start.... done
server started
postgres@pgbox:/home/postgres/ [172] psql -p 8888 -l
                                                     List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU Rules |   Access privileges   
-----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------
 postgres  | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           | 
 template0 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           | =c/postgres          +
           |          |          |                 |             |             |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           | =c/postgres          +
           |          |          |                 |             |             |        |           | postgres=CTc/postgres
(3 rows)

Here is a little script that creates some tables, indexes and a bit of data:

#!/bin/bash

for i in {1..10000}; do
    psql -p 8888 -c "create table t${i} ( a int, b text )"
    psql -p 8888 -c "insert into t${i} select i, i::text from generate_series(1,1000) i;"
    psql -p 8888 -c "create index i${i} on t${i}(a);"
done

If we run that against the cluster we’ll have 10’000 tables (each containing 1000 rows) and 10’000 indexes. This should qualify as “many relations” for a quick test.
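
To double-check how many relations were actually created, a quick count against pg_class will do (my own sketch, counting the tables and indexes in the public schema; it should return 20’000):

psql -p 8888 -c "select count(*) from pg_class where relnamespace = 'public'::regnamespace;"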

We create the second cluster by copying the first one:

postgres@pgbox:/home/postgres/ [172] mkdir /var/tmp/dummy/17.2_2/
postgres@pgbox:/home/postgres/ [172] pg_basebackup --port=8888 --pgdata=/var/tmp/dummy/17.2_2/ --checkpoint=fast
postgres@pgbox:/home/postgres/ [172] sed -i 's/8888/8889/g' /var/tmp/dummy/17.2_2/postgresql.auto.conf

Now, let’s create the two PostgreSQL 18 clusters we will be upgrading to. One of them we will upgrade with “--link” mode, the other with the new “--swap” mode (we’ll also stop the old cluster):

postgres@pgbox:/home/postgres/ [pgdev] initdb --version
initdb (PostgreSQL) 18devel
postgres@pgbox:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/dummy/18link
postgres@pgbox:/home/postgres/ [pgdev] initdb --pgdata=/var/tmp/dummy/18swap
postgres@pgbox:/home/postgres/ [pgdev] echo "port=9000" >> /var/tmp/dummy/18link/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [pgdev] echo "port=9001" >> /var/tmp/dummy/18swap/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy/17.2_1/ stop

A quick check if all seems to be fine for the first upgrade:

postgres@pgbox:/home/postgres/ [pgdev] pg_upgrade --version
pg_upgrade (PostgreSQL) 18devel
postgres@pgbox:/home/postgres/ [pgdev] export PGDATAOLD=/var/tmp/dummy/17.2_1/
postgres@pgbox:/home/postgres/ [pgdev] export PGDATANEW=/var/tmp/dummy/18link/
postgres@pgbox:/home/postgres/ [pgdev] export PGBINOLD=/u01/app/postgres/product/17/db_2/bin
postgres@pgbox:/home/postgres/ [pgdev] export PGBINNEW=/u01/app/postgres/product/DEV/db_0/bin/
postgres@pgbox:/home/postgres/ [pgdev] pg_upgrade --check
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database connection settings                         ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for valid logical replication slots                  ok
Checking for subscription state                               ok
Checking data type usage                                      ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

*Clusters are compatible*

Time for a test using the “--link” mode:

postgres@pgbox:/home/postgres/ [pgdev] time pg_upgrade --link
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database connection settings                         ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for valid logical replication slots                  ok
Checking for subscription state                               ok
Checking data type usage                                      ok
Creating dump of global objects                               ok
Creating dump of database schemas                             
                                                              ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Setting locale and encoding for new cluster                   ok
Analyzing all rows in the new cluster                         ok
Freezing all rows in the new cluster                          ok
Deleting files from new pg_xact                               ok
Copying old pg_xact to new server                             ok
Setting oldest XID for new cluster                            ok
Setting next transaction ID and epoch for new cluster         ok
Deleting files from new pg_multixact/offsets                  ok
Copying old pg_multixact/offsets to new server                ok
Deleting files from new pg_multixact/members                  ok
Copying old pg_multixact/members to new server                ok
Setting next multixact ID and offset for new cluster          ok
Resetting WAL archives                                        ok
Setting frozenxid and minmxid counters in new cluster         ok
Restoring global objects in the new cluster                   ok
Restoring database schemas in the new cluster                 
                                                              ok
Adding ".old" suffix to old global/pg_control                 ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /var/tmp/dummy/17.2_1/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.
Linking user relation files                                   
                                                              ok
Setting next OID for new cluster                              ok
Sync data directory to disk                                   ok
Creating script to delete old cluster                         ok
Checking for extension updates                                ok

Upgrade Complete
----------------
Some optimizer statistics may not have been transferred by pg_upgrade.
Once you start the new server, consider running:
    /u01/app/postgres/product/DEV/db_0/bin/vacuumdb --all --analyze-in-stages --missing-stats-only
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

real    0m13.776s
user    0m0.654s
sys     0m1.536s

Let’s do the same test with the new “swap” mode:

postgres@pgbox:/home/postgres/ [pgdev] export PGDATAOLD=/var/tmp/dummy/17.2_2/
postgres@pgbox:/home/postgres/ [pgdev] export PGDATANEW=/var/tmp/dummy/18swap/
postgres@pgbox:/home/postgres/ [pgdev] export PGBINOLD=/u01/app/postgres/product/17/db_2/bin
postgres@pgbox:/home/postgres/ [pgdev] export PGBINNEW=/u01/app/postgres/product/DEV/db_0/bin/
postgres@pgbox:/home/postgres/ [pgdev] pg_upgrade --check
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database connection settings                         ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for valid logical replication slots                  ok
Checking for subscription state                               ok
Checking data type usage                                      ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

*Clusters are compatible*

postgres@pgbox:/home/postgres/ [pgdev] time pg_upgrade --swap
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database connection settings                         ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking for valid logical replication slots                  ok
Checking for subscription state                               ok
Checking data type usage                                      ok
Creating dump of global objects                               ok
Creating dump of database schemas                             
                                                              ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Setting locale and encoding for new cluster                   ok
Analyzing all rows in the new cluster                         ok
Freezing all rows in the new cluster                          ok
Deleting files from new pg_xact                               ok
Copying old pg_xact to new server                             ok
Setting oldest XID for new cluster                            ok
Setting next transaction ID and epoch for new cluster         ok
Deleting files from new pg_multixact/offsets                  ok
Copying old pg_multixact/offsets to new server                ok
Deleting files from new pg_multixact/members                  ok
Copying old pg_multixact/members to new server                ok
Setting next multixact ID and offset for new cluster          ok
Resetting WAL archives                                        ok
Setting frozenxid and minmxid counters in new cluster         ok
Restoring global objects in the new cluster                   ok
Restoring database schemas in the new cluster                 
                                                              ok
Adding ".old" suffix to old global/pg_control                 ok

Because "swap" mode was used, the old cluster can no longer be
safely started.
Swapping data directories                                     
                                                              ok
Setting next OID for new cluster                              ok
Sync data directory to disk                                   ok
Creating script to delete old cluster                         ok
Checking for extension updates                                ok

Upgrade Complete
----------------
Some optimizer statistics may not have been transferred by pg_upgrade.
Once you start the new server, consider running:
    /u01/app/postgres/product/DEV/db_0/bin/vacuumdb --all --analyze-in-stages --missing-stats-only
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

real    0m11.426s
user    0m0.600s
sys     0m0.659s

This was around 2 seconds faster; not much, but at least faster. Of course, this was a very simple test case and it needs to be tested further. Please also note the warning in the output:

Because "swap" mode was used, the old cluster can no longer be
safely started.
Swapping data directories 

This is a consequence of using this mode. Thanks to all involved, details here.

L’article PostgreSQL 18: “swap” mode for pg_upgrade est apparu en premier sur dbi Blog.

PostgreSQL 18: Add “--missing-stats-only” to vacuumdb

Thu, 2025-03-20 04:31

Losing all the object statistics after a major version upgrade of PostgreSQL with pg_upgrade is one of the real pain points in PostgreSQL. Collecting/generating the statistics can take much longer than the actual upgrade, which is quite painful. A first step to resolve this was already committed for PostgreSQL 18 and I’ve written about this here. Yesterday, another bit was committed in this area, and this time it is vacuumdb which got a new switch.

Before we can take a look at this we need an object without any statistics and an object which already has statistics:

postgres=# create table t ( a int, b text );
CREATE TABLE
postgres=# create table tt ( a int, b text );
CREATE TABLE
postgres=# insert into t select i, 'aaa' from generate_series(1,10) i;
INSERT 0 10
postgres=# insert into tt select i, 'aaa' from generate_series(1,1000000) i;
INSERT 0 1000000

The first insert will not trigger statistics collection, while the second insert will trigger it:

postgres=# select relname, last_autoanalyze from pg_stat_all_tables where relname in ('t','tt');
 relname |       last_autoanalyze        
---------+-------------------------------
 tt      | 2025-03-20 10:18:03.745504+01
 t       | 
(2 rows)

What the new switch for vacuumdb provides is the option to only collect statistics on objects which do not have any:

postgres@pgbox:/home/postgres/ [pgdev] vacuumdb --help | grep missing
      --missing-stats-only        only analyze relations with missing statistics

Using that, we should see fresh statistics on the first table, but not on the second one:

postgres@pgbox:/home/postgres/ [pgdev] vacuumdb --analyze-only --missing-stats-only postgres
vacuumdb: vacuuming database "postgres"
postgres@pgbox:/home/postgres/ [pgdev] psql -c "select relname, last_analyze from pg_stat_all_tables where relname in ('t','tt');"
 relname |         last_analyze         
---------+------------------------------
 tt      | 
 t       | 2025-03-20 10:24:36.88391+01
(2 rows)
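
This is exactly what pg_upgrade suggests in its output after an upgrade: combined with “--analyze-in-stages” you can regenerate only the missing statistics across all databases:

# Regenerate only the missing statistics across all databases, in stages
vacuumdb --all --analyze-in-stages --missing-stats-only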

Nice. All the details here and, as usual, thanks to everybody involved in this.

L’article PostgreSQL 18: Add “--missing-stats-only” to vacuumdb est apparu en premier sur dbi Blog.

Do you know Aino?

Thu, 2025-03-20 02:44

Nowadays, working in a company requires handling huge amounts of information, and therefore efficiency is crucial!
M-Files Aino, your virtual assistant integrated in M-Files, is here to support you in your daily activities.

Aino answer
What is M-Files Aino?

M-Files Aino leverages natural language processing to help users interact seamlessly with their organization’s knowledge base.
It can summarize documents, translate these summaries, answer questions, and save information as M-Files metadata.
This automation not only makes knowledge more accessible but also assists users in better organizing and classifying new documents.

M-Files Aino top features

Aino brings many features to enterprise content management systems; new capabilities appear with each release, but the main ones (currently) are:

  • AI-Powered Summaries: Generate intelligent document summaries for quicker comprehension and review.
  • Streamlined Information Discovery: Locate essential information rapidly, reducing data clutter.
  • Language-Independent Queries: Interact with document content in any language, fostering global collaboration.
  • Automated Task Processing: Automate manual tasks, freeing up time for strategic work.
  • Secure Data Handling: Ensures data security; data is only processed by M-Files Aino within the M-Files Cloud, nothing is shared with other tools.
Benefits of M-Files Aino

The advantages are obvious, but here are some important points:

  • Enhanced Productivity: By automating routine tasks, employees can focus on more strategic activities, leading to increased efficiency.
  • Improved Accuracy: AI-driven processes reduce the likelihood of human errors in document handling and data entry.
  • Global Collaboration: With multilingual support, teams across different regions can collaborate seamlessly.
  • Informed Decision-Making: Quick access to summarized information and AI-generated insights aids in making timely and informed decisions.
And in real life?

OK, AI is cool and trendy, but does it really add anything? The answer is yes. We deal with more and more documents, and retrieving the right information becomes ever more challenging.
I’ve listed three very concrete scenarios where AI saves time for people who have little or no time:

  • Law firms can quickly summarize large legal documents, making it easier to extract pertinent information.
  • Medical professionals can swiftly access patient information and research data, enhancing patient care.
  • Financial analysts can efficiently process large volumes of financial reports and market analyses.
Future

AI technology continues to evolve, and so does M-Files Aino: as written above, each version adds interesting new things, and the way we manage and work with documents changes with it.
We are still only at the beginning of the awesome capabilities offered by this technology, which is revolutionizing the world of ECM.

If you want to see a small Aino introduction, it’s here.

And as usual for any question feel free to contact us.

L’article Do you know Aino? est apparu en premier sur dbi Blog.

PRGH-1030 when doing restore-node -g on ODA

Fri, 2025-03-14 07:47
Introduction

Patching your Oracle Database Appliance from release 19.20 or earlier to release 19.21 or newer implies the use of Data Preserving Reprovisioning (DPR). Most of the time, and with adequate preparation, the DPR works fine. But if something goes wrong at the restore-node -g step, you will need some troubleshooting and maybe to open an SR: there is no rollback possible. Here is the problem I recently had when patching an ODA from 19.20 to 19.26, using the intermediate 19.24 for DPR.

What is DPR and why you need it on ODA?

Oracle Database Appliance has a global quarterly patch: this is the only patch you are supposed to use on this kind of hardware. The patch includes updates for Grid Infrastructure (GI) and database (DB), but also updates for the operating system (OS), BIOS, firmware and other software components. Regarding the OS, Linux 7 was in use up to version 19.20. Starting from 19.21, Linux 8 is mandatory and there is no update-server command available: you will need to reinstall the new OS on your ODA using DPR.

Basically, DPR is a complete reinstall of the OS without erasing the data disks. Before doing the reinstall, you will “detach” the node. After the reinstall, you will “restore” the node using an odacli command: restore-node -g. You can compare this to the unplug/plug operations on a PDB: unplugging a PDB writes the metadata to a file you will later use to plug your PDB back in somewhere else. The DPR feature is embedded in patches 19.21, 19.22, 19.23 and 19.24. As patches are only cumulative with the 4 previous versions, starting from 19.25 you’re supposed to already be on release 19.21 at least, meaning already on Linux 8.

My ODA setup: a real client environment

My client has 2x ODA X9-2S with Enterprise Edition and Data Guard for production databases. These ODAs were deployed 2 years ago and were already patched from 19.17 to 19.20 one year ago. Now it’s time to patch them to the latest 19.26, using DPR 19.24 for the Linux 8 jump.

Pre-upgrade report and detach-node

DCS components are updated to 19.24 at first step, then the pre-upgrade report can be tried:

odacli update-dcsadmin -v 19.24.0.0.0
odacli update-dcscomponents -v 19.24.0.0.0
odacli update-dcsagent -v 19.24.0.0.0
odacli create-preupgradereport -bm 
odacli describe-preupgradereport -i 03f53c9c-fe82-4c2b-bf18-49fd31853054
...

This pre-upgrade report MUST be successful: if not, solve the listed problems and retry until it’s OK.

Once it’s OK, the detach-node will back up the metadata of your ODA for a later restore:

odacli detach-node -all

As the detach files are stored locally, make sure to back up these files to an external volume, otherwise you will not be able to restore your data after reimaging.

cp -r /opt/oracle/oak/restore/out/* /mnt/backup/ODA_backup/
Reimaging the ODA

Reimaging is done with patch 30403643 (version 19.24 in my case). This is an ISO file you will plug in through ILOM as a virtual CDROM. Then you will reboot the server and the automatic setup will start. After this reimaging, you will need to run configure-firstnet for basic network configuration, and then register the new GI clone and the detached server archive:

odacli update-repository -f /mnt/backup/ODA_backup/odacli-dcs-19.24.0.0.0-240724-GI-19.24.0.0.zip
odacli update-repository -f /mnt/backup/ODA_backup/serverarchive_srvoda1.zip

Once both files have successfully been registered, the restore-node can be done:

odacli restore-node -g
PRGH-1030 and timeout of the restore-node -g

The restore-node -g is similar to a create-appliance: it will configure the system, create the users, provision GI and configure ASM. But instead of configuring ASM with fresh disks without any data, it will read the ASM headers on the disks and mount the existing diskgroups. This means that you will get back the DB homes, the databases and the ACFS volumes.

This time, the restore-node -g didn’t work for me: it took a very long time (more than one hour) before ending with a failure. A timeout was probably triggered, as odacli jobs always finish with either a success or a failure:

odacli describe-job -i "46addc5f-7a1f-4f4b-bccf-78cb3708bef9"
Job details
----------------------------------------------------------------
                     ID:  46addc5f-7a1f-4f4b-bccf-78cb3708bef9
            Description:  Restore node service - GI
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i 46addc5f-7a1f-4f4b-bccf-78cb3708bef9 --ecr" command)
                Created:  March 5, 2025 9:03:44 AM CET
                Message:  DCS-10001:Internal error encountered: Failed to provision GI with RHP at the home: /u01/app/19.24.0.0/grid: DCS-10001:Internal error encountered: PRGH-1030 : The environments on nodes 'srvoda1' do not satisfy some of the prerequisite checks.
..
 
Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ---------
Restore node service creation            March 5, 2025 9:04:01 AM CET             March 5, 2025 10:17:35 AM CET            Failure
Restore node service creation            March 5, 2025 9:04:01 AM CET             March 5, 2025 10:17:35 AM CET            Failure
Restore node service creation            March 5, 2025 9:04:01 AM CET             March 5, 2025 10:17:35 AM CET            Failure
Provisioning service creation            March 5, 2025 9:04:01 AM CET             March 5, 2025 10:17:35 AM CET            Failure
Provisioning service creation            March 5, 2025 9:04:01 AM CET             March 5, 2025 10:17:35 AM CET            Failure
Validate absence of Interconnect         March 5, 2025 9:04:01 AM CET             March 5, 2025 9:04:02 AM CET             Success
network configuration file
Setting up Network                       March 5, 2025 9:04:03 AM CET             March 5, 2025 9:04:03 AM CET             Success
Restart network interface pubnet         March 5, 2025 9:04:03 AM CET             March 5, 2025 9:04:09 AM CET             Success
Setting up Vlan                          March 5, 2025 9:04:09 AM CET             March 5, 2025 9:04:10 AM CET             Success
Restart network interface priv0.100      March 5, 2025 9:04:10 AM CET             March 5, 2025 9:04:11 AM CET             Success
Restart network interface privasm        March 5, 2025 9:04:11 AM CET             March 5, 2025 9:04:11 AM CET             Success
Setting up Network                       March 5, 2025 9:04:11 AM CET             March 5, 2025 9:04:11 AM CET             Success
Restart network interface privasm        March 5, 2025 9:04:11 AM CET             March 5, 2025 9:04:17 AM CET             Success
Network update                           March 5, 2025 9:04:17 AM CET             March 5, 2025 9:04:30 AM CET             Success
Updating network                         March 5, 2025 9:04:17 AM CET             March 5, 2025 9:04:30 AM CET             Success
Setting up Network                       March 5, 2025 9:04:17 AM CET             March 5, 2025 9:04:17 AM CET             Success
Restart network interface btbond1        March 5, 2025 9:04:17 AM CET             March 5, 2025 9:04:20 AM CET             Success
Restart network interface btbond1        March 5, 2025 9:04:20 AM CET             March 5, 2025 9:04:24 AM CET             Success
Restart network interface pubnet         March 5, 2025 9:04:24 AM CET             March 5, 2025 9:04:29 AM CET             Success
Validate availability of pubnet          March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'asmdba' creation           March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'asmoper' creation          March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'asmadmin' creation         March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'dba' creation              March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'dbaoper' creation          March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:30 AM CET             Success
OS usergroup 'oinstall' creation         March 5, 2025 9:04:30 AM CET             March 5, 2025 9:04:31 AM CET             Success
OS user 'grid' creation                  March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
OS user 'oracle' creation                March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
Default backup policy creation           March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
Backup Config name validation            March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
Backup config metadata persist           March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
Grant permission to RHP files            March 5, 2025 9:04:31 AM CET             March 5, 2025 9:04:31 AM CET             Success
Add SYSNAME in Env                       March 5, 2025 9:04:32 AM CET             March 5, 2025 9:04:32 AM CET             Success
Install oracle-ahf                       March 5, 2025 9:04:32 AM CET             March 5, 2025 9:06:13 AM CET             Success
Stop DCS Admin                           March 5, 2025 9:06:16 AM CET             March 5, 2025 9:06:16 AM CET             Success
Generate mTLS certificates               March 5, 2025 9:06:16 AM CET             March 5, 2025 9:06:17 AM CET             Success
Exporting Public Keys                    March 5, 2025 9:06:17 AM CET             March 5, 2025 9:06:18 AM CET             Success
Creating Trust Store                     March 5, 2025 9:06:18 AM CET             March 5, 2025 9:06:22 AM CET             Success
Update config files                      March 5, 2025 9:06:22 AM CET             March 5, 2025 9:06:22 AM CET             Success
Restart DCS Admin                        March 5, 2025 9:06:22 AM CET             March 5, 2025 9:06:43 AM CET             Success
Unzipping storage configuration files    March 5, 2025 9:06:43 AM CET             March 5, 2025 9:06:43 AM CET             Success
Reloading multipath devices              March 5, 2025 9:06:43 AM CET             March 5, 2025 9:06:44 AM CET             Success
Restart oakd                             March 5, 2025 9:06:44 AM CET             March 5, 2025 9:06:55 AM CET             Success
Restart oakd                             March 5, 2025 9:08:05 AM CET             March 5, 2025 9:08:15 AM CET             Success
Restore Quorum Disks                     March 5, 2025 9:08:15 AM CET             March 5, 2025 9:08:16 AM CET             Success
Creating GI home directories             March 5, 2025 9:08:16 AM CET             March 5, 2025 9:08:16 AM CET             Success
Extract GI clone                         March 5, 2025 9:08:16 AM CET             March 5, 2025 9:09:29 AM CET             Success
Creating wallet for Root User            March 5, 2025 9:09:29 AM CET             March 5, 2025 9:09:33 AM CET             Success
Creating wallet for ASM Client           March 5, 2025 9:09:33 AM CET             March 5, 2025 9:09:39 AM CET             Success
Grid stack creation                      March 5, 2025 9:09:39 AM CET             March 5, 2025 10:17:35 AM CET            Failure
GI Restore with RHP                      March 5, 2025 9:09:39 AM CET             March 5, 2025 10:17:35 AM CET            Failure

According to the error message, my system does not satisfy some of the prerequisite checks.

Let’s check whether the Error Correlation report can help:

odacli describe-job -i 46addc5f-7a1f-4f4b-bccf-78cb3708bef9 --ecr
ODA Assistant - Error Correlation report
----------------------------------------
          Failed job ID:  46addc5f-7a1f-4f4b-bccf-78cb3708bef9
            Description:  Restore node service - GI
             Start Time:  2025-03-05 09:04:01
               End Time:  2025-03-05 10:17:35
         EC report path: /opt/oracle/dcs/da/da_repo/46addc5f-7a1f-4f4b-bccf-78cb3708bef9.json

Failed Task Messages
--------------------
[Restore node service - GI] -  DCS-10001:Internal error encountered: Failed to provision GI with RHP at the home: /u01/app/19.24.0.0/grid: DCS-10001:Internal error encountered: PRGH-1030 : The environments on nodes 'srvoda1' do not satisfy some of the prerequisite checks. ..

srvoda1 Log Messages
----------------------------
  DCS Agent
  ~~~~~~~~~
    Error Logs
    ==========
    [Install oracle-ahf] - Trying to add string [ERROR] : Unable to switch to home directory
    [Install oracle-ahf] - error is package tfa-oda is not installed
    [GI Restore with RHP] - Calling rhp provGI
    [GI Restore with RHP] - Task(id: TaskDcsJsonRpcExt_707, name: GI Restore with RHP) failed
    [GI Restore with RHP] - .. d.subtasks.isempty=true d.status=Failure
    [Grid stack creation] - ..
    [Grid stack creation] - .. d.subtasks.isempty=false d.status=Failure
    [Grid stack creation] - DCS-10001:Internal error encountered: PRGH-1030 : The environments on nodes 'srvoda1' do not satisfy some of the prerequisite checks.
      Error code - DCS-10001
      Cause: An internal error occurred.
      Action: Contact Oracle Support for assistance.
    [Grid stack creation] - .., output:
    Warning Logs
    ============
    [[ SEND-THREAD 77 ]] - [ [ SEND-THREAD 77 ] dcs0-priv:22001] Request failed: Operation: GET Host: dcs0-priv:22001 Path: /joblocks/46addc5f-7a1f-4f4b-bccf-78cb3708bef9 Data: null Status: 404
    [Provisioning service creation] - dm_multipath module is not loaded, attempting to load it...

  RHP
  ~~~
    Error Logs
    ==========
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [SiteFactoryImpl.fetchSite:136]  EntityNotExistsException PRGR-110 : Repository object "SRVODA1" of type "SITE" does not exist.
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [WorkingCopyOperationImpl.internalAddGI:18517]  Expected: Site does not exist, will have to be created :PRGR-119 : Site "srvoda1" does not exist.
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [RemoteFactoryImpl.getCRSHomeOfRemoteCluster:1377]  COException: PRCZ-4001 : failed to execute command "/bin/cat" using the privileged execution plugin "odaexec" on nodes "srvoda1" within 120 seconds
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - PRCZ-2103 : Failed to execute command "/bin/cat" on node "srvoda1" as user "root". Detailed error:
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - /bin/cat: /etc/oracle/olr.loc: No such file or directory
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [WorkingCopyOperationImpl.internalAddGI:18972]  Exec Exception, expected here :PRGR-118 : Working copy "OraGrid192400" does not exist.
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [OracleHomeImpl.addWCMarkerFile:4403]  Ignore the exception to create RHP WC marker file : PRKH-1009 : CRS HOME must be defined in the environment or in the Oracle Cluster Registry
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [ExecCommandNoUserEqImpl.runCmd:491]  Final CompositeOperation exception: PRCC-1021 : One or more of the submitted commands did not execute successfully.
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - PRCC-1026 : Command "/u01/app/19.24.0.0/grid/gridSetup.sh -silent -responseFile /u01/app/19.24.0.0/grid//crs/install/rhpdata/grid7317919467403765800.rsp -ignorePrereq -J-Dskip.cvu.root.checks=true -J-Doracle.install.grid.validate.all=false oracle_install_crs_ODA_CONFIG=olite oracle_install_crs_ConfigureMgmtDB=false -J-Doracle.install.crs.skipGIMRDiskGroupSizeCheck=true oracle_install_asm_UseExistingDG=true -J-Doracle.install.grid.validate.CreateASMDiskGroup=false" submitted on node srvoda1 timed out after 4,000 seconds.
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - </OUTPUT>
    [jobid-46addc5f-7a1f-4f4b-bccf-78cb3708bef9] - [OperationAPIImpl.provGI:676]  OperationAPIException: PRGH-1030 : The environments on nodes 'srvoda1' do not satisfy some of the prerequisite checks.

Release Notes
-------------
  No matching results were found.

Documentation
-------------
  1. Error in restore node process in Data Preserving Reprovisioning
      Abstract - In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
      Link - https://srvoda1.dbi-services.net:7093/docs/cmtrn/issues-with-oda-odacli.html#GUID-F1385628-9F87-4FEF-8D27-289A3ED459EC
  2. Error in restore node process in Data Preserving Reprovisioning
      Abstract - In the Data Preserving Reprovisioning process, during node restore, an error may be encountered.
      Link - https://srvoda1.dbi-services.net:7093/docs/cmtrn/issues-with-oda-odacli.html#GUID-75D52887-D425-4753-AF44-EFAB5C148873
  3. Managing Backup, Restore, and Recovery on a DB System in a KVM Deployment
      Abstract - Understand the backup, restore, and recovery operations supported on a DB system in a KVM deployment.
      Link - https://srvoda1.dbi-services.net:7093/docs/cmtxp/managing-oracle-database-appliance-kvm-deployment1.html#GUID-7318F4D7-4CB8-486C-9DC7-A7490925B866
  4. Backup, Restore and Recover Databases
      Abstract - Review backup, restore, and recovery operations for your databases.
      Link - https://srvoda1.dbi-services.net:7093/docs/cmtxp/backup-recover-restore.html#GUID-032C43EC-20B9-4036-ADA9-7631EEBBFEF6
  5. Using the CLI to Backup, Restore, and Recover
      Abstract - Use the command-line interface to backup, restore, and recover databases.
      Link - https://srvoda1.dbi-services.net:7093/docs/cmtxp/backup-recover-restore.html#GUID-54F9A4A6-59B8-4A18-BE41-1CCB9096E2C5

NOTE: For additional details such as file name and line numbers of error logs, please refer to /opt/oracle/dcs/da/da_repo/924d139e-50cc-4893-ab0a-6acb7e7eeb9c.json

There are a lot of errors, but it’s hard to identify the root cause.

Let’s have a look at the logfile of the GI setup:

vi /u01/app/oraInventory/logs/GridSetupActions2025-03-05_09-09-40AM/gridSetupActions2025-03-05_09-09-40AM.out
…
Resolution failed, cached value is left untouched for variable
…
No ECDSA host key is known for srvoda1 and you have requested strict checking
…

Could it be a DNS issue? As far as the basic network settings go, I’m pretty sure the IP, netmask and gateway are correct.
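
Before going further, it is worth verifying name resolution directly from the node (a quick sanity check of mine, not part of the original procedure; the IP below is a placeholder):

cat /etc/resolv.conf                  # which resolvers is the OS actually using?
nslookup srvoda1.dbi-services.net     # forward lookup of the node name
nslookup 10.1.2.50                    # reverse lookup of the node's public IP (placeholder address)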

Identify and solve the problem by editing the json file

Like create-appliance, restore-node -g uses a json file that was restored earlier, when the server archive was registered with update-repository.
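
The restored metadata, including provisionInstance.json, lands under /opt/oracle/oak/restore/metadata (the directory used below), so it can be inspected before launching the restore:

ls -l /opt/oracle/oak/restore/metadata/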

Let’s compare my own backup files and the json file used by restore-node -g:

cat /mnt/backup/ODA_backup/backup_ODA_srvoda1_20250312_0820/resolv.conf
search dbi-services.net
nameserver 10.1.2.127
nameserver 10.1.2.128

cat /opt/oracle/oak/restore/metadata/provisionInstance.json  | grep dns
    "dnsServers" : [ "10.4.110.4", "10.4.110.5" ],

OK, the DNS servers had been changed manually in resolv.conf at some point, so the values in the json file are no longer correct.

As the restore-node -g failed, a cleanup of the system is mandatory:

/opt/oracle/oak/onecmd/cleanup.pl
INFO: Log file is /opt/oracle/oak/log/srvoda1/cleanup/cleanup_2025-03-05_10-40-06.log
INFO: Log file is /opt/oracle/oak/log/srvoda1/cleanup/dcsemu_diag_precleanup_2025-03-05_10-40-06.log

INFO: Platform is 'BM'
INFO: *******************************************************************
INFO: ** Starting process to cleanup provisioned host srvoda1          **
INFO: *******************************************************************
WARNING: DPR environment detected. DPR specific cleanup involves
WARNING: deconfiguring the ODA software stack without touching ASM
WARNING: storage to allow rerunning of the 'odacli restore-node -g'
WARNING: command. If regular cleanup(which erases ASM disk headers)
WARNING: is intended, rerun cleanup.pl with '-nodpr' option.
WARNING: If Multi-User Access is enabled, use '-omausers' option to
WARNING: delete the custom users created during the previous run.
Do you want to continue (yes/no) : yes
INFO:
Running cleanup will delete Grid User - 'grid' and
INFO: DB user - 'oracle' and also the
INFO: groups 'oinstall,dba,asmadmin,asmoper,asmdba'
INFO: nodes will be rebooted
Do you want to continue (yes/no) : yes

INFO: /u01/app/19.24.0.0/grid/bin/crsctl.bin

INFO: *************************************
INFO: ** Checking for GI bits presence
INFO: *************************************
INFO: GI bits /u01/app/19.24.0.0/grid found on system under /u01/app directory...

INFO: *************************************
INFO: ** DPR Cleanup
INFO: *************************************
INFO: Nothing to do.
SUCCESS: DPR cleanup actions completed.
INFO: Attempting to stop DCS agent on local node

INFO: *************************************
INFO: ** Executing AFD cleanup commands
INFO: *************************************
INFO: *************************************
INFO: ** Cleaning Oracle HAMI for ODA
INFO: *************************************
INFO: ** - Oracle HAMI for ODA - ensembles cleaned successfully
INFO: ** - Oracle HAMI for ODA - users cleaned successfully
INFO: *************************************
INFO: ** Executing stack deinstall commands
INFO: *************************************
INFO: *************************************
INFO: ** Removing IPC objects
INFO: *************************************
INFO: Cleaning up IDM configurations...
Deleting directory </opt/oracle/dcs/idm>
INFO: *************************************
INFO: ** Cleaning miscellaneous components:
INFO: *************************************
INFO: ** - reset limits.conf
INFO: ** - delete users
INFO: ** - delete groups
INFO: ** - hostname, gateway and hosts reset commands
INFO: ** - dcs cleanup and orphan files removal commands
INFO: Attempting to clean MySQL tables on local node

INFO: Cleaning up network bridges
INFO: default net is: pubnet
INFO: Cleaning up network bridges: pubnet on btbond1
INFO: Reset public interface: pubnet
INFO: Cleaning up network bridges: privasm on priv0.100
INFO: remove VLAN config: /etc/sysconfig/network-scripts/ifcfg-priv0.100
INFO: BaseDBCC cleanup - skip
INFO: *************************************
INFO: ** Removing KVM files
INFO: *************************************
INFO: *************************************
INFO: ** Removing BM CPU Pool files
INFO: *************************************
INFO: ** - networking cleaning commands
INFO: ** - UTC reset commands
INFO: *************************************
INFO: ** Removing Oracle AHF RPM
INFO: *************************************
INFO: Oracle AHF RPM is installed as : oracle-ahf-2405000-20240715121646.x86_64
INFO: Uninstalling Oracle AHF RPM
INFO: Oracle AHF RPM uninstalled successfully
INFO: Oracle AHF RPM is installed as : oracle-ahf-2405000-20240715121646.x86_64
INFO: Delete directory clones.local (if existing)...
INFO: Cleaning up ACFS mounts...
INFO: Reset password for 'root' to default value
INFO: Executing <command to reset root password to default value>
INFO: Removing SSH keys on srvoda1

INFO: Rebooting the system via <reboot>...
INFO: Executing <reboot>

INFO: Cleanup was successful
INFO: Log file is /opt/oracle/oak/log/srvoda1/cleanup/cleanup_2025-03-05_10-40-06.log

WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli restore-node -g".

Once the server has rebooted, let’s register the GI clone and the server archive again:

odacli update-repository -f /mnt/backup/ODA_backup/odacli-dcs-19.24.0.0.0-240724-GI-19.24.0.0.zip
odacli update-repository -f /mnt/backup/ODA_backup/serverarchive_srvoda1.zip
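
Both commands return a job id; before going further, make sure the two repository updates completed with status Success (a quick check, not shown in the original run):

odacli list-jobs | tail -3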

Let’s fix the DNS servers in provisionInstance.json and rerun restore-node -g:

sed -i 's/10.4.110.4/10.1.2.127/g' /opt/oracle/oak/restore/metadata/provisionInstance.json
sed -i 's/10.4.110.5/10.1.2.128/g' /opt/oracle/oak/restore/metadata/provisionInstance.json

cat /opt/oracle/oak/restore/metadata/provisionInstance.json  | grep dns
    "dnsServers" : [ "10.1.2.127", "10.1.2.128" ],

odacli restore-node -g
...

odacli describe-job -i "f2e14691-1fc4-4b8d-9186-cbb55a69c5dd"
Job details
----------------------------------------------------------------
                     ID:  f2e14691-1fc4-4b8d-9186-cbb55a69c5dd
            Description:  Restore node service - GI
                 Status:  Success
                Created:  March 5, 2025 5:33:33 PM CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ---------
Restore node service creation            March 5, 2025 5:33:40 PM CET             March 5, 2025 5:53:33 PM CET             Success
Validate absence of Interconnect         March 5, 2025 5:33:40 PM CET             March 5, 2025 5:33:41 PM CET             Success
network configuration file
Setting up Network                       March 5, 2025 5:33:42 PM CET             March 5, 2025 5:33:42 PM CET             Success
Restart network interface pubnet         March 5, 2025 5:33:42 PM CET             March 5, 2025 5:33:48 PM CET             Success
Setting up Vlan                          March 5, 2025 5:33:48 PM CET             March 5, 2025 5:33:49 PM CET             Success
Restart network interface priv0.100      March 5, 2025 5:33:49 PM CET             March 5, 2025 5:33:50 PM CET             Success
Restart network interface privasm        March 5, 2025 5:33:50 PM CET             March 5, 2025 5:33:50 PM CET             Success
Setting up Network                       March 5, 2025 5:33:51 PM CET             March 5, 2025 5:33:51 PM CET             Success
Restart network interface privasm        March 5, 2025 5:33:51 PM CET             March 5, 2025 5:33:56 PM CET             Success
Network update                           March 5, 2025 5:33:56 PM CET             March 5, 2025 5:34:09 PM CET             Success
Updating network                         March 5, 2025 5:33:56 PM CET             March 5, 2025 5:34:09 PM CET             Success
Setting up Network                       March 5, 2025 5:33:56 PM CET             March 5, 2025 5:33:56 PM CET             Success
Restart network interface btbond1        March 5, 2025 5:33:56 PM CET             March 5, 2025 5:34:00 PM CET             Success
Restart network interface btbond1        March 5, 2025 5:34:00 PM CET             March 5, 2025 5:34:03 PM CET             Success
Restart network interface pubnet         March 5, 2025 5:34:03 PM CET             March 5, 2025 5:34:09 PM CET             Success
Validate availability of pubnet          March 5, 2025 5:34:09 PM CET             March 5, 2025 5:34:09 PM CET             Success
OS usergroup 'asmdba' creation           March 5, 2025 5:34:09 PM CET             March 5, 2025 5:34:09 PM CET             Success
OS usergroup 'asmoper' creation          March 5, 2025 5:34:09 PM CET             March 5, 2025 5:34:09 PM CET             Success
OS usergroup 'asmadmin' creation         March 5, 2025 5:34:09 PM CET             March 5, 2025 5:34:09 PM CET             Success
OS usergroup 'dba' creation              March 5, 2025 5:34:09 PM CET             March 5, 2025 5:34:10 PM CET             Success
OS usergroup 'dbaoper' creation          March 5, 2025 5:34:10 PM CET             March 5, 2025 5:34:10 PM CET             Success
OS usergroup 'oinstall' creation         March 5, 2025 5:34:10 PM CET             March 5, 2025 5:34:10 PM CET             Success
OS user 'grid' creation                  March 5, 2025 5:34:10 PM CET             March 5, 2025 5:34:10 PM CET             Success
OS user 'oracle' creation                March 5, 2025 5:34:10 PM CET             March 5, 2025 5:34:11 PM CET             Success
Default backup policy creation           March 5, 2025 5:34:11 PM CET             March 5, 2025 5:34:11 PM CET             Success
Backup Config name validation            March 5, 2025 5:34:11 PM CET             March 5, 2025 5:34:11 PM CET             Success
Backup config metadata persist           March 5, 2025 5:34:11 PM CET             March 5, 2025 5:34:11 PM CET             Success
Grant permission to RHP files            March 5, 2025 5:34:11 PM CET             March 5, 2025 5:34:11 PM CET             Success
Add SYSNAME in Env                       March 5, 2025 5:34:11 PM CET             March 5, 2025 5:34:11 PM CET             Success
Install oracle-ahf                       March 5, 2025 5:34:11 PM CET             March 5, 2025 5:35:54 PM CET             Success
Stop DCS Admin                           March 5, 2025 5:35:56 PM CET             March 5, 2025 5:35:56 PM CET             Success
Generate mTLS certificates               March 5, 2025 5:35:56 PM CET             March 5, 2025 5:35:58 PM CET             Success
Exporting Public Keys                    March 5, 2025 5:35:58 PM CET             March 5, 2025 5:35:59 PM CET             Success
Creating Trust Store                     March 5, 2025 5:35:59 PM CET             March 5, 2025 5:36:02 PM CET             Success
Update config files                      March 5, 2025 5:36:02 PM CET             March 5, 2025 5:36:02 PM CET             Success
Restart DCS Admin                        March 5, 2025 5:36:02 PM CET             March 5, 2025 5:36:23 PM CET             Success
Unzipping storage configuration files    March 5, 2025 5:36:23 PM CET             March 5, 2025 5:36:23 PM CET             Success
Reloading multipath devices              March 5, 2025 5:36:23 PM CET             March 5, 2025 5:36:24 PM CET             Success
Restart oakd                             March 5, 2025 5:36:24 PM CET             March 5, 2025 5:36:34 PM CET             Success
Restart oakd                             March 5, 2025 5:37:45 PM CET             March 5, 2025 5:37:55 PM CET             Success
Restore Quorum Disks                     March 5, 2025 5:37:55 PM CET             March 5, 2025 5:37:56 PM CET             Success
Creating GI home directories             March 5, 2025 5:37:56 PM CET             March 5, 2025 5:37:56 PM CET             Success
Extract GI clone                         March 5, 2025 5:37:56 PM CET             March 5, 2025 5:39:07 PM CET             Success
Creating wallet for Root User            March 5, 2025 5:39:07 PM CET             March 5, 2025 5:39:15 PM CET             Success
Creating wallet for ASM Client           March 5, 2025 5:39:15 PM CET             March 5, 2025 5:39:18 PM CET             Success
Grid stack creation                      March 5, 2025 5:39:18 PM CET             March 5, 2025 5:49:35 PM CET             Success
GI Restore with RHP                      March 5, 2025 5:39:18 PM CET             March 5, 2025 5:46:26 PM CET             Success
Updating GIHome version                  March 5, 2025 5:46:28 PM CET             March 5, 2025 5:46:31 PM CET             Success
Restarting Clusterware                   March 5, 2025 5:46:32 PM CET             March 5, 2025 5:49:35 PM CET             Success
Post cluster OAKD configuration          March 5, 2025 5:49:35 PM CET             March 5, 2025 5:50:30 PM CET             Success
Mounting disk group DATA                 March 5, 2025 5:50:30 PM CET             March 5, 2025 5:50:31 PM CET             Success
Mounting disk group RECO                 March 5, 2025 5:50:38 PM CET             March 5, 2025 5:50:45 PM CET             Success
Setting ACL for disk groups              March 5, 2025 5:50:53 PM CET             March 5, 2025 5:50:55 PM CET             Success
Register Scan and Vips to Public Network March 5, 2025 5:50:55 PM CET             March 5, 2025 5:50:57 PM CET             Success
Adding Volume DUMPS to Clusterware       March 5, 2025 5:51:10 PM CET             March 5, 2025 5:51:13 PM CET             Success
Adding Volume ACFSCLONE to Clusterware   March 5, 2025 5:51:13 PM CET             March 5, 2025 5:51:15 PM CET             Success
Adding Volume ODABASE_N0 to Clusterware  March 5, 2025 5:51:15 PM CET             March 5, 2025 5:51:18 PM CET             Success
Adding Volume COMMONSTORE to Clusterware March 5, 2025 5:51:18 PM CET             March 5, 2025 5:51:20 PM CET             Success
Adding Volume ORAHOME_SH to Clusterware  March 5, 2025 5:51:20 PM CET             March 5, 2025 5:51:23 PM CET             Success
Enabling Volume(s)                       March 5, 2025 5:51:23 PM CET             March 5, 2025 5:52:26 PM CET             Success
Discover ACFS clones config              March 5, 2025 5:53:18 PM CET             March 5, 2025 5:53:27 PM CET             Success
Configure export clones resource         March 5, 2025 5:53:26 PM CET             March 5, 2025 5:53:27 PM CET             Success
Discover DbHomes ACFS config             March 5, 2025 5:53:27 PM CET             March 5, 2025 5:53:30 PM CET             Success
Discover OraHomeStorage volumes          March 5, 2025 5:53:27 PM CET             March 5, 2025 5:53:30 PM CET             Success
Setting up Hugepages                     March 5, 2025 5:53:30 PM CET             March 5, 2025 5:53:30 PM CET             Success
Provisioning service creation            March 5, 2025 5:53:32 PM CET             March 5, 2025 5:53:32 PM CET             Success
Persist new agent state entry            March 5, 2025 5:53:32 PM CET             March 5, 2025 5:53:32 PM CET             Success
Persist new agent state entry            March 5, 2025 5:53:32 PM CET             March 5, 2025 5:53:32 PM CET             Success
Restart DCS Agent                        March 5, 2025 5:53:32 PM CET             March 5, 2025 5:53:33 PM CET             Success

This time it worked fine. The next step will be restore-node -d to restore the databases.
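
For reference, the database restore is launched in a similar way (prerequisites and options are described in the ODA Data Preserving Reprovisioning documentation):

odacli restore-node -d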

The backup script I use before patching an ODA

Troubleshooting is much easier if you can look at the configuration files that were in use prior to reimaging. Here is a script I have been using for years before patching or reimaging an ODA. I would recommend building your own script based on mine, adapted to your specific configuration:

vi /mnt/backup/prepatch_backup.sh
# Backup important files before patching
export BKPPATH=/mnt/backup/ODA_backup/backup_ODA_`hostname`_`date +"%Y%m%d_%H%M"`
echo "Backing up to " $BKPPATH
mkdir -p $BKPPATH
odacli list-databases > $BKPPATH/list-databases.txt
# Capture the list of running instances (pmon processes, excluding ASM and APX)
ps -ef | grep pmon | grep -v ASM | grep -v APX | grep -v grep | cut -c 58- | sort > $BKPPATH/running-instances.txt
odacli list-dbhomes > $BKPPATH/list-dbhomes.txt
odacli list-dbsystems > $BKPPATH/list-dbsystems.txt
odacli list-vms > $BKPPATH/list-vms.txt
crontab -u oracle -l  > $BKPPATH/crontab-oracle.txt
crontab -u grid -l  > $BKPPATH/crontab-grid.txt
crontab -l  > $BKPPATH/crontab-root.txt

cat /etc/fstab >  $BKPPATH/fstab.txt
cat /etc/oratab >  $BKPPATH/oratab.txt
cat /etc/sysconfig/network >  $BKPPATH/etc-sysconfig-network.txt
cat /etc/hosts  >  $BKPPATH/hosts
cat /etc/resolv.conf  >  $BKPPATH/resolv.conf
cp /etc/krb5.conf  $BKPPATH/
cp /etc/krb5.keytab  $BKPPATH/
mkdir $BKPPATH/network-scripts
cp  /etc/sysconfig/network-scripts/ifcfg*  $BKPPATH/network-scripts/
odacli describe-system > $BKPPATH/describe-system.txt
odacli  describe-component >  $BKPPATH/describe-component.txt
# Dump root's bash history for reference
HISTFILE=~/.bash_history
set -o history
history > $BKPPATH/history-root.txt
cp /home/oracle/.bash_history $BKPPATH/history-oracle.txt
df -h >  $BKPPATH/filesystems-status.txt

# Save tnsnames.ora and sqlnet.ora from every database home
for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/network/admin/ ; cp $a/network/admin/tnsnames.ora $BKPPATH/$a/network/admin/; cp $a/network/admin/sqlnet.ora $BKPPATH/$a/network/admin/; done
# Save the wallets (owm) from every database home
for a in `odacli list-dbhomes -j | grep dbHomeLocation | awk -F '"' '{print $4}' | sort` ; do mkdir -p $BKPPATH/$a/owm/ ; cp -r $a/owm/* $BKPPATH/$a/owm/; done
# Save listener.ora and sqlnet.ora from the grid home (path derived from the running listener process)
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`network/admin/listener.ora $BKPPATH/gridhome-listener.ora
cp `ps -ef | grep -v grep | grep LISTENER | awk -F ' ' '{print $8}' | awk -F 'bin' '{print $1}'`network/admin/sqlnet.ora $BKPPATH/gridhome-sqlnet.ora

tar czf $BKPPATH/u01-app-oracle-admin.tgz /u01/app/oracle/admin/
tar czf $BKPPATH/u01-app-oracle-local.tgz /u01/app/oracle/local/
tar czf $BKPPATH/home.tgz /home/
cp /etc/passwd $BKPPATH/
cp /etc/group $BKPPATH/

echo "End"
echo "Backup files size:"
du -hs  $BKPPATH
echo "Backup files content:"
ls -lrt  $BKPPATH
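
To use it, make the script executable and run it as root right before patching or reimaging (this assumes /mnt/backup is external storage, such as an NFS share, that survives the operation):

chmod +x /mnt/backup/prepatch_backup.sh
/mnt/backup/prepatch_backup.sh
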
Conclusion

I would recommend following these three rules, which can save you hours of troubleshooting:

  • as mentioned in the ODA documentation: never change network parameters manually (IP, gateway, hostname, DNS, bonding mode, …)
  • document every manual change you make on your ODA (additional tool setups, specific settings, …)
  • take an extensive backup of all configuration files before patching or a Data Preserving Reprovisioning (use the script provided in the previous section as a basis for your own)

The article PRGH-1030 when doing restore-node -g on ODA appeared first on dbi Blog.

PostgreSQL 18: More granular log_connections

Thu, 2025-03-13 03:24

Many of our customers enable log_connections because of auditing requirements. This is a simple boolean which is either turned on or off. Once enabled, every new connection to a PostgreSQL database is logged to the PostgreSQL log file. Up to PostgreSQL 17, typical lines in the log file for a logged connection look like this:

2025-03-13 08:50:05.607 CET - 1 - 6195 - [local] - [unknown]@[unknown] - 0LOG:  connection received: host=[local]
2025-03-13 08:50:05.607 CET - 2 - 6195 - [local] - postgres@postgres - 0LOG:  connection authenticated: user="postgres" method=trust (/u02/pgdata/17/pg_hba.conf:117)
2025-03-13 08:50:05.607 CET - 3 - 6195 - [local] - postgres@postgres - 0LOG:  connection authorized: user=postgres database=postgres application_name=psql

As you can see, three stages are logged: connection received, authenticated, and authorized. By comparing the timestamps, you get an idea of how long each stage took to complete. A consequence is that this can generate quite a lot of noise in the log file if you have many connections.
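
To get a feeling for how much of your log volume comes from connection logging, a simple count is a good start (a sketch; the log directory depends on your logging configuration, the path below is an example):

grep -c "connection received" /u02/pgdata/17/log/postgresql-*.log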

With PostgreSQL 18 this changes: log_connections is no longer a simple boolean but a list of supported values. The valid options are:

  • receipt
  • authentication
  • authorization
  • [empty string]

This list should already tell you what changed: you can now enable logging for specific stages only, instead of all of them at once. An empty string disables connection logging.
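
For example, connection logging can be switched off completely by setting an empty string (a small sketch using the same ALTER SYSTEM approach as below):

postgres=# alter system set log_connections = '';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)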

If, for example, you are only interested in the authorization stage, you can now configure exactly that:

postgres@pgbox:/home/postgres/ [pgdev] psql
psql (18devel)
Type "help" for help.

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 18devel on x86_64-linux, compiled by gcc-14.2.1, 64-bit
(1 row)

postgres=# alter system set log_connections = 'authorization';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on only the “authorization” stage is logged into the log file:

2025-03-13 09:10:41.710 CET - 1 - 6617 - [local] - postgres@postgres - 0LOG:  connection authorized: user=postgres database=postgres application_name=psql

This reduces the amount of logging quite a bit if you are only interested in that stage. Adding all the values restores the old behavior of logging every stage:

postgres=# alter system set log_connections = 'authorization','receipt','authentication';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

With this setting, it looks exactly like before:

2025-03-13 09:14:19.520 CET - 1 - 6629 - [local] - [unknown]@[unknown] - 0LOG:  connection received: host=[local]
2025-03-13 09:14:19.521 CET - 2 - 6629 - [local] - postgres@postgres - 0LOG:  connection authenticated: user="postgres" method=trust (/u02/pgdata/PGDEV/pg_hba.conf:117)
2025-03-13 09:14:19.521 CET - 3 - 6629 - [local] - postgres@postgres - 0LOG:  connection authorized: user=postgres database=postgres application_name=psql
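
To double-check which stages are currently enabled, you can simply display the parameter; the output is the comma-separated list of values you configured (plain psql, not shown in the original post):

postgres=# show log_connections;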

Nice, all the details are there again. As usual, thank you to everyone involved in this.

The article PostgreSQL 18: More granular log_connections appeared first on dbi Blog.

Why Metadata Matters?

Tue, 2025-03-11 07:30
Why?

In today’s digital landscape, efficient Content Management is crucial for business productivity. While many organizations still rely on traditional folder-based file management, modern solutions like M-Files offer a smarter way to store, organize, and retrieve information.

The key differentiator? Metadata. In this blog post, we’ll explore the limitations of traditional file management, the advantages of metadata-driven organization, and why M-Files is the best choice for businesses.

Traditional File Management

Most Document Management Systems (DMS) rely on a hierarchical folder structure where documents are manually placed in specific locations. While this approach is familiar and simple to use, it falls short for several reasons:

  • Difficult to Locate Files: Searching for documents requires navigating multiple nested folders, leading to wasted time and frustration.
  • File Duplication: The same document may be stored in different locations, increasing redundancy and the risk of outdated versions.
  • Limited Search Capabilities: Keyword searches often yield inaccurate results since traditional systems rely on file names rather than properties.
  • Rigid Structure: A document can only reside in one location, making it difficult to classify files that belong to multiple categories.
  • Version Control Issues: Without proper versioning, employees may work on outdated files, leading to errors and inefficiencies.

M-Files concept

Unlike traditional file systems, M-Files eliminates reliance on folders by organizing documents based on metadata—descriptive properties that define a document’s content, purpose, and relationships. Here’s why this approach is transformative:

  • Faster and Smarter Search: With M-Files, users can quickly find documents by searching for metadata fields such as document type, author, project name, or approval status. No more clicking through endless folders: simply enter relevant terms and get instant results.
  • Eliminates Duplication and Redundancy: Since documents are classified based on properties rather than locations, there’s no need for multiple copies stored across different folders. Users access the latest version from a single source of truth.
  • Dynamic Organization: Documents can be viewed in multiple ways without duplication. For example, a contract can appear under “Legal,” “Finance,” and “Project X” based on its metadata while existing as a single file.
  • Automated Workflows and Compliance: M-Files allows businesses to automate document workflows based on metadata. For instance, an invoice marked “Pending Approval” can be automatically routed to the finance team, ensuring compliance and efficiency.
  • Version Control: Every document change is automatically saved with a full version history, preventing accidental overwrites and ensuring teams always work with the most recent file.

What next?

If your business is still struggling with traditional file structures, it’s time to rethink how you manage information. M-Files provides a smarter, faster, and more flexible solution that aligns with modern business needs.

Moving your documents to M-Files may look like a huge amount of work, but fortunately it is not!

Why? Two main reasons:

  • M-Files provides solutions to smoothly move your data (see my other post here).
  • We are here to help you revolutionize your document management.

The article Why Metadata Matters? appeared first on dbi Blog.
