Yann Neuhaus


SQL Server 2025 – ZSTD – A new compression algorithm for backups

Thu, 2025-05-22 18:43
Introduction

SQL Server 2025 introduces a new algorithm for backup compression: ZSTD. As a result, SQL Server 2025 now offers three solutions for backup compression:

  • MS_XPRESS
  • QAT
  • ZSTD

In this blog, we will compare MS_XPRESS and ZSTD.

Environment

To perform these tests, the following virtual machine was used:

  • OS: Windows Server 2022 Datacenter
  • SQL Server: 2025 Standard Developer
  • CPU: 8 cores
  • VM memory: 12 GB
  • (SQL) Max server memory: 4 GB

Additionally, I used the StackOverflow database to run the backup tests (reference: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/).

ZSTD usage

There are several ways to use the new ZSTD compression algorithm. Here are two methods:

  • Add the following option to the backup command: WITH COMPRESSION (ALGORITHM = ZSTD)
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5
  • Change the compression algorithm at the instance level:
EXECUTE sp_configure 'backup compression algorithm', 3; 
RECONFIGURE;
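
To verify which algorithm is configured at the instance level, a quick check against sys.configurations can be done along these lines (a minimal sketch; the value 3 corresponds to ZSTD, as set with sp_configure above):

-- Check the instance-level default for backup compression
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'backup compression algorithm';
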
The initial data

The StackOverflow database used has a size of approximately 165 GB. To perform an initial test using the MS_XPRESS algorithm, the commands below were executed:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION, STATS = 5;

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 290.145 seconds (563.626 MB/sec).
SQL Server Execution Times: CPU time = 11 482 ms,  elapsed time = 290 207 ms.

For the second test, we are using the ZSTD algorithm with the commands below:

SET STATISTICS TIME ON
BACKUP DATABASE StackOverflow TO DISK = 'T:\S1.bak' WITH INIT, FORMAT, COMPRESSION (ALGORITHM = ZSTD), STATS = 5

Here is the result:

BACKUP DATABASE successfully processed 20 932 274 pages in 171.338 seconds (954.449 MB/sec).
CPU time = 10 750 ms,  elapsed time = 171 397 ms.

It should be noted that my storage system cannot sustain its maximum throughput for an extended period. In fact, when transferring large files (e.g., 100 GB), the throughput drops after about 15 seconds (for example, from 1.2 GB/s to 500 MB/s).

According to the initial data, the measured CPU time is roughly the same for MS_XPRESS and ZSTD. However, since ZSTD completes the backup more quickly (based on these tests), the CPU is busy with the backup for a shorter wall-clock period, so the overall CPU cost of the operation ends up lower with the ZSTD algorithm.

Comparison table for elapsed time with percentage gain:

Test Number   Compression Type   Duration In Seconds
1             MS_XPRESS          290
2             ZSTD               171
Performance: approximately 41% faster
Comparison of captured data

During the tests, performance counters were set up to gain a more accurate view of the behavior of the two algorithms during a backup. For this, we used the following counters:

  • Backup throughput/sec (KB)
  • Disk Read KB/sec (in my case, Disk Read KB/sec is equal to the values of the Backup Throughput/sec (KB) counter). In fact, the “Backup throughput/sec (KB)” counter reflects the reading of data pages during the backup.
  • Disk Write KB/sec
  • Processor Time (%)

We observe that the throughput is higher with the ZSTD algorithm. The drop that appears is explained by the fact that ZSTD enabled the backup to be completed more quickly. As a result, the backup operation took less time, and the amount of data collected is lower compared to the other solution. Additionally, it should be noted that the database is hosted on volume (S) while the backups are stored on another volume (T).

We also observe that the write throughput is higher when using the ZSTD algorithm.

For the same observed period, the CPU load is generally the same; however, ZSTD allows a backup to be completed more quickly (in our case). As a result, the overall CPU load of the backup operation is lower.

We also observe that the backup ratio (on this database) is higher with the ZSTD algorithm. This indicates that the size occupied by the compressed backup is smaller with ZSTD.
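The figures below come from the backup history in msdb; a query along the following lines should return them (a sketch only, assuming the compression_algorithm column of msdb.dbo.backupset, available in recent SQL Server versions):

SELECT database_name,
       type AS backup_type,
       backup_size / compressed_backup_size AS backup_ratio,
       compressed_backup_size,
       compression_algorithm
FROM msdb.dbo.backupset
WHERE database_name = 'StackOverflow'
ORDER BY backup_finish_date DESC;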

backup_ratio           database_name   backup_type   compressed_backup_size (bytes)   compression_algorithm
3.410259900691847063   StackOverflow   Full          50 283 256 836                   MS_XPRESS
3.443440933211591093   StackOverflow   Full          49 798 726 852                   ZSTD

Conclusion

Based on the tests performed, we observe that the ZSTD algorithm allows:

  • Faster backup creation
  • Reduced CPU load because backups are produced more quickly
  • Reduced backup size

However, it should be noted that further testing is needed to confirm the points above.

Thank you, Amine Haloui.

L’article SQL Server 2025 – ZSTD – A new compression algorithm for backups est apparu en premier sur dbi Blog.

Using dlt to get data from Db2 to PostgreSQL

Wed, 2025-05-21 06:34

For a recent project at one of our customers we needed to get data from a Db2 database into PostgreSQL. The first solution we thought of was the foreign data wrapper for Db2. This is usually easy to set up and configure, and all you need are the client libraries (for Db2 in this case). But it turned out that db2_fdw is so old that it cannot be used against a recent version of PostgreSQL (we tested 15, 16 and 17). We even fixed some of the code, but it became clear very fast that this is not the solution to go with. There is also db2topg, but this is not as advanced as its brother ora2pg, and we did not even consider trying that. Another tool you can use for such tasks is dlt (data load tool), and it turned out this is surprisingly easy to install, configure and use. You are not limited to Db2 as a source; many more options are available.

As the customer is using Red Hat 8 for the PostgreSQL nodes, we start with a fresh Red Hat 8 installation as well:

postgres@rhel8:/u02/pgdata/17/ [PG1] cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.10 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.10 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://issues.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.10
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.10"

PostgreSQL 17 is already up and running:

postgres@rhel8:/u02/pgdata/17/ [PG1] psql -c "select version()"
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 17.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26), 64-bit
(1 row)

To avoid messing up the Python installation of the operating system, we'll use a Python virtual environment for dlt and install libjpeg-turbo-devel and git, as those are required later on:

postgres@rhel8:/home/postgres/ [PG1] sudo dnf install libjpeg-turbo-devel git
postgres@rhel8:/u02/pgdata/17/ [PG1] sudo dnf install python3-virtualenv -y
postgres@rhel8:/u02/pgdata/17/ [PG1] python3.12 -m venv .local
postgres@rhel8:/home/postgres/ [PG1] .local/bin/pip3 install --upgrade pip
postgres@rhel8:/u02/pgdata/17/ [PG1] . .local/bin/activate

Once we have the Python virtual environment ready and activated, the installation of dlt is just a matter of asking pip to install it for us (for this you need access to the internet, of course):

postgres@rhel8:/u02/pgdata/17/ [PG1] .local/bin/pip3 install -U "dlt[postgres]"
postgres@rhel8:/home/postgres/ [PG1] which dlt
~/.local/bin/dlt

Having that installed we can initialize a new pipeline based on the sql_database template and we want “postgres” as the destination:

postgres@rhel8:/home/postgres/ [PG1] mkdir db2_postgresql && cd $_
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt init sql_database postgres
Creating a new pipeline with the dlt core source sql_database (Source that loads tables form any SQLAlchemy supported database, supports batching requests and incremental loads.)
NOTE: Beginning with dlt 1.0.0, the source sql_database will no longer be copied from the verified sources repo but imported from dlt.sources. You can provide the --eject flag to revert to the old behavior.
Do you want to proceed? [Y/n]: y

Your new pipeline sql_database is ready to be customized!
* Review and change how dlt loads your data in sql_database_pipeline.py
* Add credentials for postgres and other secrets to ./.dlt/secrets.toml
* requirements.txt was created. Install it with:
pip3 install -r requirements.txt
* Read https://dlthub.com/docs/walkthroughs/create-a-pipeline for more information

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] ls -l
total 20
-rw-r--r--. 1 postgres postgres    34 May 21 09:07 requirements.txt
-rw-r--r--. 1 postgres postgres 12834 May 21 09:07 sql_database_pipeline.py

As mentioned in the output above, additional dependencies need to be installed:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] cat requirements.txt 
dlt[postgres,sql-database]>=1.11.0(.local)
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install -r requirements.txt
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] find .
.
./.dlt
./.dlt/config.toml
./.dlt/secrets.toml
./.gitignore
./sql_database_pipeline.py
./requirements.txt

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install ibm-db-sa

Now is the time to configure the credentials and connection parameters for the source and destination databases, and this is done in the “secrets.toml” file:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] cat .dlt/secrets.toml 
[sources.sql_database.credentials]
drivername = "db2+ibm_db"
database = "db1" 
password = "manager" 
username = "db2inst1" 
schema = "omrun"
host = "172.22.11.93"
port = 25010 

[destination.postgres.credentials]
database = "postgres" 
password = "postgres"
username = "postgres"
host = "192.168.122.60"
port = 5432
connect_timeout = 15

When we initialized the pipeline a template called “sql_database_pipeline.py” was created, and this is what we need to adjust now. There are several samples in that template, we’ve used the load_select_tables_from_database skeleton:

# flake8: noqa
import humanize
from typing import Any
import os

import dlt
from dlt.common import pendulum
from dlt.sources.credentials import ConnectionStringCredentials

from dlt.sources.sql_database import sql_database, sql_table, Table

from sqlalchemy.sql.sqltypes import TypeEngine
import sqlalchemy as sa


def load_select_tables_from_database() -> None:
    """Use the sql_database source to reflect an entire database schema and load select tables from it.

    This example sources data from the Db2 database configured in .dlt/secrets.toml.
    """
    # Create a pipeline
    pipeline = dlt.pipeline(pipeline_name="omrun", destination='postgres', dataset_name="omrun")

    # These are the tables we want to load
    source_1 = sql_database(schema="omrun").with_resources("loadcheck_a", "loadcheck_b")

    # Run the pipeline. The replace write disposition fully replaces the data in the destination tables
    info = pipeline.run(source_1, write_disposition="replace")
    print(info)

if __name__ == "__main__":
    # Load selected tables with different settings
    load_select_tables_from_database()

That’s all the code which is required for this simple use case. We’ve specified the database schema (omrun) and the two tables we want to load the data from (“loadcheck_a”, “loadcheck_b”). In addition we want the data to be replaced on the target (there is also merge and append).
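For completeness, here is a rough sketch of what a merge-based variant could look like. This is only an illustration: the primary key column name ("id") is a placeholder and must match a real key column of the source tables:

# Hypothetical sketch of a merge load: hint the primary key on each resource
# so dlt can merge rows in the destination instead of replacing them.
source_1 = sql_database(schema="omrun").with_resources("loadcheck_a", "loadcheck_b")
source_1.loadcheck_a.apply_hints(primary_key="id")  # "id" is a placeholder column name
source_1.loadcheck_b.apply_hints(primary_key="id")
info = pipeline.run(source_1, write_disposition="merge")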

This is how it looks in Db2 for the first table:

Ready to run the pipeline:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] python sql_database_pipeline.py 
Pipeline omrun load step completed in 0.73 seconds
1 load package(s) were loaded to destination postgres and into dataset omrun
The postgres destination used postgresql://postgres:***@192.168.122.60:5432/postgres location to store data
Load package 1747817065.3199458 is LOADED and contains no failed jobs

Everything seems to be OK, let’s check in PostgreSQL. The schema “omrun” was created automatically:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "\dn"
          List of schemas
     Name      |       Owner       
---------------+-------------------
 omrun         | postgres
 omrun_staging | postgres
 public        | pg_database_owner
(3 rows)

Looking at the tables in that schema, both tables are there and contain the data:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "set search_path='omrun'" -c "\d"
SET
                List of relations
 Schema |        Name         | Type  |  Owner   
--------+---------------------+-------+----------
 omrun  | _dlt_loads          | table | postgres
 omrun  | _dlt_pipeline_state | table | postgres
 omrun  | _dlt_version        | table | postgres
 omrun  | loadcheck_a         | table | postgres
 omrun  | loadcheck_b         | table | postgres
(5 rows)

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "select count(*) from omrun.loadcheck_a"
 count  
--------
 102401
(1 row)

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] psql -c "select * from omrun.loadcheck_a limit 5"
 spalte0 | spalte1 |    _dlt_load_id    |    _dlt_id     
---------+---------+--------------------+----------------
       1 | test1   | 1747817065.3199458 | tmQTbuEnpjoJ8Q
       2 | test2   | 1747817065.3199458 | Y5D4aEbyZmaDVw
       3 | test3   | 1747817065.3199458 | RxcyPugGndIRQA
       4 | test4   | 1747817065.3199458 | YHcJLkKML48/8g
       5 | test5   | 1747817065.3199458 | ywNZhazXRAlFnQ
(5 rows)

Two additional columns have been added to the tables. “_dlt_load_id” and “_dlt_id” are not there in Db2, but get added automatically by dlt for internal purposes. The same is true for the “omrun_staging” schema.

Inspecting the pipeline can be done with the “info” command:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt pipeline omrun info 
Found pipeline omrun in /home/postgres/.dlt/pipelines
Synchronized state:
_state_version: 2
_state_engine_version: 4
dataset_name: omrun
schema_names: ['sql_database']
pipeline_name: omrun
default_schema_name: sql_database
destination_type: dlt.destinations.postgres
destination_name: None
_version_hash: e/mg52/UONZ79Z5wrl8THEl8LeuKw+xQlA8FqYvgdaU=

sources:
Add -v option to see sources state. Note that it could be large.

Local state:
first_run: False
initial_cwd: /home/postgres/db2_postgresql
_last_extracted_at: 2025-05-21 07:52:50.530143+00:00
_last_extracted_hash: e/mg52/UONZ79Z5wrl8THEl8LeuKw+xQlA8FqYvgdaU=

Resources in schema: sql_database
loadcheck_a with 1 table(s) and 0 resource state slot(s)
loadcheck_b with 1 table(s) and 0 resource state slot(s)

Working dir content:
Has 6 completed load packages with following load ids:
1747813450.4990926
1747813500.9859562
1747813559.5663254
1747813855.3201842
1747813968.0540593
1747817065.3199458

Pipeline has last run trace. Use 'dlt pipeline omrun trace' to inspect 

If you install the “streamlit” package, you can even bring up a website and inspect your data using the browser:

postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] pip install streamlit
postgres@rhel8:/home/postgres/db2_postgresql/ [PG1] dlt pipeline omrun show
Found pipeline omrun in /home/postgres/.dlt/pipelines

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.


  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.122.60:8501
  External URL: http://146.4.101.46:8501

Really nice.

This was a very simple example; there is much more you can do with dlt. Check the documentation for further details.

L’article Using dlt to get data from Db2 to PostgreSQL est apparu en premier sur dbi Blog.

SQL Server 2025 – Standard Developer edition

Tue, 2025-05-20 07:02
Introduction

The arrival of SQL Server 2025 introduces the Standard Developer edition, allowing companies to deploy across all development, quality and testing environments using an edition equivalent to the Standard Edition without having to pay the associated licensing fees.

Here are the different editions available in SQL Server 2025:

  • Express
  • Web
  • Standard
  • Enterprise
  • Standard Developer
  • Enterprise Developer
What problem does this solve?

Some companies deploy the Developer edition of SQL Server in environments that are not production in order to reduce licensing costs. The Developer edition, however, is functionally equivalent to the Enterprise Edition.

This can result in the following scenario:

  • Test environment: Developer edition
  • Development environment: Developer edition
  • Production environment: Standard edition

The Developer and Standard editions differ significantly, and some features available in the Developer edition are not available in the Standard edition. For example, index rebuilds can be done online with the Enterprise or Developer editions, but this is not possible with the Standard edition.

As a result, behavior and performance can vary greatly between environments when different editions are used.

Here is an example where the editions are not aligned:

With SQL Server 2025, it’s now possible to use the same edition across all environments without having to license instances used for test and development environments:

How is the installation performed?

Using the graphical interface, edition selection is done simply:

However, if the installation is performed using an .ini file, you must use the PID parameter with the following value:

PID="33333-00000-00000-00000-00000"

This allows us to install the Standard Developer edition of SQL Server 2025:
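
For illustration, a minimal unattended configuration file could look like the sketch below. Only the PID line is taken from above; the other parameters are generic placeholders you would adapt to your setup:

; ConfigurationFile.ini - minimal sketch for an unattended installation
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE
INSTANCENAME="MSSQLSERVER"
IACCEPTSQLSERVERLICENSETERMS="True"
QUIETSIMPLE="True"
; The PID below selects the Standard Developer edition (value from the text above)
PID="33333-00000-00000-00000-00000"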

Thank you, Amine Haloui.

L’article SQL Server 2025 – Standard Developer edition est apparu en premier sur dbi Blog.

SQLDay 2025 – Wrocław – Sessions

Mon, 2025-05-19 13:59

After a packed workshop day, the SQLDay conference officially kicked off on Tuesday with a series of sessions covering cloud, DevOps, Microsoft Fabric, AI, and more. Here is a short overview of the sessions I attended on the first day of the main conference.

Morning Kick-Off: Sponsors and Opening

The day started with a short introduction and a presentation of the sponsors. A good opportunity to acknowledge the partners who made this event possible.

Session 1: Composable AI and Its Impact on Enterprise Architecture

This session (by Felix Mutzl) provided a strategic view of how AI is becoming a core part of enterprise architecture.

Session 2: Migrate Your On-Premises SQL Server Databases to Microsoft Azure

A session (by Edwin M Sarmiento) that addressed one of the most common challenges for many DBAs and IT departments: how to migrate your SQL Server workloads to Azure. The speaker shared a well-structured approach, highlighting the key elements to consider before launching a migration project:

  • Team involvement: Ensure all stakeholders are aligned.
  • Planning: Migration isn’t just about moving data, dependencies must be mapped.
  • Cost: Evaluate Azure pricing models and estimate consumption.
  • Testing: Validate each stage in a non-production environment.
  • Monitoring: Post-migration monitoring is essential for stability.

Session 3: Fabric Monitoring Made Simple: Built-In Tools and Custom Solutions

This session was presented by Just Blindbaek, who talked about how Microsoft Fabric is gaining traction quickly and, with it, the need for robust monitoring. The session explored native tools like Monitoring Hub, Admin Monitoring workspace, and Workspace Monitoring. In addition, the speaker introduced FUAM (Fabric Unified Admin Monitoring), an open-source solution supported by Microsoft that complements the built-in options.

Session 4: Database DevOps…CJ/CD: Continuous Journey or Continuous Disaster?

A hands-on session (by Tonie Huizer) about introducing DevOps practices in a legacy team that originally used SVN and had no automation. The speaker shared lessons learned from introducing:

  • Sprint-based development cycles
  • Git branching strategies
  • Build and release pipelines
  • Manual vs Pull Request releases
  • Versioned databases and IDPs

It was a realistic look at the challenges and practical steps involved when modernizing a database development process.

Session 5: (Developer) Productivity, Data Intelligence, and Building an AI Application

This session (from Felix Mutzl) shifted the focus from general AI to productivity-enhancing solutions. Built on Databricks, the use case demonstrated how to combine AI models with structured data to deliver real-time insights to knowledge workers. The practical Databricks examples were especially helpful to visualize the architecture behind these kinds of applications.

Session 6: Azure SQL Managed Instance Demo Party

The final session of the day was given by Dani Ljepava and Sasa Popovic and was more interactive and focused on showcasing the latest Azure SQL Managed Instance features. Demos covered:

  • Performance and scaling improvements
  • Compatibility for hybrid scenarios
  • Built-in support for high availability and disaster recovery

The session served as a great update on where Azure SQL MI is heading and what tools are now available for operational DBAs and cloud architects.

Thank you, Amine Haloui.

L’article SQLDay 2025 – Wrocław – Sessions est apparu en premier sur dbi Blog.

SQLDay 2025 – Wrocław – Workshops

Mon, 2025-05-19 13:58

I had the chance to attend SQLDay 2025 in Wrocław, one of the largest Microsoft Data Platform conferences in Central Europe. The event gathers a wide range of professionals, from database administrators to data engineers and Power BI developers. The first day was fully dedicated to pre-conference workshops. The general sessions are scheduled for the following two days.

In this first post, I’ll focus on Monday’s workshops.

Day 1 – Workshop Sessions

The workshop day at SQLDay is always a strong start. It gives attendees the opportunity to focus on a specific topic for a full day. This year, several tracks were available in parallel, covering various aspects of the Microsoft data stack: from Power BI and SQL Server to Azure and Microsoft Fabric.

Here are the sessions that were available:

Advanced DAX

This session was clearly targeted at experienced Power BI users. Alberto Ferrari delivered an in-depth look into evaluation context, expanded tables, and advanced usage of CALCULATE. One focus area was the correct use of ALLEXCEPT and how it interacts with complex relationships.

Execution Plans in Depth

For SQL Server professionals interested in performance tuning, this workshop provided a detailed walkthrough of execution plans. Hugo Kornelis covered a large number of operators, explained how they work internally, and showed how to analyze problematic queries. The content was dense but well-structured.

Becoming an Azure SQL DBA

This workshop was led by members of the Azure SQL product team. It focused on the evolution of the DBA role in cloud environments. The agenda included topics such as high availability in Azure SQL, backup and restore, cost optimization, and integration with Microsoft Fabric. It was designed to understand the shared responsibility model and how traditional DBA tasks are shifting in cloud scenarios.

Enterprise Databots

This workshop explored how to build intelligent DataBots using Azure and Databricks. The session combined theoretical content with practical labs. The goal was to implement chatbots capable of interacting with SQL data and leveraging AI models. Participants had the opportunity to create bots from scratch.

Analytics Engineering with dbt

This session was focused on dbt (data build tool) and its role in ELT pipelines. It was well-suited for data analysts and engineers looking to standardize and scale their workflows.

Build a Real-time Intelligence Solution in One Day

This workshop showed how to implement real-time analytics solutions using Microsoft Fabric. It covered Real-Time Hub, Eventstream, Data Activator, and Copilot.

From Power BI Developer to Fabric Engineer

This workshop addressed Power BI developers looking to go beyond the limitations of Power Query and Premium refresh schedules. The session focused on transforming reports into scalable Fabric-based solutions using Lakehouse, Notebooks, Dataflows, and semantic models. A good starting point for anyone looking to shift from report building to full data engineering within the Microsoft ecosystem.

Thank you, Amine Haloui.

L’article SQLDay 2025 – Wrocław – Workshops est apparu en premier sur dbi Blog.

SQL Server 2025 Public Preview and SSMS 21 now available

Mon, 2025-05-19 12:15

This is a short blog post to share that the SQL Server 2025 public preview is now available for download. At the same time, SSMS 21 has also been released and is now generally available.

The LinkedIn post by Bob Ward announcing the news can be found here: Announcing SQL Server 2025 Public Preview

In his post you’ll find a summary of the key changes coming with this new release.

Also note that the recommended version of SSMS for SQL Server 2025 is SSMS 21, which was just announced.

Here is the blog post by Erin Stellato: SQL Server Management Studio (SSMS) 21 is now generally available (GA)

There are many changes between SSMS 20 and 21, notably the fact that it’s now based on Visual Studio 2022, includes built-in Copilot integration, and finally introduces a Dark Theme.

I strongly recommend installing it, starting to use it, and providing feedback if you encounter any bugs or areas for improvement. You can do so here.

Now that SQL Server 2025 is available for testing, other blog posts written by my colleagues or myself will likely follow to showcase some of the new features.
I’m particularly thinking of the following:

L’article SQL Server 2025 Public Preview and SSMS 21 now available est apparu en premier sur dbi Blog.

APEX Connect 2025 (Day 3)

Thu, 2025-05-15 15:49

After the “Welcome 3rd Day APEX Connect, DOAG e.V.”, and the very entertaining Keynote “Trouble in the Old Republic” by Samuel Nitsche, I decided to attend presentations on following topics:
– 23ai – Building an AI Vector Search API using APEX, ORDS, REST and PL/SQL
– APEX in Style – Ein Überblick über die verschiedenen UI-Customizingmöglichkeiten
– SQL und PL/SQL: Tipps & Tricks für APEX Entwickler
– Oracle APEX & Entra ID: Effiziente Benutzerverwaltung mit Workflows und SSO
Besides the presentations, I also had the privilege of having 1:1 sessions with Carsten Czarski, Florian Grasshoff and Mark Swetz from the APEX development team.

23ai – Building an AI Vector Search API using APEX, ORDS, REST and PL/SQL

Vectors are lists of numbers, and their dimension is given by the amount of numbers in the vector definition. The creation of a vector from any other data is called vectorizing or embedding.
Oracle 23ai has a new VECTOR datatype and an associated PL/SQL package, DBMS_VECTOR. Pre-trained models can be imported based on the ONNX standard.
APEX can be used to call external AI models as web services. Any needed transformation can be done thanks to the DBMS_VECTOR package.
One of the main advantages of vector search is that it is language independent.
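
As a minimal illustration (not taken from the presentation), a vector column and a similarity search in Oracle 23ai could look like this, assuming the embeddings have already been generated elsewhere:

-- Minimal sketch: store embeddings in a VECTOR column and search by similarity
CREATE TABLE documents (
  id        NUMBER PRIMARY KEY,
  content   VARCHAR2(4000),
  embedding VECTOR(3, FLOAT32)   -- tiny dimension, just for the example
);

INSERT INTO documents VALUES (1, 'first document',  TO_VECTOR('[0.1, 0.2, 0.3]'));
INSERT INTO documents VALUES (2, 'second document', TO_VECTOR('[0.9, 0.8, 0.7]'));

-- Return the document closest to a given query vector
SELECT id, content
FROM documents
ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR('[0.1, 0.2, 0.25]'), COSINE)
FETCH FIRST 1 ROWS ONLY;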

APEX in Style – Ein Überblick über die verschiedenen UI-Customizingmöglichkeiten

New template components allow setting attributes in templates for declarative usage in APEX.
This can be combined with dedicated CSS to be used in the template component. Those components can be used in any kind of page (e.g. Interactive Report, Card Reports, …).
When changing templates, it is recommended to make the changes on a copy in order to be able to roll back to the original one if needed.
Besides templates, the theme can be modified globally with theme styles over the theme roller. Theme changes can even be allowed for end users in APEX so they can personalize the look & feel of the application.

SQL und PL/SQL: Tipps & Tricks für APEX Entwickler

SQL queries are at the heart of APEX. Looking into the debugger, the details of the SQL produced by APEX can be seen. Any filtering or other change adds to the original query and generates a new query, which can be seen as an onion SQL with the following levels adding up:
– Component SQL (written by the developer)
– LOVs
– Computed columns
– Filter
– Sorts
– Aggregation
– Pagination
This means the query run by APEX can be very different from the one entered by the developer.
As a consequence, sorting with ORDER BY should never be part of the component SQL. Use the declarative column sorting parameter instead.
APEX allows the use of pseudo-hints in the declarative optimizer hints field in order to manage the pagination type.
PL/SQL tips:
– functions in SELECT are run on every row selected (expensive)
– functions in WHERE are run for all rows of the selected table (even more expensive)
– use bind variables so that substitution is happening in the database
– strictly define constants
– name loops
– map parameters
– always raise in “when others” clause of exception handling
– use conditional compilation

Oracle APEX & Entra ID: Effiziente Benutzerverwaltung mit Workflows und SSO

User Management requires an IAM system for easier and centralized use. One combination for APEX is with Microsoft ENTRA.
Possible usage:
– on / offboarding
– details and contact management
APEX manages access to ENTRA through web services, which allows the previous use cases to be covered easily. Web services are part of the declarative setup to address the Microsoft Graph interface and manage authorizations, mapped to applications and delegated over groups with ENTRA.
Access is secured with oAuth authentication.

NEWS!

One last piece of news: the support of the last 3 APEX versions (23.2, 24.1 and 24.2) might be extended to 2 years instead of 18 months.

You can find a summary of Day 2 here.

That was the final day of the APEX Connect 2025 conference. Great organization, great presentations and great people.
Hope to see all again on APEX Connect 2026.
How about you? Are you planning to join?

L’article APEX Connect 2025 (Day 3) est apparu en premier sur dbi Blog.

APEX Connect 2025 (Day 2)

Wed, 2025-05-14 17:50

This year I can unfortunately only attend APEX Connect for 2 days, starting on Day 2 of the conference.
The conference is hosted in the famous Europa-Park.
The day started with the traditional 5K run (which turned out to be 5.8 km in fact) to wake up the body with fresh air and nice sunshine.
After the “Welcome 2nd Day APEX Connect & Opening of the DB Conference, DOAG e.V.”, and the Keynote “Mehr über Oracle’s Release 23ai – immer noch ohne Folien & Marketing, dafür aber zu 100% Demos”, I decided to attend presentations on following topics:
– Fortgeschrittene API-Entwicklung mit Oracle REST Data Services (ORDS)
– Search Images by Images in your APEX application with AI Vector Search
– Sponsored Session: Revolutionizing Oracle APEX: United Codes’ Innovative Solutions
– Single Sign-On: One Login for All Your Needs
– Dev Talk: Die Trivadis PL/SQL & SQL Coding Guidelines sind tot – was nun?
And the day ended with the evening event: "Dinner at the French themed area" of the park, followed by a party with a DJ.

Fortgeschrittene API-Entwicklung mit Oracle REST Data Services (ORDS)

REST APIs deserve careful design, and looking at them from a consumer point of view can help a lot in this exercise. REST APIs have a grammar made of nouns (preferably plural) to show what a resource is about, verbs given by the HTTP methods (GET, POST, PUT, DELETE) and relations to sub-resources. It is very important to try them, test them and document them properly with tools like Swagger/OpenAPI.
ORDS already provides a lot of help for developing them, but sometimes custom PL/SQL is required, which should be as light as possible due to the numerous calls it can get.
Authentication is another critical point which should be managed with JWT (JSON Web Token) on Pre-hook or oAuth2. This integrates well with APEX.
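
As a small hypothetical sketch (not taken from the talk), a GET handler for a plural resource could be defined with the ORDS PL/SQL API roughly like this, assuming the schema has already been REST-enabled and contains an EMPLOYEES table:

BEGIN
  -- Define a module, a template for the plural resource and a GET handler
  ORDS.define_module(
    p_module_name => 'hr.v1',
    p_base_path   => '/hr/v1/');
  ORDS.define_template(
    p_module_name => 'hr.v1',
    p_pattern     => 'employees/');
  ORDS.define_handler(
    p_module_name => 'hr.v1',
    p_pattern     => 'employees/',
    p_method      => 'GET',
    p_source_type => ORDS.source_type_collection_feed,
    p_source      => 'SELECT * FROM employees');
  COMMIT;
END;
/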

Search Images by Images in your APEX application with AI Vector Search

Oracle 23ai has introduced vector search capabilities based on data embedded with LLMs (Large Language Models). Use cases were presented with LLMs running locally for privacy and data control, allowing images to be searched from a text description or from similar images.
This can be run within a Docker image containing the following packages:
– ORDS
– Oracle 23ai DB
– Open WebUI
– Ollama (LLM)
– Apache tika (document extraction)
The embedding is managed with some Python code, and APEX serves as the interface to the prompt, which can easily be defined declaratively on a page as a Dynamic Action.
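
To illustrate the embedding part (a hypothetical sketch, not the presenter's code), the Python side could call a locally running Ollama instance roughly like this; the endpoint and the model name are assumptions:

import requests

def embed_text(text: str) -> list[float]:
    """Ask a locally running Ollama instance for an embedding vector."""
    # Endpoint and model name are assumptions for this sketch
    response = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["embedding"]

if __name__ == "__main__":
    vector = embed_text("a red car in front of a mountain")
    print(len(vector), vector[:5])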

Sponsored Session: Revolutionizing Oracle APEX: United Codes’ Innovative Solutions

“Ideas deserve to be realized” and “visioneering solutions to fit any requirements” are the 2 mantras of United Codes. The company is mostly known for AOP (APEX Office Print), which is natively supported by APEX. Beyond the success of that tool (more than 15'000 users) they provide much more, like:
– APEX Office Edit
– APEX Message Service
– All sorts of APEX Plug-ins (Online PDF editor, Application Search, Drag & Drop, Tooltip, Splitter, as well as all FOEX free plug-ins they took over to support)
– APEX Project Eye
– dbLinter (still in development) for PL/SQL code check
And much more. Feel free to look at the provided links to learn more about those very useful tools.

Single Sign-On: One Login for All Your Needs

Single Sign On is becoming more and more a business request. It provides a single point of authentication.
APEX supports different protocols related to this topic, like OAuth2 and OpenID since version 18.1 and SAML since version 21.2. Thanks to the so-called "Social Sign-in" and web credentials, all kinds of identity providers like Okta, Keycloak, Oracle IAM and Azure ID are supported by APEX. This replaces the standard login with the one of the identity provider, allowing use of their additional features like 2FA (2-Factor Authentication), which drastically reduces hack attempts.
And the future tends to go toward passwordless authentication with tools like fingerprint or face recognition.

Dev Talk: Die Trivadis PL/SQL & SQL Coding Guidelines sind tot – was nun?

For years, the Trivadis PL/SQL Guidelines have been seen as a reference for code development. But the version issued in March 2024 is the very last one, as the guidelines will no longer be maintained.
As the need for quality code is more important than ever, there are alternatives to get the code checked against rules:
– SQLcl code scan
– PMD
– SonarQube
– ZPA
– SQLfluff
and a newcomer still in development: dbLinter, which promises more flexibility, with more rules based on the Trivadis guidelines and integration into VSCode.
Besides that, SQL-based tests are also a requirement, which can be fulfilled by tools like:
– utPLSQL
– ora* CODECOP (which has an interface in APEX to manage the rules)
– QUASTO
and also dbLinter. Sounds like a very promising tool.

You can find a summary of Day 3 here.

L’article APEX Connect 2025 (Day 2) est apparu en premier sur dbi Blog.

2025.pgconf.de recap

Tue, 2025-05-13 04:15

( note: I am writing this as member of the pgconf.de organization team, and not as a dbi services employee )

After almost half a year of planning and preparation PostgreSQL Conference Germany 2025 finally took place last week in Berlin. It was in the same hotel as the 2022 edition of PostgreSQL Conference Europe and this was a good choice. With 347 attendees this was the largest ever PostgreSQL Conference Germany and the expectation is that we’ll reach around 350 for next year. For the very first time there were two days packed with talks because many people requested an additional day in the feedback of the last conferences. A lot of people already wrote about the conference from either a sponsor or attendee perspective, so I thought it would be a good idea to write something from the organization perspective to give you an idea of what is going on behind the scenes.

Preparing such an event is a lot of work. There are weekly meetings starting around half a year before the date of the conference once the date and location are set. Setting a date and location starts even before and this is quite some work as well. First we need to find a venue which has enough capacity for the amount of people we expect. The larger we grow, the fewer venues are available. A second requirement is that we want to move around in Germany and not stick to one place. This is not only to make it more interesting but also for being fair to the people. Some will have a larger distance to the conference this year, but a shorter one for next year. Once we have the date and location the contract with the venue needs to be signed. This usually comes with several iterations and adjustments and finally needs to be signed off by the venue and PostgreSQL Europe. Before all of that can happen a budget needs to be created: How many sponsors do we want to have, and at which sponsor level? How many attendees do we expect and what is the price per attendee in the venue? Do we want to have a social event? And much more which affects the final budget and costs, e.g. speaker presents, giveaways…

For the weekly meetings this is mostly about giving tasks to the team and tracking the status. Someone needs to check what speaker presents we want to have, and usually they should somehow relate to the city the conference will be in. Someone needs to organize the helpers and speakers dinner. Someone else needs to take care of the website. Someone needs to take care of our social media accounts. At what date do we want to open the call for papers and the call for sponsors? A lot of tasks to do and to track.

Once we reach the day before the conference there is still plenty of stuff to do. This includes final discussions with the hotel, tracking all the sponsor shipments:

… sorting badges:

… inspecting the rooms for the talks:

The evening of that day usually is the evening for the speakers and organizers dinner (you can imagine, this will be a long day). The next day in the morning before attendees will show up the registration desk needs to be prepared:

This is where volunteers who are not part of the organization team come into the game. We always ask for volunteers to help with the registration desk, for room hosting and the final cleanup after the conference. This year we had so many requests for volunteering that we had to stop accepting more (thanks to all of them).

… and then it finally starts with the registration and the opening session:

This is the point where most of the work is done, and usually it runs smoothly from there on. But still we need to be around for questions, for the sponsors, for the attendees.

Of course, food is important, and the venue did a great job:

After a first long day there was the social event. This is meant for networking and discussions, having fun:

The conference is not only about talks, networking and sharing, it is also about meeting the community, old and new friends, doing something beside the official program:

That's it for today. I hope you got some impression of what is going on in the background to make such an event a success. See you next year:

L’article 2025.pgconf.de recap est apparu en premier sur dbi Blog.

M-Files IMPACT Global Conference 2025 – Day 3

Mon, 2025-05-12 08:13
M-Files IMPACT Global Conference 2025

The last day of the M-Files IMPACT Global Conference 2025 was dedicated to M-Files partners only. Throughout the day, M-Files shared partner-specific information and hot news. Below is a summary of the main topics.

Keynote

In the keynote session they emphasised their continuing support of their partner network. In addition, they shared the financial results of the partners for 2024 and Q1 2025. Without mentioning any numbers here, the results put a smile on the partners' faces.

Once again, they mentioned the importance of such events for networking and learning from each other.

After the keynote, I had the opportunity to attend the technical breakout session and discuss M-Files add-ons with other partners in the exhibition hall.

Best Practices with Impact Demos

The session on best practices for M-Files demos was very interesting. It emphasised delivering as many impactful demos as possible and breaking them down into vignettes. A key to a good demo is to understand the customer's needs and what would be of value to them.
This session was very helpful to me and will support me in upcoming customer demos.

The following topics should be taken into consideration to deliver impactful demos.

  • Be Entertaining
  • Be Engaging
  • Provide Education
  • Present Value

The demo session was followed by two sessions on M-Files implementation support, which partners can use for highly complex environments, and on implementing M-Files Hubshare to securely share M-Files content internally and externally. It is a great approach for M-Files and the M-Files partner to work together to deliver the best value to the customer.

New M-Files Admin

The first half of this session was a recap of what had been shared with M-Files customers during the first two days of the conference. In the second half we had the pleasure of a live demo of the current admin tool. Very interesting was the implementation of AINO; one of the ideas with AINO is to give advice to the administrator in case of a problem or error in the vault. I look forward to seeing this in action. After the demo, they announced that they would be forming a group of partners to test and develop M-Files Admin, and I expressed my willingness to be part of this initiative.

Partner Cloud Update for Administrators

This session was all about the new Partner Cloud and the M-Files experts gave a first-hand update for administrators of the Partner Cloud features. The Partner Cloud allows the partner to have better control over the subscription and the cloud vault. For example, scripts and applications no longer need to be tested with M-Files. This is due to the separation of the Standard Cloud and the Partner Cloud. Of course, this comes with the responsibility of ensuring the functionality of the custom scripts and application.

The final technical session of the day focused on developers and announced the implementation of script editors in the M-Files Admin Tool.

The conference ended with a closing session in which the M-Files team thanked all the partners for attending. They also emphasised the importance of the partner network and the need to work together as closely as possible in the future.

Afterwards I had to catch my plane back to Switzerland. All in all, it was a great and interesting week full of learning and news about M-Files.

If you have any questions or needs, please contact me or dbi services. We are ready to help and implement your M-Files projects.

L’article M-Files IMPACT Global Conference 2025 – Day 3 est apparu en premier sur dbi Blog.

M-Files IMPACT Global Conference 2025 – Day 2

Thu, 2025-05-08 00:32

The second day of the M-Files IMPACT Global Conference 2025 commenced at 9 am with a keynote session. On this occasion, the keynote was entitled 'Unlocking the Future of Business. The Future of AI: Unlocking New Opportunities and Navigating the Challenges Ahead'.
The importance of AINO was emphasised, especially the automatic filling of metadata.

Keynote

Alan Pelz-Sharpe from Deep Analytics provided insights into the future of AI, exploring both the opportunities and challenges that lie ahead. He provided a practical, actionable roadmap to help you navigate and thrive in the AI-powered era, allowing you to shape the future. He spoke about supporting and transforming Agentic AI business tasks and processes through AI & Agentic Process Automation.

Following a brief intermission, the second morning resumed with further sessions concentrating on the future of M-Files. The sessions were entitled 'Shaping the Future: Strategic Perspectives on Innovation and Product Vision' and 'Achieving Enterprise-Grade Scalability with M-Files'.
The main message of the sessions was that AI and cloud play a big role in the M-Files strategy, and that the strategy is based on the themes emphasised in yesterday’s session.

  • Reduce the Friction of Putting Content “In Context”
  • Offer a Horizontal Platform and Vertical Applications
  • Automate Customer Workflows for Critical Business Use Cases
  • Lead in Applied AI and Enable Customers to Generate Value from AI

The morning’s proceedings concluded with a panel discussion focused on customers in attendance at the M-Files conference.
Several companies from a variety of industries and sizes shared their experiences with M-Files and their implementation projects.

As on the first day, the agenda was divided into technical, development and business sessions. On this day, I had the honour of attending interesting sessions on “New M-Files Desktop”, “M-Files Admin Goes Web” and “Enabling M-Files capabilities with Microsoft 365”.

New M-Files Desktop

They are working to achieve WCAG 2.12 Level AA and Section 508 compliance in the new client. This initiative underlines the commitment to ensuring that digital content is accessible to all users, including those with disabilities. In addition, below is a list of upcoming features.

  • In-app guidance
  • Sharing center
  • Global search
  • AI search
  • AI agents

To switch to the new M-Files Desktop Client, make sure that you are not using the Custom UI and that the gRPC protocol is enabled.

M-Files Admin Goes Web

First, they showed us a bit of the history and development of the M-Files Admin Client over time, from a standard Microsoft Windows application with limited configuration options to a tool that allows complete configuration of M-Files. For example, the latest version has 1600 configuration options in the Advanced Configuration section alone. The screenshot below shows a preview of the new M-Files Admin Tool. It is important to note the integration of AINO, which will support the administrator in the future.

Enabling M-Files capabilities with Microsoft 365

Regarding the integration of M-Files and Microsoft 365, it is important to note that this is moving forward with a high level of focus from both sides. The fact that both companies are presenting together shows the close partnership between them.

Topics I would like to highlight include

  • Enabling desktop co-authoring with M-Files
  • M-Files for Outlook Pro
  • M-Files for Microsoft Teams
  • M-Files connector for Microsoft Copilot

See the screenshot below for the requirements to enable these features.

The second day was made even more memorable by an enjoyable dinner, in addition to the learning and insight into M-Files. I look forward to sharing this news with my colleagues at dbi services.

L’article M-Files IMPACT Global Conference 2025 – Day 2 est apparu en premier sur dbi Blog.

M-Files IMPACT Global Conference 2025 – Day 1

Tue, 2025-05-06 23:18

This year, the annual M-Files Customer and Partner Conference will be held in the capital of Greece. All M-Filers worldwide are expected to attend the event in Athens, where they will have the opportunity to network with their peers and gain insight into the latest developments concerning M-Files products.

I am delighted to have the opportunity to attend the event, and I would like to thank my company, dbi services, for sending me.

After travelling to Greece, I went to the hotel and M-Files event registration desk. Then I joined the welcome reception on the pool deck of the hotel with M-Files customers, partners and the M-Files Team, including the C-Management. We had some interesting chats about M-Files delivery and customer demos.

Keynote

The event started with the keynote by the new M-Files CEO, Jay Bhatt, and Antti Nivala, the Founder and Chief Innovation Officer.

M-Files Strategic Themes

  • Reduce the Friction of Putting Content “In Context”
  • Offer a Horizontal Platform and Vertical Applications
  • Automate Customer Workflows for Critical Business Use Cases
  • Lead in Applied AI and Enable Customers to Generate Value from AI

M-Files product vision and roadmap for 2025

The next session in the row, held by Tony Grout and Mika Turunen from M-Files, was a review of the M-Files product vision and roadmap for 2025, including announcements.
Firstly, the integration of document co-authoring in the M-Files client and the new M-Files user interface was mentioned. Furthermore, they highlighted the availability of AINO Assist, which facilitates the automatic filling of document metadata.


The following subjects are key to the roadmap focus areas:

  • AI Innovation
  • User Experience
  • End-to-End Knowledge Work Enablers
  • Rapid Time to Value at Scale
M-Files & Microsoft: Better Together

After the break and a first visit to the exhibition hall, the morning continued with a session called "M-Files & Microsoft: Better Together".
Main speakers were Ian Story and Ryan Barry, M-Files. They provided a comprehensive insight into the collaborative efforts between M-Files and Microsoft, emphasising the commitment to a long-term partnership. Furthermore, they have committed to the further development of integration in the area of AI.

During the final two morning sessions, the emphasis was on the various ways in which M-Files can support a range of use cases, and how customers can leverage these to enhance business outcomes. Following the morning's closing session, the customer TTX shared the company's M-Files journey with us.

In the afternoon, a series of technical, development and business-related sessions were held. I had the opportunity to attend two informative sessions on technical and development topics.

Wrap up of M-Files IMPACT Global Conference 2025 – day 1


To conclude the first day of the M-Files IMPACT Global Conference 2025, the conference group enjoyed an evening drink at an establishment with a splendid view.

L’article M-Files IMPACT Global Conference 2025 – Day 1 est apparu en premier sur dbi Blog.

Oracle: A possible method to address a slow query if there is time pressure

Tue, 2025-04-29 10:22

Sometimes there is no time for a long analysis of a slow query and a fix (workaround) has to be provided asap. In such cases it often helps to check if a query did run faster with a previous OPTIMIZER_FEATURES_ENABLE-setting. If that is the case, then you also want to find out which optimizer bug fix caused a suboptimal plan to be generated. The following blog shows a way to find a quick workaround for a slowly performing query. However, please consider this to be a workaround and that you still should do an analysis to fix the “real” root cause.

So, let’s consider that you have a performance issue with a query and you want to test if an optimizer bug fix caused a suboptimal plan. How can you find the optimizer bug fix which caused that plan asap?

REMARK: I do assume that you have a script /tmp/sql.sql with the sql-query to reproduce the slowly running SQL.

Let’s get started to find the optimizer bug-fix:

Problem: Query is running slow in 19c. The query has been provided in the script /tmp/sql.sql

Currently we are on 19.26 and have OPTIMIZER_FEATURES_ENABLE set to the default, which is 19.1.0 in 19c:

SQL> show spparameter optimizer_features_enable

SID	 NAME			       TYPE	   VALUE
-------- ----------------------------- ----------- ----------------------------
*	 optimizer_features_enable     string

–> not explicitly set in the spfile, so it’s set to default:

SQL> show parameter optimizer_features_enable

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable            string      19.1.0

SQL> show parameter fix_control
SQL> 

–> no _fix_control-parameter set.

According to Active Session History (ASH), the query ran for days in the past (!!!):

SQL>  select sql_exec_start, sql_plan_hash_value, (count(*)*10)/3600 active_hours, max(sample_time)-min(sample_time) duration
  2  from dba_hist_active_sess_history
  3  where sql_id='8cdydzbsh10dd'
  4  group by sql_exec_start, sql_plan_hash_value
  5  order by 1;
 
SQL_EXEC_START       SQL_PLAN_HASH_VALUE ACTIVE_HOURS DURATION
-------------------- ------------------- ------------ ----------------------------
05-APR-2025 07:36:03           126619261        80.12 +000000003 08:14:27.685
07-APR-2025 07:36:07           126619261        32.33 +000000001 08:22:43.551
08-APR-2025 16:06:00           126619261        47.68 +000000001 23:45:26.074
10-APR-2025 15:57:22           126619261        17.76 +000000000 17:47:30.125
19-APR-2025 05:32:25           126619261       178.23 +000000007 10:31:45.271
21-APR-2025 05:32:46           126619261       134.60 +000000005 14:48:49.602
23-APR-2025 05:33:08           126619261        86.01 +000000003 14:09:08.540
 
7 rows selected.

REMARK: Please consider that using ASH requires the diagnostics pack to be licensed

1. Check if there is an Optimizer-Version-Setting, where the issue did not happen

We can test with different OPTIMIZER_FEATURES_ENABLE (OFE) settings to find out with which release this query may initially have become slow. But what different OPTIMIZER_FEATURES_ENABLE settings do we have? There's a simple method to find that out: you just provide a non-existing OFE and the error message tells you which OFE settings are available:

SQL> alter session set optimizer_features_enable=blabla;
ERROR:
ORA-00096: invalid value BLABLA for parameter optimizer_features_enable, must
be from among 19.1.0.1, 19.1.0, 18.1.0, 12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4,
11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7, 11.1.0.6, 10.2.0.5, 10.2.0.4, 10.2.0.3,
10.2.0.2, 10.2.0.1, 10.1.0.5, 10.1.0.4, 10.1.0.3, 10.1.0, 9.2.0.8, 9.2.0,
9.0.1, 9.0.0, 8.1.7, 8.1.6, 8.1.5, 8.1.4, 8.1.3, 8.1.0, 8.0.7, 8.0.6, 8.0.5,
8.0.4, 8.0.3, 8.0.0

So now we can go backwards with OFE-settings to find a version where the query may have run fast:

alter session set OPTIMIZER_FEATURES_ENABLE='18.1.0';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='12.2.0.1';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='12.1.0.2';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='12.1.0.1';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='11.2.0.4';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='11.2.0.3';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='11.2.0.2';
@/tmp/sql.sql

--> Ctrl-C after some time.

alter session set OPTIMIZER_FEATURES_ENABLE='11.2.0.1';
@/tmp/sql.sql

Elapsed: 00:00:00.36

--> With 11.2.0.1 the query finished in 0.36 secs.

I.e. a change between 11.2.0.1 and 11.2.0.2 caused the optimizer to produce a suboptimal plan.

2. Test which bug fix caused the suboptimal plan

We can generate a script, which tests all _fix_control-settings which were introduced with OFE=11.2.0.2 to see what fix caused the query to run slow:

set lines 200 pages 999 trimspool on
spool /tmp/sql_fc.sql

select 'alter session set "_fix_control"='''||to_char(bugno)||':'||to_char(value)||''';'||chr(10)||
       'PROMPT '||to_char(bugno)||chr(10)||
       '@/tmp/sql.sql' 
from v$system_fix_control 
where OPTIMIZER_FEATURE_ENABLE='11.2.0.2';

...

spool off

REMARK: The CHR(10) inserts the necessary linefeeds in the script.

SQL> !vi /tmp/sql_fc.sql
--> remove all lines, which are not necessary
SQL> !cat /tmp/sql_fc.sql
alter session set "_fix_control"='6913094:1';
PROMPT 6913094
@/tmp/sql.sql

alter session set "_fix_control"='6670551:1';
PROMPT 6670551
@/tmp/sql.sql

...

alter session set "_fix_control"='9407929:1';
PROMPT 9407929
@/tmp/sql.sql

alter session set "_fix_control"='10359631:1';
PROMPT 10359631
@/tmp/sql.sql

SQL> 

First I set OFE to 11.2.0.1 and then run my generated script, which enables the fixes one after the other, until the query becomes slow:

SQL> alter session set optimizer_features_enable='11.2.0.1';
SQL> @/tmp/sql_fc.sql

Session altered.

8602840

1 row selected.


Session altered.

8725296

1 row selected.

...

Session altered.

9443476

1 row selected.


Session altered.

9195582

--> after enabling fix for bugno 9195582 there is no output generated anymore, i.e. the query becomes slow.

So we identified the fix for bugno 9195582 as the one that causes the optimizer to produce a suboptimal plan.

3. Details about bug 9195582 and the implementation of a workaround

The description field of v$system_fix_control provides more details about the bug-fix:

SQL> select description from v$system_fix_control where bugno=9195582;

DESCRIPTION
----------------------------------------------------------------
leaf blocks as upper limit for skip scan blocks
SQL> 

Searching My Oracle Support for the bug resulted in the MOS note

Bug 9195582 – Skip Scan overcosted when an index column has high NDV (Doc ID 9195582.8)

In there we have the following description:

Symptoms:

Performance Of Query/ies Affected 

Description

The estimate for skip scan blocks will now be no more than leaf blocks.
Index skip scans will be used more often than previously.

REDISCOVERY INFORMATION:
If an index skip scan is not being selected for a query due to having a
high cost and one of the skip scan index columns has NDV which is higher
than the number of leaf blocks of the index then you may be facing this
bug.

Workaround
Force the skip scan using a hint.
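
For completeness, forcing the skip scan as suggested in the note would be done with the INDEX_SS hint; a minimal sketch with hypothetical table and index names:

-- hypothetical example: table T with index T_IDX(col1, col2); the predicate only
-- references the trailing column, so an index skip scan can be forced via the hint
select /*+ index_ss(t t_idx) */ *
from   t
where  col2 = :b1;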

Interestingly, the faster plan did not contain a skip scan, but a change in the estimation of blocks for skip scans may of course also lead to other plans being skipped or considered.

At this point I logged in to the DB again and ran my query with the appropriate fix turned off to verify that the query runs fast:

SQL> select value from v$session_fix_control where bugno=9195582;

     VALUE
----------
         1

SQL> alter session set "_fix_control"='9195582:OFF';

Session altered.

SQL> select value from v$session_fix_control where bugno=9195582;

     VALUE
----------
         0

SQL> @/tmp/sql.sql

Elapsed: 00:00:00.36

--> OK, it works around the issue.

To make that workaround active, we can implement a SQL patch for the problematic query. As the query was no longer in the shared pool, I had to take the query text from the Automatic Workload Repository (AWR) to create the SQL patch:

REMARK: Again, please consider that AWR requires the diagnostics pack to be licensed.

set serveroutput on
declare
        v1      varchar2(128);
        v_sql   clob;
begin
        select sql_text into v_sql from dba_hist_sqltext where sql_id='8cdydzbsh10dd';
        v1 :=   dbms_sqldiag.create_sql_patch(
                        sql_text  => v_sql,
                        hint_text => q'{opt_param('_fix_control' '9195582:OFF')}',
                        name    => 'switch_off_fix_9195582'
                );
        dbms_output.put_line(v1);
end;
/
switch_off_fix_9195582

PL/SQL procedure successfully completed.

Later on you can check whether the query runs fast by looking at the data in the shared pool or in the AWR history:

select executions, (elapsed_time/executions)/1000000 avg_elapsed_secs, sql_patch from v$sql where sql_id='8cdydzbsh10dd';
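
The same check against the AWR history could look like this (a sketch, assuming the statement has been captured by AWR and the diagnostics pack is licensed):

-- average elapsed time per execution of the statement, per AWR snapshot
select sn.begin_interval_time,
       st.executions_delta,
       round(st.elapsed_time_delta/nullif(st.executions_delta,0)/1e6,3) avg_elapsed_secs
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
  on   sn.snap_id = st.snap_id
 and   sn.dbid = st.dbid
 and   sn.instance_number = st.instance_number
where  st.sql_id = '8cdydzbsh10dd'
order  by sn.begin_interval_time;
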
Summary:

Sometimes you have to be fast at implementing a workaround for a slowly running query. One option is to check whether the query ran faster with older OPTIMIZER_FEATURES_ENABLE settings and, if that is the case, identify the bugno whose fix causes the suboptimal plan. Always implement only the smallest change possible (just disable a single bug fix instead of going back to a previous OFE setting) and change as locally to the problem as possible (i.e. add a hint or a SQL patch to the query and, if possible, do not change a parameter at session or even system level). And finally, document your change and try to get rid of the workaround as soon as you have time for a deeper analysis.
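
Once the underlying issue is fixed (for example after a patch or an upgrade), the SQL patch can simply be dropped again; a minimal sketch:

-- remove the SQL patch created above once the workaround is no longer needed
exec dbms_sqldiag.drop_sql_patch(name => 'switch_off_fix_9195582');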

The Oracle Support tool SQLTXPLAIN (see MOS Note “All About the SQLT Diagnostic Tool (Doc ID 215187.1)”) contains the XPLORE utility, which goes much deeper for a single SQL statement: it checks the plan changes caused by all “_fix_control” settings and optimizer changes (underscore parameters) between releases and produces HTML output.

The article Oracle: A possible method to address a slow query if there is time pressure first appeared on the dbi Blog.

odacli create-appliance failed on an ODA HA

Thu, 2025-04-24 09:14

I recently had to install an Oracle Database Appliance X11 HA and it failed when creating the appliance:

[root@oak0 ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
...
[root@mynewoda ~]# odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e

Job details                                                      
----------------------------------------------------------------
                     ID:  88e4b5e3-3a73-4c18-9d9f-960151abc45e
            Description:  Provisioning service creation
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i 88e4b5e3-3a73-4c18-9d9f-960151abc45e --ecr" command)
                Created:  April 23, 2025 16:15:35 CEST
                Message:  DCS-10001:Internal error encountered: Failed to provision GI with RHP at the home: /u01/app/19.26.0.0/grid: DCS-10001:Internal error encountered: PRGH-1002 : Failed to copy files from /opt/oracle/rhp/RHPCheckpoints/rhptemp/grid8631129022929485455.rsp to /opt/oracle/rhp/RHPCheckpoints/wOraGrid192600
PRKC-1191 : Remote command execution setup check for node mynewoda using shell /usr/bin/ssh failed.
No ECDSA host key is known for mynewoda and you have requested strict checking.Host key verification failed...

In the past we occasionally got this "host key verification failed" error, and simply rerunning the "odacli create-appliance" command was enough. This time, however, restarting was not possible:

[root@mynewoda ~]# odacli create-appliance -r /u01/patch/my_new_oda.json
DCS-10047:Same job is already running: Provisioning FAILED in different request.

Following MOS Note “ODA Provisioning Fails to Create Appliance w/ Error: DCS-10047:Same Job is already running : Provisioning FAILED in different request. (Doc ID 2809836.1)” I cleaned up the ODA, updated the repository with the Grid Infrastructure clone and DB clone:

Stop the dcs agent on both nodes:

# systemctl stop initdcsagent

Then, run cleanup.pl on ODA node 0.

# /opt/oracle/oak/onecmd/cleanup.pl -f
...

If you get warnings that the cleanup cannot transfer the public key to node 1 or cannot set up SSH equivalence, then run the cleanup on node 1 as well.

At the end of the cleanup output you get these messages:

WARNING: After system reboot, please re-run "odacli update-repository" for GI/DB clones,
WARNING: before running "odacli create-appliance".

So, after the reboot I updated the repository with the GI and DB Clone:

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 674f7c66-1615-450f-be27-4e4734abca97

Job details                                                      
----------------------------------------------------------------
                     ID:  674f7c66-1615-450f-be27-4e4734abca97
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:37:29 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-GI-19.26.0.0.zip
...

[root@oak0 patch]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...
[root@oak0 patch]# odacli describe-job -i 4299b124-1c93-4d22-bac4-44a65cbaac67

Job details                                                      
----------------------------------------------------------------
                     ID:  4299b124-1c93-4d22-bac4-44a65cbaac67
            Description:  Repository Update
                 Status:  Success
                Created:  April 23, 2025 14:39:34 UTC
                Message:  /u01/patch/odacli-dcs-19.26.0.0.0-250127-DB-19.26.0.0.zip
...

Checked that the clones are available:

[root@oak0 patch]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak0 patch]# 

The same on node 1:

[root@oak1 ~]# ls -ltrh /opt/oracle/oak/pkgrepos/orapkgs/clones
total 12G
-rwxr-xr-x 1 root root 6.0G Jan 28 03:33 grid19.250121.tar.gz
-rwxr-xr-x 1 root root   21 Jan 28 03:34 grid19.250121.tar.gz.info
-r-xr-xr-x 1 root root 5.4G Jan 28 03:42 db19.250121.tar.gz
-rw-rw-r-- 1 root root  19K Jan 28 03:42 clonemetadata.xml
-rw-rw-r-- 1 root root   21 Jan 28 03:43 db19.250121.tar.gz.info
[root@oak1 ~]# 

Before running the create-appliance again, you should first validate the storage topology on both nodes.

[root@oak0 ~]# odacli validate-storagetopology
INFO    : ODA Topology Verification         
INFO    : Running on Node0                  
INFO    : Check hardware type               
INFO    : Check for Environment(Bare Metal or Virtual Machine)
SUCCESS : Type of environment found : Bare Metal
INFO    : Check number of Controllers       
SUCCESS : Number of onboard OS disk found : 2
SUCCESS : Number of External SCSI controllers found : 2
INFO    : Check for Controllers correct PCIe slot address
SUCCESS : Internal RAID controller   : 
SUCCESS : External LSI SAS controller 0 : 61:00.0
SUCCESS : External LSI SAS controller 1 : e1:00.0
INFO    : Check for Controller Type in the System
SUCCESS : There are 2 SAS 38xx controller in the system
INFO    : Check if JBOD powered on          
SUCCESS : 1JBOD : Powered-on
INFO    : Check for correct number of EBODS(2 or 4)
SUCCESS : EBOD found : 2
INFO    : Check for External Controller 0   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for controller 0
INFO    : Check for External Controller 1   
SUCCESS : Controller connected to correct EBOD number
SUCCESS : Controller port connected to correct EBOD port
SUCCESS : Overall Cable check for Controller 1
INFO    : Check for overall status of cable validation on Node0
SUCCESS : Overall Cable Validation on Node0
INFO    : Check Node Identification status  
SUCCESS : Node Identification
SUCCESS : Node name based on cable configuration found : NODE0
INFO    : The details for Storage Topology Validation can also be found in the log file=/opt/oracle/oak/diag/oak0/oak/storagetopology/StorageTopology-2025-04-23-14:42:34_70809_7141.log
[root@oak0 ~]# 

Validate the storage-topology on node 1 as well. Not validating the storage topology may lead to the following error when creating the appliance again:

OAK-10011:Failure while running storage setup on the system. Cause: Node number set on host not matching node number returned by storage topology tool. Action: Node number on host not set correctly. For default storage shelf node number needs to be set by storage topology tool itself.

Afterwards, the "odacli create-appliance" should complete successfully.

Summary

If your "odacli create-appliance" fails on an ODA HA environment and you cannot restart it, then run a cleanup, update the repository with the Grid Infrastructure and DB clones, and validate the storage topology before running the create-appliance again.

The article odacli create-appliance failed on an ODA HA first appeared on the dbi Blog.

Virtualize, Anonymize, Validate: The Power of Delphix & OMrun

Wed, 2025-04-23 11:27
The Challenge: Modern Data Complexity

As businesses scale, so do their data environments. With hybrid cloud adoption, legacy system migrations, and stricter compliance requirements, IT teams must ensure test environments are:

  • Quickly available
  • Secure and compliant
  • Accurate mirrors of production environments
The Solution: Delphix – OMrun

Even for heterogeneous data storage technologies, Delphix and OMrun provide a seamless way to virtualize, anonymize, and validate your test data securely and quickly.

Virtualize with Delphix: Fast, Efficient, and Agile

Delphix replaces slow, storage-heavy physical test environments with virtualized data environments, which is what makes it a major advance.

Anonymize with Confidence: Built-in Data Masking

Data privacy isn’t optional, it’s critical. Delphix includes automated data masking to anonymize sensitive information. Whether it’s PII, PHI, or financial data, Delphix ensures:

  • Compliance with regulations (GDPR, CCPA, etc.)
  • Reduced risk of data leaks in non-production environments
  • Built-in masking templates and customization options
Validate with OMrun: Quality Assurance at Scale

OMrun brings powerful data validation and quality assurance capabilities into the mix. It is tailor-made for data anonymization validation (ensuring data privacy), providing:

  • Automated script generation
  • Scalable validation (running parallel OMrun instances)
  • Transparent reporting and dashboard
Final Thoughts: A Future-Ready Data Strategy

Whether you’re planning a cloud migration, regulatory compliance initiative, or just looking to modernize your Dev/Test practices, Delphix & OMrun provide a future-proof foundation. This powerful combination helps businesses move faster, safer, and smarter – turning data from a bottleneck into a business accelerator.

Want to see it in action?

Watch the OMrun Video Tutorials at www.youtube.com/@Dbi-services or explore the Delphix & OMrun solutions at:

  • OMrun: dbi-services.com/products/omrun/
  • OMrun Online Manual
  • Delphix: Delphix Data Masking Software

The article Virtualize, Anonymize, Validate: The Power of Delphix & OMrun first appeared on the dbi Blog.

Restore a database using Veeam RMAN plug-in on an ODA

Tue, 2025-04-22 16:07

I recently wrote a blog post showing how to configure the Veeam RMAN plug-in to take database backups. As every DBA knows, configuring a backup is not complete without testing a restore. In this blog I will show how I tested my previous Veeam configuration and the backups performed with this Veeam RMAN plug-in on the same ODA. In order to test that the Veeam backups are usable, we will create a new CVEEAMT container database on the ODA and restore the existing CDB1 container database into CVEEAMT using an existing Veeam backup taken after configuring the plug-in. The restore will be done through a duplicate.

Pay attention

As we will restore an existing production container database, named CDB1, hosting a PDB named PDB1, into the new CVEEAMT container database, we will end up with a duplicate PDB. Since each PDB registers a service with the listener, both PDB1 copies would be reachable through the same service name, which, if PDB1 is in use, could have dramatic consequences. Therefore, before doing the restore into the new container database, we will change the domain of the newly created one. A quick check is shown in the sketch below.
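
As a sanity check before and after the domain change, the service names a container database registers with the listener can be listed from the database itself; a minimal sketch:

-- services of the current container database; NETWORK_NAME is what gets registered
-- with the listener and therefore must not collide with the services of CDB1/PDB1
select con_id, name, network_name
from   v$services
order  by con_id, name;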

Create new container database CVEEAMT

With odacli we will create the new container database named CVEEAMT.

[root@ODA2 ~]# odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
3941f574-77bd-4f9e-a1f6-db2bb654f334     OraDB19000_home1     19.25.0.0.241015     SE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1     CONFIGURED
b922980f-cecd-4bf8-a688-eb41dd4b5b4b     OraDB19000_home2     19.25.0.0.241015     SE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2     CONFIGURED

[root@ODA2 ~]# odacli create-database -dh 3941f574-77bd-4f9e-a1f6-db2bb654f334 -n CVEEAMT -u CVEEAMT_SITE1 -cl OLTP -c -p VEEAMT -no-co -cs AL32UTF8 -ns UTF8 -l AMERICAN -dt AMERICA -s odb1 -r ACFS
Enter SYS, SYSTEM and PDB Admin user password:
Retype SYS, SYSTEM and PDB Admin user password:

Job details
----------------------------------------------------------------
                     ID:  7d99e795-31e8-4c96-af15-376405180978
            Description:  Database service creation with DB name: CVEEAMT
                 Status:  Created
                Created:  February 19, 2025 11:37:16 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------

[root@ODA2 ~]# odacli describe-job -i 7d99e795-31e8-4c96-af15-376405180978

Job details
----------------------------------------------------------------
                     ID:  7d99e795-31e8-4c96-af15-376405180978
            Description:  Database service creation with DB name: CVEEAMT
                 Status:  Success
                Created:  February 19, 2025 11:37:16 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Setting up SSH equivalence               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:20 CET           Success
Setting up SSH equivalence               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:20 CET           Success
Creating volume datCVEEAMT               February 19, 2025 11:37:20 CET           February 19, 2025 11:37:35 CET           Success
Creating volume rdoCVEEAMT               February 19, 2025 11:37:35 CET           February 19, 2025 11:37:50 CET           Success
Creating ACFS filesystem for DATA        February 19, 2025 11:37:50 CET           February 19, 2025 11:38:14 CET           Success
Creating ACFS filesystem for RECO        February 19, 2025 11:38:14 CET           February 19, 2025 11:38:37 CET           Success
Database Service creation                February 19, 2025 11:38:38 CET           February 19, 2025 11:52:54 CET           Success
Database Creation by RHP                 February 19, 2025 11:38:38 CET           February 19, 2025 11:50:16 CET           Success
Change permission for xdb wallet files   February 19, 2025 11:50:16 CET           February 19, 2025 11:50:17 CET           Success
Add Startup Trigger to Open all PDBS     February 19, 2025 11:50:17 CET           February 19, 2025 11:50:18 CET           Success
Place SnapshotCtrlFile in sharedLoc      February 19, 2025 11:50:18 CET           February 19, 2025 11:50:21 CET           Success
SqlPatch upgrade                         February 19, 2025 11:51:35 CET           February 19, 2025 11:51:55 CET           Success
Running dbms_stats init_package          February 19, 2025 11:51:55 CET           February 19, 2025 11:51:56 CET           Success
Set log_archive_dest for Database        February 19, 2025 11:51:56 CET           February 19, 2025 11:51:58 CET           Success
Updating the Database version            February 19, 2025 11:51:58 CET           February 19, 2025 11:52:02 CET           Success
Create Users tablespace                  February 19, 2025 11:52:54 CET           February 19, 2025 11:52:57 CET           Success
Clear all listeners from Database        February 19, 2025 11:52:57 CET           February 19, 2025 11:52:58 CET           Success
Copy Pwfile to Shared Storage            February 19, 2025 11:53:00 CET           February 19, 2025 11:53:01 CET           Success

[root@ODA2 ~]#

Change the domain

As explained previously, for the newly created container database we will change the existing domain domain.ch to test.ch, so that the two PDB1 services do not conflict once the restore is done.

Existing listener registration for the new CVEEAMT container database and PDB:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.domain.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

As we can see, the new CDB and the new PDB are registered with the listener using the existing ODA domain domain.ch.

Let’s change it to test.ch.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 11:59:25 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show parameter domain

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      domain.ch

SQL> alter system set db_domain='test.ch' scope=spfile;

System altered.

We will restart the database for the changes to take effect.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] srvctl stop database -d CVEEAMT_SITE1
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] srvctl start database -d CVEEAMT_SITE1

We will check listener registration:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

As well as the db_domain instance parameter:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 12:02:23 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show parameter db_domain

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      test.ch

Listener configuration

As we will duplicate CDB1 to CVEEAMT, the database will be renamed. This implies a database restart, which is done through a listener connection. Therefore, for RMAN to connect remotely to a database that is not open, we need to add a static entry that will be used for the RMAN duplicate auxiliary connection.

Static registration:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CVEEAMT_SITE1.test.ch)
      (ORACLE_HOME   = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
      (SID_NAME      = CVEEAMT)
     )
  )

Back up the existing listener configuration on the ODA:

grid@ODA2:~/ [rdbms1900] grinf19
grid@ODA2:~/ [grinf19] cdt
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] ls -ltrh
total 28K
-rw-r--r-- 1 grid oinstall 1.5K Feb 14  2018 shrept.lst
drwxr-xr-x 2 grid oinstall 4.0K Apr 17  2019 samples
-rw-r--r-- 1 grid oinstall  266 Dec  3 17:33 listener.ora.bak.ODA2.grid
-rw-r--r-- 1 grid oinstall  504 Dec  3 17:34 listener.ora
-rw-r----- 1 grid oinstall  504 Dec  3 17:34 listener2412035PM3433.bak
-rw-r----- 1 grid oinstall  179 Dec  3 17:34 sqlnet.ora.20250204
-rw-r----- 1 grid oinstall  200 Feb  4 15:00 sqlnet.ora
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] mkdir history
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] cp -p listener.ora ./history/listener.ora.202502191205

Add listener static entry:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] vi listener.ora
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] diff listener.ora ./history/listener.ora.202502191205
7,15d6
<
< SID_LIST_LISTENER =
<   (SID_LIST =
<     (SID_DESC =
<       (GLOBAL_DBNAME = CVEEAMT_SITE1.test.ch)
<       (ORACLE_HOME   = /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
<       (SID_NAME      = CVEEAMT)
<      )
<   )
grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19]

Reload the listener:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 19-FEB-2025 13:01:10

Copyright (c) 1991, 2024, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
The command completed successfully

And check the static registration, which we can recognize by its UNKNOWN status:

grid@ODA2:/u01/app/19.25.0.0/grid/network/admin/ [grinf19] lsnrctl status | grep -i veeam
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "veeamt.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...

Configure Oracle network connections

We will configure the appropriate tnsnames.ora entries used to connect to the target and auxiliary databases.

We just need to add a new auxiliary entry. The target entry for the CDB1 connection already exists and permits connecting to the existing CDB1 production container database.

tnsnames connection to add:

CVEEAMT_SITE1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ODA2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CVEEAMT_SITE1.test.ch)
    )
  )

tnsnames.ora backup and configuration changes. The entry for CVEEAMT_SITE1 already exists and was created initially by odacli:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] cdt
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] ls -ltrh
total 112K
-rw-r--r-- 1 oracle oinstall 1.5K Feb 14  2018 shrept.lst
drwxr-xr-x 2 oracle oinstall  20K Apr 17  2019 samples
drwxr-xr-x 2 oracle oinstall  20K Dec 18 14:01 history
-rw-r----- 1 oracle oinstall 2.6K Feb 19 11:45 tnsnames.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] cp -p tnsnames.ora ./history/tnsnames.ora.202502191305
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] vi tnsnames.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] diff tnsnames.ora ./history/tnsnames.ora.202502191305
115c115
<       (SERVICE_NAME = CVEEAMT_SITE1.test.ch)
---
>       (SERVICE_NAME = CVEEAMT_SITE1.domain.ch)

Test target and auxiliary connections

Test connection to auxiliary database, CVEEAMT:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] sqlplus sys@CVEEAMT_SITE1 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:07:24 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set line 300
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
CVEEAMT          ODA2.domain.ch

SQL>

Test connection to target database, CDB1:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/network/admin/ [CVEEAMT (CDB$ROOT)] sqlplus sys@CDB1_SITE1 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:08:53 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set line 300
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
CDB1            ODA2.domain.ch

SQL>

Delete CVEEAMT DB files

We will now delete the CVEEAMT database files before executing the restore.

We will first check the spfile:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] cdh
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/ [CVEEAMT (CDB$ROOT)] cd dbs
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh *CVEEAMT*
-rw-r----- 1 oracle asmadmin   24 Feb 19 11:39 lkCVEEAMT_SITE1
-rw-r----- 1 oracle asmadmin   24 Feb 19 11:40 lkCVEEAMT
-rw-r----- 1 oracle oinstall   69 Feb 19 11:48 initCVEEAMT.ora
-rw-rw---- 1 oracle asmadmin 1.6K Feb 19 12:01 hc_CVEEAMT.dat
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cat initCVEEAMT.ora
SPFILE='/u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora'
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] srvctl config database -d CVEEAMT_SITE1 | grep -i spfile
Spfile: /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

We will stop the database:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] srvctl stop database -d CVEEAMT_SITE1
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] CVEEAMT

 **************************
 INSTANCE_NAME   : CVEEAMT
 STATUS          : DOWN
 **************************
 Statustime: 2025-02-19 13:11:56

We will back up the spfile:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cp -p /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora.bak.202502191312
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 20K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312
-rw-r----- 1 oracle asmadmin 6.5K Feb 19 12:02 spfileCVEEAMT.ora
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

We will drop the CVEEAMT database:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:14:09 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount restrict
ORACLE instance started.

Total System Global Area 4294965864 bytes
Fixed Size                  9185896 bytes
Variable Size             855638016 bytes
Database Buffers         3388997632 bytes
Redo Buffers               41144320 bytes
Database mounted.

SQL> drop database;

Database dropped.

Disconnected from Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
SQL>

We will restore the spfile that was deleted with the drop database command:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 12K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] cp -p /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora.bak.202502191312 /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/spfileCVEEAMT.ora

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] ls -ltrh /u02/app/oracle/oradata/CVEEAMT_SITE1/dbs/
total 20K
-rw-r----- 1 oracle asmadmin 2.0K Feb 19 11:41 orapwCVEEAMT
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora.bak.202502191312
-rw-r----- 1 oracle oinstall 6.5K Feb 19 12:02 spfileCVEEAMT.ora

Startup nomount auxiliary database

We will start the auxiliary database, CVEEAMT, in nomount mode.

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 13:15:45 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 4294965864 bytes
Fixed Size                  9185896 bytes
Variable Size             855638016 bytes
Database Buffers         3388997632 bytes
Redo Buffers               41144320 bytes
SQL>

The database is started in nomount mode and the static registration is available:

oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -i veeam
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status BLOCKED, has 1 handler(s) for this service...
oracle@ODA2:/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/dbs/ [CVEEAMT (CDB$ROOT)]

Check CDB1 backups

We will check that the last automatic backups, which we configured in the crontab at the end of the Veeam RMAN plug-in configuration, completed successfully.

INC0 backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *inc0* | tail -n1
-rw-r--r-- 1 oracle oinstall 17K Feb 16 18:20 CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_inc0_no_arc_del_tape_20250216_180002.log                    #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-16_18:20:54                                  #
#**************************************************************************************************#

INC1 backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *inc1* | tail -n1
-rw-r--r-- 1 oracle oinstall 17K Feb 18 18:01 CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_inc1_no_arc_del_tape_20250218_180002.log                    #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-18_18:01:31                                  #
#**************************************************************************************************#

Archived log backup:

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] ls -ltrh *arc_no_arc* | tail -n1
-rw-r--r-- 1 oracle oinstall 7.6K Feb 19 12:40 CDB1_bck_arc_no_arc_del_tape_20250219_124002.log

oracle@ODA2:/u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/ [CDB1 (CDB$ROOT)] tail CDB1_bck_arc_no_arc_del_tape_20250219_124002.log

Recovery Manager complete.

RMAN return Code: 0

#**************************************************************************************************#
#                    END OF: CDB1_bck_arc_no_arc_del_tape_20250219_124002.log                     #
#--------------------------------------------------------------------------------------------------#
#                                  timestamp: 2025-02-19_12:40:49                                  #
#**************************************************************************************************#
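
Besides the backup log files, the backup status can also be cross-checked directly in the target database; a small sketch using the standard RMAN views:

-- last backup jobs as recorded in the CDB1 control file
select start_time, end_time, input_type, status,
       round(elapsed_seconds/60,1) as elapsed_minutes
from   v$rman_backup_job_details
order  by start_time desc
fetch first 5 rows only;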

Create a new table in PDB1 in the target CDB1

In order to check some data content after the restore, we will create a TEST1 table in PDB1 in the existing target CDB1 container database.

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:07:35 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> alter session set container=PDB1;

Session altered.

SQL> create table TEST1 as select * from dba_users;

Table created.

Archived log backup on CDB1

Let's take a last archived log backup to record the latest transactions, including our TEST1 table creation.

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] /u01/app/oracle/local/dmk_ha/bin/check_primary.ksh CDB1 "/u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s CDB1 -t bck_arc_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/CDB1_SITE1/etc/rman.cfg"
2025-02-19_14:09:49::check_primary.ksh::SetOraEnv       ::INFO ==> Environment: CDB1 (/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1)
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Getting V$DATABASE.DB_ROLE for CDB1
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> CDB1 Database Role is: PRIMARY
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Program going ahead and starting requested command
2025-02-19_14:09:49::check_primary.ksh::MainProgram     ::INFO ==> Script : /u01/app/oracle/local/dmk_dbbackup/bin/dmk_rman.ksh -s CDB1 -t bck_arc_no_arc_del_tape.rcv -c /u01/app/odaorabase/oracle/admin/CDB1_SITE1/etc/rman.cfg

[OK]::EBL::RMAN::dmk_dbbackup::CDB1::bck_arc_no_arc_del_tape.rcv::RMAN_retCode::0
Logfile is : /u01/app/odaorabase/oracle/admin/CDB1_SITE1/log/CDB1_bck_arc_no_arc_del_tape_20250219_140949.log


2025-02-19_14:10:37::check_primary.ksh::CleanExit       ::INFO ==> Program exited with ExitCode : 0
oracle@ODA2:~/ [CDB1 (CDB$ROOT)]

Duplicate CDB1 to CVEEAMT

Let's do our Veeam RMAN plug-in test by restoring CDB1 to CVEEAMT using a duplicate-from-backup command.

The run block will be the following. We will allocate an auxiliary channel using the Veeam RMAN plug-in library that was configured in the previous blog.

run {
ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
duplicate database CDB1 to CVEEAMT;
}

Check the auxiliary database files. We can see there is no OMF datafile directory yet.

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] ls -lrh /u02/app/oracle/oradata/CVEEAMT_SITE1/
total 168K
drwx------ 2 root   root     64K Feb 19 11:38 lost+found
drwxr-x--- 2 oracle oinstall 20K Feb 19 13:15 dbs
drwxrwx--- 2 oracle oinstall 20K Feb 19 11:51 arc10
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)]

Restore the database using the Veeam backups. We will only use one target and one auxiliary channel, since we are running Oracle SE2 edition at the customer's site:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] rmanh

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Feb 19 14:13:30 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys@CDB1_SITE1
connect target *
target database Password:
connected to target database: CDB1 (DBID=756666048)

RMAN> connect auxiliary sys@CVEEAMT_SITE1
connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CVEEAMT (not mounted)

run {
run {
2> ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
ALLOCATE AUXILIARY CHANNEL VeeamAgentChannel1 DEVICE TYPE SBT_TAPE PARMS 'SBT_LIBRARY=/opt/veeam/VeeamPluginforOracleRMAN/libOracleRMANPlugin.so' FORMAT 'e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_%I_%d_%T_%U.vab';
3> duplicate database CDB1 to CVEEAMT;
duplicate database CDB1 to CVEEAMT;
4> }
}
using target database control file instead of recovery catalog
allocated channel: VeeamAgentChannel1
channel VeeamAgentChannel1: SID=16 device type=SBT_TAPE
channel VeeamAgentChannel1: Veeam Plug-in for Oracle RMAN

Starting Duplicate Db at 19-FEB-2025 14:15:08
current log archived
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''CDB1'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''CDB1'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes
allocated channel: VeeamAgentChannel1
channel VeeamAgentChannel1: SID=21 device type=SBT_TAPE
channel VeeamAgentChannel1: Veeam Plug-in for Oracle RMAN

Starting restore at 19-FEB-2025 14:15:33

channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: restoring control file
channel VeeamAgentChannel1: reading from backup piece c-756666048-20250219-09_RMAN_AUTOBACKUP.vab
channel VeeamAgentChannel1: piece handle=c-756666048-20250219-09_RMAN_AUTOBACKUP.vab tag=TAG20250219T141031
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
output file name=/u04/app/oracle/redo/CVEEAMT/CVEEAMT_SITE1/controlfile/o1_mf_mvcf9nor_.ctl
Finished restore at 19-FEB-2025 14:15:56

database mounted

contents of Memory Script:
{
   set until scn  13117839;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  15 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   set newname for clone datafile  18 to new;
   set newname for clone datafile  19 to new;
   set newname for clone datafile  20 to new;
   set newname for clone datafile  21 to new;
   set newname for clone datafile  22 to new;
   set newname for clone datafile  23 to new;
   set newname for clone datafile  24 to new;
   set newname for clone datafile  25 to new;
   set newname for clone datafile  26 to new;
   set newname for clone datafile  27 to new;
   set newname for clone datafile  28 to new;
   set newname for clone datafile  29 to new;
   set newname for clone datafile  30 to new;
   set newname for clone datafile  31 to new;
   set newname for clone datafile  32 to new;
   set newname for clone datafile  33 to new;
   set newname for clone datafile  34 to new;
   set newname for clone datafile  35 to new;
   set newname for clone datafile  36 to new;
   set newname for clone datafile  37 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 19-FEB-2025 14:16:01

channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00005 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00006 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00007 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab tag=INC0_20250204_133948
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 1 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 2 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 2
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00010 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 3 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 3
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00013 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00014 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00017 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00020 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00023 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00026 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00029 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00032 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00035 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00008 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00015 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00018 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00021 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00024 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00027 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00030 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00033 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00036 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00009 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00011 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00016 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00019 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00022 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00025 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00028 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00031 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00034 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00037 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
channel VeeamAgentChannel1: starting datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
channel VeeamAgentChannel1: restoring datafile 00001 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00003 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00004 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_%u_.dbf
channel VeeamAgentChannel1: restoring datafile 00012 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_%u_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab tag=INC0_20250216_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:07
Finished restore at 19-FEB-2025 14:16:50

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=40 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=41 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=42 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=43 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_mvcpdmpq_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=44 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=45 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=46 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=47 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=48 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=49 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=50 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=51 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=52 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=53 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=54 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=55 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=56 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=57 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=58 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=59 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=60 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=61 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=62 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=63 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=64 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=65 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=66 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=67 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=68 STAMP=1193494610 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=69 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=70 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=71 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=72 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=73 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=74 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=75 STAMP=1193494611 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf

contents of Memory Script:
{
   set until scn  13117839;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 19-FEB-2025 14:16:51
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 1 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 2 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 2
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00010: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
channel VeeamAgentChannel1: restoring section 3 of 3
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 3
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00013: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
destination for restore of datafile 00014: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
destination for restore of datafile 00017: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
destination for restore of datafile 00020: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
destination for restore of datafile 00023: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
destination for restore of datafile 00026: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
destination for restore of datafile 00029: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
destination for restore of datafile 00032: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
destination for restore of datafile 00035: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00008: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
destination for restore of datafile 00015: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
destination for restore of datafile 00018: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
destination for restore of datafile 00021: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
destination for restore of datafile 00024: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
destination for restore of datafile 00027: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
destination for restore of datafile 00030: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
destination for restore of datafile 00033: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
destination for restore of datafile 00036: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00009: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
destination for restore of datafile 00011: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
destination for restore of datafile 00016: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
destination for restore of datafile 00019: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
destination for restore of datafile 00022: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
destination for restore of datafile 00025: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
destination for restore of datafile 00028: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
destination for restore of datafile 00031: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
destination for restore of datafile 00034: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
destination for restore of datafile 00037: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03
channel VeeamAgentChannel1: starting incremental datafile backup set restore
channel VeeamAgentChannel1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
destination for restore of datafile 00003: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
destination for restore of datafile 00004: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
destination for restore of datafile 00012: /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
channel VeeamAgentChannel1: reading from backup piece e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab
channel VeeamAgentChannel1: piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab tag=INC1_20250218_180002
channel VeeamAgentChannel1: restored backup piece 1
channel VeeamAgentChannel1: restore complete, elapsed time: 00:00:03

starting media recovery

archived log for thread 1 with sequence 236 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc
archived log for thread 1 with sequence 237 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_237_mv9rjr7s_.arc
archived log for thread 1 with sequence 238 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_238_mvb6lr1f_.arc
archived log for thread 1 with sequence 239 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_239_mvbnnqvg_.arc
archived log for thread 1 with sequence 240 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_240_mvc2pr6m_.arc
archived log for thread 1 with sequence 241 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_241_mvcjrr7v_.arc
archived log for thread 1 with sequence 242 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_242_mvcp13py_.arc
archived log for thread 1 with sequence 243 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_243_mvcpbw6z_.arc
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc thread=1 sequence=236
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_237_mv9rjr7s_.arc thread=1 sequence=237
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_238_mvb6lr1f_.arc thread=1 sequence=238
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_239_mvbnnqvg_.arc thread=1 sequence=239
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_240_mvc2pr6m_.arc thread=1 sequence=240
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_241_mvcjrr7v_.arc thread=1 sequence=241
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_242_mvcp13py_.arc thread=1 sequence=242
archived log file name=/u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_19/o1_mf_1_243_mvcpbw6z_.arc thread=1 sequence=243
media recovery complete, elapsed time: 00:00:03
Finished recover at 19-FEB-2025 14:17:17
released channel: VeeamAgentChannel1
Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''CVEEAMT'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''CVEEAMT'' comment= ''Reset to original value by RMAN'' scope=spfile
Oracle instance started

Total System Global Area    4294965864 bytes

Fixed Size                     9185896 bytes
Variable Size                855638016 bytes
Database Buffers            3388997632 bytes
Redo Buffers                  41144320 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CVEEAMT" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 512 M ,
  GROUP     2  SIZE 512 M ,
  GROUP     3  SIZE 512 M
 DATAFILE
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf',
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_system_mvcpdmpq_.dbf',
  '/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf'
 CHARACTER SET AL32UTF8


contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf",
 "/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf RECID=1 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf RECID=2 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf RECID=3 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf RECID=4 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf RECID=5 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf RECID=6 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf RECID=7 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf RECID=8 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf RECID=9 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf RECID=10 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf RECID=11 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf RECID=12 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf RECID=13 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf RECID=14 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf RECID=15 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf RECID=16 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf RECID=17 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf RECID=18 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf RECID=19 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf RECID=20 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf RECID=21 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf RECID=22 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf RECID=23 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf RECID=24 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf RECID=25 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf RECID=26 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf RECID=27 STAMP=1193494661
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf RECID=28 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf RECID=29 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf RECID=30 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf RECID=31 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf RECID=32 STAMP=1193494662
cataloged datafile copy
datafile copy file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf RECID=33 STAMP=1193494662

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_sysaux_mvcpdmq3_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987BF93B6232B35E063425C210AC02A/datafile/o1_mf_undotbs1_mvcpdmqj_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=5 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=6 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=7 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=8 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpf7lt_.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=11 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_ami_hub__mvcpfgl2_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=12 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpfost_.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=13 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_base_sys_mvcpf7m4_.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=14 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_idm_mvcpfgls_.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=15 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=16 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=17 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=18 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=19 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_main_mvcpf7nb_.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=20 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfgmz_.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=21 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_md_readi_mvcpfovr_.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=22 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=23 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading_mvcpfgnp_.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=24 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_reading__mvcpfowc_.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=25 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=26 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=27 STAMP=1193494661 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=28 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=29 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=30 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_uam_mvcpfoxh_.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=31 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_mvcpf7pp_.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=32 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_unit_ind_mvcpfgph_.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=33 STAMP=1193494662 file name=/u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf
Reenabling controlfile options for auxiliary database
Executing: alter database force logging

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Finished Duplicate Db at 19-FEB-2025 14:17:48

We can see that RMAN used the INC0 VEEAM backups:

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250204_en3guuba_471_1_1.vab tag=INC0_20250204_133948

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_2_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s53hv217_901_3_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s83hv262_904_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_s93hv2h6_905_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sa3hv2pf_906_1_1.vab tag=INC0_20250216_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250216_sb3hv2pu_907_1_1.vab tag=INC0_20250216_180002

We can see that RMAN used the INC1 VEEAM backups:

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_2_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_u93i4ap9_969_3_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uc3i4aqd_972_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ud3i4aqk_973_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_ue3i4aqr_974_1_1.vab tag=INC1_20250218_180002

piece handle=e718bc55-0c60-43bc-b1f7-f8cf2c793120/RMAN_756666048_CDB1_20250218_uf3i4ar2_975_1_1.vab tag=INC1_20250218_180002
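As a side note, these backup pieces are also registered in the control file of the source database, so they can be cross-checked on CDB1 with a simple LIST BACKUP. A minimal sketch, not part of the original run, assuming an RMAN session connected to CDB1 as target:

list backup of database summary;
list backup of archivelog all summary;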

RMAN duplicate did not use any of the archived log backups because the archived log files were still present in the FRA, which is fine for our tests. See media recovery messages like:

archived log for thread 1 with sequence 236 is already on disk as file /u03/app/oracle/fast_recovery_area/CDB1_SITE1/archivelog/2025_02_18/o1_mf_1_236_mv9h6t7z_.arc

RMAN duplicate applied all archived log files since we did not specify any until_scn or until_time clause.
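For completeness, here is how the duplicate could have been restricted to a point in time instead of applying every available archived log. This is only a hedged sketch: the VEEAM SBT channel configuration and the other options of our actual duplicate command are omitted, and the duplicate line below is just a placeholder for it:

run {
  # stop recovery of the clone at a chosen point in time
  # (alternatively: set until scn <scn>;)
  set until time "to_date('19-FEB-2025 12:00:00','DD-MON-YYYY HH24:MI:SS')";
  # placeholder for the duplicate command used in this post
  duplicate database to CVEEAMT;
}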

Checks

We now have two PDB1 pluggable databases, one per CDB, each registered to the listener with its own domain:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] lsnrctl status | grep -iE veeam\|pdb1
  Instance "CDB1", status READY, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CDB1XDB.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_SITE1.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CVEEAMTXDB.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "CVEEAMT_SITE1.test.ch" has 2 instance(s).
  Instance "CVEEAMT", status UNKNOWN, has 1 handler(s) for this service...
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
Service "PDB1_PRI.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1.domain.ch" has 1 instance(s).
  Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1.test.ch" has 1 instance(s).
  Instance "CVEEAMT", status READY, has 1 handler(s) for this service...
oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)]
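The same can be confirmed from inside each container database: every CDB publishes its own PDB1 service with its own domain. A quick check, assuming a SYSDBA connection on a 19c CDB (not part of the original test):

select con_id, name, network_name from v$services where lower(name) like 'pdb1%' order by con_id;

Run against CDB1 it should return pdb1.domain.ch, and run against CVEEAMT it should return pdb1.test.ch.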

Check the target container database CDB1:

oracle@ODA2:~/ [CDB1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:46:06 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CDB1

SQL> set line 300
SQL> col name for a20
SQL> select NAME, GUID, total_size/1024/1024/1024 GB from v$pdbs;

NAME                 GUID                                     GB
-------------------- -------------------------------- ----------
PDB$SEED             2987BF93B6232B35E063425C210AC02A 1.09960938
PDB1                 2987D4B68CF25579E063425C210AB61B 46.3935547

SQL>

Check the auxiliary container database CVEEAMT. We will check PDB1, verify that our TEST1 table exists, and list the database files:

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] CVEEAMT

 ******************************************************
 INSTANCE_NAME   : CVEEAMT
 DB_NAME         : CVEEAMT
 DB_UNIQUE_NAME  : CVEEAMT_SITE1
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : Normal: 0/0, Oracle-maintained: 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : NO
 FORCE_LOGGING   : YES
 VERSION         : 19.25.0.0.0
 NLS_LANG        : AMERICAN_AMERICA.AL32UTF8
 CDB_ENABLED     : YES
 PDBs            : PDB1  PDB$SEED
 ******************************************************

 PDB color: pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-02-19 14:42:03

oracle@ODA2:~/ [CVEEAMT (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 19 14:42:05 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from test1;

  COUNT(*)
----------
        51

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CVEEAMT

SQL> @qdbstbssize.sql

Container                          Nb      Extent Segment     Alloc.      Space       Max.    Percent Block
name            Name            files Type Mgmnt  Mgmnt    size (GB)  free (GB)  size (GB)     used % size  Log Encrypt Compress
--------------- --------------- ----- ---- ------ ------- ---------- ---------- ---------- ---------- ----- --- ------- --------
PDB1            XXX_XXX_INDEXES     1 DATA LM-SYS AUTO          1.00        .90      10.00       1.00 8 KB  YES NO      NO
                XXX_XXX_TABLES      1 DATA LM-SYS AUTO          1.00        .91      10.00        .95 8 KB  YES NO      NO
                XXXX_SYSTEM         1 DATA LM-SYS AUTO          1.00        .70      10.00       3.01 8 KB  YES NO      NO
                XXXX_SYSTEM_IND     1 DATA LM-SYS AUTO          1.00        .86      10.00       1.36 8 KB  YES NO      NO
                EXES

                IDM                 1 DATA LM-SYS AUTO          1.00        .85      10.00       1.53 8 KB  YES NO      NO
                JOB                 1 DATA LM-SYS AUTO          1.00        .92      10.00        .83 8 KB  YES NO      NO
                JOB_INDEXES         1 DATA LM-SYS AUTO          1.00        .92      10.00        .83 8 KB  YES NO      NO
                LOG                 1 DATA LM-SYS AUTO          1.00        .88      10.00       1.24 8 KB  YES NO      NO
                LOG_INDEXES         1 DATA LM-SYS AUTO          1.00        .86      10.00       1.41 8 KB  YES NO      NO
                MAIN                1 DATA LM-SYS AUTO          1.00        .93      10.00        .74 8 KB  YES NO      NO
                XX_XXXXXXX          1 DATA LM-SYS AUTO          1.00        .92      10.00        .78 8 KB  YES NO      NO
                XX_XX_XXXXXXX_INDE     1 DATA LM-SYS AUTO          1.00        .92      10.00        .80 8 KB  YES NO      NO
                XES

                QUEUE_TABLES        1 DATA LM-SYS AUTO          1.00        .93      10.00        .75 8 KB  YES NO      NO
                XXXXXXX             1 DATA LM-SYS AUTO          1.00        .84      10.00       1.61 8 KB  YES NO      NO
                XXXXXXX_INDEXES     1 DATA LM-SYS AUTO          1.00        .88      10.00       1.20 8 KB  YES NO      NO
                SETUP               1 DATA LM-SYS AUTO          1.00        .91      10.00        .93 8 KB  YES NO      NO
                SETUP_INDEXES       1 DATA LM-SYS AUTO          1.00        .91      10.00        .88 8 KB  YES NO      NO
                STATISTIC           1 DATA LM-SYS AUTO          1.00        .71      10.00       2.85 8 KB  YES NO      NO
                STATSPACK           1 DATA LM-SYS AUTO           .98        .13       2.00      42.32 8 KB  YES NO      NO
                SYSAUX              1 DATA LM-SYS AUTO           .58        .04      10.00       5.32 8 KB  YES NO      NO
                SYSTEM              1 DATA LM-SYS MANUAL         .62        .05       4.00      14.21 8 KB  YES NO      NO
                TEMP                1 TEMP LM-UNI MANUAL         .22        .66      31.00      -1.40 8 KB  NO  NO      NO
                TEMPORARY_DATA      1 DATA LM-SYS AUTO          1.00        .93      10.00        .67 8 KB  YES NO      NO
                TEMPORARY_DATA_     1 DATA LM-SYS AUTO          1.00        .93      10.00        .66 8 KB  YES NO      NO
                INDEXES

                XXX                 1 DATA LM-SYS AUTO          1.00        .93      10.00        .68 8 KB  YES NO      NO
                UNDOTBS1            1 UNDO LM-SYS MANUAL       20.00      19.97      20.00        .13 8 KB  YES NO      NO
                XXXX                1 DATA LM-SYS AUTO          1.00        .92      10.00        .82 8 KB  YES NO      NO
                XXXX_INDEXES        1 DATA LM-SYS AUTO          1.00        .91      10.00        .94 8 KB  YES NO      NO
                USERS               1 DATA LM-SYS AUTO           .00        .00       2.00        .05 8 KB  YES NO      NO
                USER_DATA           1 DATA LM-SYS AUTO          1.00        .93      10.00        .66 8 KB  YES NO      NO
***************                 -----                     ---------- ---------- ----------
TOTAL                              30                          46.39      42.14     309.00

SQL> alter session set container=cdb$root;

Session altered.

SQL> set lines 300
SQL> col name for a20
SQL> select NAME, GUID, total_size/1024/1024/1024 GB from v$pdbs;

NAME                 GUID                                     GB
-------------------- -------------------------------- ----------
PDB$SEED             2987BF93B6232B35E063425C210AC02A 1.09960938
PDB1                 2987D4B68CF25579E063425C210AB61B 46.3935547

2 rows selected.

SQL> set lines 300 pages 500
SQL> col file_name for a150
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
CVEEAMT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> select con_id, file_name from cdb_data_files;

    CON_ID FILE_NAME
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_system_mvcpfwvf_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_sysaux_mvcpfwtl_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_undotbs1_mvcpfwt0_.dbf
         1 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/datafile/o1_mf_users_mvcpfwvt_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_system_mvcpfhf3_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_sysaux_mvcpfppo_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_undotbs1_mvcpdoc3_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_users_mvcpfpz5_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statspac_mvcpf8jp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_xxx__mvcpf7lt_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_xxx__mvcpfgl2_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_sys_mvcpfost_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_sys_mvcpf7m4_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_mvcpfgls_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_mvcpfotj_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_job_inde_mvcpf7mq_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_mvcpfgmd_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_log_inde_mvcpfov5_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_mvcpf7nb_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xx_xxxxx_mvcpfgmz_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xx_xxxxx_mvcpfgmz_mvcpfovr_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_queue_ta_mvcpf7nx_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxxxxx_mvcpfgnp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxxxxx__mvcpfowc_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_mvcpf7oj_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_setup_in_mvcpfgo9_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_statisti_mvcpfowx_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpf7p2_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_temporar_mvcpfgow_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxx_mvcpfoxh_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_mvcpf7pp_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_xxxx_ind_mvcpfgph_.dbf
         3 /u02/app/oracle/oradata/CVEEAMT_SITE1/CVEEAMT_SITE1/2987D4B68CF25579E063425C210AB61B/datafile/o1_mf_user_dat_mvcpfoy2_.dbf

33 rows selected.

SQL>

As we can see, the restore of CDB1 into CVEEAMT with the RMAN duplicate command using Veeam backups completed successfully.

Cleanup

Let’s clean up by deleting the CVEEAMT database.

[root@ODA2 ~]# odacli delete-database -n CVEEAMT
{
  "jobId" : "565aa4e3-9152-45f8-a739-dd7c53b22044",
  "status" : "Running",
  "message" : "",
  "reports" : [ {
    "taskId" : "TaskDcsJsonRpcExt_14309",
    "taskName" : "Validate DB 96122ad1-182a-4059-8a26-677300d93d71 for deletion",
    "nodeName" : "ODA2",
    "taskResult" : "",
    "startTime" : "February 19, 2025 14:55:01 CET",
    "endTime" : null,
    "duration" : "00:00:00.10",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_14307",
    "jobId" : "565aa4e3-9152-45f8-a739-dd7c53b22044",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 19, 2025 14:55:01 CET"
  } ],
  "createTimestamp" : "February 19, 2025 14:54:59 CET",
  "resourceList" : [ ],
  "description" : "Database service deletion with DB name: CVEEAMT with ID : 96122ad1-182a-4059-8a26-677300d93d71",
  "updatedTime" : "February 19, 2025 14:55:01 CET",
  "jobType" : null,
  "cpsMetadata" : null
}

[root@ODA2 ~]# odacli describe-job -i "565aa4e3-9152-45f8-a739-dd7c53b22044"

Job details
----------------------------------------------------------------
                     ID:  565aa4e3-9152-45f8-a739-dd7c53b22044
            Description:  Database service deletion with DB name: CVEEAMT with ID : 96122ad1-182a-4059-8a26-677300d93d71
                 Status:  Success
                Created:  February 19, 2025 14:54:59 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Validate DB                              February 19, 2025 14:55:01 CET           February 19, 2025 14:55:01 CET           Success
96122ad1-182a-4059-8a26-677300d93d71
for deletion
Deleting the RMAN logs                   February 19, 2025 14:55:01 CET           February 19, 2025 14:55:01 CET           Success
Database Deletion By RHP                 February 19, 2025 14:55:01 CET           February 19, 2025 14:56:07 CET           Success
Unregister DB From Cluster               February 19, 2025 14:56:07 CET           February 19, 2025 14:56:08 CET           Success
Kill PMON Process                        February 19, 2025 14:56:08 CET           February 19, 2025 14:56:08 CET           Success
Database Files Deletion                  February 19, 2025 14:56:08 CET           February 19, 2025 14:56:08 CET           Success
Deleting Volume                          February 19, 2025 14:56:13 CET           February 19, 2025 14:56:17 CET           Success
Deleting Volume                          February 19, 2025 14:56:23 CET           February 19, 2025 14:56:26 CET           Success

We also restore the initial listener.ora configuration file. Note that a job on the appliance already restores the initial listener configuration at regular intervals.

We also delete the TEST1 table we created in the production PDB1. A minimal sketch of both cleanup steps follows.
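Below is a minimal, hypothetical sketch of these two cleanup steps. The Oracle home, the location of the saved listener.ora copy and the owner of the TEST1 table are assumptions to adapt to your environment; on an ODA the listener may also be managed from the grid home.

# Hypothetical cleanup sketch: adapt ORACLE_HOME, paths and the table owner
export ORACLE_HOME=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
export ORACLE_SID=CDB1

# Put back the listener.ora copy saved before the test and reload the listener
cp /home/oracle/listener.ora.before_test $ORACLE_HOME/network/admin/listener.ora
$ORACLE_HOME/bin/lsnrctl reload

# Drop the TEST1 table created in the production PDB1 (schema name assumed)
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
alter session set container=PDB1;
drop table testuser.test1 purge;
exit
EOF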

To wrap up…

We successfully restored CDB1 into CVEEAMT with the RMAN duplicate command using Veeam backups. This validates our previous Veeam RMAN plug-in configuration, as well as any backup taken with the Veeam RMAN plug-in.

The article Restore a database using Veeam RMAN plug-in on an ODA appeared first on dbi Blog.

Integrate YaK into Red Hat Ansible Automation Platform

Tue, 2025-04-22 03:00
Introduction to YaK

YaK is an open-source automation project developed by dbi services. Built on Ansible playbooks, YaK streamlines the deployment process for various components across any platform. It ensures adherence to best practices, maintains deployment quality, and significantly reduces time-to-deploy.

Initially created in response to the growing demand from dbi services’ consultants and clients, YaK simplifies and accelerates deployments across multi-technology infrastructures. Whether targeting cloud environments or on-premises systems, YaK drastically cuts down deployment effort, optimizing the overall time-to-market.

Find more information on the YaK website.

Why Integrate YaK into Red Hat Ansible Automation Platform (AAP)?
YaK Advantages:
  • User-Friendly Interface: YaK simplifies configuration and deployment through an intuitive user interface, allowing teams to quickly manage server and application deployments.
  • Centralized Metadata Database: It replaces traditional YAML configuration files with a centralized database to store deployment metadata, ensuring improved manageability and consistency.
  • Comprehensive Reporting: YaK provides capabilities for generating detailed reports on all deployments, offering insights for continuous improvement.
  • dbi services components: dbi services offers a range of subscription components readily deployable on any platform, further easing the accessibility and management of deployments. These components embed the expertise of dbi services’ consultants.
  • Custom Application Integration: YaK supports creating custom components for your specific applications. Developers can easily add Ansible playbooks to deploy the application into the component template.
Why Red Hat Ansible Automation Platform (AAP) with YaK:
  • Expert-Crafted Packages: YaK provides expertly maintained Ansible packages, ensuring reliability and built-in support for a wide range of scenarios, fully compatible with AAP.
  • Unified Dynamic Inventory: A single dynamic Ansible inventory for all your infrastructures, supporting multi-platform environments.
  • Platform-Agnostic Deployments: Seamless deployment across various platforms, enabling true platform independence.
  • Deep Integration with AAP Features: Full integration with AAP’s scheduler, workflows, and other advanced features, simplifying the automation of servers, components (databases, applications, etc.), and complex multi-component infrastructures.
Integration Steps
Generate a YaK API Token

To start the integration, generate an API token from the YaK database pod. You need:

  1. Access to the Kubernetes cluster (RKE2, for example) on which your YaK instance is deployed, with the kubectl command
  2. The namespace in which your YaK instance is deployed

Once you have access, simply run the following command (replace <yak-namespace> with the namespace in which your YaK instance is deployed):

$ kubectl -n <yak-namespace> exec -it deploy/yak-postgres -- psql -U postgres -d agoston -c 'select set_user_token(agoston_api.add_user()) as "token";'
                               token                               
-------------------------------------------------------------------
 <generated_token>
(1 row)

Store the generated YaK API token for the next steps.

AAP Resources Configuration

Log in to Ansible Automation Platform with an administrator role.

  • Execution Environment: Define a customized execution environment in AAP that includes YaK-specific dependencies and tools.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Execution Environments, then click on Create execution environment button
    Fill the form like this:
    Name: YaK EE
    Image: registry.gitlab.com/yak4all/yak_core:ee-stable
    Pull: Only pull the image if not present before running
    Registry credential: <empty> (YaK images are publicly available on GitLab repository)
    Description: Execution environment for YaK related jobs
    Organization: Default (or any other if you have a specific policy)
  • Job Settings: Update parameters to add persistency for YaK jobs.
    In the left menu, go to Settings ⟶ Job then click on Edit button
    update the parameter Paths to expose to isolated jobs, and add these lines at the end:
- /data/yak/component_types:/workspace/yak/component_types
- /data/yak/tmp:/tmp
- /data/yak/uploads:/uploads
  • Credential Types: Create custom credential types to securely handle YaK-specific credentials.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Credential Types, then click on Create credential type button
  1. YaK API:
    Name: YaK API
    Input configuration:
fields:
  - id: yak_ansible_transport_url
    type: string
    label: YaK API URL
  - id: yak_ansible_http_token
    type: string
    label: YaK API HTTP Token
    secret: true
  - id: yak_ansible_ssl_verify_certificate
    type: string
    label: Verify SSL certificate
    choices:
      - 'true'
      - 'false'
required:
  - yak_ansible_transport_url
  - yak_ansible_http_token
  - yak_ansible_ssl_verify_certificate

    - Injector configuration:

env:
  YAK_ANSIBLE_DEBUG: 'false'
  YAK_ANSIBLE_HTTP_TOKEN: '{{ yak_ansible_http_token }}'
  YAK_ANSIBLE_TRANSPORT_URL: '{{ yak_ansible_transport_url }}'
  YAK_ANSIBLE_SSL_VERIFY_CERTIFICATE: '{{ yak_ansible_ssl_verify_certificate }}'
  2. YaK API With Component:
    Name: YaK API With Component
    Input configuration:
fields:
  - id: yak_ansible_transport_url
    type: string
    label: YaK API URL
  - id: yak_ansible_http_token
    type: string
    label: YaK API HTTP Token
    secret: true
  - id: yak_ansible_ssl_verify_certificate
    type: string
    label: Verify SSL certificate
    choices:
      - 'true'
      - 'false'
  - id: yak_core_component
    type: string
    label: YaK Core Component (used for component deployment)
required:
  - yak_ansible_transport_url
  - yak_ansible_http_token
  - yak_ansible_ssl_verify_certificate

    - Injector configuration:

env:
  YAK_ANSIBLE_DEBUG: 'true'
  YAK_CORE_COMPONENT: '{{ yak_core_component }}'
  YAK_ANSIBLE_HTTP_TOKEN: '{{ yak_ansible_http_token }}'
  YAK_ANSIBLE_TRANSPORT_URL: '{{ yak_ansible_transport_url }}'
  YAK_ANSIBLE_SSL_VERIFY_CERTIFICATE: '{{ yak_ansible_ssl_verify_certificate }}'
  • Credentials: Set up credentials in AAP using the custom credential type to securely store and manage YaK API tokens.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Credential, then click on Create credential button
  1. YaK API Core:
    Name: YaK API Core
    Credential type: YaK API
    YaK API URL: <url to your yak instance>/data/graphql
    YaK API HTTP Token: <YaK API token generated previously>
    Verify SSL certificate: select true if your YaK URL has a valid SSL certificate, otherwise select false
  2. YaK API Component:
    Name: YaK API Component – <component name set in YaK>
    Credential type: YaK API With Component
    YaK API URL: <url to your yak instance>/data/graphql
    YaK API HTTP Token: <YaK API token generated previously>
    YaK Core Component (used for component deployment): <component name set in YaK>
    Verify SSL certificate: select true if your YaK URL has a valid SSL certificate, otherwise select false
  • Project: Create an AAP project pointing to your YaK repository containing playbooks.
    In the left menu, go to Automation Execution ⟶ Project, then click on Create project button
  1. YaK Core:
    Name: YaK Core
    Execution environment: YaK EE
    Source control type: Git
    Source control URL: https://gitlab.com/yak4all/yak_core.git
    Source control branch/tag/commit: <select the same release version as your deployed YaK>
    You can find the YaK release version at the bottom of the YaK left menu.
  2. YaK Component:
    Name: YaK <component type> Component
    Execution environment: YaK EE
    Source control type: Git
    Source control URL: <private git repository url to your component>
    Source control branch/tag/commit: main
    Source control credential: <the credential storing your authentication to the Git repository>
  • Inventory: Configure the inventory, aligning it with YaK’s managed targets and deployment definitions.
    In the left menu, go to Automation Execution ⟶ Infrastructure ⟶ Inventories, then click on Create inventory button
  1. YaK Inventory:
    Name: YaK Inventory

    From the YaK Inventory, go to the Sources tab, then click on the Create source button:
    - Name: YaK Core
    - Execution environment: YaK EE
    - Source: Sourced from a Project
    - Credential: YaK API Core
    - Project: YaK Core
    - Inventory file: inventory/yak.core.db.yml
    - Verbosity: 0
    - Options: Overwrite, Overwrite variables, Update on launch

  2. YaK Inventory for component (you will need to create one inventory per component you want to manage from AAP):
    Name: YaK Inventory – <component name>

    From the YaK Inventory – <component name>, go to the Sources tab, then click on the Create source button:
    - Name: YaK <component type>
    - Execution environment: YaK EE
    - Source: Sourced from a Project
    - Credential: YaK API Component – <component name>
    - Project: YaK <component type> Component
    - Inventory file: inventory/yak.core.db.yml
    - Verbosity: 0
    - Options: Overwrite, Overwrite variables, Update on launch

  • Template: Develop AAP templates leveraging YaK playbooks and workflows, enabling repeatable and consistent deployments.
    In the left menu, go to Automation Execution ⟶ Templates, then click on Create template button and select Create job template
  1. Server – Deploy:
    Name: Server – Deploy
    Job type: Run
    Inventory: YaK Inventory
    Project: YaK Core
    Playbook: servers/deploy.yml
    Execution environment: YaK EE
    Credentials: YaK API Core
    Extra variables: target: ''
    Select the checkbox Prompt on launch for the Extra variables section. This lets you set the server you want to deploy when you run the job (see the API launch sketch after this list).
  2. Your component – Deploy:
    Name: <component name> – Deploy
    Job type: Run
    Inventory: YaK Inventory – <component name>
    Project: YaK <component type> Component
    Playbook: <path to your component deployment playbook>
    Execution environment: YaK EE
    Credentials: YaK API Component – <component name>
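Once these templates exist, they can also be exercised outside the AAP UI. The sketch below is hypothetical: the controller URL, OAuth token and job template ID are placeholders, the API path may differ slightly depending on your AAP version, and it assumes the Prompt on launch checkbox for extra variables is set as described above. The first command verifies the YaK dynamic inventory locally from a checkout of the YaK Core repository, using the same environment variables the credential type injects; the second launches the Server – Deploy template through the AAP REST API with the target extra variable.

# Hypothetical local check of the YaK dynamic inventory (run from the YaK Core repository)
YAK_ANSIBLE_TRANSPORT_URL="<url to your yak instance>/data/graphql" \
YAK_ANSIBLE_HTTP_TOKEN="<YaK API token>" \
YAK_ANSIBLE_SSL_VERIFY_CERTIFICATE=false \
ansible-inventory -i inventory/yak.core.db.yml --list

# Hypothetical launch of the "Server – Deploy" job template through the AAP REST API
AAP_HOST="https://aap.example.com"   # your AAP controller URL (placeholder)
TOKEN="<your-aap-oauth-token>"       # personal access token created in AAP (placeholder)
TEMPLATE_ID=42                       # ID of the "Server – Deploy" job template (placeholder)

curl -s -X POST "${AAP_HOST}/api/v2/job_templates/${TEMPLATE_ID}/launch/" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"extra_vars": {"target": "redhat-demo"}}'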
Creating an AAP Workflow for Full-Stack Deployment

Leveraging AAP workflows enables structured, automated deployments. In this chapter we will deploy a server named redhat-demo and the attached PostgreSQL component named pg-demo. These resources have already been created in the YaK, using the UI.

  • In AAP, create a new workflow:
    In the left menu, go to Automation Execution ⟶ Templates, then click on Create template button and select Create workflow job template:
    Name: Deploy Server and PG using YaK
  • Add and connect job templates corresponding to each deployment stage, using YaK inventories and playbooks. Here is the complete workflow to create:
  1. YaK Core:
    After the Start, add a new step with the following information:
    Node type: Inventory Source Sync
    Inventory source: YaK Core
    Convergence: Any
  2. Deploy redhat-demo server:
    After the YaK Core step, add a new step with the following information:
    Node type: Job Template
    Job template: Server – Deploy
    Status: Run on success
    Convergence: Any
    Node alias: Deploy redhat-demo

    After clicking on the Next button, you will have to set the playbook extra variables:
    - Variables:

target: redhat-demo
  3. YaK Component inventory:
    After the Deploy redhat-demo step, add a new step with the following information:
    Node type: Inventory Source Sync
    Inventory source: YaK PostgreSQL
    Status: Run on success
    Convergence: Any
  4. Deploy pg-demo component:
    After the YaK PostgreSQL step, add a new step with the following information:
    Node type: Job Template
    Job template: PostgreSQL – Deploy PG demo
    Status: Run on success
    Convergence: Any
    Node alias: Deploy pg-demo
  • You can save your workflow template.
  • Initiate the workflow manually or configure scheduled runs for fully automated deployments.

By integrating YaK into AAP workflows, teams can automate entire stack deployments efficiently, achieving unprecedented consistency and speed.

Conclusion

Integrating YaK with Red Hat Ansible Automation Platform combines YaK’s ease-of-use and powerful features with AAP’s comprehensive automation capabilities. This synergy ensures that deployment processes are more structured, faster, and consistently aligned with best practices, thus significantly enhancing overall efficiency and reducing time-to-market for businesses.

The article Integrate YaK into Red Hat Ansible Automation Platform appeared first on dbi Blog.

How to: Restore a Nutanix virtual machine to AWS using HYCU R-CLOUD

Tue, 2025-04-22 02:29

In this blog I will show you how to restore a Nutanix virtual machine in AWS using HYCU R-CLOUD, formerly HYCU Protege.

Context

Our HYCU setup is composed of multiple environments, on premises in our datacenter and in the cloud in our AWS accounts. We have a HYCU instance deployed on our Nutanix cluster which is the “master” instance, meaning that all the backups are created by this instance. The backups are saved in two environments: they are written to our NAS server, and a copy of each backup is also transferred to an AWS S3 bucket in one of our AWS accounts. We also have a secondary HYCU instance in AWS; this one is in “Restore mode”, meaning that it can only restore instances and cannot perform any backup.

We created this setup with two environments and two HYCU instances to be able to restore our environment in case we lose our whole Nutanix cluster. The diagram above represents the infrastructure, with an example of a restored instance and all the temporary resources created by HYCU during the process.

HYCU setup

In the HYCU console, we have two parameters to configure so that the restore operations work.

First, we have to configure the Cloud Account:

We also have to configure the HYCU R-CLOUD account:

When this is done, we can start the restore operations.

Virtual machine Spin-up

First step: select the virtual machine we want to restore:

Then we must select the restore point and click on “SpinUp to cloud”:

Then we select our cloud provider:

In this window, we select the information about the AWS account we want to use and give some detail on the region and availability zone. The AWS account ID is gathered from the HYCU R-CLOUD configuration.

Then we have to give some more details about the virtual machine such as the shape:

We also have to give the virtual machine a network adapter, so from the previous panel we click on “Add Network Adapter” and fill the following form:

The machine needs internet access to communicate with R-CLOUD; it doesn’t need a public IP if you have a VPN and routing to the internet configured. In our case, we will give our test machine a public IP since our DR VPC does not have a running NAT gateway. Once we are done with the network setup, we click on “Add” and then on “SpinUp” in the previous window.

In the Jobs tab in our console, a new restore job has started:

From here you can follow every step of the restore such as the creation of the temporary S3 bucket. To get information, click on “View report” at the top right:

During the SpinUp, HYCU creates a temporary virtual machine that will orchestrate the cloud operations such as the creation of a temporary S3 bucket to store your virtual machine backup data. The SpinUp will also create a snapshot based on the temp S3 bucket data, then an AMI based on this snapshot and finally recreate the virtual machine based on the AMI.
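For illustration only, the sketch below approximates these steps with the AWS CLI; HYCU performs all of this automatically, and the bucket name, object key, snapshot ID, AMI ID, subnet and instance type are placeholders.

# Roughly what the temporary HYCU instance automates during a SpinUp (illustrative, placeholder IDs)
# 1. Import the disk exported to the temporary S3 bucket as an EBS snapshot
aws ec2 import-snapshot \
  --description "hycu-restore-disk" \
  --disk-container "Format=RAW,UserBucket={S3Bucket=hycu-temp-bucket,S3Key=vm-disk.raw}"

# 2. Register an AMI on top of the imported snapshot (snapshot ID returned by step 1)
aws ec2 register-image \
  --name "hycu-restored-vm" \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/xvda \
  --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}"

# 3. Launch the restored virtual machine from the AMI
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.large \
  --subnet-id subnet-0123456789abcdef0 \
  --associate-public-ip-address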

Here you can see the HYCU virtual machine and the temporary machine booted by the restore operation. The infrastructure diagram would now look like this, with all the temporary resources running:

After the SpinUp job is done, we can see in the AWS console that our virtual machine is there:

Note that the temporary HYCU instance is automatically deleted, as is the S3 bucket created earlier. We noticed during our tests that one factor has a big impact on restore time: the temporary virtual machine shape. The bigger the shape, the shorter the restore time. Just note that only HYCU support can change this parameter for you, so if you want faster restores you should raise a ticket with them.

The article How to: Restore a Nutanix virtual machine to AWS using HYCU R-CLOUD appeared first on dbi Blog.

Migrating an Oracle database to another server

Fri, 2025-04-11 10:40

There are several situations in which you have to migrate your Oracle databases to a new server, for example for hardware lifecycle reasons on on-prem systems, or because you need to upgrade your Operating System (OS) from Enterprise Linux 8 to Enterprise Linux 9. In this blog I want to talk about my recommended methods for such migrations, considering ease of use and reduced downtime. I do not cover migrations to the Oracle Cloud here, because the recommended way for that is Oracle’s Zero Downtime Migration tool.

For a migration to another server, we have different possibilities:

  • Data Pump expdp/impdp
  • Logical replication with e.g. Golden Gate
  • Setup of a Standby DB with Data Guard (or third party products like dbvisit standby for Standard Edition 2 DBs) and switchover during cutover
  • Using a refreshable PDB in case the multitenant architecture is already used. During migration, stop the source PDB and do a final refresh, stop refreshing and open the target PDB read/write.
  • Relocate a PDB
  • Unplug PDB, copy PDB-related files and Plug-In the PDB
  • RMAN backup and restore. To reduce downtime, this can also be combined with incremental backups restored regularly on the target until cutover, when a last incremental backup is applied to the target DB (a minimal sketch follows this list).
  • RMAN duplicate
  • Data Pump Full Transportable, where you set your source tablespaces read only, export the metadata and physically move datafiles to the target, where you can import the metadata.
  • Transportable tablespaces. This can be combined with Incremental Backups to do a cross platform migration to a different endian as described in MOS Note “V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)”
  • Detaching ASM devices from the old server and attaching them to the new server.
    REMARK: This is kind of what happens when migrating to a new OS-version on the Oracle Database Appliance with Data Preserving Reprovisioning (DPR). See the blogs from my colleague Jérôme Duba on that: https://www.dbi-services.com/blog/author/jerome-dubar/
  • Just copy (e.g. with scp) all needed files to the new server

There are even more possibilities, but the list above should contain a method which fits your needs. Some of the methods require the same Operating System and hardware architecture on both servers (no endian change), while others are completely independent of platform, version or endianness (like the logical migrations with Data Pump or Golden Gate).
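As a minimal sketch of the RMAN method with incremental roll-forward mentioned in the list above: the backup locations and tags are hypothetical, the backup pieces are assumed to be shipped to the same path on the target, and details such as controlfile restore, spfile handling and datafile renaming are left out.

# On the source server: one level 0 backup, then level 1 backups repeated until cutover
rman target / <<EOF
backup incremental level 0 database format '/backup/mig/db_%U' tag 'MIG_L0';
backup incremental level 1 database format '/backup/mig/db_%U' tag 'MIG_L1';
EOF

# On the target server (database already restored and mounted from the level 0 backup):
# catalog the newly shipped level 1 pieces and roll the datafiles forward,
# repeating this step until the final incremental taken at cutover
rman target / <<EOF
catalog start with '/backup/mig/' noprompt;
recover database noredo;
EOF

# At cutover, after the last incremental has been applied, open the target database
rman target / <<EOF
alter database open resetlogs;
EOF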

One of the best methods in my view is the possibility of refreshable PDBs, because

  • it is very easy to do
  • provides a short downtime during cutover
  • allows a fallback as the previous PDB is still available
  • allows migrating PDBs individually at different times
  • allows migrating non-CDBs to PDBs as well. I.e. I can refresh a non-CDB to a PDB.
  • it has been available since 12.2 and can also be used with Standard Edition 2 (SE2) DBs
  • allows going to a different Release Update (RU)
  • even allows going to a different major release and run the PDB upgrade afterwards on the target CDB
  • if the source PDB is on SE2 then the target PDB can also be on Enterprise Edition (EE)
  • moving Transparent Data Encrypted PDBs is almost as easy as moving non-encrypted PDBs
  • the initial copy of the PDB can be done very fast, as Oracle uses a block-level copy mechanism when cloning a PDB, and parallelism is allowed as well on EE
  • we can use 3 PDBs per CDB since 19c without licensing the Multitenant Option. This provides some flexibility on which CDB to move the PDB to

You may check this blog for the steps to follow when migrating through the refreshable PDB mechanism. A minimal cutover sketch is shown below.
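The following is a minimal cutover sketch for a same-version migration with a refreshable PDB. The names are hypothetical (PROD on the source CDB, PROD_NEW as the refreshable clone on the target CDB), and it assumes the clone was created earlier with REFRESH MODE MANUAL and is still closed on the target.

# On the source CDB: stop changes by reopening the PDB read only
sqlplus -s / as sysdba <<EOF
alter pluggable database PROD close immediate;
alter pluggable database PROD open read only;
exit
EOF

# On the target CDB: last refresh, detach the refresh mechanism, open read/write
sqlplus -s / as sysdba <<EOF
alter pluggable database PROD_NEW refresh;
alter pluggable database PROD_NEW refresh mode none;
alter pluggable database PROD_NEW open read write;
alter pluggable database PROD_NEW save state;
exit
EOF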

Can we migrate a 19c database to 23ai with refreshable PDBs? Yes, we can do that as shown below:

REMARK: The whole process described below can be done with the autoupgrade tool automatically. However, to see each step separately, I do this manually here.

1. Preparing the source CDB, which is on 19.22.:

sys@CDB0> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES

sys@CDB0> create user c##refresh_pdbs identified by welcome1 container=all;

User created.

sys@CDB0> grant create session, create pluggable database to c##refresh_pdbs container=all;

Grant succeeded.

2. Create the refreshable PDB

To have a connection between the Oracle Cloud and my on-prem 19.22 DB, I used the method described here through an SSH tunnel:
https://www.ludovicocaldara.net/dba/push-pdb-to-cloud/

On the target server:

[oracle@db23aigi ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Fri Apr 4 15:16:15 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai EE High Perf Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
SQL> exit
Disconnected from Oracle Database 23ai EE High Perf Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01
[oracle@db23aigi ~]$ cat clone_db.sh 
SRC_PDB=$1
TGT_PDB=$2
ALIAS=$3
 
export ORACLE_HOME=/u01/app/oracle/product/23.0.0.0/dbhome_1
export ORACLE_SID=DB23AIGI
 
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
        set timing on
        -- database link to the source CDB, used by the refreshable clone
        create database link prod_clone_link connect to c##refresh_pdbs
          identified by welcome1 using '$ALIAS';
        -- create the refreshable clone, wait two minutes, refresh once, then detach it
        create pluggable database $TGT_PDB from $SRC_PDB@prod_clone_link refresh mode manual;
        exec dbms_session.sleep(120);
        alter pluggable database $TGT_PDB refresh;
        alter pluggable database $TGT_PDB refresh mode none;
        exit
EOF
[oracle@db23aigi ~]$ 

On the source server:

oracle@pm-DB-OEL8:~/Keys/dbi-OCI/dbi3oracle/DB-systems/db23aigi/ [cdb0 (CDB$ROOT)] ssh -i ./ssh-key-2025-04-04.key opc@<public-ip-OCI> -R 1522:pm-DB-OEL8:1521 "sudo -u oracle /home/oracle/clone_db.sh PROD PROD23AI localhost:1522/PROD_PRI"

Database link created.

Elapsed: 00:00:00.01

Pluggable database created.

Elapsed: 00:06:16.42

Pluggable database altered.

Elapsed: 00:00:14.99

Pluggable database altered.

Elapsed: 00:00:00.78
oracle@pm-DB-OEL8:~/Keys/dbi-OCI/dbi3oracle/DB-systems/db23aigi/ [cdb0 (CDB$ROOT)] 

3. Upgrade the PDB to 23ai on the target server

SQL> alter pluggable database PROD23AI open upgrade;

Pluggable database altered.

SQL> select name, open_mode, restricted from v$pdbs where name='PROD23AI';

NAME				 OPEN_MODE  RES
-------------------------------- ---------- ---
PROD23AI			 MIGRATE    YES

SQL> 

[oracle@db23aigi ~]$ $ORACLE_HOME/bin/dbupgrade -c "PROD23AI" -l /tmp
....
Upgrade Summary Report Located in:
/tmp/upg_summary.log

     Time: 673s For PDB(s)

Grand Total Time: 673s 

 LOG FILES: (/tmp/catupgrd*.log)


Grand Total Upgrade Time:    [0d:0h:11m:13s]
[oracle@db23aigi ~]$ 

REMARK: As mentioned initially I should have used autoupgrade for the whole process (or just the upgrade) here as $ORACLE_HOME/bin/dbupgrade has been desupported in 23ai, but for demonstration purposes of refreshable PDBs it is OK.

4. Final steps after the upgrade

-- check the PDB_PLUG_IN_VIOLATIONS view for unresolved issues
SQL> alter session set container=PROD23AI;

Session altered.

SQL> select type, cause, message 
from PDB_PLUG_IN_VIOLATIONS 
where name='PROD23AI' and status != 'RESOLVED';  2    3  

TYPE		CAUSE			       MESSAGE
--------------- ------------------------------ ------------------------------------------------------------------------------------------
WARNING 	is encrypted tablespace?       Tablespace SYSTEM is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	is encrypted tablespace?       Tablespace SYSAUX is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	is encrypted tablespace?       Tablespace USERS is not encrypted. Oracle Cloud mandates all tablespaces should be encrypted.
WARNING 	Traditional Audit	       Traditional Audit configuration mismatch between the PDB and CDB$ROOT

SQL> administer key management set key using tag 'new own key' force keystore identified by "<wallet password>" with backup;

keystore altered.

SQL> alter tablespace users encryption online  encrypt;

Tablespace altered.

SQL> alter tablespace sysaux encryption online  encrypt;

Tablespace altered.

SQL> alter tablespace system encryption online  encrypt;

Tablespace altered.

SQL> exec dbms_pdb.CLEAR_PLUGIN_VIOLATIONS;

PL/SQL procedure successfully completed.

SQL> select type, cause, message 
from PDB_PLUG_IN_VIOLATIONS 
where name='PROD23AI' and status != 'RESOLVED';

no rows selected


-- Recompile invalid objects using the utlrp.sql script:
SQL> alter session set container=PROD23AI;
 
Session altered.
 
SQL> @?/rdbms/admin/utlrp.sql
 
PL/SQL procedure successfully completed.

-- Downtime ends. Check the DBA_REGISTRY_SQLPATCH view:
SQL> alter session set container=PROD23AI;
 
Session altered.
 
SQL> select patch_id, patch_type, status, description, action_time from dba_registry_sqlpatch order by action_time desc;

  PATCH_ID PATCH_TYPE STATUS	 DESCRIPTION						      ACTION_TIME
---------- ---------- ---------- ------------------------------------------------------------ --------------------------------
  37366180 RU	      SUCCESS	 Database Release Update : 23.7.0.25.01 (37366180) Gold Image 04-APR-25 04.00.06.975353 PM

Summary:

If you haven’t done this yet, then I recommend migrating to the multitenant architecture as soon as possible. It makes several DBA tasks much easier. In particular, the migration to a new server with refreshable PDBs is very easy to do, with low downtime, high flexibility and almost no impact on the source PDB during refreshes. On top of that, you do not lose your source PDB during the process and may go back to it in case tests show that the target is not working correctly.

The article Migrating an Oracle database to another server appeared first on dbi Blog.

What’s New in M-Files 25.3

Thu, 2025-04-10 10:20

I’m not a big fan of writing a post for each new release, but I think this last one is a big step towards what M-Files will become in the coming months.
M-Files 25.3 was released to the cloud on March 30th and has been available for download and auto-update since April 2nd. It brings a suite of powerful updates designed to improve document management efficiency and user experience.
Here’s a breakdown of the most notable features, improvements, and fixes.

New Features and Improvements

Admin Workflow State Changes in M-Files Web

System administrators can now override any workflow state directly from the context menu in M-Files Web using the new “Change state (Admin)” option. This allows for greater control and quicker resolution of workflow issues.

Zero-Click Metadata Filling

When users drag and drop new objects into specific views, required metadata fields can now be automatically prefilled without displaying the metadata card. This creates a seamless and efficient upload process.

Object-Based Hierarchies Support

Object-based hierarchies are now available on the metadata card in both M-Files Web and the new Desktop interface, providing more structured data representation.

Enhanced Keyboard Navigation

Improved keyboard shortcuts now allow users to jump quickly to key interface elements like the search bar and tabs, streamlining navigation for power users.

Document Renaming in Web and Desktop

Users can now rename files in M-Files Web and the new Desktop interface via the context menu or the F2 key, making file management more intuitive.

Default gRPC Port Update

The default gRPC port for new vault connections is now set to 443, improving compatibility with standard cloud environments and simplifying firewall configurations.

AutoCAD 2025 Support

The M-Files AutoCAD add-in is now compatible with AutoCAD 2025, ensuring continued integration with the latest CAD workflows.

Fixes and Performance Enhancements
  • Drag-and-Drop Upload Error Resolved: Fixed a bug that caused “Upload session not found” errors during file uploads.
  • Automatic Property Filling: Ensured property values now update correctly when source properties are modified.
  • Version-Specific Links: Resolved an issue where links pointed to the latest version rather than the correct historical version.
  • Anonymous User Permissions: Closed a loophole that allowed anonymous users to create and delete views.
  • Theme Display Consistency: Custom themes now persist correctly across multiple vault sessions.
  • Office Add-In Fixes: Resolved compatibility issues with merged cells in Excel documents.
  • Date & Time Accuracy: Fixed timezone issues that affected Date & Time metadata.
  • Metadata Card Configuration: Ensured proper application of workflow settings.
  • Annotation Display in Web: Annotations are now correctly tied to their document versions.
  • Improved Link Functionality: Object ID-based links now work as expected in the new Desktop client.
Conclusion

M-Files 25.3 introduces thoughtful improvements that empower both administrators and end-users. From seamless metadata handling to improved keyboard accessibility and robust error fixes, this release makes it easier than ever to manage documents effectively.

Stay tuned for more insights and tips on making the most of your M-Files solution with us!

The article What’s New in M-Files 25.3 appeared first on dbi Blog.
