Yann Neuhaus

Patching SUSE Multi Linux Manager

Wed, 2025-07-30 04:10

In the last post about SUSE Multi Linux Manager we had a look at how you can schedule OpenSCAP reports using the API. In this post we’ll look into something very basic: how can you patch the server components of SUSE Multi Linux Manager? We speak of components because you need to patch both the host (a SLE Micro in this case) and the container hosting the application.

Looking at the host operating system we can see this is a SLE Micro 5.5:

suma:~ $ cat /etc/os-release 
NAME="SLE Micro"
VERSION="5.5"
VERSION_ID="5.5"
PRETTY_NAME="SUSE Linux Enterprise Micro 5.5"
ID="sle-micro"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sle-micro:5.5"

As this comes with a read-only root file system, we cannot directly use zypper to patch the system. The tool to use in this case is transactional-update. It still uses zypper in the background, but the updates are installed into a new Btrfs snapshot. With this approach the running system is not touched at all, and the updates only become active once the system is rebooted into the new snapshot (which automatically becomes the default boot snapshot). If something is wrong with the new snapshot, the system can be booted from the old snapshot and is back to what it was before patching.
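
Should the new snapshot misbehave, going back is a single command. A minimal sketch, assuming snapshot 5 is the known-good snapshot you want to return to (the snapshot number is only an example):

suma:~ $ # set snapshot 5 as the new default snapshot, then reboot into it
suma:~ $ transactional-update rollback 5
suma:~ $ reboot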

Before we patch the host system let’s have a look at the snapshots we currently have available:

suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6* | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |  58.62 MiB |         | Snapshot Update of #5 |         

Let’s patch and compare what we’ll have afterwards:

suma:~ $ zypper ref
Warning: The gpg key signing file 'repomd.xml' has expired.
  Repository:       SLE-Micro-5.5-Updates
  Key Fingerprint:  FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
  Key Name:         SuSE Package Signing Key <build@suse.de>
  Key Algorithm:    RSA 2048
  Key Created:      Mon 21 Sep 2020 10:21:47 AM CEST
  Key Expires:      Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
  Rpm Name:         gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SLE-Micro-5.5-Updates' metadata ..............................................................................................................[done]
Building repository 'SLE-Micro-5.5-Updates' cache ...................................................................................................................[done]
Warning: The gpg key signing file 'repomd.xml' has expired.
  Repository:       SUSE-Manager-Server-5.0-Updates
  Key Fingerprint:  FEAB 5025 39D8 46DB 2C09 61CA 70AF 9E81 39DB 7C82
  Key Name:         SuSE Package Signing Key <build@suse.de>
  Key Algorithm:    RSA 2048
  Key Created:      Mon 21 Sep 2020 10:21:47 AM CEST
  Key Expires:      Fri 20 Sep 2024 10:21:47 AM CEST (EXPIRED)
  Rpm Name:         gpg-pubkey-39db7c82-5f68629b
Retrieving repository 'SUSE-Manager-Server-5.0-Updates' metadata ....................................................................................................[done]
Building repository 'SUSE-Manager-Server-5.0-Updates' cache .........................................................................................................[done]
Repository 'SLE-Micro-5.5-Pool' is up to date.                                                                                                                             
Repository 'SUSE-Manager-Server-5.0-Pool' is up to date.                                                                                                                   
All repositories have been refreshed.

suma:~ $ transactional-update 
Checking for newer version.
transactional-update 4.1.9 started
Options: 
Separate /var detected.
2025-07-30 09:42:32 tukit 4.1.9 started
2025-07-30 09:42:32 Options: -c6 open 
2025-07-30 09:42:33 Using snapshot 6 as base for new snapshot 7.
2025-07-30 09:42:33 /var/lib/overlay/6/etc
2025-07-30 09:42:33 Syncing /etc of previous snapshot 5 as base into new snapshot "/.snapshots/7/snapshot"
2025-07-30 09:42:33 SELinux is enabled.
ID: 7
2025-07-30 09:42:36 Transaction completed.
Calling zypper up
2025-07-30 09:42:38 tukit 4.1.9 started
2025-07-30 09:42:38 Options: callext 7 zypper -R {} up -y --auto-agree-with-product-licenses 
2025-07-30 09:42:39 Executing `zypper -R /tmp/transactional-update-JsIr01 up -y --auto-agree-with-product-licenses`:
Refreshing service 'SUSE_Linux_Enterprise_Micro_5.5_x86_64'.
Refreshing service 'SUSE_Manager_Server_Extension_5.0_x86_64'.
Loading repository data...
Reading installed packages...

The following 21 packages are going to be upgraded:
  boost-license1_66_0 libboost_system1_66_0 libboost_thread1_66_0 libpolkit-agent-1-0 libpolkit-gobject-1-0 mgradm mgradm-bash-completion mgrctl mgrctl-bash-completion polkit python3-pyparsing python3-pytz python3-PyYAML python3-requests python3-salt python3-simplejson python3-urllib3 salt salt-minion salt-transactional-update uyuni-storage-setup-server

21 packages to upgrade.

Package download size:    16.8 MiB

Package install size change:
              |      71.4 MiB  required by packages that will be installed
   654.0 KiB  |  -   70.8 MiB  released by packages that will be removed

Backend:  classic_rpmtrans
Continue? [y/n/v/...? shows all options] (y): y

...
2025-07-30 09:44:40 New default snapshot is #7 (/.snapshots/7/snapshot).
2025-07-30 09:44:40 Transaction completed.

Please reboot your machine to activate the changes and avoid data loss.
New default snapshot is #7 (/.snapshots/7/snapshot).
transactional-update finished

As noted above we must reboot the system for the updates to become active. Before we do that, let’s again have a look at the snapshots:

suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6- | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |   4.11 MiB |         | Snapshot Update of #5 |              
7+ | single |       | Wed 30 Jul 2025 09:42:32 AM CEST | root |  88.39 MiB |         | Snapshot Update of #6 |      

We got a new snapshot (number 7), which is not yet active. Let’s reboot and check again:

suma:~ $ reboot
...
suma:~ $ snapper list
 # | Type   | Pre # | Date                             | User | Used Space | Cleanup | Description           | Userdata     
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+--------------
0  | single |       |                                  | root |            |         | current               |              
1  | single |       | Fri 08 Mar 2024 10:45:41 AM CET  | root |   1.30 GiB | number  | first root filesystem | important=yes
2  | single |       | Mon 07 Jul 2025 12:18:08 PM CEST | root |   1.51 MiB | number  | Snapshot Update of #1 | important=yes
3  | single |       | Mon 07 Jul 2025 12:30:01 PM CEST | root |   1.02 MiB | number  | Snapshot Update of #2 | important=yes
4  | single |       | Tue 08 Jul 2025 05:33:39 AM CEST | root |  39.78 MiB | number  | Snapshot Update of #3 | important=yes
5  | single |       | Wed 16 Jul 2025 09:25:23 AM CEST | root |  45.07 MiB |         | Snapshot Update of #4 |              
6  | single |       | Wed 23 Jul 2025 04:13:09 PM CEST | root |   4.11 MiB |         | Snapshot Update of #5 |              
7* | single |       | Wed 30 Jul 2025 09:42:32 AM CEST | root |  88.39 MiB |         | Snapshot Update of #6 |            

The new snapshot became active and we’re fully patched on the host system.
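
If you want to double-check that nothing is pending anymore, listing the remaining updates should come back empty (an optional verification):

suma:~ $ zypper list-updates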

Now that the host system is fully patched, we can proceed with patching the SUSE Multi Linux Manager application. Before we do that, let’s check what we currently have:

suma:~ $ mgradm inspect
10:40AM INF Welcome to mgradm
10:40AM INF Executing command: inspect
10:40AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1
10:40AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.4.1 is available
WARN[0002] Path "/etc/SUSEConnect" from "/etc/containers/mounts.conf" doesn't exist, skipping 
10:40AM INF 
{
  "CurrentPgVersion": "16",
  "ImagePgVersion": "16",
  "DBUser": "spacewalk",
  "DBPassword": "<REDACTED>",
  "DBName": "susemanager",
  "DBPort": 5432,
  "UyuniRelease": "",
  "SuseManagerRelease": "5.0.4.1",
  "Fqdn": "suma.dwe.local"
}

The currently running version is “5.0.4.1”. Patching is quite simple as this just updates the container:

suma:~ $ mgradm upgrade podman
10:41AM INF Welcome to mgradm
10:41AM INF Use of this software implies acceptance of the End User License Agreement.
10:41AM INF Executing command: podman
...
10:41AM INF No changes requested for hub. Keep 0 replicas.
10:41AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5
10:41AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5 is available
10:42AM INF Cannot find RPM image for registry.suse.com/suse/manager/5.0/x86_64/server-hub-xmlrpc-api:5.0.5

Checking the version again:

suma:~ $ mgradm inspect
10:36AM INF Welcome to mgradm
10:36AM INF Use of this software implies acceptance of the End User License Agreement.
10:36AM INF Executing command: inspect
10:36AM INF Computed image name is registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5
10:36AM INF Ensure image registry.suse.com/suse/manager/5.0/x86_64/server:5.0.5 is available
10:36AM ??? time="2025-07-30T10:36:20+02:00" level=warning msg="Path \"/etc/SUSEConnect\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping"
10:36AM INF 
{
  "CurrentPgVersion": "16",
  "ImagePgVersion": "16",
  "DBUser": "spacewalk",
  "DBPassword": "<REDACTED>",
  "DBName": "susemanager",
  "DBPort": 5432,
  "UyuniRelease": "",
  "SuseManagerRelease": "5.0.5",
  "Fqdn": "suma.dwe.local"
}

Now we are on version “5.0.5” and we’re done with patching the server part. Clients should also be upgraded, especially the Salt client, as SUSE Multi Linux Manager uses Salt to manage the clients. You can either do that manually using the package manager of the distributions you’re managing, or from the WebUI.
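
Done manually on a SLES client, this could look like the following. Note this is only an illustrative sketch: client is a placeholder host name, and depending on how the client was bootstrapped the package may be salt-minion or venv-salt-minion, so check what is installed first:

client:~ $ rpm -qa | grep -E 'salt-minion|venv-salt-minion'
client:~ $ zypper refresh
client:~ $ zypper update salt-minion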

That’s it, not hard to do and an easy process to follow.

The article Patching SUSE Multi Linux Manager first appeared on dbi Blog.

What will happen to your sparse clones during a switchover on ExaCC?

Tue, 2025-07-29 09:57

As you might know, on ExaCC it is possible to create sparse clones. This is based on snapshot technology: there is no physical copy, only the modified blocks are written to the sparse clone, and in our case these blocks are stored in an ASM sparse disk group. And of course, the sparse clone needs a read-only master PDB. This article is not intended to explain the whole functioning of this technology; for more information I recommend reading the following blog article:

https://blogs.oracle.com/exadata/post/exadata-sparse-clones-and-monitor-sparse-disk-groups-size

The intention of this article is to show what happens to your sparse clone during a switchover. In a multitenant environment, the sparse clone PDB is seen on the standby side in MOUNTED status, but it is not accessible there. The sparse clone is created locally, that is to say on the primary side, and is only accessible on that site. The sparse clone is storage dependent, and primary and standby do not share the same storage. So after a switchover, the sparse clone is not accessible until a switchover back to the initial primary is performed. Let’s see how that works!
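
As a side note, a quick way to see which PDB is a snapshot copy and on which container it is based is a query like the following, run from CDB$ROOT. This is only an illustrative check and not part of the original test case:

SQL> select pdb_name, snapshot_parent_con_id from dba_pdbs;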

Lab description

For the demonstration we will use:

  • an ExaCC cluster named exacc-cl01 (currently the primary side) on which we will have a CDB named SWIT001T_CHZ2
  • an ExaCC cluster named exacc-cl02 (currently the standby side) on which we will have the physical standby CDB named SWIT001T_CHZ3
  • the PDB will be SWTET_APP_001T
  • the Master Read Only PDB will be SWTET_TMR_001T
  • the sparse clone PDB will be SWTET_APP_002T

Master Read Only creation

The command used to create the Master Read Only on the primary CDB was:

SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';

SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
            KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
            PARALLEL 8
            SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
            REFRESH MODE MANUAL
            STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;

SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:17 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
SQL>

On the standby side, this looks like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 14:46:48 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

Which is correct, as the Master Read Only SWTET_TMR_001T was created with the STANDBYS=NONE option.
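
If you want to confirm that redo apply is skipped for this PDB on the standby, a query like the following on the standby CDB can be used; for a PDB created with STANDBYS=NONE it would typically show recovery disabled (again, just an illustrative check):

SQL> select name, open_mode, recovery_status from v$pdbs;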

Sparse clone creation

The command used to create the sparse clone on the primary CDB was:

SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
                KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
                PARALLEL 8
                CREATE_FILE_DEST='+SPRC2'
                SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
                SNAPSHOT COPY
                STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:03:18 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
         5 SWTET_APP_002T                 READ WRITE NO
SQL>

The sparse clone PDB is opened as expected in READ/WRITE mode.

On the standby side, this looks like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:37:11 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED
SQL>

Which is correct, as we used the STANDBYS=NONE option to create the sparse clone, and in any case the snapshot only exists locally since it is storage dependent.

Create some data in the sparse clone

Let’s create some data in the sparse clone.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:28:39 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 READ ONLY  NO
         5 SWTET_APP_002T                 READ WRITE NO

SQL> alter session set container=SWTET_APP_002T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> create table TEST_SWITCH (id int, name varchar(100), run_exec date);

Table created.

SQL> insert into TEST_SWITCH values (0, 'Before switch', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32

SQL>

Switchover to CHZ3

Let’s switch over from CHZ2 to CHZ3.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] dgh
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Fri Jun 6 15:36:06 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys@SWIT001T_CHZ2
Password:
Connected to "SWIT001T_CHZ2"
Connected as SYSDBA.

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ2 - Primary database
    SWIT001T_CHZ3 - Physical standby database
                    Transport Lag:      0 seconds (computed 0 seconds ago)
                    Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 46 seconds ago)

DGMGRL> validate database SWIT001T_CHZ3

  Database Role:     Physical standby database
  Primary Database:  SWIT001T_CHZ2

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    SWIT001T_CHZ2:  YES
    SWIT001T_CHZ3:  YES

  Temporary Tablespace File Information:
    SWIT001T_CHZ2 TEMP Files:  5
    SWIT001T_CHZ3 TEMP Files:  3

DGMGRL> switchover to SWIT001T_CHZ3;
Performing switchover NOW, please wait...
Operation requires a connection to database "SWIT001T_CHZ3"
Connecting ...
Connected to "SWIT001T_CHZ3"
Connected as SYSDBA.
New primary database "SWIT001T_CHZ3" is opening...
Oracle Clusterware is restarting database "SWIT001T_CHZ2" ...
Connected to "SWIT001T_CHZ2"
Switchover succeeded, new primary is "swit001t_chz3"
DGMGRL>

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ3 - Primary database
    SWIT001T_CHZ2 - Physical standby database
                    Transport Lag:      0 seconds (computed 1 second ago)
                    Apply Lag:          0 seconds (computed 1 second ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 7 seconds ago)

Create a table in the source PDB on the new primary cl02

To check how the sparse clone would react after the switchover, let’s add some data to the source PDB on the new primary (CHZ3).

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:40:16 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> create table TEST_ON_STDBY (id int, name varchar(100), run_exec date);

Table created.

SQL> insert into TEST_ON_STDBY values (0, 'Cl08 is primary', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ WRITE NO
		 
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl02n1

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19

SQL>

Check the table on the new standby side cl01

We can check the data in the source PDB on the standby side, on cluster cl01.

oracle@exacc-cl01n1:~/ [grinf19] SWIT001T1

 ********************************************
 INSTANCE_NAME   : SWIT001T1
 DB_NAME         : SWIT001T
 DB_UNIQUE_NAME  : SWIT001T_CHZ2
 STATUS          : OPEN READ ONLY WITH APPLY
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 4/19
 DATABASE_ROLE   : PHYSICAL STANDBY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB$SEED  SWTET_APP_001T  SWTET_APP_002T  SWTET_TMR_001T
 ********************************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-06-06 15:45:25

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:45:27 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ ONLY  NO

The Master Read Only PDB is in MOUNTED status, and opening it in READ ONLY mode is not possible. This makes sense, as it is the reference for the sparse clone and we created it with the STANDBYS=NONE option.

SQL> alter pluggable database SWTET_TMR_001T open read only instances=all;
alter pluggable database SWTET_TMR_001T open read only instances=all
*
ERROR at line 1:
ORA-01173: data dictionary indicates missing data file from system tablespace

Let’s check whether the source PDB has the newly created data on the new standby side cl01.

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ ONLY  NO

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19

SQL>

Yes, we have the new data.

Run switchover back

Let’s switch back to cl01.

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ3 - Primary database
    SWIT001T_CHZ2 - Physical standby database
                    Transport Lag:      0 seconds (computed 0 seconds ago)
                    Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 11 seconds ago)

DGMGRL> validate database SWIT001T_CHZ2;

  Database Role:     Physical standby database
  Primary Database:  SWIT001T_CHZ3

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    SWIT001T_CHZ3:  YES
    SWIT001T_CHZ2:  YES

  Temporary Tablespace File Information:
    SWIT001T_CHZ3 TEMP Files:  3
    SWIT001T_CHZ2 TEMP Files:  5

DGMGRL> switchover to SWIT001T_CHZ2;
Performing switchover NOW, please wait...
Operation requires a connection to database "SWIT001T_CHZ2"
Connecting ...
Connected to "SWIT001T_CHZ2"
Connected as SYSDBA.
New primary database "SWIT001T_CHZ2" is opening...
Oracle Clusterware is restarting database "SWIT001T_CHZ3" ...
Connected to "SWIT001T_CHZ3"
Switchover succeeded, new primary is "swit001t_chz2"

DGMGRL> show configuration lag

Configuration - swit001t_dgconf

  Protection Mode: MaxPerformance
  Members:
  SWIT001T_CHZ2 - Primary database
    SWIT001T_CHZ3 - Physical standby database
                    Transport Lag:      0 seconds (computed 5 seconds ago)
                    Apply Lag:          0 seconds (computed 5 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 20 seconds ago)

On the standby side, this would look like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:51:33 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED
SQL>

On the primary side, this would look like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:52:24 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ WRITE NO

Reopening the Master Read Only would fail.

SQL> alter pluggable database SWTET_TMR_001T open read only instances=all;
alter pluggable database SWTET_TMR_001T open read only instances=all
*
ERROR at line 1:
ORA-01173: data dictionary indicates missing data file from system tablespace

Add data in source PDB

Adding new data to the source PDB is, of course, successful.

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 SWTET_APP_001T                 READ WRITE NO

SQL> insert into TEST_ON_STDBY values (1,'primary back to cl07',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19
         1 primary back to cl07                                                                                 06/06/2025 15:54:54

SQL>

And the data is also available in the source PDB on the standby side.

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:56:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 MOUNTED

SQL> alter session set container=SWTET_APP_001T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl02n1

SQL> select * from TEST_ON_STDBY;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Cl08 is primary                                                                                      06/06/2025 15:43:19
         1 primary back to cl07                                                                                 06/06/2025 15:54:54

SQL>

Add new data in the sparse clone

Let’s add new data in the sparse clone.

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 15:57:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
         5 SWTET_APP_002T                 READ WRITE NO

SQL> alter session set container=SWTET_APP_002T;

Session altered.

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> set lines 300

SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
SWIT001T1        exacc-cl01n1

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32

SQL> insert into TEST_SWITCH values (1,'After switchover and back cl07', sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from TEST_SWITCH;

        ID NAME                                                                                                 RUN_EXEC
---------- ---------------------------------------------------------------------------------------------------- -------------------
         0 Before switch                                                                                        06/06/2025 15:33:32
         1 After switchover and back cl07                                                                       06/06/2025 15:59:01

SQL>

Adding new data into the sparse clone is possible.

Drop the sparse clone

The commands to drop the sparse clone would be:

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T CLOSE IMMEDIATE INSTANCES=ALL;

SQL> DROP PLUGGABLE DATABASE SWTET_APP_002T INCLUDING DATAFILES;

On the primary side, this would now look like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:25:32 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

On the standby side, this would now look like:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:26:03 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_TMR_001T                 MOUNTED
SQL>

Refresh Master Read Only

We can refresh the Master Read Only with the following commands.

SQL> drop pluggable database SWTET_TMR_001T including datafiles;

SQL> CREATE PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2 CONNECT TO C##USER IDENTIFIED BY ".................." USING 'SWIT001T_CHZ2';

SQL> CREATE PLUGGABLE DATABASE SWTET_TMR_001T FROM SWTET_APP_001T@"SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2"
            KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
            PARALLEL 8
            SERVICE_NAME_CONVERT=('SWTET_APP_001T','SWTET_TMR_001T')
            REFRESH MODE MANUAL
            STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_TMR_001T OPEN READ ONLY INSTANCES=ALL;

SQL> DROP PUBLIC DATABASE LINK SWIT001T_e9085be389a239c888e5c0baf4f50405@SWIT001T_CHZ2;

Create the sparse clone again

And we can create the sparse clone again with the same command.

SQL> CREATE PLUGGABLE DATABASE SWTET_APP_002T FROM SWTET_TMR_001T
                KEYSTORE IDENTIFIED BY "..." INCLUDING SHARED KEY
                PARALLEL 8
                CREATE_FILE_DEST='+SPRC2'
                SERVICE_NAME_CONVERT=('SWTET_TMR_001T','SWTET_APP_002T','SWTET_APP_001T','SWTET_APP_002T')
                SNAPSHOT COPY
                STANDBYS=NONE;

SQL> ALTER PLUGGABLE DATABASE SWTET_APP_002T OPEN INSTANCES=ALL;

On the primary side, this looks like:

oracle@exacc-cl01n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:12 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ WRITE NO
         4 SWTET_APP_002T                 READ WRITE NO
         5 SWTET_TMR_001T                 READ ONLY  NO
SQL>

And on the standby side:

oracle@exacc-cl02n1:~/ [SWIT001T1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 6 17:38:38 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 SWTET_APP_001T                 READ ONLY  NO
         4 SWTET_APP_002T                 MOUNTED
         5 SWTET_TMR_001T                 MOUNTED
SQL>

To wrap up…

Sparse clones work successfully in a Data Guard environment. A sparse clone is only available locally, and therefore only when the CDB that initially hosted the sparse clone PDB has the primary role. Switching over to the other site is not a problem: the sparse clone becomes available again once you switch back. Finally, we can easily refresh the Master Read Only and create the sparse clone again.

The article What will happen to your sparse clones during a switchover on ExaCC? first appeared on dbi Blog.

dbaascli database move failing with ORA-01691 on ExaCC

Mon, 2025-07-28 10:59

I recently had to move a database from a 19.23 to a 19.26 dbhome on an ExaCC, which also means the database gets patched. The dbaascli database move command failed during the datapatch step with the following error:

DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM

With this blog post, I would like to share my troubleshooting and how I resolved the problem.

Moving database to new dbhome

In order to move/patch the database, I used the dbaascli database move command.

[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST
DBAAS CLI version 25.1.1.0.0
Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST
Job id: 17f6fef5-2143-4bb7-8007-a27cf5232099
Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_02-47-51-PM_237874.log
Loading PILOT...
Session ID of the current execution is: 15920
Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_02-47-55-PM_238225
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_user_input job
Completed validate_user_input job
-----------------
Running validate_database job
[WARNING] [DBAAS-70643] Following pluggable databases '{CDBTEST=[PDB_001T]}' do not have services configured.
   ACTION: Make sure to configure the services of pluggable databases so that pluggable databases are started after the database bounce.
Completed validate_database job
-----------------
Running validate_creg_file_existence job
Completed validate_creg_file_existence job
-----------------
Running validate_source_home job
Completed validate_source_home job
-----------------
Running validate_major_version job
Completed validate_major_version job
-----------------
Running validate_oracle_home_type job
Completed validate_oracle_home_type job
-----------------
Running check_target_source_home_not_same job
Completed check_target_source_home_not_same job
-----------------
Running validate_home_existence job
Completed validate_home_existence job
-----------------
Running validate_home_consistency job
Completed validate_home_consistency job
-----------------
Running validate_home_options job
Completed validate_home_options job
-----------------
Running validate_disk_space job
Completed validate_disk_space job
-----------------
Acquiring write lock: cdbtest
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
Running pre_patch_lock_manager job
Completed pre_patch_lock_manager job
-----------------
Running enable_wallet_root job
Completed enable_wallet_root job
-----------------
Running copy_config_files job
Completed copy_config_files job
-----------------
Running stop_database_instance-exacc-cl01n1 job
Completed stop_database_instance-exacc-cl01n1 job
-----------------
Running update_database_resource-exacc-cl01n1 job
Completed update_database_resource-exacc-cl01n1 job
-----------------
Running start_database_instance-exacc-cl01n1 job
Completed start_database_instance-exacc-cl01n1 job
-----------------
Running exacs_post_patch_node_updation job
Completed exacs_post_patch_node_updation job
-----------------
Running update_dba_directories job
Completed update_dba_directories job
-----------------
Running datapatch_and_recompile_invalid_objects job
Datapatch execution on database 'CDBTEST' is in progress
Execution of datapatch_and_recompile_invalid_objects failed
[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
*MORE DETAILS*
Result of node:exacc-cl01n1
[Interim patch 28318139 (ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.36.095921 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 28555193 (DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.37.111786 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 30944402 (SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.269998 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 32067171 (OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.35.074017 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 33192694 (OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.262033 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 33522539 (MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.271219 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 34786990 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.17.242116 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 34972375 (DATAPUMP BUNDLE PATCH 19.18.0.0.0):,   Binary registry: Not installed,   PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.19.038902 AM,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)):,   Binary registry: Not installed,   PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.17.362926 AM,   PDB CDB$ROOT: Applied successfully on 18-APR-24 10.08.56.234640 AM,   PDB PDB$SEED: Applied successfully on 18-APR-24 10.19.33.039940 AM, Interim patch 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0):,   Binary registry: Not installed,   PDB PDB_001T: Applied successfully on 17-AUG-24 09.00.21.159959 AM,   PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.47.34.300679 AM,   PDB PDB$SEED: Applied successfully on 02-OCT-24 11.50.34.101031 AM, Interim patch 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)):,   Binary registry: Installed,   PDB PDB_001T: Not installed,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Interim patch 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0):,   Binary registry: Installed,   PDB PDB_001T: Not installed,   PDB CDB$ROOT: Not installed,   PDB PDB$SEED: Not installed, Current state of release update SQL patches:,   Binary registry:,     19.26.0.0.0 Release_Update 250118124854: Installed,   PDB PDB_001T:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 17-AUG-24 08.59.50.930598 AM,   PDB CDB$ROOT:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.19.15.621942 AM,   PDB PDB$SEED:,     Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.28.06.019862 AM, Adding patches to installation queue and 
performing prereq checks...done, Installation queue:,   For the following PDBs: CDB$ROOT PDB$SEED PDB_001T,     The following interim patches will be rolled back:,       36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)),       26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK),       27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999),       36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0),     Patch 37260974 (Database Release Update : 19.26.0.0.250121 (37260974)):,       Apply from 19.23.0.0.0 Release_Update 240406004238 to 19.26.0.0.0 Release_Update 250118124854,     The following interim patches will be applied:,       37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)),       26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK),       27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999),       37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM, ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN,            INSERT INTO sys.dba_registry_sqlpatch_ru_info,              (patch_id,,               patch_uid,,               patch_descriptor,,               ru_version,,               ru_build_description,,               ru_build_timestamp,,               patch_directory),            VALUES,              (:patch_id,,               :patch_uid,,               :patch_descriptor,,               :ru_version,,               :ru_build_description,,               TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),,               :patch_directory);,            COMMIT;,          END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4b4dcb0), :patch_directory='PK.........y4Z�&(.�...........37260974_rollback.sql�.]s.ֺ.��+t��ng�.o����v.�?2�����F..�$ . [���.@.q���dQ.�Tqc��û��z���rv�8.�������.���..����(M�ϳ��´.]%�,u
                                                                                                    �z��.�8���.��.s�f�����,J�Wa.�a6�y�.zp�������,..�?�l���t���..�/fW�(j�}pp�8^��..~w�N.�?�..�.��.(Z���e��.������b���.�������K6E�+�?��A.ln._�.=.�qp����������.������ǫ.qﻋ��.���.�.β �\/x��.M�Q�_�q�ý7.-N�Tory�|���y�&E..y��i��..)�N.�.7��.׌��������oqxz|�|st�X...������G'��.N�_�.?..*�.�..../����.��?|�pWn.�*��;;;.}��.�.�+����..fJ�`., 7+��z�>.?
                                                                                                                                                        �Y�.�.��...�.,...N��9��.��.Y�rp����O��8_���O��%�.E�F�.��t�|��.��.�Q|�L�����y.��[�'�|.���.�.y�ë.�:xy�..pS>��.|U.��r�/j�?=:�\...������.�������ry�����QY�J.�.}N��.�.S%������j�^j.=7T�Z�e.W�z.�?..�>��?�w�.{w9{������/Nj�������f])�.�����.O�*�
��϶!����?�.��&)���I�tzv�x��., �LU.�IV�..7.R7.y��&h{*�3.�mJm}n(, _�...��2�Lݰ�r*ç..�S7ղ@�V��.1>���'�2�.��., ...�9.m)s.�S.ȴ�T��K.�.$!'�..;�Z�.#T�.��(�f.��.�i.pBK��h�fX*ߩ...��i�, �.*d�.�e..6.Oy!*�.:�.MQ4s���.�...�SW��|��...F�OC�,e�..�"/��:5�...', :patch_id='37260974', :patch_uid='26040769', :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337., Please refer to MOS Note 1609718.1 and/or the invocation log, /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_258023_2025_05_05_14_50_06/sqlpatch_invocation.log, for information on how to resolve the above errors., SQL Patching tool complete on Mon May  5 14:50:40 2025]
Exit code of the operation:1
Releasing lock: cdbtest
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
*** Executing jobs which need to be run always... ***
-----------------
Running post_patch_lock_manager job
Completed post_patch_lock_manager job
******** PLUGIN EXECUTION FAILED ********
To resume this failed session, run the following command:
dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume

As we can see, the command failed with the following error:

[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
7470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
Troubleshooting

So let’s troubleshoot!

The database has been moved to the new dbhome, and the dbaascli command failed at the datapatch level.
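
As an additional, optional check, srvctl can confirm which Oracle home the clusterware resource now points to, using the DB_UNIQUE_NAME shown in the status output below:

$ srvctl config database -db CDBTEST_CHZ2 | grep -i 'Oracle home'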

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [rdbms1900] CDBTEST1
2025-05-05_14-56-12::DMK_ENV.pm::read_oratab            ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1

 **********************************
 INSTANCE_NAME   : CDBTEST1
 DB_NAME         : CDBTEST
 DB_UNIQUE_NAME  : CDBTEST_CHZ2
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB_001T  PDB$SEED
 **********************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-05-05 14:56:12

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] echo $ORACLE_HOME
/u02/app/oracle/product/19.0.0.0/dbhome_3

And the PDB is open in READ WRITE mode.

oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 14:56:31 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO
SQL>

Let’s try to run datapatch manually, expecting it to fail with the same error, but this might make it easier to extract the datapatch output.

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/ [CDBTEST1 (CDB$ROOT)] cd OPatch/

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] ./datapatch -verbose
SQL Patching tool version 19.26.0.0.0 Production on Mon May  5 14:57:00 2025
Copyright (c) 2012, 2025, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK):
  Binary registry: Not installed
  PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.18.040319 AM
  PDB CDB$ROOT: Not installed
  PDB PDB$SEED: Not installed
Interim patch 26749785 (PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK):
  Binary registry: Not installed
  PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.56.646477 AM
  PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.46.11.005940 AM
  PDB PDB$SEED: Applied successfully on 02-OCT-24 11.47.47.794119 AM
...
...
...
DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN
           INSERT INTO sys.dba_registry_sqlpatch_ru_info
             (patch_id,
              patch_uid,
              patch_descriptor,
              ru_version,
              ru_build_description,
              ru_build_timestamp,
              patch_directory)
           VALUES
             (:patch_id,
              :patch_uid,
              :patch_descriptor,
              :ru_version,
              :ru_build_description,
              TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),
              :patch_directory);
           COMMIT;
         END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4611c98), :patch_directory='PK.........y4Z▒&(.▒...........37260974_rollback.sql▒.]s.ֺ.▒▒+t▒▒ng▒.o▒▒▒▒v.▒͌?2▒▒▒▒▒F..▒$ . [▒▒▒.@.q▒▒▒dQ.▒Tqc▒▒û▒▒z▒▒▒rv▒8.▒▒▒▒▒▒▒.▒▒▒..▒▒▒▒(M▒ϳ▒▒´.]%▒,u
                                                                                                                                                                                                                                                              ▒z▒▒.▒8▒▒▒.▒▒.s▒f▒▒▒▒▒,J▒Wa.▒a6▒y▒.zp▒▒▒▒▒▒▒,..▒?▒l▒▒▒t▒▒▒..▒/fW▒(j▒}pp▒8^▒▒..~w▒N.▒?▒..▒.▒▒.(Z▒▒▒e▒▒.▒▒▒▒▒▒b▒▒▒.▒▒▒▒▒▒▒K6E▒+▒?▒▒A.ln._▒.=.▒ܸqp▒▒▒▒▒▒▒▒▒▒.▒▒▒▒▒▒ǫ.qﻋ▒▒.▒▒▒.▒.β ▒\/x▒.M▒Q▒_▒q▒ý7.-N▒Tory▒|▒▒▒y▒&E..y▒▒i▒▒..)▒N.▒.7▒▒.׌▒▒▒▒▒▒▒▒oqxz|▒|st▒X...▒▒▒▒▒▒G'▒▒.N▒_▒.?..*▒.▒..../▒▒▒▒.▒▒?|▒pWn.▒*▒▒;;;.}▒▒.▒.▒+▒▒▒▒..fJ▒`.
7+▒▒z▒>.?
         ▒Y▒.▒.▒▒...▒.,...N▒▒9▒.▒▒.Y▒rp▒▒ⷷ▒▒O▒8_▒▒▒O▒▒%▒.E▒F▒.▒▒t▒|▒▒.▒▒.▒Q|▒L▒▒▒▒y.▒▒[▒'▒|.▒▒▒.▒.y▒ë.▒:xy▒..pS>▒▒.|U.▒▒r▒/j▒?=:▒\...▒▒▒▒▒▒.▒▒▒▒▒▒▒ry▒▒▒▒▒QY▒J.▒.}N▒▒.▒.S%▒▒▒▒▒▒j▒^j.=7T▒Z▒e.W▒z.▒?..▒>▒▒?▒w▒.{w9{▒▒▒▒▒▒/Nj▒▒▒▒▒▒▒f])▒.▒▒▒▒.O▒*▒
▒▒϶!▒▒▒▒?▒.▒▒&)▒▒▒I▒tzv▒x▒▒.
▒LU.▒IV▒..7.R7.y▒▒&h{*▒3.▒mJm}n(
_▒...▒▒2▒Lݰ▒r*ç..▒S7ղ@▒V▒▒.1>▒▒▒'▒2▒.▒▒.
...▒9.m)s.▒S.ȴ▒T▒▒K.▒.$!'▒..;▒Z▒.#T▒.▒▒(▒f.▒▒.▒i.pBK▒▒h▒fX*ߩ...▒▒i▒
▒.*d▒.▒e..6.Oy!*▒.:▒.MQ4s▒▒▒.▒...▒SW▒▒|▒▒...F▒OC▒,e▒..▒"/▒▒:5▒...', :patch_id="37260974", :patch_uid="26040769", :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337.


Please refer to MOS Note 1609718.1 and/or the invocation log
/u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Mon May  5 14:57:34 2025

Let’s check SYSTEM tablespace usage in the CDB$ROOT.

oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 15:02:00 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> @qdbstbssize.sql

PL/SQL procedure successfully completed.


                             Nb      Extent Segment    Alloc.      Space        Max. Percent Block
Name                      files Type Mgmnt  Mgmnt    Size (GB)  Free (GB)  Size (GB)  used % size  Log Encrypt Compress
------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- --------
SYSAUX                        1 DATA LM-SYS AUTO          5.95       4.47   32768.00     .00 8 KB  YES YES     NO
SYSTEM                        1 DATA LM-SYS MANUAL        1.95        .62   32768.00     .00 8 KB  YES YES     NO
TEMP                          1 TEMP LM-UNI MANUAL        1.00       2.31     512.00    -.26 8 KB  NO  YES     NO
UNDOTBS1                      1 UNDO LM-SYS MANUAL        1.95       1.89     512.00     .01 8 KB  YES YES     NO
USERS                         1 DATA LM-SYS AUTO          1.00        .93   32768.00     .00 8 KB  YES YES     NO
                          -----                     ---------- ---------- ----------
TOTAL                         5                          11.85      10.22   99328.00

SQL>

We can see all is ok. Let’s check the same in the PDB.

SQL> alter session set container=PDB_001T;

Session altered.

SQL> @qdbstbssize.sql
...
...
...
SYSAUX                        1 DATA         LM-SYS AUTO         82.62      74.52   32768.00     .02 8 KB  YES YES     NO
SYSTEM                        1 DATA         LM-SYS MANUAL       24.71        .10   32768.00     .08 8 KB  YES YES     NO
TEMP                          1 TEMP         LM-UNI MANUAL      114.26     342.75   32768.00    -.70 8 KB  NO  YES     NO
UNDO                          1 UNDO         LM-SYS MANUAL      684.57     673.58   32768.00     .03 8 KB  YES YES     NO
USERS                         1 DATA         LM-SYS AUTO         20.00      19.93      20.00     .33 8 KB  YES YES     NO
                          -----                             ---------- ---------- ----------
TOTAL                      2732                                7101.04    1385.79 ##########

2732 rows selected.

SQL>

It’s all good as well. And the pdb$seed?

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> @qdbstbssize.sql

PL/SQL procedure successfully completed.


                             Nb      Extent Segment    Alloc.      Space        Max. Percent Block
Name                      files Type Mgmnt  Mgmnt    Size (GB)  Free (GB)  Size (GB)  used % size  Log Encrypt Compress
------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- --------
SYSAUX                        1 DATA LM-SYS AUTO           .59        .08   32768.00     .00 8 KB  YES NO      NO
SYSTEM                        1 DATA LM-SYS MANUAL         .59        .00   32768.00     .00 8 KB  YES NO      NO
TEMP                          1 TEMP LM-UNI MANUAL         .18        .00   32768.00     .00 8 KB  NO  NO      NO
UNDOTBS1                      1 UNDO LM-SYS MANUAL         .59        .33     512.00     .05 8 KB  YES NO      NO
                          -----                     ---------- ---------- ----------
TOTAL                         4                           1.95        .41   98816.00

SQL>

All is ok as well.

Let’s check where this SYS_LOB belongs to. cdb$root? PDB? pdb$seed?

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

TABLESPACE_NAME                TABLE_NAME
------------------------------ --------------------------------------------------------------------------------------------------------------------------------
SYSTEM                         REGISTRY$SQLPATCH_RU_INFO

SQL> alter session set container=cdb$root;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

no rows selected

SQL> alter session set container=PDB_001T;

Session altered.

SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS';

no rows selected

SQL>

So the LOB belongs to the pdb$seed.

From the alert log I can see:

PDB$SEED(2):Pluggable database PDB$SEED opening in read write
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):This instance was first to open pluggable database PDB$SEED (container=2)
PDB$SEED(2):queued attach DA request 0xb2325ed8 for pdb 2, ospid 3457
2025-05-05T15:11:01.061406+02:00
Domain Action Reconfiguration started (domid 2, new da inc 19, cluster inc 2)
Instance 1 is attaching to domain 2
 Global Resource Directory partially frozen for domain action
Domain Action Reconfiguration complete (total time 0.0 secs)
2025-05-05T15:11:01.067485+02:00
PDB$SEED(2):Endian type of dictionary set to little
PDB$SEED(2):Undo initialization recovery: Parallel FPTR failed: start:1139322373 end:1139322379 diff:6 ms (0.0 seconds)
PDB$SEED(2):Undo initialization recovery: err:0 start: 1139322373 end: 1139322392 diff: 19 ms (0.0 seconds)
PDB$SEED(2):[3457] Successfully onlined Undo Tablespace 2.
PDB$SEED(2):Undo initialization online undo segments: err:0 start: 1139322392 end: 1139322638 diff: 246 ms (0.2 seconds)
PDB$SEED(2):Undo initialization finished serial:0 start:1139322373 end:1139322643 diff:270 ms (0.3 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is AL32UTF8
PDB$SEED(2):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x0, new 0x0 (no suplog)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
2025-05-05T15:11:01.806446+02:00
PDB$SEED(2):joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0.0/dbhome_3/javavm/admin/, pid 3457 cid 2
Pluggable database PDB$SEED opened read write
2025-05-05T15:11:03.081311+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:09.187099+02:00
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:23.765174+02:00
PDB$SEED(2):ORA-1691: unable to extend lobsegment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM [PDB$SEED] (ospid 3221)
2025-05-05T15:11:24.482146+02:00
PDB$SEED(2):Pluggable database PDB$SEED closing

So we can see that once the pdb$seed is opened in read write mode, datapatch fails because the SYS.SYS_LOB0000023009C00008$$ lob segment cannot be extended in the SYSTEM tablespace.
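
Before going to ASM, a quick sanity check that I find useful here (my own sketch, not part of the original troubleshooting output) is to confirm the size of that LOB segment and whether the SYSTEM datafile in the pdb$seed is allowed to autoextend. If the datafile can still grow, the limiting factor must be the underlying storage:

-- Hypothetical check inside the pdb$seed (standard dictionary views)
alter session set container=PDB$SEED;

select segment_name, bytes/1024/1024 as size_mb
from   dba_segments
where  owner = 'SYS'
and    segment_name = 'SYS_LOB0000023009C00008$$';

select file_name, autoextensible, bytes/1024/1024 as size_mb, maxbytes/1024/1024 as max_mb
from   dba_data_files
where  tablespace_name = 'SYSTEM';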

Let’s look into the ASM to see if there is enough disk space.

ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  N         512             512   4096  4194304   49072128      56708                0           18902              0             Y  DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13063776                0         4354592              0             N  RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0             N  SPRC4/
ASMCMD>

There is not much usable space left in the DATA disk group, but it is not completely full either. Let’s open the pdb$seed in read write mode and try to create a table.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> shutdown immediate
Pluggable Database closed.

SQL> alter pluggable database open read write;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ WRITE NO

SQL> create table xxx (n number) tablespace system;

Table created.

SQL> insert into xxx values (10);

1 row created.

SQL> commit;

Commit complete.

SQL> drop table xxx purge;

Table dropped.

This works. Let’s now try to resize the SYSTEM data file.

SQL> alter database datafile 2 resize 2048M;
alter database datafile 2 resize 2048M
*
ERROR at line 1:
ORA-01237: cannot extend datafile 2
ORA-01110: data file 2: '+DATAC4/CDBTEST_CHZ2/165C22D4CFFCA759E0638A534664B46C/DATAFILE/system.615.1181302705'
ORA-17505: ksfdrsz:1 Failed to resize file to size 262144 blocks
ORA-15041: diskgroup "DATAC4" space exhausted

This is not possible, which confirms that the DATAC4 disk group does not have enough available space.

First, I reopen the pdb$seed in read only mode.

SQL> shutdown immediate
Pluggable Database closed.

SQL> alter pluggable database open read only;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
SQL>

The following MOS note explains the problem: one or more disks having 0 free MB:

Datapatch failed with ORA-01691: unable to extend lob segment on MGMTDB (Doc ID 2352895.1)

Let’s check, connecting to the +ASM instance.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select group_number, name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 DATAC4
           2 RECOC4
           3 SPRC4

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

DISK_NUMBER    FREE_MB
----------- ----------
         48          0

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        95

SQL>

Yes, this is exactly the problem! Disk number 48 has 0 free MB.
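
To get a better picture of how unbalanced the disk group is, one could also list the individual disks sorted by their remaining free space (again a sketch of mine, not taken from the original session):

-- Hypothetical follow-up on the +ASM instance: disks of the DATA disk group, least free space first
select disk_number, name, total_mb, free_mb
from   v$asm_disk
where  group_number = 1
order  by free_mb;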

Resolution

Let’s resolve the problem.

Through the OCI console, I added 1 TB to the ASM storage. The usable space of the disk groups then looked as follows:

ASMCMD> lsdg
State    Type  Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  HIGH  Y         512             512   4096  4194304   50959872    1932164                0          644054              0             Y  DATAC4/
MOUNTED  HIGH  N         512             512   4096  4194304   16356864   13058556                0         4352852              0             N  RECOC4/
MOUNTED  HIGH  N         512             512   4096  4194304  163568640  163563936                0        54521312              0             N  SPRC4/
ASMCMD>

I checked to ensure there are no more disks with 0 free MB.

[grid@exacc-cl01n1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Mon May 5 16:28:11 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

no rows selected

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        96

SQL>

I resumed the dbaascli move operation, which then completed successfully.

[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
DBAAS CLI version 25.1.1.0.0
Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
Job id: 7e59d18a-ca1b-4808-a927-c3ce9131b968
Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_05-52-50-PM_350983.log
Loading PILOT...
Session ID of the current execution is: 15932
Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_05-52-54-PM_351354
-----------------
Running initialization job
Completed initialization job
-----------------
Acquiring write lock: cdbtest
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
Running datapatch_and_recompile_invalid_objects job
Datapatch execution on database 'CDBTEST' is in progress
Datapatch execution on database 'CDBTEST' is complete
Recompilation of invalid objects on database 'CDBTEST' is in progress
Recompilation of invalid objects on database 'CDBTEST' is complete
Completed datapatch_and_recompile_invalid_objects job
-----------------
Running update_pdb_status job
Completed update_pdb_status job
-----------------
Running post_patch_lock_manager job
Completed post_patch_lock_manager job
Releasing lock: cdbtest
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1
Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3
-----------------
Running invoke_backup_asst job
Completed invoke_backup_asst job
-----------------
Running post_move_validation job
Completed post_move_validation job
-----------------
Running generate_dbsystem_details job
Acquiring native write lock: global_dbsystem_details_generation
Releasing native lock: global_dbsystem_details_generation
Completed generate_dbsystem_details job

dbaascli execution completed
You have new mail in /var/spool/mail/root

Then I checked the database patch level for the cdb$root and the PDB.

oracle@exacc-cl01n1:~/ [rdbms1900] CDBTEST1
2025-05-05_18-01-21::DMK_ENV.pm::read_oratab            ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1

 **********************************
 INSTANCE_NAME   : CDBTEST1
 DB_NAME         : CDBTEST
 DB_UNIQUE_NAME  : CDBTEST_CHZ2
 STATUS          : OPEN READ WRITE
 LOG_MODE        : ARCHIVELOG
 USERS/SESSIONS  : 2/7
 DATABASE_ROLE   : PRIMARY
 FLASHBACK_ON    : YES
 FORCE_LOGGING   : YES
 VERSION         : 19.26.0.0.0
 CDB_ENABLED     : YES
 PDBs            : PDB_001T  PDB$SEED
 **********************************

 PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only
 Statustime: 2025-05-05 18:01:22

oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:01:25 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> set lines 300
SQL> set tab off
SQL> set pages 500

SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';

Session altered.

SQL> col action_time for a30
SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch;

ACTION_TIME                      PATCH_ID SOURCE_VERSION  TARGET_VERSION  STATUS                    DESCRIPTION
------------------------------ ---------- --------------- --------------- ------------------------- ----------------------------------------------------------------------------------------------------
18-APR-24 10.08.56.234640 AM     36199232 19.1.0.0.0      19.23.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
18-APR-24 10.19.15.621942 AM     36233263 19.1.0.0.0      19.23.0.0.0     SUCCESS                   Database Release Update : 19.23.0.0.240416 (36233263)
02-OCT-24 11.46.11.005940 AM     26749785 19.23.0.0.0     19.23.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
02-OCT-24 11.46.13.343508 AM     27605010 19.23.0.0.0     19.23.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
02-OCT-24 11.47.34.300679 AM     36420641 19.23.0.0.0     19.23.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.27.25.226657 PM     36199232 19.23.0.0.0     19.26.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
05-MAY-25 05.27.26.115643 PM     37102264 19.23.0.0.0     19.26.0.0.0     SUCCESS                   OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)
05-MAY-25 05.27.26.697687 PM     26749785 19.23.0.0.0     19.26.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.27.28.734483 PM     27605010 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.27.28.839820 PM     36420641 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.28.35.890001 PM     37260974 19.23.0.0.0     19.26.0.0.0     SUCCESS                   Database Release Update : 19.26.0.0.250121 (37260974)
05-MAY-25 05.28.44.283893 PM     26749785 19.23.0.0.0     19.26.0.0.0     SUCCESS                   PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.28.46.272241 PM     27605010 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.29.22.101438 PM     37470729 19.23.0.0.0     19.26.0.0.0     SUCCESS                   DATAPUMP BUNDLE PATCH 19.26.0.0.0

14 rows selected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                 READ WRITE NO

SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                 READ WRITE NO

SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch;

ACTION_TIME                      PATCH_ID SOURCE_VERSI TARGET_VERSI STATUS               DESCRIPTION
------------------------------ ---------- ------------ ------------ -------------------- ----------------------------------------------------------------------------------------------------
19-JUN-21 09.56.16.648813 AM     32067171 19.1.0.0.0   19.1.0.0.0   SUCCESS              OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)
19-JUN-21 09.56.16.641919 AM     32218454 19.1.0.0.0   19.10.0.0.0  SUCCESS              Database Release Update : 19.10.0.0.210119 (32218454)
19-JUN-21 09.56.17.529234 AM     28318139 19.1.0.0.0   19.1.0.0.0   SUCCESS              ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL
19-JUN-21 09.56.18.407598 AM     28555193 19.1.0.0.0   19.1.0.0.0   WITH ERRORS          DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
19-JUN-21 10.13.13.165620 AM     28555193 19.10.0.0.0  19.10.0.0.0  SUCCESS              DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
06-MAR-22 10.57.35.074017 AM     32067171 19.13.0.0.0  19.13.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)
06-MAR-22 10.57.39.351946 AM     33192694 19.10.0.0.0  19.10.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)
06-MAR-22 10.57.36.095921 AM     28318139 19.13.0.0.0  19.13.0.0.0  SUCCESS              ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL
06-MAR-22 10.57.37.111786 AM     28555193 19.13.0.0.0  19.13.0.0.0  SUCCESS              DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
06-MAR-22 10.57.39.348199 AM     33192793 19.10.0.0.0  19.13.0.0.0  SUCCESS              Database Release Update : 19.13.0.0.211019 (33192793)
06-MAR-22 10.57.40.368792 AM     30944402 19.10.0.0.0  19.10.0.0.0  SUCCESS              SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS
06-MAR-22 10.57.41.384551 AM     33522539 19.10.0.0.0  19.10.0.0.0  SUCCESS              MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027
19-AUG-23 09.58.11.262033 AM     33192694 19.18.0.0.0  19.18.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)
19-AUG-23 09.58.13.172768 AM     34786990 19.13.0.0.0  19.13.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)
19-AUG-23 09.58.11.269998 AM     30944402 19.18.0.0.0  19.18.0.0.0  SUCCESS              SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS
19-AUG-23 09.58.11.271219 AM     33522539 19.18.0.0.0  19.18.0.0.0  SUCCESS              MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027
19-AUG-23 09.58.13.169832 AM     34765931 19.13.0.0.0  19.18.0.0.0  SUCCESS              DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
19-AUG-23 09.58.13.340768 AM     26749785 19.13.0.0.0  19.13.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
19-AUG-23 09.58.13.346063 AM     27605010 19.13.0.0.0  19.13.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
19-AUG-23 09.58.14.253669 AM     34972375 19.13.0.0.0  19.13.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.18.0.0.0
17-AUG-24 08.59.17.242116 AM     34786990 19.18.0.0.0  19.23.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)
17-AUG-24 08.59.17.362926 AM     36199232 19.18.0.0.0  19.23.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
17-AUG-24 08.59.18.040319 AM     26749785 19.18.0.0.0  19.23.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
17-AUG-24 08.59.18.971587 AM     27605010 19.18.0.0.0  19.23.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
17-AUG-24 08.59.19.038902 AM     34972375 19.18.0.0.0  19.23.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.18.0.0.0
17-AUG-24 08.59.50.930598 AM     36233263 19.18.0.0.0  19.23.0.0.0  SUCCESS              Database Release Update : 19.23.0.0.240416 (36233263)
17-AUG-24 08.59.56.646477 AM     26749785 19.18.0.0.0  19.23.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
17-AUG-24 08.59.57.504574 AM     27605010 19.18.0.0.0  19.23.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
17-AUG-24 09.00.21.159959 AM     36420641 19.18.0.0.0  19.23.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.31.01.733241 PM     36199232 19.23.0.0.0  19.26.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)
05-MAY-25 05.31.03.413696 PM     37102264 19.23.0.0.0  19.26.0.0.0  SUCCESS              OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)
05-MAY-25 05.31.04.821299 PM     26749785 19.23.0.0.0  19.26.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.31.06.205197 PM     27605010 19.23.0.0.0  19.26.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.31.06.379357 PM     36420641 19.23.0.0.0  19.26.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.23.0.0.0
05-MAY-25 05.32.25.947558 PM     37260974 19.23.0.0.0  19.26.0.0.0  SUCCESS              Database Release Update : 19.26.0.0.250121 (37260974)
05-MAY-25 05.32.32.616612 PM     26749785 19.23.0.0.0  19.26.0.0.0  SUCCESS              PERF_DIAG  NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK
05-MAY-25 05.32.33.746247 PM     27605010 19.23.0.0.0  19.26.0.0.0  SUCCESS              DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999
05-MAY-25 05.35.48.324333 PM     37470729 19.23.0.0.0  19.26.0.0.0  SUCCESS              DATAPUMP BUNDLE PATCH 19.26.0.0.0

38 rows selected.

And finally I checked PDB violations.

oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:10:56 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                 READ WRITE NO

SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                 READ WRITE NO

SQL> col name for a15
SQL> col message for a120
SQL> col cause for a50
SQL> set lines 300
SQL> col cause for a20
SQL> col type for a10

SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status <> 'RESOLVED';

NAME            CAUSE                TYPE       MESSAGE                                                                                                                  STATUS
--------------- -------------------- ---------- ------------------------------------------------------------------------------------------------------------------------ ---------------------------
PDB_001T  OPTION               WARNING    Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                          PENDING
PDB_001T  OPTION               WARNING    Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                               PENDING
PDB_001T  OPTION               WARNING    Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                            PENDING
PDB_001T  OPTION               WARNING    Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING
PDB_001T  OPTION               WARNING    Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.                              PENDING

8 rows selected.

SQL>

Yes, that’s it, all is ok now.

To wrap up…

The database move to a higher version dbhome on the ExaCC failed on the datapatch step because one of the ASM disks in the DATA disk group had 0 free MB. Adding space to the ASM storage resolved the issue.
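
To catch this earlier next time, a simple pre-patching check could be scheduled on the +ASM instance (my own sketch, not part of the original procedure): report the usable space per disk group and flag any disk that is completely full.

-- Hypothetical pre-patching check on the +ASM instance
select name, total_mb, free_mb, usable_file_mb
from   v$asm_diskgroup;

select g.name as diskgroup, d.disk_number, d.free_mb
from   v$asm_disk d
join   v$asm_diskgroup g on g.group_number = d.group_number
where  d.free_mb = 0;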

L’article dbaascli database move failing with ORA-01691 on ExaCC est apparu en premier sur dbi Blog.

What brings the M-Files and Microsoft 365 Strategic Partnership

Sun, 2025-07-27 04:11
Official image of the M-Files and Microsoft 365 strategic partnership

On the 1st of July 2025, M-Files and Microsoft announced their strategic partnership with the headline “M-Files and Microsoft Announce Strategic Partnership Revolutionising Document Management”.
I was wondering what this will mean for us as an M-Files partner and for our customers. What are the new features and benefits? In this blog article I will share the information and my perspective on this fantastic announcement.

One of the hot topics is the fundamental shift in how enterprise content is managed within the M-Files and Microsoft ecosystems.

Main Features

Thanks to the strategic partnership between M-Files and Microsoft, M-Files will now be able to use features such as the ones below, unlocking native Microsoft 365 capabilities for M-Files AI-curated content.

  • Microsoft 365 and its API-only service
  • SharePoint Embedded
  • Document co-authoring

This powerful solution is native to Microsoft 365, offering the best of both worlds.

M-Files strengths:

  • M-Files automation, including its strengths in metadata
  • Use one client to access data and features to make work easier

Microsoft’s power:

  • Copilot capabilities
  • Collaboration tools
  • Security integration

Benefits we get out of the strategic partnership

  • Generative AI results based on the M-Files content
  • Be assured that the results are accurate, relevant and can be trusted
  • Lay the foundation for the AI journey
  • Native document co-authoring with Microsoft applications
  • Native collaboration with the trusted Microsoft tools
  • M-Files automated content governance and permissions, while content remains secure within the Microsoft security boundary
  • Use of Microsoft Purview to apply compliance and governance policies

Requirements and availability of the solution

Requirements for M-Files and Microsoft 365:

  • M-Files Cloud
  • M-Files Business platform
  • Microsoft 365 subscription

Desktop co-authoring is already enabled for M-Files Cloud customers with a Business Platform subscription. Detailed instructions on how to unlock all the benefits and capabilities will be available in August 2025. Stay tuned — I will share further details as soon as they are available!

Conclusion

As noted in previous blogs, the direction of travel is towards incorporating AI in the optimal way while ensuring solid security boundaries.

This announcement is important because it provides a centralised, scalable environment for managing and supporting your business requirements. Topics include effortlessly finding documents, managing the entire document lifecycle, effective collaboration, gaining actionable insights faster, and ensuring robust governance. Using AI to achieve all this will boost productivity and reduce risk for your business.

If you would like to hear the perspective of a Microsoft architect on this strategic partnership, you can watch this video.

I hope this blog gives you a good idea of the strategic partnership between M-Files and Microsoft. Keep an eye out for more blog posts about M-Files on this channel, and be sure to check out the other blogs around the M-Files topic.

Do not hesitate to get in touch with us if you have any questions, or contact me directly if you prefer.

L’article What brings the M-Files and Microsoft 365 Strategic Partnership est apparu en premier sur dbi Blog.

Dctm – Invalid/Missing XSRF token on D2

Wed, 2025-07-23 13:25

During an upgrade project to Documentum 23.4, I faced yet another interesting behavior from D2. The Documentum Server upgrade went smoothly for all components like the Connection Broker, Repositories, D2, etc… The OTDS was also set up and working properly without too much trouble. However, the deployment of the D2 war file was another story.

As usual, we try to make sure that Documentum and all its components are set up as securely as possible. From a WebServer point of view, that includes a bunch of best practices that we add into our deployments / custom images (when using containers), and D2 is no exception. One such thing is, for example, to set up the Tomcat and D2 application to work only with cookies that have the “secure” and “httpOnly” flags. That is done in a few locations, but in recent versions of D2, there are additional parameters inside the ESAPI.properties file to help control this kind of behavior.

Note: there is often confusion about the “httpOnly” flag for cookies, so a quick reminder won’t hurt. The “secure” flag means that the cookie can only be sent over HTTPS (except when using localhost), so it is much harder to intercept. The “httpOnly” one, contrary to its name, doesn’t mean that the cookie is only for HTTP communications, but that it cannot be accessed by client-side scripts like JavaScript. Therefore, sensitive cookies should have both flags, so that they travel over the network securely and, once they arrive in the client’s browser, their access is protected.
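
For reference, at the Tomcat / application level this kind of hardening is usually done through the standard Servlet session cookie configuration. A minimal sketch of what that typically looks like in a web.xml (shown only as an illustration, independent of the D2-specific ESAPI.properties parameters discussed below):

<!-- Standard Servlet 3.0+ session cookie configuration (illustration only) -->
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>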

Therefore, as a good practice, I went ahead and configured D2 as securely as I could, even before a first deployment, and that included these 4 parameters:

[tomcat@d2-0 war_prep]$ grep -B1 -E "ForceHttpOnly|ForceSecure" WEB-INF/classes/ESAPI.properties
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true 
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 war_prep]$

Once my D2 WAR file was ready and configured, I deployed it on Tomcat. There were no errors/issues during the deployment/startup of D2. However, accessing the D2 UI ended up with the pretty, never-ending D2 loading logo. You have probably all seen that happen at some point:

Nothing in the D2 logs (generated through the logback.xml or log4j2.properties), but in the Tomcat logs I could see the stack related to that issue when I accessed the URL a few minutes after Tomcat was fully up & running:

2025-07-08 14:25:56,379 UTC INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [$CATALINA_HOME/webapps/D2/D2.war] has finished in [57,704] ms
2025-07-08 14:25:56,382 UTC INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8080"]
2025-07-08 14:25:56,400 UTC INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [57846] milliseconds
2025-07-08 14:29:36,966 UTC SEVERE [https-jsse-nio-8080-exec-42] org.apache.catalina.core.ApplicationContext.log Key[type=com.emc.x3.server.services.labels.RpcLabelServiceImpl, annotation=[none]]: An RpcTokenException was thrown while processing this call.
	com.google.gwt.user.client.rpc.RpcTokenException: Invalid RPC token (Missing XSRF token: not on request, client IP=xxx.xxx.xxx.xxx)
		at com.emc.x3.server.D2XsrfProtectedServiceServlet.validateXsrfToken(D2XsrfProtectedServiceServlet.java:33)
		at com.google.gwt.user.server.rpc.AbstractXsrfProtectedServiceServlet.onAfterRequestDeserialized(AbstractXsrfProtectedServiceServlet.java:66)
		at com.emc.x3.server.GuiceRemoteServiceServlet.processCall(GuiceRemoteServiceServlet.java:120)
		at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
		at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
		at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
		at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
		at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:290)
		at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:280)
		at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:184)
		at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:89)
		at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
		at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
		at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.executeChain(X3OTDSAuthenticationFilter.java:1106)
		at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
		at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
		at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
		at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:458)
		at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:373)
		at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
		at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
		at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
		at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:370)
		at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
		at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
		at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
		at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
		at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
		at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
		at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:52)
		at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
		at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
		at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
		at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
		at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
		at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
		at org.apache.catalina.valves.StuckThreadDetectionValve.invoke(StuckThreadDetectionValve.java:185)
		at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
		at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:663)
		at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:731)
		at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
		at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
		at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:397)
		at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
		at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:905)
		at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
		at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
		at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
		at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
		at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
		at java.base/java.lang.Thread.run(Thread.java:840)

I checked that the XSRF token was indeed generated, and it appeared to be present in the request, at least as shown in the browser’s network traces (Developer Tools). It was being generated and forwarded by the browser with both the “secure” and “httpOnly” flags. So, what was the issue then? It took me a bit of time, but I could pinpoint it to the ESAPI.properties file, more specifically to the 4 properties mentioned above that control the flags of both cookies and sessions. To be exact, I suspected the “httpOnly” flag for the cookies to be the issue, since it would hide the XSRF_TOKEN from JavaScript on the client side. Keeping the content of the exploded WAR folder the same, I switched this one parameter back to false, which is the default value:

[tomcat@d2-0 ~]$ esapi_file="$CATALINA_HOME/webapps/D2/WEB-INF/classes/ESAPI.properties"
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ sed -i 's,\(HttpUtilities.ForceHttpOnlyCookies\)=true,\1=false,' ${esapi_file}
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=false
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$

After a restart of Tomcat, the issue was completely gone and the login to D2 through the OTDS worked successfully… Since I could deploy D2-REST, D2-Smartview and D2-Config with all 4 ESAPI.properties parameters set to “true“, I asked OT if it was expected that only D2 has a problem with “HttpUtilities.ForceHttpOnlyCookies=true“. After a few days of exchange, I got the feedback that it is not documented, but D2 apparently requires the “httpOnly” flag to NOT be set because of the XSRF_TOKEN. They will look into creating a KB for that topic (update: it was created and it’s available here: KB0845279). If you really need to have both flags set, then you will have no other choice than to switch to the new UI, D2-Smartview.

L’article Dctm – Invalid/Missing XSRF token on D2 est apparu en premier sur dbi Blog.

Scheduling OpenSCAP reports in SUSE Multi-Linux Manager

Wed, 2025-07-23 09:44

As we’ve recently supported some customers on SUSE Multi Linux Manager, I’d like to share something which was not as easy to implement as it appeared to be in the first place. But first of all, what is SUSE Multi Linux Manager? It is basically a fork of Spacewalk, which was also used as the upstream project by the Red Hat Satellite product. When Spacewalk was discontinued and the project on GitHub was archived, some people decided to fork it and started a new project called Uyuni, and this is now the upstream project for SUSE Multi Linux Manager. One of the great things about Uyuni and SUSE Multi Linux Manager is that they support various Linux distributions such as SUSE and openSUSE distributions, Red Hat, Rocky, Oracle and Alma Linux, Debian, Ubuntu, and also ancient versions of CentOS if you still depend on them.

I am not going into the setup or basic configuration as you can already find related blogs here and more information in the documentation:

What I want to look at in this post is automatic scheduling of OpenSCAP scans/reports. When this requirement came up, it seemed pretty easy to do, as you can easily schedule such a scan against a single system. As you can see below I have a Red Hat 9 system registered to my SUSE Multi Linux Server:

What you can easily do out of the box is to manually schedule an OpenSCAP scan:

Once the scan completes, it becomes visible under the “List Scan” tab and you can browse into the details:

Quite easy to do, but still a manual action. As we wanted to have it automated, the obvious choice was to create a “Recurring Action”:

This gives you the option to create and configure a “Recurring Action”:

The issue is that there is no pre-defined “Custom State” which schedules an OpenSCAP scan:

The very same is true for “System Groups”, which you normally would use because otherwise you’d need to schedule that on every single system:

The last option seemed to be something under “Schedule” but this only gives you a list of what you already have:

At this point we were stuck and had to talk to SUSE support, which really was a great experience by the way. It turned out there is no easy, built-in way to do this. A feature request has been logged, but of course there is no guarantee that it will be implemented.

But, there is a workaround, not a very beautiful one, but at least it works. SUSE Multi Linux Manager (and Uyuni of course) come with an API and there is one call for triggering an OpenSCAP scan. Using this, a custom state channel can be created which in turn calls the API to trigger the scan:

The “SLS Contents” actually contains the code (Python in this case) which talks to the API and triggers the scan:

/usr/local/bin/schedule_xccdf_scan.py:
  file.managed:
    - user: root
    - group: root
    - mode: 755
    - contents: |
        #!/usr/bin/python3
        import xmlrpc.client

        client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
        key = client.auth.login('admin', 'xxxx')
        client.system.scap.scheduleXccdfScan(
            key,
            1000010000,
            '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
            '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
        )
        client.auth.logout(key)

schedule_xccdf_scan:
  cmd.run:
    - name: /usr/local/bin/schedule_xccdf_scan.py
    - require:
      - file: /usr/local/bin/schedule_xccdf_scan.py

I am not going into the code itself; it should be easy to understand. The important part is the system ID in line 14. This defines the system you want the scan to run on (you can also provide an array of systems, see the API documentation linked above).
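
If you want to avoid hard coding a single system, the same API call also accepts an array of system IDs, as mentioned above. A minimal sketch of that variant, assuming the same server, credentials and data stream as in the state above and with purely hypothetical system IDs:

#!/usr/bin/python3
# Sketch only: schedule the same XCCDF scan for several systems at once
import xmlrpc.client

client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
key = client.auth.login('admin', 'xxxx')

# Hypothetical system IDs, e.g. taken from the UI
system_ids = [1000010000, 1000010001]

client.system.scap.scheduleXccdfScan(
    key,
    system_ids,
    '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
    '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
)
client.auth.logout(key)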

As soon as you have this, you can schedule this automatically as a recurring action on either the system itself, or a group of systems in “System Groups”:

Not as easy as it could be, and the systems are still hard coded in the Python code, but at least we have something that works. Hope that helps.

L’article Scheduling OpenSCAP reports in SUSE Multi-Linux Manager est apparu en premier sur dbi Blog.

SQL Server 2025 – Optimized Locking

Tue, 2025-07-22 03:32

Within the past few days I was investigating one of the new features of SQL Server 2025: Optimized Locking. I was curious about the capabilities, the behaviour and also the limits of the feature. Optimized Locking is based on two primary components:

  • Transaction ID (TID)
  • Lock after qualification (LAQ)

Here the transaction ID works as a unique identifier for a transaction. Each row modified by this transaction is labelled with its transaction ID. This produces only one single lock on the TID, which is used instead of many key or RID locks. To be precise: Update and Exclusive locks are still placed, but released immediately without waiting for the transaction to be committed.
This behaviour helps the lock manager enormously to keep the locks it has to maintain at a minimum and thus saves a lot of space (memory).

Lock after qualification (LAQ), on the other hand, silently qualifies the rows affected by an update in the background, without the need to place Shared Locks on them while scanning through. Only if a row has been qualified – meaning it will be affected by this update – is an attempt made to place an Update Lock.
In order to benefit from this mechanism, Read Committed Snapshot Isolation (RCSI) must be enabled at the database level.

My Demo environment looks as follows:

  • SQL Server 2025 CTP 2.0 (17.0.700.9)
  • SQL Server Management Studio 21 (21.3.6) with Copilot activated

My colleague Stéphane Haby wrote blog posts about SQL Server 2025, for example this one:


In the meantime, Microsoft released SQL Server 2025 CTP 2.1 with a few improvements, explained on BOL:

https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17

Copilot is not part of the game when we discuss Optimized Locking, but as AI is omnipresent these days, I want to mention the blog post of my colleague Steven Naudet where he describes the enabling process for Copilot in SSMS:

Now, back to business, back to Optimized Locking – Let’s see this performance tuning feature in action!

First things first – I’ve created a new database called “OptimizedLocking” and checked what features are in place. To determine if Optimized Locking is enabled on a database you can use either the function DATABASEPROPERTYEX or grab the information from sys.databases directly:

-- Query the dedicated DATABASEPROPERTYEX for Optimized Locking
USE [OptimizedLocking]
GO
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn
GO
USE [master]
GO
SELECT name AS DatabaseName, is_read_committed_snapshot_on, is_optimized_locking_on, is_accelerated_database_recovery_on 
FROM sys.databases
WHERE name = 'OptimizedLocking'
GO

Why do I additionally request the columns “is_read_committed_snapshot_on” and “is_accelerated_database_recovery_on”?
Well, the latter is absolutely necessary to get Optimized Locking to work, and RCSI enables lock after qualification (LAQ), which I will show you a little bit later.

Now it’s time to create a simple table within the database “OptimizedLocking”:

-- Create a table to show locks
USE [OptimizedLocking]
GO
DROP TABLE IF EXISTS T1
GO
CREATE TABLE T1
(
T1_ID INT NOT NULL,
Value INT
);

INSERT INTO T1 (T1_ID, Value) 
VALUES (1,10),(2,20),(3,30);
GO

SELECT * FROM dbo.T1
GO

SELECT DB_NAME(database_id) AS DatabaseName, OBJECT_NAME(object_id) AS TableName, index_type_desc FROM sys.dm_db_index_physical_stats(DB_ID('OptimizedLocking'), OBJECT_ID('dbo.T1'), NULL, NULL, 'DETAILED')
GO

The content of the table T1 looks as follows – and it is of course a heap because I neither added a PRIMARY KEY constraint to any of its columns nor specified a CLUSTERED INDEX:

I ran two concurrent update statements: the first changes the row with ID = 1 (in the left pane of SSMS) and the second one tries to change the row with ID = 2 (in the middle pane). In the pane on the right I placed the statement that shows the locks which are present.
The second statement is blocked because it cannot acquire the Update Lock which is needed to update the row:
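
For readers who want to reproduce this, here is a minimal sketch of what the three panes roughly contain (my own reconstruction, not the exact statements from the screenshots); the lock overview is a common query over sys.dm_tran_locks:

-- Session 1 (left pane): update row 1 and keep the transaction open
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 11 WHERE T1_ID = 1;
-- no COMMIT yet

-- Session 2 (middle pane): try to update row 2
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 22 WHERE T1_ID = 2;

-- Session 3 (right pane): show the locks currently held or requested
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('OptimizedLocking');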

Now I change the database by setting OPTIMIZED_LOCKING to ON and run the same demo again. As mentioned above, ACCELERATED_DATABASE_RECOVERY must be enabled as well for this to work:

USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = ON;
GO

As you can see, the behaviour remains the same: the second statement is blocked while trying to place a Shared Lock on the rows – but instead of the 4 “normal” locks we had before, we only have 2 XACT locks, which means the number of locks the lock manager has to manage is reduced:

Let’s go one step further, enable RCSI at the database level, and see what happens now:

USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = ON;
GO

Et voilà, both statements have had their Exclusive Lock requests (converted from the Update Locks placed in advance) granted:

Heap vs. Clustered Index

But what happens if we use a Clustered Index instead of leaving our data (un)organized as a Heap? Let’s dig into this as well by creating a clustered index on the T1_ID column:

USE [OptimizedLocking]
GO
CREATE CLUSTERED INDEX CI_T1 ON dbo.T1 (T1_ID)
GO

All the features that we switched on earlier are now deactivated so we can start from scratch again:

But what is going on now? None of the previously mentioned features is enabled, yet both statements have their requested locks granted:

This is because the potentially critical Exclusive Locks (X) are placed on different keys (i.e. different rows) and the Intent Exclusive Locks (IX) set on page level are compatible with each other.

Wrap-Up

During my consulting engagements at customer sites I often see databases with a huge number of heaps.
Don’t misunderstand me, heaps are great for tables where we expect only INSERTs (e.g. a logging table), but if we have to deal with the other DML statements at the same time as well, a clustered index would be the better choice.
Regarding this demo and the “Optimized Locking” feature, it can be said that if we use a clustered index on our tables, we don’t strictly need this feature in terms of concurrency, but considering the overhead for the lock manager it is definitely worth using “Optimized Locking”.

But wait: until now we dealt with the default isolation level of SQL Server – READ_COMMITTED. What happens if we switch to a higher level, for example SERIALIZABLE? Will Optimized Locking still help us to have fewer locks and less blocking?
The answer is no – not at all. Do you agree?
This makes sense given how an isolation level like SERIALIZABLE works. When we choose such a high isolation level, we are (or at least should be) aware that we are blocking others for a (longer) period of time because we tell SQL Server to behave like this. Everyone who is familiar with SQL Server’s locking behaviour under different isolation levels knows that SQL Server has to adjust its behaviour to satisfy the ACID principle.

Isolation Level SERIALIZABLE

Here is the output of the same demo as above, except that I executed the first UPDATE statement under the transaction isolation level SERIALIZABLE. As you can see, the second UPDATE statement (in the middle pane) is blocked while attempting to place an Exclusive Lock on the row with ID = 2, because the first UPDATE statement had to place an Exclusive Lock on the range that includes the row with ID = 2, as required by this isolation level:
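
For completeness, a minimal sketch of what session 1 roughly runs in this variant (again my reconstruction, not the exact screenshot content):

-- Session 1 (left pane): same update, but under SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 11 WHERE T1_ID = 1;
-- range locks are held until COMMIT/ROLLBACK, blocking session 2 on T1_ID = 2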

Conclusion

From my point of view, “Optimized Locking” is a real game changer. Regardless of the underlying structure (Heap or Clustered Index), locking becomes easier and in a certain way more “lightweight”.

As we saw, the optimal way to use this feature is to also turn on Read Committed Snapshot Isolation (RCSI) and – if it makes sense in terms of the access pattern – use a clustered index to organize the tables. Either way, we benefit from the fact that the lock manager has to manage fewer locks, which saves a significant amount of memory in any case.

L’article SQL Server 2025 – Optimized Locking est apparu en premier sur dbi Blog.

dbhome prepatch failing on ODA due to missing sqldeveloper files

Mon, 2025-07-21 01:40

I have recently been patching RAC databases from version 19.20 to 19.26 on an ODA 2-HA configured with Data Guard. During the prepatch I faced an issue with missing sqldeveloper files, for which I could find a solution. I would like to share this solution with you as it might save you some time.

Problem description

From node0, I ran the prepatch report.

[root@node0 ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0

Job details
----------------------------------------------------------------
                     ID:  68d89fc0-d5ed-47b5-a885-a5896b865262
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  Created
                Created:  July 15, 2025 14:59:07 CEST
                Message:  Use 'odacli describe-prepatchreport -i 68d89fc0-d5ed-47b5-a885-a5896b865262' to check details of results

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------


Checking the prepatch report, it seemed to be stuck while evaluating the DBHome.

[root@node0 ~]# odacli describe-prepatchreport -i 68d89fc0-d5ed-47b5-a885-a5896b865262

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  68d89fc0-d5ed-47b5-a885-a5896b865262
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  RUNNING
                Created:  July 15, 2025 2:59:07 PM CEST
                 Result:

Node Name
---------------
node0

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          5fc7e39d-adff-4903-b308-b4046840a38f
Validate patching tag           Success   Validated patching tag: 19.26.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Validate dbHomesOnACFS          Success   User has configured disk group for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Running
RHP
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

Node Name
---------------
node1

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          5fc7e39d-adff-4903-b308-b4046840a38f
Validate patching tag           Success   Validated patching tag: 19.26.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Validate dbHomesOnACFS          Success   User has configured disk group for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Running
RHP
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution


Looking at the prepatch job status, it finished in Failure about 10 minutes after starting.

[root@node0 ~]# odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262

Job details
----------------------------------------------------------------
                     ID:  68d89fc0-d5ed-47b5-a885-a5896b865262
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  Failure (To view Error Correlation report, run "odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262 --ecr" command)
                Created:  July 15, 2025 14:59:07 CEST
                Message:  Use 'odacli describe-prepatchreport -i ' to check prepatch resultsDCS-10292:One or more checks failed while checking for patching readiness.

Task Name                                Node Name                 Start Time                               End Time                                 Status
---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------------
task:TaskLockWrapper_37358               node0                 July 15, 2025 14:59:12 CEST              July 15, 2025 15:11:40 CEST              Failure
task:TaskSequential_37359                node0                 July 15, 2025 14:59:12 CEST              July 15, 2025 15:11:40 CEST              Failure
Setting up SSH equivalence               node0                 July 15, 2025 14:59:12 CEST              July 15, 2025 14:59:14 CEST              Success
Setting up SSH equivalence               node0                 July 15, 2025 14:59:14 CEST              July 15, 2025 14:59:16 CEST              Success
Run patching pre-checks                  node0                 July 15, 2025 14:59:16 CEST              July 15, 2025 15:11:40 CEST              Success
task:TaskSequential_39928                node0                 July 15, 2025 14:59:41 CEST              July 15, 2025 15:04:28 CEST              Failure
Creating ACFS database home              node0                 July 15, 2025 14:59:47 CEST              July 15, 2025 14:59:47 CEST              Success
Validating dbHome available space        node0                 July 15, 2025 14:59:47 CEST              July 15, 2025 14:59:47 CEST              Success
Validating dbHome available space        node1                 July 15, 2025 14:59:47 CEST              July 15, 2025 14:59:47 CEST              Success
Creating DbHome Directory                node1                 July 15, 2025 14:59:48 CEST              July 15, 2025 14:59:48 CEST              Success
Create required directories              node0                 July 15, 2025 14:59:48 CEST              July 15, 2025 14:59:48 CEST              Success
Extract DB clone                         node0                 July 15, 2025 14:59:48 CEST              July 15, 2025 15:00:43 CEST              Success
ProvDbHome by using RHP                  node0                 July 15, 2025 15:00:43 CEST              July 15, 2025 15:02:27 CEST              Success
Enable DB options                        node0                 July 15, 2025 15:02:27 CEST              July 15, 2025 15:02:35 CEST              Success
Creating wallet for DB Client            node0                 July 15, 2025 15:02:38 CEST              July 15, 2025 15:02:38 CEST              Success
task:TaskSequential_40059                node0                 July 15, 2025 15:02:39 CEST              July 15, 2025 15:04:28 CEST              Failure
Precheck Patch databases by RHP          node0                 July 15, 2025 15:02:39 CEST              July 15, 2025 15:04:28 CEST              Failure
Check pre-check status                   node0                 July 15, 2025 15:11:40 CEST              July 15, 2025 15:11:40 CEST              Failure

[root@node0 ~]#

This was the job result of the second attempt; the first attempt showed the following tasks:

Task Name                                Node Name                 Start Time                               End Time                                 Status
---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------------
task:TaskLockWrapper_62015               node0                 July 15, 2025 17:57:06 CEST              July 15, 2025 18:09:43 CEST              Failure
task:TaskSequential_62016                node0                 July 15, 2025 17:57:06 CEST              July 15, 2025 18:09:43 CEST              Failure
Setting up SSH equivalence               node0                 July 15, 2025 17:57:06 CEST              July 15, 2025 17:57:08 CEST              Success
Setting up SSH equivalence               node0                 July 15, 2025 17:57:08 CEST              July 15, 2025 17:57:10 CEST              Success
Run patching pre-checks                  node0                 July 15, 2025 17:57:10 CEST              July 15, 2025 18:09:43 CEST              Success
task:TaskSequential_66113                node0                 July 15, 2025 17:57:40 CEST              July 15, 2025 18:02:29 CEST              Failure
Creating ACFS database home              node0                 July 15, 2025 17:57:46 CEST              July 15, 2025 17:57:46 CEST              Success
Validating dbHome available space        node0                 July 15, 2025 17:57:46 CEST              July 15, 2025 17:57:46 CEST              Success
Validating dbHome available space        node1                 July 15, 2025 17:57:46 CEST              July 15, 2025 17:57:46 CEST              Success
Creating DbHome Directory                node1                 July 15, 2025 17:57:47 CEST              July 15, 2025 17:57:47 CEST              Success
Create required directories              node0                 July 15, 2025 17:57:47 CEST              July 15, 2025 17:57:47 CEST              Success
Extract DB clone                         node0                 July 15, 2025 17:57:47 CEST              July 15, 2025 17:58:41 CEST              Success
ProvDbHome by using RHP                  node0                 July 15, 2025 17:58:41 CEST              July 15, 2025 18:00:26 CEST              Success
Enable DB options                        node0                 July 15, 2025 18:00:27 CEST              July 15, 2025 18:00:35 CEST              Success
Creating wallet for DB Client            node0                 July 15, 2025 18:00:38 CEST              July 15, 2025 18:00:38 CEST              Success
task:TaskSequential_66226                node0                 July 15, 2025 18:00:39 CEST              July 15, 2025 18:02:29 CEST              Failure
Precheck Patch databases by RHP          node0                 July 15, 2025 18:00:39 CEST              July 15, 2025 18:02:29 CEST              Failure
Check pre-check status                   node0                 July 15, 2025 18:09:43 CEST              July 15, 2025 18:09:43 CEST              Failure

Checking the log of the dcs agent, I could find the following:

"taskResult" : "DCS-10001:Internal error encountered: PRGT-388 : Pre-patch verification through Configuration Verification Utility reported one or more errors.\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/low-level-api.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql.exe\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/README.md\" does not exist on node \"node0\"..",

The describe job with –ecr option would give me the same:

[root@node0 ~]# odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262 --ecr
ODA Assistant - Error Correlation report
----------------------------------------
          Failed job ID:  68d89fc0-d5ed-47b5-a885-a5896b865262
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
             Start Time:  July 15, 2025 14:59:12 CEST
               End Time:  July 15, 2025 15:11:40 CEST
         EC report path: /opt/oracle/dcs/da/da_repo/546ca4ac-97e6-4038-bf6d-90986c7f21bf.json

Failed Task Messages
--------------------
[Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1] -  Use 'odacli describe-prepatchreport -i ' to check prepatch resultsDCS-10292:One or more checks failed while checking for patching readiness.
[task:TaskLockWrapper_37358] -  DCS-10292:One or more checks failed while checking for patching readiness.
[task:TaskSequential_39928] -  DCS-10001:Internal error encountered: PRGT-388 : Pre-patch verification through Configuration Verification Utility reported one or more errors. PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/low-level-api.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql.exe" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/README.md" does not exist on node "node0"..
...
...
...

So the problem is definitely that prepatch expects a sqldeveloper directory in the source dbhome to be patched, a directory which does not exist, has not existed for a while, and does not exist in any other dbhome on any other ODA the customer has (and the customer has several ODAs and dbhomes). What a strange “new” bug!

Resolution

I first checked and could confirm that my source dbhome really does not have any sqldeveloper folder.

[root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql*
drwxr-xr-x 4 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl
drwxr-xr-x 3 oracle oinstall 20480 Apr 17  2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj
drwxr-xr-x 7 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch
drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus
[root@node0 ~]#

Looking quickly on My Oracle Support, I could find the following Doc ID, which described exactly my problem:

ODA db home prepatch report keeps running/hangs at “Evaluate DBHome patching with RHP” (Doc ID 3067737.1)

OK, but the solution described in the Doc ID would not help me that much:

If you have old dbhomes on this ODA (or) any other ODA please copy "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_x/sqldeveloper/sqldeveloper" folder to the source home for which prepatch report is being run.
Then re-run the dbhome patching which should complete without any issues.

Checking an old backup I did a few years ago, I could find a sqldeveloper zip file in a patch storage directory.

oracle@node0:/mnt/tmp/backup_dbhome/tsrpri/OraDB19000_home3/ [DB11] tar tvf dbhome_3.tarz | grep -i sqldevelop
-rw-r--r-- oracle/oinstall  23557640 2023-07-17 16:37 ./dbhome_3/.patch_storage/35320081_Jul_15_2023_12_54_11/files/sqldeveloper.zip

I uncompressed the archive file, and extracted the content of the zip file.

[oracle@node0 software_patching]$ unzip sqldeveloper.zip
Archive:  sqldeveloper.zip
  inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d6e65742e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d636f6d6d6f6e2e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/636f6d6d6f6e732d6c6f6767696e672e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a6178622d6170692e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f6a646263382e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d636f72652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6c6f772d6c6576656c2d6170692e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d6461746162696e642e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f7261636c65706b692e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f7364745f636572742e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/687474706d696d652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/786d6c70617273657276322d73616e732d6a6178702d73657276696365732e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/706c7567696e2d6170692e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d616e6e6f746174696f6e732e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2d6d617070696e672e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/706f6d2e786d6c_o_
  inflating: sqldeveloper/sqldeveloper/lib/636f6d6d6f6e732d636f6465632e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f72616a736f64612e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/616e746c722d72756e74696d652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d73716c636c2e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a617661782e6a736f6e2e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f7364745f636f72652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/ext/736c66346a2d6a646b31342e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/ext/6c69717569626173652d636f72652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/ext/6f7261636c652d6c69717569626173652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/ext/736c66346a2d6170692e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d687474702e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/68747470636c69656e742e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2d7574696c6974792e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/737472696e6774656d706c6174652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a7363682e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/68747470636f72652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a6c696e652e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/786462362e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/lib/6a7a6c69622e6a6172_o_
  inflating: sqldeveloper/sqldeveloper/bin/524541444d452e6d64_o_
  inflating: sqldeveloper/sqldeveloper/bin/73716c2e657865_o_
  inflating: sqldeveloper/sqldeveloper/bin/73716c_o_
  inflating: sqldeveloper/modules/6a617661782e736572766c65742e6a617661782e736572766c65742d6170692e6a6172_o_
[oracle@node0 software_patching]$

But that zip file did not really contain any of the missing files the prepatch was complaining about… I decided to copy it to my source dbhome anyhow. This dbhome will not be used any more: as you might know, the prepatch will first create a new dbhome so the patching process can run an out-of-place patching, then move the database to this new dbhome and run datapatch. So any file that I copy to my current source dbhome will not have any impact.

[root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql*
drwxr-xr-x 4 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl
drwxr-xr-x 3 oracle oinstall 20480 Apr 17  2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj
drwxr-xr-x 7 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch
drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus

[root@node0 ~]# cp -pR /mnt/tmp/software_patching/sqldeveloper /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/

[root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql*
drwxr-xr-x 4 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl
drwxr-xr-x 4 oracle oinstall 20480 Jul 15 17:00 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper
drwxr-xr-x 3 oracle oinstall 20480 Apr 17  2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj
drwxr-xr-x 7 oracle oinstall 20480 Jul 17  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch
drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus
[root@node0 ~]#

As expected, there are no jar files in the copied directory.

[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/
total 25M
-rw-r--r-- 1 oracle oinstall 1.9M Jul 17  2023 786d6c70617273657276322d73616e732d6a6178702d73657276696365732e6a6172_o_
-rw-r--r-- 1 oracle oinstall 257K Jul 17  2023 786462362e6a6172_o_
-rw-r--r-- 1 oracle oinstall 222K Jul 17  2023 737472696e6774656d706c6174652e6a6172_o_
-rw-r--r-- 1 oracle oinstall  11K Jul 17  2023 706f6d2e786d6c_o_
-rw-r--r-- 1 oracle oinstall  73K Jul 17  2023 706c7567696e2d6170692e6a6172_o_
-rw-r--r-- 1 oracle oinstall 300K Jul 17  2023 6f7364745f636f72652e6a6172_o_
-rw-r--r-- 1 oracle oinstall 201K Jul 17  2023 6f7364745f636572742e6a6172_o_
-rw-r--r-- 1 oracle oinstall 410K Jul 17  2023 6f72616a736f64612e6a6172_o_
-rw-r--r-- 1 oracle oinstall 1.6M Jul 17  2023 6f72616931386e2e6a6172_o_
-rw-r--r-- 1 oracle oinstall 490K Jul 17  2023 6f72616931386e2d7574696c6974792e6a6172_o_
-rw-r--r-- 1 oracle oinstall  88K Jul 17  2023 6f72616931386e2d6d617070696e672e6a6172_o_
-rw-r--r-- 1 oracle oinstall 301K Jul 17  2023 6f7261636c65706b692e6a6172_o_
-rw-r--r-- 1 oracle oinstall 4.0M Jul 17  2023 6f6a646263382e6a6172_o_
-rw-r--r-- 1 oracle oinstall 500K Jul 17  2023 6c6f772d6c6576656c2d6170692e6a6172_o_
-rw-r--r-- 1 oracle oinstall  65K Jul 17  2023 6a7a6c69622e6a6172_o_
-rw-r--r-- 1 oracle oinstall 276K Jul 17  2023 6a7363682e6a6172_o_
-rw-r--r-- 1 oracle oinstall 263K Jul 17  2023 6a6c696e652e6a6172_o_
-rw-r--r-- 1 oracle oinstall 123K Jul 17  2023 6a6178622d6170692e6a6172_o_
-rw-r--r-- 1 oracle oinstall  84K Jul 17  2023 6a617661782e6a736f6e2e6a6172_o_
-rw-r--r-- 1 oracle oinstall 1.3M Jul 17  2023 6a61636b736f6e2d6461746162696e642e6a6172_o_
-rw-r--r-- 1 oracle oinstall 318K Jul 17  2023 6a61636b736f6e2d636f72652e6a6172_o_
-rw-r--r-- 1 oracle oinstall  66K Jul 17  2023 6a61636b736f6e2d616e6e6f746174696f6e732e6a6172_o_
-rw-r--r-- 1 oracle oinstall  41K Jul 17  2023 687474706d696d652e6a6172_o_
-rw-r--r-- 1 oracle oinstall 320K Jul 17  2023 68747470636f72652e6a6172_o_
-rw-r--r-- 1 oracle oinstall 749K Jul 17  2023 68747470636c69656e742e6a6172_o_
-rw-r--r-- 1 oracle oinstall 429K Jul 17  2023 6462746f6f6c732d73716c636c2e6a6172_o_
-rw-r--r-- 1 oracle oinstall 131K Jul 17  2023 6462746f6f6c732d6e65742e6a6172_o_
-rw-r--r-- 1 oracle oinstall  50K Jul 17  2023 6462746f6f6c732d687474702e6a6172_o_
-rw-r--r-- 1 oracle oinstall 7.1M Jul 17  2023 6462746f6f6c732d636f6d6d6f6e2e6a6172_o_
-rw-r--r-- 1 oracle oinstall  61K Jul 17  2023 636f6d6d6f6e732d6c6f6767696e672e6a6172_o_
-rw-r--r-- 1 oracle oinstall 328K Jul 17  2023 636f6d6d6f6e732d636f6465632e6a6172_o_
-rw-r--r-- 1 oracle oinstall 160K Jul 17  2023 616e746c722d72756e74696d652e6a6172_o_
drwxr-xr-x 2 oracle oinstall  20K Jul 15 17:00 ext
[root@node0 ~]#

Strangely, the missing files the prepatch report is expecting come from sqlcl.

[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/lib
total 37M
-rw-r--r-- 1 oracle oinstall 1.8M Jul 16  2023 ucp.jar
-rw-r--r-- 1 oracle oinstall 8.3K Jul 16  2023 slf4j-jdk14.jar
-rw-r--r-- 1 oracle oinstall 306K Jul 16  2023 osdt_core.jar
-rw-r--r-- 1 oracle oinstall 206K Jul 16  2023 osdt_cert.jar
-rw-r--r-- 1 oracle oinstall 441K Jul 16  2023 orajsoda.jar
-rw-r--r-- 1 oracle oinstall 301K Jul 16  2023 oraclepki.jar
-rw-r--r-- 1 oracle oinstall 126K Jul 16  2023 javax.json.jar
-rw-r--r-- 1 oracle oinstall 237K Jul 16  2023 jansi.jar
-rw-r--r-- 1 oracle oinstall  74K Jul 16  2023 jackson-annotations.jar
-rw-r--r-- 1 oracle oinstall 2.8M Jul 16  2023 guava-with-lf.jar
-rw-r--r-- 1 oracle oinstall 296K Jul 16  2023 dbtools-net.jar
-rw-r--r-- 1 oracle oinstall 354K Jul 16  2023 commons-codec.jar
-rw-r--r-- 1 oracle oinstall 1.9M Jul 16  2023 xmlparserv2_sans_jaxp_services.jar
-rw-r--r-- 1 oracle oinstall 260K Jul 16  2023 xdb.jar
-rw-r--r-- 1 oracle oinstall 248K Jul 16  2023 ST4.jar
-rw-r--r-- 1 oracle oinstall 361K Jul 16  2023 sshd-sftp.jar
-rw-r--r-- 1 oracle oinstall 120K Jul 16  2023 sshd-scp.jar
-rw-r--r-- 1 oracle oinstall 920K Jul 16  2023 sshd-core.jar
-rw-r--r-- 1 oracle oinstall  42K Jul 16  2023 sshd-contrib.jar
-rw-r--r-- 1 oracle oinstall 921K Jul 16  2023 sshd-common.jar
-rw-r--r-- 1 oracle oinstall  41K Jul 16  2023 slf4j-api.jar
-rw-r--r-- 1 oracle oinstall 496K Jul 16  2023 orai18n-utility.jar
-rw-r--r-- 1 oracle oinstall  89K Jul 16  2023 orai18n-mapping.jar
-rw-r--r-- 1 oracle oinstall 1.6M Jul 16  2023 orai18n.jar
-rw-r--r-- 1 oracle oinstall 4.9M Jul 16  2023 ojdbc8.jar
-rw-r--r-- 1 oracle oinstall 976K Jul 16  2023 jline3.jar
-rw-r--r-- 1 oracle oinstall 376K Jul 16  2023 jdbcrest.jar
-rw-r--r-- 1 oracle oinstall 126K Jul 16  2023 jaxb-api.jar
-rw-r--r-- 1 oracle oinstall  56K Jul 16  2023 javax.activation-api.jar
-rw-r--r-- 1 oracle oinstall  28K Jul 16  2023 jackson-jr-stree.jar
-rw-r--r-- 1 oracle oinstall  98K Jul 16  2023 jackson-jr-objects.jar
-rw-r--r-- 1 oracle oinstall 367K Jul 16  2023 jackson-core.jar
-rw-r--r-- 1 oracle oinstall 830K Jul 16  2023 httpcore5.jar
-rw-r--r-- 1 oracle oinstall 780K Jul 16  2023 httpclient5.jar
-rw-r--r-- 1 oracle oinstall 693K Jul 16  2023 dbtools-sqlcl.jar
-rw-r--r-- 1 oracle oinstall  80K Jul 16  2023 dbtools-http.jar
-rw-r--r-- 1 oracle oinstall 131K Jul 16  2023 dbtools-datapump.jar
-rw-r--r-- 1 oracle oinstall 486K Jul 16  2023 dbtools-data.jar
-rw-r--r-- 1 oracle oinstall 7.5M Jul 16  2023 dbtools-common.jar
-rw-r--r-- 1 oracle oinstall  63K Jul 16  2023 commons-logging.jar
-rw-r--r-- 1 oracle oinstall 164K Jul 16  2023 antlr-runtime.jar
drwxr-xr-x 2 oracle oinstall  20K Jul 17  2023 ext
[root@node0 ~]#

So I decided to copy the files to my new sqldeveloper lib folder.

[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/lib/*.jar /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/

[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/*.jar
-rw-r--r-- 1 oracle oinstall 1.8M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ucp.jar
-rw-r--r-- 1 oracle oinstall 8.3K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/slf4j-jdk14.jar
-rw-r--r-- 1 oracle oinstall 306K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar
-rw-r--r-- 1 oracle oinstall 206K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar
-rw-r--r-- 1 oracle oinstall 441K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar
-rw-r--r-- 1 oracle oinstall 301K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar
-rw-r--r-- 1 oracle oinstall 126K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar
-rw-r--r-- 1 oracle oinstall 237K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jansi.jar
-rw-r--r-- 1 oracle oinstall  74K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar
-rw-r--r-- 1 oracle oinstall 2.8M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/guava-with-lf.jar
-rw-r--r-- 1 oracle oinstall 296K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar
-rw-r--r-- 1 oracle oinstall 354K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar
-rw-r--r-- 1 oracle oinstall 1.9M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/xmlparserv2_sans_jaxp_services.jar
-rw-r--r-- 1 oracle oinstall 260K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/xdb.jar
-rw-r--r-- 1 oracle oinstall 248K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ST4.jar
-rw-r--r-- 1 oracle oinstall 361K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-sftp.jar
-rw-r--r-- 1 oracle oinstall 120K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-scp.jar
-rw-r--r-- 1 oracle oinstall 920K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-core.jar
-rw-r--r-- 1 oracle oinstall  42K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-contrib.jar
-rw-r--r-- 1 oracle oinstall 921K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-common.jar
-rw-r--r-- 1 oracle oinstall  41K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/slf4j-api.jar
-rw-r--r-- 1 oracle oinstall 496K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar
-rw-r--r-- 1 oracle oinstall  89K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar
-rw-r--r-- 1 oracle oinstall 1.6M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar
-rw-r--r-- 1 oracle oinstall 4.9M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar
-rw-r--r-- 1 oracle oinstall 976K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jline3.jar
-rw-r--r-- 1 oracle oinstall 376K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jdbcrest.jar
-rw-r--r-- 1 oracle oinstall 126K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar
-rw-r--r-- 1 oracle oinstall  56K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.activation-api.jar
-rw-r--r-- 1 oracle oinstall  28K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-jr-stree.jar
-rw-r--r-- 1 oracle oinstall  98K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-jr-objects.jar
-rw-r--r-- 1 oracle oinstall 367K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar
-rw-r--r-- 1 oracle oinstall 830K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/httpcore5.jar
-rw-r--r-- 1 oracle oinstall 780K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/httpclient5.jar
-rw-r--r-- 1 oracle oinstall 693K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar
-rw-r--r-- 1 oracle oinstall  80K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar
-rw-r--r-- 1 oracle oinstall 131K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-datapump.jar
-rw-r--r-- 1 oracle oinstall 486K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-data.jar
-rw-r--r-- 1 oracle oinstall 7.5M Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar
-rw-r--r-- 1 oracle oinstall  63K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar
-rw-r--r-- 1 oracle oinstall 164K Jul 16  2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/antlr-runtime.jar
[root@node0 ~]#

I did the same for the missing files in the bin directory.

[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/bin/* /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/

[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin
total 372K
-rw-r--r-- 1 oracle oinstall  119 Jul 16  2023 version.txt
-rwxr-xr-x 1 oracle oinstall 148K Jul 16  2023 sql.exe
-rwxr-xr-x 1 oracle oinstall  21K Jul 16  2023 sql
-rw-r--r-- 1 oracle oinstall 2.4K Jul 16  2023 README.md
-rw-r--r-- 1 oracle oinstall 4.3K Jul 16  2023 license.txt
-rwxr-xr-x 1 oracle oinstall 1.2K Jul 16  2023 dependencies.txt
-rw-r--r-- 1 oracle oinstall  15K Jul 17  2023 73716c_o_
-rw-r--r-- 1 oracle oinstall 126K Jul 17  2023 73716c2e657865_o_
-rw-r--r-- 1 oracle oinstall  419 Jul 17  2023 524541444d452e6d64_o_
[root@node0 ~]#

I’m still convinced this is a bug where the process is looking into a sqldeveloper folder when it actually wanted to check sqlcl, because all the missing files are the ones from sqlcl. Anyhow, let’s move forward with the resolution.

There is one last jar file the prepatch was still complaining about, low-level-api.jar, which I could not find in any of the source dbhomes. But I could find it in the new dbhome, here dbhome_5, that the prepatch process created during the failed job (the initial steps of the prepatch command, including Extract DB clone, ran successfully).

[root@node0 ~]# find /u01/app/odaorahome/oracle/ -name low-level-api.jar
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/sqlcl/lib/low-level-api.jar

I decided to copy it as well to the source sqldeveloper lib folder.

[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/sqlcl/lib/low-level-api.jar /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/
[root@node0 ~]#

And I ran the prepatch again.

[root@node0 ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0

Job details
----------------------------------------------------------------
                     ID:  fe0dfd66-79b8-4f38-afe2-b2cef1d15943
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  Created
                Created:  July 15, 2025 17:37:31 CEST
                Message:  Use 'odacli describe-prepatchreport -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943' to check details of results

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------

[root@node0 ~]#

This time it was successful!

[root@node0 ~]# odacli describe-job -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943

Job details
----------------------------------------------------------------
                     ID:  fe0dfd66-79b8-4f38-afe2-b2cef1d15943
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  Success
                Created:  July 15, 2025 17:37:31 CEST
                Message:  Successfully finished prechecking for patching the databases [DB1_IWB] with rolling option

Task Name                                Node Name                 Start Time                               End Time                                 Status
---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------------
Setting up SSH equivalence               node0                 July 15, 2025 17:37:37 CEST              July 15, 2025 17:37:39 CEST              Success
Setting up SSH equivalence               node0                 July 15, 2025 17:37:39 CEST              July 15, 2025 17:37:40 CEST              Success
Run patching pre-checks                  node0                 July 15, 2025 17:37:40 CEST              July 15, 2025 17:47:11 CEST              Success
Precheck Patch databases by RHP          node0                 July 15, 2025 17:38:05 CEST              July 15, 2025 17:39:56 CEST              Success
TDE parameter update                     node1                 July 15, 2025 17:39:56 CEST              July 15, 2025 17:39:56 CEST              Success

[root@node0 ~]# odacli describe-prepatchreport -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  fe0dfd66-79b8-4f38-afe2-b2cef1d15943
            Description:  Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1
                 Status:  SUCCESS
                Created:  July 15, 2025 5:37:31 PM CEST
                 Result:  All pre-checks succeeded

Node Name
---------------
node0

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          5fc7e39d-adff-4903-b308-b4046840a38f
Validate patching tag           Success   Validated patching tag: 19.26.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Validate dbHomesOnACFS          Success   User has configured disk group for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP.  and local patching
                                          is possible
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

Node Name
---------------
node1

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          5fc7e39d-adff-4903-b308-b4046840a38f
Validate patching tag           Success   Validated patching tag: 19.26.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Validate dbHomesOnACFS          Success   User has configured disk group for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP.  and local patching
                                          is possible
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution



[root@node0 ~]#

And, of course, later during the maintenance window activities, the patching of the dbhome was successful.

[root@node0 ~]# /opt/oracle/dcs/bin/odacli update-dbhome -i 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0
{
  "jobId" : "b4467935-edd0-46a6-8c84-04e216d05973",
  "status" : "Created",
  "message" : "",
  "reports" : [ ],
  "createTimestamp" : "July 16, 2025 00:21:17 CEST",
  "resourceList" : [ ],
  "description" : "DB Home Patching to 19.26.0.0.0: Home ID is 5fc7e39d-adff-4903-b308-b4046840a38f",
  "updatedTime" : "July 16, 2025 00:21:17 CEST",
  "jobType" : null,
  "cpsMetadata" : null
}
[root@node0 ~]#

[root@node0 ~]# odacli describe-job -i b4467935-edd0-46a6-8c84-04e216d05973

Job details
----------------------------------------------------------------
                     ID:  b4467935-edd0-46a6-8c84-04e216d05973
            Description:  DB Home Patching to 19.26.0.0.0: Home ID is 5fc7e39d-adff-4903-b308-b4046840a38f
                 Status:  Success
                Created:  July 16, 2025 00:21:17 CEST
                Message:

Task Name                                Node Name                 Start Time                               End Time                                 Status
---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------------
Creating wallet for DB Client            node0                 July 16, 2025 00:21:52 CEST              July 16, 2025 00:21:53 CEST              Success
Patch databases by RHP - [DB1]           node0                 July 16, 2025 00:21:53 CEST              July 16, 2025 00:27:37 CEST              Success
Updating database metadata               node1                 July 16, 2025 00:27:37 CEST              July 16, 2025 00:27:37 CEST              Success
Set log_archive_dest for Database        node0                 July 16, 2025 00:27:37 CEST              July 16, 2025 00:27:38 CEST              Success
Generating and saving BOM                node0                 July 16, 2025 00:27:39 CEST              July 16, 2025 00:28:30 CEST              Success
Generating and saving BOM                node1                 July 16, 2025 00:27:39 CEST              July 16, 2025 00:28:32 CEST              Success
TDE parameter update                     node1                 July 16, 2025 00:29:23 CEST              July 16, 2025 00:29:23 CEST              Success

[root@node0 ~]#

To wrap up…

I faced this prepatch issue while preparing my maintenance window activity, where I had to patch several RAC databases using separate dbhomes and configured with Data Guard. The maintenance window during which we can stop the application only runs from 12am to 5am. Therefore I could not take the risk of losing time by trying whether the force option of update-dbhome would do the job. I was also convinced that, with the prepatch status being neither success nor failure, the patching command could not be executed. The second point is that the force option is usually only helpful if there is some failure with orachk that you know will not have any impact. In my case, a failed step on the task “Evaluate DBHome patching with RHP” is something that needs to be resolved before patching the dbhome.

I had to apply the same resolution to all my dbhomes on both the primary and the standby side.
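
Since the same workaround had to be repeated several times, here is a small, untested helper along the lines of what I did manually. The dbhome paths are placeholders, and it assumes the check only verifies the files listed in the PRVH-0102 errors (so building the directory structure directly, instead of restoring it from a backup, should be enough):

#!/bin/bash
# Placeholder: source dbhome for which the prepatch report fails
OH=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
# Placeholder: any dbhome that already ships low-level-api.jar (e.g. the one created by the failed prepatch)
NEW_OH=/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5

# Create the sqldeveloper layout the prepatch check is looking for
mkdir -p ${OH}/sqldeveloper/sqldeveloper/lib ${OH}/sqldeveloper/sqldeveloper/bin

# The missing files all come from sqlcl, so copy them over
cp -p ${OH}/sqlcl/lib/*.jar ${OH}/sqldeveloper/sqldeveloper/lib/
cp -p ${OH}/sqlcl/bin/*     ${OH}/sqldeveloper/sqldeveloper/bin/
cp -p ${NEW_OH}/sqlcl/lib/low-level-api.jar ${OH}/sqldeveloper/sqldeveloper/lib/

# Keep the ownership consistent with the rest of the dbhome
chown -R oracle:oinstall ${OH}/sqldeveloper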


SQL Server 2025 – OPTIMIZED_SP_EXECUTESQL

Mon, 2025-07-21 01:08

Within the past few days I have been investigating another new feature of SQL Server 2025: OPTIMIZED_SP_EXECUTESQL. I was curious about its capabilities and behaviour.

OPTIMIZED_SP_EXECUTESQL is a database scoped configuration which allows us to get rid of the overhead of compiling and producing execution plans when we deal with dynamic SQL. The first user session which launches a given dynamic statement using the stored procedure “sp_executesql” will have to wait until SQL Server has gone through the compilation process and produced an execution plan. Then, all the following user sessions can benefit from the work which was done earlier, since no more compilation work is left to be done.
Without this feature enabled, SQL Server will produce an execution plan for each execution of the same dynamic query (meaning we lose time because SQL Server has to compile every time we execute the query) – even if the query text is exactly the same.


Furthermore, we are facing plan cache pollution because we have an unnecessarily large number of execution plans in the cache – for nothing! I will dig into a demo to show you the behaviour with and without this feature enabled. First of all, I create a tiny database called “OptimizedSpExecutesql”:

CREATE DATABASE [OptimizedSpExecutesql]
GO

USE [OptimizedSpExecutesql]
GO

CREATE TABLE dbo.Orders 
(
OrderID INT IDENTITY(1,1) PRIMARY KEY,
OrderType Varchar(20),
OrderDate DATETIME DEFAULT GETDATE()
)
GO

INSERT INTO dbo.Orders (OrderType) VALUES
('Bentley'), ('Aston Martin'), ('Bentley'), ('Bugatti'),
('Lamborghini'), ('Bentley'), ('Aston Martin'), ('Chevrolet'),
('Bentley'), ('Bugatti'), ('Aston Martin'), ('Lamborghini');
GO

The result set looks as follows:

Now I check that the feature is disabled, to simulate the “old” world (i.e. the old behaviour):

-- Turn off the features - just to be sure
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_SP_EXECUTESQL = OFF;   
GO
ALTER DATABASE SCOPED CONFIGURATION SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = OFF;
GO

SELECT name, value, is_value_default FROM sys.database_scoped_configurations
WHERE name IN ('OPTIMIZED_SP_EXECUTESQL', 'ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY')
GO

As you can see, this is the default value for both of the database scoped settings:

Let’s clean up the cache first and then execute the T-SQL statement shown below. I will execute the statement with “ostress.exe” to simulate 100 sessions, each executing it once:

-- Clean the plan cache first
DBCC FREEPROCCACHE
GO

-- Define a parameterized query which will be executed via ostress.exe afterwards
-- This code below is just to mention the TSQL which will be executed
DECLARE @SQL NVarchar(MAX)
SET @SQL = N'SELECT COUNT(*) FROM dbo.Orders WHERE OrderType = @Type';

DECLARE @Params NVARCHAR(MAX) = N'@Type Varchar(20)';
EXEC sys.sp_executesql @SQL, @Params, @Type = N'Bentley';  
GO

Here is the execution of the statement with “ostress.exe”:
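
The exact command line is not reproduced here, but the call was along these lines (the server name is a placeholder; -n controls the number of simulated sessions and -r the number of executions per session):

REM Placeholder server name; requires the RML Utilities (ostress.exe)
ostress.exe -Slocalhost -E -dOptimizedSpExecutesql -n100 -r1 -q -Q"DECLARE @SQL NVARCHAR(MAX) = N'SELECT COUNT(*) FROM dbo.Orders WHERE OrderType = @Type'; DECLARE @Params NVARCHAR(MAX) = N'@Type VARCHAR(20)'; EXEC sys.sp_executesql @SQL, @Params, @Type = N'Bentley';"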

And the output of the plan cache shows us that SQL Server has produced several execution plans for the same sql_handle (i.e. the same statement):
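
The screenshot is not reproduced here, but a query along the following lines shows the same information from the plan cache (just one way to look at it, not necessarily the exact query behind the screenshot):

-- How many cached plans exist for our parameterized statement, and how often were they used
SELECT qs.query_hash,
       COUNT(DISTINCT qs.plan_handle) AS nb_cached_plans,
       SUM(qs.execution_count)        AS total_executions
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%WHERE OrderType = @Type%'
  AND st.text NOT LIKE N'%dm_exec_query_stats%'   -- exclude this monitoring query itself
GROUP BY qs.query_hash;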

The result here is a little bit unpredictable – sometimes you get 10 different plans, sometimes 85, and so on.
But all executions have one thing in common: we end up with an enormous number of execution plans cached within the plan cache for exactly the same – meaning one! – query…

The feature turned on

Now I want to show you the behaviour of SQL Server with the new feature turned on. Therefore, I will enable the database scoped setting(s) to on.
By the way: as you can see in the code section below, I turned on two switches at the same time. The first one is the feature itself, which helps us reduce compiling to a minimum, while the second one refreshes the statistics asynchronously at low priority (instead of blocking through a synchronous SCH-M lock). This means that statistics update stalls are avoided, because SQL Server will use the current statistics and update them later if it is not able to acquire a SCH-M lock at that moment.

-- Change the behaviour having the feature turned on
USE [OptimizedSpExecutesql]
GO

ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_SP_EXECUTESQL = ON;   
GO
ALTER DATABASE SCOPED CONFIGURATION SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = ON;
GO

SELECT name, value, is_value_default FROM sys.database_scoped_configurations
WHERE name IN ('OPTIMIZED_SP_EXECUTESQL', 'ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY')
GO

After clearing the cache and executing the T-SQL statement via “ostress.exe” again, we see the following content within the plan cache:

One plan with an execution count of 100 – as expected and also as desired!

Get a deeper insight into what is going on behind the scenes

I want to show you what is captured within an Extended Events Session when we execute the same query as above – once without the new feature and once with it.
The Extended Events Session looks as follows:

CREATE EVENT SESSION [QueryPerformanceCapture] ON SERVER 
ADD EVENT sqlserver.sql_batch_completed (
ACTION (sqlos.task_time, sqlserver.client_app_name, sqlserver.database_id, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.sql_statement_recompile (
ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.session_id, sqlserver.sql_text)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.sp_statement_completed (
ACTION (sqlserver.session_id, sqlserver.sql_text, sqlserver.client_app_name, sqlserver.database_id)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.query_post_compilation_showplan(
ACTION(sqlserver.sql_text, sqlserver.session_id, sqlserver.client_app_name, sqlserver.database_id)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%'))
ADD TARGET package0.event_file (
SET filename = N'C:\DATEN\SQL\SQL_SERVER_HOME\MSSQL17.MSSQLSERVER\ANALYSIS\QueryPerformanceCapture.xel', max_file_size = (10), max_rollover_files = (5))
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = ON
);
GO

-- Start the event session
ALTER EVENT SESSION [QueryPerformanceCapture] ON SERVER  
STATE = START;
GO

As you can see, during the first execution of the T-SQL statement via “ostress.exe”, 100 completion events were captured (because we launched 100 executions of the code) along with 47 events related to compilations:

Regarding the time spent on those 47 compile events, we land in this example at 16813998 microseconds, which is roughly 16.8 seconds:
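
The aggregation itself was done over the captured event file; a query in this spirit does the job (the path must match the session's file target, and I assume here that the compile events expose their elapsed time in a data field called duration):

-- Count the captured events per type and sum up their duration
;WITH ev AS
(
    SELECT t.x.value('(event/@name)[1]', 'nvarchar(128)')                 AS event_name,
           t.x.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration
    FROM sys.fn_xe_file_target_read_file(
             N'C:\DATEN\SQL\SQL_SERVER_HOME\MSSQL17.MSSQLSERVER\ANALYSIS\QueryPerformanceCapture*.xel',
             NULL, NULL, NULL) AS f
    CROSS APPLY (SELECT CAST(f.event_data AS XML) AS x) AS t
)
SELECT event_name, COUNT(*) AS nb_events, SUM(duration) AS total_duration
FROM ev
GROUP BY event_name;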

As soon as I turned the feature on at database level, we are facing only 1 event related to compilation:

Regarding the time spent on compiling now, we observe that this is 91142 microseconds (i.e. only about 0.09 seconds), because SQL Server had to do the work only once:


Conclusion

From my point of view, “OPTIMIZED_SP_EXECUTESQL” is, just like “Optimized Locking”, a real game changer. Or should I say: it could be?
Well, as always, it depends on the workload your SQL Server or database is dealing with. If you see a lot of dynamic SQL, enabling this feature is really valuable; otherwise it won’t have much of an effect. Therefore, I recommend not enabling it immediately on all your databases, but monitoring your workload first and making your decision after evaluating the results.

L’article SQL Server 2025 – OPTIMIZED_SP_EXECUTESQL est apparu en premier sur dbi Blog.

Oracle AutoUpgrade? Easy way to upgrade your Oracle Database! Part I

Thu, 2025-07-17 05:21
Introduction

The AutoUpgrade utility is designed to automate the upgrade process: before starting the upgrade, during the upgrade deployment, and during the post-upgrade checks and configuration migration.

The Oracle AutoUpgrade tool has been part of the Oracle Home since version 12.1.0.2.
It is a Java-based program which can be downloaded from My Oracle Support: ID 2485457.1.
It is recommended to use the latest version (as usual, as for OPatch for example).

The most important point is that Oracle has desupported DBUA (Database Upgrade Assistant) and the other manual upgrade methods with Oracle 23ai. AutoUpgrade is the only supported method.

This table shows the supported source and target database versions for AutoUpgrade:

Source Database Version    Target Database Version
11.2.0.4                   12.2.0.1 (12.2 + DBJAN2019RU and newer)
12.1.0.2                   18c (18.5 and newer)
12.2.0.1                   19c (19.3 and newer)
18c                        21c (21.3 and newer)
19c                        23ai (23.4 and newer) and any future release

The AutoUpgrade workflow

Using AutoUpgrade means running four simple steps:

  • Prepare
  • Analyze
  • Fixups
  • Deploy
Prepare

The first step is about downloading all the binaries we need to patch or upgrade our databases, and creating the configuration file AutoUpgrade uses to know what to do and where to do it.

  • Download the latest version of AutoUpgrade -> My Oracle Support ID 2485457.1
  • Download the latest version of OPatch -> My Oracle Support ID 224346.1
  • Download DBRU + OJVM + DPBP + Most Important One-off Patches -> My Oracle Support ID 2118136.2 + ID 555.1

The configuration file used by AutoUpgrade must contain some information such as:

  • Source Oracle Home
  • Target Oracle Home
  • Patches repository
  • Source Database SID
  • and other advanced options

We will see that later in this blog, no worries!

Analyze

This is the classic precheck used in every patch or upgrade process led by Oracle.

Fixups

Automatic or manual Fixups, if necessary.

Deploy

This is the main step, where AutoUpgrade runs the upgrade process on your source database.
In case of failure, AutoUpgrade is able to restart from where it failed… after you have fixed the errors yourself!
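
As a rough illustration only (the exact console commands can vary between AutoUpgrade versions, so check help in the console), resuming a failed deploy job typically looks like this:

java -jar autoupgrade.jar -config <your_config.cfg> -patch -mode deploy
patch> lsj
patch> resume -job 101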

A simple example

For this lab, we have:

  • Oracle Linux Server 8
  • Oracle Database Home 19.24 installed in /u01/app/oracle/product/19.24/dbhome_1
  • Oracle standalone database : DB19

We want to patch this database to version 19.28, according to the best practices (for this, check Mike Dietrich’s blog: https://mikedietrichde.com/ ).

Preparation

We download the following patches:

  • Patch 37960098 : DBRU 19.28
  • Patch 38170982 : Data Pump Bundle Patches (My Oracle Support ID 2819284.1)
  • Patch 37847857 : OJVM 19.28
  • Patch 34672698 : [VOS] DB50: ORA-800: soft external error, arguments: [set priority failed], [vktm] , dism(16)
  • Patch 34774667 : [AQ] ORA-7445 in Purge Queue Table / High CPU usage in SVCB Service
  • Patch 29213893 : [QRY OPTIMIZER] DBMS_STATS Failing With Error Ora-1422 When Gathering Stats for User$ Table

We save them in the repository /home/oracle/autoupgrade/patches (for example).

We download in the same manner the latest version of OPatch, following My Oracle Support ID 224346.1:

How to install the OPatch utility?

  1. Download the latest OPatch utility
    Search for Patch 6880880 in MOS or from the URL
    https://updates.oracle.com/download/6880880.html
  2. Take a backup of the older version of the OPatch utility under $ORACLE_HOME and unzip the downloaded file
cd $ORACLE_HOME
mv OPatch OPatch.bkp
unzip <download directory>/p6880880_<version>_<platform>.zip
cd OPatch
./opatch version

The most recent version of AutoUpgrade, 25.3, is a fully supported version and can be downloaded via this link: version 20250509, or directly from oracle.com.
Oracle strongly recommends that you always use the latest version of AutoUpgrade. AutoUpgrade is backward compatible and can upgrade older releases of Oracle Database as well.
The latest ORDS (25.1), which supports the AutoUpgrade REST API, can be found here.

The last step of the preparation is to create the configuration file used by AutoUpgrade for our patching.
A simple config file looks like this (/home/oracle/patch_DB19.cfg):

global.global_log_dir=/home/oracle/autoupgrade-patching/log
patch1.sid=DB19
patch1.log_dir=/home/oracle/autoupgrade-patching/DB19/log
patch1.source_home=/u01/app/oracle/product/19.24/dbhome_1
patch1.target_home=/u01/app/oracle/product/19.28/dbhome_1
patch1.restoration=YES
patch1.drop_grp_after_patching=YES
patch1.folder=/home/oracle/autoupgrade/patches
patch1.patch=RU,OJVM,OPATCH,DPBP,34672698,34774667,29213893
patch1.download=NO

An excellent tool for building this configuration file can be found at https://viniciusdba.com.br/autoupgrade-composer/
I must say: use it!!

Some explanations about the parameters:

  • source_home: the current Oracle Home used by the instance DB19
  • target_home: the new Oracle Home we are targeting
  • restoration: AutoUpgrade can create a Guaranteed Restore Point (GRP) before the upgrade/patching operations
  • drop_grp_after_patching=YES: by default, AutoUpgrade does not drop the GRP it created after a successful run; with this parameter set to YES, the GRP is dropped after successful patching.
  • folder: the full path to the repository where the patches have been downloaded.
  • patch: what we want to patch:
    • RU: Release Update
    • OJVM: Oracle Java Virtual Machine
    • OPATCH: the OPatch utility
    • DPBP: Data Pump Bundle Patch
    • IDs of specific one-off patches
  • download: we can ask AutoUpgrade to download the patches itself.

Now, we have all the stuff to begin our database patching!

Analyze

Just run the following command:

java -jar autoupgrade.jar -config /home/oracle/patch_DB19.cfg -patch -mode analyze

The output looks like:

AutoUpgrade Patching 25.3.250509 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be analyzed
Type 'help' to list console commands
patch> lsj
+----+-------+---------+---------+-------+----------+-------+----------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|START_TIME|UPDATED|         MESSAGE|
+----+-------+---------+---------+-------+----------+-------+----------------+
| 100|   DB19|PRECHECKS|EXECUTING|RUNNING|  15:15:39|28s ago|Executing Checks|
+----+-------+---------+---------+-------+----------+-------+----------------+
Total jobs 1

patch> status -job 100
Details

	Job No           100
	Oracle SID       DB19
	Start Time       25/07/16 15:15:39
	Elapsed (min):   0
	End time:        N/A

Logfiles

	Logs Base:    /home/oracle/autoupgrade-patching/simple-patching/log/DB19
	Job logs:     /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100
	Stage logs:   /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks
	TimeZone:     /home/oracle/autoupgrade-patching/simple-patching/log/DB19/temp
	Remote Dirs:  

Stages
	PENDING          <1 min
	PRECHECKS        ~0 min (RUNNING)

Stage-Progress Per Container

	+--------+---------+
	|Database|PRECHECKS|
	+--------+---------+
	|    DB19|    98 % |
	+--------+---------+
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]

Please check the summary report at:
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log

We check for success or any errors that may have occurred during the analyze phase:

# cat /home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
==========================================
   AutoUpgrade Patching Summary Report
==========================================
[Date]           Wed Jul 16 15:17:31 GMT 2025
[Number of Jobs] 1
==========================================
[Job ID] 100
==========================================
[DB Name]                DB19
[Version Before AutoUpgrade Patching] 19.24.0.0.240716
[Version After AutoUpgrade Patching]  19.28.0.0.250715
------------------------------------------
[Stage Name]    PENDING
[Status]        SUCCESS
[Start Time]    2025-07-16 15:15:39
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/pending
------------------------------------------
[Stage Name]    PRECHECKS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:15:39
[Duration]      0:01:52
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks
[Detail]        /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks/DB19_preupgrade.log
Check passed and no manual intervention needed
------------------------------------------

The analysis is OK, so we can proceed and deploy the patch.

Deploy

Just run the following command:

java -jar autoupgrade.jar -config /home/oracle/patch_DB19.cfg -patch -mode deploy

The output looks like:

AutoUpgrade Patching 25.3.250509 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be processed
Type 'help' to list console commands
patch> lsj                          
+----+-------+---------+---------+-------+----------+-------+----------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|START_TIME|UPDATED|         MESSAGE|
+----+-------+---------+---------+-------+----------+-------+----------------+
| 101|   DB19|PRECHECKS|EXECUTING|RUNNING|  15:34:17| 5s ago|Executing Checks|
+----+-------+---------+---------+-------+----------+-------+----------------+
Total jobs 1

Details

	Job No           101
	Oracle SID       DB19
	Start Time       25/07/16 15:34:17
	Elapsed (min):   22
	End time:        N/A

Logfiles

	Logs Base:    /home/oracle/autoupgrade-patching/simple-patching/log/DB19
	Job logs:     /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101
	Stage logs:   /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/patching
	TimeZone:     /home/oracle/autoupgrade-patching/simple-patching/log/DB19/temp
	Remote Dirs:  

Stages
	PENDING          <1 min
	GRP              <1 min
	PREACTIONS       <1 min
	PRECHECKS        1 min
	PREFIXUPS        1 min
	EXTRACT          <1 min
	INSTALL          <1 min
	ROOTSH           <1 min
	DBTOOLS          <1 min
	OPATCH           12 min
	PATCHING         ~5 min (RUNNING)
	POSTCHECKS      
	POSTFIXUPS      
	POSTACTIONS     

Stage-Progress Per Container

	The Stage PATCHING does not have any data to show
Job 101 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished                  [1]
Jobs failed                    [0]
Jobs restored                  [0]
Jobs pending                   [0]

---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from DB19: drop restore point AU_PATCHING_9212_DB191928000


Please check the summary report at:
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log

We check for success or any errors that may have occurred during the deployment:

# cat /home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
==========================================
   AutoUpgrade Patching Summary Report
==========================================
[Date]           Wed Jul 16 15:56:46 GMT 2025
[Number of Jobs] 1
==========================================
[Job ID] 101
==========================================
[DB Name]                DB19
[Version Before AutoUpgrade Patching] 19.24.0.0.240716
[Version After AutoUpgrade Patching]  19.28.0.0.250715
------------------------------------------
[Stage Name]    PENDING
[Status]        SUCCESS
[Start Time]    2025-07-16 15:34:18
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/pending
------------------------------------------
[Stage Name]    GRP
[Status]        SUCCESS
[Start Time]    2025-07-16 15:34:18
[Duration]      0:00:01
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/grp
[Detail]        Please drop the following GRPs after {0} completes:
                 AU_PATCHING_9212_DB191928000
------------------------------------------
[Stage Name]    PREACTIONS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:34:19
[Duration]      0:00:00
------------------------------------------
[Stage Name]    PRECHECKS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:34:19
[Duration]      0:01:21
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prechecks
[Detail]        /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prechecks/DB19_preupgrade.log
                Check passed and no manual intervention needed
------------------------------------------
[Stage Name]    PREFIXUPS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:35:40
[Duration]      0:01:20
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prefixups
[Detail]        /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prefixups/prefixups.html
------------------------------------------
[Stage Name]    EXTRACT
[Status]        SUCCESS
[Start Time]    2025-07-16 15:37:00
[Duration]      0:00:49
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/extract
------------------------------------------
[Stage Name]    INSTALL
[Status]        SUCCESS
[Start Time]    2025-07-16 15:37:50
[Duration]      0:00:52
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/install
------------------------------------------
[Stage Name]    ROOTSH
[Status]        SUCCESS
[Start Time]    2025-07-16 15:38:43
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/rootsh
------------------------------------------
[Stage Name]    DBTOOLS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:38:43
[Duration]      0:00:02
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/dbtools
------------------------------------------
[Stage Name]    OPATCH
[Status]        SUCCESS
[Start Time]    2025-07-16 15:38:45
[Duration]      0:12:19
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/opatch
------------------------------------------
[Stage Name]    PATCHING
[Status]        SUCCESS
[Start Time]    2025-07-16 15:51:05
[Duration]      0:05:37
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/patching
------------------------------------------
[Stage Name]    POSTCHECKS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:56:42
[Duration]      0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postchecks
[Detail]        /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postchecks/DB19_postupgrade.log
                Check passed and no manual intervention needed
------------------------------------------
[Stage Name]    POSTFIXUPS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:56:43
[Duration]      0:00:02
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postfixups
[Detail]        /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postfixups/postfixups.html
------------------------------------------
[Stage Name]    POSTACTIONS
[Status]        SUCCESS
[Start Time]    2025-07-16 15:56:46
[Duration]      0:00:00
------------------------------------------
Checking the result
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jul 16 16:05:31 2025
Version 19.28.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.28.0.0.0

SQL> col oracle_home format a60
select sys_context('USERENV','ORACLE_HOME') as oracle_home from dual; 

ORACLE_HOME
------------------------------------------------------------
/u01/app/oracle/product/19.28/dbhome_1

SQL> col comp_id format a10
col version_full format a15
col status format a15    
select comp_id,
       version_full,
       status
from dba_registry
order by comp_id;  

COMP_ID    VERSION_FULL    STATUS
---------- --------------- ---------------
CATALOG    19.28.0.0.0	   VALID
CATPROC    19.28.0.0.0	   VALID
OWM	   19.28.0.0.0	   VALID
RAC	   19.28.0.0.0	   OPTION OFF
XDB	   19.28.0.0.0	   VALID
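
As an additional, optional sanity check (not part of the AutoUpgrade output), you can also query dba_registry_sqlpatch to confirm that the RU, OJVM and one-off patches were applied to the database:

-- Optional check: list the patches applied to this database
col description format a60
select patch_id, patch_type, action, status, description
from   dba_registry_sqlpatch
order  by action_time;
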
Conclusion

Finally, in a few simple steps, we have patched our database. The AutoUpgrade utility is a very powerful tool, easy to use and manage.

I hope this blog helps you understand how AutoUpgrade works,
and I encourage you to become familiar with this wonderful tool!

In Part II, I will show you some advanced patching that AutoUpgrade can handle!

See you soon!

L’article Oracle AutoUpgrade? Easy way to upgrade your Oracle Database! Part I est apparu en premier sur dbi Blog.

SQL Server: New SQL Server Management Studio (SSMS) landing page

Tue, 2025-07-01 01:37

After the new connectivity and drivers landing page for SQL Server with .Net, Java, Python, C++, Go and PHP languages, you have a new landing page for SQL Server Management Studio (SSMS):

https://learn.microsoft.com/en-us/ssms

You will see 6 parts in the main panel:

  • Overview
  • Get Started
  • Install
  • Customize SSMS
  • What’s new in SSMS?
  • Troubleshoot

If you have a look at the left-hand menu, you will find more topics, such as Copilot in SQL Server Management Studio (Preview), and can take a deep dive into this subject:

To learn how to enable and use it, follow Steven Naudet’s blog post: Step-by-Step Guide to Enabling Copilot in SSMS

One part of the documentation I particularly like is the “Tips and Tricks” section under “Get Started”, which helps you be more efficient when using SSMS:

https://learn.microsoft.com/en-us/ssms/tutorials/ssms-tricks

Have a good start with the new landing page of SSMS!

L’article SQL Server: New SQL Server Management Studio (SSMS)  landing page est apparu en premier sur dbi Blog.

M-Files Image analysis

Mon, 2025-06-30 08:50

Nowadays, business data is no longer just documents or text files. We also need to manage unstructured data such as photos and other images.

Why Combine M-Files with Azure Computer Vision?

M-Files excels at organizing and managing documents based on metadata rather than folder structures. It works well as soon as Objects have associated Metadata to categorize and search them.

I already wrote some blogs about the awesome capabilities offered by M-files to analyze and help you to organize efficiently your data, like Intelligent Metadata Layer (aka IML) or with Aino your virtual assistant.

Looking for something?

These tools are really powerful, but also limited to text documents (Office documents, pdf, …).

No worries, M-Files has a solution for that: it is called M-Files Image Analysis.

This additional module uses the Computer Vision API of Microsoft Cognitive Services.

What does it do?

After setting up the link between our M-Files vault and the Computer Vision API, we are ready to play with images. Several suggestions are possible, depending on the needs:

  • Caption: provides a sentence describing the picture
  • Tag: suggests the main keywords related to the image (objects, activities, …)
  • Tag/Secondary: same idea as Tag, but may be less precise
  • Category: provides a descriptive group like: animals, outdoor, …

But you can also use this module to extract text from an image:

Extract text from a picture

And finally, you can also get more basic properties like format (jpeg), height and width.

There is one very last capability, which is face recognition. It is supposed to identify the gender and age of the people appearing in the photo, but it did not work well for me (I will test it more deeply later).

Final Thoughts

Integrating M-Files with Azure Computer Vision transforms static images into actionable, searchable content. This is a significant advancement for organizations managing unstructured data, allowing them to easily categorize and leverage their images and improve efficiency.

L’article M-Files Image analysis est apparu en premier sur dbi Blog.

How to Contribute to the YaK Project: Adding a New Infrastructure Provider (e.g., OpenShift Virtualization)

Tue, 2025-06-24 07:42

YaK is a platform-agnostic framework designed to streamline and automate the deployment of infrastructure and middleware components (database, K8S, etc.) across various environments. With its open-source core, YaK enables developers, DevOps teams, and IT professionals to manage infrastructure provisioning and component deployment efficiently, without being tied to a specific vendor. Whether you’re working with AWS, Azure, Oracle Cloud Infrastructure (OCI), or on-premises servers, YaK provides a consistent and flexible solution for your deployment needs. For more information, visit https://yak4all.io/

In this guide, we’ll walk you through the process of contributing to the YaK project by adding support for a new infrastructure provider, using OpenShift Virtualization as an example. We’ll cover the steps to fork the necessary repositories, implement the required changes, and integrate your contributions into the main project.

Overview of the YaK Project Architecture

YaK is organized into multiple GitLab repositories, each responsible for a specific part of the system; here are the ones we will work on:

  • yakenv: environment container image with all the packages and dependencies
  • yak_core: uses yakenv as its base image; contains the core logic
  • yak_runner: uses yak_core as its base image; contains the backend logic for automation jobs
  • yak_ui: web-based frontend

These components work together to provide a modular, extensible framework for managing your different infrastructures and attached components.

Step 1: Fork the Repositories

Before starting development, fork each of the required repositories into your own GitLab account:

  1. Go to each project page:
  2. Click the Fork button at the top right of the page.
  3. Choose your namespace (e.g., your GitLab username or organization).
  4. Repeat for all repositories.
Step 2: Set Up Your Local Environment

Clone the forked repositories to your local machine:

git clone https://gitlab.com/<your-namespace>/yakenv.git
git clone https://gitlab.com/<your-namespace>/yak_core.git
git clone https://gitlab.com/<your-namespace>/yak_backend/yak_runner.git
git clone https://gitlab.com/<your-namespace>/yak_frontend/yak_ui.git

Then install dependencies (Docker or Podman) and ensure each project builds correctly. Refer to the README.md files for project-specific setup instructions.

Step 3: Implement the New Infrastructure Provider

To add OpenShift Virtualization as a new provider, you will need to make coordinated changes across the different components:

yakenv
  • Dockerfile: Add to this file the commands to install the Ansible collections and/or CLI packages that you will need to interact with your new provider.
    Here are the lines we added for OpenShift Virtualization:
# Install Kubevirt collection (OpenShift Virtualization)
RUN ansible-galaxy collection install kubevirt.core==2.1.0 --collections-path /etc/ansible/collections --force

# Install virtctl CLI (Kubevirt)
RUN curl -lL https://github.com/kubevirt/kubevirt/releases/download/v1.5.1/virtctl-v1.5.1-linux-amd64 -o /usr/bin/virtctl && chmod +x /usr/bin/virtctl
  • requirements.txt: The Python pip packages to install.
    Here is the line we added for OpenShift Virtualization:
kubernetes==22.6.0
yak_core
  • collections/ansible_collections/yak/core/roles: This is the directory where you will find all the tasks that manage the infrastructures and the servers’ storage.
    In this directory you will also find a template for a new infrastructure; copy it and rename it to the name of your infrastructure (replacing white spaces by underscores _), here openshift_virtualization.
  • collections/ansible_collections/yak/core/roles/openshift_virtualization/defaults/main.yml: Define the default values of the variables:
---
state: present
openshift_namespace: default
custom_labels: {}
instance_type: u1.medium
kubeconfig_path: ~/yak/kubeconfig
preference_name: ""

rhel_activation_key: "{{ lookup('env','RHEL_ACTIVATION_KEY') }}"
rhel_org_id: "{{ lookup('env','RHEL_ORG_ID') }}"
  • collections/ansible_collections/yak/core/roles/openshift_virtualization/tasks: the task files for your infrastructure. These are the minimum required files:
  1. main.yml: the default task file; it includes the right task file depending on the state. Here is the content:
---
- include_tasks: check_credential.yml
- include_tasks: "{{ state }}.yml"

  1. check_credentials.yml: Steps to validate that the provided credentials are valid
---
- block:
  - name: List all VMs - Verifying credentials are valid
    delegate_to: localhost
    kubevirt.core.kubevirt_vm_info:
      namespace: "{{ openshift_namespace }}"
    retries: 3
    delay: 5
  rescue:
  - fail:
      msg: 'Please ensure your OpenShift credentials are valid.'

  1. present.yml: Steps to create the server’s VM on your infrastructure. If there are differences between creating a Linux and a Windows server, you can include sub-task files (here we developed create_linux.yml and create_windows.yml)
# Copyright: (c) 2025, dbi services, distributed without any warranty under the terms of the GNU General Public License v3
- ansible.builtin.include_tasks: 
    file: get_vm_state.yml

- delegate_to: localhost
  set_fact:
    number_of_vms: "{{ r_get_vm_state.resources | length }}"

- block:
  - include_tasks: "create_{{ os_type }}.yml"
  - include_tasks: post_config.yml
  when: number_of_vms | int == 0

- when: number_of_vms | int > 0
  block:
    - fail:
        msg: "Cannot Deploy as the VirtualMachine already exists."
  1. started.yml: Steps to start the server’s VM
---
- name: Power On
  delegate_to: localhost
  ansible.builtin.shell: "virtctl --kubeconfig {{ openshift_virtualization_kubeconfig }} --namespace {{ openshift_namespace }} start {{ machine_name }}"
  register: r_openshift_virtualization_vm_started

- include_tasks: "post_config.yml"

  1. stopped.yml: Steps to power off your server
---
- name: Power Off
  delegate_to: localhost
  ansible.builtin.shell: "virtctl --kubeconfig {{ openshift_virtualization_kubeconfig }} --namespace {{ openshift_namespace }} stop {{ machine_name }}"
  register: r_openshift_virtualization_vm_stopped

  1. absent.yml: Steps to destroy the server
---
- name: Delete the VirtualMachine
  delegate_to: localhost
  kubevirt.core.kubevirt_vm:
    namespace: "{{ openshift_namespace }}"
    state: absent
    name: "{{ machine_name }}"
  register: r_openshift_virtualization_vm_absent

yak_runner
  • sql/upgrade/2.2.0/02-infrastructures.sql: Add the new provider with its specific parameters to the infrastructures jsonb specification schema:
UPDATE jsonb_specifications
SET schema = 
'{
  "variableName": "variables",
  "dataType": "object",
  "niceName": "Variables",
  "isSelectable": true,
  "selectorType": "providerName",
  "selectorOption": [
    "aws",
    "azure",
    "oci",
    "openshift_virtualization"
  ],
  "aws": [
    ...
  ],
  "azure": [
    ...
  ],
  "oci": [
    ...
  ],
  "openshift_virtualization": [
    {
      "dataType": "string",
      "defaultValue": "default",
      "isOneOffSetting": false,
      "maxLength": 100,
      "mandatory": true,
      "variableName": "network_name",
      "usage": "Network name to connect",
      "regex": null,
      "niceName": "Network Name",
      "minLength": 1
    },
    {
      "dataType": "string",
      "defaultValue": "default",
      "isOneOffSetting": false,
      "maxLength": 100,
      "mandatory": true,
      "variableName": "domain_device_interface_name",
      "usage": "Domain device interface name",
      "regex": null,
      "niceName": "Domain Device Interface Name",
      "minLength": 1
    },
    { "variableName": "custom_labels", "niceName": "Infrastructure labels", "dataType": "dictionary", "mandatory": false }
  ]
}
'::jsonb
WHERE name = 'infrastructureVariables';
  • sql/upgrade/2.2.0/05-server_secret_type_parameters.sql: Add the new provider to the corresponding tables and, if needed, a new secret type with its parameters:
insert into providers values
(5,	'openshift_virtualization', true, '{}', 'OpenShift Virtualization');

insert into secret_types values
(7,	'kubeconfig',true,true,false,5);

-- Data: secret_type_parameters
insert into secret_type_parameters (secret_type_id, variable_name, nice_name, data_type_id, mandatory, options, min_length, max_length) values
-- Kubeconfig
(7, 'K8S_AUTH_KUBECONFIG', 'OpenShift Kubeconfig', 1, true, null, 1, 5120);
  • sql/upgrade/2.2.0/06-provider_images.sql: Add new images for your new provider and allow admin to manage them:
insert into provider_images (name, provider_id, operating_system_type_id, ansible_user, variables, user_id) values
-- OpenShift Virtualization linux
('datasource_centos-stream9', 5, 1, 'admin', '{"dataSource": {"name": "centos-stream9", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_centos-stream10', 5, 1, 'admin', '{"dataSource": {"name": "centos-stream10", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_fedora', 5, 1, 'admin', '{"dataSource": {"name": "fedora", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_rhel8', 5, 1, 'admin', '{"dataSource": {"name": "rhel8", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_rhel9', 5, 1, 'admin', '{"dataSource": {"name": "rhel9", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_ubuntu24-04', 5, 1, 'admin', '{"dataSource": {"name": "ubuntu24-04", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_win2k22', 5, 1, 'admin', '{"dataSource": {"name": "win2k22", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_win2k25', 5, 1, 'admin', '{"dataSource": {"name": "win2k25", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_win10', 5, 1, 'admin', '{"dataSource": {"name": "win10", "namespace": "openshift-virtualization-os-images"}}', 1),
('datasource_win11', 5, 1, 'admin', '{"dataSource": {"name": "win11", "namespace": "openshift-virtualization-os-images"}}', 1);

insert into projects_provider_images (project_id, image_id, user_id) 
    select 1, id, 1 from provider_images where provider_id=5;
  • sql/upgrade/2.2.0/07-servers.sql: Add the server parameters for the new provider:
UPDATE jsonb_specifications
SET schema = 
'{
  "variableName": "variables",
  "niceName": "Variables",
  "dataType": "object",
  "isSelectable": true,
  "selectorType": "providerName",
  "selectorOption": ["aws", "azure", "oci", "openshift_virtualization", "on_premises"],
  "aws": [
    ...
  ],
  "azure": [
    ...
  ],
  "oci": [
    ...
  ],
   "on_premises": [
    ...
  ],
  "openshift_virtualization": [
    {
      "variableName": "providerImageId",
      "niceName": "Provider Image ID",
      "dataType": "integer",
      "hidden": true,
      "mandatory": false,
      "isOneOffSetting": false
    },
    {
      "dataType": "string",
      "defaultValue": null,
      "isOneOffSetting": true,
      "maxLength": 100,
      "mandatory": true,
      "variableName": "providerImageName",
      "usage": "See documentation",
      "regex": null,
      "options": "values.vProviderImages",
      "niceName": "Provider Image Name",
      "minLength": 1
    },
    {
      "variableName": "providerShapeId",
      "niceName": "Provider Shape ID",
      "dataType": "integer",
      "hidden": true,
      "mandatory": false,
      "isOneOffSetting": false
    },
    {
      "dataType": "string",
      "defaultValue": null,
      "isOneOffSetting": true,
      "maxLength": 100,
      "mandatory": true,
      "variableName": "providerShapeName",
      "usage": "See documentation",
      "regex": null,
      "options": "values.vProviderShapes",
      "niceName": "Provider Shape Name",
      "minLength": 1
    },
    { 
      "variableName": "openshift_namespace",
      "niceName": "Namespace",
      "defaultValue": "default",
      "dataType": "string",
      "mandatory": true
    },
    { 
      "variableName": "preference_name",
      "niceName": "Preference Name",
      "defaultValue": "",
      "dataType": "string",
      "mandatory": true
    },
    { "variableName": "custom_labels", "niceName": "Server labels", "dataType": "dictionary", "mandatory": false }

  ]
}' :: jsonb
WHERE name = 'serverVariables';

  • sql/upgrade/2.2.0/11-components-storage-jsonb.sql: Add the component disk parameters for the provider:
UPDATE jsonb_specifications
SET schema = 
    '{
      "dataType": "object",
      "on_premises": [],
      "isSelectable": true,
      "selectorType": "providerName",
      "variableName": "storageVariables",
      "selectorOption": [
        "aws",
        "azure",
        "oci",
        "on_premises",
        "openshift_virtualization"
      ],
      "aws": [
        ...
       ],
      "azure": [
        ...
      ],
      "oci": [
        ...
      ],
      "on_premises": [
       ...
      ],
      "openshift_virtualization": [
        {
          "dataType": "array",
          "isSelectable": true,
          "selectorType": "osType",
          "variableName": "filesystems",
          "niceName": "File systems",
          "selectorOption": [
            "Linux",
            "Windows"
          ],
          "Linux": [
            {
                "variableName": "mount_point",
                "niceName": "Mount point, e.g. /app",
                "dataType": "string",
                "defaultValue": null,
                "isOneOffSetting": true,
                "maxLength": 100,
                "mandatory": true,
                "minLength": 1
            },
            {
                "variableName": "filesystem_type",
                "niceName": "Filesystem type",
                "dataType": "string",
                "mandatory": true,
                "isOneOffSetting": false,
                "defaultValue": "xfs"
            },
            {
                "variableName": "size_gb",
                "niceName": "Mount point size in GB",
                "dataType": "integer",
                "mandatory": true,
                "isOneOffSetting": false,
                "defaultValue": 20
            },
            {
              "dataType": "integer",
              "isOneOffSetting": true,
              "mandatory": true,
              "variableName": "max_size_gb",
              "niceName": "Disk size",
              "min": 1,
              "defaultValue": -1,
              "usage": "Defines the size of the virtual disk at cloud provider end. 5% overhead is automatically added to each disk for metadata storage."
            },
            {
              "dataType": "string",
              "niceName": "Storage Class Name",
              "mandatory": false,
              "maxLength": 100,
              "minLength": 0,
              "defaultValue": "",
              "variableName": "volume_storageClassName",
              "isOneOffSetting": true
            }
          ],
          "Windows": [
            {
                "variableName": "drive_letter",
                "niceName": "Drive letter, e.g. F",
                "dataType": "string",
                "defaultValue": null,
                "isOneOffSetting": true,
                "maxLength": 100,
                "mandatory": true,
                "minLength": 1
            },
            {
                "variableName": "partition_label",
                "niceName": "Partition label",
                "dataType": "string",
                "mandatory": true,
                "isOneOffSetting": false,
                "defaultValue": "data"
            },
            {
                "variableName": "size_gb",
                "niceName": "Mount point size in GB",
                "dataType": "integer",
                "mandatory": true,
                "isOneOffSetting": false,
                "defaultValue": 20
            },
            {
              "dataType": "string",
              "niceName": "Storage Class Name",
              "mandatory": false,
              "maxLength": 100,
              "minLength": 0,
              "defaultValue": "",
              "variableName": "volume_storageClassName",
              "isOneOffSetting": true
            }
          ]
        }
      ]
    }' :: jsonb
  WHERE name = 'componentStorageVariables';

  • sql/upgrade/2.2.0/12-provider_disks_parameters_root_jsonb.sql: Add the server root disk parameters for the provider:
UPDATE jsonb_specifications
SET schema = 
    '{
      "dataType": "object",
      "on_premises": [],
      "isSelectable": true,
      "selectorType": "providerName",
      "variableName": "storageVariables",
      "selectorOption": [
        "aws",
        "azure",
        "oci",
        "on_premises",
        "openshift_virtualization"
      ],
      "aws": [
        ...
       ],
      "azure": [
        ...
      ],
      "oci": [
        ...
      ],
      "on_premises": [
       ...
      ],
      
  "openshift_virtualization": [
    {
      "Linux": [
        {
          "dataType": "integer",
          "niceName": "Size in GB",
          "mandatory": true,
          "defaultValue": 30,
          "variableName": "volume_size",
          "isOneOffSetting": true
        },
        {
          "dataType": "string",
          "niceName": "Storage Class Name",
          "mandatory": false,
          "maxLength": 100,
          "minLength": 0,
          "defaultValue": "",
          "variableName": "volume_storageClassName",
          "isOneOffSetting": true
        },
        {
          "options": [
            "Block",
            "Filesystem"
          ],
          "dataType": "string",
          "niceName": "Disk type",
          "mandatory": true,
          "maxLength": 100,
          "minLength": 1,
          "defaultValue": "Filesystem",
          "variableName": "volume_mode",
          "isOneOffSetting": true
        }
      ],
      "Windows": [
        {
          "dataType": "integer",
          "niceName": "Size in GB",
          "mandatory": true,
          "defaultValue": 30,
          "variableName": "volume_size",
          "isOneOffSetting": true
        },
        {
          "dataType": "string",
          "niceName": "Storage Class Name",
          "mandatory": false,
          "maxLength": 100,
          "minLength": 0,
          "defaultValue": "",
          "variableName": "volume_storageClassName",
          "isOneOffSetting": true
        },
        {
          "options": [
            "Block",
            "Filesystem"
          ],
          "dataType": "string",
          "niceName": "Disk type",
          "mandatory": true,
          "maxLength": 100,
          "minLength": 1,
          "defaultValue": "Filesystem",
          "variableName": "volume_mode",
          "isOneOffSetting": true
        }
      ],
      "dataType": "object",
      "isSelectable": true,
      "selectorType": "providerImageOsType",
      "variableName": "openshift_virtualization_root_disk_parameters",
      "selectorOption": [
        "Linux",
        "Windows"
      ]
    },
    {
      "variableName": "providerImageOsType",
      "niceName": "Operating System",
      "dataType": "string",
      "mandatory": true,
      "isOneOffSetting": true,
      "options": ["Linux", "Windows"],
      "readonly": true
    }
  ]
    }' :: jsonb
  WHERE name = 'componentStorageVariables';

  • sql/upgrade/2.2.0/13-provider-shapes.sql: Adding shapes for the provider:
insert into provider_shapes (name, provider_id, variables, user_id) values
('openshift-virtualization-u1_medium-shape',   5, '{"instance_type": "u1.medium"}', 1),
('openshift-virtualization-u1_large-shape',   5, '{"instance_type": "u1.large"}', 1),
('openshift-virtualization-u1_xlarge-shape',   5, '{"instance_type": "u1.xlarge"}', 1);

insert into projects_provider_shapes (project_id, shape_id, user_id) 
    select 1, id, 1 from provider_shapes where provider_id=5;

yak_ui
  • yak-ui/src/assets/providers/virt-icon.png: Add the new provider logo to the provider logo assets:
  • yak-ui/src/utils/imageList.ts: Add the new logo path to the ProviderLogos dict:
import OnPremiseLogo from "@/assets/providers/2860404-200-min.png";
import LinuxLogo from "@/assets/linux.png";
import WindowsLogo from "@/assets/windows.png";
import OpenShiftVirtualizationLogo from "@/assets/providers/virt-icon.png";

export const ProviderLogos: HashTable<any> = {
  aws: AwsLogo,
  oci: OciLogo,
  'on_premises': OnPremiseLogo,
  azure: AzureLogo,
  'openshift_virtualization': OpenShiftVirtualizationLogo,
};

...
  • yak-ui/src/components/YakTabContainer.vue: Add the new provider to the containerClass.specialTabs and cardClass.specialTabs lists:
...
const containerClass = computed(() => {
  const specialTabs = ["aws", "oci", "azure", "on_premises", "openshift_virtualization"];
  return specialTabs.includes(tab.value)
    ? "container-grid-custom"
    : "container-custom";
});

const cardClass = computed(() => {
  const specialTabs = ["aws", "oci", "azure", "on_premises", "openshift_virtualization"];
  return specialTabs.includes(tab.value) ? "card-grid-custom" : "card-custom";
});
...
  • yak-ui/src/components/customs/StorageEditor.vue: Add the new provider to the providerTypeField.options list:
...
const providerTypeField: Field = {
  variableName: "providerName",
  niceName: "providerName",
  dataType: "string",
  mandatory: true,
  isOneOffSetting: true,
  options: ["aws", "azure", "oci", "on_premises", "openshift_virtualization"],
};
...
  • yak-ui/src/data_structures/vComponents.ts: Add the new provider to the children.options list passed to the allGroups.push call:
...
                    allGroups.push({
                      variableName: group.name,
                      niceName: "group " + group.name,
                      mandatory: true,
                      dataType: "object",
                      isOneOffSetting: false,
                      children: [{
                        variableName: "providerName",
                        niceName: "Provider",
                        dataType: "string",
                        mandatory: true,
                        isOneOffSetting: true,
                        options: ["aws", "azure", "oci", "on_premises", "openshift_virtualization"],
...
  • yak-ui/src/data_structures/vSecrets.ts: Add the new secret type to the vSecretTypeName list:
...
              vSecretTypeName: [
                "aws credentials",
                "azure credentials",
                "oci credentials",
                "kubeconfig",
                "ssh key",
                "winrm",
              ],
...
Step 4: Test Your Changes

Each component has its own set of unit and integration tests. Be sure to:

  • Run tests locally with each change.
    All sub-projects of YaK have a Dockerfile in their root directory. Build them in this order to respect the dependencies (see the sketch after the list below):
    1. yakenv
    2. yak_core
    3. yak_runner
    4. yak_postgres
    5. yak_graphile
    6. yak_ui

Once the images are built, run them with Docker/Podman on your computer, or on a container orchestrator (Kubernetes/OpenShift/…).
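
A minimal sketch of that build step, assuming each repository was cloned into a directory of the same name and using purely illustrative image tags:

# Build the YaK images in dependency order (illustrative tags and paths)
podman build -t yakenv:dev       ./yakenv
podman build -t yak_core:dev     ./yak_core
podman build -t yak_runner:dev   ./yak_runner
podman build -t yak_postgres:dev ./yak_postgres
podman build -t yak_graphile:dev ./yak_graphile
podman build -t yak_ui:dev       ./yak_ui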

  • Validate that your provider appears correctly in the UI and can be selected.

New infrastructure declaration:

New infrastructure declared:

New secret declaration:

Provider images appeared well:

Provider shapes appeared well:

New server declaration:

  • Test full cluster lifecycle operations (deploy, stop, start, destroy, delete) for OpenShift Virtualization.

Deploy server:

Result in Openshift:

Stop server:

Start server:

Destroy server:

Step 5: Contribute Your Changes

Once development is complete and tested:

  1. Push your changes to your fork.
  2. Create a Merge Request (MR) from your fork to the main project repository for each affected sub-project:
  3. In your MR description, include:
    • A summary of the changes
    • A test plan and screenshots if applicable
    • Any known limitations or issues

Follow the contribution guidelines of the YaK project for naming conventions, commit messages, and code standards.

You can take my merge requests as an example:

Step 6: Collaborate with the Community

After submitting your MR:

  • Respond to any feedback or questions from project maintainers.
  • Be open to revising your code based on review suggestions.
  • Once approved, your changes will be merged into the main project!
Conclusion

Adding a new infrastructure provider to YaK, such as OpenShift Virtualization, is a powerful way to contribute to the project and support broader use cases. With a clear architecture and modular codebase, YaK makes it straightforward for developers to extend its capabilities.

YaK can also be integrated with various Red Hat products to enhance your automation workflows – check out our guide on Integrating YaK into Red Hat Ansible Automation Platform to learn more.

L’article How to Contribute to the YaK Project: Adding a New Infrastructure Provider (e.g., OpenShift Virtualization) est apparu en premier sur dbi Blog.

Step-by-Step Guide to Enabling Copilot in SSMS

Mon, 2025-06-23 06:45

With the latest version of SSMS, SSMS 21, there is now integration with Copilot.

In this blog post, we’ll go through the setup step by step, with screenshots, to get everything ready to use Copilot in SSMS.
There is already some official documentation on this, but I believe many people will find it clearer to see everything illustrated.

Copilot is not free: you need an Azure subscription, and you pay according to the number and complexity of prompts.

I will use an API key, but you can also use Microsoft Entra to authenticate to Azure OpenAI for Copilot in SSMS, which is probably safer.

Azure OpenAI Service

To start with, we’ll use the Azure OpenAI service.

Click “Create Azure OpenAI”

Create a Resource group if needed and choose a name.

Leave the “All networks” option selected on the Network pane.

AI Model deployment

Now, In the newly created resource click on “Go to Azure AI Foundry Portal”.

Then, in the left panel, choose “Deployments”, click “Deploy model” and select “Deploy base model”.

Select “gpt-4o” and Confirm. It’s the only supported model for now.

Then click on “Customize”.

Here you can modify the name and select a deployment type.
I recommend reading the documentation to choose the option that best fits your needs.
Doc: Deployment types for Azure AI Foundry Models
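
If you prefer scripting over the portal, the same resource and model deployment can also be created with the Azure CLI. This is only a sketch: the resource names are placeholders, and the model version and SKU values should be checked against what is currently available in your region (see az cognitiveservices account deployment create --help):

# Create the Azure OpenAI resource (names, group and region are placeholders)
az cognitiveservices account create \
  --name ai-ssms-demo --resource-group rg-ssms-copilot \
  --kind OpenAI --sku S0 --location swedencentral

# Deploy the gpt-4o model into that resource
az cognitiveservices account deployment create \
  --name ai-ssms-demo --resource-group rg-ssms-copilot \
  --deployment-name ai-ssms-deployment \
  --model-format OpenAI --model-name gpt-4o --model-version "2024-08-06" \
  --sku-name Standard --sku-capacity 10

# Retrieve the endpoint and API key that SSMS will ask for later
az cognitiveservices account show --name ai-ssms-demo --resource-group rg-ssms-copilot --query properties.endpoint
az cognitiveservices account keys list --name ai-ssms-demo --resource-group rg-ssms-copilot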

SSMS

Now go to SSMS and click “Copilot”.

The Azure OpenAI Endpoint and the API Key can be found on the Azure OpenAI Service page, by clicking on “Click here to view endpoints”.

Finally, the OpenAI Deployment is the name we defined earlier: “ai-snrel-ssms-deployment” in my case.

Note that the configuration can be edited under “Tools”, “Options”, “Copilot”.

We are done. I can now start chatting with the AI.

Hopefully, this blog made it easier for you to set up and start using Copilot in SSMS.

Regarding privacy, it is written here that no prompts will be used by OpenAI to improve or train models, so it should be quite safe.

L’article Step-by-Step Guide to Enabling Copilot in SSMS est apparu en premier sur dbi Blog.

SQL Server 2025 – Retirement of SQL Server Reporting Services (SSRS)

Fri, 2025-06-20 04:37

Microsoft has announced the end of the SQL Server Reporting Services (SSRS) service. In other words, Microsoft will no longer provide new versions of SSRS. Currently, the latest available version is the 2022 version (https://www.microsoft.com/fr-FR/download/details.aspx?id=104502).

However, it is important to note that this service will be supported until January 11, 2033. Moreover, it will still be possible to use SSRS and host databases from this service on a recent version of SQL Server.

The replacement for SQL Server Reporting Services (SSRS) is Power BI Report Server (PBIRS). This is the service to adopt for future installations. In fact, we have observed that many clients have been using Power BI Report Server for several years.

Reference: https://www.microsoft.com/en-us/sql-server/blog/2025/06/19/enhancing-reporting-and-analytics-with-sql-server-2025-tools-and-services/


I use SQL Server Reporting Services and would like to migrate to Power BI Report Server. What are my options?

Several solutions are possible (the following list is neither exhaustive nor in any particular order):

  • Migrate the databases that support the SSRS service to PBIRS
    • Current environment:
      • Server 1 (S1):
        • Hosted services: SSRS and SQL Server Database Engine. The SSRS service and my databases are hosted on my S1 server.
    • New environment:
      • Server 2 (S2):
        • Hosted services: PBIRS and SQL Server Database Engine. The PBIRS service and the database engine are hosted on my S2 server.
        • We will migrate the databases from S1 to S2.
        • The PBIRS service will connect to the migrated databases.

Reference: https://learn.microsoft.com/en-us/power-bi/report-server/migrate-report-server


  • Use the RDL Migration Tool (RdlMigration) to migrate reports from SSRS to PBIRS

Reference: https://github.com/microsoft/RdlMigration


  • Use the Reporting Services PowerShell API, which allows for fine-grained control over the migration of reports

Reference: https://github.com/microsoft/ReportingServicesTools
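
As a rough sketch of that last approach (assuming the ReportingServicesTools module and placeholder server URLs; adjust folders and authentication to your environment), a simple content migration could look like this:

# Install the module used to script SSRS/PBIRS content
Install-Module -Name ReportingServicesTools -Scope CurrentUser

# Download the whole catalog from the source SSRS instance to a local folder
Out-RsFolderContent -ReportServerUri 'http://ssrs-server/ReportServer' -RsFolder '/' -Destination 'C:\Temp\ssrs_export' -Recurse

# Upload the exported content to the target Power BI Report Server
Write-RsFolderContent -ReportServerUri 'http://pbirs-server/ReportServer' -Path 'C:\Temp\ssrs_export' -RsFolder '/' -Recurse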


I am currently using SSRS 2022 and would like to migrate to Power BI Report Server. What are my options?

According to the documentation (as of now), it is currently not possible to migrate from SSRS 2022 to Power BI Report Server.

Reference: https://learn.microsoft.com/en-us/power-bi/report-server/migrate-report-server


Thank you, Amine Haloui.

L’article SQL Server 2025 – Retirement of SQL Server Reporting Services (SSRS) est apparu en premier sur dbi Blog.

Experiencing SQLBits 2025 in London

Wed, 2025-06-18 13:24

This week, I’m in London attending the SQLBits 2025 conference.
Since joining dbi services as a consultant, this is my second time taking part in the event.
The last time was in 2022, but I attended remotely due to the uncertainties surrounding international travel at the time, as COVID-19 was still a concern.
You can actually find the blog post I wrote back then: SQLBits 2022 – Levelling up my Performance Tuning skills.

This year, I’m here in person and fully intend to make the most of the experience.

For context, SQLBits 2025 is one of the largest Microsoft data platform events in Europe. Taking place in London from June 18 to 21, it features dozens of sessions covering SQL Server, Azure, Power BI, AI, and data engineering. The event brings together experts, MVPs, and Microsoft product team members for intense learning and networking. Known for its high-quality content and friendly atmosphere, SQLBits is a must-attend event for anyone working in the Microsoft data ecosystem.

As you can see on my badge, I’ve chosen to attend a wide range of sessions during the first two days, and on Friday I’ll be taking part in a full-day training on “Real-World Data Engineering: Practical Skills for Microsoft Fabric” led by Prathy Kamasani.

Looking at the agenda, you’ll see that the trending topics stand out. Personally, I’m very interested in sessions on SQL Server 2025, which has just been released in public beta, and in sessions on the SQL Server engine in general, but of course these aren’t the majority of the sessions.
The trends are now:

  • AI and AI Agents, with multiple sessions exploring how AI is being integrated into the Microsoft data ecosystem including intelligent assistants, automation, and real-world use cases.
  • Microsoft Fabric is front and center, everywhere.
  • Analytics and Power BI continue to be key pillars, with advanced sessions on DAX, large-scale reporting, data modeling, and real-time analytics.
  • Data Transformation and DataOps are well represented, especially through practical sessions on orchestration, data pipelines, automation, and governance.

Today I attended the following sessions:

  • How Microsoft’s new, open-source Data API builder fits into Solution Architecture
  • Unlocking the Power of Open Mirroring in Microsoft Fabric
  • AI Agents in Action: Enhancing Applications with Azure AI Services
  • Azure SQL Managed Instance Demo Party
  • Empower Your Data with Real-Time Intelligence leveraging Data Activator within Fabric
  • From B-Trees to V-Order. Told differently than usual
  • Resolving Deadlocks in SQL Server: Practical Demo

It’s a huge amount of information, subjects to explore and ideas for solving real-life problems for our customers.

That was only the first day, but I’m already extremely satisfied with the conference: from the quality of the sessions and the content to the expertise of the speakers. I’m looking forward to making the most of the upcoming sessions, and I highly recommend that any Microsoft Data professional consider attending SQLBits. If not this year, then why not next year?

L’article Experiencing SQLBits 2025 in London est apparu en premier sur dbi Blog.

Parallel execution of Ansible roles

Tue, 2025-06-10 13:56
Introduction

You can run a playbook for specific host(s), a group of hosts, or “all” (all hosts of the inventory).

Ansible will then run the tasks in parallel on the specified hosts. To avoid overload, the parallelism – called “forks” – is limited to 5 by default.

A task with a loop (e.g. with_items:) is executed serially by default. To run it in parallel, you can use the “async” mode.

But unfortunately, this async mode will not work to include roles or other playbooks in the loop. In this blog post we will see a workaround to run roles in parallel (on the same host).

Parallelization over the ansible hosts

In this example, we have 3 hosts (dbhost1, dbhost2, dbhost3) in the dbservers group
(use ansible-inventory --graph to see all your groups) and we run the following sleep1.yml playbook

- name: PLAY1
  hosts: [dbservers]
  gather_facts: no
  tasks:
    - ansible.builtin.wait_for: timeout=10

The tasks of the playbook will run in parallel on all hosts of the dbservers group, but not on more hosts at the same time than specified with the “forks” parameter (set in ansible.cfg, via the environment variable ANSIBLE_FORKS, or with the command-line parameter --forks).
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html
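
For completeness, here is a minimal sketch of the three ways to raise the fork limit (the value 20 is just an example):

# ansible.cfg
[defaults]
forks = 20

# environment variable
export ANSIBLE_FORKS=20

# command line
ansible-playbook sleep1.yml --forks 20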

$ time ansible-playbook sleep1.yml --forks 2
...
ok: [dbhost1]  #appears after 10sec
ok: [dbhost2]  #appears after 10sec
ok: [dbhost3]  #appears after 20sec
...
real    0m22.384s

With forks=2 the results of dbhost1 and dbhost2 will both be returned after 10 seconds (sleep 10 in parallel). dbhost3 has to wait until one of the running tasks is completed. So the playbook will complete after approx. 20 seconds. If forks is 1, then it takes 30s, if forks is 3, it takes 10s (plus overhead).

Parallelization of loops

By default, a loop is not run in parallel:

- name: PLAY2A
  hosts: localhost
  tasks:
    - set_fact:
        sleepsec: [ 1, 2, 3, 4, 5, 6, 7 ]

    - name: nonparallel loop
      ansible.builtin.wait_for: "timeout={{item}} "
      with_items: "{{sleepsec}}"
      register: loop_result

This sequential run will take at least 28 seconds.

To run the same loop in parallel, use “async”
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_async.html

- name: PLAY2B
  hosts: localhost
  gather_facts: no
  tasks:
    - name: parallel loop
      ansible.builtin.wait_for: "timeout={{item}}"
      with_items: "{{sleepsec}}"
      register: loop_result
      async: 600  # Maximum runtime in seconds. Adjust as needed.
      poll: 0     # Fire and continue (never poll here)

    # in the meantime, you can run other tasks

    - name: Wait for parallel loop to finish (poll)
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: loop_check
      until: loop_check.finished
      delay: 1      # Check every second
      retries: 600  # Retry up to 600 times.
                    # delay*retries should match the "async:" value above
      with_items: "{{ loop_result.results }}"

In the first task we start all sleeps in parallel. Each background job times out after 600 seconds. We do not wait for the results (poll: 0). A later task polls the background jobs until all parallel loop items are finished. This execution takes only a little more than 7 seconds (the longest sleep plus some overhead). Between the loop and the poll you can add other tasks to use the waiting time for something more productive. Or, if you know your loop takes at least one minute, you can add an ansible.builtin.wait_for: "timeout=60" before the polling task to reduce the overhead of the polling loop.
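
A minimal sketch of that bridging idea, assuming (as in the text above) that the loop needs at least 60 seconds anyway; the task goes between the async launch and the polling task:

    # assumption: the parallel loop needs at least 60 seconds in any case
    - name: bridge the known minimum runtime before polling starts
      ansible.builtin.wait_for: timeout=60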

For example, we have an existing role to create and configure a new user account with many, sometimes long-running steps, e.g. add to LDAP, create an NFS share, create a certificate, send a welcome mail, …. Most of these tasks are not bound to a specific host and run on "localhost", calling a REST API.

The following code example is a dummy role for copy/paste to see how it works with parallel execution.

# roles/create_user/tasks/main.yml    
    - debug: var=user
    - ansible.builtin.wait_for: timeout=10

Now we have to create many user accounts and would like to do that in parallel. We use the code above and adapt it:

- name: PLAY3A
  hosts: localhost
  gather_facts: no
  tasks:
    - set_fact:
        users: [ 'Dave', 'Eva', 'Hans' ]

    - name: parallel user creation
      ansible.builtin.include_role: name=create_user
      with_items: "{{users}}"
      loop_control:
        loop_var: user
      register: loop_result
      async: 600
      poll: 0

But unfortunately, Ansible will not accept include_role:
ERROR! 'poll' is not a valid attribute for a IncludeRole

The only remaining option seems to be to rewrite the role and run every task in async mode.
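
As an illustration only, here is a minimal sketch of what such a rewrite could look like; the task names and the scripts they call are made up, they are not part of the real role:

# roles/create_user/tasks/main.yml (hypothetical async rewrite)
    - name: add the user to LDAP in the background
      ansible.builtin.command: /usr/local/bin/add_ldap_user {{ user }}   # hypothetical script
      async: 600
      poll: 0
      register: ldap_job

    - name: create the NFS share in the background
      ansible.builtin.command: /usr/local/bin/create_nfs_share {{ user }}   # hypothetical script
      async: 600
      poll: 0
      register: nfs_job

    - name: wait for all background steps of this role
      ansible.builtin.async_status:
        jid: "{{ item.ansible_job_id }}"
      register: step_check
      until: step_check.finished
      delay: 1
      retries: 600
      with_items:
        - "{{ ldap_job }}"
        - "{{ nfs_job }}"

Every task needs its own async/poll wiring plus a final collection step, which quickly becomes unwieldy for an existing role.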

But is there no better solution to re-use existing roles? Let’s see…

Parallel execution of roles in a loop

As we already know

  • Ansible can run playbooks/tasks in parallel over different hosts (hosts parameter of the play).
  • Ansible can run tasks with a loop in parallel with the async option, but
  • Ansible can NOT run tasks with a loop in parallel for include_role or include_tasks

So, the trick is to run the roles on "different" hosts. There is a special behavior of localhost: the well-known localhost IP is 127.0.0.1, but 127.0.0.2 to 127.255.255.254 also refer to localhost (check it with 'ping'). For our create-user role, we will run it on "different" localhosts in parallel. For that, we create a host group at runtime with localhost addresses; the number of these localhost IPs is equal to the number of users to create.

users[0] is Dave. It will be created on 127.0.0.1
users[1] is Eva. It will be created on 127.0.0.2
users[2] is Hans. It will be created on 127.0.0.3

- name: create dynamic localhosts group
  hosts: localhost
  gather_facts: no
  vars:
    users: [ 'Dave', 'Eva', 'Hans' ]
  tasks:
    # Create a group of localhost IP's; 
    # Ansible will treat it as "different" hosts.
    # To know which localhost-IP should create which user:
    # the last 2 numbers of the IP match the index in the {{users}} list:
    # 127.0.1.12 -> (1*256 + 12)-1 = 267 -> users[267]
    # -1: the first array element is 0, but the localhost IPs start at 127.0.0.1
    - name: create parallel execution localhosts group
      add_host:
        name: "127.0.{{item|int // 256}}.{{ item|int % 256 }}"
        group: localhosts
      with_sequence: start=1 end="{{users|length}}"

- name: create useraccounts
  hosts: [localhosts]  # [ 127.0.0.1, 127.0.0.2, ... ]
  connection: local
  gather_facts: no
  vars:
    users: [ 'Dave', 'Eva', 'Hans' ]
  # this play runs in parallel over the [localhosts] 
  tasks:
    - set_fact:
        ip_nr: "{{ inventory_hostname.split('.') }}"

    - name: parallel user creation
      ansible.builtin.include_role:
        name: create_user
      vars:
        user: "{{ users[ (ip_nr[2]|int*256 + ip_nr[3]|int-1) ] }}"

In this example, with forks=3 it runs in 11 seconds; with forks=1 (no parallelism) it takes 32 seconds.

The degree of parallelism (forks) depends on your use case and your infrastructure. If you have to restore files, the network bandwidth, the disk I/O or the number of tape slots is probably the limiting factor. Choose a value for forks that does not overload your infrastructure.

If some tasks or the whole role have to run on a host other than localhost (e.g. to create a local user account on a server), you can use delegate_to: "{{remote_host}}".
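
A minimal sketch of such a delegated task inside the role; the task itself and the remote_host variable are assumptions for illustration:

# roles/create_user/tasks/main.yml – hypothetical additional task
    - name: create a local user account on the target server
      ansible.builtin.user:
        name: "{{ user }}"
        state: present
      delegate_to: "{{ remote_host }}"   # assumption: remote_host is defined per user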

This principle is ideally suited for plays that are not bound to a specific host, usually tasks that run from localhost and call a REST API without logging in to a server via SSH.

Summary

Ansible is optimized to run playbooks on different hosts in parallel. The degree of parallelism can be limited by the “forks” parameter (default 5).

Ansible can run loops in parallel with the async mode. Unfortunately that does not work if we include a role or tasks.

The workaround to run roles in parallel on the same host is to assign every loop item to a different host and then run the role on these different hosts. For the different hosts we can use the localhost IPs between 127.0.0.1 and 127.255.255.254 to build a dynamic host group; the number of these addresses corresponds to the number of loop items.

The article Parallel execution of Ansible roles first appeared on dbi Blog.

M-Files Backup considerations

Tue, 2025-06-10 11:24
Once upon a time in IT project land

What an achievement: your team has been working on a huge project for quite a while, and it is finally completed. All related documentation is properly registered and managed in your knowledge workers' enterprise content management system, M-Files.

Time to get some rest and move on to other tasks. Time passes, until darker moments show up and you have to go back to review and check your previous project documents to answer your colleagues' questions. Bad luck: it looks like several documents have disappeared. Even worse, you discover that others are corrupted. Leaving aside the root cause of all this, your first objective now is to retrieve your precious documents. And guess what, backups are your best friends.

M-Files deployment context

Before considering technical details, let's assume your M-Files vault(s) are deployed on-premises. Note that if you are working with M-Files Cloud vault(s), M-Files Corporation maintains and backs up your vaults for you. M-Files Cloud offers a default standard backup service as part of every M-Files subscription. Moreover, it is possible to extend your backup capabilities based on retention and recovery point settings.

That said, keep in mind that there are different possible setups:

  • M-Files Server + Microsoft SQL Database + object files distributed either on a single server or across multiple servers
  • M-Files Server + Firebird SQL Database + object files only on a single server

One must admit Firebird SQL Database is a nice choice when you start working with M-Files, in order to become confident with the tool quickly and easily. But keep in mind that as soon as you are managing millions of objects, and even more so in a production context, it is highly recommended to review your local server infrastructure and consider working with, or migrating to, a Microsoft SQL database.

M-Files server-specific data

Whatever M-Files configuration type is in place, you must always take care of your master database. This is where server-specific data resides, and it is always stored in an embedded Firebird SQL database. The information stored includes:

  • Information on transactions since the last backup
  • Login accounts
    • M-Files login passwords
    • Personal information
    • License types
    • Server-level roles
  • Scheduled jobs
  • Notification settings
  • Licenses
  • M-Files Web settings

As you can imagine, this information is important and must be handled with as much care as your document vault backups. Hence, I would recommend taking regular master database backups according to your needs and system criticality.

M-Files vault-specific data

Depending on the database software used to store this information, either an embedded Firebird or a Microsoft SQL Server database, M-Files gives you the possibility to take full and differential backups. A differential backup contains the changes since the last full backup, which includes all object files and vault metadata such as:

  • Objects with their version history, files, and metadata
  • Value lists, property definitions and classes
  • Metadata structure
  • Users, user groups and named access control lists
  • User permissions and vault roles
  • Workflows
  • Connections to other systems
  • Event log
  • Vault applications and scripts

Note that it is possible to store object files inside the Microsoft SQL Server database, whereas this is not an option with Firebird (files are always stored in the file system). Nevertheless, this may lead to performance issues in large environments and should be considered with care, if not avoided.

When using the embedded Firebird database, simply schedule the backup jobs of all vaults in M-Files Admin.

With the Microsoft SQL Server database engine, you must back up the Microsoft SQL database and the files in the file system separately. It is important to always back up the Microsoft SQL database first and then the file system data, to avoid references to non-existing object files.

Other data to consider

Some secondary data is not saved by the backup procedures above. Instead, it is re-created after a restore operation. Think about index files, PDF renditions and thumbnails stored on the M-Files server hard drive, and pay attention to the rebuilding duration. In a large vault, rebuilding the search indexes can take a lot of time. Fortunately, it is feasible to back up and restore them as well. Depending on the search engine used, the procedure can differ slightly, but it remains feasible.

Last, do not forget any system modifications such as Windows registry settings, notification message templates or any files in the M-Files Server installation folder, since these will not be included in master or vault backups.

Advice

Things are always easier said than done, but it might get even worse if you do not pay attention to a few M-Files backup best practices. Hardware failures, accidental deletions, cyberattacks, logical and/or human errors happen. To be prepared to face such issues:

  • Plan, do, check and act on your backup plan, considering your systems' criticality and your business and end-user SLAs and constraints.
  • Do not hesitate to lean on your IT administrator support teams and M-Files architect to validate the choices made and review potential architecture weaknesses.
  • Avoid taking file system snapshots of the data on active M-Files servers (inconsistencies and damaged files may occur).
  • Apply the 3-2-1 backup strategy (store 3 copies of your data, on at least 2 types of storage media, with 1 in a secure off-site location).

On top of this, OR/DR test exercises will reveal, and help prevent, most of your backup plan gaps. This way you will be prepared to solve backup issues, validating their integrity in the process.

If you need further assistance, do not hesitate to contact dbi services for any M-Files support and guidance around this topic and others.

Enjoy your knowledge work automation tasks with M-Files.

The article M-Files Backup considerations first appeared on dbi Blog.

Azure Bootcamp Switzerland 2025

Tue, 2025-06-10 02:48

Last week I attended the Azure Bootcamp Switzerland 2025 with my colleague Adrien from the Cloud team. It was the first time for us, but it was already the 6th edition of the event. This is a non-profit community event on an obvious topic: Azure.

It's quite an intensive day: 2 keynotes and 15 presentations split into 3 different streams. We got our tickets some time ago, and I was quite surprised by the number of people interested in Azure topics. Indeed, just before the keynote, they announced the event was sold out. I already knew the location, since it's the same venue as the Data Community Conference I participated in last year.

Since there were two of us at the event, we were able to split up across the different streams. I followed sessions on very different topics: building an AI application for the Swiss government, building a data platform, FinOps, platform engineering with Terraform, and DNS for hybrid environments.

All presentations were interesting, but I'll focus on two of them: cost observability with the new FinOps standard FOCUS, and DNS in hybrid environments, because I think these two topics are really important in current deployments. The first one is quite obvious: you should not go to the public cloud without monitoring your costs. As for the second one, in my opinion many companies are deploying to the cloud but still have on-premises workloads, and it's important to know how to live in this hybrid world.

Level up your Cost Observability with the new FinOps standard FOCUS and Microsoft Fabric

We had already discussed this FinOps topic within the Cloud team at dbi services, but we may have overlooked the FOCUS specification while trying to create our own mapping of the different cloud cost usage exports. The public cloud providers do not strictly follow the specification, and there are some tips in the slides. Using Fabric is then a good example of how the cost data can be processed.

DNS in hybrid cloud environments – can it be that hard?

On the DNS side, the main takeaway of this presentation is to plan your DNS needs correctly. In most cases, you will not use fixed IPs when you work with the public cloud. Usually, IPs are assigned automatically when network interfaces are created. DNS is then a critical component to ensure good communication between all your instances and services. DNS should be part of your network design from the beginning, to ease its management and avoid DNS private zones scattered in different places.

The article Azure Bootcamp Switzerland 2025 first appeared on dbi Blog.

M-Files Online June 2025 (25.6) Sneak Preview

Sun, 2025-06-08 04:07

The June 2025 release of the M-Files Client is just around the corner, and as an M-Files partner, we have the opportunity to access the Early Access Release. As it rained heavily over the Whitsun holiday, I decided to spend some time installing the new version and exploring its features and improvements.
A quick look at the release notes revealed that this update would introduce the much-anticipated graphical workflow feature. This allows end users to easily observe the workflow status. But that’s not all, as you can read below.

New Features and Enhancement highlights

Visual Workflows

During the M-Files Global Conference 2025, the launch of visual workflows was announced to great enthusiasm. I actually heard about the implementation of this feature several months ago and have been waiting for it ever since. I am therefore very happy that it is now available, as it will provide real added value for M-Files end users. Graphical views are easier for people to understand when they want to quickly grasp the status of a task or, in this case, the workflow of a document.

The screenshot below shows the new graphical view. As you can see, the current workflow status and the next step are displayed at the bottom. Furthermore, if you are the person with the next action in the workflow, there are buttons above the workflow status to easily perform the action.
For me, the action buttons in the new client are key, as the new client does not have the left-hand actions bar that the old client has. In my opinion, this implementation enhances the end-user experience and encourages end-users to migrate to the new, modern client.

Global Search

With global search enabled in the M-Files client, you can search all the vaults you have access to. Furthermore, the home screen now displays all the vaults that you have access to. You can access them with just one click.
You can also perform global searches directly using the search box on the home screen. This enhancement is especially useful for users who have access to multiple M-Files vaults, like me. It makes my work faster and smoother and is a great feature.

Co-authoring improvements

Create new Office documents and co-author them in Microsoft 365 Office desktop applications. This new release makes co-authoring the default behaviour when opening a document. See the latest live document changes in M-Files. Any user with the necessary access rights can co-author a document and save new versions of the document to M-Files.

M-Files URL improvements

The implementation of this feature is a clear statement of the willingness to integrate the known functions from the M-Files Classic Client into the new, modern M-Files Client. I had a look at the client development roadmap, and it confirms M-Files' commitment to enhancing the new and much faster client and making it available to end users. The next logical step was to enable and improve URL handling. Please find below a brief summary of the URL improvements.

  • Open view and edit specified document in Web via links
  • Support for search links
  • Property-based search in Web and Desktop via links
  • External ID support for objects in grouping levels

Conclusion

I am pleased to inform you that a new client will be available in June for all M-Files end-users. In my professional opinion, the most significant enhancement is the implementation of the graphical workflow, and of course the deeper integration within the Microsoft ecosystem, with the enablement of co-authoring with M-Files.

I hope you are as enthusiastic as I am about this new M-Files Client. Should you wish to see it in action, please do not hesitate to contact us. We will be happy to arrange a demonstration to illustrate the potential of M-Files to enhance your business.

The article M-Files Online June 2025 (25.6) Sneak Preview first appeared on dbi Blog.
