Yann Neuhaus
Migrate your database to 26ai on ODA
Oracle 26ai is not yet available on Bare Metal ODA, but it will probably be in a few weeks. For now, 26ai is available as a DB System on Oracle Database Appliance, meaning that it’s already possible to test this new version. Let’s try to migrate a 19c Bare Metal PDB to a 26ai DB System using 3 different methods.
Prerequisites
26ai is only available starting with release 19.29, meaning on ODA X8-2 and newer models. You will need to patch your ODA to 19.29 first if you want to test 26ai. 26ai is only compatible with the multitenant architecture, so another prerequisite is to migrate your non-container databases to PDBs. If you’re still using the non-CDB architecture, only the first of these 3 methods will work for you.
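If you are not sure which release your ODA is currently running, you can check it before starting with a standard odacli command (run as root on the Bare Metal host):
odacli describe-component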
The migration method recommended by Oracle is AutoUpgrade: please refer to this blog post if you want more details.
On my side, I will explore these 3 other possibilities:
- Datapump Export/Import
- PDB copy through a DB link
- PDB unplug/plug
These files are needed for creating a 26ai DB System:
36524660 => System image for 26ai DB Systems
36524627 => the GI clone needed for deploying the 26ai GI version
36524642 => the DB clone for deploying the 26ai DB version
Be sure to choose the very latest 19.29 patch when downloading these files, as the download link from MOS will first propose older versions.
My 19c source database
My source database is a 19c container database with one PDB running on an ODA X8-2M. I want to migrate this PDB to 26ai:
su - oracle
. oraenv <<< CPROD19
ORACLE_SID = [CPROD19] ? The Oracle base remains unchanged with value /u01/app/odaorabase/oracle
sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jan 8 09:53:43 2026
Version 19.29.0.0.0
Copyright (c) 1982, 2025, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.29.0.0.0
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRODDB01 READ WRITE NO
exit
Provision a new 26ai DB System
It’s not yet possible to deploy a 26ai container database on Bare Metal, so let’s create a DB System for testing this migration. First unzip and register the VM template, GI 26ai and DB 26ai:
cd /opt/dbi
unzip -o p36524660_1929000_Linux-x86-64.zip
unzip -o p36524627_1929000_Linux-x86-64.zip
unzip -o p36524642_1929000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-ODAVM-19.29.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-GI-23.26.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-DB-23.26.0.0.zip
sleep 30 ; odacli list-jobs | tail -n 4
d4700315-db8c-4522-af55-0fddd262bfe4 Repository Update 2025-12-08 15:20:14 CET Success
ba7d452f-e03d-46d0-a607-fd7c758cd1b1 Repository Update 2025-12-08 15:20:59 CET Success
bfc102d1-985b-4792-8054-03709aa8d949 Repository Update 2025-12-08 15:21:20 CET Success
odacli describe-dbsystem-image | grep 23.26
DBVM 23.26.0.0.0 23.26.0.0.0
GI 23.26.0.0.0 23.26.0.0.0
DB 23.26.0.0.0 23.26.0.0.0
A CPU pool is needed for DB Systems, so create one if you don’t have one yet. A 2-core CPU pool is enough:
odacli create-cpupool -c 2 -n cpupool2c -dbs
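You can then verify that the CPU pool exists with another standard odacli command:
odacli list-cpupools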
Now let’s create a json file, then create a new DB System based on this file:
cat /opt/dbi/create_dbs-04-tst-cdb.json
{
"system": {
"name": "dbs-04-tst",
"shape": "dbs2",
"systemPassword": "***********",
"timeZone": "Europe/Zurich",
"diskGroup": "DATA",
"cpuPoolName": "cpupool2c",
"enableRoleSeparation": true,
"customRoleSeparation": {
"groups": [
{
"name": "oinstall",
"id": 1001,
"role": "oinstall"
},
{
"name": "dbaoper",
"id": 1002,
"role": "dbaoper"
},
{
"name": "dba",
"id": 1003,
"role": "dba"
},
{
"name": "asmadmin",
"id": 1004,
"role": "asmadmin"
},
{
"name": "asmoper",
"id": 1005,
"role": "asmoper"
},
{
"name": "asmdba",
"id": 1006,
"role": "asmdba"
}
],
"users": [
{
"name": "grid",
"id": 1001,
"role": "gridUser"
},
{
"name": "oracle",
"id": 1000,
"role": "oracleUser"
}
]
}
},
"database": {
"name": "CTEST26",
"uniqueName": "CTEST26_S1",
"domainName": "dbi-lab.ch",
"adminPassword": "**********",
"version": "23.26.0.0.0",
"edition": "EE",
"type": "SI",
"dbClass": "OLTP",
"shape": "odb2",
"role": "PRIMARY",
"enableDbConsole": false,
"enableFlashStorage": false,
"redundancy": null,
"characterSet": {
"characterSet": "AL32UTF8",
"nlsCharacterset": "AL16UTF16",
"dbTerritory": "AMERICA",
"dbLanguage": "AMERICAN"
},
"rmanBackupPassword": null,
"enableTDE": false,
"isCdb": true,
"pdbName": "pdb1",
"pdbAdminUser": "pdbadmin",
"tdePassword": null
},
"network": {
"domainName": "dbi-lab.ch",
"ntpServers": [
"85.195.224.28"
],
"dnsServers": [
"10.16.0.250"
],
"nodes": [
{
"name": "dbs-04-tst",
"ipAddress": "10.16.0.247",
"netmask": "255.255.255.0",
"gateway": "10.16.0.1",
"number": 0
}
],
"publicVNetwork": "pubnet"
},
"grid": {
"language": "en"
}
}
odacli create-dbsystem -p /opt/dbi/create_dbs-04-tst-cdb.json
Enter password for system "dbs-04-tst":
Retype password for system "dbs-04-tst":
Enter administrator password for DB "CTEST26":
Retype administrator password for DB "CTEST26":
...
35 minutes later, my new DB System is ready to use:
odacli describe-job -i c79ecbbf-dd0c-4b75-ba10-249c78b25f33
Job details
----------------------------------------------------------------
ID: c79ecbbf-dd0c-4b75-ba10-249c78b25f33
Description: DB System dbs-04-tst creation
Status: Success
Created: January 08, 2026 10:23:39 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Create DB System metadata January 08, 2026 10:23:39 CET January 08, 2026 10:23:41 CET Success
Persist new DB System January 08, 2026 10:23:41 CET January 08, 2026 10:23:41 CET Success
Remove unused network interfaces January 08, 2026 10:23:41 CET January 08, 2026 10:23:41 CET Success
Validate DB System creation January 08, 2026 10:23:41 CET January 08, 2026 10:23:46 CET Success
Create the infra quota group January 08, 2026 10:23:46 CET January 08, 2026 10:23:50 CET Success
Setup DB System environment January 08, 2026 10:23:46 CET January 08, 2026 10:23:46 CET Success
Create DB System ASM volume January 08, 2026 10:23:50 CET January 08, 2026 10:24:01 CET Success
Create DB System ACFS filesystem January 08, 2026 10:24:01 CET January 08, 2026 10:24:13 CET Success
Create DB System ACFS mount point January 08, 2026 10:24:01 CET January 08, 2026 10:24:01 CET Success
Create DB System VM ACFS snapshots January 08, 2026 10:24:13 CET January 08, 2026 10:24:28 CET Success
Setup ASM client cluster config January 08, 2026 10:24:28 CET January 08, 2026 10:24:30 CET Success
Create temporary SSH key pair January 08, 2026 10:24:30 CET January 08, 2026 10:24:31 CET Success
Create BM certificates in ACFS location January 08, 2026 10:24:31 CET January 08, 2026 10:24:32 CET Success
Create DB System cloud-init config January 08, 2026 10:24:32 CET January 08, 2026 10:24:32 CET Success
Provision DB System VM(s) January 08, 2026 10:24:32 CET January 08, 2026 10:24:35 CET Success
Create u01 vDisk for DB System January 08, 2026 10:24:35 CET January 08, 2026 10:24:36 CET Success
Add DB System to Clusterware January 08, 2026 10:24:36 CET January 08, 2026 10:24:37 CET Success
Attach disks to DB System January 08, 2026 10:24:36 CET January 08, 2026 10:24:36 CET Success
Attach u01 vDisk to DB System January 08, 2026 10:24:36 CET January 08, 2026 10:24:36 CET Success
Add extra PCI slots January 08, 2026 10:24:37 CET January 08, 2026 10:24:37 CET Success
Create Network Filters January 08, 2026 10:24:37 CET January 08, 2026 10:24:37 CET Success
Export clones repository January 08, 2026 10:24:37 CET January 08, 2026 10:24:37 CET Success
Start DB System January 08, 2026 10:24:37 CET January 08, 2026 10:24:40 CET Success
Wait DB System VM first boot January 08, 2026 10:24:40 CET January 08, 2026 10:26:49 CET Success
Setup Mutual TLS (mTLS) January 08, 2026 10:26:49 CET January 08, 2026 10:26:51 CET Success
Cleanup temporary SSH key pair January 08, 2026 10:26:51 CET January 08, 2026 10:26:52 CET Success
Copy ASM client cluster config January 08, 2026 10:26:51 CET January 08, 2026 10:26:51 CET Success
Delete Mutual TLS (mTLS) Artifacts January 08, 2026 10:26:51 CET January 08, 2026 10:26:51 CET Success
Wait DB System VM DCS Agent readiness January 08, 2026 10:26:52 CET January 08, 2026 10:27:12 CET Success
Add Network Filters January 08, 2026 10:27:12 CET January 08, 2026 10:27:14 CET Success
Install DB System January 08, 2026 10:27:14 CET January 08, 2026 10:53:57 CET Success
Wait DB System VM DCS Agent readiness January 08, 2026 10:53:57 CET January 08, 2026 10:55:58 CET Success
Set DB System as configured January 08, 2026 10:55:58 CET January 08, 2026 10:55:58 CET Success
odacli list-dbsystems
Name Shape GI version DB info Status Created Updated
-------------------- ---------- ------------------ ------------------------------ ---------------------- ------------------------ ------------------------
dbs-04-tst dbs2 23.26.0.0.0 23.26(CONFIGURED=1) CONFIGURED 2026-01-08 10:23:39 CET 2026-01-08 10:55:58 CET
Migration using datapump
Let’s consider a classic migration using datapump. This tool has been the official migration tool for nearly 2 decades now.
Let’s first create a new PDB inside the container database of the new DB System:
ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
sqlplus / as sysdba
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
alter pluggable database PDB1 close immediate;
Pluggable database altered.
drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
create pluggable database MIG26DTPUMP admin user pdbadmin identified by "*********";
Pluggable database created.
alter pluggable database MIG26DTPUMP open;
Pluggable database altered.
alter pluggable database MIG26DTPUMP save state;
Pluggable database altered.
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 MIG26DTPUMP READ WRITE NO
alter session set container=MIG26DTPUMP;
create tablespace users datafile size 100M autoextend on maxsize 10G;
Tablespace created.
exit
On the source database, whether it’s a non-CDB or a PDB, export the data using datapump:
su - oracle
. oraenv <<< CPROD19
export ORACLE_PDB_SID=PRODDB01
sqlplus / as sysdba
create or replace directory mig as '/backup/dumps/';
col current_scn for 999999999999
select current_scn from v$database;
CURRENT_SCN
---------------
6292616
exit
expdp system directory=MIG full=y dumpfile=PRODDB01_20260112.dmp logfile=exp_PRODDB01_20260112.log flashback_scn=6292616
Export: Release 19.0.0.0.0 - Production on Mon Jan 12 10:36:52 2026
Version 19.29.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** directory=MIG full=y dumpfile=PRODDB01_20260112.dmp logfile=exp_PRODDB01_20260112.log flashback_scn=6292616
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/STATISTICS/MARKER
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/TABLESPACE
...
. . exported "WMSYS"."WM$METADATA_MAP" 11.16 KB 0 rows
. . exported "MDSYS"."RDF_PARAM$" 6.515 KB 3 rows
. . exported "SYS"."AUDTAB$TBS$FOR_EXPORT" 5.960 KB 2 rows
. . exported "SYS"."DBA_SENSITIVE_DATA" 8.070 KB 0 rows
. . exported "SYS"."DBA_TSDP_POLICY_PROTECTION" 7.656 KB 0 rows
. . exported "SYS"."NACL$_ACE_EXP" 9.812 KB 0 rows
. . exported "SYS"."NACL$_HOST_EXP" 6.921 KB 1 rows
. . exported "SYS"."NACL$_WALLET_EXP" 5.929 KB 0 rows
. . exported "DBI"."CONSULTANTS" 5.937 KB 1 rows
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
/backup/dumps/PRODDB01_20260112.dmp
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Mon Jan 12 10:38:52 2026 elapsed 0 00:01:54
ls -lrth /backup/dumps/*20260112*
-rw-r-----. 1 oracle asmadmin 34M Jan 12 10:38 /backup/dumps/PRODDB01_20260112.dmp
-rw-r--r--. 1 oracle asmadmin 9.8K Jan 12 10:38 /backup/dumps/exp_PRODDB01_20260112.log
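The import below reads the dump from /backup/dumps on the DB System, so the file must be available there as well. If this path is not shared between the Bare Metal host and the DB System in your environment (an assumption on my part), simply copy the dump over first, for example:
scp /backup/dumps/PRODDB01_20260112.dmp root@dbs-04-tst:/backup/dumps/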
On the target database, let’s do the import:
ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
export ORACLE_PDB_SID=MIG26DTPUMP
sqlplus / as sysdba
create or replace directory mig as '/backup/dumps/';
exit
vi /backup/dumps/imp_PRODDB01.par
directory=MIG
schemas=dbi
dumpfile=PRODDB01_20260112.dmp
logfile=imp_PRODDB01_to_MIG26DTPUMP_20260112.log
exclude=statistics
impdp system parfile=/backup/dumps/imp_PRODDB01.par
Import: Release 23.26.0.0.0 - for Oracle Cloud and Engineered Systems on Mon Jan 12 10:51:39 2026
Version 23.26.0.0.0
Copyright (c) 1982, 2025, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle AI Database 26ai Enterprise Edition Release 23.26.0.0.0 - for Oracle Cloud and Engineered Systems
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** parfile=/backup/dumps/imp_PRODDB01.par
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA/LOGREP
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "DBI"."CONSULTANTS" 5.9 KB 1 rows
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at Mon Jan 12 10:51:54 2026 elapsed 0 00:00:08
Let’s now compute statistics for my schema:
sqlplus / as sysdba
exec dbms_stats.gather_schema_stats(ownname => 'DBI');
PL/SQL procedure successfully completed.
exit;
Let’s do a quick check on data:
sqlplus / as sysdba
select start_date from dbi.consultants where name like 'DUB%';
START_DAT
---------
04-SEP-17
exit
That’s fine.
Pros & cons
- + Same procedure as previous migrations
- + Reorganize data and indexes
- + Keep the source database untouched
- – Rather slow when the database is big
Migration using a PDB copy through a DB link
If the source database is a PDB, it can be copied through a DB link. This is a very convenient way of migrating a database.
First, and it may seem odd, the PDB administrator of the source DB must have the CREATE PLUGGABLE DATABASE privilege:
su - oracle
. oraenv <<< CPROD19
sqlplus / as sysdba
alter session set container=PRODDB01;
grant create pluggable database to pdbadmin;
exit
On the DB System, let’s create a new TNS entry for the 19c PDB:
ssh dbs-04-tst
su - oracle
. oraenv <<< CTEST26
vi $ORACLE_HOME/network/admin/tnsnames.ora
...
PRODDB01 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dbioda01.dbi-lab.ch)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = proddb01.dbi-lab.ch)
)
)
On the target CDB, create a new database link to the source database:
CREATE DATABASE LINK PRODDB01 CONNECT TO PDBADMIN IDENTIFIED BY "*********" USING 'PRODDB01';
select count(*) from dual@PRODDB01;
COUNT(*)
----------
1
Now let’s copy the source PDB under a new name on the 26ai container:
create pluggable database MIG26DBLINK from PRODDB01@PRODDB01;
Pluggable database created.
alter pluggable database MIG26DBLINK open;
Opening this PDB will take some minutes because it automatically triggers the upgrade from 19c to 26ai:
tail -n 15 -f /u01/app/oracle/diag/rdbms/ctest26_s1/CTEST26/trace/alert_CTEST26.log
...
MIG26DBLINK(3):Starting Upgrade on PDB Open
2026-01-12T10:57:49.606651+01:00
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG begin install '19.0.0.0.0' on error capture
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG begin install '19.0.0.0.0' on error capture
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG end install '19.0.0.0.0'
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG end install '19.0.0.0.0'
MIG26DBLINK(3):alter pluggable database application APP$CDB$CATALOG begin upgrade
'19.0.0.0.0' to '23.0.0.0.0.partial' on error capture
MIG26DBLINK(3):Completed: alter pluggable database application APP$CDB$CATALOG begin upgrade
'19.0.0.0.0' to '23.0.0.0.0.partial' on error capture
...
MIG26DBLINK(3):SERVER COMPONENT id=CATJAVA: status=UPGRADED, version=23.26.0.0.0, timestamp=2026-01-12 11:09:54 Container=MIG26DBLINK Id=3
2026-01-12T11:09:58.200636+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 450560K, new size 455680K
2026-01-12T11:09:59.496448+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 455680K, new size 460800K
2026-01-12T11:10:26.701609+01:00
MIG26DBLINK(3):oracle :
MIG26DBLINK(3):SERVER COMPONENT id=XDB: status=UPGRADED, version=23.26.0.0.0, timestamp=2026-01-12 11:10:26 Container=MIG26DBLINK Id=3
2026-01-12T11:10:36.378304+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 460800K, new size 465920K
2026-01-12T11:10:45.077998+01:00
MIG26DBLINK(3):Resize operation completed for file# 19, fname +DATA/CTEST26_S1/482EE16D86417106E063F700240A6555/DATAFILE/undotbs1.294.1222340161, old size 465920K, new size 471040K
...
MIG26DBLINK(3):***************************************************************
MIG26DBLINK(3):WARNING: Pluggable Database MIG26DBLINK with pdb id - 3 is
MIG26DBLINK(3): altered with errors or warnings. Please look into
MIG26DBLINK(3): PDB_PLUG_IN_VIOLATIONS view for more details.
MIG26DBLINK(3):***************************************************************
MIG26DBLINK(3):--ATTENTION--
MIG26DBLINK(3):Errors reported while opening PDB (ContainerId: 3) and have been recorded in pdb_alert$ table.
MIG26DBLINK(3) Error Violation: SQL Patch, Cause: '23.26.0.0.0 Release_Update 2509270520' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
2026-01-12T11:13:31.632629+01:00
MIG26DBLINK(3):Opening pdb with no Resource Manager plan active
Violations: Type: 1, Count: 1
Violations: Type: 2, Count: 2
Completed: Pluggable database MIG26DBLINK opened read write
Completed: alter pluggable database MIG26DBLINK open
After a couple of minutes, the PDB is opened with errors:
Warning: PDB altered with errors.
exit
Let’s apply datapatch on this very PDB:
$ORACLE_HOME/OPatch/datapatch -pdbs MIG26DBLINK
SQL Patching tool version 23.26.0.0.0 Lim on Mon Jan 12 11:14:55 2026
Copyright (c) 2012, 2025, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/product/23.0.0.0/dbhome_1/cfgtoollogs/sqlpatch/sqlpatch_sid_CTEST26_ts_2026_01_12_11_14_55_pid_5074/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
No interim patches found
Current state of release update SQL patches:
Binary registry:
23.26.0.0.0 Release_Update 250927052030: Installed
PDB MIG26DBLINK:
Applied 23.26.0.0.0 Release_Update 250927052030 successfully
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: MIG26DBLINK
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
Bypass install queue:
For the following PDBs: MIG26DBLINK
No interim rollbacks will bypass install
Patch 38404116 (Database Release Update : 23.26.0.0.0 (38404116) Gold Image): will bypass install
Apply from 23.26.0.0.0 Release_Update 250927052030 to 23.26.0.0.0 Release_Update 250927052030
No interim applys will bypass install
Installation queue after removing bypass entries...
Installation queue:
For the following PDBs: MIG26DBLINK
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
Processing bypass install queue:
Patch 38404116 apply (pdb MIG26DBLINK): SUCCESS (bypass_install)
SQL Patching tool complete on Mon Jan 12 11:15:03 2026
Everything is fine; now let’s close and reopen this PDB in normal mode:
sqlplus / as sysdba
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE YES
4 MIG26DTPUMP READ WRITE NO
alter pluggable database MIG26DBLINK close immediate;
alter pluggable database MIG26DBLINK open;
alter pluggable database MIG26DBLINK save state;
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE NO
4 MIG26DTPUMP READ WRITE NO
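Optionally, while still connected to the root container, you can double-check that no violations remain for this PDB by querying PDB_PLUG_IN_VIOLATIONS (a quick check; only unresolved entries are of interest):
select name, type, status, cause, message
  from pdb_plug_in_violations
 where name = 'MIG26DBLINK'
   and status <> 'RESOLVED';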
Let’s do a quick data check:
sqlplus / as sysdba
alter session set container=MIG26DBLINK;
select start_date from dbi.consultants where name like 'DUB%';
START_DAT
---------
04-SEP-17
exit
Database is ready to use.
Pros & cons
- + Maybe the easiest way of migrating a PDB
- + Keep the source database untouched
- + Rather fast
- + No downtime of the source database
- – Need to apply datapatch
Migration using PDB unplug/plug
If the source database is a PDB, it can be unplugged and plugged back into another container. This is how the multitenant architecture was presented when it was first introduced with 12cR1.
Let’s unplug the source PDB:
su - oracle
. oraenv <<< CPROD19
sqlplus / as sysdba
alter pluggable database PRODDB01 close immediate;
Pluggable database altered.
alter pluggable database PRODDB01 unplug into '/backup/dumps/PRODDB01.xml';
Pluggable database altered.
exit
Now let’s plug this PDB into the new 26ai container:
ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
sqlplus / as sysdba
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE NO
4 MIG26DTPUMP READ WRITE NO
create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY;
create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY
*
ERROR at line 1:
ORA-19505: failed to identify file
"+DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/system.302.122233608
1"
ORA-17503: ksfdopn:2 Failed to open file
+DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/system.302.1222336081
ORA-15260: permission denied on ASM disk group
Help: https://docs.oracle.com/error-help/db/ora-19505/
Files from the source database are not readable by this DB System. Let’s change the permissions on these files from the Bare Metal ASM instance:
exit
exit
exit
su - grid
asmcmd ls -lt +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/* --permission
Type Redund Striped Time Sys User Group Permission Name
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw------- USERS.297.1222336081
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw------- UNDOTBS1.306.1222336081
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw------- SYSTEM.302.1222336081
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw------- SYSAUX.305.1222336081
asmcmd ls -lt +DATA/CTEST26_S1/DATAFILE/* --permission
Type Redund Striped Time Sys User Group Permission Name
DATAFILE MIRROR COARSE JAN 12 10:00:00 Y oracle@b94b488f82b1cf5cbf58f54162aafac0 rw------- USERS.299.1222339507
DATAFILE MIRROR COARSE JAN 12 10:00:00 Y oracle@b94b488f82b1cf5cbf58f54162aafac0 rw------- UNDOTBS1.693.1222338993
DATAFILE MIRROR COARSE JAN 12 10:00:00 Y oracle@b94b488f82b1cf5cbf58f54162aafac0 rw------- SYSTEM.906.1222338951
DATAFILE MIRROR COARSE JAN 12 10:00:00 Y oracle@b94b488f82b1cf5cbf58f54162aafac0 rw------- SYSAUX.832.1222338987
asmcmd chmod a+r +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/*
asmcmd ls -lt +DATA/CPROD19_S1/450AA2873245B6F5E063F100240AC35A/DATAFILE/* --permission
Type Redund Striped Time Sys User Group Permission Name
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw-r--r-- USERS.297.1222342153
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw-r--r-- UNDOTBS1.301.1222341995
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw-r--r-- SYSTEM.305.1222341995
DATAFILE MIRROR COARSE JAN 12 11:00:00 Y oracle@e7d9d36f6b157fdabf89026bcf48d757 rw-r--r-- SYSAUX.302.1222341995
exit
ssh root@dbs-04-tst
su - oracle
. oraenv <<< CTEST26
sqlplus / as sysdba
create pluggable database MIG26REPLUG using '/backup/dumps/PRODDB01.xml' COPY;
Pluggable database created.
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE NO
4 MIG26DTPUMP READ WRITE NO
6 MIG26REPLUG MOUNTED
alter pluggable database MIG26REPLUG open;
This upgrade to 26ai will also take a couple of minutes. You will see the corresponding traces in the alert log of the 26ai container:
tail -f /u01/app/oracle/diag/rdbms/ctest26_s1/CTEST26/trace/alert_CTEST26.log
...
2026-01-12T11:34:21.075364+01:00
MIG26REPLUG(6):Starting Upgrade on PDB Open
...
MIG26REPLUG(6):***************************************************************
MIG26REPLUG(6):WARNING: Pluggable Database MIG26REPLUG with pdb id - 6 is
MIG26REPLUG(6): altered with errors or warnings. Please look into
MIG26REPLUG(6): PDB_PLUG_IN_VIOLATIONS view for more details.
MIG26REPLUG(6):***************************************************************
MIG26REPLUG(6):--ATTENTION--
MIG26REPLUG(6):Errors reported while opening PDB (ContainerId: 6) and have been recorded in pdb_alert$ table.
MIG26REPLUG(6) Error Violation: SQL Patch, Cause: '23.26.0.0.0 Release_Update 2509270520' is installed in the CDB but no release updates are installed in the PDB, Action: Call datapatch to install in the PDB or the CDB
2026-01-12T11:49:56.836062+01:00
MIG26REPLUG(6):Opening pdb with no Resource Manager plan active
Violations: Type: 1, Count: 1
Violations: Type: 2, Count: 1
Completed: Pluggable database MIG26REPLUG opened read write
Completed: alter pluggable database MIG26REPLUG open
Let’s have a look at my PDBs and apply the datapatch on MIG26REPLUG:
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE NO
4 MIG26DTPUMP READ WRITE NO
6 MIG26REPLUG READ WRITE YES
exit
$ORACLE_HOME/OPatch/datapatch -pdbs MIG26REPLUG
SQL Patching tool version 23.26.0.0.0 Lim on Mon Jan 12 11:51:42 2026
Copyright (c) 2012, 2025, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/product/23.0.0.0/dbhome_1/cfgtoollogs/sqlpatch/sqlpatch_sid_CTEST26_ts_2026_01_12_11_51_42_pid_24026/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
No interim patches found
Current state of release update SQL patches:
Binary registry:
23.26.0.0.0 Release_Update 250927052030: Installed
PDB MIG26REPLUG:
Applied 23.26.0.0.0 Release_Update 250927052030 successfully
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: MIG26REPLUG
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
Bypass install queue:
For the following PDBs: MIG26REPLUG
No interim rollbacks will bypass install
Patch 38404116 (Database Release Update : 23.26.0.0.0 (38404116) Gold Image): will bypass install
Apply from 23.26.0.0.0 Release_Update 250927052030 to 23.26.0.0.0 Release_Update 250927052030
No interim applys will bypass install
Installation queue after removing bypass entries...
Installation queue:
For the following PDBs: MIG26REPLUG
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
Processing bypass install queue:
Patch 38404116 apply (pdb MIG26REPLUG): SUCCESS (bypass_install)
SQL Patching tool complete on Mon Jan 12 11:51:50 2026
Now let’s restart the PDB in normal mode:
sqlplus / as sysdba
alter pluggable database MIG26REPLUG close immediate;
alter pluggable database MIG26REPLUG open;
alter pluggable database MIG26REPLUG save state;
show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 MIG26DBLINK READ WRITE NO
4 MIG26DTPUMP READ WRITE NO
6 MIG26REPLUG READ WRITE NO
This PDB is now running on 26ai.
Let’s check my consultants table:
alter session set container=MIG26REPLUG;
select start_date from dbi.consultants where name like 'DUB%';
START_DAT
---------
04-SEP-17
exit
Pros & cons
- + Easy migration
- + Preserve source files with the copy option
- + Rather fast
- – Requires downtime for the source PDB
- – Reusing the source database is only possible by plugging the PDB back in using the xml file (see the sketch below)
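Regarding that last point, plugging the PDB back into the source 19c CDB would look roughly like this (a sketch only: it assumes the original datafiles are still in place, which is the case here since we used the COPY option on the target):
-- on the source 19c CDB
drop pluggable database PRODDB01 keep datafiles;
create pluggable database PRODDB01 using '/backup/dumps/PRODDB01.xml' nocopy tempfile reuse;
alter pluggable database PRODDB01 open;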
There are multiple ways of migrating to 26ai, and quite a lot of new features are coming with this version. Whether you will use them or not, migrating to 26ai is something to start thinking about in 2026.
Upgrade Classic Extracts to Integrated Mode Before Migrating to GoldenGate 23ai
Classic Extracts have been deprecated since GoldenGate 19c and shouldn’t have been part of your GoldenGate replications for a long time now. Yet, some customers still have them, and with GoldenGate 19c support coming to an end in a few months, it is high time you upgrade to GoldenGate 23ai.
However, GoldenGate 23ai doesn’t support classic extracts, and the migration utility provided by Oracle will not take care of them. So, what to do with your classic extracts before migrating to GoldenGate 23ai?
Upgrade the extracts to integrated mode
The natural solution is to migrate the extracts from the classic mode to the integrated mode. It might not be possible, depending on your situation, but it is the easiest way to get your extracts ready for a GoldenGate 23ai migration.
To do so, we will use the info extract <extract> upgrade command, which tells us whether the extract is ready for an upgrade. Log in with the correct alias, stop the extract and register the extract before running the command.
GGSCI (vmogg) > dblogin useridalias ggadmin
Successfully logged into database.
GGSCI (vmogg as ggadmin@DB1) > stop extract ext
Sending STOP request to EXTRACT EXT ...
Request processed.
GGSCI (vmogg as ggadmin@DB1) > info extract ext
EXTRACT EXT Last Started 2025-12-21 10:13 Status STOPPED
Checkpoint Lag 00:00:00 (updated 00:00:10 ago)
Log Read Checkpoint Oracle Redo Logs
2025-12-21 10:19:13 Seqno 10, RBA 18472448
SCN 0.2170788 (2170788)
GGSCI (vmogg as ggadmin@DB1) > register extract ext database
2025-12-21 10:21:13 INFO OGG-02003 Extract EXT successfully registered with database at SCN 2170800.
GGSCI (vmogg as ggadmin@DB1) > info extract ext upgrade
You might need to run the command multiple times, depending on the output of the last command. Once the extract is ready to be upgraded, you will see the following output.
GGSCI (vmogg as ggadmin@DB1) > info extract ext upgrade
Extract EXT is ready to be upgraded to integrated capture.
After this, run the upgrade command and restart the extract. The extract is now in Oracle Integrated Redo Logs mode.
GGSCI (vmogg as ggadmin@DB1) > alter extract ext upgrade integrated tranlog
Extract EXT successfully upgraded to integrated capture.
GGSCI (vmogg as ggadmin@DB1) > info extract ext
EXTRACT EXT Initialized 2025-12-21 10:13 Status STOPPED
Checkpoint Lag 00:00:00 (updated 00:11:43 ago)
Log Read Checkpoint Oracle Integrated Redo Logs
2025-12-21 10:19:13
SCN 0.2170788 (2170788)
GGSCI (vmogg as ggadmin@DB1) > start extract ext
Sending START request to MANAGER ...
EXTRACT EXT starting
If your extract doesn’t start, have a detailed look at the parameters used in the extract with view params ext. You might be using parameters that were accepted for classic extracts but are not supported for integrated extracts.
Repeat this procedure for all your classic extracts in all your GoldenGate deployments, and you will be ready to migrate to GoldenGate 23ai!
OGG-08241 when registering the extract
If you get the following error when registering the extract:
OGG-08241 ERROR: This EXTRACT EXT is already registered with the database.
Check the dba_capture view of the database you are trying to register the extract into. If you see an extract already registered, it might mean that you previously registered an integrated extract with the same name.
SELECT capture_name, queue_name, queue_owner, source_database, start_scn from dba_capture;
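If that entry is indeed a leftover registration that is no longer needed, it can be removed before registering the extract again (a standard GGSCI command; make sure the extract is stopped and the registration really is stale before doing this):
GGSCI (vmogg as ggadmin@DB1) > unregister extract ext database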
‘Recovery SCN has not reached SCN’ error
If you receive this error when running the upgrade command, try starting the extract again so that its recovery SCN can move forward, then stop it and re-run the check.
Extract ext is not ready to be upgraded because recovery SCN ... has not reached SCN ...
start extract ext
stop extract ext
info extract ext upgrade
What if my extract is ABENDED?
Technically, the info extract ... upgrade command works for ABENDED extracts. But depending on the reason your extract ended up in this state, you might not be able to upgrade.
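In that case, the first step is to understand why the extract abended, using the usual GGSCI commands (shown here with the same extract name as above):
GGSCI (vmogg) > info extract ext, detail
GGSCI (vmogg) > view report ext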
Containerize Vue 3 application with GitLab CI/CD
In this article, I will explain how I containerized my Vuetify + Vue 3 application with GitLab CI/CD.
My goal was to deploy my application on Kubernetes. To achieve this, I need to:
- Build the Vue 3 application
- Build the Docker image
- Push the image to the GitLab registry
Quick overview: the full project uses a micro-service architecture and runs on Kubernetes. The Vue 3 project only contains the UI, and we containerize it and serve it with an Nginx image. The backend is a REST API built with NestJS, and we containerize it separately.
Add a Dockerfile to the project
First, I need a Dockerfile to build the image for my application. In this Dockerfile, I use a two-stage build. As a result, the final image contains only what is strictly necessary.
# Build the Vue.js application
FROM node:current-alpine AS build
COPY . ./app
WORKDIR /app
RUN npm install
RUN npm run build
# Final Nginx container
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
The important part in this Dockerfile is the multi-stage build.
The first stage, with the Node container, builds the application, but production does not require all the tools used during this step.
As a result, the second stage copies only the dist folder, the result of the build, and embeds it into an Nginx container to serve the generated files.
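To try this locally before wiring it into the pipeline, you can build and run the image yourself (the my-vue-app tag is just an arbitrary local name):
# build the image from the Dockerfile in the current directory
docker build -t my-vue-app .
# run it and expose Nginx on http://localhost:8080
docker run --rm -p 8080:80 my-vue-app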
Next, I add the .gitlab-ci.yml file to the project root directory.
This file configures the pipeline: I use the docker-in-docker service to build the image. First, I log in to the registry of my project. Next, I build and push the image.
stages:
- build
build:
# Use the official docker image.
image: docker:latest
stage: build
services:
- docker:dind
before_script:
# Login to the gitlab registry
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
# Build and push the image
- docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
- docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
# Run this job where a Dockerfile exists
rules:
- if: $CI_COMMIT_BRANCH
exists:
- Dockerfile
Note: all the variables ($CI_REGISTRY_IMAGE, $CI_COMMIT_SHA…) used in the .gitlab-ci.yml are predefined variables provided by GitLab CI/CD.
Build the container
Once I push the .gitlab-ci.yml file, GitLab automatically triggers the pipeline following the rules definition.
After completion, the pipeline status is green and shows as passed.
As expected, the image is available in the registry of the project.
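As a next step towards the Kubernetes goal mentioned at the beginning (not covered by this pipeline), a Deployment can then reference that image; a minimal sketch, where the image path, names and pull secret are placeholders to adapt to your own project:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vue-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vue-ui
  template:
    metadata:
      labels:
        app: vue-ui
    spec:
      containers:
        - name: vue-ui
          # image pushed by the pipeline: <registry>/<group>/<project>:<commit-sha>
          image: registry.gitlab.com/<group>/<project>:<commit-sha>
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: gitlab-registry-secret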
Conclusion
In summary, properly containerizing a Vue application is easy, but it requires separating the build from the execution. A multi-stage build with an Nginx container produces a lightweight, production-ready image.
SQL Server Always-On: Centralizing Backup History Across Replicas
Monitoring backups in a SQL Server Always On Availability Group can often feel like a game of hide and seek where the rules change every time you flip a failover switch. On paper, your backup strategy is solid; you’ve configured your replica priorities and your jobs are running like clockwork. But when you query the backup history to ensure everything is under control, you realize the truth is fragmented.
Because the msdb database is local to each instance, each replica only knows about the backups it performed itself. If your backups happen on the secondary today and the primary tomorrow, no single node holds the complete story. This leads to inconsistent monitoring reports, “false positive” alerts, and a lot of manual jumping between instances just to answer a simple question: “Are we actually protected with consistent backups?”.
Before we dive deeper, a quick disclaimer for the lucky ones: if you are running SQL Server 2022 and have implemented Contained Availability Groups, you can stop reading here; you are safe. In a Contained AG, the system databases are replicated alongside your data, meaning the backup history is finally unified and follows the group.
For the others, in this post, we’re going to explore how to stop chasing metadata across your cluster. We’ll look at the limitations of the local msdb, compare different ways to consolidate a unified backup view using Linked Servers, OPENDATASOURCE, and PowerShell, and see how to build a monitoring query that finally tells the whole truth, regardless of where the backup actually ran.
The traditional approach is therefore to query the msdb through the AG listener, pointing to the primary replica. Using a simple TSQL query to find the most recent backup, we get the following results.
SELECT
bs.database_name,
MAX(bs.backup_finish_date) AS LastBackup
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'StackOverflow2010'
GROUP BY bs.database_name;
As we can see, the two backup dates differ (14:39 being the time the database was created in the AG and 15:15 being the time of the last backup). As a result, if a failover occurs, the main msdb will contain incomplete data because the primary msdb will be the one on SQLAGVM2.
This traditional approach is perfectly acceptable for standalone instances because there is only one source of truth, but in the case of Always-On, we need a more robust solution. Let’s examine the different possibilities.
The good old Linked Servers
If you want to keep everything within the SQL Engine, Linked Servers are your go-to tool. The idea is simple: from your Primary replica, you reach out to the Secondary, query its msdb, and union the results with your local data.
SELECT
@@SERVERNAME AS [PrimaryServer],
bs.database_name,
MAX(bs.backup_finish_date) AS LastBackup
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'StackOverflow2010'
GROUP BY bs.database_name
UNION ALL
SELECT
'REMOTE_MSDB' AS [ReportingServer],
bs.database_name,
MAX(bs.backup_finish_date) AS LastBackup
FROM [REMOTE_MSDB].msdb.dbo.backupset bs
WHERE bs.database_name = 'StackOverflow2010'
GROUP BY bs.database_name;
On paper, the best way to secure a Linked Server is to use the Self option, ensuring that your own permissions are carried over to the next node. It’s the most transparent approach for auditing and security. However, this is where we often hit a silent wall: the Double-Hop. Unless Kerberos delegation is perfectly configured in your domain, NTLM will prevent your identity from traveling to the second replica. You’ll end up with a connection error, not because of a lack of permissions, but because your identity simply couldn’t make the trip. To determine the type of authentication protocol you are using, use the following query.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
To bypass this hurdle, it is common to see Fixed Logins being used as a pragmatic workaround. But by hardcoding credentials to make the bridge work, we create a permanent, pre-authenticated tunnel. From a security standpoint, this can facilitate lateral movement if one instance is ever compromised, a major concern in modern Zero Trust architectures. Furthermore, it obscures your audit logs, as the remote server only sees the service account instead of the actual user. These hidden complexities and security risks are precisely why many DBAs are now moving toward more decoupled, scalable alternatives.
The “On-the-Fly” Connection: OPENDATASOURCE
To address the frustrations of Linked Servers, another T-SQL path often explored is the use of ad hoc queries via OPENDATASOURCE.
The concept is tempting: instead of building a permanent bridge (Linked Server), you use a temporary ladder; OPENDATASOURCE allows you to define a connection string directly inside your T-SQL statement, reaching out to a remote replica only for the duration of that specific query. It feels lightweight and dynamic because it requires no pre-configured server objects.
In theory, you would use it like this to pull backup history from your secondary node:
SELECT
@@SERVERNAME AS [Source Server],
bs.database_name,
MAX(bs.backup_finish_date) AS LastBackup
FROM OPENDATASOURCE('MSOLEDBSQL', 'Data Source=SQLAGVM1,1432;Integrated Security=SSPI').msdb.dbo.backupset bs
WHERE bs.database_name = 'StackOverflow2010'
GROUP BY bs.database_name;
However, if the environment is not properly configured, an error will occur immediately.
Msg 15281, Level 16, State 1, Line 1
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure.
By default, SQL Server locks this door. It is a protective measure to prevent users from using the SQL instance as a jumping point to query any other server on the network. To get past this error, a sysadmin must explicitly enable Ad Hoc Distributed Queries. This requires tweaking the advanced configuration of the instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO
Once these options have been reconfigured, access to the remote msdb is finally possible.
While we successfully made OPENDATASOURCE work, the trade-offs are significant. We have ended up with a solution that is brittle, difficult to maintain, and a potential liability:
- Hardcoding & Maintenance: Every replica requires a manually adapted connection string. If a server is renamed, a port is migrated, or a password expires, the entire monitoring logic collapses.
- Security & Shadow IT Risk: Enabling Ad Hoc Distributed Queries opens a permanent hole in your instance. You aren’t just allowing your script to run; you are allowing any sysadmin to connect to any external server, creating a ghost feature that can be easily misused.
In short, we are fighting the SQL engine’s security defaults just to get a simple timestamp. For a truly robust and scalable solution, it is time to look beyond T-SQL.
Here comes the superhero PowerShell
PowerShell steps in as the ultimate lifesaver, delivering high-level automation and pure execution simplicity. It allows you to centralize and control your scripts from an external management server, piloting your entire fleet remotely without cluttering your SQL instances.
By leveraging the power of dbatools, we shatter the limitations of traditional T-SQL. We gain dynamic flexibility and enhanced security by bypassing risky configurations, all while maintaining total control over how we manipulate the retrieved data. The security gain does not come from PowerShell itself, but from removing risky SQL Server surface area features and centralizing access control on a hardened management host. This works because PowerShell establishes direct connections from your management host to each replica. This bypasses the NTLM double-hop issue entirely, as your identity is never passed between servers, removing any need for complex Kerberos delegation or risky fixed logins.
Here is how to put this into practice. By using the Availability Group Listener as your unique gateway, you can dynamically discover the cluster topology and query all member nodes.
$Listener = 'SQLLISTENER,1432'
$TargetDB = 'StackOverflow2010'
try {
$Nodes = Invoke-DbaQuery -SqlInstance $Listener -Database 'master' -Query "SELECT replica_server_name FROM sys.dm_hadr_availability_replica_cluster_states"
$BackupQuery = "SELECT SERVERPROPERTY('ServerName') as [InstanceName], MAX(backup_finish_date) as [LastBackup] FROM msdb.dbo.backupset WHERE database_name = '$TargetDB' GROUP BY database_name"
write-output $Nodes
$Results = foreach ($Node in $Nodes.replica_server_name) {
write-output $Node
Invoke-DbaQuery -SqlInstance $Node -Database 'msdb' -Query $BackupQuery -ErrorAction SilentlyContinue
}
$Results | Where-Object { $_.LastBackup } | Sort-Object LastBackup -Descending | Select-Object -First 1 | Format-Table -AutoSize
}
catch {
Write-Error "Error : $($_.Exception.Message)"
}
This script is just a first step, but it lays the foundation for truly scalable infrastructure management. By shifting the logic to PowerShell instead of overloading our SQL instances, we achieve a robust and extensible method capable of handling large Always-On ecosystems without additional manual effort.
In this example, the listener name is hardcoded for the sake of clarity. However, the true strength of this approach lies in its ability to work behind the scenes with a dynamic inventory. In a Production environment, you would typically query a CMDB or a centralized configuration file to automatically populate the list of instances. This transforms a simple check into a silent, reliable automation that adapts seamlessly as your SQL environment evolves.
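As an illustration of that idea, the hardcoded listener could be replaced by a list read from a simple inventory file and looped over (a sketch; the CSV path and the ListenerName column are assumptions, not part of the script above):
# read the AG listeners to monitor from a CSV inventory
$Listeners = Import-Csv -Path 'C:\dba\inventory\ag_listeners.csv'
foreach ($Entry in $Listeners) {
    # reuse the node discovery and backup-history logic shown above for each listener
    $Nodes = Invoke-DbaQuery -SqlInstance $Entry.ListenerName -Database 'master' -Query "SELECT replica_server_name FROM sys.dm_hadr_availability_replica_cluster_states"
    # ... same per-node queries and aggregation as before ...
}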
While we wrote the T-SQL manually in this example for the sake of clarity, it is worth noting that dbatools offers an even more streamlined approach with the Get-DbaBackupHistory command. This native function eliminates the need for manual queries entirely, returning rich metadata objects that are ready to be filtered and aggregated across your entire fleet.
$Nodes.replica_server_name | Get-DbaBackupHistory -Database 'StackOverflow2010' | Sort-Object End -Descending | Select-Object -First 1
Final Thoughts: Taking Control of the Always-On Fleet
To wrap up, remember that technical skills are only as good as the management strategy behind them. Transitioning away from legacy methods toward a PowerShell-driven approach is about gaining control over your environment. Here is what you should keep in mind:
- Beyond T-SQL Boundaries: While Linked Servers or OPENDATASOURCE might work for quick fixes, they quickly become bottlenecks and security risks in hardened infrastructures.
- Object-Oriented Efficiency: By using PowerShell and dbatools, you stop managing raw text and start handling objects. This allows you to effortlessly filter, sort, and aggregate data from multiple Always-On nodes to extract a single, reliable source of truth.
- Smarter Security: Running queries externally via dedicated management shells ensures you maintain a high security posture without needing to enable high-risk surface area features on your SQL instances.
The real game-changer is the Central Management Server. By centralizing your logic on a dedicated administration machine, you stop scattering scripts across every instance. This server becomes your orchestrator: it pulls from your inventory (CMDB, central tables), broadcasts tasks across your entire fleet, and consolidates the results. This is exactly the approach we take at dbi services to manage the extensive SQL Server environments under our care. We leverage PowerShell and the dbatools module as the backbone of our scripting architecture. This allows us to collect data in the most comprehensive and optimized way possible, ensuring we deliver top-tier service to our clients.
This is how you move from artisanal, server-by-server management to an industrial-grade automation capable of piloting hundreds of instances with the same simplicity as a single one.
PostgreSQL: An introduction to row locking
Sometimes it is necessary to troubleshoot locks in PostgreSQL to figure out what is going on in the system. While locks are important and necessary in a database system, too many locks can also be an issue, performance-wise and resource-consumption-wise. There are several levels of locks, e.g. table-level and row-level locks. Depending on what kind of locks are causing an issue, you need to take different actions to resolve it.
We’ll start with some basics and a very simple playground: One simple table containing one row:
postgres=# create table t (a int);
CREATE TABLE
postgres=# insert into t values (1);
INSERT 0 1
One source of information about locking is pg_locks. Do we already see something in this catalog view? Indeed we do:
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks;
locktype | database | relation | pid | mode | granted | waitstart
------------+----------+----------+------+-----------------+---------+-----------
relation | postgres | pg_locks | 9119 | AccessShareLock | t |
virtualxid | | | 9119 | ExclusiveLock | t |
(2 rows)
What we see here is our own session (ignore the second row with “virtualxid” for now):
postgres=# select pg_backend_pid();
pg_backend_pid
----------------
9119
(1 row)
The question is why? We only did a simple select, but even a select which does not modify the database needs locking, because while we’re reading nobody else is allowed to modify the underlying objects. This is what “AccessShareLock” is about: others can still read, but not change the object.
If you do the same in a second session you might expect to see four rows in total, but:
-- session 2
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks;
locktype | database | relation | pid | mode | granted | waitstart
------------+----------+----------+------+-----------------+---------+-----------
relation | postgres | pg_locks | 9673 | AccessShareLock | t |
virtualxid | | | 9673 | ExclusiveLock | t |
(2 rows)
This seems a bit surprising, but pg_locks only shows locks from active processes, and once the statement completes the process is no longer active and the lock is gone.
So what we can do to exclude our own locks is this:
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
----------+----------+----------+-----+------+---------+-----------
(0 rows)
The story is a bit different if we do the same thing in a transaction in the first session and then query the view in the second session:
-- session 1
postgres=# begin;
BEGIN
postgres=*# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
----------+----------+----------+-----+------+---------+-----------
(0 rows)
-- session 2
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
------------+----------+----------+------+-----------------+---------+-----------
relation | postgres | pg_locks | 9119 | AccessShareLock | t |
virtualxid | | | 9119 | ExclusiveLock | t |
(2 rows)
Because the first session is still active and the transaction is open, this session still holds the lock. Ending the transaction releases the lock:
-- session 1
postgres=*# end;
COMMIT
-- session 2
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
----------+----------+----------+-----+------+---------+-----------
(0 rows)
Let’s start doing something with our table in session two:
-- session 2
postgres=# begin;
BEGIN
postgres=*# select * from t;
a
---
1
(1 row)
How does it look in session one?
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
------------+----------+----------+------+-----------------+---------+-----------
relation | postgres | t | 9673 | AccessShareLock | t |
virtualxid | | | 9673 | ExclusiveLock | t |
(2 rows)
This is consistent with the results above. As the second session is reading from the table and the transaction is still open, the lock is still there, no surprise. Let’s update the row in the still open transaction and then have a look again in session one:
-- session 2
postgres=*# update t set a = 2 where a = 1;
UPDATE 1
-- session 1
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart from pg_locks where pid != pg_backend_pid();
locktype | database | relation | pid | mode | granted | waitstart
---------------+----------+----------+------+------------------+---------+-----------
relation | postgres | t | 9673 | AccessShareLock | t |
relation | postgres | t | 9673 | RowExclusiveLock | t |
virtualxid | | | 9673 | ExclusiveLock | t |
transactionid | | | 9673 | ExclusiveLock | t |
(4 rows)
Now we additionally got a “RowExclusiveLock”, and that means nobody else can do anything with the new row/tuple we’ve just created. When someone else (a third session) tries to perform the same update, it will block:
-- session 3
postgres=# begin;
BEGIN
postgres=*# update t set a = 2 where a = 1;
… and the results in the first session look like this (I have added an order by and the “page” and “tuple” columns):
-- session 1
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart,page,tuple from pg_locks where pid != pg_backend_pid() order by pid;
locktype | database | relation | pid | mode | granted | waitstart | page | tuple
---------------+----------+----------+------+------------------+---------+-------------------------------+------+-------
relation | postgres | t | 9673 | RowExclusiveLock | t | | |
virtualxid | | | 9673 | ExclusiveLock | t | | |
transactionid | | | 9673 | ExclusiveLock | t | | |
relation | postgres | t | 9673 | AccessShareLock | t | | |
tuple | postgres | t | 9691 | ExclusiveLock | t | | 0 | 1
transactionid | | | 9691 | ExclusiveLock | t | | |
relation | postgres | t | 9691 | RowExclusiveLock | t | | |
virtualxid | | | 9691 | ExclusiveLock | t | | |
transactionid | | | 9691 | ShareLock | f | 2026-01-08 09:14:00.330564+01 | |
(9 rows)
What we see now is that the third session (pid=9691) is trying to get the lock but can’t, and must wait (waitstart) for the second session to release it. We can also see that the row/tuple in question is in block/page 0 and that it is the first tuple in that block. Once session two either commits or rolls back, the update in session three will proceed, but there is nothing to update because no row matches the where clause anymore.
-- session 2
postgres=*# commit;
COMMIT
-- session 3
UPDATE 0
postgres=*# select * from t;
a
---
2
(1 row)
postgres=*# end;
COMMIT
In session one we can verify that all locks are gone:
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart,page,tuple from pg_locks where pid != pg_backend_pid() order by pid;
locktype | database | relation | pid | mode | granted | waitstart | page | tuple
----------+----------+----------+-----+------+---------+-----------+------+-------
(0 rows)
Another example: with 5 rows in the table, two sessions update only the last row:
-- session 1
postgres=# truncate t;
TRUNCATE TABLE
postgres=# insert into t values (1),(2),(3),(4),(5);
INSERT 0 5
-- session 2
postgres=# begin;
BEGIN
postgres=*# update t set a = -1 where a = 5;
UPDATE 1
-- session 3
postgres=# begin;
BEGIN
postgres=*# update t set a = -1 where a = 5;
-- session 1
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart,page,tuple from pg_locks where pid != pg_backend_pid() order by pid;
locktype | database | relation | pid | mode | granted | waitstart | page | tuple
---------------+----------+----------+------+------------------+---------+-------------------------------+------+-------
relation | postgres | t | 9673 | RowExclusiveLock | t | | |
virtualxid | | | 9673 | ExclusiveLock | t | | |
transactionid | | | 9673 | ExclusiveLock | t | | |
virtualxid | | | 9691 | ExclusiveLock | t | | |
transactionid | | | 9691 | ShareLock | f | 2026-01-08 09:58:38.867534+01 | |
tuple | postgres | t | 9691 | ExclusiveLock | t | | 0 | 5
transactionid | | | 9691 | ExclusiveLock | t | | |
relation | postgres | t | 9691 | RowExclusiveLock | t | | |
(8 rows)
Now it is row number 5 in block 0.
The rows with “transactionid” and “virtualxid” refer to the transactions themselves (details here). Those values are also recorded in pg_locks:
-- session 1
postgres=# select locktype,database::regdatabase,relation::regclass,pid,mode,granted,waitstart,page,tuple,transactionid,virtualxid from pg_locks where pid != pg_backend_pid() order by pid;
locktype | database | relation | pid | mode | granted | waitstart | page | tuple | transactionid | virtualxid
---------------+----------+----------+------+------------------+---------+-------------------------------+------+-------+---------------+------------
relation | postgres | t | 9673 | RowExclusiveLock | t | | | | |
virtualxid | | | 9673 | ExclusiveLock | t | | | | | 24/22
transactionid | | | 9673 | ExclusiveLock | t | | | | 836 |
virtualxid | | | 9691 | ExclusiveLock | t | | | | | 25/18
transactionid | | | 9691 | ShareLock | f | 2026-01-08 09:58:38.867534+01 | | | 836 |
tuple | postgres | t | 9691 | ExclusiveLock | t | | 0 | 5 | |
transactionid | | | 9691 | ExclusiveLock | t | | | | 837 |
relation | postgres | t | 9691 | RowExclusiveLock | t | | | | |
When there are only a few locks in the database, this is an easy way to find out who is blocking whom. But if you have hundreds of sessions and locks, there is a more convenient way to check for blockers and waiters: the function pg_blocking_pids:
-- session 1
postgres=# select pg_blocking_pids(9691);
pg_blocking_pids
------------------
{9673}
(1 row)
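If you have many concurrent sessions, you can get this overview for every waiting backend at once by combining pg_blocking_pids with pg_stat_activity. This is just a small sketch (both the function and the view are standard PostgreSQL, the column list is only a suggestion):
-- session 1: list every waiting backend together with the PIDs that block it
select pid,
       pg_blocking_pids(pid) as blocked_by,
       wait_event_type,
       state,
       left(query, 60) as query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;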
Another option to get more details regarding row locking is the extension pgrowlocks:
-- session 1
postgres=# create extension pgrowlocks;
CREATE EXTENSION
postgres=# select * from t as a, pgrowlocks('t') as b where b.locked_row = a.ctid;
a | locked_row | locker | multi | xids | modes | pids
---+------------+--------+-------+-------+-------------------+--------
5 | (0,5) | 836 | f | {836} | {"No Key Update"} | {9673}
(1 row)
In the next post we’ll look more closely on transaction IDs and virtual transaction IDs when it comes to locking.
The article PostgreSQL: An introduction to row locking first appeared on the dbi Blog.
Planning GoldenGate 23ai Migration Before Support Expires
At the moment, three versions of GoldenGate are still under Premier Support, but this will change very soon, with GoldenGate 19c and 21c coming out of Premier Support in April this year! Below are the official Oracle support end dates, as of 2026.
GoldenGate Release | Release date | End of Premier Support | End of Extended Support
23ai | May 2024 | June 2029 | June 2032
21c | April 2021 | April 2026 | None
19c | May 2019 | April 2026 | April 2027
Of course, you should always consult the official updated resource (last update in November 2025). In fact, Oracle already extended the support for GoldenGate 19c once, so it’s always good to stay updated on this issue.
I don’t think it should come as a surprise that anything older than GoldenGate 19c has been out of support for quite some time now.
Another valuable lesson from this table is that you should never rely too much on intermediate releases unless you really need them. If you upgraded to GoldenGate 21c, there is no Extended Support at all: after April 2026, your GoldenGate won’t be supported! GoldenGate 19c Extended Support, however, only ends in April 2027.
Now would be the perfect time to upgrade to the latest version of GoldenGate, 23ai (soon to be 26ai). With that in mind, the upgrade process is more or less complex, depending on your current situation. I tried to summarize all situations in the following diagram. More explanations are available below, and full upgrade examples will follow in future blog posts.
Diagram representing migration processes to GoldenGate 23ai, depending on the source version and architecture.
Upgrading any Microservices Architecture to GoldenGate 23ai
If you already opted for GoldenGate Microservices Architecture, and you should have since it was first released nearly ten years ago, then the upgrade process is straightforward, with only one step to upgrade to GoldenGate 23ai. Again, the procedure itself is out of scope for this blog.
Upgrading from GoldenGate Classic Architecture to GoldenGate 23ai
If you still have a Classic Architecture installation running, the upgrade process will be more complex, and depends on the version of GoldenGate you have.
Classic Architecture 19c/21c to 23ai
If your version of GoldenGate is not too old (19c/21c), then you can use the Oracle Migration Utility available with the patch 37274898 (now KB100447 in MOS). I will give real migration examples when I have time to write more on the subject.
Classic Architecture 18c or older to 23ai
If you still have a GoldenGate 18c or older installation with a Classic Architecture, then you will need a two-step process to upgrade to GoldenGate 23ai.
- First a standard upgrade to a more recent version of GoldenGate, either 19c or 21c, with Classic Architecture.
- Then, use the Oracle Migration Utility mentioned above to migrate from Classic Architecture to GoldenGate 23ai with Microservices Architecture.
As always, keeping your GoldenGate installations up to date is the best way to ensure you stay supported and secure, and to avoid a painful migration process once it is too late.
The article Planning GoldenGate 23ai Migration Before Support Expires first appeared on the dbi Blog.
Dctm – Incorrect r_object_id reference in OTDS
I recently faced another pretty interesting case with an incorrect r_object_id reference in OTDS. In a non-production (dev) Active Directory (Azure EntraID) that is used as the source of truth for the users/groups, it looked like there was an account that was a bit messed up. By that, I mean that the mail address, user name and other fields of that account were actually using the values from another account, which was completely separate. That resulted in a bit of a mess inside OTDS and inside the Documentum Resource used behind it.
Dctm user not synchronized
The main problem was that, at that customer, the Documentum “dm_user.user_name” is set to a custom “Display Name” value, which isn’t the default Display Name of the AD but a custom attribute generated/filled with some logic behind it. With OTDS, you can add some logic into the Resource through JavaScript, but you often still rely on the backing AD to get correct details. For that specific account (USERID01 below), that custom Display Name was set to the value of another account (USERID02). Because of that, OTDS mapped the user to a certain Documentum account. Later, the messed-up account was corrected, but OTDS wasn’t able to recognize that the Dctm reference it had stored had then become incorrect, and therefore it couldn’t update the account anymore.
Initially, the Application Team reached out to me because they weren’t able to see a value for the “dm_user.user_global_unique_id” field (of USERID01). That parameter, when using OTDS, is supposed to contain the “oTObjectGUID” (by default mapped to the AD objectGUID). So, I quickly checked inside the Documentum Repository to see whether this account was synced, whether there were duplicate accounts, or whether there was some other evident issue:
[dmadmin@cs-0 mop]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2023. OpenText Corporation
All rights reserved.
Client Library Release 23.4.0000.0180
Connecting to Server using docbase REPO01
[DM_SESSION_I_SESSION_START]info: "Session 010f123480c63b32 started for user dmadmin."
Connected to OpenText Documentum Server running Release 23.4.0000.0143 Linux64.Oracle
Session id is s0
API> ?,c,select r_object_id, user_login_name, user_name from dm_user where upper(user_login_name) like '%USERID01%';
r_object_id user_login_name user_name
---------------- --------------------- -----------------------------
110f12348000c96b USERID01 User Display Name 01
(1 row affected)
API> dump,c,110f12348000c96b
...
USER ATTRIBUTES
user_name : User Display Name 01
user_os_name : USERID01
user_address : user01@domain.com
user_group_name :
user_privileges : 0
owner_def_permit : 7
world_def_permit : 3
group_def_permit : 5
default_folder : /Home/User Display Name 01
user_db_name :
description :
acl_domain : User Display Name 01
acl_name : dm_450f12348000f652
user_os_domain :
home_docbase :
user_state : 0
client_capability : 2
globally_managed : F
user_delegation :
workflow_disabled : F
alias_set_id : 0000000000000000
user_source : OTDS
user_ldap_dn :
user_xprivileges : 32
failed_auth_attempt : 0
user_admin :
user_global_unique_id :
user_login_name : USERID01
user_login_domain :
user_initials :
user_password : ****************
user_web_page :
first_failed_auth_utc_time : nulldate
last_login_utc_time : 1/3/2022 13:39:00
deactivated_utc_time : nulldate
deactivated_ip_addr :
restricted_folder_ids []: <none>
root_log_dir :
SYSTEM ATTRIBUTES
r_is_group : F
r_modify_date : 11/6/2025 07:12:07
r_has_events : F
r_object_id : 110f12348000c96b
APPLICATION ATTRIBUTES
INTERNAL ATTRIBUTES
i_is_replica : F
i_vstamp : 13
API> exit
Bye
[dmadmin@cs-0 mop]$
As you can see above, the “user_global_unique_id” is empty and some other fields also confirmed that the OTDS synchronization wasn’t done / working. Based on the OTDS Resource configuration, there should have been a lot of changes to that account, which wasn’t the case here.
OTDS user consolidation
Therefore, I tried to trigger a consolidation on the user itself from the OTDS UI, to see what the logs were saying:
Thread Id: 654,623 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Enter: update(ObjectClass: __ACCOUNT__, Attribute: {Name=__UID__, Value=[110f12348004510b]}, [Attribute: {Name=__TENANT__, Value=[]}, Attribute: {Name=user_os_name, Value=[]}, Attribute: {Name=AccountDisabled, Value=[false]}, Attribute: {Name=user_global_unique_id, Value=[9ibc9uqrfKafp/K1LRb0og==]}, Attribute: {Name=user_rename_unlock_locked_obj, Value=[T]}, Attribute: {Name=client_capability, Value=[2]}, Attribute: {Name=user_login_domain, Value=[]}, Attribute: {Name=user_name, Value=[User Display Name 01]}, Attribute: {Name=user_ldap_dn, Value=[CN=USERID01,OU=Users,DC=domain,DC=com]}, Attribute: {Name=user_address, Value=[user01@domain.com]}, Attribute: {Name=user_login_name, Value=[USERID01]}, Attribute: {Name=user_xprivileges, Value=[32]}, Attribute: {Name=create_default_cabinet, Value=[F]}, Attribute: {Name=user_privileges, Value=[0]}, Attribute: {Name=__PARTITION__, Value=[PARTITION01]}, Attribute: {Name=__OTDS_UUID__, Value=[7748abe9-7a51-9d43-b6c4-13ba639481b5]}, Attribute: {Name=user_type, Value=[dm_user]}, Attribute: {Name=user_rename_enabled, Value=[T]}, Attribute: {Name=default_folder, Value=[/Home/User Display Name 01]}, Attribute: {Name=__NAME__, Value=[User Display Name 01]}], null)
Thread Id: 654,631 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000051b]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@68f2554c})
Thread Id: 654,634 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000051e]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@6fa3934d})
Thread Id: 654,635 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f123480007dd8]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@41d7fb29})
Thread Id: 654,637 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000451d]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@7a229867})
Thread Id: 654,638 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000451e]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@4e95620c})
Thread Id: 654,665 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Enter: update(ObjectClass: __ACCOUNT__, Attribute: {Name=__UID__, Value=[110f12348004510b]}, [Attribute: {Name=__TENANT__, Value=[]}, Attribute: {Name=user_os_name, Value=[]}, Attribute: {Name=AccountDisabled, Value=[false]}, Attribute: {Name=user_global_unique_id, Value=[9ibc9uqrfKafp/K1LRb0og==]}, Attribute: {Name=user_rename_unlock_locked_obj, Value=[T]}, Attribute: {Name=client_capability, Value=[2]}, Attribute: {Name=user_login_domain, Value=[]}, Attribute: {Name=user_name, Value=[User Display Name 01]}, Attribute: {Name=user_ldap_dn, Value=[cn=USERID01,ou=users,dc=domain,dc=com]}, Attribute: {Name=user_address, Value=[user01@domain.com]}, Attribute: {Name=user_login_name, Value=[USERID01]}, Attribute: {Name=user_xprivileges, Value=[32]}, Attribute: {Name=create_default_cabinet, Value=[F]}, Attribute: {Name=user_privileges, Value=[0]}, Attribute: {Name=__PARTITION__, Value=[PARTITION01]}, Attribute: {Name=__OTDS_UUID__, Value=[7748abe9-7a51-9d43-b6c4-13ba639481b5]}, Attribute: {Name=user_type, Value=[dm_user]}, Attribute: {Name=user_rename_enabled, Value=[T]}, Attribute: {Name=default_folder, Value=[/Home/User Display Name 01]}, Attribute: {Name=__NAME__, Value=[User Display Name 01]}], null)
Thread Id: 654,623 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Exception:
com.opentext.otds.connectors.rest.HttpException: Error updating member: 400 Bad Request <!doctype html><html lang="en"><head><title>HTTP Status 400 – Bad Request</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 – Bad Request</h1></body></html>
at com.opentext.otds.connectors.rest.RestSession.updateMember(RestSession.java:314)
at com.opentext.otds.connectors.rest.RestConnector$9.endOfAttributes(RestConnector.java:557)
at com.opentext.otds.connectors.rest.RestConnector$9.endOfAttributes(RestConnector.java:526)
at com.opentext.otds.connectors.rest.RestAttributeParser.parse(RestAttributeParser.java:29)
at com.opentext.otds.connectors.rest.RestConnector.updateMember(RestConnector.java:564)
...
Thread Id: 654,635 Time: 11/18/25, 8:53 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Exception:
com.opentext.otds.connectors.rest.HttpException: Error adding group members: 404 Not Found <!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1></body></html>
at com.opentext.otds.connectors.rest.RestSession.addMembersToGroup(RestSession.java:387)
at com.opentext.otds.connectors.rest.RestConnector$3.endOfAttributes(RestConnector.java:222)
at com.opentext.otds.connectors.rest.RestConnector$3.endOfAttributes(RestConnector.java:172)
at com.opentext.otds.connectors.rest.RestAttributeParser.parse(RestAttributeParser.java:29)
...
So the OTDS Consolidation ends up with “400 Bad Request” and “404 Not Found” errors. Looking into the message in more detail, you can see that it has a “__UID__” attribute (“Attribute: {Name=__UID__, Value=[110f12348004510b]}”), which normally represents the r_object_id of the dm_user. But in this case, it’s different from the one in the dump above, and that r_object_id doesn’t exist in the Repository anymore:
[dmadmin@cs-0 ~]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> dump,c,110f12348004510b
> exit
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2023. OpenText Corporation
All rights reserved.
Client Library Release 23.4.0000.0180
Connecting to Server using docbase REPO01
[DM_SESSION_I_SESSION_START]info: "Session 010f123480c63b54 started for user dmadmin."
Connected to OpenText Documentum Server running Release 23.4.0000.0143 Linux64.Oracle
Session id is s0
API> ...
[DM_API_E_EXIST]error: "Document/object specified by 110f12348004510b does not exist."
[DM_SESSION_W_FETCH_FAILED]warning: "Fetch of object with handle 110f12348004510b and type NULL failed."
API> Bye
[dmadmin@cs-0 ~]$
What happened exactly? I’m not sure. I asked the people who were looking at that case / trying to fix the AD account, but nobody could tell me whether another account had been removed / re-assigned. I can only assume that OTDS initially created a duplicate account (USERID02) because it couldn’t map the existing account in the Repository, since the user_name was messed up in the AD, and that someone then either deleted or re-assigned the OTDS account to the other account (which had initially been created by the LDAP Sync years ago, before the migration to OTDS in the past few days/weeks).
Reflection on the way to fix it
In any case, what can be done to fix the situation? The problem is that OTDS now references an r_object_id that doesn’t exist in Documentum, and that prevents the synchronization from happening properly. Even if an account exists in the Repository with the expected user_login_name / user_name, since it’s not the one OTDS has internally, nothing happens.
I could see 2 options to solve the case:
- Since it’s part of a Synchronized Partition, you cannot delete the OTDS account. Therefore, a first solution could be to exclude this specific account from the Partition (by updating the user filter to specifically exclude that one), execute a consolidation to make sure all references to it disappear, and then re-add the account.
- Find and update the wrong reference in the OTDS DB
The safest approach is probably #1, but since I was in the middle of debugging the issue, I wanted to try the second one instead. This is to confirm how OTDS behaves and maybe to get more insights into how the DB is organized… Who knows, it might be helpful at some point in the future ;).
From what I already knew, the users are stored inside the “MEMBERS” table, but that only contains things from the Source system (the AD here). The possible external details, like the “r_object_id” I’m looking for here, would usually be in the “RESOURCEACCESS” table instead. Therefore, I found my user through its user id, fetched the UUID that is the cross-reference used by OTDS and was indeed able to find the wrong reference there:
SQL> SET PAGES 999;
SQL> SET WRAP OFF;
SQL> SET LINESIZE 50;
SQL> SELECT UUID, NAME FROM MEMBERS WHERE NAME='USERID01';
UUID NAME
-------------------------------- -----------------
7BE9748AA57149D3BC641A948363B1B5 USERID01
SQL> SELECT MEMBERID, IDINRESOURCE FROM RESOURCEACCESS WHERE MEMBERID='7BE9748AA57149D3BC641A948363B1B5';
MEMBERID IDINRESOURCE
-------------------------------- -----------------
7BE9748AA57149D3BC641A948363B1B5 110f12348004510b
But what if you aren’t certain that this is the only reference in the whole DB? I was in this position too. I knew I could find what I was looking for in these 2 tables, but I wasn’t 100% certain it was the only place. Well, I sure hoped so, as having duplicate / multiple references for the same thing in the DB could be pretty bad.
Let’s crawl the DB to see if there are any other references then! First of all, I created a small SQL file which contains the connection to the DB, the generation of all needed SELECT statements for each column of each table that has a string data_type, and finally the execution of the generated selects:
[dmadmin@cs-0 mop]$ ls -l
total 4
-rw-r----- 1 dmadmin dmadmin 536 Nov 18 09:03 generate_and_execute_selects.sql
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ cat generate_and_execute_selects.sql
CONNECT otds-user/otds-password@otds-tns
spool selects.sql
set pagesize 0
set linesize 10000
set trims on
SELECT
'SELECT ''' || table_name || ''' AS table_name, ''' ||
column_name || ''' AS column_name, "' ||
column_name || '" AS value_found ' ||
'FROM ' || owner || '.' || table_name ||
' WHERE "' || column_name || '" LIKE ''%110f12348004510b%'';'
FROM
all_tab_columns
WHERE
data_type IN ('VARCHAR2','NVARCHAR2','CHAR','NCHAR','CLOB','RAW')
AND owner = 'OTDS';
spool off
set echo on
@selects.sql
quit
[dmadmin@cs-0 mop]$
Note 1: I put the CONNECT inside the SQL file, but you can obviously set it at the command line or through variables, so it doesn’t show in the bash history. It’s just an example.
Note 2: You can change the string you are looking for; it’s in between the ‘%string%‘ (line 11 of the SQL file). You can also change the data_type as required (line 15 of the SQL file). For OTDS on Oracle, only VARCHAR2, CLOB and RAW are used.
Note 3: I’m using double quotes (“) around the column_name. This is because, unfortunately, some columns for OTDS appear to be created by the product with case sensitivity… Therefore, without these double quotes, you wouldn’t be able to query 2 of the tables: “SCHEMA_HISTORY” and “BOOTSTRAP_HISTORY”. Without them, the DB crawling would succeed for all tables except these 2, where you would get an “invalid identifier” error on the column name… As an example, you can try to execute these two queries:
select * from SCHEMA_HISTORY WHERE description like '%setup%'; -- this fails with an invalid identifier error
select * from SCHEMA_HISTORY WHERE "description" like '%setup%'; -- this works
Once you have the above SQL file, you can execute it. As mentioned, it will start with the generation of another SQL file, containing all needed SELECT statements for all columns of all tables. And it will then execute all these SELECT statements one by one. All the logs will be inside the “selects.log” file in this case:
[dmadmin@cs-0 mop]$ sqlplus /nolog @generate_and_execute_selects.sql > selects.log
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ echo $?
0
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ ls -l
total 96
-rw-r----- 1 dmadmin dmadmin 536 Nov 18 09:03 generate_and_execute_selects.sql
-rw-r----- 1 dmadmin dmadmin 54518 Nov 18 09:03 selects.log
-rw-r----- 1 dmadmin dmadmin 25122 Nov 18 09:03 selects.sql
[dmadmin@cs-0 mop]$
I usually do a first execution with “@selects.sql” commented out in the “generate_and_execute_selects.sql” file. This generates the SELECT statements without executing them, so I can first verify that everything looks good before cleaning up and re-executing with the line uncommented.
For OTDS, 164 SELECT statements are generated. In this environment, there are 252 columns owned by “OTDS” in total, but 49 of them have a TIMESTAMP data_type and 39 a NUMBER data_type, which leaves the 164 string-like columns, as expected:
[dmadmin@cs-0 mop]$ cat selects.sql
SELECT 'ACCESSROLEMEMBERS' AS table_name, 'TENANTID' AS column_name, "TENANTID" AS value_found FROM OTDS.ACCESSROLEMEMBERS WHERE "TENANTID" LIKE '%110f12348004510b%';
SELECT 'ACCESSROLEMEMBERS' AS table_name, 'ACCESSROLEID' AS column_name, "ACCESSROLEID" AS value_found FROM OTDS.ACCESSROLEMEMBERS WHERE "ACCESSROLEID" LIKE '%110f12348004510b%';
SELECT 'ACCESSROLEMEMBERS' AS table_name, 'MEMBERID' AS column_name, "MEMBERID" AS value_found FROM OTDS.ACCESSROLEMEMBERS WHERE "MEMBERID" LIKE '%110f12348004510b%';
SELECT 'SCHEMA_HISTORY' AS table_name, 'version' AS column_name, "version" AS value_found FROM OTDS.SCHEMA_HISTORY WHERE "version" LIKE '%110f12348004510b%';
SELECT 'SCHEMA_HISTORY' AS table_name, 'description' AS column_name, "description" AS value_found FROM OTDS.SCHEMA_HISTORY WHERE "description" LIKE '%110f12348004510b%';
...
...
SELECT 'JOBS' AS table_name, 'NAME' AS column_name, "NAME" AS value_found FROM OTDS.JOBS WHERE "NAME" LIKE '%110f12348004510b%';
SELECT 'JOBS' AS table_name, 'TARGET' AS column_name, "TARGET" AS value_found FROM OTDS.JOBS WHERE "TARGET" LIKE '%110f12348004510b%';
SELECT 'BOOTSTRAP_HISTORY' AS table_name, 'version' AS column_name, "version" AS value_found FROM OTDS.BOOTSTRAP_HISTORY WHERE "version" LIKE '%110f12348004510b%';
SELECT 'BOOTSTRAP_HISTORY' AS table_name, 'description' AS column_name, "description" AS value_found FROM OTDS.BOOTSTRAP_HISTORY WHERE "description" LIKE '%110f12348004510b%';
SELECT 'BOOTSTRAP_HISTORY' AS table_name, 'type' AS column_name, "type" AS value_found FROM OTDS.BOOTSTRAP_HISTORY WHERE "type" LIKE '%110f12348004510b%';
SELECT 'BOOTSTRAP_HISTORY' AS table_name, 'script' AS column_name, "script" AS value_found FROM OTDS.BOOTSTRAP_HISTORY WHERE "script" LIKE '%110f12348004510b%';
SELECT 'BOOTSTRAP_HISTORY' AS table_name, 'installed_by' AS column_name, "installed_by" AS value_found FROM OTDS.BOOTSTRAP_HISTORY WHERE "installed_by" LIKE '%110f12348004510b%';
164 rows selected.
[dmadmin@cs-0 mop]$
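As a side note, if you want to double-check where the 164 figure mentioned above comes from, a quick ad hoc query against all_tab_columns (not part of the crawling script itself) breaks down the OTDS columns by data_type; the exact numbers will of course depend on the OTDS version:
-- ad hoc check: number of OTDS columns per data_type
SELECT data_type, COUNT(*) AS nb_columns
FROM all_tab_columns
WHERE owner = 'OTDS'
GROUP BY data_type
ORDER BY nb_columns DESC;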
To verify that all SELECT statements were executed successfully and to check the list of occurrences of the specific string, you can use commands like these:
[dmadmin@cs-0 mop]$ grep '^SQL> SELECT' selects.log | wc -l
164
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ grep -i 'ERROR' selects.log | wc -l
0
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ grep 'no rows selected' selects.log | wc -l
163
[dmadmin@cs-0 mop]$
[dmadmin@cs-0 mop]$ grep -B1 '[^%]110f12348004510b' selects.log
SQL> SELECT 'RESOURCEACCESS' AS table_name, 'IDINRESOURCE' AS column_name, "IDINRESOURCE" AS value_found FROM OTDS.RESOURCEACCESS WHERE "IDINRESOURCE" LIKE '%110f12348004510b%';
RESOURCEACCESS IDINRESOURCE 110f12348004510b
[dmadmin@cs-0 mop]$
Therefore, that confirms that all 164 SELECT statements were executed, there were no errors, 163 queries returned empty results (no match in the full column) and only a single query matched. This means that the only reference to the Documentum r_object_id in the OTDS DB is indeed stored inside the “RESOURCEACCESS.IDINRESOURCE” column. So let’s update / correct that value:
SQL> SELECT MEMBERID, IDINRESOURCE FROM RESOURCEACCESS WHERE MEMBERID='7BE9748AA57149D3BC641A948363B1B5';
MEMBERID IDINRESOURCE
-------------------------------- -----------------
7BE9748AA57149D3BC641A948363B1B5 110f12348004510b
SQL> UPDATE RESOURCEACCESS SET IDINRESOURCE='110f12348000c96b' WHERE MEMBERID='7BE9748AA57149D3BC641A948363B1B5';
1 row updated.
SQL> SELECT MEMBERID, IDINRESOURCE FROM RESOURCEACCESS WHERE MEMBERID='7BE9748AA57149D3BC641A948363B1B5';
MEMBERID IDINRESOURCE
-------------------------------- -----------------
7BE9748AA57149D3BC641A948363B1B5 110f12348000c96b
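One detail not visible in the output above: SQL*Plus does not autocommit individual statements, so unless the session later exits cleanly (which commits by default), the change should be made permanent explicitly before triggering the consolidation:
-- make the corrected IDINRESOURCE value permanent
COMMIT;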
Now that it’s corrected, the final step is to trigger a new consolidation for that specific account, to see if OTDS is now able to update the account in Documentum:
Thread Id: 655,026 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Enter: update(ObjectClass: __ACCOUNT__, Attribute: {Name=__UID__, Value=[110f12348000c96b]}, [Attribute: {Name=__TENANT__, Value=[]}, Attribute: {Name=user_os_name, Value=[]}, Attribute: {Name=AccountDisabled, Value=[false]}, Attribute: {Name=user_global_unique_id, Value=[9ibc9uqrfKafp/K1LRb0og==]}, Attribute: {Name=user_rename_unlock_locked_obj, Value=[T]}, Attribute: {Name=client_capability, Value=[2]}, Attribute: {Name=user_login_domain, Value=[]}, Attribute: {Name=user_name, Value=[User Display Name 01]}, Attribute: {Name=user_ldap_dn, Value=[CN=USERID01,OU=Users,DC=domain,DC=com]}, Attribute: {Name=user_address, Value=[user01@domain.com]}, Attribute: {Name=user_login_name, Value=[USERID01]}, Attribute: {Name=user_xprivileges, Value=[32]}, Attribute: {Name=create_default_cabinet, Value=[F]}, Attribute: {Name=user_privileges, Value=[0]}, Attribute: {Name=__PARTITION__, Value=[PARTITION01]}, Attribute: {Name=__OTDS_UUID__, Value=[7748abe9-7a51-9d43-b6c4-13ba639481b5]}, Attribute: {Name=user_type, Value=[dm_user]}, Attribute: {Name=user_rename_enabled, Value=[T]}, Attribute: {Name=default_folder, Value=[/Home/User Display Name 01]}, Attribute: {Name=__NAME__, Value=[User Display Name 01]}], null)
Thread Id: 655,032 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f123480007dd8]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@289175b7})
Thread Id: 655,033 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000051b]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@2551d69b})
Thread Id: 655,036 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000051e]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@3f885a33})
Thread Id: 655,038 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000451d]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@39b49b96})
Thread Id: 655,040 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Enter: create(ObjectClass: DIRECTIONAL_ASSOCIATION, [Attribute: {Name=ToObjClassName, Value=[__GROUP__]}, Attribute: {Name=FromObjClassName, Value=[__MEMBER__]}, Attribute: {Name=From, Value=[[Multiple Members]]}, Attribute: {Name=To, Value=[120f12348000451e]}, Attribute: {Name=Type, Value=[MemberOfGroup]}], OperationOptions: {Members=[Ljava.lang.String;@4504edb8})
Thread Id: 655,066 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Enter: update(ObjectClass: __ACCOUNT__, Attribute: {Name=__UID__, Value=[110f12348000c96b]}, [Attribute: {Name=__TENANT__, Value=[]}, Attribute: {Name=user_os_name, Value=[]}, Attribute: {Name=AccountDisabled, Value=[false]}, Attribute: {Name=user_global_unique_id, Value=[9ibc9uqrfKafp/K1LRb0og==]}, Attribute: {Name=user_rename_unlock_locked_obj, Value=[T]}, Attribute: {Name=client_capability, Value=[2]}, Attribute: {Name=user_login_domain, Value=[]}, Attribute: {Name=user_name, Value=[User Display Name 01]}, Attribute: {Name=user_ldap_dn, Value=[cn=USERID01,ou=users,dc=domain,dc=com]}, Attribute: {Name=user_address, Value=[user01@domain.com]}, Attribute: {Name=user_login_name, Value=[USERID01]}, Attribute: {Name=user_xprivileges, Value=[32]}, Attribute: {Name=create_default_cabinet, Value=[F]}, Attribute: {Name=user_privileges, Value=[0]}, Attribute: {Name=__PARTITION__, Value=[PARTITION01]}, Attribute: {Name=__OTDS_UUID__, Value=[7748abe9-7a51-9d43-b6c4-13ba639481b5]}, Attribute: {Name=user_type, Value=[dm_user]}, Attribute: {Name=user_rename_enabled, Value=[T]}, Attribute: {Name=default_folder, Value=[/Home/User Display Name 01]}, Attribute: {Name=__NAME__, Value=[User Display Name 01]}], null)
Thread Id: 655,026 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Return: Attribute: {Name=__UID__, Value=[110f12348000c96b]}
Thread Id: 655,033 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Return: Attribute: {Name=__UID__, Value=[[Multiple Members]__com.opentext.otds.connectors.reserved.string____MEMBER__%%%__GROUP__%%%MemberOfGroup__com.opentext.otds.connectors.reserved.string__120f12348000051b]}
Thread Id: 655,032 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Return: Attribute: {Name=__UID__, Value=[[Multiple Members]__com.opentext.otds.connectors.reserved.string____MEMBER__%%%__GROUP__%%%MemberOfGroup__com.opentext.otds.connectors.reserved.string__120f123480007dd8]}
Thread Id: 655,066 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.UpdateApiOp Method: update Level: OK Message: Return: Attribute: {Name=__UID__, Value=[110f12348000c96b]}
Thread Id: 655,038 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Return: Attribute: {Name=__UID__, Value=[[Multiple Members]__com.opentext.otds.connectors.reserved.string____MEMBER__%%%__GROUP__%%%MemberOfGroup__com.opentext.otds.connectors.reserved.string__120f12348000451d]}
Thread Id: 655,040 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Return: Attribute: {Name=__UID__, Value=[[Multiple Members]__com.opentext.otds.connectors.reserved.string____MEMBER__%%%__GROUP__%%%MemberOfGroup__com.opentext.otds.connectors.reserved.string__120f12348000451e]}
Thread Id: 655,036 Time: 11/18/25, 9:11 AM Class: org.identityconnectors.framework.api.operations.CreateApiOp Method: create Level: OK Message: Return: Attribute: {Name=__UID__, Value=[[Multiple Members]__com.opentext.otds.connectors.reserved.string____MEMBER__%%%__GROUP__%%%MemberOfGroup__com.opentext.otds.connectors.reserved.string__120f12348000051e]}
The logs look clean and the same is visible on the Documentum side:
[dmadmin@cs-0 ~]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> dump,c,110f12348000c96b
> exit
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2023. OpenText Corporation
All rights reserved.
Client Library Release 23.4.0000.0180
Connecting to Server using docbase REPO01
[DM_SESSION_I_SESSION_START]info: "Session 010f123480c63b66 started for user dmadmin."
Connected to OpenText Documentum Server running Release 23.4.0000.0143 Linux64.Oracle
Session id is s0
API> ...
USER ATTRIBUTES
user_name : User Display Name 01
user_os_name :
user_address : user01@domain.com
user_group_name :
user_privileges : 0
owner_def_permit : 7
world_def_permit : 3
group_def_permit : 5
default_folder : /Home/User Display Name 01
user_db_name :
description :
acl_domain : User Display Name 01
acl_name : dm_450f12348000f652
user_os_domain :
home_docbase :
user_state : 0
client_capability : 2
globally_managed : F
user_delegation :
workflow_disabled : F
alias_set_id : 0000000000000000
user_source : OTDS
user_ldap_dn : cn=USERID01,ou=users,dc=domain,dc=com
user_xprivileges : 32
failed_auth_attempt : 0
user_admin :
user_global_unique_id : 9ibc9uqrfKafp/K1LRb0og==
user_login_name : USERID01
user_login_domain :
user_initials :
user_password : ****************
user_web_page :
first_failed_auth_utc_time : nulldate
last_login_utc_time : 1/3/2022 13:39:00
deactivated_utc_time : nulldate
deactivated_ip_addr :
restricted_folder_ids []: <none>
root_log_dir :
SYSTEM ATTRIBUTES
r_is_group : F
r_modify_date : 11/18/2025 09:11:38
r_has_events : F
r_object_id : 110f12348000c96b
APPLICATION ATTRIBUTES
INTERNAL ATTRIBUTES
i_is_replica : F
i_vstamp : 15
API> Bye
[dmadmin@cs-0 ~]$
The account was just updated: the “user_global_unique_id” has been populated, along with some other fields. Therefore, the OTDS synchronization is now correct for that account. In summary, this case was pretty specific, with issues inside the AD directly, which created a bit of a mess in OTDS and Documentum with a mapping to a wrong user that apparently isn’t present anymore. But once the AD was fixed, I was able to reset the r_object_id reference in OTDS, so that it could restore a correct behavior by itself.
The article Dctm – Incorrect r_object_id reference in OTDS first appeared on the dbi Blog.
Dctm – MFA & non-MFA within OTDS based on target Applications
In OpenText Directory Services (OTDS), you have multiple ways to handle two-factor authentication (2FA) or multi-factor authentication (MFA). As a quick reminder, 2FA simply means that you need to prove your identity in 2 different ways (e.g. a username/password + a One-Time-Password, a token, a biometric signature, …), while MFA means at least 2 different ways (i.e. it includes 2FA, but you can have more than 2 factors if needed). In this blog, I will talk about a customer case that I faced recently, with mixed MFA & non-MFA requirements.
OTDS capabilities for 2FA/MFA
OTDS itself provides some 2FA capabilities, which you can enable globally, for all Users/Groups/OUs/Partitions (OU = Organizational Unit), or partially for some specific Users/Groups/OUs/Partitions. That would be the first option if you don’t have anything else. However, you might already have 2FA or MFA managed outside of OTDS, for example within your Azure Tenant. In that case, you can delegate the 2FA or MFA to the Auth Handler. This can be done, for example, by creating an OAuth 2 / OpenID Connect handler that is connected to an Azure Application. In this case, users accessing an OTDS-managed client (Dctm, D2, …) would be redirected to OTDS, which could (or not) redirect automatically to your external SSO provider (Azure in the previous example), which could then perform the 2FA / MFA.
The reason why I’m writing this blog is that there are actually some limits in the way OTDS works. It’s a bit difficult to explain, but basically it looks like OTDS was made for rather small and simple setups / organizations and might not really be adapted for a fully centralized and global identity management system. From my point of view, it seems like OTDS was designed with the expectation that a specific user would only be present in a single partition. In theory, that could indeed make sense. But what if you have a very big enterprise with hundreds of thousands of real users? What if you have a huge set of users and groups for some Applications? Somehow, it feels like OTDS was designed to be deployed not as a centralized solution but just as a side-application of each and every OpenText environment. If you have 10 Documentum Repositories on sandbox, 10 for dvlp, 10 for test, 10 for integration and 10 for prod, then you can obviously deploy 10 OTDS on sandbox (1 per Repo), 10 on dvlp, etc… But that just means much more work! Technically, it feels like it was designed for that purpose. However, for simplicity, most companies will most probably not do that and will instead deploy 1 OTDS for sandbox, 1 for dvlp, etc. Or maybe even just 1 OTDS for sandbox/dvlp, 1 OTDS for test/integration and 1 for prod, to still keep some segregation between types of environments while reducing as much as possible the overhead of operational management.
Important note: starting with Dctm 24.4, users need to be assigned a license by OTDS to be able to log in to Dctm clients. I won’t talk about that specific aspect in this blog, but just know that what I will describe later down below might have an impact on the license topic too (because of the randomness)…
Currently, in OTDS, you define Partitions (most of the time synchronized partition(s), which import users/groups from an AD). Then you have Resources, which in our case represent a Documentum Repository, for user push to Dctm and authentication. Resources come with Access Roles, which define which Users/Groups/OUs/Partitions are going to be synchronized to the associated Resource(s). Then you have OAuth Clients for your end-user applications like D2. And finally, Auth Handlers, which basically represent authentication sources for users (already generated cookie, LDAP, external SSO providers, …).
Design problems?
The problem with the current approach is when you have a lot of users/groups and a lot of resources. Let’s assume that you have 500’000 users and 50’000 groups in your AD. How would you set up OTDS for 10 different Documentum Repositories, each of them with thousands of users and groups, knowing that the Repositories might share parts of the users and groups (e.g. a userA has access to 5 of the 10 Repositories)? You will have to create Resources for each Repository, but how will you configure the Access Roles? You will obviously NOT manually select 10’000 users inside the Access Role, and you will probably not select 1’000 groups manually either. Especially since, if you do that manually, then when something gets added to the AD in the future, it won’t automatically be synced to the Repository unless you manually do it on OTDS too… That’s not maintainable. What other solutions do you have then? Using an OU? That’s not really usable in most cases, as all users would most probably be under some “USERS” OU (and in there, you might have region-specific branches like EU/NA/JP or some other split), and you cannot do exclusions. So how do you select multiple OUs while excluding some of the lower OUs? Most ADs weren’t set up with end-user applications in mind, so the OU definitions usually do not reflect application needs for user import/sync. Therefore, the only remaining “practical” solution is Partitions…
In turn, you will probably end up creating multiple Partitions with the goal that 1 Partition = 1 Repository. That’s actually pretty much how the LDAP Synchronization was working before OTDS was introduced… In Documentum, you configured your dm_ldap_config object with a user source, a group source, user filters, group filters and an LDAP hostname/port. That’s the same thing as an OTDS Partition… And another Repository could have a very different configuration altogether. Therefore, you will have 10 Partitions, named based on your Applications (like APP1, APP2, …, APP10) and 10 Resources (1 per Repository for the user push through the JMS dmotdsrest URL). If we take the case of our “userA” from before, that means you will have, in OTDS, 5 “userA” accounts coming from 5 of the Partitions (since this account needs access to 5 of the 10 applications, it must be part of some of the groups that these 5 applications are importing, and therefore the Partitions will import these groups with our userA).
Here comes the first problem: users are used to logging in to Applications with their userIDs, which might be an email or the sAMAccountName for example. And usually, they don’t want to change that. So, when OTDS comes in and tells you that you will have to log in with “USERID@PARTITION”, so that the account actually used can be uniquely identified by OTDS, most people will just say “f*** off”, and rightly so! It’s not exactly the same topic, but you can take a look at two previous blog posts here and there where I talked about using sAMAccountName instead of the default userPrincipalName for logins.
But that’s not all, there is a second problem that comes from the first one: how do you restrict / link which OAuth Client uses which user / which Partition for the authentication? Well, you can’t… For segregation, you will most probably create 1 OAuth Client for the D2 of each of the 10 Repositories. So APP1 = REPO1 will have an OAUTHCLIENT1 for its D2, APP2 = REPO2 will have an OAUTHCLIENT2 for its D2, etc… In the OAuth Client, you can restrict which Partition will be allowed to log in, but it doesn’t control which Partition will actually be used before that. Let’s assume userA has access to APP1 through APP5 and wants to log in to APP5. The OAUTHCLIENT5 will therefore be used in this case. When the user accesses D2, they get redirected to OTDS. OTDS doesn’t know which user it is, so either it displays the OTDS login page or it redirects to an external Auth Handler like the Azure integration I was talking about previously. Azure will return the user “USERA@DOMAIN-NAME.COM” to OTDS. However, OTDS has 5 accounts that match that, in the APP1 to APP5 Partitions, and it will pick one of them randomly. Therefore, APP2 or APP3 might be used in this case, while the OAUTHCLIENT5 was the one that initiated the request. This means that you cannot restrict a specific Partition for the login, because you don’t know which one might have been selected by OTDS.
MFA & non-MFA in OTDS
I haven’t talked much about the MFA & non-MFA since the beginning of this blog, but fear not, it is coming. In addition to all the complexity above, what would happen if you had different classifications per Application? For example, let’s assume REPO1 is a High-Risk application that requires MFA while REPO2 is a Low-Risk application that does not require MFA. How do you manage that with OTDS?
And here comes the third problem: you can’t do that either. As mentioned at the beginning of this blog, OTDS only allows 2FA to be configured based on Users/Groups/OUs/Partitions. You might think that this could then work if we set APP1 with 2FA and APP2 without. But as I was trying to explain with the second problem above, when you have the same accounts in multiple Partitions, you cannot control which one OTDS will use. That means that userA accessing APP2 (Low-Risk/non-MFA) might actually use the userA account from APP1, which means the user will have to enter MFA details for the non-MFA application. In that direction, it’s annoying but not “critical”. However, you cannot make sure that the opposite will never happen (bypassing MFA for the High-Risk/MFA-protected application). That was with the OTDS 2FA solution, but what if you are using an external Auth Handler (like Azure) to provide the MFA capability?
Well, here comes the fourth problem (which is quite similar to the second one): you cannot control which Auth Handler is being used by an OAuth Client. In the above case, you would have defined 2 SSO Providers with Azure, one that could be named “SSO_MFA” (which would be MFA-enabled on the Azure side and force all users to provide additional authentication details) and another “SSO_NON_MFA” (which would not be MFA-enabled on the Azure side). Therefore, userA accessing the D2 of REPO1 (supposed to be MFA) would use the OAUTHCLIENT1. Since you have 2 SSO providers, you would probably make them both non-default (“Active By Default: false” in the Auth Handler “Parameters” page). In this case, you would end up on the OTDS login page and it would be up to the user to select either “SSO_MFA” or “SSO_NON_MFA”… Which means you are NOT restricting/forcing the MFA in the end… Nothing would prevent userA from always clicking on “SSO_NON_MFA”, and it would actually work for him. If you make the “SSO_MFA” Auth Handler the default, then users will always have to provide MFA details, even if they are trying to access APP2 (which wouldn’t require that), so that’s not really a solution either.
Workaround
What to do then? Since it looks like there are no other alternatives, at that customer, we simply installed another OTDS… We are using our own containers and automation in Kubernetes Clusters, so adding a new environment wasn’t much work, but it just means that instead of having X OTDS (like 1 for sandbox/dvlp, 1 for test/integration and 1 for prod), we had to deploy 2X OTDS: always one dedicated to MFA-enabled applications and one for the rest (non-MFA applications). I asked OpenText to open an improvement/feature request (OCTIM77DS4926338) to add the capability to link an OAuth Client to Auth Handler(s). The reasoning is that when a user tries to log in to an end-user application, it’s always linked to a specific OAuth Client, no? So why can’t that Application/OAuth Client specify that it needs to use (this could be made optional) a specific Auth Handler? If you can select multiple, then you would allow for default SSO but still have the capability to fall back to username/password (through the LDAP credentials for example) or other solutions you might have configured. There might be some work / updates to be done on the cookie management to make that possible, but I will let OT look into it. In any case, the workaround is already in place and works as expected for the customer, so he is happy with that.
The article Dctm – MFA & non-MFA within OTDS based on target Applications first appeared on the dbi Blog.
M-Files Sharing Center: Improved collaboration
As I’ve said many times, I like M-Files because each monthly release not only fixes bugs, but also introduces new features. This time (December 2025), they worked on the sharing feature.
It’s not entirely new, as it was already possible to create links to share documents with external users. However, it was clearly not user-friendly and was difficult to manage. Let’s take a closer look.
The Sharing Center in M-Files allows users to share content with external users by generating links that can be public or passcode-protected.
Key capabilities include:
- Access Control: Modify or revoke permissions instantly to maintain security.
- Audit & Compliance: Track sharing history for regulatory compliance and internal governance.
- External Collaboration: Share documents with partners or clients without exposing your entire repository.
The capabilities will evolve in the next releases.
- Centralized Overview: See all active shares in one place. No more guessing who has access!
- Editing Permissions: Allow external users to update shared documents.
And a lot of other new features will be added in the upcoming releases!
A good internal content management system is crucial for maintaining data consistency and integrity. But what about when you share these documents via email, a shared directory, or cloud storage?
You have control over your documents and can see who has accessed them and when.
Need to end a collaboration? With one click, you can immediately revoke access.
Providing an official way to share documents with external people helps prevent users from using untrusted cloud services and other methods that can break security policies.
It’s hard to make it any simpler. Just right-click on the document and select “Share.”
A pop-up will ask you for an email address and an expiration date, and that’s it!
When an external user goes to the link, he or she will be asked to enter his or her email address. A passcode will then be sent to that email address for authentication.
Another interesting feature is that the generated link remains the same when you add or remove users, change permissions, or stop and resume sharing.
Small Tips
- Set expiration dates: The easier it is to share a document, the easier it is to forget about it. Therefore, it is important to set an expiration date for shared documents.
- Use role-based permissions: Sharing information outside the organization is sensitive, so controlling who can perform this action is important.
- Regularly review active shares: Even if an expiration date has been set, it is a good habit to ensure that only necessary access remains.
M-Files already provides a great tool for external collaboration: Hubshare. With this collaboration portal, you can do much more than share documents. Of course, this tool incurs an additional cost. M-Files Sharing Center solves another problem: how to share documents occasionally outside the organization without compromising security or the benefits M-Files provides internally.
The first version of the Sharing Center is currently limited to downloading, but an editing feature and a dashboard for quickly identifying shared documents across the repository will be included in a future release. The simplicity and relevance of these features will undoubtedly make the product stand out even more from its competitors.
If you are curious about it, feel free to ask us!
The article M-Files Sharing Center: Improved collaboration first appeared on the dbi Blog.
Data Anonymization as a Service with Delphix Continuous Compliance
In the era of digital transformation, attack surfaces are constantly evolving and cyberattack techniques are becoming increasingly sophisticated. Maintaining the confidentiality, integrity, and availability of data is therefore a critical challenge for organizations, both from an operational and a regulatory standpoint (GDPR, ISO 27001, NIST). Data anonymization has thus become a crucial topic.
Contrary to a widely held belief, the risk is not limited to the production environment. Development, testing, and pre-production environments are prime targets for attackers, as they often benefit from weaker security controls. The use of production data that is neither anonymized nor pseudonymized directly exposes organizations to data breaches, regulatory non-compliance, and legal sanctions.
Development teams require realistic datasets in order to:
- Test application performance
- Validate complex business processes
- Reproduce error scenarios
- Train Business Intelligence or Machine Learning algorithms
However, the use of real data requires the implementation of anonymization or pseudonymization mechanisms ensuring:
- Preservation of functional and referential consistency
- Prevention of data subject re-identification
Among the possible anonymization techniques, the main ones include:
- Dynamic Data Masking, applied on-the-fly at access time but which does not anonymize data physically
- Tokenization, which replaces a value with a surrogate identifier
- Cryptographic hashing, with or without salting
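To make the last technique more concrete, here is a minimal TypeScript sketch (not Delphix-specific; the salt handling is simplified and the variable names are illustrative) of deterministic pseudonymization with a salted hash. Because the same input always yields the same surrogate value, referential consistency across tables is preserved:

import { createHash } from "crypto";

// Secret salt, assumed to come from configuration (never hard-code it in real use)
const SALT = process.env.ANONYMIZATION_SALT ?? "change-me";

// Deterministic pseudonymization: identical inputs always produce identical surrogates,
// so joins on the masked column (e.g. an email address) keep working across tables.
function pseudonymize(value: string): string {
  return createHash("sha256").update(SALT + value).digest("hex").slice(0, 16);
}

console.log(pseudonymize("alice@example.com"));
console.log(pseudonymize("alice@example.com")); // same surrogate value

Note that a salted hash like this is pseudonymization rather than true anonymization: whoever holds the salt can recompute the mapping, which is precisely why governed, auditable tooling is preferable to ad-hoc scripts.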
In a first scenario, a full backup of the production database is restored directly into a development environment. Anonymization is then applied using manually developed SQL scripts or ETL processes.
This approach presents several critical weaknesses:
- Temporary exposure of personal data in clear text
- Lack of formal traceability of anonymization processes
- Risk of human error in scripts
- Non-compliance with GDPR requirements
This model should therefore be avoided in regulated environments.
Data copy via a Staging Database in Production
This model introduces an intermediate staging database located within a security perimeter equivalent to that of production. Anonymization is performed within this secure zone before replication to non-production environments.
This approach makes it possible to:
- Ensure that no sensitive data in clear text leaves the secure perimeter
- Centralize anonymization rules
- Improve overall data governance
However, several challenges remain:
- Versioning and auditability of transformation rules
- Governance of responsibilities between teams (DBAs, security, business units)
- Maintaining inter-table referential integrity
- Performance management during large-scale anonymization
In a third architecture, Delphix is integrated as the central engine for data virtualization and anonymization. The Continuous Compliance module enables process industrialization through:
- An automated data profiler identifying sensitive fields
- Deterministic or non-deterministic anonymization algorithms
- Massively parallelized execution
- Orchestration via REST APIs integrable into CI/CD pipelines
- Full traceability of processing for audit purposes
This approach enables the rapid provisioning of compliant, reproducible, and secure databases for all technical teams.
Database anonymization should no longer be viewed as a one-time constraint but as a structuring process within the data lifecycle. It is based on three fundamental pillars:
- Governance
- Pipeline industrialization
- Regulatory compliance
An in-house implementation is possible, but it requires a high level of organizational maturity, strong skills in anonymization algorithms, data engineering, and security, as well as a strict audit framework. Solutions such as Delphix provide an industrialized response to these challenges while reducing both operational and regulatory risks.
To take this further, Microsoft's article explaining the integration of Delphix into Azure pipelines analyzes the same issues discussed above, but this time in the context of the cloud: Use Delphix for Data Masking in Azure Data Factory and Azure Synapse Analytics
What's next?
This use case is just one example of how Delphix can be leveraged to optimize data management and compliance in complex environments. In upcoming articles, we will explore other recurring challenges, highlighting both possible in-house approaches and industrialized solutions with Delphix, to provide a broader technical perspective on data virtualization, security, and performance optimization.
What about you?
How confident are you about the management of your confidential data?
If you have any doubts, please don't hesitate to reach out to me to discuss them!
Avoiding common ECM pitfalls with M-Files
Enterprise Content Management (ECM) systems promise efficiency, compliance, and better collaboration.
Many organizations struggle to realize these benefits because of common pitfalls in traditional ECM implementations.
Let’s explore these challenges and see how M-Files addresses them with its unique approach.
Pitfall 1: Information Silos
Traditional ECM systems often replicate the same problem they aim to solve: information silos. Documents remain locked in departmental repositories or legacy systems, making cross-functional collaboration difficult.
How does M-Files address this challenge?
M-Files connects to existing repositories without requiring migration. Its connector architecture allows users to access content from multiple sources through a single interface, breaking down silos without disrupting operations. Additionally, workflows defined in M-Files can be applied to linked content, providing incredible flexibility.
With folder-based systems, users must know exactly where a document is stored. This leads to wasted time spent searching for files. It also increases the risk of creating duplicates because users who cannot find a document may be tempted to create or add it again. Another common issue is that permissions are defined by folder. This means that, in order to give users access to data, the same file must be copied to multiple locations.
How M-Files Solves It:
Instead of folders, M-Files uses metadata-driven organization. Users search by “what” the document is (e.g., invoice, contract) rather than “where” it’s stored. This makes retrieval fast and intuitive and it allows users to personalize how the data is displayed based on their needs.
If an ECM system is hard to use, for example because it requires a lot of information to be filled in or offers only very restricted options, employees will bypass it, creating compliance risks, inefficiencies and data loss.
How M-Files addresses this:
M-Files seamlessly integrates with familiar tools, such as Microsoft Teams, Outlook, and SharePoint, reducing friction. Its simple interface allows users to quickly become familiar with the software, and its AI-based suggestions make manipulating data easy, ensuring rapid adoption.
Regulated industries face strict requirements for document control, audit trails, and retention. Traditional ECM systems often require manual processes to stay compliant.
How M-Files helps:
M-Files includes a workflow engine that automates compliance with version control, audit logs, and retention policies. This ensures approvals and signatures occur in the correct order, thereby reducing human error and delays.
As organizations grow, ECM systems can become bottlenecks due to their rigid architectures.
M-Files offers several solutions: cloud, on-premises, and hybrid deployment options, ensuring scalability and flexibility. Its architecture supports global operations without sacrificing performance and helps streamline costs.
Selecting an ECM involves more than comparing the costs (licenses, infrastructure, etc.) of different market players. It is also, above all else, a matter of identifying the productivity gains, reduction in repetitive task workload, and efficiency that such a solution provides.
If you’re feeling overwhelmed by the vast world of ECM, don’t hesitate to ask us for help.
Exascale storage architecture
In this blog post, we will explore the Exascale storage architecture and processes pertaining to Exascale. But before diving in, a quick note to avoid any confusion about Exascale: even though in the previous article, we focused on Exascale Infrastructure and its various benefits in terms of small footprint and hyper-elasticity based on modern cloud characteristics, it is important to keep in mind that Exascale is an Exadata technology and not a cloud-only technology. You can benefit from it in non-cloud deployments as well, such as on an Exadata Database Machine deployed in your data centers.
Exascale storage components
Here is the overall picture:
Exascale cluster
An Exascale Cluster is composed of Exadata storage servers to provide storage to Grid Infrastructure clusters and the databases. An Exadata storage server can belong to only one Exascale cluster.
Software services (included in the Exadata System Software stack) run on each Exadata cluster to manage the cluster resources made available to GI and databases, namely pool disks, storage pools, vaults, block volumes and many others.
With Exadata Database Service on Exascale Infrastructure, the number of Exadata storage servers included in the Exascale cluster can be very large, hundreds or even thousands of storage servers, to enable cloud-scale storage resource pooling.
Storage pools
A storage pool is a collection of pool disks (see below for details about pool disks).
Each Exascale cluster requires at least one storage pool.
A storage pool can be dynamically reconfigured by changing pool disks size, allocating more pool disks or adding Exadata storage servers.
The pool disks found inside a storage pool must be of the same media type.
Pool disks
A pool disk is physical storage space allocated from an Exascale-specific cell disk to be integrated in a storage pool.
Each storage server physical disk has a LUN in the storage server OS and a cell disk is created as a container for all Exadata-related partitions within the LUN. Each partition in a cell disk is then designated as a pool disk for Exascale (or grid disk in case of ASM is used instead of Exascale).
A media type is associated with each pool disk based on the underlying storage device and can be one of the following:
- HC: points to high capacity storage using hard disk drives
- EF: based on extreme flash storage devices
- XT: corresponds to extended storage using hard disk drives found in Exadata Storage Server X11M Extended (XT) hardware
Vaults
Vaults are logical storage containers used for storing files and are allocated from storage pools. By default, without specific provisioning attributes, vaults can use all resources from all storage pools of an Exascale cluster.
Here are the two main services provided by vaults:
- Security isolation: a security perimeter is associated with vaults based on user access controls which guarantee a strict isolation of data between users and clusters.
- Resource control: storage pool resources usage is configured at the vault level for attributes like storage space, IOPS, XRMEM and flash cache sizing.
For those familiar with ASM, think of vaults as the equivalent of ASM disk groups. For example, instance parameters like 'db_create_file_dest' and 'db_recovery_file_dest' reference vaults instead of ASM disk groups by using the '@vault_name' syntax.
Since attributes like redundancy, type of file content and type of media are set at the file level instead of at the logical storage container level, there is no need to organize vaults in the same manner as we did for disk groups. For instance, we don't need to create a first vault for data and a second vault for recovery files as we are used to with ASM.
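As a purely hypothetical illustration of this syntax (VAULT1 is a made-up vault name), both file destinations could even point to the same vault:

-- Hypothetical example: VAULT1 is a made-up vault name.
-- Instead of '+DATA' and '+RECO' disk groups, both destinations can reference the same Exascale vault.
ALTER SYSTEM SET db_create_file_dest   = '@VAULT1' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '@VAULT1' SCOPE=BOTH;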
Besides database files, vaults can also store other types of files, even though it is recommended to store non-database files on block volumes. That's because Exascale is optimized for storing large files such as database files, whereas regular files are typically much smaller and are a better fit for block volumes.
Files
The main files found on Exascale storage are Database and Grid Infrastructure files. Beyond that, all objects in Exascale are represented as files of a certain type. Each file type has storage attributes defined in a template. The file storage attributes are:
- mediaType: HC, EF or XT
- redundancy: currently high
- contentType: DATA and RECO
This makes a huge difference compared with ASM, where different redundancy needs required creating different disk groups. With Exascale, it is now possible to store files with different redundancy requirements in the same storage structure (vaults). This also helps optimize usage of the storage capacity.
Files are composed of extents of 8MB in size which are mirrored and striped across all of the vault's storage resources.
The tight integration of the database kernel with Exascale makes it possible for Exascale to automatically understand the type of file the database asks to create and thus apply the appropriate attributes defined in the file template. This prevents Exascale from storing data and recovery file extents (more on extents in the next section) on the same disks and also guarantees that mirrored extents are located on different storage servers than the primary extent.
File extents
Remember that in Exascale, storage management moved from the compute servers to the storage servers. Specifically, this means the building blocks of files, namely extents, are managed by the storage servers.
The new data structure used for extent management is a mapping table which tracks, for each file extent, the location of the primary and mirror copy extents on the storage servers. This mapping table is cached by each database server and instance to retrieve the location of its file extents. Once the database has the extent location, it can directly make an I/O call to the appropriate storage server. If the mapping table is no longer up to date because of database physical structure changes or the addition or removal of storage servers, an I/O call can be rejected, triggering a mapping table refresh that allows the I/O call to be retried.
Exascale Block Store with RDMA-enabled block volumes
Block volumes can be allocated from storage pools to store regular files on file systems like ACFS or XFS. They also enable centralization of VC VM images, thus cutting the dependency of VM images on internal compute node storage and streamlining migrations between physical database servers. Clone, snapshot, backup and restore features for block volumes can leverage all resources of the available storage servers.
Exascale storage services
From a software perspective, Exascale is composed of a number of software services available in the Exadata System Software (since release 24.1). These software services run mainly on the Exadata Storage Servers but also on the Exadata Database Servers.
Exascale storage server services
- EGS – Cluster Services: EGS (Exascale Global Services) main task is to manage the storage allocated to storage pools. In addition, EGS also controls storage cluster membership, security and identity services, as well as monitoring the other Exascale services.
- ERS – Control Services: ERS (Exascale RESTful Services) provide the management endpoint for all Exascale management operations. The new Exascale command-line interface (ESCLI), used for monitoring and management functions, leverages ERS for command execution.
- EDS – Exascale Vault Manager Services: EDS (Exascale Data Services) are responsible for files and vaults metadata management and are made up of two groups of services: the System Vault Manager (SYSEDS), which manages Exascale vaults metadata, such as the security perimeter through ACLs and vault attributes, and the User Vault Manager (USREDS), which manages Exascale files metadata, such as ACLs and attributes as well as clones and snapshots metadata.
- BSM – Block Store Manager: BSM manages Exascale block storage metadata and controls all block store management operations like volume creation, attachment, detachment, modification or snapshot.
- BSW – Block Store Worker: These services perform the actual requests from clients and translate them to storage server I/O.
- IFD – Instant Failure Detection: The IFD service watches for failures which could arise in the Exascale cluster and triggers recovery actions when needed.
- Exadata Cell Services: Exadata cell services are required for Exascale to function and both services work in conjunction to provide the Exascale features.

Exascale database server services
- EGS – Cluster Services: EGS instances will run on database servers when the Exadata configuration has fewer than five storage servers.
- BSW – Block Store Worker: Services requests from block store clients and performs the resulting storage server I/O.
- ESNP – Exascale Node Proxy: ESNP provides Exascale cluster state to GI and Database processes.
- EDV – Exascale Direct Volume: The EDV service exposes Exascale block volumes to Exadata compute nodes and runs I/O requests on EDV devices.
- EGSB/EDSB: Per-database-instance services that maintain metadata about the Exascale cluster and vaults.
Below diagram depicts how the various Exascale services are dispatched on the storage and compute nodes:
Wrap-up
By rearchitecting Exadata storage management with a focus on space efficiency, flexibility and elasticity, Exascale can now overcome the main limitations of ASM:
- diskgroup resizing complexity and time-consuming rebalance operation
- space distribution among DATA and RECO diskgroups requiring rigorous estimation of storage needs for each
- sparse diskgroup requirement for cloning with a read-only test master or use of ACFS without smart scan
- redundancy configuration at the diskgroup level
The links below provide further details on the matter:
Oracle Exadata Exascale advantages blog
Oracle Exadata Exascale storage fundamentals blog
Oracle And Me blog – Why ASM Needed an Heir Worthy of the 21st Century
Oracle And Me blog – New Exascale architecture
More on the exciting Exascale technology in coming posts …
Forgejo: Organizations, Repositories and Actions
In the last post we’ve deployed Forgejo on FreeBSD 15. In this post we’re going to do something with it and that is: We’ll create a new organization, a new repository, and finally we want to create a simple action. An “Action” is what GitLab calls a pipeline.
Creating a new organization is just a matter of a few clicks:
The only change to the default settings is the visibility, which is changed to private. The interface directly switches to the new organization once it is created:
The next step is to create and initialize a new repository, which is also just a matter of a few clicks:
All the defaults, except for the “private” flag.
To clone this repository locally you’ll need to add your public ssh key to your user’s profile:
Once you have that, the repository can be cloned as usual:
dwe@ltdwe:~/Downloads$ git clone ssh://git@192.168.122.66/dwe/myrepo.git
Cloning into 'myrepo'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (3/3), done.
dwe@ltdwe:~/Downloads$ ls -la myrepo/
total 4
drwxr-xr-x 1 dwe dwe 26 Dec 15 09:41 .
drwxr-xr-x 1 dwe dwe 910 Dec 15 09:41 ..
drwxr-xr-x 1 dwe dwe 122 Dec 15 09:41 .git
-rw-r--r-- 1 dwe dwe 16 Dec 15 09:41 README.md
So far so good, let's create a new "Action". Before we do that, we need to check that actions are enabled for the repository:
What we need now is a so-called "Runner". A "Runner" is a daemon that fetches work from a Forgejo instance, executes it and returns the result. For the "Runner" we'll use a Debian 13 minimal setup:
root@debian13:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 13 (trixie)"
NAME="Debian GNU/Linux"
VERSION_ID="13"
VERSION="13 (trixie)"
VERSION_CODENAME=trixie
DEBIAN_VERSION_FULL=13.2
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
The only requirement is to have Git, curl and jq installed, so:
root@debian13:~$ apt install -y git curl jq
root@debian13:~$ git --version
git version 2.47.3
Downloading and installing the runner (this is a copy/paste from the official documentation):
root@debian13:~$ export ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
root@debian13:~$ echo $ARCH
amd64
root@debian13:~$ export RUNNER_VERSION=$(curl -X 'GET' https://data.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
root@debian13:~$ echo $RUNNER_VERSION
12.1.2
root@debian13:~$ export FORGEJO_URL="https://code.forgejo.org/forgejo/runner/releases/download/v${RUNNER_VERSION}/forgejo-runner-${RUNNER_VERSION}-linux-${ARCH}"
root@debian13:~$ wget -O forgejo-runner ${FORGEJO_URL}
root@debian13:~$ chmod +x forgejo-runner
root@debian13:~$ wget -O forgejo-runner.asc ${FORGEJO_URL}.asc
root@debian13:~$ gpg --keyserver hkps://keys.openpgp.org --recv EB114F5E6C0DC2BCDD183550A4B61A2DC5923710
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key A4B61A2DC5923710: public key "Forgejo <contact@forgejo.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
root@debian13:~$ gpg --verify forgejo-runner.asc forgejo-runner && echo "✓ Verified" || echo "✗ Failed"
gpg: Signature made Sat 06 Dec 2025 11:10:50 PM CET
gpg: using EDDSA key 0F527CF93A3D0D0925D3C55ED0A820050E1609E5
gpg: Good signature from "Forgejo <contact@forgejo.org>" [unknown]
gpg: aka "Forgejo Releases <release@forgejo.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: EB11 4F5E 6C0D C2BC DD18 3550 A4B6 1A2D C592 3710
Subkey fingerprint: 0F52 7CF9 3A3D 0D09 25D3 C55E D0A8 2005 0E16 09E5
✓ Verified
Move that to a location which is in the PATH:
root@debian13:~$ mv forgejo-runner /usr/local/bin/forgejo-runner
root@debian13:~$ forgejo-runner -v
forgejo-runner version v12.1.2
As usual, a separate user should be created to run a service:
root@debian13:~$ groupadd runner
root@debian13:~$ useradd -g runner -m -s /bin/bash runner
As the runner will use Docker, Podman or LXC to execute the Actions, we’ll need to install Podman as well:
root@debian13:~$ apt install -y podman podman-docker
root@debian13:~$ podman --version
podman version 5.4.2
root@debian13:~$ systemctl enable --now podman.socket
root@debian13:~$ machinectl shell runner@
Connected to the local host. Press ^] three times within 1s to exit session.
runner@debian13:~$ systemctl --user enable --now podman.socket
Created symlink '/home/runner/.config/systemd/user/sockets.target.wants/podman.socket' → '/usr/lib/systemd/user/podman.socket'.
Now we need to register the runner with the Forgejo instance. Before we can do that, we need to fetch the registration token:
Back on the runner, register it:
root@debian13:~$ su - runner
runner@debian13:~$ forgejo-runner register
INFO Registering runner, arch=amd64, os=linux, version=v12.1.2.
WARN Runner in user-mode.
INFO Enter the Forgejo instance URL (for example, https://next.forgejo.org/):
http://192.168.122.66:3000/
INFO Enter the runner token:
BBE3MbNuTl0Wl52bayiRltJS8ciagRqghe7bXIXE
INFO Enter the runner name (if set empty, use hostname: debian13):
runner1
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm):
INFO Registering runner, name=runner1, instance=http://192.168.122.66:3000/, labels=[docker:docker://data.forgejo.org/oci/node:20-bullseye].
DEBU Successfully pinged the Forgejo instance server
INFO Runner registered successfully.
runner@debian13:~$
This will make the new runner visible in the interface, but it is in “offline” state:
Time to startup the runner:
root@debian13:~$ cat /etc/systemd/system/forgejo-runner.service
[Unit]
Description=Forgejo Runner
Documentation=https://forgejo.org/docs/latest/admin/actions/
After=docker.service
[Service]
ExecStart=/usr/local/bin/forgejo-runner daemon
ExecReload=/bin/kill -s HUP $MAINPID
# This user and working directory must already exist
User=runner
WorkingDirectory=/home/runner
Restart=on-failure
TimeoutSec=0
RestartSec=10
[Install]
WantedBy=multi-user.target
root@debian13:~$ systemctl daemon-reload
root@debian13:~$ systemctl enable forgejo-runner
root@debian13:~$ systemctl start forgejo-runner
Once the runner is running, the status in the interface will switch to “Idle”:
Ready for our first “Action”. Actions are defined as a yaml file in a specific directory of the repository:
dwe@ltdwe:~/Downloads/myrepo$ mkdir -p .forgejo/workflows/
dwe@ltdwe:~/Downloads/myrepo$ cat .forgejo/workflows/demo.yaml
on: [push]
jobs:
test:
runs-on: docker
steps:
- run: echo All good!
dwe@ltdwe:~/Downloads/myrepo$ git add .forgejo/
dwe@ltdwe:~/Downloads/myrepo$ git commit -m "my first action"
[main f9aa487] my first action
1 file changed, 6 insertions(+)
create mode 100644 .forgejo/workflows/demo.yaml
dwe@ltdwe:~/Downloads/myrepo$ git push
What that does: Whenever there is a “push” to the repository, a job will be executed on the runner with label “docker” which doesn’t do more than printing “All good!”. If everything went fine you should see the result under “Actions” section of the repository:
Nice, now we're ready to do some real work, but this is the topic for the next post.
AI Isn’t Your Architect: Real-World Issues in a Vue project
In my previous article I generated a Rest NestJS API using AI.
Today, I will create a small UI to authenticate users via the API. I will use this simple case to show the limits of coding with AI and what you need to be attentive to.
I will create my interface with Vue 3 and Vuetify, still using the GitHub Copilot agent in VS Code.
Initializing the project
I create the new Vuetify project with the npm command:
npm create vuetify@latest
To avoid CORS requests between the Vuetify project and the API project, I'm configuring a proxy in Vite, like in my other article.
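For reference, a minimal proxy configuration could look like the following sketch (the API is assumed to listen on http://localhost:3000, adjust the target to your setup):

// vite.config.ts (excerpt), assumes the API listens on http://localhost:3000
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Forward every /api call to the backend, so the browser only talks to the Vite dev server
      "/api": {
        target: "http://localhost:3000",
        changeOrigin: true,
      },
    },
  },
});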
In the AI chat, I also initialize my context:
Remember:
- You are a full-stack TypeScript developer.
- You follow best practices in development and security.
- You will work on this NestJS project.
To guide the AI, I’m exporting the Open API definition into a file in my project: /api-docs/open-api.json
Connecting to the API, first issue
First, I want to connect my UI to the API, and I ask the AI the following:
Connect the application to the API. The API url path is /api
The result is not what I expected… My goal was to generate a simple class that makes requests to the API with support for JWT tokens, but by default the AI wanted to add the Axios library to the project.
I’m not saying that Axios is a bad library, but it’s far too complicated for my usage and will add too many dependencies to the project, and therefore more maintenance.
So I’m skipping the installation of the library and I’m stopping the AI agent.
To continue and generate the desired code, I ask the AI:
I don't want to use axios, connect the application to the API with native typescript code
With this prompt, the generated code is fine.
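The generated code is not reproduced here, but a fetch-based client in that spirit could look like this sketch (names, paths and error handling are illustrative only):

// api.ts, a minimal helper around the native fetch API (illustrative sketch, no external dependency)
export async function apiFetch<T>(path: string, options: RequestInit = {}): Promise<T> {
  const response = await fetch(`/api${path}`, {
    ...options,
    headers: { "Content-Type": "application/json", ...(options.headers as Record<string, string> | undefined) },
  });
  if (!response.ok) {
    throw new Error(`API error ${response.status}: ${response.statusText}`);
  }
  return (await response.json()) as T;
}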
Authentication Service, hidden issue
Without going into the details, I asked the AI to create my authentication form and the associated service:
Add a page /login to authenticate users, Use vuetify for login form.
Add a service to authenticate the users using the api endpoint /auth/login
The api return jwt token.
When the user is authenticated, redirect the user to /home
If a user accesses /index without authentication redirect the user to /login
The result looks good and works:
At first glance, the code works and I can authenticate myself. But the problem comes from the code itself:
The localStorage is accessible by all scripts, thus vulnerable to XSS attacks.
JWT access tokens should not be stored in persistent storage accessible by JavaScript, such as localStorage. To reduce the risk of XSS attacks, it is preferable to store the access token in a Vue service variable rather than in persistent browser storage.
Note: When stored in memory, the token will be lost at every page refresh, which requires implementing a refresh token mechanism. The refresh token should be stored in an HttpOnly cookie, allowing the access token to have a short expiration time and significantly limiting the impact of a potential attack.
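A minimal sketch of this in-memory approach could look like the following (simplified; the refresh token flow mentioned above is not shown and the names are illustrative):

// auth-store.ts, keep the access token in memory only (illustrative sketch)
let accessToken: string | null = null;

export function setAccessToken(token: string | null): void {
  accessToken = token;
}

// Used by the API client to build the Authorization header
export function getAuthHeader(): Record<string, string> {
  return accessToken ? { Authorization: `Bearer ${accessToken}` } : {};
}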
To solve the issue I asked the AI the following:
Don't use localStorage to store the token, it's a security issue
Using GPT-5 mini, it just does the work:
With Claude Haiku 4.5, we have a short notice:
Why does this happen?
I tried different AI models in GitHub Copilot but, from GPT to Claude, the result was similar. Most AIs generate code with Axios and localStorage for this use case, because they replicate the most common patterns found in their training data, not the most up-to-date or secure practices.
Axios is overrepresented in tutorials because it offers a simple, opinionated HTTP abstraction that is easier for an AI to reason about than the lower-level fetch API.
The storage of JWTs in localStorage is still widely shown online as it reflects old frontend authentication practices that prioritized simplicity over security. It keeps the token easily accessible to JavaScript and avoids having to handle cookies and refresh token rotation. Although largely discouraged today, these examples remain overrepresented in the tutorials and training data used by AI models.
In short, AI prioritizes widely recognized patterns and simplicity of implementation over minimalism and real-world security considerations.
Conclusion
Although AI is an incredible tool that helps us in our development work, it is important to understand its limits. With AI, the new role of developers is to imagine the code architecture, ask the AI, evaluate the result and review the code. As its name indicates very well, "Copilot" is your co-pilot; you must remain the pilot.
AI can write code, but it does not understand the consequences of architectural decisions.
The truth about nested transactions in SQL Server
Working with transactions in SQL Server can feel like navigating a maze blindfolded. On paper, nested transactions look simple enough, start one, start another, commit them both, but under the hood, SQL Server plays by a very different set of rules. And that’s exactly where developers get trapped.
In this post, we're going to look at what really happens when you try to use nested transactions in SQL Server. We'll walk through a dead-simple demo, expose why @@TRANCOUNT is more illusion than isolation, and see how a single rollback can quietly unravel your entire call chain. If you've ever assumed nested transactions behave the same way as in Oracle, for example, this might clarify a few things you didn't expect!
Before diving into the demonstration, let’s set up a simple table in tempdb and illustrate how nested transactions behave in SQL Server.
IF OBJECT_ID('tempdb..##DemoLocks') IS NOT NULL
DROP TABLE ##DemoLocks;
CREATE TABLE ##DemoLocks (id INT IDENTITY, text VARCHAR(50));
BEGIN TRAN MainTran;
BEGIN TRAN InnerTran;
INSERT INTO ##DemoLocks (text) VALUES ('I''m just a speedy insert ! Nothing to worry about');
COMMIT TRAN InnerTran;
WAITFOR DELAY '00:00:10';
ROLLBACK TRAN MainTran;
Let's see how locks behave after committing the nested transaction and entering the WAITFOR phase. If nested transactions provided isolation between each other, no locks should remain since the transaction no longer works on any object. The following query shows all locks associated with my query specifically and the ##DemoLocks table we are working on.
SELECT
l.request_session_id AS SPID,
r.blocking_session_id AS BlockingSPID,
resource_associated_entity_id,
DB_NAME(l.resource_database_id) AS DatabaseName,
OBJECT_NAME(p.object_id) AS ObjectName,
l.resource_type AS ResourceType,
l.resource_description AS ResourceDescription,
l.request_mode AS LockMode,
l.request_status AS LockStatus,
t.text AS SQLText
FROM sys.dm_tran_locks l
LEFT JOIN sys.dm_exec_requests r
ON l.request_session_id = r.session_id
LEFT JOIN sys.partitions p
ON l.resource_associated_entity_id = p.hobt_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t
where t.text like 'IF OBJECT%'
and OBJECT_NAME(p.object_id) = '##DemoLocks'
ORDER BY l.request_session_id, l.resource_type;
And the result :
All of this was just smoke and mirrors !
We clearly see in the image 2 persistent locks of different types:
- LockMode IX: an intent lock on a data page of the ##DemoLocks table. This indicates that a lock is active on one of its sub-elements to optimize the engine's lock checks.
- LockMode X: an exclusive lock on a RID (Row Identifier) for data writing (here, our INSERT).
For more on locks and their usage : sys.dm_tran_locks (Transact-SQL) – SQL Server | Microsoft Learn
In conclusion, SQL Server does not allow nested transactions to maintain isolation from each other; nested transactions remain dependent on their main transaction, which prevents the release of locks. Therefore, the rollback of MainTran leaves the table empty, even though a COMMIT was issued at the nested transaction level. This behavior still respects the ACID properties (Atomicity, Consistency, Isolation, and Durability), which are crucial for maintaining data validity and reliability in database management systems.
Now that we have shown that nested transactions have no useful effect on lock management and isolation, let’s see if they have even worse consequences. To do this, let’s create the following code and observe how SQL Server behaves under intensive nested transaction creation. This time, we will add SQL Server’s native @@TRANCOUNT variable, which allows us to analyze the number of open transactions currently in progress.
CREATE PROCEDURE dbo.NestedProc
@level INT
AS
BEGIN
BEGIN TRANSACTION;
PRINT 'Level ' + CAST(@level AS VARCHAR(3)) + ', @@TRANCOUNT = ' + CAST(@@TRANCOUNT AS VARCHAR(3));
IF @level < 100
BEGIN
SET @level += 1
EXEC dbo.NestedProc @level;
END
COMMIT TRANSACTION;
END
GO
EXEC dbo.NestedProc 1;
This procedure recursively creates 100 nested transactions, if we manage to go that far… Let’s look at the output.
Level 1, @@TRANCOUNT = 1
[...]
Level 32, @@TRANCOUNT = 32
Msg 217, Level 16, State 1, Procedure dbo.NestedProc, Line 12 [Batch Start Line 15]
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).
Indeed, SQL Server imposes various limitations on nested transactions which imply that if they are mismanaged, the application may suddenly suffer a killed query, which can be very dangerous. These limitations are in place to act as safeguards against infinite nesting loops of nested transactions.
Furthermore, we see that @@TRANCOUNT increments with each new BEGIN TRANSACTION, but it does not reflect the true number of active main transactions; i.e., there are 32 transactions ongoing but only 1 can actually release locks.
I understand, we cannot stop here. I need to go get my old Oracle VM from my garage and fire it up.
Oracle has a pragma called AUTONOMOUS_TRANSACTION that allows creating independent transactions inside a main transaction. Let’s see this in action with a small code snippet.
CREATE TABLE test_autonomous (
id NUMBER PRIMARY KEY,
msg VARCHAR2(100)
);
/
CREATE OR REPLACE PROCEDURE auton_proc IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO test_autonomous (id, msg) VALUES (2, 'Autonomous transaction');
COMMIT;
END;
/
CREATE OR REPLACE PROCEDURE main_proc IS
BEGIN
INSERT INTO test_autonomous (id, msg) VALUES (1, 'Main transaction');
auton_proc;
ROLLBACK;
END;
/
In this code, we create two procedures:
main_proc, the main procedure, inserts the first row into the table.
auton_proc, called by main_proc, adds a second row to the table.
auton_proc is committed while main_proc is rolled back. Let’s observe the result:
SQL> SELECT * FROM test_autonomous;
ID MSG
---------- --------------------------------------------------
2 Autonomous transaction
Now that’s a true nested transaction ! Here, the nested transaction achieves isolation and can persist independently of its main transaction.
Summary
In summary, SQL Server and Oracle handle nested transactions in different ways. In SQL Server, nested transactions do not create real isolation: @@TRANCOUNT may increase, but a single main transaction actually controls locks and the persistence of changes. Internal limits, like the maximum nesting of 32 procedures, show that excessive nested transactions can cause critical errors.
In contrast, Oracle, thanks to PRAGMA AUTONOMOUS_TRANSACTION, allows truly independent transactions within a main transaction. These autonomous transactions can be committed or rolled back without affecting the main transaction, providing a real mechanism for nested isolation.
As Brent Ozar points out, SQL Server also has a SAVE TRANSACTION command, which allows you to define a savepoint within a transaction and roll back to it without rolling back the whole transaction. This command therefore provides more flexibility in managing nested transactions, but it does not provide complete isolation of sub-transactions. Furthermore, as Brent Ozar emphasizes, this command is complex and requires careful analysis of its behavior and the consequences it entails.
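As a quick, minimal sketch of savepoints (reusing the ##DemoLocks table from the demo above), rolling back to a savepoint undoes only the work done after it, while the outer transaction stays open:

BEGIN TRAN MainTran;

INSERT INTO ##DemoLocks (text) VALUES ('Kept: before the savepoint');

SAVE TRANSACTION BeforeRiskyWork;   -- define a savepoint inside the open transaction

INSERT INTO ##DemoLocks (text) VALUES ('Discarded: after the savepoint');

-- Undo only the work done since the savepoint; MainTran stays open and @@TRANCOUNT is unchanged
ROLLBACK TRANSACTION BeforeRiskyWork;

COMMIT TRAN MainTran;   -- only the first row is persisted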
Another approach to bypass SQL Server’s nested-transaction limitations is to manage transaction coordination directly at the application level, where each logical unit of work can be handled independently.
The lesson is clear: appearances can be deceiving! Understanding the actual behavior of transactions in each DBMS is crucial for designing reliable applications and avoiding unpleasant surprises.
MongoDB DMK 2.3: new features
The latest MongoDB DMK release (2.3.1) introduces a lot of new features and important changes, which I will describe here.
dbi services provides the DMK (Database Management Kit) to its customers for multiple technologies: Oracle, Postgres, MongoDB, etc. This toolkit is provided free of charge to all clients who work with dbi services on a consulting project.
The DMK is a set of standardized tools aimed at easing the work of DBAs, by having dbi's best practices embedded in common scripts across all the database servers of an organization.
New features of the MongoDB DMK
Rewriting of the project
The most significant change in the MongoDB DMK is the rewriting of all old Perl scripts into new Python scripts. On top of being more adapted to the MongoDB ecosystem, these will improve modularity for customers wanting to write their own packages.
It means that all utility scripts are now named .py instead of .sh, and apart from new features that have been added, the basic behavior stays the same for all of them.
Before release 2.3.0, only one configuration file existed in $DMK_HOME/etc. There is now a second configuration file in ~/.DMK/dmk.conf.local, which will overwrite default configuration options. See the GitBook section on Using DMK for more information.
The Optimal Flexible Architecture (OFA) has new recommendations. Specifically, the new default architecture is the following:
- /u01 for binaries and admin folders
- /u02 for database files
- /u03 for journal files
- /u04 for log files
- /u90 for backup files
Even though dbi suggests OFA as a good standard for MongoDB installations, we know that a lot of legacy installations will not use this kind of architecture. This is why the DMK is now more versatile, and with the use of the local configuration file described above, it has never been easier to adapt the DMK to your needs.
New aliases and environment variables
Some aliases were changed in this release, others were added. See Environment Variables and Aliases in the documentation for more information.
- mgstart, mgstop, mgrestart are new aliases to manage a MongoDB instance.
- vic now opens the MongoDB instance configuration file.
- vilst now opens the $DMK_HOME/etc/mongodb.lst file.
- sta, lsta, tsta, rsta are new aliases for instance status display.
- vil, cdl, tal are new aliases to view, access and tail log files of MongoDB instances.
- dmkc opens the DMK default configuration file.
- dmkl opens the DMK local configuration file, which overrides the default configuration file.
- A script named set_local_dmk_config.py was created to automate local configuration file changes. See Environment Variables for more details.
- Backups are no longer compressed by default, and the option to compress them has been added to the dmk_dbbackup.py script.
- And of course, corrections of bugs.
Installing the DMK is always fairly easy. If you follow the OFA, just unzip the package and source dmk.sh.
[root@vm00 ~]$ su - mongodb
[mongodb@vm00 ~]$ unzip -oq dmk_mongodb-2.3.1.zip -d /u01/app/mongodb/local
[mongodb@vm00 ~]$ . /u01/app/mongodb/local/dmk/bin/dmk.sh
2025-12-04 10:03:48 | INFO | DMK_HOME environment variable is not defined.
2025-12-04 10:03:48 | INFO | First time installation of DMK.
2025-12-04 10:03:48 | INFO | DMK has been extracted to /u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | Using DMK_HOME=/u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | Default configuration file '/u01/app/mongodb/local/dmk/etc/dmk.conf.default' does not exist. Creating it.
2025-12-04 10:03:48 | INFO | Copying template file '/u01/app/mongodb/local/dmk/templates/etc/dmk.conf.unix' to '/u01/app/mongodb/local/dmk/etc/dmk.conf.default'
2025-12-04 10:03:48 | INFO | Local configuration file does not exist. Creating it.
2025-12-04 10:03:48 | INFO | Copying template file '/u01/app/mongodb/local/dmk/templates/etc/dmk.conf.local.template' to '/home/mongodb/.dmk/dmk.conf.local'
2025-12-04 10:03:48 | INFO | Creating symlink '/u01/app/mongodb/local/dmk/etc/dmk.conf.local' to '/home/mongodb/.dmk/dmk.conf.local'
2025-12-04 10:03:48 | WARNING | MONGO_BASE environment variable is not set. Trying to retrieve it from DMK_HOME (/u01/app/mongodb/local/dmk).
2025-12-04 10:03:48 | WARNING | MONGO_BASE set to '/u01/app/mongodb' based on DMK_HOME location.
2025-12-04 10:03:48 | WARNING | If you're running DMK for the first time, you can ignore these warnings.
2025-12-04 10:03:48 | WARNING | Otherwise, please set MONGO_BASE in /home/mongodb/.DMK before sourcing DMK.
2025-12-04 10:03:48 | WARNING | File '/u01/app/mongodb/etc/mongodb.lst' does not exist. Creating an empty file.
2025-12-04 10:03:48 | INFO | Creating DMK source file at '/home/mongodb/.DMK' with the following content:
2025-12-04 10:03:48 | INFO | DMK_HOME=/u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | PYTHON_BIN=/usr/bin/python3
2025-12-04 10:03:48 | INFO | MONGO_BASE=/u01/app/mongodb
2025-12-04 10:03:48 | WARNING | Please make sure to source the .DMK file in your shell profile (e.g., .bash_profile).
2025-12-04 10:03:48 | WARNING | An example is provided at /u01/app/mongodb/local/dmk/templates/profile/dmk.mongodb.profile
If you don’t follow the OFA, you should define the following mandatory variables before running the DMK, inside the /home/mongodb/.DMK file:
- DMK_HOME: path to the DMK main folder
- PYTHON_BIN: path to the Python binaries (3.6+ required, which is the default on Linux 8-like platforms)
- MONGO_BASE: the MongoDB base directory (e.g. /u01/app/mongodb)
[root@vm00 ~]$ su - mongodb
[mongodb@vm00 ~]$ echo "DMK_HOME=/u01/app/mongodb/local/dmk" > ~/.DMK
[mongodb@vm00 ~]$ echo "PYTHON_BIN=/usr/bin/python3" >> ~/.DMK
[mongodb@vm00 ~]$ echo "MONGO_BASE=/u01/app/mongodb" >> ~/.DMK
[mongodb@vm00 ~]$ cat ~/.DMK
export DMK_HOME=/u01/app/mongodb/local/dmk
export PYTHON_BIN=/usr/bin/python3
export MONGO_BASE=/u01/app/mongodb
Loading DMK at login
If you want the DMK to be loaded when logging in, you should add the following code block to the .bash_profile of the mongodb user:
# BEGIN DMK BLOCK
if [ -z "$DMK_HOME" ]; then
if [ -f "$HOME/.DMK" ]; then
. "$HOME/.DMK"
else
echo "$HOME/.DMK file does not exist"
return 1
fi
fi
# Launched at login
. ${DMK_HOME}/bin/dmk.sh && ${PYTHON_BIN} ${DMK_HOME}/bin/dmk_status.py --table --all
# END DMK BLOCK
After this, you can just log in again. The installation is complete !
Migrating from a former version of the DMK
If you already have the MongoDB DMK installed on your systems, there are a few more steps to take for this specific upgrade, because we switched from old Perl libraries to Python.
You first need to adapt the .DMK file, as described in the installation steps.
[mongodb@vm00 ~]$ cat ~/.DMK
export DMK_HOME=/u01/app/mongodb/local/dmk
export PYTHON_BIN=/usr/bin/python3
export MONGO_BASE=/u01/app/mongodb
Then, move the former DMK folder and unzip the new version of the DMK. The old DMK should be a hidden directory, otherwise DMK will consider it a custom package!
mongodb@vm00:/home/mongodb/ [DUMMY] cd /u01/app/mongodb/local/
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] ls -l
drwxrwx---. 10 mongodb mongodb 118 Jul 1 04:34 dmk
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] mv dmk .dmk_old
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] unzip /u01/app/mongodb/artifacts/dmk_mongodb-2.3.1.zip
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] ls -ail
100690250 drwxrwx---. 8 mongodb mongodb 96 Jul 1 04:24 dmk
33554663 drwxrwx---. 10 mongodb mongodb 118 Jul 1 04:34 .dmk_old
Update your .bash_profile to remove all traces of the former DMK loading mechanism. Here is an example of the minimal DMK block in the template file:
# BEGIN DMK BLOCK
if [ -z "$DMK_HOME" ]; then
if [ -f "$HOME/.DMK" ]; then
. "$HOME/.DMK"
else
echo "$HOME/.DMK file does not exist. It is needed to source DMK at login. Run '. <DMK_HOME>/bin/dmk.sh' or 'source <DMK_HOME>/bin/dmk.sh' to source DMK manually this time."
return 1
fi
fi
# Launched at login
. ${DMK_HOME}/bin/dmk.sh && ${PYTHON_BIN} ${DMK_HOME}/bin/dmk_status.py --table --all
# END DMK BLOCK
Last but not least, you will have to customize your local DMK configuration file ~/.dmk/dmk.conf.local. You can use the set_local_dmk_config.py script to help yourself with the modifications.
mongodb@vm00:/u01/app/mongodb/admin/ [mdb01] set_local_dmk_config.py INSTANCE MONGO_JOURNAL "\${MONGO_DATA_ROOT}/\${MONGO_INSTANCE}/journal"
Backup created: /home/mongodb/.dmk/dmk.conf.bak_20251024_084959
Updated MONGO_JOURNAL in [INSTANCE]
Old value: var::MONGO_JOURNAL::=::nowarn::"${MONGO_JOURNAL_ROOT}/${MONGO_INSTANCE}"::
New value: var::MONGO_JOURNAL::=::nowarn::"${MONGO_DATA_ROOT}/${MONGO_INSTANCE}/journal"::
Use 'dmkc' and 'dmkl' aliases to quickly view default and local configuration files.
For any questions regarding the MongoDB DMK, take a look at the documentation or feel free to contact me.
What is Forgejo and getting it up and running on FreeBSD 15
In recent customer projects I had less to do with PostgreSQL but more with reviewing infrastructures and giving recommendations about what and how to improve. In all of those projects GitLab is used in one way or the other. Some only use it for managing their code in Git and working on issues, others use pipelines to build their stuff, and others use almost the full set of features. GitLab is a great product, but sometimes you do not need the full set of features, so I started to look for alternatives, mostly out of my own interest. One of the more popular choices seemed to be Gitea, but after a company was created around it, the project was forked, and this fork is Forgejo. The FAQ summarizes the most important topics around the project pretty well, so please read it.
As FreeBSD 15 was released on December 2nd, that's the perfect chance to get Forgejo up and running there and have a look at how it feels. I am not going into the installation of FreeBSD 15, this really is straightforward. I just want to mention that I opted for the "packaged base system" instead of the distribution sets, which is currently in tech preview. What that means is that the whole system is installed and managed with packages and you don't need freebsd-update anymore. Although it is still available, it will not work anymore if you try to use it:
root@forgejo:~ $ cat /etc/os-release
NAME=FreeBSD
VERSION="15.0-RELEASE"
VERSION_ID="15.0"
ID=freebsd
ANSI_COLOR="0;31"
PRETTY_NAME="FreeBSD 15.0-RELEASE"
CPE_NAME="cpe:/o:freebsd:freebsd:15.0"
HOME_URL="https://FreeBSD.org/"
BUG_REPORT_URL="https://bugs.FreeBSD.org/"
root@forgejo:~ $ freebsd-update fetch
freebsd-update is incompatible with the use of packaged base. Please see
https://wiki.freebsd.org/PkgBase for more information.
Coming back to Forgejo: On FreeBSD this is available as a package, so you can just go ahead and install it:
root@forgejo:~$ pkg search forgejo
forgejo-13.0.2_1 Compact self-hosted Git forge
forgejo-act_runner-9.1.0_2 Act runner is a runner for Forgejo based on the Gitea Act runner
forgejo-lts-11.0.7_1 Compact self-hosted Git forge
forgejo7-7.0.14_3 Compact self-hosted Git service
root@forgejo:~ $ pkg install forgejo
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
The following 32 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
FreeBSD-clibs-lib32: 15.0 [FreeBSD-base]
brotli: 1.1.0,1 [FreeBSD-ports]
...
Number of packages to be installed: 32
The process will require 472 MiB more space.
100 MiB to be downloaded.
Proceed with this action? [y/N]: y
Message from python311-3.11.13_1:
--
Note that some standard Python modules are provided as separate ports
as they require additional dependencies. They are available as:
py311-gdbm databases/py-gdbm@py311
py311-sqlite3 databases/py-sqlite3@py311
py311-tkinter x11-toolkits/py-tkinter@py311
=====
Message from git-2.51.0:
--
If you installed the GITWEB option please follow these instructions:
In the directory /usr/local/share/examples/git/gitweb you can find all files to
make gitweb work as a public repository on the web.
All you have to do to make gitweb work is:
1) Please be sure you're able to execute CGI scripts in
/usr/local/share/examples/git/gitweb.
2) Set the GITWEB_CONFIG variable in your webserver's config to
/usr/local/etc/git/gitweb.conf. This variable is passed to gitweb.cgi.
3) Restart server.
If you installed the CONTRIB option please note that the scripts are
installed in /usr/local/share/git-core/contrib. Some of them require
other ports to be installed (perl, python, etc), which you may need to
install manually.
=====
Message from git-lfs-3.6.1_8:
--
To get started with Git LFS, the following commands can be used:
1. Setup Git LFS on your system. You only have to do this once per
repository per machine:
$ git lfs install
2. Choose the type of files you want to track, for examples all ISO
images, with git lfs track:
$ git lfs track "*.iso"
3. The above stores this information in gitattributes(5) files, so
that file needs to be added to the repository:
$ git add .gitattributes
4. Commit, push and work with the files normally:
$ git add file.iso
$ git commit -m "Add disk image"
$ git push
=====
Message from forgejo-13.0.2_1:
--
Before starting forgejo for the first time, you must set a number of
secrets in the configuration file. For your convenience, a sample file
has been copied to /usr/local/etc/forgejo/conf/app.ini.
You need to replace every occurence of CHANGE_ME in the file with
sensible values. Please refer to the official documentation at
https://forgejo.org for details.
You will also likely need to create directories for persistent storage.
Run
su -m git -c 'forgejo doctor check'
to check if all prerequisites have been met.
What I really like about the FreeBSD packages is that they usually give clear instructions on what to do. We'll go with the web-based installer, so:
root@forgejo:~ $ chown git:git /usr/local/etc/forgejo/conf
root@forgejo:~ $ rm /usr/local/etc/forgejo/conf/app.ini
root@forgejo:~ $ service -l | grep for
forgejo
root@forgejo:~ $ service forgejo enable
forgejo enabled in /etc/rc.conf
root@forgejo:~ $ service forgejo start
2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() [W] SCRIPT_TYPE "bash" is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?
[1] Check paths and basic configuration
- [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
- [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
- [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
- [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() [W] SCRIPT_TYPE "bash" is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?
[1] Check paths and basic configuration
- [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
- [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
- [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
- [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
Seems bash is somehow expected, but this is not available right now:
root@forgejo:~ $ which bash
root@forgejo:~ $
Once more:
root@forgejo:~ $ pkg install bash
root@forgejo:~ $ service forgejo stop
Stopping forgejo.
root@forgejo:~ $ service forgejo start
[1] Check paths and basic configuration
- [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
- [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
- [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
- [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
root@forgejo:~ $ service forgejo status
forgejo is running as pid 3448.
The web installer is available on port 3000 and you can choose between the usual database backends:
To keep it simple I went with SQLite3, kept everything at the default and provided the Administrator information further down the screen. Before the installer succeeded I had to create these two directories:
root@forgejo:~ $ mkdir /usr/local/share/forgejo/data/
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/data/
root@forgejo:~ $ mkdir /usr/local/share/forgejo/log
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/log
Once that was done it went fine and this is the welcome screen:
As with the other tools in that area there are the common sections like “Issues”, “Pull requests”, and “Milestones”.
In the next post we're going to create an organization and a repository, and try to create a simple "Action" (what GitLab calls a pipeline).
How effective is AI on a development project?
In this article, I will try to evaluate the benefits of AI on a development project and what concrete changes it makes to our development practices.
The test case and the approach
I chose a familiar environment for my comparison: a new NestJS project from scratch.
For my project, I want to:
- Use a .env file for configuration
- Connect to a PostgreSQL database
- Store users in a database table
- Create a CRUD API to manage my users
- Manage JWT authentication based on my user list
- Secure CRUD routes for authenticated users using a guard
To help me, I'm going to use the GitHub Copilot agent with the GPT-5 mini model. I'll ask it to generate code on my behalf, as much as possible. However, I'll continue to follow NestJS best practices by using the documentation recommendations and initializing the project myself. I will focus on prompting, initializing the context and reviewing the code generated by the AI.
For better results, I will develop the application step by step and control the generated code at each step.
Initialize the project
At first, I initialize a new NestJS project using the CLI, as mentioned in the documentation:
npm i -g @nestjs/cli
nest new nestjs-project
First contact with the AI agent
I start by opening the project in VSCode and I open a new chat with the AI agent. I’m trying to give it some general instructions for the rest of the tasks:
Remember:
- You are a full-stack TypeScript developer.
- You follow best practices in development and security.
- You will be working on this NestJS project.
The AI agent discovers the project:
First Task, add application configuration
I followed the documentation to add configuration support using .env files
I’ve manually added the required package:
npm i --save @nestjs/config
And asked the AI to generate the code:
@nestjs/config is installed. Add support for .env in the application. The config file must contain the credentials to access to the database (host, database name, user, password).
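The generated code itself is not reproduced here; with @nestjs/config, the typical result looks roughly like this sketch (your generated module may differ):

// app.module.ts (excerpt), typical @nestjs/config setup shown as a sketch
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    // Loads the .env file and makes ConfigService available everywhere without re-importing the module
    ConfigModule.forRoot({ isGlobal: true }),
  ],
})
export class AppModule {}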
Second Task, connect to the database and create the users table
I want to use TypeORM to manage my database connections and migrations.
First, I install the required packages:
npm install --save @nestjs/typeorm typeorm pg
And then ask the AI agent to generate the code:
I will use typeorm and postgres. Connect the application to the database.
Save the credentials in the .env file.
Use the credentials:
- host: localhost
- name: nestjs,
- user: nest-user
- password XXX
Note: Be careful when you send credentials to an AI agent.
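The resulting wiring typically looks like the sketch below, where the credentials are read through ConfigService instead of being hard-coded. The environment variable names are my assumption, not necessarily what the agent chose:
// app.module.ts – TypeORM connection driven by the .env file
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }),
    TypeOrmModule.forRootAsync({
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        type: 'postgres',
        host: config.getOrThrow<string>('DB_HOST'),
        database: config.getOrThrow<string>('DB_NAME'),
        username: config.getOrThrow<string>('DB_USER'),
        password: config.getOrThrow<string>('DB_PASSWORD'),
        autoLoadEntities: true,
        synchronize: false, // schema changes go through migrations only
      }),
    }),
  ],
})
export class AppModule {}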
Next request to the AI agent: create a migration to initialize the database schema:
Add a migration to create a "users" table with the following fields: id (primary key), username (string), email (string), password (string), is_admin (boolean), disabled (boolean), created_at (timestamp), updated_at (timestamp).
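What comes out of this prompt is roughly a TypeORM migration like the sketch below; the exact column types and the timestamp in the class name are mine:
// src/migrations/1700000000000-CreateUsers.ts – create the users table
import { MigrationInterface, QueryRunner } from 'typeorm';

export class CreateUsers1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`
      CREATE TABLE "users" (
        "id"         SERIAL PRIMARY KEY,
        "username"   VARCHAR NOT NULL,
        "email"      VARCHAR NOT NULL,
        "password"   VARCHAR NOT NULL,
        "is_admin"   BOOLEAN NOT NULL DEFAULT false,
        "disabled"   BOOLEAN NOT NULL DEFAULT false,
        "created_at" TIMESTAMP NOT NULL DEFAULT now(),
        "updated_at" TIMESTAMP NOT NULL DEFAULT now()
      )
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP TABLE "users"`);
  }
}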
In addition, the agent adds a migration script to the project’s package.json:
"typeorm:migration:run": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:run -d ./data-source.ts",
To simplify the process, I asked the AI agent to generate a default user for my application:
In the migration, add a default admin user with the following values:
username: "admin"
email: "admin@example.com"
password: "Admin@123" (hash the password using bcrypt)
is_admin: true
disabled: false
Once the AI agent has completed the task, I run the migration.
First module, service and controller for users with CRUD endpoints
Now, I ask the agent to create the users module with detailed endpoints:
Add a module, service, and controller for "users" with the following endpoints:
- GET /users: Retrieve a list of all users.
- GET /users/:id: Retrieve a user by ID.
- POST /users: Create a new user.
- PUT /users/:id: Update a user by ID.
- DELETE /users/:id: Delete a user by ID.
This step is very quick: the code is generated in only 4 minutes!
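The generated controller is close to the following sketch; I left out the DTO typing to keep it short, and UsersService is the matching service from the same module:
// src/users/users.controller.ts – CRUD endpoints delegating to UsersService
import { Body, Controller, Delete, Get, Param, Post, Put } from '@nestjs/common';
import { UsersService } from './users.service';

@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @Get()
  findAll() {
    return this.usersService.findAll();
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    return this.usersService.findOne(+id);
  }

  @Post()
  create(@Body() dto: Record<string, unknown>) {
    return this.usersService.create(dto);
  }

  @Put(':id')
  update(@Param('id') id: string, @Body() dto: Record<string, unknown>) {
    return this.usersService.update(+id, dto);
  }

  @Delete(':id')
  remove(@Param('id') id: string) {
    return this.usersService.remove(+id);
  }
}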
Add Swagger documentation
To test the first REST module, I ask the AI to add Swagger UI to the project.
As with the other steps, I add the packages myself:
npm install --save @nestjs/swagger
Note: This step is quite tricky for the AI; if you don’t mention that the package is already installed, it will try to install an outdated version.
Then, I ask the AI agent to generate the code:
@nestjs/swagger is installed
Add swagger to the application.
Document the users endpoints.
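The Swagger wiring itself is only a few lines in main.ts; here is a minimal sketch of what gets added (the title and the /api path are my own choices):
// src/main.ts – expose Swagger UI on /api
import { NestFactory } from '@nestjs/core';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const config = new DocumentBuilder()
    .setTitle('nestjs-project')
    .setDescription('Users CRUD API')
    .setVersion('1.0')
    .build();
  const document = SwaggerModule.createDocument(app, config);
  SwaggerModule.setup('api', app, document);

  await app.listen(3000);
}
bootstrap();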
In a few minutes, we have the API documentation:
During API testing, I noticed that the password hash was returned in the user list. However, initially, I had instructed the AI to follow security best practices…
I asked the AI agent to fix this issue:
The users password field must be excluded from the responses.
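One common NestJS way to implement this fix is to exclude the field at serialization time. Here is a sketch of that approach, assuming class-transformer is installed and the ClassSerializerInterceptor is registered; I don’t claim this is exactly what the agent produced:
// src/users/user.entity.ts – hide the password hash from serialized responses
import { Exclude } from 'class-transformer';
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity('users')
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  username: string;

  @Column()
  email: string;

  @Exclude() // stripped out by the ClassSerializerInterceptor
  @Column()
  password: string;
}
For the @Exclude() decorator to take effect, the application also has to serialize through the interceptor, for example with app.useGlobalInterceptors(new ClassSerializerInterceptor(app.get(Reflector))) in main.ts.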
Last task, add JWT authentication
As the authentication mechanism, I use JWT tokens provided by the Passport library.
I install the required packages:
npm install --save @nestjs/passport
npm install --save @nestjs/jwt passport-jwt
npm install --save-dev @types/passport-jwt
Then, I ask the AI agent to generate the code:
Implement the JWT authentication strategy, @nestjs/jwt passport-jwt and @types/passport-jwt are installed.
Add a login endpoint that returns a JWT token when provided with valid user credentials (username and password from the users table).
And I instruct the AI to use .env file for the JWT secret and expiration:
Add the JWT secrets and expiration into the .env file, Fix the typescript errors, Improve the swagger documentation for login endpoint (message definition)
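The heart of the generated code is typically a passport-jwt strategy that reads its secret from the .env file. Here is a sketch; the payload shape and variable names are my assumptions:
// src/auth/jwt.strategy.ts – validate bearer tokens signed with JWT_SECRET
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { PassportStrategy } from '@nestjs/passport';
import { ExtractJwt, Strategy } from 'passport-jwt';

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor(config: ConfigService) {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      ignoreExpiration: false,
      secretOrKey: config.getOrThrow<string>('JWT_SECRET'),
    });
  }

  // Whatever is returned here becomes request.user for guarded routes
  async validate(payload: { sub: number; username: string }) {
    return { userId: payload.sub, username: payload.username };
  }
}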
Now, I want to secure the users endpoints to allow only authenticated users and ask the agent the following:
Add a guard on the users endpoints to allow only connected users
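With the strategy in place, protecting the CRUD routes mostly comes down to decorating the controller; a sketch:
// src/users/users.controller.ts – only authenticated users reach these endpoints
import { Controller, UseGuards } from '@nestjs/common';
import { AuthGuard } from '@nestjs/passport';

@UseGuards(AuthGuard('jwt'))
@Controller('users')
export class UsersController {
  // ...CRUD handlers as before
}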
Last point, I want to be able to authenticate on the Swagger interface, so I ask it:
Add the ability to authenticate on the Swagger interface with a bearer token.
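In practice this usually comes down to two things: a bearer security scheme in the Swagger bootstrap, and the @ApiBearerAuth() decorator on the protected controllers so the token is attached to their requests. A sketch of the DocumentBuilder side:
// src/main.ts (excerpt) – declare a bearer token scheme so Swagger UI shows the Authorize button
import { DocumentBuilder } from '@nestjs/swagger';

const config = new DocumentBuilder()
  .setTitle('nestjs-project')
  .addBearerAuth()
  .build();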
Conclusion
All of this took me around 1h30 to complete, including prompting and reviewing the steps.
Reading the documentation, understanding the technologies, and adding the dependencies remained the same.
The initial estimate, without AI, was between 2 and 4 hours to complete the project:
Task                   | Estimated Time | AI coding / prompting | Review
.env                   | 15–30 min      | 6 min                 | 5 min
PostgreSQL connection  | 20–40 min      | 4 min                 | 2 min
User table + migration | 15–25 min      | 7 min                 | 2 min
CRUD Users             | 30–45 min      | 5 min                 | 10 min
Swagger UI             | 15–30 min      | 6 min                 | 6 min
JWT Auth               | 30–60 min      | 12 min                | 15 min
Guards                 | 15–30 min      | 5 min                 | 5 min
TOTAL                  | 2h20 – 4h20    | 45 min                | 45 min

During development, the AI makes certain errors or inaccuracies, such as TypeScript compilation errors or security issues like returning the password hash in the user list. However, the time spent reviewing and correcting these issues is largely compensated by the speed of code generation.
In the end, coding with AI is very fast, and the code generated for a well-documented technology (NestJS) is good.
Even if formulating a clear request requires careful consideration and wording, coding is comfortable. However, the job is no longer the same; it now requires good planning and architecture, and the ability to review the generated code. Coding with AI can be effective, but only if you have a clear idea of what you want from the very beginning, give clear instructions, and leave no room for interpretation by the AI.
The article "How effective is AI on a development project?" first appeared on the dbi Blog.
OGG-10556 when starting extract from GoldenGate 23ai web UI
Another day, another not-so-well-documented GoldenGate error, this time the OGG-10556 error, which I hit when setting up replication on a new GoldenGate installation. After making changes to an extract from the web UI, I ended up with the following error when starting it:
2025-10-16 14:02:30 ERROR OGG-02024 An attempt to gather information about the logmining server configuration from the Oracle database failed.
2025-10-15 12:06:51 ERROR OGG-10556 No data found when executing SQL statement <SELECT apply_name FROM all_apply WHERE apply_name = SUBSTR(UPPER('OGG$' || :1), 1, 30)>.
Since the exact configuration is not relevant here, I will not add it to the blog. After some trial and error, it all came down to the extract settings in the web UI (not the configuration file). From the web UI, you can find the list of PDBs on which the extract is registered. In my case, because of the modifications I made, the PDB was not listed in the Registered PDB Containers section anymore:
After registering the PDB again and restarting the extract, it worked!
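For reference, the registration can also be checked and fixed from the Admin Client instead of the web UI. This is only a sketch with placeholder names: the credential alias oggadmin, the extract EXTA and the PDB PRODDB01 are mine, not taken from the actual setup:
-- Admin Client sketch, placeholder names
DBLOGIN USERIDALIAS oggadmin
INFO EXTRACT EXTA
REGISTER EXTRACT EXTA DATABASE CONTAINER (PRODDB01)
START EXTRACT EXTA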
NB: Wondering why you might hit this issue even without modifying the extract? This can be due to how slow the GoldenGate UI sometimes is. You cannot add an extract without specifying a PDB, but the PDB list is loaded dynamically, sometimes a few seconds after selecting the connection, and in the meantime it is possible to create an invalid extract!
The PDB list sometimes appears a few seconds after selecting the connection
The article "OGG-10556 when starting extract from GoldenGate 23ai web UI" first appeared on the dbi Blog.
Understanding XML performance pitfalls in SQL Server
Working with XML in SQL Server can feel like taming a wild beast. It’s undeniably flexible and great for storing complex hierarchical data, but when it comes to querying efficiently, many developers hit a wall. That’s where things get interesting.
In this post, we’ll dive into a real-world scenario with half a million rows, put two XML query methods, .exist() and .value(), head-to-head, and uncover how SQL Server handles them under the hood.
To demonstrate this, we’ll use SQL Server 2022 Developer Edition and create a table based on the open-source StackOverflow2010 database, derived from the Posts table, but storing part of the original data in XML format. We will also add a few indexes to simulate an environment with a minimum level of optimization.
CREATE TABLE dbo.PostsXmlPerf
(
PostId INT NOT NULL PRIMARY KEY,
PostTypeId INT NOT NULL,
CreationDate DATETIME NOT NULL,
Score INT NOT NULL,
Body NVARCHAR(MAX) NOT NULL,
MetadataXml XML NOT NULL
);
INSERT INTO dbo.PostsXmlPerf (PostId, PostTypeId, CreationDate, Score, Body, MetadataXml)
SELECT TOP (500000)
p.Id,
p.PostTypeId,
p.CreationDate,
p.Score,
p.Body,
(
SELECT
p.OwnerUserId AS [@OwnerUserId],
p.LastEditorUserId AS [@LastEditorUserId],
p.AnswerCount AS [@AnswerCount],
p.CommentCount AS [@CommentCount],
p.FavoriteCount AS [@FavoriteCount],
p.ViewCount AS [@ViewCount],
(
SELECT TOP (5)
c.Id AS [Comment/@Id],
c.Score AS [Comment/@Score],
c.CreationDate AS [Comment/@CreationDate]
FROM dbo.Comments c
WHERE c.PostId = p.Id
FOR XML PATH(''), TYPE
)
FOR XML PATH('PostMeta'), TYPE
)
FROM dbo.Posts p
ORDER BY p.Id;
CREATE NONCLUSTERED INDEX IX_PostsXmlPerf_CreationDate
ON dbo.PostsXmlPerf (CreationDate);
CREATE NONCLUSTERED INDEX IX_PostsXmlPerf_PostTypeId
ON dbo.PostsXmlPerf (PostTypeId);
Next, let’s create two queries designed to interrogate the column that contains XML data, in order to extract information based on a condition applied to a value stored within that XML.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
DBCC FREEPROCCACHE;
SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.exist('/PostMeta[@OwnerUserId="8"]') = 1;
DBCC FREEPROCCACHE;
SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.value('(/PostMeta/@OwnerUserId)[1]', 'INT') = 8
Comparing logical and physical reads, we notice something interesting:
Method   | Logical Reads | CPU Time
.exist() | 125’912       | 00:00:05.954
.value() | 125’912       | 00:00:03.125

At first glance, the number of pages read is identical, but .exist() is clearly taking more time. Why? The execution plans reveal that .exist() sneaks in a Merge Join, adding overhead.
Additionally, on both execution plans we can see a small yellow bang icon. On the first plan, it’s just a memory grant warning, but the second one is more interesting:
Alright, a bit strange; let’s move forward with some tuning and maybe this warning will disappear.
To help with querying, it can be useful to create a more targeted index for XML queries.
Let’s create an index on the column that contains XML. However, as you might expect, it’s not as straightforward as indexing a regular column. For an XML column, you first need to create a primary XML index, which physically indexes the overall structure of the column (similar to a clustered index), and then a secondary XML index, which builds on the primary index and is optimized for a specific type of query (value, path, or property). To learn more about XML indexes, see Microsoft Learn and MssqlTips.
So, let’s create these indexes!
CREATE PRIMARY XML INDEX IX_XML_Primary_MetadataXml
ON dbo.PostsXmlPerf (MetadataXml);
CREATE XML INDEX IX_XML_Value_MetadataXml
ON dbo.PostsXmlPerf (MetadataXml)
USING XML INDEX IX_XML_Primary_MetadataXml FOR Value;
Let’s rerun the performance tests with our two queries above, making sure to clear the caches between each execution.
Method   | Logical Reads | CPU Time
.exist() | 4             | 00:00:00.031
.value() | 125’912       | 00:00:03.937
The inevitable happened: the implicit conversion makes it impossible to use the secondary XML index due to a data type mismatch, preventing an actual seek on it. We do see a seek in the second execution plan, but it occurs for every row in the table (500’000 executions) and is essentially just accessing the underlying physical structure stored in the clustered index. In reality, this ‘seek’ is SQL Server scanning the XML to retrieve the exact value of the requested field (in this case, OwnerUserId).
This conversion issue occurs because .exist() returns a BIT and keeps the comparison inside the XQuery engine, whereas .value() first converts the XML value to a SQL type, so the predicate has to be evaluated row by row outside the XML index.
This difference in return type can lead to significant performance problems when tuning queries that involve XML.
As explained by Microsoft: "For performance reasons, instead of using the value() method in a predicate to compare with a relational value, use exist() with sql:column()".
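Applied to our test query, that recommendation translates into something like the sketch below; here I use sql:variable() to push the relational value into the XQuery predicate (sql:column() works the same way when the value comes from another column):
DECLARE @OwnerUserId INT = 8;

SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.exist('/PostMeta[@OwnerUserId = sql:variable("@OwnerUserId")]') = 1;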
Working with XML in SQL Server can be powerful, but it can quickly become tricky to manage. .exist() and .value() might seem similar, but execution differences and type conversions can have a huge performance impact. Proper XML indexing is essential, and knowing your returned data types can save you from hours of head-scratching. Most importantly, before deciding to store data as XML, consider whether it’s truly necessary; relational databases are not natively optimized for XML and can introduce complexity and performance challenges.
Sometimes, a simpler and highly effective approach is to extract frequently queried XML fields at the application level and store them in separate columns. This makes them much easier to index and query, reducing overhead while keeping your data accessible.
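As a rough illustration of that idea on our demo table, the promotion could look like the sketch below. The column and index names are mine, and in a real application the value would be written directly at insert time rather than backfilled:
-- Promote OwnerUserId into a regular, indexable column
ALTER TABLE dbo.PostsXmlPerf ADD OwnerUserId INT NULL;
GO
-- One-off backfill from the existing XML
UPDATE dbo.PostsXmlPerf
SET OwnerUserId = MetadataXml.value('(/PostMeta/@OwnerUserId)[1]', 'INT');
GO
CREATE NONCLUSTERED INDEX IX_PostsXmlPerf_OwnerUserId
ON dbo.PostsXmlPerf (OwnerUserId);
GO
-- The query becomes a plain relational predicate
SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE OwnerUserId = 8;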
If your application relies heavily on semi-structured data or large volumes of XML/JSON, it’s worth considering alternative engines. For instance, MongoDB provides native document storage and fast queries on JSON/BSON, while PostgreSQL offers XML and JSONB support with powerful querying functions. Choosing the right tool for the job can simplify your architecture and significantly improve performance.
To dive even deeper into the topic, keep an eye on the dbi services blogs for a forthcoming article, focused this time on XML storage!
The article "Understanding XML performance pitfalls in SQL Server" first appeared on the dbi Blog.