Feed aggregator
Connecting to Oracle Database 23ai on Oracle@GCP with VS Code: A DBA’s Guide
Get connected to Oracle Database 23ai on Oracle@GCP in under 10 minutes using VS Code and the SQL Developer extension. This step-by-step guide walks through wallet retrieval, connection configuration, and security best practices for DBAs managing cloud databases. No complexity, just straightforward instructions to establish secure connections and start working with your Autonomous Database instance.
The post Connecting to Oracle Database 23ai on Oracle@GCP with VS Code: A DBA’s Guide appeared first on DBASolved.
Oracle@GCP Setup: A Senior DBA’s Guide to Simple Enterprise Database Deployment
After decades of managing Oracle databases – from 8i in the Army to the latest 23ai deployments – I’ve seen […]
The post Oracle@GCP Setup: A Senior DBA’s Guide to Simple Enterprise Database Deployment appeared first on DBASolved.
Oracle GoldenGate LOB Replication to Snowflake: Getting It Right
Recently I’ve been helping clients navigate one of the more challenging aspects of Oracle GoldenGate implementations – replicating Large Objects […]
The post Oracle GoldenGate LOB Replication to Snowflake: Getting It Right appeared first on DBASolved.
dbaascli database move failing with ORA-01691 on ExaCC
I recently had to move a database from a 19.23 to a 19.26 dbhome on an ExaCC, which also means the database gets patched. The dbaascli database move command failed during the datapatch step with the following error:
DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
In this blog post I would like to share my troubleshooting with you and explain how I resolved the problem.
Moving database to new dbhome
In order to move/patch the database, I used the dbaascli database move command.
[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST DBAAS CLI version 25.1.1.0.0 Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST Job id: 17f6fef5-2143-4bb7-8007-a27cf5232099 Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_02-47-51-PM_237874.log Loading PILOT... Session ID of the current execution is: 15920 Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_02-47-55-PM_238225 ----------------- Running initialization job Completed initialization job ----------------- Running validate_user_input job Completed validate_user_input job ----------------- Running validate_database job [WARNING] [DBAAS-70643] Following pluggable databases '{CDBTEST=[PDB_001T]}' do not have services configured. ACTION: Make sure to configure the services of pluggable databases so that pluggable databases are started after the database bounce. Completed validate_database job ----------------- Running validate_creg_file_existence job Completed validate_creg_file_existence job ----------------- Running validate_source_home job Completed validate_source_home job ----------------- Running validate_major_version job Completed validate_major_version job ----------------- Running validate_oracle_home_type job Completed validate_oracle_home_type job ----------------- Running check_target_source_home_not_same job Completed check_target_source_home_not_same job ----------------- Running validate_home_existence job Completed validate_home_existence job ----------------- Running validate_home_consistency job Completed validate_home_consistency job ----------------- Running validate_home_options job Completed validate_home_options job ----------------- Running validate_disk_space job Completed validate_disk_space job ----------------- Acquiring write lock: cdbtest Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 Running pre_patch_lock_manager job Completed pre_patch_lock_manager job ----------------- Running enable_wallet_root job Completed enable_wallet_root job ----------------- Running copy_config_files job Completed copy_config_files job ----------------- Running stop_database_instance-exacc-cl01n1 job Completed stop_database_instance-exacc-cl01n1 job ----------------- Running update_database_resource-exacc-cl01n1 job Completed update_database_resource-exacc-cl01n1 job ----------------- Running start_database_instance-exacc-cl01n1 job Completed start_database_instance-exacc-cl01n1 job ----------------- Running exacs_post_patch_node_updation job Completed exacs_post_patch_node_updation job ----------------- Running update_dba_directories job Completed update_dba_directories job ----------------- Running datapatch_and_recompile_invalid_objects job Datapatch execution on database 'CDBTEST' is in progress Execution of datapatch_and_recompile_invalid_objects failed [FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1]. 
*MORE DETAILS* Result of node:exacc-cl01n1 [Interim patch 28318139 (ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.36.095921 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 28555193 (DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.37.111786 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 30944402 (SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.269998 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 32067171 (OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 06-MAR-22 10.57.35.074017 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 33192694 (OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.262033 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 33522539 (MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 19-AUG-23 09.58.11.271219 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 34786990 (OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990)):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.17.242116 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 34972375 (DATAPUMP BUNDLE PATCH 19.18.0.0.0):, Binary registry: Not installed, PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.19.038902 AM, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)):, Binary registry: Not installed, PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.17.362926 AM, PDB CDB$ROOT: Applied successfully on 18-APR-24 10.08.56.234640 AM, PDB PDB$SEED: Applied successfully on 18-APR-24 10.19.33.039940 AM, Interim patch 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0):, Binary registry: Not installed, PDB PDB_001T: Applied successfully on 17-AUG-24 09.00.21.159959 AM, PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.47.34.300679 AM, PDB PDB$SEED: Applied successfully on 02-OCT-24 11.50.34.101031 AM, Interim patch 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)):, Binary registry: Installed, PDB PDB_001T: Not installed, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Interim patch 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0):, Binary registry: Installed, PDB PDB_001T: Not installed, PDB CDB$ROOT: Not installed, PDB PDB$SEED: Not installed, Current state of release update SQL patches:, Binary registry:, 19.26.0.0.0 Release_Update 250118124854: Installed, PDB PDB_001T:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 17-AUG-24 08.59.50.930598 AM, PDB CDB$ROOT:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.19.15.621942 AM, PDB PDB$SEED:, Applied 19.23.0.0.0 Release_Update 240406004238 successfully on 18-APR-24 10.28.06.019862 AM, Adding patches to installation queue and performing prereq checks...done, Installation queue:, For the following PDBs: 
CDB$ROOT PDB$SEED PDB_001T, The following interim patches will be rolled back:, 36199232 (OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232)), 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK), 27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999), 36420641 (DATAPUMP BUNDLE PATCH 19.23.0.0.0), Patch 37260974 (Database Release Update : 19.26.0.0.250121 (37260974)):, Apply from 19.23.0.0.0 Release_Update 240406004238 to 19.26.0.0.0 Release_Update 250118124854, The following interim patches will be applied:, 37102264 (OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264)), 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK), 27605010 (DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999), 37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM, ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN, INSERT INTO sys.dba_registry_sqlpatch_ru_info, (patch_id,, patch_uid,, patch_descriptor,, ru_version,, ru_build_description,, ru_build_timestamp,, patch_directory), VALUES, (:patch_id,, :patch_uid,, :patch_descriptor,, :ru_version,, :ru_build_description,, TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'),, :patch_directory);, COMMIT;, END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4b4dcb0), :patch_directory='PK.........y4Z�&(.�...........37260974_rollback.sql�.]s.ֺ.��+t��ng�.o����v.�?2�����F..�$ . [���.@.q���dQ.�Tqc��û��z���rv�8.�������.���..����(M�ϳ��´.]%�,u �z��.�8���.��.s�f�����,J�Wa.�a6�y�.zp�������,..�?�l���t���..�/fW�(j�}pp�8^��..~w�N.�?�..�.��.(Z���e��.������b���.�������K6E�+�?��A.ln._�.=.�qp����������.������ǫ.qﻋ��.���.�.β �\/x��.M�Q�_�q�ý7.-N�Tory�|���y�&E..y��i��..)�N.�.7��.��������oqxz|�|st�X...������G'��.N�_�.?..*�.�..../����.��?|�pWn.�*��;;;.}��.�.�+����..fJ�`., 7+��z�>.? �Y�.�.��...�.,...N��9��.��.Y�rp����O��8_���O��%�.E�F�.��t�|��.��.�Q|�L�����y.��[�'�|.���.�.y�ë.�:xy�..pS>��.|U.��r�/j�?=:�\...������.�������ry�����QY�J.�.}N��.�.S%������j�^j.=7T�Z�e.W�z.�?..�>��?�w�.{w9{������/Nj�������f])�.�����.O�*� ��϶!����?�.��&)���I�tzv�x��., �LU.�IV�..7.R7.y��&h{*�3.�mJm}n(, _�...��2�Lݰ�r*ç..�S7ղ@�V��.1>���'�2�.��., ...�9.m)s.�S.ȴ�T��K.�.$!'�..;�Z�.#T�.��(�f.��.�i.pBK��h�fX*ߩ...��i�, �.*d�.�e..6.Oy!*�.:�.MQ4s���.�...�SW��|��...F�OC�,e�..�"/��:5�...', :patch_id='37260974', :patch_uid='26040769', :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337., Please refer to MOS Note 1609718.1 and/or the invocation log, /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_258023_2025_05_05_14_50_06/sqlpatch_invocation.log, for information on how to resolve the above errors., SQL Patching tool complete on Mon May 5 14:50:40 2025] Exit code of the operation:1 Releasing lock: cdbtest Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 *** Executing jobs which need to be run always... *** ----------------- Running post_patch_lock_manager job Completed post_patch_lock_manager job ******** PLUGIN EXECUTION FAILED ******** To resume this failed session, run the following command: dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume
As we can see, the command failed with the following error:
[FATAL] [DBAAS-60022] Command '/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/datapatch -verbose' has failed on nodes [exacc-cl01n1].
37470729 (DATAPUMP BUNDLE PATCH 19.26.0.0.0), DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM
Troubleshooting
So let’s troubleshoot!
The database has been moved to the new dbhome, but the dbaascli command failed at the datapatch step.
oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [rdbms1900] CDBTEST1 2025-05-05_14-56-12::DMK_ENV.pm::read_oratab ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1 ********************************** INSTANCE_NAME : CDBTEST1 DB_NAME : CDBTEST DB_UNIQUE_NAME : CDBTEST_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/7 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB_001T PDB$SEED ********************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-05-05 14:56:12 oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] echo $ORACLE_HOME /u02/app/oracle/product/19.0.0.0/dbhome_3
And the PDB is open read write.
oracle@exacc-cl01n1:/u02/app/oracle/local/dmk/etc/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 14:56:31 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 4 PDB_001T READ WRITE NO SQL>
Let's try to run datapatch manually. It will most likely fail with the same errors, but it should make it easier to extract the datapatch output.
oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/ [CDBTEST1 (CDB$ROOT)] cd OPatch/ oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] ./datapatch -verbose SQL Patching tool version 19.26.0.0.0 Production on Mon May 5 14:57:00 2025 Copyright (c) 2012, 2025, Oracle. All rights reserved. Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log Connecting to database...OK Gathering database info...done Note: Datapatch will only apply or rollback SQL fixes for PDBs that are in an open state, no patches will be applied to closed PDBs. Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation (Doc ID 1585822.1) Bootstrapping registry and package to current versions...done Determining current state...done Current state of interim SQL patches: Interim patch 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK): Binary registry: Not installed PDB PDB_001T: Rolled back successfully on 17-AUG-24 08.59.18.040319 AM PDB CDB$ROOT: Not installed PDB PDB$SEED: Not installed Interim patch 26749785 (PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK): Binary registry: Not installed PDB PDB_001T: Applied successfully on 17-AUG-24 08.59.56.646477 AM PDB CDB$ROOT: Applied successfully on 02-OCT-24 11.46.11.005940 AM PDB PDB$SEED: Applied successfully on 02-OCT-24 11.47.47.794119 AM ... ... ... DBD::Oracle::st execute failed: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM ORA-06512: at line 2 (DBD ERROR: OCIStmtExecute) [for Statement "BEGIN INSERT INTO sys.dba_registry_sqlpatch_ru_info (patch_id, patch_uid, patch_descriptor, ru_version, ru_build_description, ru_build_timestamp, patch_directory) VALUES (:patch_id, :patch_uid, :patch_descriptor, :ru_version, :ru_build_description, TO_TIMESTAMP(:ru_build_timestamp, 'YYMMDDHH24MISS'), :patch_directory); COMMIT; END;" with ParamValues: :patch_descriptor=OCIXMLTypePtr=SCALAR(0x4611c98), :patch_directory='PK.........y4Z▒&(.▒...........37260974_rollback.sql▒.]s.ֺ.▒▒+t▒▒ng▒.o▒▒▒▒v.▒͌?2▒▒▒▒▒F..▒$ . [▒▒▒.@.q▒▒▒dQ.▒Tqc▒▒û▒▒z▒▒▒rv▒8.▒▒▒▒▒▒▒.▒▒▒..▒▒▒▒(M▒ϳ▒▒´.]%▒,u ▒z▒▒.▒8▒▒▒.▒▒.s▒f▒▒▒▒▒,J▒Wa.▒a6▒y▒.zp▒▒▒▒▒▒▒,..▒?▒l▒▒▒t▒▒▒..▒/fW▒(j▒}pp▒8^▒▒..~w▒N.▒?▒..▒.▒▒.(Z▒▒▒e▒▒.▒▒▒▒▒▒b▒▒▒.▒▒▒▒▒▒▒K6E▒+▒?▒▒A.ln._▒.=.▒ܸqp▒▒▒▒▒▒▒▒▒▒.▒▒▒▒▒▒ǫ.qﻋ▒▒.▒▒▒.▒.β ▒\/x▒.M▒Q▒_▒q▒ý7.-N▒Tory▒|▒▒▒y▒&E..y▒▒i▒▒..)▒N.▒.7▒▒.▒▒▒▒▒▒▒▒oqxz|▒|st▒X...▒▒▒▒▒▒G'▒▒.N▒_▒.?..*▒.▒..../▒▒▒▒.▒▒?|▒pWn.▒*▒▒;;;.}▒▒.▒.▒+▒▒▒▒..fJ▒`. 7+▒▒z▒>.? ▒Y▒.▒.▒▒...▒.,...N▒▒9▒.▒▒.Y▒rp▒▒ⷷ▒▒O▒8_▒▒▒O▒▒%▒.E▒F▒.▒▒t▒|▒▒.▒▒.▒Q|▒L▒▒▒▒y.▒▒[▒'▒|.▒▒▒.▒.y▒ë.▒:xy▒..pS>▒▒.|U.▒▒r▒/j▒?=:▒\...▒▒▒▒▒▒.▒▒▒▒▒▒▒ry▒▒▒▒▒QY▒J.▒.}N▒▒.▒.S%▒▒▒▒▒▒j▒^j.=7T▒Z▒e.W▒z.▒?..▒>▒▒?▒w▒.{w9{▒▒▒▒▒▒/Nj▒▒▒▒▒▒▒f])▒.▒▒▒▒.O▒*▒ ▒▒϶!▒▒▒▒?▒.▒▒&)▒▒▒I▒tzv▒x▒▒. ▒LU.▒IV▒..7.R7.y▒▒&h{*▒3.▒mJm}n( _▒...▒▒2▒Lݰ▒r*ç..▒S7ղ@▒V▒▒.1>▒▒▒'▒2▒.▒▒. ...▒9.m)s.▒S.ȴ▒T▒▒K.▒.$!'▒..;▒Z▒.#T▒.▒▒(▒f.▒▒.▒i.pBK▒▒h▒fX*ߩ...▒▒i▒ ▒.*d▒.▒e..6.Oy!*▒.:▒.MQ4s▒▒▒.▒...▒SW▒▒|▒▒...F▒OC▒,e▒..▒"/▒▒:5▒...', :patch_id="37260974", :patch_uid="26040769", :ru_build_description="Release_Update", :ru_build_timestamp="250118124854", :ru_version="19.26.0.0.0"] at /u02/app/oracle/product/19.0.0.0/dbhome_3/sqlpatch/sqlpatch.pm line 5337. Please refer to MOS Note 1609718.1 and/or the invocation log /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_309940_2025_05_05_14_57_00/sqlpatch_invocation.log for information on how to resolve the above errors. SQL Patching tool complete on Mon May 5 14:57:34 2025
Let’s check SYSTEM tablespace usage in the CDB$ROOT.
oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 15:02:00 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> @qdbstbssize.sql PL/SQL procedure successfully completed. Nb Extent Segment Alloc. Space Max. Percent Block Name files Type Mgmnt Mgmnt Size (GB) Free (GB) Size (GB) used % size Log Encrypt Compress ------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- -------- SYSAUX 1 DATA LM-SYS AUTO 5.95 4.47 32768.00 .00 8 KB YES YES NO SYSTEM 1 DATA LM-SYS MANUAL 1.95 .62 32768.00 .00 8 KB YES YES NO TEMP 1 TEMP LM-UNI MANUAL 1.00 2.31 512.00 -.26 8 KB NO YES NO UNDOTBS1 1 UNDO LM-SYS MANUAL 1.95 1.89 512.00 .01 8 KB YES YES NO USERS 1 DATA LM-SYS AUTO 1.00 .93 32768.00 .00 8 KB YES YES NO ----- ---------- ---------- ---------- TOTAL 5 11.85 10.22 99328.00 SQL>
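The qdbstbssize.sql script used above is an internal helper and is not shown here. For readers without it, a minimal sketch of an equivalent tablespace-usage query, based only on the standard DBA_DATA_FILES and DBA_FREE_SPACE views (column list and formatting are my own), could look like this:

set lines 200 pages 100
col tablespace_name for a20
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024/1024,2)        alloc_gb,
       ROUND(NVL(f.free_bytes,0)/1024/1024/1024,2) free_gb,
       ROUND(SUM(DECODE(d.autoextensible,'YES',d.maxbytes,d.bytes))/1024/1024/1024,2) max_gb
FROM   dba_data_files d
       LEFT JOIN (SELECT tablespace_name, SUM(bytes) free_bytes
                  FROM   dba_free_space
                  GROUP  BY tablespace_name) f
       ON f.tablespace_name = d.tablespace_name
GROUP  BY d.tablespace_name, f.free_bytes
ORDER  BY d.tablespace_name;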
We can see all is ok. Let’s check the same in the PDB.
SQL> alter session set container=PDB_001T; Session altered. SQL> @qdbstbssize.sql ... ... ... SYSAUX 1 DATA LM-SYS AUTO 82.62 74.52 32768.00 .02 8 KB YES YES NO SYSTEM 1 DATA LM-SYS MANUAL 24.71 .10 32768.00 .08 8 KB YES YES NO TEMP 1 TEMP LM-UNI MANUAL 114.26 342.75 32768.00 -.70 8 KB NO YES NO UNDO 1 UNDO LM-SYS MANUAL 684.57 673.58 32768.00 .03 8 KB YES YES NO USERS 1 DATA LM-SYS AUTO 20.00 19.93 20.00 .33 8 KB YES YES NO ----- ---------- ---------- ---------- TOTAL 2732 7101.04 1385.79 ########## 2732 rows selected. SQL>
It’s all good as well. And the pdb$seed?
SQL> alter session set container=PDB$SEED; Session altered. SQL> @qdbstbssize.sql PL/SQL procedure successfully completed. Nb Extent Segment Alloc. Space Max. Percent Block Name files Type Mgmnt Mgmnt Size (GB) Free (GB) Size (GB) used % size Log Encrypt Compress ------------------------- ----- ---- ------ ------- ---------- ---------- ---------- ------- ----- --- ------- -------- SYSAUX 1 DATA LM-SYS AUTO .59 .08 32768.00 .00 8 KB YES NO NO SYSTEM 1 DATA LM-SYS MANUAL .59 .00 32768.00 .00 8 KB YES NO NO TEMP 1 TEMP LM-UNI MANUAL .18 .00 32768.00 .00 8 KB NO NO NO UNDOTBS1 1 UNDO LM-SYS MANUAL .59 .33 512.00 .05 8 KB YES NO NO ----- ---------- ---------- ---------- TOTAL 4 1.95 .41 98816.00 SQL>
All is ok as well.
Let's check which container this SYS_LOB segment belongs to: cdb$root, the PDB, or pdb$seed?
SQL> alter session set container=PDB$SEED; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; TABLESPACE_NAME TABLE_NAME ------------------------------ -------------------------------------------------------------------------------------------------------------------------------- SYSTEM REGISTRY$SQLPATCH_RU_INFO SQL> alter session set container=cdb$root; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; no rows selected SQL> alter session set container=PDB_001T; Session altered. SQL> select tablespace_name, table_name from dba_lobs where segment_name='SYS_LOB0000023009C00008$$' and owner='SYS'; no rows selected SQL>
So the LOB belongs to the pdb$seed.
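As a side note, the per-container checks above could, in principle, be replaced by a single query from cdb$root against CDB_LOBS joined to V$CONTAINERS (a sketch). Keep in mind, however, that by default the CDB_* views do not report PDB$SEED, so for a segment living in the seed the container-by-container approach remains the reliable one.

-- run from CDB$ROOT; PDB$SEED is excluded from CDB_* views by default
SELECT l.con_id, c.name AS container, l.tablespace_name, l.table_name
FROM   cdb_lobs l
       JOIN v$containers c ON c.con_id = l.con_id
WHERE  l.owner = 'SYS'
AND    l.segment_name = 'SYS_LOB0000023009C00008$$';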
From the alert log I can see:
PDB$SEED(2):Pluggable database PDB$SEED opening in read write
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):This instance was first to open pluggable database PDB$SEED (container=2)
PDB$SEED(2):queued attach DA request 0xb2325ed8 for pdb 2, ospid 3457
2025-05-05T15:11:01.061406+02:00
Domain Action Reconfiguration started (domid 2, new da inc 19, cluster inc 2)
Instance 1 is attaching to domain 2
Global Resource Directory partially frozen for domain action
Domain Action Reconfiguration complete (total time 0.0 secs)
2025-05-05T15:11:01.067485+02:00
PDB$SEED(2):Endian type of dictionary set to little
PDB$SEED(2):Undo initialization recovery: Parallel FPTR failed: start:1139322373 end:1139322379 diff:6 ms (0.0 seconds)
PDB$SEED(2):Undo initialization recovery: err:0 start: 1139322373 end: 1139322392 diff: 19 ms (0.0 seconds)
PDB$SEED(2):[3457] Successfully onlined Undo Tablespace 2.
PDB$SEED(2):Undo initialization online undo segments: err:0 start: 1139322392 end: 1139322638 diff: 246 ms (0.2 seconds)
PDB$SEED(2):Undo initialization finished serial:0 start:1139322373 end:1139322643 diff:270 ms (0.3 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is AL32UTF8
PDB$SEED(2):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x0, new 0x0 (no suplog)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
2025-05-05T15:11:01.806446+02:00
PDB$SEED(2):joxcsys_required_dirobj_exists: directory object exists with required path /u02/app/oracle/product/19.0.0.0/dbhome_3/javavm/admin/, pid 3457 cid 2
Pluggable database PDB$SEED opened read write
2025-05-05T15:11:03.081311+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:09.187099+02:00
QPI: Cleaning and refreshing metadata..
2025-05-05T15:11:23.765174+02:00
PDB$SEED(2):ORA-1691: unable to extend lobsegment SYS.SYS_LOB0000023009C00008$$ by 1024 in tablespace SYSTEM [PDB$SEED] (ospid 3221)
2025-05-05T15:11:24.482146+02:00
PDB$SEED(2):Pluggable database PDB$SEED closing
So we can see that once pdb$seed is opened in read write mode, datapatch fails because it cannot extend the SYS.SYS_LOB0000023009C00008$$ LOB segment.
Let’s look into the ASM to see if there is enough disk space.
ASMCMD> lsdg State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name MOUNTED HIGH N 512 512 4096 4194304 49072128 56708 0 18902 0 Y DATAC4/ MOUNTED HIGH N 512 512 4096 4194304 16356864 13063776 0 4354592 0 N RECOC4/ MOUNTED HIGH N 512 512 4096 4194304 163568640 163563936 0 54521312 0 N SPRC4/ ASMCMD>
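The same figures are also available in SQL from the ASM instance through V$ASM_DISKGROUP (a sketch, formatting mine):

SELECT name, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
FROM   v$asm_diskgroup
ORDER  BY name;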
There is not much usable space left in the DATA disk group, probably not enough, but there is still some. Let's try to open pdb$seed in read write mode and create a table.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL> shutdown immediate Pluggable Database closed. SQL> alter pluggable database open read write; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ WRITE NO SQL> create table xxx (n number) tablespace system; Table created. SQL> insert into xxx values (10); 1 row created. SQL> commit; Commit complete. SQL> drop table xxx purge; Table dropped.
So this works. Let's now try to resize the SYSTEM data file.
SQL> alter database datafile 2 resize 2048M; alter database datafile 2 resize 2048M * ERROR at line 1: ORA-01237: cannot extend datafile 2 ORA-01110: data file 2: '+DATAC4/CDBTEST_CHZ2/165C22D4CFFCA759E0638A534664B46C/DATAFILE/system.615.1181302705' ORA-17505: ksfdrsz:1 Failed to resize file to size 262144 blocks ORA-15041: diskgroup "DATAC4" space exhausted
This fails, which confirms that the DATA disk group does not have enough available space.
First, I reopen pdb$seed in read only mode.
SQL> shutdown immediate Pluggable Database closed. SQL> alter pluggable database open read only; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO SQL>
The following MOS note explains the problem: one or more disks having 0 free MB.
Datapatch failed with ORA-01691: unable to extend lob segment on MGMTDB (Doc ID 2352895.1)
Let’s check, connecting to the +ASM instance.
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select group_number, name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 DATAC4
           2 RECOC4
           3 SPRC4

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

DISK_NUMBER    FREE_MB
----------- ----------
         48          0

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        95

SQL>
Yes, this is exactly the problem! Disk Number 48 has got 0 free MB.
Resolution
Let's resolve the problem.
Through the OCI console, I added 1 TB of storage to the ASM. The usable space of the disk groups then became the following:
ASMCMD> lsdg State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name MOUNTED HIGH Y 512 512 4096 4194304 50959872 1932164 0 644054 0 Y DATAC4/ MOUNTED HIGH N 512 512 4096 4194304 16356864 13058556 0 4352852 0 N RECOC4/ MOUNTED HIGH N 512 512 4096 4194304 163568640 163563936 0 54521312 0 N SPRC4/ ASMCMD>
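The Rebal column of DATAC4 now shows Y, meaning an ASM rebalance is running after the storage extension. If needed, its progress can be followed from the ASM instance with a query like this one (a sketch):

SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;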
I checked to ensure there are no more disks with 0 free MB.
[grid@exacc-cl01n1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Mon May 5 16:28:11 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to:
Oracle Database 23ai Enterprise Edition Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems
Version 23.7.0.25.01

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM1

SQL> select DISK_NUMBER, FREE_MB from v$asm_disk where group_number=1 and FREE_MB=0;

no rows selected

SQL> select count(*) from v$asm_disk where group_number=1 and FREE_MB <> 0;

  COUNT(*)
----------
        96

SQL>
I then resumed the dbaascli move operation, which completed successfully.
[root@exacc-cl01n1 ~]# dbaascli database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume DBAAS CLI version 25.1.1.0.0 Executing command database move --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_3 --dbname CDBTEST --sessionID 15920 --resume Job id: 7e59d18a-ca1b-4808-a927-c3ce9131b968 Session log: /var/opt/oracle/log/CDBTEST/database/move/dbaastools_2025-05-05_05-52-50-PM_350983.log Loading PILOT... Session ID of the current execution is: 15932 Log file location: /var/opt/oracle/log/CDBTEST/database/move/pilot_2025-05-05_05-52-54-PM_351354 ----------------- Running initialization job Completed initialization job ----------------- Acquiring write lock: cdbtest Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Acquiring read lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 Running datapatch_and_recompile_invalid_objects job Datapatch execution on database 'CDBTEST' is in progress Datapatch execution on database 'CDBTEST' is complete Recompilation of invalid objects on database 'CDBTEST' is in progress Recompilation of invalid objects on database 'CDBTEST' is complete Completed datapatch_and_recompile_invalid_objects job ----------------- Running update_pdb_status job Completed update_pdb_status job ----------------- Running post_patch_lock_manager job Completed post_patch_lock_manager job Releasing lock: cdbtest Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_1 Releasing lock: _u02_app_oracle_product_19.0.0.0_dbhome_3 ----------------- Running invoke_backup_asst job Completed invoke_backup_asst job ----------------- Running post_move_validation job Completed post_move_validation job ----------------- Running generate_dbsystem_details job Acquiring native write lock: global_dbsystem_details_generation Releasing native lock: global_dbsystem_details_generation Completed generate_dbsystem_details job dbaascli execution completed You have new mail in /var/spool/mail/root
And I checked the database patch level for cdb$root and the PDB.
oracle@exacc-cl01n1:~/ [rdbms1900] CDBTEST1 2025-05-05_18-01-21::DMK_ENV.pm::read_oratab ::INFO ==> changed ORACLE_HOME found in oratab: +ASM1 - old=/u01/app/19.0.0.0/grid, new=/u02/app/23.0.0.0/gridhome_1 ********************************** INSTANCE_NAME : CDBTEST1 DB_NAME : CDBTEST DB_UNIQUE_NAME : CDBTEST_CHZ2 STATUS : OPEN READ WRITE LOG_MODE : ARCHIVELOG USERS/SESSIONS : 2/7 DATABASE_ROLE : PRIMARY FLASHBACK_ON : YES FORCE_LOGGING : YES VERSION : 19.26.0.0.0 CDB_ENABLED : YES PDBs : PDB_001T PDB$SEED ********************************** PDB color: pdbname=mount, pdbname=open read-write, pdbname=open read-only Statustime: 2025-05-05 18:01:22 oracle@exacc-cl01n1:~/ [CDBTEST1 (CDB$ROOT)] sqh SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:01:25 2025 Version 19.26.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production Version 19.26.0.0.0 SQL> set lines 300 SQL> set tab off SQL> set pages 500 SQL> alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS'; Session altered. SQL> col action_time for a30 SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch; ACTION_TIME PATCH_ID SOURCE_VERSION TARGET_VERSION STATUS DESCRIPTION ------------------------------ ---------- --------------- --------------- ------------------------- ---------------------------------------------------------------------------------------------------- 18-APR-24 10.08.56.234640 AM 36199232 19.1.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 18-APR-24 10.19.15.621942 AM 36233263 19.1.0.0.0 19.23.0.0.0 SUCCESS Database Release Update : 19.23.0.0.240416 (36233263) 02-OCT-24 11.46.11.005940 AM 26749785 19.23.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 02-OCT-24 11.46.13.343508 AM 27605010 19.23.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 02-OCT-24 11.47.34.300679 AM 36420641 19.23.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.27.25.226657 PM 36199232 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 05-MAY-25 05.27.26.115643 PM 37102264 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264) 05-MAY-25 05.27.26.697687 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.27.28.734483 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.27.28.839820 PM 36420641 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.28.35.890001 PM 37260974 19.23.0.0.0 19.26.0.0.0 SUCCESS Database Release Update : 19.26.0.0.250121 (37260974) 05-MAY-25 05.28.44.283893 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.28.46.272241 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.29.22.101438 PM 37470729 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.26.0.0.0 14 rows selected. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 4 PDB_001T READ WRITE NO SQL> alter session set container=PDB_001T; Session altered. 
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 4 PDB_001T READ WRITE NO SQL> select action_time, patch_id, source_version, target_version, status, description from dba_registry_sqlpatch; ACTION_TIME PATCH_ID SOURCE_VERSI TARGET_VERSI STATUS DESCRIPTION ------------------------------ ---------- ------------ ------------ -------------------- ---------------------------------------------------------------------------------------------------- 19-JUN-21 09.56.16.648813 AM 32067171 19.1.0.0.0 19.1.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171) 19-JUN-21 09.56.16.641919 AM 32218454 19.1.0.0.0 19.10.0.0.0 SUCCESS Database Release Update : 19.10.0.0.210119 (32218454) 19-JUN-21 09.56.17.529234 AM 28318139 19.1.0.0.0 19.1.0.0.0 SUCCESS ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL 19-JUN-21 09.56.18.407598 AM 28555193 19.1.0.0.0 19.1.0.0.0 WITH ERRORS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 19-JUN-21 10.13.13.165620 AM 28555193 19.10.0.0.0 19.10.0.0.0 SUCCESS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 06-MAR-22 10.57.35.074017 AM 32067171 19.13.0.0.0 19.13.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.10.0.0.210119 (32067171) 06-MAR-22 10.57.39.351946 AM 33192694 19.10.0.0.0 19.10.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694) 06-MAR-22 10.57.36.095921 AM 28318139 19.13.0.0.0 19.13.0.0.0 SUCCESS ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL 06-MAR-22 10.57.37.111786 AM 28555193 19.13.0.0.0 19.13.0.0.0 SUCCESS DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT 06-MAR-22 10.57.39.348199 AM 33192793 19.10.0.0.0 19.13.0.0.0 SUCCESS Database Release Update : 19.13.0.0.211019 (33192793) 06-MAR-22 10.57.40.368792 AM 30944402 19.10.0.0.0 19.10.0.0.0 SUCCESS SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS 06-MAR-22 10.57.41.384551 AM 33522539 19.10.0.0.0 19.10.0.0.0 SUCCESS MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027 19-AUG-23 09.58.11.262033 AM 33192694 19.18.0.0.0 19.18.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694) 19-AUG-23 09.58.13.172768 AM 34786990 19.13.0.0.0 19.13.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990) 19-AUG-23 09.58.11.269998 AM 30944402 19.18.0.0.0 19.18.0.0.0 SUCCESS SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS 19-AUG-23 09.58.11.271219 AM 33522539 19.18.0.0.0 19.18.0.0.0 SUCCESS MERGE ON DATABASE RU 19.13.0.0.0 OF 33280027 19-AUG-23 09.58.13.169832 AM 34765931 19.13.0.0.0 19.18.0.0.0 SUCCESS DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931) 19-AUG-23 09.58.13.340768 AM 26749785 19.13.0.0.0 19.13.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 19-AUG-23 09.58.13.346063 AM 27605010 19.13.0.0.0 19.13.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 19-AUG-23 09.58.14.253669 AM 34972375 19.13.0.0.0 19.13.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.18.0.0.0 17-AUG-24 08.59.17.242116 AM 34786990 19.18.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.18.0.0.230117 (34786990) 17-AUG-24 08.59.17.362926 AM 36199232 19.18.0.0.0 19.23.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 17-AUG-24 08.59.18.040319 AM 26749785 19.18.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 17-AUG-24 
08.59.18.971587 AM 27605010 19.18.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 17-AUG-24 08.59.19.038902 AM 34972375 19.18.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.18.0.0.0 17-AUG-24 08.59.50.930598 AM 36233263 19.18.0.0.0 19.23.0.0.0 SUCCESS Database Release Update : 19.23.0.0.240416 (36233263) 17-AUG-24 08.59.56.646477 AM 26749785 19.18.0.0.0 19.23.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 17-AUG-24 08.59.57.504574 AM 27605010 19.18.0.0.0 19.23.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 17-AUG-24 09.00.21.159959 AM 36420641 19.18.0.0.0 19.23.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.31.01.733241 PM 36199232 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.23.0.0.240416 (36199232) 05-MAY-25 05.31.03.413696 PM 37102264 19.23.0.0.0 19.26.0.0.0 SUCCESS OJVM RELEASE UPDATE: 19.26.0.0.250121 (37102264) 05-MAY-25 05.31.04.821299 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.31.06.205197 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.31.06.379357 PM 36420641 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.23.0.0.0 05-MAY-25 05.32.25.947558 PM 37260974 19.23.0.0.0 19.26.0.0.0 SUCCESS Database Release Update : 19.26.0.0.250121 (37260974) 05-MAY-25 05.32.32.616612 PM 26749785 19.23.0.0.0 19.26.0.0.0 SUCCESS PERF_DIAG NEED TO HAVE MORE CONTROL IN DICTIONARY FOR AUTO_STATS_ADVISOR_TASK 05-MAY-25 05.32.33.746247 PM 27605010 19.23.0.0.0 19.26.0.0.0 SUCCESS DST UPGRADE SLOW (BEGIN_UPGRADE AND UPGRADE_DATABASE) EVEN WITH PATCH 19704999 05-MAY-25 05.35.48.324333 PM 37470729 19.23.0.0.0 19.26.0.0.0 SUCCESS DATAPUMP BUNDLE PATCH 19.26.0.0.0 38 rows selected.
And finally I checked PDB violations.
oracle@exacc-cl01n1:/u02/app/oracle/product/19.0.0.0/dbhome_3/OPatch/ [CDBTEST1 (CDB$ROOT)] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 5 18:10:56 2025
Version 19.26.0.0.0

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.26.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 PDB_001T                       READ WRITE NO
SQL> alter session set container=PDB_001T;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDB_001T                       READ WRITE NO
SQL> col name for a15
SQL> col message for a120
SQL> col cause for a50
SQL> set lines 300
SQL> col cause for a20
SQL> col type for a10
SQL> select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where status <> 'RESOLVED';

NAME            CAUSE                TYPE       MESSAGE                                                                                   STATUS
--------------- -------------------- ---------- ----------------------------------------------------------------------------------------- ---------------------------
PDB_001T        OPTION               WARNING    Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING
PDB_001T        OPTION               WARNING    Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.  PENDING
PDB_001T        OPTION               WARNING    Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING
PDB_001T        OPTION               WARNING    Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING
PDB_001T        OPTION               WARNING    Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.    PENDING
PDB_001T        OPTION               WARNING    Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING
PDB_001T        OPTION               WARNING    Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING
PDB_001T        OPTION               WARNING    Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING

8 rows selected.

SQL>
Yes, that’s it, all is ok now.
To wrap up…
The database move to a new, higher-version dbhome on the ExaCC failed at the datapatch step because one of the ASM disks had 0 free MB. Adding space to the ASM disk group resolved the issue.
The article dbaascli database move failing with ORA-01691 on ExaCC first appeared on the dbi Blog.
What brings the M-Files and Microsoft 365 Strategic Partnership

On the 1st of July 2025, M-Files and Microsoft announced their strategic partnership under the headline “M-Files and Microsoft Announce Strategic Partnership Revolutionising Document Management”.
I was wondering what this will mean for us as an M-Files partner and for our customers. What are the new features and benefits? In this blog article I will share the information and my perspective on this fantastic announcement.
One of the hot topics is the fundamental shift in how enterprise content is managed within the M-Files and Microsoft ecosystems.
Main Features
Thanks to the strategic partnership between M-Files and Microsoft, M-Files will now be able to use features such as the ones below, unlocking native Microsoft 365 capabilities for M-Files AI-curated content.
- Microsoft 365 and its API-only service
- SharePoint Embedded
- Document co-authoring
This powerful solution is native to Microsoft 365, offering the best of both worlds.
M-Files strengths:
- M-Files automation, including its strengths in metadata
- One client to access data and features, making work easier
Microsoft’s power:
- Copilot capabilities
- Collaboration tools
- Security integration

- Generative AI results based on the M-Files content
- Be assured that the results are accurate, relevant and can be trusted
- Lay the groundwork for the AI journey
- Native document co-authoring with Microsoft applications
- Native collaboration with the trusted Microsoft tools
- M-Files automated content governance and permissions, with secure content remaining within the Microsoft security boundary
- Use of Microsoft Purview to apply compliance and governance policies
Requirements for M-Files and Microsoft 365:
- M-Files Cloud
- M-Files Business platform
- Microsoft 365 subscription
Desktop co-authoring is already enabled for M-Files Cloud customers with a Business Platform subscription. Detailed instructions on how to unlock all the benefits and capabilities will be available in August 2025. Stay tuned — I will share further details as soon as they are available!
Conclusion
As has been noted in previous blogs, the direction of travel is towards incorporating AI in the optimum way, while ensuring appropriate security boundaries.
This announcement is important because it provides a centralised, scalable environment for managing and supporting your business requirements. Topics include effortlessly finding documents, managing the entire document lifecycle, effective collaboration, gaining actionable insights faster, and ensuring robust governance. Using AI to achieve all this will boost productivity and reduce risk for your business.
If you would like to hear the perspective of a Microsoft architect on this strategic partnership, you can watch this video.
I hope this blog gives you a good idea of the strategic partnership between M-Files and Microsoft. Keep an eye out for more blog posts about M-Files on this channel, and be sure to check out the other blogs that are around the M-Files topic.
Do not hesitate to get in touch with us if you have any questions, or contact me directly if you prefer.
The article What brings the M-Files and Microsoft 365 Strategic Partnership first appeared on the dbi Blog.
Determining Optimal Index Key Compression Length
In 2008, Richard Foote wrote this still excellent 4-part guide to index key compression.
- Index Compression Part I (Low)
- Index Compression Part II (Down Is The New Up)
- Index Compression Part III (2+2=5)
- Index Compression Part IV (Packt Like Sardines In a Crushd Tin Box)
He started with this comment, that I think is just as valid as it was then:
“Index compression is perhaps one of the most under used and neglected index options available. It has the potential to substantially reduce the overall size of non-Unique indexes and multi-column unique indexes, in some scenarios dramatically so... Not only will it potentially save storage, but if the resultant index contains fewer leaf blocks, that’s potentially fewer LIOs and from the Cost Based Optimizer’s point of view, potentially a cheaper execution plan option.”
Index key compression is a highly effective option for reducing index size and improving index performance.
“Oracle stores each distinct combination of compressed column values found within a specific index leaf block in a ‘Prefix’ table within the leaf block and assigns each combination a unique prefix number.” If the prefix length (the number of leading columns to be compressed) is too great, then the prefix table will contain more entries, ultimately one for every row in the index. The compressed index could end up being larger than the uncompressed index! If the prefix length is too small, then you might not get as much compression as you might with a longer prefix length.
In other words, there is a sweet spot where you will achieve optimal compression. That sweet spot can vary from no compression to compressing all the columns. It will vary from index to index, from partition to partition, and potentially over time as the data in an index changes.
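For reference, the prefix length is simply specified when the index is created or rebuilt. A minimal sketch (table and index names are made up for illustration):

-- compress the two leading columns of the index key
CREATE INDEX my_idx ON my_table (col1, col2, col3) COMPRESS 2;

-- change the prefix length of an existing index
ALTER INDEX my_idx REBUILD COMPRESS 1;

-- remove prefix compression again
ALTER INDEX my_idx REBUILD NOCOMPRESS;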
Test Every Option to Determine Optimal Compression
One way to determine the optimal compression is through exhaustive testing. Each index could be rebuilt at each possible compression prefix length, the resulting sizes could be compared, and the performance of application processes could be tested.
The following PL/SQL script (available on GitHub) rebuilds each index on a named table at each possible compression length, collects statistics and stores them in a table.
REM index_compression_test.sql
create table gfc_index_compression_stats
(table_name varchar2(128)
,index_name varchar2(128)
,num_rows number
,last_analyzed date
,prefix_length number
,blevel number
,leaf_blocks number
,avg_leaf_blocks_per_key number
,avg_data_blocks_per_key number
,clustering_factor number
,constraint gfc_index_compression_stats_pk primary key (table_name, index_name, prefix_length)
);
DECLARE
l_table_name VARCHAR2(128) := 'PSTREENODE';
l_num_cols INTEGER;
l_sql CLOB;
e_invalid_compress_length EXCEPTION;
PRAGMA EXCEPTION_INIT(e_invalid_compress_length,-25194);
BEGIN
FOR i IN (
SELECT table_name, index_name, column_position prefix_length FROM user_ind_columns
WHERE table_name = l_table_name
UNION
SELECT table_name, index_name, 0 FROM user_indexes
WHERE table_name = l_table_name
ORDER BY table_name, index_name, prefix_length DESC
) LOOP
IF i.prefix_length > 0 THEN
l_sql := 'ALTER INDEX '||i.index_name||' REBUILD COMPRESS '||i.prefix_length;
ELSE
l_sql := 'ALTER INDEX '||i.index_name||' REBUILD NOCOMPRESS';
END IF;
BEGIN
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
dbms_stats.gather_index_stats(user,i.index_name);
MERGE INTO gfc_index_compression_stats u
USING (SELECT * FROM user_indexes WHERE table_name = i.table_name And index_name = i.index_name) s
ON (u.table_name = s.table_name AND u.index_name = s.index_name AND u.prefix_length = NVL(s.prefix_length,0))
WHEN MATCHED THEN UPDATE SET u.num_rows = s.num_rows, u.last_analyzed = s.last_analyzed, u.blevel = s.blevel, u.leaf_blocks = s.leaf_blocks, u.avg_leaf_blocks_per_key = s.avg_leaf_blocks_per_key, u.avg_data_blocks_per_key = s.avg_data_blocks_per_key, u.clustering_factor = s.clustering_factor
WHEN NOT MATCHED THEN INSERT (table_name, index_name, num_rows, last_analyzed, prefix_length, blevel, leaf_blocks, avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor)
VALUES (s.table_name, s.index_name, s.num_rows, s.last_analyzed, NVL(s.prefix_length,0), s.blevel, s.leaf_blocks, s.avg_leaf_blocks_per_key, s.avg_data_blocks_per_key, s.clustering_factor);
EXCEPTION
WHEN e_invalid_compress_length THEN NULL;
END;
END LOOP;
END;
/
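Once this block has run, a simple query against the statistics table it populates identifies, for each index, the prefix length that produced the fewest leaf blocks (a sketch using the table created above):

SELECT table_name, index_name, prefix_length, blevel, leaf_blocks
FROM   gfc_index_compression_stats s
WHERE  leaf_blocks = (SELECT MIN(leaf_blocks)
                      FROM   gfc_index_compression_stats m
                      WHERE  m.table_name = s.table_name
                      AND    m.index_name = s.index_name)
ORDER  BY table_name, index_name, prefix_length;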
The following chart presents the data collected by the script above for the PSTREENODE table in PeopleSoft. The number of leaf blocks is graphed against the compression prefix length. The left-hand end of each line shows the uncompressed size of the index.
Let Oracle Calculate the Optimal Compression
Alternatively, Oracle can calculate the optimal compression for you. ANALYZE … VALIDATE STRUCTURE populates the INDEX_STATS view with the optimal compression prefix length (OPT_CMPR_COUNT) and the estimated space saving at that length (OPT_CMPR_PCTSAVE); the CASCADE option of ANALYZE TABLE validates all of the indexes on the table in one go.
ANALYZE INDEX … VALIDATE STRUCTURE CASCADE;
ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;
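After the ANALYZE, the recommendation can be read straight from INDEX_STATS, bearing in mind that this view only ever holds the result of the last index analysed in the current session (a quick sketch):

SELECT name, partition_name, lf_blks, opt_cmpr_count, opt_cmpr_pctsave
FROM   index_stats;

The calc_opt_comp.sql script below automates this for every index and index partition on a table, copying each INDEX_STATS result into a permanent working table so that the recommendations can be reported afterwards.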
REM calc_opt_comp.sql
REM (c)Go-Faster Consultancy Ltd. 2014
REM see https://blog.psftdba.com/2016/02/implementing-index-compression-and.html
set serveroutput on autotrace off
clear columns
SPOOL calc_opt_comp
REM DROP TABLE sysadm.gfc_index_stats PURGE;
--create working storage table with same structure as INDEX_STATS
CREATE TABLE sysadm.gfc_index_stats
AS SELECT * FROM index_stats
WHERE 1=2
/
ALTER TABLE sysadm.gfc_index_stats
MODIFY name NOT NULL
/
CREATE UNIQUE INDEX sysadm.gfc_index_stats
ON sysadm.gfc_index_stats (name, partition_name)
/
undefine table_name
DECLARE
l_sql VARCHAR2(100);
l_owner VARCHAR2(8) := 'SYSADM';
l_table_name VARCHAR2(30) := '&&table_name';
BEGIN
FOR i IN (
SELECT i.index_name, ip.partition_name
FROM all_indexes i
, all_ind_partitions ip
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.partitioned = 'YES'
AND i.table_name = l_table_name
AND ip.index_owner = i.owner
AND ip.index_name = i.index_name
AND ip.subpartition_count = 0
AND ip.segment_created = 'YES'
UNION
SELECT i.index_name, isp.subpartition_name
FROM all_indexes i
, all_ind_subpartitions isp
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.partitioned = 'YES'
AND i.table_name = l_table_name
AND isp.index_owner = i.owner
AND isp.index_name = i.index_name
AND isp.segment_created = 'YES'
UNION
SELECT i.index_name, NULL
FROM all_indexes i
WHERE i.index_type like '%NORMAL'
AND i.table_owner = l_owner
AND i.table_name = l_table_name
AND i.partitioned = 'NO'
AND i.segment_created = 'YES'
MINUS
SELECT name, partition_name
FROM sysadm.gfc_index_stats
) LOOP
IF i.partition_name IS NULL THEN
l_sql := 'ANALYZE INDEX '||l_owner||'.'||i.index_name||' VALIDATE STRUCTURE';
ELSE
l_sql := 'ANALYZE INDEX '||l_owner||'.'||i.index_name||' PARTITION ('||i.partition_name||') VALIDATE STRUCTURE';
END IF;
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
DELETE FROM sysadm.gfc_index_stats g
WHERE EXISTS(
SELECT 'x'
FROM index_stats i
WHERE i.name = g.name
AND (i.partition_name = g.partition_name OR (i.partition_name IS NULL AND g.partition_name IS NULL)));
INSERT INTO sysadm.gfc_index_stats
SELECT i.* FROM index_stats i;
COMMIT;
END LOOP;
END;
/
…
The script produces reports of its analysis. The summary report shows the optimal compression length for each index and lists the columns that are and are not compressed. We can see that the result of the ANALYZE command agrees with the result of the previous test that rebuilt each index at each compression length and measured the size of the index.
Summary Report
Opt Comp Weighted Est.
Prefix Num Average Comp
Table Name Index Name Length FREQ Parts Blocks Saving % Blocks
------------------ ------------------ -------- ---- ----- ------------ -------- ------------
Compress Columns Do Not Compress Columns
----------------------------------------------------------- -----------------------------------------------------------
PSTREENODE PSAPSTREENODE 4 1 0 2,048 41.0 1,208
SETID, TREE_NAME, EFFDT, TREE_BRANCH TREE_NODE, TREE_NODE_NUM, TREE_NODE_NUM_END, TREE_NODE_TYPE
PSBPSTREENODE 8 1 0 1,920 34.0 1,267
SETID, TREE_NAME, TREE_BRANCH, TREE_NODE_NUM, TREE_NODE, TR
EE_NODE_NUM_END, TREE_LEVEL_NUM, TREE_NODE_TYPE
PSDPSTREENODE 3 1 0 1,280 61.0 499
SETID, TREE_NAME, EFFDT PARENT_NODE_NUM
PSFPSTREENODE 2 1 0 1,024 67.0 338
TREE_NAME, EFFDT
PSGPSTREENODE 2 1 0 2,304 35.0 1,498
PARENT_NODE_NAME, TREE_NAME EFFDT, TREE_NODE, SETID
PSHPSTREENODE 2 1 0 2,048 24.0 1,556
TREE_NODE, TREE_NAME EFFDT, SETID, SETCNTRLVALUE, TREE_NODE_NUM
PSIPSTREENODE 3 1 0 1,152 .0 1,152
SETID, TREE_NAME, EFFDT TREE_NODE, TREE_NODE_NUM, TREE_NODE_NUM_END
PS_PSTREENODE 4 1 0 1,792 46.0 968
SETID, SETCNTRLVALUE, TREE_NAME, EFFDT TREE_NODE_NUM, TREE_NODE, TREE_BRANCH
****************** ----- ------------ ------------
sum 0 13,568 8,486
Compression of Partitioned Indexes Detail Report
Opt Comp Est.
Prefix Saving Comp
Table Name Index Name Partition Name Length Blocks % Blocks
------------------ ------------------ ------------------------------ -------- ------------ ------ ------------
…
PSHJRNL_LN JRNL_LNH201612 1 143,264 142.0 -60,171
JRNL_LNH201712 0 88,192 74.0 22,930
JRNL_LNH201812 6 12,240 .0 12,240
JRNL_LNH201912 6 11,104 .0 11,104
…
JRNL_LNH202201 6 13,752 .0 13,752
JRNL_LNH202202 6 5,496 .0 5,496
JRNL_LNH202203 6 6,504 .0 6,504
JRNL_LNH202204 6 5,920 .0 5,920
JRNL_LNH202205 6 6,864 .0 6,864
JRNL_LNH202206 6 13,584 .0 13,584
JRNL_LNH202207 6 12,408 .0 12,408
JRNL_LNH202208 3 212,904 121.0 -44,710
JRNL_LNH202209 0 262,472 111.0 -28,872
JRNL_LNH202210 3 228,552 102.0 -4,571
****************** ------------ ------------
sum 1,625,328 574,550
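From the data captured in gfc_index_stats, the per-partition recommendations can be compared directly, which makes it easy to spot partitions whose optimal prefix length differs from the rest of the index (a sketch; the estimated block count mirrors the report's calculation):

SELECT name, partition_name, lf_blks, opt_cmpr_count, opt_cmpr_pctsave,
       ROUND(lf_blks * (1 - opt_cmpr_pctsave/100)) AS est_comp_blocks
FROM   sysadm.gfc_index_stats
ORDER  BY name, partition_name;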
Where the optimal compression prefix length varies from partition to partition, there are several options:
- Compress the entire index at a shorter prefix length. In that case, most of the partitions will be larger than they could be and the exception partitions will be small, but the net effect is that the index will be larger.
- Disable compression on these partitions. Over-compressed indexes are generally only slightly larger than uncompressed indexes, so the benefit is probably only small.
- Leave compression at the length that is optimal for most of the partitions, accepting that a few partitions will be over-compressed. This usually results in the smallest index overall.
Dctm – Invalid/Missing XSRF token on D2
During an upgrade project to Documentum 23.4, I once again faced an interesting behavior from D2. The Documentum Server upgrade went through properly for all components such as the Connection Broker, Repositories, D2, etc. The OTDS was also set up and available/working properly without too much trouble. However, the deployment of the D2 WAR file was a bit of another story.
As usual, we try to set up Documentum and all its components as securely as possible. From a WebServer point of view, that includes a bunch of best practices that we add to our deployments / custom images (when using containers), and D2 is no exception. One such practice is, for example, to configure Tomcat and the D2 application to work only with cookies that have the “secure” and “httpOnly” flags. That is done in a few locations, but in recent versions of D2, there are additional parameters to help control this kind of behavior inside the ESAPI.properties file.
Note: there is often confusion about the “httpOnly” flag for cookies, so I think a quick reminder wouldn't hurt. The “secure” flag means that the cookie can only be sent over HTTPS (except when using localhost), so it is much harder to get access to it. The “httpOnly” flag, contrary to its name, does not mean that the cookie is only for HTTP communications; it means that it cannot be accessed by client-side scripts such as JavaScript. Therefore, sensitive cookies should have both flags, so that they travel securely over the network and, even once they arrive in the client's browser, their access is protected.
Therefore, as a good practice, I went ahead and configured D2 as securely as I could, even before a first deployment, and that included these 4 parameters:
[tomcat@d2-0 war_prep]$ grep -B1 -E "ForceHttpOnly|ForceSecure" WEB-INF/classes/ESAPI.properties
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 war_prep]$
Once my D2 WAR file was ready and configured, I tried to deploy it on Tomcat. There were no errors/issues during the deployment/startup of D2. However, accessing the D2 UI ended up with the pretty, infinite D2 loading logo. You have probably all seen that happen at some point:

Nothing showed up in the D2 logs (generated through logback.xml or log4j2.properties), but in the Tomcat logs, I could see the stack trace related to the issue when I accessed the URL a few minutes after Tomcat was fully up and running:
2025-07-08 14:25:56,379 UTC INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [$CATALINA_HOME/webapps/D2/D2.war] has finished in [57,704] ms
2025-07-08 14:25:56,382 UTC INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-8080"]
2025-07-08 14:25:56,400 UTC INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [57846] milliseconds
2025-07-08 14:29:36,966 UTC SEVERE [https-jsse-nio-8080-exec-42] org.apache.catalina.core.ApplicationContext.log Key[type=com.emc.x3.server.services.labels.RpcLabelServiceImpl, annotation=[none]]: An RpcTokenException was thrown while processing this call.
com.google.gwt.user.client.rpc.RpcTokenException: Invalid RPC token (Missing XSRF token: not on request, client IP=xxx.xxx.xxx.xxx)
at com.emc.x3.server.D2XsrfProtectedServiceServlet.validateXsrfToken(D2XsrfProtectedServiceServlet.java:33)
at com.google.gwt.user.server.rpc.AbstractXsrfProtectedServiceServlet.onAfterRequestDeserialized(AbstractXsrfProtectedServiceServlet.java:66)
at com.emc.x3.server.GuiceRemoteServiceServlet.processCall(GuiceRemoteServiceServlet.java:120)
at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:590)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:290)
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:280)
at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:184)
at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:89)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at com.emc.x3.portal.server.filters.authc.X3OTDSAuthenticationFilter.executeChain(X3OTDSAuthenticationFilter.java:1106)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:458)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:373)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:370)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.StuckThreadDetectionValve.invoke(StuckThreadDetectionValve.java:185)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:663)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:731)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:397)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:905)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:840)
I checked that the XSRF token was indeed generated, and it appeared to be present in the request, at least as shown in the Browser’s Network traces (Developer Tools). It was being generated and forwarded by the browser with both the “secure” and “httpOnly” flags. So, what was the issue then? It took me a bit of time, but I could pinpoint the issue to the ESAPI.properties file and, more specifically, to the 4 properties I mentioned above, which control the flags of both cookies and sessions. To be exact, I expected the “httpOnly” flag for the cookies to be the issue, since it would hide the XSRF_TOKEN from JavaScript on the client side. Keeping the rest of the exploded WAR folder unchanged, I tried to switch this one parameter back to false, which is the default value:
[tomcat@d2-0 ~]$ esapi_file="$CATALINA_HOME/webapps/D2/WEB-INF/classes/ESAPI.properties"
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=true
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ sed -i 's,\(HttpUtilities.ForceHttpOnlyCookies\)=true,\1=false,' ${esapi_file}
[tomcat@d2-0 ~]$
[tomcat@d2-0 ~]$ grep -B1 -E "ForceHttpOnly|ForceSecure" ${esapi_file}
# Force flags on cookies, if you use HttpUtilities to set cookies
HttpUtilities.ForceHttpOnlySession=true
HttpUtilities.ForceSecureSession=true
HttpUtilities.ForceHttpOnlyCookies=false
# Whlie doing a cross site access through https make the below flag to true
HttpUtilities.ForceSecureCookies=true
[tomcat@d2-0 ~]$
After a restart of Tomcat, the issue was completely gone and the login to D2 through the OTDS was working successfully… Since I could deploy D2-REST, D2-Smartview and D2-Config with all 4 ESAPI.properties parameters set to “true“, I asked OT if it was expected that only D2 has a problem with “HttpUtilities.ForceHttpOnlyCookies=true“. After a few days of exchange, I got the feedback that it isn’t documented, but it is apparently required for D2 NOT to have the “httpOnly” flag because of the XSRF_TOKEN. They will look into creating a KB for that topic (update: it was created and it’s available here: KB0845279). If you really need to have both flags set, then you will have no other choice than to switch to the new UI, D2-Smartview.
The article Dctm – Invalid/Missing XSRF token on D2 appeared first on dbi Blog.
Scheduling OpenSCAP reports in SUSE Multi-Linux Manager
As we’ve recently supported some customers on SUSE Multi Linux Manager, I’d like to share something which was not as easy to implement as it appeared to be in the first place. But first of all, what is SUSE Multi Linux Manager? It is basically a fork of Spacewalk, which was also used as the upstream project by the Red Hat Satellite product. But as Spacewalk was discontinued and the project on GitHub was archived, some people decided to fork it and started a new project called Uyuni, and this is now the upstream project for SUSE Multi Linux Manager. One of the great things about Uyuni and SUSE Multi Linux Manager is that it supports various Linux distributions such as SUSE and openSUSE distributions, Red Hat, Rocky, Oracle and Alma Linux, Debian, Ubuntu, and also ancient versions of CentOS if you still depend on them.
I am not going into the setup or basic configuration as you can already find related blogs here and more information in the documentation:
- Uyuni, an open-source configuration and infrastructure management solution for software-defined infrastructure (1) – The server (this is for version 4.x)
- Uyuni, an open-source configuration and infrastructure management solution for software-defined infrastructure (2) – Adding a client (this is for version 4.x)
- SUMA (SUSE Manager) is back and it has a new name: SUSE Multi-Linux
- SUSE Manager installation (this is for version 4)
What I want to look at in this post is the automatic scheduling of OpenSCAP scans/reports. When this requirement came up, it seemed pretty easy to do, as you can easily schedule such a scan against a single system. As you can see below, I have a Red Hat 9 system registered to my SUSE Multi Linux Manager server:

What you can easily do out of the box is to manually schedule an OpenSCAP scan:

Once the scan completes, it becomes visible under the “List Scan” tab and you can browse into the details:



Quite easy to do, but still a manual action. As we wanted to have it automated, the obvious choice was to create a “Recurring Action”:

This gives you the option to create and configure a “Recurring Action”:

The issue is, there is no pre-defined “Custom State” which schedules an OpenSCAP scan:

The very same is true for “System Groups”, which you normally would use because otherwise you’d need to schedule that on every single system:

The last option seemed to be something under “Schedule” but this only gives you a list of what you already have:

At this point we were stuck and had to talk to SUSE support, which really was a great experience by the way. It turned out there is no easy, built-in way to do this. A feature request has been logged, but of course there is no guarantee that it will be implemented.
But there is a workaround: not a very beautiful one, but at least it works. SUSE Multi Linux Manager (and Uyuni of course) comes with an API and there is one call for triggering an OpenSCAP scan. Using this, a custom state channel can be created which in turn calls the API to trigger the scan:


The “SLS Contents” actually contains the code (Python in this case) which talks to the API and triggers the scan:
/usr/local/bin/schedule_xccdf_scan.py:
  file.managed:
    - user: root
    - group: root
    - mode: 755
    - contents: |
        #!/usr/bin/python3
        import xmlrpc.client
        client = xmlrpc.client.ServerProxy('https://suma.dwe.local/rpc/api')
        key = client.auth.login('admin', 'xxxx')
        client.system.scap.scheduleXccdfScan(
            key,
            1000010000,
            '/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml',
            '--profile xccdf_org.ssgproject.content_profile_cis_server_l1'
        )
        client.auth.logout(key)

schedule_xccdf_scan:
  cmd.run:
    - name: /usr/local/bin/schedule_xccdf_scan.py
    - require:
      - file: /usr/local/bin/schedule_xccdf_scan.py
I am not going into the code itself, as it should be easy to understand. The important part is the system ID (1000010000 in the example above) passed to scheduleXccdfScan. This defines the system you want the scan to happen on (you can also provide an array of systems, see the API documentation linked above).
As soon as you have this, you can schedule this automatically as a recurring action on either the system itself, or a group of systems in “System Groups”:



Not as easy as it could be, and the systems are still hard-coded in the Python code, but at least we have something that works. Hope that helps.
The article Scheduling OpenSCAP reports in SUSE Multi-Linux Manager appeared first on dbi Blog.
SQL Server 2025 – Optimized Locking
Within the past few days I was investigating one of the new features of SQL Server 2025: Optimized Locking. I was curious about the capabilities, the behaviour, as well as the limits of the feature. Optimized Locking is based on two primary components:
- Transaction ID (TID)
- Lock after qualification (LAQ)
Here we have the transaction ID working as a unique identifier for a transaction. Each row which is modified by this transaction will be labelled with its transaction ID. This produces only one single lock on the TID, which is used instead of many key or RID locks. To be precise: Update and Exclusive locks will still be placed, but released immediately without waiting for the transaction to be committed.
This behaviour helps the lock manager enormously to keep the number of locks it has to maintain at a minimum and thus saves a lot of space (memory).
Lock after qualification (LAQ), on the other hand, silently qualifies the rows affected by an update in the background, without the need to place Shared locks on them while scanning through. Only if a row has been qualified – meaning it will be affected by this update – will an attempt be made to place an Update lock.
In order to benefit from this mechanism, Read Committed Snapshot Isolation (RCSI) must be enabled on database-level.
My Demo environment looks as follows:
- SQL Server 2025 CTP 2.0 (17.0.700.9)
- SQL Server Management Studio 21 (21.3.6) with Copilot activated
My colleague Stéphane Haby wrote blog posts about SQL Server 2025, for example this one:
In the meantime, Microsoft released SQL Server 2025 CTP 2.1 with a few improvements, explained on BOL:
https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17
Copilot is not part of the game when we discuss Optimized Locking, but as AI is omnipresent these days, I want to mention the blog post of my colleague Steven Naudet where he describes the enabling process for Copilot in SSMS:
Now, back to business, back to Optimized Locking – Let’s see this performance tuning feature in action!
First things first – I’ve created a new database called “OptimizedLocking” and checked which features are in place. To determine if Optimized Locking is enabled on a database, you can use either the function DATABASEPROPERTYEX or grab the information from sys.databases directly:
-- Query the dedicated DATABASEPROPERTYEX for Optimized Locking
USE [OptimizedLocking]
GO
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn
GO
USE [master]
GO
SELECT name AS DatabaseName, is_read_committed_snapshot_on, is_optimized_locking_on, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = 'OptimizedLocking'
GO

Why do I additionally request the columns “is_read_committed_snapshot_on” and “is_accelerated_database_recovery_on”?
Well, the latter is absolutely necessary to get Optimized Locking to work, and RCSI enables lock after qualification (LAQ), which I will show you a little bit later.
Now it’s time to create a simple table within the database “OptimizedLocking”:
-- Create a table to show locks
USE [OptimizedLocking]
GO
DROP TABLE IF EXISTS T1
GO
CREATE TABLE T1
(
T1_ID INT NOT NULL,
Value INT
);
INSERT INTO T1 (T1_ID, Value)
VALUES (1,10),(2,20),(3,30);
GO
SELECT * FROM dbo.T1
GO
SELECT DB_NAME(database_id) AS DatabaseName, OBJECT_NAME(object_id) AS TableName, index_type_desc FROM sys.dm_db_index_physical_stats(DB_ID('OptimizedLocking'), OBJECT_ID('dbo.T1'), NULL, NULL, 'DETAILED')
GO
The content of the table T1 looks as follows – and it’s of course a Heap, because I neither added a PRIMARY KEY constraint to any of its columns nor specified a CLUSTERED INDEX:

I ran two concurrent update statements: the first changes the row with ID = 1 (in the left pane of SSMS) and the second one tries to change the row with ID = 2 (in the middle pane). In the pane on the right I placed the statement that shows the locks which are present.
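The post does not show the exact statements behind the three panes, so here is a minimal sketch of what they could look like; the values are arbitrary, only the dbo.T1 table and the sys.dm_tran_locks DMV are givens:
-- Session 1 (left pane): update row 1 and keep the transaction open
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 100 WHERE T1_ID = 1;
-- COMMIT only once the locks have been inspected

-- Session 2 (middle pane): tries to update row 2
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 200 WHERE T1_ID = 2;

-- Session 3 (right pane): show the locks currently held or requested in the database
SELECT request_session_id, resource_type, resource_description, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('OptimizedLocking');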
The second statement is blocked, because it cannot acquire the necessary Update Lock which is needed to update the row:

Now I change the database by setting OPTIMIZED_LOCKING to ON and run the same demo as above again. As mentioned above, ACCELERATED_DATABASE_RECOVERY is necessary as well to fulfil this:
USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = ON;
GO

As you can see, the behaviour remains the same: the second statement is blocked while trying to place a Shared Lock – but instead of the 4 “normal” locks we had before, there are only 2 XACT locks, which means a reduction in the number of locks that have to be managed by the lock manager:

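If you want to see only these transaction-ID locks, the lock query can be narrowed down accordingly; a small sketch, reusing the same DMV as above:
-- Show only the transaction ID (XACT) locks introduced by Optimized Locking
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('OptimizedLocking')
AND resource_type = 'XACT';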
Let’s go one step further and enable RCSI on database-level and see what happens now:
USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = ON;
GO
Et voilà, both statements have successfully had their Exclusive Lock requests (converted from the Update Lock placed in advance) approved/granted:

Heap vs. Clustered Index
But what happens now, if we use a Clustered Index instead of leaving our data (un)organized as a Heap? Let’s dig into this as well by sorting the T1_ID column using a clustered index:
USE [OptimizedLocking]
GO
CREATE CLUSTERED INDEX CI_T1 ON dbo.T1 (T1_ID)
GO

All the features that we switched on earlier are now deactivated to be able to begin from scratch again:

But what’s going on now? None of the previously mentioned features is enabled, but both statements have their requested locks granted:

This is because the potentially critical Exclusive Locks (X) are placed on different keys (i.e. different rows) and the Intent Exclusive Locks (IX) set on page level are compatible with each other.
Wrap-Up
During my consulting at customer sites I often see databases with a huge number of Heaps.
Don’t misunderstand me, Heaps are great for tables where we expect only INSERTs (e.g. a logging table), but if we deal with the other DML statements at the same time as well, a Clustered Index would be the better choice.
Related to this demo and the “Optimized Locking” feature, it can be said that if we are using a Clustered Index on our tables, we don’t need this feature in terms of concurrency, but regarding the overhead for the lock manager it’s definitely worth using “Optimized Locking”.
But wait: until now we dealt with the default isolation level of SQL Server – READ_COMMITTED. What will happen if we turn this into a higher level, for example SERIALIZABLE? Will Optimized Locking support us in having fewer locks and blockings?
The answer is no – not at all. Do you agree?
And this makes sense regarding the mechanism of an isolation level like SERIALIZABLE. When we choose such a high isolation level, we are (or at least should be) aware that we are blocking others for a (longer) period of time, because we tell SQL Server to behave like this. Everyone who is familiar in depth with the locking behaviour of SQL Server under different isolation levels knows that SQL Server has to change/align its behaviour to be able to take the ACID principle into account.
Isolation Level SERIALIZABLE
Here is the output of the same demo I did above several times, except that I executed the first UPDATE statement with the transaction isolation level SERIALIZABLE. As you can see, the second UPDATE statement (in the middle pane) is blocked while attempting to place an Exclusive Lock on the row with ID = 2, because the first UPDATE statement had to place an Exclusive Lock on the range that the row with ID = 2 is part of, based on the requirements of the isolation level:

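For reference, a minimal sketch of how the first session can be switched to SERIALIZABLE for this test; the values are arbitrary:
-- Session 1 (left pane): same update as before, but under SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 100 WHERE T1_ID = 1;
-- keep the transaction open; key-range locks are now held

-- Session 2 (middle pane): blocked although it touches a different row
UPDATE dbo.T1 SET Value = 200 WHERE T1_ID = 2;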
Conclusion
From my point of view, “Optimized Locking” is a real game changer. Regardless of the underlying structure (Heap or Clustered Index), locking becomes easier and in a certain way more “lightweight”.
As we saw, the optimal way to use this feature is to additionally turn on Read Committed Snapshot Isolation (RCSI) and – if this makes sense in terms of the access pattern for storing the data this way – to use a Clustered Index to organize the tables. Either way, we benefit from the fact that the Lock Manager has to manage fewer locks, which saves a significant amount of memory in any case.
The article SQL Server 2025 – Optimized Locking appeared first on dbi Blog.
dbhome prepatch failing on ODA due to missing sqldeveloper files
I have recently been patching RAC databases on an ODA 2-HA configured with Data Guard, from version 19.20 to 19.26. During the prepatch I faced an issue with missing sqldeveloper files, for which I could find a solution. I would like to share this solution with you as it might save you some time.
Problem description
From node0, I ran the prepatch report.
[root@node0 ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0 Job details ---------------------------------------------------------------- ID: 68d89fc0-d5ed-47b5-a885-a5896b865262 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: Created Created: July 15, 2025 14:59:07 CEST Message: Use 'odacli describe-prepatchreport -i 68d89fc0-d5ed-47b5-a885-a5896b865262' to check details of results Task Name Start Time End Time Status ---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Checking the prepatch report, it seemed to be stuck evaluating the DBHome.
[root@node0 ~]# odacli describe-prepatchreport -i 68d89fc0-d5ed-47b5-a885-a5896b865262 Patch pre-check report ------------------------------------------------------------------------ Job ID: 68d89fc0-d5ed-47b5-a885-a5896b865262 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: RUNNING Created: July 15, 2025 2:59:07 PM CEST Result: Node Name --------------- node0 Pre-Check Status Comments ------------------------------ -------- -------------------------------------- __DB__ Validate DB Home ID Success Validated DB Home ID: 5fc7e39d-adff-4903-b308-b4046840a38f Validate patching tag Success Validated patching tag: 19.26.0.0.0. Is system provisioned Success Verified system is provisioned Validate minimum agent version Success Validated minimum agent version Is GI upgraded Success Validated GI is upgraded Validate available space for Success Validated free space required under db /u01 Validate glogin.sql file Success Successfully verified glogin.sql won't break patching Validate dbHomesOnACFS Success User has configured disk group for configured Database homes on ACFS Validate Oracle base Success Successfully validated Oracle Base Is DB clone available Success Successfully validated clone file exists Evaluate DBHome patching with Running RHP Validate command execution Success Validated command execution __ORACHK__ Running orachk Success Successfully ran Orachk Validate command execution Success Validated command execution Node Name --------------- node1 Pre-Check Status Comments ------------------------------ -------- -------------------------------------- __DB__ Validate DB Home ID Success Validated DB Home ID: 5fc7e39d-adff-4903-b308-b4046840a38f Validate patching tag Success Validated patching tag: 19.26.0.0.0. Is system provisioned Success Verified system is provisioned Validate minimum agent version Success Validated minimum agent version Is GI upgraded Success Validated GI is upgraded Validate available space for Success Validated free space required under db /u01 Validate glogin.sql file Success Successfully verified glogin.sql won't break patching Validate dbHomesOnACFS Success User has configured disk group for configured Database homes on ACFS Validate Oracle base Success Successfully validated Oracle Base Is DB clone available Success Successfully validated clone file exists Evaluate DBHome patching with Running RHP Validate command execution Success Validated command execution __ORACHK__ Running orachk Success Successfully ran Orachk Validate command execution Success Validated command execution
Looking at the prepatch job status, it finished with a Failure about 10 minutes after starting.
[root@node0 ~]# odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262 Job details ---------------------------------------------------------------- ID: 68d89fc0-d5ed-47b5-a885-a5896b865262 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: Failure (To view Error Correlation report, run "odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262 --ecr" command) Created: July 15, 2025 14:59:07 CEST Message: Use 'odacli describe-prepatchreport -i ' to check prepatch resultsDCS-10292:One or more checks failed while checking for patching readiness. Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- task:TaskLockWrapper_37358 node0 July 15, 2025 14:59:12 CEST July 15, 2025 15:11:40 CEST Failure task:TaskSequential_37359 node0 July 15, 2025 14:59:12 CEST July 15, 2025 15:11:40 CEST Failure Setting up SSH equivalence node0 July 15, 2025 14:59:12 CEST July 15, 2025 14:59:14 CEST Success Setting up SSH equivalence node0 July 15, 2025 14:59:14 CEST July 15, 2025 14:59:16 CEST Success Run patching pre-checks node0 July 15, 2025 14:59:16 CEST July 15, 2025 15:11:40 CEST Success task:TaskSequential_39928 node0 July 15, 2025 14:59:41 CEST July 15, 2025 15:04:28 CEST Failure Creating ACFS database home node0 July 15, 2025 14:59:47 CEST July 15, 2025 14:59:47 CEST Success Validating dbHome available space node0 July 15, 2025 14:59:47 CEST July 15, 2025 14:59:47 CEST Success Validating dbHome available space node1 July 15, 2025 14:59:47 CEST July 15, 2025 14:59:47 CEST Success Creating DbHome Directory node1 July 15, 2025 14:59:48 CEST July 15, 2025 14:59:48 CEST Success Create required directories node0 July 15, 2025 14:59:48 CEST July 15, 2025 14:59:48 CEST Success Extract DB clone node0 July 15, 2025 14:59:48 CEST July 15, 2025 15:00:43 CEST Success ProvDbHome by using RHP node0 July 15, 2025 15:00:43 CEST July 15, 2025 15:02:27 CEST Success Enable DB options node0 July 15, 2025 15:02:27 CEST July 15, 2025 15:02:35 CEST Success Creating wallet for DB Client node0 July 15, 2025 15:02:38 CEST July 15, 2025 15:02:38 CEST Success task:TaskSequential_40059 node0 July 15, 2025 15:02:39 CEST July 15, 2025 15:04:28 CEST Failure Precheck Patch databases by RHP node0 July 15, 2025 15:02:39 CEST July 15, 2025 15:04:28 CEST Failure Check pre-check status node0 July 15, 2025 15:11:40 CEST July 15, 2025 15:11:40 CEST Failure [root@node0 ~]#
This was the job result of the second attempt. The first attempt showed the following tasks:
Task Name Node Name Start Time End Time Status
---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ----------------
task:TaskLockWrapper_62015 node0 July 15, 2025 17:57:06 CEST July 15, 2025 18:09:43 CEST Failure
task:TaskSequential_62016 node0 July 15, 2025 17:57:06 CEST July 15, 2025 18:09:43 CEST Failure
Setting up SSH equivalence node0 July 15, 2025 17:57:06 CEST July 15, 2025 17:57:08 CEST Success
Setting up SSH equivalence node0 July 15, 2025 17:57:08 CEST July 15, 2025 17:57:10 CEST Success
Run patching pre-checks node0 July 15, 2025 17:57:10 CEST July 15, 2025 18:09:43 CEST Success
task:TaskSequential_66113 node0 July 15, 2025 17:57:40 CEST July 15, 2025 18:02:29 CEST Failure
Creating ACFS database home node0 July 15, 2025 17:57:46 CEST July 15, 2025 17:57:46 CEST Success
Validating dbHome available space node0 July 15, 2025 17:57:46 CEST July 15, 2025 17:57:46 CEST Success
Validating dbHome available space node1 July 15, 2025 17:57:46 CEST July 15, 2025 17:57:46 CEST Success
Creating DbHome Directory node1 July 15, 2025 17:57:47 CEST July 15, 2025 17:57:47 CEST Success
Create required directories node0 July 15, 2025 17:57:47 CEST July 15, 2025 17:57:47 CEST Success
Extract DB clone node0 July 15, 2025 17:57:47 CEST July 15, 2025 17:58:41 CEST Success
ProvDbHome by using RHP node0 July 15, 2025 17:58:41 CEST July 15, 2025 18:00:26 CEST Success
Enable DB options node0 July 15, 2025 18:00:27 CEST July 15, 2025 18:00:35 CEST Success
Creating wallet for DB Client node0 July 15, 2025 18:00:38 CEST July 15, 2025 18:00:38 CEST Success
task:TaskSequential_66226 node0 July 15, 2025 18:00:39 CEST July 15, 2025 18:02:29 CEST Failure
Precheck Patch databases by RHP node0 July 15, 2025 18:00:39 CEST July 15, 2025 18:02:29 CEST Failure
Check pre-check status node0 July 15, 2025 18:09:43 CEST July 15, 2025 18:09:43 CEST Failure
Checking the log of the DCS agent, I could find the following:
"taskResult" : "DCS-10001:Internal error encountered: PRGT-388 : Pre-patch verification through Configuration Verification Utility reported one or more errors.\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/low-level-api.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql.exe\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql\" does not exist on node \"node0\".\nPRVH-0102 : File \"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/README.md\" does not exist on node \"node0\"..",
The describe-job command with the --ecr option would give me the same:
[root@node0 ~]# odacli describe-job -i 68d89fc0-d5ed-47b5-a885-a5896b865262 --ecr ODA Assistant - Error Correlation report ---------------------------------------- Failed job ID: 68d89fc0-d5ed-47b5-a885-a5896b865262 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Start Time: July 15, 2025 14:59:12 CEST End Time: July 15, 2025 15:11:40 CEST EC report path: /opt/oracle/dcs/da/da_repo/546ca4ac-97e6-4038-bf6d-90986c7f21bf.json Failed Task Messages -------------------- [Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1] - Use 'odacli describe-prepatchreport -i ' to check prepatch resultsDCS-10292:One or more checks failed while checking for patching readiness. [task:TaskLockWrapper_37358] - DCS-10292:One or more checks failed while checking for patching readiness. [task:TaskSequential_39928] - DCS-10001:Internal error encountered: PRGT-388 : Pre-patch verification through Configuration Verification Utility reported one or more errors. PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/low-level-api.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar" does not exist on node "node0". 
PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql.exe" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/sql" does not exist on node "node0". PRVH-0102 : File "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/README.md" does not exist on node "node0".. ... ... ...
So the problem is definitively that the prepatch expects a sqldeveloper directory in the source dbhome to be patched: a directory which does not exist, has not existed for a while, and does not exist in any other dbhome on any other ODA the customer has (and the customer has several ODAs and dbhomes). What a strange “new” bug!
Resolution
I first checked and could confirm that my source dbhome really does not have any sqldeveloper folder.
[root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql* drwxr-xr-x 4 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl drwxr-xr-x 3 oracle oinstall 20480 Apr 17 2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj drwxr-xr-x 7 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus [root@node0 ~]#
Quickly searching on My Oracle Support, I could find the following Doc ID, which was exactly my problem:
ODA db home prepatch report keeps running/hangs at “Evaluate DBHome patching with RHP” (Doc ID 3067737.1)
OK, but reading the solution described in the Doc ID, it would not help me that much:
If you have old dbhomes on this ODA (or) any other ODA please copy "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_x/sqldeveloper/sqldeveloper" folder to the source home for which prepatch report is being run.
Then re-run the dbhome patching which should complete without any issues.
Checking an old backup I did a few years ago, I could find a sqldeveloper zip file in a patch storage directory.
oracle@node0:/mnt/tmp/backup_dbhome/tsrpri/OraDB19000_home3/ [DB11] tar tvf dbhome_3.tarz | grep -i sqldevelop -rw-r--r-- oracle/oinstall 23557640 2023-07-17 16:37 ./dbhome_3/.patch_storage/35320081_Jul_15_2023_12_54_11/files/sqldeveloper.zip
I uncompressed the archive file, and extracted the content of the zip file.
[oracle@node0 software_patching]$ unzip sqldeveloper.zip Archive: sqldeveloper.zip inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d6e65742e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d636f6d6d6f6e2e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/636f6d6d6f6e732d6c6f6767696e672e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a6178622d6170692e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f6a646263382e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d636f72652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6c6f772d6c6576656c2d6170692e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d6461746162696e642e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f7261636c65706b692e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f7364745f636572742e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/687474706d696d652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/786d6c70617273657276322d73616e732d6a6178702d73657276696365732e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/706c7567696e2d6170692e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a61636b736f6e2d616e6e6f746174696f6e732e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2d6d617070696e672e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/706f6d2e786d6c_o_ inflating: sqldeveloper/sqldeveloper/lib/636f6d6d6f6e732d636f6465632e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f72616a736f64612e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/616e746c722d72756e74696d652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d73716c636c2e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a617661782e6a736f6e2e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f7364745f636f72652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/ext/736c66346a2d6a646b31342e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/ext/6c69717569626173652d636f72652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/ext/6f7261636c652d6c69717569626173652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/ext/736c66346a2d6170692e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6462746f6f6c732d687474702e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/68747470636c69656e742e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6f72616931386e2d7574696c6974792e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/737472696e6774656d706c6174652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a7363682e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/68747470636f72652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a6c696e652e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/786462362e6a6172_o_ inflating: sqldeveloper/sqldeveloper/lib/6a7a6c69622e6a6172_o_ inflating: sqldeveloper/sqldeveloper/bin/524541444d452e6d64_o_ inflating: sqldeveloper/sqldeveloper/bin/73716c2e657865_o_ inflating: sqldeveloper/sqldeveloper/bin/73716c_o_ inflating: sqldeveloper/modules/6a617661782e736572766c65742e6a617661782e736572766c65742d6170692e6a6172_o_ [oracle@node0 software_patching]$
But that zip file did not really contain any of the missing files the prepatch was complaining about… I decided anyhow to copy it to my source dbhome. This dbhome will not be used any more: as you might know, the prepatch first creates a new dbhome, since the patching process is an out-of-place patching, and the database is then moved to this new dbhome where datapatch is run. So any file that I copy to my current source dbhome will not have any impact.
[root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql* drwxr-xr-x 4 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl drwxr-xr-x 3 oracle oinstall 20480 Apr 17 2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj drwxr-xr-x 7 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus [root@node0 ~]# cp -pR /mnt/tmp/software_patching/sqldeveloper /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/ [root@node0 ~]# ls -ld /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sql* drwxr-xr-x 4 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl drwxr-xr-x 4 oracle oinstall 20480 Jul 15 17:00 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper drwxr-xr-x 3 oracle oinstall 20480 Apr 17 2019 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlj drwxr-xr-x 7 oracle oinstall 20480 Jul 17 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlpatch drwxr-xr-x 6 oracle oinstall 20480 Apr 30 13:56 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlplus [root@node0 ~]#
As expected, there are no jar files in the copied directory.
[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ total 25M -rw-r--r-- 1 oracle oinstall 1.9M Jul 17 2023 786d6c70617273657276322d73616e732d6a6178702d73657276696365732e6a6172_o_ -rw-r--r-- 1 oracle oinstall 257K Jul 17 2023 786462362e6a6172_o_ -rw-r--r-- 1 oracle oinstall 222K Jul 17 2023 737472696e6774656d706c6174652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 11K Jul 17 2023 706f6d2e786d6c_o_ -rw-r--r-- 1 oracle oinstall 73K Jul 17 2023 706c7567696e2d6170692e6a6172_o_ -rw-r--r-- 1 oracle oinstall 300K Jul 17 2023 6f7364745f636f72652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 201K Jul 17 2023 6f7364745f636572742e6a6172_o_ -rw-r--r-- 1 oracle oinstall 410K Jul 17 2023 6f72616a736f64612e6a6172_o_ -rw-r--r-- 1 oracle oinstall 1.6M Jul 17 2023 6f72616931386e2e6a6172_o_ -rw-r--r-- 1 oracle oinstall 490K Jul 17 2023 6f72616931386e2d7574696c6974792e6a6172_o_ -rw-r--r-- 1 oracle oinstall 88K Jul 17 2023 6f72616931386e2d6d617070696e672e6a6172_o_ -rw-r--r-- 1 oracle oinstall 301K Jul 17 2023 6f7261636c65706b692e6a6172_o_ -rw-r--r-- 1 oracle oinstall 4.0M Jul 17 2023 6f6a646263382e6a6172_o_ -rw-r--r-- 1 oracle oinstall 500K Jul 17 2023 6c6f772d6c6576656c2d6170692e6a6172_o_ -rw-r--r-- 1 oracle oinstall 65K Jul 17 2023 6a7a6c69622e6a6172_o_ -rw-r--r-- 1 oracle oinstall 276K Jul 17 2023 6a7363682e6a6172_o_ -rw-r--r-- 1 oracle oinstall 263K Jul 17 2023 6a6c696e652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 123K Jul 17 2023 6a6178622d6170692e6a6172_o_ -rw-r--r-- 1 oracle oinstall 84K Jul 17 2023 6a617661782e6a736f6e2e6a6172_o_ -rw-r--r-- 1 oracle oinstall 1.3M Jul 17 2023 6a61636b736f6e2d6461746162696e642e6a6172_o_ -rw-r--r-- 1 oracle oinstall 318K Jul 17 2023 6a61636b736f6e2d636f72652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 66K Jul 17 2023 6a61636b736f6e2d616e6e6f746174696f6e732e6a6172_o_ -rw-r--r-- 1 oracle oinstall 41K Jul 17 2023 687474706d696d652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 320K Jul 17 2023 68747470636f72652e6a6172_o_ -rw-r--r-- 1 oracle oinstall 749K Jul 17 2023 68747470636c69656e742e6a6172_o_ -rw-r--r-- 1 oracle oinstall 429K Jul 17 2023 6462746f6f6c732d73716c636c2e6a6172_o_ -rw-r--r-- 1 oracle oinstall 131K Jul 17 2023 6462746f6f6c732d6e65742e6a6172_o_ -rw-r--r-- 1 oracle oinstall 50K Jul 17 2023 6462746f6f6c732d687474702e6a6172_o_ -rw-r--r-- 1 oracle oinstall 7.1M Jul 17 2023 6462746f6f6c732d636f6d6d6f6e2e6a6172_o_ -rw-r--r-- 1 oracle oinstall 61K Jul 17 2023 636f6d6d6f6e732d6c6f6767696e672e6a6172_o_ -rw-r--r-- 1 oracle oinstall 328K Jul 17 2023 636f6d6d6f6e732d636f6465632e6a6172_o_ -rw-r--r-- 1 oracle oinstall 160K Jul 17 2023 616e746c722d72756e74696d652e6a6172_o_ drwxr-xr-x 2 oracle oinstall 20K Jul 15 17:00 ext [root@node0 ~]#
Strangely, the missing files the prepatch report is expecting come from sqlcl.
[root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/lib total 37M -rw-r--r-- 1 oracle oinstall 1.8M Jul 16 2023 ucp.jar -rw-r--r-- 1 oracle oinstall 8.3K Jul 16 2023 slf4j-jdk14.jar -rw-r--r-- 1 oracle oinstall 306K Jul 16 2023 osdt_core.jar -rw-r--r-- 1 oracle oinstall 206K Jul 16 2023 osdt_cert.jar -rw-r--r-- 1 oracle oinstall 441K Jul 16 2023 orajsoda.jar -rw-r--r-- 1 oracle oinstall 301K Jul 16 2023 oraclepki.jar -rw-r--r-- 1 oracle oinstall 126K Jul 16 2023 javax.json.jar -rw-r--r-- 1 oracle oinstall 237K Jul 16 2023 jansi.jar -rw-r--r-- 1 oracle oinstall 74K Jul 16 2023 jackson-annotations.jar -rw-r--r-- 1 oracle oinstall 2.8M Jul 16 2023 guava-with-lf.jar -rw-r--r-- 1 oracle oinstall 296K Jul 16 2023 dbtools-net.jar -rw-r--r-- 1 oracle oinstall 354K Jul 16 2023 commons-codec.jar -rw-r--r-- 1 oracle oinstall 1.9M Jul 16 2023 xmlparserv2_sans_jaxp_services.jar -rw-r--r-- 1 oracle oinstall 260K Jul 16 2023 xdb.jar -rw-r--r-- 1 oracle oinstall 248K Jul 16 2023 ST4.jar -rw-r--r-- 1 oracle oinstall 361K Jul 16 2023 sshd-sftp.jar -rw-r--r-- 1 oracle oinstall 120K Jul 16 2023 sshd-scp.jar -rw-r--r-- 1 oracle oinstall 920K Jul 16 2023 sshd-core.jar -rw-r--r-- 1 oracle oinstall 42K Jul 16 2023 sshd-contrib.jar -rw-r--r-- 1 oracle oinstall 921K Jul 16 2023 sshd-common.jar -rw-r--r-- 1 oracle oinstall 41K Jul 16 2023 slf4j-api.jar -rw-r--r-- 1 oracle oinstall 496K Jul 16 2023 orai18n-utility.jar -rw-r--r-- 1 oracle oinstall 89K Jul 16 2023 orai18n-mapping.jar -rw-r--r-- 1 oracle oinstall 1.6M Jul 16 2023 orai18n.jar -rw-r--r-- 1 oracle oinstall 4.9M Jul 16 2023 ojdbc8.jar -rw-r--r-- 1 oracle oinstall 976K Jul 16 2023 jline3.jar -rw-r--r-- 1 oracle oinstall 376K Jul 16 2023 jdbcrest.jar -rw-r--r-- 1 oracle oinstall 126K Jul 16 2023 jaxb-api.jar -rw-r--r-- 1 oracle oinstall 56K Jul 16 2023 javax.activation-api.jar -rw-r--r-- 1 oracle oinstall 28K Jul 16 2023 jackson-jr-stree.jar -rw-r--r-- 1 oracle oinstall 98K Jul 16 2023 jackson-jr-objects.jar -rw-r--r-- 1 oracle oinstall 367K Jul 16 2023 jackson-core.jar -rw-r--r-- 1 oracle oinstall 830K Jul 16 2023 httpcore5.jar -rw-r--r-- 1 oracle oinstall 780K Jul 16 2023 httpclient5.jar -rw-r--r-- 1 oracle oinstall 693K Jul 16 2023 dbtools-sqlcl.jar -rw-r--r-- 1 oracle oinstall 80K Jul 16 2023 dbtools-http.jar -rw-r--r-- 1 oracle oinstall 131K Jul 16 2023 dbtools-datapump.jar -rw-r--r-- 1 oracle oinstall 486K Jul 16 2023 dbtools-data.jar -rw-r--r-- 1 oracle oinstall 7.5M Jul 16 2023 dbtools-common.jar -rw-r--r-- 1 oracle oinstall 63K Jul 16 2023 commons-logging.jar -rw-r--r-- 1 oracle oinstall 164K Jul 16 2023 antlr-runtime.jar drwxr-xr-x 2 oracle oinstall 20K Jul 17 2023 ext [root@node0 ~]#
So I decided to copy the files to my new sqldeveloper lib folder.
[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/lib/*.jar /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ [root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/*.jar -rw-r--r-- 1 oracle oinstall 1.8M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ucp.jar -rw-r--r-- 1 oracle oinstall 8.3K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/slf4j-jdk14.jar -rw-r--r-- 1 oracle oinstall 306K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_core.jar -rw-r--r-- 1 oracle oinstall 206K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/osdt_cert.jar -rw-r--r-- 1 oracle oinstall 441K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orajsoda.jar -rw-r--r-- 1 oracle oinstall 301K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/oraclepki.jar -rw-r--r-- 1 oracle oinstall 126K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.json.jar -rw-r--r-- 1 oracle oinstall 237K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jansi.jar -rw-r--r-- 1 oracle oinstall 74K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-annotations.jar -rw-r--r-- 1 oracle oinstall 2.8M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/guava-with-lf.jar -rw-r--r-- 1 oracle oinstall 296K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-net.jar -rw-r--r-- 1 oracle oinstall 354K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-codec.jar -rw-r--r-- 1 oracle oinstall 1.9M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/xmlparserv2_sans_jaxp_services.jar -rw-r--r-- 1 oracle oinstall 260K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/xdb.jar -rw-r--r-- 1 oracle oinstall 248K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ST4.jar -rw-r--r-- 1 oracle oinstall 361K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-sftp.jar -rw-r--r-- 1 oracle oinstall 120K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-scp.jar -rw-r--r-- 1 oracle oinstall 920K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-core.jar -rw-r--r-- 1 oracle oinstall 42K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-contrib.jar -rw-r--r-- 1 oracle oinstall 921K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/sshd-common.jar -rw-r--r-- 1 oracle oinstall 41K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/slf4j-api.jar -rw-r--r-- 1 oracle oinstall 496K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-utility.jar -rw-r--r-- 1 oracle oinstall 89K Jul 16 2023 
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n-mapping.jar -rw-r--r-- 1 oracle oinstall 1.6M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/orai18n.jar -rw-r--r-- 1 oracle oinstall 4.9M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ojdbc8.jar -rw-r--r-- 1 oracle oinstall 976K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jline3.jar -rw-r--r-- 1 oracle oinstall 376K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jdbcrest.jar -rw-r--r-- 1 oracle oinstall 126K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jaxb-api.jar -rw-r--r-- 1 oracle oinstall 56K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/javax.activation-api.jar -rw-r--r-- 1 oracle oinstall 28K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-jr-stree.jar -rw-r--r-- 1 oracle oinstall 98K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-jr-objects.jar -rw-r--r-- 1 oracle oinstall 367K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/jackson-core.jar -rw-r--r-- 1 oracle oinstall 830K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/httpcore5.jar -rw-r--r-- 1 oracle oinstall 780K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/httpclient5.jar -rw-r--r-- 1 oracle oinstall 693K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-sqlcl.jar -rw-r--r-- 1 oracle oinstall 80K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-http.jar -rw-r--r-- 1 oracle oinstall 131K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-datapump.jar -rw-r--r-- 1 oracle oinstall 486K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-data.jar -rw-r--r-- 1 oracle oinstall 7.5M Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/dbtools-common.jar -rw-r--r-- 1 oracle oinstall 63K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/commons-logging.jar -rw-r--r-- 1 oracle oinstall 164K Jul 16 2023 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/antlr-runtime.jar [root@node0 ~]#
I did the same for the missing files in the bin directory.
[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqlcl/bin/* /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin/ [root@node0 ~]# ls -ltrh /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/bin total 372K -rw-r--r-- 1 oracle oinstall 119 Jul 16 2023 version.txt -rwxr-xr-x 1 oracle oinstall 148K Jul 16 2023 sql.exe -rwxr-xr-x 1 oracle oinstall 21K Jul 16 2023 sql -rw-r--r-- 1 oracle oinstall 2.4K Jul 16 2023 README.md -rw-r--r-- 1 oracle oinstall 4.3K Jul 16 2023 license.txt -rwxr-xr-x 1 oracle oinstall 1.2K Jul 16 2023 dependencies.txt -rw-r--r-- 1 oracle oinstall 15K Jul 17 2023 73716c_o_ -rw-r--r-- 1 oracle oinstall 126K Jul 17 2023 73716c2e657865_o_ -rw-r--r-- 1 oracle oinstall 419 Jul 17 2023 524541444d452e6d64_o_ [root@node0 ~]#
I’m still convinced this is a bug where the process looks in a sqldeveloper folder when it really wants to check sqlcl, because all the missing files are the ones from sqlcl. Anyhow, let’s move forward with the resolution.
There is one last jar file that the prepatch is still complaining about, low-level-api.jar, which I could not find in any of the source dbhomes. But I could find it in the new dbhome, here dbhome_5, that the prepatch process created during the failed job (the initial steps of the prepatch command, Extract DB clone, had run successfully).
[root@node0 ~]# find /u01/app/odaorahome/oracle/ -name low-level-api.jar /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/sqlcl/lib/low-level-api.jar
I decided to copy it as well to the source sqldeveloper lib folder.
[root@node0 ~]# cp -p /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_5/sqlcl/lib/low-level-api.jar /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1/sqldeveloper/sqldeveloper/lib/ [root@node0 ~]#
And I ran the prepatch again.
[root@node0 ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0 Job details ---------------------------------------------------------------- ID: fe0dfd66-79b8-4f38-afe2-b2cef1d15943 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: Created Created: July 15, 2025 17:37:31 CEST Message: Use 'odacli describe-prepatchreport -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943' to check details of results Task Name Start Time End Time Status ---------------------------------------- ---------------------------------------- ---------------------------------------- ---------------- [root@node0 ~]#
This time it was successful!
[root@node0 ~]# odacli describe-job -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943 Job details ---------------------------------------------------------------- ID: fe0dfd66-79b8-4f38-afe2-b2cef1d15943 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: Success Created: July 15, 2025 17:37:31 CEST Message: Successfully finished prechecking for patching the databases [DB1_IWB] with rolling option Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Setting up SSH equivalence node0 July 15, 2025 17:37:37 CEST July 15, 2025 17:37:39 CEST Success Setting up SSH equivalence node0 July 15, 2025 17:37:39 CEST July 15, 2025 17:37:40 CEST Success Run patching pre-checks node0 July 15, 2025 17:37:40 CEST July 15, 2025 17:47:11 CEST Success Precheck Patch databases by RHP node0 July 15, 2025 17:38:05 CEST July 15, 2025 17:39:56 CEST Success TDE parameter update node1 July 15, 2025 17:39:56 CEST July 15, 2025 17:39:56 CEST Success [root@node0 ~]# odacli describe-prepatchreport -i fe0dfd66-79b8-4f38-afe2-b2cef1d15943 Patch pre-check report ------------------------------------------------------------------------ Job ID: fe0dfd66-79b8-4f38-afe2-b2cef1d15943 Description: Patch pre-checks for [DB, ORACHKDB] to 19.26.0.0: DbHome is OraDB19000_home1 Status: SUCCESS Created: July 15, 2025 5:37:31 PM CEST Result: All pre-checks succeeded Node Name --------------- node0 Pre-Check Status Comments ------------------------------ -------- -------------------------------------- __DB__ Validate DB Home ID Success Validated DB Home ID: 5fc7e39d-adff-4903-b308-b4046840a38f Validate patching tag Success Validated patching tag: 19.26.0.0.0. Is system provisioned Success Verified system is provisioned Validate minimum agent version Success Validated minimum agent version Is GI upgraded Success Validated GI is upgraded Validate available space for Success Validated free space required under db /u01 Validate glogin.sql file Success Successfully verified glogin.sql won't break patching Validate dbHomesOnACFS Success User has configured disk group for configured Database homes on ACFS Validate Oracle base Success Successfully validated Oracle Base Is DB clone available Success Successfully validated clone file exists Evaluate DBHome patching with Success Successfully validated updating RHP dbhome with RHP. and local patching is possible Validate command execution Success Validated command execution __ORACHK__ Running orachk Success Successfully ran Orachk Validate command execution Success Validated command execution Node Name --------------- node1 Pre-Check Status Comments ------------------------------ -------- -------------------------------------- __DB__ Validate DB Home ID Success Validated DB Home ID: 5fc7e39d-adff-4903-b308-b4046840a38f Validate patching tag Success Validated patching tag: 19.26.0.0.0. 
Is system provisioned Success Verified system is provisioned Validate minimum agent version Success Validated minimum agent version Is GI upgraded Success Validated GI is upgraded Validate available space for Success Validated free space required under db /u01 Validate glogin.sql file Success Successfully verified glogin.sql won't break patching Validate dbHomesOnACFS Success User has configured disk group for configured Database homes on ACFS Validate Oracle base Success Successfully validated Oracle Base Is DB clone available Success Successfully validated clone file exists Evaluate DBHome patching with Success Successfully validated updating RHP dbhome with RHP. and local patching is possible Validate command execution Success Validated command execution __ORACHK__ Running orachk Success Successfully ran Orachk Validate command execution Success Validated command execution [root@node0 ~]#
And, of course, later on during the maintenance window, the patching of the dbhome was successful.
[root@node0 ~]# /opt/oracle/dcs/bin/odacli update-dbhome -i 5fc7e39d-adff-4903-b308-b4046840a38f -v 19.26.0.0.0 { "jobId" : "b4467935-edd0-46a6-8c84-04e216d05973", "status" : "Created", "message" : "", "reports" : [ ], "createTimestamp" : "July 16, 2025 00:21:17 CEST", "resourceList" : [ ], "description" : "DB Home Patching to 19.26.0.0.0: Home ID is 5fc7e39d-adff-4903-b308-b4046840a38f", "updatedTime" : "July 16, 2025 00:21:17 CEST", "jobType" : null, "cpsMetadata" : null } [root@node0 ~]# [root@node0 ~]# odacli describe-job -i b4467935-edd0-46a6-8c84-04e216d05973 Job details ---------------------------------------------------------------- ID: b4467935-edd0-46a6-8c84-04e216d05973 Description: DB Home Patching to 19.26.0.0.0: Home ID is 5fc7e39d-adff-4903-b308-b4046840a38f Status: Success Created: July 16, 2025 00:21:17 CEST Message: Task Name Node Name Start Time End Time Status ---------------------------------------- ------------------------- ---------------------------------------- ---------------------------------------- ---------------- Creating wallet for DB Client node0 July 16, 2025 00:21:52 CEST July 16, 2025 00:21:53 CEST Success Patch databases by RHP - [DB1] node0 July 16, 2025 00:21:53 CEST July 16, 2025 00:27:37 CEST Success Updating database metadata node1 July 16, 2025 00:27:37 CEST July 16, 2025 00:27:37 CEST Success Set log_archive_dest for Database node0 July 16, 2025 00:27:37 CEST July 16, 2025 00:27:38 CEST Success Generating and saving BOM node0 July 16, 2025 00:27:39 CEST July 16, 2025 00:28:30 CEST Success Generating and saving BOM node1 July 16, 2025 00:27:39 CEST July 16, 2025 00:28:32 CEST Success TDE parameter update node1 July 16, 2025 00:29:23 CEST July 16, 2025 00:29:23 CEST Success [root@node0 ~]#
To wrap up…
I faced this prepatch issue while preparing my maintenance window activity, during which I had to patch several RAC databases using separate dbhomes and configured with Data Guard. The maintenance window during which we can stop the application only runs from 12am to 5am. Therefore I could not take the risk of losing time trying whether the force option of update-dbhome would do the job. I was also convinced that, with the prepatch status being neither success nor failed, the patching command could not be executed. The second point is that the force option is usually helpful when there is a failure in orachk that you know has no impact. In my case, a failed step on the task “Evaluate DBHome patching with RHP” is something that needs to be resolved before patching the dbhome.
I had to apply the same resolution to all my dbhomes on both the primary and the standby side.
L’article dbhome prepatch failing on ODA due to missing sqldeveloper files est apparu en premier sur dbi Blog.
SQL Server 2025 – OPTIMIZED_SP_EXECUTESQL
Within the past few days I was investigating another new feature of SQL Server 2025: OPTIMIZED_SP_EXECUTESQL. I was curious about the capabilities and the behaviour.
OPTIMIZED_SP_EXECUTESQL is a database scoped configuration which allows us to get rid of the overhead of compiling and producing execution plans when we deal with dynamic SQL. The first user session which launches a given dynamic statement using the stored procedure “sp_executesql” will have to wait until SQL Server has gone through the compilation process and produced an execution plan. Then, all following user sessions can benefit from the work done earlier, since no further compilation work is left to be done.
Without this feature enabled, SQL Server will produce an execution plan for each execution of the same dynamic query – even if the query text is exactly the same – which means we lose time because SQL Server has to compile every time we execute the query.
Furthermore, we are facing plan cache pollution because we have an unnecessarily large number of execution plans in the cache – for nothing! I will dig into a demo to show you the behaviour with and without this feature enabled. First of all, I create a tiny database called “OptimizedSpExecutesql”:
CREATE DATABASE [OptimizedSpExecutesql]
GO
USE [OptimizedSpExecutesql]
GO
CREATE TABLE dbo.Orders
(
OrderID INT IDENTITY(1,1) PRIMARY KEY,
OrderType Varchar(20),
OrderDate DATETIME DEFAULT GETDATE()
)
GO
INSERT INTO dbo.Orders (OrderType) VALUES
('Bentley'), ('Aston Martin'), ('Bentley'), ('Bugatti'),
('Lamborghini'), ('Bentley'), ('Aston Martin'), ('Chevrolet'),
('Bentley'), ('Bugatti'), ('Aston Martin'), ('Lamborghini');
GO
The result set looks as follows:

Now I check that the feature is disabled, to simulate the “old” world (i.e. the old behaviour):
-- Turn off the features - just to be sure
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_SP_EXECUTESQL = OFF;
GO
ALTER DATABASE SCOPED CONFIGURATION SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = OFF;
GO
SELECT name, value, is_value_default FROM sys.database_scoped_configurations
WHERE name IN ('OPTIMIZED_SP_EXECUTESQL', 'ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY')
GO
As you can see, this is the default value for both of the database scoped settings:

Let’s clean up the cache first and then execute the TSQL-Statement shown below. I will execute it with “ostress.exe” to simulate 100 sessions, each executing the statement once:
-- Clean the plan cache first
DBCC FREEPROCCACHE
GO
-- Define a parameterized query which will be executed via ostress.exe afterwards
-- This code below is just to mention the TSQL which will be executed
DECLARE @SQL NVarchar(MAX)
SET @SQL = N'SELECT COUNT(*) FROM dbo.Orders WHERE OrderType = @Type';
DECLARE @Params NVARCHAR(MAX) = N'@Type Varchar(20)';
EXEC sys.sp_executesql @SQL, @Params, @Type = N'Bentley';
GO
Here is the execution of the statement with “ostress.exe”:

And the output of the plan cache shows us that SQL Server has produced several execution plans for the same sql_handle (i.e. the same statement):

The result here is a little bit unpredictable – sometimes you get 10 different plans, sometimes 85 and so on and so forth.
But all executions have one thing in common: we end up with an enormous number of execution plans cached within the plan cache for exactly the same – meaning one! – query…
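To see this pollution for yourself, you can count the cached plans for the statement directly from the plan cache DMVs. This is just a minimal sketch; the LIKE filter on the statement text is my assumption and may need adjusting to your statement:
-- Count how many plans are cached for our parameterized statement (sketch)
SELECT  qs.query_hash,
        COUNT(DISTINCT qs.plan_handle) AS cached_plans,
        SUM(qs.execution_count)        AS total_executions
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE N'%WHERE OrderType = @Type%'
  AND   st.text NOT LIKE N'%dm_exec_query_stats%'  -- exclude this monitoring query itself
GROUP BY qs.query_hash;
Without the feature enabled, cached_plans is typically much greater than 1; with it enabled, you should end up with a single plan.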
The feature turned on
Now I want to show you the behaviour of SQL Server with the new feature turned on. Therefore, I will set the database scoped setting(s) to ON.
By the way: as you can see in the code section below, I turn on two switches at the same time. The first one is the feature itself, which reduces compiling to a minimum, while the second one refreshes statistics asynchronously at low priority (instead of causing blocking through a synchronous SCH-M lock). This means that statistics-update stalls are avoided, because SQL Server uses the current statistics and updates them later if it cannot obtain the SCH-M lock at that moment.
-- Change the behaviour having the feature turned on
USE [OptimizedSpExecutesql]
GO
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_SP_EXECUTESQL = ON;
GO
ALTER DATABASE SCOPED CONFIGURATION SET ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY = ON;
GO
SELECT name, value, is_value_default FROM sys.database_scoped_configurations
WHERE name IN ('OPTIMIZED_SP_EXECUTESQL', 'ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY')
GO

After clearing the cache and executing the TSQL-Statement via “ostress.exe” again, we see the following content in the plan cache:

One plan with an execution count of 100 – as expected and also as desired!
Get a deeper insight into what is going on behind the scenes
I want to show you what is captured within an Extended Events Session when we execute the same query as above – once without the new feature and once with it.
The Extended Events Session looks as follows:
CREATE EVENT SESSION [QueryPerformanceCapture] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
ACTION (sqlos.task_time, sqlserver.client_app_name, sqlserver.database_id, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.sql_statement_recompile (
ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.session_id, sqlserver.sql_text)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.sp_statement_completed (
ACTION (sqlserver.session_id, sqlserver.sql_text, sqlserver.client_app_name, sqlserver.database_id)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%')),
ADD EVENT sqlserver.query_post_compilation_showplan(
ACTION(sqlserver.sql_text, sqlserver.session_id, sqlserver.client_app_name, sqlserver.database_id)
WHERE ([sqlserver].[sql_text] LIKE N'%WHERE OrderType = @Type%'))
ADD TARGET package0.event_file (
SET filename = N'C:\DATEN\SQL\SQL_SERVER_HOME\MSSQL17.MSSQLSERVER\ANALYSIS\QueryPerformanceCapture.xel', max_file_size = (10), max_rollover_files = (5))
WITH (
MAX_MEMORY = 4096 KB,
EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY = 30 SECONDS,
MAX_EVENT_SIZE = 0 KB,
MEMORY_PARTITION_MODE = NONE,
TRACK_CAUSALITY = OFF,
STARTUP_STATE = ON
);
GO
-- Start the event session
ALTER EVENT SESSION [QueryPerformanceCapture] ON SERVER
STATE = START;
GO
As you can see, during the first run of the TSQL-Statement via “ostress.exe”, 100 completion events were captured (because we launched 100 executions of the code) and 47 events related to compilation:

Regarding the time spent on those 47 compile events, in this example we land at 16,813,998 microseconds, which is roughly 16.8 seconds:

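To reproduce these numbers, you can read the captured .xel files with sys.fn_xe_file_target_read_file and aggregate the compile events. A minimal sketch (the file path matches the session definition above; I assume the duration field of query_post_compilation_showplan, reported in microseconds, is the value to sum):
-- Count compile events and sum their duration from the captured event files (sketch)
;WITH xe AS (
    SELECT CAST(event_data AS XML) AS x
    FROM sys.fn_xe_file_target_read_file(
            N'C:\DATEN\SQL\SQL_SERVER_HOME\MSSQL17.MSSQLSERVER\ANALYSIS\QueryPerformanceCapture*.xel',
            NULL, NULL, NULL)
)
SELECT COUNT(*) AS compile_events,
       SUM(x.value('(event/data[@name="duration"]/value)[1]', 'bigint')) AS total_duration_us
FROM xe
WHERE x.value('(event/@name)[1]', 'nvarchar(128)') = N'query_post_compilation_showplan';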
As soon as I turned on the feature at database level, we see only one event related to compilation:

Regarding the time spent on compiling now, we observe 91,142 microseconds (i.e. only about 0.09 seconds), because SQL Server had to do the work only once:

Conclusion
From my point of view, “OPTIMIZED_SP_EXECUTESQL” is, just like “Optimized Locking”, a real game changer. Or should I say: it could be?
Well, as always, it depends… It depends on the workload you are dealing with on your SQL Server instance or database. If you see a lot of dynamic SQL, enabling this feature is really valuable; otherwise it won’t have much of an effect. Therefore, I recommend not enabling this feature immediately on all your databases, but monitoring your workload first and making your decision after evaluating the results.
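If you want a quick indication of whether your plan cache is dominated by single-use dynamic statements before flipping the switch, the standard plan cache DMV already gives a rough picture (a sketch, not a full analysis):
-- Rough indicator of plan cache pollution per object type (sketch)
SELECT objtype,
       COUNT(*)                                         AS plans,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)   AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY cache_mb DESC;
A large share of single-use plans of objtype 'Prepared' is a good hint that OPTIMIZED_SP_EXECUTESQL is worth testing on that database.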
L’article SQL Server 2025 – OPTIMIZED_SP_EXECUTESQL est apparu en premier sur dbi Blog.
Oracle AutoUpgrade? Easy way to upgrade your Oracle Database! Part I
The AutoUpgrade utility is designed to automate the upgrade process: before starting upgrades, during upgrade deployments, and during post-upgrade checks and configuration migration.
The Oracle AutoUpgrade tool has been part of the Oracle Home since version 12.1.0.2.
It’s a Java-based program which can be downloaded from My Oracle Support: note 2485457.1.
It’s recommended to use the latest version (as usual, just like for OPatch, for example).
The most important point is that Oracle has desupported DBUA (Database Upgrade Assistant) and other manual upgrade methods for Oracle 23ai. AutoUpgrade is the only supported method.
This table shows the supported source database versions and the corresponding supported target database versions for AutoUpgrade (source → target):
- 11.2.0.4 → 12.2.0.1 (12.2 + DBJAN2019RU and newer)
- 12.1.0.2 → 18c (18.5 and newer)
- 12.2.0.1 → 19c (19.3 and newer)
- 18c → 21c (21.3 and newer)
- 19c → 23ai (23.4 and newer) and any future release
The AutoUpgrade workflow
Using AutoUpgrade means running four simple steps:
- Prepare
- Analyze
- Fixups
- Deploy
The first step is about downloading all the binaries we need to patch or upgrade our databases, and creating a configuration file used by AutoUpgrade so that it knows what to do and where to do it.
- Download the latest version of AutoUpgrade -> My Oracle Support ID 2485457.1
- Download the latest version of OPatch -> My Oracle Support ID 224346.1
- Download DBRU + OJVM+ DPBP+ Most Important One-off Patches -> My Oracle Support ID 2118136.2 + ID 555.1
The configuration file used by AutoUpgrade must contain some information such as:
- Source Oracle Home
- Target Oracle Home
- Patches repository
- Source Database SID
- and other advanced options
We will see that later in this blog, no worries!
Analyze
It’s the classic precheck used during any patch or upgrade process led by Oracle.
Fixups
Automatic or manual fixups, if necessary.
Deploy
It’s the main step, where AutoUpgrade runs the upgrade process on your source database.
In case of failure, AutoUpgrade is able to restart from where it failed… after you have fixed the errors yourself!
For this lab, we have:
- Oracle Linux Server 8
- Oracle Database Home 19.24 installed in /u01/app/oracle/product/19.24/dbhome_1
- Oracle standalone database : DB19
We want to patch this database to version 19.28, according to the Best Practices (for this, check Mike Dietrich’s blog: https://mikedietrichde.com/ ).
Preparation
We download the following patches:
- Patch 37960098 : DBRU 19.28
- Patch 38170982 : Data Pump Bundle Patches (My Oracle Support ID 2819284.1)
- Patch 37847857 : OJVM 19.28
- Patch 34672698 : [VOS] DB50: ORA-800: soft external error, arguments: [set priority failed], [vktm] , dism(16)
- Patch 34774667 : [AQ] ORA-7445 in Purge Queue Table / High CPU usage in SVCB Service
- Patch 29213893 : [QRY OPTIMIZER] DBMS_STATS Failing With Error Ora-1422 When Gathering Stats for User$ Table
We save them in the repository /home/oracle/autoupgrade/patches (for example).
In the same manner, we download the latest version of OPatch following My Oracle Support note 224346.1:
How to install the OPatch utility?
- Download the latest OPatch utility
Search for Patch 6880880 in MOS or from the URL
https://updates.oracle.com/download/6880880.html
- Take a backup of the older version of the OPatch utility under $ORACLE_HOME and unzip the downloaded file:
cd $ORACLE_HOME
mv OPatch OPatch.bkp
unzip <download directory>/p6880880_<version>_<platform>.zip
cd OPatch
./opatch version
The most recent version of AutoUpgrade, 25.3, is fully supported and can be downloaded via this link (version 20250509) or directly from oracle.com.
Oracle strongly recommends that you always use the latest version of AutoUpgrade. AutoUpgrade is backward compatible and can upgrade older releases of Oracle Database as well.
The latest ORDS 25.1, which supports the AutoUpgrade REST API, can be found here.
The last step of the preparation is to create the configuration file used by AutoUpgrade for our patching.
A simple config file (/home/oracle/patch_DB19.cfg) looks like this:
global.global_log_dir=/home/oracle/autoupgrade-patching/log
patch1.sid=DB19
patch1.log_dir=/home/oracle/autoupgrade-patching/DB19/log
patch1.source_home=/u01/app/oracle/product/19.24/dbhome_1
patch1.target_home=/u01/app/oracle/product/19.28/dbhome_1
patch1.restoration=YES
patch1.drop_grp_after_patching=YES
patch1.folder=/home/oracle/autoupgrade/patches
patch1.patch=RU,OJVM,OPATCH,DPBP,34672698,34774667,29213893
patch1.download=NO
An excellent tool for building this configuration file can be found at https://viniciusdba.com.br/autoupgrade-composer/
I must say: use it!
Some explanations about the parameters:
- source_home : the current Oracle Home used by the instance DB19
- target_home : the new Oracle Home targeted
- restoration : AutoUpgrade can create a Guaranteed Restore Point before the upgrade/patching operations
- drop_grp_after_patching=YES : by default, AutoUpgrade does not drop the GRP it created beforehand; with this parameter, it is dropped after a successful patching (see the query after this list to check restore points).
- folder : the full path to the repository where the patches have been downloaded.
- patch : what we want to patch :
- RU : Release Update
- OJVM: Oracle Java Virtual Machine
- Opatch
- DPBP : Data Pump Bundle Patch
- id of specific patches
- download : we can ask AutoUpgrade to download the patches itself.
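Since AutoUpgrade creates a Guaranteed Restore Point before patching, it can be useful to verify that the GRP exists, and that it is gone again once dropped. A minimal check from SQL*Plus (run as SYSDBA):
-- List restore points and flag the guaranteed ones
SELECT name,
       scn,
       time,
       guarantee_flashback_database
FROM   v$restore_point
ORDER BY time;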
Now, we have all the stuff to begin our database patching!
Analyze
Just run the following command:
java -jar autoupgrade.jar -config /home/oracle/patch_DB19.cfg -patch -mode analyze
The output looks like this:
AutoUpgrade Patching 25.3.250509 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be analyzed
Type 'help' to list console commands
patch> lsj
+----+-------+---------+---------+-------+----------+-------+----------------+
|Job#|DB_NAME| STAGE|OPERATION| STATUS|START_TIME|UPDATED| MESSAGE|
+----+-------+---------+---------+-------+----------+-------+----------------+
| 100| DB19|PRECHECKS|EXECUTING|RUNNING| 15:15:39|28s ago|Executing Checks|
+----+-------+---------+---------+-------+----------+-------+----------------+
Total jobs 1
patch> status -job 100
Details
Job No 100
Oracle SID DB19
Start Time 25/07/16 15:15:39
Elapsed (min): 0
End time: N/A
Logfiles
Logs Base: /home/oracle/autoupgrade-patching/simple-patching/log/DB19
Job logs: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100
Stage logs: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks
TimeZone: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/temp
Remote Dirs:
Stages
PENDING <1 min
PRECHECKS ~0 min (RUNNING)
Stage-Progress Per Container
+--------+---------+
|Database|PRECHECKS|
+--------+---------+
| DB19| 98 % |
+--------+---------+
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Please check the summary report at:
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
We check for success or any errors that may have occurred during the analyze phase:
# cat /home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
==========================================
AutoUpgrade Patching Summary Report
==========================================
[Date] Wed Jul 16 15:17:31 GMT 2025
[Number of Jobs] 1
==========================================
[Job ID] 100
==========================================
[DB Name] DB19
[Version Before AutoUpgrade Patching] 19.24.0.0.240716
[Version After AutoUpgrade Patching] 19.28.0.0.250715
------------------------------------------
[Stage Name] PENDING
[Status] SUCCESS
[Start Time] 2025-07-16 15:15:39
[Duration] 0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/pending
------------------------------------------
[Stage Name] PRECHECKS
[Status] SUCCESS
[Start Time] 2025-07-16 15:15:39
[Duration] 0:01:52
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks
[Detail] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/100/prechecks/DB19_preupgrade.log
Check passed and no manual intervention needed
------------------------------------------
The analyze phase is OK, so we can proceed and deploy the patch.
Deploy
Just run the following command:
java -jar autoupgrade.jar -config /home/oracle/patch_DB19.cfg -patch -mode deploy
The output looks like this:
AutoUpgrade Patching 25.3.250509 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
1 Non-CDB(s) will be processed
Type 'help' to list console commands
patch> lsj
+----+-------+---------+---------+-------+----------+-------+----------------+
|Job#|DB_NAME| STAGE|OPERATION| STATUS|START_TIME|UPDATED| MESSAGE|
+----+-------+---------+---------+-------+----------+-------+----------------+
| 101| DB19|PRECHECKS|EXECUTING|RUNNING| 15:34:17| 5s ago|Executing Checks|
+----+-------+---------+---------+-------+----------+-------+----------------+
Total jobs 1
Details
Job No 101
Oracle SID DB19
Start Time 25/07/16 15:34:17
Elapsed (min): 22
End time: N/A
Logfiles
Logs Base: /home/oracle/autoupgrade-patching/simple-patching/log/DB19
Job logs: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101
Stage logs: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/patching
TimeZone: /home/oracle/autoupgrade-patching/simple-patching/log/DB19/temp
Remote Dirs:
Stages
PENDING <1 min
GRP <1 min
PREACTIONS <1 min
PRECHECKS 1 min
PREFIXUPS 1 min
EXTRACT <1 min
INSTALL <1 min
ROOTSH <1 min
DBTOOLS <1 min
OPATCH 12 min
PATCHING ~5 min (RUNNING)
POSTCHECKS
POSTFIXUPS
POSTACTIONS
Stage-Progress Per Container
The Stage PATCHING does not have any data to show
Job 101 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from DB19: drop restore point AU_PATCHING_9212_DB191928000
Please check the summary report at:
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
We check for success or any errors that may have occurred during the deployment:
# cat /home/oracle/autoupgrade-patching/simple-patching/log/cfgtoollogs/patch/auto/status/status.log
==========================================
AutoUpgrade Patching Summary Report
==========================================
[Date] Wed Jul 16 15:56:46 GMT 2025
[Number of Jobs] 1
==========================================
[Job ID] 101
==========================================
[DB Name] DB19
[Version Before AutoUpgrade Patching] 19.24.0.0.240716
[Version After AutoUpgrade Patching] 19.28.0.0.250715
------------------------------------------
[Stage Name] PENDING
[Status] SUCCESS
[Start Time] 2025-07-16 15:34:18
[Duration] 0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/pending
------------------------------------------
[Stage Name] GRP
[Status] SUCCESS
[Start Time] 2025-07-16 15:34:18
[Duration] 0:00:01
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/grp
[Detail] Please drop the following GRPs after {0} completes:
AU_PATCHING_9212_DB191928000
------------------------------------------
[Stage Name] PREACTIONS
[Status] SUCCESS
[Start Time] 2025-07-16 15:34:19
[Duration] 0:00:00
------------------------------------------
[Stage Name] PRECHECKS
[Status] SUCCESS
[Start Time] 2025-07-16 15:34:19
[Duration] 0:01:21
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prechecks
[Detail] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prechecks/DB19_preupgrade.log
Check passed and no manual intervention needed
------------------------------------------
[Stage Name] PREFIXUPS
[Status] SUCCESS
[Start Time] 2025-07-16 15:35:40
[Duration] 0:01:20
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prefixups
[Detail] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/prefixups/prefixups.html
------------------------------------------
[Stage Name] EXTRACT
[Status] SUCCESS
[Start Time] 2025-07-16 15:37:00
[Duration] 0:00:49
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/extract
------------------------------------------
[Stage Name] INSTALL
[Status] SUCCESS
[Start Time] 2025-07-16 15:37:50
[Duration] 0:00:52
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/install
------------------------------------------
[Stage Name] ROOTSH
[Status] SUCCESS
[Start Time] 2025-07-16 15:38:43
[Duration] 0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/rootsh
------------------------------------------
[Stage Name] DBTOOLS
[Status] SUCCESS
[Start Time] 2025-07-16 15:38:43
[Duration] 0:00:02
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/dbtools
------------------------------------------
[Stage Name] OPATCH
[Status] SUCCESS
[Start Time] 2025-07-16 15:38:45
[Duration] 0:12:19
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/opatch
------------------------------------------
[Stage Name] PATCHING
[Status] SUCCESS
[Start Time] 2025-07-16 15:51:05
[Duration] 0:05:37
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/patching
------------------------------------------
[Stage Name] POSTCHECKS
[Status] SUCCESS
[Start Time] 2025-07-16 15:56:42
[Duration] 0:00:00
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postchecks
[Detail] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postchecks/DB19_postupgrade.log
Check passed and no manual intervention needed
------------------------------------------
[Stage Name] POSTFIXUPS
[Status] SUCCESS
[Start Time] 2025-07-16 15:56:43
[Duration] 0:00:02
[Log Directory] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postfixups
[Detail] /home/oracle/autoupgrade-patching/simple-patching/log/DB19/101/postfixups/postfixups.html
------------------------------------------
[Stage Name] POSTACTIONS
[Status] SUCCESS
[Start Time] 2025-07-16 15:56:46
[Duration] 0:00:00
------------------------------------------
sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jul 16 16:05:31 2025
Version 19.28.0.0.0
Copyright (c) 1982, 2024, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.28.0.0.0
SQL> col oracle_home format a60
select sys_context('USERENV','ORACLE_HOME') as oracle_home from dual;
ORACLE_HOME
------------------------------------------------------------
/u01/app/oracle/product/19.28/dbhome_1
SQL> col comp_id format a10
col version_full format a15
col status format a15
select comp_id,
version_full,
status
from dba_registry
order by comp_id;
COMP_ID VERSION_FULL STATUS
---------- --------------- ---------------
CATALOG 19.28.0.0.0 VALID
CATPROC 19.28.0.0.0 VALID
OWM 19.28.0.0.0 VALID
RAC 19.28.0.0.0 OPTION OFF
XDB 19.28.0.0.0 VALID
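As an additional check, the SQL patch inventory recorded by datapatch can be queried from dba_registry_sqlpatch (a minimal sketch; the exact column list can vary slightly between releases):
col action format a10
col status format a15
col description format a60
SELECT patch_id,
       action,
       status,
       description
FROM   dba_registry_sqlpatch
ORDER BY action_time;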
Conclusion
Finally, in a few simple steps, we have patched our database. The AutoUpgrade utility is a very powerful tool, easy to use and manage.
I hope this blog helps you understand how AutoUpgrade works.
And I encourage you to become familiar with this wonderful tool!
In Part II, I will show you some advanced patching that AutoUpgrade can handle!
See you soon!
L’article Oracle AutoUpgrade? Easy way to upgrade your Oracle Database! Part I est apparu en premier sur dbi Blog.
Real-Time Oracle Database to Google BigQuery: A GoldenGate 23ai How-To Guide
For data engineers and architects, enabling real-time data flow from transactional Oracle databases into analytical platforms like Google BigQuery is […]
The post Real-Time Oracle Database to Google BigQuery: A GoldenGate 23ai How-To Guide appeared first on DBASolved.
Oracle GoldenGate 23ai Performance Tuning: Achieving 10M Rows/Hour on X-Small Snowflake Warehouses
Your Snowflake bills are probably 3x higher than they need to be. We wrapped up an implementation where we slashed […]
The post Oracle GoldenGate 23ai Performance Tuning: Achieving 10M Rows/Hour on X-Small Snowflake Warehouses appeared first on DBASolved.
SQL Server: New SQL Server Management Studio (SSMS) landing page
After the new connectivity and drivers landing page for SQL Server with the .NET, Java, Python, C++, Go and PHP languages, there is now a new landing page for SQL Server Management Studio (SSMS):
https://learn.microsoft.com/en-us/ssms

You will see 6 parts in the main panel:
- Overview
- Get Started
- Install
- Customize SSMS
- What’s new in SSMS?
- Troubleshoot
If you have a look at the left-hand menu, you will find more, such as Copilot in SQL Server Management Studio (Preview), and can take a deep dive into this subject:

To learn how to enable and use it, follow Steven Naudet’s blog: Step-by-Step Guide to Enabling Copilot in SSMS
One part of the documentation I like is the “Tips and Tricks” section in “Get Started”, which helps you be more efficient when using SSMS:
https://learn.microsoft.com/en-us/ssms/tutorials/ssms-tricks

Have a good start with the new landing page of SSMS!
L’article SQL Server: New SQL Server Management Studio (SSMS) landing page est apparu en premier sur dbi Blog.
M-Files Image analysis
Nowadays, business data is no longer just documents or text files. It is necessary to take into account the management of unstructured data such as photos and other pictures.
Why Combine M-Files with Azure Computer Vision?
M-Files excels at organizing and managing documents based on metadata rather than folder structures. It works well as long as objects have associated metadata to categorize and search them.
I have already written some blogs about the awesome capabilities offered by M-Files to analyze your data and help you organize it efficiently, such as the Intelligent Metadata Layer (aka IML) or Aino, your virtual assistant.

These tools are really powerful, but also limited to text documents (Office documents, pdf, …).
No worries, M-Files has a solution to remedy that: it is called M-Files Image Analysis.
This additional module uses the Computer Vision API of Microsoft Cognitive Services.
What does it do?
After setting up the link between our M-Files vault and the Computer Vision API, we are ready to play with images. Several kinds of suggestions are possible, depending on the needs:
- Caption: provides a sentence describing the picture
- Tag: suggests main keywords related to the image (objects, activities, …)
- Tag/Secondary: same idea as Tag, but may be less precise
- Category: provides a descriptive group such as animals, outdoor, …

But you can also use this module to extract text from the image

and finally, you can also get more basic properties such as format (JPEG), height and width.
There is one last capability, which is face recognition. It is supposed to identify the gender and age of the people appearing in the photo, but it did not work well for me (I will test it more thoroughly later).
Final Thoughts
Integrating M-Files with Azure Computer Vision transforms static images into actionable, searchable content. This is a significant advancement for organizations managing unstructured data, allowing them to easily categorize and leverage their images and improve efficiency.
L’article M-Files Image analysis est apparu en premier sur dbi Blog.
Optimising Journal Line Queries: 5. Conclusion
This is the last of five articles that examine the challenges posed by typical queries on the journal line table (PS_JRNL_LN).
After introducing Exadata system statistics, partitioning, and compression and also archiving some historical data, we arrived at the point where the execution plan of the statement changes without needing to use hints.
Original Execution Plan (With Hints)
I have used hints to force the original execution plan.
ALTER SESSION SET STATISTICS_LEVEL=ALL;
SELECT /*+LEADING(A) USE_NL(B)
INDEX(B (PS_JRNL_LN.BUSINESS_UNIT PS_JRNL_LN.JOURNAL_ID PS_JRNL_LN.JOURNAL_DATE PS_JRNL_LN.UNPOST_SEQ PS_JRNL_LN.JOURNAL_LINE PS_JRNL_LN.LEDGER))*/
A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, B.BUSINESS_UNIT, B.JOURNAL_ID, TO_CHAR(B.JOURNAL_DATE,'YYYY-MM-DD'), B.LEDGER, B.ACCOUNT
, B.PRODUCT, B.PROJECT_ID, B.CHARTFIELD1, B.CHARTFIELD2, B.CURRENCY_CD, B.AFFILIATE, SUM( B.MONETARY_AMOUNT), B.FOREIGN_CURRENCY
, SUM( B.FOREIGN_AMOUNT), A.REVERSAL_CD, A.REVERSAL_ADJ_PER, A.JRNL_HDR_STATUS, TO_CHAR(A.POSTED_DATE,'YYYY-MM-DD'), A.OPRID, A.DESCR254
, B.DEPTID, A.SOURCE, B.ALTACCT, TO_CHAR(CAST((A.DTTM_STAMP_SEC) AS TIMESTAMP),'YYYY-MM-DD-HH24.MI.SS.FF'), B.LINE_DESCR
FROM PS_JRNL_HEADER A, PS_JRNL_LN B
WHERE (A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND A.JRNL_HDR_STATUS IN('P','V','U')
AND A.FISCAL_YEAR IN (2024)
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
AND B.CHARTFIELD1 IN ('1234567','1234568','1234569')
AND B.LEDGER IN ('LEDGER'))
GROUP BY A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, B.BUSINESS_UNIT, B.JOURNAL_ID, B.JOURNAL_DATE, B.LEDGER, B.ACCOUNT, B.PRODUCT, B.PROJECT_ID, B.CHARTFIELD1,
B.CHARTFIELD2, B.CURRENCY_CD, B.AFFILIATE, B.FOREIGN_CURRENCY, A.REVERSAL_CD, A.REVERSAL_ADJ_PER, A.JRNL_HDR_STATUS, A.POSTED_DATE, A.OPRID, A.DESCR254,
B.DEPTID, A.SOURCE, B.ALTACCT, A.DTTM_STAMP_SEC, B.LINE_DESCR
/
select * from table(dbms_xplan.display_cursor(null,null,'ADVANCED +ADAPTIVE'));
Rather than get the execution plan from EXPLAIN PLAN, I have executed the query with STATISTICS_LEVEL set to ALL, and then displayed the cursor.
The execution plan starts with the PS_JRNL_HEADER table and uses a nested loop join with a lookup of the unique index on PS_JRNL_LN. Although note that the optimizer costs were produced with Exadata system statistics.
Plan hash value: 4030641493 --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time | Pstart| Pstop | A-Rows | A-Time | Buffers | Reads | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | | | 1740K(100)| | | | 209K|00:06:41.85 | 238M| 15M| | 1 | HASH GROUP BY | | 1 | 498K| 108M| 129M| 1740K (1)| 00:01:08 | | | 209K|00:06:41.85 | 238M| 15M| | 2 | NESTED LOOPS | | 1 | 498K| 108M| | 1722K (1)| 00:01:08 | | | 495K|01:03:03.80 | 238M| 15M| | 3 | NESTED LOOPS | | 1 | 498K| 108M| | 1722K (1)| 00:01:08 | | | 459M|00:11:20.66 | 5549K| 4259K| |* 4 | TABLE ACCESS STORAGE FULL | PS_JRNL_HEADER | 1 | 430K| 41M| | 1135 (8)| 00:00:01 | | | 430K|00:00:00.34 | 88642 | 88637 | | 5 | PARTITION RANGE ITERATOR | | 430K| 1 | | | 3 (0)| 00:00:01 | KEY | KEY | 459M|00:10:38.60 | 5460K| 4170K| |* 6 | INDEX RANGE SCAN | PS_JRNL_LN | 430K| 1 | | | 3 (0)| 00:00:01 | KEY | KEY | 459M|00:09:55.80 | 5460K| 4170K| |* 7 | TABLE ACCESS BY LOCAL INDEX ROWID| PS_JRNL_LN | 459M| 1 | 127 | | 4 (0)| 00:00:01 | 1 | 1 | 495K|00:50:25.33 | 233M| 11M| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- … Predicate Information (identified by operation id): --------------------------------------------------- 4 - storage(("A"."FISCAL_YEAR"=2024 AND INTERNAL_FUNCTION("A"."JRNL_HDR_STATUS") AND "A"."ACCOUNTING_PERIOD"<=12 AND "A"."ACCOUNTING_PERIOD">=1)) filter(("A"."FISCAL_YEAR"=2024 AND INTERNAL_FUNCTION("A"."JRNL_HDR_STATUS") AND "A"."ACCOUNTING_PERIOD"<=12 AND "A"."ACCOUNTING_PERIOD">=1)) 6 - access("A"."BUSINESS_UNIT"="B"."BUSINESS_UNIT" AND "A"."JOURNAL_ID"="B"."JOURNAL_ID" AND "A"."JOURNAL_DATE"="B"."JOURNAL_DATE" AND "A"."UNPOST_SEQ"="B"."UNPOST_SEQ" AND "B"."LEDGER"='LEDGER') filter("B"."LEDGER"='LEDGER') 7 - filter(("B"."CHARTFIELD1"='1234567' OR "B"."CHARTFIELD1"='1234568' OR "B"."CHARTFIELD1"='1234569')) Hint Report (identified by operation id / Query Block Name / Object Alias): Total hints for statement: 3 --------------------------------------------------------------------------- 1 - SEL$1 - LEADING(A) 6 - SEL$1 / B@SEL$1 - INDEX(B (PS_JRNL_LN.BUSINESS_UNIT PS_JRNL_LN.JOURNAL_ID PS_JRNL_LN.JOURNAL_DATE PS_JRNL_LN.UNPOST_SEQ PS_JRNL_LN.JOURNAL_LINE PS_JRNL_LN.LEDGER)) - USE_NL(B)
The cost of this execution plan depends mainly upon how many journal header rows are selected. There is a cost of 3 per index lookup, plus another 1 for the table access, making a total of 4 per journal header row. Here we selected 430K rows from PS_JRNL_HEADER, so 430K rows * 4/row = 1720K. We got an actual cost of 1722K. The discrepancy is because the 430K was rounded off by the representation of numbers in the execution plan. Then the cost of the GROUP BY operation is 18K. Hence, the overall cost is 1740K.
The actual execution time of the query was 63 minutes (78 minutes with time taken to fetch 211K rows across the network to the client).
New Execution Plan (Without Hints)Plan hash value: 1053505630 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time | Pstart| Pstop | A-Rows | A-Time | Buffers | Reads | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | | | 535K(100)| | | | 209K|00:00:10.06 | 27M| 27M| | 1 | HASH GROUP BY | | 1 | 498K| 108M| 129M| 535K (24)| 00:00:21 | | | 209K|00:00:10.06 | 27M| 27M| | * 2 | HASH JOIN | | 1 | 498K| 108M| 46M| 517K (24)| 00:00:21 | | | 495K|00:00:09.23 | 27M| 27M| | 3 | PART JOIN FILTER CREATE | :BF0000 | 1 | 498K| 108M| | 517K (24)| 00:00:21 | | | 430K|00:00:00.66 | 88642 | 88638 | |- 4 | NESTED LOOPS | | 1 | 498K| 108M| 46M| 517K (24)| 00:00:21 | | | 430K|00:00:00.30 | 88642 | 88638 | |- 5 | NESTED LOOPS | | 1 | | | | | | | | 430K|00:00:00.24 | 88642 | 88638 | |- 6 | STATISTICS COLLECTOR | | 1 | | | | | | | | 430K|00:00:00.19 | 88642 | 88638 | | * 7 | TABLE ACCESS STORAGE FULL | PS_JRNL_HEADER | 1 | 430K| 41M| | 1135 (8)| 00:00:01 | | | 430K|00:00:00.13 | 88642 | 88638 | |- 8 | PARTITION RANGE ITERATOR | | 0 | | | | | | KEY | KEY | 0 |00:00:00.01 | 0 | 0 | |- * 9 | INDEX RANGE SCAN | PS_JRNL_LN | 0 | | | | | | KEY | KEY | 0 |00:00:00.01 | 0 | 0 | |- * 10 | TABLE ACCESS BY LOCAL INDEX ROWID| PS_JRNL_LN | 0 | 1 | 127 | | 515K (24)| 00:00:21 | 1 | 1 | 0 |00:00:00.01 | 0 | 0 | | 11 | PARTITION RANGE JOIN-FILTER | | 1 | 498K| 60M| | 515K (24)| 00:00:21 |:BF0000|:BF0000| 815K|00:00:07.65 | 27M| 27M| | * 12 | TABLE ACCESS STORAGE FULL | PS_JRNL_LN | 18 | 498K| 60M| | 515K (24)| 00:00:21 |:BF0000|:BF0000| 815K|00:00:07.55 | 27M| 27M| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Outline Data ------------- /*+ BEGIN_OUTLINE_DATA IGNORE_OPTIM_EMBEDDED_HINTS OPTIMIZER_FEATURES_ENABLE('19.1.0') DB_VERSION('19.1.0') ALL_ROWS OUTLINE_LEAF(@"SEL$1") FULL(@"SEL$1" "A"@"SEL$1") FULL(@"SEL$1" "B"@"SEL$1") LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1") USE_HASH(@"SEL$1" "B"@"SEL$1") USE_HASH_AGGREGATION(@"SEL$1") END_OUTLINE_DATA */ Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("A"."JOURNAL_DATE"="B"."JOURNAL_DATE" AND "A"."BUSINESS_UNIT"="B"."BUSINESS_UNIT" AND "A"."JOURNAL_ID"="B"."JOURNAL_ID" AND "A"."UNPOST_SEQ"="B"."UNPOST_SEQ") 7 - storage(("A"."FISCAL_YEAR"=2024 AND INTERNAL_FUNCTION("A"."JRNL_HDR_STATUS") AND "A"."ACCOUNTING_PERIOD"<=12 AND "A"."ACCOUNTING_PERIOD">=1)) filter(("A"."FISCAL_YEAR"=2024 AND INTERNAL_FUNCTION("A"."JRNL_HDR_STATUS") AND "A"."ACCOUNTING_PERIOD"<=12 AND "A"."ACCOUNTING_PERIOD">=1)) 9 - access("A"."BUSINESS_UNIT"="B"."BUSINESS_UNIT" AND "A"."JOURNAL_ID"="B"."JOURNAL_ID" AND "A"."JOURNAL_DATE"="B"."JOURNAL_DATE" AND "A"."UNPOST_SEQ"="B"."UNPOST_SEQ" AND "B"."LEDGER"='LEDGER') filter("B"."LEDGER"='LEDGER') 10 - filter(("B"."CHARTFIELD1"='1234567' OR "B"."CHARTFIELD1"='1234568' OR "B"."CHARTFIELD1"='1234569')) 12 - storage(("B"."LEDGER"='LEDGER' AND INTERNAL_FUNCTION("B"."CHARTFIELD1"))) filter(("B"."LEDGER"='LEDGER' AND INTERNAL_FUNCTION("B"."CHARTFIELD1")))
Optimising Journal Line Queries: 4. Compression
This is the fourth of five articles that examine the challenges posed by typical queries on the journal line table (PS_JRNL_LN).
SELECT p.partition_position, m.*, p.high_value
FROM dba_tab_modifications m
INNER JOIN dba_tab_partitions p
ON p.table_owner = m.table_owner AND p.table_name = m.table_name AND p.partition_name = m.partition_name
WHERE m.table_owner = 'SYSADM' AND m.table_name = 'PS_JRNL_LN'
ORDER BY 1
/
Part Table Drop
Pos# Owner TABLE_NAME PARTITION_NAME S INSERTS UPDATES DELETES TIMESTAMP TRU Segs HIGH_VALUE
---- -------- ------------ -------------------- - --------- --------- --------- ------------------- --- ---- --------------------------------------------------------------------------------
…
40 SYSADM PS_JRNL_LN JRNL_LN_202212 0 0 0 13/10/2024 10:08:56 NO 0 TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
41 SYSADM PS_JRNL_LN JRNL_LN_202301 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
42 SYSADM PS_JRNL_LN JRNL_LN_202302 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
43 SYSADM PS_JRNL_LN JRNL_LN_202303 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
44 SYSADM PS_JRNL_LN JRNL_LN_202304 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
45 SYSADM PS_JRNL_LN JRNL_LN_202305 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
46 SYSADM PS_JRNL_LN JRNL_LN_202306 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
47 SYSADM PS_JRNL_LN JRNL_LN_202307 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
48 SYSADM PS_JRNL_LN JRNL_LN_202308 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
49 SYSADM PS_JRNL_LN JRNL_LN_202309 0 0 0 13/10/2024 10:08:56 NO 0 TO_DATE(' 2023-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
50 SYSADM PS_JRNL_LN JRNL_LN_202310 0 0 0 27/10/2024 10:59:45 NO 0 TO_DATE(' 2023-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
51 SYSADM PS_JRNL_LN JRNL_LN_202311 0 0 0 29/09/2024 10:01:16 NO 0 TO_DATE(' 2023-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
52 SYSADM PS_JRNL_LN JRNL_LN_202312 34 193 34 10/12/2024 14:21:38 NO 0 TO_DATE(' 2024-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
53 SYSADM PS_JRNL_LN JRNL_LN_202401 42374 127736 210 12/12/2024 05:27:31 NO 0 TO_DATE(' 2024-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
54 SYSADM PS_JRNL_LN JRNL_LN_202402 34803 92215 0 12/12/2024 05:26:30 NO 0 TO_DATE(' 2024-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
55 SYSADM PS_JRNL_LN JRNL_LN_202403 54940 166263 0 12/12/2024 05:12:29 NO 0 TO_DATE(' 2024-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
56 SYSADM PS_JRNL_LN JRNL_LN_202404 5900 13730 0 13/12/2024 05:29:32 NO 0 TO_DATE(' 2024-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
57 SYSADM PS_JRNL_LN JRNL_LN_202405 6151 13869 0 13/12/2024 05:31:06 NO 0 TO_DATE(' 2024-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
58 SYSADM PS_JRNL_LN JRNL_LN_202406 18317 58263 0 13/12/2024 16:15:49 NO 0 TO_DATE(' 2024-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
59 SYSADM PS_JRNL_LN JRNL_LN_202407 5067792 14937405 0 13/12/2024 16:02:36 NO 0 TO_DATE(' 2024-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
60 SYSADM PS_JRNL_LN JRNL_LN_202408 5217744 15378822 0 13/12/2024 18:02:57 NO 0 TO_DATE(' 2024-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
61 SYSADM PS_JRNL_LN JRNL_LN_202409 65389 243360 160 13/12/2024 12:45:25 NO 0 TO_DATE(' 2024-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
62 SYSADM PS_JRNL_LN JRNL_LN_202410 44839 152210 0 13/12/2024 00:28:54 NO 0 TO_DATE(' 2024-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
63 SYSADM PS_JRNL_LN JRNL_LN_202411 28279594 53637873 27478940 13/12/2024 18:18:00 NO 0 TO_DATE(' 2024-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
64 SYSADM PS_JRNL_LN JRNL_LN_202412 34761590 53485631 27484239 13/12/2024 19:16:11 NO 0 TO_DATE(' 2025-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
65 SYSADM PS_JRNL_LN JRNL_LN_202501 137138 473452 0 13/12/2024 19:18:09 NO 0 TO_DATE(' 2025-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORI1
66 SYSADM PS_JRNL_LN JRNL_LN_202502 0 0 0 10/11/2024 10:08:21 NO 0 TO_DATE(' 2025-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
67 SYSADM PS_JRNL_LN JRNL_LN_202503 466 0 0 13/12/2024 03:59:20 NO 0 TO_DATE(' 2025-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
68 SYSADM PS_JRNL_LN JRNL_LN_202504 0 0 0 17/11/2024 10:03:01 NO 0 TO_DATE(' 2025-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
69 SYSADM PS_JRNL_LN JRNL_LN_202505 0 0 0 17/11/2024 10:03:01 NO 0 TO_DATE(' 2025-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
…
ALTER TABLE ps_jrnl_ln ADD CLUSTERING BY LINEAR ORDER (business_unit, journal_id, journal_date, unpost_Seq);
ALTER TABLE ps_jrnl_ln MODIFY CLUSTERING YES ON LOAD YES ON DATA MOVEMENT;
…
ALTER TABLE ps_jrnl_ln MOVE PARTITION jrnl_ln_202012 COMPRESS FOR QUERY HIGH UPDATE INDEXES ONLINE PARALLEL;
ALTER TABLE ps_jrnl_ln MOVE PARTITION jrnl_ln_202101 COMPRESS FOR QUERY LOW UPDATE INDEXES ONLINE PARALLEL;
…
set serveroutput on timi on
DECLARE
l_table_name VARCHAR2(18) := 'PS_JRNL_LN';
l_part_update BOOLEAN := FALSE;
BEGIN
FOR i IN(
select t.table_name, t.partition_name, t.num_rows, t.blocks stat_blocks, s.blocks seg_blocks
, s.tablespace_name, p.compress_for, t.num_rows/NULLIF(LEAST(t.blocks,s.blocks),0) rpb
from user_segments s
inner join user_tab_partitions p ON p.table_name = s.segment_name AND p.partition_name = s.partition_name
inner join user_tab_statistics t ON s.segment_name = t.table_name AND s.partition_name = t.partition_name and t.blocks>s.blocks
where s.segment_type = 'TABLE PARTITION' and p.compress_for IS NOT NULL and s.segment_name = l_table_name
) LOOP
l_part_update := TRUE;
dbms_output.put_line(i.table_name||' ('||i.partition_name||') '||i.stat_blocks||' => '||i.seg_blocks||' blocks');
dbms_stats.set_table_stats(ownname=>'SYSADM',tabname=>i.table_name,partname=>i.partition_name,numblks=>i.seg_blocks,force=>TRUE);
END LOOP;
IF l_part_update THEN
FOR i IN (
SELECT table_name, sum(blocks) blocks
FROM user_tab_statistics
WHERE table_name = l_table_name AND partition_name IS NOT NULL
GROUP BY table_name
) LOOP
dbms_output.put_line(i.table_name||' = '||i.blocks||' blocks');
dbms_stats.set_table_stats(ownname=>'SYSADM',tabname=>i.table_name,numblks=>i.blocks,force=>TRUE);
END LOOP;
ELSE
dbms_output.put_line(l_table_name||' - no action required');
END IF;
END;
/
ALTER INDEX psdjrnl_ln REBUILD ONLINE TABLESPACE psindex PARALLEL;
ALTER INDEX psdjrnl_ln COALESCE CLEANUP PARALLEL;
ALTER INDEX psdjrnl_ln SHRINK SPACE;
…
ALTER INDEX ps_jrnl_ln REBUILD PARTITION jrnl_ln_202012 ONLINE;
ALTER INDEX ps_jrnl_ln REBUILD PARTITION jrnl_ln_202101 ONLINE;
…
Optimising Journal Line Queries: 3. Partitioning
This is the third of a series of five articles that examine the challenges posed by typical queries on the journal line table (PS_JRNL_LN).
PeopleSoft does not partition tables by default. Application Designer does not support partitioning, mainly because different databases implement partitioning differently. Thus, it is always left to the customer to implement as a customisation. In this article, I am only going to discuss partitioning on Oracle.
Interval Partitioning
This is the sample query that I started with.
SELECT A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, B.BUSINESS_UNIT, B.JOURNAL_ID, TO_CHAR(B.JOURNAL_DATE,'YYYY-MM-DD'), B.LEDGER, B.ACCOUNT
, B.PRODUCT, B.PROJECT_ID, B.CHARTFIELD1, B.CHARTFIELD2, B.CURRENCY_CD, B.AFFILIATE, SUM( B.MONETARY_AMOUNT), B.FOREIGN_CURRENCY
, SUM( B.FOREIGN_AMOUNT), A.REVERSAL_CD, A.REVERSAL_ADJ_PER, A.JRNL_HDR_STATUS, TO_CHAR(A.POSTED_DATE,'YYYY-MM-DD'), A.OPRID, A.DESCR254
, B.DEPTID, A.SOURCE, B.ALTACCT, TO_CHAR(CAST((A.DTTM_STAMP_SEC) AS TIMESTAMP),'YYYY-MM-DD-HH24.MI.SS.FF'), B.LINE_DESCR
FROM PS_JRNL_HEADER A, PS_JRNL_LN B
WHERE (A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND A.JRNL_HDR_STATUS IN('P','V','U')
AND A.FISCAL_YEAR IN (2024)
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 12
AND B.CHARTFIELD1 IN ('1234567','1234568','1234569')
AND B.LEDGER IN ('LEDGER'))
GROUP BY A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, B.BUSINESS_UNIT, B.JOURNAL_ID, B.JOURNAL_DATE, B.LEDGER, B.ACCOUNT, B.PRODUCT
, B.PROJECT_ID, B.CHARTFIELD1, B.CHARTFIELD2, B.CURRENCY_CD, B.AFFILIATE, B.FOREIGN_CURRENCY, A.REVERSAL_CD, A.REVERSAL_ADJ_PER
, A.JRNL_HDR_STATUS, A.POSTED_DATE, A.OPRID, A.DESCR254, B.DEPTID, A.SOURCE, B.ALTACCT, A.DTTM_STAMP_SEC, B.LINE_DESCR
It would have been desirable to have been able to partition PS_JRNL_LN by FISCAL_YEAR. However, that column is only present on the parent table, PS_JRNL_HEADER. Oracle can do referential partitioning, where the child table is partitioned by a column of the parent table. The parent table must also be partitioned similarly, thus producing a 1:1 mapping of partitions between the parent and child tables. However, this feature also requires the presence of an enforced foreign key constraint between parent and child tables.
PeopleSoft has never implemented or supported database-enforced referential integrity constraints (again, mainly because it was done differently on different databases). Although it is tempting to add a foreign key constraint between these tables, that would be a customisation to PeopleSoft that Oracle would not support. The application would then have to insert parent rows before child rows and delete child rows before deleting parent rows. It has never been tested against these constraints.
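For illustration only, the following sketch shows roughly what reference partitioning would require: an enforced foreign key on the child table, referencing an enforced primary or unique key on the (also partitioned) parent table, and a PARTITION BY REFERENCE clause naming that constraint. It is hypothetical syntax to show why the approach was ruled out, not something to apply to a PeopleSoft system.
-- Hypothetical sketch only: reference partitioning needs an enforced foreign key
-- (PS_JRNL_HEADER would also need an enforced primary/unique key and its own partitioning)
CREATE TABLE ps_jrnl_ln
(business_unit VARCHAR2(5 CHAR) NOT NULL
,journal_id    VARCHAR2(10 CHAR) NOT NULL
,journal_date  DATE NOT NULL
,unpost_seq    NUMBER(*,0) NOT NULL
,journal_line  NUMBER(9,0) NOT NULL
-- ... other columns ...
,CONSTRAINT ps_jrnl_ln_fk FOREIGN KEY (business_unit, journal_id, journal_date, unpost_seq)
   REFERENCES ps_jrnl_header (business_unit, journal_id, journal_date, unpost_seq)
) PARTITION BY REFERENCE (ps_jrnl_ln_fk)
/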
Therefore, it is only possible to consider partitioning by a column on PS_JRNL_LN. A column in the unique key is an obvious choice.
- Depending on how BUSINESS_UNIT is set up and used, you might be able to list-subpartition by this column and split the journal lines into several subpartitions. However, it is almost inevitable that the volumes will be heavily skewed.
- It is tempting to range partition on JOURNAL_ID. Although this column usually contains an entirely numeric value, it is in fact defined as a character (VARCHAR2) data type. Therefore, it is not possible to interval partition upon it. Periodically, it would be necessary to add partitions manually.
- The alternative is to interval range partition on JOURNAL_DATE. I chose to define a monthly interval. I specified the first few partitions for whole years because at this customer, these partitions contained less data after archiving. Thereafter, Oracle automatically creates monthly partitions as data is inserted.
CREATE TABLE PS_JRNL_LN
(BUSINESS_UNIT VARCHAR2(5 CHAR) NOT NULL
,JOURNAL_ID VARCHAR2(10 CHAR) NOT NULL
,JOURNAL_DATE DATE NOT NULL
,UNPOST_SEQ NUMBER(*,0) NOT NULL
,JOURNAL_LINE NUMBER(9,0) NOT NULL
,LEDGER VARCHAR2(10 CHAR) NOT NULL
...
) PARTITION BY RANGE (JOURNAL_DATE) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION JRNL_LN_2016 VALUES LESS THAN (TO_DATE('2017-01-01', 'YYYY-MM-DD'))
,PARTITION JRNL_LN_2017 VALUES LESS THAN (TO_DATE('2018-01-01', 'YYYY-MM-DD'))
,PARTITION JRNL_LN_2018 VALUES LESS THAN (TO_DATE('2019-01-01', 'YYYY-MM-DD'))
,PARTITION JRNL_LN_2019 VALUES LESS THAN (TO_DATE('2020-01-01', 'YYYY-MM-DD'))
)
/
rem rename_jrnl_ln_partitions.sql
rem requires https://github.com/davidkurtz/psscripts/blob/master/psftapi.sql
spool rename_jrnl_ln_partitions.lst
set serveroutput on
DECLARE
l_high_value DATE;
l_sql CLOB;
l_new_partition_name VARCHAR2(30);
BEGIN
psft_ddl_lock.set_ddl_permitted(TRUE);
FOR i IN (
select /*+LEADING(r upt upkc utc)*/ r.recname, upt.table_name, utp.partition_name, utp.high_value, upt.interval interval_size
from sysadm.psrecdefn r
INNER JOIN user_part_tables upt ON upt.table_name = DECODE(r.sqltablename,' ','PS_'||r.recname,r.sqltablename)
AND upt.partitioning_type = 'RANGE' and upt.interval IS NOT NULL
INNER JOIN user_part_key_columns upkc ON upkc.name = upt.table_name AND upkc.object_Type = 'TABLE' and upkc.column_position = 1
INNER JOIN user_tab_columns utc ON utc.table_name = upkc.name AND utc.column_name = upkc.column_name
INNER JOIN user_tab_partitions utp ON utp.table_name = upt.table_name AND utp.partition_name like 'SYS_P%'
WHERE r.recname = 'JRNL_LN' AND r.rectype = 0
AND (utc.data_type = 'DATE' OR utc.data_type like 'TIMESTAMP%')
) LOOP
l_sql := 'SELECT '||i.high_value||'-'||i.interval_size||' FROM DUAL';
EXECUTE IMMEDIATE l_sql INTO l_high_value;
l_new_partition_name := i.recname||'_'||TO_CHAR(l_high_value,'YYYYMM');
l_sql := 'ALTER TABLE '||i.table_name||' RENAME PARTITION '||i.partition_name||' TO '||l_new_partition_name;
IF i.partition_name != l_new_partition_name THEN
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
END IF;
END LOOP;
FOR i IN (
select /*+LEADING(r upi upkc utc)*/ r.recname, upi.index_name, uip.partition_name, uip.high_value, upi.interval interval_size
from sysadm.psrecdefn r
INNER JOIN user_part_indexes upi ON upi.table_name = DECODE(r.sqltablename,' ','PS_'||r.recname,r.sqltablename)
AND upi.partitioning_type = 'RANGE' and upi.interval IS NOT NULL
INNER JOIN user_part_key_columns upkc ON upkc.name = upi.index_name AND upkc.object_Type = 'INDEX' and upkc.column_position = 1
INNER JOIN user_tab_columns utc ON utc.table_name = upi.table_name AND utc.column_name = upkc.column_name
INNER JOIN user_ind_partitions uip ON uip.index_name = upi.index_name
AND (uip.partition_name like 'SYS_P%' OR SUBSTR(uip.partition_name,1+LENGTH(r.recname),1) != SUBSTR(upi.index_name,3,1))
WHERE r.recname = 'JRNL_LN' AND r.rectype = 0
AND (utc.data_type = 'DATE' OR utc.data_type like 'TIMESTAMP%')
) LOOP
l_sql := 'SELECT '||i.high_value||'-'||i.interval_size||' FROM DUAL';
EXECUTE IMMEDIATE l_sql INTO l_high_value;
l_new_partition_name := i.recname||SUBSTR(i.index_name,3,1)||TO_CHAR(l_high_value,'YYYYMM');
l_sql := 'ALTER INDEX '||i.index_name||' RENAME PARTITION '||i.partition_name||' TO '||l_new_partition_name;
IF i.partition_name != l_new_partition_name THEN
dbms_output.put_line(l_sql);
EXECUTE IMMEDIATE l_sql;
END IF;
END LOOP;
psft_ddl_lock.set_ddl_permitted(FALSE);
END;
/
spool off