Yann Neuhaus


M-Files Sharing Center: Improved collaboration

Mon, 2025-12-22 04:50

As I’ve said many times, I like M-Files because each monthly release not only fixes bugs, but also introduces new features. This time (December 2025), they worked on the sharing feature.
It’s not entirely new, as it was already possible to create links to share documents with external users. However, it was clearly not user-friendly and difficult to manage. Let’s take a closer look.

What is M-Files Sharing Center?

The Sharing Center in M-Files allows users to share content with external users by generating links that can be public or passcode-protected.

Key capabilities include:

Access Control: Modify or revoke permissions instantly to maintain security.
Audit & Compliance: Track sharing history for regulatory compliance and internal governance.
External Collaboration: Share documents with partners or clients without exposing your entire repository.

What’s on the horizon:

The capabilities will evolve over the next releases:
Centralized Overview: See all active shares in one place. No more guessing who has access!
Editing Permissions: Allow external users to update shared documents.
And many more features will be added in upcoming releases!

What are the advantages?

A good internal content management system is crucial for maintaining data consistency and integrity. But what about when you share these documents via email, a shared directory, or cloud storage?

You have control over your documents and can see who has accessed them and when.
Need to end a collaboration? With one click, you can immediately revoke access.
Providing an official way to share documents with external people helps prevent users from using untrusted cloud services and other methods that can break security policies.

How it works

It’s hard to make it any simpler. Just right-click on the document and select “Share.”
A pop-up will ask you for an email address and an expiration date, and that’s it!

Sharing Center in action

When external users open the link, they are asked to enter their email address. A passcode is then sent to that email address for authentication.

External access to the document

Another interesting feature is that the generated link remains the same when you add or remove users, change permissions, or stop and resume sharing.

Small Tips

Set expiration dates: The easier it is to share a document, the easier it is to forget about it. Therefore, it is important to set an expiration date for shared documents.
Use role-based permissions: Sharing information outside the organization is sensitive, so controlling who can perform this action is important.
Regularly review active shares: Even if an expiration date has been set, it is a good habit to ensure that only necessary access remains.

At the end

M-Files already provides a great tool for external collaboration: Hubshare. With this collaboration portal, you can do much more than share documents. Of course, this tool comes at an additional cost. M-Files Sharing Center solves a different problem: how to share documents occasionally outside the organization without compromising security or the benefits M-Files provides internally.

The first version of the Sharing Center is currently limited to downloads, but an editing feature and a dashboard for quickly identifying shared documents across the repository will be included in a future release. The simplicity and relevance of these features will undoubtedly make it stand out even more from its competitors.

If you are curious about it, feel free to ask us!


Data Anonymization as a Service with Delphix Continuous Compliance

Mon, 2025-12-22 04:43
Context

In the era of digital transformation, attack surfaces are constantly evolving and cyberattack techniques are becoming increasingly sophisticated. Maintaining the confidentiality, integrity, and availability of data is therefore a critical challenge for organizations, both from an operational and a regulatory standpoint (GDPR, ISO 27001, NIST). In this context, data anonymization has become crucial.

Contrary to a widely held belief, the risk is not limited to the production environment. Development, testing, and pre-production environments are prime targets for attackers, as they often benefit from weaker security controls. The use of production data that is neither anonymized nor pseudonymized directly exposes organizations to data breaches, regulatory non-compliance, and legal sanctions.

Why and How to Anonymize Data

Development teams require realistic datasets in order to:

  • Test application performance
  • Validate complex business processes
  • Reproduce error scenarios
  • Train Business Intelligence or Machine Learning algorithms

However, the use of real data requires the implementation of anonymization or pseudonymization mechanisms ensuring:

  • Preservation of functional and referential consistency
  • Prevention of data subject re-identification

Among the possible anonymization techniques, the main ones include:

  • Dynamic Data Masking, applied on-the-fly at access time but which does not anonymize data physically
  • Tokenization, which replaces a value with a surrogate identifier
  • Cryptographic hashing, with or without salting (see the sketch just below)
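
As a simple illustration of the last two techniques, here is a minimal Python sketch (generic code, not Delphix functionality) of deterministic, salted hashing used for pseudonymization: the same input always yields the same surrogate value, so referential consistency across tables is preserved.

import hashlib
import hmac

# Secret key/salt: keep it in a secret store, never next to the masked data
SALT = b"change-me-and-keep-me-out-of-the-dataset"

def pseudonymize(value: str) -> str:
    """Deterministically map a sensitive value to a surrogate identifier.

    HMAC-SHA256 with a secret key (instead of a bare hash) makes dictionary
    attacks on low-entropy values such as email addresses much harder.
    """
    digest = hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"user_{digest[:16]}"  # shortened surrogate, still stable per input

# Same input -> same token, so foreign keys referencing the value stay consistent
print(pseudonymize("alice@example.com"))
print(pseudonymize("alice@example.com"))
print(pseudonymize("bob@example.com"))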
Direct data copy from Production to Development

In this scenario, a full backup of the production database is restored into a development environment. Anonymization is then applied using manually developed SQL scripts or ETL processes.

This approach presents several critical weaknesses:

  • Temporary exposure of personal data in clear text
  • Lack of formal traceability of anonymization processes
  • Risk of human error in scripts
  • Non-compliance with GDPR requirements

This model should therefore be avoided in regulated environments.

Data copy via a Staging Database in Production

This model introduces an intermediate staging database located within a security perimeter equivalent to that of production. Anonymization is performed within this secure zone before replication to non-production environments.

This approach makes it possible to:

  • Ensure that no sensitive data in clear text leaves the secure perimeter
  • Centralize anonymization rules
  • Improve overall data governance

However, several challenges remain:

  • Versioning and auditability of transformation rules
  • Governance of responsibilities between teams (DBAs, security, business units)
  • Maintaining inter-table referential integrity
  • Performance management during large-scale anonymization
Integration of Delphix Continuous Compliance

In this architecture, Delphix is integrated as the central engine for data virtualization and anonymization. The Continuous Compliance module enables process industrialization through:

  • An automated data profiler identifying sensitive fields
  • Deterministic or non-deterministic anonymization algorithms
  • Massively parallelized execution
  • Orchestration via REST APIs integrable into CI/CD pipelines
  • Full traceability of processing for audit purposes

This approach enables the rapid provisioning of compliant, reproducible, and secure databases for all technical teams.
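
To give an idea of what this orchestration can look like, here is a hedged Python sketch that triggers a masking job through REST and polls its execution status from a CI/CD pipeline step. The endpoint paths, field names and status values below are assumptions for illustration and must be checked against the API version of your engine; only the overall pattern (authenticate, start the job, poll until completion) matters here.

import time
import requests

ENGINE = "https://masking-engine.example.com"   # hypothetical engine URL
API = f"{ENGINE}/masking/api"                   # base path: verify for your engine/API version

def run_masking_job(job_id: int, user: str, password: str) -> None:
    # 1. Authenticate and retrieve an API token (header name assumed)
    login = requests.post(f"{API}/login", json={"username": user, "password": password})
    login.raise_for_status()
    headers = {"Authorization": login.json()["Authorization"]}

    # 2. Start an execution of the given masking job
    exec_resp = requests.post(f"{API}/executions", json={"jobId": job_id}, headers=headers)
    exec_resp.raise_for_status()
    execution_id = exec_resp.json()["executionId"]

    # 3. Poll until the execution reaches a terminal status (status values assumed)
    while True:
        status = requests.get(f"{API}/executions/{execution_id}", headers=headers).json()["status"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(30)

    if status != "SUCCEEDED":
        raise RuntimeError(f"Masking job {job_id} ended with status {status}")

# Typically called from a pipeline step, e.g.:
# run_masking_job(42, "ci-user", "password-from-a-secret-store")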

Conclusion

Database anonymization should no longer be viewed as a one-time constraint but as a structuring process within the data lifecycle. It is based on three fundamental pillars:

  • Governance
  • Pipeline industrialization
  • Regulatory compliance

An in-house implementation is possible, but it requires a high level of organizational maturity, strong skills in anonymization algorithms, data engineering, and security, as well as a strict audit framework. Solutions such as Delphix provide an industrialized response to these challenges while reducing both operational and regulatory risks.

To take this further, Microsoft’s article explaining the integration of Delphix into Azure pipelines analyzes the same issues discussed above, but this time in the context of the cloud: Use Delphix for Data Masking in Azure Data Factory and Azure Synapse Analytics

What’s next?

This use case is just one example of how Delphix can be leveraged to optimize data management and compliance in complex environments. In upcoming articles, we will explore other recurring challenges, highlighting both possible in-house approaches and industrialized solutions with Delphix, to provide a broader technical perspective on data virtualization, security, and performance optimization.

What about you?

How confident are you about the management of your confidential data?
If you have any doubts, please don’t hesitate to reach out to me to discuss them!


Avoiding common ECM pitfalls with M-Files

Thu, 2025-12-18 07:30

Enterprise Content Management (ECM) systems promise efficiency, compliance, and better collaboration.
Yet many organizations struggle to realize these benefits because of common pitfalls in traditional ECM implementations.

Avoid common ECM pitfalls

Let’s explore these challenges and see how M-Files addresses them with its unique approach.

Pitfall 1: Information Silos

Traditional ECM systems often replicate the same problem they aim to solve: Information silos. Documents remain locked in departmental repositories or legacy systems, making cross-functional collaboration difficult.
How does M-Files address this challenge?
M-Files connects to existing repositories without requiring migration. Its connector architecture allows users to access content from multiple sources through a single interface, breaking down silos without disrupting operations. Additionally, workflows defined in M-Files can be applied to linked content, providing incredible flexibility.

Pitfall 2: Complex Folder Structure

With folder-based systems, users must know exactly where a document is stored. This leads to wasted time spent searching for files. It also increases the risk of creating duplicates because users who cannot find a document may be tempted to create or add it again. Another common issue is that permissions are defined by folder. This means that, in order to give users access to data, the same file must be copied to multiple locations.
How M-Files Solves It:
Instead of folders, M-Files uses metadata-driven organization. Users search by “what” the document is (e.g., invoice, contract) rather than “where” it’s stored. This makes retrieval fast and intuitive and it allows users to personalize how the data is displayed based on their needs.

Pitfall 3: Poor User Adoption

If an ECM system is hard to use, for example because it requires a lot of information to be filled in or offers only very restricted options, employees will bypass it, creating compliance risks, inefficiencies and data loss.
How M-Files addresses this:
M-Files seamlessly integrates with familiar tools, such as Microsoft Teams, Outlook, and SharePoint, reducing friction. Its simple interface allows users to quickly become familiar with the software, and its AI-based suggestions make manipulating data easy, ensuring rapid adoption.

Pitfall 4: Compliance Gaps

Regulated industries face strict requirements for document control, audit trails, and retention. Traditional ECM systems often require manual processes to stay compliant.
How M-Files helps:
M-Files includes a workflow engine that automates compliance with version control, audit logs, and retention policies. This ensures approvals and signatures occur in the correct order, thereby reducing human error and delays.

Pitfall 5: Limited Scalability

As organizations grow, ECM systems can become bottlenecks due to their rigid architectures.
M-Files offers several solutions: cloud, on-premises, and hybrid deployment options, ensuring scalability and flexibility. Its architecture supports global operations without sacrificing performance and helps streamline costs.

Final Thoughts

Selecting an ECM involves more than comparing the costs (licenses, infrastructure, etc.) of different market players. It is also, above all else, a matter of identifying the productivity gains, reduction in repetitive task workload, and efficiency that such a solution provides.

If you’re feeling overwhelmed by the vast world of ECM, don’t hesitate to ask us for help.


Exascale storage architecture

Tue, 2025-12-16 05:26
In the first article, we discovered Exascale through a high-level overview with a focus on Exascale Infrastructure. In this new article, we will dive into the Exascale storage architecture, with details about the physical components as well as the services and processes managing this new storage management approach for Exadata.

In this blog post, we will explore the Exascale storage architecture and the processes pertaining to Exascale. But before diving in, a quick note to avoid any confusion: even though the previous article focused on Exascale Infrastructure and its various benefits in terms of small footprint and hyper-elasticity based on modern cloud characteristics, it is important to keep in mind that Exascale is an Exadata technology and not a cloud-only technology. You can benefit from it in non-cloud deployments as well, such as on an Exadata Database Machine deployed in your own data center.

Exascale storage components

Here is the overall picture:

Exascale cluster

An Exascale cluster is composed of Exadata storage servers that provide storage to Grid Infrastructure clusters and databases. An Exadata storage server can belong to only one Exascale cluster.

Software services (included in the Exadata System Software stack) run in each Exascale cluster to manage the cluster resources made available to GI and databases, namely pool disks, storage pools, vaults, block volumes and many others.

With Exadata Database Service on Exascale Infrastructure, the number of Exadata storage servers included in the Exascale cluster can be very large (hundreds or even thousands of storage servers), enabling cloud-scale storage resource pooling.

Storage pools

A storage pool is a collection of pool disks (see below for details about pool disks).

Each Exascale cluster requires at least one storage pool.

A storage pool can be dynamically reconfigured by changing pool disks size, allocating more pool disks or adding Exadata storage servers.

The pool disks found inside a storage pool must be of the same media type.

Pool disks

A pool disk is physical storage space allocated from an Exascale-specific cell disk to be integrated in a storage pool.

Each storage server physical disk has a LUN in the storage server OS, and a cell disk is created as a container for all Exadata-related partitions within the LUN. Each partition in a cell disk is then designated as a pool disk for Exascale (or as a grid disk in case ASM is used instead of Exascale).

A media type is associated with each pool disk based on the underlying storage device and can be HC, EF or XT (see the file storage attributes below).

Vaults

Vaults are logical storage containers used for storing files and are allocated from storage pools. By default, without specific provisioning attributes, vaults can use all resources from all storage pools of an Exascale cluster.

Here are the two main services provided by vaults:

  1. Security isolation: a security perimeter is associated with vaults based on user access controls which guarantee a strict isolation of data between users and clusters.
  2. Resource control: storage pool resource usage is configured at the vault level for attributes like storage space, IOPS, XRMEM and flash cache sizing.

For those familiar with ASM, think of vaults as the equivalent of ASM disk groups. For example, instance parameters like ‘db_create_file_dest’ and ‘db_recovery_file_dest’ reference vaults instead of ASM disk groups by using the ‘@vault_name’ syntax.
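
As a minimal sketch (assuming a vault named myvault already exists), pointing a database at a vault looks very much like pointing it at a disk group:

-- Reference Exascale vaults instead of ASM disk groups
ALTER SYSTEM SET db_create_file_dest = '@myvault' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '@myvault' SCOPE=BOTH;

-- New datafiles are then created inside the vault
CREATE TABLESPACE app_data DATAFILE SIZE 10G;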

Since attributes like redundancy, content type and media type are set at the file level instead of at the logical storage container level, there is no need to organize vaults in the same manner as we did for disk groups. For instance, we no longer need to create a first vault for data and a second vault for recovery files, as we were used to with ASM.

Besides database files, vaults can also store other types of files, even though it is recommended to store non-database files on block volumes. That’s because Exascale is optimized for storing large files such as database files, whereas regular files are typically much smaller and are a better fit for block volumes.

Files

The main files found on Exascale storage are Database and Grid Infrastructure files. Beyond that, all objects in Exascale are represented as files of a certain type. Each file type has storage attributes defined in a template. The file storage attributes are:

  • mediaType: HC, EF or XT
  • redundancy: currently high
  • contentType: DATA and RECO

This makes a huge difference compared with ASM, where different redundancy needs required creating different disk groups. With Exascale, it is now possible to store files with different redundancy requirements in the same storage structure (vaults). This also helps optimize usage of the storage capacity.

Files are composed of extents of 8MB in size, which are mirrored and striped across all of the vault’s storage resources.

The tight integration of the database kernel with Exascale makes it possible for Exascale to automatically understand the type of file the database asks to create and thus apply the appropriate attributes defined in the file template. This prevents Exascale from storing data and recovery file extents (more on extents in the next section) on the same disks and also guarantees that mirrored extents are located on different storage servers than the primary extent.

File extents

Remember that in Exascale, storage management moved from the compute servers to the storage servers. Specifically, this means the building blocks of files, namely extents, are managed by the storage servers.

The new data structure used for extent management is a mapping table which tracks, for each file extent, the location of the primary and mirror copy extents on the storage servers. This mapping table is cached by each database server and instance to retrieve its file extent locations. Once the database has the extent location, it can directly make an I/O call to the appropriate storage server. If the mapping table is no longer up to date because of database physical structure changes or storage server addition or removal, an I/O call can be rejected, triggering a mapping table refresh, after which the I/O call is retried.

Exascale mapping table

Exascale Block Store with RDMA-enabled block volumes

Block volumes can be allocated from storage pools to store regular files on file systems like ACFS or XFS. They also enable centralization of VC VM images, thus cutting the dependency of VM images on internal compute node storage and streamlining migrations between physical database servers. Clone, snapshot, and backup and restore features for block volumes can leverage all resources of the available storage servers.

Exascale storage services

From a software perspective, Exascale is composed of a number of software services available in the Exadata System Software (since release 24.1). These software services run mainly on the Exadata Storage Servers but also on the Exadata Database Servers.

Exascale storage server services

  • EGS – Cluster Services: EGS (Exascale Global Services) main task is to manage the storage allocated to storage pools. In addition, EGS also controls storage cluster membership, security and identity services, as well as monitoring the other Exascale services.
  • ERS – Control Services: ERS (Exascale RESTful Services) provide the management endpoint for all Exascale management operations. The new Exascale command-line interface (ESCLI), used for monitoring and management functions, leverages ERS for command execution.
  • EDS – Exascale Vault Manager Services: EDS (Exascale Data Services) are responsible for file and vault metadata management and are made up of two groups of services, the System Vault Manager and the User Vault Manager. The System Vault Manager (SYSEDS) manages Exascale vault metadata, such as the security perimeter (ACLs) and vault attributes. The User Vault Manager (USREDS) manages Exascale file metadata, such as ACLs and attributes, as well as clone and snapshot metadata.
  • BSM – Block Store Manager: BSM manages Exascale block storage metadata and controls all block store management operations like volume creation, attachment, detachment, modification or snapshot.
  • BSW – Block Store Worker: These services perform the actual requests from clients and translate them to storage server I/O.
  • IFD – Instant Failure Detection: The IFD service watches for failures which could arise in the Exascale cluster and triggers recovery actions when needed.
  • Exadata Cell Services: Exadata cell services are required for Exascale to function, and both work in conjunction to provide the Exascale features.

Exascale database server services

  • EGS – Cluster Services: EGS instances run on database servers when the Exadata configuration has fewer than five storage servers.
  • BSW – Block Store Worker: Services requests from block store clients and performs the resulting storage server I/O.
  • ESNP – Exascale Node Proxy: ESNP provides Exascale cluster state to GI and Database processes.
  • EDV – Exascale Direct Volume: The EDV service exposes Exascale block volumes to Exadata compute nodes and runs I/O requests on EDV devices.
  • EGSB/EDSB: Per-database-instance services that maintain metadata about the Exascale cluster and vaults.

The diagram below depicts how the various Exascale services are dispatched on the storage and compute nodes:

Exascale processes

Wrap-up

By rearchitecting Exadata storage management with focus on space efficiency, flexibility and elasticity, Exascale can now overcome the main limitations of ASM:

  • diskgroup resizing complexity and time-consuming rebalance operation
  • space distribution among DATA and RECO diskgroups requiring rigorous estimation of storage needs for each
  • sparse diskgroup requirement for cloning with a read-only test master or use of ACFS without smart scan
  • redundancy configuration at the diskgroup level

The links below provide further details on the matter:

Oracle Exadata Exascale blog

Oracle Exadata Exascale advantages blog

Oracle Exadata Exascale storage fundamentals blog

Oracle Exascale documentation

Oracle Exadata architecture

Oracle And Me blog – Why ASM Needed an Heir Worthy of the 21st Century

Oracle And Me blog – New Exascale architecture

More on the exciting Exascale technology in coming posts …


Forgejo: Organizations, Repositories and Actions

Mon, 2025-12-15 07:13

In the last post we’ve deployed Forgejo on FreeBSD 15. In this post we’re going to do something with it and that is: We’ll create a new organization, a new repository, and finally we want to create a simple action. An “Action” is what GitLab calls a pipeline.

Creating a new organization is just a matter of a few clicks:

The only change to the default settings is the visibility, which is changed to private. The interface directly switches to the new organization once it is created:

The next step is to create and initialize a new repository, which is also just a matter of a few clicks:

All the defaults, except for the “private” flag.

To clone this repository locally you’ll need to add your public ssh key to your user’s profile:

Once you have that, the repository can be cloned as usual:

dwe@ltdwe:~/Downloads$ git clone ssh://git@192.168.122.66/dwe/myrepo.git
Cloning into 'myrepo'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (3/3), done.
dwe@ltdwe:~/Downloads$ ls -la myrepo/
total 4
drwxr-xr-x 1 dwe dwe  26 Dec 15 09:41 .
drwxr-xr-x 1 dwe dwe 910 Dec 15 09:41 ..
drwxr-xr-x 1 dwe dwe 122 Dec 15 09:41 .git
-rw-r--r-- 1 dwe dwe  16 Dec 15 09:41 README.md

So far so good, let’s create a new “Action”. Before we do that, we need to check that actions are enabled for the repository:

What we need now is a so-called “Runner”. A “Runner” is a daemon that fetches work from a Forgejo instance, executes it and returns the result. For the “Runner” we’ll use a Debian 13 minimal setup:

root@debian13:~$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 13 (trixie)"
NAME="Debian GNU/Linux"
VERSION_ID="13"
VERSION="13 (trixie)"
VERSION_CODENAME=trixie
DEBIAN_VERSION_FULL=13.2
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

The only requirement is to have Git, curl and jq installed, so:

root@debian13:~$ apt install -y git curl jq
root@debian13:~$ git --version
git version 2.47.3

Downloading and installing the runner (this is a copy/paste from the official documentation):

root@debian13:~$ export ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
root@debian13:~$ echo $ARCH
amd64
root@debian13:~$ export RUNNER_VERSION=$(curl -X 'GET' https://data.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
root@debian13:~$ echo $RUNNER_VERSION
12.1.2
root@debian13:~$ export FORGEJO_URL="https://code.forgejo.org/forgejo/runner/releases/download/v${RUNNER_VERSION}/forgejo-runner-${RUNNER_VERSION}-linux-${ARCH}"
root@debian13:~$ wget -O forgejo-runner ${FORGEJO_URL}
root@debian13:~$ chmod +x forgejo-runner
root@debian13:~$ wget -O forgejo-runner.asc ${FORGEJO_URL}.asc
root@debian13:~$ gpg --keyserver hkps://keys.openpgp.org --recv EB114F5E6C0DC2BCDD183550A4B61A2DC5923710
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key A4B61A2DC5923710: public key "Forgejo <contact@forgejo.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
root@debian13:~$ gpg --verify forgejo-runner.asc forgejo-runner && echo "✓ Verified" || echo "✗ Failed"
gpg: Signature made Sat 06 Dec 2025 11:10:50 PM CET
gpg:                using EDDSA key 0F527CF93A3D0D0925D3C55ED0A820050E1609E5
gpg: Good signature from "Forgejo <contact@forgejo.org>" [unknown]
gpg:                 aka "Forgejo Releases <release@forgejo.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: EB11 4F5E 6C0D C2BC DD18  3550 A4B6 1A2D C592 3710
     Subkey fingerprint: 0F52 7CF9 3A3D 0D09 25D3  C55E D0A8 2005 0E16 09E5
✓ Verified

Move that to a location which is in the PATH:

root@debian13:~$ mv forgejo-runner /usr/local/bin/forgejo-runner
root@debian13:~$ forgejo-runner -v
forgejo-runner version v12.1.2

As usual, a separate user should be created to run a service:

root@debian13:~$ groupadd runner
root@debian13:~$ useradd -g runner -m -s /bin/bash runner

As the runner will use Docker, Podman or LXC to execute the Actions, we’ll need to install Podman as well:

root@debian13:~$ apt install -y podman podman-docker
root@debian13:~$ podman --version
podman version 5.4.2
root@debian13:~$ systemctl enable --now podman.socket
root@debian13:~$ machinectl shell runner@
Connected to the local host. Press ^] three times within 1s to exit session.
runner@debian13:~$ systemctl --user enable --now podman.socket
Created symlink '/home/runner/.config/systemd/user/sockets.target.wants/podman.socket' → '/usr/lib/systemd/user/podman.socket'.

Now we need to register the runner with the Forgejo instance. Before we can do that, we need to fetch the registration token:

Back on the runner, register it:

root@debian13:~$ su - runner
runner@debian13:~$ forgejo-runner register
INFO Registering runner, arch=amd64, os=linux, version=v12.1.2. 
WARN Runner in user-mode.                         
INFO Enter the Forgejo instance URL (for example, https://next.forgejo.org/): 
http://192.168.122.66:3000/
INFO Enter the runner token:                      
BBE3MbNuTl0Wl52bayiRltJS8ciagRqghe7bXIXE
INFO Enter the runner name (if set empty, use hostname: debian13): 
runner1
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm): 

INFO Registering runner, name=runner1, instance=http://192.168.122.66:3000/, labels=[docker:docker://data.forgejo.org/oci/node:20-bullseye]. 
DEBU Successfully pinged the Forgejo instance server 
INFO Runner registered successfully.              
runner@debian13:~$ 

This will make the new runner visible in the interface, but it is in “offline” state:

Time to startup the runner:

root@debian13:~$ cat /etc/systemd/system/forgejo-runner.service
[Unit]
Description=Forgejo Runner
Documentation=https://forgejo.org/docs/latest/admin/actions/
After=docker.service

[Service]
ExecStart=/usr/local/bin/forgejo-runner daemon
ExecReload=/bin/kill -s HUP $MAINPID

# This user and working directory must already exist
User=runner 
WorkingDirectory=/home/runner
Restart=on-failure
TimeoutSec=0
RestartSec=10

[Install]
WantedBy=multi-user.target

root@debian13:~$ systemctl daemon-reload
root@debian13:~$ systemctl enable forgejo-runner
root@debian13:~$ systemctl start forgejo-runner

Once the runner is running, the status in the interface will switch to “Idle”:

Ready for our first “Action”. Actions are defined as YAML files in a specific directory of the repository:

dwe@ltdwe:~/Downloads/myrepo$ mkdir -p .forgejo/workflows/
dwe@ltdwe:~/Downloads/myrepo$ cat .forgejo/workflows/demo.yaml
on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      - run: echo All good!

dwe@ltdwe:~/Downloads/myrepo$ git add .forgejo/
dwe@ltdwe:~/Downloads/myrepo$ git commit -m "my first action"
[main f9aa487] my first action
 1 file changed, 6 insertions(+)
 create mode 100644 .forgejo/workflows/demo.yaml
dwe@ltdwe:~/Downloads/myrepo$ git push

What that does: whenever there is a “push” to the repository, a job is executed on the runner with the label “docker”, which does nothing more than print “All good!”. If everything went fine you should see the result under the “Actions” section of the repository:
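
By the way, if you want the job to actually work with the repository content, a slightly extended workflow could look like the sketch below. It assumes that the default Forgejo action mirror (code.forgejo.org) is reachable from the runner and that the container image provides Node for the checkout action (the default label registered above maps to a node:20 image):

on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4   # fetched from the default action mirror
      - run: ls -la                 # the repository content is now available
      - run: cat README.md
      - run: echo "Run your build or tests here"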

Nice, now we’re ready to do some real work, but this is the topic for the next post.


AI Isn’t Your Architect: Real-World Issues in a Vue project

Mon, 2025-12-15 06:37

In my previous article I generated a REST NestJS API using AI.
Today, I will create a small UI to authenticate users via the API. I will use this simple case to show the limits of coding with AI and what you need to be attentive to.

I will create my interface with Vue 3 and Vuetify, still using the GitHub Copilot agent in VS Code.

Initializing the project

I create the new Vuetify project with the npm command:

npm create vuetify@latest

To avoid CORS requests between the Vuetify project and the API project, I’m configuring a proxy in Vite, as in my other article.

In the AI chat, I also initialize my context

Remember:
- You are a full-stack TypeScript developer.
- You follow best practices in development and security.
- You will work on this NestJS project.

To guide the AI, I’m exporting the OpenAPI definition into a file in my project: /api-docs/open-api.json

Connecting to API, first issue

First, I want to connect my UI to the API, and I ask the AI the following:

Connect the application to the API. The API url path is /api

The result is not what I expected… My goal was to generate a simple class that makes requests to the API with support for JWT tokens, but by default the AI wanted to add the Axios library to the project.

I’m not saying that Axios is a bad library, but it’s far more than I need for this use case, and it would add more dependencies to the project, and therefore more maintenance.

So I’m skipping the installation of the library and I’m stopping the AI agent.

To continue and generate the desired code, I ask the AI:

I don't want to use axios, connect the application to the API with native typescript code

With this prompt, the generated code is fine.
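
For reference, the kind of code I was after is a thin wrapper around the native fetch API. The sketch below is my own simplified version, not the exact generated code (the '/api' prefix matches the Vite proxy configured earlier, the rest is an assumption):

// api.ts - minimal API client based on the native fetch API
let accessToken: string | null = null; // set by the authentication service

export function setAccessToken(token: string | null): void {
  accessToken = token;
}

export async function apiRequest<T>(path: string, options: RequestInit = {}): Promise<T> {
  const headers = new Headers(options.headers);
  headers.set('Content-Type', 'application/json');
  if (accessToken) {
    headers.set('Authorization', `Bearer ${accessToken}`);
  }

  // '/api' is the proxy prefix configured in Vite
  const response = await fetch(`/api${path}`, { ...options, headers });
  if (!response.ok) {
    throw new Error(`API error ${response.status}: ${response.statusText}`);
  }
  return (await response.json()) as T;
}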

Authentication Service, hidden issue

Without going into the details, I asked the AI to create my authentication form and the associated service:

Add a page /login to authenticate users, Use vuetify for login form.
Add a service to authenticate the users using the api endpoint /auth/login
The api return jwt token.
When the user is authenticated, redirect the user to /home
If a user accesses /index without authentication redirect the user to /login

The result looks good and works:

At first glance, the code works and I can authenticate myself. But the problem comes from the code itself:

localStorage is accessible to any script running on the page, which makes tokens stored there vulnerable to XSS attacks.

JWT access tokens should not be stored in persistent storage accessible by JavaScript, such as localStorage. To reduce the risk of XSS attacks, it is preferable to store the access token in a Vue service variable rather than in persistent browser storage.

Note: When stored in memory, the token will be lost at every page refresh, which requires implementing a refresh token mechanism. The refresh token should be stored in an HttpOnly cookie, allowing the access token to have a short expiration time and significantly limiting the impact of a potential attack.
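
As an illustration of that approach, here is a hedged sketch of an authentication service that keeps the access token in memory and relies on a refresh endpoint backed by an HttpOnly cookie. The endpoint names and the setAccessToken helper (reused from the sketch above) are assumptions, not the code generated by Copilot:

// auth.ts - access token in memory, refresh token in an HttpOnly cookie
import { setAccessToken } from './api'; // hypothetical module from the previous sketch

let currentToken: string | null = null;

export async function login(email: string, password: string): Promise<void> {
  // credentials: 'include' lets the server set the HttpOnly refresh cookie
  const res = await fetch('/api/auth/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error('Authentication failed');
  const { accessToken } = await res.json();
  currentToken = accessToken;
  setAccessToken(accessToken);
}

export async function refresh(): Promise<boolean> {
  // Called after a page reload: the browser sends the HttpOnly cookie automatically
  const res = await fetch('/api/auth/refresh', { method: 'POST', credentials: 'include' });
  if (!res.ok) return false;
  const { accessToken } = await res.json();
  currentToken = accessToken;
  setAccessToken(accessToken);
  return true;
}

export function isAuthenticated(): boolean {
  return currentToken !== null;
}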

To solve the issue I asked the AI the following:

Don't use localStorage to store the token, it's a security issue

Using GPT-5 mini, it just does the work:

With Claude Haiku 4.5, we have a short notice:

Why does this happen?

I tried different AI models in GitHub Copilot, but, from GPT to Claude, the result was similar. Most AIs generate code with Axios and localStorage for this use, because they replicate the most common patterns found in their training data, not the most up-to-date or secure practices.

Axios is overrepresented in tutorials because it offers a simple, opinionated HTTP abstraction that is easier for an AI to reason about than the lower-level fetch API.

The storage of JWTs in localStorage is still widely shown online, as it reflects old frontend authentication practices that prioritized simplicity over security. It keeps the token easily accessible to JavaScript and avoids dealing with cookies and refresh token rotation. Although largely discouraged today, these examples remain overrepresented in the tutorials and training data used by AI models.

In short, AI prioritizes widely recognized patterns and simplicity of implementation over minimalism and real-world security considerations.

Conclusion

Although AI is an incredible tool that helps us in our development work, it is important to understand its limits. With AI, the new role of developers is to imagine the code architecture, ask the AI, evaluate the result and review the code. As its name suggests, “Copilot” is your co-pilot; you must remain the pilot.

AI can write code, but it does not understand the consequences of architectural decisions.


The truth about nested transactions in SQL Server

Mon, 2025-12-15 04:42

Working with transactions in SQL Server can feel like navigating a maze blindfolded. On paper, nested transactions look simple enough: start one, start another, commit them both. But under the hood, SQL Server plays by a very different set of rules. And that’s exactly where developers get trapped.

In this post, we’re going to look at what really happens when you try to use nested transactions in SQL Server. We’ll walk through a dead-simple demo, expose why @@TRANCOUNT is more illusion than isolation, and see how a single rollback can quietly unravel your entire call chain. If you’ve ever assumed nested transactions behave the same way as in Oracle, for example, this might clarify a few things you didn’t expect!

Practical example

Before diving into the demonstration, let’s set up a simple table in tempdb and illustrate how nested transactions behave in SQL Server.

IF OBJECT_ID('tempdb..##DemoLocks') IS NOT NULL
    DROP TABLE ##DemoLocks;

CREATE TABLE ##DemoLocks (id INT IDENTITY, text VARCHAR(50));

BEGIN TRAN MainTran;

BEGIN TRAN InnerTran;
INSERT INTO ##DemoLocks (text) VALUES ('I''m just a speedy insert ! Nothing to worry about');
COMMIT TRAN InnerTran;

WAITFOR DELAY '00:00:10';

ROLLBACK TRAN MainTran;

Let’s see how locks behave after committing the nested transaction and entering the WAITFOR phase. If nested transactions provided isolation from each other, no locks should remain, since the inner transaction no longer works on any object. The following query shows all locks associated with my query specifically and the ##DemoLocks table we are working on.

SELECT 
    l.request_session_id AS SPID,
    r.blocking_session_id AS BlockingSPID,
    resource_associated_entity_id,
    DB_NAME(l.resource_database_id) AS DatabaseName,
    OBJECT_NAME(p.object_id) AS ObjectName,
    l.resource_type AS ResourceType,
    l.resource_description AS ResourceDescription,
    l.request_mode AS LockMode,
    l.request_status AS LockStatus,
    t.text AS SQLText
FROM sys.dm_tran_locks l
LEFT JOIN sys.dm_exec_requests r
    ON l.request_session_id = r.session_id
LEFT JOIN sys.partitions p
    ON l.resource_associated_entity_id = p.hobt_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t
where t.text like 'IF OBJECT%'
    and OBJECT_NAME(p.object_id) = '##DemoLocks'
ORDER BY l.request_session_id, l.resource_type;

And the result:

All of this was just smoke and mirrors!
We clearly see in the image two persistent locks of different types:

  • LockMode IX: Intent lock on a data page of the ##DemoLocks table. This indicates that a lock is active on one of its sub-elements to optimize the engine’s lock checks.
  • LockMode X: Exclusive lock on a RID (Row Identifier) for data writing (here, our INSERT).

    For more on locks and their usage: sys.dm_tran_locks (Transact-SQL) – SQL Server | Microsoft Learn

In conclusion, SQL Server does not allow nested transactions to maintain isolation from each other and keeps nested transactions dependent on their main transaction, which prevents the release of locks. Therefore, the rollback of MainTran leaves the table empty, even though a COMMIT was issued at the nested transaction level. This behavior still respects the ACID properties (Atomicity, Consistency, Isolation, and Durability), which are crucial for maintaining data validity and reliability in database management systems.

Now that we have shown that nested transactions have no useful effect on lock management and isolation, let’s see if they have even worse consequences. To do this, let’s create the following code and observe how SQL Server behaves under intensive nested transaction creation. This time, we will add SQL Server’s native @@TRANCOUNT variable, which allows us to analyze the number of open transactions currently in progress.

 CREATE PROCEDURE dbo.NestedProc
    @level INT
AS
BEGIN
    BEGIN TRANSACTION;

    PRINT 'Level ' + CAST(@level AS VARCHAR(3)) + ', @@TRANCOUNT = ' + CAST(@@TRANCOUNT AS VARCHAR(3));

    IF @level < 100
    BEGIN
        SET @level += 1
        EXEC dbo.NestedProc @level;
    END

    COMMIT TRANSACTION;
END
GO

EXEC dbo.NestedProc 1;

This procedure recursively creates 100 nested transactions, if we manage to go that far… Let’s look at the output.

Level 1, @@TRANCOUNT = 1
[...]
Level 32, @@TRANCOUNT = 32

Msg 217, Level 16, State 1, Procedure dbo.NestedProc, Line 12 [Batch Start Line 15]
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).

Indeed, SQL Server imposes various limits here, in particular a maximum nesting level of 32 for stored procedures, functions, triggers and views. If nesting is mismanaged, the application may suddenly see its query killed, which can be very dangerous. These limits act as safeguards against infinite nesting loops.
Furthermore, we see that @@TRANCOUNT increments with each new BEGIN TRANSACTION, but it does not reflect the true number of active main transactions: there are 32 transactions ongoing, but only the outermost one can actually release locks.

OK, but we still haven’t seen any real nested transaction!

I understand, we cannot stop here. I need to go get my old Oracle VM from my garage and fire it up.
Oracle has a pragma called AUTONOMOUS_TRANSACTION that allows creating independent transactions inside a main transaction. Let’s see this in action with a small code snippet.

CREATE TABLE test_autonomous (
    id NUMBER PRIMARY KEY,
    msg VARCHAR2(100)
);
/

CREATE OR REPLACE PROCEDURE auton_proc IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO test_autonomous (id, msg) VALUES (2, 'Autonomous transaction');
    COMMIT;
END;
/

CREATE OR REPLACE PROCEDURE main_proc IS
BEGIN
    INSERT INTO test_autonomous (id, msg) VALUES (1, 'Main transaction');
    auton_proc;
    ROLLBACK;
END;
/

In this code, we create two procedures:

  • main_proc, the main procedure, inserts the first row into the table.
  • auton_proc, called by main_proc, adds a second row to the table.

auton_proc is committed while main_proc is rolled back. Let’s observe the result:

SQL> SELECT * FROM test_autonomous;

        ID MSG
---------- --------------------------------------------------
         2 Autonomous transaction

Now that’s a true nested transaction! Here, the nested transaction achieves isolation and can persist independently of its main transaction.

Summary

In summary, SQL Server and Oracle handle nested transactions in very different ways. In SQL Server, nested transactions do not create real isolation: @@TRANCOUNT may increase, but a single main transaction actually controls locks and the persistence of changes. Internal limits, like the maximum nesting level of 32 for procedures, show that excessive nesting can cause critical errors.

In contrast, Oracle, thanks to PRAGMA AUTONOMOUS_TRANSACTION, allows truly independent transactions within a main transaction. These autonomous transactions can be committed or rolled back without affecting the main transaction, providing a real mechanism for nested isolation.

As Brent Ozar points out, SQL Server also has a SAVE TRANSACTION command, which lets you define a savepoint inside a transaction and roll back to it, for example after a nested transaction has been committed. This command therefore provides more flexibility in managing nested transactions, but it does not provide complete isolation of sub-transactions. Furthermore, as Brent Ozar emphasizes, this command is complex and requires careful analysis of its behavior and the consequences it entails.
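
To make the savepoint behavior concrete, here is a small sketch reusing the same kind of demo table as above. Note that this is still a single transaction with a savepoint, not an isolated sub-transaction:

BEGIN TRAN MainTran;

INSERT INTO ##DemoLocks (text) VALUES ('Kept: inserted before the savepoint');

SAVE TRANSACTION BeforeRiskyWork;

INSERT INTO ##DemoLocks (text) VALUES ('Discarded: inserted after the savepoint');

-- Undo only the work done since the savepoint; the outer transaction stays open
ROLLBACK TRANSACTION BeforeRiskyWork;

COMMIT TRAN MainTran;

-- Only the first row is persisted
SELECT * FROM ##DemoLocks;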
Another approach to bypass SQL Server’s nested-transaction limitations is to manage transaction coordination directly at the application level, where each logical unit of work can be handled independently.

The lesson is clear: appearances can be deceiving! Understanding the actual behavior of transactions in each DBMS is crucial for designing reliable applications and avoiding unpleasant surprises.


MongoDB DMK 2.3: new features

Mon, 2025-12-15 02:00

The latest MongoDB DMK release (2.3.1) introduces a lot of new features and important changes, which I will describe here.

dbi services provides the DMK (Database Management Kit) to its customers for multiple technologies: Oracle, Postgres, MongoDB, etc. This toolkit is provided free of charge to all clients who work with dbi services on a consulting project.

The DMK is a set of standardized tools aiming at easing the work of DBAs, by having dbi’s best practices embedded in common scripts across all the database servers of an organization.

New features of the MongoDB DMK

Rewriting of the project

The most significant change in the MongoDB DMK is the rewriting of all the old Perl scripts into new Python scripts. On top of being better adapted to the MongoDB ecosystem, these will improve modularity for customers who want to write their own packages.

It means that all utility scripts are now named .py instead of .sh, and apart from new features that have been added, the basic behavior stays the same for all of them.

DMK configuration file

Before release 2.3.0, only one configuration file existed, in $DMK_HOME/etc. There is now a second configuration file in ~/.dmk/dmk.conf.local, which overrides the default configuration options. See the GitBook section on Using DMK for more information.

New default directories and more versatility

The Optimal Flexible Architecture (OFA) has new recommendations. Specifically, the new default architecture is the following:

  • /u01 for binaries and admin folders
  • /u02 for database files
  • /u03 for journal files
  • /u04 for log files
  • /u90 for backup files

Even though dbi suggests OFA as a good standard for MongoDB installations, we know that a lot of legacy installations will not use this kind of architecture. This is why the DMK is now more versatile, and with the use of the local configuration file described above, it has never been easier to adapt the DMK to your needs.

New aliases and environment variables

Some aliases were changed in this release, others were added. See Environment Variables and Aliases in the documentation for more information.

  • mgstart, mgstop, mgrestart are new aliases to manage a MongoDB instance.
  • vic now opens the MongoDB instance configuration file.
  • vilst now opens the $DMK_HOME/etc/mongodb.lst file.
  • sta, lsta, tsta, rsta are new aliases for instance status display.
  • vil, cdl, tal are new aliases to view, access and tail log files of MongoDB instances.
  • dmkc opens DMK default configuration file.
  • dmkl opens DMK local configuration file, which overrides the default configuration file.
Other changes
  • A script named set_local_dmk_config.py was created to automate local configuration file changes. See Environment Variables for more details.
  • Backups are no longer compressed by default, and the option to compress them has been added to the dmk_dbbackup.py script.
  • And of course, corrections of bugs.
Installing the DMK for the first time

Installing the DMK is always fairly easy. If you follow the OFA, just unzip the package and source dmk.sh.

[root@vm00 ~]$ su - mongodb
[mongodb@vm00 ~]$ unzip -oq dmk_mongodb-2.3.1.zip -d /u01/app/mongodb/local
[mongodb@vm00 ~]$ . /u01/app/mongodb/local/dmk/bin/dmk.sh
2025-12-04 10:03:48 | INFO | DMK_HOME environment variable is not defined.
2025-12-04 10:03:48 | INFO | First time installation of DMK.
2025-12-04 10:03:48 | INFO | DMK has been extracted to /u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | Using DMK_HOME=/u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | Default configuration file '/u01/app/mongodb/local/dmk/etc/dmk.conf.default' does not exist. Creating it.
2025-12-04 10:03:48 | INFO | Copying template file '/u01/app/mongodb/local/dmk/templates/etc/dmk.conf.unix' to '/u01/app/mongodb/local/dmk/etc/dmk.conf.default'
2025-12-04 10:03:48 | INFO | Local configuration file does not exist. Creating it.
2025-12-04 10:03:48 | INFO | Copying template file '/u01/app/mongodb/local/dmk/templates/etc/dmk.conf.local.template' to '/home/mongodb/.dmk/dmk.conf.local'
2025-12-04 10:03:48 | INFO | Creating symlink '/u01/app/mongodb/local/dmk/etc/dmk.conf.local' to '/home/mongodb/.dmk/dmk.conf.local'
2025-12-04 10:03:48 | WARNING | MONGO_BASE environment variable is not set. Trying to retrieve it from DMK_HOME (/u01/app/mongodb/local/dmk).
2025-12-04 10:03:48 | WARNING | MONGO_BASE set to '/u01/app/mongodb' based on DMK_HOME location.
2025-12-04 10:03:48 | WARNING | If you're running DMK for the first time, you can ignore these warnings.
2025-12-04 10:03:48 | WARNING | Otherwise, please set MONGO_BASE in /home/mongodb/.DMK before sourcing DMK.
2025-12-04 10:03:48 | WARNING | File '/u01/app/mongodb/etc/mongodb.lst' does not exist. Creating an empty file.
2025-12-04 10:03:48 | INFO | Creating DMK source file at '/home/mongodb/.DMK' with the following content:
2025-12-04 10:03:48 | INFO | DMK_HOME=/u01/app/mongodb/local/dmk
2025-12-04 10:03:48 | INFO | PYTHON_BIN=/usr/bin/python3
2025-12-04 10:03:48 | INFO | MONGO_BASE=/u01/app/mongodb
2025-12-04 10:03:48 | WARNING | Please make sure to source the .DMK file in your shell profile (e.g., .bash_profile).
2025-12-04 10:03:48 | WARNING | An example is provided at /u01/app/mongodb/local/dmk/templates/profile/dmk.mongodb.profile

If you don’t follow the OFA, you should define the following mandatory variables before running the DMK, inside the /home/mongodb/.DMK file:

  • DMK_HOME: path to the DMK main folder
  • PYTHON_BIN: path to the Python binaries (3.6+ necessary, which is the default for Linux 8-like platforms)
  • MONGO_BASE: path to the MongoDB base directory (e.g. /u01/app/mongodb)
[root@vm00 ~]$ su - mongodb
[mongodb@vm00 ~]$ echo "DMK_HOME=/u01/app/mongodb/local/dmk" > ~/.DMK
[mongodb@vm00 ~]$ echo "PYTHON_BIN=/usr/bin/python3" >> ~/.DMK
[mongodb@vm00 ~]$ echo "MONGO_BASE=/u01/app/mongodb" >> ~/.DMK

[mongodb@vm00 ~]$ cat ~/.DMK
export DMK_HOME=/u01/app/mongodb/local/dmk
export PYTHON_BIN=/usr/bin/python3
export MONGO_BASE=/u01/app/mongodb
Loading DMK at login

If you want the DMK to be loaded when logging in, you should add the following code block to the .bash_profile of the mongodb user:

# BEGIN DMK BLOCK
if [ -z "$DMK_HOME" ]; then
  if [ -f "$HOME/.DMK" ]; then
    . "$HOME/.DMK"
  else
    echo "$HOME/.DMK file does not exist"
    return 1
  fi
fi

# Launched at login
. ${DMK_HOME}/bin/dmk.sh && ${PYTHON_BIN} ${DMK_HOME}/bin/dmk_status.py --table --all
# END DMK BLOCK

After this, you can just log in again. The installation is complete!

Migrating from a former version of the DMK

If you already have the MongoDB DMK installed on your systems, there are a few more steps to take for this specific upgrade, because we switched from old Perl libraries to Python.

You first need to adapt the .DMK file, as described in the installation steps.

[mongodb@vm00 ~]$ cat ~/.DMK
export DMK_HOME=/u01/app/mongodb/local/dmk
export PYTHON_BIN=/usr/bin/python3
export MONGO_BASE=/u01/app/mongodb

Then, move the former DMK folder and unzip the new version of the DMK. The old DMK should be a hidden directory, otherwise DMK will consider it as a custom package!

mongodb@vm00:/home/mongodb/ [DUMMY] cd /u01/app/mongodb/local/
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] ls -l
drwxrwx---. 10 mongodb mongodb 118 Jul  1 04:34 dmk
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] mv dmk .dmk_old
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] unzip /u01/app/mongodb/artifacts/dmk_mongodb-2.3.1.zip
mongodb@vm00:/u01/app/mongodb/local/ [DUMMY] ls -ail
100690250 drwxrwx---.  8 mongodb mongodb  96 Jul  1 04:24 dmk
 33554663 drwxrwx---. 10 mongodb mongodb 118 Jul  1 04:34 .dmk_old

Update your .bash_profile to remove all traces of the former DMK loading mechanism. Here is an example of the minimal DMK block in the template file:

# BEGIN DMK BLOCK
if [ -z "$DMK_HOME" ]; then
    if [ -f "$HOME/.DMK" ]; then
        . "$HOME/.DMK"
    else
        echo "$HOME/.DMK file does not exist. It is needed to source DMK at login. Run '. <DMK_HOME>/bin/dmk.sh' or 'source <DMK_HOME>/bin/dmk.sh' to source DMK manually this time."
        return 1
    fi
fi

# Launched at login
. ${DMK_HOME}/bin/dmk.sh && ${PYTHON_BIN} ${DMK_HOME}/bin/dmk_status.py --table --all
# END DMK BLOCK

Last but not least, you will have to customize your local DMK configuration file ~/.dmk/dmk.conf.local. You can use the set_local_dmk_config.py script to help you with the modifications.

mongodb@vm00:/u01/app/mongodb/admin/ [mdb01] set_local_dmk_config.py INSTANCE MONGO_JOURNAL "\${MONGO_DATA_ROOT}/\${MONGO_INSTANCE}/journal"
Backup created: /home/mongodb/.dmk/dmk.conf.bak_20251024_084959
Updated MONGO_JOURNAL in [INSTANCE]
Old value: var::MONGO_JOURNAL::=::nowarn::"${MONGO_JOURNAL_ROOT}/${MONGO_INSTANCE}"::
New value: var::MONGO_JOURNAL::=::nowarn::"${MONGO_DATA_ROOT}/${MONGO_INSTANCE}/journal"::
Use 'dmkc' and 'dmkl' aliases to quickly view default and local configuration files.

For any questions regarding the MongoDB DMK, take a look at the documentation or feel free to contact me.


What is Forgejo and getting it up and running on FreeBSD 15

Fri, 2025-12-12 08:11

In recent customer projects I had less to do with PostgreSQL and more with reviewing infrastructures and giving recommendations on what to improve and how. In all of those projects GitLab is used in one way or the other. Some only use it for managing their code in Git and working on issues, others use pipelines to build their stuff, and others use almost the full set of features. GitLab is a great product, but sometimes you do not need the full set of features, so I started to look for alternatives, mostly out of my own interest. One of the more popular choices seemed to be Gitea, but as a company was created around it, a fork was created, and this is Forgejo. The FAQ summarizes the most important topics around the project pretty well, so please read it.

As FreeBSD 15 was released on December 2nd, that’s the perfect chance to get it up and running there and have a look at how it feels. I am not going into the installation of FreeBSD 15, as this really is straightforward. I just want to mention that I opted for the “packaged base system” (currently in tech preview) instead of the distribution sets. What that means is that the whole system is installed and managed with packages and you don’t need freebsd-update anymore. Although it is still available, it will not work anymore if you try to use it:

root@forgejo:~ $ cat /etc/os-release 
NAME=FreeBSD
VERSION="15.0-RELEASE"
VERSION_ID="15.0"
ID=freebsd
ANSI_COLOR="0;31"
PRETTY_NAME="FreeBSD 15.0-RELEASE"
CPE_NAME="cpe:/o:freebsd:freebsd:15.0"
HOME_URL="https://FreeBSD.org/"
BUG_REPORT_URL="https://bugs.FreeBSD.org/"
root@forgejo:~ $ freebsd-update fetch
freebsd-update is incompatible with the use of packaged base.  Please see
https://wiki.freebsd.org/PkgBase for more information.

Coming back to Forgejo: On FreeBSD this is available as a package, so you can just go ahead and install it:

root@forgejo:~$ pkg search forgejo
forgejo-13.0.2_1               Compact self-hosted Git forge
forgejo-act_runner-9.1.0_2     Act runner is a runner for Forgejo based on the Gitea Act runner
forgejo-lts-11.0.7_1           Compact self-hosted Git forge
forgejo7-7.0.14_3              Compact self-hosted Git service
root@forgejo:~ $ pkg install forgejo
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
The following 32 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        FreeBSD-clibs-lib32: 15.0 [FreeBSD-base]
        brotli: 1.1.0,1 [FreeBSD-ports]
...
Number of packages to be installed: 32

The process will require 472 MiB more space.
100 MiB to be downloaded.

Proceed with this action? [y/N]: y
Message from python311-3.11.13_1:

--
Note that some standard Python modules are provided as separate ports
as they require additional dependencies. They are available as:

py311-gdbm       databases/py-gdbm@py311
py311-sqlite3    databases/py-sqlite3@py311
py311-tkinter    x11-toolkits/py-tkinter@py311
=====
Message from git-2.51.0:

--
If you installed the GITWEB option please follow these instructions:

In the directory /usr/local/share/examples/git/gitweb you can find all files to
make gitweb work as a public repository on the web.

All you have to do to make gitweb work is:
1) Please be sure you're able to execute CGI scripts in
   /usr/local/share/examples/git/gitweb.
2) Set the GITWEB_CONFIG variable in your webserver's config to
   /usr/local/etc/git/gitweb.conf. This variable is passed to gitweb.cgi.
3) Restart server.


If you installed the CONTRIB option please note that the scripts are
installed in /usr/local/share/git-core/contrib. Some of them require
other ports to be installed (perl, python, etc), which you may need to
install manually.
=====
Message from git-lfs-3.6.1_8:

--
To get started with Git LFS, the following commands can be used:

  1. Setup Git LFS on your system. You only have to do this once per
     repository per machine:

     $ git lfs install

  2. Choose the type of files you want to track, for examples all ISO
     images, with git lfs track:

     $ git lfs track "*.iso"

  3. The above stores this information in gitattributes(5) files, so
     that file needs to be added to the repository:

     $ git add .gitattributes

  4. Commit, push and work with the files normally:

     $ git add file.iso
     $ git commit -m "Add disk image"
     $ git push
=====
Message from forgejo-13.0.2_1:

--
Before starting forgejo for the first time, you must set a number of
secrets in the configuration file. For your convenience, a sample file
has been copied to /usr/local/etc/forgejo/conf/app.ini.

You need to replace every occurence of CHANGE_ME in the file with
sensible values. Please refer to the official documentation at
https://forgejo.org for details.

You will also likely need to create directories for persistent storage.
Run
    su -m git -c 'forgejo doctor check'
to check if all prerequisites have been met.

What I really like about the FreeBSD packages is that they usually give clear instructions on what to do. We’ll go with the web-based installer, so:

root@forgejo:~ $ chown git:git /usr/local/etc/forgejo/conf
root@forgejo:~ $ rm /usr/local/etc/forgejo/conf/app.ini
root@forgejo:~ $ service -l | grep for
forgejo
root@forgejo:~ $ service forgejo enable
forgejo enabled in /etc/rc.conf
root@forgejo:~ $ service forgejo start
2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() [W] SCRIPT_TYPE "bash" is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?

[1] Check paths and basic configuration
 - [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
 - [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
 - [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory

2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() [W] SCRIPT_TYPE "bash" is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?

[1] Check paths and basic configuration
 - [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
 - [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
 - [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory

It seems bash is expected, but it is not available right now:

root@forgejo:~ $ which bash
root@forgejo:~ $ 

Once more:

root@forgejo:~ $ pkg install bash
root@forgejo:~ $ service forgejo stop
Stopping forgejo.
root@forgejo:~ $ service forgejo start

[1] Check paths and basic configuration
 - [E] Failed to find configuration file at '/usr/local/etc/forgejo/conf/app.ini'.
 - [E] If you've never ran Forgejo yet, this is normal and '/usr/local/etc/forgejo/conf/app.ini' will be created for you on first run.
 - [E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - [E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
root@forgejo:~ $ service forgejo status
forgejo is running as pid 3448.

The web installer is available on port 3000 and you can choose between the usual database backends:

To keep it simple I went with SQLite3, kept everything at the defaults, and provided the administrator information further down the screen. Before the installer succeeded, I had to create these two directories:

root@forgejo:~ $ mkdir /usr/local/share/forgejo/data/
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/data/
root@forgejo:~ $ mkdir /usr/local/share/forgejo/log
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/log

Once that was done it went fine and this is the welcome screen:

As with the other tools in that area there are the common sections like “Issues”, “Pull requests”, and “Milestones”.

In the next post we’re going to create an organization and a repository, and try to create a simple pipeline (as GitLab would call it).

The article What is Forgejo and getting it up and running on FreeBSD 15 first appeared on dbi Blog.

How effective is AI on a development project?

Fri, 2025-12-12 01:49

In this article, I will try to evaluate the benefits of AI on a development project and what concrete changes it makes to our development practices.

The test case and the approach

I chose a familiar environment for my comparison: a new NestJS project from scratch.

For my project, I want to:

  • Use a .env file for configuration
  • Connect to a PostgreSQL database
  • Store users in a database table
  • Create a CRUD API to manage my users
  • Manage JWT authentication based on my user list
  • Secure CRUD routes for authenticated users using a guard

To help me, I’m going to use the GitHub Copilot agent with the GPT-5 mini model. I’ll ask it to generate as much code as possible on my behalf. However, I’ll continue to follow NestJS best practices by using the documentation recommendations and initializing the project myself. I will focus on prompting, initializing the context, and reviewing the code generated by the AI.

For better results, I will develop the application step by step and review the generated code at each step.

Initialize the project

At first, I initialize a new NestJS project using the CLI, as mentioned in the documentation:

npm i -g @nestjs/cli
nest new nestjs-project
First contact with the AI agent

I start by opening the project in VS Code and opening a new chat with the AI agent. I give it some general instructions for the rest of the tasks:

Remember:
- You are a full-stack TypeScript developer.
- You follow best practices in development and security.
- You will be working on this NestJS project.

The AI agent discovers the project:

First Task, add application configuration

I followed the documentation to add configuration support using .env files.

I’ve manually added the required package:

npm i --save @nestjs/config

And asked the AI to generate the code:

@nestjs/config is installed. Add support for .env in the application. The config file must contain the credentials to access to the database (host, database name, user, password).
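
For reference, the standard @nestjs/config wiring such a prompt should produce looks roughly like this (a sketch; the environment variable names are illustrative):

// app.module.ts (sketch)
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    // Loads .env and makes ConfigService available everywhere
    ConfigModule.forRoot({ isGlobal: true }),
  ],
})
export class AppModule {}
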
Second Task, connect to the database and create the users table

I want to use TypeORM to manage my database connections and migrations.

First, I install the required packages:

npm install --save @nestjs/typeorm typeorm pg

And then ask the AI agent to generate the code:

I will use typeorm and postgres. Connect the application to the database.
Save the credentials in the .env file.
Use the credentials:
- host: localhost
- name: nestjs, 
- user: nest-user 
- password XXX

Note: Be careful when sending credentials to an AI.
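
The resulting wiring typically combines ConfigService with TypeOrmModule.forRootAsync; a sketch of the expected shape (variable names are illustrative):

// app.module.ts (sketch)
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }),
    TypeOrmModule.forRootAsync({
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        type: 'postgres',
        host: config.get<string>('DB_HOST'),
        database: config.get<string>('DB_NAME'),
        username: config.get<string>('DB_USER'),
        password: config.get<string>('DB_PASSWORD'),
        autoLoadEntities: true,
        synchronize: false, // the schema is handled by migrations
      }),
    }),
  ],
})
export class AppModule {}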

Next request to the AI agent: create a migration to initialize the database schema:

Add a migration to create a "users" table with the following fields: id (primary key), username (string), email (string), password (string), is_admin (boolean), disabled (boolean), created_at (timestamp), updated_at (timestamp).

In addition, the agent adds a migration command to the npm scripts in my package.json:

  "typeorm:migration:run": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:run -d ./data-source.ts",

To simplify the process, I asked the AI agent to generate a default user for my application:

In the migration, add a default admin user with the following values:
    username: "admin"
    email: "admin@example.com"
    password: "Admin@123" (hash the password using bcrypt)
    is_admin: true
    disabled: false

Once the AI agent has completed the task, I run the migration.
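
For reference, a TypeORM migration matching these prompts would look roughly like this (a sketch; the class name and timestamp are illustrative):

// src/migrations/1700000000000-CreateUsers.ts (sketch)
import { MigrationInterface, QueryRunner, Table } from 'typeorm';
import * as bcrypt from 'bcrypt';

export class CreateUsers1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.createTable(
      new Table({
        name: 'users',
        columns: [
          { name: 'id', type: 'serial', isPrimary: true },
          { name: 'username', type: 'varchar' },
          { name: 'email', type: 'varchar' },
          { name: 'password', type: 'varchar' },
          { name: 'is_admin', type: 'boolean', default: false },
          { name: 'disabled', type: 'boolean', default: false },
          { name: 'created_at', type: 'timestamp', default: 'now()' },
          { name: 'updated_at', type: 'timestamp', default: 'now()' },
        ],
      }),
    );

    // Seed the default admin user with a bcrypt-hashed password
    const hash = await bcrypt.hash('Admin@123', 10);
    await queryRunner.query(
      `INSERT INTO users (username, email, password, is_admin, disabled)
       VALUES ('admin', 'admin@example.com', $1, true, false)`,
      [hash],
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropTable('users');
  }
}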

First module, service and controller for users with CRUD endpoints

Now, I ask the agent to create the users module with detailed endpoints:

Add a module, service, and controller for "users" with the following endpoints:
- GET /users: Retrieve a list of all users.
- GET /users/:id: Retrieve a user by ID.
- POST /users: Create a new user.
- PUT /users/:id: Update a user by ID.
- DELETE /users/:id: Delete a user by ID.

This step is very quick: the code is generated in only 4 minutes!

Add Swagger documentation

To test the first REST module, I ask the AI to add Swagger UI to the project.

As with the other steps, I add the packages myself:

npm install --save @nestjs/swagger

Note: This step is tricky for the AI: if you don’t mention that the package is already installed, it will try to install an outdated version.

Then, I ask the AI agent to generate the code:

@nestjs/swagger is installed
Add swagger to the application. 
Document the users endpoints.
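
In NestJS, the Swagger setup boils down to a few lines in main.ts; a sketch of the expected result (title and path are illustrative):

// main.ts (sketch)
import { NestFactory } from '@nestjs/core';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const config = new DocumentBuilder()
    .setTitle('Users API')
    .setVersion('1.0')
    .build();
  const document = SwaggerModule.createDocument(app, config);
  SwaggerModule.setup('api', app, document); // Swagger UI served on /api

  await app.listen(3000);
}
bootstrap();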

In a few minutes, we have the API documentation:

During API testing, I noticed that the password hash was returned in the user list, even though I had initially instructed the AI to follow security best practices…

I asked the AI agent to fix this issue:

The users password field must be excluded from the responses.
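
A common way to achieve this in NestJS is to mark the field with @Exclude() from class-transformer and enable the ClassSerializerInterceptor; a sketch (the entity layout is illustrative, and declaring the column with select: false in TypeORM is another option):

// user.entity.ts (sketch)
import { Exclude } from 'class-transformer';
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity('users')
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  username: string;

  @Exclude() // stripped from serialized responses
  @Column()
  password: string;
}

// main.ts: enable serialization globally
// app.useGlobalInterceptors(new ClassSerializerInterceptor(app.get(Reflector)));
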
Last task, add JWT authentication

As the authentication mechanism, I use JWT tokens provided by the Passport library.

I install the required packages:

npm install --save @nestjs/passport
npm install --save @nestjs/jwt passport-jwt
npm install --save-dev @types/passport-jwt

Then, I ask the AI agent to generate the code:

Implement the JWT authentication strategy, @nestjs/jwt passport-jwt and @types/passport-jwt are installed.
Add a login endpoint that returns a JWT token when provided with valid user credentials (username and password from the users table).

And I instruct the AI to use the .env file for the JWT secret and expiration:

Add the JWT secrets and expiration into the .env file, Fix the typescript errors, Improve the swagger documentation for login endpoint (message definition)
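
The heart of the generated authentication code is a Passport JWT strategy that reads its secret from the configuration; a sketch of the expected shape (the payload fields are illustrative):

// jwt.strategy.ts (sketch)
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { PassportStrategy } from '@nestjs/passport';
import { ExtractJwt, Strategy } from 'passport-jwt';

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor(config: ConfigService) {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      ignoreExpiration: false,
      secretOrKey: config.get<string>('JWT_SECRET', 'change-me'),
    });
  }

  // Whatever is returned here is attached to request.user
  async validate(payload: { sub: number; username: string }) {
    return { userId: payload.sub, username: payload.username };
  }
}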

Now, I want to secure the users endpoints to allow only authenticated users and ask the agent the following:

Add a guard on the users endpoints to allow only connected users
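
Protecting the CRUD routes then amounts to applying the built-in AuthGuard('jwt') from @nestjs/passport at the controller level; a sketch:

// users.controller.ts (sketch)
import { Controller, Get, UseGuards } from '@nestjs/common';
import { AuthGuard } from '@nestjs/passport';

@UseGuards(AuthGuard('jwt')) // applies to every route of this controller
@Controller('users')
export class UsersController {
  @Get()
  findAll() {
    // ... delegate to the users service
  }
}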

Last point, I want to be able to authenticate on the Swagger interface, so I ask it:

Add the ability to authenticate on the Swagger interface with a bearer token.
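
This requires declaring the bearer scheme in the DocumentBuilder and tagging the protected controllers; a sketch:

// main.ts: add the scheme to the existing DocumentBuilder chain
// .addBearerAuth()

// users.controller.ts (sketch)
import { Controller, UseGuards } from '@nestjs/common';
import { AuthGuard } from '@nestjs/passport';
import { ApiBearerAuth } from '@nestjs/swagger';

@ApiBearerAuth() // shows the padlock and sends the token from Swagger UI
@UseGuards(AuthGuard('jwt'))
@Controller('users')
export class UsersController {}
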
Conclusion

All of this took me around 1h30 to complete, including prompting and reviewing the steps.

Reading the documentation, understanding the technologies, and adding the dependencies remained the same.

The initial estimate, without AI, was between 2 and 4 hours to complete the project:

Task                      Estimated Time   AI Coding / prompting   Review
.env                      15–30 min        6 min                   5 min
PostgreSQL connection     20–40 min        4 min                   2 min
User table + migration    15–25 min        7 min                   2 min
CRUD Users                30–45 min        5 min                   10 min
Swagger UI                15–30 min        6 min                   6 min
Auth JWT                  30–60 min        12 min                  15 min
Guards                    15–30 min        5 min                   5 min
TOTAL                     2h20 – 4h20      45 min                  45 min

During development, the AI makes some errors and inaccuracies, such as TypeScript compilation errors or security issues like returning the password hash in the user list. However, the time spent reviewing and correcting these issues is largely compensated by the speed of code generation.

In the end, coding with AI is very fast, and with a well-documented technology (NestJS) the generated code is good.

Even if formulating a clear request requires careful consideration and wording, coding is comfortable. However, the job is no longer the same; it now requires good planning and architecture, and the ability to review the generated code. Coding with AI can be effective, but only if you have a clear idea of what you want from the very beginning, use clear instructions, and leave no room for interpretation by the AI.

The article How effective is AI on a development project? first appeared on dbi Blog.

OGG-10556 when starting extract from GoldenGate 23ai web UI

Fri, 2025-12-12 01:00

Another day, another not-so-documented GoldenGate error, this time the OGG-10556 error, which I hit when setting up replication on a new GoldenGate installation. After making changes to an extract from the web UI, I ended up with the following error when starting it:

2025-10-16 14:02:30  ERROR   OGG-02024  An attempt to gather information about the logmining server configuration from the Oracle database failed.

2025-10-15 12:06:51  ERROR   OGG-10556  No data found when executing SQL statement <SELECT apply_name  FROM all_apply  WHERE apply_name = SUBSTR(UPPER('OGG$' || :1), 1, 30)>.

Since the exact configuration is not relevant here, I will not add it to the blog. After some trial and error, it all came down to the extract settings in the web UI (not the configuration file). From the web UI, you can find the list of PDBs on which the extract is registered. In my case, because of the modifications I made, the PDB was not listed in the Registered PDB Containers section anymore:

After registering the PDB again and restarting the extract, it worked!

NB: Wondering why you might hit this issue even without modifying the extract? It might be because of how slow the GoldenGate UI can be. You cannot add an extract without specifying a PDB, but the PDB list appears dynamically, sometimes a few seconds after selecting the connection. In the meantime, it is possible to create an invalid extract!

The PDB list sometimes appears a few seconds after selecting the connection

The article OGG-10556 when starting extract from GoldenGate 23ai web UI first appeared on dbi Blog.

Understanding XML performance pitfalls in SQL Server

Thu, 2025-12-11 14:38
Context

Working with XML in SQL Server can feel like taming a wild beast. It’s undeniably flexible and great for storing complex hierarchical data, but when it comes to querying efficiently, many developers hit a wall. That’s where things get interesting.

In this post, we’ll dive into a real-world scenario with half a million rows, put two XML query methods head-to-head (.exist() vs .value()), and uncover how SQL Server handles them under the hood.

Practical example

To demonstrate this, we’ll use SQL Server 2022 Developer Edition and create a table based on the open-source StackOverflow2010 database, derived from the Posts table, but storing part of the original data in XML format. We will also add a few indexes to simulate an environment with a minimum level of optimization.

CREATE TABLE dbo.PostsXmlPerf
(
    PostId        INT           NOT NULL PRIMARY KEY,
    PostTypeId    INT           NOT NULL,
    CreationDate  DATETIME      NOT NULL,
    Score         INT           NOT NULL,
    Body          NVARCHAR(MAX) NOT NULL,
    MetadataXml   XML           NOT NULL
);

INSERT INTO dbo.PostsXmlPerf (PostId, PostTypeId, CreationDate, Score, Body, MetadataXml)
SELECT TOP (500000)
       p.Id,
       p.PostTypeId,
       p.CreationDate,
       p.Score,
       p.Body,
       (
           SELECT  
               p.OwnerUserId     AS [@OwnerUserId],
               p.LastEditorUserId AS [@LastEditorUserId],
               p.AnswerCount     AS [@AnswerCount],
               p.CommentCount    AS [@CommentCount],
               p.FavoriteCount   AS [@FavoriteCount],
               p.ViewCount       AS [@ViewCount],
               (
                   SELECT TOP (5)
                          c.Id           AS [Comment/@Id],
                          c.Score        AS [Comment/@Score],
                          c.CreationDate AS [Comment/@CreationDate]
                   FROM dbo.Comments c
                   WHERE c.PostId = p.Id
                   FOR XML PATH(''), TYPE
               )
           FOR XML PATH('PostMeta'), TYPE
       )
FROM dbo.Posts p
ORDER BY p.Id;

CREATE nonclustered INDEX IX_PostsXmlPerf_CreationDate
ON dbo.PostsXmlPerf (CreationDate);

CREATE nonclustered INDEX IX_PostsXmlPerf_PostTypeId
ON dbo.PostsXmlPerf (PostTypeId);

Next, let’s create two queries designed to interrogate the column that contains XML data, in order to extract information based on a condition applied to a value stored within that XML.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DBCC FREEPROCCACHE;

SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.exist('/PostMeta[@OwnerUserId="8"]') = 1;
DBCC FREEPROCCACHE;

SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.value('(/PostMeta/@OwnerUserId)[1]', 'INT') = 8

Comparing logical and physical reads, we notice something interesting:

           Logical Reads   CPU Time
.exist()   125’912         00:00:05.954
.value()   125’912         00:00:03.125

At first glance, the number of pages read is identical, but .exist() is clearly taking more time. Why? Execution plans reveal that .exist() sneaks in a Merge Join, adding overhead.
Additionally, on both execution plans we can see a small yellow bang icon. On the first plan, it’s just a memory grant warning, but the second one is more interesting:

Alright, a bit strange; let’s move forward with some tuning and maybe this warning will disappear.
To help with querying, it can be useful to create a more targeted index for XML queries.
Let’s create an index on the column that contains XML. However, as you might expect, it’s not as straightforward as indexing a regular column. For an XML column, you first need to create a primary XML index, which physically indexes the overall structure of the column (similar to a clustered index), and then a secondary XML index, which builds on the primary index and is optimized for a specific type of query (value, path, or property). To learn more about XML indexes, see Microsoft Learn and MSSQLTips.
So, let’s create these indexes!

CREATE PRIMARY XML INDEX IX_XML_Primary_MetadataXml
ON dbo.PostsXmlPerf (MetadataXml);

CREATE XML INDEX IX_XML_Value_MetadataXml
ON dbo.PostsXmlPerf (MetadataXml)
USING XML INDEX IX_XML_Primary_MetadataXml FOR Value;

Let’s rerun the performance tests with our two queries above, making sure to flush the buffer cache between each execution.

           Logical Reads   CPU Time
.exist()   4               00:00:00.031
.value()   125’912         00:00:03.937

The inevitable happened: the implicit conversion makes it impossible to use the secondary XML index due to a data type mismatch, preventing an actual seek on it. We do see a seek in the second execution plan, but it occurs for every row in the table (500’000 executions) and is essentially just accessing the underlying physical structure stored in the clustered index. In reality, this ‘seek’ is SQL Server scanning the XML to retrieve the exact value of the requested field (in this case, OwnerUserId).
This conversion issue occurs because .exist() returns a BIT, while .value() converts the extracted XML value to a SQL type, which introduces the conversion into the predicate.
This difference in return type can lead to significant performance problems when tuning queries that involve XML.
As explained by Microsoft: “For performance reasons, instead of using the value() method in a predicate to compare with a relational value, use exist() with sql:column()”.
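
Applied to our example, that recommendation could look like the sketch below. Since we compare against a scalar rather than another column, sql:variable() is used here (sql:column() works the same way when the value comes from another column of the query); this keeps the predicate typed and can let the secondary VALUE index be used:

-- Sketch: parameterized variant of the .exist() query
DECLARE @owner INT = 8;

SELECT PostId, Score
FROM dbo.PostsXmlPerf
WHERE MetadataXml.exist('/PostMeta[@OwnerUserId=sql:variable("@owner")]') = 1;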

Key take-aways

Working with XML in SQL Server can be powerful, but it can quickly become tricky to manage. .exist() and .value() might seem similar, but execution differences and type conversions can have a huge performance impact. Proper XML indexing is essential, and knowing your returned data types can save you from hours of head-scratching. Most importantly, before deciding to store data as XML, consider whether it’s truly necessary; relational databases are not natively optimized for XML, and it can introduce complexity and performance challenges.

Sometimes, a simpler and highly effective approach is to extract frequently queried XML fields at the application level and store them in separate columns. This makes them much easier to index and query, reducing overhead while keeping your data accessible.
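
As an illustration with our test table, whether the extraction is done by the application at write time or backfilled once in T-SQL, the frequently queried attribute ends up as a regular, indexable column (a sketch; the column and index names are illustrative):

-- Sketch: promote OwnerUserId out of the XML into its own column
ALTER TABLE dbo.PostsXmlPerf ADD OwnerUserId INT NULL;

UPDATE dbo.PostsXmlPerf
SET OwnerUserId = MetadataXml.value('(/PostMeta/@OwnerUserId)[1]', 'INT');

CREATE NONCLUSTERED INDEX IX_PostsXmlPerf_OwnerUserId
ON dbo.PostsXmlPerf (OwnerUserId);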

If your application relies heavily on semi-structured data or large volumes of XML/JSON, it’s worth considering alternative engines. For instance, MongoDB provides native document storage and fast queries on JSON/BSON, while PostgreSQL offers XML and JSONB support with powerful querying functions. Choosing the right tool for the job can simplify your architecture and significantly improve performance.

To dive even deeper into the topic, keep an eye on the dbi services blogs for a forthcoming article, focused this time on XML storage!

The article Understanding XML performance pitfalls in SQL Server first appeared on dbi Blog.

Two days at the KCD Suisse Romande

Thu, 2025-12-11 01:14

On December 4th and 5th, I attended KCD Suisse Romande in Geneva. It was a great event in a great place with great people. I really enjoyed the talks and the workshops.

First Day: the workshops

After meeting at the entrance of CERN, we went to attend the workshops:

20000 issues sous les mers – Moving Like a Fish in a Tempestuous Sea

A very interesting workshop reproducing a possible real use case:

We are given a Kubernetes infrastructure with only one kubeconfig, and we need to discover, debug, and fix the problems on the cluster.

A good exercise, very close to real life!

Platform Engineering in the Age of AI: Secure the Software Supply Chain, Empower the Developer

Apart from the somewhat misleading title (nothing about AI in the workshop), it was a good overview of what OpenShift offers to build a self-service catalog and to deploy and provision applications.

The CERN visit

At the end of the day, a visit to one of CERN’s facilities was organized. For my part, I visited the Antimatter Factory, a great experience to learn more about antimatter.

Second Day: the talks

The second day at the CERN auditorium was dedicated to talks.

After the keynote, we followed various talks that allowed us to learn about real-life scenarios of Kubernetes implementations in different environments, such as public institutions, pension funds, banks, etc.

One talk focused on building an AI platform on Kubernetes: how to build a scalable and efficient infrastructure for AI workloads, and how to scale up pods quickly, was a particularly interesting subject.

Conclusion

Overall, KCD Suisse Romande was a great event, and the location at CERN was incredible. Both the workshops and the talks were very interesting and confirmed that Kubernetes is now a key technology in the IT world. For a first edition of KCD Suisse Romande, it was a great success, and I look forward to the next edition!

The article Two days at the KCD Suisse Romande first appeared on dbi Blog.

How to patch your ODA to 19.29?

Mon, 2025-12-08 10:32
Introduction

Patch 19.29 is now available for the Oracle Database Appliance series. Let’s find out what’s new and how to apply this patch.

What’s new?

The most important new component is probably database 26ai (as a DB System only). But don’t be fooled: 26ai is just the new name for 23ai (23.26). It doesn’t matter much, and it is nice to see this latest version coming to the ODA. 26ai is also coming to all on-premises systems.

The other change that comes with this version is the way the system patch is applied. Several years ago, you had to apply the server patch and the GI patch separately. They were then grouped within the same update-server command we have been using for years. Now the split makes a comeback: update-server is replaced by update-servercomponents for the system components, and update-gihome for GI. Also note that update-dcsagent vanished in 19.27.

Improvements have also been made on the security side, with SELinux now being enabled. Regarding DB Systems, CPU and memory allocation is now more flexible, which is great. Note that since 19.23 a DB System is no longer limited to one container database, making virtualization on the ODA more appealing than before.

My overall feeling about this patch is “maturity and stability”. That’s all we need for this kind of platform.

Which ODA is compatible with this 19.29 release?

The latest ODAs X11-HA, X11-L and X11-S are supported, as well as X10, X9-2 and X8-2 series. X7-2 series and older ones are not supported anymore. If you own one from these older generations, you should have a renewal plan for the coming months. I still recommend keeping your ODA 7 years, not less, not more. This blog post is still relevant today: https://www.dbi-services.com/blog/why-you-should-consider-keeping-your-oda-more-than-5-years/.

Is this patch a cumulative one?

The rule is now well established: you can apply a patch on top of the four previous ones. 19.29 can therefore be applied on top of 19.28, 19.27, 19.26 and 19.25. That’s why it makes sense to patch once a year: it is the perfect balance between moderate security needs and ease of patching.

In my lab, I will use an ODA X8-2M running 19.28 with one DB home, one database and one DB System. This procedure should apply the same way on your ODA as long as you’re running 19.25 or later.

Is there also a patch for my databases?

Only 19c databases are supported on bare metal. You should be able to patch a 23ai database running in a DB System to 26ai, but you’d probably be better off deploying a new DB System and unplugging/plugging your PDBs into the brand new DB System.

Download the patch and clone files

These files are mandatory:

  • 38427251 => the patch itself
  • 30403673 => the GI clone needed for deploying newer 19c GI version
  • 30403662 => the DB clone for deploying newer version of 19c

These files are optional:

  • 30403643 => ISO file for reimaging, not needed for patching
  • 36524660 => System image for 26ai DB Systems
  • 36524627 => the GI clone needed for deploying new 26ai GI version
  • 36524642 => the DB clone for deploying new 26ai DB version
  • 32451228 => The newer system image for 19c DB Systems

Be sure to choose the very latest 19.29 when downloading these files; the download links from MOS will first propose older versions for GI clones, DB clones and ISO files.

Prepare the patching

Before starting, please check these prerequisites:

  • filesystems /, /opt, /u01 and /root have at least 20% of free space available
  • additional manually installed RPMs must be removed
  • revert profile scripts to the default ones (for the grid and oracle users)
  • make sure you’ve planned a sufficient downtime (4+ hours depending on the number of databases and DB Systems)
  • do a sanity reboot before patching to kill zombie processes
  • use ODABR to make snapshots of the important filesystems prior to patching: this tool is now included in the software distribution
Version precheck

Start by checking the current versions of the various components:

odacli describe-component
System Version
--------------
19.28.0.0.0

System Node Name
----------------
dbioda01

Local System Version
--------------------
19.28.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.28.0.0.0          up-to-date

GI                                       19.28.0.0.250715     up-to-date

DB {
     OraDB19000_home7                    19.28.0.0.250715     up-to-date
     [CPROD19]
}

DCSCONTROLLER                            19.28.0.0.0          up-to-date

DCSCLI                                   19.28.0.0.0          up-to-date

DCSAGENT                                 19.28.0.0.0          up-to-date

DCSADMIN                                 19.28.0.0.0          up-to-date

OS                                       8.10                 up-to-date

ILOM                                     5.1.4.25.r160118     up-to-date

BIOS                                     52140100             up-to-date

LOCAL CONTROLLER FIRMWARE {
     [c4]                                8000D9AB             up-to-date
}

SHARED CONTROLLER FIRMWARE {
     [c0, c1]                            VDV1RL06             up-to-date
}

LOCAL DISK FIRMWARE {
     [c2d0, c2d1]                        XC311132             up-to-date
}

HMP                                      2.4.10.1.600         up-to-date

List the DB homes, databases, DB Systems and VMs:

odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
e120c4c9-91b9-47b4-a234-b8ada12fce69     OraDB19000_home7     19.28.0.0.250715     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_7     CONFIGURED



odacli list-databases
ID                                       DB Name    DB Type  DB Version           CDB     Class    Edition  Shape    Storage  Status       DB Home ID               
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
976a80f2-4653-469f-8cd4-ddc1a21aff51     CPROD19    SI       19.28.0.0.250715     true    OLTP     EE       odb8     ASM      CONFIGURED   e120c4c9-91b9-47b4-a234-b8ada12fce69


odacli list-dbsystems
Name                  Shape       GI version          DB info                         Status                  Created                   Updated
--------------------  ----------  ------------------  ------------------------------  ----------------------  ------------------------  ------------------------
dbs-03-tst            dbs2        19.28.0.0.250715    19.28(CONFIGURED=1)             CONFIGURED              2025-12-03 15:05:31 CET   2025-12-03 15:47:19 CET

odacli list-vms
No data found for resource VM.
Update the DCS components

Updating DCS components is the first step, after registering the patch file:

cd /opt/dbi
unzip p38427251_1929000_Linux-x86-64.zip

odacli update-repository -f /opt/dbi/oda-sm-19.29.0.0.0-251117-server.zip
sleep 30 ; odacli describe-job -i "91189839-e855-48ea-a6b1-7d9695da52a5"
Job details
----------------------------------------------------------------
                     ID:  7e69a05f-61fe-4b13-af5d-d78cfb7f11a9
            Description:  Repository Update
                 Status:  Success
                Created:  December 03, 2025 16:09:45 CET
                Message:  /opt/dbi/oda-sm-19.29.0.0.0-251117-server.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle                             December 03, 2025 16:09:49 CET           December 03, 2025 16:10:00 CET           Success


odacli describe-component
System Version
--------------
19.28.0.0.0

System Node Name
----------------
dbioda01

Local System Version
--------------------
19.28.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.28.0.0.0          19.29.0.0.0

GI                                       19.28.0.0.250715     19.29.0.0.251021

DB {
     OraDB19000_home7                    19.28.0.0.250715     19.29.0.0.251021
     [CPROD19]
}

DCSCONTROLLER                            19.28.0.0.0          19.29.0.0.0

DCSCLI                                   19.28.0.0.0          19.29.0.0.0

DCSAGENT                                 19.28.0.0.0          19.29.0.0.0

DCSADMIN                                 19.28.0.0.0          19.29.0.0.0

OS                                       8.10                 up-to-date

ILOM                                     5.1.4.25.r160118     5.1.5.22.r165351

BIOS                                     52140100             52160100

LOCAL CONTROLLER FIRMWARE {
     [c4]                                8000D9AB             up-to-date
}

SHARED CONTROLLER FIRMWARE {
     [c0, c1]                            VDV1RL06             up-to-date
}

LOCAL DISK FIRMWARE {
     [c2d0, c2d1]                        XC311132             up-to-date
}

HMP                                      2.4.10.1.600         up-to-date

Let’s update the DCS components to 19.29:

odacli update-dcsadmin -v 19.29.0.0.0

sleep 60 ; odacli describe-job -i "f2d216d5-f60d-46d6-a967-900c6e137421"
Job details
----------------------------------------------------------------
                     ID:  f2d216d5-f60d-46d6-a967-900c6e137421
            Description:  DcsAdmin patching to 19.29.0.0.0
                 Status:  Success
                Created:  December 03, 2025 16:12:26 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Dcs-admin upgrade                        December 03, 2025 16:12:26 CET           December 03, 2025 16:12:36 CET           Success
Ping DCS Admin                           December 03, 2025 16:12:36 CET           December 03, 2025 16:13:43 CET           Success



sleep 30 ;  odacli update-dcscomponents -v 19.29.0.0.0
{
  "jobId" : "5cd855d4-7b35-44dc-84f9-4625f84d461b",
  "status" : "Success",
  "message" : "Update-dcscomponents is successful on all the node(s): DCS-Agent shutdown is successful. MySQL upgrade is successful. Metadata schema update is done. Script '/opt/oracle/dcs/log/jobfiles/5cd855d4-7b35-44dc-84f9-4625f84d461b/apply_metadata_change.sh' ran successfully. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. ahf RPM upgrade is successful.  Successfully reset the Keystore password. HAMI RPM and DCS ensemble update was successful.  Skipped removing old Libs. Successfully ran setupAgentAuth.sh ",
  "reports" : null,
  "createTimestamp" : "December 03, 2025 16:14:32 PM CET",
  "description" : "Update-dcscomponents job completed and is not part of Agent job list",
  "updatedTime" : "December 03, 2025 16:19:11 PM CET",
  "jobType" : null,
  "externalRequestId" : null,
  "action" : null
}
System patching

Let’s do the prepatching of the system with the new -sc option:

odacli create-prepatchreport -sc -v 19.29.0.0.0

sleep 180 ; odacli describe-prepatchreport -i 317b0f75-fed7-480b-9dba-af7c635fabea

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  317b0f75-fed7-480b-9dba-af7c635fabea
            Description:  Patch pre-checks for [OS, ILOM, ORACHKSERVER, SERVER] to 19.29.0.0.0
                 Status:  SUCCESS
                Created:  December 3, 2025 4:20:13 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbioda01

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is patch location available     Success   Patch location is available.
Verify OS patch                 Success   No dependencies found for RPMs being
                                          removed, updated and installed. Check
                                          /opt/oracle/dcs/log/jobfiles/
                                          yumdryrunout_2025-12-03_16-20-
                                          29.0193.1_251.log file for more
                                          details
Validate command execution      Success   Validated command execution

__ILOM__
Validate ILOM server reachable  Success   Successfully connected with ILOM
                                          server using public IP and USB
                                          interconnect
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is patch location available     Success   Patch location is available.
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

__SERVER__
Validate local patching         Success   Successfully validated server local
                                          patching
Validate all KVM ACFS           Success   All KVM ACFS resources are running
resources are running
Validate DB System VM states    Success   All DB System VMs states are expected
Validate DB System AFD state    Success   All DB Systems are on required
                                          versions
Validate command execution      Success   Validated command execution

OK, let’s do the system patch:

odacli update-servercomponents -v 19.29.0.0.0
...

The server will reboot at the end of the patching. Let’s then check the job:

odacli describe-job -i "7f52ba58-f0d5-4055-864b-caae4209ce29"

Job details
----------------------------------------------------------------
                     ID:  7f52ba58-f0d5-4055-864b-caae4209ce29
            Description:  Server Patching to 19.29.0.0.0
                 Status:  Success
                Created:  December 03, 2025 16:26:08 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Deactivate Unit[dnf-makecache.timer]     December 03, 2025 16:26:14 CET           December 03, 2025 16:26:15 CET           Success
Validate ILOM server reachable           December 03, 2025 16:26:14 CET           December 03, 2025 16:26:14 CET           Success
Validating GI user metadata              December 03, 2025 16:26:14 CET           December 03, 2025 16:26:14 CET           Success
Deactivate Unit[kdump.service]           December 03, 2025 16:26:15 CET           December 03, 2025 16:26:15 CET           Success
Modify BM udev rules                     December 03, 2025 16:26:15 CET           December 03, 2025 16:26:35 CET           Success
Stop oakd                                December 03, 2025 16:26:35 CET           December 03, 2025 16:26:39 CET           Success
Creating repositories using yum          December 03, 2025 16:26:39 CET           December 03, 2025 16:26:41 CET           Success
Updating YumPluginVersionLock rpm        December 03, 2025 16:26:41 CET           December 03, 2025 16:26:44 CET           Success
Applying OS Patches                      December 03, 2025 16:26:44 CET           December 03, 2025 16:32:20 CET           Success
Applying HMP Patches                     December 03, 2025 16:32:20 CET           December 03, 2025 16:32:23 CET           Success
Creating repositories using yum          December 03, 2025 16:32:20 CET           December 03, 2025 16:32:20 CET           Success
Oda-hw-mgmt upgrade                      December 03, 2025 16:32:23 CET           December 03, 2025 16:32:52 CET           Success
Patch location validation                December 03, 2025 16:32:23 CET           December 03, 2025 16:32:23 CET           Success
Setting SELinux mode                     December 03, 2025 16:32:23 CET           December 03, 2025 16:32:23 CET           Success
Applying Firmware local Disk Patches     December 03, 2025 16:32:53 CET           December 03, 2025 16:32:57 CET           Success
OSS Patching                             December 03, 2025 16:32:53 CET           December 03, 2025 16:32:53 CET           Success
Applying Firmware local Controller Patch December 03, 2025 16:32:57 CET           December 03, 2025 16:33:01 CET           Success
Checking Ilom patch Version              December 03, 2025 16:33:01 CET           December 03, 2025 16:33:01 CET           Success
Patch location validation                December 03, 2025 16:33:01 CET           December 03, 2025 16:33:01 CET           Success
Save password in Wallet                  December 03, 2025 16:33:01 CET           December 03, 2025 16:33:02 CET           Success
Apply Ilom patch                         December 03, 2025 16:33:02 CET           December 03, 2025 16:43:49 CET           Success
Disabling IPMI v2                        December 03, 2025 16:33:02 CET           December 03, 2025 16:33:02 CET           Success
Copying Flash Bios to Temp location      December 03, 2025 16:43:49 CET           December 03, 2025 16:43:49 CET           Success
Start oakd                               December 03, 2025 16:43:49 CET           December 03, 2025 16:44:06 CET           Success
Add SYSNAME in Env                       December 03, 2025 16:44:06 CET           December 03, 2025 16:44:06 CET           Success
Cleanup JRE Home                         December 03, 2025 16:44:06 CET           December 03, 2025 16:44:06 CET           Success
Starting the clusterware                 December 03, 2025 16:44:06 CET           December 03, 2025 16:45:42 CET           Success
Generating and saving BOM                December 03, 2025 16:45:42 CET           December 03, 2025 16:46:12 CET           Success
Update System version                    December 03, 2025 16:45:42 CET           December 03, 2025 16:45:42 CET           Success
Update lvm.conf file                     December 03, 2025 16:45:42 CET           December 03, 2025 16:45:42 CET           Success
PreRebootNode Actions                    December 03, 2025 16:46:12 CET           December 03, 2025 16:47:17 CET           Success
Reboot Node                              December 03, 2025 16:47:17 CET           December 03, 2025 16:47:17 CET           Success
GI patching

Let’s unzip and register the patch file, and do the precheck for GI:

cd /opt/dbi
unzip -o p30403673_1929000_Linux-x86-64.zip

odacli update-repository -f /opt/dbi/odacli-dcs-19.29.0.0.0-251117-GI-19.29.0.0.zip
sleep 30 ; odacli describe-job -i "2e15156c-451b-470f-a523-03c4d024b726"

Job details
----------------------------------------------------------------
                     ID:  2e15156c-451b-470f-a523-03c4d024b726
            Description:  Repository Update
                 Status:  Success
                Created:  December 03, 2025 17:29:43 CET
                Message:  /opt/dbi/odacli-dcs-19.29.0.0.0-251117-GI-19.29.0.0.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle                             December 03, 2025 17:29:45 CET           December 03, 2025 17:30:32 CET           Success


odacli create-prepatchreport -gi -v 19.29.0.0.0

sleep 180 ; odacli describe-prepatchreport -i be3c7eb0-7bd8-4295-aa7f-ecd9e104d66f

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  be3c7eb0-7bd8-4295-aa7f-ecd9e104d66f
            Description:  Patch pre-checks for [RHPGI, GI] to 19.29.0.0.0
                 Status:  SUCCESS
                Created:  December 3, 2025 5:30:45 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbioda01

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__RHPGI__
Evaluate GI patching            Success   Successfully validated GI patching
Validate command execution      Success   Validated command execution

__GI__
Validate GI metadata            Success   Successfully validated GI metadata
Validate supported GI versions  Success   Successfully validated minimum version
Validate available space        Success   Validated free space under /u01
Is clusterware running          Success   Clusterware is running
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate ASM in online          Success   ASM is online
Validate kernel log level       Success   Successfully validated the OS log
                                          level
Validate minimum agent version  Success   GI patching enabled in current
                                          DCSAGENT version
Validate Central Inventory      Success   oraInventory validation passed
Validate patching locks         Success   Validated patching locks
Validate clones location exist  Success   Validated clones location
Validate DB start dependencies  Success   DBs START dependency check passed
Validate DB stop dependencies   Success   DBs STOP dependency check passed
Validate space for clones       Success   Clones volume is already created
volume
Validate command execution      Success   Validated command execution

Let’s apply the GI update now:

odacli update-gihome -v 19.29.0.0.0
sleep 400 ;  odacli describe-job -i "5eaaa4ad-996b-4024-858d-a0f0082705d5"

Job details
----------------------------------------------------------------
                     ID:  5eaaa4ad-996b-4024-858d-a0f0082705d5
            Description:  Patch GI with RHP to 19.29.0.0.0
                 Status:  Success
                Created:  December 03, 2025 17:37:30 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Starting the clusterware                 December 03, 2025 17:37:44 CET           December 03, 2025 17:37:44 CET           Success
Creating GI home directories             December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Extract GI clone                         December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Provisioning Software Only GI with RHP   December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Registering image                        December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Registering image                        December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Registering working copy                 December 03, 2025 17:37:45 CET           December 03, 2025 17:37:45 CET           Success
Patch GI with RHP                        December 03, 2025 17:38:21 CET           December 03, 2025 17:43:07 CET           Success
Set CRS ping target                      December 03, 2025 17:43:07 CET           December 03, 2025 17:43:07 CET           Success
Updating .bashrc                         December 03, 2025 17:43:07 CET           December 03, 2025 17:43:07 CET           Success
Updating GI home metadata                December 03, 2025 17:43:07 CET           December 03, 2025 17:43:07 CET           Success
Updating GI home version                 December 03, 2025 17:43:07 CET           December 03, 2025 17:43:12 CET           Success
Updating All DBHome version              December 03, 2025 17:43:12 CET           December 03, 2025 17:43:17 CET           Success
Starting the clusterware                 December 03, 2025 17:43:38 CET           December 03, 2025 17:43:39 CET           Success
Validate ACFS resources are running      December 03, 2025 17:43:39 CET           December 03, 2025 17:43:39 CET           Success
Validate DB System VMs states            December 03, 2025 17:43:39 CET           December 03, 2025 17:43:40 CET           Success
Validate GI availability                 December 03, 2025 17:43:39 CET           December 03, 2025 17:43:39 CET           Success
Patch CPU Pools distribution             December 03, 2025 17:43:40 CET           December 03, 2025 17:43:40 CET           Success
Patch DB System domain config            December 03, 2025 17:43:40 CET           December 03, 2025 17:43:40 CET           Success
Patch KVM CRS type                       December 03, 2025 17:43:40 CET           December 03, 2025 17:43:40 CET           Success
Patch VM vDisks CRS dependencies         December 03, 2025 17:43:40 CET           December 03, 2025 17:43:40 CET           Success
Save custom VNetworks to storage         December 03, 2025 17:43:40 CET           December 03, 2025 17:43:41 CET           Success
Add network filters to DB Systems        December 03, 2025 17:43:41 CET           December 03, 2025 17:43:41 CET           Success
Create network filters                   December 03, 2025 17:43:41 CET           December 03, 2025 17:43:41 CET           Success
Patch DB Systems vDisks CRS dependencies December 03, 2025 17:43:41 CET           December 03, 2025 17:43:42 CET           Success
Patch DB Systems custom scale metadata   December 03, 2025 17:43:42 CET           December 03, 2025 17:43:42 CET           Success

No reboot is needed for this patch.

Check the versions
odacli describe-component
System Version
--------------
19.29.0.0.0

System Node Name
----------------
dbioda01

Local System Version
--------------------
19.29.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.29.0.0.0          up-to-date

GI                                       19.29.0.0.251021     up-to-date

DB {
     OraDB19000_home7                    19.28.0.0.250715     19.29.0.0.251021
     [CPROD19]
}

DCSCONTROLLER                            19.29.0.0.0          up-to-date

DCSCLI                                   19.29.0.0.0          up-to-date

DCSAGENT                                 19.29.0.0.0          up-to-date

DCSADMIN                                 19.29.0.0.0          up-to-date

OS                                       8.10                 up-to-date

ILOM                                     5.1.5.22.r165351     up-to-date

BIOS                                     52160100             up-to-date

LOCAL CONTROLLER FIRMWARE {
     [c4]                                8000D9AB             up-to-date
}

SHARED CONTROLLER FIRMWARE {
     [c0, c1]                            VDV1RL06             up-to-date
}

LOCAL DISK FIRMWARE {
     [c2d0, c2d1]                        XC311132             up-to-date
}

HMP                                      2.4.10.1.600         up-to-date
Patching the storage

Patching the storage is only needed if describe-component tells you that you’re not up-to-date. On my X8-2M, it wasn’t needed. If your ODA needs the storage patch, it’s easy:

odacli update-storage -v 19.29.0.0.0
odacli describe-job -i ...

The server will reboot when done.

Patching the DB homes

It’s now time to patch the DB home and the database on my ODA. Let’s first unzip and register the patch file in the repository:

unzip -o p30403662_1929000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.29.0.0.0-251117-DB-19.29.0.0.zip 
sleep 30; odacli describe-job -i "480c3911-d673-47fd-b6c5-f65b2cc4a1b9"

Job details
----------------------------------------------------------------
                     ID:  480c3911-d673-47fd-b6c5-f65b2cc4a1b9
            Description:  Repository Update
                 Status:  Success
                Created:  December 03, 2025 17:51:13 CET
                Message:  /opt/dbi/odacli-dcs-19.29.0.0.0-251117-DB-19.29.0.0.zip

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle                             December 03, 2025 17:51:13 CET           December 03, 2025 17:51:49 CET           Success



odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
e120c4c9-91b9-47b4-a234-b8ada12fce69     OraDB19000_home7     19.28.0.0.250715     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_7     CONFIGURED

Let’s check if the patch can be applied, then patch this DB home:

odacli create-prepatchreport -d -i e120c4c9-91b9-47b4-a234-b8ada12fce69 -v 19.29.0.0.0

sleep 600; odacli describe-prepatchreport -i fed289b2-848a-460f-9ba7-ef87c2a08dca

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  fed289b2-848a-460f-9ba7-ef87c2a08dca
            Description:  Patch pre-checks for [DB, RHPDB, ORACHKDB] to 19.29.0.0.0: DbHome is OraDB19000_home7
                 Status:  SUCCESS
                Created:  December 3, 2025 5:53:51 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbioda01

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          e120c4c9-91b9-47b4-a234-b8ada12fce69
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate there is usable        Success   Successfully validated Oracle Base
space under oracle base                   usable space
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Validate dbHomesOnACFS          Success   User has configured disk group for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Validate command execution      Success   Validated command execution

__RHPDB__
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP.  and local patching
                                          is possible
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

odacli update-dbhome -i e120c4c9-91b9-47b4-a234-b8ada12fce69 -v 19.29.0.0.0

sleep 600;  odacli describe-job -i "b2676a55-96de-4101-8686-98c6a88b8477"
Job details
----------------------------------------------------------------
                     ID:  b2676a55-96de-4101-8686-98c6a88b8477
            Description:  DB Home Patching to 19.29.0.0.0: Home ID is e120c4c9-91b9-47b4-a234-b8ada12fce69
                 Status:  Success
                Created:  December 03, 2025 18:05:45 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Creating wallet for DB Client            December 03, 2025 18:06:34 CET           December 03, 2025 18:06:34 CET           Success
Patch databases by RHP - [CPROD19]       December 03, 2025 18:06:34 CET           December 03, 2025 18:13:06 CET           Success
Updating database metadata               December 03, 2025 18:13:06 CET           December 03, 2025 18:13:06 CET           Success
Upgrade pwfile to 12.2                   December 03, 2025 18:13:06 CET           December 03, 2025 18:13:09 CET           Success
Set log_archive_dest for Database        December 03, 2025 18:13:09 CET           December 03, 2025 18:13:12 CET           Success
Populate PDB metadata                    December 03, 2025 18:13:13 CET           December 03, 2025 18:13:14 CET           Success
Generating and saving BOM                December 03, 2025 18:13:15 CET           December 03, 2025 18:13:54 CET           Success
TDE parameter update                     December 03, 2025 18:14:24 CET           December 03, 2025 18:14:24 CET           Success

Let’s check if everything is fine:

odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
e120c4c9-91b9-47b4-a234-b8ada12fce69     OraDB19000_home7     19.28.0.0.250715     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_7     CONFIGURED
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb     OraDB19000_home9     19.29.0.0.251021     EE         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9     CONFIGURED

odacli list-databases
ID                                       DB Name    DB Type  DB Version           CDB     Class    Edition  Shape    Storage  Status       DB Home ID               
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
976a80f2-4653-469f-8cd4-ddc1a21aff51     CPROD19    SI       19.29.0.0.251021     true    OLTP     EE       odb8     ASM      CONFIGURED   57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb

Let’s now remove the old DB home:

odacli delete-dbhome -i e120c4c9-91b9-47b4-a234-b8ada12fce69
...
Cleanse the old patches

Don’t forget to remove the previous patch from the repository:

odacli cleanup-patchrepo -comp all -v 19.28.0.0.0

odacli describe-job -i "76ba3e95-bb71-4ebe-b7b2-f3cac07d89dd"
Job details
----------------------------------------------------------------
                     ID:  76ba3e95-bb71-4ebe-b7b2-f3cac07d89dd
            Description:  Cleanup patchrepos
                 Status:  Success
                Created:  December 03, 2025 18:19:22 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Cleanup Repository                       December 03, 2025 18:19:22 CET           December 03, 2025 18:19:22 CET           Success
Cleanup old ASR rpm                      December 03, 2025 18:19:22 CET           December 03, 2025 18:19:22 CET           Success

Old GI binaries are still there; it’s better to remove them manually:

du -hs /u01/app/19.2*
14G     /u01/app/19.28.0.0
14G     /u01/app/19.29.0.0

rm -rf /u01/app/19.28.0.0
Post-patching tasks

You will need to put back your specific configuration (see the example after the list):

  • add your additional RPMs
  • put back your profile scripts for grid and oracle users
  • check if monitoring still works
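
As an illustration, a minimal sketch of these post-patching steps could look like this (the package names, backup paths and monitoring service are examples only, adapt them to your environment):

# re-add your additional RPMs (example packages)
dnf install -y screen sysstat
# put back the profile scripts for the grid and oracle users (hypothetical backup location)
cp /opt/dbi/backup/bash_profile_grid ~grid/.bash_profile
cp /opt/dbi/backup/bash_profile_oracle ~oracle/.bash_profile
# quick check that monitoring is still running (service name is an example)
systemctl status snmpd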
Patching the DB System

If you use DB Systems on your ODA, meaning that some of your databases are running in dedicated VMs, you will need to apply the patch inside each DB System. As the repository is shared, patch files are already available for DB Systems. Applying the patch is similar to what you did on bare metal:

ssh dbs-03-tst

odacli update-dcsadmin -v 19.29.0.0.0

sleep 60 ; odacli describe-job -i 43df9afb-adc1-479c-8987-c8d24f056c02

Job details
----------------------------------------------------------------
                     ID:  43df9afb-adc1-479c-8987-c8d24f056c02
            Description:  DcsAdmin patching to 19.29.0.0.0
                 Status:  Success
                Created:  December 08, 2025 10:15:13 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Dcs-admin upgrade                        December 08, 2025 10:15:14 CET           December 08, 2025 10:15:25 CET           Success
Ping DCS Admin                           December 08, 2025 10:15:25 CET           December 08, 2025 10:16:33 CET           Success

sleep 30 ;  odacli update-dcscomponents -v 19.29.0.0.0
{
  "jobId" : "91fd5f67-5ba8-4636-9046-0fe1921a659e",
  "status" : "Success",
  "message" : "Update-dcscomponents is successful on all the node(s): DCS-Agent shutdown is successful. MySQL upgrade is successful. Metadata schema update is done. Script '/opt/oracle/dcs/log/jobfiles/91fd5f67-5ba8-4636-9046-0fe1921a659e/apply_metadata_change.sh' ran successfully. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. ahf RPM upgrade is successful.  Successfully reset the Keystore password. HAMI RPM and DCS ensemble update was successful.  Skipped removing old Libs. Successfully ran setupAgentAuth.sh ",
  "reports" : null,
  "createTimestamp" : "December 08, 2025 10:17:38 AM CET",
  "description" : "Update-dcscomponents job completed and is not part of Agent job list",
  "updatedTime" : "December 08, 2025 10:23:37 AM CET",
  "jobType" : null,
  "externalRequestId" : null,
  "action" : null
}

odacli create-prepatchreport -sc -v 19.29.0.0.0

sleep 20 ; odacli describe-prepatchreport -i f2f90339-16b8-49a5-be8c-408dd0e9f28b
ps -ef | grep pmon

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  f2f90339-16b8-49a5-be8c-408dd0e9f28b
            Description:  Patch pre-checks for [OS, ORACHKSERVER, SERVER] to 19.29.0.0.0
                 Status:  SUCCESS
                Created:  December 8, 2025 10:25:17 AM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbs-03-tst

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is patch location available     Success   Patch location is available.
Verify OS patch                 Success   No dependencies found for RPMs being
                                          removed, updated and installed. Check
                                          /opt/oracle/dcs/log/jobfiles/
                                          yumdryrunout_2025-12-08_10-25-
                                          34.0670.1_222.log file for more
                                          details
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

__SERVER__
Validate local patching         Success   Successfully validated server local
                                          patching
Validate all KVM ACFS           Success   All KVM ACFS resources are running
resources are running
Validate DB System VM states    Success   All DB System VMs states are expected
Enable support for Multi-DB     Success   No need to convert the DB System
Validate DB System AFD state    Success   AFD is not configured
Validate command execution      Success   Validated command execution

odacli update-servercomponents -v 19.29.0.0.0

The DB System will reboot.

odacli describe-job -i 5a23ae5b-43ed-4c39-ba79-21cd8a125b79

Job details
----------------------------------------------------------------
                     ID:  5a23ae5b-43ed-4c39-ba79-21cd8a125b79
            Description:  Server Patching to 19.29.0.0.0
                 Status:  Success
                Created:  December 08, 2025 10:30:04 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Validating GI user metadata              December 08, 2025 10:30:13 CET           December 08, 2025 10:30:13 CET           Success
Deactivate Unit[dnf-makecache.timer]     December 08, 2025 10:30:14 CET           December 08, 2025 10:30:14 CET           Success
Deactivate Unit[kdump.service]           December 08, 2025 10:30:14 CET           December 08, 2025 10:30:15 CET           Success
Modify DBVM udev rules                   December 08, 2025 10:30:15 CET           December 08, 2025 10:30:36 CET           Success
Creating repositories using yum          December 08, 2025 10:30:36 CET           December 08, 2025 10:30:39 CET           Success
Updating YumPluginVersionLock rpm        December 08, 2025 10:30:39 CET           December 08, 2025 10:30:42 CET           Success
Applying OS Patches                      December 08, 2025 10:30:42 CET           December 08, 2025 10:34:11 CET           Success
Creating repositories using yum          December 08, 2025 10:34:11 CET           December 08, 2025 10:34:12 CET           Success
Applying HMP Patches                     December 08, 2025 10:34:12 CET           December 08, 2025 10:34:15 CET           Success
Setting SELinux mode                     December 08, 2025 10:34:15 CET           December 08, 2025 10:34:15 CET           Success
Oda-hw-mgmt upgrade                      December 08, 2025 10:34:16 CET           December 08, 2025 10:34:44 CET           Success
Patch location validation                December 08, 2025 10:34:16 CET           December 08, 2025 10:34:16 CET           Success
Cleanup JRE Home                         December 08, 2025 10:34:45 CET           December 08, 2025 10:34:45 CET           Success
Generating and saving BOM                December 08, 2025 10:34:56 CET           December 08, 2025 10:35:09 CET           Success
Update System version                    December 08, 2025 10:34:56 CET           December 08, 2025 10:34:56 CET           Success
PreRebootNode Actions                    December 08, 2025 10:35:09 CET           December 08, 2025 10:35:09 CET           Success
Reboot Node                              December 08, 2025 10:35:09 CET           December 08, 2025 10:35:09 CET           Success

odacli create-prepatchreport -gi -v 19.29.0.0.0

sleep 240 ; odacli describe-prepatchreport -i 56c7b4b1-3787-42af-b4b0-0fa6715a91f7

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  56c7b4b1-3787-42af-b4b0-0fa6715a91f7
            Description:  Patch pre-checks for [RHPGI, GI] to 19.29.0.0.0
                 Status:  SUCCESS
                Created:  December 8, 2025 10:37:05 AM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbs-03-tst

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__RHPGI__
Evaluate GI patching            Success   Successfully validated GI patching
Validate command execution      Success   Validated command execution

__GI__
Validate GI metadata            Success   Successfully validated GI metadata
Validate supported GI versions  Success   Successfully validated minimum version
Validate available space        Success   Validated free space under /u01
Is clusterware running          Success   Clusterware is running
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate BM versions            Success   Validated BM server components
                                          versions
Validate kernel log level       Success   Successfully validated the OS log
                                          level
Validate minimum agent version  Success   GI patching enabled in current
                                          DCSAGENT version
Validate Central Inventory      Success   oraInventory validation passed
Validate patching locks         Success   Validated patching locks
Validate clones location exist  Success   Validated clones location
Validate command execution      Success   Validated command execution

odacli update-gihome -v 19.29.0.0.0

sleep 600 ; odacli describe-job -i 571205f2-bdf2-43a2-944a-ec2765148446

odacli describe-job -i 571205f2-bdf2-43a2-944a-ec2765148446

Job details
----------------------------------------------------------------
                     ID:  571205f2-bdf2-43a2-944a-ec2765148446
            Description:  Patch GI with RHP to 19.29.0.0.0
                 Status:  Success
                Created:  December 08, 2025 10:43:47 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Starting the clusterware                 December 08, 2025 10:44:07 CET           December 08, 2025 10:44:07 CET           Success
Registering image                        December 08, 2025 10:44:08 CET           December 08, 2025 10:44:08 CET           Success
Registering working copy                 December 08, 2025 10:44:08 CET           December 08, 2025 10:44:09 CET           Success
Creating GI home directories             December 08, 2025 10:44:09 CET           December 08, 2025 10:44:09 CET           Success
Extract GI clone                         December 08, 2025 10:44:09 CET           December 08, 2025 10:44:09 CET           Success
Provisioning Software Only GI with RHP   December 08, 2025 10:44:09 CET           December 08, 2025 10:44:09 CET           Success
Registering image                        December 08, 2025 10:44:09 CET           December 08, 2025 10:44:09 CET           Success
Patch GI with RHP                        December 08, 2025 10:44:49 CET           December 08, 2025 10:49:16 CET           Success
Set CRS ping target                      December 08, 2025 10:49:16 CET           December 08, 2025 10:49:16 CET           Success
Updating .bashrc                         December 08, 2025 10:49:16 CET           December 08, 2025 10:49:16 CET           Success
Updating GI home metadata                December 08, 2025 10:49:16 CET           December 08, 2025 10:49:17 CET           Success
Updating GI home version                 December 08, 2025 10:49:17 CET           December 08, 2025 10:49:24 CET           Success
Updating All DBHome version              December 08, 2025 10:49:24 CET           December 08, 2025 10:49:30 CET           Success
Patch DB System on BM                    December 08, 2025 10:50:05 CET           December 08, 2025 10:50:11 CET           Success
Starting the clusterware                 December 08, 2025 10:50:05 CET           December 08, 2025 10:50:05 CET           Success


odacli list-dbhomes
ID                                       Name                 DB Version           DB Edition Home Location                                            Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
46268d88-e958-4c16-b45b-c32d5e0203fb     OraDB19000_home1     19.28.0.0.250715     EE         /u01/app/oracle/product/19.0.0.0/dbhome_1                CONFIGURED

odacli create-prepatchreport -d -i 46268d88-e958-4c16-b45b-c32d5e0203fb -v 19.29.0.0.0

sleep 600 ;  odacli describe-prepatchreport -i eb00906a-0ecc-4a9d-968a-272d4c3719f4

Prepatch Report
------------------------------------------------------------------------
                 Job ID:  eb00906a-0ecc-4a9d-968a-272d4c3719f4
            Description:  Patch pre-checks for [DB, RHPDB, ORACHKDB] to 19.29.0.0.0: DbHome is OraDB19000_home1
                 Status:  SUCCESS
                Created:  December 8, 2025 11:27:25 AM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbs-03-tst

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          46268d88-e958-4c16-b45b-c32d5e0203fb
Validate patching tag           Success   Validated patching tag: 19.29.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01
Validate there is usable        Success   Successfully validated Oracle Base
space under oracle base                   usable space
Validate glogin.sql file        Success   Successfully verified glogin.sql
                                          won't break patching
Is DB clone available           Success   Successfully validated clone file
                                          exists
Validate command execution      Success   Validated command execution

__RHPDB__
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP.  and local patching
                                          is possible
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution


odacli update-dbhome -i 46268d88-e958-4c16-b45b-c32d5e0203fb -v 19.29.0.0.0

sleep 600 ; odacli describe-job -i aac90798-a6a4-4740-bfa6-77bcb80cba7c

Job details
----------------------------------------------------------------
                     ID:  aac90798-a6a4-4740-bfa6-77bcb80cba7c
            Description:  DB Home Patching to 19.29.0.0.0: Home ID is 46268d88-e958-4c16-b45b-c32d5e0203fb
                 Status:  Success
                Created:  December 08, 2025 11:36:57 CET
                Message:

Task Name                                Start Time                               End Time                                 Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Creating wallet for DB Client            December 08, 2025 11:37:43 CET           December 08, 2025 11:37:43 CET           Success
Patch databases by RHP - [CTEST19]       December 08, 2025 11:37:43 CET           December 08, 2025 11:45:11 CET           Success
Updating database metadata               December 08, 2025 11:45:11 CET           December 08, 2025 11:45:12 CET           Success
Upgrade pwfile to 12.2                   December 08, 2025 11:45:12 CET           December 08, 2025 11:45:17 CET           Success
Set log_archive_dest for Database        December 08, 2025 11:45:17 CET           December 08, 2025 11:45:21 CET           Success
Populate PDB metadata                    December 08, 2025 11:45:22 CET           December 08, 2025 11:45:24 CET           Success
Generating and saving BOM                December 08, 2025 11:45:24 CET           December 08, 2025 11:46:10 CET           Success
TDE parameter update                     December 08, 2025 11:46:39 CET           December 08, 2025 11:46:39 CET           Success

odacli list-databases
ID                                       DB Name    DB Type  DB Version           CDB     Class    Edition  Shape    Storage  Status       DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
54e88627-a3cf-4696-956b-6262bbd51cf0     CTEST19    SI       19.29.0.0.251021     true    OLTP     EE       odb2     ASM      CONFIGURED   85b6e4eb-5db4-4165-bfb3-e3da52dc4777

odacli delete-dbhome -i 46268d88-e958-4c16-b45b-c32d5e0203fb
...

odacli describe-component
System Version
--------------
19.29.0.0.0

System Node Name
----------------
dbs-03-tst

Local System Version
--------------------
19.29.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                      19.29.0.0.0          up-to-date

GI                                       19.29.0.0.251021     up-to-date

DB {
     OraDB19000_home2                    19.29.0.0.251021     up-to-date
     [CTEST19]
}

DCSCONTROLLER                            19.29.0.0.0          up-to-date

DCSCLI                                   19.29.0.0.0          up-to-date

DCSAGENT                                 19.29.0.0.0          up-to-date

DCSADMIN                                 19.29.0.0.0          up-to-date

OS                                       8.10                 up-to-date

Don’t forget to apply this procedure on each of your DB Systems.

Provision a new 26ai DB System

This is an optional step, only needed if you’d like to try a 26ai database. First unzip and register the VM template, GI 26ai and DB 26ai:

unzip -o p36524660_1929000_Linux-x86-64.zip
unzip -o p36524627_1929000_Linux-x86-64.zip
unzip -o p36524642_1929000_Linux-x86-64.zip

odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-ODAVM-19.29.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-GI-23.26.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.0.0.0-251116-DB-23.26.0.0.zip

sleep 30 ; odacli list-jobs | tail -n 4
d4700315-db8c-4522-af55-0fddd262bfe4     Repository Update                                                           2025-12-08 15:20:14 CET             Success
ba7d452f-e03d-46d0-a607-fd7c758cd1b1     Repository Update                                                           2025-12-08 15:20:59 CET             Success
bfc102d1-985b-4792-8054-03709aa8d949     Repository Update                                                           2025-12-08 15:21:20 CET             Success

odacli describe-dbsystem-image | grep 23.26
DBVM                  23.26.0.0.0           23.26.0.0.0
GI                    23.26.0.0.0           23.26.0.0.0
DB                    23.26.0.0.0           23.26.0.0.0

Now let’s create a JSON file based on the one I used to create my 19.28 DB System, adjust some parameters, and then create the DB System:

cat create_dbs-03-tst-cdb.json | sed 's/dbs-03-tst/dbs-04-tst/g' | sed 's/10.16.0.146/10.16.0.147/g' | sed 's/CTEST19/CTEST26/g' | sed 's/19.28.0.0.250715/23.26.0.0.0/g' > create_dbs-04-tst-cdb.json

odacli create-dbsystem -p /opt/dbi/create_dbs-04-tst-cdb.json

odacli describe-job -i c0c8b0a0-5033-46b5-81a1-f326f6caa761
...

35 minutes later, my new DB System is ready to use:

Name                  Shape	  GI version          DB info                         Status                  Created                   Updated
--------------------  ----------  ------------------  ------------------------------  ----------------------  ------------------------  ------------------------
dbs-03-tst            dbs2        19.29.0.0.251021    19.29(CONFIGURED=1)             CONFIGURED              2025-12-03 15:05:31 CET   2025-12-08 10:50:06 CET
dbs-04-tst            dbs2        23.26.0.0.0         23.26(CONFIGURED=1)             CONFIGURED              2025-12-08 15:32:48 CET   2025-12-08 16:09:10 CET
Conclusion

Applying this patch is rather easy. Remember these key points when using an ODA:

  • keep it clean
  • keep it under control
  • keep it updated

L’article How to patch your ODA to 19.29? est apparu en premier sur dbi Blog.

Customer experience – How to change ip address in a fully clustered environment

Mon, 2025-11-24 07:48

If you have a clustered environment already up and running and you want or need to change the complete IP addressing of your whole environment with minimal downtime (full downtime during the operation: 3 minutes), you’re in the right place.

I will show you how we did that.

Context

Two-node clustered environment running SQL Server 2022 on Windows Server 2022. One node hosts a standalone instance as well as an Always On instance shared with the other node. Note that if you only have a standalone instance, it will be unavailable during the changes.

Here’s the step-by-step procedure to modify the IP addresses of a complete clustered environment:


1) Change the IP of the two virtual machines
Change the network parameters such as the VLAN settings and MAC address (in vSphere).

Then check the IP on the network card:
Access the network interface card on the respective nodes and make the change.


2) Change the IP of the cluster network card on each VM
Modify it on the network card and update the DNS if necessary.


3) Change the IP of the cluster (in Windows Failover Cluster)
In the Failover Cluster Manager pane, select your cluster and expand Cluster Core Resources.
Right-click the cluster and select Properties > IP address.
Change the IP address of the failover cluster using the Edit option and click OK.
Click Apply.

4) Change the IP of the SQL Server instance
If you have a specific IP for your instance, add the new IP on the network card.

However, you can also simply change the IP in SQL Server Configuration Manager.

Restart the instance.
Check the connection after the changes, for example with the query below.
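
A simple check (a sketch using a standard DMV) returns the IP address and port on which the instance accepted your session, confirming the new address is in use:

SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
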
5) Change the IP of the listener

Go to the AG role in Failover Cluster Manager, locate the server name (listener) resource in the bottom panel, right-click it, go to Properties, and change the static IP address.
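
Once done, you can verify the listener configuration from SQL Server with a query like the following (a sketch based on the standard availability group DMVs):

SELECT l.dns_name, ip.ip_address, ip.ip_subnet_mask, ip.state_desc
FROM sys.availability_group_listeners AS l
JOIN sys.availability_group_listener_ip_addresses AS ip
  ON l.listener_id = ip.listener_id;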

Problem we encountered

As it was a cluster, the two IP ranges didn’t have the same firewall rules. This initially blocked the hardware part of the system, as well as the AG witness, which was unable to check the state of the two nodes. The network team then applied the same rules on both ranges, and everything was fine.

L’article Customer experience – How to change ip address in a fully clustered environment est apparu en premier sur dbi Blog.

Exascale Infrastructure : new flavor of Exadata Database Service

Mon, 2025-11-24 02:30
Oracle unveiled Exadata Database Service on Exascale Infrastructure in summer 2024. In this blog post, the first of a series dedicated to Exascale, we will dive into its architecture after a brief presentation of what Exascale is.

Introduction

Exadata on Exascale Infrastructure is a new deployment option for Exadata Database Service. It comes in addition to the well-known Exadata Cloud@Customer and Exadata on Dedicated Infrastructure options already available. It is based on a new storage management technology that decouples database and Grid Infrastructure clusters from the underlying Exadata storage servers by integrating the database kernel directly with the Exascale storage structures.

What is Exascale Infrastructure?

Simply put, Exascale is the next generation of Oracle Exadata Database Service. It combines a cloud storage approach, for flexibility and hyper-elasticity, with the performance of Exadata infrastructure. It introduces a loosely-coupled, shared and multitenant architecture where Database and Grid Infrastructure clusters are decoupled from the underlying Exadata storage servers, which become a pool of shared storage resources available to multiple Grid Infrastructure clusters and databases.

Strict data isolation provides secure storage sharing while storage pooling enables flexible and dynamic provisioning combined with better storage space and processing utilization.

Advanced snapshot and cloning features, leveraging Redirect-On-Write technology instead of Copy-On-Write, enable space-efficient thin clones from any read/write database or pluggable database. Read-only test masters are therefore a thing of the past. These features alone make Exascale a game-changer for database refreshes, CI/CD workflows and pipelines, and development environment provisioning, all done with single SQL commands and much faster than before.
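
As an example, creating a thin clone of a pluggable database can be as simple as the following sketch (PDB names are examples; this assumes the source database files live in an Exascale vault and that the standard snapshot-copy clause is used):

-- connected to the CDB root as a DBA
CREATE PLUGGABLE DATABASE DEV1 FROM PROD1 SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE DEV1 OPEN;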

Block storage services allow the creation of arbitrarily-sized block volumes for use by numerous applications. Exascale block volumes are also used to store the Exadata database server virtual machine images, enabling:

  • creation of more virtual machines
  • removal of local storage dependency inside the Exadata compute nodes
  • seamless migration of virtual machines between different compute nodes

Finally, the following hardware and software considerations complete this brief presentation of Exascale:

  • runs on 2-socket Oracle Exadata system hardware with RoCE Network Fabric (X8M-2 or later)
  • Database 23ai release 23.5.0 or later is required for full-featured native Database file storage in Exascale
  • Exascale block volumes support databases using older Database software releases back to Oracle Database 19c
  • Exascale is built into Exadata System Software (since release 24.1)
Architecture

The main point with Exascale architecture is cloud-scale, multi-tenant resource pooling, both for storage and compute.

Storage pooling

Exascale is based on pooling storage servers which provide services such as Storage Pools, Vaults and Volume Management. Vaults, which can be considered an equivalent of ASM diskgroups, are directly accessed by the database kernel.

File and extent management is done by the Exadata System Software, thus freeing the compute layer from the ASM processes and memory structures used for database file access and extent management (with Database 23ai or 26ai; for Database 19c, ASM is still required). With storage management moving to the storage servers, resource management becomes more flexible, and more memory and CPU resources are available on the compute nodes to process database tasks.

Exascale also provides redundancy, caching, file metadata management, snapshots and clones as well as security and data integrity features.

Of course, since Exascale is built on top of Exadata, you benefit from features like RDMA, RoCE, Smart Flash Cache, XRMEM, Smart Scan, Storage Indexes, Columnar Caching.

Compute pooling

On the compute side, we have database-optimized servers which run Database 23ai or newer and Grid Infrastructure Cluster management software. The physical database servers host the VM Clusters and are managed by Oracle. Unlike Exadata on Dedicated Infrastructure, there is no need to provision an infrastructure before going ahead with VM Cluster creation and configuration: in Exascale, you only deal with VM Clusters.

VM file systems are centrally hosted by Oracle on RDMA-enabled block volumes in a system vault. The VM images used by the VM Clusters are no longer hosted on local storage on the database servers. This allows the number of VMs running on the database servers to rise from 12 to 50.

Each VM Cluster accesses a Database Vault storing the database files with strict isolation from other VM Clusters database files.

Virtual Cloud Network

Client and backup connectivity is provided by Virtual Cloud Network (VCN) services.

This loosely-coupled, shared and multitenant architecture enables far greater flexibility than what was possible with ASM or even Exadata on Dedicated Infrastructure. Exascale’s hyper-elasticity lets you start with very small VM Clusters and then scale as the workloads increase, with Oracle managing the infrastructure automatically. You can start as small as 1 VM per cluster (up to 10), 8 eCPUs per VM (up to 200), 22GB of memory per VM, 220GB of file system storage per VM and 300GB of Vault storage per VM Cluster (up to 100TB). Memory is tightly coupled to the eCPU configuration, with 2.75GB per eCPU (e.g. 8 eCPUs × 2.75GB = 22GB for the smallest VM), and thus does not scale independently from the number of eCPUs.

For those new to eCPU, it is a standard billing metric based on the number of cores per hour elastically allocated from a pool of compute and storage servers. eCPUs are not tied to the make, model or clock speed of the underlying processor. By contrast, an OCPU is the equivalent of one physical core with hyper-threading enabled.

To summarize

To best understand what Exascale Infrastructure is and introduces, here is the wording that best describes this new flavor of Exadata Database Service:

  • loosely-coupling of compute and storage cloud
  • hyper-elasticity
  • shared and multi-tenant service model
  • ASM-less (for 23ai or newer)
  • Exadata performance, reliability, availability and security at any scale
  • CI/CD friendly
  • pay-per-use model

Stay tuned for more on Exascale …

L’article Exascale Infrastructure : new flavor of Exadata Database Service est apparu en premier sur dbi Blog.

OGG-00423 when performing initial load with GoldenGate 23ai

Mon, 2025-11-24 01:00

A very quick blog post today to tackle the OGG-00423 error. There is not much information online on the matter, and the official Oracle documentation doesn’t help. If you ever stumble upon an OGG-00423 error when setting up GoldenGate replication, remember that it’s most likely related to the grants given to the GoldenGate user. An example of the error is given below, after starting an initial load:

2025-10-27 09:52:40  ERROR   OGG-00423  Could not find definition for pdb_source.app_source.t1.

This error happened when doing an initial load with the following configuration:

extract extini
useridalias cdb01
extfile aa
SOURCECATALOG pdb_source
table app_source.t1, SQLPredicate "As of SCN 3899696";

This error appears whether you use SOURCECATALOG or the fully qualified three-part TABLE name.

For replicats, a common solution for OGG-00423 is to use the ASSUMETARGETDEFS parameter in the configuration file, but this is a Replicat-only parameter, and there is no such thing for the initial load. In this case, the error was due to the user defined in the cdb01 alias lacking the SELECT privilege on the specified t1 table:

[oracle@vmogg ~]$ sqlplus c##ggadmin
Enter password:

SQL> alter session set container=pdb_source;
Session altered.

SQL> select * from app_source.t1;
select * from app_source.t1
                           *
ERROR at line 1:
ORA-00942: table or view does not exist

After granting the correct SELECT privilege to the GoldenGate user, it works! Here is an example of how to grant this privilege. You might want to grant it differently, depending on the security requirements of your deployments:

sqlplus / as sysdba
ALTER SESSION SET CONTAINER=PDB_SOURCE;
GRANT SELECT ANY TABLE TO C##GGADMIN CONTAINER=CURRENT;

NB: When testing GoldenGate setups, a good practice for debugging OGG errors, if you do not use a DBA user for replication, is to temporarily grant the DBA role to the GoldenGate user. This way, you can quickly track down the root cause of your problem, at least if it is related to grants.
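
For example, a quick and temporary check could look like this sketch (assuming a common C##GGADMIN user; revoke the role as soon as the test is over):

sqlplus / as sysdba
GRANT DBA TO C##GGADMIN CONTAINER=ALL;
-- reproduce the OGG error: if it disappears, the root cause is a missing grant
REVOKE DBA FROM C##GGADMIN CONTAINER=ALL;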

L’article OGG-00423 when performing initial load with GoldenGate 23ai est apparu en premier sur dbi Blog.

SQL Server 2025 release

Wed, 2025-11-19 08:55

Microsoft has announced the release of SQL Server 2025. The solution can be downloaded using the following link: https://www.microsoft.com/en-us/sql-server/

Among the new features available, we have:

  • The introduction of the Standard Developer Edition, which offers the same features as the Standard Edition, but is free when used in a non-production environment, similar to the Enterprise Developer Edition (formerly Developer Edition).
  • The removal of the Web Edition.
  • The Express Edition can now host databases of up to 50 GB. In practice, it is quite rare to see our customers use the Express Edition. It generally serves as a toolbox or for very specific scenarios where the previous 10 GB limit was not an issue. Therefore, raising this limit will not have a major impact, given the many other restrictions this edition still has.
  • The introduction of AI capabilities within the database engine, including vector indexes, vector data types, and the corresponding functions.
  • For development purposes, SQL Server introduces a native JSON data type and adds support for regular expressions (regex); a short example follows this list.
  • Improvements to Availability Groups, such as the ability to offload full, differential, and transaction log backups to a secondary replica.
  • The introduction of optimized locking and a new ZSTD algorithm for backup compression.
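
To illustrate the new development features, here is a minimal T-SQL sketch based on the announced syntax (table and column names are examples):

-- native JSON data type
CREATE TABLE dbo.demo_events (id int, payload json);
INSERT INTO dbo.demo_events VALUES (1, '{"type":"login","user":"alice"}');

-- regular expression support
SELECT name FROM sys.databases
WHERE REGEXP_LIKE(name, '^demo[0-9]*$');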

We also note the release of SQL Server Management Studio 22.

References:

https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17
https://techcommunity.microsoft.com/blog/sqlserver/sql-server-2025-is-now-generally-available/4470570
https://learn.microsoft.com/en-us/ssms/release-notes-22

Thank you, Amine Haloui.

L’article SQL Server 2025 release est apparu en premier sur dbi Blog.

ORA-44001 when setting up GoldenGate privileges on a CDB

Thu, 2025-11-13 02:00

I was recently setting up GoldenGate for a client when I was struck by an ORA-44001 error. I definitely wasn’t the first one to come across this while playing with grants on GoldenGate users, but nowhere could I find the exact reason for the issue. Not a single question or comment on the matter offered a solution.

The problem occurs when running the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure described in the documentation. An example given by the documentation is the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'c##ggadmin', CONTAINER => 'ALL');

And the main complaint mentioned regarding this command was the following ORA-44001 error:

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##ggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

The solution is in fact quite simple. But I decided to investigate it a bit further, playing with the multitenant architecture. In this blog, I will use an Oracle 19c CDB with a single pluggable database named PDB1.

Do you have read-only PDBs on your CDB?

For me, it was really the only thing that mattered when encountering this error. On a CDB with tens of PDBs, you might have some PDBs in read-only mode, whether it’s to keep templates aside or to temporarily restrict access to a specific PDB. Let’s try to replicate the error.

First example: PDB in read-write, grant operation succeeds

If you first try to grant the admin privileges with a PDB in read-write, it succeeds:

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');

PL/SQL procedure successfully completed.
Second example: PDB in read-only before the user creation, grant operation fails with ORA-44001

If you first put the PDB in read-only mode and then create the common user, the user does not get created inside that PDB, and you get ORA-44001 when granting the privileges.

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
Third example: PDB in read-only after the user creation, grant operation fails with ORA-16000

Where this gets tricky is the order in which you run the statements. If you create the user before putting the PDB in read-only mode, you get another error, because the user actually exists:

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 93
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 84
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 123
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3635
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

As often with Oracle, the error messages can be misleading. The third example clearly points to the issue, while the second one is tricky to debug (even though it is completely valid).

Should I create a GoldenGate user at the CDB level?

Depending on your replication configuration, you might need to create a common user instead of one user per PDB. For instance, this is strictly required when setting up a downstream extract. However, in general, it might be a bad idea to create a common C##GGADMIN user and grant it privileges with CONTAINER => ALL, because you might not want such a privileged user to exist in all your PDBs.
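
If a common user is not required for your topology, a local administrator per PDB is a possible alternative. Here is a minimal sketch (the user name, password and PDB name are examples):

sqlplus / as sysdba

SQL> alter session set container=PDB1;
SQL> create user ggadmin identified by "MyPassword_01";
SQL> grant create session to ggadmin;
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggadmin', container => 'CURRENT');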

L’article ORA-44001 when setting up GoldenGate privileges on a CDB est apparu en premier sur dbi Blog.

Alfresco – Solr search result inconsistencies

Wed, 2025-11-12 04:38

We recently encountered an error at a customer’s site. Their Alfresco environment was behaving strangely.
Sometimes searches returned the expected results, and sometimes they did not.

The context

The environment is composed of two Alfresco 7 nodes in a cluster and two load-balanced Solr 6.6 nodes (in active-active mode).

Sometimes the customer isn’t able to retrieve a document that was created recently.

Investigation steps

Since we have load balancing in place, the first step is to confirm that everything is okay on the two nodes.

  • I checked that Alfresco is running as expected. Nothing out of the ordinary: the processes are there, there are no errors in the log files, and everything is green in the admin console.
  • Then I checked the alfresco-global.properties on both nodes to ensure the configuration is the same (you never know). I also checked the way we connect to Solr and confirmed that the load-balanced URL is being used.
  • At this point, it is almost certain that the problem is with Solr. We will start by checking the administration console. Because we have load balancing, we must connect to each node individually and cannot use the URL in alfresco-global.properties.
  • At first glance, everything seems fine, but a closer inspection of the Core Admin panel reveals a difference of several thousand “NumDocs” between the two nodes. These values may legitimately differ somewhat between the nodes, but such a discrepancy is too high in my opinion.
  • How can this assumption be verified? Go to any core and run a query listing all the documents (cm:name:*). On the first node, the query returns an error. On the second node, I received a proper list of results.
  • Now, moving to the server where I get the error, the logs contain errors like:
2025-11-10 15:33:32.466 ERROR (searcherExecutor-137-thread-1-processing-x:alfresco-3) [   x:alfresco-3] o.a.s.c.SolrCore null:org.alfresco.service.cmr.dictionary.DictionaryException10100009 d_dictionary.model.err.parse.failure
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:113)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:99)
        at org.alfresco.solr.tracker.ModelTracker.loadPersistedModels(ModelTracker.java:181)
        at org.alfresco.solr.tracker.ModelTracker.<init>(ModelTracker.java:142)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.createModelTracker(SolrCoreLoadListener.java:341)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.newSearcher(SolrCoreLoadListener.java:139)
        at org.apache.solr.core.SolrCore.lambda$getSearcher$15(SolrCore.java:2249)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.jibx.runtime.JiBXException: Error accessing document
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:293)
        at org.jibx.runtime.impl.UnmarshallingContext.toStart(UnmarshallingContext.java:446)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalElement(UnmarshallingContext.java:2750)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalDocument(UnmarshallingContext.java:2900)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:108)
        ... 11 more
Caused by: java.io.EOFException: input contained no data
        at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3003)
        at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
        at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1410)
        at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395)
        at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:291)
        ... 15 more
  • It looks like the problem is related to the model definition. We need to check if the models are still there in ../solr_data/models. The models are still in place, but one of them is 0 KB.
  • So we need to delete the empty file and restart Solr to force the model to be reimported (see the commands below).
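
For reference, the cleanup could look like this minimal sketch (the Solr data path and service name are examples, adapt them to your installation):

# list the empty (0-byte) model definitions first and review them
find /opt/alfresco-search-services/solr_data/models -type f -size 0
# then remove them and restart Solr so the models are reimported from Alfresco
find /opt/alfresco-search-services/solr_data/models -type f -size 0 -delete
systemctl restart solr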

After taking these actions, the model file was reimported and the errors disappeared from the logs. In the admin console, we can see NumDocs increasing again, and when we re-run the query, we get results.

L’article Alfresco – Solr search result inconsistencies est apparu en premier sur dbi Blog.
