Feed aggregator

ORA-44001 when setting up GoldenGate privileges on a CDB

Yann Neuhaus - 2 hours 10 min ago

I was recently setting up GoldenGate for a client when I was struck by an ORA-44001 error. I definitely wasn’t the first one to come across this while playing with grants on GoldenGate users, but nowhere could I find the exact reason for the issue. Not a single question or comment on the matter offered a solution.

The problem occurs when running the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure described in the documentation. An example given by the documentation is the following:

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'c##ggadmin', CONTAINER => 'ALL');

And the main complaint mentioned regarding this command was the following ORA-44001 error:

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##ggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

The solution is in fact quite simple, but I decided to investigate a bit further and play with the multitenant architecture. In this blog, I will use an Oracle 19c CDB with a single pluggable database named PDB1.

Do you have read-only PDBs on your CDB?

For me, this was really the only thing that mattered when encountering this error. On a CDB with tens of PDBs, you might have some PDBs in read-only mode, whether to keep templates aside or because of temporary restrictions on a specific PDB. Let’s try to replicate the error.

First example: PDB in read-write, grant operation succeeds

If you first try to grant the admin privileges with a PDB in read-write, it succeeds:

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');

PL/SQL procedure successfully completed.

Second example: PDB in read-only before the user creation, grant operation fails with ORA-44001

If you first put the PDB in read-only mode and then create the common user, the user does not get created inside that read-only PDB, and you get the ORA-44001 when granting the privileges.

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

Third example: PDB in read-only after the user creation, grant operation fails with ORA-16000

Where this gets tricky is the order in which you run the statements. If you create the user before putting the PDB in read-only mode, you get another error, because the user actually exists:

SQL> drop user c##oggadmin;

User dropped.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> create user c##oggadmin identified by ogg;

User created.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 93
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 84
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 123
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3635
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2

As is often the case with Oracle, the error messages can be misleading. The third example clearly points to the issue, while the second one is tricky to debug (even though the error is completely valid).
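A quick way to check for this situation before granting the privileges is to look at the open mode of every PDB. Here is a minimal sketch, assuming a SYSDBA connection to the CDB root:

sqlplus -s / as sysdba <<'SQL'
set pagesize 100 linesize 120
column name format a30
-- any PDB currently open READ ONLY is a candidate for the ORA-44001 described above
select name, open_mode from v$pdbs order by name;
SQL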

Should I create a GoldenGate user at the CDB level?

Depending on your replication configuration, you might need to create a common user instead of one user per PDB. For instance, this is strictly required when setting up a downstream extract. In general, however, it might be a bad idea to create a common C##GGADMIN user and grant it privileges with CONTAINER => 'ALL', because you might not want such a privileged user to exist in all your PDBs.
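If you go the per-PDB route instead, the same procedure can be called inside the target PDB with CONTAINER => 'CURRENT'. A minimal sketch, with a hypothetical local user name and password:

sqlplus -s / as sysdba <<'SQL'
-- create a local GoldenGate admin user in PDB1 only and grant the privileges there
alter session set container = PDB1;
create user ggadmin identified by "ChangeMe_1";
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggadmin', container => 'CURRENT');
SQL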


Alfresco – Solr search result inconsistencies

Yann Neuhaus - Wed, 2025-11-12 04:38

We recently encountered an issue at a customer’s site: their Alfresco environment was behaving strangely.
Sometimes searches returned the expected results, and sometimes they did not.

The context

The environment is composed of two clustered Alfresco 7 nodes and two load-balanced Solr 6.6 nodes (in active-active mode).

Sometimes the customer is not able to retrieve a document that was created recently.

Investigation steps

Since we have load balancing in place, the first step is to confirm that everything is okay on the two nodes.

  • I checked that Alfresco was running as expected. Nothing out of the ordinary: the processes were there, the log files showed no errors, and everything was green in the admin console.
  • Then I checked the alfresco-global.properties on both nodes to ensure the configuration was the same (you never know). I also checked the way we connect to Solr and confirmed that the load-balanced URL is being used.
  • At this point, it was almost certain that the problem was on the Solr side. We start by checking the administration console. Because of the load balancing, we must connect to each node individually and cannot use the URL from alfresco-global.properties.
  • At first glance, everything seemed fine, but a closer inspection of the Core Admin panel revealed a difference of several thousand “NumDocs” between the two nodes. These values may differ slightly between nodes because each node maintains its own index files, but in my opinion the discrepancy was far too high.
  • How can this assumption be verified? Go to any core and run a query that lists all documents (cm:name:*). On the first node, the query returned an error. On the second node, the query returned results as expected.
  • Moving on to the node returning the error, the logs contained errors like:
2025-11-10 15:33:32.466 ERROR (searcherExecutor-137-thread-1-processing-x:alfresco-3) [   x:alfresco-3] o.a.s.c.SolrCore null:org.alfresco.service.cmr.dictionary.DictionaryException10100009 d_dictionary.model.err.parse.failure
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:113)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:99)
        at org.alfresco.solr.tracker.ModelTracker.loadPersistedModels(ModelTracker.java:181)
        at org.alfresco.solr.tracker.ModelTracker.<init>(ModelTracker.java:142)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.createModelTracker(SolrCoreLoadListener.java:341)
        at org.alfresco.solr.lifecycle.SolrCoreLoadListener.newSearcher(SolrCoreLoadListener.java:139)
        at org.apache.solr.core.SolrCore.lambda$getSearcher$15(SolrCore.java:2249)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.jibx.runtime.JiBXException: Error accessing document
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:293)
        at org.jibx.runtime.impl.UnmarshallingContext.toStart(UnmarshallingContext.java:446)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalElement(UnmarshallingContext.java:2750)
        at org.jibx.runtime.impl.UnmarshallingContext.unmarshalDocument(UnmarshallingContext.java:2900)
        at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:108)
        ... 11 more
Caused by: java.io.EOFException: input contained no data
        at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3003)
        at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
        at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1410)
        at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395)
        at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
        at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:291)
        ... 15 more
  • It looks like the problem is related to a model definition. We need to check whether the models are still there in ../solr_data/models. They are all still in place, but one of them is 0 KB.
  • So we need to delete the empty file and restart Solr to force the model to be re-imported (see the sketch after this list).
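Here is a minimal sketch of these two steps, assuming Solr runs as a systemd service named solr; the models directory is a placeholder to adjust to your installation:

MODELS_DIR=/path/to/solr_data/models        # adjust to your installation

# list the persisted model files that are empty (0 bytes)
find "$MODELS_DIR" -type f -size 0 -print

# delete them, then restart Solr so the models are fetched again from the repository
find "$MODELS_DIR" -type f -size 0 -delete
systemctl restart solr                      # or however Solr is started in your setup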

After taking these actions, the model file was re-imported and the errors in the logs disappeared. In the admin console, we can see NumDocs increasing again, and when we re-run the query, we get results.
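To compare both nodes afterwards, the same query can be replayed directly against each Solr node. A minimal sketch, assuming the default alfresco core on port 8983 and hypothetical host names (adjust the URLs, and add the client certificate options if mutual TLS is enabled between Alfresco and Solr):

for HOST in solr-node1 solr-node2; do       # hypothetical host names
  # count the documents matching cm:name:* on each node
  curl -s -G "http://${HOST}:8983/solr/alfresco/select" \
       --data-urlencode "q=cm:name:*" \
       --data-urlencode "rows=0" \
       --data-urlencode "wt=json"
  echo
done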


PostgreSQL 19: Logical replication of sequences

Yann Neuhaus - Wed, 2025-11-12 02:50

Logical replication in PostgreSQL has gained a lot of features and performance improvements over the last releases. It was introduced in PostgreSQL 10 back in 2017, and PostgreSQL 9.6 (in 2016) introduced logical decoding, which is the basis for logical replication. Today logical replication is really mature and, from my point of view, only two major features are missing: DDL replication and the replication of sequences. The latter is now possible with the upcoming PostgreSQL 19 next year, and this is what this post is about.

Before we can see how this works, we need a logical replication setup. An easy method to set this up is to create a physical replica and then transform it into a logical replica using pg_createsubscriber:

postgres@:/home/postgres/ [pgdev] psql -c "create table t ( a int primary key generated always as identity, b text)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('aaaa')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('bbbb')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] pg_createsubscriber --all --pgdata=/var/tmp/dummy --subscriber-port=8888 --publisher-server="host=localhost,port=5432"
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
2025-11-11 13:21:20.818 CET - 1 - 9669 -  - @ - 0LOG:  redirecting log output to logging collector process
2025-11-11 13:21:20.818 CET - 2 - 9669 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".
2025-11-11 13:21:21.250 CET - 1 - 9684 -  - @ - 0LOG:  redirecting log output to logging collector process
2025-11-11 13:21:21.250 CET - 2 - 9684 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".

Once this is done we have a logical replica and the data is synchronized:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
 a |  b   
---+------
 1 | aaaa
 2 | bbbb
(2 rows)

A quick check that replication is ongoing:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values('cccc');"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
 a |  b   
---+------
 1 | aaaa
 2 | bbbb
 3 | cccc
(3 rows)

The “generated always as identity” we used above to create the table automatically created a sequence for us:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

Checking the same sequence on the replica clearly shows that the sequence is not synchronized (last_value is still 2):

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 2

The reason is that sequences are not synchronized automatically:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid             | 16397
pubname         | pg_createsubscriber_5_f58e9acd
pubowner        | 10
puballtables    | t
puballsequences | f
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n

As there is currently no way to enable sequence synchronization for an existing publication, we can either drop and re-create it or add an additional publication and subscription just for the sequences:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "create publication pubseq for all sequences;"
CREATE PUBLICATION
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid             | 16397
pubname         | pg_createsubscriber_5_f58e9acd
pubowner        | 10
puballtables    | t
puballsequences | f
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n
-[ RECORD 2 ]---+-------------------------------
oid             | 16398
pubname         | pubseq
pubowner        | 10
puballtables    | f
puballsequences | t
pubinsert       | t
pubupdate       | t
pubdelete       | t
pubtruncate     | t
pubviaroot      | f
pubgencols      | n

The publication has sequence replication enabled, and the subscription that consumes it can be created like this:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create subscription subseq connection 'host=localhost port=5432' publication pubseq"
CREATE SUBSCRIPTION

Now the sequence is visible in pg_subscription_rel on the subscriber:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_subscription_rel;"
 srsubid | srrelid | srsubstate |  srsublsn  
---------+---------+------------+------------
   24589 |   16385 | r          | 
   24590 |   16384 | r          | 0/04004780   -- sequence
(2 rows)

State “r” means ready, so the sequence should have been synchronized, and indeed it has:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 3

Adding new rows to the table, which also increases the last_value of the sequence, should also synchronize the sequences:

postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('eeee')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('ffff')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 6

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 4

… but this does not happen automatically. To get the sequences synchronized, you need to refresh the subscription:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "alter subscription subseq refresh sequences"
Expanded display is on.
ALTER SUBSCRIPTION
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname    | public
sequencename  | t_a_seq
sequenceowner | postgres
data_type     | integer
start_value   | 1
min_value     | 1
max_value     | 2147483647
increment_by  | 1
cycle         | f
cache_size    | 1
last_value    | 6

Great, this reduces the work to fix the sequences quite a bit and is really helpful. As usual, thanks to all involved.


PostgreSQL 19: The “WAIT FOR” command

Yann Neuhaus - Tue, 2025-11-11 03:32

When you go for replication and you don’t use synchronous replication, there is always a window during which data written on the primary is not yet visible on the replica. This is known as “replication lag” and can be monitored using the pg_stat_replication catalog view. A recent commit to PostgreSQL 19 implements a way to wait for data to be visible on the replica without switching to synchronous replication, and this is what the “WAIT FOR” command is for.

Before we can see how that works, we need a replica, because when you try to execute this command on a primary you’ll get this:

postgres=# select version();
                                        version                                        
---------------------------------------------------------------------------------------
 PostgreSQL 19devel on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)

postgres=# WAIT FOR LSN '0/306EE20';
ERROR:  recovery is not in progress
HINT:  Waiting for the replay LSN can only be executed during recovery.
postgres=# 

So, let’s create a replica and start it up:

postgres@:/home/postgres/ [pgdev] mkdir /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf 
postgres@:/home/postgres/ [pgdev] chmod 700 /var/tmp/dummy
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select pg_is_in_recovery()"
 pg_is_in_recovery 
-------------------
 t
(1 row)

As nothing is happening on the primary right now, data on the primary and the replica is exactly the same:

postgres=# select usename,sent_lsn,write_lsn,flush_lsn,replay_lsn from pg_stat_replication;
 usename  |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn 
----------+------------+------------+------------+------------
 postgres | 0/03000060 | 0/03000060 | 0/03000060 | 0/03000060

To see how “WAIT FOR” behaves, we need to cheat a little and pause WAL replay on the replica (we could also cut the network between the primary and the replica):

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_wal_replay_pause();" 
 pg_wal_replay_pause 
---------------------
 
(1 row)
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_is_wal_replay_paused();" 
 pg_is_wal_replay_paused 
-------------------------
 t
(1 row)

On the primary, create a table and get the current LSN:

postgres@:/home/postgres/ [pgdev] psql -c "create table t(a int)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t values(1)"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "select pg_current_wal_insert_lsn();"
 pg_current_wal_insert_lsn 
---------------------------
 0/03018CA8
(1 row)

As WAL replay on the replica is paused, the “WAIT FOR” command will now block:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'" 

This is the intended behavior, as we want to make sure that we can see all the data up to this LSN. Once we resume WAL replay on the replica, the “WAIT FOR” command returns success, as all the data has reached the replica:

postgres@:/home/postgres/ [DEV] psql -p 8888 -c "select * from pg_wal_replay_resume();"
 pg_wal_replay_resume 
----------------------
 
(1 row)

… the other session will unblock:

postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "WAIT FOR LSN '0/03018CA8'" 
 status  
---------
 success
(1 row)

So, starting with PostgreSQL 19 next year, there is a way for applications to make sure that a replica has replayed all the data up to a specific LSN, by blocking until the data is there.
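Put together, an application-side pattern could look like the following. This is a minimal sketch, assuming the same two instances on ports 5432 and 8888 and the table t created above:

# write on the primary, then capture the current WAL insert LSN
psql -p 5432 -c "insert into t values (2)"
LSN=$(psql -p 5432 -At -c "select pg_current_wal_insert_lsn()")

# on the replica, block until that LSN has been replayed, then read
psql -p 8888 -c "WAIT FOR LSN '${LSN}'"
psql -p 8888 -c "select * from t"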


Setting up TLS encryption and authentication in MongoDB

Yann Neuhaus - Mon, 2025-11-10 02:00

When securing a MongoDB deployment, protecting sensitive data is paramount. MongoDB supports encryption throughout the lifecycle of the data, with three primary types of data encryption:

  • Encryption in transit
  • Encryption at rest
  • Encryption in use


Among these, encryption in transit is fundamental: it protects data as it moves between your application and the database. In MongoDB, this is achieved through TLS (Transport Layer Security), which ensures that communication remains private and secure. You have two options when it comes to using TLS for your database:

  • Using TLS for encryption only.
  • Using TLS both for encryption and authentication to the database.

Setting up TLS for encryption

Create a Certificate Authority

We’ll first create a Certificate Authority. These certificates will be self-signed, which is fine for testing, but you shouldn’t use self-signed certificates in a production environment! On Linux, use the openssl library to generate the certificates.

openssl req -newkey rsa:4096 -nodes -x509 -days 365 -keyout ca.key -out ca.pem -subj "/C=CH/ST=ZH/L=Zurich/O=dbi/OU=MongoDBA/CN=vm.domain.com"

Here is a description of some important parameters of the command:

  • -newkey rsa:4096 : Generates a new private key and a certificate request using RSA with a 4096-bit key size.
  • -nodes : Skips password encryption of the private key. Without it, OpenSSL would prompt you to set a passphrase.
  • -x509 : Generates a self-signed certificate. x509 is supported by MongoDB.
  • -days 365 : Validity of the certificate in days.
  • -keyout ca.key : Filename for the private key.
  • -out ca.pem : Filename for the certificate.
  • -subj "..." : Provides the subject’s Distinguished Name (DN). If you don’t specify it, OpenSSL will prompt for each field.

Create a Server Certificate

Then, we’ll create the server certificate for the MongoDB instance. In the openssl-server.cnf file below, you should change the req_distinguished_name fields to match what you used while creating the Certificate Authority, and replace vm.domain.com with the name of your machine.

If you only have an IP and no DNS entry for your VM, use IP.1 instead of DNS.1 in the alt_names section.

cat > openssl-server.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBA
CN = myVM

[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = myVM
EOF

Then, generate the certificate with these commands:

openssl req -newkey rsa:4096 -nodes -keyout mongodb-server.key -out mongodb-server.csr -config openssl-server.cnf

openssl x509 -req -in mongodb-server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-server.crt -days 365 -extensions v3_req -extfile openssl-server.cnf

cat mongodb-server.key mongodb-server.crt > mongodb-server.pem

Create a Client Certificate

Finally, we’ll create a client certificate. The process is the same, with a few tweaks:

  • OU should be different from the one in the server certificate. It is not mandatory for the communication itself, but it will be for authentication if you decide to enable it.
  • CN should also be different.
  • extendedKeyUsage should be set with clientAuth instead of serverAuth.

cat > openssl-client.cnf <<EOF
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[ req_distinguished_name ]
C = CH
ST = ZH
L = Zurich
O = dbi
OU = MongoDBAClient
CN = userApp

[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF

The creation of the certificate is the same.

openssl req -newkey rsa:4096 -nodes -keyout mongodb-client.key -out mongodb-client.csr -config openssl-client.cnf

openssl x509 -req -in mongodb-client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out mongodb-client.crt -days 365 -extensions v3_req -extfile openssl-client.cnf

cat mongodb-client.key mongodb-client.crt > mongodb-client.pem

MongoDB Configuration Change

Make sure to set permissions correctly for your certificates.

chmod 600 ca.pem mongodb-server.pem mongodb-client.pem
chown mongod: ca.pem mongodb-server.pem mongodb-client.pem

Now, you can change your MongoDB configuration file to include the certificates. Simply add the net.tls part to your mongod.conf file.

net:
  bindIp: yourIP
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem

You can now restart your MongoDB instance with systemctl restart mongod (or whatever you’re using), and then test the connection to your instance from your client. Of course, the port set in the net.port field of your configuration file shouldn’t be blocked by your firewall.

> mongosh --host myVM --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem
Current Mongosh Log ID:	682c9641bbe4593252ee7c8c
Connecting to:		mongodb://vmIP:27017/?directConnection=true&tls=true&tlsCertificateKeyFile=Fclient.pem&tlsCAFile=ca.pem&appName=mongosh+2.5.1
Using MongoDB:		8.0.9
Using Mongosh:		2.5.1

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

test>

You’re now connected to your MongoDB instance through TLS! And if you’re not using the certificate, the requireTLS mode prevents the connection from being established and generates error messages like this in your logs:

{"t":{"$date":"2025-05-21T04:43:55.277+00:00"},"s":"I",  "c":"EXECUTOR", "id":22988,   "ctx":"conn52","msg":"Error receiving request from client. Ending connection from remote","attr":{"error":{"code":141,"codeName":"SSLHandshakeFailed","errmsg":"The server is configured to only allow SSL connections"},"remote":"IP:50100","connectionId":52}}

If you want to learn more about MongoDB logs, I wrote a blog on this topic: MongoDB Log Analysis: A Comprehensive Guide.

Setting up TLS authentication

Now that you’re connected, we will set up authentication so that you can connect to MongoDB as a specific user. Using the already established connection, create a user in the $external database. Each client certificate that you create can be mapped to one MongoDB user. Retrieve the username that you will use:

> openssl x509 -in mongodb-client.pem -inform PEM -subject -nameopt RFC2253 | grep subject
subject=CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH

And then create the user in the $external database, using the existing MongoDB connection:

test> db.getSiblingDB("$external").runCommand({
  createUser: "CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" }
  ]
});

To check that everything works as intended, you can try to display collections in the admin database. For the moment, there is no error because you have all the rights to do it (no authorization is enforced).

test> use admin
switched to db admin
admin> show collections;
system.users
system.version

You can now edit the MongoDB configuration file by adding the net.tls.allowConnectionsWithoutCertificates parameter (set to false here, as shown below) and the security.authorization flag set to enabled. The mongod.conf file should look like this:

net:
  bindIp: X.X.X.X
  port: XXXXX
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb-server.pem
    CAFile: /path/to/ca.pem
    allowConnectionsWithoutCertificates: false

security:
  authorization: enabled

After restarting with systemctl restart mongod, you can now connect again. If you use the same command as before, you will log in without authentication and get the error below whenever you try to do anything:

MongoServerError[Unauthorized]: Command listCollections requires authentication

So you should now connect with this command:

mongosh --host vmIP --port 27017 --tls --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509

If you want to show the admin collections, you will now get an error, because your user only has the userAdminAnyDatabase role granted (this role was chosen during the user creation, see above).

admin> show collections
MongoServerError[Unauthorized]: not authorized on admin to execute command { listCollections: 1, filter: {}, cursor: {}, nameOnly: true, authorizedCollections: false, lsid: { id: UUID("9c48b4c4-7702-49ce-a97c-52763b2ad6b3") }, $db: "admin" }

But it’s fine: you can grant yourself more roles (readWriteAnyDatabase, for instance) and create new users if you want.
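For example, granting an additional role to the x.509 user can be done from an authenticated session with the grantRolesToUser command. A minimal sketch, where the DN and certificate paths must match your own setup:

mongosh --host myVM --port 27017 --tls \
        --tlsCertificateKeyFile mongodb-client.pem --tlsCAFile ca.pem \
        --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509 \
        --eval 'db.getSiblingDB("$external").runCommand({
                  grantRolesToUser: "CN=userApp,OU=MongoDBA,O=dbi,L=Zurich,ST=ZH,C=CH",
                  roles: [ { role: "readWriteAnyDatabase", db: "admin" } ]
                })'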

The communication between the client and the server is now fully secured. Congratulations!

MongoServerError[BadValue]

Side note: if you ever encounter this error:

MongoServerError[BadValue]: Cannot create an x.509 user with a subjectname that would be recognized as an internal cluster member

… make sure to follow the RFC-2253 standard. For instance, you could get this error if one of the fields is too long. Also, as a reminder, the client certificate should have a different Distinguished Name (DN) than the server certificate (see the documentation for more information).


Problem with loading data from XML decimal point with german nls-settings

Tom Kyte - Fri, 2025-11-07 17:20
-- Settings
select banner from v$version;
-- Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

select * from v$nls_parameters
where parameter in ('NLS_NUMERIC_CHARACTERS','NLS_TERRITORY','NLS_LANGUAGE');
-- NLS_LANGUAGE            GERMAN   3
-- NLS_TERRITORY           GERMANY  3
-- NLS_NUMERIC_CHARACTERS  ,.       3

create table asktom_gpx_tab
(i        number
,xmldata  xmltype
);

INSERT INTO asktom_gpx_tab -- (i,xmldata)
VALUES (1
,xmltype(
'<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<gpx>
<trkpt lat="49.773210" lon="8.930600"><ele>340.8</ele></trkpt>
<trkpt lat="49.772980" lon="8.930280"><ele>342.0</ele></trkpt>
<trkpt lat="49.772740" lon="8.929900"><ele>341.5</ele></trkpt>
</gpx>')
);
commit;

select elevation
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- converted because NLS sets decimal sign to comma
-- 342.0 -- why is this value not converted to comma???
-- 341,5 -- converted because NLS sets decimal sign to comma

-- I found a question here with a supposed solution but it doesn't work right
select to_number(elevation,'FM9999D99','nls_numeric_characters = '',.''')
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- ok
-- 3420  -- This is definitely a wrong value
-- 341,5 -- ok

Shouldn't all values be treated the same way?
Categories: DBA Blogs

What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?

Tom Kyte - Fri, 2025-11-07 17:20
What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?
Categories: DBA Blogs

impdp commit size parameter

Tom Kyte - Fri, 2025-11-07 17:20
Is a commit size parameter planned for the Data Pump import utility in the future? Currently there is none. I see that direct path insert and parallel processing are there; if so, would adding a commit size parameter help with faster data loading?
Categories: DBA Blogs

Performance Analysis of SQL over DB Link

Tom Kyte - Fri, 2025-11-07 17:20
Hello and thank you for your response. Sorry I could not provide a live example SQL. I have an application running in JBoss that connects to a database, Reporting_DB, and executes SQLs there. The SQLs actually access views in Reporting_DB. The views access tables which are in a remote database, say ReadOnlyStandby_DB, accessed over a DB link. Here is a simple schematic:

Application --Network--> Reporting_DB ---DB Link--> ReadOnlyStandby_DB
ORM SQL ---> View ---> Table

SQL signature. Example of SQL seen in Reporting_DB (the application uses an ORM (Hibernate), so the SQLs are always changing as seen in Reporting_DB):

select this_.Column1 as Column1_485_0_, this_.Column2 as Column2_485_0_, this_.Column3 as AUDIT3_485_0 etc.

Example of SQL seen in ReadOnlyStandby_DB:

SELECT "A1"."col1" ... FROM "TABLE_1" where CONDITION ... UNION ALL ...

The users are complaining that performance is slow, and developers are sharing application logs that show SQLs timing out. The final database where the SQL is executed is a Data Guard read-only standby database. I have OEM on Reporting_DB and ReadOnlyStandby_DB, but the app server (JBoss) is outside my jurisdiction and I have no access or insight. How can I get the following details?

1) Connect the dots for sessions: how to connect the dots from the app, to the DB session on Reporting_DB, to the DB session on ReadOnlyStandby_DB
2) Session trace: how to trace a session coming in over the DB link on ReadOnlyStandby_DB
3) SQL analysis: the SQL on Reporting_DB is not the same as on ReadOnlyStandby_DB; it seems to change. How to connect SQL "a" on Reporting_DB to its related SQL "b" on ReadOnlyStandby_DB?
Categories: DBA Blogs

Materialized view based on View expanded to underlying objects

Tom Kyte - Fri, 2025-11-07 17:20
Hello All

I have created a Materialized view using the following code

CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ORGANIZATION HEAP PCTFREE 10 PCTUSED 0 INITRANS 2 MAXTRANS 255
NOCOMPRESS NOLOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "APPS_TS_TX_DATA"
BUILD DEFERRED
USING INDEX
REFRESH COMPLETE ON DEMAND START WITH sysdate+0 NEXT sysdate + (1/24/60)*30
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING TRUSTED CONSTRAINTS EVALUATE USING CURRENT EDITION
DISABLE ON QUERY COMPUTATION
DISABLE QUERY REWRITE
AS SELECT * FROM xxbst_unpack_pending_v;

This is in a non prod environment which has just been restored from a backup due to the code being changed in prod to this lowest level expanded code

CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ON PREBUILT TABLE WITHOUT REDUCED PRECISION
USING INDEX
REFRESH FORCE ON DEMAND
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS EVALUATE USING CURRENT EDITION
DISABLE ON QUERY COMPUTATION
DISABLE QUERY REWRITE
AS SELECT "A1"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","A1"."CUSTOMER_NAME" "CUSTOMER_NAME","A1"."SUPPLIER_ID" "SUPPLIER_ID","A1"."SUPPLIER_NAME" "SUPPLIER_NAME","A1"."SHIPMENT_ID" "SHIPMENT_ID","A1"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER","A1"."UNPACK_DATE" "UNPACK_DATE","A1"."DAYS_IN_UNPACK_PENDING_STATUS" "DAYS_IN_UNPACK_PENDING_STATUS"
FROM (SELECT "A7"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","APPS"."XXBST_DEV_UTILS_PKG"."GET_CUST_SHORT_NAME"("A7"."CUST_ACCOUNT_ID") "CUSTOMER_NAME","A3"."VENDOR_ID" "SUPPLIER_ID","A3"."VENDOR_NAME" "SUPPLIER_NAME","A10"."SHIPMENT_ID" "SHIPMENT_ID","A10"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER",MAX("A7"."CREATION_DATE") "UNPACK_DATE",TRUNC(SYSDATE)-TRUNC(MAX("A7"."CREATION_DATE")) "DAYS_IN_UNPACK_PENDING_STATUS"
      FROM "XXBST"."XXBST_TNT_SHPMNT" "A10","XXBST"."XXBST_TNT_MV_PLAN" "A9","XXBST"."XXBST_TNT_MV" "A8","XXICE"."XXICE_SM_RCV_HEADERS" "A7","XXBST"."XXBST_TNT_HEADERS_ALL" "A6","XXBST"."XXBST_TNT_LINES_ALL" "A5","XXBST"."XXBST_BOM_ORDER_HEADERS_ALL" "A4",
           (SELECT "A12"."VENDOR_ID" "VENDOR_ID","A12"."VENDOR_NAME" "VENDOR_NAME" FROM "AP"."AP_SUPPLIERS" "A12","AR"."HZ_PARTIES" "A11" WHERE "A12"."PARTY_ID"="A11"."PARTY_ID") "A3","XXBST"."XXBST_TNT_MV_PLAN" "A2"
      WHERE "A10"."SHIPMENT_ID"="A7"."SHIPMENT_ID" AND "A10"."SHIPMENT_ID"="A9"."SHIPMENT_ID" AND "A9"."PLAN_ID"="A8"."PLAN_ID" AND "A8"."MV_TYPE"='DELIVERY' AND "A8"."MV_STATUS"='UNPACK_PENDING' AND "A6"."SHIPM...
Categories: DBA Blogs

Export Oracle Label Security info from one DB to be imported into another DB

Tom Kyte - Fri, 2025-11-07 17:20
Hi Sir, I'm trying to copy the Oracle Label Security information from one database and apply it to another one. I tried to use expdp/impdp and dbms_metadata.get_ddl, but I didn't have success using them. Is there a special package or tool to get this work done? Thanks in advance. Valerio Almeida
Categories: DBA Blogs

Oracle GoldenGate Response Files: Understanding the Evolution from 12c to 23ai

DBASolved - Fri, 2025-11-07 09:48

Your Oracle GoldenGate 21c response file won't work for 23ai deployments—and that's by design. Oracle fundamentally restructured GoldenGate configuration with 23ai, introducing deployment-level administrator separation, granular directory control, Configuration Service options, and enhanced security with TLS 1.3 support. This detailed comparison reveals the critical differences between response file schemas, explains why backward compatibility breaks, and provides practical migration strategies for maintaining automation across GoldenGate versions from 12c through 23ai. Learn how to leverage new capabilities like remote metrics and centralized configuration management while keeping your existing deployments running smoothly.

The post Oracle GoldenGate Response Files: Understanding the Evolution from 12c to 23ai appeared first on DBASolved.

Categories: DBA Blogs

Securing an Existing Unsecure GoldenGate Installation

Yann Neuhaus - Fri, 2025-11-07 01:00

You might have an existing unsecure GoldenGate installation that you would like to secure, whether it’s for security reasons or because you would like to dissociate the installation from its securing process. After searching everywhere in the Oracle documentation for how to proceed, I decided to try it myself, investigate, and eventually even ask Oracle directly. Here is the answer.

For a TL;DR version of the answer, please go to the end of the blog, but in the meantime, here is my reasoning.

Setup differences between a secure and unsecure GoldenGate installation

Installation differences

From an installation perspective, the difference between a secure and an unsecure installation is narrow. I talked earlier about graphical and silent GoldenGate installations, and for the silent installation, the following response file parameters are the only ones involved in this security aspect:

# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=false

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=false
FIPS_ENABLED=false
SERVER_CERTIFICATE=
SERVER_CERTIFICATE_KEY_FILE=
SERVER_CA_CERTIFICATES_FILE=
CLIENT_CERTIFICATE=
CLIENT_CERTIFICATE_KEY_FILE=
CLIENT_CA_CERTIFICATES_FILE=

The *_ENABLED parameters are just flags that should be set to true to secure the installation (at least SECURITY_ENABLED and one TLS parameter), and then you need to provide the certificate files (client and server, three for each).

To summarize, there is not much to do to configure a secure GoldenGate setup, so it shouldn’t be that difficult to enable these security features after installation: one flag and a few certificates.
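For reference, here is a minimal sketch of how that security section of a silent-install response file might look once filled in; the file name and certificate paths are purely hypothetical and depend on how you created your certificates:

cat >> /path/to/oggca_install.rsp <<'EOF'
# SECTION C - SECURITY MANAGER
SECURITY_ENABLED=true

# SECTION H - SECURITY
TLS_1_2_ENABLED=false
TLS_1_3_ENABLED=true
FIPS_ENABLED=false
SERVER_CERTIFICATE=/u01/app/ogg/certs/server.pem
SERVER_CERTIFICATE_KEY_FILE=/u01/app/ogg/certs/server.key
SERVER_CA_CERTIFICATES_FILE=/u01/app/ogg/certs/ca.pem
CLIENT_CERTIFICATE=/u01/app/ogg/certs/client.pem
CLIENT_CERTIFICATE_KEY_FILE=/u01/app/ogg/certs/client.key
CLIENT_CA_CERTIFICATES_FILE=/u01/app/ogg/certs/ca.pem
EOF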

Configuration differences

From a configuration perspective, there are not many differences either. Looking at the deploymentConfiguration.dat file for both a secure and an unsecure service manager, the only difference lies in the SecurityManager.config.securityDetails section. After removing what is identical, here are the differences:

# Secure installation
            "securityDetails": {
                "network": {
                    "common": {
                        "fipsEnabled": false,
                    },
                    "inbound": {
                        "authMode": "clientOptional_server",
                        "cipherSuites": [
                            "TLS_AES_256_GCM_SHA384",
                            "TLS_AES_128_GCM_SHA256",
                            "TLS_CHACHA20_POLY1305_SHA256"
                        ],
                        "protocolVersion": "TLS_ALL"
                    },
                    "outbound": {
                        "authMode": "clientOptional_server",
                    }
                }
            },

# Unsecure installation
            "securityDetails": {
                "network": {
                    "common": {
                        "fipsEnabled": false,
                    },
                    "inbound": {
                        "authMode": "clientOptional_server",
                        "cipherSuites": "^((?!anon|RC4|NULL|3DES).)*$",
                    },
                    "outbound": {
                        "authMode": "client_server",
                    }
                }
            },

Basically, securityDetails.outbound.authMode is set to clientOptional_server on one side and client_server on the other. The unsecure configuration also has a different securityDetails.inbound.cipherSuites parameter and is missing the inbound protocolVersion parameter.

But nothing in the configuration points to the wallet files, located in $OGG_ETC_HOME/ssl. So, how do we add them here?

Can you secure an unsecure GoldenGate installation?

When connecting to an unsecure GoldenGate service manager, you still have the ability to add and manage certificates from the UI, the same way you would on a secure installation:

Certificates Management tab on an unsecured GoldenGate service manager UI

Unfortunately, just adding the certificates from the UI doesn’t make your installation secure. In fact, even after modifying the deploymentConfiguration.dat files to add the last missing pieces of configuration described above, it doesn’t work. You will only end up with a broken installation, even if you do the same for all your deployments and restart everything.

Is there really no way to secure an already existing GoldenGate installation?

Unfortunately, not at this point. And it was confirmed earlier this week on the MOSC forums by Gopal Gaur, Senior Principal Software Engineer working on GoldenGate at Oracle.

You can not convert non secure deployment into secure deployment, you will need a new service manager that supports sever side SSL/TLS.

You can not convert non secure deployment into secure deployment at this stage, we have an opened enhancement for this.

To wrap up: the bad news is that it is not possible to secure an existing GoldenGate installation; the good news is that Oracle is apparently working on it. In the meantime, just re-install GoldenGate.


Incremental Backups of a Standby Database

Hemant K Chitale - Fri, 2025-11-07 00:16

In response to a comment on a previous blog post, RMAN Backup of a Standby Database, I show incremental backups below as well.


RMAN> backup as compressed backupset incremental level 0 database;
backup as compressed backupset incremental level 0 database;
Starting backup at 07-NOV-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=219 device type=DISK
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf
input datafile file number=00003 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_sysaux_n16dxhob_.dbf
input datafile file number=00004 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs1_n16dxhom_.dbf
input datafile file number=00009 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs2_n16dxhon_.dbf
input datafile file number=00007 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_users_n16dxhoo_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_system_n16dz6sq_.dbf
input datafile file number=00011 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_sysaux_n16dz6sr_.dbf
input datafile file number=00012 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undotbs1_n16dz6t1_.dbf
input datafile file number=00013 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undo_2_n16dz6t2_.dbf
input datafile file number=00014 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_users_n16dz6tc_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/1476C653214704CFE0635A38A8C08494/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2q8kr_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00060 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSTEM_FNO-60.dbf
input datafile file number=00061 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSAUX_FNO-61.dbf
input datafile file number=00062 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDOTBS1_FNO-62.dbf
input datafile file number=00063 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDO_2_FNO-63.dbf
input datafile file number=00064 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-USERS_FNO-64.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/38AE431466FE1FDBE0635A38A8C085D8/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2rzo7_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_n16f20wr_.dbf
input datafile file number=00006 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_n16f20x3_.dbf
input datafile file number=00008 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_n16f20xg_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/14769E258FBB5FD8E0635A38A8C09D43/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2tprh_.bkp tag=TAG20251107T055853 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 07-NOV-25

Starting Control File and SPFILE Autobackup at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/autobackup/2025_11_07/o1_mf_s_1216533563_njv2wg2k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-NOV-25


RMAN>
RMAN> exit
exit

RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/host_4144547424_110/trace/ora_2133_140607082957312.trc

Recovery Manager complete.
[oracle@stdby trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 7 06:04:10 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> set pages600 linesize 132
SQL> select database_role, open_mode from v$database
  2  /

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY MOUNTED

SQL>
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0
[oracle@stdby trace]$ 

[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:04:55 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> list backup of datafile 1;
list backup of datafile 1;
using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07-NOV-25
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    0  Incr 13423959   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>
RMAN> select current_scn from v$database;
select current_scn from v$database;
CURRENT_SCN
-----------
   13442704


RMAN>

[oracle@stdby trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 7 06:07:11 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.25.0.0.0

SQL> select current_scn, database_role, open_mode from v$database;

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13442704 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13443585 PHYSICAL STANDBY MOUNTED

SQL>


SQL> SQL> l
  1* select current_scn, database_role, open_mode from v$database
SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13443585 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13449819 PHYSICAL STANDBY MOUNTED

SQL> /

CURRENT_SCN DATABASE_ROLE    OPEN_MODE
----------- ---------------- --------------------
   13449819 PHYSICAL STANDBY MOUNTED

SQL>


[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:10:58 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> backup as compressed backupset incremental level 1 database;
backup as compressed backupset incremental level 1 database;
Starting backup at 07-NOV-25
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=219 device type=DISK
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf
input datafile file number=00003 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_sysaux_n16dxhob_.dbf
input datafile file number=00004 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs1_n16dxhom_.dbf
input datafile file number=00009 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_undotbs2_n16dxhon_.dbf
input datafile file number=00007 name=/Standby_DB/oradata/STDBY/datafile/o1_mf_users_n16dxhoo_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_system_n16dz6sq_.dbf
input datafile file number=00011 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_sysaux_n16dz6sr_.dbf
input datafile file number=00012 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undotbs1_n16dz6t1_.dbf
input datafile file number=00013 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_undo_2_n16dz6t2_.dbf
input datafile file number=00014 name=/Standby_DB/oradata/STDBY/1476C653214704CFE0635A38A8C08494/datafile/o1_mf_users_n16dz6tc_.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/1476C653214704CFE0635A38A8C08494/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3do5w_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00060 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSTEM_FNO-60.dbf
input datafile file number=00061 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-SYSAUX_FNO-61.dbf
input datafile file number=00062 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDOTBS1_FNO-62.dbf
input datafile file number=00063 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-UNDO_2_FNO-63.dbf
input datafile file number=00064 name=/Standby_DB/oradata/STDBY/PRDAPPTST/data_D-RACDB_TS-USERS_FNO-64.dbf
channel ORA_DISK_1: starting piece 1 at 07-NOV-25
channel ORA_DISK_1: finished piece 1 at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/38AE431466FE1FDBE0635A38A8C085D8/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dw8p_.bkp tag=TAG20251107T061125 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_system_n16f20wr_.dbf
skipping datafile 00005 because it has not changed
input datafile file number=00006 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_sysaux_n16f20x3_.dbf
skipping datafile 00006 because it has not changed
input datafile file number=00008 name=/Standby_DB/oradata/STDBY/14769E258FBB5FD8E0635A38A8C09D43/datafile/o1_mf_undotbs1_n16f20xg_.dbf
skipping datafile 00008 because it has not changed
channel ORA_DISK_1: backup cancelled because all files were skipped
Finished backup at 07-NOV-25

Starting Control File and SPFILE Autobackup at 07-NOV-25
piece handle=/Standby_DB/FRA/STDBY/autobackup/2025_11_07/o1_mf_s_1216534110_njv3dzjt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-NOV-25


RMAN>

RMAN> list backup of datafile 1;
list backup of datafile 1;

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07-NOV-25
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    0  Incr 13423959   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
325     Incr 1  2.93M      DISK        00:00:05     07-NOV-25
        BP Key: 325   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T061125
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp
  List of Datafiles in backup set 325
  File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------- ----------- ------ ----
  1    1  Incr 13449820   07-NOV-25              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>

RMAN> exit
exit


Recovery Manager complete.
[oracle@stdby trace]$ export NLS_DATE_FORMAT=DD_MON_RR_HH24_MI
[oracle@stdby trace]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Nov 7 06:15:24 2025
Version 19.25.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=1162136313, not open)

RMAN> list backup of datafile 1;
list backup of datafile 1;
using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
320     Incr 0  1.08G      DISK        00:01:10     07_NOV_25_06_00
        BP Key: 320   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T055853
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd0_TAG20251107T055853_njv2nxdx_.bkp
  List of Datafiles in backup set 320
  File LV Type Ckp SCN    Ckp Time        Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------------- ----------- ------ ----
  1    0  Incr 13423959   07_NOV_25_05_56              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
325     Incr 1  2.93M      DISK        00:00:05     07_NOV_25_06_11
        BP Key: 325   Status: AVAILABLE  Compressed: YES  Tag: TAG20251107T061125
        Piece Name: /Standby_DB/FRA/STDBY/backupset/2025_11_07/o1_mf_nnnd1_TAG20251107T061125_njv3dg1n_.bkp
  List of Datafiles in backup set 325
  File LV Type Ckp SCN    Ckp Time        Abs Fuz SCN Sparse Name
  ---- -- ---- ---------- --------------- ----------- ------ ----
  1    1  Incr 13449820   07_NOV_25_06_08              NO    /Standby_DB/oradata/STDBY/datafile/o1_mf_system_n16dxho8_.dbf


RMAN>



After the Level-0 Backup I waited for some time to confirm that the SCN at the Standby had advanced.
Then, I ran a Level-1 Backup.

The final listing shows two backups of datafile 1, the first being a Level-0 backup at SCN 13423959 at 05:56 on 07-Nov and the second being a Level-1 backup at SCN 13449820 at 06:08 on 07-Nov.

Of course, I *must* also back up the ArchiveLogs at the Standby.
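For example, a minimal sketch of such an archivelog backup at the standby, using standard RMAN syntax (adjust to your own retention and deletion policy):

rman target / <<'EOF'
backup as compressed backupset archivelog all not backed up 1 times;
EOF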

Note: Oracle's recommendation is to use an RMAN Catalog when running Database Backups.
Categories: DBA Blogs
