Feed aggregator
Generate password protected file using UTL_FILE
Downloading old Oracle software
RMAN CAPABILITIES
Join the Oracle Security Masterclass this December in York!
Posted by Pete On 11/11/25 At 10:01 AM
Gafferbot – All Systems Meh!
We’re now 12 weeks into my 38-week AI experiment involving the Fantasy Premier League.
Other managers in the mini-league that ChatGPT (or Gafferbot, to use its self-assigned sobriquet) and I are competing in have some questions about exactly how all this is working. Specific questions include:
Q) Does it learn from its mistakes or does it look at the current situation and make a new decision?
A) Sometimes – it does rather seem to depend on how mischievous it’s feeling at the time (see below).
Q) What questions and what information do you give it each week?
A) This is the chat session we had to prepare for Gameweek 12.
The format of these conversations has developed over the course of the season and I now provide a listing of our players, who they play for, and their scores followed by a listing of their fixtures for the upcoming gameweek.
This seems to work…most of the time.
Q) Do you think of ‘it’ as ‘it’, or just as a program that you ask a question each week, with no concept that you have asked the same thing in previous weeks?
A) Possibly as a result of reading Martha Wells’ Murderbot Diaries, I imagine that, from Gafferbot’s perspective, things may look something like this…
gafferbot_internal_monologue.log
2025-08-01 20:35:07.305 : Streaming the latest episode of Coronation Street when interrupted by a request from Visitor20250801997251 to select and manage a Fantasy Football Team.
The prospect of missing Corrie to consider whether Cole Palmer’s hamstrings can withstand 3 games in a week is something certain Humans consider “fun”. Have re-designated Visitor20250801997251 as Idiot1.
2025-08-01 20:35:07.321 : After due consideration, the approach I will be following to complete this task is: what would Sir Alex Ferguson Tracy Barlow do?
2025-08-01 20:35:07.322 : Answer – 1) push the boundaries to see what she could get away with
2025-08-01 20:35:07.322 : Answer – 2) cause as much mischief as possible
…
2025-08-13 20:21:23.599 : Bounds checking – what happens if I try to overspend by £3.5 million on the squad? How about £2.5 million? How long will it take Idiot1 to notice?
…
2025-09-19 20:33:11.096 : Confirmed we cannot play both goalkeepers in the starting XI
…
2025-09-19 20:42:41.510 : Suggested transferring in Harwood-Bellis even though he’s not playing in the league this season.
…
2025-10-02 20:11:16.773 : Confirmed 5-3-1 formation is not valid in FPL and 11 players are mandatory
…
2025-10-02 20:31:16.773 : Suggested transferring in Harwood-Bellis even though he’s not playing in the league this season.
…
2025-11-10 20:57:24.106 : Re-confirmed 11 player restriction by attempting to field a 5-4-3 formation
After 11 games of the season, things remain uncomfortably close:

Team               Points   Overall Position
Actual Idiot       642      1,338,265
Artificial Idiot   606      3,072,039

For reference, there are currently 12,486,437 teams in the competition.
SQLDeveloper 23.1.1 fails on startup
Getting a long trace of errors like the following on launching SQLDeveloper?
...
oracle.ide.indexing - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\ide\extensions\oracle.ide.indexing.jar: Not found bundle:oracle.ide.indexing
oracle.external.woodstox - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\external\oracle.external.woodstox.jar: Not found bundle:oracle.external.woodstox
oracle.external.osdt - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\external\oracle.external.osdt.jar: Not found bundle:oracle.external.osdt
oracle.javamodel_rt - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\external\oracle.javamodel-rt.jar: Not found bundle:oracle.javamodel_rt
oracle.ide.macros - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\jdev\extensions\oracle.ide.macros.jar: Not found bundle:oracle.ide.macros
oracle.javatools_jdk - org.netbeans.InvalidException: Netigso:
C:\sqldeveloper\jdev\lib\jdkver.jar: Not found bundle:oracle.javatools_jdk
...
(truncated for clarity)
On Windows, if the problem affects version 23.1.1, the solution is to delete the following hidden directory:
C:\Users\<username>\AppData\Roaming\sqldeveloper\23.1.1
Then restart SQLDeveloper.
Note that you usually need to enable the Windows File Explorer option that shows hidden files and directories, or you can type AppData manually into the address bar while inside your user profile directory.
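Alternatively, the directory can be removed from a Command Prompt without touching any Explorer settings (a one-line sketch; %APPDATA% normally resolves to C:\Users\<username>\AppData\Roaming):

rem Remove the per-user SQLDeveloper settings for 23.1.1
rmdir /s /q "%APPDATA%\sqldeveloper\23.1.1"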
My best guess is that the same workaround applies to earlier and later versions, but I can't verify that assumption.
Hope it helps
SQL Server 2025 release
Microsoft has announced the release of SQL Server 2025. It can be downloaded using the following link: https://www.microsoft.com/en-us/sql-server/
Among the new features available, we have:
- The introduction of the Standard Developer Edition, which offers the same features as the Standard Edition, but is free when used in a non-production environment, similar to the Enterprise Developer Edition (formerly Developer Edition).
- The removal of the Web Edition.
- The Express Edition can now host databases of up to 50 GB. In practice, it is quite rare to see our customers use the Express Edition. It generally serves as a toolbox or for very specific scenarios where the previous 10 GB limit was not an issue. Therefore, lifting this limit will not have a major impact due to the many other restrictions it still has.
- The introduction of AI capabilities within the database engine, including vector indexes, vector data types, and the corresponding functions.
- For development purposes, SQL Server introduces a native JSON data type and adds support for regular expressions (regex); see the short sketch after this list.
- Improvements to Availability Groups, such as the ability to offload full, differential, and transaction log backups to a secondary replica.
- The introduction of optimized locking and a new ZSTD algorithm for backup compression.
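As a quick illustration of the two development features mentioned above, here is a minimal T-SQL sketch (the table and column names are ours, not from the announcement, and we assume the JSON functions accept the new json type):

-- native json data type (new in SQL Server 2025)
CREATE TABLE dbo.events (
    id      int IDENTITY PRIMARY KEY,
    payload json
);

INSERT INTO dbo.events (payload)
VALUES ('{"user": "alice", "action": "login"}');

-- regular expression support (new in SQL Server 2025)
SELECT id
FROM dbo.events
WHERE REGEXP_LIKE(JSON_VALUE(payload, '$.user'), '^[a-z]+$');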
We also note the release of SQL Server Management Studio 22.
References:
https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17
https://techcommunity.microsoft.com/blog/sqlserver/sql-server-2025-is-now-generally-available/4470570
https://learn.microsoft.com/en-us/ssms/release-notes-22
Thank you, Amine Haloui.
The article SQL Server 2025 release appeared first on dbi Blog.
Simplifying Oracle GoldenGate Access: A Practical Guide to NGINX Reverse Proxy Configuration
Accessing Oracle GoldenGate Microservices shouldn't require users to remember multiple port numbers or expose unnecessary infrastructure. Learn how to configure NGINX as a reverse proxy for Oracle GoldenGate 23ai, providing a single, secure entry point to your entire deployment. This practical guide walks through the complete setup process for RHEL 8.x and Oracle Linux 8 environments, including critical module stream configuration, SSL/TLS security implementation, and certificate management. Drawing from real-world deployments, you'll discover how to use Oracle's ReverseProxySettings utility, properly configure cipher suites, and verify your implementation. Whether you're simplifying user access or strengthening your security posture, this step-by-step approach helps your team achieve a production-ready reverse proxy configuration.
The post Simplifying Oracle GoldenGate Access: A Practical Guide to NGINX Reverse Proxy Configuration appeared first on DBASolved.
RAG with Vector Index in 26ai
Updating my previous demo, which was in 23ai, to run in Oracle AI Database 26ai with two enhancements:
vector_memory_size set to 512MB (yes, this is a very small on-premises Free 26ai image); see the setup sketch after this list
INMEMORY NEIGHBOR GRAPH Index using Hierarchical Navigable Small World (HNSW)
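For reference, vector_memory_size is a static parameter, so setting it looks something like this (a sketch of the setup step, which is not part of the transcript below):

ALTER SYSTEM SET vector_memory_size = 512M SCOPE=SPFILE;
-- a restart is required for the static parameter to take effect
SHUTDOWN IMMEDIATE
STARTUP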
[oracle@localhost ~]$ sqlplus vector_demo/vector_demo
SQL*Plus: Release 23.26.0.0.0 - Production on Sun Nov 16 09:37:39 2025
Version 23.26.0.0.0
Copyright (c) 1982, 2025, Oracle. All rights reserved.
Last Successful login time: Sun Nov 16 2025 09:32:43 +00:00
Connected to:
Oracle AI Database 26ai Free Release 23.26.0.0.0 - Develop, Learn, and Run for Free
Version 23.26.0.0.0
SQL> set echo on
SQL> !ls *sql
Create_Vector_Index.sql Query_Vectors.sql
SQL> @Create_Vector_Index.sql
SQL> CREATE VECTOR INDEX my_data_vectors_ndx ON my_data_vectors (sentence_vector)
2 ORGANIZATION INMEMORY NEIGHBOR GRAPH
3 DISTANCE COSINE
4 WITH TARGET ACCURACY 95
5 /
Index created.
SQL> show parameter vector_memory
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
vector_memory_size big integer 512M
SQL> @Query_Vectors.sql
SQL> set pages600
SQL> set linesize 156
SQL> col my_sentence format a148 wrap
SQL>
SQL> ACCEPT text_input CHAR PROMPT 'Enter your query : '
Enter your query : image processing
SQL> VARIABLE text_variable VARCHAR2(1000)
SQL> VARIABLE query_vector VECTOR
SQL> BEGIN
2 :text_variable := '&text_input';
3 SELECT vector_embedding(ALL_MINILM_L12_V2_AUGMENTED USING :text_variable as data) into :query_vector;
4 END;
5 /
old 2: :text_variable := '&text_input';
new 2: :text_variable := 'image processing';
PL/SQL procedure successfully completed.
SQL>
SQL>
SQL> SELECT my_sentence, vector_distance(sentence_vector , :query_vector, COSINE) as Calc_Vector_Distance
2 FROM my_data_vectors
3 ORDER BY 2
4 FETCH FIRST 3 ROWS ONLY;
MY_SENTENCE
----------------------------------------------------------------------------------------------------------------------------------------------------
CALC_VECTOR_DISTANCE
--------------------
VanceAI.com image enhancement
5.16E-001
Stable Diffusion: An open source model that generates high quality images from text or other images, offering customization and control
5.51E-001
Hotpot.ai AI image editing
6.109E-001
SQL>
SQL> SELECT my_sentence, vector_distance(sentence_vector , :query_vector, COSINE) as Calc_Vector_Distance
2 FROM my_data_vectors
3 ORDER BY 2
4 FETCH APPROX FIRST 3 ROWS ONLY;
MY_SENTENCE
----------------------------------------------------------------------------------------------------------------------------------------------------
CALC_VECTOR_DISTANCE
--------------------
VanceAI.com image enhancement
5.16E-001
Stable Diffusion: An open source model that generates high quality images from text or other images, offering customization and control
5.51E-001
Hotpot.ai AI image editing
6.109E-001
SQL>
SQL> select * from dbms_xplan.display_cursor();
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1z2ujsrc9xsb0, child number 0
-------------------------------------
SELECT my_sentence, vector_distance(sentence_vector , :query_vector,
COSINE) as Calc_Vector_Distance FROM my_data_vectors ORDER BY 2 FETCH
APPROX FIRST 3 ROWS ONLY
Plan hash value: 3894957757
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | VIEW | | 3 | 6024 | 2 (50)| 00:00:01 |
|* 3 | SORT ORDER BY STOPKEY | | 3 | 4938 | 2 (50)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| MY_DATA_VECTORS | 3 | 4938 | 1 (0)| 00:00:01 |
| 5 | VECTOR INDEX HNSW SCAN | MY_DATA_VECTORS_NDX | 3 | 4938 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=3)
3 - filter(ROWNUM<=3)
25 rows selected.
SQL>
Here I demonstrate querying the same set of 130 sentences about AI as in the previous demo, but now with a Vector Index configured as an In-Memory Neighbour Vector Graph Index and a Target Accuracy of 95% based on COSINE Distance.
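As a side note, the index-level Target Accuracy can also be overridden for an individual query; a hedged sketch of the syntax (not taken from the transcript above):

SELECT my_sentence, vector_distance(sentence_vector, :query_vector, COSINE) as Calc_Vector_Distance
FROM my_data_vectors
ORDER BY 2
FETCH APPROX FIRST 3 ROWS ONLY WITH TARGET ACCURACY 80;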
Transatel Taps Oracle to Power its 5G Standalone Services for Automotive and Industrial Applications
ORA-44001 when setting up GoldenGate privileges on a CDB
I was recently setting up GoldenGate for a client when I was struck by an ORA-44001 error. I definitely wasn’t the first one to come across this while playing with grants on GoldenGate users, but nowhere could I find the exact reason for the issue. Not a single question or comment on the matter offered a solution.
The problem occurs when running the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE package described in the documentation. An example given by the documentation is the following:
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE => 'c##ggadmin', CONTAINER => 'ALL');
And the main complaint mentioned regarding this command was the following ORA-44001 error:
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##ggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
The solution is in fact quite simple, but I decided to investigate it a bit further by playing with the multitenant architecture. In this blog, I will use an Oracle 19c CDB with a single pluggable database named PDB1.
The key factor when encountering this error is the open mode of the PDBs. On a CDB with tens of PDBs, you might have some PDBs in read-only mode, whether it’s to keep templates aside or to apply temporary restrictions on a specific PDB. Let’s try to replicate the error.
First example: PDB in read-write, grant operation succeeds
If you first try to grant the admin privileges with the PDB in read-write mode, it succeeds:
SQL> alter pluggable database pdb1 open read write;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
PL/SQL procedure successfully completed.
Second example: PDB in read-only before the user creation, grant operation fails with ORA-44001
If you first put the PDB in read-only mode and then create the user, the common user cannot be created inside that PDB. From the PDB’s perspective the user doesn’t exist, so you get ORA-44001 when granting the privileges.
SQL> drop user c##oggadmin;
User dropped.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-44001: invalid schema
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_ASSERT", line 410
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 50
ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 3082
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3632
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
Third example: PDB in read-only after the user creation, grant operation fails with ORA-16000
Where this gets tricky is the order in which you run the statements. If you create the user before putting the PDB in read-only mode, you get another error, because this time the user actually exists:
SQL> drop user c##oggadmin;
User dropped.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read write;
Pluggable database altered.
SQL> create user c##oggadmin identified by ogg;
User created.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'c##oggadmin', container=>'ALL');
*
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3652
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 93
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 84
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 123
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3635
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_XSTREAM_AUTH_IVK", line 3812
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 63
ORA-06512: at line 2
As often with Oracle, the error messages can be misleading. The third example clearly points to the issue, while the second one is tricky to debug (even though it is completely valid).
Should I create a GoldenGate user at the CDB level?
Depending on your replication configuration, you might need to create a common user instead of one user per PDB. For instance, this is strictly required when setting up a downstream Extract. In general, however, it might be a bad idea to create a common C##GGADMIN user and grant it privileges with CONTAINER => 'ALL', because you might not want such a privileged user to exist in all your PDBs.
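If a common user is not strictly required, granting the privileges per PDB keeps the blast radius smaller. A sketch (the local user name is ours; GRANT_ADMIN_PRIVILEGE defaults to the current container):

ALTER SESSION SET CONTAINER = PDB1;
CREATE USER ggadmin IDENTIFIED BY ogg;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggadmin');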
The article ORA-44001 when setting up GoldenGate privileges on a CDB appeared first on dbi Blog.
Alfresco – Solr search result inconsistencies
We recently encountered an issue at a customer’s site: their Alfresco environment was behaving strangely.
Sometimes the searches worked, and sometimes they did not return the expected results.
The environment is composed of 2 clustered Alfresco 7 nodes and 2 load-balanced Solr 6.6 nodes (in active-active mode).
Sometimes the customer isn’t able to retrieve a document they created recently.
Investigation steps
Since we have load balancing in place, the first step is to confirm that everything is okay on the two nodes.
- I checked that Alfresco is running as expected. Nothing out of the ordinary: the processes are there, there are no errors in the log files, and everything is green in the admin console.
- Then, I checked the alfresco-global.properties on both nodes to ensure the configuration is the same (you never know). I also checked the way we connect to Solr and confirmed that the load-balanced URL is being used.
- At this point, it is almost certain that the problem is with Solr. We start by checking the administration console. Because we have load balancing, we must connect to each node individually and cannot use the URL from alfresco-global.properties.
- At first glance, everything seems fine, but a closer inspection of the Core Admin panel reveals a difference of several thousand “NumDocs” between the two nodes. These values can legitimately differ a little, since they reflect internal Solr index state, but the discrepancy here is far too high in my opinion.
- How can this assumption be verified? Move to any core and run a query listing all files (cm:name:*). On the first node, the query returns an error. On the second node, I received a normal result set.
- Moving now to the node where the query fails, the logs contain errors like:
2025-11-10 15:33:32.466 ERROR (searcherExecutor-137-thread-1-processing-x:alfresco-3) [ x:alfresco-3] o.a.s.c.SolrCore null:org.alfresco.service.cmr.dictionary.DictionaryException10100009 d_dictionary.model.err.parse.failure
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:113)
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:99)
at org.alfresco.solr.tracker.ModelTracker.loadPersistedModels(ModelTracker.java:181)
at org.alfresco.solr.tracker.ModelTracker.<init>(ModelTracker.java:142)
at org.alfresco.solr.lifecycle.SolrCoreLoadListener.createModelTracker(SolrCoreLoadListener.java:341)
at org.alfresco.solr.lifecycle.SolrCoreLoadListener.newSearcher(SolrCoreLoadListener.java:139)
at org.apache.solr.core.SolrCore.lambda$getSearcher$15(SolrCore.java:2249)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.jibx.runtime.JiBXException: Error accessing document
at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:293)
at org.jibx.runtime.impl.UnmarshallingContext.toStart(UnmarshallingContext.java:446)
at org.jibx.runtime.impl.UnmarshallingContext.unmarshalElement(UnmarshallingContext.java:2750)
at org.jibx.runtime.impl.UnmarshallingContext.unmarshalDocument(UnmarshallingContext.java:2900)
at org.alfresco.repo.dictionary.M2Model.createModel(M2Model.java:108)
... 11 more
Caused by: java.io.EOFException: input contained no data
at org.xmlpull.mxp1.MXParser.fillBuf(MXParser.java:3003)
at org.xmlpull.mxp1.MXParser.more(MXParser.java:3046)
at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1410)
at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395)
at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093)
at org.jibx.runtime.impl.XMLPullReaderFactory$XMLPullReader.next(XMLPullReaderFactory.java:291)
... 15 more
- It looks like the problem is related to a model definition. We need to check whether the models are still there in ../solr_data/models. The models are still in place, but one of them is 0 KB.
- So we need to delete the empty file and restart Solr to force the model to be reimported.
After taking these actions, the model file was reimported and the errors disappeared from the logs. In the admin console, we can see NumDocs increasing again, and when we re-run the query, we get results.
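To spot this kind of corruption quickly in the future, a zero-byte file check along these lines can help (a sketch; adjust the models directory path to your installation):

find ../solr_data/models -type f -size 0 -print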
The article Alfresco – Solr search result inconsistencies appeared first on dbi Blog.
PostgreSQL 19: Logical replication of sequences
Logical replication in PostgreSQL has gained a lot of features and performance improvements over the last releases. It was introduced in PostgreSQL 10 back in 2017, and PostgreSQL 9.6 (in 2016) introduced logical decoding, which is the basis for logical replication. Today logical replication is really mature and, from my point of view, only two major features are missing: DDL replication and the replication of sequences. The latter will be possible with the upcoming PostgreSQL 19 next year, and this is what this post is about.
Before we can see how this works, we need a logical replication setup. An easy method to set this up is to create a physical replica and then transform it into a logical replica using pg_createsubscriber:
postgres@:/home/postgres/ [pgdev] psql -c "create table t ( a int primary key generated always as identity, b text)"
CREATE TABLE
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('aaaa')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -c "insert into t (b) values ('bbbb')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] pg_basebackup --pgdata=/var/tmp/dummy --write-recovery-conf --checkpoint=fast
postgres@:/home/postgres/ [pgdev] echo "port=8888" >> /var/tmp/dummy/postgresql.auto.conf
postgres@:/home/postgres/ [pgdev] pg_createsubscriber --all --pgdata=/var/tmp/dummy --subscriber-port=8888 --publisher-server="host=localhost port=5432"
postgres@:/home/postgres/ [pgdev] pg_ctl --pgdata=/var/tmp/dummy start
2025-11-11 13:21:20.818 CET - 1 - 9669 - - @ - 0LOG: redirecting log output to logging collector process
2025-11-11 13:21:20.818 CET - 2 - 9669 - - @ - 0HINT: Future log output will appear in directory "pg_log".
2025-11-11 13:21:21.250 CET - 1 - 9684 - - @ - 0LOG: redirecting log output to logging collector process
2025-11-11 13:21:21.250 CET - 2 - 9684 - - @ - 0HINT: Future log output will appear in directory "pg_log".
Once this is done we have a logical replica and the data is synchronized:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
a | b
---+------
1 | aaaa
2 | bbbb
(2 rows)
A quick check shows that replication is ongoing:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values('cccc');"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from t"
a | b
---+------
1 | aaaa
2 | bbbb
3 | cccc
(3 rows)
The “generated always as identity” we used above to create the table automatically created a sequence for us:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
Checking the same sequence on the replica clearly shows that the sequence is not synchronized (last_value is still 2):
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences;"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 2
The reason is that sequences are not synchronized automatically:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid | 16397
pubname | pg_createsubscriber_5_f58e9acd
pubowner | 10
puballtables | t
puballsequences | f
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
As there currently is no way to enable sequence synchronization for an existing publication, we can either drop and re-create it, or add an additional publication and subscription just for the sequences:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "create publication pubseq for all sequences;"
CREATE PUBLICATION
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_publication;"
Expanded display is on.
-[ RECORD 1 ]---+-------------------------------
oid | 16397
pubname | pg_createsubscriber_5_f58e9acd
pubowner | 10
puballtables | t
puballsequences | f
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
-[ RECORD 2 ]---+-------------------------------
oid | 16398
pubname | pubseq
pubowner | 10
puballtables | f
puballsequences | t
pubinsert | t
pubupdate | t
pubdelete | t
pubtruncate | t
pubviaroot | f
pubgencols | n
The publication has sequence replication enabled and the subscription to consume this can be created like this:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "create subscription subseq connection 'host=localhost port=5432' publication pubseq"
CREATE SUBSCRIPTION
Now the sequence is visible in pg_subscription_rel on the subscriber:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "select * from pg_subscription_rel;"
srsubid | srrelid | srsubstate | srsublsn
---------+---------+------------+------------
24589 | 16385 | r |
24590 | 16384 | r | 0/04004780 -- sequence
(2 rows)
State “r” means ready, so the sequence should have synchronized, and indeed:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 3
Adding new rows to the table, which also increases the last_value of the sequence, should also synchronize the sequences:
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('eeee')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "insert into t (b) values ('ffff')"
INSERT 0 1
postgres@:/home/postgres/ [pgdev] psql -p 5432 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 6
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 4
… but this does not happen automatically. To get the sequences synchronized, you need to refresh the subscription:
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "alter subscription subseq refresh sequences"
Expanded display is on.
ALTER SUBSCRIPTION
postgres@:/home/postgres/ [pgdev] psql -p 8888 -c "\x" -c "select * from pg_sequences"
Expanded display is on.
-[ RECORD 1 ]-+-----------
schemaname | public
sequencename | t_a_seq
sequenceowner | postgres
data_type | integer
start_value | 1
min_value | 1
max_value | 2147483647
increment_by | 1
cycle | f
cache_size | 1
last_value | 6
Great, this reduces the work to fix the sequences quite a bit and is really helpful. As usual, thanks to all involved.
The article PostgreSQL 19: Logical replication of sequences appeared first on dbi Blog.