Feed aggregator
APEX Error 572 autopsy
A couple of days ago I exported an application from APEX 24.1.5 and imported it into APEX 24.2.9 on my OCI free tier instance.
Whilst the application itself was working fine, I got a problem when I tried to edit page 4 in the App Builder: the spinner kept spinning for an unusually long time and eventually stopped, showing an empty page, and in the error notification area APEX displayed "Error: 572".
Now, according to other users this Error 572 was a symptom of an overloaded database, which seemed strange to me because the page is not particularly complex and I could open other pages in the editor without problems.
After various unsuccessful attempts to gather some meaningful information from the APEX Builder debug logs (see this interesting blog post by Steve Muench on how to do that), I decided to have a closer look at the differences between the previous version of this page and the current, problematic one.
Here are some relevant facts:
- The current version of the page can be edited without problems in APEX 24.1.5.
- If I export this single page from 24.1.5 and import it into 24.2.9, the problem persists, so whatever the problem is, it's not caused by the import process of the whole application.
- The "new" version of the page works correctly when I run the application, so the APEX metadata must be good.
- The only difference between the old version and the new version is in the source of the query I am using to populate an Interactive Report.
- If I import the old version of the page, then the App Builder editor resumes working properly.
The difference between the old and the new query is the following CASE expression; the old version did not contain the two lines in yellow:
case
  when json_value(e.value,'$.email[0]') is null
  then apex_lang.message('NO_RECIPIENT')
  when json_value(e.value,'$.email[1]') is null
  then apex_lang.message('SINGLE_RECIPIENT')
  else apex_lang.message('OTHER_RECIPIENTS')
end as RECIPIENTS

If I comment out the two lines in yellow in the page source, import the page and try to edit it, everything is back to normal.
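The array-index behaviour this CASE relies on can be checked in isolation; here is a quick standalone sketch, with a made-up JSON document:

```sql
select json_value('{"email":["a@example.com"]}', '$.email[0]') as first_recipient,
       json_value('{"email":["a@example.com"]}', '$.email[1]') as second_recipient
from   dual;
-- first_recipient  = a@example.com
-- second_recipient = (null), because the array holds a single element
```

With the default NULL ON ERROR / NULL ON EMPTY handling, JSON_VALUE simply returns NULL when the array index does not exist, which is exactly what the CASE expression depends on.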
The tricky part with this type of problem is that if you receive the error upon entering the editor, you cannot delete the offending page, because Error 572 also breaks the delete page functionality; your only option is to replace the page with an import of a working version.
If you receive this error while you are still inside the editor of the page, you may have a chance of reverting the page to its previous state.
In my case it was easy because I had an older working version of the page, but if you don't, you must work around the problem in some other fashion; in the worst case, I presume, you must edit the import file and get rid of some components.
In conclusion, my wild guess is that something goes wrong at parse time: I strongly suspect that the absence of meaningful values for the bind variables used in the conditions of my query breaks the logic of the JSON_VALUE function, which relies on the content of the JSON array "email" (empty at parse time, of course), resulting in this weird error.
Moreover, the IR query handling must have changed in some way between the two APEX releases.
Now, it could be that Error 572 is also returned when there is a "database overload", but in this specific case I can't see any correlation.
Or maybe there is something else going on that I can't fathom right now.
Tip of the day: hide (or show) multiple buttons or other page components with a single dynamic action
Need to hide or show multiple buttons or page items based on a common client-side condition?
Instead of creating a separate dynamic action for each button, define a common CSS class (e.g. mybuttons) on each of the buttons and then use a single DA with a jQuery selector like ".mybuttons" for the Hide action and one for the Show action, thus reducing the page complexity.
And you can also extend this to other components like page items or regions, if they follow the same logic.
Natural Sort in SQL
Sorting strings in SQL is fairly straightforward:
select * from mytable order by name;
This uses the default collation, which is at its core a character-by-character comparison. That almost always matches what humans would consider the "right" order – unless decimal numbers are involved, in which case strings like 'abc-2000' sort before 'abc-30'.
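The character-by-character behaviour is easy to see in isolation; a small sketch (using apex_string.split merely to generate rows – any table of strings would do):

```sql
-- default collation: at the first differing position, '2' < '3' < '4',
-- so the numerically largest value sorts first
select column_value
from   apex_string.split('abc-30:abc-2000:abc-4', ':')
order  by column_value;
-- abc-2000
-- abc-30
-- abc-4
```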
Natural Sort, on the other hand, compares strings using a more complex algorithm that finds embedded decimal numbers and sorts them according to their numeric value.
I’ve taken the algorithm from https://github.com/sourcefrog/natsort and implemented it in PL/SQL:
create or replace type natural_sort as object (
  v varchar2(32767),
  order member function compare (p_other natural_sort) return integer
);
/
create or replace type body natural_sort as

  order member function compare (p_other natural_sort) return integer is

    function is_whitespace (a in char) return boolean is
    begin
      return a is not null and ascii(a) <= 32;
    end is_whitespace;

    function is_digit (a in char) return boolean is
    begin
      return a is not null and ascii(a) between 48 and 57;
    end is_digit;

    function compare_right (a in varchar2, b in varchar2) return number is
      bias number := 0;
      ia   number := 0;
      ib   number := 0;
      ca   char(1 char);
      cb   char(1 char);
    begin
      -- The longest run of digits wins. That aside, the greatest
      -- value wins, but we can't know that it will until we've scanned
      -- both numbers to know that they have the same magnitude, so we
      -- remember it in bias.
      loop
        ca := substr(a, ia, 1);
        cb := substr(b, ib, 1);
        pragma inline (is_digit, 'yes');
        if not is_digit(ca) and not is_digit(cb) then
          return bias;
        pragma inline (is_digit, 'yes');
        elsif not is_digit(ca) then
          return -1;
        pragma inline (is_digit, 'yes');
        elsif not is_digit(cb) then
          return 1;
        elsif ca < cb then
          if bias = 0 then
            bias := -1;
          end if;
        elsif ca > cb then
          if bias = 0 then
            bias := 1;
          end if;
        elsif ca is null and cb is null then
          return bias;
        end if;
        ia := ia + 1;
        ib := ib + 1;
      end loop;
    end compare_right;

    function natcompare (a in varchar2, b in varchar2) return number is
      ia  number := 0;
      ib  number := 0;
      nza number;
      nzb number;
      ca  char(1 char);
      cb  char(1 char);
      ret number;
    begin
      loop
        -- only count the number of zeroes leading the last number compared
        nza := 0;
        nzb := 0;
        ca := substr(a, ia, 1);
        cb := substr(b, ib, 1);
        -- skip over leading spaces or zeros
        pragma inline (is_whitespace, 'yes');
        while is_whitespace( ca ) or ca = '0' loop
          if ca = '0' then
            nza := nza + 1;
          else
            -- only count consecutive zeroes
            nza := 0;
          end if;
          ia := ia + 1;
          ca := substr(a, ia, 1);
        end loop;
        pragma inline (is_whitespace, 'yes');
        while is_whitespace( cb ) or cb = '0' loop
          if cb = '0' then
            nzb := nzb + 1;
          else
            -- only count consecutive zeroes
            nzb := 0;
          end if;
          ib := ib + 1;
          cb := substr(b, ib, 1);
        end loop;
        if ca is null and cb is null then
          -- The strings compare the same. Break the tie with the
          -- default comparison.
          if a < b then
            return -1;
          elsif a > b then
            return 1;
          else
            return 0;
          end if;
        end if;
        -- process run of digits
        pragma inline (is_digit, 'yes');
        if is_digit(ca) and is_digit(cb) then
          ret := compare_right(substr(a, ia), substr(b, ib));
          if ret != 0 then
            return ret;
          end if;
        end if;
        if ca < cb then
          return -1;
        elsif ca > cb then
          return 1;
        end if;
        ia := ia + 1;
        ib := ib + 1;
      end loop;
    end natcompare;

  begin
    if v is null and p_other.v is null then
      return 0;
    elsif v is null and p_other.v is not null then
      return 1;
    elsif v is not null and p_other.v is null then
      return -1;
    elsif v = p_other.v then
      return 0;
    end if;
    return natcompare(v, p_other.v);
  end compare;

end;
/
This creates a type in the schema called natural_sort which can be used in SQL and PL/SQL very simply, e.g.:
select * from my_table order by natural_sort(name);
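Because the comparison is defined as an ORDER member function, the type can also be compared directly with relational operators; a quick sketch:

```sql
select case
         when natural_sort('pic2') < natural_sort('pic100') then 'pic2 first'
         else 'pic100 first'
       end as winner
from   dual;
-- pic2 first
```

Here 'pic2' sorts before 'pic100' because the embedded numbers 2 and 100 are compared by value, whereas the default collation would put 'pic100' first.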
This is my test script:
select * from apex_string.split( q'[
fred
pic2
pic100a
pic120
pic121
jane
tom
pic02a
pic3
pic4
1-20
pic100
pic02000
10-20
1-02
1-2
x2-y7
x8-y8
x2-y08
x2-g8
pic01
pic02
pic 6
pic 7
pic 5
pic05
pic 5
pic 5 something
pic 4 else
1.001
1.2
1.002
1.02
1.09
1.101
1.102
1.010
1.10
1.200
1.199
1.198
1.1
2000-1-10
2000-1-2
1999-12-25
2000-3-23
1999-3-3
]' )
order by natural_sort(column_value);
The result of the above test query is:
1-02
1-2
1-20
1.001
1.1
1.002
1.02
1.2
1.09
1.010
1.10
1.101
1.102
1.198
1.199
1.200
10-20
1999-3-3
1999-12-25
2000-1-2
2000-1-10
2000-3-23
fred
jane
pic01
pic02
pic02a
pic2
pic3
pic 4 else
pic4
pic 5
pic 5
pic 5 something
pic05
pic 6
pic 7
pic100
pic100a
pic120
pic121
pic02000
tom
x2-g8
x2-y7
x2-y08
x8-y8
Warning: this will be slower than Oracle's built-in sorting and will not get any benefit from an index on the column; therefore it's only really suitable for sorting a relatively small number of records, e.g. for displaying a small set of records to a user.
pgconf.eu 2025 – RECAP
I was fortunate to be able to attend pgconf.eu 2025.
This year's event took place in Riga and once again brought together key members of the community, contributors, committers, sponsors and users from across the world.
I would summarize this year's event with three main topics: AI/LLM, PG18 and Monitoring.
Compared to last year, the formula changed a bit for the Community Events Day on Tuesday, where for the first time different "Summits" were organized. If you want full details on the event and the schedule, as well as the presentation slides of each talk, you can find them here: Schedule — PostgreSQL Conference Europe 2025
I had the chance to be chosen as a speaker for the AI Summit, which was quite interesting for me. In total there were 13 short talks (10 minutes each) on various topics related to PostgreSQL and AI/LLMs; it was dense, with a lot of interesting implementation ideas – you can find the details and slides here: PGConf.EU 2025 PostgreSQL AI Summit – PostgreSQL wiki. AI/LLMs are the hot topic of the moment, and naturally the subject came up often during this event, both in the talks and in the discussions. You can find the pdf of my presentation here. I explained a business case implementation of a BI self-service agentic RAG that finds relevant fields for a target KPI and produces data marts as output.

Since the talks were short, there was time at the end for a debate between the audience and the speakers. The discussion, nicely moderated by the organizers, was interesting because it exposed the same strong opinions people have in general about AI/LLMs: a blend of distrust and incomplete understanding of what it is about or how it could help organizations. Which, in itself, shows that the PostgreSQL community has the same difficulty explaining technical challenges versus organizational/human challenges. My view here is that we don't really have technical challenges (they are almost irrelevant to most arguments), but rather challenges of human relations and of understanding what value a DBA, for example, brings to the organization. To me, installing and configuring PostgreSQL has no benefit in terms of personal growth, so automating it is quite natural, and adding AI/LLMs on top is "nice to have" but not fundamentally different from an Ansible playbook. For a junior DBA, however, this is an additional abstraction that can be dangerous, because it provides tools whose consequences the user can't fully grasp. This outlines that the main issue with integrating AI/LLM workflows is more a governance/C-level management issue than a technical one, and it can't be the latest excuse for adding to the technical debt.
Jay Miller from Aiven explained how you can fail at protecting PII from LLMs and MCPs. This is a really relevant topic, given that more and more organizations are facing issues like shadow IT. He was also quite the show host and fun to listen to. I strongly recommend watching the recording when it is released.
This year's conference came just after the PostgreSQL 18 release, a version that brought major improvements and initiates changes for releases to come. I was quite enthusiastic to listen to Melanie Plageman on how she worked on the improvements to freezing in this release. I have to say, usually when I go to an advanced internals talk, I am more confused afterwards than before. But here, Melanie did an amazing job of presenting a complex technical topic without losing the audience.
Gülçin Yıldırım Jelínek, for her part, explained what's new in PG18 regarding constraints, like NOT ENFORCED and NOT NULL, and how to use them. The COO of Cybertec, Raj Verma, explained during a sponsor talk why compliance matters, how to minimize the risks, and how PostgreSQL helps us to be PCI DSS, GDPR, nLPD or HIPAA compliant.
Another interesting talk I was happy to attend was from Floor Drees and Gabriele Bartolini. They explained how they went about bringing the CloudNativePG project into the CNCF.
Monitoring
This leads me to another important topic. I wasn't looking for it, but over the years it has become a bit of a main subject for me, as a DBA interested in performance tuning. Monitoring in PostgreSQL was covered by several talks, like Luigi Nardi's on his idea of workload fingerprints with their DBtune tool. Additionally, Lukas Fittl presented pg_stat_plans, an extension which aims at tracking execution plans over time. This is definitely something I am going to try, and I will push for its implementation in the core extensions if not the core code itself.
The reason for that is obvious to me: PostgreSQL is becoming more and more central to enterprise organizations, and apart from subjects like TDE, monitoring is going to become a key aspect of automation, CloudNativePG and AI/LLM workflows. Making PostgreSQL easier to monitor well at the core will allow leverage at all these levels. Cloud companies already realize this, hence their involvement in similar projects.
In the end, this year was once again an occasion for me to think about many relevant topics and exchange with PostgreSQL hackers as well as users from around the world. I came back home with my head full of ideas to investigate.
Additionally, after the conference the videos of each talk will be uploaded to the pgconf Europe YouTube channel: PostgreSQL Europe, where you can already check out previous amazing talks and this year's pgday Paris.
So once again the PostgreSQL flag was flying high!

The article pgconf.eu 2025 – RECAP first appeared on the dbi Blog.