Feed aggregator

Problem with loading data from XML decimal point with german nls-settings

Tom Kyte - Fri, 2025-11-07 17:20
<code>
-- Settings
select banner from v$version;
-- Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

select * from v$nls_parameters
where  parameter in ('NLS_NUMERIC_CHARACTERS','NLS_TERRITORY','NLS_LANGUAGE');
-- NLS_LANGUAGE            GERMAN   3
-- NLS_TERRITORY           GERMANY  3
-- NLS_NUMERIC_CHARACTERS  ,.       3

create table asktom_gpx_tab (
  i       number
 ,xmldata xmltype
);

INSERT INTO asktom_gpx_tab -- (i,xmldata)
VALUES (1
       ,xmltype(
'<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<gpx>
<trkpt lat="49.773210" lon="8.930600"><ele>340.8</ele></trkpt>
<trkpt lat="49.772980" lon="8.930280"><ele>342.0</ele></trkpt>
<trkpt lat="49.772740" lon="8.929900"><ele>341.5</ele></trkpt>
</gpx>'));
commit;

select elevation
from   asktom_gpx_tab t
      ,xmltable('/gpx/trkpt' passing t.xmldata
                columns elevation varchar2(20) path 'ele') x1;
-- 340,8  -- converted because NLS sets decimal sign to comma
-- 342.0  -- why is this value not converted to comma???
-- 341,5  -- converted because NLS sets decimal sign to comma

-- I found a question here with a supposed solution, but it doesn't work right
select to_number(elevation,'FM9999D99','nls_numeric_characters = '',.''')
from   asktom_gpx_tab t
      ,xmltable('/gpx/trkpt' passing t.xmldata
                columns elevation varchar2(20) path 'ele') x1;
-- 340,8  -- ok
-- 3420   -- This is definitely a wrong value
-- 341,5  -- ok
</code>
Shouldn't all values be treated the same way?
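A possible workaround (a sketch, not tested against your data): in TO_NUMBER's third argument, the first character of NLS_NUMERIC_CHARACTERS is the decimal separator and the second is the group separator, so ',.' declares the period to be a group separator and turns '342.0' into 3420. Normalizing the string first removes the dependency on which separator the XML layer happened to emit:

<code>
-- Sketch: normalize the separator, then convert with an explicit
-- decimal character ('.' first = decimal, ',' second = group)
select to_number(replace(elevation, ',', '.')
                ,'FM99999D99'
                ,'nls_numeric_characters = ''.,''') as elevation_n
from   asktom_gpx_tab t
      ,xmltable('/gpx/trkpt' passing t.xmldata
                columns elevation varchar2(20) path 'ele') x1;
</code>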
Categories: DBA Blogs

What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?

Tom Kyte - Fri, 2025-11-07 17:20
What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?
Categories: DBA Blogs

impdp commit size parameter

Tom Kyte - Fri, 2025-11-07 17:20
Is a commit-size parameter planned for the Data Pump import utility in the future? Currently there is none. I see that direct path insert and parallel processing are there; if a commit-size parameter were added, would it help speed up data loading?
Categories: DBA Blogs

Performance Analysis of SQL over DB Link

Tom Kyte - Fri, 2025-11-07 17:20
Hello and thank you for your response. Sorry I could not provide a live example SQL.

I have an application running in JBoss that connects to a database, Reporting_DB, and executes SQL there. The SQL accesses views in Reporting_DB, and those views access tables in a remote database, say ReadOnlyStandby_DB, over a DB link. Here is a simple schematic:

Application --Network--> Reporting_DB ---DB Link--> ReadOnlyStandby_DB
ORM sql ---> View --> Table

SQL signature. The application uses an ORM (Hibernate), so the SQL text seen in Reporting_DB is always changing, for example:

select this_.Column1 as Column1_485_0_, this_.Column2 as Column2_485_0_, this_.Column3 as AUDIT3_485_0 etc.

The SQL seen in ReadOnlyStandby_DB looks like this:

SELECT "A1"."col1" ... FROM "TABLE_1" where CONDITION ... UNION ALL ...

The users are complaining that performance is slow, and developers are sharing application logs that show SQL timing out. The final database where the SQL executes is a Data Guard read-only standby. I have OEM on Reporting_DB and ReadOnlyStandby_DB, but the app server (JBoss) is outside my jurisdiction and I have no access to or insight into it. How can I get the following details?

1) Connect the dots for a session: how to connect the dots from the app, to the DB session on Reporting_DB, to the DB session on ReadOnlyStandby_DB.
2) Session trace: how to trace a session coming in over the DB link on ReadOnlyStandby_DB.
3) SQL analysis: the SQL on Reporting_DB is not the same as on ReadOnlyStandby_DB; it seems to change. How to connect SQL "a" on Reporting_DB with its related SQL "b" on ReadOnlyStandby_DB?
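Not a full answer, but a hedged starting point for point 2: incoming DB-link connections usually appear on the remote instance with a PROGRAM of the form 'oracle@source-host', which gives one way to spot them and to enable tracing for them. The trigger below assumes a dedicated, hypothetical link user called REPORTING_LINK, and it would have to be created on the standby's primary, since the standby itself is read-only:

<code>
-- On ReadOnlyStandby_DB: list sessions opened by another Oracle instance
select sid, serial#, username, program, module, sql_id
from   v$session
where  program like 'oracle@%';

-- Sketch: enable SQL trace only for sessions of the DB-link user
create or replace trigger trc_dblink_logon
after logon on database
begin
  if user = 'REPORTING_LINK' then  -- hypothetical dedicated link user
    dbms_monitor.session_trace_enable(waits => true, binds => true);
  end if;
end;
/
</code>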
Categories: DBA Blogs

Materialized view based on View expanded to underlying objects

Tom Kyte - Fri, 2025-11-07 17:20
Hello All,

I have created a materialized view using the following code:

<code>
CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
  ("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME",
   "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
  SEGMENT CREATION IMMEDIATE
  ORGANIZATION HEAP PCTFREE 10 PCTUSED 0 INITRANS 2 MAXTRANS 255
  NOCOMPRESS NOLOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
          FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "APPS_TS_TX_DATA"
  BUILD DEFERRED
  USING INDEX
  REFRESH COMPLETE ON DEMAND START WITH sysdate+0 NEXT sysdate + (1/24/60)*30
  USING DEFAULT LOCAL ROLLBACK SEGMENT
  USING TRUSTED CONSTRAINTS
  EVALUATE USING CURRENT EDITION
  DISABLE ON QUERY COMPUTATION
  DISABLE QUERY REWRITE
  AS SELECT * FROM xxbst_unpack_pending_v;
</code>

This is in a non-prod environment which has just been restored from a backup, because in prod the code had been changed to this lowest-level expanded form:

<code>
CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV"
  ("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME",
   "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
  SEGMENT CREATION IMMEDIATE
  ON PREBUILT TABLE WITHOUT REDUCED PRECISION
  USING INDEX
  REFRESH FORCE ON DEMAND
  USING DEFAULT LOCAL ROLLBACK SEGMENT
  USING ENFORCED CONSTRAINTS
  EVALUATE USING CURRENT EDITION
  DISABLE ON QUERY COMPUTATION
  DISABLE QUERY REWRITE
  AS SELECT "A1"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","A1"."CUSTOMER_NAME" "CUSTOMER_NAME",
            "A1"."SUPPLIER_ID" "SUPPLIER_ID","A1"."SUPPLIER_NAME" "SUPPLIER_NAME",
            "A1"."SHIPMENT_ID" "SHIPMENT_ID","A1"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER",
            "A1"."UNPACK_DATE" "UNPACK_DATE",
            "A1"."DAYS_IN_UNPACK_PENDING_STATUS" "DAYS_IN_UNPACK_PENDING_STATUS"
     FROM (SELECT "A7"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID",
                  "APPS"."XXBST_DEV_UTILS_PKG"."GET_CUST_SHORT_NAME"("A7"."CUST_ACCOUNT_ID") "CUSTOMER_NAME",
                  "A3"."VENDOR_ID" "SUPPLIER_ID","A3"."VENDOR_NAME" "SUPPLIER_NAME",
                  "A10"."SHIPMENT_ID" "SHIPMENT_ID","A10"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER",
                  MAX("A7"."CREATION_DATE") "UNPACK_DATE",
                  TRUNC(SYSDATE)-TRUNC(MAX("A7"."CREATION_DATE")) "DAYS_IN_UNPACK_PENDING_STATUS"
           FROM "XXBST"."XXBST_TNT_SHPMNT" "A10",
                "XXBST"."XXBST_TNT_MV_PLAN" "A9",
                "XXBST"."XXBST_TNT_MV" "A8",
                "XXICE"."XXICE_SM_RCV_HEADERS" "A7",
                "XXBST"."XXBST_TNT_HEADERS_ALL" "A6",
                "XXBST"."XXBST_TNT_LINES_ALL" "A5",
                "XXBST"."XXBST_BOM_ORDER_HEADERS_ALL" "A4",
                (SELECT "A12"."VENDOR_ID" "VENDOR_ID","A12"."VENDOR_NAME" "VENDOR_NAME"
                 FROM "AP"."AP_SUPPLIERS" "A12","AR"."HZ_PARTIES" "A11"
                 WHERE "A12"."PARTY_ID"="A11"."PARTY_ID") "A3",
                "XXBST"."XXBST_TNT_MV_PLAN" "A2"
           WHERE "A10"."SHIPMENT_ID"="A7"."SHIPMENT_ID"
             AND "A10"."SHIPMENT_ID"="A9"."SHIPMENT_ID"
             AND "A9"."PLAN_ID"="A8"."PLAN_ID"
             AND "A8"."MV_TYPE"='DELIVERY'
             AND "A8"."MV_STATUS"='UNPACK_PENDING'
             AND "A6"."SHIPM...
Categories: DBA Blogs

Export Oracle Label Security info from one DB to be imported into another DB

Tom Kyte - Fri, 2025-11-07 17:20
Hi Sir, I'm trying to copy the Oracle Label Security information from one database and apply it to another one. I tried to use expdp/impdp and dbms_metadata.get_ddl, but I didn't have success using them. Is there a special package or tool to get this work done? Thanks in advance. Valerio Almeida
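One possible angle, offered as a sketch rather than a tested recipe: OLS policies are defined through the SA_* PL/SQL packages, so if expdp/impdp and DBMS_METADATA won't carry the policy over, you can script the defining calls and replay them on the target database. The policy and component names below are purely illustrative:

<code>
-- Sketch: recreate an OLS policy on the target DB via the SA_* APIs
begin
  sa_sysdba.create_policy(policy_name => 'HR_POLICY'  -- illustrative names
                         ,column_name => 'HR_LABEL');
  sa_components.create_level(policy_name => 'HR_POLICY'
                            ,level_num   => 10
                            ,short_name  => 'CONF'
                            ,long_name   => 'CONFIDENTIAL');
end;
/
</code>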
Categories: DBA Blogs

APEX Error 572 autopsy

Flavio Casetta - Mon, 2025-11-03 07:35

A couple of days ago I exported an application from APEX 24.1.5 and imported it into APEX 24.2.9 on my OCI free tier instance.

Whilst the application itself was working fine, I got a problem when I tried to edit page 4 in the App Builder: the spinner kept spinning for an unusually long time and eventually stopped, leaving an empty page, and APEX showed "Error: 572" in the error notification area.

Now, according to other users this Error 572 was a symptom of an overloaded database, which seemed strange to me because the page is not particularly complex and I could open other pages in the editor without problems.

After various unsuccessful attempts to gather some meaningful information from the APEX Builder debug logs (see this interesting blog posting by Steve Muench on how to do that), I decided to have a closer look at the differences between the previous version of this page and the current, problematic one.

Here are some relevant facts:

  • The current version of the page can be edited without problems in APEX 24.1.5.
  • If I export this single page from 24.1.5 and import it into 24.2.9, the problem persists; so if there is a problem, it's not caused by the import process of the whole application.
  • The "new" version of the page works correctly when I run the application, so the APEX metadata must be good.
  • The only difference between the old version and the new version is in the source of the query I am using to populate an Interactive Report.
  • If I import the old version of the page, the App Builder editor resumes working properly. 

The difference between the old and the new query is the following CASE expression; the old version did not contain the lines in yellow: 

case 
  when json_value(e.value,'$.email[0]') is null
  then apex_lang.message('NO_RECIPIENT')
  when json_value(e.value,'$.email[1]') is null
  then apex_lang.message('SINGLE_RECIPIENT')
  else apex_lang.message('OTHER_RECIPIENTS')
end as RECIPIENTS

If I comment out the two lines in yellow in the page source, import the page and try to edit it, everything is back to normal.

The tricky part with this type of problem is that if you receive this error upon entering the editor, you cannot delete the offending page, because Error 572 also breaks the delete-page functionality; your only option is to replace the page with an import of a working version.

If you receive this error while you are still inside the editor of the page, you may have a chance of reverting the page to its previous state.

In my case it was easy: I had an older working version of the page. If you don't, you must work around the problem in some other fashion; in the worst case, I presume, you must edit the import file and get rid of some components.

In conclusion, my wild guess is that something wrong is happening at parse time. In particular, I strongly suspect that the absence of meaningful values for the bind variables used in the conditions of my query breaks the logic of the JSON_VALUE function, which relies on the content of the JSON array "email"; at parse time that array is of course empty, resulting in this weird error.

Moreover, the IR query handling must have changed in some way between the two APEX releases.

Now, it could be that Error 572 is also returned when there is a "database overload", but in this specific case I can't see any correlation.

Or maybe there is something else going on that I can't fathom right now.

Categories: DBA Blogs

Tip of the day: hide (or show) multiple buttons or other page components with a single dynamic action

Flavio Casetta - Wed, 2025-10-29 10:52

Need to hide or show multiple buttons or page items based on a common client-side condition?

Instead of creating a separate dynamic action for each button, define a common CSS class (e.g. mybuttons) on each of the buttons, then use a single DA with a jQuery selector like ".mybuttons" for the Hide action and another for the Show action, thus reducing the page complexity.

And you can also extend this to other components like page items or regions, if they follow the same logic.

Categories: DBA Blogs

Natural Sort in SQL

Jeff Kemp - Sun, 2025-10-26 23:06
illustration of an android working hard to sort a stack of papers

Sorting strings in SQL is fairly straightforward:

select * from mytable order by name;

This uses the default collation, which at its core is a character-by-character comparison. That almost always matches what humans would consider to be the "right" order, unless decimal numbers are embedded in the strings, in which case it will sort strings like ‘abc-2000’ before ‘abc-30’.

Natural Sort, on the other hand, compares strings using a more complex algorithm that finds embedded decimal numbers and sorts them according to their numeric value.

I’ve taken the algorithm from https://github.com/sourcefrog/natsort and implemented it in PL/SQL:

create or replace type natural_sort as object (
    v varchar2(32767),
    order member function compare(p_other natural_sort) return integer
);
/

create or replace type body natural_sort as

    order member function compare(p_other natural_sort) return integer is

        function is_whitespace(a in char) return boolean is
        begin
            return a is not null and ascii(a) <= 32;
        end is_whitespace;
    
        function is_digit(a in char) return boolean is
        begin
            return a is not null and ascii(a) between 48 and 57;
        end is_digit;
        
        function compare_right(a in varchar2, b in varchar2) return number is
            bias number := 0;
            ia   number := 0;
            ib   number := 0;
            ca   char(1 char);
            cb   char(1 char);
        begin    
            -- The longest run of digits wins.  That aside, the greatest
            -- value wins, but we can't know that it will until we've scanned
            -- both numbers to know that they have the same magnitude, so we
            -- remember it in bias.
            loop
                ca := substr(a, ia, 1);
                cb := substr(b, ib, 1);
                pragma inline (is_digit, 'yes');
                if not is_digit(ca) and not is_digit(cb) then
                    return bias;
                pragma inline (is_digit, 'yes');
                elsif not is_digit(ca) then
                    return -1;
                pragma inline (is_digit, 'yes');
                elsif not is_digit(cb) then
                    return 1;
                elsif ca < cb then
                    if bias = 0 then
                        bias := -1;
                    end if;
                elsif ca > cb then
                    if bias = 0 then
                        bias := 1;
                    end if;
                elsif ca is null and cb is null then  -- end of both strings (the C original tests for NUL here, not the digit '0')
                    return bias;
                end if;            
                ia := ia + 1;
                ib := ib + 1;
            end loop;
        end compare_right;
        
        function natcompare(a in varchar2, b in varchar2) return number is    
            ia  number := 0;
            ib  number := 0;
            nza number;
            nzb number;
            ca  char(1 char);
            cb  char(1 char);
            ret number;
        begin
            loop
                -- only count the number of zeroes leading the last number compared
                nza := 0;
                nzb := 0;
        
                ca := substr(a, ia, 1);
                cb := substr(b, ib, 1);    
        
                -- skip over leading spaces or zeros
                pragma inline (is_whitespace, 'yes');
                while is_whitespace( ca ) or ca = '0' loop
                    if ca = '0' then
                        nza := nza + 1;
                    else
                        -- only count consecutive zeroes
                        nza := 0;
                    end if;
                    ia := ia + 1;
                    ca := substr(a, ia, 1);
                end loop;
        
                pragma inline (is_whitespace, 'yes');
                while is_whitespace( cb ) or cb = '0' loop
                    if cb = '0' then
                        nzb := nzb + 1;
                    else
                        -- only count consecutive zeroes
                        nzb := 0;
                    end if;
                    ib := ib + 1;
                    cb := substr(b, ib, 1);
                end loop;
        
                if ca is null and cb is null then
                    -- The strings compare the same. Break the tie with the
                    -- default comparison.
                    if a < b then
                        return -1;
                    elsif a > b then
                        return 1;
                    end if;
                end if;
        
                -- process run of digits
                pragma inline (is_digit, 'yes');
                if is_digit(ca) and is_digit(cb) then
                    ret := compare_right(substr(a, ia), substr(b, ib));
                    if ret != 0 then
                        return ret;
                    end if;
                end if;
        
                if ca < cb then
                    return -1;
                elsif ca > cb then
                    return 1;
                end if;
        
                ia := ia + 1;
                ib := ib + 1;
            end loop;
        end natcompare;

    begin
        if v is null and p_other.v is null then
            return 0;
        elsif v is null and p_other.v is not null then
            return 1;
        elsif v is not null and p_other.v is null then
            return -1;
        elsif v = p_other.v then
            return 0;
        end if;
        return natcompare(v, p_other.v);
    end compare;
    
end;
/

This creates a type in the schema called natural_sort which can be used in SQL and PL/SQL very simply, e.g.:

select * from my_table order by natural_sort(name);
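As a quick sanity check (assuming the type above compiles), a minimal comparison against the default ordering might look like:

select column_value
from   apex_string.split('pic2:pic100:pic3', ':')
order  by natural_sort(column_value);

Per the test output further below, this returns pic2, pic3, pic100, whereas the default ORDER BY column_value would put pic100 first.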

This is my test script:

select * from apex_string.split( q'[
fred
pic2
pic100a
pic120
pic121
jane
tom
pic02a
pic3
pic4
1-20
pic100
pic02000
10-20
1-02
1-2
x2-y7
x8-y8
x2-y08
x2-g8
pic01
pic02
pic 6
pic   7
pic 5
pic05
pic 5 
pic 5 something
pic 4 else
1.001
1.2
1.002
1.02
1.09
1.101
1.102
1.010
1.10
1.200
1.199
1.198
1.1
2000-1-10
2000-1-2
1999-12-25
2000-3-23
1999-3-3
]' )
order by natural_sort(column_value);

The result of the above test query is:

1-02
1-2
1-20
1.001
1.1
1.002
1.02
1.2
1.09
1.010
1.10
1.101
1.102
1.198
1.199
1.200
10-20
1999-3-3
1999-12-25
2000-1-2
2000-1-10
2000-3-23
fred
jane
pic01
pic02
pic02a
pic2
pic3
pic 4 else
pic4
pic 5
pic 5 
pic 5 something
pic05
pic 6
pic   7
pic100
pic100a
pic120
pic121
pic02000
tom
x2-g8
x2-y7
x2-y08
x8-y8

Warning: this will be slower than Oracle's built-in sorting and will get no benefit from an index on the column; it is therefore only really suitable when sorting a relatively small number of records, e.g. for displaying a small set of records to a user.

pgconf.eu 2025 – RECAP

Yann Neuhaus - Sun, 2025-10-26 13:30

I was fortunate to be able to attend pgconf.eu 2025.
This year's event took place in Riga and once again brought together key members of the community, contributors, committers, sponsors and users from across the world.
I would summarize this year's event with three main topics: AI/LLMs, PG18 and monitoring.

AI/LLMs


Compared to last year, the formula changed a bit for the community events day on Tuesday, where for the first time different "Summits" were organized. If you want full details on the event and the schedule, as well as the presentation slides of each talk, you can find them here: Schedule — PostgreSQL Conference Europe 2025.

I had the chance to be chosen as a speaker for the AI Summit, which was quite interesting for me. In total there were 13 short talks (10 minutes each) on various topics related to PostgreSQL and AI/LLMs; it was dense, with a lot of interesting implementation ideas. You can find the details and slides here: PGConf.EU 2025 PostgreSQL AI Summit – PostgreSQL wiki. AI/LLMs are the hot topic of the moment, and naturally they came up often during this event, both in the talks and in the discussions. You can find the PDF of my presentation here. I explained a business-case implementation of a self-service BI agentic RAG that finds the relevant fields for a target KPI and produces data marts as output.

Since the talks were short, there was time for a debate between the audience and the speakers at the end. The discussion, nicely moderated by the organizers, was interesting because it exposed the same strong feelings people have in general about AI/LLMs: a blend of distrust and incomplete understanding of what it is about and how it could help organizations. This in itself shows that the PostgreSQL community has the same difficulty explaining technical challenges versus organizational/human challenges. My view here is that we don't really have technical challenges; they are almost irrelevant to most of the arguments. The real challenge is human relations and understanding the value that a DBA, for example, brings to the organization. To me, installing and configuring PostgreSQL has no benefit in terms of personal growth, so automating it is quite natural, and adding AI/LLMs on top is nice to have but not fundamentally different from an Ansible playbook. For a junior DBA, however, this is an additional abstraction that can be dangerous, because it provides tools whose full consequences users cannot grasp. This outlines that the main issue with integrating AI/LLM workflows is a governance/C-level management issue more than a technical one, and it cannot become the latest excuse for adding to the technological debt.

Jay Miller from Aiven explained how easily you can end up exposing PII via LLMs and MCPs. This is a really relevant topic, knowing that more and more organizations are facing issues like shadow IT. He was also quite the show host and fun to listen to; I strongly recommend watching the recording when it is released.

PG18

This year's conference came just after the release of PostgreSQL 18, a version that brings major improvements and initiates changes for releases to come. I was quite enthusiastic to listen to Melanie Plageman on how she worked on the improvements to freezing in this release. I have to say, usually when I go to an advanced internals talk I am more confused afterwards than before; but here, Melanie did an amazing job of covering a complex technical topic without losing the audience.
Gülçin Yıldırım Jelínek, for her part, explained what's new in PG18 regarding constraints, such as NOT ENFORCED and NOT NULL, and how to use them. Raj Verma, COO of Cybertec, explained during a sponsor talk why compliance matters, how to minimize the risks, and how PostgreSQL helps us be PCI DSS, GDPR, nLPD or HIPAA compliant.
Another interesting talk I was happy to attend was from Floor Drees and Gabriele Bartolini: they explained how they went about bringing the CloudNativePG project into the CNCF.

Monitoring

This leads me to another important topic. I wasn't looking for it, but over the years it has become a bit of a main subject for me, as a DBA interested in performance tuning. Monitoring in PostgreSQL was covered by several talks, such as Luigi Nardi's on the idea of workload fingerprinting in their DBtune tool. Additionally, Lukas Fittl presented pg_stat_plans, an extension which aims at tracking execution plans over time. This is definitely something I am going to try, and I will push for its implementation in the core extensions if not the core code itself.
The reason is obvious to me: PostgreSQL is becoming more and more central to enterprise organizations, and apart from subjects like TDE, monitoring is going to become a key aspect of automation, CloudNativePG and AI/LLM workflows. Making PostgreSQL easier to monitor at its core will allow leveraging it at all these levels. Cloud companies have already realized this, hence their involvement in similar projects.

In the end, this year was once again an occasion for me to think about many relevant topics and to exchange with PostgreSQL hackers as well as users from around the world. I came back home with my head full of ideas to investigate.

Additionally, after the conference the videos of each talk will be uploaded to the pgconf Europe YouTube channel: PostgreSQL Europe. In the meantime you can already check out previous amazing talks and this year's pgday Paris.

So once again the PostgreSQL flag was flying high!

The article pgconf.eu 2025 – RECAP appeared first on the dbi Blog.
