DBA Blogs
Continuous data capture
I need to propagate changes on a few tables to an external system. It has to be near real time, and on my end the data will ultimately land on Kafka. The target is Oracle 19c, and the solution needs to be installed at multiple clients' on-premises sites.
Example problem:
Let's say I have a table of messages. It has a column called change_id, populated by a trigger from a sequence on insert and update. In theory I always know which row was changed last. I might be able to avoid tracking deletes; not sure yet.
Now I create a procedure, to be run as a batch, that collects the rows changed since the last run. First it gets the max(change_id), then selects all rows between last_run_max_change_id and new_max_change_id, exports the data as JSON to another table, writes new_max_change_id as last_run_max_change_id, and commits. Another process handles delivering the JSON to where it needs to be.
The problem is that another long-running transaction might have consumed lower sequence numbers but not yet committed when the batch ran, so those change_ids will never be exported. Another problem is that I don't capture deleted rows.
Solution 1: GoldenGate replication, OpenLogReplicator, or something similar. I would have to convince all clients to commit to paying the GoldenGate licence, create tables as replication targets, export from those tables, and delete from them. The licence and getting all clients on board are difficult, because I need one solution for all of them. Security, stability, and maintenance concerns will also likely make clients reject such ideas, and every one of them has to be on board. I have also tried Oracle Streams on another project and had stability issues and ORA-600 errors that were never resolved.
Solution 2: use the SCN instead of change_id. ORA_ROWSCN is not indexed, and selecting by SCN in a WHERE clause on billions of rows is too slow.
Solution 3: Flashback. Something like SELECT * FROM messages VERSIONS BETWEEN SCN :last_scn AND :current_scn; My concern is that if the program doesn't run for a while, for whatever reason, the flashback data will be lost. I would need a backup solution.
Solution 4: a trigger (after insert, update, and delete) on the messages table that writes to an export table. It handles inserts, updates, and deletes, and nothing gets skipped. I select 10k ids in order, export them to JSON, delete those 10k records, and commit. I'm worried about the extra load on big transactions from writing to the export table, the per-row trigger overhead, and the additional context switching. There is also index contention on the export table, which has rows inserted by inserts and updates on the original table at the same time rows are deleted by the export batch process. Exports will have to be ordered by change_id.
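A minimal sketch of the Solution 4 trigger. The names are hypothetical (a messages table keyed by msg_id, the existing sequence called change_seq); adapt to the real schema:

```sql
-- Hypothetical names: messages(msg_id ...), existing sequence change_seq
CREATE TABLE messages_export_log (
  change_id NUMBER PRIMARY KEY,
  msg_id    NUMBER,
  op        VARCHAR2(1)   -- 'I', 'U', or 'D'
);

CREATE OR REPLACE TRIGGER trg_messages_export
AFTER INSERT OR UPDATE OR DELETE ON messages
FOR EACH ROW
BEGIN
  INSERT INTO messages_export_log (change_id, msg_id, op)
  VALUES (change_seq.NEXTVAL,
          CASE WHEN DELETING THEN :OLD.msg_id ELSE :NEW.msg_id END,
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE                'D' END);
END;
/
```

Because the log insert is part of the same transaction as the base-table DML, a rollback removes the log rows too, which avoids the uncommitted-gap problem of the sequence-only approach.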
Conclusion: the only stable, data-consistent solution I can think of is solution 4, a trigger. But I'm worried about the overhead.
Instead of a trigger, I could go through the code and add explicit inserts into the export log table; the total number of extra writes might be lower, but not by much.
A...
Categories: DBA Blogs
Question about Concurrent Statistics Gathering and Maintenance Windows
Hello,
I have a question regarding concurrent statistics gathering in Oracle 19c. Currently, my global preference is set to OFF:
SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;
-- Result: OFF
I can enable it manually with:
BEGIN
DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','ALL');
END;
/
I understand that enabling CONCURRENT allows Oracle to gather statistics on multiple tables and (sub)partitions at the same time, potentially reducing the total duration. Oracle uses the Job Scheduler and Advanced Queuing to manage these concurrent jobs.
My question:
Is there a relationship between the maintenance windows (like WEEKEND_WINDOW or WEEKNIGHT_WINDOW) and concurrent statistics gathering? Specifically:
When CONCURRENT is enabled, does Oracle automatically schedule these parallel stats jobs within the maintenance windows?
Or is the maintenance window unrelated, and the concurrent gathering runs independently of it?
Thank you for your guidance.
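A hedged way to inspect both settings side by side. Note that since 11g the default maintenance window group is MAINTENANCE_WINDOW_GROUP, containing the daily MONDAY_WINDOW through SUNDAY_WINDOW (WEEKNIGHT_WINDOW and WEEKEND_WINDOW are the pre-11g names):

```sql
-- Current concurrency preference (OFF / MANUAL / AUTOMATIC / ALL)
SELECT DBMS_STATS.GET_PREFS('CONCURRENT') AS concurrent_pref FROM dual;

-- The maintenance windows the automatic optimizer stats task runs in
SELECT w.window_name, w.repeat_interval, w.duration, w.enabled
FROM   dba_scheduler_windows w
JOIN   dba_scheduler_wingroup_members m
       ON m.window_name = w.window_name
WHERE  m.window_group_name = 'MAINTENANCE_WINDOW_GROUP';
```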
Categories: DBA Blogs
Oracle View Becomes INVALID Intermittently Without DDL on Base Table
All,
I have an Oracle view built on top of a partitioned table. Occasionally the view becomes INVALID, even though there are no DDL changes happening on the base table. The view becomes VALID again automatically when it is accessed or queried, but this behavior is causing issues in production.
I'm trying to understand what could be causing the intermittent invalidation.
<code>
create view my_view as
SELECT dly_fct_id, acct_ref_id, bus_dt
FROM my_table
WHERE bus_dt = TO_DATE('01/01/2500','MM/DD/YYYY');
</code>
Note: The date above is only a placeholder. In reality this predicate changes dynamically based on the ETL run date.
Base table structure:
<code>
CREATE TABLE my_table
(
dly_fct_id NUMBER,
acct_ref_id NUMBER,
bus_dt DATE
)
PARTITION BY RANGE (bus_dt)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(
PARTITION p0 VALUES LESS THAN (TO_DATE('01-JAN-1900','DD-MON-YYYY'))
)
COMPRESS;
</code>
Constraint:
<code>
ALTER TABLE my_table
ADD CONSTRAINT xpk_my_table
PRIMARY KEY (dly_fct_id, acct_ref_id)
RELY;
</code>
The table actually contains ~50 columns, but I only included the relevant ones here.
Additional details:
The view does not become invalid daily, but it happens intermittently.
There are no known DDL operations on the table except regular ETL data loads.
The view becomes VALID again automatically when it is queried.
This is happening in a production environment, so we want to understand the root cause.
Questions:
What could cause a view to become INVALID intermittently without explicit DDL on the base table?
Could this be related to partition maintenance, statistics gathering, or constraint changes?
What system views or logs should we check to identify the root cause?
Any guidance on where to investigate would be greatly appreciated.
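One hedged starting point is to correlate the view's status and LAST_DDL_TIME (which also moves on implicit recompiles) with activity on the base table; the object names here match the example above:

```sql
-- Compare the view's status and LAST_DDL_TIME with the base table's
SELECT owner, object_name, object_type, status,
       TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS') AS last_ddl
FROM   dba_objects
WHERE  object_name IN ('MY_VIEW', 'MY_TABLE')
ORDER  BY object_name;
```

If the timestamps line up with ETL runs or partition maintenance, that narrows down the trigger for the invalidation.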
Categories: DBA Blogs
Instance vs database
Can an instance exist without a database, and can a database exist without an instance?
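As a minimal illustration of the asymmetry, run as SYSDBA (a sketch, not a full answer):

```sql
-- An instance can run without any database mounted or open:
STARTUP NOMOUNT;       -- memory structures and background processes only

-- The database (the physical files) continues to exist on disk after:
SHUTDOWN IMMEDIATE;    -- no instance running, database files still there
```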
Categories: DBA Blogs
DBA technologies.
Hello, I just wanted to ask you a question. Can you please recommend some good online Oracle books, tutorials, or courses for DBA technologies like RMAN, GoldenGate, Exadata, Data Guard, Data Pump, and RAC? Thanks.
Categories: DBA Blogs
Get hashed password for old user
I am normally using
"<code>SELECT DBMS_METADATA.GET_DDL('USER', 'User_Name') FROM DUAL;</code>"
to get the hashed password (which I use to re-create the user as it was), as the SYSTEM user.
Unfortunately this does not work for users that have PASSWORD_VERSIONS = 10G (I am using Oracle 19, but I think this happens with 12 as well).
In that case the hashed password is stored in a column of the SYS table USER$ that the SYSTEM user is not allowed to see.
However, I observed that I can use a Data Pump export to dump the user definition and then a Data Pump import to get a SQL file that contains the hashed password, even as SYSTEM (but it is a long way around).
So how can the SYSTEM user get the hashed password?
I would like a simpler way to get a script to re-create a user.
Regards
Mauro
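For reference, a sketch of where the verifiers live, queried while connected AS SYSDBA (SYSTEM has no SELECT privilege on SYS.USER$ by default); replace USER_NAME with the real account:

```sql
-- The 10G (DES) verifier is in USER$.PASSWORD; 11G/12C verifiers in SPARE4
SELECT name,
       password AS hash_10g,     -- 10G verifier, may be NULL
       spare4   AS hash_11g_12c  -- 'S:' and 'T:' verifiers, may be NULL
FROM   sys.user$
WHERE  name = 'USER_NAME';

-- The captured hash can then be replayed with:
-- ALTER USER user_name IDENTIFIED BY VALUES '<hash>';
```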
Categories: DBA Blogs
Dealing with 100,000,000+ rows of data.
Well hello, I just wanted to ask: when you have 100,000,000 rows of data and the best partitioning strategy is partition by range with subpartition by hash (on a DATE column called hired, with YEAR(hired) for the range and MONTH(hired) for the hash), how do we know the number of partitions to create in the table so we get the best performance when we query the data? Thanks. If you don't understand my question, I can provide the code on request.
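A hedged sketch of one such layout, with hypothetical names; this variant ranges by the hired date itself (one partition per year via interval partitioning) and hashes on a key column, since the usual advice is to make the hash subpartition count a power of two:

```sql
-- Hypothetical table: yearly range partitions, 8 hash subpartitions each
CREATE TABLE employees_part (
  emp_id NUMBER,
  hired  DATE NOT NULL
)
PARTITION BY RANGE (hired)
INTERVAL (NUMTOYMINTERVAL(1, 'YEAR'))
SUBPARTITION BY HASH (emp_id) SUBPARTITIONS 8   -- power of two
(
  PARTITION p_before_2000 VALUES LESS THAN (DATE '2000-01-01')
);
```

The "right" number of partitions depends less on row count than on how queries prune: range partitions should match the predicates you filter on, and hash subpartitions mainly help parallelism and join/contention spreading.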
Categories: DBA Blogs
Secure Application Role Disable Role
Hi Tom,
I was learning Secure Application Roles and created an example to test it. Everything worked fine to actually create and enable the role, but I'm getting an error disabling the role (actively disabling it, not just letting the end of session do it).
First I'll paste a portion of the script output showing the error, and then I'll paste the .sql script itself that can be reproduced.
=== secure_role_example.txt ===
MYDBA@ORCL > create package body secure_role_pkg as
2 procedure enable_role is
3 begin
4 -- security check here
5 if 1 = 1 then
6 dbms_session.set_role('secure_role');
7 end if;
8 end;
9
10 procedure disable_role is
11 begin
12 dbms_session.set_role('all except secure_role');
13 end;
14 end;
15 /
Package body created.
MYDBA@ORCL > show errors
No errors.
MYDBA@ORCL >
MYDBA@ORCL > grant execute on secure_role_pkg to public;
Grant succeeded.
A@ORCL >
A@ORCL > exec mydba.secure_role_pkg.enable_role;
PL/SQL procedure successfully completed.
A@ORCL >
A@ORCL > select * from session_roles;
ROLE
------------------------------
SECURE_ROLE
A@ORCL >
A@ORCL > exec mydba.secure_role_pkg.disable_role;
BEGIN mydba.secure_role_pkg.disable_role; END;
*
ERROR at line 1:
ORA-01919: role 'SECURE_ROLE' does not exist
ORA-06512: at "SYS.DBMS_SESSION", line 124
ORA-06512: at "MYDBA.SECURE_ROLE_PKG", line 12
ORA-06512: at line 1
A@ORCL >
A@ORCL > select * from session_roles;
ROLE
------------------------------
SECURE_ROLE
=== secure_role_example.sql ===
-- secure_role_example.sql
-- Run this logged in as the mydba user, who has dba role.
-- This example creates a secure application role, which is a role that is
-- tied to and can only be set by a specific invokers rights package. This
-- allows you to procedurally enable a role for a user's session based on
-- criteria you define, and have that role contain all the privs needed to
-- execute a set of packages to run a particular application.
spool secure_role_example.txt;
set echo on;
connect mydba/orcl;
create role secure_role identified using mydba.secure_role_pkg;
create table secure_table (a int, b int);
create package secure_app as
procedure do_stuff;
procedure display_stuff;
end;
/
show errors
create package body secure_app as
procedure do_stuff is
begin
insert into secure_table values (1, 1);
commit;
end;
procedure display_stuff is
l_count number;
begin
select count(*) into l_count from secure_table;
dbms_output.put_line(l_count);
end;
end;
/
show errors
grant execute on secure_app to secure_role;
create package secure_role_pkg authid current_user as
procedure enable_role;
procedure disable_role;
end;
/
show...
Categories: DBA Blogs
Locked user.
Well hello, I just wanted to ask a question. A year and a half ago I installed the 23ai version and downloaded SQL Developer. A month ago I opened SQL Developer and saw that my HR user was locked, so I had to connect as SYS and run the following ->
<code>ALTER SESSION SET CONTAINER = FREEPDB1;
ALTER USER hr ACCOUNT UNLOCK; -- or ALTER USER hr IDENTIFIED BY ** (password: hr) ACCOUNT UNLOCK; </code>
My question: now that I have unlocked the HR user so I can connect again, when will the account lock again? Is there a query that shows the remaining time until the HR user is locked again? Thanks.
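A hedged way to see when (and why) the account will expire or lock again, assuming HR uses the DEFAULT profile:

```sql
-- Current status and the date the password expires
SELECT username, account_status, lock_date, expiry_date, profile
FROM   dba_users
WHERE  username = 'HR';

-- The profile limits that drive expiry and locking
SELECT profile, resource_name, limit
FROM   dba_profiles
WHERE  profile = 'DEFAULT'
AND    resource_name IN ('PASSWORD_LIFE_TIME', 'PASSWORD_GRACE_TIME',
                         'FAILED_LOGIN_ATTEMPTS', 'PASSWORD_LOCK_TIME');
```

EXPIRY_DATE in DBA_USERS is the relevant countdown for password expiry; locking from failed logins has no countdown, it happens when FAILED_LOGIN_ATTEMPTS is exceeded.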
Categories: DBA Blogs
Oracle error -1843 "not a valid month"
Hello,
We have a Java process that calls OR stored procedure via JPA.
This procedure was initially declared outside of a package, and everything worked fine, but when we moved it into a package, we encountered date conversion problems related to the incorrectly initialized Oracle NLS_DATE_FORMAT variable.
When our call is made via a Windows SQL client, NLS_DATE_FORMAT has the value 'DD/MM/YYYY'.
However, when the call is made via Java Process, this variable has the value 'DD-MON-RR' => This causes problems in the procedure where date conversions are sometimes used without specifying the format, and Oracle then uses the NLS_DATE_FORMAT variable. This can lead to a TO_DATE(27/02/2026) operation when the session format is 'DD-MON-RR'... and therefore results in the Oracle error -1843 "not a valid month".
When the stored procedure was outside of a package, we forced this format at the beginning of our procedure with the command:
execute immediate 'ALTER SESSION SET NLS_DATE_FORMAT=''dd/mm/yyyy''';
But this doesn't work when the procedure is inside a package, because the package is apparently compiled and cached before the ALTER SESSION operation is performed...
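A hedged alternative that sidesteps the session setting entirely is to make every conversion explicit, including the package-level constant shown below (which otherwise relies on an implicit string-to-DATE conversion at package initialization):

```sql
-- Explicit format masks never depend on the session's NLS_DATE_FORMAT
P_CST_version_date CONSTANT DATE DEFAULT TO_DATE('23/02/2026', 'DD/MM/YYYY');

-- Same principle inside procedure bodies: always pass the mask, e.g.
-- l_date := TO_DATE(p_date_string, 'DD/MM/YYYY');
```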
The code is below:
<code>
CREATE OR REPLACE PACKAGE GRASE_WSDATEBUTOIR_PG is
/*
Traitement effectue par le package :
Appel procedure de recuperation date butoire pour ws sme avantage
--*/
--
-- C O N S T A N T E S P U B L I Q U E S
-- ============================================================================
P_CST_module CONSTANT STD_desc_PG.P_ST_module DEFAULT 'GRASE_WSDATEBUTOIR_PG';
P_CST_fonction_adm CONSTANT STD_desc_PG.P_ST_fonction_adm DEFAULT 'SERVICE';
P_CST_version CONSTANT STD_desc_PG.P_ST_version DEFAULT '26.2.0.01';
P_CST_version_date CONSTANT DATE DEFAULT '23/02/2026';
-- ============================================================================
--+Nom : proc_ws_datebutoir_pr
--+traitement : dans le cadre du ws sme avantage (creation montant)
--+Parametres : I_idpersonn IN co.graseuvp_pg.P_ST_identifiant_personne,
-- I_dtjour IN DATE,
-- I_dttraitement IN DATE,
--+Description : recupere la date butoire necessaire a la creation occ rrx1tmtavtg
-- ****************************************************************************
PROCEDURE ws_datebutoir_pr
(
I_idpersonn IN co.graseuvp_pg.P_ST_identifiant_personne,
I_cdsynretveu IN co.graseuvp_pg.P_ST_code_synthese,
I_termapayer IN co.graseuvp_pg.P_ST_code_terme_a_payer,
O_dtbutoir OUT co.graseuvp_pg.P_ST_date_butoire,
O_dtbutoir_gen OUT co.graseuvp_pg.P_ST_date_butoire,
O_dtbutoir_pts OUT co.graseuvp_pg.P_ST_date_butoire,
O_dtbutoir_cg OUT co.graseuvp_pg.P_ST_date_butoire,
O_code_retour OUT RRX1TERRPGM.coderetour%TYPE,
O_module OUT RRX1TERRPGM.module%TYPE,
O_errproc OUT RRX1TERRPGM.errproc%TYPE,
O_sqltra...
Categories: DBA Blogs
memoptimize for read and triggers
We don't seem to be able to MEMOPTIMIZE a table FOR READ when there are triggers on it (this seems to be undocumented, however?).
But: we can add the triggers later on without a problem, and the table continues to be memoptimized for read.
Question: WHY? Is it a bug?
It is a bit annoying, because when trying to take advantage of this feature we first have to drop the triggers on the table and then re-create them.
<code>
SQL> select banner from v$version where rownum=1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
SQL> drop table t;
Table dropped.
SQL> create table t( i int primary key, v varchar2(1000)) segment creation immediate;
Table created.
SQL> create trigger tr_t before insert on t for each row begin null; end tr_t;
2 /
Trigger created.
SQL> alter table t MEMOPTIMIZE for read;
alter table t MEMOPTIMIZE for read
*
ERROR at line 1:
ORA-62181: The MEMOPTIMIZE FOR READ feature is not supported on this table.
SQL> drop trigger tr_t;
Trigger dropped.
SQL> alter table t MEMOPTIMIZE for read;
Table altered.
SQL> create trigger tr_t before insert on t for each row begin null; end tr_t;
2 /
Trigger created.
SQL> select MEMOPTIMIZE_READ from user_tables where 'T'=table_name
2 /
MEMOPTIM
--------
ENABLED
SQL>
</code>
Categories: DBA Blogs
materialized view with union all not refreshed
<b>My goal</b> is to create a fast-refreshable materialized view (MV) that holds all the combinations of records in specific related tables, so also the 'incomplete combinations'. The query that is needed therefore contains a full outer join.
<b>First a disclaimer about using LiveSQL.</b>
I tried but got the next message:
"Not Accepting New Users
Please use our V2 product, Oracle Free SQL. Free SQL allows you to learn and share SQL using the latest version of the Oracle database for free! https://freesql.com/"
In this text I lay out a complete case with all the CREATE and INSERT statements needed (I hope it is sufficient).
<b>Update on this disclaimer (20260204):</b>
I started working on FreeSQL (instead of LiveSQL) and I am starting to understand how it works.
Small problem: <b>the first statement creating a materialized view fails because of insufficient privileges.</b>
This happened in FreeSQL db-version 19!
<b>Update on this disclaimer (20260206):</b>
I posted this in the freesql forum (https://forums.oracle.com/ords/apexds/post/ora-01031-insufficient-privileges-when-i-try-to-create-a-ma-4031). The answer was: "REFRESH FAST ON COMMIT isn't currently supported in FreeSQL, which is why you're seeing an ORA-01031. I've logged a ticket with the dev team to look into supporting this going forward!"
I also tried it in freesql db-version 23: no problem. Even further: The whole problem with creating materialized view logs with the clause "commit scn" is gone and the materialized views are correctly fast refreshed on commit!
As long as I do not have more information, to me it looks like REFRESH FAST ON COMMIT in combination with 'commit scn' in the create materialized view log doesn't work well in version 19c.
I posted this on january 5th, I look forward to an answer.
<b>Update on this disclaimer (20260209):</b>
Next reaction on my question in the freesql-forum:
In Oracle 19c, creating a materialized view with REFRESH FAST ON COMMIT requires the ON COMMIT REFRESH system privilege. That privilege isn't currently granted in the FreeSQL 19c environment, which is why you see ORA-01031 there.
This is why it works for you in FreeSQL 23ai. I've logged a ticket for us to review this difference and look into supporting this more consistently going forward. Thanks again for pointing this out!
<b>Conclusion</b>: It is (still) not possible to demonstrate this question in freesql.
</end disclaimer>
<b>How to achieve my goal</b>
As far as I know, it's not possible to code a full outer join directly into a fast-refreshable MV.
So my plan is this:
1) Every left or right outer join concerning the aforementioned tables is put into a MV with the (+)-notation.
2) These MV's can be used in next MV's.
3) The full outer join (the last step in the example I will give) is implemented as:
a) a MV holding the left outer join
b) a MV holding the right outer join
c) a MV that will 'union all' these two MV's together
d) a MV that picks out the unique rec...
Categories: DBA Blogs
Polymorphic Table Function: TIMESTAMP(9) column loses fractional-second precision (rounded to 6) and TO_CHAR(...FF9) returns zeros
I'm using a Polymorphic Table Function (PTF) and adding a new column TIMESTAMP_CALCULATED of type TIMESTAMP with fractional seconds scale 9. In FETCH_ROWS I populate it with a constant value:
2026-01-01 01:01:01.123456789
However, when selecting the PTF output, the value appears rounded to 6 fractional digits, e.g.:
2026-01-01T01:01:01.123457Z (rounded to microseconds)
Additionally, formatting it with:
<code>
to_char(TIMESTAMP_CALCULATED,'YYYY-MM-DD HH24:MI:SS.FF9')
</code>
produces:
2026-01-01 01:01:01.000000000
This is unexpected: even if the value was rounded to 6 digits, TO_CHAR(...FF9) should not become all zeros.
Reproducible test case (minimal script)
<code>
CREATE TABLE TIMESTAMP_INPUT(ID INTEGER, TIMESTAMP_VALUE TIMESTAMP(9));
CREATE OR REPLACE PACKAGE TIMESTAMP_PTF AS
FUNCTION describe (tab IN OUT DBMS_TF.table_t) RETURN DBMS_TF.describe_t;
PROCEDURE fetch_rows;
FUNCTION my_ptf(tab IN TABLE) RETURN TABLE PIPELINED ROW POLYMORPHIC USING TIMESTAMP_PTF;
END;
/
CREATE OR REPLACE PACKAGE BODY TIMESTAMP_PTF AS
FUNCTION describe (tab IN OUT DBMS_TF.table_t) RETURN DBMS_TF.describe_t IS
new_cols dbms_tf.columns_new_t;
BEGIN
new_cols(1) := dbms_tf.column_metadata_t(name => 'TIMESTAMP_CALCULATED', type => dbms_tf.type_timestamp, scale => 9);
RETURN dbms_tf.describe_t(new_columns => new_cols);
END describe;
PROCEDURE fetch_rows
as
v_rowset dbms_tf.row_set_t;
v_vals_rowset dbms_tf.row_set_t;
v_row_count binary_integer;
v_col_count binary_integer;
begin
dbms_tf.get_row_set(v_rowset, v_row_count, v_col_count);
for i in 1..v_row_count loop
v_vals_rowset(1).tab_timestamp(i) := TO_TIMESTAMP('2026-01-01 01:01:01.123456789','YYYY-MM-DD HH24:MI:SS.FF9');
end loop;
dbms_tf.put_row_set(v_vals_rowset);
end;
END;
/
DELETE TIMESTAMP_INPUT;
INSERT INTO TIMESTAMP_INPUT(ID, TIMESTAMP_VALUE)
VALUES(1, TO_TIMESTAMP('2026-01-01 01:01:01.123456789','YYYY-MM-DD HH24:MI:SS.FF9'));
INSERT INTO TIMESTAMP_INPUT(ID, TIMESTAMP_VALUE)
VALUES(2, TO_TIMESTAMP('2026-01-02 02:02:02.123456789','YYYY-MM-DD HH24:MI:SS.FF9'));
INSERT INTO TIMESTAMP_INPUT(ID, TIMESTAMP_VALUE)
VALUES(3, TO_TIMESTAMP('2026-01-03 03:03:03.123456789','YYYY-MM-DD HH24:MI:SS.FF9'));
COMMIT;
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9';
ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF9 TZH:TZM';
WITH BASE_DATA AS(
SELECT ID, TIMESTAMP_VALUE FROM TIMESTAMP_INPUT
)
SELECT id,
to_char(TIMESTAMP_VALUE,'YYYY-MM-DD HH24:MI:SS.FF9') TIMESTAMP_VALUE_STR,
DUMP(TIMESTAMP_VALUE) dump_TIMESTAMP_VALUE,
TIMESTAMP_VALUE,
to_char(TIMESTAMP_CALCULATED,'YYYY-MM-DD HH24:MI:SS.FF9') TIMESTAMP_CALCULATED_STR,
DUMP(TIMESTAMP_CALCULATED) DUMP_TIMESTAMP_CALCULATED,
TIMESTAMP_CALCULATED
FROM TIMESTAMP_PTF.my_ptf(BASE_DATA) pf;
drop table TIMESTAMP_INPUT;
drop package TI...
Categories: DBA Blogs
complex crosstab report
Hi all,
I have a hierarchy between levels in my_hierarchy table (between id_lvl and id_lvl_sup).
Each level (lvl) may have items (id_item), and each item has a category (id_item_sup). This data is stored in the my_items table.
Now I want to build a report based on the my_view view.
In rows, we have id_lvl1 and id_lvl2.
In columns, we have id_item_sup (1st level) and id_item (2nd level).
In cells, we have the total of items for each id_lvl2.
I want to transform the definition of my_view to "normalize" the number of items that appears in each id_item_sup.
For example, id_item_sup = 2001, we have 4 items.
<code> ID_H ID_ITEM_SUP ORD_ITEM_SUP NB_ITEM_PER_ITEM_SUP
---------- ----------- ------------ --------------------
1000 1002 1002 11
1000 2001 2001 4
1000 2100 2100 5
1000 2200 2200 40
1000 3001 3001 25
1000 3500 3500 13
1000 4001 4001 1
1000 6001 6001 9
1000 7001 7001 3
1000 8001 8001 9
1000 0</code>
I want to "pad" each id_item_sup to reach 10 items (the p_nb_item_per_grp column in the view).
The padding consists of adding items with lib_item = '----' after the existing items.
<code> ID_H ID_ITEM_SUP ORD_ITEM_SUP ID_ITEM LIB_I ORD_ITEM NB
---------- ----------- ------------ ---------- ----- ---------- ----------
1000 2001 2001 2002 2002 2002 47
1000 2001 2001 2003 2003 2003 501
1000 2001 2001 2004 2004 2004 3
1000 2001 2001 2007 2007 2007 3
1000 2001 2001 9999 ---- 9999 0
1000 2001 2001 9999 ---- 9999 0
1000 2001 2001 9999 ---- 9999 0
1000 2001 2001 9999 ---- 9999 0
1000 2001 2001 9999 ---- 9999 0
1000 2001 2001 9999 ---- 9999 0</code>
Then we have two cases:
- number of items < 10: we pad items to reach 10 items;
- number of items >= 10: we create blocks of 10 items, and in every block other than the first we append " next" to lib_item_sup.
For example, if the number of items is 11, we create two blocks:
- the 1st one, with 10 items;
- the 2nd one, with one item padded with 9 items with lib '----', and " next" concatenated to lib_item_sup.
Another example, number of i...
Categories: DBA Blogs
elegant Solution for HTML-User Input
Hi Tom,
I have a web form that can display data to users and read user input back into the database. Reading user input back into the database is essential, which is why I did not use out-of-the-box solutions like "owa_utils.read SQL into table". However, I need a ton of parameters to work with, and as the project grows I would like to modularize and carry fewer parameters between the procedures. ChatGPT told me that context variables would be nice and that I could read them via owa_utils, but I did not get this running. Do you have any solution that allows me to modularize and clean up my code?
Within the FreeSQL link there is a smaller example of what I am currently doing. This code works in my schema to run a page hosted on my server, except for the page link: I only need to replace g_page_link with my actual server and schema names and the code will work in my schema. Unfortunately, I do not know how to generalize this page link with FreeSQL, and I do not know a server that I could call from there.
To make that clear again: I need help with managing user input the moment I split my current procedure into a lot of smaller procedures. I would like to avoid passing 20+ IN OUT parameters between multiple procedures. Can you help me?
Best, Peter
Categories: DBA Blogs
How to get a list of internal (e.g. SYS) PL/SQL package subprograms called in an application PL/SQL package?
I would like to be able to analyze a large PL/SQL code base to identify the internal PL/SQL subprograms called in PL/SQL application code.
I know that I can get package-level dependencies from DBA_DEPENDENCIES and that PL/Scope should give subprogram-level information.
But I cannot make it work with Oracle 26 Free.
Here my scenario:
<code>
SQL>
SQL> alter session set plscope_settings='identifiers:all, statements:all';
Session altered.
SQL> show parameter PLSCOPE
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
plscope_settings string identifiers:all, statements:al
l
SQL>
SQL> CREATE OR REPLACE PACKAGE pcaller AS
2 -- This must be in the spec to be visible to other packages
3 FUNCTION fcaller(param bool) RETURN BOOLEAN;
4 END pcaller;
5 /
Package created.
SQL> show errors
No errors.
SQL>
SQL> CREATE OR REPLACE PACKAGE BODY pcaller AS
2 FUNCTION fcaller(param bool) RETURN BOOLEAN IS
3 BEGIN
4 DBMS_OUTPUT.PUT_LINE('called');
5 DBMS_LOCK.SLEEP(1);
6 SYS.DBMS_RANDOM.SEED(2);
7 RETURN false;
8 END fcaller;
9 END pcaller;
10 /
Package body created.
SQL> show errors
No errors.
SQL> --
SQL> show errors
No errors.
SQL> --
SQL> -- list dependencies
SQL> -- from PL/SQL objects (including triggers) belonging to application schemas
SQL> -- on Oracle internal PL/SQL packages (including owner is 'PUBLIC' but excluding STANDARD package)
SQL> --
SQL> set linesize 200
SQL> --
SQL> col owner format a30
SQL> col name format a30
SQL> col referenced_owner format a30
SQL> col referenced_name format a30
SQL> col object_type format a30
SQL> select d.owner, d.name, o1.object_type, d.referenced_owner, d.referenced_name, o2.object_type
2 from dba_dependencies d
3 join dba_objects o2 on d.referenced_owner = o2.owner and d.referenced_name = o2.object_name
4 join dba_objects o1 on d.owner = o1.owner and d.name = o1.object_name
5 where o1.object_type in ('PACKAGE', 'PACKAGE BODY', 'TRIGGER', 'PROCEDURE', 'FUNCTION') and d.owner in (select username from dba_users where oracle_maintained='N')
6 and o2.object_type in ('PACKAGE', 'PACKAGE BODY','PROCEDURE','FUNCTION', 'SYNONYM')
7 and referenced_name <> 'STANDARD'
8 and d.referenced_owner in (select username from dba_users where oracle_maintained='Y'
9 union select 'PUBLIC' from dual)
10 order by d.owner, d.name, d.referenced_owner, d.referenced_name
11 ;
OWNER NAME OBJECT_TYPE REFERENCED_OWNER REFERENCED_NAME OBJECT_TYPE
------------------------------ ------------------------------ ------------------------------ ------------------------------ ------------------------------ ------------------------------
TEST PCALLER PACKAGE BODY PUBLIC DBMS_LOCK SYNONYM
TEST PCALLER PACKAGE PUBLIC DBMS_LOCK SYNONYM
TEST PCALLER PACKAGE BODY PUBLIC DBMS_OUTPUT SYNONYM
TEST PCALLER PACKAGE PUBLIC DBMS_OUTPUT SYNONYM
TEST PCALLER PACKAGE SYS DBMS_RANDOM PACKAGE BODY
TEST PCALLER PACKAGE BODY SYS DBMS_RANDOM PACKAGE BOD...
Categories: DBA Blogs
Usage and limitations on PART_NAME parameter in DBMS_REDEFINITION APIs
Hello Connor/Chris,
Hope you are doing well.
Recently, I came to know about the PART_NAME parameter for multiple APIs of DBMS_REDEFINITION package from the documentation. However, I could not find much about what kind of changes are permitted (and prevented) at partition level, either in the documentation or in any MOS notes.
The documentation states:
<i>part_name
Name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined. Can take a comma-delimited list of partition names to be redefined.
</i>
This led me to believe that I might be able to leverage this feature to perform a large update of a partitioned table column, one partition at a time, using DBMS_REDEFINITION. However, this does not appear to work: it fails with an error during the START_REDEF_TABLE API call.
Would you be able to help in shedding more light on what is the purpose of PART_NAME parameter in DBMS_REDEFINITION APIs?
Below is my set up to reproduce the issue
<code>DROP TABLE TBL_DATEPART PURGE;
DROP TABLE TBL_TSPART PURGE;
CREATE TABLE TBL_DATEPART
(
OBJECT_ID NUMBER NOT NULL
, OWNER VARCHAR2(50) NOT NULL
, OBJECT_TYPE VARCHAR2(50) NOT NULL
, OBJECT_NAME VARCHAR2(200) NOT NULL
, CREATED DATE NOT NULL
, LAST_DDL_TIME DATE NOT NULL
, CONSTRAINT TBL_DATEPART_PK PRIMARY KEY(OBJECT_ID) ENABLE
)
PARTITION BY RANGE (LAST_DDL_TIME)
(
PARTITION Y2019 VALUES LESS THAN (TO_DATE('01-JAN-2020','DD-MON-YYYY'))
, PARTITION Y2020 VALUES LESS THAN (TO_DATE('01-JAN-2021','DD-MON-YYYY'))
, PARTITION Y2021 VALUES LESS THAN (TO_DATE('01-JAN-2022','DD-MON-YYYY'))
, PARTITION YMAX VALUES LESS THAN (MAXVALUE)
);
insert into TBL_DATEPART (object_id, owner, object_type, object_name, created, last_ddl_time)
select object_id, owner, object_type, object_name, created, last_ddl_time
from dba_objects
where last_ddl_time is not null ;
commit;
ALTER TABLE TBL_DATEPART ADD last_ddl_ts TIMESTAMP ;
CREATE TABLE TBL_TSPART
(
OBJECT_ID NUMBER
, OWNER VARCHAR2(50)
, OBJECT_TYPE VARCHAR2(50)
, OBJECT_NAME VARCHAR2(200)
, CREATED DATE
, LAST_DDL_TIME DATE
, LAST_DDL_TS TIMESTAMP
)
PARTITION BY RANGE (LAST_DDL_TIME)
(
PARTITION Y2019 VALUES LESS THAN (TO_TIMESTAMP('01-JAN-2020','DD-MON-YYYY'))
, PARTITION Y2020 VALUES LESS THAN (TO_TIMESTAMP('01-JAN-2021','DD-MON-YYYY'))
, PARTITION Y2021 VALUES LESS THAN (TO_TIMESTAMP('01-JAN-2022','DD-MON-YYYY'))
, PARTITION YMAX VALUES LESS THAN (MAXVALUE)
);
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE(uname => 'NP', tname => 'TBL_DATEPART', options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/
BEGIN
-- Below fails. My objective is to be able to update the value of newly added column based on an existing column.
DBMS_REDEFINITION.START_REDEF_TABLE(uname => 'NP', orig_table => 'TBL_DATEPART', int_table => 'TBL_TSPART', col_mapping => ...
Categories: DBA Blogs
Certain event set or not
Hi Tom
How do I confirm that the following event has been set:
SQL>alter system set events '12899 trace name errorstack level 3';
?
ORA-12899 has occurred after the event was set, but no trace file is found in the udump directory. The event was not set by me, and I am trying to confirm whether it is set or not.
The database is NOT using spfile (sorry !).
I have tried:
SQL>set serveroutput on
SQL>declare
event_level number;
begin
for i in 12000..12999 loop
sys.dbms_system.read_ev(i,event_level);
if (event_level > 0) then
dbms_output.put_line('Event '||to_char(i)||' set at level '||
to_char(event_level));
end if;
end loop;
end;
/
I only get "PL/SQL procedure successfully completed".
On another database I tried running the above PL/SQL, set the event and run the PL/SQL again. All I get back from the database is: "PL/SQL procedure successfully completed".
Can I use dbms_system.read_ev for this at all ?
Thanks,
Frank
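For what it's worth, DBMS_SYSTEM.READ_EV reads events set in the current session only, so it can miss a system-level setting. A hedged alternative, from SQL*Plus connected AS SYSDBA, is oradebug:

```sql
-- These are SQL*Plus-level oradebug commands, not SQL statements:
oradebug setmypid
oradebug eventdump system    -- events set at system level
oradebug eventdump session   -- events set in the current session
```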
Categories: DBA Blogs
ora-4068
HI TOM,
Thanks for this opportunity again. Can you explain how ORA-4068 occurs and how I can go about it?
How does Oracle manage the freelist (8i) on LMTs? And how can we track the free space, used space, and fragmentation on LMTs?
What's the difference between creating an index with the parallel option and not in parallel mode?
TNX
NOTNA
Categories: DBA Blogs
Partition by Expression
Oracle 26.1 now allows using an Expression in the Partitioning Clause. In earlier releases you could achieve this with a Virtual Column defined in the table; 26.1 no longer requires the Virtual Column when the expression is used only for partitioning.
Here is a demo in 19.12 comparing the two methods :
-- using a Virtual Column "ORDER_MONTH"
SQL> l
1 CREATE TABLE ORDERS_2026_old (
2 order_id NUMBER NOT NULL,
3 order_date DATE NOT NULL,
4 customer_id NUMBER,
5 order_month NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR(order_date, 'MM'))) VIRTUAL
6 )
7 PARTITION BY RANGE (order_month)
8 (
9 PARTITION orders_q1 VALUES LESS THAN (4), -- Months 1, 2, 3
10 PARTITION orders_q2 VALUES LESS THAN (7), -- Months 4, 5, 6
11 PARTITION orders_q3 VALUES LESS THAN (10), -- Months 7, 8, 9
12 PARTITION orders_q4 VALUES LESS THAN (13) -- Months 10, 11, 12
13* )
SQL> /
Table created.
SQL>
-- try the Expression in the Partitioning Clause
SQL> l
1 CREATE TABLE ORDERS_2026_new (
2 order_id NUMBER NOT NULL,
3 order_date DATE NOT NULL,
4 customer_id NUMBER
5 )
6 PARTITION BY RANGE ((TO_NUMBER(TO_CHAR(order_date, 'MM'))))
7 (
8 PARTITION orders_q1 VALUES LESS THAN (4), -- Months 1, 2, 3
9 PARTITION orders_q2 VALUES LESS THAN (7), -- Months 4, 5, 6
10 PARTITION orders_q3 VALUES LESS THAN (10), -- Months 7, 8, 9
11 PARTITION orders_q4 VALUES LESS THAN (13) -- Months 10, 11, 12
12* )
SQL> /
PARTITION BY RANGE ((TO_NUMBER(TO_CHAR(order_date, 'MM'))))
*
ERROR at line 6:
ORA-00904: : invalid identifier
SQL>
And now in 26.1 :
-- using the Virtual Column "ORDER_MONTH", works as expected
SQL> l
1 CREATE TABLE ORDERS_2026_old (
2 order_id NUMBER NOT NULL,
3 order_date DATE NOT NULL,
4 customer_id NUMBER,
5 order_month NUMBER GENERATED ALWAYS AS (TO_NUMBER(TO_CHAR(order_date, 'MM'))) VIRTUAL
6 )
7 PARTITION BY RANGE (order_month)
8 (
9 PARTITION orders_q1 VALUES LESS THAN (4), -- Months 1, 2, 3
10 PARTITION orders_q2 VALUES LESS THAN (7), -- Months 4, 5, 6
11 PARTITION orders_q3 VALUES LESS THAN (10), -- Months 7, 8, 9
12 PARTITION orders_q4 VALUES LESS THAN (13) -- Months 10, 11, 12
13* )
SQL> /
Table created.
SQL>
-- using the Expression for the Partitioning Clause
SQL> l
1 CREATE TABLE ORDERS_2026_NEW (
2 order_id NUMBER NOT NULL,
3 order_date DATE NOT NULL,
4 customer_id NUMBER
5 )
6 PARTITION BY RANGE ((TO_NUMBER(TO_CHAR(order_date, 'MM'))))
7 (
8 PARTITION orders_q1 VALUES LESS THAN (4), -- Months 1, 2, 3
9 PARTITION orders_q2 VALUES LESS THAN (7), -- Months 4, 5, 6
10 PARTITION orders_q3 VALUES LESS THAN (10), -- Months 7, 8, 9
11 PARTITION orders_q4 VALUES LESS THAN (13) -- Months 10, 11, 12
12* )
SQL> /
Table created.
SQL>
SQL> insert into orders_2026_new values (1, sysdate, 1);
1 row created.
SQL>
SQL> insert into orders_2026_new values (2, to_date('15-JUN-2026','DD-MON-RRRR'),2);
1 row created.
SQL> commit;
Commit complete.
SQL>
-- gather statistics
SQL> exec dbms_stats.gather_table_stats('','ORDERS_2026_NEW');
PL/SQL procedure successfully completed.
SQL>
-- check number of rows in each partition
SQL> col partition_name format a32
SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'ORDERS_2026_NEW'
4 /
PARTITION_NAME NUM_ROWS
-------------------------------- ----------
ORDERS_Q1 1
ORDERS_Q2 1
ORDERS_Q3 0
ORDERS_Q4 0
SQL>
-- check rows in the specific partitions
-- get all the rows where the month number is less than 4
SQL> select * from orders_2026_new
2 partition for (3)
3 /
ORDER_ID ORDER_DATE CUSTOMER_ID
---------- ------------------ -----------
1 15-FEB-26 1
SQL>
-- get all the rows where the month number is less than 7
SQL> select * from orders_2026_new
2 partition for (6)
3 /
ORDER_ID ORDER_DATE CUSTOMER_ID
---------- ------------------ -----------
2 15-JUN-26 2
SQL>
Since partitioning is by Month Number (which would have been the Virtual Column ORDER_MONTH in the 19c example), I can use "PARTITION FOR (<month_number>)" in my SELECT statement to query a specific partition.
This confirms that ORDER_ID 1 is in the first partition (Month Number less than 4) and ORDER_ID 2 is in the second (Month Number less than 7).
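A predicate written on the same expression should also let the optimizer prune to the matching partitions; a sketch against the demo table (not run here):
SQL> select * from orders_2026_new
  2  where to_number(to_char(order_date, 'MM')) < 4
  3  /
Checking the execution plan's PSTART/PSTOP columns would confirm whether pruning actually occurs.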
Categories: DBA Blogs