Feed aggregator
Merge records with consecutive numbers (start_val, end_val) for the same id into one record, and start a new record whenever the consecutive sequence breaks.
https://livesql.oracle.com/next/?compressed_code=H4sIAAAAAAAACo3PwQqCQBCA4fvCvsPcVBjBXTWLboXHCoLOMupCgm2wO%252Fr8oXbo1jKn%252BflgmM4ZYgNM7WiAjeemX3YpYikAAIYeZnLdk5yOVZbgVj2T42amEa6Py6m%252Bf7Ox%252FRrBTq%252FWOCmSoxTbDNYbxzBYfv%252BcmWmcjI8hoggV5gv%252FDwvchcEKVRYmlUZVBtIdqkMY1RlqHUbzEvMqkO6xWN9K0%252Fp2%252FgB1bHIywAEAAA%253D%253D&code_language=PL_SQL&code_format=false
<code>create table test_date
(
id varchar2(10),
start_val NUMBER,
end_val number
);</code>
Input data:
<code>insert into test_date values( 'a',1,3);
insert into test_date values( 'a',4,6);
insert into test_date values( 'a',7,10);
insert into test_date values( 'a',12,15);
insert into test_date values( 'a',16,19);
insert into test_date values( 'a',20,22);
insert into test_date values( 'a',35,37);
insert into test_date values( 'a',38,40);</code>
Output data:
'a', 1, 10
'a', 12, 19
'a', 35, 40
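For reference, here is a sketch of one way to get that output (assuming Oracle 12c or later for MATCH_RECOGNIZE; a row extends the current group when its start_val is exactly the previous row's end_val + 1):
<code>-- Collapse runs of consecutive (start_val, end_val) ranges per id.
-- A group continues while start_val = previous end_val + 1.
select id, grp_start, grp_end
from test_date
match_recognize (
  partition by id
  order by start_val
  measures first(start_val) as grp_start,
           last(end_val)    as grp_end
  pattern (strt nxt*)
  define nxt as start_val = prev(end_val) + 1
);</code>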
Why does DML on a materialized view's master table cause the materialized view's staleness to change from FRESH to NEEDS_COMPILE? I would have thought it would change to STALE. My understanding of NEEDS_COMPILE is that it is supposed to reflect structural changes (DDL), so I must have a gap in my understanding, because I am getting NEEDS_COMPILE when performing only DML (no DDL).
<code>SQL>column mview_name format a20
SQL>create table T ( id NUMBER
2 )
3 /
Table created.
SQL>insert into T ( id ) values ( 1 )
2 /
1 row created.
SQL>create materialized view T_MV
2 ( id
3 )
4 as select id
5 from T
6 /
Materialized view created.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>insert into T ( id ) values ( 2 )
2 /
1 row created.
SQL>commit
2 /
Commit complete.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV NEEDS_COMPILE NEEDS_COMPILE COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>begin
2 dbms_snapshot.refresh( list => user || '.T_MV'
3 , purge_option => 2
4 , atomic_refresh => false
5 , out_of_place => true
6 )
7 ;
8 end;
9 /
PL/SQL procedure successfully completed.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
2
SQL>insert into T ( id ) values ( 3 )
2 /
1 row created.
SQL>commit
2 /
Commit complete.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV NEEDS_COMPILE NEEDS_COMPILE ...</code>
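As a follow-up experiment (a hedged suggestion, worth verifying on your version): recompiling the materialized view recomputes its metadata without refreshing the data, so you can see what staleness the DML actually produced once the NEEDS_COMPILE flag is cleared:
<code>-- Recompile only; no refresh is performed.
alter materialized view T_MV compile;

select mview_name, staleness, compile_state
from   user_mviews
where  mview_name = 'T_MV';</code>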
It appears USING TRUSTED CONSTRAINTS causes a materialized view to have staleness=UNKNOWN when it would otherwise be FRESH.
Is it possible to have a materialized view with staleness=FRESH when USING TRUSTED CONSTRAINTS?
If not, would the optimizer be less likely to consider a materialized view with staleness=UNKNOWN for query rewrite if query_rewrite_integrity=TRUSTED and query_rewrite_enabled=TRUE? How about if query_rewrite_integrity=ENFORCED?
<code>SQL>column mview_name format a20
SQL>create table T ( id NUMBER
2 )
3 /
Table created.
SQL>insert into T ( id ) values ( 1 )
2 /
1 row created.
SQL>create materialized view T_MV
2 ( id
3 )
4 -- refresh using trusted constraints
5 as select id
6 from T
7 /
Materialized view created.
SQL>create materialized view T_trusted_MV
2 ( id
3 )
4 refresh using trusted constraints
5 as select id
6 from T
7 /
Materialized view created.
SQL>
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
T_TRUSTED_MV UNKNOWN VALID COMPLETE
SQL>begin
2 dbms_snapshot.refresh( list => user || '.T_MV'
3 , purge_option => 2
4 , atomic_refresh => false
5 , out_of_place => true
6 )
7 ;
8 dbms_snapshot.refresh( list => user || '.T_TRUSTED_MV'
9 , purge_option => 2
10 , atomic_refresh => false
11 , out_of_place => true
12 )
13 ;
14 end;
15 /
PL/SQL procedure successfully completed.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
T_TRUSTED_MV UNKNOWN VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>select * from T_trusted_MV order by id
2 /
ID
-----------
1
SQL>drop materialized view T_MV
2 /
Materialized ...</code>
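On the query-rewrite part of the question, a hedged way to test it directly, rather than inferring from STALENESS, is DBMS_MVIEW.EXPLAIN_REWRITE (this assumes the REWRITE_TABLE has been created with $ORACLE_HOME/rdbms/admin/utlxrw.sql):
<code>-- Ask the optimizer whether it would rewrite this query against the MV,
-- and why or why not; repeat under each query_rewrite_integrity setting.
begin
  dbms_mview.explain_rewrite(
    query        => 'select id from T',
    mv           => 'T_TRUSTED_MV',
    statement_id => 'trusted_test');
end;
/

select message
from   rewrite_table
where  statement_id = 'trusted_test';</code>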
Hi Tom.
Is it possible to return rows from a table in the order that they were inserted?
I have an old query, pre 8.1.6, that always returned rows in the order they were inserted. Now, in versions 8.1.7 and 9, they are returned in a seemingly random order.
Thanks
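For context, a heap table has no inherent order; the only reliable way to get insertion order back is to store it and ORDER BY it. A minimal sketch (the table and sequence names are made up for illustration):
<code>-- Record the insertion order explicitly with a sequence,
-- then ask for it back with ORDER BY.
create sequence t_seq;

create table t ( ins_order number, payload varchar2(100) );

insert into t values ( t_seq.nextval, 'first' );
insert into t values ( t_seq.nextval, 'second' );

select payload
from   t
order  by ins_order;  -- without ORDER BY, no order is guaranteed</code>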
Hi tom,
I have a table with basic compression that also has partitions and subpartitions; you can see the CREATE DDL below. As you can see, every one of my objects is either compressed or nocompressed, but I can't see this information in ALL_TABLES: COMPRESSION and COMPRESS_FOR come back NULL. Why, when the DDL shows the table is compressed? What am I missing here?
<code>CREATE TABLE EFSSALES1
(
sale_id NUMBER,
sale_date DATE,
region VARCHAR2(10),
amount NUMBER
)
COMPRESS BASIC
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (region)
(
PARTITION sales_2024 VALUES LESS THAN (TO_DATE('01-JAN-2025','DD-MON-YYYY'))
(
SUBPARTITION sp_east1 COMPRESS FOR OLTP,
SUBPARTITION sp_west1 NOCOMPRESS
),
PARTITION sales_2025 VALUES LESS THAN (TO_DATE('01-JAN-2026','DD-MON-YYYY'))
(
SUBPARTITION sp_east2 COMPRESS FOR OLTP,
SUBPARTITION sp_west2 COMPRESS FOR OLTP
)
);
SELECT compression, COMPRESS_FOR FROM all_tables WHERE table_name = 'EFSSALES1' AND owner='COPYCATLIVE'</code>
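For what it's worth, a likely place to look (a hedged suggestion): for a partitioned table, the compression attributes are tracked per partition and subpartition rather than at the table level, so the dictionary views to query would be:
<code>-- Partition-level and subpartition-level compression attributes.
SELECT partition_name, compression, compress_for
FROM   all_tab_partitions
WHERE  table_owner = 'COPYCATLIVE' AND table_name = 'EFSSALES1';

SELECT subpartition_name, compression, compress_for
FROM   all_tab_subpartitions
WHERE  table_owner = 'COPYCATLIVE' AND table_name = 'EFSSALES1';</code>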
Hello Tom,
I am wondering what is the benefit of using a recovery catalog over a control file? What are some of the decisions one should consider before going either way?
Please and thank you
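To make the comparison concrete, here is a hedged sketch of what using a catalog looks like in practice (the connect strings and names are assumptions):
<code>-- Connect to the target database and a separate recovery catalog, then
-- register the target so its backup metadata is kept in the catalog
-- (and retained beyond what CONTROL_FILE_RECORD_KEEP_TIME allows).
rman target sys@proddb catalog rcat_owner@catdb

RMAN> REGISTER DATABASE;
RMAN> REPORT SCHEMA;</code>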
I am using Windows 11 64-bit / Oracle 19c. Everything works fine except when I create a nested table type and associate it with my table as a column. I can neither query the table nor drop it. When I query it, I get a message that the connection to the database was reset, and when I try to drop it I get:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound], [78561], [], [], [], [], [], [], [], [], [], []
00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
*Cause: This is the generic internal error number for Oracle program
exceptions. It indicates that a process has encountered a low-level,
unexpected condition. The first argument is the internal message
number. This argument and the database version number are critical in
identifying the root cause and the potential impact to your system.
How do I resolve this?
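For reference, a hedged sketch of the kind of DDL being described (all names here are assumptions, not the poster's actual objects). Since ORA-00600 is an internal error, the usual resolution path is a service request with Oracle Support along with the trace files, and a small repro like this helps pin it down:
<code>-- Hypothetical repro of the scenario described.
create or replace type num_list_t as table of number;
/

create table t_nested (
  id   number,
  vals num_list_t
) nested table vals store as vals_nt;

insert into t_nested values ( 1, num_list_t(1, 2, 3) );

select * from t_nested;   -- reported to reset the connection
drop table t_nested;      -- reported to raise the ORA-00600</code>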
I encountered an inexplicable freeze of a PL/SQL procedure while fetching and inserting data into a table; the same operation runs successfully when executed as a plain SQL statement from any IDE.
I am implementing an ETL process in a data warehouse database.
I need to load data from a remote database's table into a local one. I have low privileges on the remote database, meaning I can't tune the network or create materialized view logs there.
The target table is 'long': the daily volume to transfer is about one million rows, 7 columns wide, with mostly numeric data and no large objects.
I created a db link and a simple view that represents the remote table in the local database, and granted select privilege on it to the <code>'DATAW'</code> schema designed for data storage.
In <code>'DATAW'</code> I created a procedure for merging the data into an identical local table. Something like this:
<code>
create procedure merge_t_data
  ( p_days_offset in number )
as
begin
  merge /*+ APPEND */ into t_data_local tgt
  using (
          select col_id, col2, ..., col7
          from   pdbadmin.v_data_remote
          where  updated_at >= trunc(sysdate) - interval '1' day * p_days_offset
        ) src
  on ( tgt.col_id = src.col_id )
  when matched then update
    set ...
  when not matched then
    insert
    ...;
end;
</code>
When I run the procedure, the session sits on the wait event 'SQL*Net message from dblink' or 'SQL*Net more data from dblink', which stays the same forever.
When I check incoming traffic on the server while the procedure is running, I see that there is none at all.
<b>HOWEVER</b>
When I run the same merge operation as a standalone query:
<code>
merge /*+ APPEND */ into t_data_local tgt
using (
        select col_id, col2, ..., col7
        from   pdbadmin.v_data_remote
        where  updated_at >= trunc(sysdate) - interval '1' day * 3
      ) src
on ( tgt.col_id = src.col_id )
when matched then update
  set ...
when not matched then
  insert
  ...;
</code>
it runs successfully: I see incoming traffic of up to 2 MB, and the query finishes after roughly 10 minutes.
I am the only user of the database; no one else works with it for now.
I have also tried inserting the data into a temporary table, fetching with a bulk-collect cursor, and running the statement via execute immediate inside the procedure; the result was the same: execution freezes.
Also worth mentioning: I successfully implemented the ETL process for a second table. It is much wider (about 50k rows per day across more than 20 columns), and I did it with a similar merge procedure that, unlike the one above, runs successfully.
I want to know whether it is possible to get my merge procedure working for the 'long' table, or what other solutions to this problem might be.
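One thing worth trying (a hedged sketch, not a definitive fix; all staging names are assumptions): since the literal version works and the bind-variable version hangs, the bind may be changing the distributed plan so that the filter is not pushed to the remote site. Splitting the work into a plain filtered pull into a local staging table, followed by a purely local merge, keeps the distributed statement as simple as possible. You mention a temporary-table attempt also froze, so treat this strictly as a way to isolate whether the remote pull or the merge is what hangs:
<code>
-- One-time setup: a session-private staging table with the same shape
-- as the remote view (hypothetical names).
create global temporary table t_data_stage
  on commit preserve rows
  as select * from pdbadmin.v_data_remote where 1 = 0;

create or replace procedure merge_t_data ( p_days_offset in number )
as
begin
  execute immediate 'truncate table t_data_stage';

  -- Step 1: the only distributed statement is a simple filtered pull.
  insert into t_data_stage
  select *
  from   pdbadmin.v_data_remote
  where  updated_at >= trunc(sysdate) - numtodsinterval(p_days_offset, 'DAY');

  -- Step 2: a purely local merge against the staging table
  -- (column lists shortened for the sketch).
  merge into t_data_local tgt
  using t_data_stage src
  on ( tgt.col_id = src.col_id )
  when matched then update set tgt.col2 = src.col2
  when not matched then insert ( col_id, col2 )
                        values ( src.col_id, src.col2 );

  commit;
end;
/
</code>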
Hey Tom,
I have a Java application that runs thousands of different types of queries against an Oracle database millions of times. I wanted to save the query preparation time by using the oracle.jdbc.implicitStatementCacheSize JDBC property to cache prepared queries. But I easily end up with an error ORA-01000: maximum open cursors exceeded, even when running a single query at a time and reading it to completion.
In my mind, an open cursor represents a way to scroll through the results of a query via communication with the database server. I don't immediately see a correlation between a statement and a cursor beyond the idea that the statement yields a cursor when executed. But it appears to be deeper than that in the Oracle JDBC driver.
See the following example code that can quickly reproduce what I am experiencing:
<code>
import java.sql.*;
import java.util.Properties;

public class OracleCursorExhaustionThroughStatementCaching
{
public static void main(String[] args)
{
try
{
Class.forName("oracle.jdbc.driver.OracleDriver");
final Properties props = new Properties();
props.put("user", "scott");
props.put("password", "tiger");
// OPEN_CURSORS on the remote system is currently set to 300. Just run more unique queries than there are cursors to reproduce.
// This cache should only be holding statement information and cursors should only be used during an individual query
props.put("oracle.jdbc.implicitStatementCacheSize", "1500"); // commenting/removing this line allows this test to run without error
try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@someserver:1521/mydb", props))
{
DatabaseMetaData meta = c.getMetaData();
System.out.println("Product: " + meta.getDatabaseProductName());
System.out.println("Version: " + meta.getDatabaseProductVersion());
System.out.println("Driver: " + meta.getDriverVersion());
System.out.println("JVM Version: " + System.getProperty("java.runtime.version"));
for(int i = 0; i < 1000; i++)
{
// Each statement will be closed after executing
try(PreparedStatement ps = c.prepareStatement("select " + i + " from dual")) // for demo a unique query against dual is enough
{
// Being explicit with closing the result set after execution because logically this is the end of the cursor. (Statement close closes it anyway)
try(ResultSet rs = ps.executeQuery())
{
while(rs.next())
; // just read each result set fully, which should bring the cursor to its end
}
}
}
}
} catch(Exception ex)
{
ex.printStackTrace();
}
}
}
</code>
So on my machine and database this code yields the following:
Product: Oracle
Version: Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.22.0.0.0
Driver: 23.8.0.25.04
JVM Vers...
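For what it's worth, a hedged way to watch this from the database side (assumes access to the V$ views; the schema name matches the example): closing a cached statement returns it to the cache with its server-side cursor still open, so the session's open-cursor count would be expected to track the cache size, not the single in-flight query:
<code>-- Count cursors currently open per session for the test user.
select s.sid, s.username, count(*) as open_cursors
from   v$open_cursor oc
join   v$session s on s.sid = oc.sid
where  s.username = 'SCOTT'
group  by s.sid, s.username;</code>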
The 21st anniversary of this blog is coming up on the 20th September 2025. I started this blog on the 20th of September 2004 and it has been doing well ever since. I write almost exclusively on the subject of.... Posted by Pete On 10/09/25 At 11:55 AM
I did a five part series on the security of AUDSYS.AUD$UNIFIED and showed how it works at a functional level and how it might be secured by Oracle and how we might design a similar system using standard database license.... Posted by Pete On 09/09/25 At 08:56 AM
This is the next part of the series looking at the AUDSYS schema and AUD$UNIFIED table that Oracle has created and protected. In the first part we explored what AUDSYS and AUD$UNIFIED looks like in terms of security; in part.... Posted by Pete On 03/09/25 At 11:38 AM
This is the next part (4th part) of the series exploring the AUDSYS schema and AUD$UNIFIED table that is READONLY with a lot of INSERTING and sometimes deleting. In the first part we explored the AUDSYS schema and the AUD$UNIFIED.... Posted by Pete On 26/08/25 At 08:54 AM
In the two parts of this series on the security of AUDSYS.AUD$UNIFIED we looked at the main security features of the AUDSYS user and the AUD$UNIFIED table so that we could imagine using these same features ourselves. I have taught.... Posted by Pete On 19/08/25 At 12:03 PM
In Oracle a user is the same as a schema - well not 100% true - so lets explain a bit. Firstly at a logical high level in Oracle a user is an account used by a real person to.... Posted by Pete On 14/08/25 At 08:38 AM
In the recent blog first part of this series on the security of AUDSYS.AUD$UNIFIED we looked at the main features of the AUDSYS user and its ability or design to stop anyone from randomly deleting or updating or doing DDL.... Posted by Pete On 12/08/25 At 12:52 PM
I was emailed by Cameron overnight to tell me that he has written a new unwrapper for PL/SQL. There have been no public unwrappers for 9ir2 and lower available on the internet for a long time. There were a number.... Posted by Pete On 07/08/25 At 10:00 AM
If you are wondering why your APEX page is not shown in printer-friendly mode after specifying "Yes" in the relevant parameter of the URL, the reason is simply that "Yes" as reported in the documentation of APEX 24.x is wrong; you must specify "YES" in uppercase.
I believe Oracle should accept a case insensitive value for that parameter and avoid a lot of headaches, I mean, it's just a matter of taking the UPPER value and that's it, end of story.
If, for some reason, you cannot change the URL or you don't want to use the URL to activate printer friendly mode, there is still the option to use a conditional before header process where you set the variable: APEX_APPLICATION.g_printer_friendly := TRUE;
The condition could be based on a REQUEST value or on some other expression of your choosing. If, for some reason, you are struggling to understand whether the page is in printer-friendly mode or not, the easy way to know it is by adding temporarily a region with the condition "Current page is in printer-friendly mode" (or the opposite, whichever you prefer).
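For reference, a minimal sketch of such a Before Header process body (the 'PRINT' request value is an assumption; any condition of your choosing works):
<code>-- Force printer-friendly mode when the page is requested with REQUEST=PRINT.
begin
  if v('REQUEST') = 'PRINT' then
    apex_application.g_printer_friendly := true;
  end if;
end;</code>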