Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 3 days 12 hours ago

Best practice for rounding issue with date math.

Sat, 2025-05-17 17:33
I ran into this problem with 1% of our records while converting views from SQL Server to Oracle; my testing found different results when doing a compare during UAT. We have a system that needs to express time rounded to tenths of hours because of industry norms in aviation. In the example below you can see I am finding the difference between two dates, but the result seems to round incorrectly. My normal way to test these issues is to convert the output to a char to see if there is an unseen precision issue. In this case, I know that Oracle is computing the RAW_DIFF value as .649999999, which is why it appears to round incorrectly. What I found surprising is that RAW_CHAR did not return .649999999. <b>My question is: what is the best practice for avoiding this case? I can see how an analyst writing a query would expect ROUND(<val>,1) to just work, since this is a 1% kind of issue.</b>

START_DATE   5/7/2025 10:46:00 PM
START_CHAR   2025-05-07 22:46:00
END_DATE     5/7/2025 10:07:00 PM
END_CHAR     2025-05-07 22:07:00
RAW_DIFF     0.65
RAW_CHAR     .650000000000  -- actually .649999999 depending on the query tool
SINGLE_ROUND 0.6
DOUBLE_ROUND 0.7

<code>
create table round_test (
  id         number generated always as identity,
  start_date date,
  end_date   date
);

insert into round_test (start_date, end_date)
values (to_date('5/7/2025 10:46:00 PM','MM/DD/YYYY HH:MI:SS PM'),
        to_date('5/7/2025 10:07:00 PM','MM/DD/YYYY HH:MI:SS PM'));

select t.start_date,
       to_char(t.start_date, 'YYYY-MM-DD HH24:MI:SS') start_char,
       t.end_date,
       to_char(t.end_date, 'YYYY-MM-DD HH24:MI:SS') end_char,
       (t.start_date - t.end_date) * 24 as raw_diff,
       to_char((t.start_date - t.end_date) * 24,'9,999.999999999999') as raw_char,
       round((t.start_date - t.end_date) * 24, 1) as single_round,
       round(round((t.start_date - t.end_date) * 24, 3), 1) as double_round
  from round_test t
</code>
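One common workaround (a sketch, not necessarily the only best practice): since tenths of hours imply that whole minutes are the significant unit, round the date difference to whole minutes first, then convert to hours. Oracle NUMBER arithmetic truncates the repeating decimal 39/1440 (the day fraction for 39 minutes), so multiplying back out yields .649999...; snapping to whole minutes removes that representation error before the final one-decimal rounding.

```sql
-- round to whole minutes first, then to tenths of hours
select round( round((t.start_date - t.end_date) * 24 * 60) / 60, 1 ) as tenths_of_hours
  from round_test t;
```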
Categories: DBA Blogs

Get dates based on Quarter of the year

Thu, 2025-05-15 12:26
Tom, Is there a way I can find the first day and last day of a given quarter? In a given year there are 4 quarters. If I select the first quarter then I should be able to get the first day and last day of that quarter. Example: quarter = 1 and year = 2007; expected result is 01/01/2007 - 03/31/2007. Similarly, quarter = 2 and year = 2007; expected result is 04/01/2007 - 06/30/2007. Thanks in advance.
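A sketch of one approach (`:qtr` and `:yr` are assumed bind variables): anchor on January 1 of the year, then shift by whole quarters with ADD_MONTHS.

```sql
-- trunc(...,'YYYY') is needed because to_date('2007','YYYY') defaults the
-- month to the current month, not January
select add_months(trunc(to_date(:yr, 'YYYY'), 'YYYY'), (:qtr - 1) * 3) as first_day,
       add_months(trunc(to_date(:yr, 'YYYY'), 'YYYY'), :qtr * 3) - 1   as last_day
  from dual;
```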
Categories: DBA Blogs

dbms_sql.bind_variable

Thu, 2025-05-15 12:26
Hi Tom, I wrote the following block. There are two placeholders, i.e. two bind variables, and the two placeholders have the same name: they are both "MYTEST". In the block I assign a value to the placeholders with DBMS_SQL.BIND_VARIABLE only once, but if I use EXECUTE IMMEDIATE to execute a dynamic SQL statement, we have to assign a value to the two placeholders separately even though their names are the same. I'd like to know whether this usage of DBMS_SQL - "two placeholders with the same name, bind only once" - is correct? I have done many experiments with this usage and I got the correct result. Thanks a lot. Best Regards

<code>
create table test(id1 number, id2 number);
insert into test values(1,2);
insert into test values(2,1);
insert into test values(2,3);
commit;

SQL> select * from test;

       ID1        ID2
---------- ----------
         1          2
         2          1
         2          3

set serveroutput on
declare
  v_sql     varchar2(32767);
  v_cur     pls_integer;
  v_execute pls_integer;
  v_column  pls_integer;
  v_count   pls_integer;
  v_id      pls_integer := 1;
begin
  v_cur := dbms_sql.open_cursor;
  v_sql := 'select count(*) from (select * from test where id1=:MYTEST or id2=:MYTEST)';
  dbms_sql.parse(v_cur, v_sql, dbms_sql.native);
  dbms_sql.bind_variable(v_cur, 'MYTEST', v_id);
  dbms_sql.define_column(v_cur, 1, v_column);
  v_execute := dbms_sql.execute(v_cur);
  if dbms_sql.fetch_rows(v_cur) > 0 then
    dbms_sql.column_value(v_cur, 1, v_count);
  end if;
  dbms_sql.close_cursor(v_cur);
  dbms_output.put_line('count is ' || to_char(v_count));
end;
/

count is 2

PL/SQL procedure successfully completed.
</code>
Categories: DBA Blogs

"proper" way to join v$session and v$sqlarea

Tue, 2025-05-13 03:19
Hello, a few questions:

1. All of the scripts I have written years ago use both address and hash_value to join v$session and v$sqlarea. What I don't quite understand is why it's not sufficient to just join on hash_value (or sql_id). Why was/is address also needed? Doc ID 43716.1 seems to indicate both should be included when doing the join:

SQL_ADDRESS    RAW(4|8)  Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed. ----> Joins to <<View:V$SQL>>.ADDRESS / HASH_VALUE
SQL_HASH_VALUE NUMBER    Used with SQL_ADDRESS to identify the SQL statement that is currently being executed. ----> Joins to <<View:V$SQL>>.ADDRESS / HASH_VALUE

2. If I wanted to instead use v$sqlstats, it seems the best way to join v$session to that view is to use sql_id? Note my use-case for doing these joins in the above two questions is to get the full SQL text.

3. Why was Bug 27760729 (SQL_FULLTEXT in v$sqlstats not actually showing the full SQL text) closed out as not a bug?!
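For context, a sketch of the sql_id-based join commonly used on recent versions (sql_id plus child_number identifies the exact child cursor; address/hash_value predate sql_id, which arrived in 10g):

```sql
-- full text of the statement each active session is currently running
select s.sid, s.serial#, q.sql_fulltext
  from v$session s
  join v$sql q
    on q.sql_id       = s.sql_id
   and q.child_number = s.sql_child_number
 where s.sql_id is not null;
```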
Categories: DBA Blogs

Sequence will skip some numbers

Tue, 2025-05-13 03:19
Hi, I have a sequence skip-number problem. I have this sequence. The cache is 20 and noorder.

<code>
create sequence seq_serial_no
  minvalue 1
  maxvalue 999
  start with 2
  increment by 1
  cycle
</code>

The current situation that has arisen is: I have two servers, A and B. They connect to the same database instance. Sometimes the sequence increments normally, but sometimes it jumps, and every time it jumps, it jumps to the last_number. Here are two examples.

The first example:

server  value  -- comment
B       201    2025/04/23 9:14
A       202    2025/04/23 9:22
A       221    2025/04/23 10:35  -- skip
B       222    2025/04/23 11:08
A       241    2025/04/23 13:22  -- skip
B       242    2025/04/23 15:13
A       261    2025/04/23 17:41  -- skip

The second example:

server  value  -- comment
A       541    2025/05/08 14:36
B       542    2025/05/08 15:12
A       561    2025/05/09 9:28   -- skip
B       562    2025/05/09 10:18
A       563    2025/05/09 10:46
A       581    2025/05/12 9:17   -- skip
B       582    2025/05/12 9:23
A       583    2025/05/12 10:30
B       601    2025/05/12 14:53  -- skip
A       602    2025/05/12 15:19

The skipping of numbers looks like it has something to do with the cache being cleared. May I ask if my guess is correct? If it is, what would clear the cached values from the SGA?
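A quick check that fits this guess: the jumps are roughly the cache size (default 20), which is the pattern left when cached values are discarded - e.g. the sequence ages out of the shared pool, the shared pool is flushed, or the instance restarts.

```sql
-- compare the observed jump size with the cache size; last_number is the
-- high-water mark of the values already reserved for the cache
select sequence_name, cache_size, last_number
  from user_sequences
 where sequence_name = 'SEQ_SERIAL_NO';
```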
Categories: DBA Blogs

Gather Schema Stats

Mon, 2025-05-12 12:17
Hope you're doing well. I'm reaching out to get your input on statistics gathering in our Oracle Autonomous Database (version 19c), which we use to support our integration workloads. We have a few very large tables (approximately 70 million records each) that serve as history tables for archiving and auditing purposes. On a daily basis we dump data from staging tables into these history tables. Since these tables are not frequently queried, we have historically not gathered statistics on them regularly. However, we've recently observed some performance degradation when querying these tables. To address this, we are considering running GATHER_SCHEMA_STATS nightly on the entire schema. We are assuming this could help improve performance, but we are not very familiar with the DBMS_STATS APIs. Based on the documentation we are planning to run the following:

<code>
DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'XXDB',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
</code>

Could you please let us know if this is the right approach to gather schema stats? If by any chance you have any automation scripts created for this purpose, could you share them for reference? Thanks in advance.
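If only the history tables change each day, a narrower alternative worth considering is gathering per-table stats right after the daily load instead of the whole schema - a sketch (the table name is hypothetical):

```sql
begin
  dbms_stats.gather_table_stats(
    ownname          => 'XXDB',
    tabname          => 'MY_HISTORY_TABLE',  -- hypothetical name
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => true);
end;
/
```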
Categories: DBA Blogs

Upload XLSX file to table in oracle apex

Mon, 2025-05-12 12:17
I want to upload multiple XLSX files, each containing multiple sheets. The data from each sheet should be inserted into a table with the same name as the sheet (the tables already exist) using Oracle APEX. Is this possible?
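For reference, APEX ships the APEX_DATA_PARSER package, which can read XLSX data from a BLOB; a sketch under the assumption that the files arrive as BLOBs (the bind variable is hypothetical, and the per-sheet INSERT target would have to be built dynamically from the sheet name):

```sql
-- list the worksheets contained in an uploaded XLSX BLOB
select sheet_sequence, sheet_display_name, sheet_file_name
  from table(apex_data_parser.get_xlsx_worksheets(p_content => :uploaded_blob));

-- parse one worksheet; col001..col300 map to the spreadsheet columns
select line_number, col001, col002, col003
  from table(apex_data_parser.parse(
         p_content         => :uploaded_blob,
         p_file_name       => 'upload.xlsx',
         p_xlsx_sheet_name => 'sheet1.xml'));
```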
Categories: DBA Blogs

After Insert Trigger, Synchronous or Async

Mon, 2025-05-12 12:17
Hi Tom, What I need to do is to monitor a table for changes. For this a trigger needs to be set up. My doubt is: if we create an AFTER INSERT OR UPDATE trigger on my table, FOR EACH ROW, and the trigger does some time-consuming task like DBMS_LOCK.SLEEP(60*1), will this block subsequent inserts? I mean, will the next insert be able to run only after the sleep? In short, is an after-insert trigger blocking or not? Will the cost of the trigger add to the cost of the insert statement? I know a before-statement trigger's cost will. But it is really confusing for after insert. Thanks in advance, Jimson
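To make the scenario concrete, a minimal sketch (table and trigger names are hypothetical): a row-level trigger body runs synchronously inside the triggering statement, so the session issuing the INSERT waits out the sleep, while other sessions' inserts are delayed only by ordinary lock contention, not by the sleep itself.

```sql
-- hypothetical names; the trigger body is part of the INSERT's execution,
-- so THIS session's insert takes ~60 seconds to return
create or replace trigger monitored_t_aiu
after insert or update on monitored_t
for each row
begin
  dbms_session.sleep(60);  -- dbms_lock.sleep on older releases
end;
/
```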
Categories: DBA Blogs

SQL ASSERTIONS vs triggers, materialized views with constraints, etc

Fri, 2025-05-09 19:06
I attended a Hotsos session by Toon Koppelaars yesterday morning on Semantic Query Optimization. Among other interesting topics, Toon lamented the lack of any DBMS implementing SQL "assertions", by which he meant a database-enforced constraint that encompasses more than a single column or a single tuple (record). The example he gave was "a manager cannot manage more than 2 departments." One should be able to have a DDL statement "CREATE ASSERTION max_manager_departments AS CHECK ..." containing some appropriate SQL statement. But of course no DBMS, including Oracle, allows such. It seemed to me that these are the sorts of constraints that are usually implemented by the database designer in the form of triggers or materialized views with constraints. (Admittedly, as implemented, most trigger-based constraints fail to account for Oracle's locking mechanisms, but that's an implementation issue.) Here's an example of yours: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7144179386439 As a practical matter, are there any assertions that cannot be implemented via triggers or constrained materialized views? Or are there, ahem, "rules of thumb" or guidelines as to when one approach is better than another? It would seem to me that a discussion of "what we can implement (and this way is best)" and "what we can't implement" would be helpful. Thanks!
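For what it's worth, Toon's example can be sketched with the materialized-view approach the question mentions (table and column names are assumed). The check constraint is verified at commit time, when the fast-refresh-on-commit MV is refreshed:

```sql
-- hypothetical schema: dept(deptno, manager_id, ...)
create materialized view log on dept
  with rowid (manager_id) including new values;

create materialized view mgr_dept_cnt
  refresh fast on commit
as
select manager_id, count(*) cnt
  from dept
 group by manager_id;

-- "a manager cannot manage more than 2 departments", enforced at commit
alter table mgr_dept_cnt
  add constraint mgr_max_two_depts check (cnt <= 2);
```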
Categories: DBA Blogs

TM lock on not-modified table

Fri, 2025-05-09 19:06
Hi Tom, Oracle puts a TM lock on a table even if no rows are affected by a DML statement:

TX_A>create table t (x int);
Table created.

-- just to show there are no locks on the table:
TX_A>alter table t move;
Table altered.

-- in another session (marked as TX_B) we now issue a statement which affects no rows:
TX_B>delete from t;
0 rows deleted.

-- now there's a TM lock:
TX_A>alter table t move;
alter table t move
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

TX_B>select type, lmode,
  2         decode(lmode,2,'row-share',3,'row-exclusive'),
  3         decode(type,'TM',(select object_name from dba_objects where object_id=id1)) name
  4    from v$lock
  5   where sid = (select sid from v$mystat where rownum=1);

TYPE   LMODE      DECODE(LMODE,2,'ROW-SHARE',3,'ROW-EXCLU NAME
------ ---------- --------------------------------------- --------------------
TM     3          row-exclusive                           T

TX_B>commit;
Commit complete.

TX_A>alter table t move;
Table altered.

It seems to me a bit counter-intuitive (which doesn't mean that it's "bad", of course) to retain an unnecessary lock (even a "weak" row-exclusive one) on a not-modified table, so there's probably a reason that I can't see. Do you happen to know a strong reason for this behaviour, or shall I classify it as "just a quirk"?

For sure it's something to remember while coding - i.e. think about code such as this:

delete from t where ...;
if (sql%rowcount > 0) then
  -- log deletions in logging table
  commit;
end if;

This would leave the TM lock on the table, preventing DDL possibly forever - the commit has to be moved after the if block, most definitely.

Thanks
Alberto
Categories: DBA Blogs

Explain plan using DBMS_XPLAN

Fri, 2025-05-09 19:06
Hi Tom, I am trying to view the explain plan for a select statement in Live SQL on Oracle 19c. I am able to view the plan using select * from table(dbms_xplan.display_cursor); but I see "Error: cannot fetch last explain plan from PLAN_TABLE" when using select * from table(dbms_xplan.display);. Could you please help me understand?

explain plan for select * from customers where id = 1;
select * from TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
Error: cannot fetch last explain plan from PLAN_TABLE

select * from customers where id = 1;
select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
SQL_ID 92qqjqaghv843, child number 1
-------------------------------------
SELECT O.OBJECT_NAME, O.OBJECT_TYPE, O.STATUS, E.TEXT, E.LINE, E.POSITION FROM SYS.DBA_OBJECTS O, SYS.DBA_ERRORS E WHERE O.OBJECT_NAME
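A sketch of the usage difference: DBMS_XPLAN.DISPLAY reads the PLAN_TABLE row written by an earlier EXPLAIN PLAN in the same session, while DISPLAY_CURSOR shows the statement the session last executed (the DBA_OBJECTS text in the output suggests the tool ran a statement of its own in between). In Live SQL, running both statements in one submission usually avoids both symptoms:

```sql
-- DISPLAY needs a prior EXPLAIN PLAN in the same session
explain plan for
  select * from customers where id = 1;

select * from table(dbms_xplan.display);

-- DISPLAY_CURSOR needs the statement to have just been executed
select * from customers where id = 1;

select * from table(dbms_xplan.display_cursor);
```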
Categories: DBA Blogs

Interactive Grid

Fri, 2025-05-09 19:06
This error occurs when creating an Interactive Grid in Oracle APEX version 24.2.0; in earlier versions it worked without problems: ORA-01400: cannot insert NULL into ("APEX_240200"."WWV_FLOW_IG_REPORTS"."ID")
Categories: DBA Blogs

ASMFD not working when we do an in-place upgrade or migrate to RHEL 7

Fri, 2025-05-09 19:06
We are currently in the process of upgrading our Oracle database servers from RHEL 7 to RHEL 8. We are considering two possible approaches:

1. Migrating to a new server built with RHEL 8.
2. Performing an in-place upgrade on the existing server.

In both approaches, we intend to use ASMFD (ASM Filter Driver). However, Oracle Support has advised us to discontinue using ASMFD on RHEL 8, as it is not enabled by default. Instead, they recommend switching to ASMLIB. That said, we see several advantages in continuing to use ASMFD, particularly in terms of I/O filtering and performance benefits. We're curious to know if the claim about ASMFD not being supported on RHEL 8 or RHEL 9 is accurate. If you've successfully implemented ASMFD on RHEL 8 or RHEL 9, we would love to hear about your experience. How did you make it work? Any insights or steps you followed would be greatly appreciated.
Categories: DBA Blogs

400 Bad Request Request Header Or Cookie Too Large

Thu, 2025-05-01 03:31
After accessing a few questions on Ask TOM, I'm getting an error: 400 Bad Request Request Header Or Cookie Too Large This "solves itself" if I try later - I don't know exactly how much later - it typically works the next day again, but it doesn't work immediately if I refresh the page. I can also manually delete the cookies for the site, which works but only until I've read a few questions/answers again, but I'd like to find a way to solve this permanently.
Categories: DBA Blogs

Weird Cost-Based-Optimizer behavior

Thu, 2025-05-01 03:31
Hello! We encountered strange (and logically inconsistent) CBO behavior. To provide some context: we are working with an EAV (entity-attribute-value) schema; all objects are stored in a single table, let's call it "my_objects", and another table, "object_ancestry", stores all ancestor/descendant relations. Below is the script that creates all tables in question, populates them with some data, and builds indexes:

Environment
-----------
- Oracle 19.22.0.0 on AIX
- optimizer_mode = ALL_ROWS
- cursor_sharing = EXACT
- Full stats gathered on all objects (see script)

--- script start
<code>
declare
  procedure drop_table(p_table_name in varchar2) is
    e_no_table exception;
    pragma exception_init(e_no_table, -942);
  begin
    execute immediate 'drop table ' || p_table_name;
  exception
    when e_no_table then null; -- ignore
  end;
begin
  drop_table('object_ancestry');
  drop_table('my_objects');
end;
/

--- OEV (object-entity-value) schema:
create table my_objects(object_type_id number, object_id number, object_key varchar2(100));
create table object_ancestry(parent_type_id number, child_type_id number, parent_id number, child_id number);

--- adding some data
----- objects
insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 1 as object_type, level as object_id, 'location_' || level as object_key
  from dual connect by level <= 1e+06;
commit;

insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 2 as object_type, 1e+06 + level as object_id, 'subregion_' || level as object_key
  from dual connect by level <= 10000;
commit;

insert /*+ append*/ into my_objects(object_type_id, object_id, object_key)
select 3 as object_type, 2e+06 + level as object_id, 'region_' || level as object_key
  from dual connect by level <= 100;
commit;
----- EOF objects

----- object ancestry
insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select subregion.object_type_id as parent_type_id,
       location.object_type_id  as child_type_id,
       subregion.object_id      as parent_id,
       location.object_id       as child_id
  from my_objects location
  join my_objects subregion
    on mod(location.object_id, 10000) = (subregion.object_id - 1e+06 - 1)
 where location.object_type_id = 1
   and subregion.object_type_id = 2;
commit;

insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select region.object_type_id    as parent_type_id,
       subregion.object_type_id as child_type_id,
       region.object_id         as parent_id,
       subregion.object_id      as child_id
  from my_objects subregion
  join my_objects region
    on mod(subregion.object_id - 1e+06, 100) = (region.object_id - 2e+06 - 1)
 where subregion.object_type_id = 2
   and region.object_type_id = 3;
commit;

insert into object_ancestry(parent_type_id, child_type_id, parent_id, child_id)
select 3 /*region*/ as parent_type_id,
       1 /*location*/ as child_type_id,
       region_id as parent_...
Categories: DBA Blogs

How to configure proxy server between Golden Gate source and destination servers?

Mon, 2025-04-28 17:20
Environment Information:
- Database: Oracle Database 19c (Multi-Tenant Architecture) - Version 19.25.0.0.0
- GoldenGate: Oracle GoldenGate Microservices 19c
- Replication Setup: data replication will be configured between:
  - Source: on-premises database server (Server A)
  - Target: Azure VM cloud database server (Server B)
- Operating System: Linux x86_64 on-prem; RHEL 8 in the cloud.

Security Concern: The security team has flagged the direct connection between the on-premises database server (Server A) and the cloud-based database server (Server B). They have mandated the use of a reverse proxy server to transfer GoldenGate trail files between the source and destination environments. This is to ensure secure data transfer and minimize exposure of sensitive database connections.

Challenge: The current GoldenGate configuration uses the WebUI for managing database connections and replication processes for both the source and destination. Trail files generated by the Extract process on the source (on-premises) need to be transferred securely to the target (cloud) via the reverse proxy server. The question is: how can we integrate a reverse proxy server into the GoldenGate replication setup to securely transfer trail files between the source and target environments?
Categories: DBA Blogs

Use of dbms_scheduler

Mon, 2025-04-28 17:20
After using dbms_scheduler since its introduction, I was surprised to learn that it is supposedly not the way application development should go. It was explained to me that having scheduling outside of the database is better than having it inside the database, and that the replacements are crontab, a complicated self-made scheduler using Perl, Python, or your favorite language, a preferred third-party application, or refreshable materialized views. I did not get an answer to my question of why using an application's stored code or other Oracle-supplied packages inside the database is good practice while using dbms_scheduler is bad. What is your view on that issue?
Categories: DBA Blogs

Make not reproducible baselines reproducible again

Thu, 2025-04-24 22:06
Hello, I have multiple baselines, and after migrating the application many of them became non-reproducible. The reason was the renaming of some indexes during the migration. I have renamed those indexes back to their original names. My question is how I can get Oracle to set those baselines back to REPRODUCED = YES. None of the DBMS_SPM routines is able to do that. Thanks
Categories: DBA Blogs

Solving a logical problem using analytical functions

Thu, 2025-04-24 22:06
(https://livesql.oracle.com/ords/livesql/s/da34i74lkmxt2mqrtp0k4xsk6 ) (I was able to format better with screenshots on the Oracle forums - https://forums.oracle.com/ords/apexds/post/help-solving-a-logical-problem-using-analytical-query-3778)

Hi, I am trying to solve a problem using analytical functions but I am stuck.

1. I have a list of coupons that I can use. The usage sequence is in alphabetical order of the coupon name.

+---------+-------+
| Coupons | Value |
+---------+-------+
| A       | 100   |
| B       | 40    |
| C       | 120   |
| D       | 10    |
| E       | 200   |
+---------+-------+

2. There is a limit (cap) on the total value across all coupons that can be used in a day.

+----------+-----------+
| Cap Name | Cap Limit |
+----------+-----------+
| Cap 1    | 150       |
| Cap 2    | 70        |
+----------+-----------+

3. Each coupon is subject to 1 or 2 caps. If it is subject to 2 caps, there is a specified sequence in which to apply the caps.

+--------+--------------+----------+
| Coupon | Cap Sequence | Cap Name |
+--------+--------------+----------+
| A      | 1            | Cap 1    |
| A      | 2            | Cap 2    |
| B      | 1            | Cap 2    |
| C      | 1            | Cap 2    |
| C      | 2            | Cap 1    |
| D      | 1            | Cap 1    |
| E      | 1            | Cap 1    |
| E      | 2            | Cap 2    |
+--------+--------------+----------+

4. I now have to find how much coupon value could be utilized before my daily cap was reached, i.e. find "Coupon Usage" and "Cap Remaining" below.

So, if I join the tables above:

+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| # | Coupon | Value | Cap Name | Cap Sequence | Cap Limit | Coupon Usage | Cap Remaining |
+---+--------+-------+----------+--------------+-----------+--------------+---------------+
| 1 | A      | 100   | Cap 1    | 1            | 150       | 100          | 50            |
| 2 | A      | 100   | Cap 2    | 2            | 70        | 0            | 70            |
| 3 | B      | 40    | Cap 2    | 1            | 70        | 40           | 30            |
| 4 | C      | 120   | Cap 2    | 1            | 70        | 30           | 0             |
+---+--------+-------+----------+--...
Categories: DBA Blogs

Bitmap indexes and BITMAP CONVERSION TO ROWIDS

Thu, 2025-04-24 04:03
Hello Chris, hello Connor, I'm struggling with a performance issue. We have a single big table (167 million rows, 118 columns). For ad hoc queries we have created a simple APEX application with the 15 most important columns as filters. To avoid long-running queries we run a simple count(*) query before running the main query. Depending on the result of our count(*) we warn or even force the end user to filter better.

select count(*)
  from big_table
 where filter_column_1 = filter_value_1
   and filter_column_2 = filter_value_2
   and filter_column_3 between filter_value_3 and filter_value_4;

To support the count(*) query I have created a bitmap index for each filter column. It works fine! Oracle combines the relevant bitmap indexes with a BITMAP AND or BITMAP MERGE. The response time is excellent: a few seconds at most, often under 1 second. It works as expected and I'm happy! But: sometimes the optimizer decides to quit the path of bitmap processing, converts all bitmap indexes with BITMAP CONVERSION TO ROWIDS, and then uses HASH JOINs. In this case the response time of the count(*) query is much worse, sometimes even minutes! My questions:

- Why does the optimizer use BITMAP CONVERSION TO ROWIDS? My feeling is that it is more likely when the where clause is complex (many filters in use) or the count(*) delivers a big number.
- Are there any SQL hints to avoid BITMAP CONVERSION TO ROWIDS?
- Are there any tips for my count(*) query (WITH clauses, subqueries, ...) to avoid BITMAP CONVERSION TO ROWIDS?

The DB version is 19.24.0.0.0; the big table and all indexes have fresh statistics. Thank you in advance! Christian
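One thing that may be worth testing (a sketch, not a guaranteed fix): the INDEX_COMBINE hint asks the optimizer to prefer a bitmap access path that combines the table's bitmap indexes, which can keep it on the BITMAP AND plan instead of converting to rowids.

```sql
select /*+ INDEX_COMBINE(t) */ count(*)
  from big_table t
 where filter_column_1 = :v1
   and filter_column_2 = :v2
   and filter_column_3 between :v3 and :v4;
```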
Categories: DBA Blogs
