Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 49 min 38 sec ago

DR setup involving replicated database

49 min 38 sec ago
Howdy,

The current setup I'm looking at is an OLTP production system running Oracle 19.20 (4-instance RAC) with Active Data Guard. This system feeds a data warehouse, also running Oracle 19.20, via Oracle GoldenGate with an integrated extract. At present the warehouse has no DR solution in place, and that's the point of this post: what would the best DR strategy be for the warehouse when GoldenGate is in play like this? I assume Data Guard again, but I'm happy to hear other thoughts.

The bulk of my questions involve the GoldenGate component. I'm not sure how it would need to be set up and configured to minimize the complexity of role transitions on the transactional side, the warehouse side, or both; and which scenarios can be handled seamlessly versus which would require manual intervention.

Thanks a bunch! Cheers,
Categories: DBA Blogs

Gather stats on partitioned table and parallel DML for partitioned table

49 min 38 sec ago
Hi,

I have a list-partitioned table, partitioned by VERSION_ID, with around 15 million rows per partition. Each day we create a new partition and bulk-insert 15 million rows across 500 columns, then run about 10 MERGE updates that set multiple columns from multiple other tables.

Is it good practice to gather stats once after the insert and then once more after the updates? What is the recommended approach to gathering stats in these partitioned-table scenarios?

Second question: when I MERGE into the partitioned table from another partitioned table using a parallel DML hint, I see the following in the explain plan: PDML disabled because single fragment or non partitioned table used
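For reference, a minimal sketch of one common approach, assuming an illustrative table name SALES_DATA and partition P_20240314: incremental statistics let Oracle maintain global stats from partition-level synopses instead of rescanning the whole table.

<code>BEGIN
  -- Enable incremental statistics for the table (one-time preference).
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => USER,
    tabname => 'SALES_DATA',
    pname   => 'INCREMENTAL',
    pvalue  => 'TRUE');

  -- After the load (and again after the updates), gather stats for
  -- just the affected partition; global stats refresh cheaply.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'SALES_DATA',
    partname    => 'P_20240314',
    granularity => 'PARTITION',
    degree      => DBMS_STATS.AUTO_DEGREE);
END;
/</code>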
Categories: DBA Blogs

Updating a partitioned table from another partitioned table: performance issue

49 min 38 sec ago
Hi,

I am migrating from Sybase IQ to Oracle 19c. There are many updates happening from one or multiple tables. My target table (Target_TBL) has 18 million records per partition and there are thousands of partitions, partitioned by VersionID; APP_ID is another key column in this table. I have 10 source tables partitioned by APP_ID with around 10 to 15 million records each, and 5 smaller non-partitioned lookup tables.

I have rewritten all the update statements as MERGE statements in Oracle 19c. All the updates happen for one VersionID only, which is in the WHERE clause, and I join the source table on APP_ID and another key column, updating 70 to 100% of the records in each update:

1. The target table is updated from the partitioned source tables (10 to 15 million rows) on various key columns; this takes 10 different MERGE statements.
2. The target table is updated from the non-partitioned lookup tables on different key columns; this takes 5 different MERGE statements.

In Sybase IQ all of these updates complete in 10 minutes; in Oracle 19c they take more than 5 hours. I have enabled parallel query and parallel DML.

A) Can you suggest a better way to handle these kinds of updates?
B) In a few places the explain plan shows: PDML disabled because single fragment or non partitioned table used.
C) I leave the large source-table updates to go with hash joins.
D) I force the lookup-table updates to use nested loops. Is this good or not?
E) If I need to use indexes, can I go with local/global indexes on the key columns referenced for the lookup tables?

Appreciate any other suggestions for handling these scenarios. Example:

<code>MERGE INTO Target_TBL
USING SOURCE_A
ON (SOURCE_A.APP_ID = Target_TBL.APP_ID AND SOURCE_A.JOB_ID = Target_TBL.JOB_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.email = SOURCE_A.email
  WHERE Target_TBL.VersionID = 100 AND SOURCE_A.APP_ID = 9876;

MERGE INTO Target_TBL
USING SECOND_B
ON (SECOND_B.APP_ID = Target_TBL.APP_ID AND SECOND_B.DEPT_ID = Target_TBL.DEPT_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.salary = SECOND_B.salary
  WHERE Target_TBL.VersionID = 100 AND SECOND_B.APP_ID = 9876;

MERGE INTO Target_TBL
USING Lookup_C
ON (Target_TBL.Country_ID = Lookup_C.Country_ID)
WHEN MATCHED THEN UPDATE SET Target_TBL.Amount_LOCAL = Lookup_C.Amount_LOCAL
  WHERE Target_TBL.VersionID = 100;</code>
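One detail worth checking before anything else: the PARALLEL hint alone does not enable parallel DML; it must be switched on at the session level first, or the statement falls back to serial DML. A hedged sketch, reusing the table names from the question (the degree of 8 is arbitrary):

<code>-- Parallel DML is disabled by default and must be enabled per session.
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(t 8) PARALLEL(s 8) */ INTO Target_TBL t
USING SOURCE_A s
ON (s.APP_ID = t.APP_ID AND s.JOB_ID = t.JOB_ID)
WHEN MATCHED THEN UPDATE SET t.email = s.email
  WHERE t.VersionID = 100
    AND s.APP_ID = 9876;

-- Parallel DML locks the modified segments until the transaction ends.
COMMIT;</code>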
Categories: DBA Blogs

NUMBER data type declarations with different lengths and performance impact

49 min 38 sec ago
1. I have a few numeric columns with data types declared as NUMBER, NUMBER(5), INTEGER, and NUMERIC(10). In some cases the data is at most 2 digits, yet the column is declared as NUMBER(38), NUMBER, NUMERIC(30), or INTEGER. If I don't declare it as NUMBER(2) and instead use one of those wider declarations, will there be any performance impact when the table holds millions of records and the column is used in updates or in a WHERE clause?

2. VARCHAR2: I have a column holding a single character (Y/N). If I declare it as VARCHAR2(1 CHAR) instead of VARCHAR2(1 BYTE), will there be any performance impact when this column is used in a WHERE condition across millions of records?

3. Is it advisable to use ANSI data types in table declarations, or is it always preferable to use Oracle data types? Is there any performance difference?

Please advise.
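On question 1, the precision in NUMBER(p) is a constraint, not a storage directive: the same value occupies the same bytes whatever the declared precision. A quick way to see this, as a sketch with an illustrative scratch table:

<code>CREATE TABLE t_width_demo (
  n_plain NUMBER,
  n_2     NUMBER(2),
  n_38    NUMBER(38)
);

INSERT INTO t_width_demo VALUES (42, 42, 42);

-- VSIZE reports the stored byte length; all three columns match.
SELECT VSIZE(n_plain), VSIZE(n_2), VSIZE(n_38)
FROM   t_width_demo;</code>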
Categories: DBA Blogs

PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse

Thu, 2024-03-14 13:06
Here I am creating a record type combining EMP and DEPT columns with the following syntax: <code>CREATE TYPE emp_dept_data IS RECORD (empno number(4), ENAME VARCHAR2(10), JOB VARCHAR2(9), HIREDATE DATE, SAL NUMBER(7,2), DNAME VARCHAR2(14) );</code> I am getting the error: PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse. Please tell me how to fix it. I am using Oracle 19c, and the record type is used in a pipelined function.
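For context, RECORD is a PL/SQL-only construct, so CREATE TYPE (a SQL statement) rejects it. One possible fix, sketched below: declare the record and a collection over it in a package specification; recent releases allow a pipelined function to return such a package-level type (the classic alternative is a SQL object type created with CREATE TYPE ... AS OBJECT).

<code>CREATE OR REPLACE PACKAGE emp_dept_pkg AS
  TYPE emp_dept_rec IS RECORD (
    empno    NUMBER(4),
    ename    VARCHAR2(10),
    job      VARCHAR2(9),
    hiredate DATE,
    sal      NUMBER(7,2),
    dname    VARCHAR2(14));

  -- Collection type for the pipelined function's return value.
  TYPE emp_dept_tab IS TABLE OF emp_dept_rec;
END emp_dept_pkg;
/</code>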
Categories: DBA Blogs

date format containing timezone data

Tue, 2024-03-12 06:26
I would like to know if it is possible to configure an Oracle date format, or data type, to also capture the time zone in which the date and time originated.
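For what it's worth, the DATE type itself has no time-zone field; capturing the originating zone takes the TIMESTAMP WITH TIME ZONE type. A minimal sketch (table and values are illustrative):

<code>CREATE TABLE event_log (
  event_name VARCHAR2(50),
  event_ts   TIMESTAMP WITH TIME ZONE
);

INSERT INTO event_log
VALUES ('login', TIMESTAMP '2024-03-12 06:26:00 -05:00');

-- TZR in the format mask renders the stored time-zone region/offset.
SELECT TO_CHAR(event_ts, 'YYYY-MM-DD HH24:MI:SS TZR') FROM event_log;</code>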
Categories: DBA Blogs

Daylight Savings

Tue, 2024-03-12 06:26
Hi Tom,

Here is the problem description: a reporting table holds data with its date columns in EST. When the report is viewed, the same columns are reported in CST. I can add a fixed offset and make the report available with the columns in CST, but what happens during the DST changes? Will this strategy work during the DST transition period?

Thanks
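For illustration, converting via named time-zone regions rather than a fixed offset tracks the DST transitions automatically. A sketch, assuming the column is a DATE holding US Eastern wall-clock time (table and column names hypothetical):

<code>SELECT FROM_TZ(CAST(report_date AS TIMESTAMP), 'America/New_York')
         AT TIME ZONE 'America/Chicago' AS report_date_cst
FROM   reporting_table;</code>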
Categories: DBA Blogs

How to find out if a database exists in a given system

Tue, 2024-03-12 06:26
Tom,

Suppose a server is given to you and you are asked to see whether a database has been created on it (in other words, whether a database exists on that server). What is the quickest way, and what are the different ways, to find out, on both Windows and Unix?

Thanks in advance, Rag
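A few common checks, sketched below (paths and service names vary by platform and version):

<code># Unix/Linux: each running instance has a pmon background process
ps -ef | grep ora_pmon_

# Unix/Linux: databases registered on the host at creation time
cat /etc/oratab          # on Solaris: /var/opt/oracle/oratab

# Windows: one OracleService<SID> service per database
sc query | findstr /i OracleService</code>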
Categories: DBA Blogs

How to find current_schema from within PL/SQL

Tue, 2024-03-12 06:26
I'm trying to figure out why my application occasionally receives the error ORA-00942: table or view does not exist. As the object in question clearly _does_ exist, I assume that the application changes the current_schema to some other schema which doesn't have said object. In order to prove that, I have created an <b>after servererror on database</b> trigger from which I would like to write the current_schema to the alert.log.

I first tried <b>SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')</b> to get the schema name, but that always returns the schema holding the trigger (SYS in this case). My second guess was the SQL query <b>select SCHEMANAME from V$SESSION where SID = SYS_CONTEXT('USERENV', 'SID')</b>, but that also returned SYS instead of the current schema of the session. Then I tried <b>AUTHID CURRENT_USER</b>, but that didn't solve it either. Is there a way? Thanks!

Here's what I tried:
1. Connect to my application schema "DSCAN".
2. Switch the session to some arbitrary other schema: <code>alter session set current_schema = MDDATA;</code>
3. Try to select from a non-existent table: <code>select * from abc;</code>

Trigger output in alert.log (what I'd like to see is <b>SN:MDDATA</b> instead of <b>SN:SYS</b>):

<code>2024-03-01 14:23:23 2024-03-01T13:23:22.415858+00:00
2024-03-01 14:23:23 PDB1(3):*** SERVERERROR ORA-00942 U:DSCAN SN:SYS #453 * T:unknown * P:DataGrip * M:DataGrip * A: * ID: * CI: ***
2024-03-01 14:23:23 PDB1(3):select * from abc
2024-03-01 14:23:23 PDB1(3):--------------------------------------------------------
2024-03-01 14:23:23 PDB1(3):ORA-00942: table or view does not exist
2024-03-01 14:23:23 PDB1(3):********************************************************</code>

Here's my trigger code:

<code>CREATE OR REPLACE TRIGGER SYS.AFTER_SERVERERROR
after servererror on database
begin
  execute immediate ('begin afterservererror; end;');
exception
  when others then
    -- ignore any further exceptions to prevent infinite loop
    null;
end;
/

create or replace procedure sys.afterServerError is
  a  ora_name_list_t;
  s  varchar2(32767 byte);  -- SQL text
  s1 varchar2(32767 byte);  -- SQL text
  s2 varchar2(32767 byte);  -- SQL text
  sn varchar2(32767 byte);  -- schema name
  m  varchar2(32767 byte);  -- Error Message
  n  number;
begin
  if ORA_SERVER_ERROR(1) = 22289 then
    return;
  end if;
  execute immediate 'select SCHEMANAME from V$SESSION where SID = SYS_CONTEXT(''USERENV'', ''SID'')' INTO SN;
  s := s ||'*** '||ORA_SYSEVENT||' ORA-'||ltrim(to_char(ORA_SERVER_ERROR(1), '00000'))
         ||' U:'||user
         ||' SN:'||SN
         ||' #'||SYS_CONTEXT('USERENV', 'SID')
         ||' * T:'||SYS_CONTEXT('USERENV', 'TERMINAL')
         ||' * P:'||SYS_CONTEXT('USERENV', 'CLIENT_PROGRAM_NAME')
         ||' * M:'||SYS_CONTEXT('USERENV', 'MODULE')
         ||' * A:'||SYS_CONTEXT('USERENV', 'ACTION')
         ...</code>
Categories: DBA Blogs

What indexes and partitions are best to manage insert and update of 100 crore records in a table

Mon, 2024-03-11 12:06
In our project we have to create a table that will hold around 100 crore (1 billion) records, and roughly 60% of those records will be updated later. Date is the key access column for this table. To manage it, what types of indexes and partitioning are best suited to improve performance when querying the table?

Thanks in advance.
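As a point of departure, interval (range) partitioning on the date column with local indexes is a common design at this volume: partition pruning keeps queries to the relevant partitions and index maintenance stays per-partition. A hedged sketch with illustrative names:

<code>CREATE TABLE txn_history (
  txn_id   NUMBER,
  txn_date DATE NOT NULL,
  status   VARCHAR2(10)
)
PARTITION BY RANGE (txn_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01') );

-- LOCAL: one index segment per partition, so date-pruned queries and
-- bulk updates touch only the relevant segments.
CREATE INDEX txn_history_date_ix ON txn_history (txn_date, txn_id) LOCAL;</code>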
Categories: DBA Blogs

Full Import default schema

Mon, 2024-03-11 12:06
I am doing a full import of a 10g export into 19c, exporting and importing with a user holding the EXP_FULL_DATABASE/IMP_FULL_DATABASE privileges. I have a few questions:

1. Is it safe to import with the FULL=Y option? Will it overwrite procedures with their 10g versions, or duplicate data in the SYSTEM schema? What about the other default schemas, e.g. the SYS schemas, XDB, and APEX: will it overwrite them? And what about other component dependencies (e.g. Workspace Manager)? How do I safely import only the application schemas, including their component dependencies?

2. How do I find out which components the application schemas depend on (DBA_FEATURE_USAGE_STATISTICS?)?
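On question 1, the usual way to avoid touching Oracle-maintained schemas is to import in schema mode rather than FULL=Y. A hedged sketch, assuming the 10g dump was taken with Data Pump (expdp) and that the application schemas are named APP_OWNER and APP_DATA (both illustrative); a classic exp dump would instead need imp with FROMUSER/TOUSER:

<code>impdp system@pdb1 \
  directory=DATA_PUMP_DIR \
  dumpfile=full_10g.dmp \
  schemas=APP_OWNER,APP_DATA \
  logfile=imp_app.log</code>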
Categories: DBA Blogs

ORDS error 404

Mon, 2024-03-11 12:06
The ORDS landing page shows "ok" for APEX and SQL Developer instead of a "GO" button, and I can't reach the APEX workspace page. When I click on "ok" it takes me to a page showing: not found, HTTP status code 404, "Impossible to map the request to a database. Verify that the URL of the request is correct and that the corresponding URL-to-database mapping has been correctly configured."

I am working on Windows 10 Pro and I installed the following: Oracle 19c, APEX 23.2, ORDS 23.4, Tomcat 9.0.86.
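One thing worth checking, sketched below under the assumption that the ORDS 22+ config CLI is available: a 404 with that message usually points at a missing or misconfigured database pool mapping (the config path is a placeholder).

<code># List the settings of the pool ORDS is actually serving
ords --config /path/to/ords_config config list

# Inspect the connection target of the default pool
ords --config /path/to/ords_config config get db.servicename</code>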
Categories: DBA Blogs

SQL Developer v4: how to extract schema/user creation DDL?

Thu, 2024-03-07 11:46
Hello,

I am just lost here. I have a very simple requirement in SQL Developer: I need to extract the user creation script for a list of users. How on earth do I do this? Please help. Why have they made such simple things so difficult to do, or even to find? When I click on Other Users there is no export or extract-script option. There is Database Export under Tools, which will create the objects within the schema, but not the schema creation script itself.

Thanks
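One workaround from SQL, for what it's worth: DBMS_METADATA can generate the CREATE USER statement plus the accompanying grants. A sketch using SCOTT as a stand-in account; note GET_GRANTED_DDL raises an error for grant types the user simply doesn't have.

<code>SET LONG 100000 PAGESIZE 0

SELECT DBMS_METADATA.GET_DDL('USER', 'SCOTT') FROM dual;

-- Companions: privileges, roles, and quotas granted to the user
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('TABLESPACE_QUOTA', 'SCOTT') FROM dual;</code>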
Categories: DBA Blogs

How do I declare a VARCHAR2 variable in PL/SQL?

Tue, 2024-03-05 23:06
Hi all,

Here is my question:

<code>DECLARE l_something VARCHAR2( --<- WHAT NOW?</code>

Okay, that's not a good question. Please let me explain. I understand that in Oracle SQL, memory, or rather storage, is allocated dynamically for VC2 columns. But that's not the topic. In PL/SQL I can think of two goals:

A: allocate as much memory as possibly needed, add that much load to the PGA, gain maximum efficiency for data changes
B: allocate as little memory as needed (dynamically), limit load on the PGA, accept waiting for memory reallocation

If my thoughts up to this point are mistaken, please explain (and stop reading here). Otherwise, and assuming I understand when to use A over B (and vice versa) in my application, how can I ensure I'm getting the behavior I want?

In https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/tuning.htm#LNPLS01203 (that's 11g), I found this:

You might need to allocate large VARCHAR2 variables when you are not sure how big an expression result will be. You can conserve memory by declaring VARCHAR2 variables with large sizes, such as 32000, rather than estimating just a little on the high side, such as by specifying 256 or 1000. PL/SQL has an optimization that makes it easy to avoid overflow problems and still conserve memory. Specify a size of more than 4000 characters for the VARCHAR2 variable; PL/SQL waits until you assign the variable, then only allocates as much storage as needed.

In the PL/SQL docs for 19c I couldn't find anything like this. I did not take the time to read the whole document, so if it's there and I overlooked it, please tell me where to look (and stop reading here). Also, I'd totally understand the editor if he'd silently deleted that paragraph. And yes, it's 2019 where I am, because time zones.

Given the 11g behavior is still in place, does that mean I should

<code>DECLARE
  l_something    VARCHAR2(4001);
  l_or_something VARCHAR2(4001 CHAR); --^^??</code>

whenever I plan to load data from my VC2 column into my variable?
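To make the quoted behavior concrete, a small sketch of the pattern that 11g paragraph describes (assuming it still holds, which is exactly the open question here): below the threshold the variable's storage is allocated up front; above it, allocation is deferred until assignment and sized to the actual value.

<code>DECLARE
  l_small VARCHAR2(4000);   -- allocated at full size on declaration
  l_large VARCHAR2(32767);  -- allocated lazily, sized on assignment
BEGIN
  l_large := RPAD('x', 100, 'x');  -- only ~100 bytes actually needed
  DBMS_OUTPUT.PUT_LINE(LENGTH(l_large));
END;
/</code>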
Categories: DBA Blogs

Global variables for schema name

Tue, 2024-03-05 23:06
Hi Tom,

We have two schemas, one for customers and one for the transactions done by customers. I have a requirement to access tables of Schema-1 from a function created in Schema-2. So my question is: can we declare Schema-1 as a global variable and refer to it wherever necessary? I have several cases like this in many of the functions in the package.

A further problem is that I have to move this package from the development instance to QA, UAT and production, where the schema names differ. Right now I am hard-coding the Schema-1 prefix, which I want to get away from.

Example (this function is created in Schema-2):

<code>create or replace function fnc_comm_history(p_loan_code number)
  return sys_refcursor
as
  c_ref sys_refcursor;
begin
  open c_ref for
    select a.name, a.event, a.date, b.price
    from   table1 a, Schema-1.table2 b
    where  a.loan_code = b.loan_code
    and    a.loan_code = p_loan_code;
  return c_ref;
end;</code>

Regards, Dora.
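One common way to avoid the hard-coded prefix, sketched here with a hypothetical DEV schema name: create a synonym in Schema-2 for each referenced Schema-1 table, so only the synonyms differ between DEV, QA, UAT and production while the package code stays identical.

<code>-- Run once per environment (schema1_dev is illustrative):
CREATE OR REPLACE SYNONYM table2 FOR schema1_dev.table2;

-- The function then references table2 with no schema prefix:
--   from table1 a, table2 b ...</code>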
Categories: DBA Blogs

Trying to create a table with a foreign key with an ON UPDATE action, but it fails with the error shown in the question

Tue, 2024-03-05 05:06
Hello there,

Thanks for your valuable time and effort in giving support. I recently started learning Oracle SQL and PL/SQL for a project. While trying to create a foreign key on a column with the modifier ON UPDATE RESTRICT, I get the error below:

<code>CREATE TABLE supplier_groups(
  group_id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  group_name VARCHAR2(255) NOT NULL,
  CONSTRAINT grp_id PRIMARY KEY (group_id));

CREATE TABLE suppliers (
  supplier_id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  supplier_name VARCHAR2(255) NOT NULL,
  group_id      NUMBER NOT NULL,
  PRIMARY KEY(supplier_id),
  CONSTRAINT fk_grpid FOREIGN KEY(group_id)
    REFERENCES supplier_groups(group_id)
    ON UPDATE RESTRICT
);</code>

The UPDATE keyword is shown underlined in pink, stating: syntax error, partially recognized rules. Would anyone help me understand the mistake/error?

Thank you again.
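For reference, Oracle has no ON UPDATE clause at all: updates to a referenced parent key are implicitly restricted while child rows point at it, and only ON DELETE CASCADE / ON DELETE SET NULL exist as referential actions. A sketch of the Oracle-valid form of the second table:

<code>CREATE TABLE suppliers (
  supplier_id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  supplier_name VARCHAR2(255) NOT NULL,
  group_id      NUMBER NOT NULL,
  PRIMARY KEY (supplier_id),
  CONSTRAINT fk_grpid FOREIGN KEY (group_id)
    REFERENCES supplier_groups (group_id)  -- parent-key updates restricted implicitly
);</code>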
Categories: DBA Blogs

Insert not using append hint

Tue, 2024-03-05 05:06
Hi,

I am trying to do the insert below with the APPEND hint, expecting the optimizer to use a direct-path load, but it is still using the conventional path, and I see the following note in the explain plan. Can you please advise?

- Direct Load disabled because insert values with no append values hint used

<code>explain plan for
insert /*+ append */ into employees_bkp(employee_id, first_name, last_name, email, job_id, hire_date)
values(310, 'SHIVANIhjhk', 'GUPTAll', 'SHIVANI.CSE@GMAIL.COM', 'AD_PRES', '23-NOV-2015');

select * from table(dbms_xplan.display());

------------------------------------------------------------------------------------------
| Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |               |     1 |    71 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMPLOYEES_BKP |       |       |            |          |
------------------------------------------------------------------------------------------

Note
-----
   - Direct Load disabled because insert values with no append values hint used</code>
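For what it's worth, the plan note names the fix: a single-row INSERT ... VALUES needs the APPEND_VALUES hint (available since 11gR2), whereas APPEND applies to INSERT ... SELECT. A sketch:

<code>INSERT /*+ APPEND_VALUES */ INTO employees_bkp
  (employee_id, first_name, last_name, email, job_id, hire_date)
VALUES
  (310, 'SHIVANIhjhk', 'GUPTAll', 'SHIVANI.CSE@GMAIL.COM',
   'AD_PRES', DATE '2015-11-23');

-- Direct-path inserts lock the table and require a commit (or rollback)
-- before the session can read it again.
COMMIT;</code>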
Categories: DBA Blogs

Fetch and PGA

Fri, 2024-03-01 09:06
I ran a query with a huge result set in SQL*Plus; it has to return millions of records, and I set arraysize to 5000.

Monitoring v$session, v$process, and v$sqlstats while the query is running, I can see that after some minutes the execution count changes from 0 to 1. Then the fetch count begins to increment on each stats-read cycle, and so does the rows-processed count (the row counter increments in multiples of 5000).

At this point I expected fetched rows to be unloaded, step by step, from the process PGA, but I'm surprised to see that the process PGA grows continuously. Why is the process still keeping in memory rows that have already been fetched? Is there no way to free PGA step by step after each fetch?
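For anyone reproducing this, a sketch of the kind of monitoring query implied above, run from a second session against the fetching session's SID (the bind variable is illustrative):

<code>SELECT s.sid, n.name, st.value
FROM   v$session  s
JOIN   v$sesstat  st ON st.sid       = s.sid
JOIN   v$statname n  ON n.statistic# = st.statistic#
WHERE  n.name IN ('session pga memory', 'session pga memory max')
AND    s.sid  = :monitored_sid;</code>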
Categories: DBA Blogs

I want to compare Table A with Table B, store the difference in C, and update A to B

Fri, 2024-03-01 09:06
I need to compare two tables, A and B, store the differences in a third table C, and afterwards update A to match B. Table B could be updated every second, so I am wondering what the right approach is to ensure that when I sync A from B, I am using the same values as when I compared B against A.

Hope this is clear enough!
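One sketch of an approach, under the assumption that A, B and C share a compatible column list: capture one SCN up front and read B "as of" that SCN for both the diff and the sync, so concurrent updates to B cannot slip in between the two steps. (The full delete/reload of A is deliberately crude; a MERGE against the same snapshot would be the incremental variant.)

<code>DECLARE
  l_scn NUMBER := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
BEGIN
  -- 1) Store the differences in C, reading B at a fixed point in time.
  INSERT INTO c
    SELECT * FROM b AS OF SCN l_scn
    MINUS
    SELECT * FROM a;

  -- 2) Sync A from the very same snapshot of B.
  DELETE FROM a;
  INSERT INTO a SELECT * FROM b AS OF SCN l_scn;

  COMMIT;
END;
/</code>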
Categories: DBA Blogs

Optimal ways to Capture Data Changes in Oracle 19c

Fri, 2024-03-01 09:06
Hi Tom, greetings. We have a requirement to capture data changes for multiple tables and push them to a Kafka topic in a format similar to what GoldenGate provides:

"op_type": "Insert or Update", "op_ts": "timestamp", "current_ts": "timestamp", "before": "Row data before change", "after": "Row data after change"

Since we are not able to use GoldenGate at this point in time, we are exploring a combination of PL/SQL and Java to push the data to the Kafka topic. Please suggest an optimal solution for capturing data changes in Oracle 19c.

Thanks in advance!
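In the absence of GoldenGate, one option among several (others being log-based capture via LogMiner-style tooling, or materialized view logs) is a trigger that writes before/after images as JSON into a staging table, which a PL/SQL or Java poller then drains to Kafka. A rough sketch with illustrative table and column names; the JSON shape only approximates the GG format.

<code>CREATE TABLE change_feed (
  op_type VARCHAR2(10),
  op_ts   TIMESTAMP DEFAULT SYSTIMESTAMP,
  payload CLOB CHECK (payload IS JSON)
);

CREATE OR REPLACE TRIGGER orders_cdc
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
  INSERT INTO change_feed (op_type, payload)
  VALUES (
    CASE WHEN INSERTING THEN 'Insert' ELSE 'Update' END,
    JSON_OBJECT(
      'before' VALUE JSON_OBJECT('order_id' VALUE :OLD.order_id,
                                 'status'   VALUE :OLD.status),
      'after'  VALUE JSON_OBJECT('order_id' VALUE :NEW.order_id,
                                 'status'   VALUE :NEW.status)));
END;
/</code>

Note that triggers add overhead to every DML on the captured tables; that cost is the usual argument for log-based capture instead.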
Categories: DBA Blogs
