Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 10 hours 42 min ago

FAST RECOVERY AREA: best practice and when it is not recommended

10 hours 42 min ago
Greetings, My question involves the use of the FAST RECOVERY AREA for online redo logs and even control files. Typically, the backup mount points are on slower disks. Therefore, if we were to set DB_RECOVERY_FILE_DEST to that backup mount, my assumption is that Oracle wouldn't recommend placing online redo logs on those slower backup mounts. Also, if we place the control files there, we would be decreasing fault tolerance: the storage backup admin could decide to do some maintenance on the backup system and forget to coordinate with the DBA. I'm just trying to understand when it is a good idea to use the FAST RECOVERY AREA and when it is not. I've been working with Oracle for over 20 years, and not using the FAST RECOVERY AREA has worked out great for me. "When you enable fast recovery in the init.ora file, Oracle Database writes all RMAN backups, archive logs, control file automatic backups, and database copies to the fast recovery area." (Oracle 18c documentation) Thanks for your help, John
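For reference, the fast recovery area is enabled with two initialization parameters; a minimal sketch (the size limit and path are placeholders):
<code>
-- the size limit must be set before the destination
ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/backup/fra' SCOPE=BOTH;
</code>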
Categories: DBA Blogs

Manual block level recovery steps on standby database using backup from primary database

10 hours 42 min ago
Greetings, As I was reading the 19c Data Guard manual, I came across a topic about being able to use a backup taken at the primary to perform a block-level recovery at its standby database. I read that block-level recovery can be done automatically if we have purchased an Active Data Guard license. It also states that it can be done manually, but it doesn't give the steps on how to perform it. From the Data Guard manual: 10.2.1.6 Manual Block Media Recovery. Are you able to provide the steps to perform a manual block-level recovery on a standby database, using a backup from the primary database, without a catalog? It would be nice if we could simply use the "FOR SERVICE" clause. Thanks, John
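One commonly shown approach is to make the primary's backup pieces visible to the standby host and catalog them there; a minimal sketch (the connect string, backup path, datafile number, and block number are placeholders):
<code>
RMAN> CONNECT TARGET sys@standby_db
RMAN> CATALOG START WITH '/backups/primary/';  -- register the primary's backup pieces in the standby control file
RMAN> RECOVER DATAFILE 7 BLOCK 42;             -- manual block media recovery using the cataloged backups
</code>
In 12c and later, RESTORE ... FROM SERVICE can pull datafiles over the network, but I am not aware of a documented BLOCK ... FROM SERVICE form, so the catalog route above is the conservative sketch.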
Categories: DBA Blogs

Oracle Pivot table

10 hours 42 min ago
I have an Oracle pivot query that works with 19 values in the IN clause, but when I add a 20th, it fails with an ORA-01427 error. This works: <code>
select *
from (select serial_no,
             buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY,
             oper_no
      from db1.vw_status s
      where order_no = '123456')
pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','60.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0'))
order by serial_no;
</code> This fails: <code>
select *
from (select serial_no,
             buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY,
             oper_no
      from db1.vw_status s
      where order_no = '123456')
pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','60.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0','200.0'))
order by serial_no;
</code> I've verified that it is not a data issue, and I can get the query to run if I take out any one value in the IN clause. This also works, leaving '200.0' in and removing '60.0': <code>
select *
from (select serial_no,
             buyoff_user || chr(13) || to_char(buyoff_date) || chr(13) || serial_oper_status || chr(13) || NCR as UPD_BY,
             oper_no
      from db1.vw_status s
      where order_no = '123456')
pivot (min(UPD_BY) for oper_no in ('10.0','20.0','30.0','40.0','50.0','70.0','80.0','90.0','100.0','110.0','120.0','130.0','140.0','150.0','160.0','170.0','180.0','190.0','200.0'))
order by serial_no;
</code> Am I going crazy (probably), or is there something I am missing? Thanks.
Categories: DBA Blogs

Can I change an existing column to a computed column with an ALTER TABLE statement?

10 hours 42 min ago
Test case can be found at: https://livesql.oracle.com/apex/livesql/s/lgh3hduetd3lqd74jvuolc78k Given: <code>Drop table t1; Create table t1 (c1 Int Default 0 Not Null, c2 Int Default 0 Not Null);</code> Is it then possible to: <code>Alter table t1 Modify c2 as (c1 + 1);</code> When I do, I get the following result, so I am beginning to believe this cannot be done: Table dropped Table created Alter table t1 Modify c2 as (c1 + 1) ORA-54026: Real column cannot have an expression Appreciate your help, as I have not been able to find any documentation that would confirm this behaviour. Thanks, Aubrey
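ORA-54026 indicates a real (stored) column cannot be converted in place to a virtual one. A common workaround is to add a new virtual column, drop the old column, and rename; a sketch under that assumption:
<code>
ALTER TABLE t1 ADD (c2_new AS (c1 + 1));   -- add a virtual column with the desired expression
ALTER TABLE t1 DROP COLUMN c2;             -- drop the original real column
ALTER TABLE t1 RENAME COLUMN c2_new TO c2;
</code>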
Categories: DBA Blogs

cursor

10 hours 42 min ago
I have a database and I want to print the following: the last 10 rows, the first 10 rows, the largest zipcode number, and the smallest zipcode number. This is the code I used; the original filter (where <= MAX_ROWS) was not valid SQL, so the queries below are rewritten to match the stated goals (the zipcode column name is an assumption): <code>
import pymysql

db = pymysql.connect(host="192.168.0.180", port=4000, user="maxuser", passwd="maxpwd")
cursor = db.cursor()
# first 10 rows (assumes a sortable column named zipcode)
cursor.execute("SELECT * FROM zipcodes_one.zipcodes_one ORDER BY zipcode LIMIT 10")
for row in cursor.fetchall():
    print(row)
# last 10 rows: reverse the sort order
cursor.execute("SELECT * FROM zipcodes_one.zipcodes_one ORDER BY zipcode DESC LIMIT 10")
for row in cursor.fetchall():
    print(row)
# largest and smallest zipcode
cursor.execute("SELECT MAX(zipcode), MIN(zipcode) FROM zipcodes_one.zipcodes_one")
print(cursor.fetchone())
</code> Please help. Thank you
Categories: DBA Blogs

Hostname for Oracle database will change

10 hours 42 min ago
We have an Oracle database, and the host address in the listener and tnsnames files is 10.<b>5</b>.6.212. There are some upgrades from the infrastructure team: they will change the VLAN for the server, so the address will become 10.<b>8</b>.6.212. What should I do to prevent any impact on my Oracle database? Note: the OS is Windows Server.
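In practice this usually means updating the HOST entry in both Oracle Net files and restarting the listener; a sketch (the listener name, port, and service name are placeholders):
<code>
# listener.ora
LISTENER =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.8.6.212)(PORT = 1521)))

# tnsnames.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.8.6.212)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )
</code>
Then restart the listener (lsnrctl stop; lsnrctl start) and verify connectivity with tnsping. Using a DNS hostname instead of a literal IP in these files avoids the problem on future re-addressing.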
Categories: DBA Blogs

How to export data for a specific time period using the DBMS_DATAPUMP API

10 hours 42 min ago
We have a requirement where DB dumps are to be exported using the DBMS_DATAPUMP API for a specific time period, like 01-Jan-2020 to 31-March-2020. I am aware of flashback time and flashback SCN, but those export a consistent snapshot as of a specific timestamp, not a time period. Could you please help me with this?
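One commonly used approach is a SUBQUERY data filter, which restricts the exported rows with a predicate on a date column; a minimal sketch (the ORDERS table and CREATED_DATE column are assumptions for illustration):
<code>
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');
  DBMS_DATAPUMP.ADD_FILE(h, 'orders_q1_2020.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'IN (''ORDERS'')');
  -- export only rows whose date column falls in the period
  DBMS_DATAPUMP.DATA_FILTER(h, 'SUBQUERY',
    'WHERE created_date BETWEEN DATE ''2020-01-01'' AND DATE ''2020-03-31''',
    'ORDERS');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/
</code>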
Categories: DBA Blogs

Fetching last record from a table

Thu, 2021-03-04 10:06
I was wondering if there is some possible way that I can fetch the last record from a table. In this particular case, obviously one is unaware of the number of records in a table. All I want is to write a query which will simply fetch all the columns of the last record. I have tried using ROWNUM, but I guess it doesn't work that way. If I generate a report, then I want to base my report on the same query which will simply fetch all the columns of the last record. Are there any built-ins in Report Builder which can fetch the last record of a table? Thanks Hassan
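Worth noting: a table has no inherent order, so "last" only makes sense relative to some ordering column. A sketch under that assumption (my_table and created_date are placeholders):
<code>
SELECT *
FROM  (SELECT t.* FROM my_table t ORDER BY created_date DESC)
WHERE ROWNUM = 1;  -- keep only the newest row after sorting
</code>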
Categories: DBA Blogs

Commit point optimization and GoldenGate

Thu, 2021-03-04 10:06
Hi We have a rather complicated cleaning process. We have implemented it so that we divide the key space into 4 separate number ranges and run them concurrently. Each of the 4 processes decides whether a row is to be deleted and, if so, deletes the row and its dependent rows by cascade delete. Because there were occasional deadlocks, each deletion is in its own transaction. The code is PL/SQL, so no waits for writing redo logs should be happening (commit point optimization). However, at some point the database replication was changed from materialized views to GoldenGate (volumes got much bigger and materialized views had serious difficulties handling them). Now we have encountered issues with GoldenGate: replication lags when large deletes are done. My questions are: if commit point optimization is used in the source system, is it also applied in the target system? Should we try to increase the interval between commits so that the number of redo log write waits in the target system would get smaller? Something else? lh
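For context, batching several deletes per transaction is the usual way to trade commit frequency against replication lag; a sketch of the pattern (the table and column names are placeholders):
<code>
DECLARE
  TYPE t_ids IS TABLE OF parent_tab.key_id%TYPE;
  l_ids t_ids;
  CURSOR c IS SELECT key_id FROM parent_tab WHERE purge_flag = 'Y';
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT
      DELETE FROM parent_tab WHERE key_id = l_ids(i);  -- cascade delete removes dependent rows
    COMMIT;  -- one commit per batch of 1000 instead of per row
  END LOOP;
  CLOSE c;
END;
/
</code>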
Categories: DBA Blogs

Insert 400 million rows faster from one table to another

Thu, 2021-03-04 10:06
Hi Tom, We have trouble loading 400 million records from one table (a temp table) to the main table. We had been getting 200 million rows from upstream and were able to load them in 3 hrs using the DBMS_PARALLEL_EXECUTE approach. Now the data has increased to 400 million, and we expect 6 to 7 hrs to load. Is there any other approach so that the data can be inserted faster? Below is the code used currently. <code>
BEGIN
   DBMS_PARALLEL_EXECUTE.DROP_TASK (L_TASK_NAME);
EXCEPTION
   WHEN OTHERS THEN NULL;
END;

--- This creates a task, which can be seen in the USER_PARALLEL_EXECUTE_TASKS table
DBMS_PARALLEL_EXECUTE.CREATE_TASK (L_TASK_NAME);

-- This statement chunks the data based on rowid
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (L_TASK_NAME, 'TIREC', 'TMP_PROFILE_OFFER_ORG', TRUE, 500000);

L_SQL := 'INSERT INTO T_PROFILE_OFFER
            (PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID, OFFER_CODE,
             START_DATE, END_DATE, OFFER_PRIORITY, ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER)
          SELECT PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID, OFFER_CODE,
                 START_DATE, END_DATE, OFFER_PRIORITY, ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER
          FROM TMP_PROFILE_OFFER_ORG
          WHERE ROWID BETWEEN :start_id AND :end_id
          LOG ERRORS INTO ERR$_T_PROFILE_OFFER_MO (''B109: Insert'') REJECT LIMIT UNLIMITED';

-- The following statement runs multiple sessions based on the parallel level supplied
DBMS_PARALLEL_EXECUTE.RUN_TASK (TASK_NAME => L_TASK_NAME, SQL_STMT => L_SQL, LANGUAGE_FLAG => DBMS_SQL.NATIVE, PARALLEL_LEVEL => L_DEGREE);

-- If the task fails, retry until it is finished or 5 attempts have been made
LOOP
   EXIT WHEN DBMS_PARALLEL_EXECUTE.TASK_STATUS (L_TASK_NAME) = DBMS_PARALLEL_EXECUTE.FINISHED OR L_ATTEMPTS > L_RETRIES;
   L_ATTEMPTS := L_ATTEMPTS + 1;
   DBMS_PARALLEL_EXECUTE.RESUME_TASK (L_TASK_NAME);
END LOOP;
</code> Thanks, Jaganath K
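A commonly suggested alternative for a bulk copy like this is a single direct-path parallel insert, which bypasses the buffer cache; whether it beats chunked execution here depends on constraints and the error-logging requirement, so treat this as a sketch (the degree of 8 is a placeholder):
<code>
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 8) */ INTO t_profile_offer t
       (profile_id, enrollment_id, mem_offer_id, offer_id, kit_id, offer_code,
        start_date, end_date, offer_priority, activation_status, status, create_user, update_user)
SELECT profile_id, enrollment_id, mem_offer_id, offer_id, kit_id, offer_code,
       start_date, end_date, offer_priority, activation_status, status, create_user, update_user
FROM   tmp_profile_offer_org;

COMMIT;  -- direct-path inserted rows are not visible until commit
</code>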
Categories: DBA Blogs

Selective purging in V$ARCHIVED_LOG

Thu, 2021-03-04 10:06
Hello, We would like to purge old entries in V$ARCHIVED_LOG in primary and standby databases, because we have noted that some queries using V$DATABASE, V$BACKUP_SET and V$ARCHIVED_LOG, especially on the standby, are sometimes slow. We have control_file_record_keep_time set to 25, but on some databases we sometimes have thousands of entries in V$ARCHIVED_LOG which are older: sometimes 2 months old and sometimes older than 1 year. Example: <code>
> show parameter control_file_record_keep_time

control_file_record_keep_time        integer     25

> select sysdate - min(first_time) from v$archived_log;

SYSDATE-MIN(FIRST_TIME)
-----------------------
             705.846887
</code> Is there a way to run a selective purge of V$ARCHIVED_LOG in primary databases? I know MOS Note 845361.1, but that is a full purge. Thanks.
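For background, the archived-log section of the control file is circular: records older than control_file_record_keep_time become eligible for reuse but are not actively deleted, so old rows linger until their slots are overwritten. The section's size and usage can be checked like this (a diagnostic sketch):
<code>
SELECT type, records_total, records_used
FROM   v$controlfile_record_section
WHERE  type = 'ARCHIVED LOG';
</code>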
Categories: DBA Blogs

Two approaches to see the Segment Advisor findings?

Wed, 2021-03-03 15:50
Greetings, I have come across two approaches to view the findings of the Segment Advisor. Here they are: 1.) <code>select tablespace_name, segment_name, segment_type, partition_name, recommendations, c1 from table(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'));</code> 2.) <code>select a.message, b.created from DBA_ADVISOR_FINDINGS a, DBA_ADVISOR_TASKS b where a.task_id = b.task_id and a.task_name like 'SYS_AUTO_SPC%' order by 2 desc;</code> Which is the one that we should use? My guess is that we should rely on the first one, right? The second one shows a history of all of the findings. Also, I am surprised to see that it recommends shrinking a table that is 112MB in size to free only 16MB: "Enable row movement of the table RC1.JECT3 and perform shrink, estimated savings is 16349883 bytes." Thanks, John
Categories: DBA Blogs

Find all Sundays of the year

Wed, 2021-03-03 15:50
How can I find all the Sundays of the year without writing any PL/SQL block, procedure, or function?
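A single SQL statement can do it with a row generator; a sketch for the current year (the NLS parameter pins the day abbreviation to English so the filter is locale-independent):
<code>
SELECT d
FROM  (SELECT TRUNC(SYSDATE, 'YYYY') + LEVEL - 1 AS d
       FROM   dual
       CONNECT BY LEVEL <= ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), 12) - TRUNC(SYSDATE, 'YYYY'))
WHERE TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SUN';
</code>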
Categories: DBA Blogs

The "greater" PGA memory size allocated in rman channel

Tue, 2021-03-02 21:06
1) Why was the memory allocated by each of the following RMAN channel processes almost always more than _pga_max_size? 2) Why could the total memory allocated by the RMAN channel processes be far more than the instance-wide pga_aggregate_target? <code>
SQL> SELECT a.ksppinm AS parameter, c.ksppstvl AS VALUE, a.ksppdesc AS description, b.ksppstdf AS "Default?"
  2  FROM x$ksppi a, x$ksppcv b, x$ksppsv c WHERE a.indx = b.indx AND a.indx = c.indx AND a.ksppinm IN
  3  ('pga_aggregate_target','_pga_max_size','_smm_max_size','_smm_px_max_size') ORDER BY a.ksppinm;

PARAMETER             VALUE              DESCRIPTION                                        Default?
--------------------- ------------------ -------------------------------------------------- ----------
_pga_max_size         209715200          Maximum size of the PGA memory for one process     TRUE
_smm_max_size         102400             maximum work area size in auto mode (serial)       TRUE
_smm_px_max_size      524288             maximum work area size in auto mode (global)       TRUE
pga_aggregate_target  1073741824         Target size for the aggregate PGA memory consumed  FALSE
                                         by the instance

SQL> select component, current_size/1024/1024 "CURRENT_SIZE", min_size/1024/1024 "MIN_SIZE", max_size/1024/1024 "MAX_SIZE",
  2  user_specified_size/1024/1024 "USER_SPECIFIED_SIZE", last_oper_type "TYPE" from v$memory_dynamic_components where rownum < 100;

COMPONENT                 CURRENT_SIZE   MIN_SIZE   MAX_SIZE USER_SPECIFIED_SIZE TYPE
------------------------- ------------ ---------- ---------- ------------------- -------------
shared pool                        656        528        656                   0 GROW
large pool                         416        416        416                   0 STATIC
java pool                           64         64         64                   0 STATIC
streams pool                         0          0          0                   0 STATIC
SGA Target                        3072       3072       3072                   0 STATIC
DEFAULT buffer cache              1888       1888       1904                   0 SHRINK
KEEP buffer cache                    0          0          0                   0 STATIC
RECYCLE buffer cache                 0          0          0                   0 STATIC
DEFAULT 2K buffer cache              0          0          0                   0 STATIC
DEFAULT 4K buffer cache              0          0          0                   0 STATIC
DEFAULT 8K buffer cache              0          0          0                   0 STATIC
DEFAULT 16K buffer cache             0          0          0                   0 STATIC
DEFAULT 32K buffer cache             0          0          0                   0 STATIC
Shared IO Pool                       0          0          0                   0 STATIC
PGA Target                        1024       1024       1024                1024 STATIC
ASM Buffer Cache                     0          0          0                   0 STATIC

16 rows selected.

SQL> select sid, spid, s.program, event, pg...
</code>
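Side note: per-process PGA can be checked directly in V$PROCESS, which is a quick way to see how much each RMAN channel has actually allocated (a diagnostic sketch):
<code>
SELECT spid, program,
       ROUND(pga_alloc_mem / 1024 / 1024) AS alloc_mb,
       ROUND(pga_max_mem   / 1024 / 1024) AS max_mb
FROM   v$process
ORDER  BY pga_alloc_mem DESC;
</code>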
Categories: DBA Blogs

Apex 20.2, ORDS and Apache Tomcat on Windows 2016 - Getting it ready for production

Tue, 2021-03-02 02:46
I have installed and configured APEX 20.2 with ORDS 19 and Apache Tomcat 9 on a Windows 2016 server. The database is 12c Enterprise Edition without a container DB. I have an application built for initial registration for an event, and I am expecting 3000 to 5000 users hitting the web application after a newspaper advertisement. The server is on 1 Gbps bandwidth. Since it is open registration, I opted for no-authentication mode, i.e. through the APEX public user. I request the experts to please guide me with regard to the concurrent users/sessions to plan for, and also storage sizing: I expect image attachments of about 1 MB for each entry. Thanks in advance. Junaid.
Categories: DBA Blogs

oda-admin password

Tue, 2021-03-02 02:46
I have changed the oda-admin password, but the web front end still says the password is expired. <code>odacli-adm set-credential --password --username oda-admin</code>
Categories: DBA Blogs

To simulate a "Google"-like search

Tue, 2021-03-02 02:46
In one of our databases, we have a list of courses, descriptions, outlines, ... We would like to implement a "Google"-like search, i.e.: a) If I type the keyword "data", it looks for databases, data mining, big data, ... b) If I type a keyword such as "Project Mgmt", it should auto-correct the criteria to "Project Management" ... Kind of a knowledge search. It looks like Oracle Text can't do this. Please let me know what technology/algorithm I can use to achieve this, if possible. Thanks
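For what it's worth, Oracle Text does offer fuzzy matching and relevance ranking through CONTAINS; whether it covers the auto-correct requirement is a judgment call. A sketch (the courses table, course_desc column, and an existing CONTEXT index on it are assumptions):
<code>
SELECT course_name, SCORE(1) AS relevance
FROM   courses
WHERE  CONTAINS(course_desc, 'FUZZY(data, 60, 100, weight)', 1) > 0
ORDER  BY SCORE(1) DESC;
</code>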
Categories: DBA Blogs

Applying profiles to identical yet sadly unique SQLs

Tue, 2021-03-02 02:46
Thanks in advance for all the help the team provides, much appreciated! I have a situation where the application I am working with uses GTTs to load data into and filter SQLs. The problem is that, in doing so, the application also injects a cardinality hint into the SQL for the number of rows in the GTT. For example: <code>select * from table1 where column1 in (select /*+ cardinality ( 150 ) */ gtt_column from gtt);</code> So applying a SQL profile to that will work just fine, but it won't catch the next incarnation of the SQL, which could be <code>select * from table1 where column1 in (select /*+ cardinality ( 200 ) */ gtt_column from gtt);</code> Are there any options to account for this aside from application code changes? Thanks! A simple test case showing unique SQL IDs: <code>
create global temporary table gtt_test (column1 number) on commit delete rows;

create table perm_test as select level as column1 from dual connect by level <= 1000;

insert into gtt_test select level from dual connect by level <= 10;

select * from perm_test where column1 in (select /*+ cardinality (1) */ * from gtt_test);
select * from perm_test where column1 in (select /*+ cardinality (2) */ * from gtt_test);

select sql_text, sql_id, executions
from v$sql
where parsing_schema_name = user
and sql_text like '%cardinality%'
order by first_load_time desc;
</code>
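One avenue worth testing is force matching when accepting the profile, which normalizes literals in the SQL text; whether literals embedded in hint text are normalized on your version is exactly what the test case above can verify, so treat this as a sketch (the task name is a placeholder):
<code>
DECLARE
  l_profile VARCHAR2(128);
BEGIN
  l_profile := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
                 task_name   => 'my_tuning_task',
                 force_match => TRUE);  -- match on the literal-normalized signature
END;
/
</code>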
Categories: DBA Blogs

OPS$ORACLE

Tue, 2021-03-02 02:46
Hi. When I do sqlplus system and don't type a password, this happens: SQL*Plus: Release 8.0.6.0.0 - Production on Thu Jun 21 14:13:37 2001 (c) Copyright 1999 Oracle Corporation. All rights reserved. Enter password: ERROR: ORA-01005: null password given; logon denied Enter user-name: Connected to: Oracle8 Enterprise Edition Release 8.0.6.2.0 - Production PL/SQL Release 8.0.6.2.0 - Production SQL> show user USER is "OPS$ORACLE" SQL> What is this user "OPS$ORACLE"? What can that user do in its session? Is it an Oracle Parallel Server user? And why doesn't Oracle prompt for a password? [], Andre.
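Background: OPS$ accounts are operating-system-authenticated users, not Oracle Parallel Server users; the OPS$ prefix comes from the os_authent_prefix parameter, and such accounts log in without a password when the OS user name matches. A sketch of how one is typically created (assuming the default prefix):
<code>
-- the database trusts the OS login "oracle", so no password is prompted
CREATE USER ops$oracle IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO ops$oracle;
</code>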
Categories: DBA Blogs

Cost differs when two different unique indexes are used on the same table

Tue, 2021-03-02 02:46
Hi Tom, We have a table with 18.2 crore (182 million) rows and 420 columns. We have created 2 indexes on 2 different columns: N_CUST_SKEY - indexed and properly analyzed; COD_CUST_ID - indexed and properly analyzed. Query 1: <code>select * from DIM_KYC where N_CUST_SKEY=14879358670;</code> Explain plan for Query 1: <code>
Plan hash value: 1160736793

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                         |     1 |  7110 |     4   (0)| 00:00:01 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID| DIM_KYC                 |     1 |  7110 |     4   (0)| 00:00:01 | ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                 | IDX_DIM_KYC_N_CUST_SKEY |     1 |       |     3   (0)| 00:00:01 |       |       |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N_CUST_SKEY"=14879358670)

Note
-----
   - dynamic sampling used for this statement (level=2)
</code> Query 2: <code>select * from DIM_KYC where COD_CUST_ID=250781885;</code> Explain plan for Query 2: <code>
Plan hash value: 2375061313

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                         |  108K |  737M |  105K   (1)| 00:31:45 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID| DIM_KYC                 |  108K |  737M |  105K   (1)| 00:31:45 | ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                 | IDX_COD_CUST_ID_DIM_KYC |  108K |       |    80   (0)| 00:00:02 |       |       |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COD_CUST_ID"=250781885)

Note
-----
   - dynamic sampling used for this statement (level=2)
</code> Both indexed columns are unique, but we still get two different costs on the same table. Any suggestion as to why this is so?
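One observation from the plans themselves: both show "dynamic sampling used for this statement (level=2)", meaning the optimizer did not find usable object statistics and estimated very different cardinalities (1 row vs. 108K rows) for the two predicates. Gathering statistics is the usual first step before comparing costs; a sketch:
<code>
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'DIM_KYC',
    cascade => TRUE);  -- also gathers statistics on both indexes
END;
/
</code>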
Categories: DBA Blogs
