DBA Blogs

Regex for comma-separated string with predefined list of words that are allowed.

Tom Kyte - Thu, 2017-04-13 11:46
Hello, With this regex : not REGEXP_LIKE (categories, '[^(subcat1| subcat2 |subcat3|null; )]', 'i'), i can specified what kind of values i can be used and this are separated with ';'. but i can do that <code>insert into regexText Values(';') ...
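The root of the problem in the pattern above is that `[^(subcat1|subcat2|...)]` is a negated character class: the parentheses, pipes and letters inside the brackets are treated as individual characters, not as alternatives, which is why almost any input slips through. What is wanted is an anchored alternation over the whole list. A minimal sketch of that idea, using Python's `re` module just to illustrate the pattern (Oracle's REGEXP_LIKE understands the same alternation syntax; the token names come from the question):

```python
import re

# One allowed token, optionally repeated with ';' separators, anchored at both
# ends so a bare separator or an unknown token cannot slip through.
token = r"(subcat1|subcat2|subcat3|null)"
allowed = rf"(?i)^{token}(;{token})*$"

print(bool(re.fullmatch(allowed, "subcat1;SUBCAT3")))  # True: known tokens only
print(bool(re.fullmatch(allowed, ";")))                # False: separator alone
```

The same anchored pattern, passed to REGEXP_LIKE with the 'i' flag instead of the inline `(?i)`, should reject the lone-semicolon insert described in the question.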
Categories: DBA Blogs

Need to call Restful API using Oracle PL SQL

Tom Kyte - Thu, 2017-04-13 11:46
Hi, I am new to hitting Restful APIs from Oracle. I have this huge Xml (> 4000 characters) which I need to post to a remote restful api endpoint. Please let me know how to accomplish this. Below is my sample code that I am playing with right now....
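Inside the database this is usually done with UTL_HTTP (begin_request, write_text in chunks for payloads over 32K) or APEX_WEB_SERVICE. As a language-neutral sketch of what the call has to do — POST the XML body with the right content type — here is the equivalent request built with Python's urllib; the endpoint URL is a made-up placeholder:

```python
import urllib.request

# Placeholder endpoint; substitute the real API URL.
URL = "https://api.example.com/v1/documents"

# Build a payload comfortably larger than 4000 characters, as in the question.
xml_payload = ("<doc>" + "<row>sample</row>" * 400 + "</doc>").encode("utf-8")

request = urllib.request.Request(
    URL,
    data=xml_payload,
    headers={"Content-Type": "application/xml; charset=utf-8"},
    method="POST",
)
# urllib.request.urlopen(request)  # left commented out: no live endpoint here
print(request.get_method(), len(xml_payload), "bytes")
```

The PL/SQL version follows the same shape: open the request as a POST, set Content-Type and Content-Length headers, then write the XML in pieces before reading the response.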
Categories: DBA Blogs

ORA_HASH Value Collision.

Tom Kyte - Thu, 2017-04-13 11:46
Hi Tom, I am attempting to assign unique value to a an expression value which is distinct and derived from the concatenation of multiple fields. Here's the usage: Query1: create table Table2 parallel 16 as select /*+ parallel(a,16)*/ disti...
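Collisions here are expected behavior rather than a defect: ORA_HASH maps its input into a finite bucket range (by default 0 to 2^32 - 1), so by the birthday problem the chance of at least one collision grows quadratically with the number of distinct inputs. A quick estimate of that probability (the row counts below are illustrative, not taken from the question):

```python
import math

BUCKETS = 2 ** 32          # ORA_HASH default bucket range: 0 .. 2^32 - 1

def collision_probability(n, buckets=BUCKETS):
    # Birthday-problem approximation: P ~ 1 - exp(-n(n-1) / (2 * buckets))
    return 1 - math.exp(-n * (n - 1) / (2 * buckets))

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} rows -> P(collision) ~ {collision_probability(n):.4f}")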
Categories: DBA Blogs

Error ORA-01033 After Doing a Switchover in a 12.1 RAC Environment

Pythian Group - Thu, 2017-04-13 08:29

The other day I did a switchover in a RAC environment, which went pretty smoothly, but after the switchover the primary kept reporting the following error:

select dest_name,status,error from gv$archive_dest_status where dest_id=2;

DEST_NAME
--------------------------------------------------------------------------------
STATUS	  ERROR
--------- -----------------------------------------------------------------
LOG_ARCHIVE_DEST_2
ERROR	  ORA-01033: ORACLE initialization or shutdown in progress

LOG_ARCHIVE_DEST_2
ERROR	  ORA-01033: ORACLE initialization or shutdown in progress

I went and checked the standby, and saw that it was in recovery mode, waiting for the redo log:

PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CONNECTED    ARCH     44474			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     133318			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     50602			0		 0		  0		   0		    0
ARCH	CLOSING      ARCH     44470			1	     21623	      14336		   0		    0
ARCH	CLOSING      ARCH     44476			1	     21624		  1		   0		    0
ARCH	CLOSING      ARCH     44472			2	     19221	      96256		   0		    0
RFS	IDLE	     LGWR     133322			1	     21625	      17157		   0		    0
RFS	IDLE	     LGWR     50620			2	     19222	      36611		   0		    0
MRP0	WAIT_FOR_LOG N/A      N/A			2	     19222	      36617		  0		   0

My first thought was that the password file was incorrect, so I recreated it and copied it from the primary to the standby nodes, but I still kept getting the same error. I reviewed the environment with the scripts in Doc ID 1581388.1 and everything seemed alright. It kept bugging me that the logs were being shipped to the standby but not applied (which pointed back to the password file), and what really bothered me was that I had just recreated the password file in $ORACLE_HOME/dbs and still kept getting the same error.

After a while of troubleshooting, I found that on the new primary the password file resided in an ASM disk group, and that was the main culprit: I had to copy the password file from the ASM disk group on the primary over to the standby.
Primary

[oracle@localhost trace]$ srvctl config database -d renedb
Database unique name: renedb
Database name: 
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA1/renedb/spfilerenedb.ora
Password file: +DATA1/renedb/PASSWORD/pwrenedb
Domain: 
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: ARCH1,DATA1,REDO
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: renedb1,renedb2
Configured nodes: localhost,localhost
Database is administrator managed
[oracle@localhost trace]$ exit
-bash-4.1$ sudo su - grid
[sudo] password for pythian: 
[grid@localhost ~]$ . oraenv
ORACLE_SID = [+ASM1] ? 
The Oracle base remains unchanged with value /u01/app/grid
[grid@localhost ~]$ asmcmd
ASMCMD> pwcopy +DATA1/renedb/PASSWORD/pwrenedb /tmp/pwrenedb
copying +DATA1/renedb/PASSWORD/pwrenedb -> /tmp/pwrenedb
ASMCMD> exit

Standby

[oracle@localhost dbs]$ scp 10.10.0.1:/tmp/pwrenedb /tmp/pwrenedb_stby_phdb
pwrenedb_stby_phdb                                                                                                                                                                                                    100% 7680     7.5KB/s   00:00    
[oracle@localhost dbs]$ exit
logout
[pythian@localhost ~]$ sudo su - grid
[sudo] password for pythian: 
Last login: Fri Mar 31 21:55:53 MST 2017
[grid@localhost ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@localhost ~]$ asmcmd
ASMCMD> mkdir +DATA/RENEDB/PASSWORD
ASMCMD> pwcopy /tmp/pwrenedb_stby_phdb +DATA/RENEDB/PASSWORD/pwrenedb_stby
copying /tmp/pwrenedb_stby_phdb -> +DATA/RENEDB/PASSWORD/pwrenedb_stby
ASMCMD> exit
[grid@localhost ~]$ exit
logout
[pythian@localhost ~]$ sudo su - oracle
Last login: Sat Apr  1 01:35:46 MST 2017 on pts/4
The Oracle base has been set to /u01/app/oracle
[oracle@localhost dbs]$ srvctl modify database -d renedb_stby -pwfile +DATA/RENEDB/PASSWORD/pwrenedb_stby
[oracle@localhost dbs]$ srvctl config  database -d renedb_stby
Database unique name: renedb_stby
Database name: 
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfilerenedb_stby.ora
Password file: +DATA/RENEDB/PASSWORD/pwrenedb_stby
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: ARCH,DATA,REDO
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: renedb_stby1,renedb_stby2
Configured nodes: *******,***********
Database is administrator managed

Once I did this, the standby started applying the redo logs, and after the gap was closed the primary's switchover status was “TO STANDBY”:
Primary

Primary Site last generated SCN

*******************************

DB_UNIQUE_NAME	SWITCHOVER_STATUS	  CURRENT_SCN
--------------- -------------------- ----------------
renedb	TO STANDBY		 134480468945

Standby

Data Guard Apply Lag

********************

NAME	     LAG_TIME		  DATUM_TIME	       TIME_COMPUTED
------------ -------------------- -------------------- --------------------
apply lag    +00 00:00:00	  04/01/2017 04:05:51  04/01/2017 04:05:52

1 row selected.


Data Guard Gap Problems

***********************

no rows selected

PROCESS STATUS	     CLIENT_P CLIENT_PID	  THREAD#	 SEQUENCE#	     BLOCK#    ACTIVE_AGENTS	 KNOWN_AGENTS
------- ------------ -------- ---------- ---------------- ---------------- ---------------- ---------------- ----------------
ARCH	CONNECTED    ARCH     44474			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     133318			0		 0		  0		   0		    0
RFS	IDLE	     ARCH     50602			0		 0		  0		   0		    0
ARCH	CLOSING      ARCH     44470			1	     21623	      14336		   0		    0
ARCH	CLOSING      ARCH     44476			1	     21624		  1		   0		    0
ARCH	CLOSING      ARCH     44472			2	     19221	      96256		   0		    0
RFS	IDLE	     LGWR     133322			1	     21625	      17157		   0		    0
RFS	IDLE	     LGWR     50620			2	     19222	      36611		   0		    0
MRP0	APPLYING_LOG N/A      N/A			2	     19222	      36617		  33		   33

9 rows selected.

Conclusion
In 12.1, per Doc ID 1984091.1, it is recommended to keep the password file in an ASM disk group. Once I did that, I was able to work around error ORA-01033 and sleep well!

Note: This was originally published at rene-ace.com.

Categories: DBA Blogs

Unable to MERGE records on TABLE_NAME@DBLink

Tom Kyte - Wed, 2017-04-12 17:26
Hi Chris/Connor, TYPE tab_Varchar2 IS TABLE OF VARCHAR2(500); Before executing MERGE we are fetching ROWIDs of TB_ORDERS in chunk using BULK COLLECT LIMIT 5000; DBLINK is created which is pointing to Archival DB, and we need to MERGE records...
Categories: DBA Blogs

Schedule job "B" to run after job "A" completes and prevent job "A" from running again until job "B" completes.

Tom Kyte - Wed, 2017-04-12 17:26
AskTom, I have an Oracle Scheduled job called "A" that runs every 5 minutes and takes 10 seconds to complete but could run longer. I have a second job called "B" that I want to run at 1 am daily. I don't want job "B" to run at the same time as j...
Categories: DBA Blogs

Large Updates While Resolving Surrogate Keys

Tom Kyte - Wed, 2017-04-12 17:26
We have the need to run a very large import for on hand counts. The table we are updating -- PRODUCT_STORE -- is keyed by a store_id and product_id. However, product_id is a surrogate key and we are given the product_code and store_id and we need t...
Categories: DBA Blogs

Multi Users and permissions

Tom Kyte - Wed, 2017-04-12 17:26
Hi Tom, can I create multi users for one schema? With different restrictions each user. For example I have HR schema and have 3 users let's say userA, userB and userC. userA only have create view, select and update data grants. userB have all of user...
Categories: DBA Blogs

Inline function in a select sub query or create view

Tom Kyte - Wed, 2017-04-12 17:26
I need some suggestion, how can I use an inline function in a select sub query or create view, as up to my knowledge it is possible in oracle 12c. Code: <code>select /*+ WITH_PLSQL */ calc from ( with function calculator (m number, r numbe...
Categories: DBA Blogs

Cursors to XML, Date/Time formatting and null columns

Tom Kyte - Wed, 2017-04-12 17:26
https://livesql.oracle.com/apex/livesql/file/content_ET2CD8U5GO17VY2H0CJ8REEWL.html 11g Release 2 XE, 12c SE Release 1, 12c SE Release 2 Date+Time in XML when converting from cursor Hi Chris, Connor, Maria, I'm working on utPLSQL v3 pro...
Categories: DBA Blogs

Converting JSON object into Rows which have Parent child relation

Tom Kyte - Wed, 2017-04-12 17:26
Hello Tom, We are getting below JSON data from front end application, for forming SQL condition like "(price < 10.25 OR (category = 2 OR category = 1) OR (name like '%test%' OR in_stock=0))". How I can convert below JSON data into relation...
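From 12.1.0.2 onward, JSON_TABLE can shred a JSON document into rows inside SQL itself; outside the database, turning a condition tree into a WHERE clause is a small recursive walk. Since the question's JSON is truncated above, the node shape below is an assumption for illustration (and the naive literal quoting shown would need to become bind variables in production code):

```python
import json

# Assumed node shape (the question's actual JSON is truncated above):
#   group: {"operator": "OR", "conditions": [...]}
#   leaf:  {"field": "price", "op": "<", "value": 10.25}
def to_sql(node):
    if "conditions" in node:  # group node: join children with its operator
        parts = [to_sql(c) for c in node["conditions"]]
        return "(" + f" {node['operator']} ".join(parts) + ")"
    value = node["value"]
    if isinstance(value, str):  # naive quoting; use bind variables in real code
        value = "'" + value.replace("'", "''") + "'"
    return f"{node['field']} {node['op']} {value}"

tree = json.loads("""
{"operator": "OR", "conditions": [
    {"field": "price", "op": "<", "value": 10.25},
    {"operator": "OR", "conditions": [
        {"field": "category", "op": "=", "value": 2},
        {"field": "category", "op": "=", "value": 1}]}
]}
""")
print(to_sql(tree))
```

For the sample tree this prints a condition matching the first part of the target string in the question.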
Categories: DBA Blogs

DML error logging in partitioned table inserts more records

Tom Kyte - Wed, 2017-04-12 17:26
We are facing some issue when we try to use DML error logging while inserting data in a partitioned table. In our programs, we load the data from different tables in some target tables. In one of our program, we are inserting data in a table which...
Categories: DBA Blogs

Data movement over DB Link

Tom Kyte - Wed, 2017-04-12 17:26
Hi Chirs/Connor, Couple of question related to Table data movement from Primary DB to Archival DB. We are having Two DB's - Primary (holding current 2 years of data) Archival (Primary data + historical) We have separate Archival DB u...
Categories: DBA Blogs

Optimizer bug fix makes a query run more than 3 times slower

Bobby Durrett's DBA Blog - Wed, 2017-04-12 15:39

I’m working on an 11.1.0.7 to 11.2.0.4 upgrade and found a handful of queries that run more than 3 times longer on 11.2.0.4 than 11.1.0.7. The data and optimizer statistics are very similar on the two test databases. I’m pretty sure that an optimizer bug fix caused this difference. So, the irony is that a fix to the optimizer that we get with the upgrade to the very stable 11.2.0.4 release is causing a 3x slowdown in the query.

For my testing I’m using the gather_plan_statistics hint and this query to dump out the plan after executing the query:

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

I used an outline hint to force the 11.2.0.4 plan to run under 11.1.0.7 and then I looked at the estimated and actual row counts to find a discrepancy. I found one table with estimated row counts that did not look correct on 11.1.0.7 but made sense on 11.2.0.4.

11.1.0.7

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      1 |      0 |00:00:07.34 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     16 |    102 |00:00:01.20 |   96255 |       |       |          |

11.2.0.4

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      8 |      0 |00:00:07.44 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     17 |    102 |00:00:01.22 |   96255 |       |       |          |

After the range scan in step 30 of the 11.1.0.7 plan you have an estimate of 16 rows, but the table access in step 29 has an estimate of only 1. In 11.2.0.4 the estimate for step 29 is 8 rows. Given the optimizer statistics, the optimizer should have estimated 8 rows for step 29 in 11.1.0.7 as well. It appears that 11.2.0.4 fixed a bug like this.

Here are the predicates for step 29 in the plan:

29 - filter(("PCP"."MY_FLAG"='M' OR "PCP"."MY_FLAG"='Y'))

So the column could have the value M or Y. The optimizer statistics show 4 distinct values for the column and no histogram, so the optimizer should assume that 1/4 of the rows meet each criterion. It should therefore have estimated that 1/4 + 1/4 = 1/2 of the rows from step 30 meet the criteria in step 29: 17/2 = 8, rounding down. But in 11.1.0.7 it seems the optimizer applied the 1/4 twice by multiplying instead of adding: 16 * 1/4 * 1/4 = 1. There is a known bug related to OR conditions in where clauses:

Bug 10623119 – wrong cardinality with ORs and columns with mostly nulls (Doc ID 10623119.8)
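The arithmetic in the paragraph above is worth writing out. With 4 distinct values and no histogram, each equality predicate has selectivity 1/4, and ORed equality predicates on the same column should have their selectivities added, not multiplied, since the two values are disjoint. A small sketch of both calculations, using the E-Rows values from the plans above:

```python
NDV = 4                 # distinct values of MY_FLAG, no histogram
sel = 1 / NDV           # selectivity of one equality predicate

# Correct OR handling: add the two selectivities (disjoint values).
rows_in_new = 17        # E-Rows of the index range scan, 11.2.0.4 plan
correct = int(rows_in_new * (sel + sel))    # 17 * 1/2 = 8, rounding down

# Pre-fix behavior described above: each 1/4 applied by multiplication.
rows_in_old = 16        # E-Rows of the index range scan, 11.1.0.7 plan
buggy = int(rows_in_old * sel * sel)        # 16 * 1/16 = 1

print(correct, buggy)   # matches the E-Rows of step 29 in the two plans
```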

Our 11.2.0.4 database includes this bug fix but I don’t know if this fix caused the difference in behavior that I saw. It seems possible that it did.

The interesting thing is that the real row count for step 29 is 0. So, the pre-bug fix plan in 11.1.0.7 actually estimated the row count more accurately by accident. It estimated 1 and the real count was 0. The correct estimate should have been 8, but that is not as close to 0 as 1 is. I think we just happened to have a few queries where the bug resulted in a more accurate estimate than a properly functioning optimizer. But, I’m only looking at the queries whose performance is worse after the upgrade. There may have been other queries that performed better because of this bug fix.

I ended up passing this and a similar query back to a senior SQL developer and he took one look at the query and described it as “ugly”. He fixed both of them in no time so that both queries now run just as fast or faster on 11.2.0.4 than they did on 11.1.0.7.

So, the original query ran faster when the optimizer was not working properly, and a human developer simplified it so that it ran faster with the properly working optimizer. Maybe the moral of the story is to build simpler, cleaner SQL queries to begin with, and if you find a query whose performance declines with better optimizer information, consider improving the query so that it works well with the better-functioning optimizer.

Bobby

P.S. Well, I used a cardinality hint to give the 11.2.0.4 optimizer the correct number of rows for the step in the plan and it didn’t change the plan. So, the change in the way the number of rows was calculated may not be the real reason, or the only reason, for the worse plan on 11.2.0.4. Plus, I tried the alter session command mentioned in the comments and it didn’t change the plan either. I guess that something else in 11.2.0.4 caused the change to the less efficient plan but I can’t say what. Still, a developer fixed the queries in no time so that is still the main point…

Categories: DBA Blogs

Tuning SQL Statements Using Explain Plan

Tom Kyte - Tue, 2017-04-11 23:06
Hi Tom , I am new in tuning sql statements. Can u give a methodology of tuning the sql statements and to predict the output of explain plan. When i use explain plan for a particular query, i am able to understand what the result means. can u pls cla...
Categories: DBA Blogs

compare tables with excluded flag

Tom Kyte - Tue, 2017-04-11 23:06
Hi Tom, there is a table with all libraries and books. there is other table with book shells. the book can be included or explicit excluded to/from shell. I have to populate lib_shell table with connections between libraries and shells <cod...
Categories: DBA Blogs

Find the shortest tree to include all fields to fetch all the rows from table.

Tom Kyte - Tue, 2017-04-11 23:06
Hi, I help people in oracle sql, PLSQL but at this time I needed help :), quick help will be much appreciated, e.g. I have table lets say GET_BAYID_TMP2, in that table below rows are there, my project requirement is to get the shortest length o...
Categories: DBA Blogs

Protection against long running query

Tom Kyte - Tue, 2017-04-11 23:06
Hi, I have a web application use to created report on database table. My concern is about database performance. I don't whant user create report that will kill the database. I know, I can use use the max rownum in the SQL, but eaven a max row can ...
Categories: DBA Blogs

Cardinality estimates

Tom Kyte - Tue, 2017-04-11 23:06
Hi, Please have a look at the two SQL statements below, in one I am using RANK and in another I am using ROW_NUMBER. Notice that the cardinality estimate is correct in case we use RANK and in case of ROW_NUMBER it changes to 1. Am I doing someth...
Categories: DBA Blogs

SELECT * FROM TABLE(PL/SQL TABLE) not working

Tom Kyte - Tue, 2017-04-11 23:06
Hi Connor, Chris can you please help, select * from table(cast(t_daz as daz_test_tab)) is not working here.. <code>create table tb_test as select object_name, object_id from user_objects where rownum <= 10; create type daz_test as object (...
Categories: DBA Blogs
