DBA Blogs

DBMS_PARALLEL_EXECUTE for partitions on a table

Tom Kyte - Thu, 2017-08-10 14:06
Hi, One of our projects involves updating data in a partitioned table. This table has a considerable number of rows and about 254 partitions. We have a merge script that works great when executed by partition. We currently have it set up like the b...
Categories: DBA Blogs

how to select object from anydata type field?

Tom Kyte - Wed, 2017-08-09 19:46
<code>CREATE OR REPLACE Type t_Au_Delivery_Service As Object ( Yes_No Integer, Delivery_Price Number, Note Varchar2(1000) ); create table AU_AUCTION_PARAM ( id_au_auction_param NUMBER not null, id_au_aucti...
Categories: DBA Blogs

Unable to upgrade db Exception in thread "main" java.lang.UnsatisfiedLinkError libXext.so.6: cannot open shared object file

Tom Kyte - Wed, 2017-08-09 19:46
Dear Team, I am facing an issue while trying to upgrade my database from 11.2.0.1 to 11.2.0.3. I have extracted the zip file and created the directory; everything is done, but no luck. <code>[Oracle@localhost Desktop]$ ll /u01/app/oracle/...
Categories: DBA Blogs

Partner Webcast - New SuperCluster M7 Update, Positioning and Use Cases

Join us to learn more about Oracle Systems, Engineered Systems and Storage. Learn what we understand by “Cloud Insurance” through our Systems Webcast Series for EMEA...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How Pythian Helped Nav Canada Migrate Their Oracle Database to a Microsoft Azure SQL Database

Pythian Group - Wed, 2017-08-09 15:20

A common concern for businesses is optimizing database speed and performance, without raising costs.

Nav Canada provides aviation weather, aeronautical information, and online flight planning services. To address the need for a more cost-efficient environment for their critical flight planning data while still optimizing performance, they decided to migrate from an on-premises Oracle system to Microsoft Azure SQL Database. They turned to the database and cloud experts at Pythian to assist them.

“Pythian’s breadth of expertise spans both traditional on-premises and cloud environments, making Pythian the perfect partner as we migrated critical workloads to the cloud.”

Pascal Chamma, Manager, Nav Canada Database Technology Services.

Pythian was tasked with creating a Proof of Concept (PoC) on an Azure database, while simulating the central SQL database. The project team demonstrated that the same level of performance could be achieved within a more cost-effective cloud environment. The PoC proved that the new system could run a process (3000 lines of PL/SQL code) every 20 seconds, performing calculations on up to 3 minutes of collected data.

Nav Canada is currently implementing the system across a production environment, with Pythian advising them as needed.

Read the full case study to learn how Nav Canada was able to migrate to the cloud to reduce costs, without sacrificing performance.

Categories: DBA Blogs

DB Tablespace dropped

Tom Kyte - Wed, 2017-08-09 01:26
Hi team, We have allocated a huge size to the SYSTEM, SYSAUX, UNDO, and TEMP tablespaces, and now there is a space issue at the OS level. So, can I drop and re-create these Oracle tablespaces? Is it a downtime activity? How can we do this? Note...
Categories: DBA Blogs

Result cache latch contention

Bobby Durrett's DBA Blog - Tue, 2017-08-08 11:29

I recently saw a dramatic example of result cache latch contention. I had just upgraded a database to 11.2.0.4 and several hours later processing pretty much came to a halt.

Of course I am telling you the end before I tell you the beginning. It all started the morning of July 22nd, 2017, a few weeks back. We had worked for a couple of months on an 11.2.0.4 upgrade. I mentioned some issues with the same upgrade in my earlier post. I spent several hours Saturday morning upgrading the database and it all went very smoothly. Then we kicked off the normal batch processing and things seemed fine. Around 1 am the next morning I got paged about an issue. All of the batch processing had ground to a halt on the upgraded database. Needless to say, I was freaking out and afraid that we would have to back out the upgrade. This would have been ugly and time-consuming.

At first I focused on the top SQL statements in the AWR report. Here are the top few from that morning:

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU   %IO   SQL Id         SQL Module  SQL Text
17,983.20         452,555     0.04                       43.98   10.64  0.00  4yw0zv7ty2t47  SQL*Plus    SELECT SLS_EXCL_RSN FROM ( SEL…
3,643.96          0                                      8.91    13.68  0.19  2fxnrcamtbcc2  SQL*Plus    DECLARE return_code number := …
3,637.60          0                                      8.90    3.67   0.06  18pdd22fh15dc  SQL*Plus    INSERT /*+ APPEND PARALLEL(TGT…

The top query dominated the others, taking 43.98% of the total run time, so it made sense that this query was the problem. I expected that some queries would change plans with the upgrade. I had used SQL Profiles to lock in other plans that I knew were problematic, but I would not have been surprised to see new ones. But looking at the top SQL, sql id 4yw0zv7ty2t47, the plan was the same as before the upgrade. Here is some execution history of the query before and after the July 22 upgrade:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
4yw0zv7ty2t47       848116227 15-JUL-17 11.00.28.230 PM           192430         .822087045
4yw0zv7ty2t47       848116227 16-JUL-17 02.00.24.144 AM          2522505         .691503149
4yw0zv7ty2t47       848116227 16-JUL-17 03.00.42.580 AM          2787002         .793723611
4yw0zv7ty2t47       848116227 22-JUL-17 10.00.12.787 PM           220581         23.4686899
4yw0zv7ty2t47       848116227 22-JUL-17 11.00.30.251 PM           455887         20.1393904
4yw0zv7ty2t47       848116227 23-JUL-17 12.00.47.394 AM           445173         20.5407455
4yw0zv7ty2t47       848116227 23-JUL-17 01.00.05.711 AM           369333         28.3195315
4yw0zv7ty2t47       848116227 23-JUL-17 02.00.22.811 AM           465591         39.0232079
4yw0zv7ty2t47       848116227 23-JUL-17 03.00.40.758 AM           452555         39.7370594
4yw0zv7ty2t47       848116227 23-JUL-17 04.00.57.968 AM           458328         39.3421407
4yw0zv7ty2t47       848116227 23-JUL-17 04.09.32.144 AM            10055         39.1518787
4yw0zv7ty2t47       848116227 23-JUL-17 04.11.58.484 AM            18507         39.6002968
4yw0zv7ty2t47       848116227 23-JUL-17 04.15.24.661 AM             5215         39.4672715
4yw0zv7ty2t47       848116227 23-JUL-17 04.16.30.441 AM             8542         39.1123689
4yw0zv7ty2t47       848116227 23-JUL-17 05.00.15.147 AM           321635         39.9827259
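A minimal sketch of the kind of AWR query that produces output like the above (an assumption, not necessarily the exact script used; it needs Diagnostics Pack access to DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT):

-- Hedged sketch: per-snapshot average elapsed time in ms for one SQL_ID,
-- using the delta columns AWR stores for each snapshot interval.
select ss.sql_id,
       ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta /
         decode(ss.executions_delta, 0, 1, ss.executions_delta) / 1000
         as elapsed_average_ms
from   dba_hist_sqlstat  ss
join   dba_hist_snapshot sn
on     ss.snap_id = sn.snap_id
and    ss.dbid = sn.dbid
and    ss.instance_number = sn.instance_number
where  ss.sql_id = '4yw0zv7ty2t47'
order  by sn.end_interval_time;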

So, I stared at this execution history for a couple of hours, getting more and more stressed because my cure-all SQL Profile was not going to help in this situation. The plan had not changed. I could not think, in my sleep-deprived state, of a way to resolve this issue that morning. Then, after a quick prayer for help, I noticed the %CPU and %IO columns in the query’s line in the AWR report: 10% CPU, 0% I/O. The query was spending 90% of its time waiting and doing nothing. I had noticed some latch waits in the AWR report earlier, but I had assumed they were just a side effect of a busy CPU.

These were the top three events:

Event                   Waits      Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
latch free              3,658,537  26.9K                  7             65.8       Other
DB CPU                             4749                                 11.6
db file scattered read  71,549     142.1                  2             .3         User I/O

But now I finally looked back at the latch waits. I went down to the latch section of the AWR report, and the "Result Cache: RC Latch" latch showed up big:

Latch Sleep Breakdown

Latch Name              Get Requests  Misses     Sleeps     Spin Gets
Result Cache: RC Latch  6,742,176     4,142,777  4,143,709  669,430
row cache objects       5,804,568     169,324    6,087      163,272
cache buffers chains    107,393,594   36,532     5,859      30,976
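To check the same latch live instead of waiting for the next AWR report, a minimal sketch against V$LATCH (the latch name is exactly as it appears in the report):

select name, gets, misses, sleeps, spin_gets
from   v$latch
where  name = 'Result Cache: RC Latch';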

I had noticed the RESULT_CACHE hint in some queries on our test database but never saw the latch contention. Here is the first part of the problem query with the RESULT_CACHE hint.

SQL_ID 4yw0zv7ty2t47
--------------------
SELECT SLS_EXCL_RSN FROM ( SELECT /*+ PARALLEL RESULT_CACHE */ DISTINCT

Here is part of the plan with the result cache step:

Plan hash value: 848116227
------------------------------------------
| Id  | Operation                        |
------------------------------------------
|  11 |  COUNT STOPKEY                   |
|  12 |   VIEW                           |
|  13 |    RESULT CACHE                  |
------------------------------------------
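To see how heavily the result cache is being used and which statements populate it, a hedged sketch against V$RESULT_CACHE_OBJECTS (the NAME column holds the start of the cached query text) could look like this:

-- Cache contents broken down by object type and status.
select type, status, count(*) as objects
from   v$result_cache_objects
group  by type, status;

-- Cached statements with the most result objects.
select name, count(*) as results
from   v$result_cache_objects
where  type = 'Result'
group  by name
order  by results desc;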

Early in the morning I got the idea of disabling the result cache. I ran this command:

alter system set result_cache_max_size=0 scope=both;

All of the running queries immediately died with this error:

ORA-00600: internal error code, arguments: [qesrcDO_AddRO],…
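As a quick sanity check that the cache really is off after a change like this, a hedged sketch using DBMS_RESULT_CACHE (its STATUS function should report the cache as disabled once the maximum size is zero):

select dbms_result_cache.status() as result_cache_status from dual;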

When the jobs were rerun, though, they quickly ran to completion. I compared the query’s performance before the upgrade, during the latch contention, and after disabling the result cache, and found that post-upgrade, without the result cache, the query ran at least ten times faster than it had before.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
------------- --------------- ------------------------- ---------------- ------------------
4yw0zv7ty2t47       848116227 15-JUL-17 11.00.28.230 PM           192430         .822087045
4yw0zv7ty2t47       848116227 16-JUL-17 02.00.24.144 AM          2522505         .691503149
4yw0zv7ty2t47       848116227 16-JUL-17 03.00.42.580 AM          2787002         .793723611
4yw0zv7ty2t47       848116227 22-JUL-17 10.00.12.787 PM           220581         23.4686899
4yw0zv7ty2t47       848116227 22-JUL-17 11.00.30.251 PM           455887         20.1393904
4yw0zv7ty2t47       848116227 23-JUL-17 12.00.47.394 AM           445173         20.5407455
4yw0zv7ty2t47       848116227 23-JUL-17 01.00.05.711 AM           369333         28.3195315
4yw0zv7ty2t47       848116227 23-JUL-17 02.00.22.811 AM           465591         39.0232079
4yw0zv7ty2t47       848116227 23-JUL-17 03.00.40.758 AM           452555         39.7370594
4yw0zv7ty2t47       848116227 23-JUL-17 04.00.57.968 AM           458328         39.3421407
4yw0zv7ty2t47       848116227 23-JUL-17 04.09.32.144 AM            10055         39.1518787
4yw0zv7ty2t47       848116227 23-JUL-17 04.11.58.484 AM            18507         39.6002968
4yw0zv7ty2t47       848116227 23-JUL-17 04.15.24.661 AM             5215         39.4672715
4yw0zv7ty2t47       848116227 23-JUL-17 04.16.30.441 AM             8542         39.1123689
4yw0zv7ty2t47       848116227 23-JUL-17 05.00.15.147 AM           321635         39.9827259
4yw0zv7ty2t47       848116227 23-JUL-17 05.20.37.524 AM           329457          17.895581
4yw0zv7ty2t47       848116227 23-JUL-17 05.21.15.363 AM           205154         .050141323
4yw0zv7ty2t47       848116227 23-JUL-17 05.25.07.159 AM          1023657         .049949389

If you look through the output you will see that pre-upgrade, on July 16th, the query averaged about .69 to .79 milliseconds. During the latch contention on July 23rd it averaged about 39 milliseconds, far worse. But after disabling the result cache it averaged .05 milliseconds, which is at least 10 times faster than it ran with the result cache before the upgrade.

So, it seems that the RESULT_CACHE hint on this query has always slowed it down, but the upgrade and the load afterward caused intense latch contention that we had not seen before. It is very cool that disabling the result cache actually made the query faster than it had ever been.

I don’t think it makes sense to put a RESULT_CACHE hint in a query that runs in .05 milliseconds without it. The overhead of the result cache made the query run at least 10 times slower. Something about the upgrade resulted in latch contention that caused a system-wide problem, but disabling the result cache made the query run faster than it ever had. We could have avoided this problem by leaving off the RESULT_CACHE hint, but it was nice that I could resolve the problem quickly with a parameter change that disabled the result cache.

Bobby

Categories: DBA Blogs

update table A using data from table B when more than one update record available

Tom Kyte - Tue, 2017-08-08 07:06
I have two tables, catalog and interface; both have an item_sku column. The catalog table has only one row for each unique item_sku. The interface table may have one, two, or three rows for each item_sku. I need to update the catalog table using the inte...
Categories: DBA Blogs

IRR Calculations in SQL or PLSQL

Tom Kyte - Tue, 2017-08-08 07:06
Hello Tom, I would like to know if there is any way to calculate IRR (internal rate of return) with the help of a stored function/procedure. IRR is a standard function available in Excel which allows the user to calculate the IRR based on cash flows. For E...
Categories: DBA Blogs

Prefixing a packaged procedure call with the package name from inside said package ?

Tom Kyte - Tue, 2017-08-08 07:06
Within a single package two procedures call a function in the package. One prefixes with the package name and the other does not. Is there any difference? Thank you. <code> CREATE OR REPLACE PACKAGE BODY demo AS FUNCTION f1 RETURN NUMBE...
Categories: DBA Blogs

Oracle Scheduler job continue to run though is set to stop after 240 minutes/4 hours

Tom Kyte - Tue, 2017-08-08 07:06
Hi, Although they had set the max job duration to 240 minutes/4 hours, the Oracle Scheduler job continued to run for more than 15 hours until they killed it. How can we get this job to stop once it is past the max job duration? I had already set tha...
Categories: DBA Blogs

SQL Baseline impact

Tom Kyte - Tue, 2017-08-08 07:06
A user reported that a SQL statement had been running slow for about a month, so I checked the AWR snapshots and generated a baseline from an AWR snapshot, because the good plan was recorded 20 days ago. The related SQL ran faster the next day but the baseline was not being us...
Categories: DBA Blogs

Rebuilding the same index in multiple sessions(parallel)

Tom Kyte - Tue, 2017-08-08 07:06
Tried rebuilding the index in multiple sessions, but got an error. Initiated the rebuild in two sessions using the query below: ALTER INDEX indexname REBUILD. One of the sessions completed fine but the other one failed with the error below: ORA-08104: this ...
Categories: DBA Blogs

Oracle database Libraries (External Procedures Using C)

Tom Kyte - Tue, 2017-08-08 07:06
Dear Tom, I have an Oracle 12c (12.1.0.2.0) installation on CentOS Linux. I was learning about Oracle libraries and created a library to find the reverse of a string, and I have published the function as given below. CREATE OR REPLACE FUNCTION...
Categories: DBA Blogs

Partner Webcast – Moving Oracle Workloads to Cloud: ISV & Bespoke Applications with ...

Moving & deploying ISV & bespoke applications on the Oracle Cloud enables customers to accelerate innovation and drive digital transformation initiatives while reducing costs. Only Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Two configuration changes with 11.2.0.4 upgrade

Bobby Durrett's DBA Blog - Mon, 2017-08-07 17:58

A couple of weeks ago I upgraded a major production database from 11.1.0.7 to 11.2.0.4. Our developers kept hitting one ugly bug after another on the minimally patched 11.1.0.7 so I wanted to get them on the most patched up version of Oracle 11. This is on HP-UX Itanium 11.31. I made two key configuration changes that I want to discuss here. I changed the database to use direct I/O and I set a parameter so that the database would not use direct path reads for most table and partition scans. The main point of this blog post is that one change required the other. I moved to direct I/O to improve checkpoint performance but that slowed down queries that repeatedly scanned segments using direct path reads. I set an underscore parameter that prevented direct path reads except on very large table scans and that sped the problem queries back up to normal.

In our testing a coworker ran a massive number of updating jobs at the same time. A truncate took about an hour during this excessive load. I ran a simple test script to truncate a small table during this load and it took over a minute. The truncate spent almost all of its time on these two waits:

  1. local write wait – 61%
  2. enq: RO – fast object reuse – 38%
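A minimal sketch of the kind of timing test described above, run while the heavy update load is going (the table name is made up for illustration):

set timing on
create table trunc_test as select * from dba_objects where rownum <= 1000;
truncate table trunc_test;
drop table trunc_test purge;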

This was on a Delphix clone of production, so the NFS file systems were already forcing the database to use direct I/O. But I had to max out the database writer processes to get the needed checkpoint performance. I based this decision on my experience with checkpoint performance on another database, which I documented in this post: url. I set these parameters:

  1. filesystemio_options=directIO
  2. db_writer_processes=36
  3. dbwr_io_slaves=0
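A hedged sketch of how these settings could be applied; all three are static parameters, so they go into the spfile and take effect at the next restart (assuming the instance uses an spfile):

alter system set filesystemio_options=directIO scope=spfile;
alter system set db_writer_processes=36 scope=spfile;
alter system set dbwr_io_slaves=0 scope=spfile;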

We use these settings on a large data warehouse staging database on 11.2.0.4 and HP-UX 11.31 to tune checkpoint performance there. So, we have tested this configuration in production for several years.

After changing these parameters the truncates ran fast under the same heavy updating load.

We remounted the database file systems with the mincache=direct and convosync=direct options on our non-Delphix physical databases when we moved them to direct I/O.

For a long time I had seen issues with slow full partition scans on our Delphix clones of the production database that this post is about. Here is a post about this issue: url. I remembered hearing about an undocumented underscore parameter that you could use to prevent direct path reads and always meant to look into it, but kept holding back. I didn’t want to set the underscore parameter on my Delphix copies of production and not set it on my physical production database; that would make our test database different from production, which could lead to invalid testing. But an upgrade was a great time to put the parameter in both places, on Delphix and in production. This was the parameter that I put in:

_small_table_threshold=1000000
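A sketch of how such a parameter could be set, shown as an illustration only; it is undocumented, so it is normally changed only after testing or with Oracle Support's guidance, and the double quotes are required for underscore parameters:

-- Undocumented parameter: treat as an example, not a recommendation.
alter system set "_small_table_threshold"=1000000 scope=spfile;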

I read several good blog posts about this parameter and other ways to deal with direct path reads in Oracle 11. Evidently some behavior changed in Oracle 11 that caused full scans to bypass the buffer cache in more situations, using direct path reads. Some posts talked about DBAs simply bumping up the value of _small_table_threshold when they upgraded to Oracle 11, so I tried it and testing proved it out. Here is an earlier post about my work with this parameter: url.

So, the upgrade not only gave me a chance to move our database to a more patched-up, and hopefully more stable, release; it was also a chance to make two key configuration changes. Without direct I/O our production database was using the Unix file system cache to cache the blocks that were being repeatedly scanned using direct path reads. But to switch to direct I/O I had to end the direct path reads so that the database buffer cache would be used to cache the scanned blocks. Direct I/O and the 36 database writer processes gave us great checkpoint performance. Disabling direct path reads kept the queries that had depended on the Unix filesystem cache running just as fast by allowing them to use the database buffer cache.

Bobby

Categories: DBA Blogs

composite hash - list partitioning

Tom Kyte - Mon, 2017-08-07 12:46
Tom, does Oracle support hash partitioning with range or list subpartitioning on each hash partition, i.e. hash-range and hash-list composite partitioning? The "VLDB and Partitioning" manual does not list this combination; wasn't sure ...
Categories: DBA Blogs

Identifying whether an entry is generated "by default" in a trigger

Tom Kyte - Mon, 2017-08-07 12:46
Dear Tom, When having a table with an autogenerated ID like this: <code>create table ids ( id number generated by default as identity, t varchar2(300));</code> And creating a trigger like this: <code>create or replace trigger ids_trig be...
Categories: DBA Blogs

How to export 44 lakhs of records from a table in oracle?

Tom Kyte - Mon, 2017-08-07 12:46
How to export 44 lakh (4.4 million) records from a table in Oracle?
Categories: DBA Blogs

Rebuilding Indexes

Tom Kyte - Mon, 2017-08-07 12:46
I know this question has been asked earlier and I am sorry to take up this question slot but I am confused regarding rebuilding indexes. If I am interpreting it correctly, you don't recommend rebuilding indexes at all. I have talked to two se...
Categories: DBA Blogs
