DBA Blogs

GROUP BY returns incorrect number of rows

Tom Kyte - Thu, 2019-02-07 00:46
Hi, a bit of an odd one, possibly a bug. <code>DROP TABLE xxibpc_test; CREATE TABLE xxibpc_test AS WITH data AS (SELECT LEVEL l_no FROM dual CONNECT BY LEVEL <= 10000) SELECT 1 instance_number ,1 + l_no session_id ,10...
Categories: DBA Blogs

Replacements for OrdDoc

Tom Kyte - Wed, 2019-02-06 06:26
I have been using OrdDoc to store files and attachments for my applications. It looks like Oracle has deprecated this and will no longer support it in its next release (https://mikedietrichde.com/2018/08/10/oracle-multimedia-is-deprecated-in-oracle-1...
Categories: DBA Blogs

CDC on Views

Tom Kyte - Wed, 2019-02-06 06:26
Hi Tom, I have a view created daily on a detailed table set (n tables, based on the load), as below: CDR_Detail_20190204_view CDR_Detail_20190205_view CDR_Detail_20190206_view. To generate a report, I need to pick all the daily views (...
Categories: DBA Blogs

Can I force a query to use the varchar index if the condition uses a number value?

Tom Kyte - Wed, 2019-02-06 06:26
Categories: DBA Blogs

Windows Oracle11g to Linux Oracle 12c

Tom Kyte - Wed, 2019-02-06 06:26
Hi, hope you're doing great. I wonder if you can help me by giving me some advice or pointing me to some useful docs. We are planning to migrate a financial application, with most of its business logic written in PL/SQL, from Windows Orac...
Categories: DBA Blogs

Performance degradation after database restoration

Tom Kyte - Wed, 2019-02-06 06:26
Hi all, I have a problem here. We are running an application (for reporting only) with Oracle Database 11.2. Data for this Oracle database comes from SQL Server (users insert data into SQL Server first, and finally with the help o...
Categories: DBA Blogs

Max number of processes

Tom Kyte - Wed, 2019-02-06 06:26
hi - we recently had a code change in our production database. Since then we see that every 2 days we have hundreds of inactive sessions in the database and the maximum number of processes is reached. We increased this number a couple of times and every...
Categories: DBA Blogs

Storing JSON file in the database

Tom Kyte - Wed, 2019-02-06 06:26
I created a table with two JSON columns. CREATE TABLE USER.JSON_DOCS ( id RAW(16) NOT NULL, file CLOB, CONSTRAINT json_docs_pk PRIMARY KEY (id), CONSTRAINT json_docs_chk CHECK (file IS JSON) ); Then I tried to store a JSON in...
Categories: DBA Blogs

View that can be used to get last SQL elapsed time

Tom Kyte - Wed, 2019-02-06 06:26
Hello gentlemen, we currently don't have the OEM tool. Is there a data dictionary view that can be queried to provide the last elapsed time for a particular query whose elapsed time is less than 5 seconds? The v$sql view has a column called...
Categories: DBA Blogs

Keeping a column constant

Tom Kyte - Tue, 2019-02-05 12:26
Hi, <code>Oracle Database 12c Enterprise Edition Release - 64bit</code> We have a need to keep certain values in a table constant in a cloned database. We store email addresses in a table, and use these for generating email commu...
Categories: DBA Blogs

Unique Indexes Force Hints To Be “Ignored” Part I (What’s Really Happening)

Richard Foote - Tue, 2019-02-05 03:15
As I was compiling my new “Oracle Diagnostics and Performance Tuning” seminar, I realised there were quite a number of indexing performance issues I haven’t discussed here previously. The following is a classic example of what difference a Unique Index can have over a Non-Unique index, while also covering the classic myth that Oracle sometimes […]
Categories: DBA Blogs

Moving file location of an Oracle database whilst preserving file permissions on Windows

Zed DBA's Oracle Blog - Mon, 2019-02-04 12:16

You may want to move an Oracle database from one location to another on a Windows server, for example from one drive to another. But it's important to keep all the file permissions preserved so that Oracle can still access the files.

Doing this is simple using xcopy with certain switches, as explained in this Microsoft article:


Below is where I needed to move oradata from u: drive to g: drive:

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Windows\system32>xcopy u:\oradata g:\oradata /O /X /E /H /K
Does G:\oradata specify a file name
or directory name on the target
(F = file, D = directory)? d
12 File(s) copied


And fast_recovery_area from v: drive to i: drive:

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Windows\system32>xcopy v:\fast_recovery_area i:\fast_recovery_area /O /X /E /H /K
Does I:\fast_recovery_area specify a file name
or directory name on the target
(F = file, D = directory)? d
4 File(s) copied


Without using this method, you would have to manually set the permissions, which is time consuming and error prone.
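For reference, here is the same copy annotated with what each switch does (Windows cmd; run from an elevated prompt so that ownership and ACL information can actually be copied):

```bat
:: Annotated version of the xcopy command used above:
::   /O  copy file ownership and ACL (permission) information
::   /X  copy file audit settings (implies /O)
::   /E  copy all subdirectories, including empty ones
::   /H  copy hidden and system files as well
::   /K  preserve file attributes (plain xcopy resets read-only)
xcopy u:\oradata g:\oradata /O /X /E /H /K
```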



Zed DBA (Zahid Anwar)


Categories: DBA Blogs


Tom Kyte - Fri, 2019-02-01 16:46
Hi team, I have a scenario where I select a particular set of rows from a table for further processing. We need to ensure that multiple users do not work on the same set of rows. We use SELECT FOR UPDATE SKIP LOCKED to achieve this. E.g. a simp...
Categories: DBA Blogs

Sending e-mail! -- Oracle 8i specific response

Tom Kyte - Fri, 2019-02-01 16:46
How can I automatically send personalized email to clients registered on my portal www.intrainternet.com using the information stored in our Oracle 8i database?
Categories: DBA Blogs

Recover dropped tables with Virtual Access Restore in #Exasol

The Oracle Instructor - Fri, 2019-02-01 04:34

The technique of recovering only certain objects from an ordinary backup is called Virtual Access Restore: you create a database from the backup that contains only the minimum elements needed to access the objects you request, and this database is removed afterwards.

Let’s see an example. This is my initial setup:

EXAoperation Database page

One database in a 2+1 cluster. Yes it’s tiny because it lives on my notebook in VirtualBox. See here how you can get that too.

It uses the data volume v0000 and I took a backup into the archive volume v0002 already.

EXAoperation volumes

I have a schema named RETAIL there with the table SALES:


By mistake, that table gets dropped:

drop table

And I’m on AUTOCOMMIT, otherwise this could be rolled back in Exasol. Virtual Access Restore to the rescue!

First I need another data volume:

second data volume

Notice the size of the new volume: it is smaller than both the overall size of the backup and the size of the “production database”! I did that to prove that space is not much of a concern here.

Then I add a second database to the cluster that uses that volume. The connection port (8564) must be different from the port used by the first database and the DB RAM in total must not exceed the licensed size, which is limited to 4 GB RAM in my case:

second database

I did not start that database because for the restore procedure it has to be down anyway. Clicking on the DB Name and then on the Backups button gets me here:

Foreign database backups

No backup shown yet because I didn’t take any backups with exa_db2. Clicking on Show foreign database backups:

Backup choice

The Expiration date must be empty for a Virtual Access Restore, so I just remove it and click Apply. Then I select the Restore Type as Virtual Access and click Restore:

Virtual Access Restore

This will automatically start the second database:

Two databases in one cluster

I connect to exa_db2 with EXAplus, where the Schema Browser gives me the DDL for the table SALES:

ExaPlus Schema Browser get DDL

I take that to exa_db1 and run it there, which gives me the table back, but empty. Next I create a connection from exa_db1 to exa_db2 and import the table:

create connection exa_db2 
to '' 
user 'sys' identified by 'exasol';

import into retail.sales 
from exa at exa_db2 
table retail.sales;

This took about two minutes:


The second database and then the second data volume can now be dropped. Problem solved!


Categories: DBA Blogs

SQL Profile not used on slightly different query

Bobby Durrett's DBA Blog - Thu, 2019-01-31 15:09

Last week I was asked to help with a performance problem that looked a lot like a problem I fixed in July with a SQL Profile. The query whose plan I fixed back in July was modified by a minor application change over the weekend. A single column that was already in the select clause was added to another part of the select clause. As a result, the SQL_ID for the new query was different from the one for the July query. The SQL Profile from July associated SQL_ID 2w9nb7yvu91g0 with PLAN_HASH_VALUE 1178583502, but since the SQL_ID was now 43r1v8v6fc52q the SQL Profile was no longer used. At first, I thought I would have to redo the work I did in July to create a SQL Profile for the new query. Then I realized that the plan I used in July would work with the new SQL_ID, so all I did was create a SQL Profile relating SQL_ID 43r1v8v6fc52q with PLAN_HASH_VALUE 1178583502 and the problem was solved. This is a database running on the HP-UX Itanium platform. Here is a post from 2013 explaining how to create a SQL Profile: url. I thought it would be helpful to use this post to go over the steps that I went through with the July incident and how I originally generated the good plan. Then I wanted to make some comments about the various ways I come up with good plans for SQL Profiles, by either generating a new better plan or by finding an older existing better one. Lastly, I wanted to talk about how a given good plan can be used for a variety of similar SQL statements.
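The mechanics of relating a SQL_ID to an existing PLAN_HASH_VALUE can be sketched as follows. This sketch assumes Oracle's SQLT-supplied coe_xfr_sql_profile.sql script and that the desired plan is still in the cursor cache or AWR; the linked 2013 post documents the exact procedure:

```sql
-- Hedged sketch (SQLT's coe_xfr_sql_profile.sql is an assumption here):
-- pass the SQL_ID and the PLAN_HASH_VALUE you want to force.
SQL> @coe_xfr_sql_profile.sql 43r1v8v6fc52q 1178583502

-- The script generates a second script that creates the profile; run it:
SQL> @coe_xfr_sql_profile_43r1v8v6fc52q_1178583502.sql
```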

The problem query that I worked on in July, and many of the other SQL statements that I tune with SQL Profiles, have bind variables in their where clauses. Usually the optimizer generates the plan for a query with bind variables once, based on the values of the bind variables at that time. Then, unless the plan is flushed out of the shared pool, the query continues to run on the same plan even if it is horribly inefficient for other bind variable values. There is a feature (adaptive cursor sharing) that will cause the optimizer to run different plans based on the bind variable values in some cases, but the SQL statements that I keep running into do not seem to use that feature. Since the query I worked on in July had bind variables, I assumed that it was a typical case of a plan that worked well for one set of bind variables and that was terribly slow for another set. So, I had to find a set of bind variable values that made the query slow and figure out a better plan for those values. I used my bind2.sql script to extract the bind variable values for the problem query when I was working on the problem in July.
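The bind2.sql script itself isn't shown; a minimal sketch of the same idea, querying V$SQL_BIND_CAPTURE, might look like the following. Note that this view only samples bind values periodically and only for WHERE/HAVING predicates, so the output is representative rather than complete:

```sql
-- Minimal sketch (not the author's actual bind2.sql): list captured
-- bind values for the problem SQL_ID from the cursor cache.
SELECT name, position, datatype_string, value_string, last_captured
FROM   v$sql_bind_capture
WHERE  sql_id = '2w9nb7yvu91g0'
ORDER  BY last_captured, position;
```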

After extracting the bind variables, I used an AWR report to figure out which part of the plan contributed the most to the run time of the query so that I knew which bind variable value was causing the slowdown. Using an AWR report in this way only works if you do not have a bunch of slow SQL statements running at the same time. In this case the problem query 2w9nb7yvu91g0 was dominating the activity on the database with 62.19% of the total elapsed time. If there were a bunch of SQL Statements at the top of this list with similar percent of total values, it might be hard to use the AWR report to find information about this one query.

Since the activity for 2w9nb7yvu91g0 was 87.19% CPU, I looked for the segments with the most logical reads. Logical reads are reads from memory, so they consume CPU and not disk I/O. In the graph below, the segment for the S_ACCNT_POSTN table has 88.60% of the logical reads, so this segment most likely caused the slowness of the query’s plan.
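The segment-level numbers here came from an AWR report; roughly the same information can be pulled from the AWR history tables directly. A sketch, where the snapshot range 100-110 is a placeholder for your interval:

```sql
-- Sketch: top 10 segments by logical reads over an AWR snapshot range
-- (snap_id 100-110 is a placeholder; substitute your own interval).
SELECT *
FROM  (SELECT o.owner, o.object_name,
              SUM(s.logical_reads_delta) AS logical_reads
       FROM   dba_hist_seg_stat s
       JOIN   dba_objects o ON o.object_id = s.obj#
       WHERE  s.snap_id BETWEEN 100 AND 110
       GROUP  BY o.owner, o.object_name
       ORDER  BY logical_reads DESC)
WHERE  rownum <= 10;
```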

I looked at the plan for 2w9nb7yvu91g0 to see where the most heavily read table was used. This would probably be the source of the slow query performance. I found that it was doing a range scan of an index for the S_ACCNT_POSTN table that had the column POSITION_ID as its first column. This made me suspect that the plan was using the wrong index. If an index was used to retrieve many rows from the table that could take a long time. I did a count on all the rows in the table grouping by POSITION_ID and found that most rows had a specific value for that column. I replaced the actual POSITION_ID values with VALUE1, VALUE2, etc. below to hide the real values.

POSITION_ID            CNT
--------------- ----------
VALUE1             2075039
VALUE2               17671
VALUE3                8965
VALUE4                5830
VALUE5                5502
VALUE6                5070
VALUE7                4907
VALUE8                4903
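The counts above came from grouping the table on POSITION_ID; that check can be sketched as follows (table and column names are from the post; the actual values were masked by the author):

```sql
-- Sketch of the skew check described above: row counts per value of
-- the index's leading column (real values masked as VALUE1, VALUE2, ...).
SELECT position_id, COUNT(*) AS cnt
FROM   s_accnt_postn
GROUP  BY position_id
ORDER  BY cnt DESC;
```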

Next, I verified that the query had an equality condition relating a bind variable to the POSITION_ID column of the problem table. This made me suspect that the plan in the shared pool was generated with a bind variable value for POSITION_ID other than VALUE1. So, that plan would work well for whatever value was used to create it: POSITION_ID would be equal to that value for a small percentage of the rows in the table. But running the query in SQL*Plus with POSITION_ID='VALUE1' caused the optimizer to choose a plan that made sense given that this condition was true for most of the rows in the table. The PLAN_HASH_VALUE for the new plan was 1178583502.

I tested 1178583502 against a variety of possible bind variable values by using an outline hint in SQL*Plus scripts to force that plan no matter which values I tested against. I extracted the outline hint by running the query with POSITION_ID='VALUE1' and using this dbms_xplan call:

select * from table(dbms_xplan.display_cursor(null,null,'OUTLINE'));

Then I just added the outline hint to a copy of the same SQL*Plus script and tried various combinations of bind variable values as constants in the where clause just as I had tried VALUE1 for POSITION_ID. I used the values that I had extracted using bind2.sql. After verifying that the new plan worked with a variety of possible bind variable values, I used a SQL Profile to force 2w9nb7yvu91g0 to use 1178583502 and the problem was resolved.

I have just described how I created the original July SQL Profile by running a version of the problem query with the bind variables replaced by constants that I knew would cause the original plan to run for a long time. The optimizer chose a better plan for this set of constants than the one locked into the shared pool for the original query. I used the PLAN_HASH_VALUE for this plan to create a SQL Profile for the July query. This is similar to an approach that I documented in two earlier blog posts. In 2014 I talked about using a hint to get a faster plan in memory so I could use it in a SQL Profile. In 2017 I suggested using an outline hint in the same way. In both of those cases I ran the problem query with hints and verified that it was faster with the hints. Then I used a SQL Profile to force the better PLAN_HASH_VALUE onto the problem query. So, in all these cases the key is to generate a better plan in any way possible so that it is in memory, and then create a SQL Profile based on it. A lot of times we have queries that have run on a better plan in the past and we just apply a SQL Profile that forces the better plan that is already in the system. My December 2018 post documents this type of situation. But the 2014 and 2017 blog posts that I mentioned above and the July 2018 example that I just described are all similar in that we had to come up with a new plan that the query had never used and then force it onto the SQL statement using a SQL Profile.

The incidents in January and July and the cases where I added hints all lead me to wonder how different one SQL statement can be from another and still share the same plan. The problem last week showed that two queries with slightly different select clauses could still use the same plan. The other cases show that you can add hints or run the statement with bind variables replaced with constants. In the January case I did not have to go back through the analysis that I did in July because I could quickly force the existing plan from the July query onto the January one. The January problem also shows the limits of SQL Profiles. The slightest change to a SQL statement causes a SQL Profile to be ignored, even though the plan would still work for the new SQL statement. But in the January case the ability to use the same plan for slightly different queries made it easy to create a new SQL Profile.


Categories: DBA Blogs

Slides From January 2019 AZORA Meeting

Bobby Durrett's DBA Blog - Wed, 2019-01-30 10:20

Here are slides from the January 2019 AZORA Meeting with Charles Kim and Viscosity:

Oracle Database 12.2 New Features

Oracle Database 18c New Features

Thank you to Charles and Viscosity for making our meeting a big success, and thanks for sharing these slides.


Categories: DBA Blogs

EMEA A&C Partners Cloud Security Workshop


We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !!

Richard Foote - Wed, 2019-01-30 05:00
I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019. The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin). Because of venue […]
Categories: DBA Blogs



Subscribe to Oracle FAQ aggregator - DBA Blogs