Feed aggregator

reports (cvucheckreport_*.xml) from ora.cvu resource consuming excessive disk space

Syed Jaffar - Sun, 2016-10-09 06:34
On our Exadata database nodes, we noticed that the Clusterware ora.cvu resource was generating XML reports every 6 hours and occupying a large amount of space on the /u01 filesystem. This likely applies to non-Exadata systems too, as long as you are running Clusterware.

Upon looking into the MOS note "reports from ora.cvu resource using excessive disk space (Doc ID 1964482.1)", it was learned that this is expected behavior: ora.cvu invokes the Clusterware Health Check (CHC) every 6 hours and stores the XML reports under the $GI_HOME/cvu/baseline or $ORACLE_BASE/crsdata/@global/cvu/baseline location. However, CVU doesn't clean up these reports periodically, which ultimately fills the disk.

As per the note, the relevant bugs are fixed as follows (the exact releases and PSUs are listed in the note):

Bug 18143707 is fixed in
Bug 19703199 is fixed in PSU
Bug 20177779 is fixed in PSU

A quick workaround is to delete the XML files manually to release the space.


reports from ora.cvu resource using excessive disk space (Doc ID 1964482.1)
ora.cvu Resource Reports not Purged (Doc ID 2054765.1)
Grid Infrastructure Patch Set Updates - List of Fixes in each GI PSU (Doc ID 1928853.1)

sql query related question

Tom Kyte - Sun, 2016-10-09 05:26
I want to compare this month's data (e.g. Sep-2016) with the same month of the previous year (e.g. Sep-2015) on the same page in SQL*Plus, using the date column (DAL_ARR_DATE_TIME) of my table. Please give me a solution for the same. Thanks in advance. my table structur...
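
One way to approach this (a minimal sketch only; MY_TABLE and the AMOUNT measure are assumed names, only DAL_ARR_DATE_TIME comes from the question) is to restrict the query to the two months and group by month:

SELECT TO_CHAR(dal_arr_date_time, 'MON-YYYY') AS period,
       COUNT(*)                               AS row_count,
       SUM(amount)                            AS total_amount
FROM   my_table
WHERE  TRUNC(dal_arr_date_time, 'MM') IN (DATE '2016-09-01', DATE '2015-09-01')
GROUP  BY TO_CHAR(dal_arr_date_time, 'MON-YYYY')
ORDER  BY MIN(dal_arr_date_time);
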
Categories: DBA Blogs

Implicit rollback on error in NDS and dbms_sql

Tom Kyte - Sun, 2016-10-09 05:26
Good day. Could you please explain why the following script produces this output: "in foo_proc Static call - 1, in foo_proc NDS - 0, in foo_proc dbms_sql - 0"? Why is the inserted row implicitly rolled back before th...
Categories: DBA Blogs

Physical Standby database - user sync

Tom Kyte - Sun, 2016-10-09 05:26
Will new users created on the primary database be synced to the standby database automatically?
Categories: DBA Blogs

non-cdb architecture on 12c

Tom Kyte - Sun, 2016-10-09 05:26
Tom. If we upgrade our 11g database to 12c, can we still use the non-cdb model? With 11g going into extended support very soon we would like to upgrade to 12c; however, we are not familiar with PDBs yet, so we would prefer to mo...
Categories: DBA Blogs

ORA-01156 When Adding Standby Redo Log in Dataguard Configuration

Pakistan's First Oracle Blog - Sat, 2016-10-08 20:20
Database version =

If you get the following error while adding a standby redo log (SRL) to a standby database in a Data Guard configuration:

SQL> alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M;
alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files

Then it means that you first have to stop redo apply, add the SRL, and then restart redo apply. The best way to do this is from dgmgrl, like this:

DGMGRL> connect /

DGMGRL> edit database 'test' set state='APPLY-OFF';

SQL> alter database add standby logfile group 3 '/logs/test/sbyredo3a.log' size 500M;

Database altered.

DGMGRL> edit database 'test' set state='APPLY-ON';


Categories: DBA Blogs


Rittman Mead Consulting - Sat, 2016-10-08 13:20

I've just attended my first user group in Poland, the very-excellent POUG. This was the first international version of the conference, having been run previously just within Poland. Based on these two days, I would say it was a resounding success! There was a great atmosphere there, really engaged and enthusiastic. The size of the event, friendliness, and fun made it a very welcoming one, and reminded me a lot of my first ever conference that I attended (back in 2010!), the Rittman Mead BI Forum.

I travelled out to Warsaw on the Thursday, and attended the speakers dinner. It's always nice to see familiar faces and meet new ones - as well as enjoy some jolly good food. The next morning I went for a run at the nearby park, enjoying the sunrise over the water.

Good morning Warsaw! #POUG https://t.co/aHAuYonlhE pic.twitter.com/LYgU50bIei

— Robin Moffatt (@rmoff) October 7, 2016

The conference had two tracks, focussing primarily on the database but with some BI content too, as well as a talk about property graphs. The first session I attended was also one of my favourites. It was "DBA, Heal Thyself: Five Diseases of IT Organizations and How to Cure Them", presented with great humour and energy by Jim Czuprynski. Some of the uncomfortable truths about mistakes made in the field of IT were laid bare, with no prisoners taken! Whilst it was based on the database, much of it was applicable to OBIEE. Things like reliance on bespoke scripts, avoidance of new features, and general ignorance in the field were covered. You can find an article on the topic from Jim here.

After Jim's session was Hans Viehmann talking about Property Graphs. I wrote an article about this that was published on OTN just last week, and it was great to get the chance to meet Hans and hear him speak.

I skipped the next session to do a bit of slide polishing and geek chat with Christian Berg, before getting lunch - which was really good:

Just a light lunch at #POUG pic.twitter.com/B3Bntf1JCU

— Robin Moffatt (@rmoff) October 7, 2016

After lunch there were OBIEE presentations in both tracks, by Christian Berg and Kiran Tailor. I sat in on Christian's "Fifty shades of #fail", which is a fun walk through some of the many ways that OBIEE gets done wrong.

My talk, "(Still) No Silver Bullets - OBIEE 12c Performance in the Real World" was the last of the day. I've given this talk quite a few times now, but still enjoy delivering it each time. The topic's one I've spent so much time working on and find so interesting, that it never gets stale! You can find the slides here and set of related links here.

.@rmoff - OBIEE performance in the real world - where is it slow? - #POUG pic.twitter.com/IanRoHBX2G

— Kiran Tailor (@KiranTailorUK) October 7, 2016

The day finished with the POUG After Party, which was at a bar in the center of Warsaw. Good beer, good food, good music - and plenty of geek talk! I really have to take my hat off to the organisers of POUG, they did a great job.

#POUG #bagpipes pic.twitter.com/ZC8WGwq8Z6

— Robin Moffatt (@rmoff) October 7, 2016

The second day of POUG brought more good presentations. I got to see Neil Chandler speak about execution plans and how they can vary - and how to ensure they don't unless you want them to. It was really interesting, and took me back a few years to when I last looked in-depth at this kind of stuff. After Neil was Jim again, talking about analytic functions. Most of these I knew about, but one that was new to me (and I'll definitely be using) was the PERCENT syntax for FETCH FIRST - very neat.
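
For illustration, a quick sketch of that row-limiting clause (the EMP table here is just a stand-in):

-- Return the top 10 percent of rows by salary (12c row-limiting clause).
SELECT empno, ename, sal
FROM   emp
ORDER  BY sal DESC
FETCH FIRST 10 PERCENT ROWS ONLY;
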

Great stuff from @JimTheWhyGuy at #POUG. FETCH FIRST I knew… but now I know you can use PERCENT with it too. Cool! pic.twitter.com/f10yoz8yge

— Robin Moffatt (@rmoff) October 8, 2016

The audience at POUG seemed to be predominantly DBAs and database developers, and this kind of talk is just great for spreading awareness of new functionality that is going to make people's jobs easier, and their code run faster. The final talk of the morning was from Martin Widlake, presenting a great discussion about efficient and maintainable bulk processing with SQL and PL/SQL. With his very accessible and engaging presentation style, Martin's talk was an extremely pragmatic and valuable one. Everyone loves a deep-dive geekout on system internals (don't they??), but arguably the most value to the widest section of the audience comes in learning, or being reminded of, how to code and design systems well.

Even without lots of BI content, I found the conference very useful. Whilst Oracle CBO internals may not be my day to day work, many of the topics discussed in a database context can easily be transplanted to the BI world. Performance is performance. Diagnostic approaches are tool-agnostic. As well as the presentations, the opportunity to exchange ideas and war-stories with other experts in the industry (over a beer, usually…) is the kind of thing you just don't get from reading the manuals or a bunch of PDFs.

So that was POUG, and all too soon it was time to return home. Bravo to the organisers of POUG, and here's hoping they run the conference again next year!

Well that was it for #poug. So long @POUG_ORG and thanks for all the fish! pic.twitter.com/mv9Es95yfr

— Christian Berg (@Nephentur) October 8, 2016
Categories: BI & Warehousing

How to speed up an Insert with SELECT

Tom Kyte - Sat, 2016-10-08 11:06
Hi Tom, good morning Sir. I am trying to speed up an Insert statement whose SELECT reads the entire source table, according to this example: var_select_a_insert := 'insert into lwm_usrappods.ITEM_LOC_SOH select to_number...
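
One technique commonly tried for this pattern is a direct-path insert; a hedged sketch follows (the staging table and column list are assumptions, and APPEND/parallel DML have logging and locking side effects to weigh first):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO lwm_usrappods.item_loc_soh t
SELECT item, loc, stock_on_hand          -- assumed column list
FROM   lwm_usrappods.item_loc_soh_stage; -- assumed staging/source table

COMMIT;
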
Categories: DBA Blogs

High number of misses in library cache during execute - where to dig for explanations?

Tom Kyte - Sat, 2016-10-08 11:06
I'm relatively familiar with using SQL_TRACE and tkprof for tuning, but I recently spotted something in a tkprof report for one of our customers that raised my suspicions. There is a lot of SQL in the file where the number of library cache executi...
Categories: DBA Blogs

Session wait time of a transaction

Tom Kyte - Sat, 2016-10-08 11:06
Dear Tom, please help me with the below. Session 1: update table set field = 'A' where field2 = 'B'; (not committed or rolled back). Session 2: update table set field = 'C' where field2 = 'B'; The second session will be waiting for sessio...
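
To observe that wait from a third session, a quick sketch against V$SESSION (standard columns; nothing here beyond the lock scenario described above):

-- Sessions currently waiting on a row lock, who is blocking them, and how long they have waited.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  event = 'enq: TX - row lock contention';
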
Categories: DBA Blogs

How to observe current values of a running plsql code.

Tom Kyte - Sat, 2016-10-08 11:06
Hello, I can observe the current value of a bind variable, using v$sql_bind_capture. I don't know how to do it, when having no bind variable. Having a simple code like this below, I would like to see a VALUE of rec.column1 being passed to my_functi...
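
One common way to expose such values (a sketch only; SOME_TABLE, COLUMN1 and MY_FUNCTION echo the names in the question and are otherwise assumptions) is to instrument the loop with DBMS_APPLICATION_INFO and watch it from another session:

DECLARE
  l_result NUMBER;   -- assumed return type of my_function
BEGIN
  FOR rec IN (SELECT column1 FROM some_table) LOOP
    -- publish the value currently being processed so another session can see it
    dbms_application_info.set_client_info('processing ' || rec.column1);
    l_result := my_function(rec.column1);
  END LOOP;
END;
/

-- From a second session, read the published value:
SELECT sid, client_info
FROM   v$session
WHERE  client_info LIKE 'processing%';
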
Categories: DBA Blogs

BULK Delete

Tom Kyte - Sat, 2016-10-08 11:06
Hi Tom, this is my first question to you and I hope to receive a positive response :) Straight to the question: I have a table named trail_log. This is a huge table, growing by approximately 12 GB daily, currently 76216160 rows. Import...
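
One widely used pattern for purging such a table (a sketch only; the LOG_DATE column and 90-day retention are assumptions, not from the question) is to delete in limited batches:

-- Delete in batches of 10,000 rows, committing between batches to keep undo usage bounded.
BEGIN
  LOOP
    DELETE FROM trail_log
    WHERE  log_date < SYSDATE - 90
    AND    ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/

For ongoing growth at this rate, date-based partitioning with partition drops is usually the better long-term answer than repeated deletes.
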
Categories: DBA Blogs

How to determine if db objects are still used?

Tom Kyte - Sat, 2016-10-08 11:06
I am working on a database that has nearly 3,900 tables, and 450 views, 90 packages, 740 procedures, and 325 functions. Some of those are called by apps and report servers that I do not have access to all the source code. I have a feeling that m...
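
As one starting point (a sketch only; APP_OWNER is an assumed schema name, and absence from cached/AWR plans is evidence, not proof, that an object is unused):

-- Tables owned by the application schema that have never appeared in an AWR-captured execution plan.
SELECT o.owner, o.object_name
FROM   dba_objects o
WHERE  o.object_type = 'TABLE'
AND    o.owner = 'APP_OWNER'
AND    NOT EXISTS (SELECT 1
                   FROM   dba_hist_sql_plan p
                   WHERE  p.object_owner = o.owner
                   AND    p.object_name  = o.object_name);
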
Categories: DBA Blogs

UNDO tablespace usage

Tom Kyte - Sat, 2016-10-08 11:06
Hi Team, I want a customized query that shows which SQL statement is consuming the most undo tablespace, with all the details. Request you to please help me on this. Regards, Sridhar
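
A typical starting point (sketch only; these are standard dynamic performance views, and the SQL text join only resolves statements still in the shared pool):

-- Active transactions ordered by undo consumption, with the owning session and its current SQL.
SELECT s.sid, s.username, t.used_ublk, t.used_urec, q.sql_text
FROM   v$transaction t
       JOIN v$session s ON s.taddr = t.addr
       LEFT JOIN v$sql q ON q.sql_id = s.sql_id AND q.child_number = 0
ORDER  BY t.used_ublk DESC;
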
Categories: DBA Blogs

Copy millions of blobs from one table to the other one

Tom Kyte - Sat, 2016-10-08 11:06
Hello colleagues, I've got a "small" problem, I have to copy millions of blobs from one table to another one. Target table: PAGE ---------------------- ID NOT NULL NUMBER(19) DOCUPLOADCODE VARCHAR2(255) MIMETYPE ...
Categories: DBA Blogs

macOS Sierra (OS X 10.12)

Tim Hall - Sat, 2016-10-08 08:15

In the 80s and 90s the Sierra was an incredibly popular, but boring saloon car from Ford. Today I upgraded to macOS Sierra, which will eventually be an incredibly popular (for Mac owners), but boring operating system from Apple.

So what’s new? You get Siri! Oh yes, you also get Siri! There is also Siri! And finally, for good measure you get Siri!

If you own multiple devices, there is some fluff, like shared clipboard, but I don’t own multiple Apple devices, so basically this OS is what I had before with a sprinkling of Siri.

What was the experience of upgrading like? An absolute nightmare! It said it was going to take about 18 minutes, but it seemed to hang for a couple of hours. I ended up doing about 5 hard reboots before it actually came up. After all that hassle and wasted time, I got El Crapitan + Siri. Amazing. Way to innovate Apple!

Of course, Apple fanboys will love it and it will change their lives… Whatever!




SQL Server 2016: TRUNCATE PARTITIONS with sliding Windows scenarios

Yann Neuhaus - Sat, 2016-10-08 07:03

Some time ago, I had to deal with a new partitioning scenario that included sliding window stuff, mainly for archiving purposes. Given the customer context, I used some management scripts that this time included dropping the oldest partition: we didn't care about data older than 2 years. Usually in this case, I use a method that consists of first switching the oldest partition to a staging table and then truncating it. Finally, we may safely merge the oldest partitions and avoid any data movement. At first glance, it seems a complex process just to drop data, but until SQL Server 2014 there was no better way to minimize the logging of this operation.


[Figure: partitioning overview]

This week, I had the opportunity to work with SQL Server 2016 and learn about the new partitioning improvements. As it turns out, the only thing I could find from my internet research concerned the new TRUNCATE TABLE ... WITH (PARTITIONS) command.

My first feeling was that it is not a very exciting feature, in contrast to previous versions that delivered a mixture of both performance and maintenance improvements in this area. But after investigating further, I was able to point out some advantages of this command. Let's go back to my sliding window scenario. In order to drop data from my oldest partition I have to:

  • Switch the oldest partition to a staging table
  • Drop data from this staging table with TRUNCATE command in order to minimize transaction logging
  • Execute MERGE command in order to slide all partitions to the left side


What about introducing the new TRUNCATE TABLE command in this scenario?

In fact, it simplifies the above process by replacing steps 1 and 2 with a TRUNCATE at the partition level. The new scenario becomes:

  • TRUNCATE TABLE at the corresponding partition
  • Execute MERGE command in order to slide all partitions to the left


The only instruction I need to use is as follows:

TRUNCATE TABLE [dbo].[FactOnlineSales] WITH (PARTITIONS (2));
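
And the follow-up MERGE from step 2, as a hedged sketch (the partition function name PF_FactOnlineSales and the boundary value '20080101' are assumptions, not from the original post):

-- Merge away the now-empty oldest range so the window slides without data movement.
ALTER PARTITION FUNCTION PF_FactOnlineSales()   -- assumed partition function name
MERGE RANGE ('20080101');                       -- assumed boundary value of the emptied partition
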


What about locking?

As expected, SQL Server uses a lock granularity hierarchy with a mixture of Sch-S, Sch-M and X locks depending on the locked resource. You may see two allocation units in my case because I'm using a partitioned clustered columnstore index in this demo; as a reminder, compressed columnstore segments are stored as LOB data.


Resource type     Resource subtype  Resource description                            Associated entity                                     Lock request mode
OBJECT                                                                              FactOnlineSales                                       Sch-M
METADATA          DATA_SPACE        data_space_id = 3                               Columnstore2007 (filegroup that relies on part. 2)    Sch-M
HOBT                                                                                Partition nb 2                                        Sch-M
ALLOCATION_UNIT                     Related to data_space_id = 3, state = DROPPED   LOB_DATA                                              X
ALLOCATION_UNIT                     Related to data_space_id = 3                    IN_ROW_DATA                                           X
KEY                                                                                 Records in partition nb 2                             X



What about logging?

Well, if I refer to the corresponding records in the transaction log file, the partition-level TRUNCATE seems to act like a normal TRUNCATE operation. First, we notice a few records related to marking the concerned structures for drop, and then the deferred drop mechanism comes into play and deallocates them.


[Figures: transaction log records showing the structures marked for drop and then deallocated by the deferred drop mechanism]

Happy partitioning!






The article SQL Server 2016: TRUNCATE PARTITIONS with sliding Windows scenarios appeared first on Blog dbi services.

Learn and Master the concepts of Oracle 12c RAC and EM 13c from Oracle Experts

Syed Jaffar - Sat, 2016-10-08 05:30
Learn and master the concepts of Oracle RAC 12c and EM 13c from experts in an online, instructor-led course.

Enroll now and avail a 20% discount upfront, exclusively for my network.

Coupon code, exclusively for my network: SJRAC20

During the last 2 days of the course, I will be sharing the following:
Day 1 : RAC best practices in real-world scenarios:
- Installation, Network, Storage, Application Design, Backup & Recovery
- RAC internals

Day 2 : Exclusively taking your questions and answering them.

Do forward this to your network and help your friends and colleagues master Oracle RAC and EM 13c concepts.

Course Objective:

To master Oracle Real Application Clusters 12c. The course is for Oracle DBAs or Apps DBAs looking to upgrade their skills.

Expectations and Goals
  • Install, Configure & Administer 12c RAC
  • Install & Configure 13c Cloud Control
  • Upgrade Database/RAC from 11g to 12c
  • Install, Configure & Administer 12c Grid Infrastructure
Course Agenda:

  • RAC Introduction
  • RAC Architecture
  • Install and configure 12c RAC
  • Administer GI & RAC
  • Monitoring and Troubleshooting RAC
  • Upgrade RAC and Database
  • Migrate DB to ASM and RAC
  • Install, Configure EM 13c
  • Backup, Recovery & Cloning
  • Performance Tuning

The new batch starts on Friday, 14th October 2016 at 07:00 PM IST.

JDBC executeBatch looks odd in AWR

Bobby Durrett's DBA Blog - Fri, 2016-10-07 19:18

A project team asked me to look at the performance of an Oracle database application that does a bunch of inserts into a table. But when I started looking at the AWR data for the insert, the data confused me.

The SQL by elapsed time section looked like this:


So, 1514 executions of an insert with 1 second of elapsed time each, almost all of which was CPU. But then I looked at the SQL text:


Hmm. It is a simple insert values statement. Usually this means it is inserting one row. But 1 second is a lot of CPU time to insert a row. So, I used my sqlstat.sql script to query DBA_HIST_SQLSTAT about this sql_id.

select ss.sql_id,
       ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
       CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
       IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
       CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
       APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
       CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
       BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
       DISK_READS_DELTA/executions_delta "Average disk reads",
       ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from   DBA_HIST_SQLSTAT ss, DBA_HIST_SNAPSHOT sn
where  ss.sql_id = 'fxtt03b43z4vc'
and    ss.snap_id = sn.snap_id
and    ss.instance_number = sn.instance_number
and    executions_delta > 0
order by ss.snap_id, ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
fxtt03b43z4vc               0 29-SEP-16 PM              441         1100.68922     1093.06512     .32522449                  0                      0             .000492063           60930.449         .047619048             4992.20181
fxtt03b43z4vc               0 29-SEP-16 PM               91         1069.36489     1069.00231    .058494505                  0                      0                      0          56606.3846         .010989011                   5000
fxtt03b43z4vc               0 29-SEP-16 PM               75         1055.05561     1053.73324        .00172                  0                      0                      0          55667.1333                  0             4986.86667
fxtt03b43z4vc               0 29-SEP-16 PM              212         1048.44043     1047.14276    .073080189                  0                      0             .005287736          58434.6934         .004716981             4949.35377

Again it was about 1 second of CPU and elapsed time, but almost 5000 rows per execution. This seemed weird. How can a one-row insert affect 5000 rows?

I found an entry in Oracle’s support site about AWR sometimes getting corrupt with inserts into tables with blobs so I thought that might be the case here. But then the dev team told me they were using some sort of app that did inserts in batches of 1000 rows each. I asked for the source code. Fortunately, and this was very cool, the app is open source and I was able to look at the Java code on GitHub. It was using executeBatch in JDBC to run a bunch of inserts at once. I guess you load up a bunch of bind variable values in a batch and execute them all at once. Makes sense, but it looked weird in the AWR.

Here is the Java test program that I hacked together to test this phenomenon:

import java.sql.*;
import oracle.jdbc.*;
import oracle.jdbc.pool.OracleDataSource;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.*;

public class InsertMil5k {
  public static void main (String args [])
       throws SQLException
  {
    OracleDataSource ods = new OracleDataSource();
    // placeholder connection details - point this at your own test database
    ods.setURL("jdbc:oracle:thin:@//dbhost:1521/orcl");
    OracleConnection conn =
      (OracleConnection)(ods.getConnection ());
    conn.setAutoCommit(false);

    PreparedStatement stmt = conn.prepareStatement("insert into test values (:1,:2,:3,:4)");

    /* dummy 255-byte payload for the BLOB column */
    byte [] bytes = new byte[255];
    int k;
    for (k=0;k<255;k++)
      bytes[k] = (byte)k;

/* loop 200 times. Make sure i is unique */
    int i,j;
    for (j=0;j < 200; j++) {

/* load 5000 sets of bind variables */
      for (i=j*5000;i < (j*5000)+5000; i++) {
        stmt.setString(1, Integer.toString(i));
        stmt.setInt(2, 1);
        stmt.setBinaryStream(3, new ByteArrayInputStream(bytes), bytes.length);
        stmt.setLong(4, 1);
        stmt.addBatch();          // queue this row in the current batch
      }

      stmt.executeBatch();        // one execution in AWR, 5000 rows processed
      conn.commit();
    }

    stmt.close();
    conn.close();
  }
}



I started with one of the Oracle JDBC samples and grabbed the batch features from the github site. I just made up some random data which wasn’t super realistic. It took me a while to realize that they were actually, at times, doing 5000 row batches. The other AWR entries had 1000 rows per execution so that finally makes sense with what the dev team told me.

I guess the lesson here is that the AWR records each call to executeBatch as an execution but the number of rows is the size of the batch. So, that explains why a simple one row insert values statement showed up as 5000 rows per execution.


Categories: DBA Blogs

PL/SQL Code Coverage.

Tom Kyte - Fri, 2016-10-07 16:46
Hi, I am implementing a system for automated unit testing of our plsql, and would like to include statistics on code coverage, especially to highlight where code has not been tested, e.g. "your tests only cover 75% of the code". I have been lookin...
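
One common approach is DBMS_PROFILER, which records line-level execution counts; a hedged sketch follows (MY_PACKAGE.MY_TEST is a hypothetical unit under test, and the profiler tables must first be created with Oracle's proftab.sql script):

-- Run the unit tests under the profiler.
BEGIN
  DBMS_PROFILER.start_profiler('coverage run 1');
  my_package.my_test;
  DBMS_PROFILER.stop_profiler;
END;
/

-- Lines that were never executed during the latest run.
SELECT u.unit_owner, u.unit_name, d.line#, d.total_occur
FROM   plsql_profiler_units u
       JOIN plsql_profiler_data d
         ON d.runid = u.runid AND d.unit_number = u.unit_number
WHERE  u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
AND    d.total_occur = 0;
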
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator