DBA Blogs

NOFILENAMECHECK Parameter Causes DUPLICATE DATABASE To Ignore DB_CREATE_FILE_DEST?

Don Seiler - Wed, 2014-09-10 23:06
Last week I was creating a new testing database from a backup of our demo database, both under Oracle 11.2.0.3. I grabbed one of my old scripts to handle the duplication, which looked similar to this:

connect auxiliary /;
run {

        duplicate database to testdb
                backup location '$BACKUPDIR'
                nofilenamecheck;

}

One important difference between the demo database and this new test database is that the original demo database lives on a filesystem and the new database was to go onto ASM on a different host. I had copied the syntax from the old script and kicked it off. I made sure that db_create_file_dest was set to the ASM diskgroup. However, the restore failed as RMAN tried to write to the filesystem path used by the original demo database, which didn't exist on this host, instead of the ASM diskgroup.



Definitely puzzling to me. I double-checked the documentation for NOFILENAMECHECK, which only indicated that it affects the check for matching filenames, but didn't state that it would cause the issue I was seeing. The summary says that it prevents RMAN from checking for a name collision, a check RMAN does to avoid overwriting existing files on the same host.

However what I found is that having NOFILENAMECHECK in my command resulted in RMAN restoring the files to the original path, ignoring my db_create_file_dest specifications. When I removed the NOFILENAMECHECK specification from the RMAN command, the files were restored to the ASM diskgroup as intended.

MOS Support documents 1375864.1 and 874352.1 suggest using the DB_FILE_NAME_CONVERT parameter, but I found this was not necessary when I set the DB_CREATE_FILE_DEST parameter, as long as I didn't use NOFILENAMECHECK. I couldn't find anything in MOS about NOFILENAMECHECK forcing the restore to use a certain location, though.
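For reference, here is a minimal sketch of the approach that worked for me. The diskgroup name is hypothetical; the key points are simply that db_create_file_dest is set for the auxiliary instance and that NOFILENAMECHECK is absent:

# in the auxiliary instance's init.ora (assumption: +DATA is your ASM diskgroup)
#   db_create_file_dest='+DATA'

connect auxiliary /;
run {

        duplicate database to testdb
                backup location '$BACKUPDIR';

}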
Categories: DBA Blogs

Getting Started with Windows VDI by Andrew Fryer

Surachart Opun - Wed, 2014-09-10 06:55
Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.
VDI is a technology that offers many benefits:
• Efficient use of CPU and memory resources
• Reduced desktop downtime and increased availability
• Patches and upgrades performed in data center
• New users can be up and running quickly
• Data and applications reside in secure data centers
• Centralized management reduces operational expenses
Reference
Additionally, VDI can be deployed with Microsoft Windows; I suggest learning What's New in VDI for Windows Server 2012 R2 and 8.1.
Anyway, that's enough background before mentioning a book written by Andrew Fryer, Getting Started with Windows VDI. This book guides readers through building VDI with Windows Server 2012 R2 and Windows 8.1, and each chapter is quick and easy to follow.

What Readers Will Learn:
  • Explore the various server roles and features that provide Microsoft's VDI solution
  • Virtualize desktops and the other infrastructure servers required for VDI using server virtualization in Windows Server Hyper-V
  • Build high availability clusters for VDI with techniques such as failover clustering and load balancing
  • Provide secure VDI to remote users over the Internet
  • Use Microsoft's Deployment Toolkit and Windows Server Update Services to automate the creation and maintenance of virtual desktops
  • Carry out performance tuning and monitoring
  • Understand the complexities of VDI licensing irrespective of the VDI solution you have opted for
  • Deploy PowerShell to automate all of the above techniques

Categories: DBA Blogs

API Integration with Zapier (Gmail to Salesforce)

Kubilay Çilkara - Sun, 2014-09-07 12:42
Recently I attended a training session with +General Assembly in London titled, What and Why of APIs. It was a training session focusing on the usage of APIs and it was not technical at all. I find this type of training session very useful, as it describes the concepts and controlling ideas behind technologies rather than the hands-on, involved implementation details.

What grabbed my attention, among the many different and very useful public and private API tools, 'thingies', introduced in this training session, was Zapier - www.zapier.com

Zapier looked to me like a platform for integrating APIs with clicks rather than code, with declarative programming. It is a way of automating the internet. What you get when you sign up with them is the ability to use 'Zaps', or create your own Zaps. Zaps are integrations of endpoints, like connecting Foursquare to Facebook or Gmail to Salesforce and syncing them. One of the Zaps available does exactly that: it connects your Gmail emails to Salesforce using the Gmail and Salesforce APIs and lets you sync between them. Not only that, but Zapier Zaps also put triggers on the endpoints which allow you to sync only when certain conditions are true. For example, the Gmail to Salesforce Zap can push your email into a Salesforce Lead only when an email with a certain subject arrives in your Gmail inbox. This is what the Zapier platform looks like:


An individual Zap looks like this and is nothing more than a mapping of the Endpoints with some trigger actions and filters.


The environment is self-documenting and very easy to use. All you do is drag and drop Gmail fields and match them with the Lead, or other custom object, Salesforce fields. Then you configure the sync to happen only under certain conditions/filters. Really easy to set up. The free version runs the sync every 5 hours, which is good enough for me. The paid version runs the sync every 5 minutes.

There is even the capability to track historical runs and trigger a manual run via the Zap menu. See below the 'Run' command to run a Zap whenever you like.


In my case I used the tool to create a Zap to do exactly what I just described. My Zap creates a Salesforce Lead automatically in my Salesforce org whenever a 'special' email is sent to me. Great automation!

This is a taste of the 'platform cloud' tools out there for doing API-to-API and app-to-app integrations with clicks, not code. With tools like Zapier, all you really need is imagination!


Categories: DBA Blogs

Watch Oracle DB Elapsed Time and Wall Time With Parallel Query

In my recent postings I wrote that, when using Oracle Database parallel query, a SQL statement's wall time should be equal to its elapsed time divided by the number of parallel query slaves, plus some overhead.

That may seem correct, but is it really true? To check I ran an experiment and posted the results here. The results are both obvious and illuminating.

If you don't want to read but would rather sit on the couch, have a beer and watch TV, you're in luck! I took a clip from my Tuning Oracle Using An AWR Report online video seminar and put it on YouTube. You can watch the video clip on YouTube HERE or simply click on the movie below.


The Math, For Review Purposes
In my previous recent postings I detailed the key time parameters; DB Time, DB CPU, non-idle wait time, elapsed time, parallelism and effective parallelism. To save you some clicking, the key parameters and their relationships are shown below.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = ( Elapsed Time / Parallelism ) + Parallelism Overhead

Wall Time = Elapsed Time / Effective Parallelism


Test Results: When Oracle Parallel Query was NOT involved.
If you want to see my notes, snippets, etc. they can be found in this text file HERE.

Here is the non-parallel SQL statement.

select /*+ FULL(big2) NOPARALLEL (big2) */ count(*)
into   i_var
from   big2 
where  rownum < 9000000

When the SQL statement was running, I was monitoring the session using my Realtime Session Sampler OSM tool, rss.sql. Since I knew the server process session ID, and I wanted to sample every second and see everything just for this session, this is the rss.sql syntax:
SQL>@rss.sql 16 16 827 827 % % 1
For details on any OSM tool syntax, run the OSM menu script, osmi.sql. You can download my OSM Toolkit HERE.

The rss.sql tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. The columns are sample number, sample time, session SID, session serial#, Oracle username, CPU or WAIT, SQL_ID, OraPub wait category, wait event, [p1,p2,p3].


We can see the session is consuming CPU and waiting. When waiting, the wait event is "direct path read", which is an asynchronous (we hope) block read request to the IO subsystem that will NOT be buffered in the Oracle buffer cache.

Now for the timing results, which are shown in the table below. I took five samples. It's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related to ONLY server process SID 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) are related to the entire SQL_ID statement execution.

Here are the "no parallel" experimental results.
SQL> select * from op_results;

 SAMPLE_NO     WALL_S  EL_VSQL_S CPU_VSQL_S WAIT_TIME_S   DB_CPU_S  DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
         1  35.480252  35.470015   9.764407       24.97   9.428506  34.152294
         2  35.670021  35.659748   9.778554       25.15   9.774984  35.541861
         3  35.749926  35.739473   9.774375       25.12    9.31266  34.126285
         4  35.868076  35.857752   9.772321       25.32   9.345398  34.273479
         5  36.193062   36.18378   9.712962       25.46   9.548465  35.499693
Let's check the math. For simplicity and clarity, please allow me to round and use only sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
35.5 = 9.5 + 25.5 = 35.0
The DB Time is pretty close (35.5 vs 35.0). Close enough to demonstrate the time statistic relationships.
Elapsed Time (EL_VSQL_S) = DB_TIME_S
35.5 = 34.2
The Elapsed Time is off by around 4% (35.5 vs 34.2), but still close enough to demonstrate the time statistic relationships.
Wall Time (WALL_S) = Elapsed Time (EL_VSQL_S) / Effective Parallelism
35.5 = 35.5 / 1
Nice! The Wall Time results matched perfectly. (35.5 vs 35.5)

To summarize: in a non-parallel query (i.e., single server process) situation, the time math results are what we expected (and hoped for)!

Test Results: When Oracle Parallel Query WAS involved.

The only difference in the "non parallel" SQL statement above and the SQL statement below is the parallel hint. Below is the "parallel" SQL statement.
select /*+  FULL(big2) PARALLEL(big2,3)  */ count(*) into i_var from big2 where rownum < 9000000
When the "parallel" SQL statement was running, Oracle parallel query was involved, resulting in multiple related Oracle sessions. So when monitoring with my rss.sql tool, I needed to open up the session ID (and serial#) ranges to include all sessions. I still sampled every second. Here is the rss.sql syntax:
SQL>@rss.sql 0 9999 0 9999 % % 1
The tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. I manually inserted the blank lines to make it easier to see the different sample periods.


There is only one SQL statement being run on this idle test system. And because there is no DML involved, we don't see much background process activity. If you look closely above, session 168 (see third column) must be a log writer process because the wait event is "log file parallel write". I checked, and session 6 is a background process as well.

It's no surprise to typically see only four sessions involved: one session is the parallel query coordinator and the other three are the parallel query slaves! Interestingly, the main server process session that I executed the query from is session number 16. It never showed up in any of my samples! I suspect it was "waiting" on an idle wait event, and I'm only showing processes consuming CPU or waiting on a non-idle wait event. Very cool.

Now for the timing results. I took five samples. Again, it's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related to ONLY the calling server process, which in this case is session 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) are related to the entire SQL statement execution.

Here are the "parallel" experimental results.
 SQL>  select * from op_results;

 SAMPLE_NO     WALL_S  EL_VSQL_S CPU_VSQL_S WAIT_TIME_S   DB_CPU_S  DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
         1  46.305951 132.174453   19.53818         .01   4.069579   4.664083
         2  46.982111 132.797536  19.371063         .02   3.809439   4.959602
         3   47.79761 134.338069  19.739735         .02   4.170921   4.555491
         4   45.97324 131.809249  19.397557         .01   3.790226   4.159572
         5  46.053922 131.765983  19.754143         .01   4.062703   4.461175
Let's check the math. For simplicity and clarity, please allow me to round and use sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
4.5 = 4.1 + 0
The DB Time shown above is kind of close... 10% off. (4.5 vs 4.1) But there is surely some timing error in my collection script. I take the position that this is close enough to demonstrate the time statistic relationships. Now look below.
Elapsed Time (EL_VSQL_S)  = DB_TIME_S
131.7 != 4.5
Woah! What happened here? (131.7 vs 4.5) Actually, everything is OK (so far anyway) because the DB Time is related to the session (Session ID 16), whereas the elapsed time is ALL the DB Time for ALL the processes involved in the SQL statement. Since parallel query is involved, resulting in four additional sessions (1 coordinator, 3 slaves), we would expect the elapsed time to be greater than the DB Time. Now let's look at the wall time.
Wall Time (WALL_S) = ( Elapsed Time (EL_VSQL_S) / Parallelism ) + overhead
46.1 = ( 131.8 / 3 ) + 2.2
Nice! Clearly the effective parallelism is a little less than 3 because there is some overhead (2.2 seconds). But the numbers make sense because:

1. The wall time is less than the elapsed time because parallel query is involved.

2. The wall time is close to the elapsed time divided by the parallelism. And we can even see the parallelism overhead.

So it looks like our time math is correct!


Reality And The AWR Or Statspack Report
This is really important. In the SQL Statement section of any AWR or Statspack Report, you will see the total elapsed time over the snapshot interval and perhaps the average SQL ID elapsed time per execution. So what is the wall time? What are users experiencing? The short answer is, we do NOT have enough information.

To know the wall time, we need to know the parallelism situation. If you are NOT using parallel query, then based on the time math demonstrated above, the elapsed time per execution will be close to what the user is experiencing (unless there is an issue outside of Oracle). However, if parallelism is involved, you can expect the wall time (i.e., the user's experience) to be much less than the elapsed time per execution shown in the AWR or Statspack report.

Another way of looking at this is: if a user is reporting that a query takes 10 seconds, but the average elapsed time is showing as 60 seconds, parallel query is probably involved. Also, as I mentioned above, never forget that the average value is not always the typical value. (More? Check out my video seminar entitled Using Skewed Data To Your Advantage HERE.)
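If you want a quick sanity check from the database itself, v$sql can hint at whether parallel query is in play for a statement. Below is a minimal sketch of such a query (my own illustration, not from an AWR report; elapsed_time is stored in microseconds, and &sql_id is a placeholder you supply):

select sql_id,
       executions,
       px_servers_executions,  -- greater than zero suggests parallel query was used
       round(elapsed_time/1e6/nullif(executions,0),1) avg_elapsed_s
from   v$sql
where  sql_id = '&sql_id';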

Thanks for reading!

Craig.
Categories: DBA Blogs

Welcome to WordPress 4.0 “Benny”

VitalSoftTech - Fri, 2014-09-05 00:06
WordPress 4.0 has been released. Upgraded my site without any issues. Key enhancements include: Intuitive editing Toolbar stays in place as you add content. Allows you to focus on your content. Seamless Media Embeds Just placing the link allows you to see the content.  New Plugin Browser Browse Plugins and Reviews directly in the dashboard. […]
Categories: DBA Blogs

A Preview of Oracle OpenWorld 2014

VitalSoftTech - Wed, 2014-09-03 21:54
Oracle OpenWorld is an annual conference organized by Oracle, which provides Oracle users and technologists a unique opportunity to learn and socialize. Oracle OpenWorld 2014 is scheduled to take place from September 28 – October 2, 2014 in San Francisco. The conference is designed to provide Oracle users a platform to find best suited Oracle […]
Categories: DBA Blogs

OSCON 2014: Complete Video Compilation

Surachart Opun - Sat, 2014-08-30 04:30
OSCON 2014 - Today, it's not only developers, system administrators and organizations that use open source; businesses have established themselves on open source as well. So, you cannot ignore open source. At OSCON, you'll encounter the open source ecosystem. It helps you dig deep into the business of open source.

Five reasons to attend OSCON:
• Get straight to the epicenter of all things open source and get better at what you do
• Learn from the best and make valuable connections
• Get solutions to your biggest challenges that you can apply today
• See the latest developments, products, services, and career trends
• Hear it first at OSCON

If you missed OSCON 2014, it's still a very good idea to catch up with OSCON 2014: Complete Video Compilation. You can download these videos or view them through O'Reilly's HD player, and learn about open source from more than 350 presenters, including Matthew McCullough (GitHub), Leslie Hawthorn (Elasticsearch), James Turnbull (Docker), Andrei Alexandrescu (Facebook), Tim Berglund (DataStax), Paco Nathan (Zettacap), Kirsten Hunter (Akamai), Matt Ray (Chef Software, Inc.), and Damian Conway (Thoughtstream) among them. The videos cover a lot of tracks (Business, Cloud, Community, Computational Thinking, Databases & Datastores, Education, Emerging Languages, Geek Lifestyle, Java & JVM, JavaScript - HTML5 - Web, Mobile Platforms, Open Hardware, Operations & System Admin, Perl, PHP, Python, Security, Tools & Techniques, User Experience).

You will be able to learn from all of those tracks, as I mentioned. Anyway, O'Reilly has improved video streaming and downloading. Additionally, the playback speed control and mobile viewing are very useful.

Categories: DBA Blogs

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 13:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:         UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:  Moscone South – 301
Date and Time:  9/28/14, 13:30 – 14:15

Session ID:          CON4493
Session Title:      “Regular Expressions in Oracle Database 101”
Venue / Room:   Moscone South – 303
Date and Time:   10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

Hands-On Programming with R by Garrett Grolemund

Surachart Opun - Wed, 2014-08-27 03:42
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.
The R language is useful for becoming a data scientist, as well as a computer scientist. I'd like to mention a book about data science with R: Hands-On Programming with R: Write Your Own Functions and Simulations by Garrett Grolemund. It explains how to solve the logistical problems of data science, and how to write your own functions and simulations with R. Readers learn through practical data analysis projects (Weighted Dice, Playing Cards, Slot Machine) and come to understand R more deeply. Additionally, Appendixes A-E help with installing/updating R and R packages, as well as loading data and debugging R code.
Garrett Grolemund maintains shiny.rstudio.com, the development center for the Shiny R package.
Free Sampler.
Categories: DBA Blogs

ORA-14048 When Adding Composite Partition

Don Seiler - Thu, 2014-08-21 15:35
Hot on the heels of my earlier composite partitioning post, I just ran into this confusing issue:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201410_spdts values ('DTS')
  6                  , subpartition p201410_spfoo values ('FOO')
  7                  , subpartition p201410_spbar values ('BAR')
  8                  , subpartition p201410_spsys values ('SYS')
  9                  , subpartition p201410_spsysaux values ('SYSAUX')
 10          )
 11  tablespace tbs1
 12  ;
tablespace tbs1
*
ERROR at line 11:
ORA-14048: a partition maintenance operation may not be combined with other
operations

The error doesn't quite make it easy to determine what the problem is. It turns out that the tablespace/storage clause of the ALTER TABLE ... ADD PARTITION statement has to come prior to the subpartition definitions. Simply moving that part of the statement a few rows up yields success:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          tablespace tbs1
  5          (
  6                  subpartition p201410_spdts values ('DTS')
  7                  , subpartition p201410_spfoo values ('FOO')
  8                  , subpartition p201410_spbar values ('BAR')
  9                  , subpartition p201410_spsys values ('SYS')
 10                  , subpartition p201410_spsysaux values ('SYSAUX')
 11          )
 12  ;

Table altered.

Again, probably obvious to most of you. It wasn't as obvious to decipher when I was trying to add a partition with over 6,300 subpartitions.


Categories: DBA Blogs

Adding New Partitions with Custom Subpartition Definition (Range-List)

Don Seiler - Thu, 2014-08-21 14:28
As part of a project for work I wanted to create a script that would create a new range partition but also pre-create all the list subpartitions. By default the subpartitions would be created based on the subpartition template. However, for various reasons which I won't get into, we don't update or use the subpartition template. I wanted to define the subpartition list as part of the ALTER TABLE ... ADD PARTITION statement. I assumed it was perfectly acceptable but didn't see any obvious examples in my hasty web searching, so I thought I'd share one myself.

I start by creating my composite range-list partitioned table:

SQL> create table p_objects
  2  tablespace tbs1
  3  partition by range(rdate)
  4  subpartition by list(owner)
  5  subpartition template
  6  (
  7          subpartition spsys values ('SYS')
  8          , subpartition spsysaux values ('SYSAUX')
  9  )
 10  (
 11          partition p201301 values less than (to_date('2013/02/01','YYYY/MM/DD')),
 12          partition p201302 values less than (to_date('2013/03/01','YYYY/MM/DD')),
 13          partition p201303 values less than (to_date('2013/04/01','YYYY/MM/DD')),
 14          partition p201304 values less than (to_date('2013/05/01','YYYY/MM/DD')),
 15          partition p201305 values less than (to_date('2013/06/01','YYYY/MM/DD')),
 16          partition p201306 values less than (to_date('2013/07/01','YYYY/MM/DD')),
 17          partition p201307 values less than (to_date('2013/08/01','YYYY/MM/DD')),
 18          partition p201308 values less than (to_date('2013/09/01','YYYY/MM/DD')),
 19          partition p201309 values less than (to_date('2013/10/01','YYYY/MM/DD')),
 20          partition p201310 values less than (to_date('2013/11/01','YYYY/MM/DD')),
 21          partition p201311 values less than (to_date('2013/12/01','YYYY/MM/DD')),
 22          partition p201312 values less than (to_date('2014/01/01','YYYY/MM/DD'))
 23  )
 24  as select object_id
 25          , owner
 26          , object_name
 27          , object_type
 28          , to_date(trunc(dbms_random.value(
 29                  to_char(to_date('2013/01/01','YYYY/MM/DD'),'J'),
 30                  to_char(to_date('2013/12/31','YYYY/MM/DD'),'J')
 31                  )),'J') rdate
 32  from all_objects
 33  where owner in ('SYS','SYSAUX');

Table created.

This creates the partitions with 2 subpartitions each, per my defined template. For example:

PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201301                        P201301_SPSYS
                               P201301_SPSYSAUX
P201302                        P201302_SPSYS
                               P201302_SPSYSAUX
P201303                        P201303_SPSYS
                               P201303_SPSYSAUX
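A listing like the one above can be produced with a simple data dictionary query along these lines (a sketch; the blanked-out repeating partition names come from a SQL*Plus "break on partition_name" setting):

select partition_name, subpartition_name
from   user_tab_subpartitions
where  table_name = 'P_OBJECTS'
order  by partition_name, subpartition_name;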

Next I'll add two more partitions, one with no subpartition definition specified and another with a custom definition:

SQL> alter table p_objects
  2          add partition p201408
  3                  values less than (to_date('2014/09/01','yyyy/mm/dd'));

Table altered.

SQL> alter table p_objects
  2          add partition p201409
  3                  values less than (to_date('2014/10/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201409_spdts values ('DTS')
  6                  , subpartition p201409_spsys values ('SYS')
  7                  , subpartition p201409_spsysaux values ('SYSAUX')
  8          )
  9  ;

Table altered.

The results:
PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201408                        P201408_SPSYS
                               P201408_SPSYSAUX
P201409                        P201409_SPDTS
                               P201409_SPSYS
                               P201409_SPSYSAUX

You can see that the first statement just used the subpartition template as defined in the original CREATE TABLE statement. The second uses the list I defined in the ALTER TABLE statement. I could have only defined the SPDTS subpartition and it would have only used that one and not used SYS or SYSAUX subpartitions at all.

Hopefully this post helps a few of you in your searching when faced with a similar task. This was probably obvious to most of you but the mind starts to slip for some of us.
Categories: DBA Blogs

What Is Oracle Elapsed Time And Wall Time With A Parallelism Twist

In this post I'm focusing on Oracle Database SQL elapsed time, adding parallelism into the mix and then revisiting wall time. What initially seems simple can take some very interesting twists!

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

A couple weeks ago I wrote about Oracle DB Time, non-idle wait time, and server process CPU consumption (DB CPU) time. If you have not read that posting, HERE is the link. It must be a good read, because it quickly became my most viewed post ever! In this posting, the focus is SQL elapsed time, parallelism, and again wall time. Enjoy!

Quick Review
In my previous related post, I covered non-idle wait time, DB CPU, and DB Time. Here is a very quick summary of each.

Non-Idle Wait Time occurs when an Oracle process is not consuming CPU: the session pauses (i.e., waits), and Oracle considers the wait time important for performance tuning. An example of a non-idle wait event is direct path read temp. An example of an idle wait event is SQL*Net message from client or pmon timer.

DB CPU is Oracle server/foreground/shadow process CPU consumption. This does not include Oracle background process CPU consumption.

DB Time is DB CPU plus Non-Idle Wait Time. Remember that DB Time does not include background process CPU consumption and Oracle Corporation determines which wait events are considered idle.

Elapsed Time
Elapsed Time (ET) is all DB Time related to a defined task. A "defined task" could be a SQL statement, group of SQL statements, pl/sql procedure, batch job, etc. It is whatever makes sense in your tuning situation.

The elapsed time for a SQL_ID can be found in v$sql. But be careful, because this elapsed time is related to "all" of the SQL_ID's executions. Thankfully, there is an "executions" column in v$sql.
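Here is a minimal sketch of such a query (my own illustration; elapsed_time and cpu_time in v$sql are stored in microseconds, and &sql_id is a placeholder):

select sql_id,
       executions,
       round(elapsed_time/1e6,1) total_elapsed_s,  -- all executions combined
       round(cpu_time/1e6,1)     total_cpu_s,
       round(elapsed_time/1e6/nullif(executions,0),1) avg_elapsed_s
from   v$sql
where  sql_id = '&sql_id';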


Elapsed time is displayed in a number of areas within an Oracle Database AWR and Statspack report. Looking at the above screen shot, the "top" elapsed time SQL has an elapsed time of 268561 seconds. This means that over the AWR report's snapshot interval, for all this SQL's executions, its total DB Time is 268561 seconds. Said another way, if we were to add up all this SQL's DB CPU and non-idle wait time for all its executions within the snapshot interval, the value should be 268561.

There is a lot of great information provided in the AWR and Statspack SQL reports. For example, because the elapsed time and the CPU time (DB CPU) are shown above, we can calculate the non-idle wait time for the "top" elapsed time SQL ID.

non idle wait time = elapsed time - cpu time
268465 = 268561 - 96

For the "top" elapsed SQL, its elapsed time is 268561 and its DB CPU is 96, therefore its non-idle wait time is 268465. Wow! This statement has tons of associated wait time compared to CPU consumption time.

But it gets even better! Because the total elapsed time and the total number of executions over the snapshot interval are displayed, we can determine the average elapsed time!

average elapsed time = total elapsed time / executions
7460.03 = 268561 / 36

Do not be deceived! The average elapsed time is unlikely to be what the user is experiencing. Two possible causes of this deception are skewed elapsed times and parallelism.

For most DBAs this is unexpected. It also causes performance perception problems, yet solutions are available to understand what's really going on.

I've spent so much time researching this topic, and seen it increase my consulting value, that I've posted a number of blog entries on this subject. Plus I created an OraPub Online Institute seminar focused specifically on this subject. It's called Using Skewed Performance Data To Your Advantage. Check it out. I'm really proud of how it turned out. I also have a couple of OSM scripts dedicated to this topic, sqlelget[11].sql.

Revisiting Wall Time With A Parallelism Twist
Now it's time to put this all together.

DB CPU is the Oracle server process CPU consumption.

Non-Idle Wait Time (NIWT) is the time when an Oracle process cannot consume CPU and must pause, and we care about this time.

DB Time is the Oracle server process CPU consumption and all non-idle wait time.

Elapsed Time (ET) is the sum of (i.e., all) the DB Time related to a task, such as a SQL_ID.

Wall Time is what we hope the user experiences. I'll assume there is no time gap between Oracle and the user, therefore the wall time will equal the user's experience.

Effective Parallelism is the effective number of Oracle parallel slaves or some other form of parallelism, such as designed-in application parallelism. (For simplicity, I'm only going to mention Oracle parallel query.) If Oracle parallel query is not involved, then the effective parallelism is one. If two parallel query slaves are involved, then the effective parallelism will be a little less than 2.

Parallelism can reduce wall time because we can simultaneously "burn time" in multiple places. For example, 60 seconds of elapsed time with a process running serially, results in a wall time of 60 seconds. But if we have two parallel query slaves, while the elapsed time (i.e., all the DB Time) is still 60 seconds (plus some overhead time), the wall time will be around 30 seconds (plus some overhead time).

The math is really simple...that is until you factor in scalability (i.e., the overhead), which I won't. If you're interested, read the last chapter of my book, Forecasting Oracle Performance.

Let's simplify this by using some mathematical notation.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = Elapsed Time / Effective Parallelism

Pretty straightforward, eh? Below is a short video clip summarizing the key time parameters, taken from the OraPub Online Institute seminar, Tuning Oracle Using an AWR Report - Part 2. If you can't see the video, click HERE to watch it on YouTube.



Test Your Knowledge
True or False? If the total elapsed time is 60 seconds and parallel query is not involved, the total wall time will also be 60 seconds. True

True or False? If the elapsed time per execution is 60 seconds and the wall time is 30 seconds, then parallel query is involved. True

True or False? Bonus question yet very important to understand: If the elapsed time per execution is 60 seconds and two PQ slaves are involved, then the wall time will be 30 seconds.

The last question is false because there is overhead when parallelizing. Parallelism is not free. Because of this, the wall time will hopefully drop to perhaps 35 seconds. That extra 5 seconds is the parallelization overhead.

Coming Up Next: Video Proof!
While the above may seem correct, I ran some SQL statements and captured the relevant time statistics. There is quite a bit of detail and I ran two different tests, so I'll post that in a week or two.

Thanks for reading,

Craig.
Categories: DBA Blogs

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, modifiable, with no Oracle licensing issues, and that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use. Based on a number of filtering command line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative time based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to a file in /tmp. The script contains no DML statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate: for example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait" or for both "%".
wait_event_partial is used when the session is waiting: the session is selected only if its wait event matches this (partial) string. Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see which sessions are waiting for a row-level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore... but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999) and by only selecting sessions in either a "cpu" or "wait" state.

A smart, lower-impact strategy would be to initially keep a broader selection criteria but sample less often, perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, then stream the activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
Categories: DBA Blogs

What Is Oracle DB Time, DB CPU, Wall Time and Non-Idle Wait Time

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

The key Oracle Database time parameters are elapsed time, database time (DB Time), non-idle wait time and server process CPU consumption (DB CPU) time.

This first post is pretty basic, yet core fundamental stuff. So in the following two posts I'll introduce elapsed time, add parallelism into the mix and revisit wall time. What initially seems simple can take some very interesting twists!

You probably know that I am all about quantitative Oracle performance analysis. I research, write, teach, and speak about it. I even have an OraPub Online Institute seminar about how to tune your Oracle Database systems from a standard AWR or Statspack report using an Oracle Time Based Analysis (OTBA) framework.

So let's get started!

Wall Time & Run Time
I'll start with Wall Time because that is close (hopefully) to what a user experiences. In fact, if there is no time gap between the Oracle Database and the user, then we can do a little math and figure out what the users are, on average, experiencing. I'll get back to wall time in the next post, where I include elapsed time and parallelism into the equation.

DB CPU
DB CPU is Oracle server/foreground/shadow process CPU consumption. Each Oracle server process gathers its own CPU consumption using the time and/or getrusage C function system call. So unless there is a major screw-up by either the operating system or the Oracle kernel developers, the time will be good... very good. The name DB CPU is taken from the actual statistic name, which is found in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB CPU. The value shown will be all server process CPU consumption within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.)

Below is an example Time Model Statistics screen shot from a standard AWR report. I've highlighted DB CPU.



If you run one of my OraPub System Monitor (OSM) time related tools, like ttpctx.sql or rtpctx.sql, you will see a CPU time statistic. That contains both the DB CPU (i.e., server process) and "background process cpu" statistics. Here's an example.

SQL> @ttpctx.sql
Remember: This report must be run twice so both the initial and
final values are available. If no output, press ENTER about 11 times.

Database: prod35 31-JUL-14 12:09pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (142 sec interval)

                                                         Avg Time        Time     Wait
Time Component                           % TT    % WT Waited (ms)       (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs   95.95    0.00       0.000     347.212        0
PX Deq: Slave Session Stats              1.45   35.74       0.113       5.240       47
library cache: mutex X                   0.58   14.26       0.136       2.090       15
PX Deq: Slave Join Frag                  0.43   10.57       0.067       1.550       23
PX Deq: Signal ACK EXT                   0.29    7.16       0.045       1.050       23
control file parallel write              0.28    7.03      20.600       1.030        0
PX qref latch                            0.27    6.75       0.012       0.990       85
latch free                               0.20    4.91       0.090       0.720        8
log file parallel write                  0.16    4.02      12.826       0.590        0

Non-Idle Wait Time
When an Oracle process cannot consume CPU, it will pause. As Oracle DBAs, we know this as wait time. Sometimes a process waits and it's not a performance problem, so we call this Idle Wait Time. Oracle background processes typically have lots of idle wait time. However, when a user is waiting for something to complete and way down deep their Oracle server process is waiting on perhaps a lock or latch, this is Non-Idle Wait Time. Obviously, when tuning Oracle we care a lot about non-idle wait time.

Below is a simple query showing wait event classifications. In this system there are 119 Idle wait events, so all the rest would be classified as non-idle wait events.
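A query along these lines does the job (a sketch of mine; v$event_name classifies every wait event by its wait_class):

select wait_class, count(*) event_count
from   v$event_name
group  by wait_class
order  by wait_class;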

Oracle uses a variety of methods to determine wait time. I have a number of postings and educational content available about this. You'll see them if you do an OraPub or blog search for "time".

When working with non-idle wait time, remember the 80/20 rule. Most of the wait time we care about will be contained within the largest ("top") two to four wait events. Don't waste YOUR time focusing on the other 20%.

Here's an example. In the screen shot below, while not shown, the total wait time is 1966 seconds. If you add up the displayed "top" four wait events, their combined wait time is 1857. This is about 95% of all the non-idle wait time. This is a good example demonstrating that most of the wait time is found in the top two to four events.

My OSM toolkit has many wait time related tools. Most start with "sw" for "session wait", but both ttpctx.sql and rtpctx.sql contain the non-idle wait time and also CPU consumption. This is a good time to transition into DB Time.

DB Time
DB Time is a time model statistic that is the sum of Oracle process CPU consumption and non-idle wait time. When optimizing Oracle systems we typically focus on reducing "time", though many times database work is also part of the equation. This "time" is essentially DB Time, though sometimes I take control over what I consider idle wait time.

The name DB Time comes from the actual statistic name in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB Time.
The DB Time value is technically all server process CPU consumption plus the non-idle wait time within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.) Surprisingly, Oracle does not include "background cpu time" in the DB Time statistic. There are both good and not so good reasons the background CPU time is not included, but that's a topic for another posting.

A Little Math
We have enough detail to relate DB Time, DB CPU and non-idle wait time together... using a little math.

DB Time = DB CPU + non_idle_wait_time

And of course,

non_idle_wait_time = DB Time - DB CPU

This is important, because there is no single statistic that shows all the non-idle wait time. This must be derived. Shown above is one way to derive the non-idle wait time. Take a look at the AWR report snippet below.

In the Non-Idle Wait Time section above, I stated that the total non-idle wait time was 1966 seconds. I derived this from the Time Model screen shown above. I simply did:

non_idle_wait_time = DB Time - DB CPU
1966.16 = 4032.03 - 2065.87
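The same two inputs can be pulled straight from the time model views. Here is a minimal sketch (the stat names 'DB time' and 'DB CPU' are the actual v$sys_time_model statistic names; values are stored in microseconds):

select stat_name,
       round(value/1e6,2) seconds
from   v$sys_time_model
where  stat_name in ('DB time','DB CPU');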

Coming Up Next
I wanted to keep this post short, which means I left out the more interesting topics. So in the next post I'll merge elapsed time into the picture, along with parallelism, and revisit wall time. Then in the third post (that's my guess at this point), I'll actually demonstrate this on two different systems.

Thanks for reading,

Craig.
 
Categories: DBA Blogs

Increase DSS Performance by 100x and OLTP by 2x. Learn more about Oracle 12c In-Memory Database.

VitalSoftTech - Mon, 2014-08-04 09:45
With Oracle 12.1.0.2, one of the largest performance enhancing features, the Oracle 12c In-Memory Database option, has been released. Learn more about how to enable it, the licensing costs, etc. Also read other Oracle 12c In-Memory Database FAQs here ...
Categories: DBA Blogs

New in Oracle 12c: Querying an Associative Array in PL/SQL Programs

Galo Balda's Blog - Sat, 2014-08-02 17:23

I was aware that up to Oracle 11g, a PL/SQL program wasn't allowed to use an associative array in a SQL statement. This is what happens when I try to do it.

SQL> drop table test_array purge;

Table dropped.

SQL> create table test_array as
  2  select level num_col from dual
  3  connect by level <= 10;

Table created.

SQL> select * from test_array;

   NUM_COL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10                                                                      

10 rows selected.

SQL> drop package PKG_TEST_ARRAY;

Package dropped.

SQL> create or replace package PKG_TEST_ARRAY as
  2
  3    type tab_num is table of number index by pls_integer;
  4
  5  end PKG_TEST_ARRAY;
  6  /

Package created.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select num_col from test_array
 10              where num_col in (select * from table(my_array))
 11             )
 12    loop
 13      dbms_output.put_line(i.num_col);
 14    end loop;
 15  end;
 16  /
            where num_col in (select * from table(my_array))
                                                  *
ERROR at line 10:
ORA-06550: line 10, column 51:
PLS-00382: expression is of wrong type
ORA-06550: line 10, column 45:
PL/SQL: ORA-22905: cannot access rows from a non-nested table item
ORA-06550: line 9, column 13:
PL/SQL: SQL Statement ignored
ORA-06550: line 13, column 26:
PLS-00364: loop index variable 'I' use is invalid
ORA-06550: line 13, column 5:
PL/SQL: Statement ignored

As you can see, the TABLE operator is expecting either a nested table or a varray.
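For the record, the pre-12c workaround is to declare a schema-level (SQL) collection type, which the TABLE operator has always accepted. Here is a minimal sketch (the type name num_tab is my own):

create or replace type num_tab is table of number;
/

declare
  my_array num_tab := num_tab();
begin
  for i in 1 .. 5 loop
    my_array.extend;          -- nested tables must be extended before assignment
    my_array(i) := i*2;
  end loop;

  for i in (
            select num_col from test_array
            where num_col in (select column_value from table(my_array))
           )
  loop
    dbms_output.put_line(i.num_col);
  end loop;
end;
/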

The limitation has been removed in Oracle 12c. This is what happens now.

SQL> set serveroutput on
SQL>
SQL> drop table test_array purge;

Table dropped.

SQL> create table test_array as
  2  select level num_col from dual
  3  connect by level <= 10;

Table created.

SQL> select * from test_array;

   NUM_COL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10                                                                      

10 rows selected.

SQL> drop package PKG_TEST_ARRAY;

Package dropped.

SQL> create or replace package PKG_TEST_ARRAY as
  2
  3    type tab_num is table of number index by pls_integer;
  4
  5  end PKG_TEST_ARRAY;
  6  /

Package created.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select num_col from test_array
 10              where num_col in (select * from table(my_array))
 11             )
 12    loop
 13      dbms_output.put_line(i.num_col);
 14    end loop;
 15  end;
 16  /

2
4
6
8
10                                                                              

PL/SQL procedure successfully completed.

Here’s another example using a slightly different query.

SQL> declare
  2    my_array pkg_test_array.tab_num;
  3  begin
  4    for i in 1 .. 5 loop
  5      my_array(i) := i*2;
  6    end loop;
  7
  8    for i in (
  9              select a.num_col, b.column_value
 10              from
 11                test_array a,
 12                table (my_array) b
 13              where
 14                a.num_col = b.column_value
 15             )
 16    loop
 17      dbms_output.put_line(i.num_col);
 18    end loop;
 19  end;
 20  /

2
4
6
8
10                                                                              

PL/SQL procedure successfully completed.

Very nice stuff.


Filed under: 12C, PL/SQL Tagged: 12C, PL/SQL
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs