DBA Blogs

Top 5 wait events from v$active_session_history

DBA Scripts and Articles - Mon, 2014-11-24 09:30

This query returns the top 5 wait events for the last hour from the v$active_session_history view. Be careful: this view is part of the Diagnostic Pack, so you should not query it if you are not licensed for it. This is obviously an approximation, because v$active_session_history contains only 1-second samples, and … Continue reading Top 5 wait events from v$active_session_history
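The embedded query did not survive feed syndication; a minimal sketch of such a query, using documented v$active_session_history columns, might look like this:

select * from (
    select event, count(*) as samples
    from   v$active_session_history
    where  sample_time > sysdate - 1/24
    and    session_state = 'WAITING'
    group  by event
    order  by count(*) desc
)
where rownum <= 5;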

The post Top 5 wait events from v$active_session_history appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Oracle Database 12C Certified Professional SQL Foundations by Steve Ries; Infinite Skills

Surachart Opun - Wed, 2014-11-19 01:36
How do you become an Oracle Certified Associate? That's a good question for people who want to start working with Oracle Database and earn their first Oracle certification. Step 1 - Pass one SQL exam: Oracle Database 12c: SQL Fundamentals 1Z0-061, Oracle Database 11g: SQL Fundamentals I 1Z0-051, or Oracle Database SQL Expert 1Z0-047. Step 2 - Pass the Oracle Database 12c: Installation and Administration exam, 1Z0-062.
For Step 1, the Oracle Database 12c: SQL Fundamentals 1Z0-061 exam expects skills in the SQL SELECT statement, subqueries, data manipulation, and data definition language. I am not encouraging you to learn SELECT statements, subqueries, DML, DDL, and the like just for the exam; in the real world of working with Oracle Database you have to know them. You can read about them in the Oracle documentation, the Oracle Learning Library, Oracle University, and on the Internet. If you are looking for video training that covers Oracle Database SQL foundations - SELECT, DML, DDL, and so on - I recommend Oracle Database 12C Certified Professional SQL Foundations by Steve Ries. I watched it on O'Reilly, where streaming and downloading are very fast. The video training makes learning the Oracle SQL foundations easier (watch, then do it yourself), and the instructor covers each topic clearly and is easy to listen to.

FYI, you can watch some of the videos for free.

Video Training: Oracle Database - 12C Certified Professional SQL Foundations
Instructor: Steve Ries
Categories: DBA Blogs

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster?

Most Oracle 12c installations will NOT be using the awesome Oracle Database in-memory features available starting in version 12.1.0.2.0. This experiment is about the performance impact of upgrading to 12c but disabling the in-memory features.

Every experiment I have performed comparing buffer processing rates clearly shows that any version of 12c performs better than 11g. However, in my previous post, my experiment clearly showed a performance decrease after upgrading from 12.1.0.1.0 to 12.1.0.2.0.

This posting is about why this occurred and what to do about it. The bottom line is this: make sure "off" is "totally off."

Turn it totally off, not partially off
What I discovered is that, by default, the in-memory column store feature is not "totally disabled." My experiment clearly indicates that unless the DBA takes action, not only could this be a license agreement violation, but a partially disabled in-memory column store slightly slows logical IO processing compared with the non in-memory column store 12c option. Still, any 12c version processes buffers faster than 11g.

My experiment: specific and targeted
This is important: The results I published are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also to highlight various situations that DBAs need to know about.

You can download all my experimental results HERE. This includes the raw sqlplus output, the data values, the free R statistics package commands, spreadsheet with data nicely formatted and lots of histograms.

The instance parameter settings and results
Let me explain this by first showing the instance parameters and then the experimental results. There are some good lessons to learn!

Pay close attention to the inmemory_force and inmemory_size instance parameters.

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE

SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1409289056 bytes
Database Buffers 6174015488 bytes
Redo Buffers 13045760 bytes

In my experiment using the above settings, the median buffer processing rate was 549.4 LIO/ms. Looking at the inmemory_size parameter and the SGA contents, I assumed the in-memory column store was disabled. If you look at the actual experimental result file "Full ds2-v12-1-0-2-ON.txt", which contains the explain plan of the SQL used in the experiment, there is no mention of the in-memory column store being used. My assumption, which I think is a fair one, was that the in-memory column store had been disabled.

As you'll see I was correct, but only partially correct.

The parameter settings below are from when the in-memory column store was totally disabled. The key is changing the inmemory_force parameter from its default value of DEFAULT to OFF.
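For reference, a minimal sketch of making this change, assuming an spfile is in use:

SQL> alter system set inmemory_force=off scope=spfile;
SQL> shutdown immediate
SQL> startup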

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string OFF
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE
SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1291848544 bytes
Database Buffers 6291456000 bytes
Redo Buffers 13045760 bytes

Again, the SGA does not show any in-memory memory space. In my experiment with the above "totally off" settings, the median buffer processing rate was 573.5 LIO/ms, compared with the "partially off" rate of 549.4 LIO/ms. Lesson: Make sure off is truly off.

It is an unfair comparison!
It is not fair to compare the "partially off" with the "totally off" test results. Now that I know the default inmemory_force must be changed to OFF, the real comparison should be made with the non in-memory column store version 12.1.0.1.0 and the "totally disabled" in-memory column store version 12.1.0.2.0. This is what I will summarize below. And don't forget all 12c versions showed a significant buffer processing increase compared to 11g.

The key question: Should I upgrade?
You may be thinking, if I'm NOT going to license and use the in-memory column store, should I upgrade to version 12.1.0.2.0? Below is a summary of my experimental results followed by the key points.


1. The non column store version 12.1.0.1.0 was able to process 1.1% more buffers/ms (median: 581.7 vs 573.5) compared to the "totally disabled" in-memory column store version 12.1.0.2.0. While this is statistically significant, a 1.1% buffer processing difference is probably not going to make-or-break your upgrade.

2. Oracle Corporation, I'm told, knows about this situation and is working on a fix. But even if they don't fix it, in my opinion this experimental "data point" would not justify avoiding an upgrade to the in-memory column store version 12.1.0.2.0, even if you are NOT going to use the in-memory features.

3. Visually (see below) the non in-memory version 12.1.0.1.0 and the "totally off" in-memory version 12.1.0.2.0 sample sets look different. But they are pretty close. And as I mentioned above, statistically they are "different."

Note for the statistically curious: The red color 12.1.0.1.0 non in-memory version data set is highly variable. I don't like to see this in my experiments. Usually this occurs when a mixed workload sometimes impacts performance, when I don't take enough samples, or when my sample duration is too short. To counteract this, in this experiment I captured 31 samples. I also performed the experiment multiple times and the results were similar. What I could have done was use more application data to increase the sample duration time. Perhaps that would have made the data clearer. I could have also used another SQL statement and method to create the logical IO load.

What I learned from this experiment
To summarize this experiment, four things come to mind:

1. If you are not using an Oracle Database feature, completely disable it. My mistake was thinking the in-memory column store was disabled when I set its memory size to zero and "confirmed" it was off by looking at the SGA contents.

2. All versions of 12c I have tested are clearly faster at processing buffers than any version of 11g.

3. There is a very slight performance decrease when upgrading from Oracle Database version 12.1.0.1.0 to 12.1.0.2.0.

4. It is amazing to me that with all the new features poured into each new Oracle Database version, the developers have been able to keep the core buffer processing rate essentially on par with the previous version. That is an incredible accomplishment. While some people may view this posting as a negative hit against the Oracle Database, it is actually a confirmation of how awesome the product is.

All the best in your Oracle performance tuning work!

Craig.




Categories: DBA Blogs

Think Stats, 2nd Edition Exploratory Data Analysis By Allen B. Downey; O'Reilly Media

Surachart Opun - Mon, 2014-11-17 08:15
There are lots of books on data analysis with Python. This might be a good one to help readers perform statistical analysis with programs written in Python: Think Stats, 2nd Edition Exploratory Data Analysis by Allen B. Downey (@allendowney).
This second edition of Think Stats includes the chapters from the first edition, many of them substantially revised, and new chapters on regression, time series analysis, survival analysis, and analytic methods. Additionally, it uses pandas, SciPy, and StatsModels in Python. The author developed this book using Anaconda from Continuum Analytics; readers should use it too, as it makes things easy. Anyway, I tested on Ubuntu and installed the pandas, NumPy, SciPy, StatsModels, and matplotlib packages. The book has 14 chapters that follow the processes the author uses to work with a dataset. It's for intermediate readers, so readers should know how to program (the book uses Python) and have some mathematical and statistical skill.
Each chapter includes exercises that readers can practice with to deepen their understanding. Free Sampler
  • Develop an understanding of probability and statistics by writing and testing code.
  • Run experiments to test statistical behavior, such as generating samples from several distributions.
  • Use simulations to understand concepts that are hard to grasp mathematically.
  • Import data from most sources with Python, rather than rely on data that’s cleaned and formatted for statistics tools.
  • Use statistical inference to answer questions about real-world data.
surachart@surachart:~/ThinkStats2/code$ pwd
/home/surachart/ThinkStats2/code
surachart@surachart:~/ThinkStats2/code$ ipython notebook  --ip=0.0.0.0 --pylab=inline &
[1] 11324
surachart@surachart:~/ThinkStats2/code$ 2014-11-17 19:39:43.201 [NotebookApp] Using existing profile dir: u'/home/surachart/.config/ipython/profile_default'
2014-11-17 19:39:43.210 [NotebookApp] Using system MathJax
2014-11-17 19:39:43.234 [NotebookApp] Serving notebooks from local directory: /home/surachart/ThinkStats2/code
2014-11-17 19:39:43.235 [NotebookApp] The IPython Notebook is running at: http://0.0.0.0:8888/
2014-11-17 19:39:43.236 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
2014-11-17 19:39:43.236 [NotebookApp] WARNING | No web browser found: could not locate runnable browser.
2014-11-17 19:39:56.120 [NotebookApp] Connecting to: tcp://127.0.0.1:38872
2014-11-17 19:39:56.127 [NotebookApp] Kernel started: f24554a8-539f-426e-9010-cb3aa3386613
2014-11-17 19:39:56.506 [NotebookApp] Connecting to: tcp://127.0.0.1:43369
2014-11-17 19:39:56.512 [NotebookApp] Connecting to: tcp://127.0.0.1:33239
2014-11-17 19:39:56.516 [NotebookApp] Connecting to: tcp://127.0.0.1:54395

Book: Think Stats, 2nd Edition Exploratory Data Analysis
Author: Allen B. Downey(@allendowney)
Categories: DBA Blogs

NoCOUG watchers protest despicable tactics being used by NoCOUG management

Iggy Fernandez - Thu, 2014-11-13 13:49
FOR IMMEDIATE RELEASE NoCOUG watchers protest despicable tactics being used by NoCOUG management SILICON VALLEY (NOVEMBER 13, 2014) – Veteran NoCOUG watchers all over Northern California have been protesting the despicable tactics being used by NoCOUG management to lure Oracle Database professionals to the NoCOUG conference at the beautiful eBay Town Hall next week. Instead […]
Categories: DBA Blogs

Making Copies of Copies with Oracle RMAN

Don Seiler - Wed, 2014-11-12 10:13
I recently had a need to make a copy of an image copy in Oracle RMAN. Since it wasn't immediately obvious to me, I thought it was worth sharing once I had it sorted out. I was familiar with making a backup of a backup, but had never thought about making a copy of a copy.

First you need to create an image copy of your database or tablespace. For the sake of example, I'll make a copy of the FOO tablespace. The key is to assign a tag to it that you can use for later reference. I'll use the tag "DTSCOPYTEST":

backup as copy 
    tablespace foo 
    tag 'DTSCOPYTEST'
    format '+DG1';

So I have my image copy in the DG1 diskgroup. Now say we want to make a copy of that copy for some testing purpose and put it in a different diskgroup. For that, we need the "BACKUP AS COPY COPY" command, and we'll want to specify the copy we just took by using the tag that was used:

backup as copy
    copy of tablespace foo
    from tag 'DTSCOPYTEST'
    tag 'DTSCOPYTEST2'
    format '+DG2';

As you'd guess, RMAN makes a copy of the first copy, writing it to the specified format location.
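To verify both copies are cataloged, with their tags, something like this should work (a sketch):

list copy of tablespace foo;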

As always, hope this helps!
Categories: DBA Blogs

GoldenGate Not Keeping Up? Split the Process Using GoldenGate Range Function

VitalSoftTech - Tue, 2014-11-11 10:35
In most environments one set of GoldenGate processes (1 Extract & 1 Replicat process) is sufficient for change data synchronization. But if your source database generates a huge volume of data, then a single process may not be sufficient to handle the data volume. In such a scenario there may be a need to split the […]
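The excerpt is truncated, but for a flavor of the technique: the @RANGE function hashes rows on a key column so each process handles a distinct slice. A minimal sketch across two Replicat parameter files (the schema, table, and key column names here are made up for illustration):

-- Replicat rep1 parameter file
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (1, 2, order_id));

-- Replicat rep2 parameter file
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (2, 2, order_id));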
Categories: DBA Blogs

Is Oracle Database 12c (12.1.0.2.0) Faster Than Previous Releases?

I was wondering if the new Oracle Database 12c version 12.1.0.2.0 in-memory column store feature would SLOW performance when it is NOT being used. I think this is a fair question, because most Oracle Database systems will NOT be using this feature.

While the new in-memory column store feature is awesome and significant, with each new Oracle feature there is additional kernel code. And if Oracle is not extremely careful, these new lines of Oracle kernel code can slow down the core of Oracle processing, that is, buffer processing in Oracle's buffer cache.

Look at it this way, if a new Oracle release requires 100 more lines of kernel code to be executed to process a single buffer, that will be reflected in how many buffers Oracle can process per second.

To put it bluntly, this article is the result of my research comparing core buffer processing rates between Oracle Database versions 11.2.0.2.0, 12.1.0.1.0 and 12.1.0.2.0.

With postings like this, it is very important for everyone to understand that the results I publish are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also to highlight various situations that DBAs need to know about.

There are two interesting results from this research project. This article is about the first discovery and my next article will focus on the second. The second is by far the most interesting!

FYI: Back in August of 2013 I performed a similar experiment where I compared Oracle Database versions 11.2.0.2.0 and 12.1.0.1.0. I posted the article HERE.

Why "Faster" Means More Buffer Gets Processed Per Second
For this experiment when I say "faster" I am referring to raw buffered block processing. When a buffer is touched in the buffer cache it is sometimes called a buffer get or a logical IO. But regardless of the name, every buffer get increases the instance statistic, session logical reads.
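A quick way to watch this statistic yourself (a sketch):

SQL> select name, value from v$sysstat where name = 'session logical reads';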

I like raw logical IO processing experiments because they are central to all Oracle Database processing. Plus, with each new Oracle release, as additional functionality is inserted, it is likely more lines of Oracle kernel code will exist. To maintain performance with added functionality is an incredible feat. It's more likely the core buffer processing will be slower because of the new features. Is this the case with Oracle's in-memory column store?

How I Setup The Experiment
I have included all the detailed output, scripts, R commands and output, data plots and more in the Analysis Pack that can be downloaded HERE.

There are a lot of ways I could have run this experiment. But two key items must exist for a fair comparison. First, all the processing must be in cache. There can be no physical read activity. Second, the same SQL must be run during the experiment and have the same execution plan. This implies the Oracle 12c column store will NOT be used. A different execution plan is considered "cheating," as a bad plan will clearly lose. Again, this is a very targeted and specific experiment.

The experiment compares the buffer get rates for a given SQL statement. For each Oracle version, I gathered 33 samples and excluded the first two, just to ensure caching was not an issue. The SQL statement runs for around 10 seconds, processes around 10.2M rows and touches around 5M buffers. I checked to ensure the execution plans are the same for each Oracle version. (Again, all the details are in the Analysis Pack for your reading pleasure.)

I ran the experiment on a Dell server. Here are the details:
$ uname -a
Linux sixcore 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
To make things easier for myself, I performed the test using my CPU Speed Test tool (version 1i). I blogged about this last month HERE. The latest version of this tool can be downloaded HERE.

The Results, Statistically
Shown below are the experimental results. Remember, the statistic I'm measuring is buffer gets per millisecond.


Details about the above table: The "Normal" column indicates whether the statistical distribution of the 31 samples is normal. If the p-value (far right column) is greater than 0.05 then I'll say they are normal. In all three cases, the p-value is less than 0.05. In fact, if you look at the histograms contained in the Analysis Pack, every histogram is visually clearly not normal. As you would expect, the "Average" and the "Median" are the statistical mean and median. The "Max" is the largest value in the sample set. The "Std Dev" is the standard deviation, which doesn't mean much since our sample sets are not normally distributed.

As I blogged about before, Oracle Database 12c buffer processing is faster than Oracle Database 11g. However, the interesting part is that the in-memory column store version, 12.1.0.2.0, is slower than the previous version of 12c, 12.1.0.1.0. In fact, in my experiment the in-memory column store version is around 5.5% slower! This means version 12.1.0.1.0 "out of the box" can process logical buffers around 5.5% faster! Interesting.

In case you're wondering, I used the default out-of-the-box in-memory column store settings for version 12.1.0.2.0. I checked the in-memory size parameter, inmemory_size, and it was indeed set to zero. Also, when I start up the Oracle instance there is no mention of the in-memory column store.

Statistically Comparing Each Version
As an important side bar, I did statistically compare the Oracle Database versions. Why? Because while a 5.5% decrease in buffer throughput may seem important, it may not be statistically significant, meaning the difference could simply be noise in our sample sets.

So going around saying version 12.1.0.2.0 is "slower" by 5.5% would be misleading. But in my experiment, it would NOT be misleading because the differences in buffer processing are statistically significant. The relevant experimental details are shown below.

Version A    Version B    Statistically  p-value
                          Different
----------   ----------   -------------  -------
11.2.0.1.0   12.1.0.1.0   YES            0.0000
11.2.0.1.0   12.1.0.2.0   YES            0.0000
12.1.0.1.0   12.1.0.2.0   YES            0.0000

In all three cases the p-value was less than 0.05, signifying the two sample sets are statistically different. Again, all the details are in the Analysis Pack.

The chart above shows the histograms of both Oracle Database 12c version sample sets together. Visually they look very separated and different with no data crossover. So from both a numeric and visual perspective there is a real difference between 12.1.0.1.0 and 12.1.0.2.0.


What Does This Mean To Me
To me this is surprising. First, there is a clear buffer processing gain upgrading from Oracle 11g to 12c. That is awesome news! But I was not expecting a statistically significant 5.5% buffer processing decrease upgrading to the more recent 12.1.0.2.0 version. Second, this has caused me to do a little digging to perhaps understand the performance decrease. The results of my experimental journey are really interesting...I think more interesting than this posting! But I'll save the details for my next article.

Remember, if you have any questions or concerns about my experiment you can run the experiment yourself. Plus all the details of my experiment are included in the Analysis Pack.

All the best in your Oracle performance tuning work!

Craig.





Categories: DBA Blogs

The Importance of Backups (A Cautionary Block Recovery Tale)

Don Seiler - Tue, 2014-11-04 22:28
Just wanted to share a quick story with everyone. As I was in the airport waiting to fly to Oracle OpenWorld this year, I noticed a flurry of emails indicating that part of our storage infrastructure for our standby production database had failed. Long story short, my co-workers did a brilliant job of stabilizing things and keeping recovery working. However, we ended up with more than a few block corruptions.

Using the RMAN command "validate database", we could then see the list of corrupt blocks in the v$database_block_corruption view. All that was needed was to run "recover corruption list" in RMAN, which will dig into datafile copies and backups to do what it can to repair or replace the corrupt blocks and then recover the datafiles. Of course, nothing is ever that easy for us!
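For reference, the basic sequence looks something like this (a sketch; your output will differ):

RMAN> validate database;
SQL>  select * from v$database_block_corruption;
RMAN> recover corruption list;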

The storage we were writing our weekly backups to had been having problems, and the latest weekly backup had failed. We ended up having to go back 2 weeks into our backups to get the datafile blocks and archivelogs needed to eventually complete the corruption recovery. I also immediately moved our backups to more reliable storage, so that we're never again in the situation of wondering whether or not we have the backups we need.

So, triple-check your backup plan, validate your backups and TEST RECOVERY SCENARIOS! You can't say your backups are valid until you use them to perform a restore/recovery, and you don't want to find out the hard way that you forgot something.
Categories: DBA Blogs

Does Increasing An Oracle Background Process OS Priority Improve Performance?

Does increasing an Oracle Database background process operating system priority improve performance? As you might expect, the answer is, "It depends."

In this posting I will explain the results of an experiment where I increase the Oracle Database 12c log writer background processes operating system priority.

In my experiment I created a clear CPU bottleneck, and the top wait event was log file parallel write. I gathered some data. Next I increased all the log writer background process priorities. Again, I gathered some data and then analyzed the two data sets. The results were disappointing though not surprising, but a fundamental rule in performance tuning was demonstrated and reinforced.

You can download the "analysis pack," which contains the raw experimental data, histogram, statistical R results and the R statistical package commands, HERE. You can download the total time delta reporting script (ttpctx.sql) I show below, which is contained within my OraPub System Monitor (OSM) Toolkit, HERE.

Before I could gather some data, I needed to create the appropriate system load, the data gather scripts and the data analysis scripts. Here's a quick overview of each.

Increasing The LGWR Processes Priority
If you are not familiar with changing Oracle Database background OS process priority, I blogged about how to do this HERE.

My experiment consisted of creating two situations and statistically comparing them to see if increasing the LGWR background process OS priority increased performance. The only difference in the "two situations" was the instance parameter, _high_priority_processes. For the "normal priority" part of the experiment, the default "LMS*|VKTM" was used. For the "high priority" part of the experiment the parameter was changed to "LMS*|VKTM|LG*". The "LG*" caused the increase in the Linux OS priority of all log writer processes from the default 19 to 41.
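The post shows the resulting settings rather than the change itself; a minimal sketch, assuming an spfile (underscore parameters must be quoted, and the instance recycled):

SQL> alter system set "_high_priority_processes"='LMS*|VKTM|LG*' scope=spfile;
SQL> shutdown immediate
SQL> startup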

Ready for some version specifics? Oracle continues to make OS priority related instance parameter changes... even within 12c releases. Since this experiment was done with Oracle Database version 12.1.0.1.0 the parameter default was "LMS*|VKTM" not "LMS*" as with version 12.1.0.2.0. Also, in 12.1.0.2.0 VKTM is placed into a new parameter, _highest_priority_processes.

Generating The DML With CPU Bottleneck Load
To generate the DML workload, I used my OP Load Generator. You can download it HERE. It's quick, easy and I'm familiar with it. I kept increasing the number of DML processes until the CPU utilization was pegged at 100%. While the database server was clearly out of CPU power, the log writer background processes were rarely seen as the top CPU consuming process. I used the "top" program to monitor the process activity. The top CPU consuming processes were almost always the Oracle server/foreground/shadow processes. As I'll explain in the Results section below, this has significant ramifications on the results.

Oracle Time Based Analysis Summary
The data collection script was similar to the one I used back in 2012 when collecting data regarding Oracle's commit write facility. Essentially, I collected multiple three minute samples of the delta elapsed time, user commits, total non-idle wait time and CPU consumption. In both the normal and high priority runs, around 99% of the database time was CPU consumption and 1% of the time Oracle non-idle wait time. Also the top wait event (~70%) was log file parallel write (display name: log file redo write).

If you want to master Oracle Time Based Analysis, check out my online seminar, Tuning Oracle Using An AWR Report. It will teach you how to use an AWR report to optimize Oracle performance so users will feel the difference.

The OraPub System Monitor Toolkit script ttpctx.sql report below was taken during the "normal priority" log writer load.

SQL> @ttpctx

Database: prod35 16-MAY-14 06:07pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (39 sec interval)

Avg Time Time Wait
Time Component % TT % WT Waited (ms) (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs 98.91 0.00 0.000 238.716 0
log file redo write 0.73 67.56 8.082 1.770 0
control file parallel write 0.18 16.41 33.077 0.430 0
target log write size 0.05 4.20 6.111 0.110 0
oracle thread bootstrap 0.03 3.05 40.000 0.080 0
os thread creation 0.02 1.53 20.000 0.040 0
commit: log file sync 0.01 0.76 10.000 0.020 0
enq: CR - block range reuse ckpt 0.01 0.76 20.000 0.020 0
Disk file operations I/O 0.00 0.00 0.000 0.000 0

The OSM report below was taken during the "high priority" log writer load.

SQL> @ttpctx

Database: prod35 16-MAY-14 09:25pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (41 sec interval)

Avg Time Time Wait
Time Component % TT % WT Waited (ms) (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs 98.92 0.00 0.000 238.733 0
log file redo write 0.83 77.01 8.272 2.010 0
control file parallel write 0.08 7.28 14.615 0.190 0
target log write size 0.05 4.98 5.909 0.130 0
oracle thread bootstrap 0.03 3.07 40.000 0.080 0
os thread creation 0.02 1.92 25.000 0.050 0
commit: log file sync 0.01 0.77 10.000 0.020 0
enq: CR - block range reuse ckpt 0.01 0.77 20.000 0.020 0
enq: RO - fast object reuse 0.00 0.38 10.000 0.010 0


Data Collection
For the normal priority load, 27 three-minute samples were collected. For the high priority situation there were 30 three-minute samples collected. (I forget why only 27 samples were collected for the normal priority run.) I collected the elapsed time, total non-idle wait time, total CPU consumption (v$sys_time_model: db_cpu + background cpu time) and total user commits.

In this experiment more user commits processed per second means better performance.


Experimental Results
I used the free statistics package "R" (www.r-project.org) to analyze the data. I demonstrate how to get, install and use "R" in my online video seminar, Using Skewed Data To Your Advantage.

With the normal log writer process priority, an average of 984.5 commits/sec and a median of 983.0 commits/sec occurred. With the LG* high process priority, an average of 993.6 commits/sec and a median of 991.0 commits/sec occurred. While the "high priority" situation was able to process more commits per second, is this statistically significant?

The red "smoothed" histogram is the normal priority situation and the blue smoothed histogram is when the log writers were set to the higher priority. The more separated the two histograms the more "different" the sample sets, the more likely there is a statistically significant difference and the more likely a user would feel the difference. Looking at the above histograms plot, there does not appear to be a real difference. But let's do a proper significance test!

Because both sample sets are normally distributed (details are in the Analysis Pack), I could use a simple t-test. R produced a p-value of 0.04451. To be statistically "different" I want the p-value to be less than 0.05 and it is. What does this mean?
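In R, that comparison is a one-liner (a sketch; normal_cps and high_cps are hypothetical vectors holding the commits/sec samples from each run):

> t.test(normal_cps, high_cps)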

While statistically and numerically the commit rates are different, I wouldn't expect any special performance tuning award! In fact, the hassles with cycling a production instance and setting underscore/hidden parameters would make it very unlikely I would increase the OS priority of the log writer background processes. I want to see a big performance difference.

To Summarize... What I Learned... Again
This situation is a perfect example of focusing on the wrong thing! While there is a clear operating system CPU bottleneck and the top wait event is about redo, the log writers are not suffering from a want/need of CPU resources. I suspect the server processes want more CPU resources, but they are NOT the processes whose OS priority we increased.

If the log writers were suffering from a lack of CPU resources and fighting for CPU resources, I would expect to see them consuming CPU resources along with the Oracle server processes. And I would definitely expect to see them near the top of the "top" process monitor... especially when their priority has been increased!

Because of this "misguided" tuning effort, this experiment does not build a case for or against changing the log writer priority. What it reinforces is that in our quest to optimize performance, we must make sure we focus on the right thing.

As a side note, this is a great statistical analysis example for two reasons. First, our sample sets look similar, but statistically they are not. Second, while they are statistically different, the performance impact will not be very different. And my guess is the users won't feel a thing... except frustration and anger.

To Super Summarize
When considering increasing a background process's operating system priority, make sure the process is in need of CPU and is not able to get it. In this situation, the DBA could have been misled by the CPU bottleneck. But upon closer inspection of the log writers from an operating system perspective, and knowing the wait event "log file parallel write" is probably more about IO than CPU (from an Oracle perspective), it would be unlikely that increasing the log writer processes' OS priority would help increase the commits per second.

Thanks for reading!

Craig.












Categories: DBA Blogs

Oracle Trivia Quiz

Iggy Fernandez - Wed, 2014-10-29 08:36
All the answers can be found in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What’s NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you live in the San Francisco bay area and have never ever attended a NoCOUG […]
Categories: DBA Blogs

Results of the NoCOUG SQL Mini-Challenge

Iggy Fernandez - Tue, 2014-10-28 16:55
As published in the November 2014 issue of the NoCOUG Journal The inventor of the relational model, Dr. Edgar Codd, was of the opinion that “[r]equesting data by its properties is far more natural than devising a particular algorithm or sequence of operations for its retrieval. Thus, a calculus-oriented language provides a good target language […]
Categories: DBA Blogs

How To Change The Priority Of Oracle Background Processes


Before you get in a huff, it can be done! You can change an Oracle Database background process priority through an instance parameter! I'm not saying it's a good idea, but it can be done.

In this post I explore how to make the change, just how far you can take it and when you may want to consider changing an Oracle background process priority.

To get your adrenaline going, check out the instance parameter _high_priority_processes on one of your production Oracle systems running version 11 or greater. Here is an example using my OSM tool, ipx.sql, on my Oracle Database version 12.1.0.2.0.
SQL> @ipx _highest_priority_processes
Database: prod40 27-OCT-14 02:22pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_highest_priority_processes = VKTM Highest Priority TRUE
Process Name Mask
Then at the Linux prompt, I did:
$ ps -eo pid,class,pri,nice,time,args | grep prod40
2879 TS 19 0 00:00:00 ora_pmon_prod40
2881 TS 19 0 00:00:01 ora_psp0_prod40
2883 RR 41 - 00:02:31 ora_vktm_prod40
2889 TS 19 0 00:00:01 ora_mman_prod40
2903 TS 19 0 00:00:00 ora_lgwr_prod40
2905 TS 19 0 00:00:01 ora_ckpt_prod40
2907 TS 19 0 00:00:00 ora_lg00_prod40
2911 TS 19 0 00:00:00 ora_lg01_prod40
...
Notice the "pri" for priority of the ora_vktm_prod40 process? It is set to 41 while all the rest of the Oracle background processes are set to the default of 19. Very cool, eh?

Surprised By What I Found
Surprised? Yes, surprised, because changing Oracle process priority is a Pandora's box. Just imagine if an Oracle server (i.e., foreground) process has its priority lowered just a little and then attempts to acquire a latch or a mutex. If it doesn't get the latch quickly, it might never get it!

From a user experience perspective, sometimes performance is really quick and other times the application just hangs.

This actually happened to a customer of mine years ago when the OS started reducing a process's priority after it consumed a certain amount of CPU. I learned that when it comes to Oracle processes, they are programmed to expect an even process-priority playing field. If you try to "game" the situation, do so at your own risk... not Oracle's.

Then why did Oracle Corporation allow background process priority to be changed? And why did Oracle Corporation actually change a background process's priority?!

Doing A Little Exploration
It turns out there are a number of "priority" related underscore instance parameters! On my 11.2.0.1.0 system there are 6 "priority" parameters. On my 12.1.0.1.0 system there are 8 "priority" parameters. On my 12.1.0.2.0 system there are 13 "priority" parameters! So clearly Oracle is making changes! In all cases, the parameter I'm focusing on, "_high_priority_processes", exists.

In this posting, I'm going to focus on my Oracle Database 12c version 12.1.0.2.0 system. While you may see something different in your environment, the theme will be the same.

While I'll be blogging about all four of the below parameters, in this posting my focus will be on the _high_priority_processes parameter. Below are the defaults on my system:
_high_priority_processes       LMS*
_highest_priority_processes    VKTM
_os_sched_high_priority        1
_os_sched_highest_priority     1

Messing With The LGWR Background Processes
I'm not testing this on a RAC system, so I don't have an LMS background process. When I saw the "LMS*" I immediately thought, "regular expression." Hmmm... I wonder if I can change the LGWR background process priority? So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:36pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LGWR High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5521 RR 41 - 00:00:00 ora_vktm_prod40
5539 TS 19 0 00:00:00 ora_dbw0_prod40
5541 RR 41 - 00:00:00 ora_lgwr_prod40
5545 TS 19 0 00:00:00 ora_ckpt_prod40
5547 TS 19 0 00:00:00 ora_lg00_prod40
5551 TS 19 0 00:00:00 ora_lg01_prod40
...

How Far Can I Take This?
At this point in my journey, my mind was ablaze! The log file sync wait event can be really difficult to deal with, especially when there is a CPU bottleneck. Hmmm... Perhaps I can increase the priority of all the log writer background processes?

So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:44pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LG* High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5974 TS 19 0 00:00:00 ora_psp0_prod40
5976 RR 41 - 00:00:00 ora_vktm_prod40
5994 TS 19 0 00:00:00 ora_dbw0_prod40
5996 RR 41 - 00:00:00 ora_lgwr_prod40
6000 TS 19 0 00:00:00 ora_ckpt_prod40
6002 RR 41 - 00:00:00 ora_lg00_prod40
6008 RR 41 - 00:00:00 ora_lg01_prod40
6014 TS 19 0 00:00:00 ora_lreg_prod40
...

So now all the log writer background processes have a high priority. My hope would be that if there is an OS CPU bottleneck and the log writer background processes wanted more CPU, I now have the power to give that to them! Another tool in my performance tuning arsenal!

Security Hole?
At this point, my exuberance began to turn into paranoia. I thought, "Perhaps I can increase the priority of an Oracle server process or perhaps any process." If so, that would be a major Oracle Database security hole.

With fingers trembling, I changed the instance parameters to match an Oracle server process and recycled the instance. Below shows the instance parameter change:

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:52pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|oracleprod40 Process Name Mask

Below is an operating system perspective using the ps command:

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6360 TS 19 0 00:00:00 ora_psp0_prod40
6362 RR 41 - 00:00:00 ora_vktm_prod40
6366 TS 19 0 00:00:00 ora_gen0_prod40
6382 RR 41 - 00:00:00 ora_lgwr_prod40
6386 TS 19 0 00:00:00 ora_ckpt_prod40
6388 RR 41 - 00:00:00 ora_lg00_prod40
6394 RR 41 - 00:00:00 ora_lg01_prod40
6398 TS 19 0 00:00:00 ora_reco_prod40
...
6644 TS 19 0 00:00:00 oracleprod40...
...

OK, that didn't work so how about this?

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:55pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|*oracle* Process Name Mask

Let's see what happened at the OS.

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6701 RR 41 - 00:00:00 ora_vktm_prod40
6705 RR 41 - 00:00:00 ora_gen0_prod40
6709 RR 41 - 00:00:00 ora_mman_prod40
6717 RR 41 - 00:00:00 ora_diag_prod40
6721 RR 41 - 00:00:00 ora_dbrm_prod40
6725 RR 41 - 00:00:00 ora_vkrm_prod40
6729 RR 41 - 00:00:00 ora_dia0_prod40
6733 RR 41 - 00:00:00 ora_dbw0_prod40
...
6927 RR 41 - 00:00:00 ora_p00m_prod40
6931 RR 41 - 00:00:00 ora_p00n_prod40
7122 TS 19 0 00:00:00 oracleprod40 ...
7124 RR 41 - 00:00:00 ora_qm02_prod40
7128 RR 41 - 00:00:00 ora_qm03_prod40

Oh Oh... That's not good! Now EVERY Oracle background process has a higher priority and my Oracle server process does not.

So my "*" wildcard caused all the Oracle processes to be included. If all the processes a high prioirty, then the log writer processes have no advantage over the others. And to make matters even worse, my goal of increasing the server process priority did not occur.

However, this is actually very good news, because it appears this is not an Oracle Database security hole! To me, it looks like the priority parameter is applied during instance startup to just the background processes. Since my server process was started after the instance was started, and was for sure not included in the list of background processes, its priority was not affected. Good news for security; not such good news for a performance-optimizing fanatic such as myself.

Should I Ever Increase A Background Process Priority?
Now that we know how to increase an Oracle Database background process priority, when would we ever want to do this? The short answer is probably never. But the long answer is the classic, "it depends."

Let me give you an example. Suppose there is an OS CPU bottleneck and the log writer background processes are consuming lots of CPU while handling all the associated memory management as server processes issue commits. In this situation, performance may benefit from making it easier for the log writer processes to get CPU cycles. But don't even think about doing this unless there is a CPU bottleneck. And even then, be very, very careful.

In my next blog posting, I'll detail an experiment where I changed the log writer background processes' priority.

Thanks for reading!

Craig.



Categories: DBA Blogs

OCP 12C – Oracle Data Redaction

DBA Scripts and Articles - Mon, 2014-10-27 08:00

What is Oracle Data Redaction? Oracle Data Redaction is meant to mask (redact) sensitive data returned from application queries. Oracle Data Redaction doesn't change the data on disk; the sensitive data is redacted on the fly before it is returned to the application. You can redact column data by using one of the following methods: Full … Continue reading OCP 12C – Oracle Data Redaction
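For a flavor of the API, a minimal sketch using the documented DBMS_REDACT.ADD_POLICY entry point (the HR.EMPLOYEES table, SSN column, and policy name here are made up for illustration):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    column_name   => 'SSN',
    policy_name   => 'redact_ssn',
    function_type => DBMS_REDACT.FULL,   -- full redaction: replace with default value
    expression    => '1=1');             -- always applies
END;
/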

The post OCP 12C – Oracle Data Redaction appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Getting Started with Impala Interactive SQL for Apache Hadoop by John Russell; O'Reilly Media

Surachart Opun - Sat, 2014-10-25 12:52
Impala is an open source query engine that runs on Apache Hadoop. With Impala, you can query data, whether stored in HDFS or Apache HBase – including with SELECT, JOIN, and aggregate functions – in real time. If you are looking for a book to get started with it, try Getting Started with Impala: Interactive SQL for Apache Hadoop by John Russell (@max_webster). It helps readers write, tune, and port SQL queries and other statements for a Big Data environment, using Impala. The SQL examples in this book start from a simple base for easy comprehension, then build toward best practices that demonstrate high performance and scalability. Readers can download the QuickStart VMs, install them, and then follow along with the examples in the book.
The book doesn't cover installing Impala or troubleshooting installation and configuration issues. It has 5 chapters and not many pages, but enough to guide you in using Impala (interactive SQL), with good examples. Chapter 5 – Tutorials and Deep Dives – is the highlight of the book, and its examples are very useful.
Free Sampler.

This book assists readers.
  • Learn how Impala integrates with a wide range of Hadoop components
  • Attain high performance and scalability for huge data sets on production clusters
  • Explore common developer tasks, such as porting code to Impala and optimizing performance
  • Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
  • Learn how to transition from rigid schemas to a flexible model that evolves as needs change
  • Take a deep dive into joins and the roles of statistics
[test01:21000] > select "Surachart Opun" Name,  NOW() ;
Query: select "Surachart Opun" Name,  NOW()
+----------------+-------------------------------+
| name           | now()                         |
+----------------+-------------------------------+
| Surachart Opun | 2014-10-25 23:34:03.217635000 |
+----------------+-------------------------------+
Returned 1 row(s) in 0.14s

Book: Getting Started with Impala Interactive SQL for Apache Hadoop
Author: John Russell (@max_webster)
Categories: DBA Blogs

OCP 12C – Resource Manager and Performance Enhancements

DBA Scripts and Articles - Fri, 2014-10-24 08:16

Use Resource Manager for a CDB and a PDB. Managing Resources between PDBs: The Resource Manager uses shares and utilization limits to manage the resources allocated to PDBs. The more "shares" you allocate to a PDB, the more resources it will have. Shares are allocated through DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE. One directive can only concern one PDB, and you can’t … Continue reading OCP 12C – Resource Manager and Performance Enhancements
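For a flavor of the API, a minimal sketch (the plan and PDB names are made up for illustration):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'my_cdb_plan',
    comment => 'demo CDB plan');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',
    pluggable_database => 'PDB1',
    shares             => 3,    -- relative weight vs other PDBs
    utilization_limit  => 70);  -- cap at 70% of CPU
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/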

The post OCP 12C – Resource Manager and Performance Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OCP 12C – SQL Enhancements

DBA Scripts and Articles - Thu, 2014-10-23 09:20

Extended Character Data Type Columns In this release Oracle changed the maximum size of three data types. In Oracle 12c, if you set a VARCHAR2 to 4000 bytes or less, it is stored inline; if you set it to more than 4000 bytes, it is transformed into an extended character data type and stored out … Continue reading OCP 12C – SQL Enhancements
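For context, a minimal sketch of enabling the extended limit (a one-way change performed in upgrade mode, per the documented max_string_size procedure):

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter system set max_string_size=extended;
SQL> @?/rdbms/admin/utl32k.sql
SQL> shutdown immediate
SQL> startup
SQL> create table t (c varchar2(32767));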

The post OCP 12C – SQL Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Configure Oracle Exadata Write Back Flash Cache

VitalSoftTech - Thu, 2014-10-23 00:37
In addition to improving read I/Os, Oracle Exadata Write-Back Flash Cache provides the ability to cache write I/Os directly to PCI flash. Exadata storage software version 11.2.3.2.1 is the minimum version required to use Write-Back Flash Cache. Grid Infrastructure and database homes must run 11.2.0.3.9 or later to use with Write-Back Smart […]
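For context, a minimal sketch of the non-rolling conversion on a storage cell (the documented CellCLI steps; dropping the flash cache discards cached data, so review the prerequisites first):

CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode=writeback
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all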
Categories: DBA Blogs
