
Bobby Durrett's DBA Blog

Oracle database performance

Jonathan Lewis

Tue, 2016-04-19 17:09

I am finally getting around to finishing my four-part blog series on the people who have had the most influence on my Oracle performance tuning work. The previous three people were Craig Shallahamer, Don Burleson, and Cary Millsap. The last person is Jonathan Lewis. These four people, listed and blogged about in chronological order, had the most influence on my understanding of how to do Oracle database performance tuning. There are many other great people out there and I am sure that other DBAs would produce their own, different, lists of people who influenced them. But this list reflects my journey through my Oracle database career, the issues that I ran into, and the experiences that I had. I ran into Jonathan Lewis’ work only after years of struggling with query tuning and getting advice from others, right around the time that I was beginning to learn how the Oracle optimizer worked and some of its limits. Jonathan was a critical next step in my understanding of how Oracle’s optimizer worked and why it sometimes failed to pick the most efficient way to run a query.

Jonathan has produced many helpful tuning resources including his blog, his participation in online forums, and his talks at user group conferences, but the first and most profound way he taught me about Oracle performance tuning was through his query tuning book Cost-Based Oracle Fundamentals. It’s $30 on Amazon and that is an incredibly small amount of money to pay compared to the value of the material inside the book. I had spent many hours over several years trying to understand why the Oracle optimizer sometimes chooses the wrong way to run a query. In many cases the fast way to run something was clear to me and the optimizer’s choices left me stumped. The book helped me better understand how the Oracle optimizer chooses what it thinks is the best execution plan. Jonathan’s book describes the different parts of a plan (join types, access methods, and so on) and how the optimizer assigns a cost to each piece of a plan. The optimizer chooses the plan with the least cost, but if some mistake causes the optimizer to calculate an unrealistic cost then it might choose a poor plan. Understanding why the optimizer would choose a slow plan helped me understand how to resolve performance issues or prevent them from happening, a very valuable skill.
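To give a feel for what the book explains, and this is just a generic illustration of mine rather than an example from the book, any SQL*Plus session will show you the costs the optimizer assigns:

explain plan for
select owner, table_name from dba_tables where owner = 'SYS';

select * from table(dbms_xplan.display);

The Cost column in the resulting plan listing is the number the optimizer is trying to minimize, and the book explains how those numbers are calculated and where the calculations can go astray.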

There is a lot more I could say about what I got from Jonathan Lewis’ book, including just observing how he operated. Jonathan filled his book with examples that illustrate the concepts he was teaching. I think that I have emulated the way he builds test scripts, which you see throughout his book, his blog, and community forums. I think I have emulated not only Jonathan’s approach but the approaches of all four of the people who I have spotlighted in this series. Each has provided me with profoundly helpful technical information that has helped me in my career. But they have also provided me with a pattern of what an Oracle performance tuning practitioner looks like. What kind of things do they do? To this point in my career I have found the Oracle performance tuning part of my job to be the most challenging and interesting, and probably the most valuable to my employers. Jonathan Lewis and the three others in this four-part series have been instrumental in propelling me along this path and I am very appreciative.

Bobby


Log file parallel write wait graph

Thu, 2016-03-31 09:50

I got a chance to use my onewait Python-based graph to help with a performance problem. I’m looking at slow write times from the log writer on Thursday mornings. Here is the graph with the database name erased:

[Graph: log file parallel write wait times]

We are still trying to track down the source of the problem, but there seems to be a backup on another system that runs at times that correspond to the spikes in log file parallel write wait times. The nice thing about this graph is that it shows activity on the top and average wait time on the bottom so you can see whether the increased wait time corresponds to a spike in activity. In this case there does not seem to be any increase in activity on the problematic database, but that makes sense if the real problem is contention from a backup on another system.

Anyway, my Python graphs are far from perfect but still helpful in this case.

Bobby


Python DBA Graphs Github Repository

Tue, 2016-03-29 16:40

I decided to get rid of the GitHub repository that I had experimented with and to create a new one. The old one had a dump of all my SQL scripts but without any documentation. But I have been updating my Python graphing scripts a bit at a time and have gotten some recent value from them in my Oracle database tuning work. So, I created a GitHub repository called PythonDBAGraphs. I think it will be more valuable to have a repository that is more focused and is being actively updated and documented.

It is still very simple but I have gotten real value from the two graphs that are included.

Bobby


Another SQL Profile to the rescue!

Mon, 2016-03-28 17:57

We have had problems with a set of databases over the past few weeks. Our team does not support these databases, but my director asked me to help. These are 11.2.0.1 Windows 64-bit Oracle databases running on Windows 2008. The incident reports said that the systems stop working and that the main symptom was the oracle.exe process using all the CPU. They were bouncing the database server when they saw this behavior and it took about 30 minutes after the bounce for the CPU to go back down to normal. A Windows server colleague told me that at some point in the past a new version of virus software had apparently caused high CPU from the oracle.exe process.

At first I looked for known bugs related to high CPU and virus checkers without much success. Then I got the idea of just checking query performance. After all, a poorly performing query can eat up a lot of CPU. These Windows boxes only have two cores so it would not take many concurrently running high-CPU queries to max them out. So, I got an AWR report covering the last hour of a recent incident. This was the top SQL:

[Screenshot: AWR report, Top SQL section]

The top query, sql id 27d8x8p6139y6, stood out as very inefficient and all CPU. It seemed clear to me from this listing that the two-core box had a heavy load and a lot of queuing for CPU. %IO was zero but %CPU was only 31%; most likely the rest of the time was spent waiting in the CPU run queue.

I also looked at my sqlstat report to see which plans 27d8x8p6139y6 had used over time.

PLAN_HASH_VALUE END_INTERVAL_TIME     EXECUTIONS Elapsed ms
--------------- --------------------- ---------- -----------
     3067874494 07-MAR-16 09.00.50 PM        287  948.102286
     3067874494 07-MAR-16 10.00.03 PM        292  1021.68191
     3067874494 07-MAR-16 11.00.18 PM        244  1214.96161
     3067874494 08-MAR-16 12.00.32 AM        276  1306.16222
     3067874494 08-MAR-16 01.00.45 AM        183  1491.31307
      467860697 08-MAR-16 01.00.45 AM        125      .31948
      467860697 08-MAR-16 02.00.59 AM        285  .234073684
      467860697 08-MAR-16 03.00.12 AM        279  .214354839
      467860697 08-MAR-16 04.00.25 AM        246   .17147561
      467860697 08-MAR-16 05.00.39 AM         18        .192
     2868766721 13-MAR-16 06.00.55 PM         89    159259.9
     3067874494 13-MAR-16 06.00.55 PM          8  854.384125
     2868766721 13-MAR-16 07.00.50 PM         70  1331837.56
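My sqlstat script is in my GitHub repository, but the heart of a report like this is a simple query against DBA_HIST_SQLSTAT. Here is a stripped-down sketch, not the exact script:

-- Plans and average elapsed ms per execution for one sql_id,
-- one row per AWR snapshot. Elapsed time is stored in microseconds.
select ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta executions,
       ss.elapsed_time_delta/1000/ss.executions_delta elapsed_ms
  from dba_hist_sqlstat ss, dba_hist_snapshot sn
 where ss.sql_id = '27d8x8p6139y6'
   and ss.snap_id = sn.snap_id
   and ss.instance_number = sn.instance_number
   and ss.executions_delta > 0
 order by sn.end_interval_time;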

Plan 2868766721 seemed terrible but plan 467860697 seemed great.

Our group doesn’t support these databases so I am not going to dig into how the application gathers statistics, what indexes it uses, or how the vendor designed the application. But, it seems possible that forcing the good plan with a SQL Profile could resolve this issue without having any access to the application or understanding of its design.

But, before plunging headlong into the use of a SQL Profile I looked at the plan and the SQL text.  I have edited these to hide any proprietary details:

SELECT T.*
    FROM TAB_MYTABLE1 T,
         TAB_MYTABLELNG A,
         TAB_MYTABLE1 PIR_T,
         TAB_MYTABLELNG PIR_A
   WHERE     A.MYTABLELNG_ID = T.MYTABLELNG_ID
         AND A.ASSIGNED_TO = :B1
         AND A.ACTIVE_FL = 1
         AND T.COMPLETE_FL = 0
         AND T.SHORTED_FL = 0
         AND PIR_T.MYTABLE1_ID = T.PIR_MYTABLE1_ID
         AND ((PIR_A.FLOATING_PIR_FL = 1 
               AND PIR_T.COMPLETE_FL = 1)
              OR PIR_T.QTY_PICKED IS NOT NULL)
         AND PIR_A.MYTABLELNG_ID = PIR_T.MYTABLELNG_ID
         AND PIR_A.ASSIGNED_TO IS NULL
ORDER BY T.MYTABLE1_ID

The key thing I noticed is that there was only one bind variable. The innermost part of the good plan uses an index on the column that the query equates with the bind variable. The rest of the plan is a nice nested loops plan with range and unique index scans. I see plans in this format in OLTP queries where you are looking up small numbers of rows using an index and joining to related tables.

-----------------------------------------------------------------
Id | Operation                        | Name                     
-----------------------------------------------------------------
 0 | SELECT STATEMENT                 |                          
 1 |  SORT ORDER BY                   |                          
 2 |   NESTED LOOPS                   |                          
 3 |    NESTED LOOPS                  |                          
 4 |     NESTED LOOPS                 |                          
 5 |      NESTED LOOPS                |                          
 6 |       TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG           
 7 |        INDEX RANGE SCAN          | AK_MYTABLELNG_BY_USER    
 8 |       TABLE ACCESS BY INDEX ROWID| TAB_MYTABLE1             
 9 |        INDEX RANGE SCAN          | AK_MYTABLE1_BY_MYTABLELNG
10 |      TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1             
11 |       INDEX UNIQUE SCAN          | PK_MYTABLE1              
12 |     INDEX UNIQUE SCAN            | PK_MYTABLELNG            
13 |    TABLE ACCESS BY INDEX ROWID   | TAB_MYTABLELNG           
-----------------------------------------------------------------

The bad plan had a gross Cartesian merge join:

Plan hash value: 2868766721

----------------------------------------------------------------
Id | Operation                       | Name                     
----------------------------------------------------------------
 0 | SELECT STATEMENT                |                          
 1 |  NESTED LOOPS                   |                          
 2 |   NESTED LOOPS                  |                          
 3 |    MERGE JOIN CARTESIAN         |                          
 4 |     TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1             
 5 |      INDEX FULL SCAN            | PK_MYTABLE1              
 6 |     BUFFER SORT                 |                          
 7 |      TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG           
 8 |       INDEX RANGE SCAN          | AK_MYTABLELNG_BY_USER    
 9 |    TABLE ACCESS BY INDEX ROWID  | TAB_MYTABLE1             
10 |     INDEX RANGE SCAN            | AK_MYTABLE1_BY_MYTABLELNG
11 |   TABLE ACCESS BY INDEX ROWID   | TAB_MYTABLELNG           
12 |    INDEX RANGE SCAN             | AK_MYTABLELNG_BY_USER    
----------------------------------------------------------------

Reviewing the SQL made me believe that there was a good chance a SQL Profile forcing the good plan would resolve the issue. Sure, there could be some weird combination of data and bind variable values that makes the bad plan the better one. But, given that this was a simple transactional application, it seems most likely that the straightforward nested loops plan, driven by an index on the only bind variable column, would be best.

We used a SQL Profile to force the good plan on four servers and so far that has resolved the issues. I’m not saying that forcing a plan using a SQL Profile is the only or even the best way to resolve query performance issues. But this was a good example of where a SQL Profile makes sense. If modifying the application, statistics, parameters, and schema is not possible then a SQL Profile can come to your rescue in a heartbeat.
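In case you have not built a SQL Profile this way before, the rough recipe is to pull the outline hints for the good plan out of the OTHER_XML column of DBA_HIST_SQL_PLAN and feed them to DBMS_SQLTUNE.IMPORT_SQL_PROFILE. Oracle Support’s coe_xfr_sql_profile.sql script automates this kind of thing; the sketch below, with placeholder hint text, just shows the idea:

declare
  h sys.sqlprof_attr;
begin
  -- Placeholder hints: the real ones come from the outline data
  -- (OTHER_XML) for good plan 467860697 in DBA_HIST_SQL_PLAN.
  h := sys.sqlprof_attr(
         'BEGIN_OUTLINE_DATA',
         '... outline hints for plan 467860697 go here ...',
         'END_OUTLINE_DATA');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'SELECT T.* FROM TAB_MYTABLE1 T, ...', -- full text of 27d8x8p6139y6
    profile     => h,
    name        => 'PROFILE_27d8x8p6139y6',
    force_match => true);
end;
/

The force_match option makes the profile apply even if literals in the SQL text vary.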

Bobby


Math Resources

Thu, 2016-03-17 16:51

I feel like I have not been posting very much on this blog lately. I have been focused on things outside of Oracle performance so I haven’t had a lot of new scripts to post. I have been quietly updating my Python source code on GitHub so check that out. I have spent a lot of time educating myself in various ways, including through the leadership and communication training program that comes from Toastmasters. My new job title is “Technical Architect”, which is a form of technical leadership, so I’m trying to expand myself beyond being an Oracle database administrator who specializes in performance tuning.

In addition to developing my leadership and communication skills I have gotten into a general computer science self-education kick. I took two introductory C.S. classes on edX. I also read a book on Linux hacking and a book on computer history. I was thinking of buying one of the Donald Knuth books or going through MIT’s free online algorithms class, 6.006. I have a computer science degree and spent two years in C.S. graduate school, but that was a long time ago. It is kind of fun to refresh my memory and catch up with the latest trends. But the catch is that both the Knuth books and MIT’s 6.006 class require math that I either never learned or have forgotten. So, I am working my way through some math resources that I wanted to share with those who read this blog.

The first thing I did was buy a computer math book called Concrete Mathematics that seemed to cover the needed material. Reviews on Amazon.com recommended this book as good background for the Knuth series and one of the Oracle performance experts that I follow on Twitter recommended it for similar reasons. But after finishing my second edX class I began exploring the MIT OCW math class that was a prerequisite to MIT’s 6.006 algorithms class. MIT calls the math class 6.042J and I am working through the Fall 2010 version of it. There is a lot of overlap between the class and the book, but they are not a perfect match. The book has some material that is more difficult to follow than the class; it is probably more advanced. The class covers some topics, namely graph theory, that the book does not. The free online class has some very good lecture videos by a top MIT professor, Tom Leighton. I even had my wife and daughters sit down and watch his first lecture with me on our family television for fun on my birthday.

The book led me to a great free math resource called Maxima. Maxima has all kinds of great math built in, such as solving equations and factoring integers. Plus, it is free. There are other similar, and I think more popular, programs that are not free, but for my use it was great to simply download Maxima and have its functionality at my fingertips.

The last resource that I wanted to mention is Mathematics Stack Exchange. It is a pretty structured online forum with a question-and-answer format. It is helpful to me since I am going through 6.042J without a professor or teaching assistant to answer my questions. The people on Math Stack Exchange are very helpful if you at least try to follow the etiquette of their forum. For example, they have an easy-to-use way to format math formulas in your questions and answers, and the users of the forum expect you to use it. But it isn’t hard. I had one question from the Concrete Math book where I couldn’t understand the answer key in the back. I asked about it on Stack Exchange and got a great answer in no time.

Anyway, maybe all of this math and computer science study is a departure from my bread and butter Oracle database work and performance tuning. But the free online resources like the OCW web site, the Maxima program, and the stack exchange forum along with the book that I paid for are a great set of resources. I have already used some of the concepts that I have learned about number theory and its application to RSA encryption. But, at the same time I am enjoying studying these things and mostly see it as something fun to do in my spare time. (I’m weird I know.)

So, I have written this blog post to share the math related things that I am studying and using to those who might benefit from them. I am not a math expert, but I am getting a lot out of these materials. I hope that others find these resources as enjoyable and educational as I have.

Bobby


General troubleshooting lessons from recent Delphix issue

Fri, 2016-02-05 11:25

Delphix support helped me resolve an issue yesterday and the experience gave me the idea of writing this post about several general computer issue troubleshooting tips that I have learned over the years. Never mind that I ignored these lessons during this particular problem. This is more of a “do as I say” and not a “do as I do” story. Actually, sometimes I do remember these lessons; I just didn’t do so well this week. But the several mistakes that I made resolving this recent Delphix issue motivate me to write this post and, if nothing else, remind myself of the lessons I’ve learned in the past about how to resolve a computer problem.

Don’t panic!

I’m reminded of the friendly advice on the cover of the Hitchhiker’s Guide to the Galaxy: “Don’t panic!”. So, yesterday it was 4:30 pm. I had rebooted the Delphix virtual machine and then in a panic had the Unix team reboot the HP Unix target server. But still I could not bring up any of the Delphix VDBs. We had people coming over to our house for dinner that night and I was starting to worry that I would be working on this issue all night. I ended up getting out of the office by 5:30 pm and had a great dinner with friends. What was I so stressed about? Even the times that I have been up all night it didn’t kill me. Usually the all-night issues lead to me learning things anyway.

Trust support

The primary mistake that I made was to get my mind fixed on a solution to the problem instead of working with Delphix support and trusting them to guide us to the solution. We had a number of system issues due to a recent network issue and I got my mind set on the idea that my Delphix issue was due to some network hangup. I feel sorry for our network team because it seems like the first thought people have any time there is an issue is that it is a “network issue”. I should know better. How many times have I worked on issues where everyone says it is a “database issue” and I’m annoyed because I know that the issue is somewhere else and they don’t believe me when I point to things outside the database? Anyway, I opened a case with Delphix on Monday when I couldn’t get a VDB to come down. It just hung for 5 minutes until it gave me an error. I assumed that it was a network hangup and got fixated on rebooting the Delphix VM. Ack! Ultimately, I ended up working with two helpful and capable people in Delphix support and they resolved the issue, which was not what I thought at all. There are times to disagree with support and push for your own solution but I did this too early in this case and I was dead wrong.

Keep it simple

I’ve heard people refer to Occam’s razor, which I translate into computer terms as “look for simple problems first”. Instead of fixing my mind on some vague network issue where the hardware is not working properly, how about assuming that all the hardware and software is working normally and then thinking about what problems might cause my symptoms? I can’t remember how many times this has bitten me. There is almost always some simple explanation. In this case I had made a change to a Unix shell script that runs when someone logs in as the oracle user. This change caused Delphix to no longer be able to do anything with the VDBs on that server. Oops! It was a simple blunder, no big deal. But I’m kicking myself for not first thinking about a simple problem like a script change instead of focusing on something more exotic.

What changed?

I found myself saying the same dumb thing that I’ve heard people say to me all the time: nothing changed. In this case I said something like “this has worked fine for 3 years now and nothing has changed”. The long-suffering and patient Delphix support folks never called me on this, but I was dead wrong. Something had to have changed for something that was working to stop working. I should have spent time looking at the various parts of our Delphix setup to see if anything had changed before I contacted support. All I had to do was look at the timestamp on our login script to see that something had recently changed.

Understand how it all works

I think my Delphix skills are a little rusty. We just started a new expansion project to add new database sources to Delphix. It has been a couple of years since I’ve done any heavy configuration and troubleshooting, and I used to have a better feel for how all the pieces fit together. I should have thought about what must have gone on behind the scenes when I asked Delphix to stop a VDB and it hung for 5 minutes. What steps was it doing? Where in the process could the breakdown be occurring? Delphix support did follow this type of reasoning to find the issue. They manually tried some of the steps that the Delphix software would do automatically until they found the problem. If I had stopped to think about the pieces of the process I could have done the same. This has been a powerful approach to solving problems all through my career. I think of resolving PeopleSoft issues: it just helps to understand how things work. For example, if you understand how the PeopleSoft login process works you can debug login issues by checking each step of the process for possible problems. The same is true for Oracle logins from clients. In general, the more you understand all the pieces of a computer system, down to the transistors on the chips, the better chance you have of visualizing where the problem might be.

Well, I can’t think of any other pearls of wisdom from this experience but I thought I would write these down while it was on my mind. Plus, I go on call Monday morning so I need to keep these in mind as I resolve any upcoming issues. Thanks to Delphix support for their good work on this issue.


Cloning 10.2.0.3 Oracle Home on fully patched 11.31 HP-UX hangs

Thu, 2016-02-04 11:38

I based this blog post on information that I learned from this Oracle Support document:

Runinstaller And Emctl Do Not Work After Upgrading HP-UX 11.31 To 11.31 Update3 (Sep 2008) (Doc ID 780102.1)

My situation was slightly different from what Oracle’s note describes so I thought it would be helpful to document what I found.

In my case I am cloning an Oracle 10.2.0.3 home onto a fully patched HP-UX 11.31 server. I have used this same clone process on Oracle 11.1, 11.2, and 12.1 Oracle Homes with no issues. The symptom is that the 10.2 clone process just hangs with no helpful messages.

I searched Oracle’s support site and the web for issues with 10.2 cloning and could not find anything that matched my symptoms. I then decided to give up on cloning and instead install the base 10.2.0.1 binaries and patch them to match the home that I was trying to clone. The 10.2.0.1 install also hung. But I know that the 10.2.0.1 install works since we have used it on many other similar systems; those, however, were on HP-UX 11.23 and not the fully patched HP-UX 11.31. So, I searched for installer issues on 11.31 and 10.2 and found the Oracle document listed above.

Evidently there is a bug in the JDK that Oracle included with 10.2 that keeps it from working on HP-UX 11.31 with the current patches. A later version of Java resolves the issue.

Now that I understood the issue I decided to go back to the clone and try to apply the recommendations from the Oracle note, even though it doesn’t mention cloning.

The Oracle note suggests adding the -jreLoc /opt/java1.4 option to the runInstaller command line. The only catch is that my HP system did not have /opt/java1.4. The oldest Java we have installed is in /opt/java1.5. So, I tried the clone with the -jreLoc /opt/java1.5 option and it got almost to the end of the clone before it hung on some emctl step. Then I realized that I needed to follow the steps in the Oracle note to rename the Oracle Home’s jdk directory and set up a link to the Java 1.5 directory. So, I did these steps to point the home at the correct JDK:

cd $ORACLE_HOME
mv jdk jdk.orig        # keep the original bundled JDK
ln -s /opt/java1.5 jdk # point the home at the newer Java

Then I ran the clone with this command line:

$ORACLE_HOME/oui/bin/runInstaller -jreLoc /opt/java1.5 -clone -silent ORACLE_HOME=$ORACLE_HOME ORACLE_BASE=$ORACLE_BASE ORACLE_HOME_NAME=$ORACLE_HOME_NAME

Success!

It wasn’t that hard to apply the note to the clone situation but I thought it was worth blogging. If someone googles “runInstaller clone hang 10.2 11.31” and needs the solution they will find it.

Of course, I may be the only person in the world cloning a 10.2 Oracle Home on an HP-UX Itanium 11.31 system, but it’s here if someone needs it.

Bobby


Another python graph – one wait event

Thu, 2016-01-14 17:28

Here is another graph that I created in Python with Pyplot:

[Graph: one wait event, example output]

This is onewait() in dbgraphs.py in my GitHub repository. myplot.py has the plotting code, perfq.py has the query, and db.py has the database access code.

I blanked out the database name in the example graph to hide it.

This is a graphical version of my onewaitevent.sql script. It queries the AWR, looking at a particular wait event per hour. You look at the number of waits in an hour to see how busy the system was and then at the average elapsed time for that hour. Also, you set the smallest number of waits to include so you can drop hours where nothing is going on.
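If you are curious about the underlying query without reading perfq.py, it is built on DBA_HIST_SYSTEM_EVENT. Here is a simplified sketch; the real script differs in the details:

-- Waits per snapshot and average wait time in ms for one event.
-- The AWR counters are cumulative, so LAG computes the deltas.
-- Sketch only: it ignores RAC and counter resets at restarts.
select end_interval_time, waits, avg_ms
  from (select sn.end_interval_time,
               e.total_waits -
                 lag(e.total_waits) over (order by e.snap_id) waits,
               (e.time_waited_micro -
                 lag(e.time_waited_micro) over (order by e.snap_id)) / 1000 /
                 nullif(e.total_waits -
                   lag(e.total_waits) over (order by e.snap_id), 0) avg_ms
          from dba_hist_system_event e, dba_hist_snapshot sn
         where e.event_name = :event_name
           and e.snap_id = sn.snap_id
           and e.instance_number = sn.instance_number)
 where waits >= :minimum_waits
 order by end_interval_time;

The top half of the graph comes from the waits column and the bottom half from avg_ms.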

In the example graph you can find times where the average time for a db file sequential read is high and the system is busy. You use the top graph to see how busy the system is and the bottom to see where the average time spikes.

Still just an experiment but I thought I would pass it along. It isn’t that hard to create the graph in Python and I seem to have a lot of flexibility since I’m writing code instead of using an existing program like Excel.

Bobby



Trying Python and Pyplot for Database Performance Graphs

Wed, 2016-01-06 16:16

In the past I have used Excel to graph things related to Oracle database performance. I am trying out Python and the Pyplot library as an alternative to Excel.  I took a graph that I had done in Excel and rewrote it in Python. The graph shows the CPU usage within the database by category.  For example, I labeled the database CPU used by a group of web servers “WEBFARM1” on the graph.

Here is an example graph:

[Graph: database CPU usage by category, example]

You can find most of this code in the Python section of my GitHub repository. The code that I used to create the example graph above, along with some made-up data, is posted as a zip file.

To make this graph in Excel I was running a sqlplus script and cutting and pasting the output into a text file that I imported into Excel. Very manual. No doubt there are ways that I could have automated what I was doing in Excel. But I have studied Python as part of the edX classes I took so I thought I would give it a try.
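The query itself is nothing fancy. I am not posting the real one, but a rough sketch of the kind of thing it does, using the sampled ASH data in AWR, looks like this (the websrv% machine pattern and the WEBFARM1 label are made up for illustration):

-- Approximate DB CPU seconds per hour by category of client machine.
-- Each ASH sample in AWR represents roughly 10 seconds, so counting
-- ON CPU samples and multiplying by 10 approximates CPU seconds.
select trunc(sample_time,'HH24') sample_hour,
       case when machine like 'websrv%' then 'WEBFARM1'
            else 'OTHER' end category,
       count(*)*10 cpu_seconds
  from dba_hist_active_sess_history
 where session_state = 'ON CPU'
 group by trunc(sample_time,'HH24'),
          case when machine like 'websrv%' then 'WEBFARM1'
               else 'OTHER' end
 order by sample_hour, category;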

Python let me write a program to run the graph from an icon on my desktop. I used the cx_Oracle package to pull the data from the database and Pyplot for the graph.

I’m running the Windows 32-bit version of Canopy Express for my Python development environment. This environment comes with Pyplot so I just had to install cx_Oracle to have all the packages I needed to make the graph.

I think both Excel and Python/Pyplot still have value. Excel still seems easier for quick and dirty graphing. But I used Python to automate a report that I run every day with fewer manual steps. I probably could have done the same thing in Excel, but I have recently studied Python so I was able to apply what I learned in my classes without a lot more effort.

Bobby

