Bobby Durrett's DBA Blog

Oracle database performance

Modified PythonDBAGraphs to use datetime on X axis

Fri, 2017-06-16 18:55

I modified PythonDBAGraphs to use datetime objects on the X axis for all the reports except ashcpu.py. This lets you hover the mouse over a point and see its date and time. Previously I was using text strings with dates and times as X axis labels and it was not obvious which point went with which date and time.
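
Here is a minimal sketch of the idea with made-up data points. With datetime objects as the X values, setting fmt_xdata makes the mouse hover show the date and time:

import datetime

import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# made-up data points, one per hour
times = [datetime.datetime(2017, 6, 16, h) for h in range(8, 18)]
values = [2 * h for h in range(8, 18)]

fig = plt.figure("datetime X axis")
ax = plt.axes()
plt.plot(times, values, 'r')

# hovering over a point now shows its date and time in this format
ax.fmt_xdata = mdates.DateFormatter('%m/%d/%Y %H:%M')
fig.autofmt_xdate()

plt.show()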


I have an example of a graph using datetime objects in an earlier post: post

I used two very helpful pages from the Matplotlib documentation: doc, example

I uninstalled cx_Oracle 6 and went back to version 5.3. This resolved the bug I mentioned in my earlier PythonDBAGraphs post and let me take out the TO_CHAR function calls that I had just added to work around it. The SQL queries look simpler and are easier to understand now.

I modified the README to include my new list of installed packages.

I feel like these changes will make the existing graphs more useful to me. Also, the code and SQL queries are cleaner now so it will be easier for me to create new graphs in the future.

Bobby

 

Categories: DBA Blogs

Pushed out new version of PythonDBAGraphs

Fri, 2017-06-02 17:03

I pushed out a new version of PythonDBAGraphs. I got a new laptop at work so I am setting it up with the software that I need to do my job. I decided that instead of reinstalling Enthought Canopy I would go with the current version of Python 3 and install the packages that I need one at a time. This gives me a cleaner install and I’m on the most current version of Python and the packages. I had installed Canopy for the two edX classes that I took so I wasn’t motivated to uninstall it on my old laptop. I use PythonDBAGraphs just about every day for my work and I didn’t want to whack it by uninstalling Canopy on my old laptop and trying to install something new. But, now I have my old laptop as a backup so I was able to take the time to install Python 3 and the current packages. I have updated the README with details of my new configuration.

I had to make a number of changes in the code. I had to change <> to !=. I had to change print xyz to print(xyz). Also, I think I ran across a bug in the new version of cx_Oracle. Here is an example. Assume you have an open connection and cursor. The cursor is cur.

# selecting the fractional number through to_char() works
cur.execute('select to_char(blocks/7) from user_tables')
row = cur.fetchone()
print(row)

# selecting the same fractional number directly fails on the fetch
cur.execute('select blocks/7 from user_tables')
row = cur.fetchone()
print(row)

Here is the output:

('2.28571428571428571428571428571428571429',)
Traceback (most recent call last):
  File "bug.py", line 12, in 
    row = cur.fetchone()
ValueError: invalid literal for int() with base 10: '2.28571428571428571428571428571428571429'

Strange. The workaround is to take any numbers with fractional values and run to_char() on them. I made a number of changes in the code to work around this bug/feature.
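
Continuing the example above with the same open cursor, the workaround pattern looked something like this sketch, converting back to a number on the Python side if needed:

# workaround: do the to_char() in the SQL and convert in Python
cur.execute('select to_char(blocks/7) from user_tables')
row = cur.fetchone()
if row is not None:
    print(float(row[0]))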

Anyway, I’m glad that I moved to Python 3 and that it enables me to use pip to install just the packages I want. Hopefully this update will make PythonDBAGraphs more usable for others since not everyone has Canopy.

Bobby

Categories: DBA Blogs

X86-64 Assembly Book

Wed, 2017-05-24 15:32

I have written earlier blog posts about my diversion from studying Oracle to studying computer science. Here are some relevant posts: url1,url2,url3,url4. After finishing the math for computer science online class and book that I was working on, I started working through a book about assembly language programming. I bought the book in a frenzy of enthusiasm about studying computer science for fun. But then I had to decide if I was going to work through the assembly book now and delay moving on to the algorithms class using Python that I had intended to do next. I intended to use the math that I studied to prepare me for the algorithms class. But, assembly is a nice low-level hardware-oriented thing to study and a contrast from the math, computer science theory, and higher level Python scripting language. So, I decided to delay my study of algorithms and work on assembly. I'm working on the last exercise of the 16th chapter out of 19 in the book and thought I would start this blog post to document my experience so that others might benefit from it. The book is Ray Seyfarth's "Introduction to 64 Bit Assembly Language Programming for Linux and OS X". I have saved my work on the exercises on GitHub: repository.

I want to let people know what type of environment and tools I used so they can compare notes with my experience if they decide to work through the book. I started out on an Oracle Enterprise Linux 7 virtual machine running under VirtualBox on my laptop. Oracle's Linux is a version of Red Hat Linux. I believe that I had to compile YASM and the author's ebe tool. It has been a while, but I think I had to search around a bit to get the right packages so that they would compile, and some parts of ebe never worked correctly. Starting with Chapter 9 Exercise 2 I switched from YASM, the assembler recommended by the author of the book, to NASM, a more commonly used assembler. I switched because I hit a bug in YASM. So, chapter 9 exercise 1 and earlier were all YASM. Also, after the early lessons I moved from the GUI ebe debugger to the command line gdb debugger. I wanted to get more familiar with gdb anyway since I use Linux for my job and might need gdb to help resolve problems. After I got a new laptop I switched to using CentOS 7 on VirtualBox. I was able to install nasm and gdb using yum from standard repositories and did not have to do any manual compilation of development tools in my new environment. So, if you choose to work through the book you could go the nasm and gdb route that I ended up with if you have challenges installing and using ebe and yasm. There are some minor differences between nasm and yasm but they are pretty easy to figure out using the nasm manual.

There are connections between x86-64 assembly programming on Linux and my job working with Oracle databases. At work, 64 bit Linux virtual machines are our standard Oracle database platform. They are also the building blocks of the cloud. You see a lot of 64 bit Linux on Amazon Web Services, for example. So, I’m really kind of doing assembly language for fun since it is so impractical as a programming language, but at the same time I’m doing it on the platform that I use at work. Maybe when I’m looking at a dump in an Oracle trace file on Linux it will help me to know all the registers. If I’m working with some open source database like MySQL it can’t hurt to know how to debug in gdb and compile with gcc.

The assembly language book also connected with my passion for performance tuning. The author had some interesting things to say about performance. He kind of discouraged people from thinking that they could easily improve upon the performance of the GCC C compiler with all of its optimizations. It was interesting to think about the benefits of SIMD and how you might write programs to work better with pipelining and the CPU’s cache. It was kind of like Oracle performance tuning except you were looking at just CPU and lower level factors. But you still have tests to prove out your assumptions and you have to try to build tests to show that what you think is so will really hold up. Chapter 16 Exercise 1 is a good example of SIMD improving performance. I started with a simple C version that ran in 3.538 seconds. An AVX version of the subroutine did 8 floating point operations at a time and ran the same function in 2.1057 seconds. Here are some of the AVX instructions just for fun:

    vmovups ymm0,[x_buffer]      ; load 8 x[i] values
    vmovups ymm1,[rsi+r10*4]     ; load 8 x[j] values
    vsubps ymm0,ymm1             ; do 8 x[i]-x[j] ops
    vmulps ymm0,ymm0             ; square difference

Generally, x86-64 assembly ended up feeling a lot like C. The book has you use a variety of calls from the C library, so in the later chapters the assembly programs had calls to a lot of the functions that you use in C such as printf, scanf, strlen, strcmp, and malloc. Like C, it was common to get segmentation faults without a lot to go on about what caused them. Still, the back trace (bt) command in gdb leads you right to the instruction that got the error, so in some ways it was easier to diagnose segmentation faults in assembly than I remember it being in C. It brought back memories of taking C in college and puzzling over segmentation faults and bad pointers. It also made me think of the time in a previous job when I progressed from C to C++ to Java. I came out of school having done a fair amount of C programming. Then I read up on C++ and object-oriented programming. But C++ still had the segmentation faults. Then I found Java and thought it was great because it gave you more meaningful error messages than segmentation fault. Now, I have embraced Python recently because of the edX classes that I took and because of the ways I have used it at work. Working with assembly has kind of taken me back down the chain of ease of use from Python to C to assembly. I can't see using C or assembly for everyday use, but most of the software that we use is written in C so it seems reasonable to have some familiarity with C and the lower level assembly that lies beneath it.

Anyway, I have three more chapters to go but thought I would put out this update now while I am thinking about it. I may tweak this post later or put out a follow-up, but I hope that it is useful to someone. If you feel inclined to study 64 bit x86 assembly on Linux I think that you will find the Ray Seyfarth book a good resource. If you want to talk to me about my experience feel free to leave a comment on this post or send an email.

Bobby

Categories: DBA Blogs

Finished reading the Snowflake database documentation

Tue, 2017-05-16 13:36

I just finished reading the Snowflake database documentation and I thought I would blog about my impressions. I have not spent a lot of time using Snowflake so I can not draw from experience with the product. But, I read all the online documentation so I think I can summarize some of the major themes that I noticed. Also, I have read a lot of Oracle database documentation in the past so I can compare Oracle’s documentation to Snowflake’s. Prior to reading the online documentation I also read their journal article which has a lot of great technical details about the internals. So, I intend to include things from both the article and the documentation.

My observations fall into these major categories:

  • Use of Amazon Web Services instead of dedicated hardware for data warehouse
  • Ease of use for people who are not database administrators
  • Use of S3 for storage with partitioning and columnar storage
  • Lots of documentation focused on loading and unloading data
  • Lots of things about storing and using JSON in the database
  • Limited implementation of standard SQL – no CREATE INDEX
  • Computer science terms – from the development team instead of marketing?
  • Role of database administrator – understand architecture and tuning implications
  • Focus on reporting usage of resources for billing
  • JavaScript or SQL stored functions
  • Down side to parallelism – high consumption of resources
  • High end developers but still a new product
  • Maybe specialized purpose – not general purpose database

First let me say that it is very cool how the Snowflake engineers designed a data warehouse database from scratch using Amazon Web Services (AWS). I have worked on a couple of different generations of Exadata as well as HP’s Neoview data warehouse appliance and a large Oracle RAC based data warehouse so I have experience with big, expensive, on site data warehouse hardware. But Snowflake is all in Amazon’s cloud so you don’t have to shell out a lot of money up front to use it. It makes me think that I should try to come up with some clever use of AWS to get rich and famous. All of the hardware is there waiting for me to exploit it and you can start small and then add hardware as needed. So, to me Snowflake is a very neat example of some smart people making use of the cloud. Here is a pretty good page about Snowflake’s architecture and AWS: url

You would not think that I would be happy about a database product that does not need database administrators since I have been an Oracle database administrator for over 20 years. But, it is interesting how Snowflake takes some tasks that DBAs would do working with an on site Oracle database and makes them easy enough for a less technical person to do. There is no software to install because Snowflake is web-based. Creating a database is a matter of pointing and clicking in their easy to use web interface. Non-technical users can spin up a group of virtual machines with enormous CPU and memory capacity in minutes. You do not set up backup and recovery. Snowflake comes with a couple of built-in recovery methods that are automatically available. Also, I think that some of the redundancy built into AWS helps with recovery. So, you don't have Oracle DBA tasks like installing database software, creating databases, choosing hardware, setting up memory settings, or doing RMAN and datapump backups. My impression is that they did a good job making Snowflake easier to manage. Here is a document about their built-in backup and recovery: url

Now I get to the first negative about Snowflake. It stores the data in AWS's S3 storage in small partitions and a columnar data format. I first saw this in the journal article and the documentation reinforced the impression: url1,url2. I've used S3 just enough to upload a small file to it and load the data into Snowflake. S3 is AWS's object storage service. But, I keep thinking that S3 is too slow for database storage. I'm used to solid state disk storage with 1 millisecond reads and 200 microsecond reads across a SAN network from a storage device with a large cache of high-speed memory. Maybe S3 is faster than I think but I would think that locally attached SSD or SSD over a SAN with a big cache would be faster. Snowflake seems to get around this problem by having SSD and memory caches in their compute nodes. They call clusters of compute nodes warehouses, which I think is confusing terminology. But from the limited query testing I did and from the reading I think that Snowflake gets around S3's slowness with caching. Caching is great for a read only system. But what about a system with a lot of small transactions? I've seen Snowflake do very well with some queries against some large data sets. But, I wonder what the down side is to their use of S3. Also, Snowflake stores the data in columnar format which might not work well for lots of small insert, update, and delete transactions.

I thought it was weird that out of the relatively small amount of documentation Snowflake devoted a lot of it to loading and unloading data. I have read a lot of Oracle documentation. I read the 12c concepts manual and several other manuals while studying for my OCP 12c certification. So, I know that compared to Oracle's documentation Snowflake's is small. But, I kept seeing one thing after another about how to load data. Here are some pages: url1,url2. I assume that their data load/unload statements are not part of the SQL standard, so maybe they de-emphasized documenting normal SQL constructs and focused on their custom syntax. Also, they can't make any money until their customers get their data loaded, so maybe loading data is a business priority for Snowflake. I've uploaded a small amount of data so I'm a little familiar with how it works. But, generally, the data movement docs are pretty hard to follow. It is kind of weird. The web interface is so nice and easy to use but the upload and download syntax seems ugly. Maybe that is why they have so much documentation about it?

Snowflake also seems to have a disproportionate amount of documentation about using JSON in the database. Is this a SQL database or not? I’m sure that there are Oracle manuals about using JSON and of course there are other databases that combine SQL and JSON but out of the relatively small Snowflake documentation set there was a fair amount of JSON. At least, that is my impression reading through the docs. Maybe they have customers with a lot of JSON data from various web sources and they want to load it straight into the database instead of extracting information and putting it into normal SQL tables. Here is an example JSON doc page: url

Snowflake seems to have based their product on a SQL standard but they did not seem to fully implement it. For one thing there is no CREATE INDEX statement. Uggh. The lack of indexes reminds me strongly of Exadata. When we first got on Exadata they recommended dropping your indexes and using Smart Scans instead. But, it isn’t hard to build a query on Exadata that runs much faster with a simple index. If you are looking up a single row with a unique key a standard btree index with a sequential, i.e. non-parallel, query is pretty fast. The lack of CREATE INDEX combined with the use of S3 and columnar organization of the data makes me think that Snowflake would not be great for record at a time queries and updates. Of course, an Oracle database excels at record at a time processing so I can’t help thinking that Snowflake won’t replace Oracle with its current architecture. Here is a page listing all the things that you can create, not including index: url

Snowflake sprinkled their article and documentation with some computer science terms. I don't recall seeing these types of things in Oracle's documentation. For example, they have a fair amount of documentation about HyperLogLog. What in the world? HyperLogLog is some fancy algorithm for estimating the number of distinct values in a column without reading every row. I guess Oracle has various algorithms under the covers to estimate cardinality. But they don't spell out the computer science term for it. At least that's my impression. And the point of this blog post is to give my impressions and not to present some rigorous proof through extensive testing. As a reader of Oracle manuals I just got a different feel from Snowflake's documentation. Maybe a little more technical in its presentation than Oracle's. It seems that Snowflake has some very high-end software engineers with a lot of specialized computer science knowledge. Maybe some of that leaks out into the documentation. As another example, their random function makes reference to the name of the underlying algorithm: url. Contrast this with Oracle's doc: url. Oracle just tells you how to use it. Snowflake tells you the algorithm name. Maybe Snowflake wants to impress us with their computer science knowledge?
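
Just to illustrate the flavor of the idea, here is a toy sketch of the kind of hashing trick involved. This is a crude Flajolet-Martin style estimator, an ancestor of HyperLogLog, not Snowflake's implementation, and it has high variance. Real HyperLogLog splits the hashes into many buckets and averages the estimates:

import hashlib

def leading_zero_bits(h, bits=64):
    # number of leading zero bits in a 64 bit hash value
    return bits - h.bit_length()

def rough_distinct_estimate(values):
    # toy estimate: 2 to the power of the maximum number of
    # leading zero bits seen across the hashed values
    max_zeros = 0
    for v in values:
        h = int(hashlib.md5(str(v).encode('utf-8')).hexdigest()[:16], 16)
        max_zeros = max(max_zeros, leading_zero_bits(h))
    return 2 ** max_zeros

# about 100000 distinct values: expect the right order of magnitude
print(rough_distinct_estimate(range(100000)))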

Reading the Snowflake docs made me think about the role of a database administrator with Snowflake. Is there a role? Of course, since I have been an Oracle DBA for over 20 years I have a vested interest in keeping my job. But, it’s not like Oracle is going away. There are a bazillion Oracle systems out there and if all the new students coming into the work force decide to shy away from Oracle that leaves more for me to support the rest of my career. But, I’m not in love with Oracle or even SQL databases or database technology. Working with Oracle and especially in performance tuning has given me a way to use my computer science background and Oracle has challenged me to learn new things and solve difficult problems. I could move away from Oracle into other areas where I could use computer science and work on interesting and challenging problems. I can see using my computer science, performance tuning, and technical problem solving skills with Snowflake. Companies need people like myself who understand Oracle internals – or at least who are pursuing an understanding of it. Oracle is proprietary and complicated. Someone outside of Oracle probably can not know everything about it. It seems that people who understand Snowflake’s design may have a role to play. I don’t want to get off on a tangent but I think that people tend to overestimate what Oracle can do automatically. With large amounts of data and challenging requirements you need some human intervention by people who really understand what’s going on. I would think that the same would be true with Snowflake. You need people who understand why some queries are slow and how to improve their performance. There are not as many knobs to turn in Snowflake. Hardly any really. But there is clustering: url1,url2,url3. You also get to choose which columns fit into which tables and the order in which you load the data, like you can on any SQL database. Snowflake exposes execution plans and has execution statistics: url1,url2,url3. So, it seems that Snowflake has taken away a lot of the traditional DBA tasks but my impression is that there is still a role for someone who can dig into the internals and figure out how to make things go faster and help resolve problems.

Money is the thing. There are a lot of money related features in the Snowflake documentation. You need to know how much money you are spending and how to control your costs. I guess that it is inevitable with a web-based service that you need to have features related to billing. Couple examples: url1,url2

Snowflake has SQL and JavaScript based user defined functions. These seem more basic than Oracle’s PL/SQL. Here is a link: url

There are some interesting things about limiting the number of parallel queries that can run on a single Snowflake warehouse (compute cluster). I've done a fair amount of work on Oracle data warehouses with people running a bunch of parallel queries against large data sets. Parallelism is great because you can speed up a query by breaking its execution into pieces that the database can run at the same time. But, then each user that is running a parallel query can consume more resources than they could running serially. Snowflake has the same issues. They have built-in limits to how many queries can run against a warehouse to keep it from getting overloaded. These remind me of some of the Oracle init parameters related to parallel query execution. Some URLs: url1,url2,url3. In my opinion parallelism is not a silver bullet. It works great in proofs of concepts with a couple of users on your system. But then load up your system with users from all over your company and see how well it runs then. Of course, one nice thing about Snowflake is that you can easily increase your CPU and memory capacity as your needs grow. But it isn't free. At some point it becomes worth it to make more efficient queries so that you don't consume so many resources. At least, that's my opinion based on what I've seen on Oracle data warehouses.

I’m not sure if I got this information from the article or the documentation or somewhere else. But I think of Snowflake as new. It seems to have some high-end engineers behind it who have worked for several years putting together a system that makes innovative use of AWS. The limited manual set, the technical terms in the docs, the journal article all make me think of a bunch of high-tech people working at a startup. A recent Twitter post said that Snowflake now has 500 customers. Not a lot in Oracle terms. So, Snowflake is new. Like any new product it has room to grow. My manager asked me to look into technical training for Snowflake. They don’t have any. So, that’s why I read the manuals. Plus, I’m just a manual reader by nature.

My impression from all of this reading is that Snowflake has a niche. Oracle tries to make their product all things to all people. It has every feature but the kitchen sink. They have made it bloated with one expensive add-on option after another. Snowflake is leaner and newer. I have no idea how much Snowflake costs, but assuming that it is reasonable I can see it having value if companies use it where it makes sense. But I think it would be a mistake to blindly start using Snowflake for every database need. You probably don’t want to build a high-end transactional system on top of it. Not without indexes! But it does seem pretty easy to get a data warehouse setup on Snowflake without all the time-consuming setup of an on premise data warehouse appliance like Exadata. I think you just need to prepare yourself for missing features and for some things not to work as well as they do on a more mature database like Oracle. Also, with a cloud model you are at the mercy of the vendor. In my experience employees have more skin in the game than outside vendors. So, you sacrifice some control and some commitment for ease of use. It is a form of outsourcing. But, outsourcing is fine if it meets your business needs. You just need to understand the pros and cons of using it.

To wrap up this very long blog post, I hope that I have been clear that I’m just putting out my impressions without a lot of testing to prove that I’m right. This post is trying to document my own thoughts about Snowflake based on the documentation and my experience with Oracle. There is a sense in which no one can say that I am wrong about the things that I have written as long as I present my honest thoughts. I’m sure that a number of things that I have written are wrong in the sense that testing and experience with the product would show that my first impressions from the manuals were wrong. For example, maybe I could build a transactional system and find that Snowflake works better than I thought. But, for now I’ve put out my feeling that it won’t work well and that’s just what I think. So, the post has a lot of opinion without a ton of proof. The links show things that I have observed so they form a type of evidence. But, with Oracle the documentation and reality don’t always match so it probably is the same with Snowflake. Still, I hope this dump of my brain’s thoughts about the Snowflake docs is helpful to someone. I’m happy to discuss this with others and would love any feedback about what I have written in this post.

Bobby

Categories: DBA Blogs

Submitted two abstracts for Oracle OpenWorld 2017

Wed, 2017-04-19 14:07

I submitted two abstracts for Oracle OpenWorld 2017. I have two talks that I have thought of putting together:

  • Python for the Oracle DBA
  • Toastmasters for the Oracle DBA

I want to do these talks because they describe two things that I have spent time on and that have been valuable to me.

I have given several recent talks about Delphix. Kyle Hailey let me use his slot at Oaktable World in 2015 which was at the same time as Oracle OpenWorld 2015. Right after that I got to speak at Delphix Sync which was a Delphix user event. More recently I did a Delphix user panel webinar.

So, I’ve done a lot of Delphix lately and that is because I have done a lot with Delphix in my work. But, I have also done a lot with Python and Toastmasters so that is why I’m planning to put together presentations about these two topics.

I probably go to one conference every two years so I’m not a frequent speaker, but I have a list of conferences that I am thinking about submitting these two talks to, hoping to speak at one. These conferences are competitive and I’ve seen that better people than me have trouble getting speaking slots at them. But here is my rough idea of what I want to submit the talks to:

I’ve never gone to RMOUG but I think it is in Denver so that is a short flight and I have heard good things.

Also, we have our own local AZORA group in Phoenix. Recently we have had some really good ACE Director/Oak Table type speakers, but I think they might like to have some local speakers as well so we will see if that will work out.

If all else fails I can give the talks at work. I need to start working on the five speeches in my Toastmasters “Technical Presentations” manual which is part of the Advanced Communication Series. I haven’t even cracked the book open, so I don’t know if it applies but it seems likely that I can use these two talks for a couple of the speech projects.

Anyway, I’ve taken the first steps towards giving my Python and Toastmasters speeches. Time will tell when these will actually be presented, but I know the value that I have received from Python and Toastmasters and I’m happy to try to put this information out there for others.

Bobby

Categories: DBA Blogs

How to find the object that caused ORA-08103 error

Mon, 2017-04-17 14:50

A developer told me that two package executions died with ORA-08103 errors and he didn’t know which object caused the errors.

I found two trace files that had the following contents:

*** SESSION ID:(865.1201) 2017-04-17 10:17:09.476
OBJD MISMATCH typ=6, seg.obj=21058339, diskobj=21058934, dsflg=100000, dsobj=21058339, tid=21058339, cls=1

*** SESSION ID:(595.1611) 2017-04-17 10:17:35.395
OBJD MISMATCH typ=6, seg.obj=21058340, diskobj=21058935, dsflg=100000, dsobj=21058340, tid=21058340, cls=1

Bug 13844883 on Oracle's support site gave me the idea that the diskobj value in the trace is the current object id. So, I needed to look up 21058934 and 21058935. I used this query to find the objects:

select * from dba_objects where DATA_OBJECT_ID in
(21058934,
21058935);

This pointed to two index partitions that had been rebuilt while the package was running. I’m pretty sure this caused the ORA-08103 error. So, if you get an ORA-08103 error find diskobj in the trace file and look it up as DATA_OBJECT_ID in dba_objects.
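
If you want to script the lookup, something like this sketch would pull the diskobj values out of a trace file and build the dba_objects query. The trace file name here is made up:

import re

# read a trace file (made-up name) and pull out the diskobj values
trace_text = open('mydb_ora_12345.trc').read()
diskobjs = sorted(set(re.findall(r'OBJD MISMATCH.*?diskobj=(\d+)', trace_text)))

if diskobjs:
    print('select * from dba_objects where DATA_OBJECT_ID in')
    print('(' + ',\n'.join(diskobjs) + ');')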

Bobby

Categories: DBA Blogs

Optimizer bug fix makes a query run more than 3 times slower

Wed, 2017-04-12 15:39

I’m working on an 11.1.0.7 to 11.2.0.4 upgrade and found a handful of queries that run more than 3 times longer on 11.2.0.4 than 11.1.0.7. The data and optimizer statistics are very similar on the two test databases. I’m pretty sure that an optimizer bug fix caused this difference. So, the irony is that a fix to the optimizer that we get with the upgrade to the very stable 11.2.0.4 release is causing a 3x slowdown in the query.

For my testing I’m using the gather_plan_statistics hint and this query to dump out the plan after executing the query:

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));
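
Putting the two together, a test session might run the hinted query and then dump the plan like this sketch. The connect string and query are made up:

import cx_Oracle

con = cx_Oracle.connect('MYUSER/MYPASSWORD@MYDATABASE')
cur = con.cursor()

# run the query under test with the hint so plan statistics get collected
cur.execute('select /*+ gather_plan_statistics */ count(*) from my_table')
cur.fetchall()

# dump the plan of the last query run in this session,
# including estimated and actual row counts
cur.execute(
    "select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'))")
for (line,) in cur:
    print(line)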

I used an outline hint to force the 11.2.0.4 plan to run under 11.1.0.7 and then I looked at the estimated and actual row counts to find a discrepancy. I found one table with estimated row counts that did not look correct on 11.1.0.7 but made sense on 11.2.0.4.

11.1.0.7

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      1 |      0 |00:00:07.34 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     16 |    102 |00:00:01.20 |   96255 |       |       |          |

11.2.0.4

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|* 29 |           TABLE ACCESS BY INDEX ROWID  | MY_TABLE                  |     17 |      8 |      0 |00:00:07.44 |   96306 |       |       |          |
|* 30 |            INDEX RANGE SCAN            | MY_TABLE_PK               |     17 |     17 |    102 |00:00:01.22 |   96255 |       |       |          |

After the range scan in step 30 of the 11.1.0.7 plan you have an estimate of 16 rows, but the table access in step 29 has an estimate of only 1. In 11.2.0.4 the estimate for step 29 is 8 rows. Given the optimizer statistics, the optimizer should have estimated 8 rows for step 29 in 11.1.0.7 as well. It appears that 11.2.0.4 fixed a bug like this.

Here are the predicates for step 29 in the plan:

29 - filter(("PCP"."MY_FLAG"='M' OR "PCP"."MY_FLAG"='Y'))

So the column could have value M or Y. The optimizer statistics show 4 distinct values for the column and no histogram, so the optimizer should assume that 1/4 of the rows meet each criterion. That means the optimizer should have estimated that 1/4 + 1/4 = 1/2 of the rows from step 30 meet the OR condition in step 29. So, 17/2 = 8, rounding down. But in 11.1.0.7 it seems that the optimizer multiplied by 1/4 twice instead of adding first, making the estimate 16*1/4*1/4 = 1. There is a known bug related to OR conditions in where clauses:

Bug 10623119 – wrong cardinality with ORs and columns with mostly nulls (Doc ID 10623119.8)

Our 11.2.0.4 database includes this bug fix but I don’t know if this fix caused the difference in behavior that I saw. It seems possible that it did.
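
Here is the arithmetic from the previous paragraph as a quick sanity check in Python:

# selectivity of one value out of 4 distinct values with no histogram
sel = 1.0 / 4

# 11.2.0.4 behavior: add the two selectivities for the OR, then multiply
print(int(17 * (sel + sel)))   # 8, matching E-Rows for step 29 on 11.2.0.4

# apparent 11.1.0.7 behavior: multiply by 1/4 twice
print(int(16 * sel * sel))     # 1, matching E-Rows for step 29 on 11.1.0.7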

The interesting thing is that the real row count for step 29 is 0. So, the pre-bug fix plan in 11.1.0.7 actually estimated the row count more accurately by accident. It estimated 1 and the real count was 0. The correct estimate should have been 8, but that is not as close to 0 as 1 is. I think we just happened to have a few queries where the bug resulted in a more accurate estimate than a properly functioning optimizer. But, I’m only looking at the queries whose performance is worse after the upgrade. There may have been other queries that performed better because of this bug fix.

I ended up passing this and a similar query back to a senior SQL developer and he took one look at the query and described it as “ugly”. He fixed both of them in no time so that both queries now run just as fast or faster on 11.2.0.4 than they did on 11.1.0.7.

So, the original query ran faster when the optimizer was not working properly. A human developer simplified the query and then it ran faster when the optimizer was working properly. Maybe the moral of the story is to build simpler and cleaner SQL queries to begin with and if you find a query whose performance declines with better optimizer information then consider improving the query so that it works well with the better functioning optimizer.

Bobby

P.S. Well, I used a cardinality hint to give the 11.2.0.4 optimizer the correct number of rows for the step in the plan and it didn’t change the plan. So, the change in the way the number of rows was calculated may not be the real reason, or the only reason, for the worse plan on 11.2.0.4. Plus, I tried the alter session command mentioned in the comments and it didn’t change the plan either. I guess that something else in 11.2.0.4 caused the change to the less efficient plan but I can’t say what. Still, a developer fixed the queries in no time so that is still the main point…

Categories: DBA Blogs

_small_table_threshold=1000000 results in > 5x query speedup

Fri, 2017-04-07 16:41

Today I sped a query up by over 5 times by setting _small_table_threshold=1000000.

Here is the query elapsed time and a piece of the plan showing its behavior before setting the parameter:

Elapsed: 00:28:41.67

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:13:02.42 |    3071K|   3050K|       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:13:02.25 |    3071K|   3050K|       |       |          |

I think that this part of the plan means that the query scanned a range of partitions 9125 times resulting in over three million physical reads. These reads took about 13 minutes. If you do the math it works out to between 200 and 300 microseconds per read. I have seen similar times from repeated reads from a storage server that has cached the data in memory. I have seen this with a SAN and with Delphix.

Here is my math for fun:

>>> 1000000*((60*13)+2.25)/3050000
256.4754098360656

About 256 microseconds per read.

I ran this query again and watched the wait events in Toad’s session browser to verify that the query was doing a bunch of direct path reads. Even though the query was doing full scans on the partition range 9000 times the database just kept on doing direct path reads for 13 minutes.

So, I got the idea of trying to increase _small_table_threshold. I was not sure if it would work with parallel queries. By the way, this is on 11.2.0.4 on the HP-UX Itanium platform. So, I tried

alter session set "_small_table_threshold"=1000000;

I ran the query again and it ran in under 5 minutes. I had to add a comment to the query to get the plan to come back cleanly. So, then I reran the query again and I guess because of caching it came back in under 2 minutes:

First run:

Elapsed: 00:04:28.83

Second run:

Elapsed: 00:01:39.69

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:00:45.33 |    3103K|      0 |       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:00:45.27 |    3103K|      0 |       |       |          |

The second execution did zero physical reads on these partitions instead of the 3 million that we had without the parameter!

So, it seems that if you have a query that keeps doing full scans on partitions over and over it can run a lot faster if you disable direct path read by upping _small_table_threshold.
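
If you want to check the current value of the parameter first, a sketch like this works. The x$ views are only visible to SYS and the connect string is made up:

import cx_Oracle

# x$ views require a SYS connection; connect string is made up
con = cx_Oracle.connect('sys/MYPASSWORD@MYDATABASE', mode=cx_Oracle.SYSDBA)
cur = con.cursor()

cur.execute("""
select i.ksppinm name, v.ksppstvl value
from sys.x$ksppi i, sys.x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_small_table_threshold'""")

for name, value in cur:
    print(name + ' = ' + value)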

Bobby

Categories: DBA Blogs

Jetpack 4.8 outage

Wed, 2017-04-05 17:18

I just got my bobbydurrettdba.com blog back online. It seems that an automatic update of Jetpack to version 4.8 took it down last night. I thought I would post a quick comment on how I resolved it because it is slightly different from what the Jetpack support site says.

When I tried to get into my blog today I just got a white screen saying there was a problem. I could not get into the wp-admin screen to do anything. Then I found out that Jetpack 4.8 had been pushed out and had brought down a lot of WordPress sites. Then I found out that Jetpack 4.8.1 had just come out today to fix it.

Here is the url for the 4.8.1 fix to this issue: fix

The fix points to the manual plugin install url which talks about deleting your /wp-content/plugins/jetpack folder and then manually installing by downloading the 4.8.1 jetpack zip, unzipping it, and ftping it up to /wp-content/plugins.

But all I had to do, after backing up /wp-content/plugins/jetpack was to rename it to /wp-content/plugins/jetpack.old. Then I was able to get into my site and to update the plugin to 4.8.1 through the normal web-based process.

Strangely enough the Jetpack plugin update removed my /wp-content/plugins/jetpack.old directory and replaced it with /wp-content/plugins/jetpack.


I never did ftp the 4.8.1 jetpack directory over though I had it unzipped and ready to go on my laptop.

Anyway, I didn’t delete my jetpack directory.  I just renamed it to jetpack.old. Then I ran the normal plugin update process.

Bobby

p.s. My site stats for today are not looking so good with the blog down all day.

Categories: DBA Blogs

Information Technology Jobs Posted

Thu, 2017-03-30 10:15

My company, US Foods, has posted a number of information technology jobs that are in Chicago (Rosemont)  or Phoenix (Tempe). Here is the web site:

https://usfood.taleo.net/careersection/usf_external/jobsearch.ftl

Enter Information Technology as the Job Field to see all the posted IT jobs.

Bobby

Categories: DBA Blogs

Using Delphix to support Oracle upgrade

Wed, 2017-03-22 17:47

I’m working on upgrading a very buggy unpatched 11.1.0.7 Oracle database to a fully patched 11.2.0.4 version. I’m using Delphix to support the upgrade and it has been a big help so far. This is on the HP-UX 11.31 Itanium platform.

The great thing about using Delphix to support an upgrade is that my very first pass through the upgrade scripts was with a full-sized clone of production. In the past I probably started with a tiny subset or even an out of the box demo database for my first upgrade pass and even when I got to QA it wasn’t a full test of a production upgrade. This time, my first test was with all the data and that was very cool.

The main example of how this helped is that we had a lot of data in the SYS.WRI$_OPTSTAT_HISTGRM_HISTORY and SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY tables in production and this made the first upgrade of its clone take a long time. After two or three attempts at other ways to speed things up, I ended up applying patch 12683802 on an 11.1.0.7 Oracle home and this allowed me to truncate these two tables.
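
If you are facing a similar upgrade it is easy to check how much statistics history you have before you start. Here is a rough sketch with a made-up connect string; the purge call is commented out since it is destructive and requires the patch:

import cx_Oracle

# needs privileges on the SYS tables
con = cx_Oracle.connect('MYUSER/MYPASSWORD@MYDATABASE')
cur = con.cursor()

# see how much optimizer statistics history has piled up
for t in ('WRI$_OPTSTAT_HISTGRM_HISTORY', 'WRI$_OPTSTAT_HISTHEAD_HISTORY'):
    cur.execute('select count(*) from sys.' + t)
    print(t + ': ' + str(cur.fetchone()[0]))

# with patch 12683802 in place this call truncates the history:
# cur.execute('begin dbms_stats.purge_stats(dbms_stats.purge_all); end;')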

Delphix helped me here because I had an unused 11.1.0.7 Oracle home on a different host from the one I was doing the upgrade on. I didn’t want to apply a patch on the upgrade host because there were three other databases using the home and I didn’t want to bring them down or patch them. Delphix let me move the VDB that I was upgrading over to the host that had the unused home. Then I applied the patch there and ran the truncate using the dbms_stats.purge_stats(dbms_stats.purge_all) procedure call that the patch enabled.

Then I moved the VDB back to the host where I intended to do the upgrade, which already had the fully patched 11.2.0.4 binaries installed, and ran the upgrade there. Pretty cool. I think I did the most recent upgrade in about 3.5 hours with the empty OPTSTAT tables.

By the way, doing an upgrade within Delphix is easy. You just bring the VDB up on the old Oracle home, do the upgrade as you normally would, including all the shutdown and startup commands, and then within the Delphix GUI you let Delphix know the VDB is now on a new Oracle home by shutting it down, choosing the new home, and bringing it back up. Piece of cake.

Bobby

Categories: DBA Blogs

Simple Python for Oracle database tuning example

Thu, 2017-02-16 16:14

I ran across a stackoverflow question and it gave me an idea for a simpler use of Python to graph some Oracle database performance information. I looked at my PythonDBAGraphs scripts and I'm not sure that it is worth modifying them to try to simplify them since I like what they do. But they may make people think that Python scripts to graph Oracle performance data are difficult to write. I think that if someone just wants to put together some graphs using Python, Matplotlib, and cx_Oracle they could do it more simply than I have in my PythonDBAGraphs scripts and it still could be useful.

Here is an example that looks at db file sequential read waits and graphs the number of waits per interval and the average wait time in microseconds:

import cx_Oracle
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

connect_string = "MYUSER/MYPASSWORD@MYDATABASE"
con = cx_Oracle.connect(connect_string)
cur = con.cursor()

query="""
select sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) "number of waits",
(after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits) "ave microseconds"
from 
DBA_HIST_SYSTEM_EVENT before, 
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='db file sequential read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > 0
order by after.snap_id
"""

cur.execute(query)

datetimes = []
numwaits = []
avgmicros = []
for result in cur:
    datetimes.append(result[0])
    numwaits.append(result[1])
    avgmicros.append(result[2])
        
cur.close()
con.close()

title="db file sequential read waits"

fig = plt.figure(title)
ax = plt.axes()

plt.plot(datetimes,numwaits,'r')
plt.plot(datetimes,avgmicros,'b')

# Format X axis dates

fig.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%m/%d/%Y %H:%M')
datetimefmt = mdates.DateFormatter("%m/%d/%Y")
ax.xaxis.set_major_formatter(datetimefmt)

# Title and axes labels

plt.title(title)
plt.xlabel("Date and time")
plt.ylabel("num waits and average wait time")

# Legend

plt.legend(["Number of waits","Average wait time in microseconds"],
loc='upper left')

plt.show()

The graph it produces is usable without a lot of time spent formatting it in a non-standard way.

It is a short 68 line script and you just need matplotlib and cx_Oracle to run it. I’ve tested this with Python 2.

Bobby

Categories: DBA Blogs

Rosetta Code

Sat, 2017-01-21 09:24

I have experimented with the Rosetta Code website as a way to learn more about computer technology. I have mentioned my exploration of the edX and OCW web sites in earlier posts so I thought I would mention Rosetta Code or RC as another way to learn.

The RC site displays code in a variety of programming languages so that you can compare the languages. They define certain tasks and people post code that performs the tasks. For example, the task could be to write a simple Hello World program that writes Hello World to the screen. The site shows how to do this in Java, Python, C, and others. At this moment there are 354 implementations of Hello World on the site.  I added this simple one for Oracle’s version of SQL: My Hello World

But, I can’t help wondering if this is just a fun game or is it really educational? I just finished a task this morning that was interesting to me. It is a draft task which means that people have not yet written enough example programs for the site managers to publish it as an official task. But anyone can see it. I’ve written the Python and PL/SQL examples. This 4 squares problem is just a puzzle or game but it was interesting to think about solving it in a declarative way with SQL and not a procedural way with Python or any other regular language. Maybe there is value in looking at these tasks and thinking about how SQL and relational database thinking could be applied?
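
For anyone curious, here is a brute force Python sketch of the puzzle as I understand the task: place distinct values from 1 through 7 into a through g so that the sums a+b, b+c+d, d+e+f, and f+g are all equal:

from itertools import permutations

def four_squares(lo, hi):
    # try every assignment of distinct values to a through g
    for a, b, c, d, e, f, g in permutations(range(lo, hi + 1), 7):
        if a + b == b + c + d == d + e + f == f + g:
            yield (a, b, c, d, e, f, g)

for solution in four_squares(1, 7):
    print(solution)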

Anyway, I just thought I would document what I have done with the RC site. Be careful because once you start working up example code for the site it can become addictive! I am still thinking about what the real value of the site is but it has helped me exercise my programming muscles and think a bit about SQL. Check it out if you think it has value.

Bobby

Categories: DBA Blogs

AZORA Meeting January 26th

Fri, 2017-01-06 15:40

The next Arizona Oracle User Group meeting is January 26th in Scottsdale. Signup on this meetup link.

Copied from invitation:

When:  January 26, 2017 (Thursday)
12:30 pm – 4:00 pm

Where:  Republic Services
1st Floor Training Rooms
14400 N 87th St (AZ101 & Raintree)
Scottsdale, AZ

Agenda:

12:30 – 1:00 Registration and Pizza

1:00-1:10  Welcome

1:10-2:10  Presentations

Room 1    Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Oracle Database 12c New Features for 11gR2 DBA”

Room 2   Charles Kim – Viscosity North America
(Oracle ACE Director)
“Bullet Proof Your Data Guard Environment”

2:10-2:25  Break – Coffee & Cookies

2:25-3:25  Presentations

Room 1   Biju Thomas – OneNeck IT Solutions
(Oracle ACE Director)
“Introduction to Oracle Databases in the Cloud”

Room 2  Jerry Ward – Viscosity North America
“Building Faceted Search Navigation in APEX
with Oracle JET and  PL/SQL Pipelines”

3:25-3:30    Wrap Up and Closing

Categories: DBA Blogs

Improved VirtualBox Network

Fri, 2017-01-06 09:33

Before I left for vacation I noticed this blog post about setting up a network on VirtualBox: “CREATING A TEST LAB USING VIRTUALBOX / NAT NETWORKING”. Now that I am back in the office I thought I would try to apply some of the things that I learned from the blog post to my existing VirtualBox network. I’m pretty happy with the results so I thought I would document my experience here.

My situation differed slightly from the situation in the original post:

  1. I’m using VirtualBox 5.1.12 instead of 5.0.4
  2. I’m using Oracle Enterprise Linux 7.3 instead of CentOs 6
  3. I have 6 existing VMs instead of starting from scratch

I was cautious at first about trying the things recommended in the post because I didn’t want to mess up my existing VMs. These are all just test environments but it took some work to set them up. But, I wasn’t happy with the way the network was setup on my VMs so that motivated me to try something different. My biggest problem with my network was that I couldn’t SSH into my VMs when I worked from home using my work VPN connection.

After re-configuring my network I now have a static ip address for each VM within its own NAT network and I have port forwarding setup so that I can SSH into each one even when I’m working from home using VPN into my company’s network. I also have port forwarding so that I can login to an Oracle database on one of the VMs from my laptop even when using VPN.

The original blog post has most of the details but I thought that I would document a few minor differences that I came across to be helpful.

Default network name was NatNetwork instead of LocalNat.

In port forwarding I forwarded a port to port 1521 on the VM that has an Oracle database. I had to disable and stop the firewall on that VM so that I could connect to the Oracle database.

I went to /etc/sysconfig/network-scripts and looked at the configuration file and the format wasn’t the same as in the original post. This is probably because it is OEL 7.3 instead of CentOs 6. Instead of editing the file manually I ended up using the graphical network configuration app so that I knew I would edit the configuration correctly.

Later, after I wrote the first draft of this blog post I realized that I wanted to set up the network so that it automatically appends a domain name for our internal systems. I.e. if my standard domain is mycompany.com and I want to connect to mymachine.mycompany.com it would be nice to just connect to mymachine without fully qualifying the name with the domain. I ended up manually editing the file /etc/sysconfig/network-scripts/ifcfg-enp0s3 after googling around on the Internet to see how to add this. I added a DOMAIN line like this:

DOMAIN="mydomain1.com mydomain2.com"

I created two text files to keep track of two things about my network configuration:

  1. The ports that VirtualBox forwards
  2. The ip address for each vm

I had to get the DNS server ip addresses from my laptop’s network settings.

Anyway, I don’t have much to add to the original post except just to point out these slight differences and my experience. It was super helpful to switch to this NAT network with port forwarding approach.

Bobby

P.S. After writing the first draft of this blog post I did some more testing. There is one thing about this configuration that I don't like. It seems that anyone on my company's network can SSH into one of my VMs if they use the forwarded port number and my laptop's IP address. I tried this from one of our Unix servers and it worked disturbingly well, even over the VPN. I think that the original blog post wanted this behavior but I was just trying to connect from my laptop to my VMs when I was using VPN. Still, it might come in handy to SSH into my VMs from other computers on our network. At least it isn't opening up every port on my VM – only the ones I have set up with port forwarding. Also, I usually only bring these VMs up for short periods of time so they won't be available for people to hack into except for short unpredictable intervals and when I'm actively using them.

Categories: DBA Blogs

Delphix Users Panel Webinar

Wed, 2016-11-30 16:19

Delphix is sponsoring a user webinar on Wednesday from 10 to 11 am California time. I and a couple of other technical Delphix users will take part in a panel discussion. The webinar will give you a good chance to hear about other users' experience with Delphix and a chance to ask questions. Like any good user meeting it will not be a sales pitch; instead it will focus on users sharing helpful technical information with other Delphix users.

Noted Oracle expert and Delphix employee Kellyn Pot’Vin-Gorman will also be there facilitating the discussion.

Here is the link to sign up for the free webinar: https://www.delphix.com/resources/webinar/delphix-users-panel

Be there or be square! 🙂

Bobby

Categories: DBA Blogs

Improving performance of top query

Wed, 2016-11-16 16:56

I’m on call this week. Here are the steps that I took to speed up a query today.

First I got an AWR report and found the top query. Also, someone from support told me to look at November 11, before the latest release, and I found a similar top query there.

I got a plan for both the new and old top queries – they both use the same plan which makes me think that the two queries are similar.

I looked at the top segments on the AWR report and found a particular table at the top of the logical reads. An index of that table was like number 5 on that list.

I looked at the columns of the table’s index to see how many distinct values there were. None of the three columns had more than 300 distinct values so they were not very selective. I noticed that there was a unique index on the table and the first column of that index had millions of distinct values.

I extracted some sample bind variable values for the query and found that the second bind variable was null or something like that. But, the index we were using included this second variable.

I looked at the bind variables and found that the first column from the unique index was part of the join conditions in the query. (The query had about 20 joins.)

Then I extracted the query text and replaced the bind variables with literals to see how it would run. It used the unique index. I used hints to force the original index and compared to running with the unique index. It ran about 30 times faster with the unique index. I ran a few times to make sure it was all cached.
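
The hint test looked something like this sketch. The table, column, and index names are made up:

import time

import cx_Oracle

con = cx_Oracle.connect('MYUSER/MYPASSWORD@MYDATABASE')
cur = con.cursor()

# the same query forced to each index with a hint; all names are made up
queries = {
    'original index': """select /*+ index(t MY_TABLE_IX) */ count(*)
                         from my_table t where join_col = :b1""",
    'unique index': """select /*+ index(t MY_TABLE_UX) */ count(*)
                       from my_table t where join_col = :b1""",
}

for name, sql in queries.items():
    start = time.time()
    cur.execute(sql, b1=12345)
    cur.fetchall()
    print(name + ': ' + str(time.time() - start) + ' seconds')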

Then I tried to use SQLT's coe_xfr_sql_profile.sql to force the plan that used the unique index but got an error. I had to download the latest version of SQLT to get it to work.

Now, on average, the query seems to run about 1000 times faster.

It is a delivered vendor package so it was nice to find the better plan and go behind the scenes to fix it. But, if another release comes out and changes this SQL to a new sql_id we will have to create a new profile. It's not perfect but it's a good quick fix for my on call week.

Bobby

Categories: DBA Blogs

Running SQL*Plus from a Python script

Fri, 2016-11-04 16:08

I needed to write a new script that was running on a Red Hat Linux 6 virtual machine and that would connect to databases using SQL*Plus. I was going to write a bash shell script but decided to use Python instead to see if I could do it using the Python that came with this version of Linux. I won't paste the entire script here but the key was to run SQL*Plus from Python instead of a shell script. Here is a simple example showing how I did it:

$ cat test.py
import subprocess

"""

Example of running a sqlplus script from python 2.6.6.

"""

def run_sqlplus(sqlplus_script):

    """

    Run a sql command or group of commands against
    a database using sqlplus.

    """

    p = subprocess.Popen(['sqlplus','/nolog'],stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,stderr=subprocess.PIPE)
    (stdout,stderr) = p.communicate(sqlplus_script)
    stdout_lines = stdout.split("\n")

    return stdout_lines

sqlplus_script="""
connect test/test
select * from dual;
exit

"""

sqlplus_output = run_sqlplus(sqlplus_script)

for line in sqlplus_output:
    print line

Here is the output:

$ python test.py

SQL*Plus: Release 12.1.0.2.0 Production on Fri Nov 4 15:44:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> SQL> Connected.
SQL>
D
-
X

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

The function run_sqlplus takes a multi-line string as input. This is the text of a SQL*Plus script. It runs sqlplus /nolog to get a SQL*Plus prompt and then passes the strings in as the lines typed at the prompt. The function returns SQL*Plus’s output as a list of strings.

Anyway, this is just an example. I built a real script using these concepts. In this case we are using Python 2.6.6 without cx_Oracle installed so I couldn’t connect directly to an Oracle database. Instead I just ran SQL*Plus from Python.

Bobby

Categories: DBA Blogs

I put my SQL scripts on GitHub

Tue, 2016-10-25 10:45

I created a new GitHub public repository with my SQL scripts. Here is the URL:

https://github.com/bobbydurrett/OracleDatabaseTuningSQL

I’ve experimented with GitHub for my Python graphing scripts but wasn’t sure about putting the SQL out there. I don’t really have any comments in the SQL scripts. But, I have mentioned many of the scripts in blog posts so those posts form a type of documentation. Anyway, it is there so people can see it. Also, I get the benefit of using Git to version my scripts and GitHub serves as a backup of my repository.

Also, I have a pile of scripts in a directory on my laptop but I have my scripts mixed in with those that others have written. I’m pretty sure that the repository only has my stuff in it but if someone finds something that isn’t mine let me know and I’ll take it out. I don’t want to take credit for other people’s work. But, the point is to share the things that I have done with the community so that others can benefit just as I benefit from the Oracle community. I’m not selling anything and if there is someone else’s stuff in there it isn’t like I’m making money from it.

Like anything on the web use at your own risk. The repository contains scripts that I get a lot of benefits from but I make no guarantees. Try any script you get from the internet on some test system first and try to understand the script before you even run it there.

I hope that my new SQL repository helps people in their Oracle work.

Bobby

Categories: DBA Blogs

ASH script to show query run times

Mon, 2016-10-24 17:02

I ran into a situation last week where a developer complained that a query sometimes ran for 3 or more seconds but normally runs in much less than 1 second. I had just been to a local AZORA user group meeting where Tim Gorman talked about using ASH to diagnose issues, so Tim's talk motivated me to find some clever way to use ASH. I had three pairs of start and stop dates and times to work with. Each pair was about 3 to 4 seconds apart.

I started looking at DBA_HIST_ACTIVE_SESS_HISTORY for each time period, and even for a larger 11 second time period that bracketed the interval, but I did not get any rows back for the first two intervals and only one row for the third. I knew that the V$ version of ASH sampled every 1 second so it might catch these 3 second queries, but the queries in question had run the day before. Something Tim said in the user group meeting made me think about using the V$ view anyway. He said that on inactive development databases the in-memory V$ ASH data could hang around for a few days. Sure enough, I was able to find some information in one of the given time periods.

But then I had to find the one slow execution of the query because there were multiple executions at the same time. I found that grouping by SQL_EXEC_ID would let me see each execution of the query by itself. So, I developed this query to show how long each execution ran:

select 
SQL_EXEC_ID,
to_char(SQL_EXEC_START,'YYYY-MM-DD HH24:MI:SS') sql_start,
to_char(min(sample_time),'YYYY-MM-DD HH24:MI:SS') first_sample,
to_char(max(sample_time),'YYYY-MM-DD HH24:MI:SS') last_sample,
max(sample_time)-min(sample_time) elapsed_seconds
from V$ACTIVE_SESSION_HISTORY a
where 
sample_time 
between 
to_date('20-OCT-2016 17:00:00','DD-MON-YYYY HH24:MI:SS')
and 
to_date('20-OCT-2016 17:30:00','DD-MON-YYYY HH24:MI:SS') and
sql_id='0gt3cjptk68vw'
group by SQL_EXEC_ID,SQL_EXEC_START
order by SQL_EXEC_START,SQL_EXEC_ID;

Here are a few rows from the output from around the time of the first interval that I was looking at:

SQL_EXEC_ID SQL_START           FIRST_SAMPLE        LAST_SAMPLE         ELAPSED_SECONDS
----------- ------------------- ------------------- ------------------- -----------------------
   16785284 2016-10-20 17:05:24 2016-10-20 17:05:25 2016-10-20 17:05:25 +000000000 00:00:00.000
   16785285 2016-10-20 17:05:25 2016-10-20 17:05:25 2016-10-20 17:05:25 +000000000 00:00:00.000
   16785380 2016-10-20 17:05:31 2016-10-20 17:05:31 2016-10-20 17:05:34 +000000000 00:00:03.000
   16785692 2016-10-20 17:05:51 2016-10-20 17:05:52 2016-10-20 17:05:53 +000000000 00:00:01.000
   16785772 2016-10-20 17:05:54 2016-10-20 17:05:55 2016-10-20 17:05:55 +000000000 00:00:00.000
   16785852 2016-10-20 17:05:59 2016-10-20 17:06:01 2016-10-20 17:06:01 +000000000 00:00:00.000
   16785940 2016-10-20 17:06:07 2016-10-20 17:06:08 2016-10-20 17:06:08 +000000000 00:00:00.000

The third row down lined up well with the interval in question. So, I was able to use ASH to show that the query ran for 3 seconds within the database. Also, each line was a wait on db file sequential read. This led me to look at the execution plan and to check the index and partitioning to look for ways to improve the query's performance.

Bobby

 

Categories: DBA Blogs
