
Feed aggregator

Get Proactive - Follow the Oracle Support Events Calendar

Joshua Solomin - Mon, 2015-11-30 16:36
See upcoming Support events with the Get Proactive Events Calendar, a web application that automatically tracks Advisor Webcasts, newsletter releases, and Support training events, and synchronizes the events you select into your own calendar.

Oracle Support sponsors a variety of activities (like our popular Advisor Webcasts) to help customers work more effectively with their Oracle products. Follow our Event Calendar to stay up to date on upcoming Webcasts and events.

The web app allows you to filter activities by product line, making it easier to see the most relevant items. As new events are added to the schedule, the calendar updates automatically to include sessions, dates, and times. For consistency, displayed times automatically adjust to your time zone.

The calendar is built using the standard iCalendar format, so you can integrate the calendar data directly into Outlook and Thunderbird. Follow the instructions below to set up the integration.

Calendar Integration
  1. Go to the calendar link here.
  2. Follow the instructions on the page to add the calendar to your email/calendar client.

We've written a brief document detailing some of the features for the calendar. Visit Document 125716.1 to find out more.

Watch featured OTN Virtual Technology Summit Replay Sessions

OTN TechBlog - Mon, 2015-11-30 16:08

Today we are featuring a session from each OTN Virtual Technology Summit Replay Group.  See session titles and abstracts below.  Watch right away and then join the group to interact with other community members and stay up to date on when NEW content is coming!

Best Practices for Migrating On-Premises Databases to the Cloud

By Leighton Nelson, Oracle ACE
Oracle Multitenant is helping organizations reduce IT costs by simplifying database consolidation, provisioning, upgrades, and more. Now you can combine the advantages of multitenant databases with the benefits of the cloud by leveraging Database as a Service (DBaaS). In this session, you’ll learn about key best practices for moving your databases from on-premises environments to the Oracle Database Cloud and back again.

What's New for Oracle and .NET - (Part 1)
By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases with code and tool demonstrations using Visual Studio 2015.

Docker for Java Developers
By Roland Huss, Principal Software Engineer at Red Hat
Docker, the OS-level virtualization platform, is taking the IT world by storm. In this session, we will see what features Docker has for us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realized as Docker containers. Docker also changes the way we ship applications in that we are not only deploying application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin which is shown in a live demo.

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This video explains why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The video demonstrates several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Designing a Multi-Layered Security Strategy
By Glenn Brunette, Cybersecurity, Oracle Public Sector, Oracle
Security is a concern of every IT manager and it is clear that perimeter defense, trying to keep hackers out of your network, is not enough. At some point someone with bad intentions will penetrate your network and to prevent significant damage it is necessary to make sure there are multiple layers of defense. Hear about Oracle’s defense in depth for data centers including some new and unique security features built into the new SPARC M7 processor.

Multiple Tenant Support

Dylan's BI Notes - Mon, 2015-11-30 15:57
The definition of multi-tenancy varies.  Some people think that striping the data by tenant, adding a tenant ID to every table, is enough to support multiple tenants. One such example is Oracle BI Applications: we added the tenant_id everywhere and assumed that we could use it later to partition the data. In the cloud […]
Categories: BI & Warehousing

SQL 2016 – Improve Performance with the Query Store feature: Part 1

Pythian Group - Mon, 2015-11-30 15:19


After playing with this great new feature for some time and speaking about it during a SQL Saturday in Montreal, I can say that I am very excited about Query Store in SQL 2016!
Performance tuning is our bread and butter here at Pythian, and I think almost every DBA will agree that, unless you have some 3rd party tool (and even if you have such a tool), it’s not always straightforward to tune T-SQL code. It’s usually a tedious process and a science in itself that requires a good background and understanding of how things work “behind the scenes”, as well as of how to help the optimizer “make” the right decisions.
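As a quick taste, Query Store is switched on per database and then queried through a set of catalog views. A minimal sketch (the database name and the retention setting are just examples, not a recommendation) might look like this:

-- Enable Query Store on a database (SQL Server 2016 / Azure SQL Database)
ALTER DATABASE MyAppDb SET QUERY_STORE = ON;
ALTER DATABASE MyAppDb SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30));

-- Browse captured query texts, plans and runtime statistics
SELECT TOP 10 qt.query_sql_text, rs.avg_duration, rs.count_executions
FROM   sys.query_store_query_text qt
JOIN   sys.query_store_query q          ON q.query_text_id = qt.query_text_id
JOIN   sys.query_store_plan p           ON p.query_id = q.query_id
JOIN   sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;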

Here are some great links to start with:

If you don’t have a dev/testing environment, you can even start with Azure DB, which already has this feature. Just note that this feature is currently at a CTP version, meaning that it is not yet supported. Azure DB is affordable. Another option would be using a dev/test Azure Virtual Machine with SQL Server 2016 on it.


Good luck, and feel free to leave us questions or comments!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

December 9: BlackRock―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Mon, 2015-11-30 14:06

Join us for an Oracle HCM Cloud Customer Forum on Wednesday, December 9, 2015, at 9:00 a.m. PDT.

Hubert Winter, Managing Director from BlackRock, will talk about how, in an effort to improve operational efficiency by eliminating manual processes and increasing transparency for managers, BlackRock decided to modernize its current corporate applications landscape.

As a first step, the company migrated its HR system from Oracle E-Business Suite to Oracle HCM Cloud, including Core HR, Compensation, Absence Management, and HR Analytics. In this call, Winter will discuss the business value BlackRock gained and what its next steps are as it continues its transformation.

During this Customer Forum call, Linda Fishman, Senior Director, Oracle HCM Cloud, will speak with Winter about the selection process for new HR software, the implementation experience with Oracle HCM Cloud, and the expectations and benefits of the new modern HR system.

BlackRock is the leading global asset manager, serving many of the world's largest companies, pension funds, foundations, and public institutions as well as millions of people from all walks of life. BlackRock has 13,000 employees in more than 30 countries around the world.

Register now to attend the live Forum on Wednesday, December 9, 2015, at  9:00 a.m. PDT and learn more about BlackRock's experience with Oracle HCM Cloud.

Disrupting Civic Digital Engagement

WebCenter Team - Mon, 2015-11-30 13:24

By Franco Amalfi, Director, Digital Engagement Strategy of Oracle Public Sector North America. Original write up can be found here.

Digital experiences are now the cornerstone of interactions with government services, driven in part by the rapid spread of web, mobile, and social channels. Citizens are in control. They decide how, when, and where they want to engage with organizations – whether it is in a service center, over the phone, via a mobile app, email, or on the web. No matter which channel they choose, constituents expect government to recognize who they are and what processes or interactions are underway, and to do so at every touch point. Given the fluidity of today’s online interactions, citizens are often one click away from broadcasting a bad experience with your city to hundreds of friends, or abandoning an online service for a high-cost in-person transaction.

The more digital engagement becomes culturally ingrained, the more vital it becomes to providing good customer service. To better reach customers, companies are using digital technology to promote, deliver, and enable innovative services that reliably create positive experiences for their target audiences.  Now it’s up to government to meet this new standard. To reach constituents in the digital age, successful cities must cultivate those same instincts. They need to learn how to use technology to deliver their services and messages in a way that appeals to their customers – tech-savvy modern citizens. In doing so, they can build better relationships and interactions with all constituents – any time, any place, any device.

“The innovative use of technology is fundamentally transforming how the American people do business and live their daily lives,” President Barack Obama stated in 2012, as he announced the Federal government’s Digital Government initiative in a memorandum to heads of executive departments and agencies.[1]

Citizens want to feel, in their communications with government agencies, like their voices are being heard. In fact, in a January 2014 survey of 5,000 citizens from 10 countries, 76 percent of the Americans polled wanted more citizen participation and engagement in government, answering “tend to agree” or “strongly agree” to the question “Should people be more involved in shaping how public services are designed or delivered?”[2]

Digital engagement empowers constituents with the ability to learn, share, engage, and let their voices be heard – in real-time. People want to be able to access government services and solutions anytime and anyplace. Digital engagement technologies help government effectively meet those needs. The initial building blocks for success for cities are to:
  • Establish an expectation of excellence. Public perception of government services is generally low.  Leaders in this space need to define an internal and external expectation of quality, timely service and manage to that expectation.  Digital experience technologies like social media, mobile apps, email, and others make it easy to service new channels, integrate them into existing work teams, and exceed expectations.  These tools help highlight positive events, promote new innovative campaigns and gather feedback on existing initiatives.
  • Enhance customer service. Demand for services from constituents is constantly increasing. By adopting digital technologies, such as web, mobile, and self-service, government agencies can deliver a personalized experience, and balance responsiveness while containing costs, leading to increased customer satisfaction while meeting resource constrictions.
  •  Promote civic participation: Engage citizens in improving processes for your city.  Citizens want to feel that they are making a difference. Providing citizens with a platform to collaborate with city officials through digital technologies will lead to the formation of innovative ideas that will create value for all, and increase the sense of belonging to the community. 
  • Empower employees. Like other citizens, government employees embrace the efficiencies and effectiveness of digital technologies in their personal lives.  Empower them to serve constituents with these same services and channels and help them know that they are providing great service. Engaged employees will deliver better experiences to your citizens, leading to increased customer and employee satisfaction.
Citizens of the digital age want easy access to the agencies that serve them. To meet that need, cities must modernize and focus on orchestrating personalized interactions with each and every constituent across channels and devices. They must understand the evolving needs of their constituents and use a constituent-centric approach to engage with their audience, taking advantage of all the tools and channels available to serve customers of all ages and in all demographic groups, at home and across the globe.
[2] All Accenture data and references sourced from “Digital Government: Pathways to Delivering Public Services for the Future,” published January 2014


Franco Amalfi is Director, Digital Engagement Strategy  of Oracle Public Sector North America. He can be contacted at

Student Course Evaluations and Impact on Active Learning

Michael Feldstein - Mon, 2015-11-30 10:51

By Phil Hill

The Chronicle has an article out today, “Can the Student Course Evaluation Be Redeemed?”, that rightly points out how student course evaluations are often counter-productive to improving teaching and learning. The article refers to a Stanford professor’s call for an instructor-completed “inventory of the research-based teaching practices they use”, but most of the article centers on a revised course evaluation tool from a Kansas State University spin-off (the IDEA Center). One of the key problems described is that “administrators often take their results as numerical gospel”; another is faculty misapplying the results.

However they’re used, a lot of course evaluations simply aren’t very good, [IDEA president] Mr. Ryalls says.

But as flawed as they are, faculty members still turn to them as some gauge of effectiveness in the classroom. About three-quarters of instructors use formal evaluations and informal feedback “quite a bit” or “very much” when altering their courses, according to the Faculty Survey of Student Engagement.

One limitation of many tools is that they ask students things they don’t really know. A frequent example: Was your instructor knowledgeable about course content?

There is one additional problem with most student course evaluations that is not explicitly covered in the Chronicle article – students newly involved in active learning approaches often rate the course and instructor poorly even if they end up learning more effectively. We saw this in our e-Literate TV case study at UC Davis. In a previous post we highlighted how the routine hard work required of students in active learning courses can lead to poor evaluations, but later in the interview student course evaluations came up as a major barrier to improving teaching practices.

Phil Hill: Catherine, especially with even more of a firsthand view, what do you see as the biggest barrier?

Catherine Uvarov: Well, in a way, I was fortunate because I was more a newbie instructor, so I didn’t have like 20 years of experience where I had done it this other way. Just coming in and telling instructors, “Hey, that thing that you’ve been doing for 20 years. You could be doing it better.” They don’t want to hear that. They have worked very hard over the past 15-, 20-plus years to optimize their instructional methods to the best of their ability within their set of norm practices.

Chris Pagliarulo: And the feedback that they were getting.

Catherine Uvarov: And the feedback, so there is a huge emphasis on student evaluations and how much students like you, which is not really correlated at all with how much they’re actually learning. So, if that’s the only measure of student learning or a student—anything in the class—is student evaluations, then that’s what the instructor is tuning for.

They’re not really figuring out if their students are learning or turning the mirror on themselves and saying, “What can I do to improve my student’s learning?” They’re just saying, “What can I do to make my students like me better?”

Phil Hill: Actually, I’d like you to go a little bit more detail on course evaluations as they’re currently used. I think I heard you say those are more based on, “Do students like me?” So, what do the current course evaluations really measure? What direction does it push faculty?

Catherine Uvarov: In my opinion, the student evaluations are pretty much worthless because the questions that they ask are very generic. It’s like, “Does the person speak loud? Are their visual aids clear?” It’s very generic and bland, and then it gets down to the only question that they really care about—rate the overall performance of this instructor.

What we have found in my flipped class and in any of these where the lecturer is changing their style and making the emphasis more on the students, the students are thinking, “Well, I learned all of the material on my own, so the instructor didn’t teach me that material. I’m going to rate the instructor lower because they were not as valuable to me.”

Erin Becker: When you make the students do more work, they don’t like you as much, and that hurts your course evaluations, which in turn feeds back in to the incentivization issue.

Marc Faciotti: It’s a challenge. If you’re not thinking about education all day—and most of us have research labs that occupy a lot of time as well (administrative duties and all that type of thing)—so if you don’t have training there, there’s a lot of catching up to do. Most institutions have great resources on campus. There’s people dying here at iAMSTEM to help and to catalyze some of these things. So, seek help, be realistic about how much you’re going to change the first time around, and have kind of a long-term plan for what you’d like to achieve.

Marco Molinaro: I think the biggest barrier we have right now is that the faculty rewards system doesn’t yet take in to account this type of experimentation and doesn’t really promote a faculty member based on the quality of their instruction and the effects that they’ve had on student learning.

Later in the Chronicle article there is a discussion about whether to scuttle student evaluations altogether. I strongly agree with this conclusion:

For Mr. Ryalls, of IDEA, the problems with students’ evaluations shouldn’t scuttle their use altogether. “What drives me crazy,” he says, “is this notion that students don’t know what the hell they’re talking about.” They spend more time than anyone else watching faculty members teach, he says. “Student voice matters.”

The post Student Course Evaluations and Impact on Active Learning appeared first on e-Literate.

RAM is the new disk – and how to measure its performance – Part 3 – CPU Instructions & Cycles

Tanel Poder - Mon, 2015-11-30 00:45

If you haven’t read the previous parts of this series yet, here are the links: [ Part 1 | Part 2 ].

A Refresher

In the first part of this series I said that RAM access is the slow component of a modern in-memory database engine and for performance you’d want to reduce RAM access as much as possible. Reduced memory traffic thanks to the new columnar data formats is the most important enabler for the awesome In-Memory processing performance and SIMD is just icing on the cake.

In the second part I also showed how to measure the CPU efficiency of your (Oracle) process using the Linux perf stat command. How well your applications actually utilize your CPU execution units depends on many factors. The biggest factor is your process’s cache efficiency, which depends on the CPU cache size and your application’s memory access patterns. Regardless of what OS CPU accounting tools like top or vmstat may show you, your “100% busy” CPUs may actually spend a significant amount of their cycles internally idle, with a stalled pipeline, waiting for some event (like a memory line arrival from RAM) to happen.

Luckily there are plenty of tools for measuring what’s actually going on inside the CPUs, thanks to modern processors having CPU Performance Counters (CPC) built in to them.

A key derived metric for understanding CPU-efficiency is the IPC (instructions per cycle). Years ago people were actually talking about the inverse metric CPI (cycles per instruction) as on average it took more than one CPU cycle to complete an instruction’s execution (again, due to the abovementioned reasons like memory stalls). However, thanks to today’s superscalar processors with out-of-order execution on a modern CPU’s multiple execution units – and with large CPU caches – a well-optimized application can execute multiple instructions per a single CPU cycle, thus it’s more natural to use the IPC (instructions-per-cycle) metric. With IPC, higher is better.

Here’s a trimmed snippet from the previous article, a process that was doing a fully cached full table scan of an Oracle table (stored in plain old row-oriented format):

Performance counter stats for process id '34783':

      27373.819908 task-clock                #    0.912 CPUs utilized
    86,428,653,040 cycles                    #    3.157 GHz                     [33.33%]
    32,115,412,877 instructions              #    0.37  insns per cycle
                                             #    2.39  stalled cycles per insn [40.00%]
    76,697,049,420 stalled-cycles-frontend   #   88.74% frontend cycles idle    [40.00%]
    58,627,393,395 stalled-cycles-backend    #   67.83% backend  cycles idle    [40.00%]
       256,440,384 cache-references          #    9.368 M/sec                   [26.67%]
       222,036,981 cache-misses              #   86.584 % of all cache refs     [26.66%]

      30.000601214 seconds time elapsed

The IPC of the above task is pretty bad – the CPU managed to complete only 0.37 instructions per CPU cycle. On average every instruction execution was stalled in the execution pipeline for 2.39 CPU cycles.
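To make those two numbers concrete, they are simple ratios of the raw counters in the perf output above:

insns per cycle         = 32,115,412,877 / 86,428,653,040 ≈ 0.37
stalled cycles per insn = 76,697,049,420 / 32,115,412,877 ≈ 2.39   (frontend stall cycles / instructions)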

Note: Various additional metrics can be used for drilling down into why the CPUs spent so much time stalling (like cache misses & RAM access). I covered the typical perf stat metrics in the part 2 of this series so won’t go in more detail here.

Test Scenarios

The goal of my experiments was to measure the CPU efficiency of different data scanning approaches in Oracle – on different data storage formats. I focused only on data scanning and filtering, not joins or aggregations. I ensured that everything would be cached in Oracle’s buffer cache or in-memory column store for all test runs – so disk IO was not a factor here (again, read more about my test environment setup in part 2 of this series).

The queries I ran were mostly variations of this:

SELECT COUNT(cust_valid) FROM customers_nopart c WHERE cust_id > 0

Although I was after testing the full table scanning speeds, I also added two examples of scanning through the entire table’s rows via index range scans. This allows me to show how inefficient index range scans can be when accessing a large part of a table’s rows even when all is cached in memory. Even though you see different WHERE clauses in some of the tests, they all are designed so that they go through all rows of the table (just using different access patterns and code paths).

The descriptions of test runs should be self-explanatory:


SELECT /*+ MONITOR INDEX(c(cust_postal_code)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_postal_code > '0';


SELECT /*+ MONITOR INDEX(c(cust_id)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_id > 0;


FROM customers_nopart c WHERE cust_id > 0;


FROM customers_nopart c WHERE cust_id > 0;


FROM customers_nopart c;


SELECT /*+ MONITOR */ COUNT(cust_valid) 
FROM customers_nopart_hcc_ql WHERE cust_id > 0

Note how all experiments except the last one are scanning the same physical table just with different options (like index scan or in-memory access path) enabled. The last experiment is against a copy of the same table (same columns, same rows), but just physically formatted in the HCC format (and fully cached in buffer cache).
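The DDL for that HCC copy isn’t shown here, but on a platform that supports Hybrid Columnar Compression it could be created with something along these lines (a sketch, not necessarily the exact statement used):

create table customers_nopart_hcc_ql
compress for query low
as
select * from customers_nopart;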

Test Results: Raw Numbers

It is not enough to just look into the CPU performance counters of the different experiments; they are too low level. For the full picture, we also want to know how much work (like logical IOs etc.) the application was doing and how many rows were eventually processed in each case. I also verified, using the usual Oracle metrics, that I got exactly the desired execution plans and access paths and that no physical IOs or other wait events happened (see the log below).

Here’s the experiment log file with full performance numbers from SQL Monitoring reports, Snapper and perf stat:

I also put all these numbers (plus some derived values) into a spreadsheet. I’ve pasted a screenshot of the data below for convenience, but you can access the entire spreadsheet with its raw data and charts here (note that the spreadsheet has multiple tabs and configurable pivot charts in it):

Raw perf stat data from the experiments:


Now let’s plot some charts!

Test Results: CPU Instructions

Let’s start from something simple and gradually work our way deeper. I will start by listing the task-clock-ms metric that shows the CPU time usage of the Oracle process in milliseconds for each of my test table scans. This metric comes from the OS level and not from within the CPU:

[Chart: CPU time used for scanning the dataset (in milliseconds)]

As I mentioned earlier, I added two index (full) range scan based approaches for comparison. Looks like the index-based “full table scans” seen in the first and second columns use the most CPU time as the OS sees it (~120 and close to 40 seconds of CPU respectively).

Now let’s see how many CPU instructions (how much work “requested” from CPU) the Oracle process executed for scanning the same dataset using different access paths and storage formats:

[Chart: CPU instructions executed for scanning the dataset]

Wow, the index-based approaches seem to be issuing multiple times more CPU instructions per query execution than any of the full table scans. Whatever loops the Oracle process is executing for processing the index-based query, it runs more of them. Or whatever functions it calls within those loops, the functions are “fatter”. Or both.

Let’s look into an Oracle-level metric session logical reads to see how many buffer gets it is doing:

[Chart: Buffer gets done for a table scan]


Wow, using the index with a bad clustering factor (1st bar) causes Oracle to do over 60M logical IOs, while the table scans do around 1.6M logical IOs each. Retrieving all rows of a table via an index range scan is super-inefficient, given that the underlying table size is only 1,613,824 blocks.

This inefficiency is due to index range scans having to re-visit the same datablocks multiple times (up to one visit per row, depending on the clustering factor of the index used). This would cause another logical IO and use more CPU cycles for each buffer re-visit, except in cases where Oracle has managed to keep a buffer pinned since last visit. The index range scan with a good clustering factor needs to do much fewer logical IOs as given the more “local” clustered table access pattern, the re-visited buffers are much more likely found already looked-up and pinned (shown as the buffer is pinned count metric in V$SESSTAT).
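If you want to check this effect on your own system, the clustering factor is visible in the data dictionary; a quick (hypothetical) query for the test table’s indexes might look like this:

select index_name, clustering_factor, leaf_blocks
from   user_indexes
where  table_name = 'CUSTOMERS_NOPART';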

Knowing that my test table has 69,642,625 rows in it, I can also derive an average CPU instructions per row processed metric from the total instruction amounts:

[Chart: average CPU instructions per row processed]

The same numbers in tabular form:

[Table screenshot: CPU instructions per row by test case]

Indeed there seem to be radical code path differences (that come from underlying data and cache structure differences) that make an index-based lookup use thousands of instructions per row processed, while an in-memory scan with a single predicate used only 102 instructions per row processed on average. The in-memory counting without any predicates didn’t need to execute any data comparison logic in it, so could do its data access and counting with only 43 instructions per row on average.

So far I’ve shown you some basic stuff. As this article is about studying the full table scan efficiency, I will omit the index-access metrics from further charts. The raw metrics are all available in the raw text file and spreadsheet mentioned above.

Here are again the buffer gets of only the four different full table scan test cases:

[Chart: Buffer gets done for full table scans]

All test cases except the HCC-compressed table scan cause the same amount of buffer gets (~1.6M) as this is the original table’s size in blocks. The HCC table is only slightly smaller – didn’t get great compression with the query low setting.
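If you wanted to verify those relative segment sizes yourself, a quick check against the data dictionary (assuming you own the tables; otherwise use dba_segments) would be something like:

select segment_name, blocks, round(bytes/1024/1024) mb
from   user_segments
where  segment_name in ('CUSTOMERS_NOPART', 'CUSTOMERS_NOPART_HCC_QL');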

Now let’s check the number of CPU instructions executed by these test runs:

[Chart: CPU instructions executed for full table scans]

Wow, despite the table sizes and number of logical IOs being relatively similar, the amount of machine code the Oracle process executes is wildly different! Remember, all my query is doing is scanning and filtering the data, followed by a basic COUNT(column) operation – no additional sorting or joining is done. The in-memory access paths (columns 3 & 4) get away with executing far fewer CPU instructions than the regular buffered table in row format and the HCC format (columns 1 & 2 in the chart).

All the above shows that not all logical IOs are equal. Depending on your workload and execution plans (how many block visits, how many rows extracted per block visit) and the underlying storage formats (regular row format, HCC in the buffer cache, or compressed columns in the In-Memory column store), you may end up doing a different amount of CPU work per row retrieved for your query.

This was true before the In-Memory option and even more noticeable with the In-Memory option. But more about this in a future article.

Test Results: CPU Cycles

Let’s go deeper. We already looked into how many buffer gets and CPU instructions the process executed for the different test cases. Now let’s look into how much actual CPU time (in form of CPU cycles) these tests consumed. I added the CPU cycles metric to instructions for that:

[Chart: CPU instructions and cycles used for full table scans]

Hey, what? How come the regular row-oriented block format table scan (TABLE BUFCACHE) takes over twice as many CPU cycles as the instructions it executed?

Also, how come all the other table access methods use noticeably fewer CPU cycles than the number of instructions they’ve executed?

If you paid attention to this article (and previous ones) you’ll already know why. In the 1st example (TABLE BUFCACHE) the CPU must have been “waiting” for something a lot, instructions having spent multiple cycles “idle”, stalled in the pipeline, waiting for some event or necessary condition to happen (like a memory line arriving from RAM).

For example, if you are constantly waiting for the “random” RAM lines you want to access due to inefficient memory structures for scanning (like Oracle’s row-oriented datablocks), the CPU will be bottlenecked by RAM access. The CPU’s internal execution units, other than the load-store units, would be idle most of the time. The OS top command would still show you 100% utilization of a CPU by your process, but in reality you could squeeze much more out of your CPU if it didn’t have to wait for RAM so much.

In the other 3 examples above (columns 2-4), apparently there is no serious RAM (or other pipeline-stalling) bottleneck as in all cases we are able to use the multiple execution units of modern superscalar CPUs to complete more than one instruction per CPU cycle. Of course more improvements might be possible, but more about this in a following post.

For now I’ll conclude this (lengthy) post with one more chart with the fundamental derived metric instructions per cycle (IPC):

[Chart: instructions per cycle (IPC) for full table scans]

The IPC metric is derived from the previously shown instructions and CPU cycles metrics by a simple division. Higher IPC is better as it means that your CPU execution units are more utilized, it gets more done. However, as IPC is a ratio, you should never look into the IPC value alone, always look into it together with instructions and cycles metrics. It’s better to execute 1 Million instructions with IPC of 0.5 than 1 Billion instructions with an IPC of 3 – but looking into IPC in isolation doesn’t tell you how much work was actually done. Additionally, you’d want to use your application level metrics that give you an indication of how much application work got done (I used Oracle’s buffer gets and rows processed metrics for this).

Looks like there’s at least 2 more parts left in this series (advanced metrics and a summary), but let’s see how it goes. Sorry for any typos, it’s getting quite late and I’ll fix ’em some other day :)


NB! Dates updated: After a 1.5 year break, this year’s only Advanced Oracle Troubleshooting training class (updated with Oracle 12c content) takes place on 14-18 December 2015 and 11-15 January 2016 (I had to reschedule the start from November to December). So sign up now if you want to learn new cool stuff!



Jonathan Lewis - Mon, 2015-11-30 00:03

This is the text of the “whitepaper” I submitted to DOAG for my presentation on “Core Strategies for Troubleshooting”.


In an ideal world, everyone who had to handle performance problems would have access to ASH and the AWR through a graphic interface – but even with these tools you still have to pick the right approach, recognise the correct targets, and acquire information at the boundary that tells you why you have a performance problem and the ways in which you should be addressing it.

There are only three ways you can waste resources on an Oracle system, and only three different types of activity that need to be investigated. If you don’t appreciate that this is the case then you can waste a lot of time following the wrong strategy and attempting to solve the wrong problems. Once you have a clear statement of what you are trying to achieve it becomes much easier to achieve those aims.

Three ways to cause problems

There really are only three symptoms you need to look out for in the database

  • You’re doing it the hard way
  • You’re doing it too often
  • You’re queueing / locking

Inevitably you can see that there is going to be some overlap between the three (and it would be easy to argue that the third is simply a side effect of the other two). If you are executing a single statement “the hard way” you’re likely to be doing single block reads or buffer gets too often. If you’re executing a very lightweight statement too often it’s probably a step in a process that is attempting to get a job done “the hard way”, and it’s probably causing (and getting stuck in) queues for latches and mutexes. I’ve included queueing as the third option because the simple action of locking data (deliberately, or by accident) doesn’t fall neatly into the other two.

Another way of looking at this list is to reduce it to just two items with a slightly different flavour: when you spend too much time on a task it’s because you’re either doing too much work, or you’re not being allowed to work.

Three classes of problems to solve

An important aspect of performance problems is the high-level classification; labelling the class of problem properly points you to the correct strategy for investigating the problem. Again there are only three possibilities in the list, which I’ll express as typical complaints:

  • My report is taking too long to run / screen is taking too long to refresh
  • The batch job over-ran the SLA last night
  • The “system” is slow

What’s the most significant difference between the three classes of complaint?

  • “My XXX takes too long”: XXX is a repeatable event that can be isolated and watched – just do it again, and again, and again, and again while I watch every step of what’s going on.
  • The batch job over-ran last night: It’s not a repeatable process, so you’ve got to infer what the problem was from historical evidence; it’s (usually) made up of a number of concurrent processes, which may interfere with each other to varying degrees depending on their relative start and finish times.
  • The “system” is slow: possibly all the time, possibly intermittently – if there’s no specific complaint then the only option is to keep an eye open for resource-intensive activity to see if you can reduce the workload of specific individual tasks (for which read SQL or PL/SQL statements) or reduce the number of times that those tasks are executed.
The common source

Despite the fact that we can classify performance problems in three ways, it’s worth remembering that ALL the information we might use to drive our diagnosis comes from one place – the Oracle database. At any moment we have sessions that are active, operating statements (cursors), and using a resource such as a file, a latch, a buffer, and so on. We could almost represent each moment by a cube, with sessions along one side, cursors along another, and resources along the third – the cube changes moment by moment, allowing us to visualise time as the fourth dimension in a hypercube of activity.

Instant by instant the Oracle kernel code knows which session is using what resource to operate which cursor – and although the total volume of all that information is far more than could reasonably be recorded, Oracle has many different ways of slicing, dicing and capturing parts of that hypercube – running totals, rolling aggregates, snapshots by session, by cursor, by resource and so on – that are made visible as the dynamic performance views (v$ objects). Trouble-shooting is largely a case of deciding which dynamic performance views are the most appropriate to use for our three classes of task.

Active Session History

Before reviewing the three classes, it’s worth breaking off for a moment to say a few things about one of the most important and useful views that we have into the wealth of information available; this is the active session history (v$active_session_history / ASH) which Oracle uses to capture a snapshot once every second of what each currently active session is doing; every 10th snapshot is then echoed down into the Automatic Workload Repository (AWR) by a process that runs every hour to copy the dynamic view to a table that can be viewed through the database view dba_hist_active_sess_history.

The capture rates can all be adjusted: I have never seen anyone change from one snapshot per second, or every 10th snapshot in a production system, but I’ve often seen the dump to the AWR taken every 30 minutes, occasionally 20 or even 15 minutes. On occasion I have asked clients to do a CTAS (create table as select – nologging) to capture the whole of the v$active_session_history to a table that can be exported and reviewed at a later date. Oracle tries to keep a minimum of at least an hour’s ASH in memory but, with a large enough SGA, you may find that this will stretch out to 3 or 4 hours.
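That CTAS is as simple as it sounds; a minimal sketch (the table name is invented) would be:

create table my_ash_copy nologging
as
select * from v$active_session_history;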

Unfortunately although (or perhaps because) ASH and its AWR history are extremely helpful, you have to pay extra licence fees to use the information, and the technology can only be licensed with the Enterprise Edition of Oracle.

My report is slow

The special feature of someone complaining about a specific task is that it’s likely to be repeatable – so we can run it again and again and watch every single detail to see where the time goes. Our slice through the hypercube could take a single session over a period of time and report every action along that path. This, of course, is the 10046 event – a.k.a. the extended SQL trace. We can enable it in many ways, perhaps through a logon trigger, perhaps through a call to dbms_monitor:

begin
	dbms_monitor.session_trace_enable(
		session_id => &m_sid,
		serial_num => &m_serial,
		waits      => true,
		binds      => true,
		plan_stat  => 'all_executions'
	);
end;
/

In this example I’ve requested all wait states and bind variables to be dumped into the trace file, and I’ve also requested that the execution plan (with rowsource execution stats) be dumped for every single execution of every single statement. Sometimes a problem arises because a particular set of bind variables represents a special case that causes a “reasonable” plan to behave very badly. If we’re going to look closely we may as well get as much detail as possible.

The entire “trace” interface was upgraded dramatically in 11g, and one of the useful variants on this theme is particularly relevant to a commonly used Web-based implementation. If you know that a specific screen task corresponds to a particular PL/SQL package you can enable tracing of a cursor (across the system, if necessary) by SQL_ID. So, for example, you might issue the following two commands, with a couple of minutes gap between the two:

alter system
set events 'sql_trace[SQL:1wthpj7as7urp]
wait=true, bind=true';

-- wait a few minutes

alter system
set events 'sql_trace[SQL:1wthpj7as7urp] off';

Every time the statement with SQL_ID = '1wthpj7as7urp' is executed, the session executing it will start adding information to the session trace file, and when the statement ends the tracing will end. This is particularly nice if the “statement” is a top-level call to a PL/SQL procedure because all the SQL inside the procedure will be traced as the package executes.

For a highly focused, highly repeatable task, the 10046 trace event is almost always all you need to do.

The batch over-ran

The big difference between this case and the previous one is that “the batch” is not something you can simply repeat and watch. Moreover, “the batch” is likely to be a large number of separate sections of code that are scheduled to run with a fairly fluid timetable that can result in changes from day to day (or, more likely, night to night) in the set of jobs that might be running concurrently. This means that even if you could re-run the batch job (perhaps on the previous night’s backup) you might not see the same problem appear, because a small change in timing could result in a large change in contention.

One of the most important steps of dealing with the batch is pre-emptive: instrument your code and make it possible to compare the run on one night with the run on another. At the very least you need to have something capturing the start and end times of each “significant component” of the batch so you can quickly answer questions like: “which jobs took much longer than usual”, “which job was the first job that took longer than usual”, “which jobs were running concurrently with job X last night when they never usually overlap?”

Ideally you should have much more information than this about each job – basically a report from Oracle which says “how much work did I do, how much time did I spend”: for a session this is simply a report of v$mystat or v$sesstat (joined to v$statname) and v$session_event for the session (v$mystat is a less well-known view that is v$sesstat restricted to “my” session); if you classify each job as “connect to the database, do something, disconnect” then this critical log is simply a pair of select statements spooled out somewhere convenient, or written to the database; if you want to break a single connection into significant pieces then a simple pl/sql procedure could read the statistics into a pl/sql array as the piece starts, then re-read the stats and calculate the differences as the piece ends.
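As a minimal sketch of that “critical log” for the current session (spool or insert the output wherever is convenient; the queries are illustrative rather than prescriptive):

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
and    ms.value != 0;

select event, total_waits, time_waited_micro
from   v$session_event
where  sid = sys_context('userenv', 'sid');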

Knowing where the time went, and knowing how tasks have behaved differently from previous runs is a big step forward to identifying the problem.

If you don’t have the instrumentation you need then the AWR (if you’re licensed) or Statspack (if you’re not licensed) is a step in the right direction. Apart from the typical hourly snapshots and reports you can take a snapshot as the first and last steps of the batch so that you’ve got “the whole batch” in a single AWR/Statspack report. If you’ve got that you can then do comparisons for things like:

  • Which event consumed much more time than usual
  • Which SQL took much more time than usual
  • Which segment(s) saw much more activity than usual
  • Was there some unusual enqueue activity
  • Can we see some unusual outliers in the event histograms
  • Can we see some unusual memory demands in the pga histogram

Although system-wide summaries rarely point us at exact causes, they can often give us strong clues of areas (and times) where the problem originated.

In this respect the “Top Activity” screen from Enterprise Manager (Grid Control / Cloud Control) can be very helpful as it produces a nice graphical presentation of “working”. Where, in the picture of last night’s activity, does the graph start to ramp up, what colour is the bit that’s growing, and how does that picture compare to the same picture the previous night? (Having two windows open with two different nights makes it wonderfully easy to switch between displays and spot the differences.) Since the top activity screen is created from dba_hist_active_sess_history, which contains about 100 different details per session of each captured moment, it’s very easy to drill through the spikes to answer questions like: “which object”, “what event”, “which SQL”, “what was the execution plan”, “how much work did that take”, to follow the chain of time back to the cause.
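If you prefer to do the same drill-down in SQL rather than through the screen, a sketch against the AWR copy of ASH might look like the following (the time window and grouping columns are just an example):

select sql_id, event, count(*) ash_samples    -- each AWR sample represents roughly 10 seconds of DB time
from   dba_hist_active_sess_history
where  sample_time between timestamp '2015-11-29 22:00:00'
                       and timestamp '2015-11-30 02:00:00'
group by sql_id, event
order by count(*) desc;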

The system is slow

If no-one is going to tell you about specific tasks, and if you don’t have any sort of boundary that allows you to focus on tasks or time-ranges, then the simplest thing to do is look for anything expensive (i.e. time-consuming) and see if you can make it cheaper.

Again, the graphic “Top Activity” screen is very helpful, and I often tell people to arrange to have a system that shows the top activity screens for the most important 2 or 3 databases on a large screen on the wall where any passing DBA might notice a brief spike in workload. There are systems that can be improved by constant monitoring – so long as the monitoring doesn’t take up 100% of an individual’s time but is driven as an informal glance at a picture.

If you’re not licensed to take advantage of the AWR then Statspack can help – but with the smallest time interval (though 15 minutes is as low as I’ve ever gone) between snapshots, so that short-lived “anomalies” don’t fall out of memory before they can be captured.

An important feature of reading Statspack is that you need to check for missing information – if the headline figure for physical reads is 25M but the “SQL ordered by reads” is 12M then you know that there must be 13M reads that didn’t get captured in the report, and that might be the 13M that is causing the problem. Similarly if the “Segments by physical reads” reports 16M reads, that’s 4M more than the SQL – but is the 12M a subset of the 16M, or is there only a 3M overlap between the two figures so that between them the 12M and 16M cover the entire 25M? There’s more information in the Statspack report than immediately meets the eye, and a careful drilldown into other areas of the report (typically the Instance Activity) may be needed to clarify the meaning of what you’re seeing as a headline figure.

The other thing you can do with “the slow system” when you don’t have ASH to help is take snapshots (or get some freeware to do the same sort of thing). If the system is slow “right now” you could, for example, take a snapshot of v$sess_io (session I/O), wait 30 seconds then take another snapshot, find the difference and see who is doing most of the I/O work – then chase that session; or take snapshots of v$sesstat limited to (say) statistics like “%redo%” and find out who is generating lots of redo.
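A very crude sketch of that snapshot-and-diff idea using v$sess_io (the snapshot table name is invented; freeware tools will do this far more elegantly):

create table sess_io_snap1 as
select sid, block_gets, consistent_gets, physical_reads from v$sess_io;

-- wait about 30 seconds, then:

select   io.sid,
         io.physical_reads - s1.physical_reads  delta_physical_reads
from     v$sess_io io, sess_io_snap1 s1
where    s1.sid = io.sid
order by delta_physical_reads desc;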

Oracle allows you to take this approach back into recent history – there are a number of “metric” views which give you things like the rolling average, min, and max I/O volumes for the last few intervals of 5 minutes or 1 minute each – telling you, in other words, whether there were any interesting bursts of extreme activity in the recent past. For example, a query against v$sysmetric_summary might give you an output like the following:

------------------------ ------------- ----------- ------------ -----------------------
Physical Reads Per Sec        1,618.95      105.92       358.16 Reads Per Second
Physical Reads Per Txn       97,202.00    5,539.19    20,811.56 Reads Per Txn
Redo Generated Per Sec    6,773,108.94  218,132.86 1,023,458.57 Bytes Per Second
User Calls Per Txn              395.00       43.39        79.85 Calls Per Txn
Total Parse Count Per Sec        31.14        1.88         4.25 Parses Per Second
Host CPU Utilization (%)         64.51        3.93         9.07 % Busy/(Idle+Busy)
Database Time Per Sec            82.96        6.65        15.37 CentiSeconds Per Second
I/O Megabytes per Second         35.58        2.62         5.73 Megabtyes per Second

This summarises the last 12 intervals of 5 minutes. If we look at “Physical Reads per Txn” we can see that there were some extreme swings in activity over that period, so we could drill down into v$sysmetric_history for “Physical Reads per txn”, looking at the 1 minute granularity and see:

METRIC_NAME                    BEGIN_TIME                VALUE
Physical Reads Per Txn         05-feb 12:45:55          421.00
                               05-feb 12:44:55          477.00
                               05-feb 12:43:55          351.00
                               05-feb 12:42:55          406.84
                               05-feb 12:41:55        1,550.00
                               05-feb 12:40:55       93,984.00
                               05-feb 12:39:55       97,202.00
                               05-feb 12:38:55       93,323.00
                               05-feb 12:37:55          391.00
                               05-feb 12:36:55          504.00
                               05-feb 12:35:55          504.00
                               05-feb 12:34:55          252.00
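For reference, a query along these lines could have produced that listing (this is my reconstruction, not necessarily the exact SQL used; the group_id filter is an assumption – check v$metricgroup for the 60-second group on your system):

select metric_name, begin_time, value
from   v$sysmetric_history
where  metric_name = 'Physical Reads Per Txn'
and    group_id    = 2        -- assumed: the 60-second metric group
order by begin_time desc;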

Yes, a few minutes ago something exploded onto the system doing a huge amount of I/O for about 3 minutes. If we’re lucky we might now drill into the v$sesstat, or v$sess_io, or v$session_event to see if we can find a session that is responsible for a large amount of I/O; and then check v$open_cursor to see if it still has some open cursors that might (if we check v$sql) show us what caused the I/O.

When there are no specific complaints, we just keep an eye open for spikes in activity and try to track them down as quickly and cheaply as possible to see if they’re worth addressing.


Oracle gives you a huge amount of information about the work that’s going on and the time that’s being used in the database. Unfortunately the most useful repository of that information is in a dynamic performance view that can only be viewed in the Enterprise Edition after purchasing additional licences. However, the information is summarised, in many different ways in literally hundreds of other dynamic performance views, and it’s easy to pick out helpful information from those views in a variety of ways.

Key to making the best use of those views, though, is recognising that different classes of performance problems require different strategies – and there are only three different classes of problems to worry about.

A False Sense of Security, or how Database Developers can save £35 million

The Anti-Kyte - Sun, 2015-11-29 12:00

Deb’s been recovering from an operation recently. During her convalescence, I have been designated as her nurse and carer.
Careful planning was required prior to the op. She promised not to overdo things, and I promised to attend to her every need.
“I should have a little bell so I can ring when I need you”, she opined. “After all, that’s what they do in Downton Abbey”.

We don’t have a bell. Fortunately Deb did have something that would do the job. Last Christmas, a thoughtful relative had given her a toy gun, a replica of the sort that was in the Despicable Me films.

Yes, when my presence was required, Deb simply had to fire the Fart Gun.

In order to attempt to retain a little of the Downton aura that she had been so keen to capture, I did make a point of saying “You rang M’Lady”, in my best Parker-from-Thunderbirds voice whenever I was summoned by the sound of electronic flatulence.

It seems to have worked out OK in the end. Deb is now up and about, having survived a week of my cooking.

When not attending to my nursing duties, I did have the chance to catch up with the unfolding story about the latest TalkTalk cyber attack.
Things that, in retrospect, are quite obvious were rather less clear at the time they were reported.

To begin with, the report was of a Distributed Denial of Service (DDoS) attack which had resulted in a loss of customer data.
Was this some fiendishly clever variation on the theme of a DDoS attack? As far as I knew, such an exploit was designed solely to take down a site, not to obtain data.
Further “clarification” followed. It was reported that there had also been a “Sequential Attack”. I’ve never heard of one of those.
Just before I ran off to do some research, this was finally translated into techie – it was actually a SQL Injection (SQLi) attack.

Later, it was reported that TalkTalk have estimated the cost of this attack at up to £35 million.

Whilst it’s easy to laugh at the efforts of a CEO struggling with the terminology around this topic, it’s worth bearing in mind that the responsibility, ultimately, lies within the IT Department. But where? With the Network Admins, Architects, the DBAs?

SQLi has been around for longer than some of the people who are now making use of it.

As a Database Developer, you may well be feeling confident about security at this point.
After all, your code is sitting on the database server. Access to that server is probably restricted to a White List of machines, which will include the Application Server.
If you’re working with a public facing web application, the Application Server will be sitting behind a firewall.
Additionally, it may well be the case that the Application has been designed to implement the Data Access Layer pattern. Any database calls from the Controller layer are made to your PL/SQL packages which have been written to the Transactional API (XAPI) pattern. So, you don’t even need to wonder whether your Web Developers have used prepared statement calls.
On top of that, the application only connects to the database as a minimally privileged database user. The Application Owning schema itself is actually locked.

What we’re going to cover here is a Database Developer’s eye view of how such an application might look.
We’ll also look at just why these multiple layers of security provide no protection whatsoever against a SQL Injection attack.

Once we’ve compromised the application and caused a bit of havoc, we’ll look at how we, as Database Developers, can ensure that our code is resistant to this kind of attack.

The Application

Our application is running on an Oracle 11g database. The front-end and mid-tier layers can use whatever technology takes your fancy.
For the purposes of the demonstrations that follow, I’ll be using SQL*Plus to simulate calls from the application into the PL/SQL packages that comprise the DAL layer.
I’m not a networking expert so won’t embarrass myself by going into detail about the Network Security mechanisms that might be in place for this application.
For the purposes of the examples that follow, we’ll assume that our Network Admins know what they’re doing and that Network Security is adequate for our application.

In terms of how our application looks from the view of the Database Developers…

DAL and XAPI Patterns

In an n-tier application using the Oracle RDBMS, the Data Access Layer (DAL) usually interacts with the database by means of calls to PL/SQL packages.
These packages are normally written to conform to the Transaction API (XAPI) pattern.

In other words, the mid-tier code contains no SQL statements. Instead, it simply issues calls to the PL/SQL packages, where the SQL is encapsulated.

The PL/SQL packages themselves implement the required transactions with the database.

Functional Requirements

Using the HR schema as an example, let’s see what the XAPI packages to handle Department information might look like.

There are some assumptions that I’ve made in writing this code.

For DML operations on a department, the user will select the relevant department from a drop-down list. The value passed into the package will be the department_id.
The same applies to Locations and Managers.

A user can search for a department using any (or none) of the following criteria :

  • Department Name (translated to Department ID)
  • Manager Name (translated to Manager ID)
  • Location (translated to location_id)
  • Country Name

Based on these assumptions, we may have a package that looks like this :

create or replace package manage_departments
as
    procedure add_department(
        i_department_name departments.department_name%type,
        i_location_id departments.location_id%type default null,
        i_manager_id departments.manager_id%type default null);

    procedure delete_department( i_department_id departments.department_id%type);

end manage_departments;
/

create or replace package body manage_departments
as
    procedure add_department(
        i_department_name departments.department_name%type,
        i_location_id departments.location_id%type default null,
        i_manager_id departments.manager_id%type default null)
    is
    -- Create a new Department.
    -- The location and manager are optional in the underlying table so the
    -- business rule is that a department may be created without either being specified
    begin
        if i_department_name is null then
            raise_application_error(-20000, 'You must provide a name for the new Department');
        end if;
        insert into departments( department_id, department_name, location_id, manager_id)
        values( departments_seq.nextval, i_department_name, i_location_id, i_manager_id);
    end add_department;

    procedure delete_department( i_department_id departments.department_id%type)
    is
    -- Remove a department. Once again, the Referential Integrity in the data model
    -- should prevent the removal of a department that still has Employees working in it.
    begin
        delete from departments
        where department_id = i_department_id;
    end delete_department;
end manage_departments;
/

We’ll come onto the routine for implementing the Department search shortly.

First though, it’s worth noting that the way input parameters are used in the package’s procedures ensures that they are bound at runtime…

begin
    manage_departments.add_department('My Test Department');
end;
/

PL/SQL procedure successfully completed.

commit;

Commit complete.

select department_id
from departments
where department_name = 'My Test Department';

DEPARTMENT_ID
-------------
          290

If we now look at the statement that was actually run, we can see that the variables have been bound into the query :

select sql_text
from v$sqlarea
where lower(sql_text) like 'insert into departments%';
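The output will be something along these lines (the exact bind variable names that PL/SQL generates may differ) :

SQL_TEXT
--------------------------------------------------------------------------------
INSERT INTO DEPARTMENTS( DEPARTMENT_ID, DEPARTMENT_NAME, LOCATION_ID, MANAGER_ID) VALUES( DEPARTMENTS_SEQ.NEXTVAL, :B3 , :B2 , :B1 )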


The principle of Least Privilege

There are still a fair number of applications which implement these patterns yet which also initiate database connections as the Application Owner schema.
Let’s have a look at exactly what privileges the Application Owner has.

select privilege
from session_privs


14 rows selected.

That’s a fair amount of stuff, without even looking at what object privileges may also be granted.
It’s not unheard of for Application Owner accounts to be very highly privileged, even to the point of having CREATE USER.
Just as relevant is the fact that this user automatically has full DML privileges on the tables that it owns – i.e. the tables that comprise our application.

By contrast, having implemented the XAPI pattern, the only privileges required to run the application are :

  • Execute permissions on the XAPI packages

So, instead of connecting as HR, we can use a minimally privileged application user account.
Such an account could be created (connecting as a user with CREATE USER privs) using some variation of…

create user hr_app_user identified by some_password;

User created.

grant create session to hr_app_user;

Grant succeeded.

grant execute on hr.manage_departments to hr_app_user;

Grant succeeded.

-- Execute privs for other packages could go here
-- Alternatively, they could all be assigned to a role
-- which is then granted to this user

There are a couple of points worth noting here.
In reality you’d probably create this user to use some external authentication (e.g. Kerberos) to avoid the need to supply the password when connecting to the database.
It’s also worth bearing in mind that, even though it’s only been explicitly granted these privileges, the user will also inherit any privileges granted to PUBLIC.
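If you're curious about just how much that amounts to, a quick check (assuming access to the DBA views) is something like :

-- system and object privileges that every account, including HR_APP_USER, inherits via PUBLIC
select count(*) as public_sys_privs from dba_sys_privs where grantee = 'PUBLIC';
select count(*) as public_obj_privs from dba_tab_privs where grantee = 'PUBLIC';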

Now that the application connects as HR_APP_USER, we can even lock the HR account and the application will continue to function…

alter user hr account lock;

User altered.

select account_status
from dba_users
where username = 'HR';

ACCOUNT_STATUS
--------------------------------
LOCKED

Just to prove that everything still works, let’s remove the department we just created. Connected as hr_app_user…

select user
from dual;

USER
------------------------------
HR_APP_USER

exec hr.manage_departments.delete_department(290)

PL/SQL procedure successfully completed.

commit;

Commit complete.

We cannot directly query the departments table as the HR_APP_USER, but if we connect as a user with SELECT ANY TABLE, we can see that the department has indeed been deleted…

select department_name
from hr.departments
where department_id = 290

no rows selected

At this point it looks like our application is pretty secure. We’re not even using synonyms – another potential attack vector.

Now let’s introduce our Department Search routine…

NOTE – as the HR account is now locked, I’ve created the following packages by connecting as a user with CREATE ANY PROCEDURE and then issuing the command…

alter session set current_schema = hr;

create or replace package search_queries
as
    function get_departments(
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null)
        return sys_refcursor;
end search_queries;
/

create or replace package body search_queries
as
    function get_departments(
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null)
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country)

        l_stmnt varchar2(4000);
        l_where varchar2(4000);
        l_rc sys_refcursor;
    begin
        l_stmnt :=
            'select dept.department_name, coun.country_name,, '
            ||q'[ emp.first_name||' '||emp.last_name as manager_name ]'
            ||' from departments dept '
            ||' inner join locations loc '
            ||'     on dept.location_id = loc.location_id '
            ||' inner join countries coun '
            ||' on loc.country_id = coun.country_id '
            ||' left outer join employees emp '
            ||'     on dept.manager_id = emp.employee_id '
            ||' where 1 = 1 ';
        if i_department_id is not null then
            l_where := l_where||' and dept.department_id = '||i_department_id;
        end if;
        if i_manager_id is not null then
            l_where := l_where||' and dept.manager_id = '||i_manager_id;
        end if;
        if i_location_id is not null then
            l_where := l_where||' and dept.location_id = '||i_location_id;
        end if;
        if i_country_name is not null then
            l_where := l_where||q'[ and coun.country_name = ']'||i_country_name||q'[']';
        end if;
        if nvl(length(l_where), 0) > 0 then
            l_stmnt := l_stmnt||l_where;
        end if;
        open l_rc for l_stmnt;
        return l_rc;
    end get_departments;
end search_queries;
/

This is an extremely simplified example of a search routine. However, it’s still not obvious exactly what it’s meant to be doing at first glance.
Once you do figure out what’s going on, you can see that the variables are being concatenated into the string that builds the query which is ultimately executed.
That’s OK though, isn’t it ? After all, this is PL/SQL, where binding happens automagically.
Hmmm, let’s put our Black Hat on for a moment and test that assumption…

Using the Department Search to find out what the boss earns

To keep things simple, I’ll be using SQL*Plus to simulate the calls from the Application front-end
So, connected as hr_app_user…

select hr.search_queries.get_departments(10) from dual;



------------------------------ ------------------------------ -------------------- --------------------
Administration United States of America Seattle Jennifer Whalen

All seems to work as expected.

The same applies if we specify a country…

select hr.search_queries.get_departments(null, null, null, 'Canada') from dual;



------------------------------ ------------------------------ -------------------- --------------------
Marketing Canada Toronto Michael Hartstein

However, if we now get a bit creative, the result certainly isn’t what the developer intended…

select hr.search_queries.get_departments(null, null, null,
    'A'||chr(39)||' union select table_name, null, null, null from user_tables --')
from dual;



------------------------------ ------------------------------ -------------------- --------------------

8 rows selected.

Because the function is concatenating user input into the query, we can use CHR(39) (the single-quote character) to terminate the string literal the function is expecting, which allows us to concatenate our own code after it.
The “--” at the end of our input string simply ensures that anything in the statement after our input is treated as a comment.
Remember, there are no carriage returns in our statement as it’s built in the package.

The actual query being executed ( formatted to make it more readable) is :

select dept.department_name, coun.country_name,,
    emp.first_name||' '||emp.last_name as manager_name
from departments dept
inner join locations loc
    on dept.location_id = loc.location_id
inner join countries coun
    on loc.country_id = coun.country_id
left outer join employees emp
    on dept.manager_id = emp.employee_id
where 1 = 1
and coun.country_name = 'A'
union select table_name, null, null, null
from user_tables

Whilst HR_APP_USER itself does not own any tables, the query is being executed from a definer rights package owned by HR. Therefore we get a list of the tables that are owned by HR – the Application Tables.

Now, I wonder if there’s anything of interest in the EMPLOYEES table …

select hr.search_queries.get_departments(null, null, null,
    'A'||CHR(39)||q'[ union select table_name, column_name, null, null from user_tab_cols where table_name = 'EMPLOYEES' --]') 
from dual;



-------------------- -------------------- -------------------- --------------------

11 rows selected.

Now to make the Annual Review process in the company a bit more interesting, we can extract the name, salary and commission rate for every employee, together with their e-mail addresses…

select hr.search_queries.get_departments(null, null, null,
    'A'||CHR(39)|| q'[union select first_name||' '||last_name, email, to_char(salary), to_char(commission_pct) from employees --]')
from dual;



-------------------- -------------------- -------------------- --------------------
Adam Fripp AFRIPP 8200
Alana Walsh AWALSH 3100
Alberto Errazuriz AERRAZUR 12000 .3
Alexander Hunold AHUNOLD 9000
Alexander Khoo AKHOO 3100
Alexis Bull ABULL 4100
Allan McEwen AMCEWEN 9000 .35
Alyssa Hutton AHUTTON 8800 .25

107 rows selected.

…and send this information to everyone on the list.

Notwithstanding the considerable security precautions in place around this application, we’ve demonstrated how easy it is to compromise it using SQLi.
The precautions we have taken are no defence because, as far as the Application is concerned, it’s operating normally.

This demonstrates fairly graphically that, whilst security is the responsibility of Network Admins, Architects, and DBAs, it’s also the responsibility of us Database Developers.
So, let’s see how we can protect against this attack…

Binding the Variables in the Function

The immediate problem we’re faced with here is that there are multiple combinations of values that a user may choose to call the function with.
The USING clause of EXECUTE IMMEDIATE is not dynamic – we have to know the number of variables we’re binding into the query.
Therefore, if we’re going to persist with the Dynamic SQL approach we either need to

  • code for all possible parameter combinations (14 in this case)
  • use the DBMS_SQL package (a sketch of this option follows the list)
  • cheat
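A minimal sketch of the DBMS_SQL option, purely for illustration (the country value is hard-coded here to stand in for the parameter, and this is not the code we end up using), might look something like this :

declare
    l_cur     integer := dbms_sql.open_cursor;
    l_stmnt   varchar2(4000);
    l_dummy   integer;
    l_rc      sys_refcursor;
    l_country varchar2(60) := 'Canada'; -- stand-in for the i_country_name parameter
begin
    l_stmnt := 'select dept.department_name '
        ||' from departments dept '
        ||' inner join locations loc on dept.location_id = loc.location_id '
        ||' inner join countries coun on loc.country_id = coun.country_id '
        ||' where 1 = 1 ';
    if l_country is not null then
        -- a placeholder is only appended when we actually have a value for it...
        l_stmnt := l_stmnt||' and coun.country_name = :country';
    end if;
    dbms_sql.parse( l_cur, l_stmnt, dbms_sql.native);
    if l_country is not null then
        -- ...so the number of bind calls always matches the number of placeholders
        dbms_sql.bind_variable( l_cur, ':country', l_country);
    end if;
    l_dummy := dbms_sql.execute( l_cur);
    -- hand the result back as a ref cursor (11g and above); the caller then fetches from l_rc as usual
    l_rc := dbms_sql.to_refcursor( l_cur);
end;
/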

Let’s try the last one of those…

The Tom Kyte Solution

Using the technique in this article by Mr Kyte, we can just bind all of the variables into the query, irrespective of whether we’ve been passed a value for them :

create or replace package body search_queries
as
    function get_departments(
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null)
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country)

        l_stmnt varchar2(4000);
        l_where varchar2(4000);
        l_rc sys_refcursor;
    begin
        l_stmnt :=
            'select dept.department_name, coun.country_name,, '
            ||q'[ emp.first_name||' '||emp.last_name as manager_name ]'
            ||' from departments dept '
            ||' inner join locations loc '
            ||'     on dept.location_id = loc.location_id '
            ||' inner join countries coun '
            ||' on loc.country_id = coun.country_id '
            ||' left outer join employees emp '
            ||'     on dept.manager_id = emp.employee_id '
            ||' where 1 = 1 ';
        if i_department_id is not null then
            l_where := l_where||' and dept.department_id = :dept_id';
        else
            l_where := l_where||' and ( 1 = 1 or :dept_id is null)';
        end if;
        if i_manager_id is not null then
            l_where := l_where||' and dept.manager_id = :mgr_id';
        else
            l_where := l_where||' and (1 = 1 or :mgr_id is null)';
        end if;
        if i_location_id is not null then
            l_where := l_where||' and dept.location_id = :loc_id';
        else
            l_where := l_where||' and ( 1 = 1 or :loc_id is null)';
        end if;
        if i_country_name is not null then
            l_where := l_where||' and coun.country_name = :country';
        else
            l_where := l_where||' and ( 1 = 1 or :country is null)';
        end if;
        l_stmnt := l_stmnt||l_where;

        open l_rc for l_stmnt using i_department_id, i_manager_id, i_location_id, i_country_name;
        return l_rc;
    end get_departments;

end search_queries;
/

If we now replace our package body with this, we can see that it works in the same way as before…

select hr.search_queries.get_departments(null, null, null, 'Canada') from dual;



-------------------- -------------------- -------------------- ------------------------------
Marketing Canada Toronto Michael Hartstein

…except when you try to inject it…

select hr.search_queries.get_departments(null, null, null,
    'A'||chr(39)||' union select table_name, null, null, null from user_tables --')
from dual;



no rows selected

The query that it’s executing is ( once again formatted for readability) :

select dept.department_name, coun.country_name,,
    emp.first_name||' '||emp.last_name as manager_name
from departments dept
inner join locations loc
    on dept.location_id = loc.location_id
inner join countries coun
    on loc.country_id = coun.country_id
left outer join employees emp
    on dept.manager_id = emp.employee_id
where 1 = 1
and ( 1 = 1 or :dept_id is null)
and (1 = 1 or :mgr_id is null)
and ( 1 = 1 or :loc_id is null)
and coun.country_name = :country

Yes, instead of simply concatenating our parameter values into the statement, it binds the parameters into the query.

At this point, a thought occurs. How is the query that is executed so different from …

The Anti-Kyte solution

Sometimes you have to use dynamic SQL. Quite often however, something like this will fit the bill…

create or replace package body search_queries
as
    function get_departments(
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null)
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country)

        l_rc sys_refcursor;
    begin
        open l_rc for
            select dept.department_name, coun.country_name,,
                emp.first_name||' '||emp.last_name as manager_name
            from departments dept
            inner join locations loc
                on dept.location_id = loc.location_id
            inner join countries coun
                on loc.country_id = coun.country_id
            left outer join employees emp
            on dept.manager_id = emp.employee_id
            where dept.department_id = nvl(i_department_id, dept.department_id)
            and dept.manager_id = nvl( i_manager_id, dept.manager_id)
            and dept.location_id = nvl( i_location_id, dept.location_id)
            and coun.country_name = nvl( i_country_name, coun.country_name);
        return l_rc;
    end get_departments;
end search_queries;
/

Functionally, this works in exactly the same way. However, it’s much easier to see what’s going on.

OK, I know that this is a fairly simple example and once you start getting into some of the more complex combinations of possible predicates, Dynamic SQL may be the only practical solution.
You may also be wondering about the relative performance of these queries. I’m not going to try to draw any conclusions from the tiny data set contained in this application. However, it’s worth running some tests on your application with your data, before you determine the relative merits of these approaches in terms of performance.

I think the main point here is that, whilst there are times when only Dynamic SQL will do, a static SQL approach is usually preferable.

Changing the Culture – how to stop this happening in the first place

As we’ve seen, SQLi has been around a long time. Despite this, this particular vulnerability is still widespread. Whereas it used to be the case that it took a Gru-like evil genius, slaving away at the keyboard, to run this kind of attack, nowadays there are kits available to do this automatically. Some of them even have user-friendly GUIs.
All Gru requires now is for a Minion or two to stop playing with their Fart guns and click a few buttons.
We can also conclude that, unless the Database Developers do their bit, any other security precautions are rendered ineffective in protecting data, however thorough they may be.

As Database Developers, we need to accept that our code must fulfill four main criteria. It must :

  • Implement the required functionality
  • Be efficient in terms of performance
  • Be maintainable for ongoing production support
  • …and be secure

As well as functional tests, performance tests and peer reviews, the code should undergo penetration tests as part of the development cycle.
There are resources available out there to help with writing and executing such tests.
Red Database Security have a SQL Injection Cheat Sheet available.
If it comes to it, why not obtain one of the aforementioned SQLi tools that are available? You can search on Google, as there are plenty around, although sqlmap might be a good place to start.

In addition to this, care should be taken when using Dynamic SQL for DML statements.
Part of any Peer Review could be to check for such statements and make sure that binding is always taking place.
I would go further. I would suggest that any use of dynamic SQL needs to be justified to the satisfaction of the team as a whole.
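As a rough starting point for such a review, something like this (assuming you have access to DBA_SOURCE) will at least flag up the stored code that uses dynamic SQL at all :

-- list the lines of stored code in the application schema that contain dynamic SQL calls -
-- each hit is a candidate for closer inspection
select name, type, line, text
from dba_source
where owner = 'HR'
and ( upper(text) like '%EXECUTE IMMEDIATE%'
      or upper(text) like '%DBMS_SQL%' )
order by name, type, line;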

Such steps are unlikely to be expensive in terms of either time or extra staff.
Any expense is going to be peanuts (or possibly bananas), compared to the potential cost of finding your customer details for sale somewhere in the Deep Web.

Filed under: Oracle, PL/SQL, SQL Tagged: bind variables, binding an unknown number of variables into a using clause, DAL pattern, Data Access Layer, defence against sql injection, execute immediate, locking a user account, Native Dynamic SQL, SQL Injection, SQLi, Transaction API, XAPI pattern

ADF and Oracle JET - Integration Pattern with ADF Regions

Andrejus Baranovski - Sun, 2015-11-29 10:41
In my previous post I shared a basic example of Oracle JET integration into ADF - Oracle JET and ADF Faces Integration in ADF. Today I would like to share a bit more than that - an integration pattern with ADF Regions. This approach allows us to use JET functionality within an ADF environment and leverage ADF reusability.

Sample application - implements an ADF dashboard UI and includes one tile rendered by Oracle JET. You can watch a video of the running sample:

ADF tile renders a bar chart implemented by Oracle JET:

The integration pattern is straightforward, especially if you are familiar with ADF concepts. There must be a separate ADF region defined to hold the ADF fragment where the JET HTML code will be implemented:

JET HTML is included in the ADF fragment; there are no additional ADF Faces components here, besides the top grouping:

ADF region is reusable and we can reuse the same JET code in different ADF pages. Here is the example of ADF region (with JET HTML inside) usage in the page:

JET JavaScript function is the same as you would use for typical index.html page, nothing special here for ADF:

Trace Files -- 8c : Still More Performance Evaluation from Trace File

Hemant K Chitale - Sun, 2015-11-29 09:46
In previous blog posts here and here, I have demonstrated Full Table Scans of a table called ALL_OBJECTS_MANY_LIST being executed via Direct Path Reads (shown as 'direct path read') in the Wait Events.

This sort of read does NOT use the Buffer Cache and the read from disk directly goes to the user's server process PGA.  Thus, the blocks read are not shareable with other users.  Multiple users reading the same table(s) via Direct Path Read are not able to share the Buffer Cache, resulting in Disk Reads being repeated for every Full Table Scan request.  (Note : The OS may be using a File System Buffer Cache or the Storage may be using a Storage Cache to service the Disk Reads, but Oracle will always see every re-read of the table as a Disk Read).

Pre-11g, this behaviour was only possible for Parallel Query.
However, 11g introduced the "serial_direct_read" feature, which allows non-PQ Full Table Scans to bypass the Buffer Cache.  This behaviour is predicated on the size of the table (segment) vis-a-vis the Buffer Cache, in relation to what is called the "small_table_threshold".  A table exceeding a certain multiple of the "small_table_threshold" (which, by default, is a certain ratio of the Buffer Cache) is read via Direct Path Reads.
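As an aside, the relevant parameters can be inspected with something like the query below, run as SYS. Both are underscore (undocumented) parameters, so treat this purely as illustrative :

-- current values of the undocumented parameters that drive this behaviour
select i.ksppinm  parameter_name,
       v.ksppstvl current_value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('_small_table_threshold', '_serial_direct_read');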

The Pre-11g and the 11g behaviour for smaller tables is to use Disk Reads that feed into the Buffer Cache, making the buffers shareable for repeated reads by either the same session or other sessions.  This obviates the need to make Disk Read calls for subsequent requests (as long as the blocks are still in the Buffer Cache).  The Wait Event we'd see for such Disk Reads is the (well known) "db file scattered read".

Here's a demo with a smaller table ALL_OBJECTS_SHORT_LIST.

SQL> exec dbms_session.session_trace_enable(waits=>TRUE);

PL/SQL procedure successfully completed.

SQL> select count(*) from all_objects_short_list;


SQL> exec dbms_session.session_trace_disable();

PL/SQL procedure successfully completed.

SQL> select value from v$diag_info where name = 'Default Trace File';


SQL> exit
[oracle@ora11204 ~]$ tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3029.trc \
> SHORT_LIST.PRF aggregate=NO sys=NO

TKPROF: Release - Development on Sun Nov 29 23:13:43 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

[oracle@ora11204 ~]$

SQL ID: 69hzxtrb3dv1b Plan Hash: 1680768796

NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
*/ :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.11 437 72 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.11 437 72 0 1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=72 pr=437 pw=0 time=110985 us)
3289 3289 3289 TABLE ACCESS SAMPLE ALL_OBJECTS_SHORT_LIST (cr=72 pr=437 pw=0 time=173325 us cost=19 size=61752 card=5146)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 7 0.04 0.05
db file scattered read 9 0.02 0.05

SQL ID: 5ayuqj4djbjxh Plan Hash: 3180576180

select count(*)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 2 2 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.09 140 580 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.11 142 582 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=580 pr=140 pw=0 time=90644 us)
28117 28117 28117 TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=580 pr=140 pw=0 time=110678 us cost=158 size=0 card=30071)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 21 0.02 0.07
db file sequential read 3 0.00 0.00
SQL*Net message from client 2 9.76 9.76

The first query is the Dynamic Sampling query (because no statistics exist on the target table).  As in example 8a, this is done with a mix of single block ("db file sequential read") and multiblock ("db file scattered read") waits.  Here is a sampling of the waits for the Dynamic Sampling query :
WAIT #139712598070504: nam='db file sequential read' ela= 16 file#=1 block#=61344 blocks=1 obj#=35064 tim=1448809955554822
WAIT #139712598070504: nam='db file sequential read' ela= 951 file#=1 block#=61350 blocks=1 obj#=35064 tim=1448809955555840
WAIT #139712598070504: nam='db file scattered read' ela= 956 file#=1 block#=61362 blocks=6 obj#=35064 tim=1448809955556893
WAIT #139712598070504: nam='db file sequential read' ela= 720 file#=1 block#=61370 blocks=1 obj#=35064 tim=1448809955557710
WAIT #139712598070504: nam='db file sequential read' ela= 837 file#=1 block#=61378 blocks=1 obj#=35064 tim=1448809955558589
WAIT #139712598070504: nam='db file scattered read' ela= 1020 file#=1 block#=61385 blocks=7 obj#=35064 tim=1448809955559711
WAIT #139712598070504: nam='db file sequential read' ela= 804 file#=1 block#=61392 blocks=1 obj#=35064 tim=1448809955560643
WAIT #139712598070504: nam='db file scattered read' ela= 1270 file#=1 block#=61400 blocks=8 obj#=35064 tim=1448809955562000

This is the execution of the actual FullTableScan (the count(*) query submitted by me) :
EXEC #139712598962024:c=0,e=36,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3180576180,tim=1448809955666350
WAIT #139712598962024: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=35064 tim=1448809955666420
WAIT #139712598962024: nam='db file scattered read' ela= 28044 file#=1 block#=61345 blocks=5 obj#=35064 tim=1448809955695578
WAIT #139712598962024: nam='db file sequential read' ela= 1691 file#=1 block#=61351 blocks=1 obj#=35064 tim=1448809955697475
WAIT #139712598962024: nam='db file scattered read' ela= 11 file#=1 block#=61352 blocks=8 obj#=35064 tim=1448809955697539
WAIT #139712598962024: nam='db file scattered read' ela= 64 file#=1 block#=61360 blocks=2 obj#=35064 tim=1448809955697790
WAIT #139712598962024: nam='db file scattered read' ela= 468 file#=1 block#=61368 blocks=2 obj#=35064 tim=1448809955698673
WAIT #139712598962024: nam='db file scattered read' ela= 1493 file#=1 block#=61371 blocks=5 obj#=35064 tim=1448809955700255
WAIT #139712598962024: nam='db file scattered read' ela= 64 file#=1 block#=61376 blocks=2 obj#=35064 tim=1448809955700510
WAIT #139712598962024: nam='db file scattered read' ela= 1856 file#=1 block#=61379 blocks=5 obj#=35064 tim=1448809955702457
WAIT #139712598962024: nam='db file sequential read' ela= 11 file#=1 block#=61384 blocks=1 obj#=35064 tim=1448809955702568
WAIT #139712598962024: nam='db file scattered read' ela= 1184 file#=1 block#=61393 blocks=7 obj#=35064 tim=1448809955703916
WAIT #139712598962024: nam='db file scattered read' ela= 5970 file#=1 block#=61408 blocks=8 obj#=35064 tim=1448809955710042
WAIT #139712598962024: nam='db file scattered read' ela= 263 file#=1 block#=61424 blocks=7 obj#=35064 tim=1448809955710475
WAIT #139712598962024: nam='db file scattered read' ela= 2461 file#=1 block#=82176 blocks=8 obj#=35064 tim=1448809955713060
WAIT #139712598962024: nam='db file scattered read' ela= 2585 file#=1 block#=82184 blocks=8 obj#=35064 tim=1448809955715814
WAIT #139712598962024: nam='db file scattered read' ela= 70 file#=1 block#=82192 blocks=8 obj#=35064 tim=1448809955716030
WAIT #139712598962024: nam='db file scattered read' ela= 687 file#=1 block#=82200 blocks=3 obj#=35064 tim=1448809955716832
WAIT #139712598962024: nam='db file scattered read' ela= 28205 file#=1 block#=82204 blocks=4 obj#=35064 tim=1448809955745140
WAIT #139712598962024: nam='db file scattered read' ela= 13 file#=1 block#=82208 blocks=8 obj#=35064 tim=1448809955745285
WAIT #139712598962024: nam='db file scattered read' ela= 1070 file#=1 block#=82216 blocks=8 obj#=35064 tim=1448809955746453
WAIT #139712598962024: nam='db file sequential read' ela= 5960 file#=1 block#=82304 blocks=1 obj#=35064 tim=1448809955752560
WAIT #139712598962024: nam='db file scattered read' ela= 15 file#=1 block#=82538 blocks=5 obj#=35064 tim=1448809955754269
WAIT #139712598962024: nam='db file scattered read' ela= 15 file#=1 block#=82560 blocks=8 obj#=35064 tim=1448809955754481
WAIT #139712598962024: nam='db file scattered read' ela= 22 file#=1 block#=82674 blocks=14 obj#=35064 tim=1448809955755267
WAIT #139712598962024: nam='db file scattered read' ela= 18 file#=1 block#=82688 blocks=12 obj#=35064 tim=1448809955755460
FETCH #139712598962024:c=9997,e=90650,p=140,cr=580,cu=0,mis=0,r=1,dep=0,og=1,plh=3180576180,tim=1448809955757097
STAT #139712598962024 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=580 pr=140 pw=0 time=90644 us)'
STAT #139712598962024 id=2 cnt=28117 pid=1 pos=1 obj=35064 op='TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=580 pr=140 pw=0 time=110678 us cost=158 size=0 card=30071)'

The Segment Header is in Block#61344 which has already been read into the Buffer Cache by the Dynamic Sampling query. So, we don't see a Wait Event for it in the count(*) query.  The Disk Reads start from Block#61345. From 61345 onwards, 5 blocks are read from disk (61345 to 61349). Block#61350 has already been read into the Buffer Cache by the Dynamic Sampling query.  Block#61351 is the last block of the Extent (8 blocks beginning at #61344) so it is a single block read ("db file sequential read").  Even when Extents are contiguous, Oracle will not make a read call that spans Extents.  The next Extent starts at Block#61352 and Oracle does a proper 8 block read.
At Block#61360, Oracle does a 2 block read because Block#61362 has already been read by the Dynamic Sampling query and is in the Buffer Cache.
All the Extents for this table are very small

Thus, you can see that multiblock reads using the "db file scattered read" wait event are reads that also rely on knowing which blocks are already present in the Buffer Cache.  Oracle does not do a "db file scattered read" read for a block that is already present in the Buffer Cache.

(What about Blocks 61363 to 61367?  Unfortunately, not every read call gets reported in the trace file; sometimes there may be "holes".  Not every Read Call is reported as reading through to the last block in the Extent.
If you look at the Block# IDs being reported above and compare them with the Extents allocated, you will see that Oracle may not have reported every block as a Read Wait Event.)
SQL> l
1 select extent_id, file_id, block_id, blocks
2 from dba_extents
3 where
4 owner = 'HEMANT'
5 and segment_name = 'ALL_OBJECTS_SHORT_LIST'
6* order by 1, 2, 3
SQL> /

 EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
---------- ---------- ---------- ----------
0 1 61344 8
1 1 61352 8
2 1 61360 8
3 1 61368 8
4 1 61376 8
5 1 61384 8
6 1 61392 8
7 1 61400 8
8 1 61408 8
9 1 61424 8
10 1 82176 8
11 1 82184 8
12 1 82192 8
13 1 82200 8
14 1 82208 8
15 1 82216 8
16 1 82304 128
17 1 82432 128
18 1 82560 128
19 1 82688 128

20 rows selected.


Also, once again note that file#1 indicates that these are in the SYSTEM Tablespace.

Categories: DBA Blogs

Running Oracle Weblogic Server on Linux ? Did you check Entropy

Online Apps DBA - Sun, 2015-11-29 03:27
This entry is part 1 of 1 in the series WebLogic Server

This post covers a performance issue (the Admin Console taking ages to load) and is a must read if you are running WebLogic Server on Linux.

We encountered this issue while managing Fusion Middleware (with WebLogic) for one of our customers, where accessing the Admin Console was very slow and there were no errors in the logs (memory and CPU were just fine).

A brief description of entropy on the Linux platform:

An Operating System performs cryptographic operations all the time (SSH challenges, HTTPS connections, etc.).
Linux has two devices that provide random data at any time: /dev/random and /dev/urandom. The /dev/random pool gets consumed quite quickly by these cryptographic operations, and it is only replenished from sources such as disk I/O, mouse and keyboard activity. Since Java uses /dev/random by default as its entropy source, the JVM can run out of random bits and therefore keep us waiting.

To learn about issues like this Join our Oracle Weblogic Administration Training  (next batch starts on 12th December 2015) where we also cover Oracle Certification for WebLogic Administrators 1Z0-133.



After starting the WebLogic Server instance, the available entropy on the system was decreasing substantially.

Check the size of the entropy pool using the command below:
# cat /proc/sys/kernel/random/poolsize

Check the available entropy using the command below:
# cat /proc/sys/kernel/random/entropy_avail


1. Temporary Fix:

a. Override the JAVA_OPTIONS environment variable before starting WebLogic Server via shell scripts (./
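The value typically used here (shown as an illustrative snippet) points the JVM at /dev/./urandom:

export JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.security.egd=file:/dev/./urandom"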


b. Start Weblogic server instance


2. Permanent Fix:

a. Edit the Java Security Properties file located in $JAVA_HOME/jre/lib/security/ and change the securerandom.source property, which points to /dev/random by default. Set it as:
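securerandom.source=file:/dev/./urandom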


b. Save the changes and Start WebLogic server


If you want to learn more issues like above or wish to discuss challenges you are hitting in Oracle Weblogic Server, register for our Oracle Weblogic Administration Training.

We are so confident in the quality and value of our training that we provide a 100% money back guarantee: in the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we'll refund the FULL amount.

Did you subscribe to our YouTube Channel (435 already subscribed) and Private FaceBook Group (666 Members) ?


The post Running Oracle Weblogic Server on Linux ? Did you check Entropy appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Table Definitions in Oracle #GoldenGate #12c Trail Files

DBASolved - Sat, 2015-11-28 09:07

Oracle GoldenGate 12c ( has changed the information that is stored in the trail files. All the standard information is still there. What Oracle changed has to do with the meta data that is used to define a table.

Note: If you want to understand how to use logdump and general trail information, look here.

Prior to the release of Oracle GoldenGate, if the column order of tables between source and target was different, you needed to generate a “definitions” file using the “defgen” utility located in $OGG_HOME. This file allowed you to specify either a source or target definitions file which could be used to map the order of columns correctly. This was a nice tool when needed.

In this release, Oracle took this concept a little bit further. Instead of using a definitions file to do the mapping between source and target tables, Oracle has started to provide this information in the trail files. Review the image below, and you will see the table definition for SOE.ORDERS, which I use in my test environment.

Notice at the top, the general header information is still available for view. Directly under that, you will see a line that has the word “metadata” in it. This is the start of the “metadata” section. Below this is the name of the table and a series of number categories (keep this in mind). Then below this, is the definition of the table with columns and the length of the record.

A second ago, I mentioned the “numbered categories”. The categories correspond to the information defined to the right of the columns defined for the table. When comparing the table/columns between the database and the trail file, a few things stand out.

In column 2 (Data Types), the following numbers correspond to this information:

134 = NUMBER

In column 3 (External Length), is the size of the data type:

13 = NUMBER(12,0) + 1
8 = VARCHAR2 length of 8
15 = VARCHAR2 length of 15
30 = VARCHAR2 length of 30

There is more information that stands out, but I’ll leave a little bit for you to decode. Below is the table structure that is currently mapped to the example given so far.

Now, you may be wondering, how do you get this information to come up in the logdump interface? Oracle has provided a logdump command that is used to display/find metadata information. This command is:
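SCANFORMETADATA (which can be abbreviated to SFMD)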


There are a few options that can be passed to this command to gather specific information. These options are:
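DDR (Data Definition Record) and TDR (Table Definition Record)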


If you issue:
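SCANFORMETADATA DDR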


You will get information related to Data Definition Records (DDR) of the table. Information this provides includes the following output:

If you issue:
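SCANFORMETADATA TDR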


You will get information related to the Table Definition Record (TDR) for the table. The information provided includes the output already discussed earlier.

As you can tell, Oracle has provided a lot of the information that is traditionally kept in the definitions files for mapping tables directly in the trail files. This will make mapping data between systems a bit easier and architectures less complicated.


Filed under: Golden Gate
Categories: DBA Blogs

Data Collector and Policy-Based Management: Part 1

Pythian Group - Fri, 2015-11-27 14:23


This article will address the main techniques for the efficient management of multiple instances of SQL Server 2014 using the tools Data Collector and Policy-Based Management. In addition, it will be demonstrated in a practical way how to configure each of the tools, as well as how to extract and analyze the metrics collected.

With the exponential growth of the amount of data generated by applications comes increased complexity in managing database environments for the database administrator. With this growth, combined with the low cost of storage media and servers, a scenario emerged in which the database administrator went from administering dozens of databases to administering hundreds of them.

Since the main responsibilities of a database administrator are to ensure the integrity, performance and stability of all instances of SQL Server under their administration, the greater the number of instances and databases used within an enterprise, the greater the difficulty in monitoring and managing such an environment in a proactive and automated fashion.

For this type of scenario, SQL Server makes it possible to centralize both the execution of routine administrative tasks and the collection of performance metrics from all existing instances and databases, through the tools Data Collector (DC) and Policy-Based Management (PBM). For example, suppose all databases that have the recovery model parameter set to Full must perform a backup of the log file every hour. Instead of this policy existing only as a concept, requiring you to check every database server manually, you can use the PBM to “physically” create a policy and ensure that it is applied at once to all instances of SQL Server.

In order to facilitate the understanding of the management of multiple instances, the tools will be presented in the following order: first we will look at Policy-Based Management and then we will get to know the Data Collector.


What is the Policy-Based Management?

Policy-Based Management (PBM) is a feature available starting with SQL Server 2008 that enables the creation and implementation of policies on your SQL Server instances. PBM works similarly to the group policies created through Active Directory.

Note: Group policies provide centralized management of applications and users, by means of rules created by system administrators and that can be applied at various levels of the directory structure defined in Active Directory.

PBM applies a policy to a particular target, for example a database, a table, a view, or a stored procedure, and then checks whether the target is in accordance with the rules of this policy. If the target does not comply, it is possible either to enforce the rules of the policy or to raise an alert to the administrator of the database so he/she knows of this violation.

One of the great advantages of the PBM is the ability to apply a policy to multiple instances of SQL Server at once, facilitating the administration and management of the corporation's entire database infrastructure.

Many features of SQL Server 2014, such as Resource Governor, Data Compression and In-Memory OLTP, need the Enterprise or Developer Edition. This is not the case for the PBM, which is available in all editions of SQL Server, including Express (although with the Express Edition it is not possible to create a Central Management Server).

As soon as an instance of SQL Server 2014 is installed, it is possible to create and evaluate policies against any existing SQL Server in your environment, including versions prior to 2014.


Policy-Based Management Components

The PBM is composed of three main components: Policies, Conditions and Facets, as shown in Figure 1. These components are arranged in a sort of hierarchical order for using the PBM. A facet is required for creating a condition, and the condition is necessary for the creation of policies. The policies, in turn, are applied to specific targets .

Figure 1. Policy Management in SQL Server 2014.



The targets are the objects managed by a particular policy and can be of various types: servers, databases, instances, stored procedures, etc. An important detail is that you can use more than one target at the same time in a policy. For example, we could have a policy which states that only object names starting with the db_ prefix are correct, and perform a validation of this rule on tables, functions and stored procedures of one or more instances at the same time.



A facet is a group of properties that relate to a particular target. SQL Server 2014 has 86 facets, each containing several different properties. This allows the use of hundreds of properties in the creation of a policy.

You can view the properties of a facet by expanding the Facets folder and double-clicking any of the options. For example, the Data File facet has several properties, such as the maximum size of the data file, the number of reads and writes, and whether the data file is online, as shown in Figure 2.

Note: The facets are available as read-only, i.e. it is not possible to create customized facets or modifications of existing ones. Currently new facets can be included only by Microsoft, through service packs or by upgrading the version of SQL Server.

Figure 2. Facet Properties Data File.



A condition can be described as a rule for a policy to be evaluated. Basically, the rule checks a target and, if this target is not in accordance with the policy, it fails. It is worth mentioning that a policy can evaluate only one condition, however it is possible that a condition has different rules for different properties. A condition can be viewed by expanding the Conditions folder and double-clicking any one of the available options, as shown in Figure 3.

Note: In an instance of SQL Server, no custom conditions will exist in a database unless they have been previously imported or created manually; that is, initially there will only be the system conditions.

Figure 3. Condition created on Facet Database.



The policies are complete packages that include conditions, facets, targets, evaluation modes and server restrictions (the evaluation modes and server restrictions are discussed in the next topics).

When created, the policies are stored in the system database msdb, but you can export them to an XML format. This portability makes it easier for database administrators to share and compare the policies they have created. To view a policy it is necessary to expand the Policies folder and double-click any one of the options, as shown in Figure 4.

Figure 4. Details of a policy created

Note: In an instance of SQL Server, no custom policies will exist in a database unless they have been previously imported or created manually; that is, initially there will only be the system policies.


Policy Evaluation Modes

The PBM has four distinct evaluation modes for a policy, and these determine how the evaluation of a predefined target will occur. The following evaluation modes may be available, depending on the facet used in the policy:

  • On Demand: This evaluation mode specifies that the evaluation should occur manually. By default, any policy with this evaluation mode is disabled automatically after it is created. However, even though it is disabled, it can still be evaluated at any time.
  • On Schedule: By selecting this mode you can schedule the policy to be evaluated at any time. You can select a schedule already created or create a new schedule that meets your needs. Creating a schedule allows you to set options such as the recurrence of execution, the frequency of execution per day, the frequency of execution per hour and how long a policy should be executed for. For example, you could run a particular policy for the next two weeks.
  • On Change: Log Only: When you select this mode, the policy will be evaluated only if a change is made to the specified target. If the change violates the policy, the event will still be executed and the results of the violation will be stored in the event log and in the system database msdb. This evaluation mode helps the database administrator without affecting the performance of the environment.
  • On Change: Prevent: This evaluation mode is very similar to On Change: Log Only; that is, the evaluation happens the moment an event makes any change to the target. But unlike the Log Only option, Prevent rolls back any change which violates the policy.

Figure 5 shows an example of a policy and the evaluation modes available for it.

Figure 5. Evaluation Modes.


Server Restrictions

In conjunction with the targets and the facets, the server restrictions are another way to control how a policy is evaluated. A server restriction is nothing more than a condition used to exclude a particular server from the evaluation of a policy, through the Server facet.

With the use of the PBM, you can create a server restriction to limit the evaluation of a policy to instances of SQL Server using the Standard or Enterprise editions. When this policy is applied, it will not be evaluated by the instances that do not use these specific editions.


Management of Policies

SQL Server 2014 has some features that facilitate the management and evaluation of policies created. One of these features is the ability to create categories to group similar policies and use the Central Management Server (CMS) to execute the policies throughout the database environment.



The categories are a logical group of one or more policies that assist in their management and execution. For example, you can create a group of policies that will be evaluated only in test or development environments. When a policy is created, you specify a category in the Description options, as shown in Figure 6.


Figure 6. Category definition


Central Management Server (CMS)

CMS functionality is not part of the architecture of the PBM, but it has become extremely important in the use of policies in a SQL Server environment consisting of multiple servers.

Through the CMS you can designate a database instance (or more than one) to be a central management server and store a list of registered instances that can be organized into one or more groups, as shown in Figure 7.

Figure 7. Central Management Server (CMS)



Once the policies are configured and implemented, there is no need to constantly check the servers to make sure that they are in accordance with the conditions set out in the policies. Instead, we can use the SQL Server Agent alerts to receive notifications automatically when a policy is violated.

In the next post of this 3 part series we will learn how to create a policy and how to use it.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Learn About Hyperion & Oracle BI... 5 Minutes at a Time

Look Smarter Than You Are - Fri, 2015-11-27 14:13
Since early 2015, we've been trying to figure out how to help educate more people around the world on Oracle BI and Oracle EPM. Back in 2006, interRel launched a webcast series that started out once every two weeks and then rapidly progressed to 2-3 times per week. We presented over 125 webcasts last year to 5,000+ people from our customers, prospective customers, Oracle employees, and our competitors.

In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to and register (make sure you use code IRC at registration to take $100 off each person's costs).
We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.
The problem is that the time we have for learning and the way we learn has changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences no matter how good they are.  So in June 2015 at Kscope15, we launched the next evolution in training (

#PlayItForward is our attempt to make it easier for people to learn by making it into a series of free videos.  Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:
As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.

Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on Video
Yes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting.  I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.

I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full ever-increasing list, go to, but here's the list as of this posting: 
What's Next
To see all the videos we have at the moment, go to I'm looking for advice on which TechRef videos I should record next. I'm trying to do a lot more calculation functions and Essbase.CFG settings before I move on to things like MDX functions and MaxL commands, but others may take up that mantle. If you have functions you'd like to see a video on, shoot an email over to, click on the discussion tab, and make a suggestion or two. If you like the videos and find them helpful (or you have suggestions on how to make them more helpful), please feel free to comment too.

I think I'm going to go start working on my video on FIXPARALLEL.
Categories: BI & Warehousing

SQL On The Edge #5 – Custom Visualizations in PowerBi

Pythian Group - Fri, 2015-11-27 13:39


Have you ever used Microsoft’s PowerBi service? If you have, have you used it within the last year? The reason I ask is that the current PowerBi service is so radically different from the initial release that pretty much only the name is what they have in common. Today I’m going to do a short summary of how we got here, where the service is, where it’s going and the actual topic for my video, the new awesome custom visualization functionality!

A Short History Lesson

A few years ago, Microsoft went in the direction of empowering business analysts in what is one of the most common business applications in the world: Excel. With this push, they started releasing some amazing plugins for Excel: PowerPivot, PowerQuery, PowerView, PowerMap. Suddenly we could import millions of rows from all kinds of sources into Excel! And transform them! And visualize them! Then with the release of Office 365 and Sharepoint Online, a service was created to make it easy to share and consume all these Excel-born reports. And thus PowerBi was born, but it required all these other tools and subscriptions. It didn't catch on.

Fast Forward To Today

This initial offering of PowerBi had too many external dependencies. You needed Excel with all the bells and whistles to do the reports and then all these other satellite cloud services. Thankfully someone saw and executed on a clearer vision: one service, no dependencies, no cost of entry, fully mobile friendly. It’s been a very interesting journey from a market perspective to see how MS released something, course corrected and then went head-on with their improved vision.

The PowerBi desktop designer is free. The mobile apps (all major mobile OS’es AND Windows Phone) are free as well. The service itself also has a free, fully functional tier; it’s only limited by the amount of data and the enterprise sharing capabilities. Add the ease of use of the tools and the natural language query capabilities, and this is now a strong tool that can finally become a contender with the Tableau and Qlikviews of the world.

No, it’s not perfect but it is growing and an insane amount of new features are getting added constantly. New vendors are adding content packs and now custom visualizations have made an appearance.

Community Involvement

The idea behind the first batch of custom visuals was great. MS opened a contest for people to submit their best custom visuals and the community responded with amazing creativity and participation. Not only do these immediately provide more value to every current user of PowerBi but they also serve as examples for further development of more custom visuals.

The full gallery of custom visuals can be found in the PowerBi Visual Gallery.

And if you don’t have your PowerBi account, try it out, it’s free! Head over to the official PowerBi site.

Demo Time

For the demo of working with a PowerBi project on the desktop and online, and importing a custom visual, let’s check out the video! Enjoy!



Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Log Buffer #451: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-11-27 12:59


The show goes on. This Log Buffer Edition picks some blogs which are discussing new and old features of Oracle, SQL Server and MySQL.


Oracle:

  • Directory Usage Parameters (ldap.ora) list the host names and port number of the primary and alternate LDAP directory servers.
  • Data Visualization Cloud Service (DVCS) is a new Oracle Cloud Service. It is a subset offering of the currently supported Business Intelligence Cloud Service (BICS).
  • ORA-24247: network access denied by access control list (ACL).
  • Latches are low-level serialization mechanisms that protect memory areas inside the SGA. They are lightweight and less sophisticated than enqueues, and can be acquired and released very quickly (a short query illustrating latch statistics follows this list).
  • handling disks for ASM – when DB, Linux and Storage admins work together.
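
As a quick illustration of the latch statistics mentioned above, the v$latch view exposes get, miss and sleep counts per latch. A minimal sketch (run as a suitably privileged user):

    -- Latches with the most sleeps are usually the ones worth investigating first
    SELECT name, gets, misses, sleeps
    FROM   v$latch
    ORDER  BY sleeps DESC;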

SQL Server:

  • How to use the Performance Counter to measure performance and activity in Microsoft Data Mining.
  • Phil Factor demonstrates a PowerShell-based technique taking the tedium out of testing SQL DML.
  • Sandeep Mittal provides an introduction to the COALESCE function and shows us how to use it (a small example follows this list).
  • Hadoop's many flavors of SQL.
  • Installing and Getting Started With Semantic Search.
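
As referenced in the COALESCE item above, here is a minimal illustration of the function; the table and column names are invented for the example:

    -- COALESCE returns the first non-NULL argument, evaluated left to right
    SELECT COALESCE(phone_mobile, phone_office, 'no phone on file') AS contact_number
    FROM   dbo.Customers;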


MySQL:

  • Support for storing and querying JSON within SQL is progressing for the ANSI/ISO SQL Standard, and for MySQL 5.7 (a brief sketch follows this list).
  • Lossless failover using MySQL semi-synchronous replication and MySQL Fabric!
  • Memory consumption: the binary format of the JSON data type should consume more memory.
  • This post compares a B-Tree and LSM for read, write and space amplification. The comparison is done in theory and practice so expect some handwaving mixed with data from iostat and vmstat collected while running the Linkbench workload.
  • If you do not have reliable network access (e.g. in some remote places) or need something really small to store your data, you can now use Intel Edison.
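
As a brief sketch of the MySQL 5.7 JSON support mentioned above (the table and data are invented for the example):

    -- Native JSON column plus a path-based extraction
    CREATE TABLE events (id INT PRIMARY KEY, doc JSON);
    INSERT INTO events VALUES (1, '{"user": "alice", "type": "login"}');
    SELECT JSON_EXTRACT(doc, '$.user') AS user_name FROM events WHERE id = 1;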


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Delayed Durability and Impact on Mirroring

Pythian Group - Fri, 2015-11-27 12:39

The goal is to test the delayed durability feature with mirroring in high performance and high safety modes. The idea is to confirm what the performance improvement is for transactions, and whether it has any benefit on high latency networks.


Test Scenario

We have two databases configured with mirroring in high safety mode, which requires that commits are hardened on the mirror before being acknowledged. One database, called “DalayedTEST”, has delayed durability enabled in FORCED mode. I have configured performance counters to capture latency and performance for each database every second, adding mirroring, transaction, lock and physical disk counters so we can compare performance with and without delayed durability in a mirrored environment. We will then switch mirroring to high performance mode and observe its behaviour as well. No witness is configured.
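
The post does not list the exact commands used for this setup, but a minimal sketch of the two relevant settings (using the database name from the test, and assuming the mirroring session itself is already established) would look like this:

    -- Force delayed (lazy) log flushes for all transactions in the database
    ALTER DATABASE DalayedTEST SET DELAYED_DURABILITY = FORCED;

    -- On the principal: synchronous commits on the mirror (high safety mode)
    ALTER DATABASE DalayedTEST SET PARTNER SAFETY FULL;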




Using the SQLQueryStress tool, I am loading 5000 transactions into a log table with 50 threads enabled. This is equivalent to 50 people loading 5000 records into the same table at the same time, which should be a good test to analyze the behaviour of the databases.
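
The table definition and the statement fed to SQLQueryStress are not shown in the post; a hypothetical shape for the workload could be:

    -- Hypothetical log table for the stress test
    CREATE TABLE dbo.LoadLog
    (
        LogId    INT IDENTITY(1,1) PRIMARY KEY,
        LoadedAt DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
        Payload  VARCHAR(200) NOT NULL
    );

    -- Statement executed per iteration (5000 iterations x 50 threads)
    INSERT INTO dbo.LoadLog (Payload) VALUES (REPLICATE('x', 200));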

Normal Database Test (Full Transactions) High Safety Mode

Locks and Transactions



Physical Disk




A failover happens 30 seconds after starting the test. The database is able to fail over, and the record count is 98,850.




Delayed Database Test (Delayed Transactions) High Safety Mode

Locks and Transactions



Physical Disk





A failover happens 30 seconds after starting the test. The database is able to fail over, and the record count is 165,757.


Conclusion – High Safety Mode

Mirroring High Safety Mode



Having delayed transactions enabled in a database with high safety mirroring improves performance under high contention scenarios. In this case, with lots of transactions running at the same time and competing for the same table object, execution time was faster when using delayed transactions.

When checking the mirroring behaviour, one can see that the mirroring log queue is bigger with delayed transactions. The difference is that with full transactions the queue is processed sequentially, so the queue shrinks as each transaction completes; as a result, you will see a high spike at the beginning of the stress test (the full queue), which then decreases over time. With delayed transactions one can see spikes spread evenly every 20 seconds or so, meaning one batch is processed, then another batch, and this carries on until the queue is empty.

So does sequential mirror queue processing vs. batch processing make a difference in latency? It is clear that transaction delay is 3 times worse with delayed transactions, as batches are processed in a cyclic way, which saturates the endpoint more on every iteration and also keeps transactions waiting for a commit acknowledgement from the mirroring partner. So, with delayed transactions under high contention, work is faster locally because there are fewer logical reads/writes, but mirroring communication slows down because it works in batches.
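
The mirroring queue and transaction delay figures discussed here can also be read from inside SQL Server rather than from PerfMon. A small sketch (the counter names used in the filter are the commonly documented mirroring counters and are worth verifying on your build):

    SELECT object_name, counter_name, instance_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Database Mirroring%'
      AND  counter_name IN ('Log Send Queue KB', 'Transaction Delay');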

But total execution time was better with delayed transactions?

Having delayed transactions improves execution time but increases mirroring delay, so it is simply a matter of doing the math. Local processing was so much better that it compensated for the higher mirroring transaction delay and was faster overall in this scenario (I used a local network). However, if you add other variables to the stress test, such as a high latency network for the mirroring, the gain might not compensate and total execution time can end up being about the same or worse.


Now, let's see the numbers with high performance mode:
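
For reference, switching an existing mirroring session from high safety to high performance is a single setting on the principal (again using the database name from the test):

    -- Asynchronous (high performance) mirroring: commit without waiting for the mirror
    ALTER DATABASE DalayedTEST SET PARTNER SAFETY OFF;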

Normal Database Test (Full Transactions) High Performance Mode

Locks and Transactions



Physical Disk



Delayed Database Test (Delayed Transactions) High Performance Mode

Locks and Transactions



Physical Disk



Conclusion – High Performance Mode

Mirroring High Performance Mode



With high performance mirroring, enabling delayed transactions did not improve performance, despite indicators showing less locking and fewer logical reads. It seems the mirroring queue was not being handled efficiently. The mirroring queue is considerably bigger with delayed transactions, and despite working in high performance mode the overall execution time was the worst in this test environment. After I realized this I ran other tests to see if the results were the same; generally speaking the total execution times were almost identical, with a difference of 4-10 seconds at most.


General Conclusion

Across all the tests we can conclude that with delayed transactions the mirroring queue, and the amount of data transferred, are consistently and considerably larger than those produced by full transactions.

When using high safety mode, delayed transactions seem to deliver better disk performance, which compensates for the higher mirroring/network values and results in a faster execution time. This can change if the network has high latency, producing the same or worse performance.

When using high performance mode, delayed transactions seem to deliver OK-to-good disk performance, but it is not good enough to compensate for the higher mirroring/network values, resulting in the same or slower execution time.

I would recommend enabling delayed transactions only to fix high contention rates, and I would avoid using them over high latency or very busy networks when mirroring in high performance mode. These recommendations are based on a small test in a controlled environment and do not necessarily extend to all environments out there. The rule of thumb is that delayed transactions do impact mirroring performance, and mirroring by itself is a technology known to introduce certain performance issues depending on the system, so do not take the decision to use delayed transactions lightly; test first in a staging environment before rolling into production.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

My agenda

Jonathan Lewis - Fri, 2015-11-27 11:46

It’s getting to that time of year, so here are some of the sessions I’ve pencilled in for UKOUG Tech 15:

Sunday 6th:

Probably the whole of the Development stream.

Monday 7th:

9:00 – 9:50: Tom Dale – Fivium : An insight into Oracle Standard Edition, what’s included, what’s not, and how to plug some of those holes!

11:20 – 12:10: Me (et al.) : Cost Based Optimizer – Panel Session

12:20 – 13:10: Franck Pachot – DBi Services : All About Table Locks: DML, DDL, Foreign Key, Online Operations,…

14:10 – 15:00: Maria Colgan – Oracle : Oracle Database In-Memory By Example

15:10 – 16:00: Philip Brown – Red Stack Tech : How to Evaluate the 12c In-Memory Option

16:30 – 17:20: Tony Hasler – Anvil Computer Services : Optimizer Roundtable

17:30 – 18:20: Joel Goodman – Oracle : Oracle Standard Edition Roundtable

Tuesday 8th

14:10 – 15:00: Luke Davies – Pythian : It’s Always the Network. Exonerating the Database

15:30 – 16:20: Me : Just Don’t Do It

16:30 – 17:20: Nelson Calero – Pythian : Evolution of Performance Management: Oracle 12c Adaptive Optimization

Wednesday 9th

10:00 – 10:50: Carl Dudley – University of Wolverhampton : Oracle 12c New Features – The Lesser Spotted Variety