
DBA Blogs

Log Buffer #392, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-10 08:19

It seems it's all about the cloud these days; even hardware is being marketed with the cloud in mind. Databases like Oracle, SQL Server, and MySQL are ahead in the cloud game, and this Log Buffer edition covers it all.


Oracle:

Oracle Database 12c was launched over a year ago delivering the next-generation of the #1 database, designed to meet modern business needs, providing a new multitenant architecture on top of a fast, scalable, reliable, and secure database platform.

Oracle OpenWorld 2014 Session Presentations Now Available.

Today, Oracle is using big data technology and concepts to significantly improve the effectiveness of its support operations, starting with its hardware support group.

Generating Sales Cloud Proxies using Axis? Getting errors?

How many page views can Apex sustain when running on Oracle XE?

SQL Server:

Send emails using SSIS and SQL Server instead of application-level code.

The public perception is that, when something is deleted, it no longer exists. Often that’s not really the case; the data you serve up to the cloud can be stored out there indefinitely, no matter how hard you try to delete it.

Every day, out in the various online forums devoted to SQL Server, and on Twitter, the same types of questions come up repeatedly: Why is this query running slowly? Why is SQL Server ignoring my index? Why does this query run quickly sometimes and slowly at others?

You need to set up backup and restore strategies to recover data or minimize the risk of data loss in case a failure happens.

Improving the Quality of SQL Server Database Connections in the Cloud

MySQL:

Low-concurrency performance for updates and the Heap engine: MySQL 5.7 vs previous releases.

Database Automation – Private DBaaS for MySQL, MariaDB and MongoDB with ClusterControl.

Removing Scalability Bottlenecks in the Metadata Locking and THR_LOCK Subsystems in MySQL 5.7.

The EXPLAIN command is one of MySQL’s most useful tools for understanding query performance. When you EXPLAIN a query, MySQL will return the plan created by the query optimizer.

Shinguz: Migration between MySQL/Percona Server and MariaDB.

Categories: DBA Blogs

Difference between 2014 and 2015 cadillac srx

Ameed Taylor - Fri, 2014-10-10 01:21
If you can't recall any of the small Cadillac vehicles of the past, consider yourself lucky, as neither the Opel Omega-based Catera nor the Chevy Cavalier-based Cimarron inspires especially fond memories. Fortunately, all that matters now is that Cadillac's compact sedan, the ATS, stands as an excellent entry in a class loaded with overachieving sport vehicles.

It's no secret that Cadillac has aimed the rear-wheel-drive ATS squarely at the well-balanced BMW 3 Series, which has defined the segment for decades. The ATS's exterior dimensions closely mirror those of the 3 Series, and the car offers pleasing build quality, feisty performance and engaging handling together with a supple ride, much like the benchmark Bimmer. Cadillac's newest model also offers an intelligent electronic interface with which to operate all of the interior comfort gadgets, an important element in this segment of luxury cars.

The ATS stacks up well against its opponents. On the highway, it supplies great steering feel and an agile, smartly damped ride. Helping the sharp handling is the fact that this Caddy is the lightest car in its class (by 70-150 pounds, depending on trim). Further adding to the ATS's athleticism is its near-50/50 weight distribution between the front and rear wheels.

With a trio of engine choices on hand, the ATS's performance ranges from lukewarm to intriguing. The base 2.5-liter four serves as the price and fuel economy leader, even if its 202-horsepower output lags behind the base engines of the competition. Meanwhile, the turbocharged 2.0-liter inline-4 packs a strong midrange punch and is the only option in the ATS range that can be had with a manual gearbox. With 321 hp, the lively V6 offers a sweet soundtrack and is intelligently matched to a very responsive automatic transmission.

There are a few minor gripes with the ATS. Enthusiasts may wish for a manual gearbox with the top engine, while the back seats and trunk are considerably less spacious than what some rivals provide. Indeed, this segment isn't exactly short on talent, either. The 2013 BMW 3 Series still takes top honors by virtue of its superior base powertrain and even more appealing driving dynamics, although it is typically more expensive. We're also slightly partial to the similarly well-rounded Audi A4, the refined Mercedes-Benz C-Class and the value-packed, if not as polished, Infiniti G sedan. Overall, though, the 2013 Cadillac ATS is a very strong contender in the very competitive segment of compact sport sedans.

Trim levels and features
The ATS is a five-passenger, luxury-oriented sport sedan offered in four trim levels: base, Luxury, Performance and Premium.

Standard features on the base trim include 17-inch alloy wheels, heated mirrors, automatic headlights, cruise control, dual-zone automatic climate control, six-way power front seats with power lumbar, premium vinyl (leatherette) upholstery, a tilt-and-telescoping steering wheel, OnStar, Bluetooth phone connectivity and a seven-speaker Bose sound system with satellite radio, an iPod/USB interface and an auxiliary audio jack.

The Luxury trim adds run-flat tires, keyless entry/ignition, remote engine start, eight-way power front seats, front and rear parking sensors, a rearview camera, an auto-dimming rearview mirror, leather seating, driver memory functions, a 60/40 split-folding rear seat (with pass-through), HD radio, Bluetooth audio streaming and the CUE infotainment interface.

The Performance trim (not available with the 2.5-liter engine) further adds dual exhaust outlets, a Driver Awareness package (forward collision warning, rear cross-traffic alert, lane departure warning, automatic wipers and rear-seat side airbags), an active air grille, xenon headlights, an upgraded 10-speaker Bose surround sound system (with a CD player), front sport seats (with driver-side bolster adjustment) and a fixed rear seat with pass-through.


Stepping up to the Premium trim (not available with the 2.5-liter engine) adds 18-inch wheels, a navigation system, a color head-up display and the 60/40 split-folding rear seat. A Premium model with rear-wheel drive additionally comes with summer tires, a sport-tuned suspension, adaptive suspension dampers and a limited-slip rear differential.


Many of the features that are standard on the upper trim levels are available as options on the lower trims. A few other optional packages are also available. The Driver Assist package includes the features from the Driver Awareness package and adds adaptive cruise control, blind-spot monitoring, collision preparation with brake assist, and the color head-up display. The Cold Weather package contains heated front seats and a heated steering wheel. The Track Performance package adds an engine oil cooler and upgraded brake pads. Other options include different wheels, a sunroof and a trunk cargo organizer.

Engines and performance
The 2.5 models come with a 2.5-liter four-cylinder engine that produces 202 hp and 190 lb-ft of torque. The 2.0 Turbo models come with a turbocharged 2.0-liter four-cylinder rated at 272 hp and 260 lb-ft of torque. The 3.6 models come with a 3.6-liter V6 that cranks out 321 hp and 274 lb-ft of torque.

All engines come matched to a six-speed automatic transmission except in the 2.0 Turbo, which can also be had with a six-speed manual. Rear-wheel drive is standard across the board, with all-wheel drive optional for the 2.0- and 3.6-liter engines.

In Edmunds testing, a rear-drive ATS 2.0T with the manual went from zero to 60 mph in 6.3 seconds. A rear-drive ATS 3.6 Premium with the automatic accelerated from zero to 60 mph in 5.7 seconds. Both times are reasonable among similarly powered entry-level sport sedans.

EPA-estimated fuel economy for the ATS 2.5 stands at 22 mpg city/33 mpg highway and 26 mpg combined. The V6 is estimated to reach 19/28/26 with rear-wheel drive, and Cadillac claims the 2.0-liter Turbo gets the same with an automatic transmission. With all-wheel drive, the ATS V6 drops to 18/26/21.
Safety
Standard safety features for the ATS include antilock disc brakes, traction control, stability control, active front head restraints, front-seat side and knee airbags and full-length side curtain airbags. Also standard is OnStar, which includes automatic crash notification, on-demand roadside assistance, remote door unlocking, stolen vehicle assistance and turn-by-turn navigation. Optional are the aforementioned Driver Awareness and Driver Assist packages.

In Edmunds brake testing, an ATS 3.6 Premium came to a stop from 60 mph in an impressively short 108 feet. A 2.0T stopped in an average distance of 113 feet.
Interior
Inside the cabin, the ATS boasts plenty of high-quality materials, together with tasteful wood and metal accents. The available CUE infotainment interface features large icons and operates like an iPhone or iPad, which is to say you use it by tapping, flicking, swiping or spreading your fingers, making it familiar to a lot of users. Moreover, "haptic" feedback lets you know you've pressed a virtual button by pulsing when you touch it.

Up front, the seats do a pleasant job of holding one in place through spirited drives, and it is quite straightforward to find a comfortable driving position. Oddly, the optional sport seats do not offer much more in the way of lateral support for the driver, despite their power-adjustable bolsters.

Rear-seat headroom is good, but knee room is tight for taller people. Despite a wide opening, the trunk offers just 10.2 cubic feet of capacity — downright stingy for this segment. Fortunately, some trims feature a 60/40 split-folding rear seat, which helps in this regard.
Driving impressions
The ATS is an impressive all-around performer, thanks to a poised ride, sure-footed cornering ability and superb response from the steering and brakes. The 2.5-liter engine is smooth, but it offers tepid acceleration compared to other entry-level powertrains, notably that of the BMW 328i. Opt for one of the other ATS engines, however, and you'll have no complaint, as they supply thrust more in line with this Cadillac's athletic personality. Even though fans may lament the lack of a manual transmission for the V6, the six-speed automatic is hard to fault. Switched to sport mode, this automatic knows just when to hold a gear and provides smooth, rev-matched downshifts right on time, every time.

Even with its sporting calibration, the ATS takes neglected city streets in stride, absorbing the shock of potholes and damaged pavement without upsetting the car or its occupants. Because of this, the compact Cadillac makes for a nice daily driver that can also provide plenty of entertainment on a Sunday morning drive.


Categories: DBA Blogs

Partner Webcast – Oracle Database 12c (12.1.0.2): Are you ready for the Future of the Database?

Oracle Database 12c was launched over a year ago delivering the next-generation of the #1 database, designed to meet modern business needs, providing a new multitenant architecture on top of a fast,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What is Continuous Integration?

Pythian Group - Thu, 2014-10-09 10:44

Most companies want to deploy features faster and fix bugs more quickly—at the same time, a stable product that delivers what the users expect is crucial to winning and keeping the trust of those users.  At face value, stability and speed appear to be in conflict; developers can either spend their time on features or on stability.  In reality, problems delivering stability as well as problems implementing new features are both related to a lack of visibility.  Developers can’t answer a very basic question: What will be impacted by my change?

When incompatible changes hit the production servers as a result of bug fixes or new features, they have to be tracked down and resolved.  Fighting these fires is unproductive, costly, and prevents developers from building new features.

The goal of Continuous Integration (CI) is to break out of the mentality of firefighting—it gives developers more time to work on features, by baking stability into the process through testing.

Sample Workflow
  1. Document the intended feature
  2. Write one or more integration tests to validate that the feature functions as desired
  3. Develop the feature
  4. Release the feature

This workflow doesn’t include an integration step—code goes out automatically when all the tests pass. Since all the tests can be run automatically, by a testing system like Jenkins, a failure in any test, even one outside of the developer’s control, constitutes a break which must be fixed before continuing.  Of course, in some cases users follow paths other than those designed and explicitly tested by developers, and bugs happen.  New tests are then required to validate that the bugs are fixed, and these contribute to a library of tests which collectively increases confidence in the codebase.  Most importantly, the library of tests limits the scope of any bug, which increases the confidence of developers to move faster.
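To make step 2 of the sample workflow concrete, here is a minimal sketch of what such an integration test might look like, assuming a Python project tested with pytest; the myapp module and its create_user/get_user API are hypothetical stand-ins for your application, not a real library:

# test_signup.py -- run automatically by the CI server (e.g. Jenkins) on every commit.
# myapp and its API are illustrative assumptions for this sketch.
import pytest
from myapp import create_user, get_user, DuplicateUserError

def test_signup_creates_retrievable_user():
    # Documents and validates the intended feature: a created user can be fetched back.
    create_user("alice", email="alice@example.com")
    assert get_user("alice").email == "alice@example.com"

def test_signup_rejects_duplicate_names():
    # A second user with the same name must be rejected, not silently overwritten.
    create_user("bob", email="bob@example.com")
    with pytest.raises(DuplicateUserError):
        create_user("bob", email="bob2@example.com")

When every test in the suite passes, the release step fires automatically; any failure, even in a test the developer didn’t write, blocks the release until it is fixed.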

Testing is the Secret Sauce

As the workflow illustrates, the better the tests, the more stable the application.  Instead of trying to determine which parts of the application might be impacted by a change, the tests can prove that things still work as designed.

 

Continuous Integration is just one of the many ways our DevOps group engages with clients. We also build clouds and solve difficult infrastructure problems. Does that sound interesting to you? Want to come work with us? Get in touch!

Categories: DBA Blogs

Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan; O'Reilly Media

Surachart Opun - Thu, 2014-10-09 02:37
Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. But how do you deliver logs to Hadoop HDFS? Apache Flume is an open-source project that integrates with HDFS and HBase, and it is a good choice for real-time collection of log data from front-end or logging systems.
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink.
It's a good time to introduce a good book about Flume: Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It is organized into eight chapters: the basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (sources, channels, sinks), and more on interceptors, channel selectors, sink groups, and sink processors, plus chapters on getting data into Flume and on planning, deploying, and monitoring Flume.

This book explains how to use Flume. It gives a good grounding in Apache Hadoop and Apache HBase before starting on the Flume data flow model. Readers should know some Java, because they will find Java code examples in the book that make it easier to understand. It's a good book for anyone who wants to deploy Apache Flume and build custom components.
The author dedicates a chapter to each part of the Flume data flow model, so readers can pick the chapter for the part they need: a reader who wants to know about sinks can read Chapter 5 on its own. In addition, Flume has a lot of features, and readers will find examples for them throughout the book. Each chapter has a references section, which makes it easy and quick to find out more, especially in the ebook.
The illustrations in the book help readers see the big picture of using Flume and give ideas for developing it further in their own systems or projects.
In short, readers will learn how to operate Flume: how to configure, deploy, and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
  • Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
  • Dive into key Flume components, including sources that accept data and sinks that write and deliver it
  • Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
  • Explore APIs for sending data to Flume agents from your own applications
  • Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Book: Using Flume - Flexible, Scalable, and Reliable Data Streaming
Author: Hari Shreedharan
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Index Compression Part VI: 12c Index Advanced Compression Block Dumps (Tumble and Twirl)

Richard Foote - Thu, 2014-10-09 01:01
Sometimes, a few pictures (or in this case index block dumps) are better than a whole bunch of words :) In my previous post, I introduced the new Advanced Index Compression feature, whereby Oracle automatically determines how best to compress an index. I showed a simple example of an indexed column that had sections of index entries that were […]
Categories: DBA Blogs

11 Tips To Get Your Conference Abstract Accepted

Ready for some fun!? It's that time of year again and the competition will be intense. The "call for abstracts" for a number of Oracle Database conferences is about to close.

The focus of this posting is how you can get a conference abstract accepted.

As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.

1. No Surprises! 
The Track Manager wants no surprises, great content and a great presentation. Believe me when I say, they are looking for ways to reduce the risk of a botched presentation, a cancellation or a no-show. Your abstract submission is your first chance to show you are serious and will help make the track incredibly awesome.

Tip: In all your conference communications, demonstrate a commitment to follow through.

2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend the sessions in my track.

Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.

3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be some problem along with a solution complete with drama woven into the story.

Tip: People forget bullet points, but they never forget a good story.

4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.

Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract. 

5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show by NOT putting emoji or bullet points into your abstract, NOT including your name and title, and NOT pushing a product or service. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)

Tip: Track Managers do not want to baby sit you. They want an adult who will help make their track great.

6. Submit Introductory Level Abstracts
I finally figured this out a couple years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis, and solution development. Think of it from a business perspective. Your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.

Tip: Submit both an introductory level version and advanced level version of your topic.

7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. And you do not know what the Track Manager is looking for. And you do not know what other people are submitting. Mash this together and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted?

Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.

8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft your Bootcamp version to have a more foundational/core feel, and craft your Performance & Internals version to feel more technical and advanced.

Do not simply change the title; the two abstracts cannot be the same. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.

Tip: Look for ways to adjust your topic to fit into multiple tracks.

9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well thought out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline BUT the reviewers also know this AND having a solid outline demonstrates to them you are serious, you will show up, and put on a great show.

Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.

10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listeners, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"

Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"

11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.

Tip: Submit on topics you have explored and are fascinated with.

How Many Abstracts Should I Submit?
It depends on the conference, but for a big North American conference like ODTUG, RMOUG or IOUG I suggest at least four.

Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts; all the others are modifications to fit a specific need. Believe me, when you receive the acceptance email it will all be worth it!

See you at the conference!

Craig.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 1

Pythian Group - Wed, 2014-10-08 08:17

Today’s blog post is part one of seven in a series dedicated to Deploying a Private Cloud at Home. In my day-to-day activities, I come across various scenarios where I’m required to do sandbox testing before proceeding further on the production environment—which is great because it allows me to sharpen and develop my skills.

My home network consists of an OpenFiler NAS, which also serves DNS, DHCP, iSCSI, NFS and Samba on my network. My home PC is a Fedora 20 workstation, where I do most of my personal activities. A KVM hypervisor running on CentOS 6.2 x86_64 hosts sandbox VMs for testing.

Recently I decided to move it to the cloud and create a private cloud at home. There are plenty of open source cloud solutions available, but I decided to use OpenStack for two reasons.

  1. I am already running Red Hat-compatible distros (CentOS and Fedora), so I just need to install OpenStack on top of them to get started.
  2. Most of the clients I support have RHEL-compatible distros in their environments, so it makes sense to have RHEL-compatible distros to play around with.

Ideally, an OpenStack cloud consists of a minimum of three nodes, with at least 2 NICs on each node.

  • Controller: As the name suggests, this is the controller node which runs most of the control services.
  • Network: This is the network node which handles virtual networking.
  • Compute: This is the hypervisor node which runs your VMs.

However, due to the small size of my home network, I decided to use legacy networking, which only requires controller and compute nodes with a single NIC.

Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part two of seven, I will be demonstrating configuration and setup.

Categories: DBA Blogs

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle Database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs 10 seconds in production and 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time is 10 seconds in production and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 
And in QA the SQL ran in 20 seconds… so really it ran “the same” in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining that "production is 2 times as fast as QA" and then creatively using that information.
Determining The "Speed Value"
Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.
Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape or form. It is simply a measure of brute-force Oracle buffer cache logical IO processing speed, based on a number of factors. If you are architecting a system, other tests will be required.
As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool's SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used by the speed tool is different on the various systems, then obviously we can't expect the "true speeds" to be comparable.
You can download the tool HERE.
How To Analyze The Risk
Before we can analyze the risk, we need the "speed value" for both systems. Suppose a faster system means its speed rating is larger. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."
Now let's put this all together and quickly go through three examples.
This is the core math:
standardized elapsed time = sql elapsed time * system speed value
So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000, which is 25*200. The "standardized elapsed time" is simply a way to compare SQL elapsed times; it is not what users will feel and not the true SQL elapsed time.
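As a small illustrative sketch (Python here, with the 25-second example from above plus a made-up production counterpart; real speed values come from the downloadable tool), the whole comparison boils down to a few lines:

# Hypothetical elapsed times and speed values, for illustration only.
def standardized(elapsed_seconds, speed_value):
    # standardized elapsed time = sql elapsed time * system speed value
    return elapsed_seconds * speed_value

qat_std = standardized(25, 200)  # 25s on a speed-200 QAT system -> 5000
prd_std = standardized(10, 600)  # 10s on a speed-600 PRD system -> 6000

if qat_std > prd_std:
    print("Flag as risk: the QAT SQL is truly slower than in PRD")
else:
    print("No flag: the QAT wall-time difference is explained by system speed")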
To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.
1. The SQL truly runs the same in both systems.
Here is the math:
QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the true speed situation is, QAT = PRD. This means, the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and therefore this presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
2. The SQL runs faster in production.
Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. Someone might say, "Well, of course it runs slower in QAT, because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."
Here are the scenario numbers:
QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the QAT SQL is truly running slower in QAT compared to PRD. Specifically, the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL to run slower in QAT than in the PRD system, we didn't expect it to be quite so slow. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.
3. The SQL runs faster in QAT.
In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone says, "Well, of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at the standardized elapsed times.
QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the QAT SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the QAT SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
In Summary...
Identifying risk is extremely important when planning an upgrade. It is unlikely that the QAT and production systems will be identical in every way, and this mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated was a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to correctly detect risky SQL that may be placed into the upgraded production system.
What's Next?
Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part Three

Pythian Group - Mon, 2014-10-06 11:57

Today’s blog post completes our three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In the first two posts, we discussed the impact of big data on today’s organizations, and its challenges.

Today, we’ll be sharing what organizations can accomplish by using the Microsoft Hadoop solution:

  1. Improve agility. Because companies now have the ability to collect and analyze data essentially in real time, they can more quickly discover which business strategies are working and which are not, and make adjustments as necessary.
  2. Increase innovation. By integrating structured and unstructured data sources, the solution provides decision makers with greater insight into all the factors affecting the business and encourages new ways of thinking about opportunities and challenges.
  3. Reduce inefficiencies. Data that currently resides in conventional data management systems can be migrated into Parallel Data Warehouse (PDW) for faster information delivery.
  4. Better allocate IT resources. The Microsoft Hadoop solution includes a powerful, intuitive interface for installing, configuring, and managing the technology, freeing up IT staff to work on projects that provide higher value to the organization.
  5. Decrease costs. Previously, because of the inability to effectively analyze big data, much of it was dumped into data warehouses on commodity hardware, which is no longer required thanks to Hadoop.

Download our full white paper to learn which companies are currently benefiting from Hadoop, and how you can achieve the maximum ROI from the Microsoft Hadoop solution.

Don’t forget to check out part one and part two of our Microsoft Hadoop blog series.

Categories: DBA Blogs

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 04:12
A good day for learning something new. After reading the Flume book, something popped into my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems; we can configure it like this:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP
For rsyslog:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777
Coming back to Flume, I used the Simple Example for reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]
Then I tested by logging in over ssh, which generates syslog messages:
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root
Looks good... Anyway, it still needs more adaptation...
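By the way, if you'd rather not generate syslog traffic by logging in, here is a quick sketch (Python) that fires a synthetic syslog message straight at the Flume syslogudp source; it assumes the agent is listening on UDP port 7777 on centos01, as configured above:

import socket

# Minimal RFC 3164-style message: <PRI> followed by TAG: content.
# <13> = facility user (1) * 8 + severity notice (5).
msg = b"<13>flumetest: hello from the rsyslog-to-flume test"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("centos01", 7777))  # host and port from the Flume source config
sock.close()

After sending a few of these, the message should show up in the HDFS file the same way the ssh log lines did.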



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

How to setup passwordless ssh in Exadata using dcli

Alejandro Vargas - Sun, 2014-10-05 02:57

 






Setting up a passwordless root ssh connection using dcli is fast and simple, and it makes it easy later to execute commands on all servers using this utility.


In order to do that you should have either:


DNS resolution to all Database and Storage nodes OR have them registered in /etc/hosts


1) Create a parameter file that contains all the server names you want to reach via dcli. Typically we have a cell_group for the storage cells, a dbs_group for the database servers and an all_group for both of them.


The parameter files will contain only the server names, in short format.


e.g., on an Exadata quarter rack, all_group will contain:


dbnode1
dbnode2
cell1
cell2
cell3


2) As root user create ssh equivalence:


ssh-keygen   -t    rsa


3) Distribute the key to all servers


dcli -g ./all_group -l root -k -s '-o StrictHostKeyChecking=no'


4) Check:


dcli -g all_group -l root hostname 



 

Categories: DBA Blogs

Bash security fix made available for Exadata

Alejandro Vargas - Sun, 2014-10-05 02:29

Complete information about the security fix's availability should be reviewed in the following MOS document before applying the fix:


 Responses to common Exadata security scan findings (Doc ID 1405320.1)


The security fix is available for download from:


http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/x86_64/getPackage/bash-3.2-33.el5_11.4.x86_64.rpm


The summary installation instructions are as follows:


1) Download getPackage/bash-3.2-33.el5_11.4.x86_64.rpm


2) Copy bash-3.2-33.el5_11.4.x86_64.rpm into /tmp at both database and storage nodes.


3) Remove the exadata-sun-computenode-exact rpm:



rpm -e exadata-sun-computenode-exact



4) On compute nodes install bash-3.2-33.el5_11.4.x86_64.rpm using this command:



 rpm -Uvh /tmp/bash-3.2-33.el5_11.4.x86_64.rpm



5) On storage nodes  install bash-3.2-33.el5_11.4.x86_64.rpm using this command:




rpm -Uvh --nodeps /tmp/bash-3.2-33.el5_11.4.x86_64.rpm


6) Remove /tmp/bash-3.2-33.el5_11.4.x86_64.rpm from all nodes


As a side effect of applying this fix, during future upgrades on the database nodes a warning will appear stating:



The "exact package" was not found and it will use minimal instead.


That's a normal and expected message and will not interfere with the upgrade. 







Categories: DBA Blogs

11g Adaptive Cursor Sharing --- does it work only for SELECT statements ? Using the BIND_AWARE Hint for DML

Hemant K Chitale - Sat, 2014-10-04 08:52
Test run in 11.2.0.2

UPDATE 07-Oct-14: I have been able to get the DML statement to demonstrate Adaptive Cursor Sharing as well, using the "BIND_AWARE" hint, as suggested by Stefan Koehler and Dominic Brooks.

Some of you may be familiar with Adaptive Cursor Sharing.

This is an 11g improvement over the "bind peek once and execute repeatedly without evaluating the true cost of execution" behaviour that we see in 10g. Thus, if the predicate is skewed and the bind value is changed, 10g does not "re-peek" and re-evaluate the execution plan. 11g doesn't "re-peek" at the first execution with a new bind either, but if it finds the true cardinality returned by the execution at significant variance, it decides to "re-peek" at a subsequent execution. This behaviour is exposed through the new attributes "IS_BIND_SENSITIVE" and "IS_BIND_AWARE" on the SQL cursor.

If a column is highly skewed, as indicated by the presence of a histogram, the Optimizer, when parsing a SQL statement with a bind against that column as a predicate, marks the SQL as bind-sensitive. If two executions with two different bind values return very different row counts for the predicate, the SQL is marked bind-aware: the Optimizer "re-peeks" the bind and generates a new child cursor that is marked as BIND_AWARE.

Here is a demo.


SQL> -- create and populate table
SQL> drop table demo_ACS purge;

Table dropped.

SQL>
SQL> create table demo_ACS
2 as
3 select * from dba_objects
4 where 1=2
5 /

Table created.

SQL>
SQL> -- populate the table
SQL> insert /*+ APPEND */ into demo_ACS
2 select * from dba_objects
3 /

75043 rows created.

SQL>
SQL> -- create index on single column
SQL> create index demo_ACS_ndx
2 on demo_ACS (owner) nologging
3 /

Index created.

SQL>
SQL> select count(distinct(owner))
2 from demo_ACS
3 /

COUNT(DISTINCT(OWNER))
----------------------
42

SQL>
SQL> select owner, count(*)
2 from demo_ACS
3 where owner in ('HEMANT','SYS')
4 group by owner
5 /

OWNER COUNT(*)
-------- ----------
HEMANT 55
SYS 31165

SQL>
SQL> -- create a histogram on the OWNER column
SQL> exec dbms_stats.gather_table_stats('','DEMO_ACS',estimate_percent=>100,method_opt=>'FOR COLUMNS OWNER SIZE 250');

PL/SQL procedure successfully completed.

SQL> select column_name, histogram, num_distinct, num_buckets
2 from user_tab_columns
3 where table_name = 'DEMO_ACS'
4 and column_name = 'OWNER'
5 /

COLUMN_NAME HISTOGRAM NUM_DISTINCT NUM_BUCKETS
------------------------------ --------------- ------------ -----------
OWNER FREQUENCY 42 42

SQL>

So, I now have a table that has very different row counts for 'HEMANT' and 'SYS'. The data is skewed. The Execution Plan for queries on 'HEMANT' would not be optimal for queries on 'SYS'.

Let's see a query executing for 'HEMANT'.

SQL> -- define bind variable
SQL> variable target_owner varchar2(30);
SQL>
SQL> -- setup first SQL for 'HEMANT'
SQL> exec :target_owner := 'HEMANT';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

55 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 1 55

SQL> commit;

Commit complete.

SQL>

We see one execution of the SQL Cursor with an Index Range Scan and Plan_Hash_Value 805812326. The SQL is marked BIND_SENSITIVE because of the presence of a Histogram indicating skew.

Now, let's change the bind value from 'HEMANT' to 'SYS' and re-execute exactly the same query.

SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220

SQL> commit;

Commit complete.

SQL>

This time, for 31,165 rows (instead of 55 rows), Oracle has used the same Execution Plan -- the same Plan_Hash_Value and the same expected cardinality of 55 rows. However, the Optimizer is now "aware" that the 55 row Execution Plan actually returned 31,165 rows.

The next execution will see a re-parse because of this awareness.
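The mechanics behind that decision can be watched, as an optional check that was not in the original script, through V$SQL_CS_HISTOGRAM: Oracle buckets the row counts of each execution, and an execution landing in a different bucket from the cursor's expectation is what prompts the re-parse.

-- row-count buckets per child cursor for this SQL_ID
select child_number, bucket_id, "COUNT"
  from v$sql_cs_histogram
 where sql_id = '1820xq3ggh6p6'
 order by child_number, bucket_id;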

SQL> -- rerun second SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 1
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 1893049797

------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 299 (100)| |
|* 1 | TABLE ACCESS FULL| DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"=:TARGET_OWNER)


18 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220
1820xq3ggh6p6 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>

Aha! This time we have a new Plan_Hash_Value (1893049797) for a Full Table Scan, represented as a new Child Cursor (Child 1) that is now BIND_AWARE.
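An optional follow-up, not part of the original test: once a child cursor is bind-aware, V$SQL_CS_SELECTIVITY shows the selectivity range within which each child's plan will be reused.

-- selectivity ranges recorded for the bind-aware child cursor(s)
select child_number, predicate, low, high
  from v$sql_cs_selectivity
 where sql_id = '1820xq3ggh6p6'
 order by child_number;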

Now, here's the catch I see. If I change the "SELECT ..." statement to an "INSERT ... SELECT ...", I do NOT see this behaviour: the cursor never becomes BIND_AWARE as a new Child Cursor.
Thus, the third execution of the "INSERT ... SELECT ..." -- the second with the Bind Value 'SYS' -- is correctly BIND_SENSITIVE but not BIND_AWARE. This is what it shows:


SQL> -- rerun second SQL
SQL> insert into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID cqyhjz5a5xyu4, child number 0
-------------------------------------
insert into target_tbl ( select owner, object_name from demo_ACS where
owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = 'cqyhjz5a5xyu4'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
cqyhjz5a5xyu4 0 805812326 Y N 3 62385

SQL> commit;

Commit complete.

SQL>

Three executions -- the first with 'HEMANT', the second and third with 'SYS' as the Bind Value -- all used the *same* Execution Plan (the Index Range Scan costed for 55 rows).

So, does this mean that I cannot expect ACS for DML?


UPDATE 07-Oct-14: I have been able to get the DML statement to demonstrate Adaptive Cursor Sharing as well, using the "BIND_AWARE" hint suggested by Stefan Koehler and Dominic Brooks.

SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

55 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 0
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55

SQL> commit;

Commit complete.

SQL>
SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>
SQL> -- rerun second SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 2 62330

SQL> commit;

Commit complete.

SQL>

However, there is a noticeable difference. With the BIND_AWARE Hint, the SQL is Bind Aware right from the first execution (for :target_owner='HEMANT'). So, at the very next execution (the first run with :target_owner='SYS'), Oracle re-peeks at the bind and generates a new Execution Plan (the Full Table Scan) as a new Child Cursor (Child 1).
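For completeness, a sketch that was not part of the original test: the same hint on the plain SELECT would make it bind-aware from its first parse as well, avoiding the single "wrong plan" execution seen earlier.

select /*+ BIND_AWARE */ owner, object_name
  from demo_ACS
 where owner = :target_owner;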
.
.
.
Categories: DBA Blogs

OCP 12C – Real-Time Database Operation Monitoring

DBA Scripts and Articles - Fri, 2014-10-03 09:34

What is Real-Time Database Operation Monitoring? It helps you track the progress of a set of SQL statements and lets you create a report of that progress. Real-Time Database Operation Monitoring acts as a superset of existing monitoring components such as ASH and DBMS_MONITOR … You can generate Active Reports which are […]
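For a flavour of what the excerpted post covers, a minimal sketch (assuming Enterprise Edition with the Tuning Pack; the SQL_ID is a placeholder) of producing an active report:

-- generate an active (interactive HTML) SQL Monitor report
select dbms_sqltune.report_sql_monitor(
         sql_id => '&monitored_sql_id',  -- placeholder: SQL_ID of the monitored statement
         type   => 'ACTIVE')
  from dual;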

The post OCP 12C – Real-Time Database Operation Monitoring appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Log Buffer #391, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-03 08:04

Oracle Open World is in full bloom. Enthusiasts of Oracle and MySQL are flocking to extract as much knowledge, news, and fun as possible. SQL Server aficionados are not far behind either.

Oracle:

Frank Nimphius announced REST support for the ADF BC feature at OOW today. This functionality will probably be available in the next JDeveloper 12c update release.

RMAN Enhancements: a new SYSBACKUP privilege is introduced in Oracle 12c; it allows the grantee to perform BACKUP and RECOVERY operations in RMAN, and SQL statements can now be run directly from the RMAN prompt.

Continuing with the objective of separation of duties and least privilege, Oracle 12c introduces new administrative privileges, each designed to accomplish specific duties.

Unified Auditing offers a consolidated approach: all the audit data is consolidated in a single place, with audit records drawn from a range of sources (see the short sketch at the end of this list).

SOA Suite 12c – WSM-02141 : Unable to connect to the policy access service.
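Touching on the Unified Auditing item above, a minimal sketch (the policy name is hypothetical) of a consolidated audit policy in 12c; records for all enabled policies land in the single UNIFIED_AUDIT_TRAIL view:

-- create and enable a simple unified audit policy (hypothetical name)
create audit policy demo_logon_policy actions logon;
audit policy demo_logon_policy;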

SQL Server:

Data Compression and Snapshot Isolation don’t play well together; you may not see a performance benefit.

Tim Smith answers some questions on SQL Server security, such as: Is It Better To Mask At the Application Level Or The SQL Server Database Level?

Since SQL Server delivered the entire range of window functions, there has been far less justification for using the non-standard ex-Sybase ‘Quirky Update’ tricks to perform the many permutations of running totals in SQL Server.

Easily synchronize live Salesforce data with SQL Server using the Salesforce SSIS DataFlow Tasks.

Change All Computed Columns to Persisted in SQL Server.

MySQL:

Low-concurrency performance for point lookups: MySQL 5.7.5 vs previous releases.

How to get MySQL 5.6 parallel replication and XtraBackup to play nice together.

The InnoDB labs release includes a snapshot of the InnoDB Native Partitioning feature.

Visualizing the impact of ordered vs. random index insertion in InnoDB.

Single thread performance in MySQL 5.7.5 versus older releases via sql-bench.

Categories: DBA Blogs

OCP 12C – RMAN and Flashback Data Archive

DBA Scripts and Articles - Thu, 2014-10-02 14:36

RMAN Enhancements – New Privilege: a new SYSBACKUP privilege is introduced in Oracle 12c; it allows the grantee to perform BACKUP and RECOVERY operations in RMAN. You can now use SQL statements in RMAN as you would in SQL*Plus. BEFORE: RMAN> SQL “alter system switch logfile”; NOW: RMAN> alter system switch logfile; [...]

The post OCP 12C – RMAN and Flashback Data Archive appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OCP 12C – Privileges

DBA Scripts and Articles - Thu, 2014-10-02 13:27

User Task-Specific Administrative Privileges: continuing with the objective of separation of duties and least privilege, Oracle 12c introduces new administrative privileges, each designed to accomplish specific duties. SYSBACKUP: used for RMAN operations like BACKUP, RESTORE, RECOVER. SYSDG: used to administer Data Guard; in 12c, when you use the DGMGRL command-line interface you are automatically [...]
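A minimal sketch (the user name and password are hypothetical) of putting one of these task-specific privileges to work:

-- create a dedicated backup operator and grant only the backup duty
create user backup_op identified by "ChangeMe_1";
grant sysbackup to backup_op;
-- the operator then connects for backup work only, e.g. from RMAN:
--   rman target '"backup_op@orcl as sysbackup"'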

The post OCP 12C – Privileges appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs