
DBA Blogs

My Sales Journey: #5

Pythian Group - Thu, 2015-10-01 13:49

I am a little sad to let go of the "Life of a Newly Minted Sales Professional" title today, but happy that I am starting to develop a bit of a patina. "My Sales Journey" reflects my state of mind right now and feels more appropriate, given that today marks the end of my first month at Pythian!

Last week we explored social media and where people are building their online habitats. Today, let's journey inside the minds of these people to see what they think. Let me explain.

As a sales professional for a managed services company, I talk to decision makers and stakeholders. My job is to engage them so that, hopefully, they will want to take the conversation one step further. For all this to happen, I need to know their responsibilities, their pain points and their challenges.

So today, let's step inside the mind of a VP/Director/Executive:

They are responsible for ensuring the continual functioning of mission-critical operations. They have to stay relevant while bridging the gap between business and technology. They are in charge of driving innovation and improving profitability. They are also tasked with delivering flexible strategies that enable the business to deliver high-quality, valuable services.

Now that I know this on a broad level, I can be more effective with my message. This week has been all about creating and assembling content that speaks to these challenges and responsibilities. It can come in the form of white papers, relevant marketing material, team resumes, cost analyses and slide decks.

Know their industry and purchasing cycles. This is a research-heavy task, but it pays off when you have a carefully curated list you can refer to. It becomes a powerful weapon in your sales arsenal.

Sleuth enough information to determine initial feasibility. Your first contact is twice as likely to be successful if you have matched your messaging to their current challenge.

As you can tell, lots of learning happened in the last week. I hope some of these tips will help you with your outreach. Stay tuned next week, when we step into the mind of a manager who may be lower on the ladder but can be a key person for landing your message in front of the decision makers.

Now, it is Friday and time to kick back and relax with a team lunch! My second one in 4 weeks! Yessirrrr, that is how we roll at Pythian. See you next week!


Categories: DBA Blogs

Oracle Database Standard Edition 2 is available

Oracle Database Standard Edition 2 was announced earlier this year in a MOS note (2027072.1) and it is now available for download and purchase. Oracle Database Standard Edition 2 is an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQL On The Edge #3 – Azure Elastic Database Jobs

Pythian Group - Wed, 2015-09-30 10:15

Hello and welcome to episode number three of SQL On the Edge! In this episode we’re going to cover Azure SQL elastic database jobs. In the last few months, Microsoft has put a lot of effort into augmenting the elastic capabilities of the Azure SQL Database service. We’re not only talking about the ability to scale UP but, in this case, about the capability to scale OUT.

The investments made on this front have produced:

– Elastic database tools for easier .NET development against multiple Azure SQL databases.
– Elastic database pools for better resource management.
– Elastic database jobs for easier job execution against groups of databases.
– Elastic database query for easier integrated querying of groups of databases.

Elastic database jobs are interesting because for a long time there wasn’t a native way to run jobs against Azure SQL databases. The workaround we used with many of our clients was either a SQL Server instance on a VM (to use the SQL Agent scheduling capabilities), the Windows Task Scheduler on a Windows Server VM, or the Azure job scheduler. These options, while serviceable, were not really optimal for the task.

Elastic database jobs provide capabilities that cover the gaps we had before:

– You can run a job against an entire database pool, a pre-defined shard set or a custom database list.
– Job execution is logged and history can be retrieved per database.
– Information can be collected and stored from the results of each job execution.
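
To make this concrete, here is the kind of idempotent T-SQL payload you might hand to an elastic job so that it runs safely against every database in a pool or shard set (the table name below is an illustrative placeholder, not part of the feature itself):

-- Illustrative job payload (hypothetical table name). Because the job may
-- run against many databases, and possibly more than once, it is idempotent.
IF OBJECT_ID(N'dbo.JobAudit', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.JobAudit
    (
        RunId    int IDENTITY(1,1) PRIMARY KEY,
        RunAtUtc datetime2(0) NOT NULL,
        DbName   sysname      NOT NULL
    );
END;

-- Each execution records a row; per-database results like this are what the
-- job framework can collect and store centrally, as listed above.
INSERT INTO dbo.JobAudit (RunAtUtc, DbName)
VALUES (SYSUTCDATETIME(), DB_NAME());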

For now, elastic database jobs are provided as a customer-hosted web solution with an accompanying SQL database. At the moment, PowerShell is also required to access the full functionality, as the Portal only exposes a small part of it.

Let’s go over the feature and the demo in the video. Enjoy!


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

SQL Server: Migrations/Upgrades – 10 Things to Think About

Pythian Group - Wed, 2015-09-30 10:04

Here at Pythian we have planned and implemented dozens of migrations and upgrades for our customers. In this blog post I summarize the 10 most important things to consider when migrating or upgrading a SQL Server environment, based on that experience.
If you need to migrate or upgrade your SQL Server environment to a new server or environment, but you are afraid of long outages, data loss or degraded performance, here are the things to consider:

  1. To Cloud or not to cloud: if you are considering moving to the cloud, you should carefully research the capabilities, constraints and limitations of the cloud solutions you are considering. For example, there are a few things you can do on a “regular” server with a SQL Server instance installed that you cannot implement with Amazon RDS or Azure SQL Database (DBaaS). I am not posting any links here because things are changing very fast in the cloud right now, so you need to make sure you are up to date.
  2. Physical to Virtual: often, when moving from a physical box with attached local disks to a VM environment with shared storage, SQL Server performance may degrade due to multiple potential factors. It is recommended to test the performance of the new environment prior to the Production cutover and compare it to the performance of the existing one. Following configuration Best Practices at all levels of the architecture is essential (see more details in item 5).
  3. Minimal downtime during Production cutover: there are a few methods to copy an instance and its databases to the new environment without affecting the existing Production system, and then cut over with minimal downtime.
    The options range from transferring backups through portable media, to setting up Database Mirroring or Log Shipping, to simply backing up and restoring databases (a minimal T-SQL sketch of the backup/restore approach follows this list).
    The solution depends on a few things, such as:

    1. Current performance of your environment
    2. Database SLAs and how much downtime can be afforded
    3. Network bandwidth and capacity between existing and new environment (is there a connection between the environments at all?)
    4. Volume of data updates in your current environment
    5. Size of the databases
    6. Etc.
  4. Performance baseline comparison before and after: this is a step that is usually missed during migration and upgrade projects. Performance degradation when moving to a new environment is a common issue, but we need to understand what exactly the new bottlenecks are. For that reason, it’s important to record a performance baseline on the old environment before migrating and another baseline after the migration (not right away; we need the instance to “warm up” for a while, say a week). When comparing the two baselines, finding and fixing a problem becomes much easier and faster because the comparison indicates what to look for (a sample baseline-capture script also follows this list).
  5. Best Practices: Microsoft and other providers or partners publish lists of Best Practices. We need to ensure we follow Best Practices at all levels: hardware, storage, VM, OS and SQL Server. Some examples of Best Practices:
    – VMware: VMware Best Practices
    – Storage: Disk partition alignment for SQL Server
    – SQL Server: Best Practices list
  6. Testing phase: before implementing the real cutover, it is strongly recommended that you perform a test migration without affecting the current Production environment, or affecting it as little as possible. This step may take longer and add time to the schedule, but it will reduce or (hopefully) prevent issues that may occur during the cutover (e.g. missing components, the application not being able to connect to SQL Server, etc.).
  7. Rollback and Plan B: when preparing for a migration or an upgrade of a critical Production environment, it’s always important to plan for a rollback in case something goes wrong. Even more important is to understand how much time a rollback would take and what its implications are (technical and business-wise).
  8. Upgrade plan: if you are planning to upgrade the SQL Server version or the application version at the same time as the migration, you should ensure ahead of time that all components can be upgraded, and decide whether any database should stay at an older compatibility level. The Upgrade Advisor can help you check things, but there is never 100% verification unless you test all parts of the application(s). Well, if there is not a lot of code running, you may be able to trace everything and verify 100% of the code, but this rarely happens.
  9. HA and DR planning: when migrating to a new environment, it is probably time to redesign the HA and DR plan in advance. Some of the native SQL Server HA/DR solutions are: Failover Clustering, AlwaysOn Availability Groups (SQL 2012+), Database Mirroring (to be deprecated in future versions), Log Shipping, Replication and backups. There are other, non-SQL Server solutions such as VM or storage mirroring, as well as third-party and cloud solutions. You can also combine some of the HA/DR solutions in order to meet higher SLAs.
  10. Refresh your monitoring abilities: since you are moving to a new environment, it’s probably also time to refresh your monitoring tools if required. A good monitoring tool will notify you about the right issues, on time, and preferably can also do proactive monitoring to help you prevent outages. It is important to be able to monitor the hardware and storage, network, virtual machine (if applicable), operating system and SQL Server instance(s), so that you can always stay on top of things and understand in which layer a problem lies. You will also need to make sure that someone receives the notifications and is able to connect in a timely manner, based on your system’s SLAs, to fix issues.
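
As referenced in item 3, here is a minimal T-SQL sketch of the plain backup/restore approach to a low-downtime cutover; the database name, paths and options are placeholders you would adapt to your environment:

-- On the old server: take a full backup to a share the new server can read.
BACKUP DATABASE [AppDB]
  TO DISK = N'\\newserver\backups\AppDB_full.bak'
  WITH COMPRESSION, CHECKSUM;

-- On the new server: restore, but leave the database in RESTORING state.
RESTORE DATABASE [AppDB]
  FROM DISK = N'\\newserver\backups\AppDB_full.bak'
  WITH NORECOVERY, CHECKSUM;

-- Keep shipping and restoring log backups (still WITH NORECOVERY) until the
-- cutover window, then take a tail-log backup on the old server:
BACKUP LOG [AppDB]
  TO DISK = N'\\newserver\backups\AppDB_tail.trn'
  WITH NORECOVERY;   -- leaves the old copy in RESTORING state

RESTORE LOG [AppDB]
  FROM DISK = N'\\newserver\backups\AppDB_tail.trn'
  WITH RECOVERY;     -- brings the new copy online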
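
And for item 4, a simple way to capture a wait-statistics baseline: run this on the old environment before the move and again on the new environment after the warm-up period, then compare the snapshots (the table name is a placeholder):

IF OBJECT_ID(N'dbo.WaitStatsBaseline', N'U') IS NULL
    CREATE TABLE dbo.WaitStatsBaseline
    (
        CapturedAt          datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME(),
        wait_type           nvarchar(60) NOT NULL,
        waiting_tasks_count bigint       NOT NULL,
        wait_time_ms        bigint       NOT NULL,
        signal_wait_time_ms bigint       NOT NULL
    );

-- Snapshot the instance-level wait statistics accumulated since startup.
INSERT INTO dbo.WaitStatsBaseline
        (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT  wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_time_ms > 0;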


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Import/Export Multiple SSIS Packages

Pythian Group - Wed, 2015-09-30 09:39

While I was working on a migration project recently, I had the task of importing and then exporting multiple packages – about 120 – onto a new server. Usually, if there are only a few packages we can do this task manually, but in this case it was tedious and time consuming to import and then export them by hand. Here is where the DTUTIL utility comes in handy; it has many options that you can use according to your needs.

What I used is very simple and easy to use. I created two batch files using the commands below:

Exporting SSIS Packages to File:

dtutil /SQL ExportPackage /COPY FILE;C:\Packages\ExportPackage.dtsx

Importing SSIS Package to SQL:

dtutil /file C:\Packages\ImportPackage.dtsx /copy sql;ImportPackage
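
If you would rather generate those lines than type roughly 120 of them by hand, one option (a general technique, not necessarily what my batch files did) is to build the dtutil export commands from the MSDB package store:

-- Generate one dtutil export command per package stored in MSDB;
-- paste the output into a batch file and adjust the target folder as needed.
SELECT 'dtutil /SQL "' + name + '" /COPY FILE;"C:\Packages\' + name + '.dtsx"'
FROM   msdb.dbo.sysssispackages;
-- If the packages live under folders in the package store, prepend the folder
-- path (see msdb.dbo.sysssispackagefolders) to the /SQL argument.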

Find out more details and a comprehensive list of options.


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Using Docker to Visualize MySQL Performance Schema

Pythian Group - Wed, 2015-09-30 09:30

Last week, I was pleased to present at Percona Live Amsterdam 2015 regarding the importance of Performance Schema and how to begin approaching visualizing the data that is available to diagnose performance issues and fine tune your MySQL environment. You can download the slides from the Percona Live conference page.

The session highlighted using the ELK (Elasticsearch + Logstash + Kibana) stack to assist in visualizing event data, in addition to Graphite + Grafana to visualize time-series data.

As many people who attend conferences are aware, the availability of internet access is sketchy at best. To combat this, the visualization stack that I created was strictly local, utilizing Docker, specifically the Docker Toolbox.

The docker-compose.yml file that creates the stack is fairly straightforward:

mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"

sysbench:
  container_name: sysbench
  build: ./sysbench
  links:
    - "mysql"

elasticsearch_data:
  container_name: elasticsearch_data
  image: tianon/true
  volumes:
    - /usr/share/elasticsearch/data

elasticsearch:
  container_name: elasticsearch
  build: elasticsearch
  volumes_from:
    - elasticsearch_data
  ports:
    - "9200:9200"

graphite:
  container_name: graphite
  image: kamon/grafana_graphite
  ports:
    - "8000:80"
    - "8126:8126"

logstash:
  container_name: logstash
  build: logstash
  volumes:
    - "./logstash/scripts:/opt/logstash/scripts"
  links:
    - mysql
    - graphite
    - "elasticsearch:es"
  ports:
    - "9292:9292"

Since the session was about the upcoming changes to the Performance Schema in MySQL 5.7, I chose to use MySQL’s official 5.7 Docker image:

mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"

Pay attention to two aspects of this configuration.

First, I am creating a mysql_data container. Using a Data Volume Container is highly recommended when working with any data that should be persisted and not baked into the Docker image.

Second, I am building the container using the `build` command. Throughout the entire docker-compose.yml file, I am adding additional functionality to the base images provided by Docker Hub. For the MySQL container, the Dockerfile looks like this:

FROM mysql:5.7

ADD world_innodb.sql /docker-entrypoint-initdb.d/world.sql

CMD ["mysqld"]

The rest of the docker-compose file follows similar guidelines, and I won’t explain each aspect here. You can find the Dockerfiles for each component of the stack in the dtest/visualize-mysql github repository.
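
For a flavour of the event data being visualized (the actual collection scripts live under logstash/scripts in the repository), a typical Performance Schema query aggregates statements by digest, for example:

-- Top statements by total latency from the statement digest summary
-- (Performance Schema timers are reported in picoseconds).
SELECT schema_name,
       digest_text,
       count_star            AS exec_count,
       sum_timer_wait / 1e12 AS total_latency_sec
FROM   performance_schema.events_statements_summary_by_digest
ORDER  BY sum_timer_wait DESC
LIMIT  10;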

To bring the stack up, the steps are straightforward:

  1. Install Docker Toolbox on your machine
  2. Create the docker-machine:
    $ docker-machine create -d virtualbox \
    --virtualbox-memory "2048" --virtualbox-cpu-count "2" \
    visualize
    $ eval $(docker-machine env visualize)
  3. Clone the dtest/visualize-mysql repository and change into the project directory.
    $ git clone https://github.com/dtest/visualize-mysql.git
    $ cd visualize-mysql
  4. Bring the environment up:
    $ docker-compose up -d

Verify that the containers are running, noting that the data volume containers should have exited with status 0:

$ docker-compose ps
       Name                     Command               State                            Ports
elasticsearch        /docker-entrypoint.sh -Des ...   Up       0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_data   /true                            Exit 0
graphite             /usr/bin/supervisord             Up       0.0.0.0:8000->80/tcp, 8125/udp, 0.0.0.0:8126->8126/tcp
logstash             /app/bin/boot                    Up       0.0.0.0:9292->9292/tcp
mysql                /entrypoint.sh mysqld            Up       0.0.0.0:3306->3306/tcp
mysql_data           /true                            Exit 0
sysbench             /etc/ -d                         Up

You can then point your browser to the Kibana and Grafana dashboards. The Grafana dashboard requires a login, and the default credentials are admin/admin.

The dashboards from the presentation are not automatically loaded, but you can find some basic dashboards in the dashboards directory of the repository. Unfortunately, I did not export them before discarding the machine, so I have lost some of the fine-tuning changes I had made. I do intend to recreate them and even try out the newer versions of the ELK stack that are available.

The goal here is to show how easy it is to set up a local environment using Docker and existing images from Docker Hub to get started with your own projects.


Discover more about our expertise in MySQL.

Categories: DBA Blogs

Oracle 12c – Adaptive Query Optimization

Pythian Group - Wed, 2015-09-30 07:51

Scenario: a user complains that a batch job that is critical for the month-end business processing is taking more time than expected. As a DBA you quickly check, identify the SQL statement that is running, compare the current execution plan with the previous plans, and come to the conclusion that it has picked a wrong plan. Instead of a Hash Join, it is doing a Nested Loop, or it did not pick the parallel threads and instead does a single-threaded full table scan.

At this juncture, you have no way to change the execution plan unless you cancel the job, fix it and rerun it. A simple analogy is being stuck in traffic and not being able to do anything about it, even though you know that there are better and faster alternative routes to reach your destination.

Luckily, the above scenario applies to 11g; with 12c a new feature called Adaptive Query Optimization was introduced, and it helps the optimizer adjust its plans based on real-time data.

12c Optimizer: with Oracle 11g, the optimizer decides on an optimal execution plan for a query based on the conditions in the query, the statistical information of the underlying objects and the initialization parameters that influence the optimizer. With 12c, a new adaptive approach to query optimization is introduced, which adjusts execution plans based on information collected during run time. This new approach is extremely helpful when the existing statistics are not sufficient to generate an optimal plan. There are two aspects to Adaptive Query Optimization:

  1. Improving the initial execution of a query during runtime.
  2. Adaptive statistics, which provide additional information for subsequent executions.

I will be focusing on the first aspect in this article.

The two features the optimizer adjusts based on real-time data are:

Hash Join vs. Nested Loops – the optimizer may pick a Nested Loop for a join operation based on the existing table statistics, but at run time, if it realizes that more data is being processed, it will switch to a Hash Join. This adaptive optimizer feature not only changes the plan at run time, but also stores adaptive statistics in the database. This additional information will be useful for future executions.

Parallel Distribution – Hybrid Hash – when a SQL statement is executed in parallel mode, the optimizer decides whether to perform certain operations like sorts, joins and aggregations in parallel or as single-threaded operations, based on the statistics. With the new adaptive feature, the optimizer defers this decision until run time and, based on the run-time data, decides whether or not to pick the parallel mode.
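
To see this in action on your own 12c system, here is a quick sketch (substitute your own SQL_ID for the &sql_id placeholder): check whether the cursor’s final plan was resolved adaptively, then display the plan including the inactive operations.

-- Was the final plan chosen adaptively?
SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- Show the plan, including the adaptive operations that were switched off:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL +ADAPTIVE'));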

A good resource about this new feature can be found here and here.

Check out some of our other Oracle 12c blogs.


Discover more about our Oracle expertise. 

Categories: DBA Blogs

Links for 2015-09-29 []

Categories: DBA Blogs

OPN: #28daysofcloud for Partners

Starting this week, watch for our 28 Days of Cloud series of informative content that will prepare your partners to better sell Oracle cloud offerings. 28 Days of Cloud is an immersive,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle Data Integration for Big Data

Data Integration for Big Data holds particular importance for Oracle Partners. Having Big Data skills means more than simply employing a few data scientists. If they cannot be filled with...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Trace Files -- 2 : Generating SQL Traces (another session)

Hemant K Chitale - Sun, 2015-09-27 08:50
Here are a few methods to trace another session to capture SQL statement executions.  All of these methods require the appropriate privilege --- which most DBAs seem to take to mean using SYS (which logs in AS SYSDBA).  I leave it to you to discover the privilege -- suffice to say that you do NOT need to log in AS SYSDBA.

DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE enables tracing for sessions of a specific Service_Name and *optionally* Module Name and Action Name.  This is useful where you define applications by Service Names and, optionally, use DBMS_APPLICATION_INFO to set Module and Action in a given session.

Thus, if in an HR session, I do :
Enter user-name: hr/hr@orcl

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> exec dbms_application_info.set_client_info('HR App Client');

PL/SQL procedure successfully completed.

SQL> exec dbms_application_info.set_module('Employee Module','Updating');

PL/SQL procedure successfully completed.


And, in another session (with the appropriate privileges), I do :
SQL> exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(-                                      
> service_name=>'orcl',-
> module_name=>'Employee Module',-
> action_name=>'Updating');

PL/SQL procedure successfully completed.


the actions in the HR session(s) of that module and action are traced.  (The additional parameters WAITS and BINDS can also be set to TRUE to enable tracing of Waits and Binds; Waits are set to TRUE by default.)
Note : If there are multiple sessions with the same combination of service_name, module_name, action_name, all the sessions are traced !

Tracing is disabled when the session itself uses DBMS_APPLICATION_INFO to change its Module / Action settings.

Tracing is also disabled when the session that initiated the tracing executes :
SQL> exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(-
> service_name=>'orcl',-
> module_name=>'Employee Module',-
> action_name=>'Updating');

PL/SQL procedure successfully completed.


Thus, this procedure allows tracing at the granularity of Service + Module + Action.  Unfortunately, many custom applications do NOT use DBMS_APPLICATION_INFO to set Module and Action.

DBMS_MONITOR.SESSION_TRACE_ENABLE is useful for tracing a single session and where Module / Action information is not populated by the client.

The call is simple :
SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(-
> session_id=>153,-
> serial_num=>33,-
> waits=>TRUE,-
> binds=>TRUE);

PL/SQL procedure successfully completed.


The disabling call is :
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(-
> session_id=>153,-
> serial_num=>33);

PL/SQL procedure successfully completed.


Thus, this can be issued individually for each session.
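
One quick way to verify what is being traced : traces enabled through the Service/Module/Action (and Client Identifier) interfaces of DBMS_MONITOR are listed in DBA_ENABLED_TRACES, and the trace files themselves are written to the 'Diag Trace' directory reported by V$DIAG_INFO.

SQL> SELECT trace_type, primary_id, qualifier_id1, waits, binds FROM dba_enabled_traces;

SQL> SELECT value FROM v$diag_info WHERE name = 'Diag Trace';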


Categories: DBA Blogs

EMEA Partners: Oracle Cloud Platform: Integration Workshop for Partners (ICS)

The Oracle team is pleased to invite your integration developers and consultants to a 3-days hands-on workshop on how to integrate applications with Oracle Cloud platform. Oracle will organize...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Attending Tech Conferences

Pythian Group - Fri, 2015-09-25 11:04


Now that I am in management, and have been for some time, I still attend tech shows and present at them a few times a year. When I came back to Pythian in February 2011, we had a handful of personnel doing sessions at the shows. Since then, the number of people submitting abstracts and appearing at these shows has skyrocketed. I cannot take the credit single-handedly for that growth, but I am pleased to have been a catalyst in getting more Pythianites on the global stage, where they should be. Bravo to all the new techs appearing at the shows.

I just attended the EastCoastOracle show in Raleigh/Durham USA and sat in on a steady stream of sessions and found the cloud material especially intriguing. Simon Pane, Roopesh Ramklass, Subhajit Das Chaudhuri, and Gleb Otochkin shone and had substance and pizazz in their sessions. Pythian should be proud.

The Pythian name was all over the show and we were tied with another company for most sessions. There was avid interest in what we do and I did get some “You work for Pythian!” comments from attendees. Our name is out there and getting “outer” every show.

Every show I attend, I am thankful for the opportunity and for Pythian’s support. It sweetens the deal when I look back at the number of presenters we send now compared to 5 years ago. I value, and am very honoured by, the fact that so many colleagues have reached out to me over the years to get over that last remaining hurdle to entering the conference presentation life …


Discover more about our expertise in Cloud and Oracle.

Categories: DBA Blogs

My Sales Journey: #4

Pythian Group - Fri, 2015-09-25 10:53


Let me start today with some great news! This week I got my first meeting… Woohoo! It happened in a totally unexpected way. It happened on Twitter. Of the dozens of calls and emails that I had been sending out, it happened with a quick, to-the-point tweet. It made me take a step back and look at what I was doing differently.

Sales is about reaching people with a message and hoping they get interested enough to talk to you. You have to reach people where they live. Today, people live on many different hubs. We live on Twitter, Facebook, Hootsuite, Snapchat, email, LinkedIn or in an office. People also choose these hubs depending on where they are geographically. New Yorkers are more likely to hang out on social media channels than Floridians, who may love phone calls and a good chat. Reach your people where they live.

Sales is also a very creative pursuit. It forces you to think outside the box at all times. To be successful you must not only be creative with your messaging, you also have to be creative about how you will deliver that message. Make your message brief, relevant and fun.

I am an entrepreneur first and foremost, which makes me naturally curious about business. Reaching out to all these people is all about getting to know their business. I am genuinely interested in knowing who they are and how their business works. It is about meeting cool folks doing cool things. Be genuine. Business will follow.

It’s been a good week! Do you have a story or two to share about where you meet your people? Have you had success over social media? If so, please share. I want to be inspired!



Categories: DBA Blogs

Log Buffer #442: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-25 07:24

This Log Buffer Edition collects and then showers around some of the information-rich blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Generic Java Server for Static content and Directory Listing using API mode or command line mode.
  • OEM agents on each host upload data to your management servers every few minutes.
  • exitcommit … or your career down the drain.
  • Encryption is the Easy Part; Managing those Keys is Difficult.
  • Managing the OBIEE BI Server Cache from ODI 12c.

SQL Server:

  • Email addresses are very prevalent in IT systems and are often used as a natural primary key. The repetitive storage of email addresses in multiple tables is a bad design choice. What follows is a design pattern for storing email addresses in a single table and retrieving them efficiently.
  • OpenStack: The Good and Not-So-Good Bits.
  • By defining server- and database-level audits, you can record just about any kind of event that occurs in SQL Server, which can be an invaluable source of security troubleshooting and forensic information when security breaches occur.
  • Koen Verbeeck shows how to easily extract metadata from files in your directories with Power Query.
  • Implementing John Conway’s Game of Life in Microsoft SQL Server.


MySQL:

  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps.
  • Easy Load-balancing and High-availability using MySQL Router.
  • When hosting data on Amazon turns bloodsport.
  • MySQL 5.7 Labs — Using Loopback Fast Path With Windows 8/2012.
  • How to evaluate if MySQL table can be recovered.


Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Links for 2015-09-24 []

Categories: DBA Blogs

Partner Webcast – Why and How Your Business changes with Oracle Cloud

We are facing a new “revolution” in the business world. The capabilities that modern technology provides awaken new expectations, needs and wishes among consumers and customers, and enable businesses to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SpyHunter 4 serial Download

Jithin Sarath - Thu, 2015-09-24 12:31
SpyHunter 4 crack has been voted as best anti-malware software. Download SpyHunter 4 Activation Code with email and password list generator patch free.

  • Very effective at removing hijacked browser search toolbars
  • Removed spyware variants that other major brands didn’t detect
  • Customized spyware fix option available
  • Easy to use interface
  • Supports Windows 8.1
Categories: DBA Blogs