Pythian Group

Love Your Data

Power BI General Availability

Tue, 2015-07-28 09:18

In recent months I’ve been keeping an eye on the Power BI Preview, the new version of the Power BI cloud solution, which was completely revamped compared to the original Power BI v1.
This past Friday, July 24th, Power BI was finally released to the public and it is now available globally. The best part? There is now a FREE version, so you can start playing with it right away and getting insights from your data.

Learn more about Power BI and how to use it. And if you have no idea what I’m talking about or what Power BI is, please take one minute of your time and watch a quick YouTube video to see it in action.

In the coming weeks I will start writing a series of posts explaining more about the BI suite and what we can use it for, so keep an eye on this blog. Also, if you are already using it, please comment below with what you think so far and whether you are facing any difficulties.


Discover more about our expertise in SQL Server and Cloud

The post Power BI General Availability appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #433: A Carnival of the Vanities for DBAs

Fri, 2015-07-24 10:05

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts from the current week.


Oracle:

  • While checking the sources of the Cassandra/NetBeans integration into GitHub yesterday, something went very badly wrong and ALL the source files in my Maven project disappeared!
  • AWR Reports, Performance Hub, historisches SQL Monitoring in 12c
  • Oracle Database Mobile Server 12c: Advanced data synchronization engine
  • ORA-39001, ORA-39000 and ORA-39142
  • ORA-15410: Disks in disk group do not have equal size

SQL Server:

  • SAN and NAS protocols and how they impact SQL Server
  • SQL Style Habits: Attack of the Skeuomorphs
  • Is It Worth Writing Unit Tests?
  • Large SQL Server Database Backup on an Azure VM and Archiving
  • Reporting Services: Drawing a Buffer on a Map


MySQL:

  • MySQL Tcpdump system : use percona-toolkit to analyze network packages
  • Replication in real-time from Oracle and MySQL into data warehouses and analytics
  • Altering tablespace of table – new in MySQL 5.7
  • MySQL QA Episode 8: Reducing Testcases for Engineers: tuning
  • MySQL upgrade 5.6 with innodb_fast_checksum=1


Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.


The post Log Buffer #433: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Lessons Learned with Kubernetes

Tue, 2015-07-21 13:00
Trend Towards Kubernetes

Three trends in computing have come together to make container orchestration the next obvious evolution of internet service delivery. The first is the trend to pack an increasing number of segregated services into larger and larger servers for efficiency gains. The second is the rapid build->test->release cycle of modern microservices, which can see hundreds or thousands of updates each day. And the third is infrastructure-as-code, which abstracts the actual hardware of servers and networking equipment away into text files that describe the desired infrastructure. These files can be tested and version controlled in exactly the same way as code, and deployed just as quickly. At the convergence point sits Kubernetes from Google, which uses flat files to describe the infrastructure and containers needed to deliver a service, so the whole stack can be built, tested, and deployed incredibly quickly.

Pythian has been working with container orchestration using Kubernetes since it was announced to the public in June of 2014. We have used it to deploy microservices faster while also speeding up the development cycle. With the advent of v1.0, we decided to revisit some of what we learned implementing Kubernetes internally and with clients.

Develop Locally

Google and others provide hosted Kubernetes solutions that are fast and easy to use. In fact, you can use them for your whole build->test->deploy workflow. Keep in mind that with hosted Kubernetes, the containers are exposed to the internet from very early in your development cycle. If that’s not desirable, or if local development is important, go faster with a local cluster. Kubernetes can run on as few as three VMs and the vagrant install is well supported. Our workflow involves sharing the yaml files among the team and developing everything locally before pushing blessed containers for deployment on a production cluster.
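To make that concrete, here is a minimal sketch of the kind of yaml file we pass around (the pod name and image are hypothetical; verify the schema against your cluster version):

apiVersion: v1
kind: Pod
metadata:
  name: hello-web
  labels:
    app: hello-web
spec:
  containers:
    - name: web
      # hypothetical image and tag
      image: nginx:1.9
      ports:
        - containerPort: 80

A kubectl create -f hello-web.yaml then deploys it to whichever cluster your current context points at, local or hosted.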

Pay Attention to API Versions in Examples

Since the Kubernetes team has been developing their API in public for the last year, there have been a number of fairly large breaking changes to the API. Now that v1 of the API is stable, we can depend on it. However, many of the tutorials and examples online use earlier versions. Be sure to check which version an example uses before trying to experiment with it.

Get to know Volumes at Cluster Scale

In Kubernetes, volumes are an outgrowth of the Docker concept of a volume: a filesystem that can be mounted and isn’t tied to the lifecycle of a specific container. Kubernetes re-imagines them at cluster scale and, through plugins, allows containers to mount all kinds of things as filesystems. One plugin adds a git repository as a mountable filesystem, which opens the door to some particularly interesting use cases.
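As a sketch of that git plugin (the repository URL is hypothetical; the gitRepo volume type is the one to check in the v1 API reference):

spec:
  containers:
    - name: web
      image: nginx:1.9
      volumeMounts:
        - name: site-content
          mountPath: /usr/share/nginx/html
  volumes:
    - name: site-content
      # the pod clones this repo and mounts it as a filesystem
      gitRepo:
        repository: "https://github.com/example/website.git"
        revision: "master"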

Leverage Etcd

At the heart of the Kubernetes cluster is a distributed, shared-state system called etcd. Built on the Raft protocol, it stores key->value pairs in a tiered structure and supports an easy REST API. Etcd also provides a level of access control sufficient to securely store shared secrets for use throughout the cluster without making them available to all etcd consumers. This feature underpins the concept of a Secret in Kubernetes. But your application can also talk directly to the etcd cluster in Kubernetes. Using confd, your application can use the Kubernetes etcd instance as a data storage layer. For example, here’s a simple url shortener gist using just nginx, confd, and etcd.
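As a rough sketch of talking to etcd directly (the key names are hypothetical, and the client port can vary between deployments):

# Write a key through the etcd v2 REST API
curl -L http://127.0.0.1:2379/v2/keys/urls/abc123 -XPUT -d value="https://example.com/some/long/path"
# Read it back
curl -L http://127.0.0.1:2379/v2/keys/urls/abc123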
Happy experimentation!

Schedule a free assessment with a Pythian Kubernetes expert.

Learn more about Pythian’s Cloud expertise.

If this sounds like the kind of thing you’d like to work on, we’re hiring too!

The post Lessons Learned with Kubernetes appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reading System Logs: SQL Server – Part 2

Mon, 2015-07-20 13:52


Last time I talked about reading system logs on the SQL Server box, explaining why it is really important that DBAs scan through the logs once a day on a critical production system. As I mentioned in my previous blog post, sometimes messages are logged as information, and at times they can be treated as an early warning before the system hits actual warnings or errors. That is why it is important to read the information-level messages. Let me tell you about yet another case where a disk subsystem issue was reported as information in the system logs.

In this case, the system was suffering from high disk I/O. The disk that we eventually replaced was used for writing backups. For a few days we observed that writing backups took longer than before. The number of databases was the same and their sizes had not drastically increased, yet the time it took to write backups had increased significantly. Looking at the system logs, I noticed some messages related to the disk. Searching for those messages led me to several links pointing toward a disk issue. After working with the storage admins, they confirmed the issue too, and they are now procuring a new disk.

So, here is what I would say: when you start your day, spare a few minutes to read the system logs. At Pythian, we have our home-grown monitoring tool, Avail, which does this job for us, reporting information, warnings and errors as a report.
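If you want to script that morning scan yourself, here is a minimal PowerShell sketch (the source name matches the excerpt below; adjust it for your hardware vendor):

# Pull recent System log entries from the hardware management agent,
# including the Information-level ones that are easy to overlook
Get-EventLog -LogName System -Newest 200 |
    Where-Object { $_.Source -eq "Server Administrator" } |
    Select-Object TimeGenerated, EntryType, EventID, Message |
    Format-List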

Excerpts from the System Log:

Log Name:      System
Source:        Server Administrator
Date:          6/18/2015 10:55:55 PM
Event ID:      2271
Task Category: Storage Service
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      SQLServer
The Patrol Read corrected a media error.:  Physical Disk 0:0:10 Controller 0, Connector 0

photo credit: Ubicación del disco duro (antiguo) a desmontar via photopin (license)


Learn more about our expertise in SQL Server.

The post Reading System Logs: SQL Server – Part 2 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

RMAN 11g : How to Restore / Duplicate to a More Recent Patchset

Mon, 2015-07-20 13:24

In an Oracle DBA’s life, you’ll regularly be asked to apply a new patchset to a database. You apply it starting with the development database and moving on up to the production database, and this process can be quite long depending on the organization you work for.

In an Oracle DBA’s life, you’ll also regularly be asked to refresh a development database (or any pre-production environment) with the production data for development, testing or other needs. For years now, RMAN has helped us perform this kind of task easily.

And what should happen always happens: one day you will be asked to refresh your DEV database, which runs a more recent patchset, with your PROD data, which runs an older one. And let’s call a spade a spade: that could be a bit tricky, especially if you discover that the versions are different only once the RESTORE / DUPLICATE is finished, because you launched the usual refresh scripts forgetting this little detail…

A solution could be to ask the sysadmin team for some GB, copy an ORACLE_HOME from another server, quickly clone it on the DEV server, start an RMAN DUPLICATE / RESTORE DATABASE from PROD to DEV and then upgrade the database to the newer patchset. But this would probably take quite a while, and if adding some GB to a server requires procedures, validations, etc., it could take many days to refresh the DEV database, which is obviously not what anybody wants. And this possibility does not exist at all if you only discover the issue after the RESTORE / DUPLICATE is finished.

Happily, there’s a way to achieve this goal with a direct RESTORE / DUPLICATE to a more recent patchset (note that this method also works for 10g databases). Let’s explore the two cases you can face when doing a direct RESTORE / DUPLICATE to a more recent patchset database.



RESTORE / DUPLICATE From a Backup

Whether we are restoring or duplicating the production database from a backup, here is what will happen on the DEV database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/11/2015 22:38:59
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 17 and starting SCN of 2232530

Here, we can’t open the database with the RESETLOGS option due to the patchset version difference. We have to use a slightly different command:

SQL> alter database open resetlogs upgrade ;
Database altered.

Now that the database is open in upgrade mode, we can apply the patchset and then open the database normally.

SQL> @?/rdbms/admin/catupgrd


SQL> startup
ORACLE instance started.

Total System Global Area 626327552 bytes
Fixed Size 2255832 bytes
Variable Size 243270696 bytes
Database Buffers 377487360 bytes
Redo Buffers 3313664 bytes
Database mounted.
Database opened.

This one is in fact quick and easy.



DUPLICATE FROM ACTIVE DATABASE

Starting from 11g, we have the cool DUPLICATE FROM ACTIVE DATABASE feature that we can also use to perform this kind of refresh. When you perform a DUPLICATE FROM ACTIVE DATABASE operation from an older patchset to a more recent one, the procedure differs from the previous case: the RESETLOGS will begin but will not be able to finish properly, and you will face this error:

RMAN-08161: contents of Memory Script:
 Alter clone database open resetlogs;
RMAN-08162: executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10041: Could not re-create polling channel context following failure.
RMAN-10024: error setting up for rpc polling
RMAN-10005: error opening cursor
RMAN-10002: ORACLE error: ORA-03114: not connected to ORACLE
RMAN-03002: failure of Duplicate Db command at 03/25/2015 20:22:56
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Process ID: 24341
Session ID: 1 Serial number: 9

At this stage, it’s not possible to open the database in UPGRADE mode, nor to RECOVER the database, nor even to generate a BACKUP CONTROLFILE TO TRACE.

SQL> recover database using backup controlfile until cancel ;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.


So we have to recreate the controlfile. By using these queries, we can easily create a new CREATE CONTROLFILE statement (or we could generate a BACKUP CONTROLFILE TO TRACE from the source database and then adapt it for the destination database).

SQL> select name from v$datafile order by file#;
SQL> select group#, member from v$logfile;
SQL> select name, bytes from v$tempfile order by file#;

And then recreate the controlfile. The statement header below is reconstructed around the file lists we just queried (the database name and the REUSE / RESETLOGS / NOARCHIVELOG options are assumptions to adapt to your environment):

SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL11204" RESETLOGS NOARCHIVELOG
  LOGFILE
    GROUP 1 '/u01/app/oracle/data/orcl11204/redo01.log' SIZE 50M BLOCKSIZE 512,
    GROUP 2 '/u01/app/oracle/data/orcl11204/redo02.log' SIZE 50M BLOCKSIZE 512,
    GROUP 3 '/u01/app/oracle/data/orcl11204/redo03.log' SIZE 50M BLOCKSIZE 512
  DATAFILE
    '/u01/app/oracle/data/orcl11204/system01.dbf',
    '/u01/app/oracle/data/orcl11204/sysaux01.dbf',
    '/u01/app/oracle/data/orcl11204/undotbs01.dbf',
    '/u01/app/oracle/data/orcl11204/users01.dbf'
  ;

Control file created.


To finish the recovery and open the database in UPGRADE mode, we need to apply the current redo log (and not any archived log — we don’t have any archived logs, as the RESETLOGS hasn’t happened yet).

SQL> select * from v$logfile ;

    GROUP# STATUS  TYPE    MEMBER                                      IS_
---------- ------- ------- ------------------------------------------- ---
         3 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo03.log   NO
         2 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo02.log   NO
         1 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo01.log   NO

SQL> recover database using backup controlfile until cancel ;
ORA-00279: change 2059652 generated at 03/25/2015 20:22:54 needed for thread 1
ORA-00289: suggestion :
ORA-00280: change 2059652 for thread 1 is in sequence #1

Specify log: {<ret>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.
SQL> alter database open resetlogs upgrade ;

Database altered.


Now we can apply the patchset:

SQL> @?/rdbms/admin/catupgrd



And check that everything is good:

SQL> select * from v$version ;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

SQL> select comp_name, version, status from dba_registry ;

COMP_NAME                                     VERSION                        STATUS
--------------------------------------------- ------------------------------ -----------
Oracle Application Express                                                   VALID
Oracle Enterprise Manager                                                    VALID
Spatial                                                                      VALID
Oracle Multimedia                                                            VALID
Oracle XML Database                                                          VALID
Oracle Text                                                                  VALID
Oracle Expression Filter                                                     VALID
Oracle Rules Manager                                                         VALID
Oracle Workspace Manager                                                     VALID
Oracle Database Catalog Views                                                VALID
Oracle Database Packages and Types                                           INVALID
JServer JAVA Virtual Machine                                                 VALID
Oracle Database Java Packages                                                VALID
OLAP Analytic Workspace                                                      INVALID

18 rows selected.




This method saved me a lot of time. Have a good day :)


Discover more about our expertise in Oracle.

The post RMAN 11g : How to Restore / Duplicate to a More Recent Patchset appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

DevOps: Applied Empathy

Mon, 2015-07-20 12:09

I enjoyed participating in a panel on DevOps culture by Electric Cloud last week. Our commendable hosts kept things light and productive despite the number of participants and breadth of topic.

It wouldn’t be a culture discussion if we had failed to review the motivations for that (DevOps) culture: namely the evolve-or-die progression of tech business in general and software systems of customer engagement in particular. So the logic goes, any non-trivial business is now (also) a software business – thus, being “good at software” (and rapidly deriving high quality, constantly improving, large-scale services from it) is a fundamental market success factor and must be(come) part of the corporate DNA.

I doubt the above is controversial, but the degree to which it feels true probably depends on the scale of opportunity in your sector(s) and the governing regulatory regime(s). Those factors have a big impact not only on the intensity of motivation, but the challenges and resistance to change that must be overcome in a successful program of transformation.

The discussion reminded me that empathy is important for more than just being nice. It’s also a great way to understand what motivates people and how to incorporate their success drivers into transformation efforts.

[Image: top of the world in a typical software engineering org]

Consider Katniss, for example: she has to deliver to Rex (and consumers like you and me) the “and sandwich” of Velocity (new features) and Operational Excellence, or we (consumers) will find a service that does. She may at times prioritize Velocity over other initiatives, and the stress on Bill grows under this pressure. If, as an agent of transformational change, you propose methods of increasing Velocity to Bill, you are likely to face rejection: Bill is already drowning at the present pace.

If, on the other hand, one approaches Bill to explain that pervasive, intelligent automation strategies can give his team their weekends back, and make his team a proactive contributor and valued partner in growth of the business, one will likely find a different sort of audience.

All this means, to me, is that DevOps is a useful context for improving a complex sort of collaboration called a software product lifecycle. Understanding the motivations and needs of the players in an organization is a key enabler for planning and executing successful programs of change.


Discover more about our expertise in DevOps and the author Aaron Lee.

The post DevOps: Applied Empathy appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #432: A Carnival of the Vanities for DBAs

Fri, 2015-07-17 13:24

Yet again, this log buffer edition brings some rich blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Installing Oracle XE, ORDS and Apex on CentOS
  • Major Growth is Expected in the DBaaS Space. Are Your Skills Ready?
  • How to Hide Actions in OBPM 12c Workspace
  • You can configure auto refresh for ADF BC cached LOV and this works out of the box, no special coding is needed.
  • Search and Replace in Oracle SQL Developer Data Modeller

SQL Server:

  • Polyglot Persistence is a fancy term meaning that when storing data, it is best to use multiple data storage technologies, chosen based upon the way data is being utilized by individual applications or components of a single application.
  • HOW TO: Recreate SQL Server Statistics In a Different Environment
  • New SQL Server Management Studio 2015/June – with Check for Updates!
  • Power BI General Availability Announced: July 24, 2015
  • Ensuring Columns Are Always Present In A Table Returned By Power Query


MySQL:

  • Using MySQL sandbox for upgrade testing
  • Bypassing SST in Percona XtraDB Cluster with binary logs
  • Un-obvious “Unknown column in ‘field list’” error
  • Bypassing SST in Percona XtraDB Cluster with incremental backups
  • innodb_fast_checksum=1 and upgrading to MySQL 5.6


Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.


The post Log Buffer #432: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Real Life DBA: Finding Errors in SQL Server

Fri, 2015-07-17 13:00

I tend to spend time on a few different forums, mostly the StackExchange network, and find folks asking questions about various errors seen in the error log of a SQL Server instance. Most DBAs are aware that the errors SQL Server provides can be vague at times, so it takes a bit of knowing where to look for more information. In some situations it is after the fact before you can get to the instance to start your analysis, so you have to focus on the areas that contain historical information. A few of those places to check are:

  1. Various DMVs, depending on the error; for things like connectivity errors you can search the DMV sys.dm_os_ring_buffers.
  2. The system_health Extended Events session, which captures things like deadlock reports; some of what you find in the ring buffer is written to this session as well.
  3. The default trace. This has been around since SQL Server 2005, but with Profiler and traces on the deprecation list it is unknown how long it will stay around; it can help while it is there (a query sketch follows after this list).
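For example, here is a minimal T-SQL sketch for pulling error-related events out of the default trace (the column choices are illustrative):

-- Locate the default trace file, then read it with fn_trace_gettable
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT te.name AS EventName, t.DatabaseName, t.LoginName,
       t.HostName, t.StartTime, t.TextData
FROM fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE te.name LIKE '%Error%'
ORDER BY t.StartTime DESC;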

In the field, we have our Avail Monitoring that customers can utilize for monitoring an instance. It will monitor the error log of a SQL Server instance, and we get a page when something is found that it does not know about. This is an error message I actually received today for a client:


That error message only tells me the SPID it occurred under, and someone else could be using that SPID by the time I get to the server. It does not provide the login or even the database it pertains to. So I went searching around, decided to check the default trace, and actually found this bit of information:

[Screenshots: Error9100_1 and Error9100_2 – the default trace details for the error]

You can see that the SPID matches, and I was able to get the login that caused the error and what was being done. When you hit an issue with SQL Server that is vague or does not provide enough information, go back to the basics and recall what options you have in front of you.


Discover more about our expertise in SQL Server.

The post Real Life DBA: Finding Errors in SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #431: A Carnival of the Vanities for DBAs

Fri, 2015-07-10 08:46

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts about new features, tips, tricks and best practices.


Oracle:

  • Traditionally, assigning specific processes to a certain set of CPUs has been done by using processor sets (and resource pools). This is quite useful, but it requires the hard partitioning of processors in the system. That means we can’t restrict process A to run on CPUs 1,2,3 and process B to run on CPUs 3,4,5, because these partitions overlap.
  • Parallel_Degree_Limit, Parallel_Max_Degree, Maximum DOP? Confused?
  • JDeveloper 12c – ORA-01882: time zone region not found
  • Using a Parallel Gateway without a Merge in OBPM
  • Secure multi-threaded live migration for kernel zones

SQL Server:

  • How to Unlock Your Team’s Creative Power with SQL Prompt and a VCS
  • In-Memory OLTP – Common Workload Patterns and Migration Considerations
  • The Poster of the Plethora of PowerShell Pitfalls
  • Temporarily Change SQL Server Log Shipping Database to Read Only
  • Monitoring Availability Groups with Redgate SQL Monitor


MySQL:

  • Introducing MySQL Performance Analyzer
  • MySQL 5.7.8 – Now featuring super_read_only and disabled_storage_engines
  • Become a MySQL DBA – Webinar series: Which High Availability Solution?
  • How to obtain the MySQL version from an FRM file
  • MySQL Enterprise Audit: Parsing Audit Information From Log Files, Inserting Into MySQL Table


Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #431: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 1

Thu, 2015-07-09 13:37

This is the sixth and final post in the series on the Pillars of PowerShell, or at least part one of the final post. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS

PowerShell + SQL Server is just cool! You will see folks talk about the ability to perform a task against multiple servers at a time, automate implementing a configuration or database change, or just obtain a bit of consistency when doing certain processes. I tend to use it just because I can, and it is fun to see what I can do. There are some instances where I have used it for a specific purpose and it saved me time, but overall I just chose to use it. On average, there are going to be things you can do in PowerShell that could also be done in T-SQL; in those cases you use the tool that fits your needs.

Interacting with SQL Server PowerShell

There are three main ways to interact with SQL Server using PowerShell that I have seen:

  1. SQL Server PowerShell (SQLPS)
  2. SQL Server Management Objects (SMO)
  3. Native .NET coding

I am not going to touch on the third option in this series because it is not something I use enough to discuss. It is not my first choice to use, but it does serve a purpose at times.

To provide enough information to introduce you to working with PowerShell and SQL Server, I broke this post into two parts. In part one, we are going to look at SQL Server PowerShell (SQLPS) and using the SQL Server Provider (SQLSERVER:\). In part two we will go over SMO and what can be accomplished with it.

SQLPS, to me, offers quick access to do the one-liner type tasks against SQL Server. It is really just a preference which option you go with, so if it works for you, use it. There are some situations where using the SQL Server Provider actually requires you to mix in SMO (e.g. creating a schema or database role). It also offers a few cmdlets that are added to (and improved upon) with each release of SQL Server.


The first thing to understand is how to get the product module into your PowerShell session. As with most products, some portion of the software has to exist on the machine you are working on, or the machine your script is going to be executed on. SQL Server PowerShell and SMO are installed by default if you install the SQL Server Management Tools (aka SSMS and such) for SQL Server 2008 and higher. I will only mention that they can also be found in the SQL Server Feature Pack if you need a more “standalone” type setup on a remote machine.

One thing you should get in the habit of doing in your scripts is verifying things that can cause more errors than desired; one of those is dealing with modules. If the module is not loaded when the script is run, your script is just going to spit out a ton of red text. If the prerequisites are not there to begin with, there is no point in continuing. You can verify that a version of the SQLPS module is installed on your machine by running the following command:

Get-Module -ListAvailable -Name SQL*

If you are running SQL Server 2012 or 2014 you will see something like this:


This works in a similar fashion when you want to verify if the SQL Server 2008 snap-in is loaded:


I generally do not want to have to remember or type out these commands all the time when I am doing things on the fly, so I will add this bit of code to my PowerShell Profile:

Import-Module SQLPS -DisableNameChecking -ErrorAction 'Stop'

#Load SQL Server 2008 by uncommenting next line
#Add-PSSnapin *SQL* -ErrorAction 'Stop'

One cool thing most cmdlets you use in PowerShell support is the -ErrorAction parameter. There are a few different values you can use for this parameter, and you can find those by checking the help on about_CommonParameters. If your script is one that is going to be interactive or run manually, I would use -ErrorAction ‘Inquire‘ instead; try it out on a machine that does not have the module installed to see what happens. Once you have the module or snap-in loaded you will be able to access the SQL Server PowerShell Provider.

One side note: there actually is a “sqlps.exe” utility that is easily accessible in most cases via the right-click menu in SSMS (e.g. right-click on the “Databases” node in Object Explorer). If you open this, you are thrust into the SQLPS provider at the “directory” of the node you opened from in SSMS. However convenient that may seem, it was added to the deprecation list with SQL Server 2012, so there’s not much point in talking about it. It has its own little quirks, so most folks steer clear of it these days.

Being Specific

The code I use in my profile loads the most current version of the module found on my system; at least it should, though it may not do what you expect every time. In some circumstances, when you are developing scripts on your own system, you may need to import a specific version, especially if you are in a mixed-version environment for SQL Server. You can load a specific version of the module by using Get-Module to find your version and passing it to Import-Module.

Get-Module -ListAvailable -Name SQLPS | select name, path
#110 = SQL Server 2012, 120 = SQL Server 2014, 130 = SQL Server 2016
Get-Module -ListAvailable -Name SQLPS |
     where {$_.path -match "110"} | Import-Module

# To show that it was indeed loaded
Get-Module -Name SQLPS | select name, path

#If you want to switch to another one, you need to remove it
Remove-Module SQLPS

By default, when you browse the SQLPS provider (or most providers actually), it is going to utilize the account running the PowerShell session: Windows Authentication. If you find yourself working with an instance that requires SQL Login authentication, don’t lose hope. You can connect to an instance via the SQL Server Provider with a SQL Login. There is an MSDN article that provides a complete function you can use to create a connection for such a purpose. It does not show a version of the article for SQL Server 2008, but I tested this with SQL Server 2008 R2 and it worked fine.
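The function in that article essentially builds an SMO connection object with a SQL Login; a minimal sketch of the idea looks like this (the instance name and login are hypothetical):

$conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
$conn.ServerInstance = "MANATARMS\SQL12"
$conn.LoginSecure = $false                                   # SQL Login instead of Windows Authentication
$conn.Login = "myAppLogin"
$conn.SecurePassword = Read-Host "Password" -AsSecureString  # prompted, never stored in plain text
$server = New-Object Microsoft.SqlServer.Management.Smo.Server($conn)
$server.Databases | Select-Object Name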


One important note you can learn from the function in that article: the password is kept secure and is not stored or processed in plain text.

SQLPS Cmdlets

SQLPS, as noted previously, offers a handful of cmdlets for performing administrative tasks against SQL Server instances. The majority of the ones you will find with SQL Server 2012, for example, revolve around Availability Groups (e.g. disabling, creating, removing, etc.). The other notables are Backup-SqlDatabase and Restore-SqlDatabase; these do exactly what you think, but with a few limitations. The backup cmdlet can actually only perform a FULL, LOG, or FILE level backup (not sure why they did not offer support for differential backups). Anyway, they can be useful for automating backups of production databases to “refresh” development or testing environments, as the backup cmdlet does support doing a copy-only backup. They are also handy if you deal with Express Edition, where you can utilize this cmdlet and a scheduled task to back up those databases.

Update 7/13/2015: One correction I should have checked previously: the backup cmdlet for 2012 and above does include an “-Incremental” parameter for performing differential backups.
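As a quick sketch of the refresh scenario mentioned above (the instance, database and path are hypothetical):

# Copy-only full backup, which does not disturb the regular backup chain
Backup-SqlDatabase -ServerInstance "MANATARMS\SQL12" -Database "AdventureWorks" `
    -BackupFile "D:\Backups\AdventureWorks_copyonly.bak" -CopyOnly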

The other main cmdlet you get with SQLPS is what most people consider the replacement for the sqlcmd utility: Invoke-Sqlcmd. The main thing you get from the cmdlet is smarter output, in the sense that PowerShell will more appropriately detect the data types coming out, compared to the utility treating everything as a string.
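A quick way to see the difference (the instance name is hypothetical): the create_date column below comes back as a real DateTime instead of a string:

$result = Invoke-Sqlcmd -ServerInstance "MANATARMS\SQL12" `
    -Query "SELECT name, create_date FROM sys.databases"
$result[0].create_date.GetType().Name    # DateTime, not String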

SQLPS One-liners

Working with the SQL Server Provider, you traverse it as you would a drive on your computer, so you can use the cmdlet Get-ChildItem or, as most folks do, its alias dir. The main thing to understand is the first few “directories” used to access a given SQL Server instance. There are actually multiple root directories under the provider, which you can see just by running “dir SQLSERVER:\“. You can tell from the description what each one is for; the one we are interested in is the “Database Engine”.


Once you get beyond the root directory, it can require a bit of patience, as the provider is slow to respond or return information. If you want to dig into an instance of SQL Server, you just need to understand the structure of the provider; it will generally follow this syntax: <Provider>:\<root>\<hostname>\<instance name>\. The instance name will be “DEFAULT” if you are dealing with a SQL Server default instance. If you have a named instance, you just add the name of the instance (minus the server name).
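For example, using the hostname and instance name that appear later in this post:

# Databases on a default instance
dir SQLSERVER:\SQL\MANATARMS\DEFAULT\Databases | Select-Object Name
# Databases on the named instance MANATARMS\SQL12
dir SQLSERVER:\SQL\MANATARMS\SQL12\Databases | Select-Object Name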

To provide a real-world example: Avail Monitoring is the custom tool Pythian developed to monitor the SQL Server environments of our customers (or Oracle or MySQL…you get the point). One of the features it includes, among many, is monitoring for failed jobs. We customize the monitoring around the customer’s requirements, so some job failures page us immediately when they occur, while others may allow a few extra failures before we are notified to investigate. This is all done without any intervention required from the customer, and from the notification I know which job failed. Right off, you are going to want to check the job history for that job to see what information shows up, and I can use the SQLPS Provider to do just that:

# To see the job history
dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs | where {$_.Name -eq "Test_MyFailedJob"} | foreach {$_.EnumHistory()} | select Message, RunDate -First 5 | Format-List
# if I needed to start the job again
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | where {$_.Name -eq "Test_MyFailedJob"} | foreach {$_.Start()}

You might think that is a good bit of typing, but consider how long it can take to do the same thing through SSMS…I can type much faster than I can click with a mouse.

Anyway, to close things out, I thought I would show one of the cool things SQLPS is used for the most: scripting out stuff. Just about every “directory” you go into with the provider is going to offer a method named Script().

$jobs | where {$_.Name -eq "Test_MyFailedJob"} | foreach {$_.Script()}

I will get the T-SQL equivalent of the job, just like SSMS provides; this can be used to document your jobs or when refreshing a development server.


I hope you got an idea of what SQLPS can do from the information above; one-liners are always fun to discover. The SQL Server Provider is not the most used tool out there by DBAs, but it can be a life-saver at times. In the next post we will dig into using SMO and the awesome power it offers.


Discover more about our expertise in SQL Server

The post Pillars of PowerShell: SQL Server – Part 1 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reading System Logs on SQL Server

Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that backups were failing for one particular database. When I ran the backup manually to a different folder it would complete successfully, but not in the folder the backup jobs were originally configured to use. This made me suspect hard disk corruption. In the end, I fixed the backup issue in the interim so that I would not get paged in the future, while also lowering the risk of having no backup in place.

Upon reviewing the Windows Event logs, it was revealed that I was right to suspect a faulty hard drive. The log reported some messages related to SCSI codes, specifically SCSI Sense Key 3, which means the SCSI device had a Medium Error. Eventually, the hard drive was replaced by the client and the database was moved to another drive. In the past month, I have had about 3 cases where I observed serious storage-related messages being reported as information. I have included one case here for your reference, which may help if you see such things in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

Error: 18210, Severity: 16, State: 1
BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
Msg 3271, Level 16, State 1, Line 1
A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually, I found that it ran fine until a specific point (i.e. 55%) before it failed again with the above error. Next, I decided to run DBCC CHECKDB, which reported that a particular table had a consistency error at a particular page. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated, since the backup was also failing when it reached 55%. A select statement also failed to complete when I queried object 645577338. The only option I was left with was to create a new table and drop the original one. After this had been done, the full backup succeeded. As soon as this was completed, we moved the database to another drive.

I was still curious about these errors, so I started looking at the Windows Event Logs – System folder, filtered to show only Errors and Warnings. However, this did not show me anything that invited further reading. Thus, I removed the filter and carefully reviewed the logs. To my surprise, the logs showed entries for a bad sector, but they were in the Information section of the Windows Event Viewer System folder.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.
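A sketch of scripting that unfiltered review in PowerShell (Level 4 maps to Information entries in Get-WinEvent):

# Review recent System log entries without filtering out the Information level
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 4 } -MaxEvents 100 |
    Select-Object TimeCreated, ProviderName, Id, Message |
    Format-List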

There could be a different error, warning or information message printed on your server, depending on what the issue is. There is still much more to be said in order to explain these codes and descriptions, but that is a topic for another post.

You may have noticed that I referred to this as CASE 1, which means I will blog about one or two more cases in the future. Stay tuned!

Photo credit: Hard Disk KO via photopin (license)

Learn more about our expertise in SQL Server.

The post Reading System Logs on SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pythian Introduces Cassandra Consulting & Operational Support

Tue, 2015-07-07 06:00

Over the past ten years, the technology industry has begun to adopt a whole new way of thinking about the database and data storage. This is largely the result of the fast moving, high-volume and non-traditional types of data that are being generated to support both internal business processes and real-time web applications. Using a NoSQL database enables businesses to quickly and cost-effectively process large amounts of data, whether the data is hosted in the enterprise or on the cloud.

Pythian’s new Cassandra services address the needs of customers deploying Apache Cassandra. Pythian offers highly knowledgeable and experienced Cassandra experts who can guide you to success with your Cassandra deployment by filling critical skills and capacity gaps, getting your Cassandra instance up and running quickly, and ensuring that it performs optimally as you move forward.

If you’re thinking of implementing Cassandra, or already have, watch our webinar, Getting Started With Cassandra, which covers key topics for starting out, such as when you should use Cassandra, potential challenges, real world Cassandra applications and benefits, and more.

Learn more about Pythian’s Cassandra Services.

The post Pythian Introduces Cassandra Consulting & Operational Support appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

So, What Kind of Person Are You?

Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than themselves.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran into the job of ever changing the endpoints to the hosts of the particular environment; development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.


MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Prepare for the Leap Second

Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.


Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small time difference between the atomic clock and astronomical time (which is based on the rotation of the earth), because the rotation of the earth is slowing down.

To synchronize these times, one second will be added to the atomic clock – a leap second – so that both clocks stay in sync. This will happen at midnight UTC between June 30th and July 1st (not in local time; the time is the same as in the GMT time zone). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 26 such leap seconds have been inserted. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight savings time, which shifts the timezone information and does not alter the underlying UTC time clock on which servers work, a leap-second change is an actual change in the UTC time value. Usually, UTC time is continuous and predictable, but the leap second breaks this normal continuity requiring it to be addressed.


What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. There are a series of patches and workarounds available, depending upon the needs of the components running on the Linux server.
  • HP-UX servers have NTP patches released in Q2 2015.

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain time-stamps from the server OS, so those running on Linux have potential issues. For most, there are no additional corrections necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms from the leap second event may persist for up to a day before and after the leap second event, as server NTP service updates are provided.
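If you want to check whether a Linux NTP daemon already has the leap flag armed, a minimal sketch (output format varies by ntpd version):

# Leap indicator: 00 = no leap pending, 01 = insertion pending at the next midnight UTC
ntpq -c "rv 0 leap"
# Watch for kernel leap second messages around the event
dmesg | grep -i leap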

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.


The post Prepare for the Leap Second appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #429: A Carnival of the Vanities for DBAs

Fri, 2015-06-26 06:47

This Log Buffer Edition gathers a wide sample of blogs and then purifies the best ones from Oracle, SQL Server and MySQL.


Oracle:

  • If you take a look at the “alter user” command in the old 9i documentation, you’ll see this: DEFAULT ROLE Clause.
  • There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after a large update that was rolled back.”
  • 12c Parallel Execution New Features: 1 SLAVE distribution
  • Index Tree Dumps in Oracle 12c Database (New Age)
  • Is it possible to cause tables to be stale with only tiny amounts of change?

SQL Server:

  • Making Data Analytics Simpler: SQL Server and R
  • Challenges with integrating MySQL data in an ETL regime and the amazing FMTONLY trick!
  • Azure Stream Analytics aims to extract knowledge structures from continuous ordered streams of data by real-time analysis.
  • Grant User Access to All SQL Server Databases
  • SQL SERVER – How Do We Find Deadlocks?


MySQL:

  • Efficient Use of Indexes in MySQL
  • Business-critical MySQL with DR in vCloud Air
  • Become a MySQL DBA blog series – Common operations – Schema Changes.
  • Building a Better CREATE USER Command

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #429: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #428: A Carnival of the Vanities for DBAs

Mon, 2015-06-22 11:45

This Log Buffer Edition once again sparkles with some gems, hand-picked from Oracle, SQL Server and MySQL.


Oracle:

  • Oracle GoldenGate is now certified with Unity 14.10. With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity, which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers.
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.


MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy, or a connection pool combined with a load-balancer, can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs