Pythian Group

Official Pythian Blog - Love Your Data

Log Buffer #433: A Carnival of the Vanities for DBAs

Fri, 2015-07-24 10:05

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts from the past week.

Oracle:

  • While checking the sources of the Cassandra/NetBeans integration into GitHub yesterday, something went very badly wrong and ALL the source files in my Maven project disappeared!
  • AWR Reports, Performance Hub, historisches SQL Monitoring in 12c
  • Oracle Database Mobile Server 12c: Advanced data synchronization engine
  • ORA-39001, ORA-39000 and ORA-39142
  • ORA-15410: Disks in disk group do not have equal size

SQL Server:

  • SAN and NAS protocols and how they impact SQL Server
  • SQL Style Habits: Attack of the Skeuomorphs
  • Is It Worth Writing Unit Tests?
  • Large SQL Server Database Backup on an Azure VM and Archiving
  • Reporting Services: Drawing a Buffer on a Map

MySQL:

  • MySQL Tcpdump system : use percona-toolkit to analyze network packages
  • Replication in real-time from Oracle and MySQL into data warehouses and analytics
  • Altering tablespace of table – new in MySQL 5.7
  • MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
  • MySQL upgrade 5.6 with innodb_fast_checksum=1

 

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

 

The post Log Buffer #433: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Lessons Learned with Kubernetes

Tue, 2015-07-21 13:00
Trend Towards Kubernetes

Three trends in computing have come together to make container orchestration the next obvious evolution of internet service delivery.  The first is the trend to pack an increasing number of segregated services into larger and larger servers for efficiency gains.  The second is the rapid build->test->release cycle of modern microservices, which can see hundreds or thousands of updates each day.  The third is infrastructure-as-code, which abstracts the actual hardware of servers and networking equipment away into text files that describe the desired infrastructure.  These files can be tested and version controlled in exactly the same way as code, and deployed just as quickly.  At the convergence point sits Kubernetes from Google, which uses flat files to describe the infrastructure and containers needed to deliver a service, and which can be built, tested, and deployed incredibly quickly.

Pythian has been working with container orchestration using Kubernetes since it was announced to the public in June of 2014.  We have used it to deploy microservices faster while also speeding up the development cycle.  With the advent of v1.0, we decided to revisit some of what we have learned implementing Kubernetes internally and with clients.

Develop Locally

Google and others provide hosted Kubernetes solutions that are fast and easy to use.  In fact, you can use them for your whole build->test->deploy workflow.  Keep in mind that with hosted Kubernetes, the containers are exposed to the internet from very early in your development cycle.  If that’s not desirable, or if local development is important, go faster with a local cluster.  Kubernetes can run on as few as three VMs and the vagrant install is well supported.  Our workflow involves sharing the yaml files among the team and developing everything locally before pushing blessed containers for deployment on a production cluster.
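
As a rough sketch of that local-cluster workflow (the provider variable and kube-up.sh path follow the upstream Kubernetes getting-started guides of the time; verify against the current docs, and treat my-service.yaml as a placeholder for your own shared files):

export KUBERNETES_PROVIDER=vagrant   # tell the cluster scripts to use local Vagrant VMs
./cluster/kube-up.sh                 # run from a Kubernetes release checkout: brings up the cluster

kubectl create -f my-service.yaml    # then feed it the same yaml files the team shares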

Pay Attention to API Versions in Examples

Since the Kubernetes team has been developing their API in public for the last year, there have been a number of fairly large breaking changes to the API.  Now that v1 of the API is stable, we can depend on it. However, many of the tutorials and examples online use earlier versions.  Be sure to check which version an example uses before trying to experiment with it.

Get to know Volumes at Cluster Scale

In Kubernetes, volumes are an outgrowth of the Docker concept of a volume: a filesystem that can be mounted and isn’t tied to the lifecycle of a specific container.  Kubernetes re-imagines them at cluster scale and, through plugins, allows containers to mount all kinds of things as filesystems.  One plugin adds a git repository as a mountable filesystem, which opens the door to some particularly interesting use cases.
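
To make that concrete, here is a minimal sketch of a pod that mounts a git repository as a volume (the pod name, image, and repository URL are all hypothetical; the gitRepo plugin clones the repo under the mount path):

apiVersion: v1
kind: Pod
metadata:
  name: git-content-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: site-source
      mountPath: /usr/share/nginx/html   # serve whatever the repo contains
  volumes:
  - name: site-source
    gitRepo:                             # volume plugin that clones a git repo
      repository: "https://github.com/example/site.git"
      revision: "master"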

Leverage Etcd

At the heart of the Kubernetes cluster is a distributed, shared-state system called etcd.  Built on the Raft consensus protocol, it stores key->value pairs in a tiered structure and supports an easy REST API.  Etcd also provides a level of access control sufficient to securely store shared secrets for use throughout the cluster without making them available to all etcd consumers.  This feature underpins the concept of a Secret in Kubernetes.  But your application can also talk directly to the etcd cluster in Kubernetes.  Using confd, your application can use the Kubernetes etcd instance as a data storage layer.  For example, here’s a simple url shortener gist using just nginx, confd, and etcd.
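
As a hedged illustration of that REST API (the endpoint and key path here are hypothetical; etcd v2 listens on port 2379 by default, 4001 on older builds):

# write a key (the etcd v2 keys API expects a form-encoded "value" field)
curl -L http://127.0.0.1:2379/v2/keys/shortener/abc -XPUT -d value="http://example.com/a-very-long-url"

# read it back; the JSON response carries the stored URL under node.value
curl -L http://127.0.0.1:2379/v2/keys/shortener/abc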
Happy experimentation!

Schedule a free assessment with a Pythian Kubernetes expert.

Learn more about Pythian’s Cloud expertise.

If this sounds like the kind of thing you’d like to work on, we’re hiring too!

The post Lessons Learned with Kubernetes appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reading System Logs: SQL Server – Part 2

Mon, 2015-07-20 13:52

Greetings!

Last time I talked about reading system logs on the SQL Server box, explaining why it is really important that DBAs scan through the logs once a day on a critical production system. As I mentioned in my previous blog post, sometimes messages logged as information can be treated as an early warning before the system hits actual error messages. That is why it is important to read the information-level messages. Let me tell you about yet another case I had, where a disk subsystem issue was reported as information in the system logs.

In this case, the system was suffering from high disk I/O. The disk that we eventually had replaced was used for writing backups. For a few days we had observed that backups were taking longer than before.  The number of databases was the same, and their sizes had not drastically increased, yet the time it took to write backups had grown significantly. Looking at the system logs I noticed some messages related to the disk, and searching for those messages led me to some links pointing toward a disk issue. The storage admins confirmed the issue too, and they are now procuring a new disk.

So, here is what I would say: when you start your day, spare a few minutes to read the system logs.  At Pythian, we have our home-grown monitoring tool, Avail, which does this job for us, reporting information, warnings and errors as a report.
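
If you want a quick manual pass with PowerShell, here is a sketch (assuming PowerShell 3.0+; the provider-name filter is just a starting point, since source names vary by hardware vendor) that pulls the last day of System log entries, deliberately including the Information level for the reasons above:

$filter = @{ LogName = 'System'; StartTime = (Get-Date).AddDays(-1) }
Get-WinEvent -FilterHashtable $filter |
    Where-Object { $_.ProviderName -match 'disk|storage|Server Administrator' } |
    Select-Object TimeCreated, ProviderName, Id, LevelDisplayName, Message |
    Format-Table -AutoSize -Wrap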

Excerpts from the System Log:

Log Name:      System
Source:        Server Administrator
Date:          6/18/2015 10:55:55 PM
Event ID:      2271
Task Category: Storage Service
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      SQLServer
Description:
The Patrol Read corrected a media error.:  Physical Disk 0:0:10 Controller 0, Connector 0

photo credit: Ubicación del disco duro (antiguo) a desmontar via photopin (license)

 

Learn more about our expertise in SQL Server.

The post Reading System Logs: SQL Server – Part 2 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

RMAN 11g : How to Restore / Duplicate to a More Recent Patchset

Mon, 2015-07-20 13:24

In an Oracle DBA’s life, you’ll regularly be asked to apply a new patchset to a database; you apply it starting from the development database up to the production database, and this process can be quite long, depending on the organization you work for.

In an Oracle DBA’s life, you’ll also regularly be asked to refresh a development database (or any environment before production) with production data, for development, testing or whatever needs arise. For years now, RMAN has helped us perform this kind of task easily.

And what should happen always happens: one day you will be asked to refresh your more recent patchset DEV database (let’s say 11.2.0.4) with your PROD data (let’s say it’s running on 11.2.0.3). And let’s call a spade a spade: that can be a bit tricky — especially if you discover that the versions are different only once the RESTORE / DUPLICATE has finished, because you launched the usual refresh scripts forgetting this little detail…

A solution could be to ask the sysadmin team for some GB of space, copy an 11.2.0.3 ORACLE_HOME from another server, quickly clone it on the DEV server, run an RMAN DUPLICATE / RESTORE DATABASE from the 11.2.0.3 PROD to the 11.2.0.3 DEV, and then upgrade it to 11.2.0.4. But this will probably take quite a while, and if adding some GB to a server requires procedures, validations, etc., it could take many days to refresh the DEV database, which is obviously not what anybody wants. And this option does not exist if you discover the issue only after the RESTORE / DUPLICATE is finished.

Fortunately, there’s a way to achieve this goal by directly RESTORING / DUPLICATING a database to a more recent patchset (note that this method also works for 10g databases). Let’s explore the two cases you can face when doing a direct RESTORE / DUPLICATE to a more recent patchset database.

 

RESTORE / DUPLICATE DATABASE case

Whether we are restoring or duplicating the production database from a backup, here is what will happen on the DEV database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/11/2015 22:38:59
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 17 and starting SCN of 2232530
RMAN>

Here, we can’t open the database with the RESETLOGS option due to the patchset version difference. We have to use a slightly different command:

SQL> alter database open resetlogs upgrade ;
Database altered.
SQL>

Now that the database is open in upgrade mode, we can apply the 11.2.0.4 patchset and open the database.

SQL> @?/rdbms/admin/catupgrd

...

SQL> startup
ORACLE instance started.

Total System Global Area 626327552 bytes
Fixed Size 2255832 bytes
Variable Size 243270696 bytes
Database Buffers 377487360 bytes
Redo Buffers 3313664 bytes
Database mounted.
Database opened.
SQL>

This one is in fact quick and easy.

 

DUPLICATE FROM ACTIVE DATABASE case

Starting with 11g, we have the handy DUPLICATE FROM ACTIVE DATABASE feature, which we can also use to perform this kind of refresh. When you perform a DUPLICATE FROM ACTIVE DATABASE operation from an 11.2.0.3 to an 11.2.0.4 version, the procedure differs from the previous one: the RESETLOGS will begin but will not be able to finish properly, and you will face this error:

RMAN-08161: contents of Memory Script:
{
 Alter clone database open resetlogs;
}
RMAN-08162: executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10041: Could not re-create polling channel context following failure.
RMAN-10024: error setting up for rpc polling
RMAN-10005: error opening cursor
RMAN-10002: ORACLE error: ORA-03114: not connected to ORACLE
RMAN-03002: failure of Duplicate Db command at 03/25/2015 20:22:56
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Process ID: 24341
Session ID: 1 Serial number: 9

At this stage, it’s not possible to open the database in UPGRADE mode, nor to RECOVER the database, nor even to generate a BACKUP CONTROLFILE TO TRACE.

SQL> recover database using backup controlfile until cancel ;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.

SQL>

So we have to recreate the controlfile. Using the queries below, we can easily build a new CREATE CONTROLFILE statement (or we could generate a BACKUP CONTROLFILE TO TRACE from the source database and then adapt it for the destination database).

SQL> select name from v$datafile order by file#;
SQL> select group#, member from v$logfile;
SQL> select name, bytes from v$tempfile order by file#;

And then recreate the controlfile:

SQL> CREATE CONTROLFILE REUSE DATABASE "TST11204" RESETLOGS ARCHIVELOG
 MAXLOGFILES 16
 MAXLOGMEMBERS 3
 MAXDATAFILES 100
 MAXINSTANCES 8
 MAXLOGHISTORY 292
 LOGFILE
  GROUP 1 '/u01/app/oracle/data/orcl11204/redo01.log' SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u01/app/oracle/data/orcl11204/redo02.log' SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u01/app/oracle/data/orcl11204/redo03.log' SIZE 50M BLOCKSIZE 512
 DATAFILE
  '/u01/app/oracle/data/orcl11204/system01.dbf',
  '/u01/app/oracle/data/orcl11204/sysaux01.dbf',
  '/u01/app/oracle/data/orcl11204/undotbs01.dbf',
  '/u01/app/oracle/data/orcl11204/users01.dbf'
 CHARACTER SET AL32UTF8
 ;

Control file created.

SQL>

To finish the recovery and open the database in UPGRADE mode, we need to apply the current redo log (and not any archived log — we don’t have any archived logs since the RESETLOGS hasn’t happened yet).

SQL> select * from v$logfile ;

    GROUP# STATUS  TYPE    MEMBER                                      IS_
---------- ------- ------- ------------------------------------------- ---
         3 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo03.log   NO
         2 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo02.log   NO
         1 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo01.log   NO

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE until cancel ;
ORA-00279: change 2059652 generated at 03/25/2015 20:22:54 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/data/TST11204/archivelog/2015_03_25/o1_mf_1_1_%u_.arc
ORA-00280: change 2059652 for thread 1 is in sequence #1

Specify log: {<ret>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/data/orcl11204/redo01.log
Log applied.
Media recovery complete.
SQL> alter database open resetlogs upgrade ;

Database altered.

SQL>

Now we can apply the 11.2.0.4 patchset:


SQL> @?/rdbms/admin/catupgrd

...

SQL>

And check that everything is good:


SQL> select * from v$version ;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> select comp_name, version, status from dba_registry ;

COMP_NAME VERSION STATUS
--------------------------------------------- ------------------------------ -----------
OWB 11.2.0.3.0 VALID
Oracle Application Express 3.2.1.00.12 VALID
Oracle Enterprise Manager 11.2.0.4.0 VALID
OLAP Catalog 11.2.0.4.0 INVALID
Spatial 11.2.0.4.0 VALID
Oracle Multimedia 11.2.0.4.0 VALID
Oracle XML Database 11.2.0.4.0 VALID
Oracle Text 11.2.0.4.0 VALID
Oracle Expression Filter 11.2.0.4.0 VALID
Oracle Rules Manager 11.2.0.4.0 VALID
Oracle Workspace Manager 11.2.0.4.0 VALID
Oracle Database Catalog Views 11.2.0.4.0 VALID
Oracle Database Packages and Types 11.2.0.4.0 INVALID
JServer JAVA Virtual Machine 11.2.0.4.0 VALID
Oracle XDK 11.2.0.4.0 VALID
Oracle Database Java Packages 11.2.0.4.0 VALID
OLAP Analytic Workspace 11.2.0.4.0 INVALID
Oracle OLAP API 11.2.0.4.0 VALID

18 rows selected.

SQL>


 

This saved me a lot of time, have a good day :)

 

Discover more about our expertise in Oracle.

The post RMAN 11g : How to Restore / Duplicate to a More Recent Patchset appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

DevOps: Applied Empathy

Mon, 2015-07-20 12:09

I enjoyed participating in a panel on DevOps culture by Electric Cloud last week. Our commendable hosts kept things light and productive despite the number of participants and breadth of topic.

It wouldn’t be a culture discussion if we had failed to review the motivations for that (DevOps) culture: namely the evolve-or-die progression of tech business in general and software systems of customer engagement in particular. So the logic goes, any non-trivial business is now (also) a software business – thus, being “good at software” (and rapidly deriving high quality, constantly improving, large-scale services from it) is a fundamental market success factor and must be(come) part of the corporate DNA.

I doubt the above is controversial, but the degree to which it feels true probably depends on the scale of opportunity in your sector(s) and the governing regulatory regime(s). Those factors have a big impact not only on the intensity of motivation, but the challenges and resistance to change that must be overcome in a successful program of transformation.

The discussion reminded me that empathy is important for more than just being nice. It’s also a great way to understand what motivates people and how to incorporate their success drivers into transformation efforts.

[Image: top of the world in a typical software engineering org]

Consider Katniss, for example – she has to deliver to Rex (and consumers like you and me) the “and sandwich”: Velocity (new features) and Operational Excellence, or we (consumers) will find a service that does both. She may at times prioritize Velocity over other initiatives, and the stress on Bill grows under this pressure. If, as an agent of transformational change, you propose methods of increasing Velocity to Bill, you are likely to face rejection – Bill is already drowning at the present pace.

If, on the other hand, one approaches Bill to explain that pervasive, intelligent automation strategies can give his team their weekends back, and make his team a proactive contributor and valued partner in growth of the business, one will likely find a different sort of audience.

All this means, to me, is that DevOps is a useful context for improving a complex sort of collaboration called a software product lifecycle. Understanding the motivations and needs of the players in an organization is a key enabler for planning and executing successful programs of change.

 

Discover more about our expertise in DevOps and the author Aaron Lee.

The post DevOps: Applied Empathy appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #432: A Carnival of the Vanities for DBAs

Fri, 2015-07-17 13:24

Yet again, this log buffer edition brings some rich blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • Installing Oracle XE, ORDS and Apex on CentOS
  • Major Growth is Expected in the DBaaS Space. Are Your Skills Ready?
  • How to Hide Actions in OBPM 12c Workspace
  • You can configure auto refresh for ADF BC cached LOV and this works out of the box, no special coding is needed.
  • Search and Replace in Oracle SQL Developer Data Modeller

SQL Server:

  • Polyglot Persistence is a fancy term meaning that when storing data, it is best to use multiple data storage technologies, chosen based upon the way data is being utilized by individual applications or components of a single application.
  • HOW TO: Recreate SQL Server Statistics In a Different Environment
  • New SQL Server Management Studio 2015/June – with Check for Updates!
  • Power BI General Availability Announced: July 24, 2015
  • Ensuring Columns Are Always Present In A Table Returned By Power Query

MySQL:

  • Using MySQL sandbox for upgrade testing
  • Bypassing SST in Percona XtraDB Cluster with binary logs
  • Un-obvious “Unknown column in ‘field list’” error
  • Bypassing SST in Percona XtraDB Cluster with incremental backups
  • innodb_fast_checksum=1 and upgrading to MySQL 5.6

 

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

 

The post Log Buffer #432: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Real Life DBA: Finding Errors in SQL Server

Fri, 2015-07-17 13:00

I tend to spend time on a few different forums, mostly the StackExchange network, and find folks asking questions about various errors seen in the error log of a SQL Server instance. Most DBAs are aware that the errors SQL Server provides can be vague at times, so it takes a bit of knowing where to look for more information. In some situations the event is long over before you can get to the instance to start your analysis, so you have to focus on the areas that contain historical information. A few of those places to check are:

  1. Various DMVs, depending on the error; for things like connectivity errors you can search the DMV sys.dm_os_ring_buffers.
  2. The system_health XEvent session, which captures things like deadlock reports; some of what you find in the ring buffer is written to this session as well.
  3. The default trace. This has been around since SQL Server 2005, but since Profiler and traces are on the deprecation list it is unknown how long it will stay around; it can help while it is here (see the query sketch after this list).
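
For that third option, here is a hedged sketch of a default trace query (EventClass 22 is the ErrorLog event; the TOP clause and column list are just my choices):

DECLARE @path nvarchar(260);

SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT TOP (50)
       t.StartTime,
       t.SPID,
       t.HostName,
       t.LoginName,
       t.DatabaseName,
       t.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass = 22      -- ErrorLog
ORDER BY t.StartTime DESC;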

In the field, we have our Avail Monitoring that customers can utilize for monitoring an instance. It will monitor the error log of a SQL Server instance, and we get a page when something is found that it does not know about. This is an error message I actually received today for a client:

[Screenshot: Avail alert page for error 9100]

That error message only tells me the SPID it occurred under, and we know someone else could be using that SPID by the time I get to the server. It does not provide the login or even the database it pertains to. So I went searching around, decided to check the default trace, and actually found this bit of information:

[Screenshots: default trace rows matching the SPID, showing the login and the statement being run]

You can see the SPID matches, and I was able to get the login that caused the error and what was being done. When you hit an issue with SQL Server that is vague or does not provide enough information, go back to the basics and recall what options you have in front of you.

 

Discover more about our expertise in SQL Server.

The post Real Life DBA: Finding Errors in SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #431: A Carnival of the Vanities for DBAs

Fri, 2015-07-10 08:46

This Log buffer edition covers Oracle, SQL Server and MySQL blog posts about new features, tips, tricks and best practices.

Oracle:

  • Traditionally, assigning specific processes to a certain set of CPUs has been done by using processor sets (and resource pools). This is quite useful, but it requires the hard partitioning of processors in the system. That means, we can’t restrict process A to run on CPUs 1,2,3 and process B to run on CPUs 3,4,5, because these partitions overlap.
  • Parallel_Degree_Limit, Parallel_Max_Degree, Maximum DOP? Confused?
  • JDeveloper 12c – ORA-01882: time zone region not found
  • Using a Parallel Gateway without a Merge in OBPM
  • Secure multi-threaded live migration for kernel zones

SQL Server:

  • How to Unlock Your Team’s Creative Power with SQL Prompt and a VCS
  • In-Memory OLTP – Common Workload Patterns and Migration Considerations
  • The Poster of the Plethora of PowerShell Pitfalls
  • Temporarily Change SQL Server Log Shipping Database to Read Only
  • Monitoring Availability Groups with Redgate SQL Monitor

MySQL:

  • Introducing MySQL Performance Analyzer
  • MySQL 5.7.8 – Now featuring super_read_only and disabled_storage_engines
  • Become a MySQL DBA – Webinar series: Which High Availability Solution?
  • How to obtain the MySQL version from an FRM file
  • MySQL Enterprise Audit: Parsing Audit Information From Log Files, Inserting Into MySQL Table

 

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #431: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 1

Thu, 2015-07-09 13:37
Introduction

This is the sixth and final post in the series on the Pillars of PowerShell, or at least part one of the final post. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS

PowerShell + SQL Server is just cool! You will see folks talk about the ability to perform a task against multiple servers at a time, automate implementing a configuration or database change, or just obtain a bit of consistency when doing certain processes. I tend to use it just because I can, and it is fun to see what I can do. There are some instances where I have used it for a specific purpose and it saved me time, but overall I just choose to use it. On average, there are going to be things you can do in PowerShell that could also be done in T-SQL; in those cases, use the tool that fits your needs.

Interacting with SQL Server PowerShell

There are three main ways to interact with SQL Server using PowerShell that I have seen:

  1. SQL Server PowerShell (SQLPS)
  2. SQL Server Server Management Object (SMO)
  3. Native .NET coding

I am not going to touch on the third option in this series because it is not something I use often enough to discuss. I will say it is not my first choice, but it does serve a purpose at times.

To provide enough information to introduce you to working with PowerShell and SQL Server, I broke this into two parts. In part one, we are going to look at SQL Server PowerShell (SQLPS) and using the SQL Server Provider (SQLSERVER:\). In part two we will go over SMO and what can be accomplished with it.

SQLPS, to me, offers quick access to the one-liner type tasks against SQL Server. Which option you go with is really just a preference, so if it works for you, use it. There are some situations where using the SQL Server Provider actually requires you to mix in SMO (e.g. creating a schema or database role). It also offers a few cmdlets that are added to (and improved upon) with each release of SQL Server.

Loading/Importing

The first thing to understand is how to get the product module into your PowerShell session. As with most products, some portion of the software has to exist on the machine you are working on, or the machine your script is going to be executed on. SQL Server PowerShell and SMO are installed by default if you install the SQL Server Management Tools (aka SSMS and such) for SQL Server 2008 and higher. I will only mention that they can also be found in the SQL Server Feature Pack if you need a more “standalone” type setup on a remote machine.

One thing you should get in the habit of doing in your scripts is verifying the things that can cause more errors than desired; one of those is dealing with modules. If the module is not loaded when the script runs, your script is just going to spit out a ton of red text. If the prerequisites are not there to begin with, there is no point in continuing. You can verify that a version of the SQLPS module is installed on your machine by running the following command:

Get-Module -ListAvailable -Name SQL*

If you are running SQL Server 2012 or 2014 you will see something like this:

[Screenshot: Get-Module output listing the SQLPS module]

This works in a similar fashion when you want to verify if the SQL Server 2008 snap-in is loaded:

[Screenshot: verifying the SQL Server 2008 snap-in]

I generally do not want to have to remember or type out these commands all the time when I am doing things on the fly, so I will add this bit of code to my PowerShell Profile:

Push-Location
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Stop'
Pop-Location

#Load SQL Server 2008 by uncommenting next line
#Add-PSSnapin *SQL* -ErrorAction 'Stop'

One cool thing most cmdlets in PowerShell support is the -ErrorAction parameter. There are a few different values you can use for this parameter, and you can find them by checking the help topic about_CommonParameters. If your script is going to be interactive or run manually, I would use -ErrorAction ‘Inquire’ instead; try it out on a machine that does not have the module installed to see what happens. Once you have the module or snap-in loaded you will be able to access the SQL Server PowerShell Provider.

One side note: there is actually a “sqlps.exe” utility that is easily accessible in most cases via the right-click menu in SSMS (e.g. right-click on the “Databases” node in Object Explorer). If you open this, you are thrust into the SQLPS provider at the “directory” of the node you opened from in SSMS. However convenient that may seem, it was added to the deprecation list with SQL Server 2012, so there’s not much point in talking about it. It has its own little quirks, so most folks steer clear of using it anymore.

Being Specific

The code I use in my profile will load the most current version of the module found on my system; at least it should, though it may not do what you expect every time. In some circumstances, when you are developing scripts on your own system, you may need to import only a specific version, especially if you are in a mixed-version SQL Server environment. You can load a specific version of the module by using Get-Module to find your version and just passing it to Import-Module.

Get-Module -ListAvailable -Name SQLPS | select name, path
#110 = SQL Server 2012, 120 = SQL Server 2014, 130 = SQL Server 2016
Push-Location
Get-Module -ListAvailable -Name SQLPS |
     where {$_.path -match "110"} | Import-Module
Pop-Location

# To show that it was indeed loaded
Get-Module -Name SQLPS | select name, path

#If you want to switch to another one, you need to remove it
Remove-Module SQLPS
Authentication

By default, when you browse the SQLPS provider (or most providers, actually), it utilizes the account running the PowerShell session: Windows Authentication. If you find yourself working with an instance that requires SQL Login authentication, don’t lose hope. You can connect to an instance via the SQL Server Provider with a SQL Login. There is an MSDN article that provides a complete function you can use to create a connection for this purpose. It does not show a version of the article for SQL Server 2008, but I tested this with SQL Server 2008 R2 and it worked fine.

[Screenshot: connecting to the provider with SQL Login authentication]

One important note I will make that you can learn from the function in that article: the password is secure and not stored or processed in plain text.
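
If you just want the shape of such a connection without pulling up the article, here is a minimal sketch using SMO directly (the instance name and login are hypothetical, and importing SQLPS beforehand supplies the SMO assemblies); the password is read as a SecureString, so it stays out of plain text:

$conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection 'MANATARMS\SQL12'
$conn.LoginSecure = $false                                  # SQL Login instead of Windows auth
$conn.Login = 'mysqllogin'
$conn.SecurePassword = Read-Host -Prompt 'Password' -AsSecureString
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $conn
$srv.Databases | Select-Object Name, RecoveryModel          # quick sanity check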

SQLPS Cmdlets

SQLPS, as noted previously, offers a handful of cmdlets for performing administrative tasks against SQL Server instances. The majority of the ones you find with SQL Server 2012, for example, revolve around Availability Groups (e.g. disabling, creating, removing, etc.). The other notable mentions are Backup-SqlDatabase and Restore-SqlDatabase; these do exactly what you think, but with a few limitations. The backup cmdlet can actually only perform a FULL, LOG, or FILE level backup (not sure why they did not offer support for a differential backup). Anyway, they can be useful for automating backups of production databases to “refresh” development or testing environments, as the backup cmdlet does support doing a copy-only backup. Also, if you deal with Express Edition, you can use this cmdlet and a scheduled task to back up those databases.

Update 7/13/2015: One correction I should have checked previously: the Backup cmdlet for 2012 and above does include an “-Incremental” parameter for performing differential backups.

The other main cmdlet you get with SQLPS is what most people consider the replacement for the sqlcmd utility: Invoke-Sqlcmd. The main thing you get from the cmdlet is smarter output, in the sense that PowerShell will more appropriately detect the data types coming out, compared to the utility that treated everything as a string.
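
Hedged examples of both (the instance, database, and paths are hypothetical, and SQLPS must be loaded):

# copy-only full backup, e.g. to seed a dev refresh without disturbing the backup chain
Backup-SqlDatabase -ServerInstance 'MANATARMS\SQL12' -Database 'MyAppDB' `
    -BackupFile 'G:\MSSQL\Backup\MyAppDB_copy.bak' -CopyOnly

# Invoke-Sqlcmd returns typed rows, not the all-string output of sqlcmd.exe
Invoke-Sqlcmd -ServerInstance 'MANATARMS\SQL12' `
    -Query 'SELECT name, recovery_model_desc FROM sys.databases;'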

SQLPS One-liners

Working with the SQL Server Provider, you traverse it as you would a drive on your computer, so you can use the cmdlet Get-ChildItem, or do as most folks and use the alias dir. The main thing to understand is the first few “directories” needed to access a given SQL Server instance. There are actually multiple root directories under the provider, which you can see just by doing “dir SQLSERVER:\”. You can tell by the description what each one is for; the one we are interested in is the “Database Engine”:

[Screenshot: root directories under the SQLSERVER:\ provider]

Once you get beyond the root directory, it can require a bit of patience, as the provider is slow to respond or return information. If you want to dig into an instance of SQL Server, you just need to understand the structure of the provider; it generally follows this syntax: <Provider>:\<root>\<hostname>\<instance name>\. The instance name will be “DEFAULT” if you are dealing with a SQL Server default instance. If you have a named instance, you just add the name of the instance (minus the server name).
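
For instance (using the same hypothetical host name as the job examples below):

# default instance vs. named instance paths
dir SQLSERVER:\SQL\MANATARMS\DEFAULT\Databases
dir SQLSERVER:\SQL\MANATARMS\SQL12\Databases | select Name, RecoveryModel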

To provide a real-world example: Avail Monitoring is the custom tool Pythian developed to monitor the SQL Server environments of our customers (or Oracle or MySQL… you get the point). One of the many features it includes is monitoring for failed jobs. We customize the monitoring around the customer’s requirements, so some job failures page us immediately when they occur, while others may allow a few extra failures before we are notified to investigate. This is all done without any intervention required by the customer, and I know from the notification which job failed. Right off, you are going to want to check the job history for that job to see what information shows up, and I can use the SQLPS provider to do just that:

# To see the job history
dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.EnumHistory()} | select message, rundate -first 5 | format-list
[Screenshot: EnumHistory() output for the failed job]
# if I needed to start the job again
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Start()}

You might think that is a good bit of typing, but consider how long it takes to do the same thing through SSMS… I can type much faster than I can click with a mouse.

Anyway, to close things out, I thought I would show the one cool thing SQLPS gets used for the most: scripting out stuff. Just about every “directory” you go into with the provider offers a method named “Script()”.

$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Script()}

I get the T-SQL equivalent of the job, just like SSMS provides; this can be used to document your jobs or when refreshing a development server.

Summary

I hope the information above gives you an idea of what SQLPS can do; one-liners are always fun to discover. The SQL Server Provider is not the most used tool out there by DBAs, but it can be a life-saver at times. In the next post we will dig into using SMO and the awesome power it offers.

 

Discover more about our expertise in SQL Server.

The post Pillars of PowerShell: SQL Server – Part 1 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reading System Logs on SQL Server

Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that backups were failing for a particular database. When I ran the backup manually to a different folder it completed successfully, but not in the folder it was directed to when the backup jobs were originally configured.  This made me suspect hard disk corruption. In the end, I fixed the backup issue in the interim so that I would not get paged, and to lower the risk of having no backup in place.

Upon reviewing the Windows event logs, it was revealed that I was right to suspect a faulty hard drive. The log reported some messages related to SCSI codes, especially SCSI Sense Key 3, which means the device had a medium error. Eventually, the hard drive was replaced by the client and the database was moved to another drive.  In the past month, I have had about three cases where I observed serious storage-related messages reported as information. I have included one case here for your reference, which may help in case you see such things in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

  • Error: 18210, Severity: 16, State: 1
  • BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
  • Msg 3271, Level 16, State 1, Line 1
  • A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
  • Msg 3013, Level 16, State 1, Line 1
  • BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually, I found that it ran fine until a specific point (i.e. 55%) before failing again with the above error. I then decided to run DBCC CHECKDB, which reported that a particular table had a consistency error on a particular page. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated, since the backup was also failing when it reached 55%. A select statement also failed to complete when I queried object 645577338.  The only option I was left with was to create a new table and drop the original. After this was done, the full backup succeeded, and as soon as it completed we moved the database to another drive.

I was still curious about these errors, so I started looking at the Windows error logs (the System folder), filtering to show only errors and warnings.  This did not show me anything that invited further reading, so I removed the filter and carefully reviewed the logs.  To my surprise, the logs did show entries for a bad sector, but they were in the Information section of the Windows Event Viewer System folder.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.

There could be a different error, warning or information message printed on your server, depending on what the issue is. There is still much more to be said to explain these codes and descriptions.

You may have noticed that I have referred to this as CASE 1, which means I will blog about one or two more cases in the future. Stay tuned!

Photo credit: Hard Disk KO via photopin (license)

Learn more about our expertise in SQL Server.

The post Reading System Logs on SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pythian Introduces Cassandra Consulting & Operational Support

Tue, 2015-07-07 06:00

Over the past ten years, the technology industry has begun to adopt a whole new way of thinking about the database and data storage. This is largely the result of the fast moving, high-volume and non-traditional types of data that are being generated to support both internal business processes and real-time web applications. Using a NoSQL database enables businesses to quickly and cost-effectively process large amounts of data, whether the data is hosted in the enterprise or on the cloud.

Pythian’s new Cassandra services address the needs of customers deploying Apache Cassandra. Pythian offers highly knowledgeable and experienced Cassandra experts who can guide you to success with your Cassandra deployment by filling critical skills and capacity gaps, getting your Cassandra instance up and running quickly, and ensuring that it performs optimally as you move forward.

If you’re thinking of implementing Cassandra, or already have, watch our webinar, Getting Started With Cassandra, which covers key topics for starting out, such as when you should use Cassandra, potential challenges, real world Cassandra applications and benefits, and more.

Learn more about Pythian’s Cassandra Services.

The post Pythian Introduces Cassandra Consulting & Operational Support appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

So, What Kind of Person Are You?

Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than them.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridlLink are Oracle’s suggested approach to deploy Java applications that use Oracle Real Applications Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran in to the job of ever changing the endpoints to the hosts of the particular environment; development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture,  first MySQL MariaDB toolkit to manage switchover and failover in a correct way was introduce by Yoshinori Matsunobu into MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Prepare for the Leap Second

Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.

 

Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small time difference between the atomic clock and astronomical time (which is based on the rotation of the Earth), because the Earth's rotation is slowing down.

To synchronize these times, one second will be added to the atomic clock – a leap second – so that the two clocks synchronize. This will happen at midnight UTC between June 30th and July 1st (not local time; the same as GMT time). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 26 such leap seconds have been inserted. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight saving time, which shifts the timezone information and does not alter the underlying UTC time on which servers work, a leap second is an actual change in the UTC time value. Usually UTC time is continuous and predictable, but the leap second breaks this continuity, and that requires it to be addressed.

 

What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. There are a series of patches and workarounds available, depending upon the needs of the components running on the Linux server.
  • HP-UX servers have NTP patches released in Q2 2015.

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain time-stamps from the server OS, so those running on Linux have potential issues. For most, there are no additional corrections necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms may appear up to a day before and persist up to a day after the leap second event itself, as server NTP service updates are distributed.

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.

 

The post Prepare for the Leap Second appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #429: A Carnival of the Vanities for DBAs

Fri, 2015-06-26 06:47

This Log Buffer Edition gathers a wide sample of blogs and then purifies the best ones from Oracle, SQL Server and MySQL.

Oracle:

  • If you take a look at the “alter user” command in the old 9i documentation, you’ll see this: DEFAULT ROLE Clause.
  • There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after a large update that was rolled back.”
  • 12c Parallel Execution New Features: 1 SLAVE distribution
  • Index Tree Dumps in Oracle 12c Database (New Age)
  • Is it possible to cause tables to be stale with only tiny amounts of change?

SQL Server:

  • Making Data Analytics Simpler: SQL Server and R
  • Challenges with integrating MySQL data in an ETL regime and the amazing FMTONLY trick!
  • Azure Stream Analytics aims to extract knowledge structures from continuous ordered streams of data by real-time analysis.
  • Grant User Access to All SQL Server Databases
  • SQL SERVER – How Do We Find Deadlocks?

MySQL:

  • Efficient Use of Indexes in MySQL
  • SHOW ENGINE INNODB MUTEX is back!
  • Business-critical MySQL with DR in vCloud Air
  • Become a MySQL DBA blog series – Common operations – Schema Changes.
  • Building a Better CREATE USER Command

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #429: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Log Buffer #428: A Carnival of the Vanities for DBAs

Mon, 2015-06-22 11:45

The Log Buffer Edition once again is sparkling with some gems, hand-picked from Oracle, SQL Server and MySQL.

Oracle:

  • Oracle GoldenGate 12.1.2.1.1  is now certified with Unity 14.10.  With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers.
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.

MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy or a connection pool combined with a load-balancer can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Log Buffer #427: A Carnival of the Vanities for DBAs

Fri, 2015-06-12 06:54

This Log Buffer Edition covers various blog posts from the last week regarding Oracle, SQL Server and MySQL.

Oracle:

  • Merging Overlapping Date Ranges with MATCH_RECOGNIZE
  • The latest version of Enterprise Manager, EM 12.1.0.5, has been announced!
  • Kdump is the Linux kernel crash-dump mechanism. In the event of a server crash, Kdump creates a memory image (vmcore) that can help in determining the cause of the crash.
  • APEX Connect Presentation and Download of the sample application
  • One of my favorite feature of ZFS is the I/O aggregation done in the final stage of issuing I/Os to devices.

SQL Server:

  • Reusing T-SQL Code is catching on.
  • SELECT INTO vs INSERT INTO on Columnstore
  • SQL Monitor Custom Metric: WriteLog wait time
  • Query Performance Tuning – A Methodical Approach
  • How to Get SQL Server Dates and Times Horribly Wrong

MySQL:

  • Hash-based workarounds for MySQL unique constraint limitations
  • Replicate MySQL to Amazon Redshift with Tungsten: The good, the bad & the ugly
  • Indexing MySQL JSON Data
  • Improving the Performance of MySQL on Windows
  • Auditing MySQL with McAfee and MongoDB

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Pillars of PowerShell: Windows OS

Wed, 2015-06-10 06:42
Introduction

This is the fifth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling

The Windows Operating System (OS) is something a DBA should know and be familiar with since SQL Server has to run on top of it. I would say that on average most DBAs interact with the OS for troubleshooting purposes. In this post I just want to point out a few snippets of how PowerShell can help you do this type of work.

Services Console Manager

In the SQL Server 2000 days, DBAs became very familiar with typing “services.msc” into the run prompt, then scrolling through the list of services to find out what state a service is in, or what login a particular service is configured to run under. Now, if you are performing administrative tasks against SQL Server services, it is always advised that you use SQL Server Configuration Manager. However, if you are just checking the status of a service or restarting it, PowerShell can help out.

Get-Service

This cmdlet has a few naming discrepancies that help to understand up front when you start using PowerShell instead of the Services Console. In the Services Console you find a service by its “Name”; this is the “DisplayName” in the Get-Service cmdlet. The “Name” in Get-Service is actually the “Service Name” in the Services Console — do you follow? OK. So with SQL Server, the DisplayName for a default instance would be “SQL Server (MSSQLSERVER)”, and the “Name” would be “mssqlserver”. The cmdlet lets you filter by either field, so the two commands below return the same thing:

Get-Service 'SQL Server (MSSQLSERVER)'
Get-Service mssqlserver

You can obviously see which one is easier to type. With SQL Server, a default instance’s name would be queried using “mssqlserver”, and a named instance would be “mssql$myinstance”. So if you wanted to find all of the instances running on a server, you could use this one-liner:

Get-Service mssql*

Restart-Service

This does exactly what you think it will, so you have to be careful. You can call this cmdlet by itself and restart a service by referencing the "Name", just as you did with Get-Service. I want to show you how the pipeline can work for you in this situation. You will find that some cmdlets in PowerShell have a few "special" features; the service cmdlets are among them, accepting an array as the input object either through the parameter or via the pipeline.
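As a quick sketch of what that means in practice (the instance names here are illustrative), these two forms do the same work, and "-WhatIf" keeps them from actually doing it:

# Array passed to the parameter
Restart-Service -Name 'MSSQL$SQL12','SQLAgent$SQL12' -WhatIf
# Same services sent down the pipeline
Get-Service -Name 'MSSQL$SQL12','SQLAgent$SQL12' | Restart-Service -WhatIf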

So, let's use the example that I have a server with multiple instances of SQL Server, plus additional components like SSRS and SSIS. I only want to work with the named instance "SQL12". I can get the status of all of its component services with this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110'

Now if I need to do a controlled restart of all of those services I can just do this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110' |
Restart-Service -Force -WhatIf

The added "-WhatIf" will not actually perform the operation; it tells you what would end up happening. Once I remove it, the restart actually occurs. All of this looks something like this in the console:

(screenshot: Get-Service piped to Restart-Service)

Win32_Service

Some of you may recognize this one as a WMI class, and it is. Using WMI offers you a bit more information than the Get-Service cmdlet. You can see that by just running this code:

Get-Service mssqlserver
Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'}

The two commands above equate to the same referenced service but return slightly different bits of information by default:

(screenshot: default output of Get-Service vs. Win32_Service)

However, if you run the command below, you will see how gathering service info with WMI offers much more potential:

Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'} | select *
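Among the extra properties, StartName holds the service account. A minimal sketch pulling just a few useful fields, rather than everything:

Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'} |
select Name, State, StartMode, StartName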

Get-Service will not actually give you the service account. So here is one function I use often (saved in my profile):

function Get-SQLServiceStatus ([string[]]$server)
{
    foreach ($s in $server) {
        Get-WmiObject win32_service -ComputerName $s |
            where {$_.DisplayName -match "SQL "} |
            select @{Label="ServerName";Expression={$s}},
                DisplayName, Name, State, Status, StartMode, StartName
    }
}

One specific thing I did in this function is declare the type of the parameter you pass in. Using "[string[]]" means the parameter accepts an array of values. Declaring it is not enough, though; you also have to ensure the function is written in a manner that can process the array, which I did simply by wrapping the commands in a "foreach" loop. So an example use of this against a single server would be:
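# 'MyServer' is a placeholder; substitute a real computer name
Get-SQLServiceStatus -server 'MyServer'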
If you wanted to run this against multiple servers it would go something like this:

Get-SQLServiceStatus -server 'MyServer','MyServer2','MyServer3' | Out-GridView
# another option
$serverList = 'MyServer','MyServer2','MyServer3'
Get-SQLServiceStatus -server $serverList | Out-GridView
Disk Manager

Every DBA should be very familiar with this management console and can probably get to it blindfolded. You might use this or "My Computer" when you need to see how much free space there is on a drive. If you happen to be working in an environment that only has Windows Server 2012 and Windows 8 or higher, I wish I was there with you: PowerShell 4.0 and higher offers storage cmdlets that let you get information about your disks and volumes much more easily and cleanly. They actually use CIM (Common Information Model), which is what WMI is built upon. I read somewhere that basically "WMI is just Microsoft's way of implementing CIM". Microsoft is obviously going back to the standard, as it has done in other areas. CIM is worth learning more about, and it even allows you to connect to a PowerShell 2.0 machine and get the same information.

Anyway, back to the task at hand. If you are working on PowerShell 3.0 or lower, you can use Get-WmiObject and win32_volume to get similar information to what the storage cmdlet Get-Volume returns in 4.0:

Get-Volume
Get-WmiObject win32_volume | select DriveLetter, Label, FileSystem,
@{Label="SizeRemaining";Expression={"{0:N2}" -f($_.FreeSpace/1GB)}},
@{Label="Size";Expression={"{0:N2}" -f($_.Capacity/1GB)}} | Format-Table
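Since the storage cmdlets are built on CIM, a rough equivalent using Get-CimInstance (a sketch, assuming PowerShell 3.0 or later so the CIM cmdlets are available) would be:

# CIM version of the same volume query
Get-CimInstance win32_volume | select DriveLetter, Label, FileSystem,
@{Label="SizeRemaining";Expression={"{0:N2}" -f($_.FreeSpace/1GB)}},
@{Label="Size";Expression={"{0:N2}" -f($_.Capacity/1GB)}} | Format-Table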
(screenshot: win32_volume query output)

Windows Event Viewer

Almost everyone is familiar with, and knows their way around, the Windows Event Viewer. I actually left this one for last for a reason: I want to walk you through an example that I think will help "put it all together" on what PowerShell can do for you. Our scenario deals with a server that restarted unexpectedly, at least unexpectedly to me. There are times I will get paged by our Avail Monitoring product for a customer's site and need to find out who restarted the server, or why. The most common place to go for this is the Event Log.

Show-EventLog

If you just want to go through Event Viewer and manually find events, and it is a remote server, I find this to be the quickest method:

Show-EventLog -ComputerName Server1

This command will open Event Viewer and go through the process of connecting you to "Server1". No more right-clicking and selecting "connect to another computer"!

Get-EventLog

I prefer to just dig into searching for events, and this is where Get-EventLog comes in handy. You can call this cmdlet and provide any of the following (a combined example follows the list):

  1. Specific Log to look in (system, application, or security most commonly)
  2. Specify a time range
  3. Look just for specific entry type (error, information, warning, etc.)
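For example, a sketch combining all three (the one-day window is arbitrary):

# Errors from the System log in the last 24 hours
Get-EventLog -LogName System -EntryType Error -After (Get-Date).AddDays(-1)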

In Windows Server 2003, Microsoft added a group policy, "Shutdown Event Tracker", which, if enabled, writes particular events to the System Log when a server restarts, whether planned or unplanned. After an unplanned restart, the first user who logs into the server gets a prompt about the unexpected shutdown. For a planned restart, the user is prompted for a reason, and it has to be filled in before the restart will occur. What you can do with this cmdlet is search for those messages in the System Log.

To find the planned you would use:

Get-EventLog -LogName System -Message "*restart*" -ComputerName Server1 |
select * -First 1

Then to find the unplanned simply change “*restart*” to “*shutdown*”:
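# Same query as above, with the message filter swapped
Get-EventLog -LogName System -Message "*shutdown*" -ComputerName Server1 |
select * -First 1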


(screenshot: the unplanned shutdown event returned by Get-EventLog)

In this instance I find that the SSIS and SSRS services failed to start back up. I found this by checking the status of the SQL Server services with my custom function, Get-SQLServiceStatus:

(screenshot: Get-SQLServiceStatus output showing the stopped services)

To search for events after the shutdown, I need to find the first event written to the Event Log when a server starts up, which comes from the EventLog source. I can then pass that event's time stamp into the Get-EventLog cmdlet as a starting point and pull up the error events around it. I am going to do that with this bit of code:

$t = Get-EventLog -LogName System -Source EventLog -Message "*shutdown*" | select * -First 1
Get-EventLog -LogName System -Before $t.TimeGenerated -Newest 5 -EntryType Error |
select TimeGenerated, Source, Message | Format-Table -Wrap
(screenshot: the error events returned by Get-EventLog)

Summary

I hope you found this post useful and that it gets you excited about digging deeper into PowerShell. In the next post I am going to close up the series by digging into SQL Server and a few areas where PowerShell can help.

 

Learn more about our expertise in SQL Server.

Categories: DBA Blogs

Log Buffer #426: A Carnival of the Vanities for DBAs

Fri, 2015-06-05 08:57

This Log Buffer edition transcends the ordinary and loops in a few very good blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Variable selection also known as feature or attribute selection is an important technique for data mining and predictive analytics.
  • The Oracle Utilities SDK V4.3.0.0.2 has been released and is available from My Oracle Support for download.
  • This article provides a high level list of the new features that exist in HFM 11.1.2.4 and details the changes/differences between HFM 11.1.2.4 and previous releases.
  • In recent years, we’ve seen increasing interest from small-to-mid-sized carriers in transforming their policy administration systems (PAS).
  • Got a question on how easy it is to use ORDS to perform insert | update | delete on a table?

SQL Server:

  • The Importance of Database Indexing
  • Stairway to SQL Server Security Level 9: Transparent Data Encryption
  • Query Folding in Power Query to Improve Performance
  • Writing Better T-SQL: Top-Down Design May Not be the Best Choice
  • Cybercrime – the Dark Edges of the Internet

MySQL:

  • One of the challenges in storage engine design is random I/O during a write operation.
  • Fast Galera Cluster Deployments in the Cloud Using Juju
  • High availability using MySQL in the cloud
  • Become a DBA blog series – Monitoring and Trending
  • MySQL as an Oracle DBA

Learn more about Pythian's expertise in Oracle, SQL Server, and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Permissions in Redshift Administration

Thu, 2015-06-04 12:01

As with any database administration strategy, management of Redshift requires setting and revoking permissions.  When first setting up permissions, it is ideal to establish groups as the basic unit of user access.  This keeps us from having to manage hundreds of individual permissions as users enter and leave organizations.  If you haven't done this early on and are managing permissions at the user level, you can leverage some of the queries below to make the migration to group-based access easy.
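As a sketch of what that setup looks like in Redshift (the group and user names are illustrative):

create group report_group with user biuser;
alter group report_group add user another_analyst;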

Another advantage of managing by groups: in some data warehouses we want to keep users from running reports during ETL runs, in order to prevent contention or reporting on incomplete data.  All we have to do is run this query at the start of the jobs:

revoke usage on schema myreportschema from group report_group;

When the jobs are finished, we then grant usage again:

grant usage on schema myreportschema to group report_group;

It is easy to see users and group assignments via (note that a user can belong to more than one group):

select usesysid, usename, usesuper, nvl(groname,'default')
from pg_user u
left join pg_group g on ','||array_to_string(grolist,',')||',' like '%,'||cast(usesysid as varchar(10))||',%'
order by 3,2;

Grants in Redshift are ultimately at the object level.  That is to say, while we can grant access to all existing objects within a schema, those grants are stored object by object.  That is why the command below works for all existing tables, but tables added after it has been run do not automatically have the select privilege granted:

grant select on all tables in schema myreportschema to group report_group;

While this is good for security granularity, it can be administratively painful.  Let us examine some strategies and tools for addressing this issue.

Redshift has a useful view, information_schema.table_privileges, that lists tables and their permissions for both users and groups.  Note that this also includes views, despite the name.  AWS also provides some useful views in their Redshift utilities package on GitHub, most notably v_get_obj_priv_by_user, which essentially flattens out information_schema.table_privileges and makes it much easier to read:

schemaname      objectname     usename  sel   ins    upd    del    ref
myreportschema  myreporttable  biuser   TRUE  FALSE  FALSE  FALSE  FALSE

However, note that this view does not report on groups and the HAS_TABLE_PRIVILEGE function that the view uses has no equivalent for groups, so to examine group privileges we can parse the ACL:

select
namespace as schemaname , item as object, pu.groname as groupname
, decode(charindex('r',split_part(split_part(array_to_string(relacl, '|'),pu.groname,2 ) ,'/',1)),0,0,1)  as select
, decode(charindex('w',split_part(split_part(array_to_string(relacl, '|'),pu.groname,2 ) ,'/',1)),0,0,1)  as update
, decode(charindex('a',split_part(split_part(array_to_string(relacl, '|'),pu.groname,2 ) ,'/',1)),0,0,1)  as insert
, decode(charindex('d',split_part(split_part(array_to_string(relacl, '|'),pu.groname,2 ) ,'/',1)),0,0,1)  as delete
from
(select
use.usename as subject,
nsp.nspname as namespace,
c.relname as item,
c.relkind as type,
use2.usename as owner,
c.relacl
from
pg_user use
cross join pg_class c
left join pg_namespace nsp on (c.relnamespace = nsp.oid)
left join pg_user use2 on (c.relowner = use2.usesysid)
where c.relowner = use.usesysid
and  nsp.nspname not in ('pg_catalog', 'pg_toast', 'information_schema')
)
join pg_group pu on array_to_string(relacl, '|') like '%'||pu.groname||'%'

One of the biggest challenges is to fill in missing grants.  We can do this by modifying the above query.  Here’s an example where we create the grant statements for all missing select grants for the report_group:

select 'grant select on '||namespace||'.'||item||' to group report_group;'
from
(select
use.usename as subject,
nsp.nspname as namespace,
c.relname as item,
c.relkind as type,
use2.usename as owner,
c.relacl
from
pg_user use
cross join pg_class c
left join pg_namespace nsp on (c.relnamespace = nsp.oid)
left join pg_user use2 on (c.relowner = use2.usesysid)
where
c.relowner = use.usesysid
and  nsp.nspname ='myreportschema'
and ((array_to_string(relacl, '|') not like '%report_group%' or relacl is null)
or
-- note the 'r' for the select privilege
(array_to_string(relacl, '|') like '%report_group%' and charindex('r', split_part( split_part( array_to_string( relacl, '|' ), 'report_group', 2 ) , '/', 1 ) ) = 0 )
)
and c.relkind   <> 'i'  -- is not an index
);

 

Learn more about Pythian’s expertise in Database Managed Services.

Categories: DBA Blogs