Pythian Group

Official Pythian Blog - Love Your Data

Log Buffer #419: A Carnival of the Vanities for DBAs

Fri, 2015-04-17 10:51

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from around the world.

Oracle:

  • Why the Internet of Things should matter to you
  • Modifying Sales Behavior Using Oracle SPM – Written by Tyrice Johnson
  • SQLcl: Run a Query Over and Over, Refresh the Screen
  • Data Integration Tips: ODI 12.1.3 – Convert to Flow
  • JRE 1.8.0_45 Certified with Oracle E-Business Suite

SQL Server:

  • What’s this, a conditional WHERE clause that doesn’t use dynamic SQL?
  • The job of a DBA requires a fusion of skill and knowledge. To acquire this requires a craftsman mindset. Craftsmen find that the better they get at the work, the more enjoyable the work gets, and the more successful they become.
  • Using SQL to perform cluster analysis to gain insight into data with unknown groups
  • There are times when you don’t want to return a complete set of records. When you have this kind of requirement to select only the TOP X number of items, Transact SQL (TSQL) has the TOP clause to meet your needs.
  • Spatial Data in SQL Server has special indexing because it has to perform specialised functions.

MySQL:

  • Profiling MySQL queries from Performance Schema
  • How to Easily Identify Tables With Temporal Types in Old Format!
  • The Perfect Server – CentOS 7.1 with Apache2, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3
  • Database Security – How to fully SSL-encrypt MySQL Galera Cluster and ClusterControl
  • MDX: retrieving the entire hierarchy path with Ancestors()

Categories: DBA Blogs

Ever Wondered How Pythian is Kind of Like a Fire Truck?

Tue, 2015-04-14 06:10

 

I have.

Coming from the world of selling fire trucks, I’m used to selling necessary solutions to customers in need. The stakes are high. If the truck doesn’t perform, the best-case scenario is a false alarm. The worst-case scenario is that someone, or many people, die.

Let me tell you a bit about fire trucks.

A lot of people think that a fire truck is a fire truck. That there is some factory where fire trucks are made, carbon copies of one another, varying only in what they carry – water, a pump, a ladder. That’s not the case. Every truck is custom engineered, designed, and manufactured from scratch. Things can go wrong. In a world where response time is everything, you don’t want something to go wrong. Not with the fire truck. Not when everything else is going wrong. Not when someone is trapped in their vehicle. Not when a house is burning down.

For the past five years I have been selling disaster management systems. There has been a clear, immediate, pressing need from my customers. I loved the urgency, I fed off that energy, helping people in charge of saving lives come up with solutions that help them do just that. When first walking into Pythian, I didn’t understand the importance of data, I didn’t comprehend the stakes. But they are present and the analogy can be made.

Pythian’s services are like a fire truck.

Data is like your house, your car, your life. When your business is dependent on your data and your data fails, your business fails. Data failures are serious. Downtime causes huge revenue losses as well as loss of trust and reputation. Identity theft, loss of security, these disasters are pressing threats in our digitized society.

Pythian’s FIT-ACER program is like your Fire Marshall.

We don’t just prepare for disasters, we help prevent them. Modeled after the Mayo Clinic’s patient checklist, Pythian’s FIT-ACER human reliability check acknowledges that no matter how intelligent our DBAs are (http://www.pythian.com/experts/) they can still make mistakes:

FIT-ACER: Pythian Human Reliability Checklist

F: Focus (SLOW DOWN! Are you ready?)
A: Assess the command (SPEND TIME HERE!)
I: Identify server/DB name, time, authorization
C: Check the server / database name again
T: Type the command (do not hit enter yet)
E: Execute the command
R: Review and document the results

We don’t just hire the best to do the best work, we hire the best, make sure they’re at their best, check their best, and apply their best. Every time we interact with your data we do so at a high level to improve your system, to prevent disaster.  And we answer our phones when disaster hits.

The average response time for a fire crew in Ontario is 6 minutes. The average response time for Pythian is under 4 minutes.

Take it from someone who knows disaster,

Pythian’s the best fire truck around.

Categories: DBA Blogs

Community dinner @ Pedro’s

Mon, 2015-04-13 08:36

Folks, as usual Pythian is organizing the community dinner. After many years of food, (responsible) drinking, and photos, this event has become an important moment for all of us to get to know each other better, discuss, and have fun.

This year is also the 20th anniversary of MySQL, so … YEAAAH, let’s celebrate with more food, fun, and responsible drinking.

If you haven’t done it yet … register yourself here: https://www.eventbrite.com/e/the-pythian-mysql-community-pay-your-own-way-dinner-tickets-15692805604

Info about the event:

When: Tuesday April 14, 2015 – 7:00 PM at Pedro’s (You are welcome to show up later, too!)
Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

 

I know, I know … we are the kind of people who decide where to go at the last minute, and every year we do the same, but if you could register, that would help us organize it better … and c’mon, the dinner is on Tuesday … so we are almost there!!!

 

Anyhow, hope to see all of you there, all of you!

Some references: Menu, Eventbrite, Pedro’s.

Categories: DBA Blogs

Technology for the Non-Technical

Mon, 2015-04-13 08:33

I am potentially one of the least technical people in my generation. I’m 30 and I am afraid of my cellphone, my laptop, Netflix, the microwave…. Okay, afraid is maybe a strong word, but baffled by them at the very least.

In high school, while my classmates wrote most of their papers and assignments on the computer, I insisted on writing everything out by hand and only typed it out afterwards if absolutely required. It wasn’t that I had issues with typing – my mom who worked as an administrator for many years made sure that I learned to type from a very young age and I type quickly with a reasonable amount of accuracy. I just felt that writing by hand kept me more “connected” to the words I penned. Simply, my name is Sarah and I am a Luddite.

After high school I studied journalism for a couple of years and then entered the workforce into a number of different jobs, such as in sales and marketing and it became necessary for me to “engage” with technology a little more heavily. Typing articles and assignments slowly became second nature but grocery lists, thank you notes, birthday cards all continued to be written by hand.

For the last few years I’ve been working for technology and IT organizations, and for the last 14 months I’ve been working with Pythian, a leading IT services provider specializing in data infrastructure management. That was a big leap for me. Not only was I required to use technology constantly in my day-to-day (Smartphone, CRM system, soft phone, multiple email interfaces ACK!), but I also needed to do a lot more than dip my toes into some fairly intense technical knowledge to gain an understanding of our client base and what solutions would be most appropriate for the people I speak to every day. These people are Chief Information Officers, Chief Technology Officers, and Vice Presidents of Information Technology for companies that are incredibly data-dependent. The quality and security of their data management directly affects their revenue and it’s critical that it is handled with a great amount of expertise and attention to detail. Kind of intimidating.

I have spent the last year wrapping myself in terms like NoSQL, non-relational database, Hadoop, MongoDB, SQL Server and Oracle. Do I have a perfect understanding of the benefits and drawbacks of each of these yet? No. What I do have is a great network of technical geniuses who work with me and who have spent their careers becoming experts in their respective technologies. I know who the best resources are and how to connect with them to get the best answers and solutions. I’m very lucky to work at a company that is incredibly transparent – questions are always welcomed and answered. I sit sandwiched between the offices of the Chief Revenue Officer and the CEO and Founder of our organization and while both are incredibly busy people, they are also happy to answer questions and share their insights and energy with anyone here.

All of our technical resources are just an instant message away and can often answer my questions in a few concise lines. So, while I am still monstrously uncomfortable with tasks like defragging (sounds like organized Fraggle removal to me) my computer or resetting my smartphone when it acts up, I am coming along slowly, in baby steps – an IT late-bloomer you could say – and it’s all much less painful than I ever feared it would be.

Categories: DBA Blogs

My thoughts on the Resilience of Cassandra

Mon, 2015-04-13 06:32

This blog post is part 1 of a 2-part series. It will be different from my previous blogs, as this is more about some decisions you can make with Cassandra regarding the resilience of your system. I will go deeper into this topic at the upcoming DataStax Days in London (https://cassandradaylondon2015.sched.org/); this is more of an introduction!

TL;DR: Cassandra is tough!

Cassandra presents itself as follows: “Cassandra delivers continuous availability, linear scalability, and operational simplicity across many commodity servers with no single point of failure, along with a powerful data model designed for maximum flexibility and fast response times.” (http://docs.datastax.com/en/cassandra/2.0/cassandra/gettingStartedCassandraIntro.html). In a production system, having your persistence layer tolerant to failure is a big thing. Even more so when you can (easily) make it resilient to the failure of entire locations through geographic replication.

As with any production system, you need to plan for failure. Should we blindly trust Cassandra’s resilience and skip the plan because “Cassandra can handle it”? Reading the documentation, some may think that having several data centers and a high enough replication factor covers us. In part this is true. Cassandra will handle servers going down, even a full DC (or several!) going down. But you should still prepare for chaos! Failure will increase pressure on your remaining servers, latency will increase, and so on. And when things come back up, will everything just work? Getting all the data back in sync, are you ready for that? Did you forget about gc_grace_seconds? There are lots of variables and small details that can be forgotten if you don’t plan ahead. And in the middle of a problem, it will not help to have forgotten those details!

My experience tells me that you must take Cassandra failures seriously, and plan for them! Having a plan B is never a bad thing, and even a plan C. Also, make sure those plans work! So for this short introduction I will leave a couple of recommendations:

  • Test your system against Cassandra delivering a bad service (timeouts, high latency, etc).
  • Set a “bare minimum” for your system to work (how low can we go on consistency, for example).
  • Test not only your system going down, but also prepare for it coming back up!
  • Keep calm! Cassandra will help you!

Overall, Cassandra is a tough and robust system. I’ve had major problems with network, storage, Cassandra itself, etc. And in the end Cassandra not only survived, it gave me no downtime. But every problem I had increased my knowledge and awareness of what I could expect. This led to planning for major problems (which did happen), and that planning, combined with the natural resilience of Cassandra, got me through those events without downtime.

Feel free to comment/discuss in the comment section below! Juicy details will be left for London!

Categories: DBA Blogs

Licensing Oracle in a public cloud: the CPU calculation impact

Fri, 2015-04-10 09:18

First of all a disclaimer: I don’t work for Oracle nor do I speak for them. I believe this information to be correct, but for licensing questions, Oracle themselves have the final word.

With that out of the way, followers of this blog may have seen some of the results from my testing of actual CPU capacity with public clouds like Amazon Web Services, Microsoft Azure, and Google Compute Engine. In each of these cases, a CPU “core” was actually measured to be equivalent to an x86 HyperThread, or half a physical core. So when provisioning public cloud resources, it’s important to include twice as many CPU cores as the equivalent physical hardware. The low price and elasticity of public cloud infrastructure can however offset this differential, and still result in a cost savings over physical hardware.

One place this difference in CPU core calculation can have a significant impact, however, is software licensing. In this post I’ll look at Oracle database licensing in particular.

Oracle databases can be licensed using many metrics, including unlimited use agreements, embedded licenses, evaluation/developer licenses, partner licenses, and many more. But for those without a special agreement in place with Oracle, there are two ways to license products: Named User Plus (NUP) and processor licenses. NUP licenses are per-seat licenses which have a fixed cost per physical user or non-user device. The definition of a user is very broad, however. Quoting the Oracle Software Investment Guide:

Named User Plus includes both humans and non-human operated devices. All human users and non-human operated devices that are accessing the program must be licensed. A non-human operated device can be many things, such as a temperature-monitoring device. It is important to note that if the device is operated by a person, then this person must be licensed. As described in illustration #1, the 400 employees who are operating the 30 forklifts must be licensed because the forklift is not a “non-human operated device”.

So, if the application has any connection outside the organization (batch data feeds and public web users would be examples), it’s very difficult to fit the qualifications to count as NUP licenses.

Now, this leaves per-processor licenses, using processor cores that can potentially run the database software as the licensing metric. When running in a public cloud, however, there is an immediate issue: your Oracle instance could presumably run on any of the thousands of servers owned by the cloud provider, so unique physical processors are virtually impossible to count. Fortunately, Oracle has provided a way to properly license Oracle software in public cloud environments: Licensing Oracle Software in the Cloud Computing Environment. It sets out a few requirements, including:

  • Amazon EC2, Amazon S3, and Microsoft Azure are covered under the policy.
  • There are limits to the counting of sockets and the number of cores per instance for Standard Edition and Standard Edition One.

Most important, though, is the phrase “customers are required to count each virtual core as equivalent to a physical core”. Knowing that each “virtual core” is actually half a physical core, this can shift the economics of public cloud usage for Oracle databases significantly.

Here’s an example of a general-purpose AWS configuration and a close equivalent on physical hardware. I’m excluding external storage and datacenter costs (power, bandwidth, etc.) from the comparison.

  • m3.2xlarge
  • 8 virtual / 4 physical CPU cores (from an E5-2670 processor at 2.6GHz)
  • 30GB RAM
  • 2x80GB local SSD storage
  • 3-year term

Total: $2989 upfront

A physical-hardware equivalent:

  • A single quad-core E5-2623 v3 processor at 3GHz
  • 32GB RAM
  • Oracle standard edition one
  • 2x120GB local SSD
  • 3-year 24×7 4hr on-site service

I priced this out at dell.com and came out with a total of $3761.

Now let’s add in an Oracle license. From the Oracle Price List, a socket of Standard Edition One costs $5800, with an additional $1276/year for support. Due to the counting of CPU cores, our AWS hardware requires two sockets of licensing. So instead of saving $772, we end up paying $9628 more.

 Standard Edition One

If we were to use Oracle Enterprise edition (excluding any options or discounts), that becomes an extra $157,700. Not small change anymore.

 Enterprise Edition
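
To see where those numbers come from, here is the arithmetic over the 3-year term. The Standard Edition One figures use the prices quoted above; the Enterprise Edition part is my own reconstruction, assuming the then-current list price of $47,500 per processor license, roughly 22% annual support (about $10,450 per year), and the 0.5 core factor for x86 processors, so treat it as an illustration rather than a quote:

Standard Edition One: 1 extra socket × ($5,800 license + 3 × $1,276 support) = $9,628
Enterprise Edition: AWS counts as 8 virtual cores × 0.5 core factor = 4 processor licenses,
versus 4 physical cores × 0.5 = 2 licenses on the Dell server, so
2 extra licenses × ($47,500 license + 3 × $10,450 support) = $157,700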

So before you make the jump to put your Oracle databases on a public cloud, check your CPU core counts to avoid unexpected licensing surprises.

Categories: DBA Blogs

Log Buffer #418: A Carnival of the Vanities for DBAs

Fri, 2015-04-10 08:20

This Log Buffer edition has collected some of the valuable blog posts from different databases like Oracle, SQL Server and MySQL.

Oracle:

  • Accessing HDFS files on a local File system using mountable HDFS – FUSE
  • enq: TM – contention
  • The Four A’s of Data Management
  • ODI, Big Data SQL and Oracle NoSQL
  • Using the RIDC Client to Interface with Oracle Webcenter Content

SQL Server:

  • SQL Server 2014 has introduced a rebuilt Cardinality Estimator (CE) with new algorithms
  • Creating a multi-option parameter report for SQL Server Reporting Services
  • Re-factoring a database object can often cause unexpected behavior in the code that accesses that object
  • What is Database Continuous Integration?
  • Deleting Historical Data from a Large Highly Concurrent SQL Server Database Table

MySQL:

  • For years it was very easy to defend InnoDB’s advantage over competition. Covering index reads were saving I/O operations and CPU everywhere, table space and I/O management allowed focusing on database and not on file systems or virtual memory behaviors, and for the past few years InnoDB compression was the way to have highly efficient OLTP.
  • InnoDB locks and deadlocks with or without index for different isolation level.
  • pquery binaries with statically included client libs now available!
  • MySQL Group Replication – mysql-5.7.6-labs-group-replication.
  • MySQL 5.7 aims to be the most secure MySQL Server release ever, and that means some significant changes in SSL/TLS.

Categories: DBA Blogs

Log Buffer #417: A Carnival of the Vanities for DBAs

Fri, 2015-04-10 08:06

This Log Buffer travels wide and deep to scour through the Internet to bring some of the most valuable and value-adding blog posts from Oracle, SQL Server and MySQL.

Oracle:

What is SQLcl? SQLcl is a new command line interface, like SQL*Plus, coming along with SQL Developer 4.1 Early Adopter. It’s a lightweight tool (only 11MB) developed by the SQL Developer team, which is fully compatible with Windows and Unix/Linux. Also, you don’t need to install it, so it’s totally portable.

Find Users with DBA Roles.

Virtual Compute Appliance 2.0.2 Released.

In case you are not familiar with WLST (the WebLogic Scripting Tool), it is a powerful scripting runtime for administering WebLogic domains.

The following article gives some useful hints-and-tips Richard used recently in helping people customizing tables and lists-of-values using Page Composer.

SQL Server:

With the idea of a generic Dacpac defined by international standard, comes the potential for a Visual Studio developer. This uses SSDT to create a generic database model to a SQL-92 compliant standard that can then be deployed to any one of the major RDBMSs.

Using the APPLY operator to reduce repetition and make queries DRYer.

Imagine a situation where you use the SQL Server RAND() T-SQL function as a column in a SELECT statement, and the same value is returned for every row as shown below. In this tip, Dallas Snider explains how you can get differing random values on each row.

This article describes two ways to shred Unicode Japanese characters from xls files into a SQL Server table using SSIS.

Arshad Ali demonstrates how you can use the command line interface to tune SQL queries and how you can use SQL Server Profiler to capture the workload for tuning with Database Engine Tuning Advisor.

MySQL:

Postgres performance since 7.4.

The Ubuntu 12.04.3 LTS release only provides MySQL 5.1 and MySQL 5.5 using the default Ubuntu package manager.

As part of a MySQL 5.5 to MySQL 5.6 upgrade across several Ubuntu servers of varying distros an audit highlighted a trivial but interesting versioning identification error in Ubuntu’s packaging of MySQL.

MySQL 5.6 will now automatically recreate the InnoDB redo log files during a MySQL restart if the size (or number) of these logs changes, i.e. a change to innodb_log_file_size.

Mermaids have the same probability of fixing your permission problems, but people continue believing in the FLUSH PRIVILEGES myth.

Categories: DBA Blogs

Disable Lock Escalation in SQL Server

Fri, 2015-04-10 07:55

If a lot of rows or pages are locked, SQL Server escalates to a table-level lock to save resources. Each single lock takes approximately 100 bytes, so if you have many locks, it takes a lot of resources to manage them. (There is a great blog post about lock escalation if you want some more info: http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/05/17/lock-escalation.aspx)

 

Until SQL Server 2008, there was no way to change the lock escalation behavior for a single table. You could only deactivate escalation for the whole server by using these trace flags:

  • 1211 – Disables lock escalation completely. Locks are allowed to use up to 60% of the allocated memory; if that 60% is used and more locking is needed, you will get an out-of-memory error.
  • 1224 – Disables lock escalation until the memory threshold of 40% of allocated memory is reached; after that, lock escalation is enabled again.

 

But in most cases that was not a good choice, and it caused a lot of performance problems. In SQL Server 2008 and above there is a table option (set via ALTER TABLE) that can be used to change the default lock escalation behavior. This helps if you have a table where you want to disable escalation, or if the table is partitioned.

 

On a partitioned table, activating the AUTO option can improve concurrency by escalating locks to the partition level instead of the table level.

ALTER TABLE – table option:

SET ( LOCK_ESCALATION = { AUTO | TABLE | DISABLE } )

  • AUTO (should be considered if you have a partitioned table)
    • If the table is partitioned, locks are escalated to the partition level
    • If the table is not partitioned, locks are escalated to the table level
  • TABLE
    • Default behavior
    • Locks are escalated to the table level
  • DISABLE
    • Lock escalation to the table level is deactivated in most cases
    • In some necessary cases, escalation to the table level is still allowed
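
For a concrete sketch of changing and then verifying the setting: the server, database, and table names below are hypothetical, and I am running the T-SQL through Invoke-Sqlcmd from the SQL Server PowerShell module (running the same statements in Management Studio works just as well).

# Escalate to the partition level on a partitioned table (hypothetical names throughout)
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDatabase' -Query "ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);"

# Verify the current setting for the table
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDatabase' -Query "SELECT name, lock_escalation_desc FROM sys.tables WHERE name = 'Orders';"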

This is a cool feature that many developers are not aware of.

Thanks for Reading!

Categories: DBA Blogs

OakTable World at IOUG COLLABORATE15

Thu, 2015-04-02 18:52

Update history:
5-Apr: WIT panel added, Alex removed, Gwen and Pete schedule shifted.
11-Apr: Gwen and Pete swapped sessions.
13-Apr: Jonathan off lightning talks.

Guess what? OakTable World at IOUG C15 is happening again! Last year, we had awesome sessions and wonderful attendees. The sessions were so successful, in fact, that we needed a bigger room this year (there were other reasons too, but hey we can fit more people now!).

What: OakTable World C15
When: Wednesday, April 15, 2015, 8:00am – 5:30pm
Where: Mandalay Ballroom K

I really hope that, if you are reading this, you are planning to attend COLLABORATE 15 – IOUG Forum at the Mandalay Bay Resort & Casino in Vegas from April 12-16. If you haven’t yet planned your trip, this might just help you make the call. You know you want to be there!

OakTable Network will be holding its highly anticipated OakTable World during COLLABORATE 15! As always, IOUG was able to provide a room for us to use for the whole day (and boy what a big room it is!). The agenda is determined by the OakTable speakers, who choose topics they are passionate about. And if history is any indicator, these are also the topics you really want to hear about.

For those of you who aren’t familiar with OakTable World, Mogens Nørgaard started it as an underground event during Oracle OpenWorld—somewhere between 2007 and 2009. After several successful years and increasing popularity, the event became known as OakTable World during OOW12 and OOW13. Last year, we hosted OTWC14 at IOUG COLLABORATE 14 in Vegas. Needless to say, it was a success. So…Vegas here we come again!

Thank you to all the great companies who have sponsored this event over the years—you know who you are. This year, the usual suspects have pitched in to make it happen again—Pythian, Enkitec and Delphix. Once again, we will be printing unique t-shirts with cool graphics and awesome sponsors’ logos. Be part of history!

The OTW sessions are (mostly) aligned with conference sessions, except we start a tad later (you will appreciate it) and we’ve shifted a few sessions by 15 minutes to pack in as many as possible. Don’t worry, though, we don’t run anything during lunch or afternoon nap. :)

The current schedule is below, but check back regularly as it may change due to random events.

Time | Presenter | Title
8:10-8:15 | someone authorized | Opening Notes
8:15-9:00 | Tim Gorman | Augmenting SQL Monitor
9:15-10:15 | guest session | Women in Technology Panel
10:30-11:10 | Pete Sharman | Knowledge Sharing – Why Do It?
11:15-12:00 | Gwen Shapira | Kafka for DBAs – Because Inquiring Minds Need to Know
14:00-15:00 | see below | Lightning Talks!
15:15-16:15 | Cary Millsap | The Go/No-Go Matrix for Thinking Clearly About Testing
16:30-17:30 | Jared Still | Knowledge Builds Intuition

Lightning Talks are 10-minute presentations done in a rapid-fire fashion. They are always a huge success—you’ve got to be there! They may be technical, motivational, or inspirational, but regardless they are always cool speeches. The sequence of the talks may change, but everything will be presented within the hour.

Presenter | Lightning Talk
Kyle Hailey | What is DevOps
Kellyn Pot’Vin-Gorman | SQLT in AWR Warehouse
Alex Gorbachev | #100miles
Pete Sharman | SnapClones++
Jonah H. Harris | Performing MongoDB-Compatible NoSQL on Top of Oracle SQL

The OakTable Network folks and other great people will be hanging around, so make sure you drop by! This is an awesome place to grow your network. Remember that the presenters determine the agenda. Our passion to share and educate is what drives us. Come join us.

Vegas, here we come!

Categories: DBA Blogs

Pillars of Powershell #2: Commanding

Tue, 2015-03-31 07:28
Introduction

This is the second blog post, continuing the series on the Pillars of PowerShell. In the initial blog post we went over the various interfaces that can be used to work with PowerShell. In this blog post we are going to start out by going through a few terms you might come across when you start reading up on PowerShell. After that, I will go over three of the cmdlets you can use to discover and get documentation on the cmdlets available to you in PowerShell.

Pillar 2: Commanding

The following are a few terms I will use throughout this series, and ones you might find referenced in any reading material, which I wanted to introduce so we start out on the same page:

  • Session
    When you open PowerShell.exe or PowerShell_ISE.exe it will create a session for you, essentially a blank slate for you to build and create in. You can think of this as a query window within SQL Server Management Studio.
  • Cmdlets
    Pronounced “command-lets”, these are the bread and butter of PowerShell that allow you to do everything from getting information to manipulating it. Microsoft has coined the Verb-Noun format for cmdlet names and it has pretty much stuck. Each version of PowerShell comes with additional cmdlets, and product teams like SQL Server and Active Directory also release cmdlets that allow you to interact with their products through PowerShell. Each time you open a session with PowerShell, a set of core cmdlets is automatically loaded for you.
  • Module
    A module is basically just a set of cmdlets that can be added within your session of PowerShell. When you load a module into your session, its commands are made available to you. If you close that session and open a new one, you will have to reload that module to access the commands again.
  • Objects
    PowerShell is based on .NET; that is what is behind the scenes, more or less. With .NET being object oriented, PowerShell treats the data that is returned as objects. So, if I use a cmdlet to return the processes running on a machine, each process that it returns is an object.
  • The Pipeline
    This is named after the symbol used to connect cmdlets together, “|” (the vertical bar on your keyboard). You can think of this like a train: each car carries a set of objects, and each stage you pass them through can do something to each object until you reach the end. (A short example follows this list.)
  • PowerShell Profile
    This is basically the ability to customize your PowerShell session each time you open or start PowerShell. You can do things such as pre-load modules, create custom bits of PowerShell code for reuse or easy access, and many other things. I would compare this to your profile in Windows that keeps up with things like the icons or applications you have pinned to the taskbar or your default browser.
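
To make the pipeline idea concrete, here is a small sketch using only core cmdlets; each stage receives the objects emitted by the previous one as they travel down the pipe:

# Filter to processes using more than 100 MB, sort the largest first, and keep two properties
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Name, Id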

Now I want to take you through a few core cmdlets that are used most commonly to discover what is available in the current version of PowerShell or the module you might be working with in it. I tend to use these commands almost every time I open PowerShell. I do not try to memorize everything, especially when I can look it up so quickly in PowerShell.

Get-Command

This cmdlet does exactly what you think it does: it gets a list of commands that are available in your current PowerShell session. You can use the parameters of this cmdlet to filter the list down to what interests you, say all the “Get” cmdlets:

Get-Command -Verb Get -CommandType Cmdlet
Get-Help

Now you might be wondering where the documentation is for all of the cmdlets you saw using Get-Command. Where is the Books Online equivalent to what you get with SQL Server? Well, unlike SQL Server, you can actually get the documentation right in the session via the cmdlet Get-Help. This cmdlet can return the information to you, or you can use a parameter to open it up in the browser, if that is available. So, for example, one of the best things to look up documentation on initially is the Get-Help cmdlet itself:

Get-Help

The output of this command is good to read through but the main items I want to pull out are three particular parameters:

  1. Online: This will take you to the TechNet page of the documentation for the cmdlet. This may not work with every cmdlet you come across but if Microsoft owns it there should be something.
  2. Examples: This is going to provide a few examples and descriptions of how you can use the cmdlet and the more common parameters.
  3. Full: This will show you pretty much the same document that is online. This just keeps you in PowerShell instead of viewing it in the browser.

So let me try bringing up the examples of the Get-Help cmdlet itself:

Get-Help Get-Help -Examples

If you are using Windows 7 or higher, you may receive a message telling you that the help files for the cmdlet are not installed. This is because of something that was added in PowerShell version 3.0: the cmdlet Update-Help. This cmdlet is used to actually update the help files on a computer as needed. In the event Microsoft updates the help files, or the online TechNet page, you can use it to download a current version. Microsoft has moved to this method in place of trying to do the updates locally with cumulative updates or service packs. It does require Internet access to execute the cmdlet. If your machine is not on the Internet, you can download the help files from Microsoft’s download center. To fix that message, I just need to issue the command: Update-Help.
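
As a quick sketch of what that looks like (run from an elevated session; the file share path below is a hypothetical example), Update-Help fetches the current help files directly, while Save-Help can stage them for machines without Internet access:

# Update the local help files (requires Internet access)
Update-Help

# For offline machines: download the help content on a connected machine,
# then point the offline machine at that location
Save-Help -DestinationPath '\\FileServer\PSHelp'
Update-Help -SourcePath '\\FileServer\PSHelp'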

You should see a progress bar while it is running through updating all the files (and that progress bar is itself done using PowerShell).


I ended up getting two errors for certain modules and this is because I am not running the cmdlet with elevated privileges. If you open PowerShell.exe with the “Run As Administrator” option and then execute the cmdlet again it will be able to update all help files without error.

Now if you run the previous command again, you should see the actual examples, although you may notice it can be annoying to scroll back up to read all that information. A tidbit I did not know about right away is that there is a parameter in Get-Help called “-ShowWindow” that opens the help in a separate window, which makes it easier to read. It is basically the “-Full” output but with the option to filter out sections that do not interest you.

Get-Help Get-Help -ShowWindow

You actually can use Get-Help to search for cmdlets as well. I tend to do this more than trying to use Get-Command just because it is a bit quicker. You can just issue something like this to find all the “Get” cmdlets:

Get-Help get*

One more thing about the help system in PowerShell: it also includes things called “about” files that are basically concept topics that go deeper into certain areas. They offer a wealth of information, and you can also get to these online. Something for you to try on your own to see what is available is to just issue this command:

Get-Help about*
Get-Member

This cmdlet is a little gem that you will use more than anything. If you pipe any cmdlet (or one-liner) to Get-Member, it will provide you with a list of the properties or methods available for the object(s) passed. This cmdlet also includes a “-MemberType” filter that I can use to return only the properties available to me. The properties are those that we can “select” to return as output or pass to other cmdlets down the pipe.

Get-Command | Get-Member -MemberType Property
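
As a small follow-up sketch, once Get-Member shows that each command object carries properties such as Name and ModuleName, those are exactly the names you can hand to Select-Object further down the pipe:

# Build a simple inventory of cmdlets from two of the properties discovered above
Get-Command -CommandType Cmdlet |
    Select-Object Name, ModuleName |
    Sort-Object ModuleName, Name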
Out-GridView

I am only going to touch on this cmdlet. It can be used to output objects into a table-like view that also offers some filtering capabilities. There are a few different Out-* cmdlets available to you for outputting information to various destinations. You can find these using the Get-Command or Get-Help cmdlets. To use Out-GridView on a Windows Server OS you will have to add the PowerShell ISE feature; you will get an error stating as much if you do not.

Get-Command | Out-GridView

Summary

The three cmdlets Get-Command, Get-Help, and Get-Member that I spoke about above are ones I think you should become very familiar with and explore deeply. Once you master using these, you will have the ability to find out anything and everything about a cmdlet or module that you are trying to use. When you start working with various modules such as Azure or SQL Server PowerShell (SQLPS), these cmdlets are quite useful in discovering what is available.

Categories: DBA Blogs

Pillars of PowerShell #1: Interacting

Tue, 2015-03-31 06:55
Introduction

PowerShell is a tool that if adopted can be used to help automate and standardize processes in your Windows and SQL Server environment (among other things). This blog series is intended to show you some of the basics (not all of them) that will get you up and running with PowerShell. I say not all of them, because there are areas in PowerShell that you can go pretty deep in, just like SQL Server. I want to just give you the initial tools to get you on your way to discovering the awesomeness within PowerShell. I decided to go with a Greek theme, and just break this series up into pillars. In this first blog post I just wanted to show you the tools that are available to allow you to interact with PowerShell itself.

Pillar 1: Interacting

Interacting with PowerShell most commonly means issuing commands directly at the command line interface (CLI); the step above that is building out a script that contains multiple commands. The first two options below are available “out-of-the-box” on a Windows machine that has PowerShell installed. Beyond those, you have a few third-party options available to you that I will point out.

  1. PowerShell.exe
    This is the command prompt (or console, as some may call it) where most folks will spend their day-to-day life entering what are referred to as “one-liners”. This is the CLI for PowerShell. You can access this in Windows by going through the Start Menu, or just type powershell.exe into the Run prompt.
  2. PowerShell_ISE.exe
    This is the PowerShell Integrated Scripting Environment, included in PowerShell 2.0 and up. This tool gives you the ability to have a script editor and CLI in one place. You can find out more about this tool and the various features that come with each version here. You can access this in a similar way to PowerShell.exe. In Windows Server 2008 R2 and above, though, it is a Windows Feature that has to be added or activated before you can use it.
  3. Visual Studio (VS) 2013 Community Edition + PowerShell Tools for Visual Studio 2013
    VS 2013 Community is the free version of Visual Studio that includes the equivalent functionality of Visual Studio Professional Edition. Microsoft opened the door for many things when they did this, the main one being that you can now develop PowerShell scripts alongside your C# or other .NET projects. Adam Driscoll (PowerShell MVP) developed and released an add-on specifically for VS 2013 Community that you can get from GitHub, here.
  4. Third party ISE/Editors
    The following are the main players in the third-party offerings for PowerShell ISEs or script editors. I have tried all of them before, but since they only exist on the machine you install them on, I tend to stick with what is in Windows. They have their place, and if you begin to develop PowerShell heavily (e.g. full project solutions) they can be very useful in the management of your scripts.

Summary

This was a fairly short post that just started out with showing you what your options are to start working and interacting with PowerShell. PowerShell is a fun tool to work with and discover new things that it can do for you. In this series I will typically stick with using the CLI (PowerShell.exe) for examples.

One more thing to point out: the versions of PowerShell currently released (as of this blog post) are 2.0, 3.0, and 4.0. The basic commands I am going to go over will work in any version, but where specific nuances exist between versions I will try to point them out.
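
If you are not sure which version a given machine is running, the built-in $PSVersionTable variable (available since version 2.0) is a quick way to check:

# Show the engine version along with related component versions
$PSVersionTable

# Or just the PowerShell engine version itself
$PSVersionTable.PSVersion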

Categories: DBA Blogs

PowerShell Script to Manipulate SQL Server Backup Files

Tue, 2015-03-31 06:39
Scenario

I use Ola Hallengren’s famous backup solution to back up my SQL Server databases. The destination for full backups is a directory on local disk; let’s say D:\SQLBackup\

If you are familiar with Ola’s backup scripts, you know the full path for backup file looks something like:

D:\SQLBackup\InstanceName\DatabaseName\FULL\InstanceName_DatabaseName_FULL_yyyymmdd_hhmiss.bak

Where InstanceName is a placeholder for the name of the SQL Server instance and, similarly, DatabaseName is a placeholder for the database name.

Problem

Depending upon my retention period settings, I may have multiple copies of full backup files under that directory. The directory structure is complicated too (the backup file for each database sits under two parent folders). I want to copy only the latest backup file for each database to a UNC share and rename the backup file, scrubbing everything but the database name.

Let’s say the UNC path is \\RemoteServer\UNCBackup. The end result would have the latest full backup file for all the databases copied over to \\RemoteServer\UNCBackup with files containing their respective database names only.

Solution

I wrote a PowerShell script to achieve this. The script can be run from a PowerShell console or PowerShell ISE. A more convenient way would be to use the SQL Server Agent PowerShell job step subsystem and schedule an agent job to run the script. As always, please run this on a test system first and use it at your own risk. You may want to tweak the script depending upon your requirements.

 

<#################################################################################

Script Name: CopyLatestBackupandRename.ps1
Author     : Prashant Kumar
Date       : March 29th, 2015

Description: The script is useful for those using Ola Hallengren's backup solution.
             It takes the SQL Server full backup parent folder as one input and
             a remote UNC path as another, copies the latest backup file for each
             database to the remote UNC path, and renames the copied file.

This Sample Code is provided for the purpose of illustration only and is not
intended to be used in a production environment. THIS SAMPLE CODE AND ANY
RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

##################################################################################>

# Clear the screen
cls

# Specify the parent folder where full backup files are originally being taken
$SourcePath = 'D:\SQLBackup\InstanceName'

# Specify the UNC path to the network share where backup files have to be copied
$UNCpath = '\\RemoteServer\UNCBackup'

# Collect the subfolders (named after the databases) inside $SourcePath
$SubDirs = dir $SourcePath -Recurse | Where-Object {$_.PSIsContainer} | ForEach-Object -Process {$_.FullName}

# Browse through each sub-directory inside the parent folder
ForEach ($Dirs in $SubDirs)
{
    # List the most recent file (only one) within the sub-directory
    $RecentFile = dir $Dirs | Where-Object {!$_.PSIsContainer} | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1

    # Perform the operation on each file (listed above) one by one
    ForEach ($File in $RecentFile)
    {
        $FilePath = $File.DirectoryName
        $FileName = $File.Name
        $FileToCopy = $FilePath + '\' + $FileName
        $PathToCopy = ($FilePath -replace [regex]::Escape($SourcePath), $UNCpath) + '\'

        # Forcefully create the desired directory structure at the destination if it doesn't exist
        New-Item -ItemType Directory -Path $PathToCopy -Force

        # Copy the backup file
        Copy-Item $FileToCopy $PathToCopy

        # Trim the date/time suffix from the copied file name and store it in a variable
        $DestinationFile = $PathToCopy + $FileName
        $RenamedFile = ($DestinationFile.Substring(0, $DestinationFile.Length - 20)) + '.bak'

        # Rename the copied file
        Rename-Item $DestinationFile $RenamedFile
    }
}

Categories: DBA Blogs

SQL Server 2012 SP2 Cumulative Update 5

Tue, 2015-03-31 06:20

Hey folks,

Microsoft released the 5th Cumulative Update for SQL Server 2012 SP2. This update package contains fixes for 27 different issues.

 


 

One very important issue fixed in this CU release is KB3038943 – Error 4360 when you restore the backup of a secondary replica to another server in AlwaysOn Availability Groups.

If you use SQL Server 2012 SP2 Always On and you offload your log backups to the secondary node, it is recommended that you apply this patch!

The full Cumulative Update details and the download links can be found here: http://support.microsoft.com/en-us/kb/3037255/en-us

 

Categories: DBA Blogs

Log Buffer #416, A Carnival of the Vanities for DBAs

Mon, 2015-03-30 12:29

This log buffer edition sprouts from the beauty, glamour and intelligence of various blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle Exadata Performance: Latest Improvements and Less Known Features

Exadata Storage Index Min/Max Optimization

Oracle system V shared memory indicated deleted

12c Parallel Execution New Features: Concurrent UNION ALL

Why does index monitoring make Connor scratch his head and charge off to Google so many times?

SQL Server:

Learn how to begin unit testing with tSQLt and SQL Server.

‘Temporal’ tables contain facts that are valid for a period of time. When they are used for financial information they have to be very well constrained to prevent errors getting in and causing incorrect reporting.

As big data application success stories (and failures) have appeared in the news and technical publications, several myths have emerged about big data. This article explores a few of the more significant myths, and how they may negatively affect your own big data implementation.

When effective end dates don’t align properly with effective start dates for subsequent rows, what are you to do?

In order to automate the delivery of an application together with its database, you probably just need the extra database tools that allow you to continue with your current source control system and release management system by integrating the database into it.

MySQL:

Ronald Bradford on SQL, ANSI Standards, PostgreSQL and MySQL.

How to Manage the World’s Top Open Source Databases: ClusterControl 1.2.9 Features Webinar Replay

A few interesting findings on MariaDB and MySQL scalability, multi-table OLTP RO

MariaDB: The Differences, Expectations, and Future

How to Tell If It’s MySQL Swapping

Categories: DBA Blogs

Pythian at Collaborate 15

Fri, 2015-03-27 15:05

Make sure you check out Pythian’s speakers at Collaborate 15. Stop by booth #1118 for a chance to meet some of Pythian’s top Oracle experts, talk shop, and ask questions. This many Oracle experts in one place only happens once a year; have a look at our list of presenters and we think you’ll agree.

Click here to view a PDF of our presenters

 

Pythian’s Collaborate 15 Presenters | April 12 – 16 | Mandalay Bay Resort and Casino, Las Vegas

 

Christo Kutrovsky | ATCG Senior Consultant | Oracle ACE

 

Maximize Exadata Performance with Parallel Queries

Wed. April 15 | 10:45 AM – 11:45 AM | Room Banyan D

 

Big Data with Exadata

Thu. April 16 | 12:15 PM – 1:15 PM | Room Banyan D

 

Deiby Gomez Robles | Database Consultant | Oracle ACE

 

Oracle Indexes: From the Concept to Internals

Tue. April 14 | 4:30 PM – 5:30 PM | Room Palm C

 

Marc Fielding | ATCG Principal Consultant | Oracle Certified Expert

 

Ensuring 24/7 Availability with Oracle Database Application Continuity

Mon. April 13 | 2:00 PM – 3:00 PM | Room Palm D

 

Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases

Tue. April 14 | 11:00 AM – 12:00 PM | Room Palm C

 

Maris Elsins | Oracle Application DBA | Oracle ACE

Mining the AWR: Alternative Methods for Identification of the Top SQLs in Your Database

Tue. April 14 | 3:15 PM – 4:15 PM | Room Palm B

 

Ins and Outs of Concurrent Processing Configuration in Oracle e-Business Suite

Wed. April 15 | 8:00 AM – 9:00 AM | Room Breakers B

 

DB12c: All You Need to Know About the Resource Manager

Thu. April 16 | 9:45 AM – 10:45 AM | Room Palm A

 

Alex Gorbachev | CTO | Oracle ACE Director

 

Using Hadoop for Real-time BI Queries

Tue, April 14 | 9:45 AM – 10:45 AM | Room Jasmine E

 

Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases

Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C

 

Anomaly Detection for Database Monitoring

Thu, April 16 | 11:00 AM – 12:00 PM | Room Palm B

 

Subhajit Das Chaudhuri | Team Manager

 

Deep Dive: Integration of Oracle Applications R12 with OAM 11g, OID 11g , Microsoft AD and WNA

Tue, April 14 | 3:15 PM – 4:15 PM | Room Breakers D

 

Simon Pane | ATCG Senior Consultant | Oracle Certified Expert

 

Oracle Service Name Resolution – Getting Rid of the TNSNAMES.ORA File!

Wed, April 15 | 9:15 AM – 10:15 AM | Room Palm C

 

René Antunez | Team  Manager | Oracle ACE

 

Architecting Your Own DBaaS in a Private Cloud with EM12c

Mon. April 13 | 9:15 AM – 10:15 AM | Room Reef F

 

Wait, Before We Get the Project Underway, What Do You Think Database as a Service Is…

Mon, Apr 13 | 03:15 PM – 04:15 PM | Room Reef F

 

My First 100 days with a MySQL DBMS

Tue, Apr 14 | 09:45 AM – 10:45 AM | Room Palm A

 

Gleb Otochkin | ATCG Senior Consultant | Oracle Certified Expert

 

Your Own Private Cloud

Wed. April 15 | 8:00 AM – 9:00 AM | Room Reef F

 

Patching Exadata: Pitfalls and Surprises

Wed. April 15 | 12:00 PM – 12:30 PM | Room Banyan D

 

Second Wind for Your exadata

Tue. April 14 | 12:15 PM – 12:45 PM | Room Banyan C

 

Michael Abbey | Team Manager, Principal Consultants | Oracle ACE

 

Working with Colleagues in Distant Time Zones

Mon, April 13 | 12:00 PM – 12:30 PM | Room North Convention, South Pacific J

 

Manage Referential Integrity Before It Manages You

Tue, April 14 | 2:00 PM – 3:00 PM | Room Palm C

 

Nothing to BLOG About – Think Again

Wed, April 15 | 7:30 PM – 8:30 PM | Room North Convention, South Pacific J

 

Do It Right; Do It Once. A Roadmap to Maintenance Windows

Thu, April 16 | 11:00 AM – 12:00 PM | Room North Convention, South  Pacific J

Categories: DBA Blogs

Oracle Database 12c In-Memory Q&A Webinar

Thu, 2015-03-26 09:21

Today I will be debating Oracle 12c’s In-Memory option with Maria Colgan of Oracle (aka optimizer lady, now In-Memory lady).

This will be in a debate form with lots of Q&A from the audience. Come ask the questions you always wanted to ask.

Link to register and attend:
https://attendee.gotowebinar.com/register/7874819190629618178

Starts at 12:00pm EDT.

Categories: DBA Blogs

Free Apache Cassandra Training Event in Cambridge, MA March 23

Fri, 2015-03-20 14:24

I’ll be speaking, along with DataStax and Microsoft representatives, at Cassandra Essentials Day this coming Monday (March 23) in Cambridge, MA. This free training event will cover the basics of Apache Cassandra and show you how to try it out quickly, easily, and free of charge on the Azure cloud. Expect to learn about the unique aspects of Cassandra and DataStax Enterprise and to dive into real-world use cases.

Space is limited, so register online to reserve a spot.

Categories: DBA Blogs

My Co-op Experience at Pythian

Fri, 2015-03-20 06:30
That’s me in front of our office. I promise there is a bigger Pythian logo!

Unlike most other engineering physics students at Carleton who prefer to remain within the limits of engineering, I had chosen to apply for a software developer co-op position at Pythian in 2014. For those of you who do not know much about the engineering physics program (I get that a lot and so I will save you the trip to Google and tell you), this is how Stanford University describes their engineering physics program: “Engineering Physics prepares students to apply physics to tackle 21st century engineering challenges and to apply engineering to address 21st century questions in physics.” As you can imagine, very little to do with software development. You might ask, then why apply to Pythian?

Programming is changing the way our world functions. Look at the finance sectors: companies rely on complicated algorithms to determine where they should be investing their resources, which in turn determines the course of growth for the company. In science and technology, algorithms help us make sense of huge amounts of unstructured data which would otherwise take us years to process, and help us understand and solve many of our 21st century problems. Clearly, learning how to write these algorithms or code cannot be a bad idea, rather, one that will be invaluable. A wise or a not so wise man once said, (you will know what I mean if you have seen the movie iRobot): “If you cannot solve a problem, make a program that can.” In a way, maybe I intend to apply physics to tackle all of 21st century problems by writing programs. (That totally made sense in my head).

Whatever it might be, my interest in programming or my mission to somehow tie physics, engineering, and programming together, I found myself looking forward to an interview with Pythian. I remember having to call in for a Skype interview. While waiting for my interviewers to join the call, I remember thinking about all the horror co-op stories I had heard: How you will be given piles of books to read over your work term (you might have guessed from this blog so far, not much of a reader, this one. If I hit 500 words, first round’s on me!). Furthermore, horror stories of how students are usually labeled as a co-op and given no meaningful work at all.

Just as I was drifting away in my thoughts, my interviewers joined the call. And much to my surprise they were not the traditional hiring managers in their formal dresses making you feel like just another interviewee in a long list of interviewees. Instead they were warm and friendly people who were genuinely interested in what I could offer to the company as a co-op student. The programming languages I knew, which one was my favourite, the kind of programs I had written, and more. They clearly stated the kind of work I could expect as a co-op student, which was exactly the same kind of work that the team was going to be doing. And most importantly, my interviewers seemed to be enjoying the kind of work they do and the place they work at.

So, when I was offered the co-op position at Pythian, I knew I had to say yes!

My pleasant experience with Pythian has continued ever since. The most enjoyable aspect of my work has been the fact that I am involved in a lot of the team projects which means I am always learning something new and gaining more knowledge each day, after each project. I feel that in an industry like this, the best way to learn is by experience and exposure. At Pythian that is exactly what I am getting.

And if those are not good enough reasons to enjoy working for this company, I also have the privilege of working with some extremely experienced and knowledgeable people in the web development industry. Bill Gates once suggested that he wanted to hire the smartest people at Microsoft and surround himself with them. This would create an environment where everyone would learn from each other and excel in their work. And I agree with that. Well, if you are the next Bill Gates, go ahead, create your multibillion dollar company, hire the best of the best, and immerse yourself in the presence of all that knowledge and intelligence. But I feel I have found myself a great alternative, a poor man’s approach, a student-budget approach, or whatever you want to call it: take full advantage of working with some really talented people and learn as much as you can.

Today, five months into my yearlong placement with Pythian, I could not be more sure and proud of becoming a part of this exciting company, becoming a Pythianite. And I feel my time spent in this company has put me well on course to complete my goal of tying physics, engineering, and programming together.

Categories: DBA Blogs

Log Buffer #415, A Carnival of the Vanities for DBAs

Fri, 2015-03-20 06:25

This Log Buffer Edition covers Oracle, SQL Server, and MySQL with a keen eye on novel ideas.

Oracle:

The case was to roll forward a physical standby with an RMAN SCN incremental backup taken from primary.

Oracle Database 12c: Smart upgrade

This blog covers how to specify query parameters using the REST Service Editor.

Production workloads blend Cloud and On-Premise Capabilities

ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP

SQL Server:

Mail Fails with SQLCMD Error

How to get Database Design Horribly Wrong

Using the ROLLUP, CUBE, and GROUPING SETS Operators

The Right and Wrong of T-SQL DML TRIGGERs (SQL Spackle)

How converting extensive, repetitive code to a data-driven approach resolved a maintenance headache and helped identify bugs

MySQL:

Distributing innodb tables made simpler!

Choosing a good sharding key in MongoDB (and MySQL)

Update a grails project from version 2.3.8 to version 2.4.4

MySQL Enterprise Backup 3.12.0 has been released

If a table is partitioned, it is easier to maintain. If a table has grown huge and the backups just keep running long, then you probably need to think about archiving or purging.

Categories: DBA Blogs