
DBA Blogs

Index Advanced Compression: Multi-Column Index Part II (Blow Out)

Richard Foote - Thu, 2015-09-24 02:00
I previously discussed how Index Advanced Compression can automatically determine not only the correct number of columns to compress, but also the correct number of columns to compress within specific leaf blocks of the index. However, this doesn’t mean we can just order the columns within the index without due consideration from a “compression” perspective. As […]
Categories: DBA Blogs

iObit Driver Booster Pro 3 License Key

Jithin Sarath - Wed, 2015-09-23 10:59
iObit Driver Booster Pro 3 with license key is a powerful tool to update PC/laptop drivers. Get Driver Booster 3 serial keys, tested and verified [100% working]. Driver Booster Pro 3 is a utility program for your PC. After installing a completely new operating system, many of us get stuck on driver issues. Microsoft usually releases Windows updates along with device drivers, but some drivers can’t be found in Windows Update. One can search the manufacturer’s website for drivers, but beginner PC users are not aware of these things. With that in mind, today we are sharing the Driver Booster Pro free download with serial key. You can easily update or install your drivers with this one-click application.
Categories: DBA Blogs

Creating a Test Lab Using VirtualBox / NAT networking

Pythian Group - Tue, 2015-09-22 12:37

My job is almost completely reliant upon my ability to perform work in lab of virtual machines. Almost every action plan I write is tested locally. When I need to troubleshoot an issue for a client one of the most common first steps I’ll perform is attempting to recreate the issue in a virtual environment so I can work on it there without the risk of impacting client data.

I believe that having a place to test and grow your skills is an absolute necessity for anyone working in the IT field today regardless of your specialization, even if you’re an IT generalist. But every now and then I hear about individuals who have issues with their virtual machines or with the virtual environment provided by their employer, so I figured this was a good time to share my method of creating a virtual lab. More specifically, one that allows you to do virtual work on a commodity laptop, one that won’t break down if you lose connectivity, one that won’t be interfered with if you connect to a restrictive VPN.

We’re going to build 2 CentOS 6 virtual machines on a virtual NAT network using Oracle VirtualBox. Before you begin, all you’re going to need is the installer for VirtualBox for your host machine and an ISO of CentOS 6 (64 bit).

The first thing we’re going to do once VirtualBox is installed is set up a NAT network. The most common application of NAT networking is likely the usage of home broadband internet ‘routers’, which take the single IP you get from your ISP and allow multiple machines to interface with it.

We’re going to use this concept to build a NAT network in your virtual environment that your virtual machines will connect to. By internalizing the network structure I will be able to interact with my virtual machines and allow my virtual machines to interact with one another regardless of whether my PC is connected to a network or not. This can be really handy if you’re having a networking problem, if you’re traveling, or if you’re inheriting a restrictive networking policy from a client’s soft VPN connection, etc. Regardless of what happens, if you can turn on your machine you can work in your lab of virtual machines.


For this tutorial I’ve installed VirtualBox version 5.0.4 on my local machine. A Nat Network is already set up by default, but you can verify this by going to File -> Preferences -> Network (on the left menu bar). On the Nat Networks tab you will likely see a network called ‘LocalNat’. You can click on the edit button (the little screwdriver icon) to see the network configuration. If you don’t have a Nat Network by default, create one by hitting the icon that looks like a network card with a ‘+’ symbol over it, then select it and click the edit button.


You will see the following options in the Nat Network Details window:

(Checkbox) Enable Network: Make sure this is checked.

Network Name: I’ve decided to stay with the name ‘LocalNat’.

Network CIDR: This is the IP/Netmask of the Nat Network. For this tutorial I’ve chosen to stay with the default option of 10.0.2.0/24. For those of you unfamiliar with slash subnet notation, /24 translates to a netmask of 255.255.255.0, or rather a class ‘C’ network. This means all devices (or in this case, virtual machines) that connect to this network must have an IP starting with 10.0.2.XX, but the XX can be anything you want it to be above the number 3. The reason for this is that the first IP (.1) is reserved for host resolution and the second IP (.2) is reserved for the network gateway.

(Checkbox) Support DHCP: Leave this unchecked. We will be assigning static IPs.

(Checkbox) Support IPv6: Leave this unchecked. We only need IPv4.


There will also be a button for port forwarding. We will come back to that later after our first virtual machine has been created. Keep clicking OK until you’re back to VM VirtualBox Manager, as any networking configuration changes you have made will not be applied until you have done so.
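If you prefer to script this step, the same NAT network can be created from the host command line with VBoxManage. This is a minimal sketch assuming the default 10.0.2.0/24 range and the ‘LocalNat’ name used above:

# Create the NAT network used in this tutorial; DHCP stays off because we will assign static IPs
VBoxManage natnetwork add --netname LocalNat --network "10.0.2.0/24" --enable --dhcp off
# Confirm the network exists
VBoxManage list natnets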



The network is set up and we’re ready to create our first virtual machine! Click on New and this will take you to the ‘Create Virtual Machine’ window. The first option will be to name your virtual machine. In this case I am going to name my first virtual machine ‘CentVM1’ and will use type ‘Linux’ and version ‘Red Hat (64-Bit)’.


NOTE: Often I hear about people having issues with the 64-bit options not being available in the version menu. If you only see 32 bit versions, learn how to enable the 64 bit options.

The next option will be to set the amount of memory that your virtual machine will use. Bear in mind that the amount of virtual memory you set will be used on your local machine whenever the virtual machine is turned on. For this tutorial I have set the memory to 1024MB (1G), meaning whenever the virtual machine is on, 1G of memory will be used on the local machine to support it.


The next step will give you the option to create a virtual hard disk for your virtual machine. Select ‘create a virtual hard disk now’ and click ‘create’.


The next option will allow you to select a hard disk type. In this example I will use a VDI (VirtualBox Disk Image).


The next option will allow you to choose whether you want the virtual hard disk to be dynamically allocated or a fixed size.

Dynamically allocated means that the file on your machine that represents the virtual hard disk will start at a small size, but will grow as needed until it meets the cap designated by the size of the drive. For example, if you specify for the drive to be 20G in size it will appear as 20G in the virtual machine, but the file that represents this on your host machine will be much smaller until the disk space is actually used. The advantage to this is that you save space initially until you actually use the hard disk space in the virtual machine. The downside to this is if you need to use more space, you may incur I/O overhead to allow your host machine to expand the size of the file on demand. The file can also easily become fragmented and distributed on the physical drive platter.

Fixed size is just what it sounds like. If you specify that the virtual machine is going to have a 20G hard disk, a 20G file is going to be created on the host machine and will likely not experience any expansion beyond that. The downside is that the file that represents the disk on your host machine will be 20G regardless of how much of the disk is actually being used. The upside is that you don’t have to worry about losing track of how much your virtual machine disk files may expand.

In this case I find dynamic allocation to be acceptable, as I’m only going to use this VM for testing and development. However, you can use whichever you feel comfortable with.


The next option allows you to set the size of the virtual hard disk. For a MySQL lab virtual machine I would recommend no less than 10G. In this case, I’m going to use 20G in case I need the extra space.


After clicking ‘create’ you will be brought back to the VM VirtualBox Manager. You will see the VM you just created, but we can’t start it up just yet! We need to attach it to the Nat Network we setup earlier and we need to load the CentOS 6 ISO. Right click on your new virtual machine (CentVM1) and click Settings.


In the settings window for your virtual machine, click ‘Storage’ on the left menu. In the storage tree section, click the ’empty’ CD-ROM entry under the IDE controller, then in the attributes section to the right, click the icon that looks like a CD next to the optical drive drop-down menu and click ‘Choose Virtual Optical Disk File’.


Browse to and select the CentOS 6 ISO. When you return to the virtual machine’s settings window you will see that the ISO has been loaded into the virtual optical drive on the IDE controller.


Next, on the left menu section, click ‘Network’. Here you will see that you have a network adapter that is enabled and is attached to NAT by default. Use the ‘Attached to’ drop down menu and select ‘Nat Network’, then on the ‘Name’ drop down menu, select the Nat Network we created earlier (LocalNat). This will attach this network controller to your Nat Network.


Click OK to go back to the VM VirtualBox Manager. We are now ready to start up the virtual machine! I know we said earlier we were going to create 2 virtual machines, but we’ll take care of the second one later using cloning. For now, select your first virtual machine (CentVM1) and click start.



Install OS / Configure networking

You should install CentOS as you normally would in any other circumstance. However, I would suggest that you consider a minimal installation when given the choice of installation type. You don’t know what you will or won’t have available to you on a client system, so I strongly prefer a virtual machine where I can’t assume anything and will need to install packages on an as-needed basis. This will allow you to troubleshoot and be ready for issues you may face on a client system.


The only other thing to note is that during the installation I selected to use the hostname cent1.localhost.

Once the installation is complete, we are going to have to use the VirtualBox virtual machine terminal until we have connectivity established. We can start by setting up networking. You’ll notice that if you run ‘ifconfig’ there aren’t any network controllers enabled beyond the virtual machine’s loopback (127.0.0.1). Let’s enable the controller by editing its config file. You can edit the file with vi by using the following command:

> vi /etc/sysconfig/network-scripts/ifcfg-eth0

You will want the file to contain the following:

DEVICE="eth0" #this is the default, do not change
HWADDR="08:00:27:F6:B3:84" #this is going to be the mac address of the VM's network interface, do not change
NM_CONTROLLED="yes" #this is the default, do not change
ONBOOT="yes" #set to yes. This is what sets the network interface to start at boot
BOOTPROTO="none" #Since we are using a static IP, we don't want any boot networking protocol like dhcp or bootp to run
IPADDR=10.0.2.6 #The IP address of your virtual machine. I typically start my first machine with 6 and then move up from there. This is a matter of personal preference
NETMASK=255.255.255.0 #This netmask matches the /24 subnet notation we set for the Nat Network earlier
GATEWAY=10.0.2.2 #As stated earlier, VirtualBox reserves the .2 IP in the subnet for the gateway
DNS1=8.8.8.8 #The DNS server you want to use. Personal preference (8.8.8.8 is Google's public DNS).


Once you’re done modifying the configuration file for adapter eth0, you will want to restart networking with the following command:

> /etc/init.d/network restart

You should now see your static IP when running ifconfig and should be able to ping out.





At this point the machine is able to perform its basic functions. Note that you can’t SSH to the server yet, we’ll get to that. This is where you will want to put any finishing touches on the virtual machine and create your first snapshot. Before creating my first snapshot I will typically take the following steps.

  • Stop iptables and remove it entirely from chkconfig so it doesn’t start at boot. This is the Linux software firewall and hasn’t been needed for any work I’ve ever had to do in a lab. This is typically one of the big things that throws people off when they’re trying to establish inbound connectivity for the first time, so I recommend disabling it.
  • Disable SELINUX.
  • Update the OS by running yum update. (The commands for all three steps are sketched below.)
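For reference, here is a minimal sketch of those three steps on CentOS 6 (run as root; the sed line assumes the stock /etc/selinux/config layout):

# Stop the software firewall now and keep it from starting at boot
service iptables stop
chkconfig iptables off
# Disable SELinux (takes effect after the next reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Update the OS
yum -y update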

Note that I did not install MySQL. The reason for this is I want a snapshot of just the operating system load and nothing else. I utilize snapshots a lot to quickly spin up different types of working environments, and I find it’s easier to restore an OS load to create a new environment than to deal with uninstalling and reinstalling packages.

For example, if I have an Oracle MySQL 5.6 VM installed and I want to switch to Percona 5.6, I would stop the VM, take a snapshot of my Oracle load for future reference, restore the OS Load snapshot, start the VM, and now I have a freshly installed CentOS VM where I can install Percona 5.6 and then create a new snapshot for it. This way if I ever need to use Oracle or Percona I can just load the appropriate snapshot and start the VM. No need to have more than 1 VM unless you’re emulating a replication or clustering environment.

So now that we have our first VM configured, let’s shut it down and snapshot it. Shut down your VM using the following command.

> shutdown -h now

Once shut down, go back to the VM VirtualBox Manager. Click on your VM and then click on snapshots.


Right now the only thing that you should see is the current state. No other states have been saved at this point, so create a new snapshot by clicking the menu icon that looks like a camera. This will take you to the option to create a snapshot of your virtual machine. When naming your snapshot and creating the snapshot description, BE DESCRIPTIVE. You are going to want to know what the state of the snapshot is without having to load it. Once done, click OK and you will see that the new snapshot is created.
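Snapshots can also be taken from the host command line. A quick sketch with VBoxManage, using the VM name from this tutorial and a descriptive label:

VBoxManage snapshot CentVM1 take "CentOS 6 OS load" --description "Minimal CentOS 6 install, iptables and SELinux disabled, fully yum updated, no MySQL"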




Setting up port forwarding

The last thing you need to do for this VM is make it accessible. You’ll notice that if you try to SSH to the machine at 10.0.2.6 you’re not going to get very far. The reason for this is that the IP is sitting behind a NAT network, just like your machine does if you are using a router between yourself and your ISP. If your local machine has a private IP on your local network, no one outside your network is going to be able to ping your machine from the internet; they would have to connect to you using your public facing IP address with a port that you have designated to forward from your router to your local machine. Remember how we mentioned NAT Network port forwarding earlier? We’re going to configure that now.

Click on File -> Preferences -> Network (on the left menu bar), select your NAT network (LocalNat) and click the screwdriver icon to the right to edit like we did before when confirming the networking settings for the NAT network. This will bring you back to the NAT networking details window, click on the ‘port forwarding’ button.

You’ll need to setup a rule to forward a port on your local machine to the port of your virtual machine. In this example we’re going to setup an SSH port forward. We’ll use the following options…

Name: CentVM1SSH

Protocol: TCP

Host IP: <blank>

Host Port: 61022

Guest IP: 10.0.2.6

Guest Port: 22


What we have done is create a port forwarding rule called ‘CentVM1SSH’ that states that any inbound connectivity on port 61022 on your local machine should be forwarded to port 22 on the virtual machine we just created. Now we should be able to SSH from our local machine to our VM using 127.0.0.1 on port 61022. Don’t forget to turn your virtual machine back on before you try this! Also, be sure that you have clicked OK on all preferences windows and are back to the VirtualBox VM Manager before attempting, as new networking port forward rules will not be applied until you have done so.
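From a terminal on your local machine, the connection attempt would look something like this (a sketch, assuming you are logging in as root and using the forwarded port above):

ssh -p 61022 root@127.0.0.1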



Cool! As you can see from the example images above we can SSH directly to our VM using the ports we have forwarded.

Having 1 virtual machine is nice, but it’s not going to be enough. How often do you see single server solutions? We need another! However, I don’t want to go through all that configuration again, so let’s just clone this virtual machine. Make sure your first virtual machine is shut down and then follow these steps.


Cloning your virtual machine

In the VM VirtualBox Manager, right click on your first virtual machine and click on clone.


This will bring up the ‘Clone Virtual Machine’ window. Enter the name for your second virtual machine (CentVM2), MAKE SURE ‘Reinitialize the MAC address of all network cards’ IS CHECKED and then click next. If you do not reinitialize the MAC address, it means that the network card on your new virtual machine will have the same MAC address as the first one. This is bad.


The next options will be to either create a full clone or a linked clone. A linked clone is like a snapshot. I prefer to go with full clones so long as I have the disk space to support them.


The next option will ask if you want a clone of the machine in just its current state, or everything including the existing snapshots. I would suggest cloning the machine in its current state. We are going to have to update some networking settings on the new virtual machine and create a new OS load snapshot for it later.
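Alternatively, the entire clone step can be scripted. A sketch with VBoxManage, which produces a full clone and reinitializes MAC addresses by default:

VBoxManage clonevm CentVM1 --name CentVM2 --register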


After you click on the clone button, you’ll see a progress bar come up. On average the cloning process of a small VM with just an OS load typically takes about 5 minutes. Once it’s done you will see that you have 2 virtual machines at your disposal. However, there are going to be some issues with the second virtual machine that we will need to fix. Specifically, it has the hostname, IP, and MAC address of the first virtual machine in its configuration files. Let’s fix that!


First, we need to know what the MAC address of the new virtual machine is. We can get that by right clicking on the second virtual machine, clicking on settings, and then selecting network from the menu on the left. Drop down the advanced options and note the MAC address.


Go back to the VM VirtualBox Manager and start up your second virtual machine.

You will want to edit the following files: /etc/sysconfig/network, /etc/sysconfig/network-scripts/ifcfg-eth0, and /etc/udev/rules.d/70-persistent-net.rules.


In /etc/sysconfig/network, change the ‘HOSTNAME’ entry to whatever you would like your new virtual machine host name to be.


In /etc/sysconfig/network-scripts/ifcfg-eth0, change the ‘HWADDR’ entry to the MAC address that was reinitialized for this virtual machine. This is the MAC address you noted earlier.

Also change the ‘IPADDR’ entry to the IP address you would like for this machine.


In /etc/udev/rules.d/70-persistent-net.rules, delete everything that comes below the comments at the top of the file. This file is used to store information that ties network adapters to their MAC addresses and will be repopulated on the next reboot.

Once this is done, reboot your virtual machine. The following output shows what the contents look like on my second virtual machine after it was rebooted.

[root@cent2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=cent2.localhost
[root@cent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
HWADDR="08:00:27:7E:A7:A5"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR=10.0.2.7
NETMASK=255.255.255.0
GATEWAY=10.0.2.2
DNS1=8.8.8.8
[root@cent2 ~]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key. #Delete below this line before reboot
# PCI device 0x8086:0x100e (e1000) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:7e:a7:a5", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

At this point you’re good to go. Shut down your second virtual machine. Take an OS load snapshot of it and you’re all set. You can power on both virtual machines and they should be able to communicate with one another with no additional configuration. Don’t forget to set up another port forward so you can SSH directly to your second virtual machine, and additional ports as needed to connect to services like MySQL.
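The extra port forward can be added through the same GUI dialog as before, or scripted. A sketch with VBoxManage, assuming the second VM is at 10.0.2.7 and host port 61023 is chosen:

VBoxManage natnetwork modify --netname LocalNat --port-forward-4 "CentVM2SSH:tcp:[]:61023:[10.0.2.7]:22"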

This virtual machine setup is as compact as you would like it to be while remaining effective, and it will be available to you so long as you can power your machine on. Enjoy! Oh, and also, everything written in this tutorial was tested using virtual machines.

Good day!


Discover more about Pythian and our technical expertise.

Categories: DBA Blogs

SQL On The Edge #2 – SQL 2016 Temporal Tables

Pythian Group - Tue, 2015-09-22 12:31

Hello and welcome to our second episode of SQL On The Edge! On this episode we’re going to focus on a new feature of SQL Server 2016 called Temporal Tables.

What’s a Temporal table?

Temporal tables allow us to go back in time and see what the data looked like at some point in the past and are also referred to as “system-versioned” tables. This has several different uses, some of which are:

a) Auditing
b) Quickly fix mistakes
c) Data trending

A temporal table is implemented as two different tables by SQL Server and they’re displayed transparently as one table to the user. Each one of these tables also has two datetime2 columns to track the START of the validity of a record and the END of the validity. These tables are referred to as the current and the history table. Partitioning is supported for both the current and the history table to make it easier to work with large amounts of archived data.


Enabling support for a table to be temporal comes with some limitations, these are the main ones:
– A PK is required on the current table.
– The table can’t be TRUNCATED.
– No support for FILETABLE or FILESTREAM.

There are more, for the full list refer to the documentation.

Creating and Querying

For creating a temporal table and query examples let’s jump to the video, enjoy!
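If you’d like to skim the syntax before (or after) the video, here is a minimal sketch of a system-versioned table; the table and column names are invented for illustration:

-- Current table with the required PK and the two datetime2 period columns
CREATE TABLE dbo.Employee
(
    EmployeeID INT PRIMARY KEY,
    Salary     DECIMAL(10,2) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Go back in time: the rows as they looked at a given instant
SELECT * FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2015-09-01T00:00:00';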


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Nero 2016 Platinum Crack Download

Jithin Sarath - Tue, 2015-09-22 09:15
Nero 2016 Platinum serial is a powerful CD/DVD burning tool with advanced configuration options. Download Nero 2016 Platinum Serial Key + Crack free.
Categories: DBA Blogs

WebLogic on ODA: Final release and new partner supported model

A new version of the WebLogic installation wizard for WebLogic on ODA is now available on OTN for download, and supports the latest release of ODA hardware and software. With the latest...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Windows Containers: Installing SQL Server

Pythian Group - Mon, 2015-09-21 14:28

This blog post is a quick introduction to Containers in the Windows world, and a walk-through on installing SQL Server in a Windows Container.


As many of you have heard, Microsoft is jumping into containers with native support for Docker containers in Windows 2016. Containers are the current big thing in virtualization, Linux, and DevOps, because they are very lightweight and allow you to quickly create a new environment without having to wait for a VM to be deployed and provisioned by the server team. I expect them to be just as useful and ubiquitous in the Windows world very soon.

Hypervisors are based on emulating hardware, and so they can be very resource intensive. At a minimum, they’re required to have an entire operating system, CPU, RAM, and some drives assigned before they’re usable, and that’s often overkill for a VM running a single application. Containers, by contrast, virtualize only at the OS level and share the kernel libraries between them, and you don’t need to worry about the rest. Containers are small and lightweight enough that you can expect to run 4 to 6 times as many containers vs VMs on one host.

This MSDN Blog Post goes into detail on containers and their differences from VMs.

It’s important to note that containers are meant to run a single application and do not have GUI interfaces. So, everything must be run via the command line or a remote connection.

Why use containers for SQL Server?
  1. You need to quickly create a set of SQL Server instances for development or testing.
  2. Your company runs a Software-as-a-Service and wants to separate clients into different environments while squeezing everything they can from their hardware.
  3. You want to be able to share development environments without everyone getting in each other’s way.
  4. Your VM or Server team just isn’t very good, and they take forever to get you what you need.
Installing SQL Server in a Windows Container

The following is a walk-through for installing SQL Server in a Windows Container. You might want to reference the Docker documentation for more details on the commands I use.

When you’re done with the tutorial, try to get multiple containers and their instances of SQL Server running on the same box.

Step 1: Create a New Server

The server should be running Windows Server 2016 Technical Preview 3 (or higher) Core with the Container role enabled.

I used Azure’s “Windows Server Container Preview” VM for this tutorial, which luckily has the Host OS all setup for me. Find out more details on setting up Windows to run containers.

* A quick note for anyone who hasn’t used Windows Server Core before: Open Task Manager and use File–Run New Task to get new CMD windows.

At this point, you should also create a new directory structure in your VM: C:\mssql\install and C:\mssql\install\registrykeys, which are used in the steps below.


Creating an Azure VM


Step 2: Configure Azure Security Rules

If you’re using Azure, you need to define the Inbound Security Rule for this port. To get there in Azure:
From the VM’s main blade click: All Settings — Network Interfaces — [Interface Name] — Network Security Group — All Settings — Inbound Security Rules.

The default rule to allow RDP traffic will be there. Create another rule to allow SQL Server traffic. For reasons I don’t understand, setting the port to 1433 here doesn’t work. You need to open it up, and hope your firewall is up to date.

Creating an Inbound Security Rule


Step 3: Configure Windows Firewall

Run the following in Powershell on your host to open the right ports. I’ll be using the default port 1433:

if (!(Get-NetFirewallRule | where {$_.Name -eq "SQL Server 1433"})) {
    New-NetFirewallRule -Name "SQL Server 1433" -DisplayName "SQL Server 1433" -Protocol tcp -LocalPort 1433 -Action Allow -Enabled True
}


Step 4: Enable .Net 3.5 Framework

This is a hassle. The base Windows image that Microsoft provides does not have the .Net Framework 3.5 enabled, so you need to enable it in the container, which should be easy enough, and we’ll get to that. Unfortunately, for reasons that I do not understand, when attempting to install .Net 3.5 in the container, it doesn’t use Windows Update and fails. If you have a Windows .iso file (which I don’t), you can theoretically point the command below at it from within the container, and it should work.

The “fix” is to enable .Net 3.5 on the host, export the registry keys, and then import them into the Container’s registry. This tricks the SQL Server installer into thinking you have it enabled. Does SQL Server 2016 need anything in the .Net 3.5 SP1 Framework? Probably!

Seriously, I spent hours banging my head against this thing, and if you can figure out how to get out to Windows Update from your container, please let me know.

Enable and upgrade the .Net 3.5 Framework on the host server by running the following commands within Powershell. You don’t need to do this if you have a Windows .iso file because we’ll be installing it in the container later.

get-windowsfeature -name NET-Framework-Features | install-windowsfeature
get-windowsfeature -name NET-Framework-Core | install-windowsfeature


Using regedit, export the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5 registry keys and save them as C:\mssql\install\registrykeys\registry.reg
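If you’d rather avoid the regedit UI on Server Core, the same export can be done from the command line (a sketch):

reg export "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5" C:\mssql\install\registrykeys\registry.reg /y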


Step 5: Download and Extract SQL Server Installation Files

Run the following commands in Powershell on your host to download the SQL Server installers. Change URLs as needed…

wget -uri '' -outfile 'SQLServer2016-x64-ENU.exe'
wget -uri '' -outfile 'SQLServer2016-x64-ENU.box'

Run the executable and save the extracted files at C:\mssql\install. Once extracted, you should delete or move the .exe and .box files as well.

Step 6: Create SQL Server Configuration File

As mentioned earlier, containers don’t allow any GUI interfaces, so SQL Server has to be installed silently. In addition, not everything in SQL Server is supported on Windows Server Core.

I used this configuration file. If you use the same one, make sure you change the password (search for CHANGEME).

Please put your configuration file at C:\mssql\install\configurationfile.ini.
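If you don’t have a configuration file handy, the sketch below shows the general shape of a silent install configuration. It is illustrative only; the feature list and instance options are assumptions, and the SA password must be changed:

; configurationfile.ini - minimal sketch, not the author's exact file
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE
INSTANCENAME="MSSQLSERVER"
SQLSYSADMINACCOUNTS="BUILTIN\Administrators"
SECURITYMODE="SQL"
SAPWD="CHANGEME"
TCPENABLED="1"
QUIET="True"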


Step 7: Create your dockerfile

Docker uses the dockerfile as a configuration file and to ensure images are built exactly the same every time.

Take the below code and save it in a text file as c:\mssql\dockerfile

The lack of extension is on purpose. This isn’t a new folder. If Windows insists on saving the file with a .txt extension, which happened to me a couple of times, use the Powershell rename-file command and remove the extension.

#Define the base image we'll be building everything else off of...
FROM windowsservercore

#Give it a label
LABEL Description="SQL Server" Vendor="Microsoft" Version="13.00.500.53"

#These files and folders will be imported into the docker image and be available for us to use.
ADD install/SQLServer2016-x64-ENU /mssql/install
ADD install/configurationfile.ini /mssql/install/configurationfile.ini
ADD install/registrykeys/registry.reg /mssql/registrykeys/registry.reg


Step 8: Build Docker Image

At this point, everything you need to install SQL Server should be staged somewhere underneath the c:\mssql directory and you should have a reference to each file or the folder in your dockerfile.

To build the docker image, run this:

docker build -t mssql2016 c:\mssql

This command tells docker to build an image with a name of mssql2016, and that the dockerfile is located in the c:\mssql folder.

While it’s running, open up Task Manager and watch how much CPU & RAM it uses. Also, get some coffee and check your email. You’ve got time.


Step 9: Verify

After the build completes, run the below command to see all of the images you have available. It should be a magical rainbow of three choices.

docker images


Step 10: Run your image

This command will run your image

docker run -it --name mssqlContainer -p 1433:1433 mssql2016 cmd

Let’s walk through each of these parameters:

  • it | Runs an interactive pseudo-terminal.
  • name | The name of your running container.
  • p 1433:1433 | This binds port 1433 on the host to port 1433 on the container process.
    • In other words, any traffic coming into the host on port 1433 will be forwarded to the container’s port 1433.
  • mssql2016 | The name of the image you want to run.
  • cmd | The utility that you’ll be running.


Step 11: Enable .Net 3.5 Framework

As mentioned back in Step 4, this is a hassle.

We need to import the registry keys into the Container’s registry to trick the SQL Server installer into thinking we have .Net 3.5 SP1 installed. IN ADDITION, we need to enable as much as possible of the actual .Net 3.5 framework so it’s at least sort of usable. So, run the following commands to enable .Net 3.5 and import the registry keys.

DISM /online /enable-feature /featurename:NetFx3ServerFeatures
reg import C:\mssql\registrykeys\registry.reg


Step 12: Install SQL Server in a Windows Container

Navigate to C:\mssql\install and run the below command to install SQL Server using the values setup in your configuration file.

setup /IAcceptSQLServerLicenseTerms /ConfigurationFile=configurationfile.ini


Step 13: Fix the installation

At this point, the SQL Server instance should be up and running. Unfortunately, there’s a good chance the next time you start the container that the instance will not come up.

Here’s a blog post talking all about what happens. It appears to be due to how the container shuts down the underlying processes (or doesn’t).

The quick fix is to go against every best practice document and run SQL Server under LocalSystem.

sc config MSSQLSERVER obj= LocalSystem


Step 14: Connect to SQL Server

As a test of the instance, you can use OSQL from the command line to verify it’s up and running.
C:\Program Files\Microsoft SQL Server\130\Tools\Binn>osql -E

From your local machine, connect to the SQL Server instance. You should use your host server’s IP address (the Public IP address in Azure).
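With sqlcmd on your local machine, that connection would look something like this (a sketch; the IP is a placeholder and the sa password is whatever you set in the configuration file):

sqlcmd -S <host-public-ip>,1433 -U sa -P <your-sa-password> -Q "SELECT @@SERVERNAME, @@VERSION"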

Congratulations! (but you’re not done yet)


Step 15: Save your work

Boy, it sure would be a shame if something happened to that nice, new SQL Server installation you’ve got.

Once you’re done playing with the instance, head back to the command window with access to your container. I recommend attempting to cleanly stop the container. From another command window on the host, run:

docker ps
## The output from docker ps will give you a ContainerID. Use it in the stop command.
docker stop [ContainerID]


Alternatively, just type exit as many times as necessary to get back to the host’s command line, and the container will shut down very poorly.

Type this to commit the new image to your repository
docker commit [ContainerID] mssql2016:Installed

This will save your container locally and give it a new tag of Installed. This will also take some time.

To start it again, use:
docker run -it --name mssqlcontainer mssql2016:Installed cmd


Discover more about our expertise in SQL Server.

Categories: DBA Blogs

Trace Files -- 1 : Generating SQL Traces (own session)

Hemant K Chitale - Sun, 2015-09-20 08:18
Beginning a new series of posts on Trace Files.

An SQL Trace captures SQL statements (including recursive SQL calls -- e.g. data dictionary lookups or triggers being executed, which are not "directly" visible to the client or explicitly executed by the client).   Optionally, it can capture Wait Events and Binds.

Tracing for Wait Events allows us to capture the Wait Events that occur within the duration of an SQL call.  Tracing for Binds allows us to capture the Bind Values that were passed by the Client (where the SQL code uses Bind Variables instead of Literals)

Here are a few methods to get the SQL Trace for one's own session (whether interactively in sqlplus or programmatically through any other client)

EVENT 10046
This is not recommended by Oracle although it seems to be widely in use.  Use with extreme caution as specifying the wrong event number or level when issuing an ALTER SESSION command can result in unpredictable behaviour and possible corruption.  I include it here not as a recommendation but only because I anticipate that I will be asked about this.  I do NOT recommend using this method.  (Levels 16 and 32 have not been tested by me)

This method is used with either of these commands :

ALTER SESSION SET EVENTS '10046 trace name context forever, level 1'; -- for the same behaviour as SQL_TRACE=TRUE
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4'; -- for tracing Binds with the SQLs
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8'; -- for tracing Wait Events with the SQLs
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'; -- for tracing both Binds and Wait Events with the SQLs
ALTER SESSION SET EVENTS '10046 trace name context forever, level 16'; -- to dump STAT lines for each execution
ALTER SESSION SET EVENTS '10046 trace name context forever, level 32'; -- never dump execution statistics

After executing the desired SQLs to be traced, tracing is disabled with :

ALTER SESSION SET EVENTS '10046 trace name context off';
(I have updated the "context forever, level 0" to "context off"  which is the correct method.  This just shows that I haven't used this method of tracing for a long time !)

SQL_TRACE

The instance / session parameter SQL_TRACE is the oldest method and may still be in use although it has been deprecated in recent versions.

This is done with the SQL command :

ALTER SESSION SET SQL_TRACE = TRUE;

After executing the SQLs that need to be traced, it is then disabled with the SQL command :

ALTER SESSION SET SQL_TRACE = FALSE;

DBMS_SESSION.SET_SQL_TRACE

The PLSQL procedure DBMS_SESSION.SET_SQL_TRACE is the proper alternative to setting the parameter SQL_TRACE.

This is done with :

EXEC DBMS_SESSION.SET_SQL_TRACE(TRUE);

The EXEC call allows the PLSQL procedure to be executed from the command line.

After executing the SQLs that need to be traced, it is then disabled with the command :

EXEC DBMS_SESSION.SET_SQL_TRACE(FALSE);

DBMS_MONITOR.SESSION_TRACE_ENABLE

The PLSQL procedure DBMS_MONITOR.SESSION_TRACE_ENABLE is the preferred method. It also offers switches to enable/disable tracing for Wait Events and Binds separately.

This is done with :

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits=>TRUE, binds=>FALSE);

Thus, the switch for tracing Waits is active but tracing Binds is deactivated.  (Optionally, Binds could also be traced with the binds switch set to TRUE).

After executing the target SQLs in the session, it is then disabled with the command :

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

Thus, no flags need to be supplied to disable both Waits and Binds.


Categories: DBA Blogs

My YouTube Videos as introductions to Oracle SQL and DBA

Hemant K Chitale - Sat, 2015-09-19 09:37
From teaching an 11g OCA program, I’ve developed (and will continue to add to) a few videos for those who are just beginning with Oracle SQL and DBA and/or those intending to prepare for the SQL or 11g OCA exams.

These are being built at http:/// . There are two separate playlists, one for Oracle SQL and one for Oracle 11g DBA - I (i.e. corresponding to the OCA training and exams).

Happy Viewing.


Categories: DBA Blogs

ORA-01917: user or role 'PDB_DBA' does not exist

Pakistan's First Oracle Blog - Sat, 2015-09-19 00:42
I manually created a container database (CDB) in my Linux 6.4 based virtual machine. After creating it, I tried to create a pluggable database but got the following error:

ORA-01917: user or role 'PDB_DBA' does not exist

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
cdbtest          OPEN

1 row selected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select * from cdb_pdbs;

      DBID    CON_UID GUID                             STATUS    CREATION_SCN
---------- ---------- -------------------------------- --------- ------------
---------- --------- --- --- ----------
3972600152 3972600152 201432B4FDA10CEBE0530900000AFE92 NORMAL             213
 202375680 LOGGING   NO  NO           2

So I tried different options, but it always gave that error about the PDB_DBA role. Searching My Oracle Support (MOS) turned up a document.

That document states that if we create a CDB manually, we need to get the creation scripts from DBCA because we might have omitted some options; a CDB and its PDBs need to be installed with all the options. I created the CDB manually in the first place because I had low disk space in my virtual machine, and I wasn’t intending to use all the options.

So if you get that error, the solution is to create the database through DBCA.
Categories: DBA Blogs

Log Buffer #441: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-09-18 13:38

This Log Buffer Edition dives deep into the ocean of blogosphere and surfaces with some cool blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Lets Talk DB Perth
  • The Fundamental Challenge of Computer System Performance
  • Index Advanced Compression: Multi-Column Index
  • Middleware Diagnostics Advisor (MDA) Setup
  • Most people are now aware that in 12c, a VARCHAR2 has been extended from a maximum size of 4000 to 32767 bytes.

SQL Server:

  • Creating SQL databases on Azure Blob storage
  • Monitoring Availability Group Replica Synchronization
  • PowerShell Tool Time: Controlling Our Tools
  • Automated Patching of SQL Server IaaS VMs
  • SQLCMD Mode; Run all SQL files in a directory


MySQL:

  • Clarification on “Call me Maybe: MariaDB Galera Cluster”
  • MySQL Aurora CPU spikes
  • Oracle HA, DR, data warehouse loading, and license reduction through edge apps
  • Protecting MySQL Passwords With the sha256_password Plugin
  • Ever get called out for a MySQL issue only to realize that there was no issue?  It was a false alarm from the monitor.
Categories: DBA Blogs

My Sales Journey: #3

Pythian Group - Fri, 2015-09-18 13:26


This week started off with the last onboarding session, with the CEO himself. It was my favorite for many reasons. It was passionate, the way it can be when you build something from nothing with blood, sweat and tears, and it resonated with me, having been down that path myself. You hear it in the tone of their voice and see the pride in their eyes. That BORG (boarding of recruited gurus!) is the inspiration for today’s post. Here are my takeaways from Week 3:

To be effective in a sales role you need to speak like a CEO and learn your product inside out; when people listen to you, they must hear how amazing your product/service is without you ever saying how amazing it is. Your passion will shine through your demonstrated knowledge.

Outreach needs organization, as it demands many different tasks to be done at once. It is a daunting task, but I am sure it will get better as I learn how to get the best out of my day. I am lucky to have a setup that allows me to focus: two big screens, headsets, phones, and most of all a manager who is there to assist and is actively involved in making sure I succeed.

Being flustered is normal! You know that moment when you are making calls and leaving voice messages and suddenly you get a live one? You mumble your way through it and think, wow, I could have been so much better. I had one of those moments this week, and it made me go out, do research, and find better questions to ask. I chalk it up to my learning curve and move on.

As a rookie, your team is your support structure. Again, I am lucky to be surrounded by people who collaborate, take the time to help out, craft email templates, and show you how it’s done on the phone. Without all that, this fish jumping into unknown waters would certainly drown.

It’s been a good week! Share this with your rookies, or if you are a rookie, tell me how your week was. Looking forward to hearing from you.


Categories: DBA Blogs

Links for 2015-09-17 []

Categories: DBA Blogs

Partner Webcast – Developers Continuous Delivery Using Oracle PaaS Cloud Services

Cloud computing is now broadly accepted as an economical way to share a pool of configurable computing resources. Several companies plan to move key parts of their development workloads to public...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Index Advanced Compression: Multi-Column Index Part I (There There)

Richard Foote - Thu, 2015-09-17 00:57
I’ve discussed Index Advanced Compression here a number of times previously. It’s the really cool additional capability introduced to the Advanced Compression Option with 12.1.0.2, that not only makes compressing indexes a much easier exercise but also enables indexes to be compressed more effectively than previously possible. Thought I might look at a multi-column index to highlight just […]
Categories: DBA Blogs

Part 3: Comparing Oracle Cloud Database Backups Options

Pythian Group - Wed, 2015-09-16 10:43
Comparing Oracle Database Backup Service (ODBS) and Oracle Secure Backups (OSB) to Amazon Web Services (AWS)

This is part 3 of a 3 part series on “Getting Started with Oracle Cloud Backups”.

  • Part 1 covers setting up RMAN to backup directly to the new Oracle Cloud Database Backup Service (ODBS).
  • Part 2 covers setting up RMAN to backup directly to the cloud using Amazon Web Services (AWS) Simple Storage Service (S3).
  • Part 3 compares and contrasts the two services.



Oracle recently announced their Oracle Database Backup Service (ODBS) as part of their big push to the Cloud. However, while the name is new, the technology really isn’t. It’s effectively just a re-brand of their Oracle Secure Backup Cloud Module, which was introduced years ago, initially with the ability to back up to the Amazon Simple Storage Service (S3). The functional and non-functional differences are minor but are summarized in this article.


Use Case

Both services probably appeal mostly to small or medium-sized businesses looking for off-site backups for whatever reason (such as DR or regulatory requirements).

Keep in mind that a service like this probably isn’t a full replacement for your onsite primary backup storage device. But it very well could replace your old-style off site backup or tape vaulting vendor, which usually involves a physical pickup of backup tapes and transportation to a storage location on a daily, weekly, or in some cases monthly basis.

And while the restore times are certainly going to be considerably slower than restoring from on-premise disk based devices, it’s undoubtedly faster than bringing back tapes from an offsite storage location through a vendor service (time of which is usually measured in days with ad-hoc recall requests often being at an additional expense).

The specifics of how to technically get started with implementing either service is discussed in the previous articles of this series.


Decision Criteria Checklist

Many factors come into consideration when deciding on whether to allow a business critical database to travel off-site and when selecting the appropriate vendor or service to do so. The following generic checklist is simply a guide of suggested criteria that one may consider:

  • Storage costs (metered or flat rate; progressive or flat incremental rates)?
  • Ease of installation (outages required or not)?
  • Effects on internal processes (i.e. does the module need to be reinstalled when home cloning; changes to RDBMS software installation processes)?
  • Ease of use?
  • Changes required to existing scripts/processes (i.e. complicated changes to RMAN commands or scripts; changes or integration with 3rd party backup tools required)?
  • Is backed-up data secured at rest (at vendor data center)?
  • Is backed-up data secured in flight (transfer to or from vendor data center through the public internet)?
  • Upload and download performance (is there an effect on Recovery Time Objectives)?
  • Is transferring the additional data going to affect the organization’s internet performance or costs from their ISP?
  • Additional licensing costs?


Pros and Cons

The answers to some of the above criteria questions are going to be site and/or database specific. Others have been discussed in more detail in the other articles in this series.

However, the pros and cons of each service can be summarized as follows:

OSDB Pros:

  • No upfront costs (no additional licenses required)
  • Reasonable and competitive metered rates
  • Security at-rest and in-flight through mandatory encryption and HTTPS transfer
  • Advanced RMAN compression option included (without requiring the Advanced Compression Option)
  • Data is triple mirrored in the Oracle Data Center

OSDB Cons:

  • No security through keys/credentials – instead must use “users” correlated to actual named users and email addresses
  • Navigating between services, accounts, and domains not as simple as it should be
  • WebUI doesn’t show details beyond overall “space used” (i.e. doesn’t show files or per database space usage)
  • Can’t specify the Oracle data center used, just the key geographical region (i.e. North America)
  • No ability to replicate data between Oracle data centers


OSB & AWS Pros:

  • Ability to create unique IDs and keys for each DB being backed up (credentials independent of named users)
  • Billing calculator for easy cost estimation
  • Additional options with S3 such as more specific data center selection and cross-region replication

OSB & AWS Cons:

  • Requires licenses for the “Oracle Secure Backup Cloud Module”, which is licensed on a per-RMAN-channel basis
  • By default data is neither secure at-rest nor in-flight (though both can be enabled)

It should be noted that while the Oracle Secure Backup Cloud Module is licensed on a per “stream” or per RMAN channel basis, those RMAN channels are not dedicated to one database. Rather, they are concurrently in-use channels. So if you had licenses for 10 “streams” (channels), those could be used concurrently by 10 different databases each using only one RMAN channel, or by one database using 10 RMAN channels, or any combination thereof.

And while both provide use of backup encryption and advanced compression options as part of “special-use licensing”, it should be noted that these options are available only for the cloud based (or in the case of OSB, SBT based) backups. Regular disk based backups of the same database(s) would still require the Advanced Security Option for RMAN backup encryption and the Advanced Compression Option for anything other than “BASIC” RMAN backup compression.

The AWS solution also provides (by default) the option of not securing the data at rest or in flight. Not encrypting RMAN backups is beneficial when trying to take advantage of storage-based deduplication, which is not relevant here. Hence I struggle to think of a strong business use case for not wanting to encrypt backups, all else being equal. Similarly, why would one choose HTTP over HTTPS for critical business data?



One possible requirement may be the need to use both services concurrently for test/evaluation purposes. Fortunately, since the module (library, configuration, and wallet) files are all uniquely named, it’s absolutely possible to use both services concurrently, even from within the same RMAN session. For example:

RMAN> run {
2> allocate channel odbs type sbt
3> PARMS='SBT_LIBRARY=/u01/app/oracle/product/12.1.0/dbhome_2/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opcCDB121.ora)';
4> backup tablespace users;
5> }

allocated channel: odbs
channel odbs: SID=270 device type=SBT_TAPE
channel odbs: Oracle Database Backup Service Library VER=

Starting backup at 10-SEP-15
channel odbs: starting full datafile backup set
channel odbs: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel odbs: starting piece 1 at 10-SEP-15
channel odbs: finished piece 1 at 10-SEP-15
piece handle=2tqgq3t5_1_1 tag=TAG20150910T114021 comment=API Version 2.0,MMS Version
channel odbs: backup set complete, elapsed time: 00:00:15
Finished backup at 10-SEP-15
released channel: odbs

RMAN> run {
2> allocate channel aws_s3 type sbt
3> PARMS='SBT_LIBRARY=/u01/app/oracle/product/12.1.0/dbhome_2/lib/libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
4> backup tablespace users;
5> }

allocated channel: aws_s3
channel aws_s3: SID=270 device type=SBT_TAPE
channel aws_s3: Oracle Secure Backup Web Services Library VER=

Starting backup at 10-SEP-15
channel aws_s3: starting full datafile backup set
channel aws_s3: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel aws_s3: starting piece 1 at 10-SEP-15
channel aws_s3: finished piece 1 at 10-SEP-15
piece handle=2uqgq3un_1_1 tag=TAG20150910T114111 comment=API Version 2.0,MMS Version
channel aws_s3: backup set complete, elapsed time: 00:00:15
Finished backup at 10-SEP-15
released channel: aws_s3



However, if the wrong SBT library is being used by certain RMAN commands trying to access the backup pieces, the following RMAN error will be returned:

RMAN-06207: WARNING: 2 objects could not be deleted for SBT_TAPE channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece    2qqgq3ik_1_1
RMAN-06214: Backup Piece    c-3847224663-20150910-01


The issue with the above example DELETE BACKUP command is resolved by simply allocating a channel using the proper library for the proper vendor.
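For example, a sketch of cleaning up the ODBS backup pieces with a correctly allocated channel, reusing the channel parameters from above:

run {
  allocate channel odbs type sbt
    PARMS='SBT_LIBRARY=/u01/app/oracle/product/12.1.0/dbhome_2/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opcCDB121.ora)';
  crosscheck backup;
  delete noprompt expired backup;
}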

Running similar backups as the above commands but on an entire CDB (using the RMAN command “BACKUP AS COMPRESSED BACKUPSET DATABASE;”) rather than just one tablespace, in order to have a larger amount of data to process, shows some interesting results:

SQL> SELECT status, input_type,
  2         backed_by_osb, compression_ratio,
  3         input_bytes_display, output_bytes_display,
  4         input_bytes_per_sec_display, output_bytes_per_sec_display,
  5         time_taken_display
  6    FROM v$rman_backup_job_details
  7   ORDER BY start_time;

STATUS     INPUT_TYPE    BAC COMPRESSION_RATIO INPUT_BYTES_DI OUTPUT_BYTES_D INPUT_BYTES_PE OUTPUT_BYTES_P TIME_TAKEN
---------- ------------- --- ----------------- -------------- -------------- -------------- -------------- ----------
COMPLETED  DB FULL       YES        6.60411404     4.47G        692.75M          1.59M        245.88K      00:48:05
COMPLETED  DB FULL       YES        6.60794224     4.47G        692.50M          1.96M        303.43K      00:38:57


The interesting points are:

  1. The backups to OSDB (the first line of output above) consistently took longer on my system than the backups to AWS (the second line). I would need longer term testing to see whether this is an anomaly or a pattern. Backup performance time would need to be evaluated in detail when selecting a service.
  2. Both are recorded in the catalog as using OSB (the “backed_by_osb” column). This is no surprise, as the OSDB module is simply a copy of the OSBWS module, as is apparent from the API version numbers.



From an RMAN functional perspective the two are almost identical, as would be expected since the ODBS module and library are essentially just a clone of the OSB module and library, with a re-branding and slight modification simply to differentiate it and to conveniently provide Oracle an exemption from their own licensing requirements. Which is not uncommon from Oracle – after all, they do like to provide exceptions to themselves, allowing them to promote their products and services over their competition.

From a data management perspective the Amazon S3 is far ahead with additional features, such as regional replication and more granular storage location options. Something Oracle very well may catch up on but at time of writing does not yet provide.

Hence, I think the choice really comes down to priorities: additional storage/data management options vs. additional licensing costs. For many smaller customers price is a key concern, and therefore the Oracle solution is likely preferable, as it is essentially the same as the OSB & AWS solution but without the license requirement.


Discover more about our expertise in Oracle and Cloud.

Categories: DBA Blogs

Part 2: Oracle Cloud Backups to AWS S3

Pythian Group - Wed, 2015-09-16 07:08
Getting Started with Oracle Cloud Backups – Amazon Web Services (AWS)

This is part 2 of a 3 part series on “Getting Started with Oracle Cloud Backups”.

  • Part 1 covers setting up RMAN to backup directly to the new Oracle Cloud Database Backup Service (ODBS).
  • Part 2 covers setting up RMAN to backup directly to the cloud using Amazon Web Services (AWS) Simple Storage Service (S3).
  • Part 3 compares and contrasts the two services.

As mentioned in part 1 of this series, it’s actually really easy to get started with writing Oracle backups directly to the cloud. So regardless of the motivation, it’s certainly possible to go from zero to fully backed-up to a cloud based service in just a few hours, hence meeting any requirements to implement off-site backups extremely quickly. Of course, there are several dependencies including the database size, internet upload speed (and cost), and change control formalities.

Recovery speed is most certainly going to be slow compared to recovering from local on-premise storage, but this service probably isn’t meant to replace your on-premise backup strategy, but rather complement it or possibly replace your off-site tape-vaulting vendor service (at least for Oracle database backups). Regardless, recovery time and RTO must of course also be taken into consideration.

Also, while the Amazon Web Services metered usage costs are extremely competitive, directly integrating with RMAN does require the Oracle Secure Backup Cloud Module, which is a licensed option.

However, Amazon does also provide some additional unique features with their S3 storage: namely object versioning, object life-cycle rules, the ability to pick the storage region with more granularity (i.e. a specific region within the United States) and “Cross-Region Replication”.

This article provides a quick start for getting up and running with Oracle RMAN backups to AWS S3 using the Oracle Secure Backup (OSB) Cloud Module for users new to the service and cloud backups.


Backing up to Amazon Web Services (AWS)

Backing up to the AWS Simple Storage Service or S3 isn’t new. I first experimented with this in June 2011 and hadn’t really touched it since. Yet, fundamentally it hasn’t really changed at all since that time.

From a very high level the process is:

  1. Create an AWS S3 account and setup the necessary credentials.
  2. Install an AWS specific Oracle Secure Backup library into your Oracle Home.
  3. Run an RMAN backup using the SBT_TAPE device type.

Really that’s it. And the first two steps are pretty quick and easy. The 3rd is the one that will vary depending on the size or your database and your upload bandwidth.


Creating an AWS S3 Account

Creating the necessary AWS S3 account is almost trivially easy. If you or your organization does not already have an “Amazon Web Services account”, start by signing up for the Free Trial. The free trial gives you 5GB of storage (along with about 20 other services) for a full year.

For almost all organizations the 5GB of storage probably won't be sufficient. However, Amazon does provide a simple-to-use billing calculator to estimate your monthly costs based on usage.
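As a rough illustration only (S3 prices vary by region and change over time, so check the calculator for current figures): at roughly $0.03 per GB-month for standard storage at the time of writing, retaining 500GB of backup pieces works out to about 500 x 0.03 = $15 per month, before any request and data transfer charges.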

Amazon also provides other benefits, such as being able to choose what data centers or geographic regions your data is stored in. Further details can be found in their FAQ.

After filling in the necessary information and creating an account (if your organization does not already have one), the next step is to set up a security credential. In the top right corner of your S3 console navigate to “Security Credentials”:



From the “Users” option select “Create New Users”:



Enter a user name and check the box to generate an access key; the access key (not the actual username) is what RMAN will use:



Once added, remember to display (and perhaps temporarily record, or use the “Download Credentials” button in the bottom right) the “Access Key ID” and “Secret Access Key”, as you'll need this information during the setup. Don't worry: if you forget or lose this information, it's very easy to generate new security credentials for this user, or to delete credentials as I have for the one shown in the screenshots. (It's conceptually similar to the “single use password” you can generate with a Google Account when 2-factor authentication is enabled.)



Before the new credential can be used, it must be given permissions through a “Policy”. Just like within the Oracle database, the required permission (or “policy”) can be granted directly or through a role, allowing for flexible management.

From “Users” click on the newly created username and then the “Attach Policy” button:



For RMAN to write to the S3 storage, the “AmazonS3FullAccess” policy is required. After selecting it, choose the “Attach Policy” button in the bottom right.
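As an aside, the same user, access key, and policy attachment can also be scripted with the AWS command line interface, assuming it is installed and configured with an administrative credential. A minimal sketch, using a hypothetical user name of “rman-backup”:

$ aws iam create-user --user-name rman-backup
$ aws iam create-access-key --user-name rman-backup
$ aws iam attach-user-policy --user-name rman-backup \
>      --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

The create-access-key call returns the “Access Key ID” and “Secret Access Key” in its output; record them just as with the console approach.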

At this point, we’re ready to start configuring the database and using the service.


Installing the “Oracle Secure Backup Cloud Module for Amazon S3”

Installing the necessary “Oracle Secure Backup Cloud Module for Amazon S3” into your Oracle home is very easy.

From OTN, download the installer (a Java JAR file delivered as a zip) and copy and extract it on your database server. When run, the installer determines the proper database version and OS platform, and downloads the appropriate library file to your Oracle home or another specified directory.

Installation requires a few mandatory arguments, namely:

  • The AWS credentials created previously
  • Your OTN username (but not your Oracle account password)
  • The location for the library file it will download (usually $ORACLE_HOME/lib)
  • The location for the secure wallet file which stores the AWS credentials

There are a number of other optional arguments (specified in the README or by running the JAR file without arguments) such as proxy server details if necessary.

Example installation:

$ java -jar osbws_install.jar \
>    -AWSID AKI***************QA \
>    -AWSKey no/MD*******************************upxK \
>    -otnUser \
>    -walletDir $ORACLE_HOME/dbs/osbws_wallet \
>    -libDir $ORACLE_HOME/lib
Oracle Secure Backup Web Service Install Tool, build 2015-06-22
AWS credentials are valid.
S3 user already registered.
Registration ID: 17d*****-0***-4***-8***-41e******ccc
S3 Logging Bucket: oracle-log-pane-1
Validating log bucket location ...
Validating license file ...
Oracle Secure Backup Web Service wallet created in directory /u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbws_wallet.
Oracle Secure Backup Web Service initialization file /u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora created.
Downloading Oracle Secure Backup Web Service Software Library from file
Downloaded 27151475 bytes in 40 seconds. Transfer rate was 678786 bytes/second.
Download complete.


This determines the appropriate Oracle Secure Backup library file for your OS platform and database version and downloads it to the specified location (recommended $ORACLE_HOME/lib) and creates a config file and the wallet file:

$ ls -ltr $ORACLE_HOME/lib | tail -1
-rw-r--r--. 1 oracle oinstall  86629108 Sep  3 09:05

$ ls -ltr $ORACLE_HOME/dbs | tail -1
-rw-r--r--. 1 oracle oinstall      204 Sep  3 09:04 osbwsCDB121.ora

$ cat $ORACLE_HOME/dbs/osbwsCDB121.ora
OSB_WS_HOST=http://s3.amazonaws.com
OSB_WS_WALLET='location=file:/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbws_wallet CREDENTIAL_ALIAS=pane_aws'

$ ls -l $ORACLE_HOME/dbs/osbws_wallet
total 12
-rw-r--r--. 1 oracle oinstall 10228 Sep  3 09:04 cwallet.sso


At this point we’re ready to backup directly to the AWS S3 cloud.


Using with RMAN

Sending the RMAN backup pieces to the AWS S3 storage is as simple as backing up via the normal RMAN commands, but to the SBT_TAPE device. Of course, the new OSB library file and the configuration file to use must be specified. For example, we can back up in a single run block without overriding any of our existing RMAN configuration:


RMAN> run {
2> allocate channel aws_s3 type sbt
3> parms=',SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
4> backup tablespace users;
5> }

allocated channel: aws_s3
channel aws_s3: SID=21 device type=SBT_TAPE
channel aws_s3: Oracle Secure Backup Web Services Library VER=

Starting backup at 03-SEP-15
channel aws_s3: starting full datafile backup set
channel aws_s3: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel aws_s3: starting piece 1 at 03-SEP-15
channel aws_s3: finished piece 1 at 03-SEP-15
piece handle=0aqg7f8h_1_1 tag=TAG20150903T095737 comment=API Version 2.0,MMS Version
channel aws_s3: backup set complete, elapsed time: 00:00:15
Finished backup at 03-SEP-15

Starting Control File and SPFILE Autobackup at 03-SEP-15
piece handle=c-3847224663-20150903-01 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 03-SEP-15
released channel: aws_s3



And to verify:

RMAN> list backup of tablespace users;

List of Backup Sets

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6       Full    1.34M      DISK        00:00:00     03-SEP-15
        BP Key: 6   Status: AVAILABLE  Compressed: NO  Tag: TAG20150903T094342
        Piece Name: /u01/app/oracle/product/12.1.0/dbhome_2/dbs/06qg7eee_1_1
  List of Datafiles in backup set 6
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  6       Full 1366827    03-SEP-15 /u01/app/oracle/oradata/CDB121/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
8       Full    1.50M      SBT_TAPE    00:00:02     03-SEP-15
        BP Key: 8   Status: AVAILABLE  Compressed: NO  Tag: TAG20150903T095453
        Handle: 08qg7f3d_1_1   Media: oracle-data-pane-1
  List of Datafiles in backup set 8
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  6       Full 1367173    03-SEP-15 /u01/app/oracle/oradata/CDB121/users01.dbf



Notice that the first backup was a local one run earlier and shows a local backup piece (file). The second shows that the media was “oracle-data-pane-1”, which is the “bucket” or logical container automatically created within Amazon S3.

If we want to make the backup command automatically use the AWS S3 SBT_TAPE it’s trivial to do using the RMAN CONFIGURE command:

RMAN> configure channel device type sbt parms=',
2> SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';

using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  ',SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
new RMAN configuration parameters are successfully stored

RMAN> backup device type sbt tablespace users;

Starting backup at 03-SEP-15
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=16 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup Web Services Library VER=
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 03-SEP-15
channel ORA_SBT_TAPE_1: finished piece 1 at 03-SEP-15
piece handle=0eqg7ft3_1_1 tag=TAG20150903T100834 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 03-SEP-15

Starting Control File and SPFILE Autobackup at 03-SEP-15
piece handle=c-3847224663-20150903-03 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 03-SEP-15



And really that’s all there is to it. Of course you can perform more advanced RMAN commands such as allocating multiple channels, etc. And we’re free to perform any combination of local backups and/or cloud backups to the SBT_TAPE device type:

RMAN> list backup summary;

List of Backups
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1       B  F  A SBT_TAPE    01-SEP-15       1       1       NO         TAG20150901T123222
2       B  F  A SBT_TAPE    01-SEP-15       1       1       NO         TAG20150901T123222
3       B  F  A SBT_TAPE    01-SEP-15       1       1       NO         TAG20150901T123222
4       B  F  A SBT_TAPE    01-SEP-15       1       1       NO         TAG20150901T123222
5       B  F  A SBT_TAPE    01-SEP-15       1       1       NO         TAG20150901T150814
6       B  F  A DISK        03-SEP-15       1       1       NO         TAG20150903T094342
7       B  F  A DISK        03-SEP-15       1       1       NO         TAG20150903T094343
8       B  F  A SBT_TAPE    03-SEP-15       1       1       NO         TAG20150903T095453
9       B  F  A SBT_TAPE    03-SEP-15       1       1       NO         TAG20150903T095737
10      B  F  A SBT_TAPE    03-SEP-15       1       1       NO         TAG20150903T095752
11      B  F  A DISK        03-SEP-15       1       1       NO         TAG20150903T100555
12      B  F  A DISK        03-SEP-15       1       1       NO         TAG20150903T100557
13      B  F  A SBT_TAPE    03-SEP-15       1       1       NO         TAG20150903T100834
14      B  F  A SBT_TAPE    03-SEP-15       1       1       NO         TAG20150903T100850


And obviously restoring works exactly the same way.
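For instance, here is a minimal sketch of increasing SBT parallelism and test-reading a backup from S3, assuming the CONFIGURE channel settings shown above are already in place (the parallelism value is arbitrary, and each concurrently allocated channel counts toward the OSB Cloud Module licensing discussed below):

RMAN> configure device type sbt parallelism 2;

RMAN> backup device type sbt tablespace users;

RMAN> restore tablespace users validate;

The RESTORE ... VALIDATE command reads the backup pieces back from S3 without writing any datafiles, making it a convenient way to test both recoverability and download speed.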

IMPORTANT: Note here that none of the backups are encrypted or compressed. Not having to encrypt backups is the first major functional difference from the Oracle Database Backup Service (ODBS), where encryption is mandatory.

Encryption and/or compression can still be used. Normally, backup encryption requires the Oracle Advanced Security Option; however, one exemption is when backing up using Oracle Secure Backup. Similarly, “basic” RMAN backup compression is included with the database without any additional licenses, while the additional compression options (such as “HIGH”, “MEDIUM”, or “LOW”) usually require the Oracle Advanced Compression option but are included under the “Special-use licensing” of the Oracle Secure Backup product.

Compressing prior to backup is generally highly recommended: while it consumes local CPU cycles, it minimizes both the transfer time over the internet and the S3 space used.
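As a minimal sketch of enabling both together (the password is obviously just a placeholder), password-based encryption and compression can be combined in two commands:

RMAN> set encryption on identified by 'MyBackupPwd' only;

RMAN> backup device type sbt as compressed backupset tablespace users;

This uses the included “basic” compression algorithm; choosing another level would be done with CONFIGURE COMPRESSION ALGORITHM 'MEDIUM'; subject to the licensing notes above.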


Advanced Configuration

Generally, the base parameters should be sufficient. However, if further customization is required there are a number of optional parameters that can be added to the configuration/initialization file (in this example “$ORACLE_HOME/dbs/osbwsCDB121.ora”). An easy way to look for the available parameters including hidden parameters is to search through the OSB Cloud Module library file. For example:

$ strings $ORACLE_HOME/lib/ |grep OSB_
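Parameters that turn up include OSB_WS_HOST (the service endpoint, used in the “Secure Transfer” section below) and OSB_WS_WALLET (seen in the generated configuration file above), along with undocumented ones prefixed with an underscore, such as the _OSB_WS_NO_SSL parameter discussed below.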


Secure Transfer

Similar to how backups using the OSB cloud module do not require encryption, they also transfer the data without SSL security by default.

This is apparent from a simple Linux netstat command while the backup is running:

$ netstat | grep http
tcp        0 299300 ORALINUX.localdom:46151 s3-1-w.amazonaws.c:http ESTABLISHED


However we can easily remedy this by changing the URL in the configuration/initialization file to use the “https” address:
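For example, one way to make the edit in place (a sketch; any text editor works just as well):

$ sed -i 's|^OSB_WS_HOST=http://|OSB_WS_HOST=https://|' $ORACLE_HOME/dbs/osbwsCDB121.ora

And to confirm: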

$ head -1 $ORACLE_HOME/dbs/osbwsCDB121.ora
OSB_WS_HOST=https://s3.amazonaws.com

After adjusting the file and starting a new RMAN session (critical), re-running the same backup command now shows a secure SSL (HTTPS) connection:

$ netstat | grep http
tcp        0 299300 ORALINUX.localdom:34048 s3-1-w.amazonaws.:https ESTABLISHED
tcp        0      0 ORALINUX.localdom:25700 s3-1-w.amazonaws.:https TIME_WAIT


One would think that the undocumented parameter “_OSB_WS_NO_SSL” (a boolean accepting only TRUE or FALSE) might also affect which connection is used, though experimentation showed no effect.


Viewing Usage

Viewing the stored files and data usage on AWS S3 is at first not as intuitive as one might think. From the AWS dashboard, selecting S3 shows the “buckets” (logical storage containers) and allows you to drill down into each to see the actual files stored:



However, bucket properties such as the number of files and total size are not displayed. To find this information you must navigate to the “Billing and Cost Management” account option:



From here we can see the usage and the associated costs and other relevant details of the account:




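As an aside, if the AWS command line interface happens to be installed (an assumption; it plays no part in the RMAN setup above), a bucket's object count and total size can also be summarized directly from the shell, avoiding the console navigation entirely:

$ aws s3 ls s3://oracle-data-pane-1 --recursive --summarize --human-readable
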
Just like with the Oracle Database Backup Service, getting going with the Oracle Secure Backup Cloud Module and Amazon Web Services is very simple. It really can be implemented quickly and easily, allowing RMAN backups to be written directly to off-site storage almost immediately.

However, unlike ODBS, the AWS OSB module is not secure by default. With Oracle's service, the data must be encrypted and the transfer is secured by default, hence Oracle can live up to the claim that your data is secure in flight and at rest. With the AWS OSB module, the reverse is true by default, though of course both HTTPS data transfer and backup encryption can be enabled if desired.

On a positive note, the ability to generate AWS ID and Key values (or “credentials”) specific to each backup configuration can be considered a security advantage.

Apart from the security differences, the two services are functionally almost identical, as would be expected. After all, the ODBS module appears to simply be a clone of the OSBWS module introduced about half a decade ago.

The one big caveat of the AWS OSB module is the licensing cost: the Oracle Secure Backup Cloud Module is licensed per RMAN channel, although channel-based licenses can be shared among multiple databases.

And overall the S3 storage service is more functional and mature than Oracle’s newer public cloud service. AWS includes some nice features with respect to geographic data placement and replication.

The bottom line is that the Oracle Secure Backup Cloud Module is a great technical solution for implementing off-site Oracle RMAN backups (likely to complement your on-premise backups), specifically by putting them onto AWS S3. Functionally it seems great; however, the additional licensing cost is likely the roadblock for most small to mid-sized organizations.


Discover more about our expertise in Oracle and Cloud.

Categories: DBA Blogs