Pythian Group

Love Your Data

SQL On The Edge #1 – 2016 Installer

Fri, 2015-09-11 14:35


Hello and welcome to my new video blog (vlog) series, SQL On The Edge! My goal with these videos is to provide short, concise demonstrations of the latest and greatest features in SQL Server, Azure SQL Database and other Microsoft data platform tools. I'm very interested in other related tech like Azure Data Factory, Power BI, etc., so expect the series to branch out in those directions as well!

For today I’m going to be showing the installation process for the latest SQL Server 2016 CTP release: 2.3. This release just came out September 2nd, 2015 and is the best way to play around with the upcoming full release of the product.

As a quick reminder, these are some of the main new features coming to SQL 2016:

  • Improvements for AlwaysOn Availability Groups
  • Improvements on capabilities of the In-Memory OLTP technology
  • Improvements on the Columnstore indexes technology
  • New support for processing JSON
  • New Query Store feature
  • Temporal tables
  • New Azure integrations and many more.

Yeah, it’s a pretty big list and even though 2014 was released not long ago, this doesn’t feel like 2014 R2. Rest assured I will be covering in detail all these new features on new video episodes as time goes by.

For now, let’s jump into the first video in the series where we run down the installation process for this new CTP 2.3 and point out some of the best practices and new features on this new installer. Be sure to grab your own copy of the CTP installer right here and try this out for yourself.


Discover more about our expertise with SQL Server.

Categories: DBA Blogs

Log Buffer #440: A Carnival of the Vanities for DBAs

Fri, 2015-09-11 14:14


This Log Buffer Edition carries on with the weekly culling of blog posts from Oracle, SQL Server and MySQL.


Oracle:

Most people are now aware that in 12c, a VARCHAR2 has been extended from a maximum size of 4000 to 32767 bytes.

FDMEE is offered as the full-featured successor to Oracle Hyperion Financial Data Quality Management ERP Integration Adapter for Oracle Applications (ERP Integrator) and Oracle Hyperion Financial Data Quality Management (FDM).

Cybersecurity is Hot! In fact, so is the weather here in California at this moment.

How do you monitor a ZFS Storage Appliance with Oracle Enterprise Manager 12c?

Using SYSBACKUP in 12c with a media manager layer

SQL Server:

Autogenerating SSIS file import packages using Biml

When to Quote in PowerShell

More on CXPACKET Waits: Skewed Parallelism

Streamline Log Shipping Failovers: DR Made Just For You…

Getting query text from SSMS into Excel with PowerShell


MySQL:

MariaDB 10.1.7 now available

Track and Optimize Server Connection Methods

Abstracting Binlog Servers and MySQL Master Promotion without Reconfiguring all Slaves

Testing MySQL partitioning with pt-online-schema-change

Third day with InnoDB transparent page compression

Categories: DBA Blogs

Measuring CPU Performance across servers

Fri, 2015-09-11 14:09

The y-cruncher utility is a handy little tool that will calculate digits of Pi using multiple threads, and will report back with detailed information on the duration of each step. This allows a direct 1:1 comparison of CPU performance across all servers. Y-Cruncher will perform benchmark testing, stress testing, and I/O analysis, as well as let you create a custom test for very large machines.

This is particularly nice for Virtual Machines, where you may not be sure the Hypervisor is configured correctly and need to test.

The output is easy to read, and gives information on every step performed. For most of us, the important pieces will be:

Start Date: Thu Sep 10 17:43:14 2015
End Date: Thu Sep 10 17:45:48 2015

Computation Time: 120.163 seconds
Total Time: 154.007 seconds

CPU Utilization: 98.577 %
Multi-core Efficiency: 98.577 %

I thought a nice test of this utility would be to compare a few different sizes of Microsoft Azure & Amazon AWS servers and see how they perform. For all of these tests, I am running Windows 2012 R2, calculating 1 million digits of pi, and using multi-threaded processing when I have more than 1 vCPU.

The table below has my (not surprising) results.

Provider  Type          CPU Info                         Num of vCPU  CPU Time (Sec)
--------  ------------  -------------------------------  -----------  --------------
Azure     Standard A1   AMD Opteron 4171 HE 2.10 GHz     1            14446.791
Azure     Standard D13  Intel Xeon E5-2660 v0 2.20 GHz   8            293.939
Azure     Standard G3   Intel Xeon E5-2698B v3 2.00 GHz  8            142.508
AWS       t2.small      Intel Xeon E5-2670 v2 2.50 GHz   1            3115.828
AWS       m4.2xlarge    Intel Xeon E5-2676 v2 2.40 GHz   8            205.36
AWS       c4.2xlarge    Intel Xeon E5-2666 v3 2.90 GHz   8            177.72
Categories: DBA Blogs

Amazon S3 to Glacier – Cloud ILM

Fri, 2015-09-11 13:57

Falling in love with Kate Upton is easy, but even better than that is to be swept off your feet by Information Lifecycle Management (ILM) in Amazon Web Services (AWS). But that's understandable, right? :) Simple, easily configurable, fast, reliable, cost-effective and proven are the words which describe it.

Pythian has been involved with ILM for a long time. Across various flavours of databases and systems, Pythian has been overseeing the creation, alteration, and flow of data until it becomes obsolete. That is why AWS's ILM resonates perfectly well with Pythian's expertise.

Amazon S3 is an object store for short term storage, whereas Amazon Glacier is their cloud archiving offering or storage for long term. Rules can be defined on the information to specify and automate its lifecycle.

The following screenshot shows the rules being configured on objects in an S3 bucket: transition to Glacier and then permanent deletion. An object is moved to Glacier 90 days after its creation, and then after 1 year it is permanently deleted. The graphical representation of the lifecycle shows how intuitive it is.
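The same rules can also be applied from the AWS CLI rather than the console. A minimal sketch, assuming a hypothetical bucket named my-ilm-bucket and a single rule covering the whole bucket:

$ cat lifecycle.json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}

# "my-ilm-bucket" is a placeholder bucket name
$ aws s3api put-bucket-lifecycle-configuration \
    --bucket my-ilm-bucket \
    --lifecycle-configuration file://lifecycle.json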




Discover more about our expertise in Cloud.

Categories: DBA Blogs

My Sales Journey: Episode 2

Fri, 2015-09-11 06:46

Goldfish jumping into the sea

It is Week 2 at Pythian and one thing that is becoming clear to me is that I am surrounded by people trying to create value for clients or be valuable themselves. The more we innovate and think about doing things in new, interesting ways, the more engaged we become in our work. Sales, with its endless outreach and prospecting, may seem tedious to many of us, so creating an atmosphere of engagement and team spirit, with lots of jokes thrown in, can make the day less daunting.

This week is definitely more intense than the last one. The scope of on-boarding has more than doubled. I have attended initiation sessions with the VPs of Infrastructure and Service Delivery, and I have shadowed every member of the sales team to learn their approach and hopefully take the best of it and create my own.

Salesforce is a "force" with a big learning curve if you have never used it before. Being computer and web savvy definitely goes a long way in figuring out its intricacies. I am lucky to have a sales research team to help build lists and provide support. Another perk of working at Pythian!

It's Friday! The week flew past me in a good way. My on-boarding check-list is complete, this post is almost done, the first few calls to get me started are complete and so are the first few emails! Nothing beats starting the first day of the week feeling accomplished.

If you are reading this I would love to hear from you about your experience just starting out in Sales or if you are seasoned and have some tricks to share. Looking forward to hearing from you!


Categories: DBA Blogs

Sales: Eliminating the Status Quo

Wed, 2015-09-09 13:49

The Challenging Sale

Working for Pythian as a sales person is challenging, exciting, rewarding, and challenging.

“But Ben you said challenging twice!”

It’s not an easy job (read: challenging) because we disrupt the status quo, bringing new value to our prospects and clients and teaching them about their industry, how to compete, and more (see: challenging per The Challenger Sale). It’s exciting and rewarding because of who we work with and the sorts of problems we solve. It’s not an easy job, but it is fulfilling.

Competing on Outcomes: People, Tools, Process

Companies need to play offence AND defence with their data in order to keep the lights on but also drive innovation which addresses consumers’ and companies’ needs. With Pythian’s A+ team of DBAs and engineers, we’re able to actually deliver the sort of incredible results our clients need in order to maximize their market potential and customer lifetime value. We’re selling real, valuable, measurable professional services outcomes and it’s wonderful to be able to look back on successful engagement after successful engagement and think, “We did that!” Our sales team needs to be part of that A+ metric as well, and Pythian gives us the tools to succeed, from Salesforce to LinkedIn Sales Navigator, from a generous training budget to an open door policy with management. We have processes which allow a repeatable and agile pre-sales engagement, and are empowered to succeed. The stage is set for success: hard work and following the processes will get you there.

In closing, the transformative power of technology will be leveraged by more and more companies and so our greenfield opportunity to help them compete and WIN on the KPIs of Velocity, Efficiency, Performance, Security, Availability, and Leverage grows commensurately. It’s a great place to Love Your Data, and a great place for people to thrive and make a real impact.

Categories: DBA Blogs

Part 1: Oracle Cloud Database Backup Service

Wed, 2015-09-09 13:09
Getting Started with Oracle Cloud Backups – Oracle Cloud Database Backup Service (ODBS)

This is part 1 of 3 in a series on “Getting Started with Oracle Cloud Backups”.

  • Part 1 covers setting up RMAN to backup directly to the new Oracle Cloud Database Backup Service (ODBS) (part of the Oracle Public Cloud or OPC).
  • Part 2 covers setting up RMAN to backup directly to the cloud using Amazon Web Services (AWS) Simple Storage Service (S3).
  • Part 3 compares and contrasts the two services.



If you're looking for easy "off-site" backups of your Oracle databases these days (likely to complement your onsite backup strategy), fortunately there's a wealth of options for you. Both Amazon Web Services (AWS) and the Oracle Cloud Database Backup Service (ODBS) make the process of backing up directly to the cloud fairly easy, meaning you can go from zero to backed-up to the cloud in just a matter of a couple of hours. And both use a dynamic and very affordable "pay-as-you-go" pricing model.

So if your boss comes to you one morning and says “we need an off-site backup ASAP!!!” due to whatever reason (regulatory, audit, internal policies, pending volcano explosion in your area, or Godzilla sighting) it’s actually quite realistic that you can tell him/her “DONE!” by lunch.

Of course there are several dependencies on that, specifically database size, internet upload speed, and change control formalities. And don’t forget restore times (downloading backups through the internet) and your RTO. But if your database is 10.2 or higher (any edition including Standard Edition), and your operating system is Linux, Solaris, AIX, HP-UX, or Windows, the process is quite simple.

This article provides a quick start for getting up and running with Oracle RMAN backups to the Oracle Database Backup Service for users new to the service and cloud backups.


Backing up to Oracle Public Database Backup Service

Conceptually, backing up to the Oracle Database Backup Service is very simple. Once you have an Oracle Cloud account created, you just download an RMAN cloud backup library module onto your database server and configure RMAN to use it. This article will dig into those steps in a little more detail.

Getting started with the Oracle Database Backup Service is fairly easy. But first it’s important to understand that they have two slightly different usage/billing models: “Metered” and “Non-Metered”.

With the Metered service, the storage and data transfer are metered and charged based on "GB/Month" at rates starting at $0.0264 per GB/Month for the storage component, with additional costs for outbound data transfer and request commands. With the non-metered service the price is simply $33 per TB/month with an unlimited number of transfers and requests.
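As a rough back-of-the-envelope comparison (ignoring the outbound transfer and request charges that only apply to the metered plan, and treating 1 TB as 1,024 GB):

1 TB on the metered plan:      1,024 GB x $0.0264 per GB/month = roughly $27/month (plus transfer and request charges)
1 TB on the non-metered plan:  $33/month flat, with unlimited transfers and requests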

Regardless of whether you think the progressively charged metered rate or the flat non-metered rate is more applicable, the overall cost is extremely low. With no up-front capital costs or licensing costs and a minimal monthly usage charge, the cloud based backup solution really does seem to offer value for the price, although it should be noted that this is largely due to the extremely competitive cloud based storage options available these days from competitors at both the enterprise and consumer levels.

Backed up data (data-at-rest) is always encrypted (more on that later) and the transfer (data-in-flight) is always secured via HTTPS.  And it’s protected via 3-way mirroring within the Oracle Data Centers.


Identity Domains

Oracle Cloud introduces a hierarchical logical structure. This starts with the "Oracle Cloud Account", which typically corresponds to the organization or company. Within the "Account" is one or more "Identity Domains". These logical constructs hold one or more unique Oracle cloud services. Users and authentication can be added to and span the identity domains, but with unique access to each service.

The structure isn't only logical, as the identity domains (and the associated services) are tied to an Oracle Data Center in a specific geographical location. Discover more information about this.


Creating an Oracle Database Backup Service Account

To get started, navigate to the Oracle Cloud site and choose the "Try It" button for the free 30-day trial, which provides 250GB of free storage during the trial period. Here you'll be prompted to choose which billing model you want to continue with after your free trial expires:


From there you'll be prompted for information such as your name and address and, of course, your existing "Oracle Account". Note that it is mandatory to first have an "Oracle Account", which anyone can create.

Interestingly, unlike most other cloud providers, Oracle also wants to verify your mobile phone number (so they can "verify your identity") as part of the sign-up process. Presumably this is so a sales associate can eventually follow up with you.

The final step is to create a new "Cloud Account" and specify the name for the account. Not to be confused with your "Oracle.com Account" that is used on OTN and My Oracle Support – this one differs:



Here's the interesting part: even though the sign-up page mentions a free 30-day trial (see the screen-shot earlier), when actually registering, the trial magically doubles to 60 days:



The online documentation also says 30 days, yet the service Dashboard (sometimes called the "WebUI" by Oracle) clearly shows 60 days.

It’s also interesting to note that after signing up specifically for the Database Backup Service, the trial automatically also includes most of the other Oracle Public Cloud offerings including the “Oracle Storage Cloud Service”, the “Oracle Compute Cloud Service”, the “Oracle Database Cloud Service”, the “Oracle Java Cloud Service”, and some others. All with the same 60-day trial period. Though it’s not obvious when signing up specifically for the “metered ODBS trial”, this is sort of mentioned in the online documentation:


“There are no specific trials for Oracle Database Backup Service. When you request a trial of Oracle Database Backup Service, you actually get a trial of Oracle Storage Cloud Service. Oracle Database Backup Service uses Oracle Storage Cloud Service containers to store cloud backups.”

Hence, if you're planning to try these different services using the free trials serially, you're out of luck. You get the one 60-day window for all services running in parallel. But you can start a new trial under a separate & new identity domain. After submitting, the order should take less than an hour to complete, after which you'll receive an email with further details.

This is where it gets a little confusing. You now have an "Oracle.com Account" used for OTN, MOS, etc., which is based on your email address and has a password, as well as an "Oracle Cloud Administrator Account". The latter is also based on that same email address but has a different password (a temporary password is in the email). Log on directly to your Oracle Cloud account, as logging in to Oracle.com and navigating over from there is confusing.


Creating Users

Creating individual users or credentials to use with the RMAN backup through secured wallet files is not at all obvious at first (unlike the competitive offering from Amazon Web Services, which is discussed in the next article in this series).

From the main dashboard at this point it's not simple to set up credentials to be used specifically and exclusively by the RMAN backup scripts. Instead, by pressing the "Security" button in the top right you can add additional administrators to your Oracle Cloud account, but each is tied to an Oracle.com account.

The trick is that you need to navigate into the dashboard specific for the identity domain! One way to do this is to find the Backup Service from the main dashboard, click on the menu option on the far right and choose “My Services”:



You’ll then be prompted with a new login window specific to the Identity Domain:



Specify the required Identity Domain – the one chosen (or the default one used) when signing up; this is also in your welcome email, which can be re-sent from the same menu. After specifying the Identity Domain, log in using your Oracle Cloud Account credentials, not your Oracle.com account credentials (even though the two emails are the same)!

The Identity Domain dashboard looks almost identical to the main Oracle Cloud dashboard, though one noticeable exception is that there’s now a “Users” button along the top:



From here we can add new users and assign them specific roles such as the ability to read, write, or administer database backups:


However, the disadvantage here is that these are actual "users" and not just keys and secure credentials such as Amazon Web Services Identity and Access Management (IAM) provides.


Installing the “Oracle Database Cloud Backup Module”

To start using the service with RMAN, you’ll first need to download and install the Oracle Database Cloud Backup Module into each Oracle Home. The process is easy enough: start by downloading the installer zip file and then copying the enclosed Java JAR file to your database server. This is a generic installer which will determine the database version and OS platform/release and will install the appropriate library module. The key dependency is Java which you will already have in the Oracle home.

Installation is simple and requires the following mandatory arguments:

  • The service name for the ODBS account (which is actually “Storage” even though ODBS doesn’t count against OPC Storage).
  • The identity domain for the ODBS account.
  • The user name for the ODBS account (not the user).
  • The password for the ODBS account (not the user’s password).
  • The location for the secure wallet file which stores the ODBS credentials.

Other options such as proxy server details can also be specified. Storing the library in the $ORACLE_HOME/lib directory (specified via the “libDir” argument) is recommended. Details of all options and arguments can be found in the README file or by running the Java JAR file without any arguments.

Example of installation:

$ java -jar opc_install.jar \
>    -serviceName Storage \
>    -identityDomain ************ \
>    -opcId '' \
>    -opcPass '*****' \
>    -walletDir $ORACLE_HOME/dbs/opc_wallet \
>    -libDir $ORACLE_HOME/lib
Oracle Database Cloud Backup Module Install Tool, build 2015-05-12
Oracle Database Cloud Backup Module credentials are valid.
Oracle Database Cloud Backup Module wallet created in directory /u01/app/oracle/product/12.1.0/dbhome_2/dbs/opc_wallet.
Oracle Database Cloud Backup Module initialization file /u01/app/oracle/product/12.1.0/dbhome_2/dbs/opcCDB121.ora created.
Downloading Oracle Database Cloud Backup Module Software Library from file
Downloaded 23169388 bytes in 34 seconds. Transfer rate was 681452 bytes/second.
Download complete.


After running, the appropriate library file is downloaded, the configuration file created, and a secure wallet file (with the credentials) created:

$ ls -ltr $ORACLE_HOME/lib | tail -1
-rw-r--r--. 1 oracle oinstall  72062283 Sep  4 11:35

$ ls -ltr $ORACLE_HOME/dbs | tail -1
-rw-r--r--. 1 oracle oinstall      183 Sep  4 11:35 opcCDB121.ora

$ cat $ORACLE_HOME/dbs/opcCDB121.ora
OPC_WALLET='LOCATION=file:/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opc_wallet CREDENTIAL_ALIAS=storage_opc'

$ ls -l $ORACLE_HOME/dbs/opc_wallet
-rw-r--r--. 1 oracle oinstall 10196 Sep  4 11:35 cwallet.sso

There are other optional customizations that can be added to the configuration file (discussed later), but at this point the initial configuration is complete and RMAN backups to the ODBS can be made.


Using with RMAN

Actually writing the backups to the ODBS is now as simple as configuring and backing up to the SBT_TAPE backup device.

For example, with all RMAN “CONFIGURE” parameters set to their default values, to backup using ODBS in a single RMAN run block without any permanent configuration changes we can try:

RMAN> run {
2> allocate channel odbs type sbt
3> PARMS=',SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/opcORA11G.ora)';
4> backup tablespace users;
5> }

allocated channel: odbs
channel odbs: SID=21 device type=SBT_TAPE
channel odbs: Oracle Database Backup Service Library VER=

Starting backup at 04-SEP-15
channel odbs: starting full datafile backup set
channel odbs: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel odbs: starting piece 1 at 04-SEP-15
released channel: odbs
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on odbs channel at 09/04/2015 13:11:08
ORA-27030: skgfwrt: sbtwrite2 returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-01602: backup piece 0kqgaeut_1_1 is not encrypted



This error is not unexpected and brings us to a unique feature of the ODBS – backup encryption is mandatory. RMAN backups to disk require the Advanced Security Option in order to use encryption, but RMAN backups using the "Oracle Secure Backup SBT interface" do not. Similarly, the "Oracle Database Cloud Backup Module" is effectively just a customized version of the Oracle Secure Backup SBT interface and also does not require the Advanced Security Option (Source).

The required encryption can be implemented in one of these ways:

  1. Password-based encryption – simple but requires the password to be entered manually on backup and restore.
  2. Transparent encryption where the encryption password is stored in an Oracle wallet.
  3. Dual-mode encryption where either a wallet or supplied password can be used.

Oracle recommends transparent encryption using a wallet and considers it more secure as no passwords are required.
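As a minimal sketch of option 2 (this assumes a software keystore/wallet has already been created, configured and opened for the database), the setting can then simply be persisted in the RMAN configuration:

# assumes an Oracle wallet (keystore) is already configured and open for the database
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;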

However, using password encryption just for simplicity to start, we only have to add one simple RMAN command prior to the run block:
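A sketch of that command, with the password being only a placeholder:

RMAN> SET ENCRYPTION ON IDENTIFIED BY MyBackupPwd ONLY;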


executing command: SET encryption
using target database control file instead of recovery catalog

RMAN> run {
2> allocate channel odbs type sbt
3> PARMS=',SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/opcORA11G.ora)';
4> backup tablespace users;
5> }

allocated channel: odbs
channel odbs: SID=272 device type=SBT_TAPE
channel odbs: Oracle Database Backup Service Library VER=

Starting backup at 04-SEP-15
channel odbs: starting full datafile backup set
channel odbs: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel odbs: starting piece 1 at 04-SEP-15
channel odbs: finished piece 1 at 04-SEP-15
piece handle=0mqgahd0_1_1 tag=TAG20150904T135232 comment=API Version 2.0,MMS Version
channel odbs: backup set complete, elapsed time: 00:00:15
Finished backup at 04-SEP-15

Starting Control File and SPFILE Autobackup at 04-SEP-15
piece handle=c-3847224663-20150904-00 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 04-SEP-15
released channel: odbs

RMAN> list backup of tablespace users;
List of Backup Sets

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
18      Full    1.50M      SBT_TAPE    00:00:06     04-SEP-15
        BP Key: 18   Status: AVAILABLE  Compressed: NO  Tag: TAG20150904T135232
        Handle: 0mqgahd0_1_1   Media: ************************/oracle-da
  List of Datafiles in backup set 18
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  6       Full 1394781    04-SEP-15 /u01/app/oracle/oradata/CDB121/users01.dbf



Voila: Encrypted backups straight to the cloud via ODBS. Additional commands such as RESTORE, RECOVER, CROSSCHECK, DELETE BACKUP, etc. all work in exactly the same way – the ODBS simply appears to RMAN as a SBT_TAPE device.
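For example, restoring that tablespace from the cloud backup would be a sketch along these lines (reusing the placeholder password from above; output omitted):

RMAN> SET DECRYPTION IDENTIFIED BY MyBackupPwd;
RMAN> SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';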

Now, if we want all backups to use the ODBS without having to manually specify the SBT device parameters in a channel allocation each time, we can simply persist the configuration using the CONFIGURE command:

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
2> PARMS=',SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/opcORA11G.ora)';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  ',SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/opcORA11G.ora)';
new RMAN configuration parameters are successfully stored


executing command: SET encryption

RMAN> backup device type sbt tablespace users;

Starting backup at 04-SEP-15
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=255 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 04-SEP-15
channel ORA_SBT_TAPE_1: finished piece 1 at 04-SEP-15
piece handle=0oqgaidh_1_1 tag=TAG20150904T140953 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 04-SEP-15

Starting Control File and SPFILE Autobackup at 04-SEP-15
piece handle=c-3847224663-20150904-01 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 04-SEP-15



RMAN provides various encryption algorithms (listed in the V$RMAN_ENCRYPTION_ALGORITHMS view) and all are applicable to the ODBS backups.
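To see which algorithms are available on a particular release, a quick query such as this can be used (output omitted, as it varies by version):

SQL> SELECT algorithm_name, is_default FROM v$rman_encryption_algorithms;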

Another marketed benefit of ODBS is compression. "Basic Compression" (CONFIGURE COMPRESSION ALGORITHM 'BASIC';) can be used for all RMAN backups without the requirement for additional licensing (Source). The Oracle Advanced Compression Option allows RMAN compression to include the additional compression levels "HIGH", "MEDIUM", and "LOW". "HIGH" compresses the most and results in the minimal amount of data transferred through the network (likely ideal for backing up through the internet), but at the expense of higher local CPU resources.

Therefore, a few additional permutations of the sample backup would be:

RMAN> backup as compressed backupset device type sbt tablespace users;

Starting backup at 04-SEP-15
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 04-SEP-15
channel ORA_SBT_TAPE_1: finished piece 1 at 04-SEP-15
piece handle=0qqgajcv_1_1 tag=TAG20150904T142639 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 04-SEP-15

Starting Control File and SPFILE Autobackup at 04-SEP-15
piece handle=c-3847224663-20150904-02 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 04-SEP-15
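Between the two compressed runs the compression algorithm was switched to a higher level; judging by the compression ratios reported further below, the change would have been something along these lines (the exact level here is an assumption):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';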


new RMAN configuration parameters:
new RMAN configuration parameters are successfully stored

RMAN> backup as compressed backupset device type sbt tablespace users;

Starting backup at 04-SEP-15
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 04-SEP-15
channel ORA_SBT_TAPE_1: finished piece 1 at 04-SEP-15
piece handle=0sqgajg1_1_1 tag=TAG20150904T142817 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 04-SEP-15

Starting Control File and SPFILE Autobackup at 04-SEP-15
piece handle=c-3847224663-20150904-03 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 04-SEP-15



Comparing the backup performance and compression of FULL database backups using:

  1. Uncompressed backup to DISK
  2. Uncompressed backup to ODBS
  3. Compressed ("BASIC") backup to ODBS
  4. Compressed ("HIGH") backup to ODBS
SQL> COLUMN input_bytes_per_sec_display HEADING INPUT_BPS FORMAT a14
SQL> COLUMN output_bytes_per_sec_display HEADING OUTPUT_BPS FORMAT a14
SQL> SELECT status, input_type,
  2         compression_ratio,
  3         input_bytes_display, output_bytes_display,
  4         input_bytes_per_sec_display, output_bytes_per_sec_display,
  5         time_taken_display
  6    FROM v$rman_backup_job_details;

STATUS     INPUT_TYPE  COMPRESSION_RATIO INPUT_BYTES    OUTPUT_BYTES   INPUT_BPS      OUTPUT_BPS     TIME_TAKEN
---------- ----------- ----------------- -------------- -------------- -------------- -------------- ----------
COMPLETED  DB FULL             1.2824343     4.53G          3.53G         22.08M         17.21M      00:03:30
COMPLETED  DB FULL            1.28192371     4.53G          3.53G        400.65K        312.54K      03:17:34
COMPLETED  DB FULL            5.39639373     4.46G        845.75M          1.22M        232.00K      01:02:13
COMPLETED  DB FULL            6.44232804     4.46G        708.75M          1.88M        299.41K      00:40:24

10 rows selected.



Clearly the compression helps significantly in terms of output bytes and the elapsed backup time. And the ODBS unique advantage of being able to specify COMPRESSION=‘HIGH’ makes a noticeable difference on my system at the expense of CPU resources.

But drilling into the details of this test any further is of little value as there are so many site and system specific variables in play. Specifically:

  • Database backup size
  • Nature and compressibility of the data
  • Backup parallelism or number of channels chosen
  • Internet upload bandwidth
  • Available CPU resources and headroom for compressing

So, similar tests should be performed on each system to find the sweet spot between compression, space used, and backup elapsed time.


Additional Considerations

Some more advanced options would include generating a detailed trace file of the OPC module by adding the "_OPC_TRACE_LEVEL=100" parameter to the configuration (.ora) file. The resulting trace file is created in the ADR rdbms trace directory.
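Based on the opcCDB121.ora contents shown earlier, the configuration file would then look something like this sketch, with the trace parameter as the only addition:

$ cat $ORACLE_HOME/dbs/opcCDB121.ora
OPC_WALLET='LOCATION=file:/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opc_wallet CREDENTIAL_ALIAS=storage_opc'
_OPC_TRACE_LEVEL=100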

Other documented parameters that can be included in the config file are:



Further, searching the OPC library module shows many additional hidden parameters (an examination of each is outside of the scope of this article):

$ strings $ORACLE_HOME/lib/ | grep ^_OPC_


Oracle Database Standard Edition

Oracle Database Standard Edition historically did not support backup encryption, yet ODBS supports Standard Edition, and with ODBS, encryption is mandatory. Oracle created the following bug to document that contradiction:

“Bug 18339044 – Support encryption on oracle public cloud sbt library in standard edition (Doc ID 18339044.8)”


A one-off patch for this bug is available for Oracle database onward on all ODBS supported platforms. And fortunately the patch can be applied without outage.

However, depending on how up to date you are with PSU patching, this may be a moot point, as this bug fix is included with the following PSUs or server patch sets/bundles:

  • (Server Patch Set)
  • (Jan 2015) Database Patch Set Update (DB PSU)
  • Bundle Patch 6 for Exadata Database
  • Bundle Patch 23 for Exadata Database
  • Patch 12 on Windows Platforms
  • Patch 33 on Windows Platforms

Viewing data usage

Viewing the amount of data used is quite simple from the Oracle Cloud Dashboard:



Note that this storage is not reflected as part of the “Oracle Storage Cloud Service” – they are treated as separate.

Drilling down into the “Oracle Database Backup Service” details:



Some interesting observations from this:

  • The data is not real-time as the report can only show up to the prior day (“yesterday”). It doesn’t show the current day’s activities.
  • There’s no further drill down, no way to tell which files/backup pieces or databases (if backing up many) are consuming the space.
  • There’s not much else in the way of details or anything else we can do from the web dashboard interface except for exporting the usage numbers graphed to a CSV file. We need to use other mechanisms (including RMAN commands) to obtain further details.

Note that the above screen-shots are from the Oracle Cloud Account level, which shows all Identity Domains for the account. It’s also possible to log into the dashboard for just a single Identity Domain. However, currently this doesn’t add any additional functionality or level of detail through the web interface.



The Oracle Database Backup Service definitely is a simple, low-cost, and effective off-site backup solution. No upfront costs and extremely reasonable metered usage rates make it extremely competitive.

And forcing backups to the service to be encrypted (and transferred via HTTPS), while keeping the encryption key in a wallet maintained on your own on-premises server, makes the configuration secure, and protected as long as the wallet file is backed up and secured separately.

Oracle’s triple-mirroring of the data in their data centers is nice, however there’s no option to choose which of the 19 Oracle data centers is used.

The position of the "special-use licensing" for the encryption and compression backup options at no additional cost may not be as much of a unique advantage as initially thought. Backup encryption to competing cloud services through the Oracle Secure Backup SBT module is already permitted, as is "BASIC" level backup compression for all RMAN backups, both without additional licenses (specifically the Advanced Security Option or the Advanced Compression Option). Whether the advanced compression options provided as part of the "special-use licensing", such as "HIGH" compression, are of significant value will vary on a case-by-case or site-by-site basis.

But the key difference on the licensing front is that there is no requirement for the Oracle Secure Backup module, which is separately licensed per RMAN channel (Source). This is likely the reason why the product name, modules, etc. make no reference to "Oracle Secure Backup" or OSB.

DIY solutions of backing up to disk, then encrypting and possibly compressing those RMAN backup pieces separately, then uploading to any one of the many cloud storage providers add manual steps, complexity, and possible failure points, and lack the full RMAN integration for reporting, piece management, and recovery. Conversely, the ODBS only requires some very minor RMAN configuration options or adjustments to existing RMAN commands and scripts.

Many parts of the Oracle Cloud service offering are confusing at first. Specifically, the difference between the Oracle.com account and the Oracle Cloud Account, both based on the same email as the username. Next, the concept of Identity Domains and how to navigate to them and between them, or between the dashboards and services pages, certainly could be more obvious.

Beyond that, the WebUI for the actual ODBS is basic and provides a minimal amount of functionality and information, probably because of the reasoning that you really don't need any further information: details on backup pieces and space usage can be obtained through RMAN commands and/or SQL queries. However, it would be nice to be able to see graphically through the WebUI at least which backups, or which databases, are consuming the space. Instead you just get a single "space used" number graphed by day, excluding the current day!

So overall, this is probably one of the easiest and most cost-effective ways of writing secured and compressed database backups to an off-site location (as long as your RTO can still be met with the slower restore/download times). The fact that encryption is included without needing licenses for the Oracle Secure Backup module seems to be the "killer feature". The additional backup compression options are a bonus on top of that.


Additional References


Discover more about our expertise in Oracle and Cloud.

Categories: DBA Blogs

Favorite Way: Migrating to Exadata

Wed, 2015-09-09 11:54

There are lots of considerations to be taken into account when migrating databases to the Exadata. It’s like any other migration. DBAs and other stakeholders of the system have to evaluate what to migrate and what not to, physical and logical settings, versions and many other things.

I'm not delving into migration preparation or strategies and methods here; instead I want to mention the method I like best for migrating databases to the Exadata. With this method you make the source database the primary in a Data Guard configuration, create a physical standby on the Exadata, and then switch over the primary role to the Exadata.

How’s that? Neat, simple, quick and beautiful.

Generally, these steps would be followed to accomplish this physical approach of migration:

  • At the source database, enable force logging, put it in archivelog mode, set a secondary archive destination, add TNS entries and add standby redo logs, etc.
  • At Exadata, make sure ASM instances are up and running.
  • On the standby side, create a simple parameter file, make TNS entries, and then start the standby instance in nomount.
  • Then, in RMAN, connect to the source database as target and to the standby as auxiliary, and run DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE (see the sketch after this list).
  • Then, after making sure that the standby is in sync with the primary, switch over to the standby.
  • Using srvctl, register this database with the clusterware on all nodes of the Exadata and configure the database as RAC.
  • Point your applications to the new Exadata database.
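As a rough sketch of the duplication step (the connection strings, passwords and TNS aliases below are placeholders, not taken from a real configuration):

$ rman TARGET sys/***@src_db AUXILIARY sys/***@exa_stby

RMAN> DUPLICATE TARGET DATABASE
2>   FOR STANDBY
3>   FROM ACTIVE DATABASE
4>   DORECOVER
5>   NOFILENAMECHECK;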

Of course, the devil is in the details, and proper planning, testing and execution are needed, just like in any other migration project.

As a side note, my second favourite method for migration is with Oracle GoldenGate. :)


Learn more about our expertise in Cloud technologies.

Categories: DBA Blogs

Magic of “\d” in Vertica

Wed, 2015-09-09 11:31

The '\d' family of meta-commands is a quick, neat way to list important and oft-needed information like the names of databases, schemas, users, tables, projections, etc. We can also use patterns with '\d' to narrow down the results. Let's see it in action:

Connect with Vertica vsql:

vsql  -U dbadmin -w vtest -h -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit

vtest=> \dn
List of schemas
Name     |  Owner  | Comment
v_internal   | dbadmin |
v_catalog    | dbadmin |
v_monitor    | dbadmin |
public       | dbadmin |
TxtIndex     | dbadmin |
store        | dbadmin |
online_sales | dbadmin |
mytest       | mytest  |
(8 rows)

vtest=> \dn mytest
List of schemas
Name  | Owner  | Comment
mytest | mytest |
(1 row)

vtest=> \dn my*
List of schemas
Name  | Owner  | Comment
mytest | mytest |
(1 row)

vtest=> \dn v
List of schemas
Name | Owner | Comment
(0 rows)

vtest=> \dn *v*
List of schemas
Name    |  Owner  | Comment
v_internal | dbadmin |
v_catalog  | dbadmin |
v_monitor  | dbadmin |
(3 rows)

Likewise you can list down other information like:

vtest=> \dj
List of projections
Schema    |            Name             |  Owner  |       Node       | Comment
mytest       | ptest                       | mytest  | v_vtest_node0002 |
mytest       | testtab_super               | mytest  |                  |

To list down views:

vtest=> \dv
No relations found.

If you connect with the mytest user and run:

vtest=> \dt
List of tables
Schema |  Name   | Kind  | Owner  | Comment
mytest | testtab | table | mytest |
(1 row)

Following are more ‘\d’ options from help:

\d [PATTERN]   describe tables (list tables if no argument is supplied)
PATTERN may include system schema name, e.g. v_catalog.*
\df [PATTERN]  list functions
\dj [PATTERN]  list projections
\dn [PATTERN]  list schemas
\dp [PATTERN]  list table access privileges
\ds [PATTERN]  list sequences
\dS [PATTERN]  list system tables. PATTERN may include system schema name
such as v_catalog, v_monitor, or v_internal.
Example: v_catalog.a*
\dt [PATTERN]  list tables
\dtv [PATTERN] list tables and views
\dT [PATTERN]  list data types
\du [PATTERN]  list users
\dv [PATTERN]  list views


Discover more about our expertise in Big Data.


Categories: DBA Blogs

Azure cli on Mac

Wed, 2015-09-09 10:54

Microsoft Azure has a really good, intuitive GUI interface where you can do a variety of tasks. But for me, as a DBA working most of my time on the command line, sometimes it's much easier to log on and run commands from my shell to start, stop or check the status of my VMs. I am using a MacBook as my main work machine and luckily Microsoft provides an Azure cli package for Mac. It's easy to set it up and start using it. In this blog I will try to describe how to set it up and begin using it on a Mac.

First, you need to download the azure-cli.0.9.8.dmg package and install it. It's a pretty easy and straightforward process. By the end of the installation you will be redirected to an explanation of the cli from Ralph Squillace, which is definitely worth reading. Also, on the final screen of your installation you will see some useful advice like "how to start", "uninstall azure cli" and what you need to add to your $PATH variable. By default it is going to be installed to your /usr/local/bin directory, so it makes perfect sense to include that in your default PATH if you haven't done so before.

I found the tool to be pretty easy and straightforward to use. Now, go to your terminal window and type "azure". This will provide a perfect starting point with a list of available commands. You will need to log in if you have a school or work account, or alternatively download the publish settings if, like me, you have your own individual account for Azure. To do that just type the command "azure account download". It will connect to the Microsoft portal and download the file you will need to import.
Here are the commands and expected output:

Glebs-MacBook:~ $ azure account download
info: Executing command account download
info: Launching browser to
help: Save the downloaded file, then execute the command
help: account import
info: account download command OK
Glebs-MacBook:~ $

Glebs-MacBook:~ $ azure account import ~/Downloads/Pay-As-You-Go-8-27-2015-credentials.publishsettings
info: Executing command account import
info: account import command OK
Glebs-MacBook:~ $

Now you can see your account:

Glebs-MacBook:~$ azure account list
info: Executing command account list
data: Name Id Tenant Id Current
data: ------------- ------------------------------------ --------- -------
data: Pay-As-You-Go 839567e2-ffb5-67f1-9e07-02547bd7e0a7 undefined true
info: account list command OK

So, now you are all set and can use the tool with your own subscription. You don’t need to import it again and will be able to use it just typing commands. Here I am going to show basic commands and output.

To list your VMs just type:

Glebs-MacBook:~$ azure vm list
info: Executing command vm list
+ Getting virtual machines
data: Name Status Location DNS Name IP Address
data: -------- ------------------ -------- --------------------- ----------
data: myoradb1 StoppedDeallocated East US
data: myoragate StoppedDeallocated East US
data: myoradb2 StoppedDeallocated East US
data: mypostgr01 StoppedDeallocated East US
info: vm list command OK

To start any of your machines run:

Glebs-MacBook:~$ azure vm start myoradb1
info: Executing command vm start
+ Getting virtual machines
+ Starting VM
info: vm start command OK
Glebs-MacBook:~ otochkin$ azure vm show myoradb1
info: Executing command vm show
+ Getting virtual machines
data: DNSName ""
data: Location "East US"
data: VMName "oradb1"
data: IPAddress ""
data: InstanceStatus "RoleStateUnknown"
data: InstanceSize "Medium"
info: vm show command OK

To get help, just run "azure -h" and it will provide a list of the main commands. After that you can run one of the commands without parameters, like "azure vm", and it will give the full list of available commands for the "vm" mode.
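And when you are done, the matching command to stop and deallocate a VM should be along these lines (a sketch; output not shown):

Glebs-MacBook:~$ azure vm shutdown myoradb1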

That is it for now. Good luck in your exploration of Azure.


Discover more about our expertise with Cloud technologies. 

Categories: DBA Blogs

Patching Fun: GI and Database Patches

Tue, 2015-09-08 19:05

So, it has been some time since I have patched a database. Patching seemed necessary in a 12c database used for testing, as it was having some crashing issues. As the database and GI home were both un-patched, it seemed worthwhile to just apply the latest set of patches rather than spend (more) time troubleshooting the issue. This is a CDB database on Oracle Linux 6.5, 64 bit. This is a standalone database with Oracle Restart and no RAC.

After downloading the appropriate patches, I unzipped the files to a local directory, installed OPatch and then started in on the patch documentation. After perusing the docs it appeared that patching would be fairly easy with this command:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/20996835 -ocmrf <ocm response file>

One problem though. What is an OCM response file?

As this was my first time using Opatchauto, I wanted to make this work.  The documentation did include a reference to the following helpful Support Note:

How to Create an OCM Response file to Apply a Patch in Silent Mode – opatch silent (Doc ID 966023.1)

Creating the response file was quite straightforward; as the Oracle user, the following commands took care of it:

cd /u01/app/oracle/patches/20996835
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output ./unconfig.rsp

With that out of the way it was now time to try the patch. This was done as root, as instructed in the patch documentation.

/u01/app/grid/product/12.1.0/grid/OPatch/opatchauto apply /u01/app/oracle/patches/20996835 -ocmrf /u01/app/oracle/patches/20996835/unconfig.rsp

All went well for a few moments, but then (gasp) a failure.  Please excuse the sarcasm; I may not have patched a database recently, yet I’m hardly new to the process.  Patching working properly on the first attempt would have been quite a surprise.

Here is what appeared in the Opatch log file:

2015-09-03_17-30-37 :
Clusterware is either not running or not configured. You have the following 2 options:
1. Configure and start the Clusterware on this node and re-run the tool
2. Run the tool with '-oh <GI_HOME>' to first patch the Grid Home, then invoke tool with '-database <oracle database name>' or '-oh <RAC_HOME>' to patch the RAC home
Parameter Validation: FAILED

Apparently I should not have shut down GI before starting. Rather than restarting GI, I chose door #2, that is, patching the GI and DB homes separately.

Seems simple enough. Here’s the command I used:

/u01/app/grid/product/12.1.0/grid/OPatch/opatchauto apply /u01/app/oracle/patches/20996835 -oh /u01/app/grid/product/12.1.0/grid -ocmrf /u01/app/oracle/patches/20996835/unconfig.rsp

Again, all went well for a few moments, and then another failure.  This time a file could not be found. This is from the log file:

Patch 20831113:
onewaycopyAction : Source File "/u01/app/oracle/patches/20996835/20831113/files/crs/install/" does not exists or is not readable
',': Cannot copy file from '' to '/u01/app/grid/product/12.1.0/grid/crs/install/'

That seemed rather odd.  The patch is running as root, and the file is definitely readable by root:

[oracle@ora12102a 20996835]$ ls -l /u01/app/oracle/patches/20996835/20831113/files/crs/install/
-rwx------ 1 oracle asmdba 3541 Jul 12 04:04 /u01/app/oracle/patches/20996835/20831113/files/crs/install/

Thinking this might just be a one-off error, I changed the permissions on this file to global read:

[oracle@ora12102a 20996835]$ chmod +r /u01/app/oracle/patches/20996835/20831113/files/crs/install/
[oracle@ora12102a 20996835]$ ls -l /u01/app/oracle/patches/20996835/20831113/files/crs/install/
-rwxr--r-- 1 oracle asmdba 3541 Jul 12 04:04 /u01/app/oracle/patches/20996835/20831113/files/crs/install/

… and then re-ran the patch command. Unfortunately yet another failure. Similar to the previous error, but of course this time a different file:

Patch 19769480:
onewaycopyAction : Source File "/u01/app/oracle/patches/20996835/20831110/19769480/files/sqlpatch/sqlpatch" does not exists or is not readable
'oracle.rdbms,': Cannot copy file from 'sqlpatch' to '/u01/app/grid/product/12.1.0/grid/sqlpatch/sqlpatch'

Checking on the file, again readable by root:

[oracle@ora12102a 20996835]$ ls -l /u01/app/oracle/patches/20996835/20831110/19769480/files/sqlpatch/sqlpatch
-rwx--x--- 1 oracle asmdba 2808 Jul 12 04:03 /u01/app/oracle/patches/20996835/20831110/19769480/files/sqlpatch/sqlpatch

Searching the Oracle Support site on this issue led to this note: OPatch fails with error “sqlpatch does not exists or is not readable” (Doc ID 2026594.1)

The solution as per this note is as follows:

Make sure to download the patch as Oracle Software owner and install with same user account.
As a workaround, to fix this error, we have to give the read permission to the file which is issue 

Executing the patch as the Oracle software owner however simply does not work:

[grid@ora12102a ~]$ $ORACLE_HOME/OPatch/opatchauto apply /u01/app/oracle/patches/20996835/ -oh $ORACLE_HOME -ocmrc /u01/app/oracle/patches/20996835/unconfig.rsp
Cannot run as 'grid'. Please run as root user.

Ok, so what is the real solution?

It was apparent that the reason for these errors is that at some point the patching process is doing an su (or something similar) to the software owner's account, in this case the grid user. And grid cannot read that file. The simplest solution seemed to be this: make all files and directories in the patch directory globally readable.

[oracle@ora12102a ~]$ cd /u01/app/oracle/patches/20996835/

[oracle@ora12102a 20996835]$ chmod -R +r . 

Yes, it was just that easy.  Too bad the patch files do not have the correct permissions right out of the box. Following this the patch succeeded for the GI home, and then also for the DB home. And yes, I did run datapatch as well.

Has the 12c database stopped crashing? I don’t know yet, but will report back later.

  • Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)
  • OPatch fails with error “sqlpatch does not exists or is not readable” (Doc ID 2026594.1)
  • How to Create an OCM Response file to Apply a Patch in Silent Mode – opatch silent (Doc ID 966023.1)
  • Master Note For OPatch (Doc ID 293369.1)
  • OPatch:
  • GI Patch:


Discover more about our expertise in Oracle.


Categories: DBA Blogs

My Sales Journey: Episode 1

Tue, 2015-09-08 08:20

Goldfish jumping into the sea

I have been an entrepreneur my whole career. Always pushed towards or gravitated towards making business better. After having successfully sold my last venture and taking a pause I am starting a brand new journey in Sales and want to document it through its ups and downs. The goal of this series of posts is not only to document my journey but to be immersed in yours as well. I would love to hear from you if you are in Sales whether you are a seasoned professional or just starting to tread the waters. My hope is that reading these posts will inspire, create new avenues to reach out to like minded people and above all create a discussion that is meaningful and drives success.

Week 1 in the life of a newly minted Sales Development Professional

Last week was long and short at the same time. I love the fact that Pythian has a strong on-boarding process. It adds a sense of purpose and you are constantly meeting with people or being taught new things. All the meetings are timed and managed perfectly so as to not overload you with information but temper you slowly into the process.

By day 4 I had already had my first sales meeting and met with the heads of HR, IT, Security and Business Development, the CRO and the Sales Directors as part of my orientation. I had read white papers and corporate decks, got onto Salesforce and been initiated into the Delphic Academy – Pythian's very own professional development platform.

After my first week I was excited to conclude it with lunch with my team and my PNG commander last Friday. What commander you may ask? Here at Pythian, I have my very own Pythian Next Generation Commander to mentor new employees and help them transition into the workplace. Yes! We have a thing like that!

So, my first week was not about Sales but I have a feeling that by the end of this week we will be diving right in to it. Stay tuned! Please do tell me about your “first week experiences” whether they are a month old or 25 years old. Looking forward to hearing from you.


Categories: DBA Blogs

Log Buffer #439: A Carnival of the Vanities for DBAs

Fri, 2015-09-04 07:41

This Log Buffer Edition covers some nifty blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Real Application Testing report On Premise vs. Oracle Public Cloud
  • Foreign Keys and Library Cache Locks
  • IN/EXISTS bugs by Jonathan Lewis.
  • Creating a trace file from EM12c is quite easy and doesn’t require a DBA offering up the world to allow a developer or support person to perform this action.
  • Oracle Cloud : First Impressions

SQL Server:

  • Hidden Tricks To SQL Server Table Cleanup
  • An Introduction to the OpenPOWER Foundation
  • Configuring Service Broker Architecture
  • Understand the Limitations of SQL Server Dynamic Data Masking
  • Creating Dashboards for Mobile Devices with Datazen – Part 3


MySQL:

  • Second day with InnoDB transparent page compression
  • Amazon RDS Migration Tool
  • A new client utility called mysqlpump that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects and table data.
  • How MySQL-Sandbox is tested, and tests MySQL in the process
  • Orchestrator 1.4.340: GTID, binlog servers, Smart Mode, fail-overs and lots of goodies


Learn more about Pythian's expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

COLLABORATE16 IOUG – Call For Papers

Thu, 2015-09-03 11:48

There’s so many ways to proceed
To get the knowledge you need
One of the best
Stands out from the rest
COLLABORATE16 – indeed!

Why not be part of the show
By sharing the stuff that you know
Got something to say
For your colleagues each day
Call for papers –> let’s go

I believe many of you would agree that regardless of how insignificant you believe your corner of the Oracle technology may be, everyone has something to say. I attended my first show in Anaheim CA USA in 1990 and started presenting at shows the year after in Washington DC USA. It's not hard to get over the hump, moving from "I would love to present a paper at a show but I just don't have the koyich" to "wow that was fun". The only way you will ever get the strength is to do it (and do it and do it …).

Some suggestions for getting started …

  1. Co-present with a colleague
  2. Collaborate through paper and slides development WITH your colleague rather than parcel off portions to one another then merge at the end.
  3. Be cautious of trying to cover too much in too little time (I once attended a session at IOUW [a pre-cursor to COLLABORATE] where the presenter had over 400 slides to cover in 45 minutes).
  4. Ask for assistance from seasoned presenters (mentor/protégé type relationship).
  5. Go slowly at first and set yourself some realistic but aggressive goals.

The experience of presenting at shows is rewarding and I for one do it as much as I can … Ensuring Your Physical Standby is Usable and Time to Upgrade to 12c (posting of 2015 presentation details pending).

The confidence gained, personal koyich, and rewards of presenting at events are lifelong and can help propel your career into the ionosphere. Speaking of confidence, 20 months ago I started playing bridge. Now look where my experience presenting at shows and writing for Oracle Press got me … check this out :).

Surprises surprises abound
With the new confidence found
Presenting is great
Get now on your plate
In no time you’ll be so renowned


Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Oracle EBS R12.2: Restarting Online Patching Enablement patch

Thu, 2015-09-03 11:33

If you are in the process of upgrading to Oracle E-Business Suite 12.2.4, you will have gone through this critical phase in the upgrade, which is to apply the Online Patching Enablement patch:


It’s very common to run into errors with this patch on the first try and to have to apply it a couple of times in order to get all issues fixed and online patching enabled. The recommended command to apply this patch is:

adpatch options=hotpatch,forceapply

When the time comes to re-apply the patch to fix problems, if you use the same command, you will notice that the patch completes normally in no time and nothing happens in the back end. This is because of a specific adpatch feature: adpatch by default skips jobs that are marked as “run successfully” in previous runs or as part of another patch. So we have to force it to re-run those jobs, which can be done using the command below:

adpatch options=hotpatch,forceapply,nocheckfile

Sometimes we run into cases where the Online Patching Enablement patch completes as “normal” and the online patching feature gets enabled, yet we see that a schema or two have failed to get EBR enabled. As soon as the APPS schema gets EBR enabled by this patch, adpatch gets disabled and we are forced to use the adop utility from then on, even though other custom schemas failed to get enabled. In this scenario, we can still re-apply the Online Patching Enablement patch using adpatch after setting the environment variable below:


I see that online patching enablement exercise for every customer is a unique experience. Do post your experiences with this online patching enablement patch in the comments section. I’d love to hear your story!

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

VMware Debuts SQL Server DBaaS Platform

Thu, 2015-09-03 11:14


Yesterday at VMworld, VMware announced its entry into the managed database platform market with the introduction of vCloud Air SQL. This new service is an on-demand, managed service offering of Microsoft SQL Server. It’s meant to further the adoption of hybrid operations, since it can be used to extend on-premises SQL Server use into the cloud.

Currently the two major players in this space are Amazon RDS and Azure SQL. Both of those offerings are significantly more mature and feature-rich than VMware’s service as outlined in the early access User Guide.

The beta version of vCloud Air SQL has a number of initial limitations such as:

  • SQL Server licensing is not included or available, meaning that the vCloud Air SQL platform uses a “bring your own license” (BYOL) model. This requires that you have an enterprise agreement with Software Assurance in order to leverage license mobility for existing instances.
  • SQL Server 2014 is not currently offered; only SQL Server 2008 and 2012 are supported at this time.
  • SQL Server Instances are limited to 150GB
  • Service tiers are limited to three choices at launch and altering the service tier of an existing instance is not supported at this time.

Although there are a number of limitations, reviewing the early access beta documentation reveals some interesting details about this service offering:

  • “Instant” snapshot capabilities appear to be superior to any competitor’s managed service offering. These features will be appealing to organizations leveraging DevOps and automated provisioning methodologies.
  • Persistent storage is solid state (SSD) based and will likely be more performant than competing HDD offerings.
  • A new cloud service named vCloud Air SQL DR is planned as a companion product. This service will integrate with an organization’s existing on-premises SQL Server instances. Once integrated, it will provide a variety of cloud-based disaster recovery options leveraging asynchronous replication topologies.

If you want to try this new service, VMware is offering a $300 credit for first-time vCloud Air users HERE.

Discover more about Pythian’s expertise in SQL Server.



Categories: DBA Blogs

Autoconfig in Oracle EBS R12.2

Thu, 2015-09-03 08:28

All seasoned Oracle Apps DBAs know that Autoconfig is the master utility that can configure the whole E-Business Suite instance. In E-Business Suite releases 11i, 12.0 and 12.1, running Autoconfig recreates all the relevant configuration files used by the Apache server. If the context file has the correct settings, then the configuration files should include the correct settings after running Autoconfig. This is not the case anymore in Oracle E-Business Suite 12.2. Some of the Apache config files are under Fusion Middleware control now, namely httpd.conf, admin.conf and ssl.conf. All other Apache config files are still under Autoconfig control. But these three critical config files include the main config pieces like the web port, SSL port, etc.

So if you have to change the port used by the EBS instance, you have to log into the Weblogic admin console, change the port there, and then sync the context xml file using the appropriate sync utility. This utility will get the current port values from the Weblogic console and update the xml with the new port values. Once the context xml file is synced, we have to run Autoconfig to sync the other config files and database profile values so they pick up the new web port.

Similarly, if you want to change the JVM arguments or class path, you have to run another utility to make those changes from the command line, or log in to the Weblogic admin console to make them there. Interestingly, a few of the changes done in the Weblogic admin console or Fusion Middleware Control are automatically synchronized with the context xml file by a script that runs in the background all the time. But Apache config file changes are not picked up by this script, so Apache changes have to be manually synchronized.

There are a few My Oracle Support notes that can help you understand these utilities a little more, such as 1676430.1 and 1905593.1. But understand that Autoconfig is a different ball game in Oracle E-Business Suite R12.2.

Discover more about Pythian’s expertise in the world of Oracle.

Categories: DBA Blogs

Amazon RDS Migration Tool

Wed, 2015-09-02 15:06

Amazon has just released their RDS Migration Tool, and Pythian has recently undertaken training to use it for our clients. I wanted to share my initial thoughts on the tool, give some background on its internals, and provide a walk-through of the functionality it will be most commonly used for.

There are many factors to consider when evaluating cloud service providers, including cost, performance, and high availability and disaster recovery options. One of the most critical and overlooked elements of any cloud offering, though, is the ease of migration. Often, weeks are spent evaluating all of the options only to discover after the choice is made that it will take hours of expensive downtime to complete the migration, and that there is no good rollback option in the case of failure.

In order to reduce the friction inherent in the move to a DBaaS offering, Amazon has developed an RDS Migration tool. This is an in-depth look at this new tool, which will be available after September 1, 2015. Contact Pythian to start a database migration.

With the introduction of the RDS Migration tool, Amazon has provided a powerful engine capable of handling much more than basic migration tasks. It works natively with Oracle, SQL Server, Sybase, MySQL, PostgreSQL, Redshift (target only), Aurora (target only), and provides an ODBC connector for all other source systems. The engine is powerful enough to handle fairly complex transformations and replication topologies; however, it is a migration tool and isn’t intended for long-term use.


Amazon’s RDS Migration Tool architecture is very simple. It consists of your source system, an AWS VM with the Migration Tool installed on it, and the target RDS instance.

Each migration is broken up into Tasks. Within a Task, a source and target database are defined, along with the ability to transform the data, filter the tables or data being moved, and perform complex transformations.

Tasks can be scheduled to run at particular times, can be paused and resumed, and can alert on success or failure. It’s important to note that if a task is paused while a table is loading, that table will be reloaded completely from the beginning when the task resumes.

Within a running task, the following high-level steps are performed:
• Data is pulled from the source using a single thread per table
• Data is converted into a generic data type
• All transformations are applied
• Data is re-converted into the target system’s datatype and inserted
• After the initial load, if specified, the tool monitors for updates to data and applies them in near real-time

While processing the data, each table has a single thread reading from it, and any updates are captured using the source system’s native change data capture utility. Changes are not applied until after the initial load is completed. This is done to avoid overloading the source system, where it’s assumed client applications will still be running.

Performance Considerations

There are several factors which might limit the performance seen when migrating a database.

Network Bandwidth
Network bandwidth is probably the biggest contributor to performance issues across data centers, and there is no magic button when moving to RDS. If the database is simply too big or too busy for the network to handle the data being sent across, then other options may need to be explored or used in conjunction with this tool.

Some workarounds to consider when network performance is slow include:
• Set up AWS Direct Connect
• Use a bulk-load utility, and then use the tool to catch up on transactions
• Only migrate data from a particular point in time

RDS Migration Tool Server CPU
The migration tool converts all data into a common data type before performing any transformations, then converts it into the target database’s data types. This is obviously very heavy on the server’s CPU, and this is where the main performance bottlenecks on the server are seen.

Capacity of Source database
This tool uses a single SELECT statement to migrate the data, and then returns for any changed data after the initial bulk load is completed. On a busy system, this can be a lot of undo and redo data to migrate, and the source system needs to be watched closely to ensure the log files don’t grow out of control.
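
One rough way to keep an eye on this on an Oracle source (my own suggestion, not a feature of the tool) is to sample the cumulative redo figure and watch how fast it grows during the migration. A minimal sketch:

-- Hedged sketch for an Oracle source: total redo generated since instance
-- startup, in MB. Sample it periodically during the migration to estimate
-- the redo rate the tool has to keep up with.
SELECT ROUND(value / 1024 / 1024) AS redo_mb_since_startup
  FROM v$sysstat
 WHERE name = 'redo size';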

Capacity of Target database
In the best case scenario, this will be the limiter as it means all other systems are moving very fast. Amazon does recommend disabling backups for the RDS system while the migration is running to minimize logging.


The following walkthrough looks at these capabilities of the tool in version 1.2:

• Bulk Data Migration to and from the client’s environment and Amazon RDS
• Near Real-Time Updates to data after the initial load is completed
• The ability to transform data or add auditing information on the fly
• Filtering capabilities at the table or schema level

You will need to have set up network access to your databases for the RDS Migration Tool.

1. After confirming access with your account manager, access the tool by opening the AWS console, selecting EC2, and choosing AMIs.
AWS Console

2. Select the correct AMI and build your new VM. Amazon recommends an M4.large or M4.xlarge.

3. After building the new VM, you will need to install the connectors for your database engine. In this example, we’ll be using Oracle Instant Client and MySQL ODBC Connector 5.2.7.

  • For the SQL Server client tools, you will need to stop the Migration services before installing.

4. Access the Migration Tool

  • Within VM: http://localhost/AmazonRDSMigrationConsole/
  • Public URL: https://[VM-DNS]/AmazonRDSMigrationConsole/
    • Username/Password is the Administrator login to the VM

5. The first screen after logging in displays all of your current tasks and their statuses.
RDS Migration Tool Home Screen

6. Clicking on the Tasks menu in the upper-left corner will bring up a drop-down menu to access Global Settings. From here, you can set Notifications, Error Handling, Logging, etc…
RDS Migration Tool Global Settings

7. Back on the Tasks menu, click the Manage Databases button to add the source and target databases. As mentioned earlier, this walkthrough will be an Oracle to Aurora migration. Aurora targets are treated as a MySQL database for the purposes of this tool.
RDS Migration Tool Manage Databases Pop-Up

8. After defining your connections, close the Manage Databases pop-up and select New Task. Here, you can define if the task will perform a bulk-load of your data and/or if it will attempt to apply changes made.
RDS Migration Tool New Task

9. After closing the New Task window, simply drag & drop the source and target connectors into the task.

10. By selecting Task Settings, you can now define task level settings such as number of threads, truncate or append data, and define how a restart is handled when the task is paused. You can also override the global error handling and logging settings here.

  • The best practice recommendation is to find the largest LOB value in your source database and set that as the max LOB size in the task. Setting this value allows the task to optimize LOB handling and will give the best performance (see the query sketch below).

RDS Migration Tool Task Settings
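
One way to find that largest LOB value on an Oracle source is a query along these lines (a sketch only; MUSER08.T1 and C_LOB are placeholder names, not anything defined by the tool):

-- Hedged sketch: length of the largest LOB value in a given table/column on
-- an Oracle source (characters for CLOBs, bytes for BLOBs), to help pick the
-- task's max LOB size setting. MUSER08.T1 and C_LOB are placeholders.
SELECT MAX(DBMS_LOB.GETLENGTH(c_lob)) AS max_lob_length
  FROM muser08.t1;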

11. Select the Table Selection button to choose which tables will be migrated. The tool uses wildcard searches to allow any combination of tables to exclude or include. For example, you can:

  • Include all tables in the database
  • Include all tables in a schema or set of schemas
  • Exclude individual tables and bring over all remaining tables
  • Include individual tables and exclude all remaining tables

The tool has an Expand List button which will display all tables that will be migrated.

In this screenshot, all tables in the MUSER08 schema that start with T1 will be migrated, while all tables that start with T2 will be excluded EXCEPT for the T20, T21, T22, & T23 tables.
RDS Migration Tool Table Selection

12. After defining which tables will be migrated, select an individual table and choose the Table Settings button. Here you can add transformations for the individual tables, add new columns or remove existing ones, and filter the data that is brought over.

In this screenshot, the T1 table records will only be brought over if the ID is greater than or equal to 50 and the C1 column is LIKE ‘Migrated%’
RDS Migration Tool Table Settings
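
Conceptually, that filter amounts to the tool extracting only the rows a query like the following would return (a sketch of the effective predicate; the tool builds its actual statement internally):

-- Hedged sketch of the effective source-side extract for MUSER08.T1 with the
-- filter shown in the screenshot above.
SELECT *
  FROM muser08.t1
 WHERE id >= 50
   AND c1 LIKE 'Migrated%';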

13. Select the Global Transformations button. Like the table selection screen, you use wildcards to define which tables these transformations will be applied to.
You can:

  • Rename the schema
  • Rename the table
  • Rename columns
  • Add new columns
  • Drop existing columns
  • Change the column data types

In this screenshot, a new column named MigratedDateTime will be created on all tables and populated with the current DateTime value.
RDS Migration Tool Global Transformations
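
On an Aurora/MySQL target, the net effect of that global transformation is roughly what the following statements would produce by hand (a sketch only; the tool adds and populates the column itself during the load, and the DATETIME data type is my assumption):

-- Hedged sketch of the net effect on one target table; the tool handles this
-- during its own load rather than via user-issued DDL/DML. DATETIME is an
-- assumed data type.
ALTER TABLE t1 ADD COLUMN MigratedDateTime DATETIME;
UPDATE t1 SET MigratedDateTime = NOW();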

14. Finally, save the task and choose Run. This will kick off the migration process and bring up the Monitoring window. From here, you can see the current task’s status, notifications, and errors, as well as get an idea of the remaining time.
RDS Migration Tool Monitoring Window

Categories: DBA Blogs

You work for a software company. You don’t? Think again.

Tue, 2015-09-01 09:26

If someone asks what business your company is in, you might say transportation, or networks, or retail, or a hundred other possibilities. But what you should be saying is that you are also in the software business.

At its core, your company is a software company. Or it should be.

Why? Because your competitors are growing in number, emerging from nowhere, and aggressively using digital strategies to get ahead of you.

To be successful, you must continually innovate and differentiate your company, no matter what your industry. You must do things better, faster, and cheaper. And you must engage your customers and your partners in new and meaningful ways. It doesn’t matter whether you’re a bank, a pharmaceutical company, or a logistics provider. Think like a startup and use software to stay one step ahead.

This connection can be easy if your business is already using software to provide differentiating product features or services. If you sell goods online, or you deliver content, or you offer software as a service, you know you’re in the software business. You probably focus on being more responsive and agile, that is, delivering new features faster than ever before and using data to gain business insights and optimize the user experience.

For those companies who don’t initially think of themselves as software companies, it’s a little more interesting. In time, they will realize that software is what is differentiating them.

For example, Redline Communications thinks of itself as a wireless infrastructure company that delivers wireless networks in remote locations. In actuality, it uses software to add new features to its network components. It also uses software to expand a network’s capacity on demand, and to troubleshoot problems. Redline might manufacture hardware but it is solidly in the software business.

Pythian is often described as an IT services company, but it is undoubtedly a software company. Nobody at Pythian touches a customer’s critical production system or data without going through a software portal called Adminiscope that secures access and records all activity. Pythian doesn’t sell software, but it is absolutely in the software business.

Then there are the companies that would not traditionally be classified as technology businesses at all, but have clearly made the connection. And that doesn’t mean just having an online presence. Take retailer Neiman Marcus, a company that has consciously entered the software space with the development of apps like “Snap. Find. Shop.”, a tool that lets users take photos of a product they want and helps them track it down. They know they need to engage customers more personally, and the way to do that is through software that enables them to interact with customers, and to understand and respond to buying behaviors and preferences.

KAR Auction Services, which you might know as a car auction company, has stated publicly that they no longer want to be a car auction company that uses technology but “a technology company that sells cars”. They know that software will drive the future of their business.

It is increasingly difficult to sell, deliver or support any product or service without using software. It is increasingly difficult to truly understand your business without being data driven, the byproduct of software. It is increasingly difficult to recruit employees without using software. Your customers and your employees expect you to be agile and responsive, and software helps you meet those expectations, and then measures, monitors, analyzes, and integrates data to keep you ahead of the game.

In today’s hyper-competitive world, your company must use software and technology to become agile in order to respond to ever-changing customer needs. Then you must remain just as aggressive by measuring, monitoring, evaluating, and responding to data about your products and services as well as their impact on your customers and their environment. Whether it’s customer feedback about product features, or changing market trends, you need to be ready to react and iterate your products and processes at lightning speed. Software is the one thing that’s going to enable that.

So what does it mean to use software to be competitive? It means departing from tradition. It means empowering IT to go beyond cutting costs to transform the business. It means empowering everyone in the company to innovate around software. It means encouraging radical disruptive ideas on how to change the business. And it means putting a digital strategy at the heart of your planning. And this is certainly what your competition is doing.

Categories: DBA Blogs

asmcmd> a better “du”

Fri, 2015-08-28 14:28

I discovered ASM with a RAC running on Linux Itanium and that was a big adventure. At that time there was no asmcmd. In 2005, Oracle released Oracle 10gR2 and asmcmd came into the picture, and we figured out how to make it work with a 10gR1 ASM. We were very excited to have a command line for ASM until… we tried it! Let’s call a spade a spade: it was very poor…

Ten years later, Oracle has released 11gR1, 11gR2 and 12cR1, and asmcmd has been improved, but the “ASM shell” remains very weak, especially the “du” command:

Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du .
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du *
Used_MB Mirror_used_MB
 556265 556265

Why does “du *” not act as it does in any Unix shell? How do I know the size of each subdirectory in my current directory?


Nowadays, we often have dozens of instances running on the same server sharing the same ASM:

[oracle@higgins ~]$ ps -ef | grep pmon | wc -l
[oracle@higgins ~]$

So should I use one “du” per database (directory) to know the size used by each database? What if I keep one month of archivelogs in my FRA? Should I wait for the month of February so that I only have 28 “du” commands to run if I want to know the size of the archivelogs generated each day (and only if it is a non-leap year!)?


This is why I wrote this piece of code to have a “du” under ASM that makes my life easier every day:

[oracle@higgins ~]$ cat
#!/bin/bash
# du of each subdirectory in a directory for ASM
D=$1                                  # directory to analyze, passed as the first argument

if [[ -z $D ]]
then
 echo "Please provide a directory !"
 exit 1
fi

(for DIR in `asmcmd ls ${D}`          # one "asmcmd du" per subdirectory, keep only the totals line
 do
     echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
 done) | awk -v D="$D" ' BEGIN {  printf("\n\t\t%40s\n\n", D " subdirectories size")           ;
                                  printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB")   ;
                                  printf("%25s%16s%16s\n", "------", "-------", "---------")   ;}
                               {  printf("%25s%16s%16s\n", $1, $2, $3)                         ;
                                  use += $2                                                    ;
                                  mir += $3                                                    ;}
                          END  {  printf("\n\n%25s%16s%16s\n", "------", "-------", "---------");
                                  printf("%25s%16s%16s\n\n", "Total", use, mir)                ;} '
[oracle@higgins ~]$
Let's see it in action with some real-life examples:
[oracle@higgins ~]$ . oraenv
The Oracle base remains unchanged with value /oracle
[oracle@higgins ~]$./ DATA

DATA subdirectories size

Subdir  Used MB Mirror MB
------  ------- --------
DB01/    2423    2423
DB02/    2642    2642
DB03/    321201  321201
DB04/    39491   39491
DB05/    180753  180753
DB06/    4672    4672
DB07/    1431    1431
DB08/    2653    2653
DB09/    70942   70942
DB10/    96001   96001
DB11/    57322   57322
DB12/    70989   70989
DB13/    4639    4639
DB14/    40800   40800
DB15/    13397   13397
DB16/    15279   15279
DB17/    19020   19020
DB18/    8886    8886
DB19/    4671    4671
DB20/    14994   14994
DB21/    502245  502245
DB22/    4839    4839
DB23/    10169   10169
DB24/    7772    7772
DB25/    7828    7828
DB26/    112109  112109
DB27/    5564    5564
DB28/    16895   16895
------  ------- ---------
Total   1639627 1639627
[oracle@higgins ~]$


Another one with many archivelog directories:
[oracle@higgins ~]$./ FRA/THE_DB/ARCHIVELOG/

 FRA/THE_DB/ARCHIVELOG/ subdirectories size

 Subdir       Used MB Mirror MB
 ------        ------ ---------
 2015_02_19/    114   114
 2015_02_20/    147   147
 2015_02_21/    112   112
 2015_02_22/    137   137
 2015_02_23/    150   150
 2015_02_24/    126   126
 2015_02_25/    135   135
 2015_02_26/    130   130
 2015_02_27/    129   129
 2015_02_28/    119   119
 2015_03_01/    146   146
 2015_03_02/    150   150
 2015_03_03/    128   128
 2015_03_04/    134   134
 2015_03_05/    44    44
 2015_05_27/    28    28
 2015_05_28/    95    95
 2015_05_29/    76    76
 2015_05_30/    187   187
 2015_05_31/    78    78
 2015_06_01/    111   111
 2015_06_02/    105   105
 2015_06_03/    43    43
 2015_06_04/    142   142
 2015_06_05/    42    42
 2015_06_06/    84    84
 2015_06_07/    70    70
 2015_06_08/    134   134
 2015_06_09/    77    77
 2015_06_10/    143   143
 2015_06_11/    2     2
 2015_06_21/    14    14
 2015_06_22/   14918 14918
 ------       ------- ---------
 Total         18250   18250

[oracle@higgins ~]$

This example is a very nice one, as it shows us that 2015 is not a leap year and that some archivelogs are still on disk even though they probably shouldn’t be. That’s good information, as v$log_history does not contain this information anymore:

SQL> select trunc(FIRST_TIME), count(*) from v$log_history group by trunc(FIRST_TIME) order by 1 ;

TRUNC(FIR   COUNT(*)
--------- ----------
22-JUN-15 402
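
If you also want a per-day figure from the database side (my own addition, not part of the original script), v$archived_log can be summarized like this; note that, just like v$log_history, it only goes back as far as the control file still keeps records:

-- Hedged sketch: archived log count and volume per day, limited by control
-- file record retention.
SELECT TRUNC(first_time)                             AS log_day,
       COUNT(*)                                      AS logs,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
  FROM v$archived_log
 GROUP BY TRUNC(first_time)
 ORDER BY 1;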


Hope it will also make your life easier,

Have a good day :)

Categories: DBA Blogs