
DBA Blogs

List of SQL Server Databases in an Availability Group

Pythian Group - Fri, 2014-05-23 11:07

After migrating some databases to SQL Server 2012 and creating Availability Groups for some of them, we noticed that some of our maintenance scripts were failing with the following error:

The target database ('MyDB1') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online.

Databases that are part of an Availability Group and play the secondary role can be read-only replicas and are therefore not writable.

Those databases cannot take part in some of the out-of-the-box maintenance plans or other coded maintenance procedures.

For the out-of-the-box Maintenance Plans, there is an option in the Plan to choose the preferred replica. (Link)

Any code that requires writes in the database will not work.

How do we get the list of those databases so that we can exclude them/include them in our code?

So first:

How do we know that this instance is part of an Availability Group?

SELECT AG.name AS [Name],
ISNULL(agstates.primary_replica, '') AS [PrimaryReplicaServerName],
ISNULL(arstates.role, 3) AS [LocalReplicaRole]
FROM master.sys.availability_groups AS AG
LEFT OUTER JOIN master.sys.dm_hadr_availability_group_states as agstates
    ON AG.group_id = agstates.group_id
INNER JOIN master.sys.availability_replicas AS AR
    ON AG.group_id = AR.group_id
INNER JOIN master.sys.dm_hadr_availability_replica_states AS arstates
    ON AR.replica_id = arstates.replica_id AND arstates.is_local = 1


Name       PrimaryReplicaServerName  LocalReplicaRole (1=primary, 2=secondary, 3=none)
AvGroup_1  MyServer01                2
AvGroup_2  MyServer02                1


How do we get some information about the databases in the Availability Group:

SELECT AG.name AS [AvailabilityGroupName],
ISNULL(agstates.primary_replica, '') AS [PrimaryReplicaServerName],
ISNULL(arstates.role, 3) AS [LocalReplicaRole],
dbcs.database_name AS [DatabaseName],
ISNULL(dbrs.synchronization_state, 0) AS [SynchronizationState],
ISNULL(dbrs.is_suspended, 0) AS [IsSuspended],
ISNULL(dbcs.is_database_joined, 0) AS [IsJoined]
FROM master.sys.availability_groups AS AG
LEFT OUTER JOIN master.sys.dm_hadr_availability_group_states as agstates
   ON AG.group_id = agstates.group_id
INNER JOIN master.sys.availability_replicas AS AR
   ON AG.group_id = AR.group_id
INNER JOIN master.sys.dm_hadr_availability_replica_states AS arstates
   ON AR.replica_id = arstates.replica_id AND arstates.is_local = 1
INNER JOIN master.sys.dm_hadr_database_replica_cluster_states AS dbcs
   ON arstates.replica_id = dbcs.replica_id
LEFT OUTER JOIN master.sys.dm_hadr_database_replica_states AS dbrs
   ON dbcs.replica_id = dbrs.replica_id AND dbcs.group_database_id = dbrs.group_database_id
ORDER BY AG.name ASC, dbcs.database_name


AvailabilityGroupName PrimaryReplicaServerName LocalReplicaRole DatabaseName SynchronizationState IsSuspended IsJoined
AvGroup_1             MyServer01               2                MyDB1        2                    0           1
AvGroup_1             MyServer01               2                MyDB2        2                    0           1
AvGroup_1             MyServer01               2                MyDB3        2                    0           1

So, for example,

If we would like to get the databases that are secondary in the Availability Group, to be excluded when writes are required:

SELECT dbcs.database_name AS [DatabaseName]
FROM master.sys.availability_groups AS AG
LEFT OUTER JOIN master.sys.dm_hadr_availability_group_states as agstates
   ON AG.group_id = agstates.group_id
INNER JOIN master.sys.availability_replicas AS AR
   ON AG.group_id = AR.group_id
INNER JOIN master.sys.dm_hadr_availability_replica_states AS arstates
   ON AR.replica_id = arstates.replica_id AND arstates.is_local = 1
INNER JOIN master.sys.dm_hadr_database_replica_cluster_states AS dbcs
   ON arstates.replica_id = dbcs.replica_id
LEFT OUTER JOIN master.sys.dm_hadr_database_replica_states AS dbrs
   ON dbcs.replica_id = dbrs.replica_id AND dbcs.group_database_id = dbrs.group_database_id
WHERE ISNULL(arstates.role, 3) = 2 AND ISNULL(dbcs.is_database_joined, 0) = 1
ORDER BY dbcs.database_name


DatabaseName
MyDB1
MyDB2
MyDB3
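In a home-grown maintenance script, the list returned by the query above can be used as a skip list before any write. A minimal sketch of that idea (Python, purely illustrative; the rows mimic the query output, and "SalesDB" is a made-up database whose local replica is primary):

```python
# (database_name, role, is_joined) rows, as returned by the query above;
# role 2 = secondary, per the LocalReplicaRole legend.
rows = [
    ("MyDB1", 2, 1),
    ("MyDB2", 2, 1),
    ("MyDB3", 2, 1),
    ("SalesDB", 1, 1),  # hypothetical database whose local replica is primary
]

# Joined secondaries are read-only here, so exclude them from write work
# (this mirrors the WHERE clause of the query above).
skip = {name for name, role, joined in rows if role == 2 and joined == 1}
maintenance_targets = [name for name, _, _ in rows if name not in skip]
```

The same filter could of course be pushed into the query itself, as shown earlier; the point is only that the role/joined pair is what decides whether a database is writable locally.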

Related Links:

Monitor Availability Groups (Transact-SQL)

AlwaysOn Availability Groups Dynamic Management Views and Functions (Transact-SQL)

AlwaysOn Availability Groups Connectivity Cheat Sheet (describing types of AG replicas and if connection or reads are allowed on secondary DB)

Backups on secondary replicas – Always ON Availability Groups

Categories: DBA Blogs

Life at Pythian as a MySQL DBA

Pythian Group - Fri, 2014-05-23 08:00

Several people have asked me in the last year what it is like to work for Pythian. There are many reasons for a question like this, depending on who is asking.

Some people are concerned about the fact I am working for a Canadian-based company.

This typically comes from some of my xenophobic North American peers. But let me assure you, it’s not bad! Pythian is based in Canada, but has employees around the globe in more than 25 countries. It’s pretty impressive, because Pythian must comply with the employment laws of each country.

When you’re first hired at Pythian, you’ll be invited to visit their global headquarters in Ottawa, Canada. This is a great opportunity to get to know your peers, as well as the history of the company and how it operates. The country is beautiful, even if you’re lucky enough to be hired during their winter. Overall, it’s not very different compared to working for any other company in my country, aside from the extreme cold weather and all the talk about hockey and curling.

Besides, I actually like hockey.

Some people are curious about what it’s like working from home.

Pythian is not my first experience working remotely. I have been telecommuting since 2005. I tell these people that it’s not for everyone. Working remotely takes hard work and self-discipline.

When I first transitioned from office life to working remotely, it was brutal. My productivity plummeted; I rarely began real work before noon. You typically don’t have your boss hovering over your shoulder at home; if you want this, feel free to add a monitor on the wall behind you and make sure your boss uses a webcam.

A remote employee must treat the home office like a real office. That means no wandering to your desk in your pajamas, half asleep. Make a concerted effort to dress for work, just as if you’re going into the office. If you have to, take a drive around the block as part of your commute!

If you have family or friends living with you, make sure they know that while you may be physically in the building, you are off limits except in emergencies.

Communication with colleagues can be challenging, and despite technology, your organization must develop a culture of dealing with remote employees. At my first company I was one of only two people working remotely, and staying in the loop was like pulling teeth. Pythian, on the other hand, is built with a large portion of its workforce being remote. The company is growing rapidly, and so must its policies. It is a major focus to ensure that all employees are kept up to date and in the loop. Communication lines are kept open using appropriate technologies such as Skype and Google Hangouts, ensuring that team members are engaged with each other.

Some people are interested in the type of work I do as a MySQL consultant.

This is the conversation I love to have. Most often it comes from someone I met on the internet, and that’s ok. The best thing about working at Pythian is the sheer amount of technologies I get to work with on a daily basis.

As a MySQL Consultant at Pythian, I deal with customers running MySQL 4.1 through 5.6, Percona Server and MariaDB variations, Galera, Tungsten Replicator and many other solutions to solve some of the internet’s toughest problems. Our clients run datasets from a few gigabytes to multiple terabytes. Our clients run on different operating systems: Solaris, FreeBSD, all flavors of Linux, and even Windows in some cases. Our clients use filesystems ranging from ext2 to xfs and zfs, although we try really hard to persuade against ext2. We provide consulting for environments running on many of the cloud providers and hosting providers. We develop proof of concepts, migration and upgrade plans, performance tuning, capacity planning, high availability and disaster recovery plans, and much more.

Let’s face it: The open source world is ever-changing and new technologies are always being created to interact with the MySQL database or even NoSQL technologies, such as MongoDB. There are relatively few places to work at that can offer exposure and experience to such environments.

Do you have what it takes to be a “Pythianite”?

Pythian is dedicated to developing its employees, also known as Pythianites. Pythian provides ample opportunity for career growth, but this work is not for everyone. It’s very fast paced and at times stressful. There are high expectations for our consultants, and you genuinely have to love data.

If you think you have what it takes to work at Pythian, check out our current opportunities.

Categories: DBA Blogs

Should AUD$ Table Exist on System Tablespace During DB Upgrade?

Pythian Group - Fri, 2014-05-23 07:59

I see the following message in all My Oracle Support notes that talk about Oracle database manual upgrade steps.

4.16 Verify the existence of the AUD$ tables
Ensure that if the aud$ table exists that it is in the SYS schema and in the SYSTEM tablespace.
If the AUD$ table is not in SYSTEM tablespace and not owned by the SYS user then before doing the upgrade put it back to the SYSTEM tablespace and it should be owned by SYS .

When I came across the same kind of message even for the latest database version, 12cR1, in MOS note 1503653.1, I decided to check the true purpose of this warning. The doubt seems especially valid now that we have the DBMS_AUDIT_MGMT package, which can be used to relocate and purge data for the aud$ table from rdbms version 10gR1 onward.

I created a database named “test” using rdbms version 11gR2 and enabled database auditing. After a few sessions of testing, I could see records in the aud$ table. I moved the table and its associated LOB segments after disabling DB audit, and enabled auditing again after the activity completed.

SQL> SELECT comp_id, status, SUBSTR(version,1,10) version, comp_name FROM dba_registry ORDER BY 1;

COMP_ID STATUS   VERSION    COMP_NAME
------- -------- ---------- -----------------------------------
CATALOG VALID               Oracle Database Catalog Views
CATPROC VALID               Oracle Database Packages and Types

SQL> show parameter audit_trail

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      NONE

SQL> alter table sys.aud$ move tablespace users
2 LOB (sqltext) STORE AS lobsegment1 (TABLESPACE users )
3 LOB (sqlbind) STORE AS lobsegment2 (TABLESPACE users );

Table altered.

SQL> select count(1) from aud$;


SQL> select distinct tablespace_name from dba_segments where segment_name in(‘AUD$’,'SYS_IL0000000384C00040$$’,'SYS_IL0000000384C00041$$’);


SQL> ALTER SYSTEM SET audit_trail=db, extended scope=spfile;

System altered.

SQL> alter system set audit_sys_operations=TRUE scope=spfile;

System altered.

I configured an Oracle home of version 12cR1 and executed the pre-upgrade script “preupgrd.sql” against the 11gR2 Oracle home. I didn’t notice any error messages related to this table residing in a different tablespace. I reviewed the steps required per MOS note 1503653.1 and omitted many points, including step 4.16, given the nature of this database. I never faced any issues with the upgrade, and the aud$ table still existed in the USERS tablespace after the upgrade.

SQL> SELECT comp_id, status, SUBSTR(version,1,10) version, comp_name FROM dba_registry ORDER BY 1;

COMP_ID STATUS   VERSION    COMP_NAME
------- -------- ---------- -----------------------------------
CATALOG VALID               Oracle Database Catalog Views
CATPROC VALID               Oracle Database Packages and Types
XDB     VALID               Oracle XML Database

SQL> select owner,count(*) from dba_objects where status=’INVALID’ group by owner;

no rows selected

SQL> select count(1) from sys.aud$;


SQL> select tablespace_name from dba_segments where segment_name=’AUD$’;


SQL> select owner,count(*) from dba_objects where status=’INVALID’ group by owner;

no rows selected

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      12.0.0

SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u02/app/oracle/product/12.1.0/dbhome_1/rdbms/audit
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      DB, EXTENDED

Checking a few other MOS notes, it seems they really only require the aud$ table to exist in the SYS schema.

FAQ : Database Upgrade And Migration (Doc ID 1352987.1)
Which schema should the AUD$ table exist in while doing the upgrade ?
AUD$ table should exist in SYS schema while doing the upgrade.

So this blog’s question remains valid. For those who keep a bigger aud$ table in a dedicated tablespace for better database performance, this relaxation means saving 1-3 hours of production database downtime. Is it time to ask Oracle Support to review point 4.16 so that it checks only the owner of the aud$ table?

Note: This testing was carried out only from rdbms version 11gR2 to 12cR1. Please test this behavior in your test environment before you prepare the action plan for a production upgrade.

Categories: DBA Blogs

Log Buffer #372, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-23 07:58

The Log Buffer Carnival lets readers see into the minds of database bloggers who are converting their thought processes into blog posts. This Log Buffer Edition is part of the same chain. Enjoy.


Oracle:

Tanel Poder is combining Bloom Filter Offloading and Storage Indexes on Exadata.

Randolf talks about 12c Hybrid Hash Distribution with Skew Detection / Handling – Failing.

Kyle talks about 5 Databases issues that are costing you time and money.

Frits Hoogland unfolds how Exadata Smart Scan works.

A Framework Approach to Building an Oracle WebCenter Intranet, Extranet, or Portal.

SQL Server:

Microsoft adds forecasting capabilities to Power BI for O365

Capping CPU using Resource Governor – The Concurrency Mathematics

Dell Doubles Application Speeds, Processes Transactions 9X Faster with In-Memory OLTP

Architecture of the Microsoft Analytics Platform System

Introducing the AzureCAT PPI Theater at PASS BA


MySQL:

Dean Ellish blogs about using Go with MariaDB.

MySQL 5.6 has added support for EXPLAIN FORMAT=JSON. The basic use case for that feature is that one can look at the JSON output and see more details about the query plan. More advanced/specific use cases are difficult, though.

Real-Time Data Movement: The Key to Enabling Live Analytics With Hadoop.

MySQL Enterprise Monitor 2.3.17 is now available for download on the My Oracle Support (MOS) web site.

If you run multiple MySQL environments on multiple servers, it’s a good habit to set your MySQL prompt to double-check which server you are on.

Categories: DBA Blogs

Oracle #GoldenGate Replicate Apply (Run) Rate

DBASolved - Thu, 2014-05-22 12:16

For a couple of weeks now, I’ve been trying to figure out a way to identify the size of data for transactions that are getting processed over a given period of time with Oracle GoldenGate.  When I started to think through the process, I keyed in on the Relative Byte Address (RBA).  What is the RBA?  From reading Oracle’s GoldenGate documentation, the RBA is mostly a marker within the trail file that identifies the location of a transaction.  This got me thinking: maybe I can use the RBA to “estimate” the amount of data applied over a period of time (compare 2 RBAs).

Before I ventured off into the unknown, I wanted to verify whether there was a method already identified by Oracle.  What I found on MOS was Note ID 1356524.1.  This note deals mostly with how to identify the speed of the extract process.  What I found interesting in this note is that Oracle uses the RBA to help calculate the amount of data being extracted.  With this note in hand, I felt comfortable using the RBA to “estimate” the amount of data being applied by a replicat.

Note:  How to estimate Goldengate extract redo processing speed? (Doc ID 1356524.1)

A few sentences ago, I mentioned that I wanted to compare 2 RBAs to “estimate” the amount of data applied over a period of time.  In order to do this, I needed to convert the RBA into a meaningful number.

The following formulas convert the RBA delta to megabytes and then into the metrics I wanted:

($sec_rba - $first_rba)/(1024*1024)  <- "estimated" size applied in MB (per sample interval)
$mb_min*60                           <- "estimated" size applied over an hour in MB
$mb_hr/1024                          <- "estimated" size applied over an hour in GB
$gb_hr*24                            <- "estimated" size for a day in GB
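As a quick sanity check of the arithmetic, the same four formulas can be sketched outside of Perl (Python here, purely illustrative; the function name is mine, and the sketch assumes the two samples are taken 60 seconds apart so the first delta is already "per minute"):

```python
# The four RBA formulas above, assuming two RBA samples 60 seconds apart
# within the same trail file (the RBA is a byte offset into the trail).
def rba_rates(first_rba, sec_rba):
    mb_min = (sec_rba - first_rba) / (1024 * 1024)  # byte delta -> MB per minute
    mb_hr = mb_min * 60                             # MB per hour
    gb_hr = mb_hr / 1024                            # GB per hour
    gb_day = gb_hr * 24                             # GB per day
    return mb_min, mb_hr, gb_hr, gb_day
```

A delta of exactly 1 MiB in one minute works out to 60 MB/hour, which makes the unit conversions easy to eyeball.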

Now the question was how to grab this information for each replicat.  The information I needed could be found by doing an “info replicat <replicat>, detail” (the detail part is not really needed; I just use it to list out all the associated trail files).  The output from the info command looks similar to this:

Info Replicat Output:


The thing to keep in mind is that I’m only concerned with two lines in this output: the “Log Read Checkpoint” line and the line with the date and RBA number.  In order to gather this information and do the calculations using the RBA, I wrote a Perl script.  The basics of the script are below:

#!/usr/bin/perl -w
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
use strict;
use warnings;

#Static variables
my $gghome    = "/u01/app/oracle/product/12.1.2/ogg";
my $outfile   = "/tmp/gg_replicat_runrates.txt";
my $sleeptime = 60;
my $gguser    = "c##ggate";
my $ggpass    = "ggate";
my @process   = ("replicat");

open (RUNRATES, ">>", $outfile) or die "Unable to open $outfile: $!";
foreach my $i (@process)
{
    #Find the running process groups for this process type
    my @replicats = `ps -ef | grep dirprm | grep $i | grep -v grep | awk '{print \$14}'`;
    chomp(@replicats);

    foreach my $replicat (@replicats)
    {
        #Snapshot 1, wait, then snapshot 2
        my ($curtrail1, $date1, $rba1) = check_replicat($gguser, $ggpass, $replicat);
        sleep($sleeptime);
        my ($curtrail2, $date2, $rba2) = check_replicat($gguser, $ggpass, $replicat);

        my ($rate_min, $rate_hr, $rate_gb_hr, $rate_gb_day) = calc_rate($rba1, $rba2);
        print RUNRATES "$replicat|$date1|$curtrail1|$rba1|$date2|$curtrail2|$rba2|$rate_min|$rate_hr|$rate_gb_hr|$rate_gb_day\n";
    }
}
close (RUNRATES);

#Sub programs

sub check_replicat
{
    my ($user, $pass, $rep) = @_;
    my @buf = `$gghome/ggsci << EOF
dblogin userid $user\@pdb2 password $pass
info replicat $rep, detail
EOF`;

    my ($curtrail, $date, $rba);
    foreach (@buf)
    {
        #Current trail file from the "Log Read Checkpoint" line
        if (/Log Read Checkpoint/ && m/(\.\/\w+\/\w+)/)
        {
            $curtrail = $1;
        }
        #Date and RBA from the checkpoint detail line
        if (/RBA/)
        {
            if (m/(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/)
            {
                $date = "$1-$2-$3 $4:$5:$6";
            }
            if (m/RBA (\d+)/)
            {
                $rba = $1;
            }
        }
    }
    return ($curtrail, $date, $rba);
} #end sub check_replicat

sub calc_rate
{
    my ($first_rba, $sec_rba) = @_;
    my $mb_min = ($sec_rba - $first_rba)/(1024*1024);
    my $mb_hr  = $mb_min*60;
    my $gb_hr  = $mb_hr/1024;
    my $gb_day = $gb_hr*24;
    return ($mb_min, $mb_hr, $gb_hr, $gb_day);
} #end sub calc_rate

This script is a bit longer than I would like; however, it captures all the required information, waits 60 seconds, then gathers the information again for the replicat it is working on. Once the first and second RBAs are grabbed, the script writes the output to a flat file along with the calculations for MB per minute, MB per hour, GB per hour and GB per day.
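The two regular expressions at the heart of check_replicat can be illustrated on their own. In the sketch below (Python, purely illustrative), the two sample lines are an assumption about the shape of the "info replicat" output, not captured from a live system:

```python
import re

# Assumed shape of the two interesting lines from "info replicat <rep>, detail"
sample = (
    "Log Read Checkpoint  File ./dirdat/rt000034\n"
    "                     2014-05-22 01:38:51.000000  RBA 2905346\n"
)

curtrail = date = rba = None
for line in sample.splitlines():
    if "Log Read Checkpoint" in line:
        m = re.search(r"(\./\w+/\w+)", line)   # current trail file path
        if m:
            curtrail = m.group(1)
    if "RBA" in line:
        m = re.search(r"(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})", line)
        if m:
            # reassemble the checkpoint timestamp
            date = "-".join(m.groups()[:3]) + " " + ":".join(m.groups()[3:])
        m = re.search(r"RBA (\d+)", line)      # relative byte address
        if m:
            rba = int(m.group(1))
```

The trail path and the RBA are the only two values the rate calculation needs; the timestamp is just carried along for the report.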

Once the flat file has been written, I can use an external table that allows me to view this data from SQL (see my other post on monitoring GoldenGate). Using the external table, I can see what my run rates are from any SQL-capable tool. Below is a simple query to pull the data from the external table.

Note: Some numbers in the output may be negative. This is due to the subtraction when RBA2 is smaller than RBA1 (for example, right after the replicat switches to a new trail file, where the RBA starts over).

select
        repgroup as processgroup,
        to_char(date1, 'DD-MON-YY HH:MI:SS') snap1,
        curtrail1 as snap1_trail,
        rba1 as snap1_rba,
        to_char(date2, 'DD-MON-YY HH:MI:SS') snap2,
        curtrail2 as snap2_trail,
        rba2 as snap2_rba,
        rate_min_mb,
        rate_hr_mb,
        rate_hr_gb,
        rate_day_gb
from    gg_runrates_ext   -- external table name is illustrative
where   repgroup = 'REP';

--Output (unformatted)--

PROCESSG SNAP1              SNAP1_TRAIL                     SNAP1_RBA SNAP2              SNAP2_TRAIL                     SNAP2_RBA RATE_MIN_MB RATE_HR_MB RATE_HR_GB RATE_DAY_GB
-------- ------------------ ------------------------------ ---------- ------------------ ------------------------------ ---------- ----------- ---------- ---------- -----------
REP      22-MAY-14 01:38:51 ./dirdat/rt000034                 2905346 22-MAY-14 01:38:51 ./dirdat/rt000034                 3197702         286      17130         17         401
REP      22-MAY-14 01:39:49 ./dirdat/rt000034                 3197702 22-MAY-14 01:39:49 ./dirdat/rt000034                 3521610         316      18979         19         445
REP      22-MAY-14 01:40:50 ./dirdat/rt000034                 3521610 22-MAY-14 01:40:50 ./dirdat/rt000034                 3802260         274      16444         16         385
REP      22-MAY-14 01:41:49 ./dirdat/rt000034                 3802260 22-MAY-14 01:41:49 ./dirdat/rt000034                 4112529         303      18180         18         426
REP      22-MAY-14 01:42:49 ./dirdat/rt000034                 4112529 22-MAY-14 01:42:49 ./dirdat/rt000034                 4463477         343      20563         20         482
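If an external table is not an option, the same flat file can be consumed directly. A small sketch (Python, illustrative; the sample line is made up but follows the field order of the print statement in the script above):

```python
# Field order matches the script's pipe-delimited print statement.
FIELDS = ["replicat", "date1", "curtrail1", "rba1", "date2", "curtrail2",
          "rba2", "rate_min", "rate_hr", "rate_gb_hr", "rate_gb_day"]

# One made-up line in the same shape the script writes.
line = ("REP|2014-05-22 01:38:51|./dirdat/rt000034|2905346|"
        "2014-05-22 01:39:51|./dirdat/rt000034|3197702|0.28|16.73|0.02|0.39")

# Split the line and label each field by name.
record = dict(zip(FIELDS, line.rstrip("\n").split("|")))
```

From here each record can be loaded into any reporting tool, or graphed over time to watch apply-rate trends.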


Being able to use an external table to view run rates means additional scripts can be written to report on what is going on within the Oracle GoldenGate apply process, giving administrators a better understanding of their environments.  At the same time, I think this information is valuable in the tuning process of Oracle GoldenGate as the environment grows.

Let me know your thoughts and comments on this, because it is always interesting to see how other organizations solve these issues as well.


twitter: @dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Standalone 12c grid install example

Bobby Durrett's DBA Blog - Wed, 2014-05-21 14:33

Here are my rough screenshots from an install of 12c grid on a standalone Linux vm: pdf

I’m running version 6 of Oracle’s Linux in a VMware Player virtual machine with 4 gigabytes of memory on my Windows 7 laptop.  I ran into a couple of key points that I wanted to document.


I added two new disks to my VM, but after booting the machine they disappeared.  I ended up adding them back and rebooting again, and this time they stayed.  Not sure what to say, except: after you add them, reboot and make sure they are still there.


The new disk devices showed up as /dev/sdb and /dev/sdc.  Then I had to run fdisk on each one to create a partition that took up the entire disk.  That led to devices called /dev/sdb1 and /dev/sdc1.

Next I did some steps related to the oracleasm utility, but I found out later that the step I should have done first was disabling SELinux:


I had to edit the file /etc/selinux/config so that SELINUX=disabled, and reboot.  Then I ran these commands to configure oracleasm:

yum install oracleasm-support
/usr/sbin/oracleasm configure -i
/usr/sbin/oracleasm init
/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
/usr/sbin/oracleasm createdisk DISK2 /dev/sdc1

The next tricky thing was figuring out the path to these two disks during the creation of the disk group as part of the grid install.  Here is what the install screen looked like:


It didn’t have any disks to choose from.  I tried various discovery paths such as /dev/sd* and ORCL:* but finally found one that worked:


Now the disks showed up and I was able to continue:


So, that’s all I wanted to point out: just a few key things I had to understand.  You can check out my link for all the screenshots I took.

- Bobby

Categories: DBA Blogs

Microsoft Analytics Platform System: Name Overhaul in Big Data War!

Pythian Group - Wed, 2014-05-21 08:18

I had the chance to attend a course about what used to be called Parallel Data Warehouse (PDW). PDW was introduced a few years ago with the offering of SQL Server 2008 R2 Parallel Data Warehouse, something very few people could get their hands on. The appliance used to cost a ton of money, had too many parts, and was only offered by HP or Dell in a pre-configured setup.

With SQL Server 2012, Microsoft made many enhancements, slashing the size of the appliance almost in half, and consequently the cost, while improving performance as well.

Just while I was attending the course, Microsoft announced a name change and the introduction of a new fancy brand name: Microsoft Analytics Platform System. CEO Satya Nadella announced the new platform along with other products as well. I suggest reading the blog written by Satya himself.

I’m sharing some of my personal (again, personal) opinions here about the platform and the appliance. So let’s take a step back and explore some of the basics here.

What is PDW (aka APS)?

Microsoft loves those three-letter acronyms, although certifications may be an exception. Not to go to great lengths: it’s a Massively Parallel Processing (MPP) “APPLIANCE” that is optimized for large-scale enterprise data warehouses. The appliance is a logical entity that encompasses servers, switches and storage enclosed in a rack. Think of it as your typical environment of servers, switches and storage, all brought together in one huge rack appliance. The idea behind the appliance is simple: we do all the dirty work for you and give you something “optimized” to handle huge amounts of data without the hassle of configuration, tuning and license worries. Of course, the word “optimized” is according to Microsoft’s terms.


The appliance is not only about SQL Server; it also incorporates Hadoop and an engine, PolyBase, to simplify talking to Hadoop using conventional SQL Server T-SQL, and it can also tap HDInsight to reach out to data stored in the cloud.

The appliance is only available through a few vendors; it used to be only HP and Dell, but the list now also includes Quanta. Please check the resources section for more information about the Microsoft Analytics Platform System.

The following video vividly tries to demonstrate the usage of APS:
Why PDW/APS appliance ?

Customers with huge amounts of data that span heterogeneous sources always want to get meaningful information out of that data. The more data they have, the harder it is and the longer it takes to extract key information. Appliances are tuned machines with massive resources that help analyze, aggregate and join data much faster. Conventional SMP machines can work up to a certain level with much-needed tuning and optimization that may not always work. Vendors take on this tuning and optimization responsibility and present you with a sleek machine that is supposed to overcome multiple design and resource limitations. Some examples of existing appliances are Oracle Exadata, Teradata Data Warehouse Appliance, and IBM PureData and Netezza.

Are you saying that a conventional SQL Server setup can’t achieve this? Not entirely. Think of this as car models that may be based on the same chassis, where the high-end models have more powerful engines, features and performance. Although SQL Server keeps bringing enhancements like updatable clustered columnstore indexes and in-memory OLTP in SQL Server 2014, the PDW/APS appliance differs from conventional SMP in the following areas:

  1. The PDW/APS appliance is a bundled hardware and software offering customized to scale out. You can add (and, with more difficulty, remove) nodes to scale out to your data needs. Each “node” runs on a separate server with a separate SQL Server instance and hardware resources, managed by a “control” node that distributes the workload. You can read the APS Solution Brief here.
  2. You can’t buy a PDW SQL Server licence and install it in your own environment, and you cannot assemble the product yourself even if you have the blueprint; you just get it from one of the vendors and plug-and-play it.
  3. PolyBase plugs in Hadoop. You may be able to connect an existing SQL Server with Hadoop, but PolyBase provides easy-to-use T-SQL functions to extract data from Hadoop, providing an almost immediate ability to query Hadoop without a long learning curve.
  4. Many conventional SQL Server features are suppressed. Yes, I can see your raised eyebrows, but the idea is that Microsoft wanted to remove areas that can introduce bottlenecks such as CLR, extended DLLs and even SQL Server Agent. If you need to do something that can’t be done inside PDW, such as scheduled jobs, then move it to another tier.

Where does PDW/APS fit?

The cost of buying and running the appliance suggests that it’s not for everyone. The available resources and the bundling of PolyBase to connect to Hadoop show that it’s for enterprises with huge, heterogeneous amounts of data spread around. Bringing this data together with the least customization is the goal of the appliance.

PDW/APS can help bring data together from following areas:

  • Social apps
  • Mobile
  • Sensors
  • Web
  • Other data sources such as RDBMS

The appliance fits a segment of industries, notably:

  • Retail
  • Healthcare
  • Financials
  • Manufacturing
  • Social media

Microsoft has some case studies about clients deploying PDW and realizing up to 100x performance gains. Here are the case studies:

  1. MEC -Media Firm Uncovers the Value of Digital Media with Parallel Data Warehouse
  2. The Royal Bank of Scotland – Leading UK Bank Gains Rapid Insight into Economic Trends with Analytics Platform System
  3. Progressive Insurance – Progressive Data Performance Grows by Factor of Four, Fueling Business Growth Online Experience

I believe Microsoft has to do more to get the appliance to more customers, and I think the brand rename is part of this push. Training, support and deployment materials are also needed, since there are not many resources online.
Is PDW worth it for clients?

Microsoft competes against Teradata, Netezza and Oracle Exadata. Cost is a very big factor: licence and support. MS tends to do fairly well with MS-shop customers and its SQL Server client base. However, the first version of PDW tended to be expensive and bulky; the current appliance is almost half the price and half the size it used to be. Expertise still seems to be scarce, and this is what MS is working on.

Microsoft’s pitch is that instead of investing too much, for too long, in creating the same technology by trial and error and spending much time on tuning, here’s a working appliance that has been tuned for you, so you can just focus on your business. Per the following chart, MS claims the cheapest price per TB compared to other vendors.


Regardless, the appliance makes even more sense to existing SQL Server clients; however, I still see clients sticking to solutions from the vendor of the main RDBMS technology they run, since importing data will be easier and the learning curve less steep.

The appliance will make a case for mid-size to large enterprises adding terabytes of new data each month, including unstructured data. SMP SQL and APS may overlap in the region of a few to tens of terabytes, but once we talk about hundreds of terabytes, including unstructured data, APS starts to make sense.
PDW Against Cloud?

There are a few reasons clients may opt for an on-premise appliance, including:

  1. Some data is too sensitive to trust to the cloud.
  2. The amount of data is too huge to upload to the cloud.
  3. The cloud is not yet mature enough to support all features.

Most cloud vendors are not yet ready to offer such an expensive appliance. Amazon, however, has something similar called Redshift. They actually make a case against on-premise solutions, saying you don’t need those bulky, expensive appliances that can break when you can use Amazon’s cloud solution and pay “less”. However, there are a few points:

  1. The MS appliance tries to play in the lower cost segment.
  2. You don't need lots of DBAs to manage it. In fact, I was surprised that MS took out most of the parts that can introduce problems with the appliance: no CLR, no extended modules, and many other features are disabled. They applied best practices and locked the configuration down so you cannot tamper with the appliance. I was told that the only thing that can cause APS to fail is a hardware failure, and since there is already redundancy, even that is less probable.
  3. Not everyone wants their data in the cloud, mostly for security reasons. I was told about a customer who wanted to destroy the hard drives with a hammer after testing the appliance. It took a few weeks to zero-write the drives, several times over.
  4. Transferring a lot of data to a public cloud is not that fast, unless you are already hosting your data in the same location, such as on Amazon.

APS vs. Hadoop

Hadoop excels at unstructured data such as text, sensor data, web crawls, etc. Whether you already have Hadoop clusters running or plan to, you may still have valuable relational data stored in your existing SQL Server instances. APS makes it easy to bridge the gap between the two and use T-SQL to join data from both sources without worrying much about the design of the Hadoop cluster.
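As an illustrative sketch only (table names, the HDFS path, and the exact WITH options are assumptions, and PolyBase syntax varies by appliance version), the bridge can look like this:

```sql
-- Hypothetical sketch: expose HDFS click data to T-SQL via an
-- external table, then join it with relational data already on
-- the appliance. Names and paths are illustrative assumptions.
CREATE EXTERNAL TABLE dbo.WebClicks (
    UserId    INT,
    ClickTime DATETIME2,
    Url       VARCHAR(2000)
)
WITH (
    LOCATION = 'hdfs://namenode:8020/logs/clicks/',
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- Plain T-SQL join across the two worlds:
SELECT c.Name, COUNT(*) AS Clicks
FROM   dbo.Customers  AS c
JOIN   dbo.WebClicks  AS w ON w.UserId = c.CustomerId
GROUP  BY c.Name;
```

The point is that, once declared, the HDFS data behaves like any other table in a query; the appliance handles the split execution.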

APS region share

The USA is on top, then the EU, then Asia.
APS and SQL 2014

The appliance still runs a customized version of SQL Server 2012; no word yet on when SQL Server 2014 will be introduced. Upgrading the cluster is supported, but it is not something an end customer can do yet.
PDW support

Many cases, like failing back and downsizing, are still supported only by Microsoft Customer Service and Support (CSS), but they are trying to automate some of these tasks so that experienced DBAs can perform them.
DBA role with APS

Don’t hold your breath! As outlined before, much of the configuration and optimization is done for you, in what MS believes is the best balance. You cannot do many of the tasks a DBA normally does, like changing tempdb configuration or max degree of parallelism. That is the idea behind the appliance, in fact: focusing on bringing the data to the appliance and modifying your design to extract information.

When I worked with the appliance, I didn’t find it particularly complex from an operations point of view. Most of the work is about bringing data in and figuring out the best way to join data together without shuffling it around, and that is about understanding the data and the business logic.

You can help in the following areas though:

  • Exporting/importing data from existing data sources into APS
  • Advising on changes to the design and business logic to comply with appliance requirements
  • Advising on how to bring data together for analysis
  • Designing and implementing data extraction practices
  • Troubleshooting analysis routines and schedules
  • Troubleshooting long-running queries and schedules. APS has a web portal where you can see all running processes; you can also use DMVs to get this information, some of which are unique to APS.
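For example, a sketch of such a DMV query (assuming the PDW-specific view sys.dm_pdw_exec_requests; the column choices are illustrative):

```sql
-- Sketch: the ten longest-running active requests on the appliance,
-- via one of the PDW-specific DMVs.
SELECT TOP 10
       request_id,
       status,
       total_elapsed_time,   -- elapsed time in milliseconds
       command
FROM   sys.dm_pdw_exec_requests
WHERE  status = 'Running'
ORDER  BY total_elapsed_time DESC;
```

The same information surfaces in the appliance's web portal; the DMV route is handy for scripting.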


The APS appliance targets customers with huge amounts of data spanning heterogeneous sources who need a ready-to-plug solution.

With the rebranding of the PDW appliance, Microsoft is getting more serious about big data and analytics. However, many vendors are well established here, and it is still a long race in a market that is expected to explode, if it hasn't already.

Happy reading!

Categories: DBA Blogs

Everyone Should Have Firefighter Mentality

Pythian Group - Wed, 2014-05-21 07:43

Australia, being the most arid country in the world, is known for its bushfires. The part of Australia where I live doesn’t have many bushfires, but we do have our share of building fires, like any other part of the world.

A firefighter is what many kids want to be when they grow up. Kids are inspired by the shiny, clean, smart outward appearance of firefighter crews passing majestically, sirens ringing, in their immaculately clean trucks.

While volunteering at one of the fire stations in my suburb on a quiet day, I found the firefighters doing nothing but polishing their trucks, cleaning their gear, making sure everything was in optimal order, and waiting on their toes for the next emergency.

No matter what field you are in or what profession you follow, no matter which step of the corporate ladder you are on, and whether you are a full-time employee, a contractor, or a business owner, it is a fact that there are ample quiet times: small chunks of hushed, still periods during each workday.

Those still moments are the “firefighter” time. Don’t let that time go to waste. Clean your desk and your tools. Polish your skills, and think about yourself and your career. Identify your areas for improvement, and grow your network. Read, write, or help out others.

In other words, get ready for that fire.

Categories: DBA Blogs

New OTN Tour

Hans Forbrich - Mon, 2014-05-19 11:51
For those of you who watch the OTN Tours, here is a list of the big ones this year

Feb 2014 - OTN Yathra by AIOUG (India) ...
May 2014 - OTN MENA by ARAB OUG ...
August 2014 - OTN LAD North
August 2014 - OTN LAD South
November 2014 - OTN APAC

I was part of OTN Yathra (fantastic time again, thanks Murali), and have my name in the hat for LAD and APAC.  Unfortunately MENA conflicts with other scheduled events.
Categories: DBA Blogs

HA of Database Control for RAC made easy

The Oracle Instructor - Mon, 2014-05-19 09:11

When you install an 11g RAC database without Grid Control or Cloud Control present, this is what the DBCA will give you:

There is only one Database Control OC4J container, running on host01. Should host01 go down, Enterprise Manager is no longer available. We could make it a resource known to the clusterware and let it fail over in that case. But also, and even easier, we can start a second OC4J container to run on host02 simultaneously, like this:

Let’s see how to implement that:


[oracle@host01 ~]$ emca -reconfig dbcontrol -cluster -EM_NODE host02 -EM_NODE_LIST host02

STARTED EMCA at May 14, 2014 5:16:14 PM
EM Configuration Assistant, Version Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: orcl
Service name: orcl
Do you wish to continue? [yes(Y)/no(N)]: yes
May 14, 2014 5:16:26 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/orcl/emca_2014_05_14_17_16_14.log.
May 14, 2014 5:16:29 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
May 14, 2014 5:16:34 PM oracle.sysman.emcp.EMAgentConfig performDbcReconfiguration
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/host02_orcl/sysman/config/ to remote nodes ...
May 14, 2014 5:16:34 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
May 14, 2014 5:17:33 PM oracle.sysman.emcp.EMDBPostConfig performDbcReconfiguration
INFO: Database Control started successfully
May 14, 2014 5:17:34 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
****************  Current Configuration  ****************
----------        ----------        ---------------------

orcl              host01    
orcl              host02    

Enterprise Manager configuration completed successfully
FINISHED EMCA at May 14, 2014 5:17:34 PM
[oracle@host01 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.

Oracle Enterprise Manager 11g is running.
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/host01_orcl/sysman/log

Not only can I access Database Control at host01 as usual, I can also get it at host02 now:

[oracle@host01 ~]$ ssh host02
Last login: Wed May 14 10:50:32 2014 from
[oracle@host02 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
Oracle Enterprise Manager 11g is running.
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/host02_orcl/sysman/log

All this is of course not new, but you won’t find it easily in the docs. That is something from my RAC accelerated course last week in Reading, by the way. Even seasoned RAC DBAs are sometimes not aware of this option, so I thought it might be helpful to publish it here briefly :-)

Tagged: Database Control, High Availability, Oracle Enterprise Manager, RAC
Categories: DBA Blogs

Partitions and Segments and Data Objects

Hemant K Chitale - Mon, 2014-05-19 07:47
Have you ever wondered how Partitions are mapped to Segments? How does Oracle identify the "data object" (as distinct from the logical object) that maps to the Segment for an Object?

[Why does Oracle differentiate between "object_id" and "data_object_id"? An object may be created without a Segment. An existing Segment for an object (e.g. a Table) may be recreated (e.g. by an ALTER TABLE tablename MOVE command), thus changing its "data_object_id" without changing its "object_id".]

For a Partitioned Table, every Partition is an Object. But (given the "deferred_segment_creation" behaviour), the Segment is created only when the Partition is populated with one or more rows. What happens when a Partition is SPLIT?

Here is a simple demonstration with some notes :

SQL> drop table test_partitioned_table purge;

Table dropped.

SQL> -- create test table with 5+1 partitions
SQL> create table test_partitioned_table
2 (
3 id_column number,
4 data_column varchar2(15)
5 )
6 partition by range (id_column)
7 (
8 partition p_100 values less than (101),
9 partition p_200 values less than (201),
10 partition p_300 values less than (301),
11 partition p_400 values less than (401),
12 partition p_500 values less than (501),
13 partition p_max values less than (MAXVALUE)
14 )
15 /

Table created.

SQL> -- populate the first 4 partitions
SQL> insert into test_partitioned_table
2 select rownum, to_char(rownum)
3 from dual
4 connect by level < 379
5 /

378 rows created.

SQL> -- identify the segments that did get created
SQL> -- note : Use DBA_SEGMENTS as HEADER_% information is not available in USER_SEGMENTS
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 order by partition_name
7 /

PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_100 11 34449
P_200 11 35473
P_300 11 36497
P_400 11 38417

SQL> -- identify the objects
SQL> -- use the DBA_OBJECTS view for consistency with previous query on DBA_SEGMENTS
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 order by subobject_name
7 /

SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_100 114541 114541
P_200 114542 114542
P_300 114543 114543
P_400 114544 114544
P_500 114545 114545
P_MAX 114546 114546

6 rows selected.


Notice how, although there are 6 Partitions and 6 Objects, there are only 4 Segments. Only the first 4 Partitions, which have rows in them, now have Segments associated with them. Pay attention to the (HEADER_FILE, HEADER_BLOCK) and DATA_OBJECT_ID values as I proceed to manipulate the Partitions.

SQL> -- split the first partition
SQL> alter table test_partitioned_table
2 split partition p_100
3 at (51)
4 into (partition p_50, partition p_100)
5 /

Table altered.

SQL> -- identify the segments
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_50','P_100')
7 order by partition_name
8 /

PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_100 11 40465
P_50 11 39441

SQL> -- identify the objects
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name in ('P_50','P_100')
7 order by subobject_name
8 /

SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_100 114541 114548
P_50 114547 114547


Notice how Partition P_50 gets a new OBJECT_ID and DATA_OBJECT_ID (above the highest then-existing values). But did you notice that the (HEADER_FILE, HEADER_BLOCK) pair and the DATA_OBJECT_ID for Partition P_100 are completely new values? Oracle has created a *new* Segment for Partition P_100 and discarded the old one. The SPLIT operation has created two *new* Segments and removed the old Segment for Partition P_100. What does this also mean? Oracle had to actually rewrite all 100 rows in that Partition in the process of creating two new Segments. Let me say that again: Oracle had to rewrite all 100 rows.

Let me proceed with another test.

SQL> -- insert one row into the 5th partition
SQL> insert into test_partitioned_table
2 select 450, to_char(450) from dual
3 /

1 row created.

SQL> -- identify the segment
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name = 'P_500'
7 order by partition_name
8 /

PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_500 11 34449

SQL> -- identify the object
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name = 'P_500'
7 order by subobject_name
8 /

SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_500 114545 114545


Inserting a row into Partition P_500 has created a Segment (which did not exist earlier).  [Coincidentally, it has actually "reused" the one extent that earlier belonged to Partition P_100 -- look at the (HEADER_FILE, HEADER_BLOCK) pair --  and had become a free extent  for a while.  I say "coincidentally" because if there had been some other extent allocation for this Table or any other Table or Index in the same tablespace, that free extent could have been reused by another Partition / Table / Index].  The DATA_OBJECT_ID had already been allocated to the Partition when the Table was created, so this doesn't change.
I have deliberately inserted  a "boundary" value of 450 in this Partition.  This will be the maxvalue for Partition P_450.  I will now proceed to split the Partition along this boundary.

SQL> -- split the 5th partition
SQL> -- now p_450 will have the 1 row and p_500 no rows
SQL> alter table test_partitioned_table
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL> -- identify the segments
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_450','P_500')
7 order by partition_name
8 /

PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_450 11 34449
P_500 11 41489

SQL> -- identify the objects
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name in ('P_450','P_500')
7 order by subobject_name
8 /

SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_450 114549 114545
P_500 114545 114550


Now, isn't that interesting ? Partition P_450 has "inherited" the (HEADER_FILE, HEADER_BLOCK) and DATA_OBJECT_ID of what was Partition P_500 earlier. What has happened is that the Segment for Partition P_500 has now become the Segment for Partition P_450 while a *new* Segment (and DATA_OBJECT_ID) has been created for Partition P_500. Effectively, the physical entity (Segment and Data_Object) for Partition P_500 has been "reused" for Partition P_450 while Partition P_500 has been "born again" in a new incarnation. This SPLIT (unlike the earlier SPLIT) resulted in only 1 new Segment (and Data_Object).  The existing row remained in the existing Segment without being rewritten.  The new Segment is created for any "empty" Partition.

For further reading, I suggest that you read up on "fast split" operations under "Optimizing SPLIT PARTITION and SPLIT SUBPARTITION Operations".

SQL> select * from test_partitioned_table partition (p_450);

 ID_COLUMN DATA_COLUMN
---------- ---------------
450 450

SQL> select * from test_partitioned_table partition (p_500);

no rows selected


There you can see that the row is in Partition P_450 whose physical extent is the same as before.

Note : In my demonstration, each Segment is only 1 Extent.

SQL> l
1 select partition_name, extents
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6* order by partition_name
SQL> /

PARTITION_NAME                    EXTENTS
------------------------------ ----------
P_100 1
P_200 1
P_300 1
P_400 1
P_450 1
P_50 1
P_500 1

7 rows selected.


You may have to be dealing with Segments with multiple Extents.

Another takeaway from the query on DBA_SEGMENTS is that (OWNER, SEGMENT_NAME) is not the identifying key for a Segment. In fact, for a Partitioned Table there is *NO* Segment for the Table itself; there are Segments for the Table Partitions. The query on DBA_SEGMENTS must be on (OWNER, SEGMENT_NAME, PARTITION_NAME) with SEGMENT_TYPE = 'TABLE PARTITION'.


Categories: DBA Blogs

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 04:21

 Interesting info-graphics on Data-center / DB-Manageability

Categories: DBA Blogs

last post on GLOC 2014 conference ... new technical stuff coming soon

Grumpy old DBA - Sun, 2014-05-18 18:25
One of the presentations I am going to work on next will be in the area where SQL Plan Management intersects with AWR information. At GLOC 2014, Craig Martin had a very nice one that is kick-starting my interest in coming up with something relevant and worthwhile.

The conference itself was an epic "best ever" kind of event, one that made you feel valued despite all the hard work (and there was plenty of that). It was above and beyond what we were hoping for, achieved through a combination of top speakers and great workshops.

Somehow the conference even did well when, at the last moment, our keynote speaker for the final day of the event scratched on us. Through a run of bad luck and problems in Chicago, Steven Feuerstein was unable to fly into Cleveland. After 15 years of doing events, this was the first time a keynote speaker did not make it. Yeah, I know, we should have had a contingency plan in place.

Ric Van Dyke from Hotsos stepped up to the plate and delivered an exceptional keynote in Steven's place.  Thank you Ric!

A special thanks to all of our NEOOUG officers and our conference planning committee which also included several people outside of NEOOUG ... you know who you are and we could not have done this without you!

Finally, my appreciation for our conference chair, Rumpi Gravenstein! He is the visionary who started this whole journey for us. His energy, enthusiasm, and dedication are inspiring to see!
Categories: DBA Blogs

SQL Activity for the last hour

DBA Scripts and Articles - Sat, 2014-05-17 12:59

This script can be used to show the top 10 SQL activity for the last hour. It uses the v$active_session_history view to find the top SQL by resource consumption. Here is the result you can obtain, along with the active sessions history graph for the same period:
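A minimal sketch of such a query, ranking SQL by ASH sample count over the past hour (the original script's exact columns and ranking criteria may differ), could look like:

```sql
-- Sketch: top 10 SQL by active-session samples in the last hour.
-- Each ASH sample approximates one second of active database time,
-- so the sample count is a rough proxy for resource consumption.
SELECT *
FROM  (SELECT   sql_id,
                COUNT(*) AS ash_samples
       FROM     v$active_session_history
       WHERE    sample_time > SYSDATE - 1/24
       AND      sql_id IS NOT NULL
       GROUP BY sql_id
       ORDER BY COUNT(*) DESC)
WHERE ROWNUM <= 10;
```

Joining the result to v$sql on sql_id would add the statement text.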

The post SQL Activity for the last hour appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

12c Online Table Redefinition Examples

Bobby Durrett's DBA Blog - Fri, 2014-05-16 16:22

I’ve been learning about online table redefinition in Oracle 12c.  Here is a zip of some test scripts I built for my own understanding: zip

I spent some time trying out DBMS_REDEFINITION on an 11.2 Exadata test system and then started messing with it on 12c in a generic Linux VM.  The 12c version of the DBMS_REDEFINITION includes a new procedure called REDEF_TABLE which lets you do in one step certain things you did with multiple calls to the package in 11.2.  This is an online table compress on 12c:

BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                      => user,
    tname                      => 'TEST',
    table_compression_type     => 'COMPRESS');
END;
/

Things that can’t be done in one step like this require calls to procedures such as CAN_REDEF_TABLE, START_REDEF_TABLE, REGISTER_DEPENDENT_OBJECT, COPY_TABLE_DEPENDENTS, and FINISH_REDEF_TABLE.  Example online12c5.sql uses all of these.  Here is a summary of each included file’s test:

online12c1.sql – compress table in one step

online12c2.sql – compress table in multiple steps and show that it creates a hidden column without a primary key

online12c3.sql – same as previous test but with primary key

online12c4.sql – copy constraints and indexes

online12c5.sql – change the columns for the non-unique index

online12c6.sql – change order and type of columns
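As a rough sketch of that multi-step flow (the interim table TEST_INT and the use of the primary-key option are illustrative assumptions, not the exact contents of online12c5.sql):

```sql
-- Assumes TEST has a primary key and that an interim table TEST_INT
-- has been created beforehand with the desired new structure.
BEGIN
  -- 1. Verify the table can be redefined online (by primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE(
    uname        => user,
    tname        => 'TEST',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- 2. Start the redefinition; rows are copied to the interim table
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => user,
    orig_table   => 'TEST',
    int_table    => 'TEST_INT',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- 3. Copy indexes, constraints, triggers, and privileges
  DECLARE
    num_errors PLS_INTEGER;
  BEGIN
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      uname      => user,
      orig_table => 'TEST',
      int_table  => 'TEST_INT',
      num_errors => num_errors);
  END;

  -- 4. Swap the identities of the two tables
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname      => user,
    orig_table => 'TEST',
    int_table  => 'TEST_INT');
END;
/
```

REGISTER_DEPENDENT_OBJECT slots in before step 3 when a dependent object (such as an index you want to restructure) should not simply be copied as-is.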

- Bobby

Categories: DBA Blogs

Log Buffer #371, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-16 07:56

Fueled by the massive growth of data and propelled by the mammoth future potential of its applications, bloggers across the database technologies are finding new things to explore in the light of previous knowledge. This Log Buffer Edition covers it all.


To configure an instrumentation watch, you first need to know what instrumentation is, and how to instrument applications or servers.

Why Choose to Deploy Agile PLM in the Cloud?

One of the things that makes JSF different from other approaches to server-side Java web development is that it offers a much higher level of abstraction and transparently maintains UI state.

Step by step instructions for setting up a development environment for doing development with Hippo CMS.

Oracle Solaris 11.2 at the Atlanta OpenStack Summit

SQL Server:

RAID and Its Impact on your SQL Performance.

Microsoft Azure Diagnostics Part 1: Introduction

Using Encrypted Backups in SQL Server 2014

A new plug in for Management Studio from Red Gate is free. It will give you access to all the scripts at SQLServerCentral, including your own briefcase.

Validate File Properties with PowerShell Prior to Import into SQL Server


Benchmark: SimpleHTTPServer vs pyclustercheck (twisted implementation)

Archival and Analytics – Importing MySQL data into Hadoop Cluster using Sqoop

Cross your Fingers for Tech14, see you at OSCON

New Tungsten Replicator 2.2.1 now available

MySQL May Newsletter is Available!

Categories: DBA Blogs

Consider speaking at #ukoug_tech14

The Oracle Instructor - Thu, 2014-05-15 10:11

The call for papers is still open for UKOUG Tech 14 – a great event to speak at for sure!

The board explicitly encourages first-time speakers and women to submit an abstract.

Neither applies to me, but I have submitted abstracts all the same :-)

I can say only the best about the past annual conferences of the UK Oracle User Group. Great speakers, very good conditions and an excellent opportunity to get in touch with other Oracle enthusiasts.

So if you – yes, YOU! – are an expert in Oracle Core Technology but have hesitated so far to speak at public events about your topics, this might be the best time to get over it :-)

Tagged: #ukoug_tech14
Categories: DBA Blogs

(Slightly Off Topic) Spurious Correlations

Hemant K Chitale - Wed, 2014-05-14 09:33
During the course of the job, we find, discover, and analyze "data" and come up with "information". Sometimes we find correlations and "discover" causes. We say "Event 'A' caused Result 'X'". However, it can so happen that some "discovered" correlations are not "causal correlations", i.e. Event 'A' has NOT really caused Result 'X'. The mathematical correlation (the "coefficient of correlation") may be high, but there really is no logical or physical association between the two.

Here are some examples of Spurious Correlations.

The next time you say "I find a high correlation between the two", stop and think. For a study of common biases and fallacies, I recommend "The Art of Thinking Clearly" by Rolf Dobelli.


Categories: DBA Blogs

Oracle Passwords: How to Use Punctuation Symbols

Pythian Group - Wed, 2014-05-14 07:04

You can’t make a password strong enough, but at least you can try. With at least one upper-case character, one lower-case character, one number, one punctuation mark or symbol, and more than 8 characters, you have a password that can be considered decently safe. Probably.

In Oracle, if you embed punctuation marks within your passwords, you have to use single quotes around the password if you are using the orapwd utility from the command line. If you are altering the password from the sqlplus utility, you need to use double quotes around the password.

Example of both is given below, as sys password is being changed:

From orapwd:

$ORACLE_HOME/dbs $ orapwd file=orapwtest password="My1s!pass" entries=40  force=y
-bash: !pass": event not found

$ORACLE_HOME/dbs $ orapwd file=orapwtest password='My1s!pass' entries=40  force=y

Changed successfully.

From sqlplus:

SQL> alter user sys identified by 'My1s!pass';
alter user sys identified by 'My1s!pass
ERROR at line 1:
ORA-00988: missing or invalid password(s)

SQL> alter user sys identified by "My1s!pass";

User altered.
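The "event not found" failure above comes from interactive bash applying history expansion to the `!` inside double quotes; single quotes suppress it. A quick shell sketch:

```shell
# Inside double quotes, an interactive bash session expands '!' via
# history expansion, which is what broke the first orapwd call above.
# Single quotes pass the string through literally:
pw='My1s!pass'
printf '%s\n' "$pw"   # prints: My1s!pass

# Note: history expansion is disabled in non-interactive scripts,
# so this pitfall only bites at an interactive prompt.
```

Inside sqlplus, bash is out of the picture; there the double quotes are Oracle's own syntax for quoted identifiers and passwords.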


Categories: DBA Blogs

Tackling the challenge of Provisioning Databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 01:03

One of the key tasks that a DBA performs repeatedly is provisioning databases, which also happens to be one of the top 10 Database Challenges as per the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization or a long and error-prone process. This is where Enterprise Manager 12c can help, by making this a standardized process using profiles and lock-downs, plus role and access separation, where a lead DBA can lock certain properties of a database (like character set, Oracle Home location, or SGA) and junior DBAs can't change those during provisioning. The image below describes the solution:

In Short :

  • It's fast

  • It's easy

  • And you have complete control over the lifecycle of your dev and production resources.

I actually wanted to show step-by-step details on how to provision an 11204 RAC using the Provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:

Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database

Other Resources : 

Official Doc :

Screen Watch :

Others :

Categories: DBA Blogs