Pythian Group

Love Your Data

An Effective Approach to Migrate Dynamic Thrift Data to CQL, Part 1

Tue, 2016-05-17 16:00
1. Introduction

Recently Pythian’s Cassandra team worked on a customer request to copy the data of several Cassandra tables between two Cassandra clusters (not Cassandra data centers). Our original approach was to copy the data with the Cassandra COPY TO/FROM commands, because the amount of data to be copied was not large (several hundred megabytes per table). The commands executed successfully, but for one of the copied tables the application complained about missing data. We examined the data for that table using the Cassandra cqlsh utility and found no discrepancy. After further discussion with the customer, we realized that the source tables had been created and manipulated by a Thrift-based application, which can dynamically create different columns for different rows, although each row does share a common set of statically defined columns. It was the data in these dynamic columns that went missing during the copy process.

We addressed the issue, but in the end we felt that we should write something about Cassandra data migration from Thrift to CQL because this is quite a common problem faced by many existing Cassandra users right now, considering that Cassandra is gradually phasing out Thrift and replacing it with CQL.

This post is the first part of a three-post series. In this post (Part 1), we’re going to dive into the details of the Cassandra storage engine (pre-3.0 versions) and explore some fundamental concepts that are key to understanding the discussion in the following posts. In the next post (Part 2), we’re going to explore how Cassandra tables can be defined statically, dynamically, or in a mixed mode in Thrift, and what the corresponding table definitions in CQL are. In the last post (Part 3), we’ll present an effective approach to migrating dynamically generated data in Thrift into a statically defined CQL table, without suffering any data loss.

1.1. Cassandra Transition from Thrift to CQL API

Apache Thrift is a software framework developed at Facebook for “scalable cross-language services development”. In the early days of Cassandra, the Thrift-based API was the only way to develop Cassandra client applications. But with the maturity of CQL (Cassandra Query Language), Cassandra is gradually moving away from the Thrift API to the CQL API. Along with this trend:

  • Thrift-based client drivers are no longer officially supported.
  • The Thrift API will not get new Cassandra features; it exists simply for backward compatibility purposes.
  • The CQL-based “cqlsh” utility is replacing the Thrift-based “cassandra-cli” utility as the main command-line tool to interact with Cassandra.

Please note that the latest version of CQL is 3.x (3.4 as of May 2016). CQL 2 is deprecated and was removed in Cassandra 2.2 and later. In the discussion below, we will simply use CQL to refer to CQL 3.

2. Overview of Internal Cassandra Data Storage Structure

Please note that since Cassandra 3.0, the underlying storage engine for Cassandra has gone through a lot of changes. The discussion in this post series is for pre-3.0 Cassandra (v 2.2 and before).

At a very high level, a Cassandra table (or column family, in the older terminology) can be seen as a map of sorted maps in the following format *:

     Map<RowKey, SortedMap<ColumnKey, ColumnValue>>

A graphical representation looks like below:
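
Roughly, a single Storage Row in this model can be pictured like this (the row key, column names, values and timestamps below are made up purely for illustration):

RowKey: "user_12345"
  "age"        -> (value="36",              timestamp=1463528100000000)
  "email"      -> (value="bob@example.com", timestamp=1463528100000001)
  "last_login" -> (value="2016-05-10",      timestamp=1463528100000002)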


Please note that although there are more complex structures such as Super Columns and Composite Columns, the basic idea remains the same, and the representation above is good enough for describing the problem in this document.

Internally, both the Thrift and CQL APIs are built on this same storage structure. The difference is that the Thrift API manipulates the storage structure directly, while the CQL API does so through an abstraction layer and presents the data to the user in a tabular form, similar to what SQL does for a relational database.

To make this clearer, let’s use an example to compare the outputs of the cassandra-cli and cqlsh command-line utilities, which are based on the Thrift and CQL protocols respectively. The table schema is defined as below (in CQL format):

CREATE TABLE song_tags (
   id uuid,
   tag_name text,
   PRIMARY KEY (id, tag_name)
);

This simple table is used to maintain song tags. After inserting several rows into this table, we examined the table content using both the Thrift-based “cassandra-cli” utility and the CQL-based “cqlsh” utility. The result is as below:
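
As a rough, hand-written sketch (not an actual capture) of what the two views typically look like for such a table, assuming two tags were inserted for one song id (the UUID and timestamps below are made up):

In cqlsh, the data comes back as two CQL Rows:

 id                                   | tag_name
--------------------------------------+----------
 8a172618-b121-4136-bb10-f665cfc469eb |     1973
 8a172618-b121-4136-bb10-f665cfc469eb |    blues

At the storage level (the view a Thrift client such as cassandra-cli works with), the same data is a single Storage Row whose cell names are derived from the clustering values:

RowKey: 8a172618b1214136bb10f665cfc469eb
=> (name=1973:, value=, timestamp=1463528100000000)
=> (name=blues:, value=, timestamp=1463528100000001)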


From the example above, it can easily be noticed that the terms “row” and “column” don’t share the same meaning between Thrift and CQL, and this causes some confusion when people do a Thrift-to-CQL migration. For example:

  • In Thrift, one row means one data partition, as determined by the partition key definition. Each row has multiple columns, or more precisely “cells”. Each cell contains the timestamp of when it was created. The name/key of a cell/column is not necessarily the name defined in the table definition, especially when there are clustering column(s) defined (just as in the example above).
  • In CQL, one row could be one partition, or could be one part of a partition. It really depends on how the partition key and clustering key are designed.

Due to such differences, I’m going to use the following terms in this document for clarification:

  • “CQL Row”, or simply “Row”, refers to a row in the CQL context.
  • “Storage Row” refers to a row in the Thrift context.
  • “Column” refers to a column in either the CQL or the Thrift context.
  • “Cell” refers specifically to a column in the Thrift context.
  • “Table” refers to a table in the CQL context, or a column family in the Thrift context.
  • “Partition” refers to a data partition determined by the (hashed) partition key. In the Thrift context, “Partition” and “Storage Row” have the same meaning. In the CQL context, one “Partition” includes multiple “CQL Rows”.

3. Compact Storage

In CQL, a table property called COMPACT STORAGE is provided for backward compatibility. As the name suggests, tables created with this directive consume less storage space than those created without it.

To make this clear, let’s again use an example. Two tables are created to keep track of the average student grades for classes in a school. Their definitions are exactly the same, except that one (avg_grade) is defined without the COMPACT STORAGE property and the other (avg_grade2) is defined with it. The same records are then inserted into both tables, as below:

CREATE TABLE avg_grade (
    student_id int,
    class_id int,
    grade double,
    PRIMARY KEY (student_id, class_id)
);

CREATE TABLE avg_grade2 (
    student_id int,
    class_id int,
    grade double,
    PRIMARY KEY (student_id, class_id)
) WITH COMPACT STORAGE;

insert into avg_grade(student_id, class_id, grade) values (1, 1, 75.2);
insert into avg_grade(student_id, class_id, grade) values (2, 1, 81.3);

insert into avg_grade2(student_id, class_id, grade) values (1, 1, 75.2);
insert into avg_grade2(student_id, class_id, grade) values (2, 1, 81.3);

The statements are executed in cqlsh utility. The data is then flushed from memory to disk with “nodetool flush” command. After that, sstable2json utility is used to examine the contents of the SSTable data files for both tables. Below is the output:
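
As a rough, simplified sketch (not the actual output; key encodings, cell ordering and the exact JSON shape vary by Cassandra version), the non-compact table stores an extra row-marker cell and repeats the column name in each cell key, while the compact table stores just the value keyed by the clustering value:

avg_grade (non-compact), partition for student_id 1:
[{"key": "1",
  "cells": [["1:","",1463528100000000],
            ["1:grade","75.2",1463528100000000]]}]

avg_grade2 (COMPACT STORAGE), partition for student_id 1:
[{"key": "1",
  "cells": [["1","75.2",1463528100000000]]}]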


From the output above, we can see that the table NOT in compact storage mode has more cells within each Storage Row (e.g. the extra cell with an empty value for the clustering column “class_id”), and each Cell stores more metadata (e.g. the name of the “grade” column is stored in each row). So, purely from a storage perspective, having a table defined with compact storage can save quite a bit of space, especially when dealing with many columns and/or complex column types like collections.

In Thrift, tables are always stored in compact storage mode. In CQL, tables are by default stored in non-compact storage mode, unless they are defined with the “COMPACT STORAGE” property. As a result, CQL tables without the “COMPACT STORAGE” property are not visible to Thrift-based utilities like cassandra-cli.

When a CQL table is defined with the COMPACT STORAGE property, it gets the benefit of saving some disk space. However, there are also some caveats to pay attention to. For example:

  • It cannot have new columns added or existing columns dropped.
  • If it has a compound primary key (multiple columns), then at most one column can be defined as not part of the key.
  • It cannot have a column defined with non-frozen collection types.
    • Note that for people who are not familiar with the concept of “frozen” vs. “non-frozen” collections: a frozen collection serializes all sub-components of the collection into one single value when stored, which is treated as a blob and must be updated as a whole. In contrast, a non-frozen collection allows updates on individual elements. A short CQL sketch of the difference follows this list.
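
A minimal CQL sketch of that difference (the table and column names here are made up):

-- frozen: the whole collection is serialized into a single value and must be rewritten as a unit
CREATE TABLE frozen_demo (id int PRIMARY KEY, tags frozen<list<text>>);

-- non-frozen: individual elements can be added or updated in place
CREATE TABLE nonfrozen_demo (id int PRIMARY KEY, tags list<text>);
UPDATE nonfrozen_demo SET tags = tags + ['rock'] WHERE id = 1;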

Characterset Conversion Conundrums

Mon, 2016-05-16 11:30

Every now and then a database needs to be migrated from one character set to another. The driver behind this is usually either to synchronize all databases to a single standard or to support new characters or symbols like €,


Replication between Tungsten clusters

Mon, 2016-05-16 10:52
Replication between Tungsten clusters

The process I will describe in this post will allow you to configure replication between Tungsten clusters. The most common use case I have seen for this is a dedicated ETL cluster.

The setup will look like this:


Replication between Tungsten clusters

So we will have a composite datasource (compositeprod) composed of east and west clusters, and two ETL clusters, one on each side.

The described setup allows failover or switchover within a single datacenter (e.g. db1.east -> db2.east) or to the Disaster Recovery Site, (e.g. db1.east -> db1.west) for the core cluster.

At the time of this article’s publication it is not possible to replicate between two composite clusters, so each ETL cluster needs to be standalone. This feature is expected in the Tungsten 5.0 release.

We will install a standalone replicator on the ETL hosts to bring data in from the core cluster. This assumes you have already completed the required prerequisites.


Using the .ini files installation method, the config file for the core cluster hosts would be:





The config file for etl.east hosts would be:






For etl.west hosts:






Now we need to install the cluster by running the Tungsten installer on each core and ETL host:

./install/continuent-tungsten-4.0.2-6072082/tools/tpm install

Also on ETL hosts only, we need to install the standalone replicator:

./install/tungsten-replicator-4.0.0-18/tools/tpm install

Each ETL host will end up with 2 different replicator processes:

  1. The replicator that brings data in from the core cluster
  2. The replicator that reads data from the ETL cluster master to replicate to the ETL slaves

With the ‘executable-prefix‘ option, it is easy to reference each one:

  1. tr_trepctl status will show info from the core -> ETL replicator
  2. trepctl status will show info from the ETL cluster replicator
Tips & tricks

One thing to keep in mind is that for the aliases to actually work, we need to add the following line to .bashrc:

. "/opt/continuent/replicator/share/"

as there is a bug where the installer only adds the aliases from the cluster installation, and ignores the ones from the standalone replicator installation.

The [defaults.replicator] tag is helpful to pass options to the standalone replicator (it is ignored by the cluster installation), as the same tungsten.ini file will be read by both the cluster and standalone replicator products.

In this case I am using it to specify different THL and RMI ports, so the standalone replicator does not collide with the cluster replicator.
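
As a hedged sketch only (the values below are illustrative assumptions, not the original configuration), that part of the shared tungsten.ini might look something like this:

[defaults.replicator]
home-directory=/opt/continuent/replicator
executable-prefix=tr
rmi-port=10002
thl-port=2113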

We also need to tell the core -> ETL replicator not to log its updates to the binlog, otherwise the changes will be picked up by the ETL cluster replicator as well, and end up being applied two times on the ETL slaves, resulting in consistency errors.

Do you have any questions about replication between Tungsten clusters? Let me know in the comments!



Analyze Index Validate Structure – The Dark Side

Fri, 2016-05-13 09:38

Recently a co-worker wanted to discuss a problem he had encountered after upgrading a database.

The upgrade plan included steps to verify object integrity; this was being done with analyze table <tablename> validate structure cascade. All was fine until one particular table was being analyzed.  Suddenly it seemed the process entered a hung state.

The job was killed and separate commands were created to analyze each object individually.  That went well up until one of the last indexes was reached.

Me: How long has it been running?

Coworker: Three days.

Yes, you read that correctly, it had been running for three days.

My friend ran a 10046 trace to see what the process was doing; nearly all the work was ‘db file sequential read’ on the table.

At this time I suspected it was related to the clustering_factor for the index in question.  The analyze process for an index verifies each row in the index.  If the cluster is well ordered then the number of blocks read from the table will be similar to the number of blocks making up the table.

If however the table is not well ordered relative to the columns in the index the number of blocks read from the table can be many times the total number of blocks that are actually in the table.

Consider for a moment that we have rows with an  ID of 1,2,3,4 and 5.  Let’s assume that our index is created on the ID column.

If these rows are stored in order in the table, it is very likely these rows will all be in the same block, and that a single block read will fetch all of these rows.

If however the rows are stored in some random order, it may be that a separate block read is required for each lookup.

ID    Block Number
1     22
2     75
3     16
4     25
5     104

In this case 5 separate blocks must be read to retrieve these rows.

In the course of walking the index, some  minutes later these rows must also be read:

ID         Block Number
1048576    22
1048577    75
1048578    16
1048579    25
1048580    104

The blocks where these rows reside are the same blocks as the earlier example. The problem of course is that quite likely the blocks have been removed from cache by this time, and must be read again from disk.

Now imagine performing this for millions of rows. With a poor clustering factor the analyze command on an index could take quite some time to complete.

This seemed worthy of a test so we could get a better idea of just how bad this issue might be.

The test was run with 1E7 rows. The SQL shown below creates 1E7 rows, but you can simply change the value of level_2 to 1e3 to reduce the total rows to 1E6, or even smaller if you like.


-- keep this table small and the rows easily identifiable
-- or not...

-- 1e3 x 1e4 = 1e7
def level_1=1e3
def level_2=1e4

drop table validate_me purge;

create table validate_me
pctfree 0
as
select
   -- for a good clustering factor
   --id
   -- for a bad clustering factor
   floor(dbms_random.value(1,1e6)) id
   , substr('ABCDEFGHIJKLMNOPQRSTUVWXYZ',mod(id,10),15) search_data
   , to_char(id,'99') || '-' || rpad('x',100,'x') padded_data
from (
   select rownum id
   from (
      select null
      from dual
      connect by level <= &level_1
   ) a,
   (
      select null
      from dual
      connect by level <= &level_2
   ) b
);

create index validate_me_idx1 on validate_me(id,search_data);

exec dbms_stats.gather_table_stats(user,'VALIDATE_ME',method_opt => 'for all columns size 1')


Let’s see just what the clustering factor is for this index. The following script cluster-factor.sql will get this information for us.


col v_tablename new_value v_tablename noprint
col v_owner new_value v_owner noprint

col table_name format a20 head 'TABLE NAME'
col index_name format a20 head 'INDEX NAME'
col index_rows format 9,999,999,999 head 'INDEX ROWS'
col table_rows format 9,999,999,999 head 'TABLE ROWS'
col clustering_factor format 9,999,999,999 head 'CLUSTERING|FACTOR'
col leaf_blocks format 99,999,999 head 'LEAF|BLOCKS'
col table_blocks format 99,999,999 head 'TABLE|BLOCKS'

prompt Owner:

set term off feed off verify off
select upper('&1') v_owner from dual;
set term on feed on

prompt Table:

set term off feed off verify off
select upper('&2') v_tablename from dual;
set term on feed on

select t.table_name
   , t.num_rows table_rows
   , t.blocks table_blocks
   , i.index_name
   , i.num_rows index_rows
   , i.leaf_blocks
   , clustering_factor
from all_tables t
   join all_indexes i
      on i.table_owner = t.owner
      and i.table_name = t.table_name
where t.owner = '&v_owner'
   and t.table_name = '&v_tablename'
/


undef 1 2


Output from the script:


SQL> @cluster-factor jkstill validate_me



                                          TABLE                                            LEAF     CLUSTERING
TABLE NAME               TABLE ROWS      BLOCKS INDEX NAME               INDEX ROWS      BLOCKS         FACTOR
-------------------- -------------- ----------- -------------------- -------------- ----------- --------------
VALIDATE_ME              10,000,000     164,587 VALIDATE_ME_IDX1         10,000,000      45,346     10,160,089

1 row selected.

Elapsed: 00:00:00.05


On my test system creating the table for 1E7 rows required about 2 minutes and 15 seconds, while creating the index took 28 seconds.

You may be surprised at just how long it takes to analyze that index.


SQL> analyze index jkstill.validate_me_idx1 validate structure online;

Index analyzed.

Elapsed: 00:46:06.49


Prior to executing this command a 10046 trace had been enabled, so there is a record of how Oracle spent its time on this command.


If you are wondering how much of the 46 minutes was consumed by the tracing and writing the trace file, it was about 6 minutes:


$> grep "WAIT #48004569509552: nam='db file sequential read'" oravm1_ora_2377_VALIDATE.trc | awk '{ x=x+$8 } END { printf ("%3.2f\n",x/1000000/60) }'

A Well Ordered Table

Now let’s see how analyze index validate structure performs when the table is well ordered. The table uses the same DDL as in the previous example, but rather than using dbms_random to generate the ID column, the table is created with the rows loaded in ID order.  This is done by uncommenting id in the DDL and commenting out the call to dbms_random.


SQL> analyze index jkstill.validate_me_idx1 validate structure online;

Index analyzed.

Elapsed: 00:01:40.53

That was a lot faster than before: 1 minute and 40 seconds, whereas previously the same command ran for more than 40 minutes.


Using some simple command line tools we can see how many times each block was visited.


First, find the cursor and verify whether it was used only once in the session.

$> grep -B1 '^analyze index' oravm1_ora_19987_VALIDATE.trc
PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf'
analyze index jkstill.validate_me_idx1 validate structure online

$> grep -nA1 'PARSING IN CURSOR #47305432305952' oravm1_ora_19987_VALIDATE.trc
63:PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf'
64-analyze index jkstill.validate_me_idx1 validate structure online
276105:PARSING IN CURSOR #47305432305952 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462923077576482 hv=2217940283 ad='0' sqlid='06nvwn223659v'
276106-alter session set events '10046 trace name context off'

As this cursor was reused, we need to limit the lines we considered from the trace file.


One wait line appears like this:

WAIT #47305432305952: nam='db file sequential read' ela= 317 file#=8 block#=632358 blocks=1 obj#=335456 tim=1462923043050233

As it is already known that the entire table resides in one file, it is not necessary to check the file number.

From the following command it is clear that no block was read more than once during the analyze index validate structure when the table was well ordered in relation to the index.


$> tail -n +64 oravm1_ora_19987_VALIDATE.trc| head -n +$((276105-64)) | grep "WAIT #47305432305952: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail
      1 742993
      1 742994
      1 742995
      1 742996
      1 742997
      1 742998
      1 742999
      1 743000
      1 743001
      1 743002


That command line may look a little daunting, but it is really not difficult when each bit is considered separately.

From the grep command that searched for cursors we know that the cursor we are interested in first appeared at line 64 in the trace file.

tail -n +64 oravm1_ora_19987_VALIDATE.trc

The cursor was reused at line 276105, so tell the tail command to output only the lines up to that point in the file.

head -n +$((276105-64))

The interesting information in this case is for ‘db file sequential read’ on the cursor of interest.

grep "WAIT #47305432305952: nam='db file sequential read'"

Next awk is used to output the block=N portion of each line.

awk '{ print $10 }'

awk is again used, but this time to split the block=N output at the ‘=’ operator, and output only the block number.

awk -F= '{ print $2 }'

The cut command could have been used here as well. eg. cut -d= -f2

Sort the block numbers.

sort

Use the uniq command to get a count of how many times each value appears in the output.

uniq -c

Use sort -n to sort the output from uniq.  If there are any counts greater than 1, they will appear at the end of the output.

sort -n

And pipe the output through tail. We only care if any block was read more than once.

tail

Now for the same procedure on the trace file generated from the poorly ordered table.


$> grep -B1 '^analyze index' oravm1_ora_2377_VALIDATE.trc
PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf'
analyze index jkstill.validate_me_idx1 validate structure online

$> grep -nA1 'PARSING IN CURSOR #48004569509552' oravm1_ora_2377_VALIDATE.trc
51:PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf'
52-analyze index jkstill.validate_me_idx1 validate structure online
6076836:PARSING IN CURSOR #48004569509552 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462550199668869 hv=2217940283 ad='0' sqlid='06nvwn223659v'
6076837-alter session set events '10046 trace name context off'


The top 30 most active blocks were each read 53 or more times when the table was not well ordered in relation to the index.

$> tail -n +51 oravm1_ora_2377_VALIDATE.trc | head -n +$((6076836-51)) | grep "WAIT #48004569509552: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail -30
     53 599927
     53 612399
     53 613340
     53 633506
     53 640409
     53 644099
     53 649054
     53 659198
     53 659620
     53 662600
     53 669176
     53 678119
     53 682177
     53 683409
     54 533294
     54 533624
     54 537977
     54 549041
     54 550178
     54 563206
     54 568045
     54 590132
     54 594809
     54 635330
     55 523616
     55 530064
     55 532693
     55 626066
     55 638284
     55 680250



There is a feature of RMAN that allows checking for logical and physical corruption of an Oracle database via the command backup check logical validate database.  This command does not actually create a backup, but just reads the database looking for corrupt blocks. Following is an (edited) run of this command on the same database where the analyze index commands were run.

A portion of the block corruption report is included.

RMAN> backup check logical validate database;
Starting backup at 06-MAY-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 instance=oravm1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=+DATA/oravm/datafile/alloctest_a.273.789580415
input datafile file number=00009 name=+DATA/oravm/datafile/alloctest_u.272.789582305
input datafile file number=00024 name=+DATA/oravm/datafile/swingbench.375.821472595
input datafile file number=00023 name=+DATA/oravm/datafile/swingbench.374.821472577
input datafile file number=00019 name=+DATA/oravm/datafile/bh08.281.778786819
input datafile file number=00002 name=+DATA/oravm/datafile/sysaux.257.770316147
input datafile file number=00004 name=+DATA/oravm/datafile/users.259.770316149
input datafile file number=00001 name=+DATA/oravm/datafile/system.256.770316143
input datafile file number=00011 name=+DATA/oravm/datafile/alloctest_m.270.801310167
input datafile file number=00021 name=+DATA/oravm/datafile/ggs_data.317.820313833
input datafile file number=00006 name=+DATA/oravm/datafile/undotbs2.265.770316553
input datafile file number=00026 name=+DATA/oravm/datafile/undotbs1a.667.850134899
input datafile file number=00005 name=+DATA/oravm/datafile/example.264.770316313
input datafile file number=00014 name=+DATA/oravm/datafile/bh03.276.778786795
input datafile file number=00003 name=+DATA/oravm/datafile/rcat.258.861110361
input datafile file number=00012 name=+DATA/oravm/datafile/bh01.274.778786785
input datafile file number=00013 name=+DATA/oravm/datafile/bh02.275.778786791
input datafile file number=00022 name=+DATA/oravm/datafile/ccdata.379.821460707
input datafile file number=00007 name=+DATA/oravm/datafile/hdrtest.269.771846069
input datafile file number=00010 name=+DATA/oravm/datafile/users.271.790861829
input datafile file number=00015 name=+DATA/oravm/datafile/bh04.277.778786801
input datafile file number=00016 name=+DATA/oravm/datafile/bh05.278.778786805
input datafile file number=00017 name=+DATA/oravm/datafile/bh06.279.778786809
input datafile file number=00018 name=+DATA/oravm/datafile/bh07.280.778786815
input datafile file number=00020 name=+DATA/oravm/datafile/bh_legacy.282.778787059
input datafile file number=00025 name=+DATA/oravm/datafile/baseline_dat.681.821717827
input datafile file number=00027 name=+DATA/oravm/datafile/sqlt.668.867171675
input datafile file number=00028 name=+DATA/oravm/datafile/bh05.670.878914399

channel ORA_DISK_1: backup set complete, elapsed time: 00:25:27

List of Datafiles
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1    OK     0              75632        256074          375655477
  File Name: +DATA/oravm/datafile/system.256.770316143
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              158478
  Index      0              17160
  Other      0              4730

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              36332        394240          375655476
  File Name: +DATA/oravm/datafile/sysaux.257.770316147
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              170007
  Index      0              138603
  Other      0              49298


As shown in the report, only 25 minutes were required to check the entire database for physically or logically corrupt blocks, as opposed to the 40 minutes needed to analyze index validate structure.

While the RMAN corruption check is not the same as the check performed by analyze index validate structure, it is a test that can be completed in a much more timely manner, particularly if some indexes are both large and have a high value for the clustering factor.

Rebuild the Index?

If you have strong suspicions that a large index with an unfavorable clustering factor has corrupt blocks, it may be more expedient to just rebuild the index.  If the database is on Oracle Enterprise Edition, the rebuild can also be done with the ONLINE option.

Consider again the index on the test table with 1E7 rows.  Creating the index required 28 seconds, while validating the structure required 40 minutes.


 SQL> alter index validate_me_idx1 rebuild online;

Index altered.

Elapsed: 00:00:59.88


The conclusion is quite clear: the use of analyze index validate structure needs to be carefully considered when its use is contemplated for large indexes. The command can be very resource intensive and take quite some time to complete. It is worthwhile to consider alternatives that may be much less resource intensive and time consuming.


MySQL encrypted streaming backups directly into AWS S3

Fri, 2016-05-13 09:25

Cloud storage is becoming more and more popular for offsite storage and DR solutions for many businesses. This post will help with those people that want to perform this process for MySQL backups directly into Amazon S3 Storage. These steps can probably also be adapted for other processes that may not be MySQL oriented.


In order to perform this task we need to be able to stream the data, encrypt it, and then upload it to S3. There are a number of ways to do each step, and I will try to dive into multiple examples so that you can mix and match the solution to your desired results.  The AWS S3 CLI tools that I will be using for the upload also allow encryption, but to keep these steps open for customization, I am going to do the encryption in the stream.

  1. Stream MySQL backup
  2. Encrypt the stream
  3. Upload the stream to AWS S3
Step 1 : Stream MySQL Backup

There are a number of ways to stream the MySQL backup. A lot of it depends on your method of backup. We can stream the mysqldump method or we can utilize the file level backup tool Percona Xtrabackup to stream the backup. Here are some examples of how these would be performed.


When using mysqldump, it naturally streams the results. This is why we have to add the greater-than sign to redirect the data into our .sql file. Since mysqldump is already streaming the data, we will pipe the results into our next step.

[root@node1 ~]# mysqldump --all-databases > employee.sql


[root@node1 ~]# mysqldump --all-databases |

xtrabackup will stream the backup, but with a little more assistance to tell it to do so. You can reference Percona’s online documentation for all of the different ways to stream and compress backups using xtrabackup. We will be using the stream-to-tar method.

innobackupex --stream=tar /root > /root/out.tar


innobackupex --stream=tar ./ |
Step 2 : Encrypt The Stream

Now that we have the backup process in place, we will want to make sure that our data is secure. We will want to encrypt the data that we are going to be sending up to AWS S3 to make sure it is protected. We can accomplish this in a couple of ways. The first tool I am going to look at is GnuPG, the open source version of PGP encryption. The second tool I will look at is another very popular tool, OpenSSL.  Below are examples of how I set them up and tested their execution with streaming.


I will be creating a public and private key pair with a password that will be used to encrypt and decrypt the data. If you are going to do this for your production and sensitive data, please ensure that your private key is safe and secure.  When creating the keypair I was asked to provide a password.  When decrypting the data I was then asked for the password again to complete the process. It was an interactive step and is not shown in the example below. To accept a stream, you don’t provide a file name to encrypt, then to stream the output, you just don’t provide an output parameter.

[root@node1 ~]# gpg --gen-key
gpg (GnuPG) 2.0.14; Copyright (C) 2009 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: root
Name must be at least 5 characters long
Real name: root@kmarkwardt
Email address:
You selected this USER-ID:
    "root@kmarkwardt <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg-agent[1776]: directory `/root/.gnupg/private-keys-v1.d' created
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

After typing for what felt like FOREVER, to generate enough entropy

gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 1EFB61B1 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/1EFB61B1 2016-04-29
      Key fingerprint = 8D98 2D23 3C49 F1E7 9CD2  CD0F 7163 EB03 1EFB 61B1
uid                  root@kmarkwardt <>
sub   2048R/577322A0 2016-04-29

[root@node1 ~]#


[root@node1 openssl]# echo "test" | gpg --output install.log.gpg --encrypt -r root 
[root@node1 openssl]# cat install.log.gpg
 ???    Ws"???l?
??g             ?w??g?C}P
???Qq?m??&?rKE??*}5.?4XTj?????Th????}A???: ^V?/w?$???"?<'?;
[root@node1 ~]# gpg --decrypt -r root --output install.log.decrypted install.log.gpg
You need a passphrase to unlock the secret key for
user: "root@kmarkwardt <>"
2048-bit RSA key, ID 577322A0, created 2016-04-29 (main key ID 1EFB61B1)

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg: encrypted with 2048-bit RSA key, ID 577322A0, created 2016-04-29
     "root@kmarkwardt <>"
[root@node1 ~]# ls


[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root 
[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root 



As with GPG, we will generate a public and private key with a pass phrase.  There are other ways to use openssl to encrypt and decrypt the data, such as just using a password with no keys, using just keys with no password, or encrypting with no password or keys.  I am using keys with a password, as this is a very secure method.

[root@node1 openssl]# openssl req -newkey rsa:2048 -keyout privkey.pem -out req.pem
Generating a 2048 bit RSA private key
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

[root@node1 openssl]# openssl x509 -req -in req.pem -signkey privkey.pem -out cert.pem
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
Enter pass phrase for privkey.pem:
[root@node1 openssl]# ls -al
total 20
drwxr-xr-x  2 root root 4096 May  5 10:47 .
dr-xr-x---. 9 root root 4096 May  4 04:38 ..
-rw-r--r--  1 root root 1103 May  5 10:47 cert.pem
-rw-r--r--  1 root root 1834 May  5 10:43 privkey.pem
-rw-r--r--  1 root root  952 May  5 10:43 req.pem
[root@node1 openssl]# rm -rf req.pem 
[root@node1 openssl]# echo "test" | openssl smime -encrypt -aes256 -binary -outform DER cert.pem > test.dat
[root@node1 openssl]# cat test.dat 
                 0    UXX10U

                              Default City10U

P?l????]iz/???H???????[root@node1 openssl]#
[root@node1 openssl]# openssl smime -decrypt -in test.dat -inform DER -inkey privkey.pem -out test.txt
Enter pass phrase for privkey.pem:
[root@node1 openssl]# cat test.txt 


[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem
[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem
Step 3 : Stream to Amazon AWS S3

Now that we have secured the data, we will want to pipe it into an Amazon AWS S3 bucket.  This will provide an offsite copy of the MySQL backup that you can convert to long-term storage, or restore into an EC2 instance.  With this method I will only be looking at one option: the Amazon-provided AWS CLI tools, which incorporate working with S3 and allow you to copy your files up into S3 while streaming your input.


In order to tell the AWS CLI S3 copy command to accept STDIN input you just have to put a dash in the place of the source file.  This will allow the command to accept a stream to copy.  The AWS CLI tools for copying into S3 also allows for encryption.  But I wanted to provide other methods as well to allow you to customize your own solution.   You can also stream the download of the S3 bucket item, which could allow for uncompression as you download the data or any other number of options.


echo "test" | aws s3 cp - s3://pythian-test-bucket/incoming.txt 


-- MySQL Dump -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.dat
-- Xtrabackup -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem |aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.dat
-- MySQL Dump -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.gpg
-- Xtrabackup -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.gpg
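
To complement the upload examples above, here is a hedged sketch of streaming a backup back out of S3 and decrypting it in-stream (using the same illustrative bucket and object names):

-- AWS S3 Download -> GPG Decryption
[root@node1 ~]# aws s3 cp s3://pythian-test-bucket/mysqldump.sql.gpg - | gpg --decrypt > mysqldump.sql
-- AWS S3 Download -> OpenSSL Decryption
[root@node1 ~]# aws s3 cp s3://pythian-test-bucket/mysqldump.sql.dat - | openssl smime -decrypt -inform DER -inkey privkey.pem -out mysqldump.sql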






Migrating Your Enterprise Applications To Amazon Web Services (AWS)

Thu, 2016-05-12 15:34


Many of the enterprise clients we work with are looking at Amazon Web Services (AWS) to support their cloud strategies and reap the benefits of the public cloud: lower costs, higher scalability, greater availability of computing resources.


AWS is well known for its cutting edge service offerings, which are always growing and evolving—making them an easy choice to recommend to our enterprise clients. When used to provide infrastructure as a service (IaaS), it simplifies hardware provisioning and management and makes it easy to allocate compute, memory and storage capacity. In a platform as a service (PaaS) situation, it can streamline operations through automated backups and patching, and can offer the convenience of pay-as-you-go billing.


The challenge for many organizations migrating to the cloud comes when they start to move their enterprise applications. That’s because these applications often rely on highly customized infrastructure with tightly coupled components. They also tend to exist in silos. Some careful planning is needed to make sure the new cloud environment outperforms the legacy on-premises one in terms of scalability, reliability and performance. Whenever desired by the customer, we will often recommend and apply a number of optimizations during the migration process, such as decoupling of components, stability and performance tune-ups, as well as improved operational visibility and platform automation.


When we migrated Harvard Business Publishing to AWS, for example, we tailored the public cloud architecture to meet their specific needs. (Harvard Business Publishing is a subsidiary of Harvard University — based in Boston, with offices in New York City, India, Singapore, Qatar and the United Kingdom. They produce and distribute content across multiple platforms.)


Harvard Business Publishing was running Oracle applications in a data center on three Oracle Database Appliances. With the data center lease coming up for renewal, they asked us to help migrate their Oracle-based production, quality assurance and disaster recovery systems to an AWS cloud. Obviously, these systems were mission critical.


We did thorough capacity planning and developed a reliable, highly-automated and predictable migration approach up front, customized the AWS architecture, and migrated to a large number of virtual machines and Oracle schemas.


In another instance, we migrated CityRealty’s core applications to the cloud from on-premise hardware running Oracle databases. CityRealty is the oldest continuously operating real estate website in New York City — the world’s largest real estate market — and remains the city’s leading site today. We proposed moving three production databases to Amazon Elastic Compute Cloud (EC2), which provides highly scalable compute capacity, as well as upgrading the Oracle databases. We also built a highly efficient standby and recovery solution using Amazon Elastic Block Storage (EBS) snapshots.


In both of these cases, we did the planning up front to ensure the new cloud environment would be ready to receive the clients’ applications and run them without any compromise in performance. We’ve done similar migrations for clients operating e-commerce businesses with revenues of millions and even billions per year. Every situation is a little different: in some cases clients needed to simplify and standardize their mission-critical systems; in others, boost reliability; in others, increase automation or eliminate transaction-hampering latency.


We have the benefit of small, dedicated cloud teams around the world with multiple, advanced AWS certifications to address even the most complex requirements. Our goal always is to ensure the public cloud architecture is ideally suited to the enterprise, and to provide detailed implementation plans and data management solutions for optimal performance. That allows us to implement and manage public, private, and hybrid environments with lower risk and cost for organizations that want to seize the benefits of the cloud.

Find out how Pythian can help you with cloud solutions for AWS  today.


Properly removing users in MySQL

Thu, 2016-05-12 09:58

Removing users in MySQL is an easy task, but there are some drawbacks to check carefully before dropping a user. Not taking these possible issues into consideration can render your application unusable.

First it is important to understand the concept of user in MySQL. A user has three main functions: authentication, privileges and resources. This is different from other databases: in MySQL users don’t own objects. An object belongs to a database and there is no direct relationship between objects and users. This simplifies the process of deleting a user because no objects will be erased.

But users, as I wrote before, have an associated set of privileges that define what database sessions can do, and those privileges are applied both in stored program and view execution.

At the same time, procedures, functions, triggers, and views have two possible security contexts: invoker and definer. If they are created using the invoker security model, then the user calling the routine or view must have enough privileges to execute the code within it. But if they are created using the definer security model, the code can be executed as long as the definer has enough privileges to execute it. And yes, the default security model in MySQL is definer.

This means that, unless declared otherwise, most routines will check the privileges for the user that created the routine. If somebody removes that user, querying the view or executing the code will fail with error. All the procedures, functions, views and triggers created by that user with the default options will become unusable.

How do we fix this? The quick and dirty way is to recreate the user with a different password to avoid user logins. It is not an elegant solution, but it is probably the first thing you will do while you solve the problem properly.  Another alternative, if you are running MySQL 5.7, is account locking; this feature disables login for the account but still allows code and views to be executed. In any case, it is a good practice to make a backup of the user creation scripts prior to dropping the user. Percona Toolkit has the utility pt-show-grants for that purpose.
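
For reference, a minimal sketch of both safeguards, using a hypothetical app_user account (pt-show-grants is run from the shell, not the mysql prompt, and its options should be checked against your Percona Toolkit version):

pt-show-grants --only app_user > app_user_grants.sql

mysql> ALTER USER 'app_user'@'%' ACCOUNT LOCK;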

The elegant way to avoid the problem is to check that there are no routines or views using the definer security model that are configured to run with the privileges of the user you want to remove. There are several tables in the information_schema catalog that provide you with this information.

Unfortunately there is no easy way to change these attributes. The best thing you can do is drop and recreate those objects using different security characteristics.

Let see an example:

Connect to the database using an account with enough privileges to create users, databases and procedures and create a new database only for testing purposes.

mysql> create database remove_test;
Query OK, 1 row affected (0,05 sec)

Create a user with privileges on the database created in previous step:

mysql> grant all privileges on remove_test.* to test_user@'%' identified by 'test';
Query OK, 0 rows affected, 1 warning (0,20 sec)

Connect to the database using the user created:

$ mysql -u test_user -ptest remove_test

Create a view, check the attributes and execute it. We are going to use a couple of functions that I will explain a bit later.

mysql> create view test_view as select current_user(),user();
Query OK, 0 rows affected (0,05 sec)
mysql> show create view test_view\G
*************************** 1. row ***************************
                View: test_view
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`test_user`@`%` SQL SECURITY DEFINER VIEW `test_view` AS select current_user() AS `current_user()`,user() AS `user()`
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0,00 sec)

mysql> select * from test_view;
| current_user() | user()              |
| test_user@%    | test_user@localhost |
1 row in set (0,00 sec)

As we can see, the view has been created with SQL SECURITY DEFINER and DEFINER=`test_user`@`%`. The view returns the value from two functions: current_user() returns the value that matches against the mysql.user table and that defines the privileges the current session or current execution context has. The function user() returns the host you are connected from and the username specified as connection identifier.

Now we reconnect to the database using the privileged account from the first step, and requery the view:

$ mysql -u root -p remove_test
mysql> select * from test_view;
| current_user() | user()         |
| test_user@%    | root@localhost |
1 row in set (0,00 sec)

The view is executed by root@localhost but with the privileges of test_user@%. It is important to note that if you monitor the execution of a routine or view, using SHOW PROCESSLIST or query against information_schema.processlist, the contents of user and host are the same that current_user() return, not the ones returned by user().

Now we will drop the user and query once again the view.

mysql> drop user test_user@'%';
Query OK, 0 rows affected (0,07 sec)
mysql> select * from test_view;
ERROR 1449 (HY000): The user specified as a definer ('test_user'@'%') does not exist

In case you don’t see the error and still get the correct results, this is because the contents of the view are stored in the query cache. Clear their contents and repeat the query.

mysql> reset query cache;
Query OK, 0 rows affected (0,00 sec)

How do you validate whether it is safe to remove a user? You must query the information_schema tables to find objects that can bring problems, and recreate them.

mysql> select routine_schema as db,
    ->        routine_type as object_type,
    ->        routine_name as object_name
    -> from information_schema.routines
    -> where security_type='DEFINER'
    ->   and definer='test_user@%'
    -> union all
    -> select trigger_schema as db,
    ->        'TRIGGER' as object_type,
    ->         trigger_name as object_name
    -> from information_schema.triggers
    -> where definer='test_user@%'
    -> union all
    -> select table_schema as db,
    ->        'VIEW' as object_type,
    -> table_name as object_name
    -> from information_schema.views
    -> where security_type='DEFINER'
    ->   and definer='test_user@%';
| db          | object_type | object_name |
| remove_test | VIEW        | test_view   |
1 row in set (0,02 sec)

Replace test_user@% with the value of the user you want to remove.

This way you get a list of the objects you must change to avoid problems. As I said before, the best thing you can do to change the definer is to recreate the objects. To obtain the creation scripts you should use SHOW CREATE VIEW, SHOW CREATE TRIGGER, SHOW CREATE PROCEDURE or SHOW CREATE FUNCTION.
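
For example, the test_view from the example above could be recreated in one statement under a different (hypothetical) app_account definer:

mysql> CREATE OR REPLACE
    ->        DEFINER = 'app_account'@'%'
    ->        SQL SECURITY DEFINER
    ->        VIEW test_view AS
    ->        select current_user(), user();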

If you want to avoid problems in the future, my recommendation is that for every application you create a dedicated account, and lock it with an unknown password or with the account locking feature. Grant that account all the privileges needed to run the application, and make sure all the views, procedures, functions and triggers have that account in the definer field.


Log Buffer #473: A Carnival of the Vanities for DBAs

Wed, 2016-05-11 16:32

This Log Buffer Edition covers some of the hottest blog posts of Oracle, SQL Server and MySQL for the previous week.


Oracle:

A recent post on the OTN database forum reminded Jonathan how easy it is to forget to keep repeating a piece of information after the first couple of hundred times you’ve explained it. No matter how “intuitively obvious” it is for one person, it’s new to someone else.

The OMS Patcher is a newer patching mechanism for the OMS specifically.

A SQLT report has all kinds of pertinent information including—to name just a few—optimizer settings, indexes, statistics, plan history, and view definitions.

Joins are fundamental in SQL, and are used in most every production query. There are three types in particular that every developer should fully understand.

Why should Oracle Database professionals care about NoSQL and where to start?

SQL Server:

Understanding OPENJSON and FOR JSON in Azure and SQL 2016

Using Data Compression in Master Data Services 2016

The most frustrating thing with any new system is often just working out how to connect to it.

There are several decisions to be made when designing indexes for Memory-optimized tables in In-Memory OLTP, and to make the best choice, it pays to understand something about the nature and usage of memory-optimised indexes.

Database Hardware and Infrastructure Trends

STRING_SPLIT() in SQL Server 2016 : Follow-Up #2


MySQL:

Why would I run MySQL/MariaDB on POWER anyway?

By default, the write() system call returns after all data has been copied from the user space file descriptor into the kernel space buffers. There is no guarantee that data has actually reached the physical storage.

OpenCPS: Vietnam’s Public Sector goes Open Source

MySQL uses metadata locking to manage concurrent access to database objects, and to ensure data consistency when performing modifications to the schema

Using Docker for development is a great way of ensuring that what you develop will be the same that you deploy in production.


Defining Digital Transformation

Wed, 2016-05-11 15:33


Terminology is important—and it’s particularly important for us to define terms that are central to what we do. So when it comes to the subject of digital transformation, what exactly are we talking about?


In speaking with clients and industry thought leaders, I’ve come to realize that the term “digital transformation” has a different meaning to different people. For a term that is so widely used — and that, on its surface, seems pretty straightforward — the range of interpretation is remarkable. It’s a bit like when we say “I’ll do it later.”  “Later” to one person means “before the sun goes down today.” “Later” to another person means “sometime in the future”, and it could mean days or weeks in their mind. “Later” to a third person can mean “I have no plans to do it, and this is my way of telling you nicely.”


Because the term is so essential to the work we do for our clients, I thought it would be helpful to define what digital transformation means to us here at Pythian. There’s so much we can say on the topic, so I plan to follow up with a series of articles about how I’ve seen it implemented, or worse, not implemented or even embraced as a concept.


To start, “digital transformation” is about technology. I know that to some people it isn’t, but I disagree. These days, you can’t transform your business without technology. It’s not about which technology you choose, as much as it’s about how to use it. Even more specifically, we’ve found that the businesses that are achieving positive transformation are using technology to capitalize on data. I have yet to see a single transformation project that didn’t use data as a major component of its success.


Let’s look at the term “transformation.” This equates to change, but it doesn’t mean change for its own sake. The change we’re talking about has to benefit the business. However, the factor that can make or break successful change is people. Their attitudes, preconceptions, and ideas almost always have to be aligned with the change for successful transformation to occur. People need to get behind the initiative, people have to fund it, people have to develop it, and people have to support it once it’s developed. And we all know that getting people to change can be more difficult than developing any new technology. In short, the transformative capabilities inherent in technology can only be realized when coupled with the willingness to embrace change.

Why Digital Transformation?

Why is the concept of digital transformation important in the first place? At Pythian, we believe that it’s about using technology and data to change your business for the better. What do we mean when we say “for the better”? Therein lies the controversy.  “For the better” means different things to different people depending on their company’s key objectives.


“For the better” can mean:

  • Becoming more efficient to drive costs down so your profitability can improve
  • Reducing mistakes and improving your reputation, or the quality of your product
  • Differentiating your product to get ahead of the competition
  • Doing what you do, only faster than your competitors
  • Creating new revenue streams
  • Improving the customer experience. This is a big one, so I will dedicate an entire blog post to exploring exactly what it means.


Digital transformation is the key to achieving any one, or all of these benefits, and knowing your objectives and priorities will help you shape your digital transformation initiative. So to start, focus less on what digital transformation is, and more on what you want the outcome of a transformation to be.



GoldenGate 12.2 Big Data Adapters: part 4 – HBASE

Wed, 2016-05-11 11:51

This is the next post in my series about Oracle GoldenGate Big Data adapters. Here is the list of all posts in the series:

  1. GoldenGate 12.2 Big Data Adapters: part 1 – HDFS
  2. GoldenGate 12.2 Big Data Adapters: part 2 – Flume
  3. GoldenGate 12.2 Big Data Adapters: part 3 – Kafka
  4. GoldenGate 12.2 Big Data Adapters: part 4 – HBASE

In this post I am going to explore the HBASE adapter for GoldenGate. Let’s start by recalling what we know about HBASE. Apache HBase is a non-relational, distributed database, modelled after Google’s Bigtable. It provides read/write access to data and runs on top of Hadoop/HDFS.

So, what does it tell us? First, we can write and change the data. Second, we need to remember that it is non-relation database and it is a bit of a different approach to data in comparison with traditional relation databases. You can think about HBase as about a key-value store. We are not going deep inside HBASE architecture and internals here, since our main task is to test Oracle GoldenGate adapter and see how it works. Our configuration has an Oracle database as a source with a GoldenGate extract and target system where we have Oracle GoldenGate for BigData.

There is more information about setting up the source and target in the first post in the series, about the HDFS adapter. The source side replication has already been configured and started: we have an initial trail file for data initialization and trails for the ongoing replication, and we capture changes for all tables in the ggtest schema on the Oracle database.
Now we need to prepare our target site. Let's start with HBase. I used pseudo-distributed mode for my tests, which runs a fully-distributed configuration on a single host. That is not acceptable for any production setup, but it will suffice for our tests. On the same box I have HDFS serving as the main storage. Oracle documentation for the adapter states that HBase is supported from version 1.0.x. In my first attempt I tried HBase version 1.0.0 (Cloudera 5.6), but it didn't work: I got errors in GoldenGate and the delivery process abended.
Here is the error :

2016-03-29 11:51:31  ERROR   OGG-15051  Oracle GoldenGate Delivery, irhbase.prm:  Java or JNI exception:
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)Lorg/apache/hadoop/hbase/HTableDescriptor;.
2016-03-29 11:51:31  ERROR   OGG-01668  Oracle GoldenGate Delivery, irhbase.prm:  PROCESS ABENDING.

So I installed another version of HBase, and version 1.1.4 worked just fine. I used a simple, standard HBase configuration for pseudo-distributed mode, where the region server runs on the same host as the master and hbase.rootdir points to the local HDFS.
Here is an example of the configuration:

[root@sandbox conf]# cat regionservers
[root@sandbox conf]#

As soon as we have HBase set up and running, we can switch our attention to GoldenGate. We already have a trail file with the initial load, so now we need to prepare our configuration files for the initial and ongoing replication. Let's go to our GoldenGate for Big Data home directory and prepare everything. First, we need an hbase.props file copied from the $OGG_HOME/AdapterExamples/big-data/hbase directory to $OGG_HOME/dirprm. I left everything as it was in the original file, changing only the gg.classpath parameter to point to my configuration files and libraries for HBase.
Here is an example of the configuration file:

[oracle@sandbox oggbd]$ cat dirprm/hbase.props







javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

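Most of the file's contents did not survive the formatting of this post. As a rough sketch, based on the sample hbase.props shipped in $OGG_HOME/AdapterExamples/big-data/hbase, the relevant handler section looks something like the lines below; treat the parameter names and values as assumptions and verify them against your own copy of the sample file:

# sketch only: parameter names/values are assumptions, check the sample hbase.props
gg.handlerlist=hbase
gg.handler.hbase.type=hbase
gg.handler.hbase.hBaseColumnFamilyName=cf
# gg.classpath must point to your HBase client libraries and configuration directory
gg.classpath=/u01/hbase/lib/*:/u01/hbase/conf
javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar
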
Second, we have to prepare a parameter file for our initial load. I used a simple file with the minimum of parameters.

[oracle@sandbox oggbd]$ cat dirprm/irhbase.prm
-- passive REPLICAT irhbase
-- Trail file for this example is located in "./dirdat/initld" file
-- Command to add REPLICAT
-- run replicat irhbase:
-- ./replicat paramfile dirprm/irhbase.prm reportfile dirrpt/irhbase.rpt
EXTFILE /u01/oggbd/dirdat/initld
TARGETDB LIBFILE SET property=dirprm/hbase.props

With that configuration file in place we can run the replicat in passive mode from the command line and see the result.
Here is the initial status of HBase:

hbase(main):001:0> version
1.1.4, r14c0e77956f9bb4c6edf0378474264843e4a82c3, Wed Mar 16 21:18:26 PDT 2016

hbase(main):001:0> list
0 row(s) in 0.3340 seconds

=> []

Running the replicat:

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irhbase.prm reportfile dirrpt/irhbase.rpt
[oracle@sandbox oggbd]$

Now we have 2 tables in HBASE:

hbase(main):002:0> list
2 row(s) in 0.3680 seconds


Let's have a look at the tables' structure and contents:

hbase(main):004:0> describe 'BDTEST:TEST_TAB_1'
N_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.2090 seconds

hbase(main):005:0> scan 'BDTEST:TEST_TAB_1'
ROW                                            COLUMN+CELL
 1                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-01-22:12:14:30
 1                                             column=cf:PK_ID, timestamp=1459269153102, value=1
 1                                             column=cf:RND_STR, timestamp=1459269153102, value=371O62FX
 1                                             column=cf:RND_STR_1, timestamp=1459269153102, value=RJ68QYM5
 1                                             column=cf:USE_DATE, timestamp=1459269153102, value=2014-01-24:19:09:20
 2                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-05-11:05:23:23
 2                                             column=cf:PK_ID, timestamp=1459269153102, value=2
 2                                             column=cf:RND_STR, timestamp=1459269153102, value=371O62FX
 2                                             column=cf:RND_STR_1, timestamp=1459269153102, value=HW82LI73
 2                                             column=cf:USE_DATE, timestamp=1459269153102, value=2014-01-24:19:09:20
 3                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-01-22:12:14:30
 3                                             column=cf:PK_ID, timestamp=1459269153102, value=3
 3                                             column=cf:RND_STR, timestamp=1459269153102, value=RXZT5VUN
 3                                             column=cf:RND_STR_1, timestamp=1459269153102, value=RJ68QYM5
 3                                             column=cf:USE_DATE, timestamp=1459269153102, value=2013-09-04:23:32:56
 4                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-05-11:05:23:23
 4                                             column=cf:PK_ID, timestamp=1459269153102, value=4
 4                                             column=cf:RND_STR, timestamp=1459269153102, value=RXZT5VUN
 4                                             column=cf:RND_STR_1, timestamp=1459269153102, value=HW82LI73
 4                                             column=cf:USE_DATE, timestamp=1459269153102, value=2013-09-04:23:32:56
4 row(s) in 0.1630 seconds

hbase(main):006:0> scan 'BDTEST:TEST_TAB_2'
ROW                                            COLUMN+CELL
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:ACC_DATE, timestamp=1459269153132, value=2013-07-07:08:13:52
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:PK_ID, timestamp=1459269153132, value=7
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:RND_STR_1, timestamp=1459269153132, value=IJWQRO7T
1 row(s) in 0.0390 seconds


Everything looks good. We have the structure and records as expected. Let's move forward and set up the ongoing replication.
I have created a parameter file for my replicat using the initial load parameters as a basis:

[oracle@sandbox oggbd]$ cat dirprm/rhbase.prm
-- Trail file for this example is located in "dirdat/or" directory
-- Command to add REPLICAT
-- add replicat rhbase, exttrail dirdat/or
TARGETDB LIBFILE SET property=dirprm/hbase.props
MAP ggtest.*, TARGET bdtest.*;

We check our trail files and start our replicat using the latest trail file. By default a replicat would look for a trail with sequence number 0, but since the purging policy on my GoldenGate installation deletes old files, I need to tell the replicat exactly where to start.

[oracle@sandbox oggbd]$ ll dirdat/
total 4940
-rw-r-----. 1 oracle oinstall    3028 Feb 16 14:17 initld
-rw-r-----. 1 oracle oinstall 2015199 Mar 24 13:07 or000043
-rw-r-----. 1 oracle oinstall 2015229 Mar 24 13:08 or000044
-rw-r-----. 1 oracle oinstall 1018490 Mar 24 13:09 or000045
[oracle@sandbox oggbd]$ ggsci

Oracle GoldenGate Command Interpreter
Version OGGCORE_12.
Linux, x64, 64bit (optimized), Generic on Nov 10 2015 16:18:12
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (sandbox.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt


GGSCI (sandbox.localdomain) 2> add replicat rhbase, exttrail dirdat/or,EXTSEQNO 45

GGSCI (sandbox.localdomain) 3> start replicat rhbase

Sending START request to MANAGER ...

GGSCI (sandbox.localdomain) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

REPLICAT    RUNNING     RHBASE      00:00:00      00:00:06

GGSCI (sandbox.localdomain) 5> info rhbase

REPLICAT   RHBASE    Last Started 2016-03-29 12:56   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Process ID           27277
Log Read Checkpoint  File dirdat/or000045
                     2016-03-24 13:09:02.000274  RBA 1018490

GGSCI (sandbox.localdomain) 6>

I inserted a number of rows into test_tab_1 on the Oracle side and all of them were successfully replicated to HBase.

hbase(main):015:0> count 'BDTEST:TEST_TAB_1'
Current count: 1000, row: 1005694
Current count: 2000, row: 442
Current count: 3000, row: 6333
3473 row(s) in 1.0810 seconds

=> 3473

Let's have a closer look at test_tab_1 and test_tab_2:

hbase(main):005:0> scan 'BDTEST:TEST_TAB_1'
ROW                                            COLUMN+CELL
 1                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-01-22:12:14:30
 1                                             column=cf:PK_ID, timestamp=1459269153102, value=1
 1                                             column=cf:RND_STR, timestamp=1459269153102, value=371O62FX
 1                                             column=cf:RND_STR_1, timestamp=1459269153102, value=RJ68QYM5
 1                                             column=cf:USE_DATE, timestamp=1459269153102, value=2014-01-24:19:09:20
 2                                             column=cf:ACC_DATE, timestamp=1459269153102, value=2014-05-11:05:23:23
 2                                             column=cf:PK_ID, timestamp=1459269153102, value=2
 2                                             column=cf:RND_STR, timestamp=1459269153102, value=371O62FX
 2                                             column=cf:RND_STR_1, timestamp=1459269153102, value=HW82LI73
 2                                             column=cf:USE_DATE, timestamp=1459269153102, value=2014-01-24:19:09:20

hbase(main):006:0> scan 'BDTEST:TEST_TAB_2'
ROW                                            COLUMN+CELL
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:ACC_DATE, timestamp=1459269153132, value=2013-07-07:08:13:52
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:PK_ID, timestamp=1459269153132, value=7
 7|IJWQRO7T|2013-07-07:08:13:52                column=cf:RND_STR_1, timestamp=1459269153132, value=IJWQRO7T
1 row(s) in 0.0390 seconds


You can see that the row identifier for test_tab_1 is the value of pk_id, while for test_tab_2 it is a concatenation of the values of all the columns. Why is that? The difference is in the constraints on the tables. Since we don't have a primary key or unique index on test_tab_2, it uses all columns as the key value. We can try to add a constraint and see the result.

select * from dba_constraints where owner='GGTEST' and table_name='TEST_TAB_2';

no rows selected

alter table ggtest.test_tab_2 add constraint pk_test_tab_2 primary key (pk_id);

Table altered.

insert into ggtest.test_tab_2 values(9,'PK_TEST',sysdate,null);

1 row created.


Commit complete.


And let us compare with the result in HBase:

hbase(main):012:0> scan 'BDTEST:TEST_TAB_2'
ROW                                           COLUMN+CELL
 7|IJWQRO7T|2013-07-07:08:13:52               column=cf:ACC_DATE, timestamp=1459275116849, value=2013-07-07:08:13:52
 7|IJWQRO7T|2013-07-07:08:13:52               column=cf:PK_ID, timestamp=1459275116849, value=7
 7|IJWQRO7T|2013-07-07:08:13:52               column=cf:RND_STR_1, timestamp=1459275116849, value=IJWQRO7T
 8|TEST_INS1|2016-03-29:15:14:37|TEST_ALTER   column=cf:ACC_DATE, timestamp=1459278884047, value=2016-03-29:15:14:37
 8|TEST_INS1|2016-03-29:15:14:37|TEST_ALTER   column=cf:PK_ID, timestamp=1459278884047, value=8
 8|TEST_INS1|2016-03-29:15:14:37|TEST_ALTER   column=cf:RND_STR_1, timestamp=1459278884047, value=TEST_INS1
 8|TEST_INS1|2016-03-29:15:14:37|TEST_ALTER   column=cf:TEST_COL, timestamp=1459278884047, value=TEST_ALTER
 9                                            column=cf:ACC_DATE, timestamp=1462473865704, value=2016-05-05:14:44:19
 9                                            column=cf:PK_ID, timestamp=1462473865704, value=9
 9                                            column=cf:RND_STR_1, timestamp=1462473865704, value=PK_TEST
 9                                            column=cf:TEST_COL, timestamp=1462473865704, value=NULL
3 row(s) in 0.0550 seconds


It is fully dynamic and changes the row id column on the fly. Will it work with a unique index? Yes, it will:

delete from ggtest.test_tab_2 where pk_id=9;

1 row deleted.

alter table ggtest.test_tab_2 drop constraint pk_test_tab_2;

Table altered.

create unique index ggtest.ux_test_tab_2 on ggtest.test_tab_2 (pk_id);

Index created.

insert into ggtest.test_tab_2 values(10,'UX_TEST',sysdate,null);

1 row created.


Here is the newly inserted row.

hbase(main):017:0> scan 'BDTEST:TEST_TAB_2'
ROW                                           COLUMN+CELL
 10                                           column=cf:ACC_DATE, timestamp=1462474389145, value=2016-05-05:14:53:03
 10                                           column=cf:PK_ID, timestamp=1462474389145, value=10
 10                                           column=cf:RND_STR_1, timestamp=1462474389145, value=UX_TEST
 10                                           column=cf:TEST_COL, timestamp=1462474389145, value=NULL
 7|IJWQRO7T|2013-07-07:08:13:52               column=cf:ACC_DATE, timestamp=1459275116849, value=2013-07-07:08:13:52
 7|IJWQRO7T|2013-07-07:08:13:52               column=cf:PK_ID, timestamp=1459275116849, value=7

But just creating a non-unique index on the source makes no difference; it will not change anything. So, if we need GoldenGate to identify a key for a table, we have to have at least a unique constraint or unique index. Of course, this is just the default behavior for schema replication, and we can use KEYCOLS to define keys for particular tables, as in the sketch below.
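For example, a mapping like this one (the column choice is just for illustration) would make the replicat use PK_ID as the key for test_tab_2 even without a unique constraint or index on the source:

MAP ggtest.test_tab_2, TARGET bdtest.test_tab_2, KEYCOLS (pk_id);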

Interestingly, if we change the table structure it will affect all newly inserted rows, but it will not change existing rows, even if we update some of their values. It works this way as long as the table has a unique identifier and that identifier was not changed by the DDL operation.
Here is an example. We have a column TEST_COL in the table test_tab_2. Let's drop the column and update a row. Keep in mind that our primary key is the column PK_ID and we are not modifying the key.

alter table ggtest.test_tab_2 drop column TEST_COL;

Table altered.

update ggtest.test_tab_2 set rnd_str_1='TEST_COL' where pk_id=9;

1 row updated.


In HBASE we can see the same set of columns:

hbase(main):030:0> scan 'BDTEST:TEST_TAB_2'
ROW                                           COLUMN+CELL
 9                                            column=cf:ACC_DATE, timestamp=1462477581440, value=2016-05-05:15:46:13
 9                                            column=cf:PK_ID, timestamp=1462477794597, value=9
 9                                            column=cf:RND_STR_1, timestamp=1462477794597, value=TEST_COL
 9                                            column=cf:TEST_COL, timestamp=1462477581440, value=NULL
1 row(s) in 0.0200 seconds

We still have the deleted column TEST_COL, even though we've updated the row.
But if we insert a new row, it will have the new set of columns:

insert into ggtest.test_tab_2 values(10,'TEST_COL',sysdate);

1 row created.


Commit complete.

And in HBASE:

hbase(main):031:0> scan 'BDTEST:TEST_TAB_2'
ROW                                           COLUMN+CELL
 10                                           column=cf:ACC_DATE, timestamp=1462477860649, value=2016-05-05:15:50:55
 10                                           column=cf:PK_ID, timestamp=1462477860649, value=10
 10                                           column=cf:RND_STR_1, timestamp=1462477860649, value=TEST_COL
 9                                            column=cf:ACC_DATE, timestamp=1462477581440, value=2016-05-05:15:46:13
 9                                            column=cf:PK_ID, timestamp=1462477794597, value=9
 9                                            column=cf:RND_STR_1, timestamp=1462477794597, value=TEST_COL
 9                                            column=cf:TEST_COL, timestamp=1462477581440, value=NULL
2 row(s) in 0.0340 seconds

As in all other cases, a truncate on the source table is not replicated to the target; the operation is simply ignored, so you have to truncate the table in HBase yourself to keep the data in sync. If you then insert data again, the data in HBase will be "updated", but other rows will not be deleted; it behaves more like a "merge" operation.
Here is an example:

truncate table ggtest.test_tab_2;

Table truncated.

insert into ggtest.test_tab_2 values(10,'TEST_COL2',sysdate);

1 row created.


Commit complete.

select * from ggtest.test_tab_2;

---------------- ---------- -----------------
	      10 TEST_COL2  05/05/16 16:01:20


hbase(main):033:0> scan 'BDTEST:TEST_TAB_2'
ROW                                           COLUMN+CELL
 10                                           column=cf:ACC_DATE, timestamp=1462478485067, value=2016-05-05:16:01:20
 10                                           column=cf:PK_ID, timestamp=1462478485067, value=10
 10                                           column=cf:RND_STR_1, timestamp=1462478485067, value=TEST_COL2
 9                                            column=cf:ACC_DATE, timestamp=1462477581440, value=2016-05-05:15:46:13
 9                                            column=cf:PK_ID, timestamp=1462477794597, value=9
 9                                            column=cf:RND_STR_1, timestamp=1462477794597, value=TEST_COL
 9                                            column=cf:TEST_COL, timestamp=1462477581440, value=NULL
2 row(s) in 0.0300 seconds


I spent some time testing performance and found that the main bottleneck was my Oracle source rather than GoldenGate or HBase. I was able to sustain a transaction rate of up to 60 DML per second, at which point my Oracle DB started to struggle to keep pace because of commit waits; HBase and the replicat were absolutely fine. I also checked how it handles big transactions and inserted about 2 billion rows in one transaction, and it worked fine. Of course, this doesn't prove that your production configuration will be free of performance issues; to conduct real performance tests I would need a much bigger environment.
In addition, I noticed one more minor error in the Oracle documentation for the adapter, related to the "keyValuePairDelimiter" parameter. In the documentation it is replaced by "keyValueDelimiter": a small typo in which "keyValueDelimiter" is repeated twice; the first occurrence is correct and the second stands where "keyValuePairDelimiter" is supposed to be. Here is the link.

In summary, I can say that despite some minor issues, the adapters and GoldenGate for Big Data are quite mature and ready for real work. I think it is good, robust technology and, hopefully, new releases will continue to improve it. I am looking forward to using it in a real production environment with a significant workload. In the following posts I will try to test different DDL operations and maybe some other datatypes. Stay tuned.

Categories: DBA Blogs

Learn how to optimize text searches in SQL Server 2014 by using Full-Text Search – Part 1

Tue, 2016-05-10 09:50

In this article, we'll cover the functionality available in SQL Server 2014 for textual search, known as Full-Text Search, along with its installation and implementation. Additionally, we will see how to develop textual searches using the predicates CONTAINS, CONTAINSTABLE, FREETEXT and FREETEXTTABLE, and how to use the FILESTREAM feature to improve the search and storage of binary data in tables.

Search based on words and phrases is one of the main features of web search tools like Google and of digital document management systems. To perform these searches efficiently, many developers create highly complex applications that still lack the intelligence needed to find terms and phrases in the columns that store text and digital documents in database tables.

What the vast majority of these professionals don't know is that SQL Server has an advanced tool for textual search: Full-Text Search (FTS).

FTS has been present in SQL Server since version 7, and with it textual searches can be performed both in columns that store character data and in columns that store documents (for example, Office documents and PDFs) in their native form.

With options like searches for words and phrases, recognition of different languages, word derivation (for example: play, played and playing), the possibility of defining a thesaurus, the creation of ranked results, and the elimination of stopwords, FTS becomes a powerful tool for textual searches. The main drivers for the use of textual search are:

  • Databases are increasingly used as repositories of digital documents;
  • The cost of storing information has dropped considerably, enabling the storage of gigabytes, terabytes and even petabytes;
  • New types of digital documents are constantly being created, and the requirements for their storage and subsequent searching are becoming larger and more complex;
  • Developers need a robust and reliable interface for performing intelligent textual searches.

FTS has great advantages over other alternatives for textual search, such as the LIKE operator. The main tasks you can perform with FTS are:

  • Linguistic textual search, based on words or phrases in a particular language, taking into consideration verb conjugation, derived words, accents, and other features. Unlike the LIKE predicate, FTS uses an efficient indexing structure to perform the search;
  • Automatic removal of stopwords from a textual search. Stopwords are words that don't add anything to the result of the search, such as from, to, the, a, an;
  • Assignment of weights to the searched terms, making certain words more important than others within the same textual search;
  • Generation of ranked results, allowing a better view of the documents that are most relevant to the search carried out;
  • Indexing and searching of the most diverse types of digital documents: with FTS you can search text files, spreadsheets, ZIP files, and more.

In this article we will describe the architecture of FTS, its installation and configuration, the main T-SQL commands used in textual searches, the use of FTS in conjunction with FILESTREAM, and some techniques to optimize searches through FTS.

FTS architecture

The FTS architecture has several components working in conjunction with the SQL Server query processor to perform textual searches efficiently. Figure 1 illustrates the major components of the FTS architecture. Let's look at some of them:

  • Query client: The client application sends the textual queries to the SQL Server query processor. It is the responsibility of the client application to ensure that the textual queries are written correctly, following the FTS syntax;
  • SQL Server process (sqlservr.exe): The SQL Server process contains the query processor and also the FTS engine, which compiles and executes the textual queries. The integration of FTS within the SQL Server process offers a significant performance boost because it allows the query processor to build much more efficient execution plans for textual searches;
  • SQL Server query processor: The query processor has multiple subcomponents that are responsible for validating the syntax, compiling, generating execution plans and executing the SQL queries;
  • Full-Text Engine: When the SQL Server query processor receives an FTS query, it forwards the request to the FTS engine. The engine is responsible for validating the FTS query syntax, checking the full-text index, and then working together with the SQL Server query processor to return the textual search results;
  • Indexer: The indexer works in conjunction with other components to populate the full-text index;
  • Full-Text Index: The full-text index contains the most relevant words and their respective positions within the columns included in the index;
  • Stoplist: A stoplist is a list of stopwords. The indexer queries the stoplist during the indexing process, and during the execution of textual searches, to eliminate the words that don't add value to the search. SQL Server 2014 stores stoplists within the database itself, which makes their administration easier;
  • Thesaurus: The thesaurus is an XML file (stored externally to the database) in which you can define a list of synonyms to be used in textual searches. The thesaurus must be based on the language that will be used in the search. The full-text engine reads the thesaurus file at query execution time to check for synonyms that can increase the quality and comprehensiveness of the search;
  • Filter daemon host (fdhost.exe): Responsible for managing the filtering, word breaker and stemmer processes;
  • SQL Full-Text Filter Daemon Launcher (fdlauncher.exe): The process that starts the filter daemon host (fdhost.exe) when the full-text engine needs to use one of the processes it manages.


Figure 1. Architecture of FTS.

For a better understanding of the creation, use and maintenance of full-text index structures, you must also know the meaning of some important concepts. They are:

  • Term: The word, phrase or character used in the textual search;
  • Full-Text Catalog: A group of full-text indexes;
  • Word breaker: The process that delimits each word in a sentence, based on the grammar rules of the language selected when the full-text index is created;
  • Token: A word, phrase or character defined by the word breaker;
  • Stemmer: The process that generates different inflectional forms of words, based on the grammar rules of the language selected when the full-text index is created;
  • Filter: The component responsible for extracting textual information from documents stored with the varbinary(max) data type and sending this information to the word breaker process.
Indexing process

The indexing process is responsible for the initial population of a full-text index and for updating it when data modifications occur on the columns indexed by FTS. This process of initially populating and updating the full-text index is called a crawl.

When the crawl process starts, the FTS component known as the protocol handler accesses the data in the table being indexed and begins loading the existing content of that table into memory, a step also known as streaming. To access the data stored on disk, the protocol handler allows FTS to communicate with the storage engine. After streaming finishes, the filter daemon host process filters the data and initiates the word breaker and stemmer processes to fill in the full-text index.

During the indexing process the stoplist is queried to remove stopwords, so that the full-text index structure is filled only with words that are meaningful for textual searches. The last step of the indexing process is known as the master merge, in which all indexed words are merged into a single full-text index.

Although the indexing process causes high I/O consumption, it does not require blocking of the data being indexed. However, a query that uses the full-text index while the indexing process is running can return incomplete results.

Full-Text query processing

Full-text query processing uses the same word and phrase delimiters that were defined by the word breaker during the indexing process. Additional components, such as the stemmer and the thesaurus, may also be used, depending on the full-text predicates (CONTAINS or FREETEXT) used in the textual search. The use of full-text predicates will be discussed later in this article.

The stemmer process generates inflectional forms of the searched words. For example, from the term "play", the terms "plays", "played" and "playing" are also searched, in addition to the term "play" itself.

Through rules created in the thesaurus file you can use synonyms to replace or expand the searched terms. For example, when performing a textual search using the term "Ruby", the full-text engine can replace it with the synonym "red", or else automatically expand the search to consider the terms "red", "wine" and "scarlet" as well as "Ruby".
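As a quick illustration of these two behaviours (the table and column names below are invented for the example), the CONTAINS predicate can request inflectional forms or thesaurus expansion explicitly:

-- inflectional forms: also matches plays, played, playing
SELECT DocumentID, Title
FROM dbo.Documents
WHERE CONTAINS(Body, 'FORMSOF(INFLECTIONAL, play)');

-- thesaurus expansion: matches the synonyms defined for "ruby" in the thesaurus file
SELECT DocumentID, Title
FROM dbo.Documents
WHERE CONTAINS(Body, 'FORMSOF(THESAURUS, ruby)');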

After the full-text query is processed, the full-text engine provides information to the SQL query processor that helps create an optimized execution plan for the textual search. There is tight integration between the full-text engine and the SQL query processor (both are components of the SQL Server process), enabling textual searches to be executed in a more optimized way.

In the next post of this 4-part series, we will learn how to install FTS and how to use it. Stay tuned!

Categories: DBA Blogs

Did You Know: Oracle EBS R12.2 #1 – Managing OHS

Mon, 2016-05-09 15:21

For a long time now I’ve wanted to start a blog series outlining the not-so-obvious things that have changed in the new Oracle E-Business Suite R12.2. Here comes the new “Did You Know” series specifically for Oracle E-Business Suite! Lets start this series with Apache, aka Oracle HTTP Server.

People are already aware that OHS10g/OC4J is replaced with OHS11g/Weblogic in R12.2.X. On the surface it looks like a simple change, but a lot changed under the hood. In Oracle EBS 11i/R12.1, one could change apache port, but just updating the Context XML file and running autoconfig.

In R12.2, we have to change OHS stuff like web port by logging into the EM console url and then running $AD_TOP/bin/ script to sync OHS config parameters to the Context XML file. This script is only needed for OHS config for now. Any changes for Weblogic als need to be done in weblogic console url, not in Context XML file. But these changes in weblogic console are automatically propagated to xml file by the script.

You can get more details of the procedures by reviewing the MOS notes below.

  • E-Business Suite 12.2 Detailed Steps To Change The R12.2 Default Port To 80 (Doc ID 2072420.1)
  • Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2 (Doc ID 1905593.1)
Categories: DBA Blogs

MySQL InnoDB’s Full Text Search overview

Fri, 2016-05-06 12:56

NOTE: If you want to read and play with the interactive application, please go to the shinnyapps article. It has been developed using Shiny/R in order to allow you to see the effects of the algorithms.

Thanks to Valerie Parham-Thompson at Pythian and Daniel Prince at Oracle.

Github repository contains the code to generate and load the data and also, the Shiny/R code.

Some initial thoughts

A couple of weeks ago one of our customers came up with a question regarding FTS on the InnoDB engine. Although that question is not answered in the current article, it led me to the conclusion that FTS is sometimes misunderstood.

The point of this article is to show dynamically how the search algorithms work, using real data (the data sources were downloaded from the Gutenberg project) within an easy interface (please see the bottom of the ShinnyApps post here).

In order to show the effect of field size on the query expansion algorithm, you will see two main tables (bookContent and bookContentByLine), both containing the same books split in different ways: by line and by paragraph. You'll see the noise generated by the QUERY EXPANSION algorithm when the indexed fields are too large.

For the sake of simplicity, in this article we won’t go through the FTS parsers. That is possible material for a future post.

Why do I consider FTS sometimes misunderstood?

FTS is a technology that can be used for many purposes, not only simple searches. Generally, FTS engines are deployed as a service for web or document searches, which usually calls for technologies like Solr, ElasticSearch or Sphinx. However, certain business rules require complex searches, and having such a feature inside the RDBMS can be a win.

An RDBMS isn't a good place for a massive amount of FTS queries if you aren't using any of the join capabilities it offers, or its ACID properties.

As I said above, FTS inside an RDBMS is totally acceptable if you are also using at least one main RDBMS feature required by your business model.


To start showing the effects of the algorithms, the following example searches the word 'country' using query expansion. This means that we are not looking only for exact matches, but also for the entries containing the words that appear most often in the rows where the exact match was found.

In the SELECT clause you'll see two FTS expressions, using NATURAL LANGUAGE mode with query expansion and BOOLEAN mode respectively.

View the code on Gist.
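The Gist itself is not embedded in this copy of the post. A minimal sketch of such a query, assuming the bookContentByLine table and a content column from the linked repository, would look like this:

SELECT bookid,
       MATCH(content) AGAINST ('country' WITH QUERY EXPANSION) AS QERank,
       MATCH(content) AGAINST ('country' IN BOOLEAN MODE)      AS BoolRank,
       content
FROM bookContentByLine
WHERE MATCH(content) AGAINST ('country' WITH QUERY EXPANSION)
ORDER BY QERank DESC
LIMIT 10;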

The noise generated by the query expansion is expected and described in the official documentation here.

The interesting case is the following row, which has 2 exact occurrences (you can see them at positions 1 and 63) and yet does not have the highest rank when using query expansion. Remember, this is expected.

Text: "country districts. As Lucca had five gates, he divided his own country"
bookid: 1232
pos: 1,63
QERank: 80
BoolRank: 14

This is even worse with large sentences. In the example below you will see the same query against the table that stores text by paragraph. The boolean rank places some of the entries way above the others; the query expansion, however, puts at the top records that do not necessarily have many exact matches.

View the code on Gist.

Query expansion is useful when you intend to find entries that contain many of the words that appear frequently alongside the search term. Having large text fields increases the probability of finding more words that co-occur with the search term. In the case of the bookContent table (the by-paragraph table), the average field size is 443.1163 characters.


There is a way to inspect the contents of the FTS indexes. As you may have noticed in the previous examples, I used the set global innodb_ft_aux_table = 'ftslab/bookContent'; statement, which exposes the index contents through INFORMATION_SCHEMA for easy querying.

If you use RDS, the innodb_ft_aux_table option is not available, as it is a GLOBAL variable and requires the SUPER privilege.

For example, you can easily get the most frequent tokens:

View the code on Gist.

We can query the index contents with a simple SQL statement like the following:

View the code on Gist.
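The Gists are not embedded here either; a sketch of both queries, using the INFORMATION_SCHEMA table that is populated once innodb_ft_aux_table is set, could look like this:

-- most frequent tokens in the index
SELECT word, COUNT(DISTINCT doc_id) AS docs, COUNT(*) AS occurrences
FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE
GROUP BY word
ORDER BY occurrences DESC
LIMIT 20;

-- raw index contents: token, document id and position within the document
SELECT word, doc_id, position
FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE
LIMIT 20;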

In the examples shown above there is no intention to compare the rank scores, as they are based on different algorithms. The idea is to show that QUERY EXPANSION can return undesired results in some cases due to the way it works.

Building custom stopwords

This probably isn't very useful information, as most of these words appear very frequently and are modal verbs, adverbs, pronouns, determiners, etc. It could be that you are not interested in indexing those words. If that's the case, you can add them as stopwords in your own stopwords table, especially if you are more interested in boolean searches and can afford to lose some of the language's expressions.

We can build a custom stopwords table based on our current data:

View the code on Gist.

Let’s build our stopwords table using both default and new entries and keeping the alphabetical order:

View the code on Gist.
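As a rough sketch of what those two Gists do (the table name and the frequency threshold are my own assumptions), you can derive candidate stopwords from the index itself and then point InnoDB at your custom table:

-- the stopword table must be InnoDB with a single VARCHAR column named "value"
CREATE TABLE ftslab.my_stopwords (value VARCHAR(30)) ENGINE=InnoDB;

-- seed it with the default list plus the most frequent tokens found above
INSERT INTO ftslab.my_stopwords (value)
SELECT value FROM (
    SELECT value FROM INFORMATION_SCHEMA.INNODB_FT_DEFAULT_STOPWORD
    UNION
    SELECT word FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE
    GROUP BY word HAVING COUNT(*) > 10000   -- arbitrary threshold
) w
ORDER BY value;

-- use the custom table for FULLTEXT indexes created from now on
SET GLOBAL innodb_ft_server_stopword_table = 'ftslab/my_stopwords';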

The idea behind choosing our own stopwords is to measure how much index space we save by filtering out words that are extremely frequent and don't add meaning to the search. This topic could be covered in a separate blog post.

Going ahead on choosing stop words

The full article is amazingly interesting. In brief, it says that the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, and so on (rank-frequency distribution is an inverse relation).

Considerations and recommendations

- Use QUERY EXPANSION only if you are interested in finding related entries rather than exact matches. Remember that the field size is crucial when using this.
- FTS is not the best fit for exact string matches on single columns. You don't want to use FTS to search emails, names or last names stored in a single column; for those you'll probably use other techniques such as reverse searches, the exact match operator (=) or hashing (e.g. CRC32 for emails or texts shorter than 255 characters).
- Keep your FTS indexes short. Do not add ALL the text columns. Parse the user search in your application first and adapt the query.
- If you are using BOOLEAN MODE, you can use the rank score to filter rows. MySQL is clever enough to optimize the FTS functions to avoid double execution. You can do this using something like: match(content,title) against ("first (third)") > 1. Generally, scores lower than 1 can be ignored when using boolean or natural language mode searches.
- `OPTIMIZE TABLE` does a full rebuild of the table. To avoid this, set innodb_optimize_fulltext_only=1 in order to do an incremental maintenance of the FTS index instead.
- Recall that NATURAL LANGUAGE MODE does not interpret the operators the way BOOLEAN MODE does. This affects the ranking score (try +bad (thing), for example).
- If you plan to order by rank, it is not necessary to specify an `ORDER BY` clause, as InnoDB does the ordering after retrieving the doc ids. Also, the behavior differs from the default in that it returns the heaviest ranks at the top (like an ORDER BY rank DESC).
- If you come from MyISAM's FTS implementation, recall that the ranking scoring is different.
- Create the FULLTEXT index after the data is loaded (InnoDB bulk load). When restoring FTS backups, you will probably hit the "ERROR 182 (HY000) at line nn: Invalid InnoDB FTS Doc ID" error.
- Try to avoid using more than one FTS expression in the WHERE clause. Keep in mind that this affects the order of the results and consumes a considerable amount of CPU. InnoDB orders by the last expression in the WHERE clause. WL#7123.
- Also, if you avoid the rank information in the projection (SELECT clause) and use other aggregations like count(*), the "no ranking" FT_hints will be used. The LIMIT hint won't be used if an ORDER BY and a MATCH clause in the projection are invoked explicitly.

View the code on Gist.

- If you plan to use the FTS_DOC_ID column with the AUTO_INCREMENT option, keep in mind that there is a limitation regarding this: you must declare it with a single-column PRIMARY KEY constraint or a UNIQUE index, and the data type is restricted to `BIGINT UNSIGNED`. For example:

View the code on Gist.
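The Gist is not shown here; a minimal sketch of a valid definition (the table and other column names are just for illustration) would be:

CREATE TABLE ftslab.articles (
  FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- column name and type are mandatory
  title      VARCHAR(200),
  body       TEXT,
  UNIQUE KEY FTS_DOC_ID_INDEX (FTS_DOC_ID),            -- single-column unique index (or primary key)
  FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;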


The ft_query_expansion_limit variable controls the number of top matches used with `WITH QUERY EXPANSION` (it affects only MyISAM). Reference.

Bug 80347 – Invalid InnoDB FTS Doc ID

emanuel@3laptop ~/sandboxes/rsandbox_5_7_9 $ ./m dumpTest < full.dump
ERROR 182 (HY000) at line 73: Invalid InnoDB FTS Doc ID

emanuel@3laptop ~/sandboxes/rsandbox_5_7_9 $ ./m dumpTest < ddl.dump
emanuel@3laptop ~/sandboxes/rsandbox_5_7_9 $ ./m dumpTest < onlyData.dump
emanuel@3laptop ~/sandboxes/rsandbox_5_7_9 $ ./m dumpTest < full.dump
ERROR 182 (HY000) at line 73: Invalid InnoDB FTS Doc ID

mysqldump is not very clever if you use `FTS_DOC_ID`:

2016-02-13T22:11:53.125300Z 19 [ERROR] InnoDB: Doc ID 10002 is too big. Its difference with largest used Doc ID 1 cannot exceed or equal to 10000

It takes dumps without considering the restriction coded in `innobase/row/`:

Difference between Doc IDs are restricted within
4 bytes integer. See fts_get_encoded_len()

The fix for this is to back up the table in chunks of 10,000 documents.

Other useful links

Fine tuning
Maintenance: innodb_optimize_fulltext_only
Writing FTS parser plugins

Categories: DBA Blogs

InnoDB flushing and Linux I/O

Thu, 2016-05-05 12:06

Since documentation is not very clear to me on the topic of InnoDB flushing in combination with Linux IO (specifically the write system call), I decided to put together this article in hopes of shedding some light on the matter.

How Linux does I/O

By default, the write() system call returns after all data has been copied from the user space file descriptor into the kernel space buffers. There is no guarantee that data has actually reached the physical storage.

The fsync() call is our friend here. This will block and return only after the data and metadata (e.g. file size, last update time) is completely transferred to the actual physical storage.

There is also fdatasync() which only guarantees the data portion will be transferred, so it should be faster.

There are a few options that we can specify at file open time, that modify the behaviour of write():

O_SYNC
In this case, the write() system call will still write data to kernel space buffers, but it will block until the data is actually transferred from the kernel space buffers to the physical storage. There is no need to call fsync() after.

O_DIRECT
This completely bypasses any kernel space buffers, but requires that the writes are the same size as the underlying storage block size (usually 512 bytes or 4k). By itself, it does not guarantee that the data is completely transferred to the device when the call returns.


As stated above, we would need to use both options together to guarantee true synchronous IO.

Relation with InnoDB flushing

The innodb_flush_method parameter controls which of these options MySQL uses when writing to files.

At the time of this writing, we have the following options:

NULL

This is the default value, and is equivalent to specifying the fsync option.

fsync
Both data and redo log files will be opened without any special options, and fsync() will be used when the db needs to make sure the data is transferred to the underlying storage.

O_DSYNC

This one is confusing, as O_DSYNC is actually replaced with O_SYNC within the source code before calling open(). It is mentioned that this is due to problems on certain Unix versions. So O_SYNC will be used to open the log files, and no special options for the datafiles. This means fsync() still needs to be used to flush the data files.

O_DIRECT
Data files are opened with O_DIRECT. Log files are opened with no extra options. Some filesystems (e.g. XFS) do not guarantee metadata without the fsync() call, so it is still used as safety measure.

O_DIRECT_NO_FSYNC
InnoDB uses O_DIRECT during flushing I/O, but skips the fsync() system call afterwards. This can provide some performance benefits if you are using a filesystem that does not require the fsync() to sync metadata.

I am deliberately not mentioning the experimental options littlesync and nosync.

There is also an extra option in Percona Server:

ALL_O_DIRECT
It uses O_DIRECT to open the log files and data files and uses fsync() to flush both the data and the log files.

Which InnoDB flushing method should I use?

The general consensus, if you have a battery-backed write cache or a fast IO subsystem (e.g. SSDs), is to use the O_DIRECT method. However, it is better practice to run tests to determine which method provides better performance for each particular environment; a quick check is sketched below.
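For reference, innodb_flush_method is not a dynamic variable, so changing it means editing the configuration file and restarting MySQL; the sketch below only shows how to verify what the running instance is using:

-- check which flush method the running instance uses
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';

-- to change it, set it under [mysqld] in my.cnf, for example:
--   innodb_flush_method = O_DIRECT
-- and then restart the server.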


One downside of using O_DIRECT is that it requires the innodb-buffer-pool-size to be configured correctly. For example, if you accidentally leave your buffer pool size at the default value of 128M, but have 16G of RAM, the buffer pool contents will at least sit in the filesystem cache. This will not be true if you have O_DIRECT enabled (I would like to thank Morgan Tocker for his contribution regarding this section of the post).



Categories: DBA Blogs

Log Buffer #472: A Carnival of the Vanities for DBAs

Thu, 2016-05-05 09:14

This Log Buffer Edition takes into account blog posts from Oracle, SQL Server and MySQL.

Oracle:
Enterprise Manager Support Files 101- The EMOMS files

From time to time we see a complaint on OTN about the stats history tables being the largest objects in the SYSAUX tablespace and growing very quickly.

Delphix replication and push button cloud migration

PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data

Contemplating Upgrading to OBIEE 12c?

SQL Server:

Modifying the SQL Server Model System Database to Customize New Database Settings

Azure SQL Database Elastic Database Jobs

SQL Server Resource Governor

Add a Custom Index in Master Data Services 2016

Unified Approach to Generating Documentation for PowerShell Cmdlets

MySQL:
Writing SQL that works on PostgreSQL, MySQL and SQLite

MariaDB MaxScale 1.4.2 GA is available for download

MariaDB ColumnStore, a new beginning

Planets9s – Watch the replay: Become a MongoDB DBA (if you’re re really a MySQL user)

Upgrading to MySQL 5.7, focusing on temporal types

Categories: DBA Blogs

How to Deal with MetaData Lock

Thu, 2016-05-05 08:59
What is MetaData Lock?

MySQL uses metadata locking to manage concurrent access to database objects, and to ensure data consistency when performing modifications to the schema: DDL operations. Metadata locking applies not just to tables, but also to schemas and stored programs (procedures, functions, triggers, and scheduled events).

In this post I am going to cover metadata locks on tables and triggers, that are usually seen by DBAs during regular operations/maintenance.

Kindly refer to these 4 different connections to MySQL Instance:

Screen Shot 2016-04-19 at 2.58.52 pm


The screenshot shows that an uncommitted transaction can cause a metadata lock that blocks ALTER operations. The ALTER will not proceed until the transaction is committed or rolled back. What is worse, once the ALTER is issued and waiting, any further queries against that table (even simple SELECT queries) will be blocked behind it. If the ALTER is an online DDL operation, available in 5.6+, queries will proceed as soon as the ALTER actually begins.
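Since the screenshots do not reproduce well here, the scenario can be sketched as follows (the table name is made up for illustration):

-- session 1: leave a transaction open without committing
BEGIN;
SELECT * FROM test.t1 LIMIT 1;

-- session 2: the ALTER now waits on "Waiting for table metadata lock"
ALTER TABLE test.t1 ADD COLUMN c2 INT;

-- session 3: even a simple read queues up behind the pending ALTER
SELECT * FROM test.t1 LIMIT 1;

-- session 4: SHOW PROCESSLIST shows sessions 2 and 3 waiting for the metadata lock
SHOW PROCESSLIST;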

Refer to this video tutorial on MySQL Metadata Locks for further context.

These days we have the DBA's favourite tool, pt-online-schema-change (osc). Let's have a look at what happens if we run osc instead of a plain ALTER.

Screen Shot 2016-04-19 at 3.07.26 pm

OSC gets stuck on the metadata lock at the point of creating the triggers on the table.

Let's jump to the second topic: how can we mitigate MDL issues?

Mitigating the MetaData Lock Issues

There are various solutions to tackling MDL:

  1. Appropriate setting of wait_timeout variable which will kill stuck/sleep threads after a certain time.
  2. Configure pt-kill to get rid of stuck/sleep threads  
  3. Fix code where transactions are not committed after performing DB queries
How to kill Sleep Connections in RDS which are causing MDL

If you are on RDS and your MySQL instance has a bunch of Sleep threads and you don't know which connection is causing the metadata lock, you may have to kill all the Sleep queries that have been idle for more than a certain time. As we know, "kill thread_id" is not permitted in RDS, but you can use the query below to generate the exact statements needed to kill the Sleep threads.

Example Output:

| CONCAT('CALL mysql.rds_kill ( ',id,')',';') |
| CALL mysql.rds_kill ( 5740758); |
| CALL mysql.rds_kill ( 5740802); |
| CALL mysql.rds_kill ( 5740745); |
| CALL mysql.rds_kill ( 5740612); |
| CALL mysql.rds_kill ( 5740824); |
| CALL mysql.rds_kill ( 5740636); |
| CALL mysql.rds_kill ( 5740793); |
| CALL mysql.rds_kill ( 5740825); |
| CALL mysql.rds_kill ( 5740796); |
| CALL mysql.rds_kill ( 5740794); |
| CALL mysql.rds_kill ( 5740759); |
| CALL mysql.rds_kill ( 5740678); |
| CALL mysql.rds_kill ( 5740688); |
| CALL mysql.rds_kill ( 5740817); |
| CALL mysql.rds_kill ( 5740735); |
| CALL mysql.rds_kill ( 5740818); |
| CALL mysql.rds_kill ( 5740831); |
| CALL mysql.rds_kill ( 5740795); |
| CALL mysql.rds_kill ( 4926163); |
| CALL mysql.rds_kill ( 5740742); |
| CALL mysql.rds_kill ( 5740797); |
| CALL mysql.rds_kill ( 5740832); |
| CALL mysql.rds_kill ( 5740751); |
| CALL mysql.rds_kill ( 5740760); |
| CALL mysql.rds_kill ( 5740752); |
| CALL mysql.rds_kill ( 5740833); |
| CALL mysql.rds_kill ( 5740753); |
| CALL mysql.rds_kill ( 5740722); |
| CALL mysql.rds_kill ( 5740723); |
| CALL mysql.rds_kill ( 5740724); |
| CALL mysql.rds_kill ( 5740772); |
| CALL mysql.rds_kill ( 5740743); |
| CALL mysql.rds_kill ( 5740744); |
| CALL mysql.rds_kill ( 5740823); |
| CALL mysql.rds_kill ( 5740761); |
| CALL mysql.rds_kill ( 5740828); |
| CALL mysql.rds_kill ( 5740762); |
| CALL mysql.rds_kill ( 5740763); |
| CALL mysql.rds_kill ( 5740764); |
| CALL mysql.rds_kill ( 5740773); |
| CALL mysql.rds_kill ( 5740769); |
| CALL mysql.rds_kill ( 5740770); |
| CALL mysql.rds_kill ( 5740771); |
| CALL mysql.rds_kill ( 5740774); |
| CALL mysql.rds_kill ( 5740784); |
| CALL mysql.rds_kill ( 5740789); |
| CALL mysql.rds_kill ( 5740790); |
| CALL mysql.rds_kill ( 5740791); |
| CALL mysql.rds_kill ( 5740799); |
| CALL mysql.rds_kill ( 5740800); |
| CALL mysql.rds_kill ( 5740801); |
| CALL mysql.rds_kill ( 5740587); |
| CALL mysql.rds_kill ( 5740660); |
53 rows in set (0.02 sec)
  1. Capture the SQL statements to kill the Sleep threads:

mysql --skip-column-names -e 'SELECT CONCAT("CALL mysql.rds_kill ( ",id,")",";") FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND="Sleep" AND TIME > 10' > kill_sleep_threads.sql

2. Execute the statements from the mysql prompt:


mysql> source kill_sleep_threads.sql
Improvements in MySQL 5.7 related to MDL

Generally, we would want to kill as few connections as possible. But the trouble with metadata locks prior to 5.7 is that there is no insight available into which threads are taking the metadata lock. In MySQL 5.7, there are several improvements in getting insight into metadata lock information.

The Performance Schema now exposes metadata lock information:

  • Locks that have been granted (shows which sessions own which current metadata locks)
  • Locks that have been requested but not yet granted (shows which sessions are waiting for which metadata locks).
  • Lock requests that have been killed by the deadlock detector or timed out and are waiting for the requesting session’s lock request to be discarded

This information enables you to understand metadata lock dependencies between sessions. You can see not only which lock a session is waiting for, but which session currently holds that lock.

The Performance Schema now also exposes table lock information that shows which table handles the server has open, how they are locked, and by which sessions.

To check who holds the metadata lock in MySQL 5.7, we have to enable the global_instrumentation consumer and the wait/lock/metadata/sql/mdl instrument.

Below is an example of enabling global_instrumentation and wait/lock/metadata/sql/mdl:

mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'global_instrumentation';

Query OK, 0 rows affected (0.00 sec)

Rows matched: 1  Changed: 0  Warnings: 0

mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';

Query OK, 1 row affected (0.00 sec)

Rows matched: 1  Changed: 1  Warnings: 0

Once global_instrumentation and wait/lock/metadata/sql/mdl are enabled, a query like the one sketched below will show the lock status of the connections.
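The exact query used in the original post is not reproduced here; a query along these lines, joining performance_schema.metadata_locks to performance_schema.threads, produces output in the shape shown below:

SELECT ml.OBJECT_TYPE, ml.OBJECT_SCHEMA, ml.OBJECT_NAME,
       ml.LOCK_TYPE, ml.LOCK_STATUS,
       t.THREAD_ID, t.PROCESSLIST_ID, t.PROCESSLIST_INFO
FROM performance_schema.metadata_locks ml
JOIN performance_schema.threads t
  ON t.THREAD_ID = ml.OWNER_THREAD_ID
WHERE t.PROCESSLIST_ID <> CONNECTION_ID();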


| TABLE | sbtest | sbtest1 | SHARED_READ | GRANTED | 29 | 4 | NULL |
| GLOBAL | NULL | NULL | INTENTION_EXCLUSIVE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| SCHEMA | sbtest | NULL | INTENTION_EXCLUSIVE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | SHARED_UPGRADABLE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | EXCLUSIVE | PENDING | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | SHARED_READ | PENDING | 31 | 6 | select count(*) from sbtest1 |
6 rows in set (0.00 sec)



The best practice when running any DDL operation, even with the performance schema improvements in 5.7, is to check the processlist for the presence of MDL waits and to check SHOW ENGINE INNODB STATUS for long active transactions. Kill the DDL operation while resolving the MDL issue so as to prevent a query pileup. For a temporary fix, implement pt-kill or wait_timeout. To solve the metadata lock issue for good, review and fix the application code/scripts that leave transactions uncommitted.

Categories: DBA Blogs

Transparent Data Encryption for SQL Server in an Availability Group

Tue, 2016-05-03 13:24

With all the new Always On features in SQL Server 2016, which you can read up on here, it's easy to forget about Transparent Data Encryption (TDE). This blog post will focus on TDE.

TDE encrypts database files at rest. This means your .MDF and .NDF files, and consequently your backups, will be encrypted, so you will not be able to detach the database files and restore them on another server unless that server has the same certificate that was used to encrypt the database.

In this blog post I am using SQL Server 2014 and will explain how to enable TDE on an existing AG group database.

  1. The first thing we need to check is whether the server has a master encryption key on all replicas in the AG group:
SELECT * FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##'

The screenshot below shows that I don't have a key, so I need to create one.

No Master Encryption Key

  2. Create a database master encryption key on each of the replicas in the AG group. It is important to use a complex password; a sketch of the command is shown below.

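The command itself did not survive the formatting of this post; a sketch of what it looks like (the password is a placeholder you should replace) is:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongComplexPassword>';  -- placeholder password
GO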

  3. Run the code from step 1 again, and this time you should see the below:

Master Encryption Key

  4. Now we need to create a certificate on the primary replica to use for the encryption of the database. This can be accomplished with the statement below.

USE master;
GO
CREATE CERTIFICATE BackupEncryptionCert  -- certificate name assumed; keep it consistent in the later steps
WITH SUBJECT = 'SQL Server 2014 AdventureWorks2012 Encryption Certificate';

  5. Validate the certificate:

SELECT name, pvt_key_encryption_type_desc, thumbprint FROM sys.certificates

Validate Encryption Key

The thumbprint will be useful because when a database is encrypted, it will indicate the thumbprint of the certificate used to encrypt the database encryption key. A single certificate can be used to encrypt more than one database encryption key, but there can also be many certificates on a server, so the thumbprint identifies which server certificate is needed.

  6. Next we need to back up the certificate on the primary replica:

BACKUP CERTIFICATE BackupEncryptionCert  -- certificate name assumed from step 4
TO FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY ( FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak' ,
ENCRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd');

Encryption Files

The BACKUP CERTIFICATE command will create two files.  The first file is the server certificate itself.  The second file is a “private key” file, protected by a password. Both files and the password will be used to restore the certificate onto other instances.

  7. The files created in step 6 need to be copied to each of the other replicas, and the certificate created there in SQL Server. After the files are copied, the command below can be used to create the certificate:

CREATE CERTIFICATE BackupEncryptionCert  -- certificate name assumed from step 4
FROM FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY (FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak',
DECRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd');

  8. That's all the configuration needed for each instance; now we are ready to start encrypting the database. We need to tell SQL Server which encryption algorithm and which certificate to use. This can be done using the following code on the primary replica.

USE AdventureWorks2012;
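The statement that follows the USE did not survive formatting; a sketch of it (the algorithm and the certificate name are assumptions, so use the certificate you created in step 4) is:

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256                                 -- algorithm assumed; AES_128 and AES_192 are also valid
ENCRYPTION BY SERVER CERTIFICATE BackupEncryptionCert;   -- certificate name assumed from step 4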

  9. Finally, the last step is to enable TDE by executing the below command on the primary replica.

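The command itself is missing from this copy of the post; enabling TDE for the database is a single statement, and progress can be checked afterwards in sys.dm_database_encryption_keys:

ALTER DATABASE AdventureWorks2012 SET ENCRYPTION ON;

-- optional: monitor the encryption progress
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;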


And that’s it, I hope you enjoyed this tutorial and found it informative. If you have any questions, please comment below.

Categories: DBA Blogs

Reserved words usage in MySQL

Mon, 2016-05-02 15:07

It is not uncommon to come across MySQL databases where reserved words are in use as identifiers for any kind of database objects.

Perhaps when the application schema was implemented, the words were not reserved yet, and they became reserved later on a subsequent MySQL release.

It is a good practice to check reserved words usage prior to doing any database upgrades, as any newly reserved keywords will cause syntax errors on the new version.

This is usually not a problem if proper quoting is used for referencing the objects, as described on the official manual page.

The actual steps to do this depend on the environment; for example, the following can be configured to tell Hibernate to escape identifiers:

<property name="hibernate.globally_quoted_identifiers" value="true"/>

This does not appear to be documented properly (there is an open bug unresolved at the time of this writing).

However, we cannot assume that all application code is properly escaped to deal with this kind of issue.

So what are the symptoms?

Error 1064 will be reported while trying to use a reserved word:

mysql> CREATE TABLE interval (begin INT, end INT);
ERROR 1064 (42000): You have an error in your SQL syntax ...
near 'interval (begin INT, end INT)'
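
For comparison, the same statement succeeds once the identifiers are quoted with backticks, reusing the names from the failing statement above:

mysql> CREATE TABLE `interval` (`begin` INT, `end` INT);
Query OK, 0 rows affected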
How can we check for reserved words?

The following procedure can help you find out if any particular MySQL version’s reserved words are in use:

  1. Using the list on the corresponding manual page, create a text file with one reserved word on each line
  2. Load data into a temporary table
     USE test;
    CREATE TABLE reserved_words (word VARCHAR(50));  -- single-column table; the column name is arbitrary since the checks below use SELECT *
    LOAD DATA INFILE 'reserved_words.txt' INTO TABLE test.reserved_words;
  3. Check for any column names using reserved keywords
    SELECT table_schema, table_name, column_name, ordinal_position 
    FROM information_schema.columns
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND column_name = ANY ( SELECT * FROM test.reserved_words ) 
    ORDER BY 1,2,4;
  4. Check for any table names using reserved keywords
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND table_name = ANY ( SELECT * FROM test.reserved_words );
  5. Check for any procedures or functions
    SELECT routine_schema, routine_name, routine_type
    FROM information_schema.routines
    WHERE routine_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' )
    AND routine_name = ANY ( SELECT * FROM test.reserved_words );

I hope this post helps you avoid one of the many issues you may encounter during the challenging task of database upgrades.

Categories: DBA Blogs

Leaving Behind the Limits of Binary Thinking for Full Inclusiveness

Fri, 2016-04-29 15:39


When Pythian became the first tech company in Canada to release gender-based metrics last November, we wanted to make a bold statement with the launch of the Pythia Program. And apparently it worked. We’ve already increased the number of female applicants by more than 10% in just one quarter. Our internal Pythia Index has also risen three percentage points, from 56% to 59%. And just this week, Pythian’s CEO Paul Vallée received the WCT Diversity Champion award in recognition of his leadership and efforts to promote diversity in the workplace and a more inclusive tech industry that promotes men and women from all backgrounds.

Despite a clear case for gender parity, and research confirming the financial return for companies, full inclusion is still ‘controversial’ to implement. A lot of this has to do with the unconscious associations we still have with male and female roles which are placed in opposition. This kind of binary thinking is rampant, especially in our social constructions of what constitutes masculinity and femininity.

When the Pythia Program was in its early stages, we actually noticed a lot of binary, either/or thinking was shaping our assumptions. Off/On. 0/1. We can do this OR that. We can empower women technical professionals OR talk to employees about unconscious bias. We can take a stand on gender diversity OR maintain good relationships with male colleagues. Wait a minute…why can’t we do both?

If we had continued to believe our choices were that limited, it would have seriously eroded any impetus to act on our values of gender equity and inclusiveness. It was time to reframe our thinking, and that’s when we stopped compromising. A bolder stance emerged when we did away with limited, binary thinking that was trapping us in false dichotomies.

Let’s look at this from a data perspective, because that’s what we love and do best.

Current computer chips store information in electrical circuits as binary bits, each in a state of 0 or 1, so there’s a finite amount of data that can be processed. Quantum computers, however, use qubits, which can be in a state of 0, 1, or both at the same time, giving quantum computers mind-blowing processing power.

So if we apply this idea of ‘binary’ vs. ‘quantum’ to a human context, could we potentially become quantum thinkers? Quantum thinking would be holistic, enabling the mind to function at a greater level of complexity. It’s an unlimited approach that ‘either/or’ binary thinking simply does not permit. Wouldn’t it be more exciting to break away from these limitations and move to a higher, more innovative level? Things look different when binary thinking is disrupted. Start by replacing either/or with ‘and’.

We can help achieve gender parity AND we can achieve diversity in other important areas. Pythian can be inclusive, people-focused AND financially strong. Men can be powerful leaders AND feminist.

There is one big exception, one area where it’s either/or: whether you support the status quo of tech’s current ‘bro culture’, or inclusive leadership that embraces the value of multiple perspectives. Those two states cannot co-exist.

As he accepted his award for Diversity Champion at the WCT Gala on April 27, Pythian CEO Paul Vallée made his position clear “To the women who are working hard in high tech, and who are marginalized by bro culture — which is a real problem, we are in the midst of a culture war — I salute you and keep fighting the good fight because we will prevail. To the male leaders that have taken sides in this battle, the Pythia Index will help you keep score, whether you’re on my team [fighting to end bro culture] or the opposite team.”

As Einstein said, “you can’t solve problems with the same thinking used to create them.” And lack of gender diversity in the tech industry is a problem Pythian wants to help solve.

Categories: DBA Blogs

A Practitioner’s Assessment: Digital Transformation

Thu, 2016-04-28 13:49


Rohinee Mohindroo is a guest blogger on Pythian Business Insights.


trans·for·ma·tion/ noun: a thorough or dramatic change in form or appearance

The digital transformation rage continues into 2016 with GE, AT&T, GM, Domino’s, Flex, and Starbucks, to name a few. So what’s the big deal?

Technology continues to advance at a rapid rate. Digital transformation simply refers to the rate at which these technological trends are embraced by an individual, organization or team.

Organizational culture and vocabulary are leading indicators of the digital transformation maturity level.


Level 1: Business vs. Tech (us vs. them). Each party is fairly ignorant of the value and challenges of the other. Each blames the other for failures and takes credit for successes. Technology is viewed as a competency with a mandate to enable the business.

Level 2: Business and Tech (us and them). Each party is aware of the capabilities and challenges of the other. Credit for success is shared; failure is not discussed publicly or transparently. Almost everyone is perceived to be technically literate, with a desire to deliver business differentiation.

Level 3: Business is Tech (us). Notable awareness of the business model and technology capabilities and opportunities throughout the organization. Success is expected and failure is an opportunity. The organization is relentlessly focused on learning from customers and partners with a shared goal to continually re-define the business.

Which level best describes you or your organization? Please share what inhibits your organization from moving to the next level.


Categories: DBA Blogs