Pythian Group

Love Your Data

When the default value is not the same as the default

Mon, 2016-04-25 11:39

I was recently working on a minor problem where controlfile autobackups were written to the wrong location during RMAN backups. Taking controlfile autobackups is generally a good idea, even if you configure controlfile backups yourself. Autobackups also include an spfile backup which, though not critical for a restore, is still convenient to have. And autobackups are taken not only after backups but, more importantly, every time you change the physical structure of your database, such as adding or removing datafiles and tablespaces, which would make a restore with an older controlfile much harder.

What happened in this case was that the CONTROLFILE AUTOBACKUP FORMAT parameter was changed from the default ‘%F’ to the value ‘%F’. Yes, the values are the same. But setting a value and not leaving it at the default changed the behaviour of those autobackups. Where by default ‘%F’ means writing to the flash recovery area, explicitly setting the format parameter to ‘%F’ will save the autobackup to the folder $ORACLE_HOME/dbs/.

See for yourself. The following shows an autobackup while the parameter is set to the default; as expected, the autobackup is written to the flash recovery area. So that is the correct location, but the filename is a bit off: it should be c-DBID-YYYYMMDD-SERIAL.


RMAN configuration parameters for database with db_unique_name CDB1 are:

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172428_ckb62f38_.bkp tag=TAG20160418T172428 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/autobackup/2016_04_18/o1_mf_s_909509070_ckb62gko_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16

Now we set the format string to ‘%F’ and observe that the autobackup is not written to the FRA but to $ORACLE_HOME/dbs. At least it has the filename we were expecting.


new RMAN configuration parameters:
new RMAN configuration parameters are successfully stored

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172447_ckb62z7f_.bkpx tag=TAG20160418T172447 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/product/ comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16


RMAN configuration parameters for database with db_unique_name CDB1 are:

This is like Schrödinger’s parameter, where you can either get the correct location or the correct name, but not both. To be fair, not assigning the right name to the autobackup in the FRA does not matter much because the files will be found during a restore anyway.

At this point it is good to remember how to use CLEAR to reset a parameter to its default instead of just setting the default value.
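For the autobackup format, that is:

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
```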


old RMAN configuration parameters:
RMAN configuration parameters are successfully reset to default value


RMAN configuration parameters for database with db_unique_name CDB1 are:

I have tested this in versions 10g and 11g, with the same result. The behaviour is also not unknown: in fact, bug 4248670 was logged against this in 2005 but has not been resolved so far. My Oracle Support does mention the above workaround of clearing the parameter in note 1305517.1, though.

Categories: DBA Blogs

MySQL Query Best Practices

Mon, 2016-04-25 11:30

A Google search for “MySQL Query Best Practices” or “MySQL Query Optimization” returns many results. The drawback is that so many rules can provide confusing or even conflicting advice. After doing some research and tests, I have outlined the essential and important ones below:

1) Use proper data types

1.1) Use the smallest data types if possible

MySQL tries to load as much data as possible into memory (innodb-buffer-pool, key-buffer), so a smaller data type means more rows of data in memory, thus improving performance. Smaller data sizes also reduce disk I/O.

1.2) Use Fixed-length Data Types if Possible

MySQL can calculate quickly the position of a fixed-length column in a specific row of a table.

With a variable-length data type, the row size is not fixed, so every time MySQL needs to do a seek, it might have to consult the primary key index. However, a variable-length data type can save data size and the disk space required.

In practice, if the column data size varies a lot, use a variable-length data type (e.g., VARCHAR); if the data length is short or barely changes, use a fixed-length type (e.g., CHAR).

1.3) Use NOT NULL unless there is a reason not to

It is harder for MySQL to optimize queries that refer to nullable columns, because they make indexes, index statistics, and value comparisons more complicated. A nullable column uses more storage space and requires special processing inside MySQL.

When a nullable column is indexed, it requires an extra byte per entry and can even cause a fixed-size index (e.g., an index on a single integer column) to be converted to a variable-sized one in MyISAM.

2) Use indexes smartly

2.1) Make primary keys short and on meaningful fields

A shorter primary key benefits your queries, because the smaller the primary key, the smaller the index, and the fewer pages in the cache. A numeric type is preferred because numeric types are stored in a much more compact format than character types, which keeps the primary key shorter.

Another reason to keep the primary key short is that we usually use the primary key to join with other tables.

It is a good idea to put the primary key on a meaningful field, because MySQL clusters the table on the primary key index. We often need only the information in the primary key, especially in joins with other tables, so the lookup can be satisfied from the index alone without reading the data file on disk, which benefits performance. When you use a meaningful field as the primary key, make sure its uniqueness will not change; otherwise, having to change the primary key later could affect all the tables that use it as a foreign key.

2.2) Index on the search fields only when needed

Usually we add indexes on the fields that frequently show up in a where clause — that is the purpose of indexing. But while an index will benefit reads, it can make writes slower (inserting/updating), so index only when you need it and index smartly.

2.3) Index and use the same data types for join fields

MySQL can do joins on different data types, but the performance is poor as it has to convert from one type to the other for each row. Use the same data type for join fields when possible.

2.4) Use a composite index if your query has more than one field in the where clause

When a query needs to search on multiple columns of a table, it might be a good idea to create a composite index on those columns, because the search can then narrow down the result set by the first column, then the second, and so on.

Please note that the order of the columns in the composite index affects performance, so put the columns in order of how efficiently they narrow down the search.
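A minimal sketch of the leftmost-prefix behaviour. It is shown with SQLite's EXPLAIN QUERY PLAN so it runs anywhere; MySQL's EXPLAIN reports the same effect, and the table and index names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL)")
conn.execute("CREATE INDEX idx_cust_date ON orders (customer_id, order_date)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# The leftmost indexed column is constrained, so the composite index is usable.
print(plan("SELECT * FROM orders WHERE customer_id = 1 AND order_date = '2016-04-25'"))

# Only the second column is constrained: the index cannot narrow the search,
# and the planner falls back to a full table scan.
print(plan("SELECT * FROM orders WHERE order_date = '2016-04-25'"))
```

The first plan mentions the index; the second does not, because the leading column is missing from the predicate.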

2.5) Covering index for most commonly used fields in results

In some cases, we can put all the required fields into an index (i.e., a covering index), with only some of the fields used for searching and the others carried along as data. This way, MySQL only needs to access the index; there is no need to read the table rows at all.
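As a small illustration (again sketched with SQLite so it is self-contained; the table, column, and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, status TEXT)")
# The index stores both the search column (status) and the returned
# column (email), so the query below never has to touch the table rows.
conn.execute("CREATE INDEX idx_status_email ON users (status, email)")

detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT email FROM users WHERE status = 'active'"
).fetchone()[-1]
print(detail)  # the plan reports a covering index
```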

2.6) Partial index for long strings or TEXT, BLOB data types by index on prefix

There is a size limitation for index keys (by default, 1000 bytes for MyISAM and 767 bytes for InnoDB). If a prefix of the string already covers most of the unique values, it is good to index just that prefix.
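For illustration, a prefix index on the first 10 characters of a hypothetical title column would look like this:

```sql
-- Index only the first 10 characters of a long VARCHAR column
-- (table and column names are made up for this example):
ALTER TABLE articles ADD INDEX idx_title_prefix (title(10));
```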

2.7) Avoid over-indexing

Don’t index low-cardinality columns; MySQL will choose a full table scan instead of using an index if it estimates it would have to scan more than about 30% of the index.

If a field already exists in the first field of a composite index, you may not need an extra index on the single field. If it exists in a composite index but not in the leftmost field, you will usually need a separate index for that field only if required.

Bear in mind that indexing will benefit in reading data but there can be a cost for writing (inserting/updating), so index only when you need it and index smartly.

3) Others
3.1) Avoid SELECT *
There are many reasons to avoid select * from… queries. First, reading all the fields wastes time if you don’t need all the columns. Second, even if you do need all the columns, it is better to list the field names explicitly, to make the query more readable. Finally, if you alter the table by adding or removing fields while your application uses select * queries, you can get unexpected results.

3.2) Prepared Statements
Prepared statements escape the values you bind to them by default, which is great for protecting your application against SQL injection attacks.

When the same query is being used multiple times in your application, you can assign different values to the same prepared statement, yet MySQL will only have to parse it once.
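The same pattern applies in any driver that supports parameter binding; here is a runnable sketch with Python's stdlib sqlite3 driver (table and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INT)")

# One statement text, executed many times with different bound values.
stmt = "INSERT INTO accounts (name, balance) VALUES (?, ?)"
rows = [("alice", 100), ("bob", 200), ("mallory'; DROP TABLE accounts;--", 0)]
for row in rows:
    conn.execute(stmt, row)

# The injection attempt was bound as plain data, not parsed as SQL:
count = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(count)
```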

3.3) If you want to check the existence of data, use EXISTS instead of SELECT COUNT

To check whether data exists in a table, select exists (select *…) from the table performs better than select count from the table, since the first method can return a result as soon as it finds one matching row, while the second has to scan the whole table or index to count.
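A quick comparison of the two forms, sketched with SQLite over 10,000 invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
conn.executemany("INSERT INTO events (kind) VALUES (?)", [("login",)] * 10000)

# EXISTS can answer as soon as one matching row is found...
exists = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM events WHERE kind = 'login')").fetchone()[0]

# ...whereas COUNT(*) must visit every qualifying row before answering.
count = conn.execute(
    "SELECT COUNT(*) FROM events WHERE kind = 'login'").fetchone()[0]

print(exists, count)
```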

3.4) Use select limit [number]

Select… limit [number] returns only the required number of rows. Including the limit keyword in your SQL queries can bring real performance improvements.

3.5) Be careful with persistent connections

Persistent connections can reduce the overhead of re-creating connections to MySQL. When a persistent connection is created, it stays open even after the script finishes running. The drawback is that the server might run out of connections if too many of them remain open but idle in sleep status.

3.6) Review your data and queries regularly

MySQL will choose the query plan based on the statistics of the data in the tables. When the data size changes, the query plan might change, and so it is important to check your queries regularly and to make optimizations accordingly. Check regularly by:

3.6.1) EXPLAIN your queries

3.6.2) Get suggestions with PROCEDURE ANALYSE()

3.6.3) Review slow queries

Categories: DBA Blogs

Proud to Work at Pythian, One of Canada’s Top 25 ICT Professional Services Companies

Fri, 2016-04-22 13:18

It’s only four months into 2016, and there’s a lot to be excited about. In addition to moving Pythian’s global headquarters in Ottawa, Canada to the hip and happening neighbourhood of Westboro, we’ve been receiving accolades for being one of Canada’s top ICT professional services companies, and a great place to work. Following are three reasons to be proud to work at Pythian.

In April Pythian was recognized as one of Canada’s Top 25 Canadian ICT Professional Services Companies on the prestigious Branham300 list. We also appeared on the Top 250 Canadian ICT Companies list for the second year in a row.

The Branham300 is the definitive listing of Canada’s top public and private ICT companies, as ranked by revenues. Not too many people can say that they work at a company that is one of the Top 25 ICT Professional Services Companies in Canada.

In February, our CEO Paul Vallée was named “Diversity Champion of the Year” by Women in Communications and Technology (WCT). In 2015 Pythian launched the Pythia Project, a corporate initiative designed to increase the percentage of talented women who work and thrive at Pythian, especially in tech roles. A new metric called the “Pythia Index” was also introduced. It measures the proportion of people in a business, or in a team, who are women leaders or report to a woman leader. Pythian was also the first Canadian tech company to release its gender stats, and invite other Canadian tech companies to join in the battle against “bro culture”. Stay tuned for more news on the Pythia program in the coming months.

And last, but not least, in March, Pythian was selected as one of Canada’s Top Small & Medium Employers for 2016. This award recognizes small and medium employers with exceptional workplaces and forward-thinking human resource policies. Everyone who works at Pythian is aware of the amazing benefits, but there is a hard-working team that really goes the extra mile to make the company a great place to work. Thank you.

Clearly 2016 is off to a fantastic start! I’m looking forward to more good news to share.

Categories: DBA Blogs

How to set up Flashback for MongoDB Testing

Fri, 2016-04-22 12:52


After you’ve upgraded your database to a new version, it’s common for performance to degrade in some cases. To prevent this from happening, we can capture production database operations and replay them in a testing environment running the new version.

Flashback is a MongoDB benchmark framework that allows developers to gauge database performance by benchmarking queries. Flashback records the real traffic to the database and replays operations with different strategies. The framework comprises a set of scripts that fall into 2 categories:

  1. Recording the operations (ops) that occur during a stretch of time
  2. Replaying the recorded ops

The framework was tested on Ubuntu 10.04.4 LTS with:

- go 1.4
- git 2.3.7
- python 2.6.5
- pymongo 2.7.1
- libpcap0.8 and libpcap0.8-dev


  1. Download Parse/Flashback source code

# go get

  2. Manually modify the following file to work around a mongodb-tools compatibility issue

In pass_util.go file:

func GetPass() string {
-    return string(gopass.GetPasswd())
+    if data, errData := gopass.GetPasswd(); errData != nil {
+        return ""
+    } else {
+        return string(data)
+    }
}


  3. Compile the Go part of the tool

# go build -i ./src/



Suppose you have two shards, shard a and shard b, each with 3 nodes. In shard a, the primary is a1; in shard b, the primary is b2.

1. copy sample config file for editing

# cp ./src/

2. Change config for testing


# Indicates which database(s) to record.
"target_databases": ["test"],

# Indicates which collections to record. If the user wants to capture all the
# collections' activities, leave this field as `None` (but we'll always
# skip collection `system.profile`, even if it has been explicitly
# specified).
"target_collections": ["testrecord"],

"oplog_servers": [
    { "mongodb_uri": "mongodb://" },
    { "mongodb_uri": "mongodb://" }
],

# In most cases you will record from the profile DB on the primary.
# If you are also sending queries to secondaries, you may want to specify
# a list of secondary servers in addition to the primary.
"profiler_servers": [
    { "mongodb_uri": "mongodb://" },
    { "mongodb_uri": "mongodb://mongodb.b2:27018" }
],

"oplog_output_file": "./testrecord_oplog_output",
"output_file": "./testrecord_output",

# If overwrite_output_file is True, the same output file will be
# overwritten between consecutive calls of the recorder. If
# it's False, the recorder will append a unique number to the end of the
# output_file if the original one already exists.
"overwrite_output_file": True,

# The length of the recording.
"duration_secs": 3600,

"logging_level": logging.DEBUG



duration_secs indicates the length of the recording. For a production capture, you should set it to at least 10-12 hours.

Make sure the user running the recorder has write permission to the output directory.

  1. Set all primary servers profiling level to 2


2. Start operations recording


3. The script starts multiple threads to pull the profiling results and oplog entries for the collections and databases we are interested in. Each thread works independently. After fetching the entries, it merges the results from all sources to get a full picture of all operations in one output file.

4. You can run the recording script from any server, as long as that server has Flashback installed and can connect to all mongod servers.

5. As a side note, running mongod in replica set mode is necessary (even when there is only one node) in order to generate and access the oplog.
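For step 1, the profiling level is set per database from a mongo shell connected to each primary (the database name here is the one being captured):

```
> use test
> db.setProfilingLevel(2)
```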


  1. Run flashback. Style can be "real" or "stress".

        Real: replay ops in accordance with their original timestamps, which allows us to imitate regular traffic.

        Stress: preloads the ops into memory and replays them as fast as possible. This potentially limits the number of ops played back per session to the available memory on the replay host.

For sharded collections, point the tool to a mongos. You could also point to a single shard primary for non-sharded collections.

./flashback -ops_filename="./testrecord_output" -style="real" -url="localhost:27018" -workers=10

  • Several pymongo (Python’s MongoDB driver) arguments in the code are deprecated, causing installation and runtime errors.
  • Need to define a faster restore method (i.e., LVM snapshots) to roll back the test environment after each replay.
  • Need to capture execution times for each query included in the test set, to be able to detect execution plan changes.
  • In a sharded cluster, recording can be executed from a single server with access to all primaries and/or secondaries.
  • Pulling oplogs from secondaries is recommended if we are looking to reduce load on the primaries.
  • Available memory dramatically affects the operations merge process after recording.
  • Available memory also affects replay times (see Tests summary).
Tests summary


Record test scenario 1


Record server: mongos server (8G RAM)

Time : about 2 hours to finish the recording

Details: Ran record while inserting and updating 1000 documents


Record test scenario 2


Record server: shard a primary node a1 (80G RAM)

Time: about 2 minutes to finish the recording

Details: Ran record while inserting and updating 1000 documents

Record test scenario 3


Record server: shard a primary node a1 (80G RAM)

Time: it took about 20 minutes to finish the recording

Details: Ran record while inserting and updating 100,000 documents

Replay test scenario 1

Replay server: mongos server (8G RAM)

Time: it took about 1 hour to finish the replay

Details: replayed 1000 operations in “real” style


Replay test scenario 2

Replay server: shard a primary node a1 (80G RAM)

Time: about 5 minutes to finish the replay

Details: replayed 1000 operations in “real” style

Replay test scenario 3

Replay server: mongos server (8G RAM)

Time: failed due to insufficient memory

Details: replayed 1000 operations in “stress” style


Replay test scenario 4

Replay server: shard a primary node a1 (80G RAM)

Time: about 1 minute to finish the replay

Details: replayed 1000 operations in “stress” style


Replay test scenario 5

Replay server: shard a primary node a1 (80G RAM)

Time: about 20 minutes to finish the replay

Details: replayed 50,000 operations in “stress” style

Categories: DBA Blogs

Data Encryption at Rest in Oracle MySQL 5.7

Wed, 2016-04-20 13:28


I’ve previously evaluated MariaDB 10.1’s implementation of data encryption at rest, and recently did the same for Oracle’s implementation in MySQL 5.7.


First, here’s a walkthrough of enabling encryption for MySQL 5.7:

1. Install keyring plugin.

1a. Add the following to the [mysqld] section of /etc/my.cnf:


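Assuming the keyring_file plugin that ships with MySQL 5.7, the [mysqld] entries would look something like this (the keyring path is an example):

```
early-plugin-load=keyring_file.so
keyring_file_data=/var/lib/mysql-keyring/keyring
```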
1b. Restart the server:

service mysqld restart

1c. Verify:

| keyring_file | ACTIVE        |

2. Ensure innodb_file_per_table is on.

2a. Check.

mysql> show global variables like 'innodb_file_per_table';
| Variable_name         | Value |
| innodb_file_per_table | ON    |

2b. If OFF, add the following to the [mysqld] section of /etc/my.cnf, restart, and alter each existing table to move it to its own tablespace:
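The setting itself is:

```
innodb_file_per_table=ON
```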


Get list of available InnoDB tables:

mysql>select table_schema, table_name, engine from information_schema.tables where engine='innodb' and table_schema not in ('information_schema');

Run ALTER … ENGINE=INNODB on each of the above InnoDB tables:
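For example, for a hypothetical table mydb.t1:

```sql
-- Rebuilds the table into its own .ibd tablespace file
ALTER TABLE mydb.t1 ENGINE=InnoDB;
```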



Next, I walked through some testing.

1. Create some data.

[root@localhost ~]# mysqlslap --concurrency=50 --number-int-cols=2 --number-char-cols=3 --auto-generate-sql --auto-generate-sql-write-number=10000 --no-drop

2. Observe that the mysqlslap.t1 table is not automatically encrypted. Unlike MariaDB’s implementation, there is no option to encrypt tables by default.

2a. Via the mysql client:

Empty set (0.05 sec)

2b. Via the command line:

(Install xxd if required.)

[root@localhost ~]# yum install vim-common
[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
0010dc0: 5967 4b30 7530 7942 4266 664e 6666 3143  YgK0u0yBBffNff1C
0010dd0: 5175 6470 3332 536e 7647 5761 3654 6365  Qudp32SnvGWa6Tce
0010de0: 3977 6576 7053 3730 3765 4665 4838 7162  9wevpS707eFeH8qb
0010df0: 3253 5078 4d6c 6439 3137 6a7a 634a 5465  2SPxMld917jzcJTe

3. Insert some identifiable data into the table:

mysql> insert into mysqlslap.t1 values (1,2,"private","sensitive","data");
Query OK, 1 row affected (0.01 sec)

mysql> select * from mysqlslap.t1 where charcol2="sensitive";
| intcol1 | intcol2 | charcol1 | charcol2  | charcol3 |
|       1 |       2 | private  | sensitive | data     |
1 row in set (0.02 sec)

4. Observe this data via the command line:

[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
04fa290: 0002 7072 6976 6174 6573 656e 7369 7469  ..privatesensiti

5. Encrypt the mysqlslap.t1 table:

mysql> alter table mysqlslap.t1 encryption='Y';
Query OK, 10300 rows affected (0.31 sec)
Records: 10300  Duplicates: 0  Warnings: 0

6. Observe the mysqlslap.t1 table is now encrypted:

6a. Via the mysql client:

| mysqlslap    | t1         | ENCRYPTION="Y" |

6b. Via the command line:

[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep "private"
[root@localhost ~]#

6c. Observe snippet of the file:

[root@localhost ~]# xxd /var/lib/mysql/mysqlslap/t1.ibd | grep -v "0000 0000" | less
0004160: 56e4 2930 bbea 167f 7c82 93b4 2fcf 8cc1  V.)0....|.../...
0004170: f443 9d6f 2e1e 9ac2 170a 3b7c 8f38 60bf  .C.o......;|.8`.
0004180: 3c75 2a42 0cc9 a79b 4309 cd83 da74 1b06  <u*B....C....t..
0004190: 3a32 e104 43c5 8dfd f913 0f69 bda6 5e76  :2..C......i..^v

7. Observe redo log is not encrypted:

[root@localhost ~]# xxd /var/lib/mysql/ib_logfile0 | less
23c6930: 0000 0144 0110 8000 0001 8000 0002 7072
23c6940: 6976 6174 6573 656e 7369 7469 7665 6461  ivatesensitiveda
23c6950: 7461 3723 0000 132e 2f6d 7973 716c 736c  ta7#..../mysqlsl

This is expected, because the documentation reports that encryption of files outside the tablespace is not supported: “Tablespace encryption only applies to data in the tablespace. Data is not encrypted in the redo log, undo log, or binary log.”


I found in my testing of MariaDB’s implementation of data encryption at rest that there were still places on the file system that a bad actor could view sensitive data. I’ve found the same in this test of Oracle’s implementation. Both leave data exposed in log files surrounding the tablespace files.


As a bonus to this walkthrough, during this testing, the table definition caught my eye:

mysql> show create table mysqlslap.t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `intcol1` int(32) DEFAULT NULL,
  `intcol2` int(32) DEFAULT NULL,
  `charcol1` varchar(128) DEFAULT NULL,
  `charcol2` varchar(128) DEFAULT NULL,
  `charcol3` varchar(128) DEFAULT NULL
1 row in set (0.00 sec)

As discussed in my earlier MariaDB evaluation, the MariaDB implementation does not include the “encrypted=yes” information in the table definition when tables are implicitly encrypted.

I was curious what would happen if I did a mysqldump of this encrypted table and attempted to restore it to a nonencrypted server. DBAs expect mysqldump to create a portable file to recreate the table definition and data on a different version of mysql. During upgrades, for example, you might expect to use this for rollback.

Here is my test. I first did the dump and looked inside the file.

[root@localhost ~]# mysqldump mysqlslap t1 > mysqlslap_t1_dump
[root@localhost ~]# less mysqlslap_t1_dump
  `intcol1` int(32) DEFAULT NULL,
  `intcol2` int(32) DEFAULT NULL,
  `charcol1` varchar(128) DEFAULT NULL,
  `charcol2` varchar(128) DEFAULT NULL,
  `charcol3` varchar(128) DEFAULT NULL

INSERT INTO `t1` VALUES (

As expected, that definition makes the dump less portable. The restore from the dump does not complete and throws an error (this is not remedied by using --force):

On a slightly older 5.7 version:

mysql> select version();
| version() |
| 5.7.8-rc  |

[root@centosmysql57 ~]# mysql mysqlslap < mysqlslap_t1_dump
ERROR 1064 (42000) at line 25: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ENCRYPTION='Y'' at line 7

On a different fork:

MariaDB [(none)]> select version();
| version()       |
| 10.1.12-MariaDB |
1 row in set (0.00 sec)

[root@maria101 ~]# mysql mysqlslap < mysqlslap_t1_dump
ERROR 1911 (HY000) at line 25: Unknown option 'ENCRYPTION'

This doesn’t have anything to do with the encrypted state of the data in the table, just the table definition. I do like the encryption showing up in the table definition, for better visibility of encryption. Maybe the fix is to have mysqldump strip this when writing to the dump file.

Categories: DBA Blogs

Log Buffer #470: A Carnival of the Vanities for DBAs

Wed, 2016-04-20 12:39

This Log Buffer Edition rounds up blog posts from Oracle, SQL Server and MySQL.

Oracle:

jq is a unix utility that can parse json files and pull out individual elements – think of it as sed/awk for json files.

Some thoughts about analyzing performance problems.

Microsites: Add a Map Component and Publish your Microsite

New Installation Cookbook: Oracle Linux 6.7 with Oracle RAC

Are you worried about what to do now that Discoverer is almost out of support?

Automatic Big Table Caching in RAC


SQL Server:

sp_msforeachdb: Improving on an Undocumented Stored Procedure

The Practical Problems of Determining Equality and Equivalence in SQL

TEMPDB Enhancements in SQL Server 2016

Performance Surprises and Assumptions : DATEADD()

Knee-Jerk Performance Tuning : Incorrect Use of Temporary Tables


MySQL:

Virtual Hosting with vsftpd + TLS encryption and MySQL on Ubuntu 15.10

MySQL 5.7.12 – Part 4: A new MySQL Command Line Shell

Database Firewall Filter in MaxScale 1.4.1

Orchestrator-agent: How to recover a MySQL database

Rosetta Stone: MySQL, Pig and Spark (Basics)

Categories: DBA Blogs

No Arguments Here – Use xargs for File Management Performance

Wed, 2016-04-20 10:30


Database Administrators and System Administrators have this in common: managing a large number of log files is just part of the job on Linux systems.

Tools such as logrotate significantly simplify the file management task for routinely created log files. Even so, there are still many ‘opportunities’ to exercise your command line fu to manage thousands or millions of files.  These may be files that need to be moved, removed or searched.

When the files span multiple directories the find command is often used. The following command for instance will find all log files of a certain age and size and remove them.

find . -name "*.log" -size +1M -exec rm  {} \;


For a few files this will work just fine, but what happens if the number of files to be processed is several thousands, or even millions?

The xargs Difference

Let’s first create 200k files to use for testing. These files will all be empty, there is no need for any content for these tests.

The script can be used to create the directories and empty files.

As it takes some time to create the files, we will not use the rm command here, but rather just the file command. The command will be timed as well.

#  time find . -type f -name file_\* -exec file {} \; >/dev/null

real    1m24.764s
user    0m4.624s
sys     0m12.581s

Perhaps 1 minute and 24 seconds seems to be a reasonable amount of time to process so many files.

It isn’t.

Let’s use a slightly different method to process these files, this time by adding xargs in a command pipe.

 time find . -type f -name file_\* | xargs file >/dev/null

real    0m0.860s
user    0m0.456s
sys     0m0.432s

Wait, what?!  0.8 seconds? Can that be correct?

Yes, it is correct. Using xargs with find can greatly reduce the resources needed to iterate through files.

How then is it possible for the command that used xargs to complete so much faster than the command that did not?

When iterating through a list of files with the -exec argument to the find command, a new process is forked for each file found.

For a large number of files this requires a lot of resources.
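The effect is easy to reproduce at a small scale. The sketch below uses echo in place of file, so the only difference being exercised is how many processes get launched: -exec produces one invocation (one output line) per file, while xargs batches every filename into a single invocation.

```shell
# Build a small scratch tree of empty files (100 is plenty to show the pattern).
demo_dir=$(mktemp -d)
cd "$demo_dir"
for i in $(seq 1 100); do : > "file_$i"; done

# -exec runs one process per file: one output line per invocation.
per_file=$(find . -type f -name 'file_*' -exec echo {} \; | wc -l)

# xargs packs many filenames into a single invocation: one output line total.
batched=$(find . -type f -name 'file_*' | xargs echo | wc -l)

echo "invocation lines: $per_file vs $batched"
```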

For demonstration purposes I will be using the ‘file’ command rather than ‘rm’.

Could it be that the xargs method may have benefited from the caching effects of running the first find command?

Could be – let’s run find … -exec again and see if it benefits from caching.

# time find . -type f -name file_\* -exec file {} \; >/dev/null

real    1m25.722s
user    0m3.900s
sys     0m11.893s

Clearly any caching didn’t help find … -exec.

Why Is xargs Fast?

Why is the use of xargs so much faster than find -exec? In short, it is because find starts a new process for each file it finds when the -exec option is used.

The command ‘find | xargs’ was wrapped in a shell script to facilitate the use of strace.

The script takes 2 arguments: the number of files to pipe to xargs, and the number of files that xargs should send to the file command for each invocation of file.

The number of files to process is controlled by piping the output of find to head.

The xargs --max-args argument is used to control how many arguments are sent to each invocation of file.

We can now use strace with the -c option; -c accumulates a count of all calls along with timing information.

Calling the script to run for the first 10000 files, with 1000 files sent to each invocation of file:

# strace -c -f  ./ 10000 1000
MAX_FILES: 10000
MAX_ARGS: 1000
Process 11268 attached
Process 11269 attached
Process 11267 resumed
Process 11269 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.55    0.080017        5001        16         2 wait4
  0.35    0.000280           0     12372           newfstatat
  0.09    0.000074           0       208           getdents
  0.01    0.000006           0     10000           lstat
  0.00    0.000000           0       199           read
  0.00    0.000000           0       276         1 write
  0.00    0.000000           0       384        91 open
  0.00    0.000000           0       313         4 close
  0.00    0.000000           0        68        42 stat
  0.00    0.000000           0       189           fstat
  0.00    0.000000           0         5         1 lseek
  0.00    0.000000           0       209           mmap
  0.00    0.000000           0        71           mprotect
  0.00    0.000000           0        37           munmap
  0.00    0.000000           0        72           brk
  0.00    0.000000           0        41           rt_sigaction
  0.00    0.000000           0        80           rt_sigprocmask
  0.00    0.000000           0         2           rt_sigreturn
  0.00    0.000000           0        13        12 ioctl
  0.00    0.000000           0        77        77 access
  0.00    0.000000           0         2           pipe
  0.00    0.000000           0         6           dup2
  0.00    0.000000           0         1           getpid
  0.00    0.000000           0        14           clone
  0.00    0.000000           0        14           execve
  0.00    0.000000           0         2           uname
  0.00    0.000000           0         4         1 fcntl
  0.00    0.000000           0       206           fchdir
  0.00    0.000000           0         5           getrlimit
  0.00    0.000000           0         1           getuid
  0.00    0.000000           0         1           getgid
  0.00    0.000000           0         1           geteuid
  0.00    0.000000           0         1           getegid
  0.00    0.000000           0         1           getppid
  0.00    0.000000           0         1           getpgrp
  0.00    0.000000           0        14           arch_prctl
  0.00    0.000000           0         2         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.080377                 24910       232 total

The largest chunk of time was spent in the wait4 system call. These are waits on execve, of which there were 14.

Of the 14 calls to execve, there was 1 each for the use of bash (the script itself), find, head and xargs, leaving 10 calls to be consumed by file.

The following command can be used if you would like to try this yourself:

strace  -f -e trace=execve  ./ 10000 1000  2>&1 | grep execve
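The batching arithmetic (10000 files at 1000 per batch yields 10 invocations of file) can also be sanity-checked without strace. Here is a scaled-down sketch, assuming GNU xargs:

```shell
#!/bin/sh
# 25 inputs with --max-args=10 should produce ceil(25/10) = 3 invocations.
log=$(mktemp)
# Each sh -c invocation appends one "batch" line; the input args are ignored.
seq 1 25 | xargs --max-args=10 sh -c "echo batch >> $log" _
batches=$(wc -l < "$log")
echo "$batches"
rm -f "$log"
```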

What happens when the same type of test is run against find with the -exec argument?

There is no method (that I can find in the man page anyway) by which we can limit the number of files that are sent to the program specified in the -exec argument of find.

We can still learn what is going on, it is just necessary to wait 1.5 minutes for the command to complete.

# strace -c -f find . -type f -name file_\*  -exec file {} \; >/dev/null

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 96.80    4.101094          21    200000           wait4
  0.69    0.029305           0    200000           clone
  0.46    0.019278           0   2602351   1400007 open
  0.44    0.018833           0    600001           munmap
  0.31    0.013108           0   3200017           mmap
  0.30    0.012715           0   1401173           fstat
  0.16    0.006979           0   1200006   1200006 access
  0.15    0.006543           0   1202345           close
  0.15    0.006288           0   1000004    600003 stat
  0.13    0.005632           0   1000004           read
  0.12    0.004981           0    200000           lstat
  0.09    0.003704           0    600026           brk
  0.07    0.003016           0   1000009           mprotect
  0.07    0.002776           0    200001    200000 ioctl
  0.03    0.001079           0    201169           newfstatat
  0.02    0.000806           0      2347           getdents
  0.01    0.000600           0    200000           write
  0.00    0.000003           0    200001           arch_prctl
  0.00    0.000002           0    202341           fchdir
  0.00    0.000000           0         3           rt_sigaction
  0.00    0.000000           0         1           rt_sigprocmask
  0.00    0.000000           0    400001    200000 execve
  0.00    0.000000           0         1           uname
  0.00    0.000000           0         1           fcntl
  0.00    0.000000           0         2           getrlimit
  0.00    0.000000           0         2         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    4.236742              15811808   3600017 total


You may have noticed there are twice as many calls to execve as there were files to process.

This is due to how the PATH variable is handled: unless a full path name is specified when running a command, the PATH variable is searched for that command. If the command is not found by the first invocation of execve, then another attempt is made with the next directory in PATH.

The following example shows the difference between using the command name only, and then using the fully pathed name of the file command.

# strace -e trace=execve -f find -maxdepth 1 -type f -name \*.sh  -exec file {} \;  2>&1 | grep execve
execve("/usr/bin/find", ["find", "-maxdepth", "1", "-type", "f", "-name", "*.sh", "-exec", "file", "{}", ";"], [/* 83 vars */]) = 0
[pid  9267] execve("/usr/local/bin/file", ["file", "./"], [/* 83 vars */]) = -1 ENOENT (No such file or directory)
[pid  9267] execve("/usr/bin/file", ["file", "./"], [/* 83 vars */]) = 0
[pid  9268] execve("/usr/local/bin/file", ["file", "./"], [/* 83 vars */]) = -1 ENOENT (No such file or directory)
[pid  9268] execve("/usr/bin/file", ["file", "./"], [/* 83 vars */]) = 0
[pid  9269] execve("/usr/local/bin/file", ["file", "./"], [/* 83 vars */]) = -1 ENOENT (No such file or directory)
[pid  9269] execve("/usr/bin/file", ["file", "./"], [/* 83 vars */]) = 0

# strace -e trace=execve -f find -maxdepth 1 -type f -name \*.sh  -exec /usr/bin/file {} \;  2>&1 | grep execve
execve("/usr/bin/find", ["find", "-maxdepth", "1", "-type", "f", "-name", "*.sh", "-exec", "/usr/bin/file", "{}", ";"], [/* 83 vars */]) = 0
[pid  9273] execve("/usr/bin/file", ["/usr/bin/file", "./"], [/* 83 vars */]) = 0
[pid  9274] execve("/usr/bin/file", ["/usr/bin/file", "./"], [/* 83 vars */]) = 0
[pid  9275] execve("/usr/bin/file", ["/usr/bin/file", "./"], [/* 83 vars */]) = 0
Too Much Space

Regardless of how bad a practice it may be, there will be times when file and directory names contain space characters. Literal spaces, newlines and tabs can all play havoc with file name processing; xargs has you covered.

Two files are created to demonstrate:


# touch 'this filename has spaces' this-filename-has-no-spaces

# ls -l
total 0
-rw-r--r-- 1 jkstill dba 0 Apr 15 09:28 this filename has spaces
-rw-r--r-- 1 jkstill dba 0 Apr 15 09:28 this-filename-has-no-spaces

What happens when the output of find is piped to xargs?


 find . -type f | xargs file
./this-filename-has-no-spaces: empty
./this:                        ERROR: cannot open `./this' (No such file or directory)
filename:                      ERROR: cannot open `filename' (No such file or directory)
has:                           ERROR: cannot open `has' (No such file or directory)
spaces:                        ERROR: cannot open `spaces' (No such file or directory)

The spaces in one of the filenames cause xargs to treat each word in the filename as a separate file.

Because of this, it is a good idea to use the -print0 and -0 arguments, as seen in the following example. These change the output terminator of find and the input terminator of xargs to the null character, so that space characters in file and directory names are handled correctly.


 find . -type f -print0  | xargs -0 file
./this-filename-has-no-spaces: empty
./this filename has spaces:    empty

There is quite a bit more to xargs than this; I would encourage you to read the man page and experiment with the options to better learn how to make use of it.

Hope For find

For many versions of GNU find there is an easy modification that can be made to the command line that will cause the -exec option to emulate the method xargs uses to pass input to a command.

Simply by changing -exec command {} \; to  -exec command {} +, the find command will execute much faster than previously.

Here the find command has matched the performance of xargs when processing 200k files:


# time find . -type f -name file_\*  -exec file {} +  | wc
 200000  400000 8069198

real    0m0.801s
user    0m0.436s
sys     0m0.404s

This may mean a quick and simple change to maintenance scripts can yield a very large increase in performance.

Does this mean there is no longer a need for xargs?  Not really, as xargs offers levels of control over the input to piped commands that simply are not available in the find command.
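For instance (a hypothetical sketch, assuming GNU xargs), --max-args bounds the number of arguments per invocation and --max-procs runs several invocations in parallel; find -exec offers neither:

```shell
#!/bin/sh
# Process 8 files in batches of 4, with up to 2 batches running in parallel.
dir=$(mktemp -d)
for i in $(seq 1 8); do touch "$dir/f$i"; done

# ls receives 4 names per invocation; two ls processes may run concurrently.
lines=$(find "$dir" -type f -print0 | xargs -0 --max-args=4 --max-procs=2 ls | wc -l)
echo "$lines files listed"
rm -rf "$dir"
```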

If you’ve never used xargs, you should consider doing so, as it can reduce the resource usage on your systems and decrease the runtime for maintenance tasks.

Categories: DBA Blogs

Data is Everything, and Everything is Data

Tue, 2016-04-19 10:33
Exploring the phenomenon of “datafication”

In the year 2000, only a quarter of the world’s stored information was digital; the rest was on paper, film, and other analog media. Today, less than two percent of all stored information is nondigital. (1)

This is largely the result of “datafication”, a process that turns all aspects of life—preferences, opinions, telephone calls and sensor-driven information—into data.

Datafication is the driving force behind Big Data. It’s also causing a threefold shift in how we look for meaning in the information available to us: away from traditional sampling approaches, toward greater tolerance of messy, unstructured data, and into the search for correlations rather than absolute, singular causes to explain trends and events. These changes are already having major impacts in every area of our lives—from scientific research to business and finance, to healthcare, education and transportation.

Watch this video of Pythian President and CEO Paul Vallée, as he talks about datafication and generating revenue.

From sampling to knowing

Representative sampling is based on the idea that, within a certain margin of error, we can make inferences about a total population from a small, randomized subset. This works well for simple questions like, “Which of our customers generate the most revenue?” but lacks the detail to effectively answer queries like, “Which customers are most profitable?” or, “Which customers are considering leaving us for another vendor?”

Inexpensive computer memory, powerful processors and sophisticated algorithms now allow us to analyze vast amounts of data rather than small samples. Using Big Data in this way has the considerable advantage of predictive capability—it can identify patterns and trends that aren’t detectable in a small sample, giving an unprecedented view of future behavior.

From clean to messy

What’s new about Big Data isn’t just that there’s lots of it. Because it comes from many different sources in many different formats, it’s not tidy like traditional datasets. Tolerating some inaccuracy may require data analysts to shift their outlooks a little, but when you’re trying to answer big, complex questions, the gain in data scope is a good trade-off against using smaller amounts of very exact data. Here’s an example.

In 2009, Google showed it’s possible to predict locations of seasonal outbreaks of the flu using nothing more than archived records of Google searches. The sheer size of the data set (think a billion searches a day in the U.S. alone) more than compensated for its messiness. After running nearly half a billion calculations against the data, Google identified 45 terms—words such as “headache” and “runny nose”—that had a strong correlation with the CDC’s data on flu outbreaks.

From cause to correlation

The Google example points to a third change brought about by datafication and Big Data: abandoning the search for certainty. Instead of looking for causes, innovative data users are looking for correlations. For example, automotive and aviation engineers are collecting and analyzing massive quantities of information on engines that have failed, looking for patterns that will help them predict when other engines might be at risk of failing in the future. They’re not seeking a single cause for a single event; they’re mapping correlations between huge numbers of events to recognize patterns that can be put to practical, preventative use.

The correlation approach has been used to spot infections in premature babies before overt symptoms appear and to predict everything from manhole cover failures to consumer purchasing habits.

Big Data insights require Big Thinking

Harnessing the powerful, often unpredictable, insights available from Big Data requires three things: as complete a dataset as possible, people with the skills required to collect, manage and analyze that data, and people who know how to ask unexpected, even visionary questions. It’s not just a matter of the right technologies—it’s about a fundamental shift in how we relate to data and what can be done with it.

Categories: DBA Blogs

SQL Server Dates, Dates and More Dates

Tue, 2016-04-19 09:44


Working with SQL Server date functions can be frustrating. The purpose of this blog is to share some date statements I use regularly, especially when doing Business Intelligence and Data Warehouse solutions.

I hope you find them useful. If you have any questions or any more useful statements in relation to dates in SQL Server, please feel free to leave them in the comments below.

----Yesterday
SELECT DATEADD(d , -1 , GETDATE()) 'Yesterday'
----First Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 0) 'First Day of Current Week'
----Last Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 6) 'Last Day of Current Week'
----First Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 0) 'First Day of Last Week'
----Last Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 6) 'Last Day of Last Week'
----First Day of Current Month
SELECT DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0) 'First Day of Current Month'
----Last Day of Current Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Month'
----First Day of Last Month
SELECT DATEADD(mm , -1 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0)) 'First Day of Last Month'
----Last Day of Last Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0))) 'Last Day of Last Month'
----First Day of Current Year
SELECT DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0) 'First Day of Current Year'
----Last Day of Current Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Year'
----First Day of Last Year
SELECT DATEADD(yy , -1 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0)) 'First Day of Last Year'
----Last Day of Last Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0))) 'Last Day of Last Year'
Categories: DBA Blogs

Internals of Querying the Concurrent Requests’ Queue – Revisited for R12.2

Tue, 2016-04-19 09:01

Once upon a time I wrote about the Internal Workflow of an E-Business Suite Concurrent Manager Process. Many things have changed since that blog post, the most obvious change being the release of Oracle e-Business Suite R12.2. I decided to check if the way the concurrent manager queues are processed by concurrent manager processes is still the same. My main goal was to see if the manager processes still don’t attempt any kind of coordination to distribute the requests among them.

This is how I did the testing:

  • I used the VM templates provided by Oracle to build my R12.2.4 test environment. By the way, I didn’t expect that the process of getting the environment up would be so simple! Downloading the media files was the most time-consuming step; once done, it took me just 1 hour to un-compress everything, import the Virtual Assembly file and bring up the R12.2.4 environment on my laptop.
  • 3 Standard managers are defined by default
  • Sleep seconds were left as is = 30 seconds
  • Cache size was increased from 1 to 5.
  • Identified the 3 DB processes that belong to the Standard managers:
    select sid, serial# from v$session where module='e:FND:cp:STANDARD'
  • I enabled tracing with binds and waits for each of them like this:
    exec dbms_monitor.session_trace_enable(sid,serial#,true,true);
  • Once that was done I submitted one concurrent program – “Active users” and waited for it to complete.
  • I disabled the tracing:
    exec dbms_monitor.session_trace_disable(sid,serial#);
  • Collected the trace files

I found 2 of the trace files to be very interesting. To make things more simple, the manager process “A” will be the one that executed the concurrent request, and process “B” will be the one that didn’t.

Before the “Active Users” Request Was Submitted

No other requests were running at the time I did the testing, so I clearly observed how both Managers A and B queried the FND_CONCURRENT_REQUESTS table. Both of the trace files displayed the same method of picking requests up from the queue. Note, I’m showing only the lines relevant to the main query, and I have formatted the query text to make it more readable:

PARSING IN CURSOR #139643743645920 len=1149 dep=0 uid=100 oct=3 lid=100 tim=1460211399835915 hv=3722997734 ad='d275f750' sqlid='cd23u4zfyhvz6'
FROM Fnd_Concurrent_Requests R
WHERE R.Hold_Flag                             = 'N'
AND R.Status_Code                             = 'I'
AND R.Requested_Start_Date                   <= Sysdate
AND (R.Node_Name1                            IS NULL
OR (R.Node_Name1                             IS NOT NULL
AND FND_DCP.target_node_mgr_chk(R.request_id) = 1))
AND (R.Edition_Name                          IS NULL
OR R.Edition_Name                            <= sys_context('userenv', 'current_edition_name'))
  FROM Fnd_Concurrent_Programs P
  WHERE P.Enabled_Flag         = 'Y'
  AND R.Program_Application_Id = P.Application_Id
  AND R.Concurrent_Program_Id  = P.Concurrent_Program_Id
    FROM Fnd_Oracle_Userid O
    WHERE R.Oracle_Id = O.Oracle_Id
      FROM Fnd_Conflicts_Domain C
      WHERE P.Run_Alone_Flag = C.RunAlone_Flag
      AND R.CD_Id            = C.CD_Id
  AND (P.Execution_Method_Code                          != 'S'
  OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757)))
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628)))
ORDER BY NVL(R.priority, 999999999),
EXEC #139643743645920:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399835910
FETCH #139643743645920:c=0,e=546,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399836507
WAIT #139643743645920: nam='SQL*Net message to client' ela= 3 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211399836572

*** 2016-04-09 10:17:09.837
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000367 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429836965
EXEC #139643743645920:c=0,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429838767
FETCH #139643743645920:c=0,e=689,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429839587
WAIT #139643743645920: nam='SQL*Net message to client' ela= 4 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429839652

*** 2016-04-09 10:17:39.840
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000325 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211459840003

It’s important to observe that:

  • All manager’s processes still compete for the same requests. If the query is executed at the same time, the same list of concurrent requests will be retrieved by all processes.
  • The constant literals used in lines 30-32 mean that the query for checking the queue is still built when the concurrent manager process starts up. These constants are mainly used to implement the specialization rules in the query.
  • Only the rowids for the pending requests’ rows in FND_CONCURRENT_REQUESTS are fetched.
  • The sleep time is clearly visible on lines 41-42 and 48-49.
After the “Active Users” Request Was Submitted – Starting the Concurrent Request

The manager process A was the first to pick up the submitted request, which can be observed by the “r=1” (1 row fetched) in the FETCH call for the query we just reviewed:

FETCH #139643743645920:c=0,e=437,p=0,cr=113,cu=0,mis=0,r=1,dep=0,og=1,plh=3984653669,tim=1460211519844640

Immediately after this, the manager process A locked the row in the FND_CONCURRENT_REQUESTS table; this way, the request got assigned to this process. Notice the similar where-clause predicates used in this query; these are required to make sure that the request has not been picked up by another manager process in the meantime. However, the main thing here is the fact that the request row is accessed by the “rowid” retrieved earlier (row 45; the value of the bind variable “:reqname” is “AAAjnSAA/AAAyn1AAH” in this case). Locking of the row is done by the “FOR UPDATE OF R.status_code NoWait” clause on line 49:

PARSING IN CURSOR #139643743640368 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519864113 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
SELECT R.Conc_Login_Id,
  ... excluded other 156 columns for brevity...
FROM fnd_concurrent_requests R,
  fnd_concurrent_programs P,
  fnd_application A,
  fnd_user U,
  fnd_oracle_userid O,
  fnd_conflicts_domain C,
  fnd_concurrent_queues Q,
  fnd_application A2,
  fnd_executables E,
  fnd_conc_request_arguments X
WHERE R.Status_code             = 'I'
AND (R.Edition_Name            IS NULL
OR R.Edition_Name              <= sys_context('userenv', 'current_edition_name'))
AND R.Request_ID                = X.Request_ID(+)
AND R.Program_Application_Id    = P.Application_Id(+)
AND R.Concurrent_Program_Id     = P.Concurrent_Program_Id(+)
AND R.Program_Application_Id    = A.Application_Id(+)
AND P.Executable_Application_Id = E.Application_Id(+)
AND P.Executable_Id             = E.Executable_Id(+)
AND P.Executable_Application_Id = A2.Application_Id(+)
AND R.Requested_By              = U.User_Id(+)
AND R.Cd_Id                     = C.Cd_Id(+)
AND R.Oracle_Id                 = O.Oracle_Id(+)
AND Q.Application_Id            = :q_applid
AND Q.Concurrent_Queue_Id       = :queue_id
AND (P.Enabled_Flag            IS NULL
OR P.Enabled_Flag               = 'Y')
AND R.Hold_Flag                 = 'N'
AND R.Requested_Start_Date     <= Sysdate
AND ( R.Enforce_Seriality_Flag  = 'N'
OR ( C.RunAlone_Flag            = P.Run_Alone_Flag
AND (P.Run_Alone_Flag           = 'N'
  FROM Fnd_Concurrent_Requests Sr
  WHERE Sr.Status_Code         IN ('R', 'T')
  AND Sr.Enforce_Seriality_Flag = 'Y'
  AND Sr.CD_id                  = C.CD_Id
AND Q.Running_Processes                                     <= Q.Max_Processes
AND R.Rowid                                                  = :reqname
AND ((P.Execution_Method_Code                               != 'S'
OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID)       IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757))))
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628))) 
FOR UPDATE OF R.status_code NoWait

The behavior of the manager process B was a little bit more interesting. It too managed to fetch the same rowid from the FND_CONCURRENT_REQUESTS table, belonging to the submitted “Active Users” request. However, when it tried to lock the row in FND_CONCURRENT_REQUESTS (using exactly the same query), this happened:

PARSING IN CURSOR #139690311998256 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519900924 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
BINDS #139690311998256:
  oacdty=01 mxl=32(18) mxlc=00 mal=00 scl=00 pre=00
  oacflg=20 fl2=1000001 frm=01 csi=873 siz=0 off=64
  kxsbbbfp=7f0c2f713f20  bln=32  avl=18  flg=01
EXEC #139690311998256:c=1000,e=1525,p=0,cr=25,cu=1,mis=0,r=0,dep=0,og=1,plh=4044729389,tim=1460211519902727
ERROR #139690311998256:err=54 tim=1460211519902750

The query failed with “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired”.
This is how access to pending concurrent requests is serialized to make sure only one of the manager processes can run each request. And, I think, relying on the well-tuned and highly efficient locking mechanism of Oracle Database is a very smart idea.
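The same serialization pattern can be mimicked outside Oracle. In this loose shell analogy (flock from util-linux stands in for SELECT ... FOR UPDATE ... NOWAIT, and a lock file stands in for the request row), a second worker trying to grab a held lock with no-wait semantics fails immediately, just as manager B received ORA-00054:

```shell
#!/bin/sh
lockfile=$(mktemp)   # stands in for the request row

# Worker A grabs the lock and holds it while "executing the request".
flock -n "$lockfile" sleep 2 &
sleep 0.5   # give worker A time to acquire the lock

# Worker B tries the same lock with no-wait semantics (-n): instead of
# queueing, the attempt fails immediately -- the analogue of ORA-00054.
if flock -n "$lockfile" true; then
  result="acquired"
else
  result="resource busy"
fi
wait
rm -f "$lockfile"
echo "worker B: $result"
```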

  • Coordination between manager processes still doesn’t happen to distribute the requests; the managers all query the queue the same way and then compete between themselves to lock the requests’ entries on the table first. The process that gets the lock also gets to execute the concurrent request.
  • The cache size variable couldn’t be observed in the trace files, but as far as I remember from my previous research, the process would only fetch “cache size” rowids at a time using the 1st query in this post. This could be tested by submitting a larger volume of requests simultaneously.
  • The “sleep seconds” kicks in only when the manager process didn’t fetch any rowids from the queue. After all the cached requests are attempted/executed by the manager process, the queue is checked again immediately, without waiting for the “sleep seconds” (not explained in detail in this post, but it’s revealed in the trace files).
  • The DMLs used to query the FND_CONCURRENT_REQUESTS table and to lock the row are very similar to pre-R12.2 releases of e-Business Suite (another sign that the process hasn’t changed, though one change that I do see is the addition of where-clause predicates for checking the Editions).
Categories: DBA Blogs

More Effective Anti-Entropy Repair in Cassandra

Mon, 2016-04-18 13:12
1. Introduction

Cassandra offers three different repair mechanisms to make sure data from different replicas are consistent: Hinted Hand-off, Read-Repair, and Anti-Entropy Repair.

The first two are “automatic” mechanisms that are triggered internally within Cassandra, depending on the configuration parameter values set for them, either in the cassandra.yaml file (for Hinted Hand-off) or in the table definition (for Read-Repair). The last mechanism (Anti-Entropy Repair), however, needs to be triggered by manual intervention, by running the “nodetool repair” command (possibly with various options associated with it).

Despite the “manual” nature of Anti-Entropy repair, it is nevertheless necessary, because the first two repair mechanisms cannot guarantee fixing all data consistency scenarios for Cassandra. For example: 1) what if a node is down longer than “max_hint_window_in_ms” (which defaults to 3 hours)? And 2) for deletes, Anti-Entropy repair has to be executed before “gc_grace_seconds” (defaults to 10 days) expires in order to avoid tombstone resurrection.

At the same time, however, due to how Anti-Entropy repair is designed and implemented (Merkle tree building and massive data streaming across nodes), it is also a very expensive operation and can put a big burden on all hardware resources (CPU, memory, hard drive, and network) within the cluster. Because of this, how to run Anti-Entropy repair effectively is a constant topic within the Cassandra community.

In this post, I’d like to explore and summarize some of the techniques that can help achieve a more effective Anti-Entropy repair for Cassandra.

2. What is the issue of an Anti-Entropy Repair

Before Cassandra version 2.2, a sequential full Anti-Entropy repair is the default behavior when the “nodetool repair” command (with no specific options) is executed. When this happens:

1) The node that initiates the operation, called the coordinator node, scans the partition ranges that it owns (either as primary or replica) one by one. For each partition range, it sends a request to each of the peer/replica nodes to build a Merkle tree.

2) The peer/replica node scans all SSTables and a major, or validation, compaction is triggered, which reads every row in the SSTables, generates a hash for it, and then adds the result to a Merkle tree.

3) Once the peer node finishes building the Merkle tree, it sends the result back to the coordinator. The coordinator node compares every Merkle tree with all the other trees. If a difference is detected, data is exchanged between the differing nodes.
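The comparison step can be illustrated with a toy stand-in (a single hash per replica rather than a real hierarchical Merkle tree): if the hashes of two replicas' rows differ, some data must be streamed between them.

```shell
#!/bin/sh
# Toy version of the Merkle comparison: hash each replica's rows and
# compare the top-level hashes. Equal hashes mean no streaming is needed.
replica_a=$(printf 'row1\nrow2\nrow3\n' | sha256sum | cut -d' ' -f1)
replica_b=$(printf 'row1\nrowX\nrow3\n' | sha256sum | cut -d' ' -f1)

if [ "$replica_a" = "$replica_b" ]; then
  verdict="replicas consistent"
else
  verdict="difference detected, stream data"
fi
echo "$verdict"
```

A real Merkle tree hashes sub-ranges hierarchically, so the coordinator can narrow a mismatch down to a small range and stream only that, rather than everything.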

Looking at this highly summarized procedure, there are a few things that immediately caught my attention:

First, because building a Merkle tree requires hashing every row of all SSTables, it is a very expensive operation, stressing CPU, memory, and disk I/O.

Second, when a Merkle tree difference is detected, network bandwidth can be overwhelmed by streaming the large amount of data to remote nodes.

Lastly, and although not obvious, the worst problem in my opinion is that this operation can cause repeated computation and therefore waste resources unnecessarily. Please see this post for an example of how this can happen.

Based on the high cost related to it, a default (pre-Cassandra 2.2) sequential full Anti-Entropy repair is, in practice, rarely considered to be a routine task to run in a production environment. Actually, a common trick that many people use with Cassandra repair is simply touching all data and letting read-repair do the rest of the work.

However, there are situations when an Anti-Entropy repair is required, for example, to recover from data loss. What can we do in these cases?

3. What can help to achieve a more effective Anti-Entropy repair?

Over time, different options for the “nodetool repair” command have been introduced in different Cassandra versions. These options represent different techniques that can help achieve more effective Anti-Entropy repair, which I’ll go through in more detail in this section. Meanwhile, I will also try to clarify some of the confusion that new Cassandra users might have around the different nodetool repair command options (and related jargon) of different Cassandra versions.

3.1 Primary range repair

In Cassandra, each node manages several (token) ranges of data. One of them is the primary range, which is the token range assigned to the node according to the token setup within the ring. The other ranges of data are actually replicas of primary ranges from other nodes. Running “nodetool repair” with option “-pr” (or “--partition-range”) on a node means that the node only repairs the data of the primary range, but not the other ranges managed on this node. The default behavior of a repair (without this option) is to repair all ranges of data managed by a node.

Using this option avoids the cost of doing the Merkle tree calculation on non-primary range data. It also helps reduce excessive data streaming across the network. One caveat of using this option is that since each node only repairs its primary range, it needs to be run on ALL nodes in the ring in order to repair all the data.

This option has been available since very early releases (0.x) of Cassandra.
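So a full repair with -pr amounts to a loop over every node in the ring. A brief sketch with a hypothetical run_on helper (the node names and keyspace are placeholders; substitute ssh/pssh and your own topology):

```shell
#!/bin/sh
# Hypothetical helper for illustration only: prints what it would run.
# Replace with ssh/pssh (or your orchestration tool) in a real environment.
run_on() { echo "[$1] $2"; }

count=0
for host in node1 node2 node3; do
  run_on "$host" "nodetool repair -pr my_keyspace"
  count=$((count + 1))
done
echo "primary-range repair issued on $count nodes"
```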

3.2 Sequential repair

“nodetool repair” option “-snapshot” (or “--with-snapshot”) means a sequential repair (also called snapshot repair) and is a feature introduced in Cassandra version 1.2 and made the default in Cassandra version 2.0. Actually, in the DataStax documentation for Cassandra 2.0 and later, you won’t find this option. In Cassandra 2.0 and beyond, you can instead specify option “-par” (or “--parallel”) to tell Cassandra to run the repair in parallel. What does this really mean?

As mentioned in section 3.1, each node manages several different ranges of data, either as primary or replica. When the “-par” option is used, it instructs Cassandra to run repair on all these ranges at the same time, which means the expensive repair operation (especially the Merkle tree building part) happens on multiple nodes concurrently. This could be problematic and may slow down the entire cluster.

But when using the “-snapshot” option (or the default in Cassandra 2.0 and beyond), for each range of data a snapshot is first taken, and the repair operation is executed on the snapshots sequentially. This means that at any time, for a given replica set of data, only one replica is being repaired, allowing the other replicas to satisfy application requests in a more performant way.

3.3 Incremental repair

Cassandra version 2.1 introduced a new option, “-inc” (or “--incremental”), for incremental repair. Starting from Cassandra version 2.2, this option became the default, and at the same time the option “-full” (or “--full”) is used to request a full repair.

Incremental repair is a great feature that helps overcome all 3 issues listed in Section 2. Instead of building a Merkle tree out of all SSTables (repaired or not), this option builds the tree only out of un-repaired SSTables. Therefore,

  • The Merkle tree to be built is much smaller and therefore requires far less computing resources
  • When doing the Merkle tree comparison, less data will be compared and potentially streamed over the network
  • Already-repaired data doesn’t need to be computed and compared again, avoiding a lot of unnecessary repetition

There are a few things to pay attention to when using this option; please check my previous post, Apache Cassandra 2.1 Incremental Repair, for more details.

Please also note that when running incremental repair in Cassandra 2.1 with the Leveled Compaction Strategy (LCS), it may fail with a RuntimeException (see CASSANDRA-9935 for more detail).

3.4 Sub-range repair

So far in this post, repairing a range of data has meant the entire range (either primary or non-primary), which is sometimes also called the endpoint range. Beginning with Cassandra version 1.1.11, “nodetool repair” has the options “-st” (or “--start-token”) and “-et” (or “--end-token”) to specify a particular sub-range of data to repair.

Conceptually, sub-range repair is much like primary range repair, except that each sub-range repair operation focuses on an even smaller subset of data. So in general, sub-range repair shares many of the pros and cons of primary range repair. One key benefit of sub-range repair is the freedom to specify the repair granularity on the command line, which gives the DevOps team much more flexibility in setting up a repair schedule that best matches the cluster’s workload. It is also feasible to run multiple sub-range repairs on different sets of data in parallel, either on the same node or on different nodes. This option has been deemed one of the advanced repair techniques, as per the post: Advanced repair techniques

Despite the flexibility of this technique, it has to be emphasized that no matter how the repair schedule is set for sub-range repairs, all ranges of data in the ring have to be repaired at least once before the gc_grace_seconds limit is reached.

3.4.1 How to calculate valid sub-range token values

When invalid sub-range values are provided to the “-st” or “-et” option of the “nodetool repair” command, the command will most likely fail with errors about “invalid tokens”. So in order to make sub-range repair work effectively, we need a systematic method of generating valid sub-range token values.

We know that in Cassandra, data spans multiple nodes using a token ring. Each node is responsible for one or more slices of that ring, and each slice is a token range with a start and end point. From this understanding, a natural way to generate valid sub-range token values is to: 1) find all token ranges associated with a node in the ring; and 2) for each such token range, divide the range into smaller sub-ranges. The start and end points of these sub-ranges are valid values to feed into the “-st” and “-et” options of the “nodetool repair” command. The actual method and granularity used to divide a node’s token ranges into smaller sub-ranges is where the flexibility comes from, and can be adjusted to best suit the needs of the system.
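Step 2) above can be sketched in a few lines. This is a minimal illustration assuming the Murmur3Partitioner and a non-wrapping token range; the example range and keyspace name are made up.

```python
def split_token_range(start, end, parts):
    """Divide the token range (start, end] into `parts` contiguous
    sub-ranges whose endpoints can be fed to the -st/-et options.
    Assumes a non-wrapping range (start < end), e.g. one of the
    ranges reported by "nodetool ring"."""
    width = end - start
    bounds = [start + (width * i) // parts for i in range(parts + 1)]
    bounds[-1] = end  # guard against integer-division rounding
    return [(bounds[i], bounds[i + 1]) for i in range(parts)]

# Example: split one node's range into 4 sub-range repair commands.
for st, et in split_token_range(-9223372036854775808,
                                -4611686018427387904, 4):
    print(f"nodetool repair -st {st} -et {et} my_keyspace")
```

The granularity (the `parts` argument) is the knob that can be tuned to match the cluster’s workload and repair schedule.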

There are different ways to find token ranges that are associated with a Cassandra node. Some of them are summarized below:

  1. Run “nodetool ring” command. “nodetool status” command will also do for single-token set-up.
  2. Run CQL query against “local” or “peers” tables in “system” keyspace to get the token values associated with a host.
  3. Cassandra client drivers provide APIs to get such information as well. For example, the “Metadata” class (in package com.datastax.driver.core) of the Java client API provides the following two methods:
    * public Set<TokenRange> getTokenRanges() :
      Returns the token ranges that define data distribution in the ring.
    * public Set<TokenRange> getTokenRanges(String keyspace,Host host) :
      Returns the token ranges that are replicated on the given host, for the given keyspace.
  4. Based on the Thrift Java API describe_splits call, an open-source utility tool called “cassandra-list-subranges” has been developed to list valid sub-range values for a specific keyspace and table on a particular Cassandra node. For details of this tool, please check the GitHub repository page at

Sub-range repair is available since Cassandra version 1.1.11.

3.4.2 DataStax Enterprise (DSE) repair service

If the Cassandra cluster is running DataStax Enterprise, OpsCenter provides a “repair service” starting from version 4.0. This service works by continuously repairing small chunks of data (using sub-range repair) in the background until the entire cluster is repaired, and then it starts the next cycle of processing.

From the OpsCenter window, there is only one parameter to specify: the maximum amount of time it takes to repair the entire cluster once, which should typically be less than the gc_grace_seconds setting. The default is 9 days, compared with the default gc_grace_seconds of 10 days.

More advanced settings, such as the parallelism of concurrent repairs, the maximum pending repairs allowed to be running on a node at one time, and so on, can be set either in the opscenterd.conf file (for all clusters) or in the cluster_name.conf file (for a specific cluster).

More detailed information about DSE repair service and its configuration can be found in the following documents:

Please note that the DSE repair service, although convenient, is only available in DSE and not in the open-source Cassandra version. It also lacks the capability to specify which keyspaces or tables to repair, as the “nodetool repair” tool does.

3.5 Data-Center vs. Cluster wide repair

Since version 1.2, Cassandra has provided options for “nodetool repair” that control repair locality/scope:

  • Cassandra 1.2: “-local” (or “--in-local-dc”), only repair nodes in the same data center as the node on which the “nodetool repair” command is executed.
  • Cassandra 2.0: “-dc <dc_name>” (or “--in-dc <dc_name>”), repairs nodes in the named data center.
  • Cassandra 2.1:
    • “-dcpar” (or “--dc-parallel”), repairs data centers in parallel;
    • “-hosts” (or “--in-hosts”), repairs the specified hosts.

When none of these options is specified, all data replicas across the entire cluster are repaired.

4. Conclusion

In this post, we discussed in detail some of the techniques that can help achieve a more effective Anti-Entropy repair in Cassandra. To summarize:

  • Primary range repair is recommended for maintenance purposes. When it is used, it has to be executed on all nodes in the ring, one by one, to make sure the whole data range in the ring is covered.
  • When available (Cassandra version 2.1 and beyond), incremental repair brings the biggest benefit compared with the other techniques. But please be aware of the caveats related to it, and note that due to the bug described in CASSANDRA-9935, incremental repair may have issues with the Leveled Compaction Strategy (LCS).
  • Sub-range repair can be very useful, especially when incremental repair is not an option. But please make sure that whatever schedule is set up using this technique repairs all ranges of data within the gc_grace_seconds limit.
  • If a repair is needed to recover from data loss, a sequential, full repair is required; neither primary range repair nor incremental repair works properly for this purpose. But due to the high cost of this operation, it is sometimes faster to simply wipe out the data directory on a node and let it bootstrap.
Categories: DBA Blogs

What’s Holding You Back From Giving Back?

Fri, 2016-04-15 10:19

This week in honour of National Volunteer Week, I’m reflecting on the importance that volunteering has had in my life.

I’ve learned from each and every one of my experiences. From wrapping holiday presents at the mall, to helping source articles for an industry magazine, to wish granting, recruiting committee and board members, and providing HR advice and counsel. These experiences eventually led me to become a member of the board of directors for a number of organizations, and I was even named Board Chair.

Ironically, the rewards and benefits that I have received from the experiences far outweigh any amount of time I have given the many organizations I have supported over the years. Volunteering has provided me the opportunity to expand my skills and experience, and take on leadership roles long before I had the credentials to be hired for them. I initially started volunteering when I moved to Ottawa, and quickly learned that there is no better way to get to know your community, meet new people and expand your network. Once I started, I never looked back. I caught the “volunteer bug.” It is an important part of my life.

I am often asked how I find the time to volunteer. I always respond with, “like anything, if it’s important to you, you can and will find the time.” As I have expanded my family and career, I seek opportunities where I can continue to share my knowledge, skills and experience in ways that do not impinge on either. A perfect example of this would be career mentoring. I have been a mentor for a number of organizations including the HRPA, OCISO, and the WCT. I have been fortunate to have great mentors in the past and now pay it forward. I remain connected with many of them.

In my role as VP of HR at Pythian I was thrilled to champion our Love Your Community Programs. These programs provide our employees in over 36 countries with a volunteer day and opportunities for sponsorship – i.e. raising money for causes that are meaningful to them. The programs have allowed Pythian the opportunity to positively impact the communities where our employees live.

Volunteer Week highlights the importance of volunteering in our communities, and showcases the impact it has on the lives of both the volunteer, and the communities they support. What’s holding you back from giving back?

And because it couldn’t be said any better: “We make a living by what we get, but we make a life by what we give.”

Winston Churchill

Categories: DBA Blogs

Optimizer Stew – Parsing the Ingredients to Control Execution Plans

Fri, 2016-04-15 09:01

No matter how many times I have worked with Outlines, Baselines and Profiles, I keep having to look up reminders as to the differences between these three.

There is seemingly no end to the number of articles and blogs that tell you what needs to be licensed, how to use them, and the version of Oracle where each made its debut appearance.

This blog will discuss none of that. This brief article simply shows the definitions of each from the Glossary for the most current version of the Oracle database. As of this writing that version is

And here they are.

Stored Outline

A stored outline is simply a set of hints for a SQL statement. The hints in stored outlines direct the optimizer to choose a specific plan for the statement.

Link to Stored Outline in the Oracle Glossary

SQL plan baseline

A SQL plan baseline is a set of one or more accepted plans for a repeatable SQL statement. Each accepted plan contains a set of hints, a plan hash value, and other plan-related information. SQL plan management uses SQL plan baselines to record and evaluate the execution plans of SQL statements over time.

Link to SQL Plan Baseline in the Oracle Glossary

SQL profile

A SQL profile is a set of auxiliary information built during automatic tuning of a SQL statement. A SQL profile is to a SQL statement what statistics are to a table. The optimizer can use SQL profiles to improve cardinality and selectivity estimates, which in turn leads the optimizer to select better plans.

Link to SQL Profile in the Oracle Glossary

Categories: DBA Blogs

Log Buffer #469: A Carnival of the Vanities for DBAs

Wed, 2016-04-13 09:40

This Log Buffer Edition digs deep into the realms of Oracle, SQL Server and MySQL and brings together a few of the top blog posts.


Oracle

We’ve all encountered a situation when you want to check a simple query or syntax for your SQL and don’t have a database around. Of course, most of us have at least a virtual machine for that, but it takes time to fire it up, and if you work from battery, it can leave you without power pretty quickly.

View Criteria is set to execute in Database mode by default. There is option to change execution mode to Both. This would execute query and fetch results from database and from memory.  Such query execution is useful, when we want to include newly created (but not committed yet) row into View Criteria result. Newly created row will be included into View Criteria resultset.

Upgrading database hardware in an organization is always a cumbersome process. The most time consuming step is, planing for the upgrade, which mainly includes choosing right hardware for your Oracle databases. After deciding on the hardware type for your databases, rest will be taken care by technical teams involved.

Gluent New World #02: SQL-on-Hadoop with Mark Rittman

The pre-12c implementation of DCD used TNS packages to “ping” the client and relied on the underlying TCP stack which sometimes may take longer. Now in 12c this has changed and DCD probes are implemented by TCP Stack. The DCD probes will now use the TCP KEEPALIVE socket.


SQL Server

Snippets will allow you to code faster by inserting chunks of code with few key strokes.

One of more common concerns among database administrators who consider migrating their estate to Azure SQL Database is their ability to efficiently manage the migrated workloads.

A SQL Server Patching Shortcut

Move an Existing Log Shipping Database to a New Monitor Server

Knee-Jerk Performance Tuning : Incorrect Use of Temporary Tables



MySQL

MySQL 5.7 sysbench OLTP read-only results: is MySQL 5.7 really faster?

7 Galera Cluster presentations in Percona Live Santa Clara 18-21.4. Meet us there!

Generate JSON Data with dbForge Data Generator for MySQL v1.6!

Extending the SYS schema to show metadata locks


Categories: DBA Blogs

Installing SQL Server 2016 – Standalone Instance – New Features

Tue, 2016-04-12 10:48

In this article I am going to go through a typical install of SQL Server 2016, and explain some of the best practices to follow when setting up a production server. I will also take a look at the new features when installing SQL Server 2016 compared to older versions.

The version of SQL Server I am using is SQL Server 2016 RC2. That means that some of the features may change between the time of writing and the retail release of SQL Server 2016.


Let’s Get Started!

The first screen you come to after opening the installation media:

Initial Screen

  1. Click the Installation button highlighted, and then the first option in the list New SQL Server Standalone Installation.


New for SQL Server 2016
For the Seasoned SQL Server DBA you will notice a few additions to this screen.

  • SQL Server Management Tools can now be installed from this screen. The files and binaries will be downloaded when you click this link, as they are no longer bundled with the installation media.
  • SQL Server Data Tools can also be installed from this screen.
  • Additionally, a standalone instance of R can be installed. R is a statistical programming language embedded into SQL server 2016, making it easy for data scientists and BI professionals to get a good level of analysis without leaving the SQL Server environment.


  2. Select the edition of SQL Server you would like to install. I chose Developer, as it’s a full-featured installation of SQL Server that can be used for development only, and not in a production environment. You could also enter a key here instead of selecting a version.


  3. Read and accept the licence.
  4. SQL Server will then check a few rules to make sure you can install SQL Server on your hardware.
  5. Select whether or not to allow Microsoft to check for updates, and click Next.
  6. SQL Server Installation will then install some setup files needed for installation and perform a few more checks. As you can see below, a warning has appeared asking me to make sure the correct ports are open on the firewall. Click OK and then Next to proceed with the installation.

Rule Check

  7. The next screen is where we select all the features we want to install. As my machine is being set up as a dev machine to test SQL 2016 features, I am installing all the features. In a production environment, only install the features that are needed. You can also select where to install the binaries and the root instance from this screen. Click Next once you have selected the settings needed.


New for SQL Server 2016

  • The ability to install a standalone instance of R is now available in the Shared Features Section.
  • The ability to install R services in the database engine is now available.
  • The ability to install Polybase Query Service for external sources is now available to install. Polybase query service allows users to query big data sources such as Hadoop using common T-SQL statements. If you are planning on installing the Polybase feature, then the Java Runtime SRE needs to be installed first.


  8. The next screen is where you name the instance, if this is not the only SQL Server installation on this hardware. It also confirms the installation directory as per the previous screens. As this is the only installation of SQL Server on this machine, I am going to leave these settings as default. Click Next when ready to proceed.

Default Instance

  9. The next screen is the Server Configuration screen. You should run each service under a domain account with a strong password. Whether you use managed accounts or do not enforce password expiration is up to you; however, these accounts are going to be running your services and should always be available. As this is a test machine not connected to a domain, I will leave the defaults. You can also select your default server collation from this window by clicking on the tab at the top highlighted in yellow. It is also important to set the start-up type parameters to allow services to start automatically on reboot if needed.

The ability to check Perform Volume Maintenance Tasks from this screen is a new feature. This is a best practice among SQL Server DBAs, as it allows instant file initialization. Previously this had to be done outside of the installation window.

Server Config


New for SQL Server 2016

  • The ability to allow Perform Volume Maintenance tasks to be checked from the Server Configuration Screen.


  10. Next up is the database configuration screen, and we are going to step through its tabs.

On the first tab you will want to add all the users that require sysadmin access to administer the server; click on the Add Current User and Add buttons. I always use mixed mode authentication so I can still get to the server if Active Directory plays up, and I add a complex password for security. When you’re done, click on the Data Directories tab at the top.

Database Config1

This is the screen where you set up all of your default directories for your databases.

Best practice states that we should put log files on separate disks from data files; the two have different access patterns, so they are more performant on separate disks. Backups should also be on their own disks where possible. Additionally, the OS should have its own drive separate from all SQL Server files. As this is a test server and I only have a C drive, I will leave them as default. Click the TempDB tab when ready.

Database Config 2


New For SQL 2016

  • This is new in SQL 2016, and previously had to be configured after install. This screen allows us to create the files for tempdb. Best practice states there should be 1 file per logical core, up to a maximum of 8. As I have 4 cores in my machine, I have created 4 files. You can also spread the files over more than one disk if needed. Once you’re happy with your selections, click Next.

Database Config 3


  11. The next screen is to configure Analysis Services. I have configured mine in Multidimensional mode, adding myself as sysadmin and setting directories using the same best practices as for the database engine. Look out for a further blog article on SQL Server Analysis Services.



  12. Leave the default options for Reporting Services. Again, keep an eye out for another article on Reporting Services.
  13. In the next screen you have to configure the users capable of using the Distributed Replay Controller.
  14. Give a name to the DRC.


  15. On the next screen you will need to accept the terms of the features being installed by clicking Accept, and then Next.
  16. Finally, you can click Install and that’s it! SQL Server 2016 and all its new features are installed.



Here is a great link on some of the new features available in SQL Server 2016

Categories: DBA Blogs

SQL Server 2016 : A New Security Feature – Always Encrypted

Tue, 2016-04-12 09:15

Security. This word is so important when it comes to data, and there is a reason why. Every business has its vital data, and this data has to be accessed only by those who are authorized. Back in 2009, I wrote an article on what measures to take when it comes to securing SQL Server.

There was a time when we needed a third-party tool to encrypt the data inside SQL Server. Later, Microsoft introduced Transparent Data Encryption (TDE) with the release of SQL Server 2008. You may be wondering why it is so important to encrypt the data. Inside our database, there may be a case where the customer/application has to enter and store sensitive information such as a Social Security Number (SSN) or financial/payment data, which should not be readable in plain text, even by a DBA. With this requirement, strong encryption and/or data masking comes into the picture.

With the launch of SQL Server 2016 Release Candidate 0 (RC0), Microsoft has introduced two new features that are my personal favorites: 1) Always Encrypted and 2) Dynamic Data Masking. Today I am going to walk you through the Always Encrypted feature.

First and foremost, we need to have SQL Server 2016 RC0 installed so that we can test this feature. You can download RC0 here. Once you are ready with RC0, create a test database and a table with an encrypted column to store the sensitive data. There are a few prerequisites, which I will list for you here. If you want, you can use the sample schema from MS.

  1. Create a sample database
  2. Create Column Master Key
  3. Generate a self-signed certificate (well, you will need to install this certificate on the machine where the application will run)
  4. Configure Column Encryption Key
  5. Create a test table with Always Encrypted column
  6. Create an application to Insert data into the sample table we created in previous step

I have created a sample app and a demo script for reference, which you can download here. Basically, what we have to remember is that we cannot insert a value into the Always Encrypted table directly; we need to use the tool/app, and the data will always be encrypted when it goes into the database. This ensures that an intruder cannot read the data, as it travels to the database in ciphertext form.
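To make that concrete, here is a minimal sketch of the two client-side pieces the app needs. The server, database, table, and column names are all hypothetical, and the connection-string keyword shown is the ADO.NET-style spelling; other drivers (such as ODBC) use slightly different keywords.

```python
# Hedged sketch: names below are hypothetical. The essential points are
# that the connection string must enable column encryption, and that the
# value going into the encrypted column must travel as a bound parameter,
# because the client driver encrypts parameter values before they leave
# the application.

def build_connection_string(server, database):
    """Connection string enabling client-side column encryption
    (ADO.NET-style keyword; ODBC uses ColumnEncryption=Enabled)."""
    return (f"Server={server};Database={database};"
            "Trusted_Connection=yes;"
            "Column Encryption Setting=Enabled")

# The SQL text carries only parameter markers, never the literal SSN:
insert_sql = "INSERT INTO dbo.Patients (Name, SSN) VALUES (?, ?)"

print(build_connection_string("myserver", "ClinicDemo"))
print(insert_sql)
```

An INSERT that embedded the SSN as a literal in the SQL text would reach the server in plain text and be rejected against an encrypted column, which is exactly why the app (not raw T-SQL) has to do the insert.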

Here is some further reading on this topic. Enjoy reading and testing an excellent feature of SQL Server 2016 RC0.

Categories: DBA Blogs

SQL On The Edge #9 – Azure SQL Database Threat Detection

Mon, 2016-04-11 15:30

Despite being well documented for several years now, every now and then we still run into clients that have bad experiences because of SQL injection attacks. If you’re not familiar, a SQL injection attack happens when an attacker exploits an application vulnerability in how they pass queries and data into the database and insert their own malicious SQL code to be executed. If you want to see different examples and get the full details, the Wikipedia page is very comprehensive.

Depending on how the application is configured, this kind of attack can go all the way from enabling attackers to see data they shouldn’t, to dropping an entire database if your application is allowed to do so. The fact that it’s an application-based vulnerability also means that preventing it depends on proper coding and testing of all inputs in the application. In other words, it can be very time-consuming to go back and plug all the holes if the application wasn’t securely built from the ground up.
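To make the vulnerability concrete, here is a self-contained sketch (using SQLite for portability; the table and the attacker input are made up) showing how concatenated SQL leaks rows while a parameterized query does not:

```python
# Demonstration of a classic SQL injection, using an in-memory SQLite DB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is pasted into the SQL text, so the OR clause
# becomes part of the query and every row comes back.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the input is bound as a parameter and compared as a plain string,
# so no row matches the literal value "nobody' OR '1'='1".
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(leaked))  # 2 rows leaked by the concatenated query
print(len(safe))    # 0 rows from the parameterized query
```

It is precisely query text like the concatenated statement above, showing up in the audit records, that a pattern-based scanner can flag as anomalous.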

Built-in Threat Detection

To attack this issue, and as part of the ongoing security story of SQL Server, Microsoft has now invested in the feature called Database Threat Detection. When enabled, the service will automatically scan the audit records generated from the database and will flag any anomalies that it detects. There are many patterns of injections so it makes sense to have a machine be the one reading all the SQL and flagging them. MS is not disclosing the patterns or the algorithms in an effort to make working around the detection more difficult.

What about on-premises?

This feature right now is only available on Azure SQL Db. However, we all know that Azure SQL Db is basically the testing grounds for all major new features coming to the box product. I would not be surprised if the threat detection eventually makes it to the on-premises product as well (my speculation though, nothing announced about this).

For this new feature you will need Azure SQL Db, and you will also need to have auditing enabled on the database. The current way this works is by analyzing the audit records, so it’s 100% reactive, nothing proactive. You will need a storage account as well, since that’s where the audit logs get stored. The portal will walk you through this whole process; we’ll see that in the demo video.

Current State
As I mentioned, right now this is more of a reactive tool, as it only lets you know after it has detected an anomaly. In the future, I would love to see a preventive configuration where one can specify a policy to completely block suspicious SQL from running. Sure, there can always be false alarms; however, if all the application query patterns are known, this number should be very low. If the database is open to ad-hoc querying, then a policy could allow only blocking the queries, or even shutting down the database after several different alerts have been generated. The more flexible the configuration the better, but in the end what I want to see is a move from alerting me to preventing the injection to begin with.

In the demo, I’m going to go through enabling Azure SQL threat detection, some basic injection patterns and what the alerts look like. Let’s check it out!


Categories: DBA Blogs

Enter the Exadata X6

Mon, 2016-04-11 15:20

Data is exploding and Exadata is catching up. With the proliferation of cloud technology and in-memory databases, the Oracle Exadata X6-2 and X6-8 have it all. It seems to be an ideal platform for hyper-convergence for any data center running Oracle products.

Following are some of the salient features of the Oracle X6:

  • The compute nodes have 22-core Intel Xeon E5-2699 v4 processors.
  • The memory is DDR4, 256GB in size, and can be expanded to 768GB.
  • The local storage can now be upgraded to 8 drives from the default of 4.
  • The cell servers have 10-core Intel Xeon E5-2630 v4 processors.
  • The flash cache has become massive, reaching up to 12.8TB; a full rack has 179TB of flash.
  • There will be up to 1.7PB of raw disk capacity per rack.
  • The InfiniBand network runs at 40Gb/second. There is no change to InfiniBand from X5, where it became active/active.
  • On the software side there are many improvements, but one thing which caught my eye is the feature that moves Storage Indexes along with the data when a disk hits a predictive failure or true failure. This should improve performance considerably.

With X6, Exadata has surely come a long way forward.

Categories: DBA Blogs

Installing Sample Databases To Get Started In Microsoft SQL Server

Fri, 2016-04-08 09:20

Anyone interested in getting started in SQL server will need some databases to work with/on. This article hopes to help the new and future DBA/Developer get started with a few databases.

There are several places to get sample databases, but anyone starting out in SQL Server should go to the Microsoft sample databases. The reason is that there are thousands of blogs/tutorials on the internet that use these databases as the basis for the tutorial.

The steps below detail how to get the sample databases and how to attach them to SQL Server to start working with them.

This blog assumes you have a version of SQL Server installed; if not, you can click here for a great tutorial.

These 2 databases are a great start to learning SQL Server from both a transactional and a data warehousing point of view.

  • Now that we have downloaded these 2 files we will need to attach them one at a time. First, open SQL Server and connect to your instance.
  • Expand the object explorer tree until you can right click on the folder called databases and then left click on Attach…


  • Click the Add Button and navigate to and select the .mdf file (This is the database file you downloaded).


  • There is one step that a lot of people getting started in SQL Server often miss. We have just attached a data file, but in order for SQL Server to bring the database online it needs a log file, which we don’t have. The trick is that if we remove the log file from the Attach Database window, SQL Server will automatically create a log file for us. To do this, simply select the log file and click Remove.


  • Finally, when your window looks like the one below, simply click OK to attach the database.


  • Repeat steps 3 to 6 for the second database file and any others you wish to attach.
  • The Databases are now online in SQL server and ready to be used.


And that’s it! You now have an OLTP database and a DW database for BI.

Below are links to some good starting tutorials and some additional databases.



Stack Overflow




Categories: DBA Blogs

Oracle Live SQL: Explain Plan

Thu, 2016-04-07 13:14

We’ve all encountered a situation where you want to check a simple query or syntax for your SQL and don’t have a database around. Of course, most of us have at least a virtual machine for that, but it takes time to fire it up, and if you work on battery, it can drain your power pretty quickly. Some time ago, Oracle began to offer a new service called “Oracle Live SQL”. It provides the ability to test a SQL query, procedure or function, and has a code library containing a lot of examples and scripts. Additionally, you can store your own private scripts to re-execute later. It’s a really great online tool, but it lacks some features. I tried to check the execution plan for my query but, unfortunately, it didn’t work:

explain plan for 
select * from test_tab_1 where pk_id<10;

ORA-02402: PLAN_TABLE not found

So, what can we do to make it work? The workaround is not perfect, but it works and can be used in some cases. We need to create our own plan table using the script $ORACLE_HOME/rdbms/admin/utlxplan.sql from an installed Oracle database home. Open the file and copy the CREATE TABLE statement into the SQL worksheet in Live SQL. You can also save the script in the Live SQL code library and make it private, because you will need to recreate the table every time you log in to the environment again. So far so good. Is it enough? Let’s check.
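For orientation, the table created by utlxplan.sql looks roughly like the abbreviated sketch below. This is not a substitute for the real script -- utlxplan.sql defines many more columns, and EXPLAIN PLAN populates all of them -- it only highlights the columns the queries in this post rely on:

```sql
-- Abbreviated sketch of PLAN_TABLE; see utlxplan.sql for the full DDL.
CREATE TABLE plan_table (
  statement_id  VARCHAR2(30),   -- set via EXPLAIN PLAN SET STATEMENT_ID
  plan_id       NUMBER,         -- identifies one explained statement
  id            NUMBER,         -- step number within the plan
  parent_id     NUMBER,         -- parent step, for hierarchical queries
  operation     VARCHAR2(30),   -- e.g. TABLE ACCESS
  options       VARCHAR2(255),  -- e.g. FULL
  object_name   VARCHAR2(128),  -- table or index being accessed
  object_type   VARCHAR2(30),
  cardinality   NUMBER,         -- estimated rows for the step
  cost          NUMBER          -- optimizer cost estimate
);
```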

explain plan for 
select * from test_tab_1 where pk_id<10;

Statement processed.

select * from table(dbms_xplan.display);

ERROR: an uncaught error in function display has happened; please contact Oracle support
       Please provide also a DMP file of the used plan table PLAN_TABLE
       ORA-00904: DBMS_XPLAN_TYPE_TABLE: invalid identifier

Ok, the package doesn’t work. I tried to create the types in my own schema, but that didn’t work either. So for now dbms_xplan is not an option, and we have to request the information directly from our plan table. It may not be as convenient, but it gives us enough and, don’t forget, you can save your script and just reuse it later. You don’t need to memorize the queries. Here is a simple example of how to get information about your last explained query from the plan table:

SELECT parent_id, id, operation, plan_id, options, object_name, object_type, cardinality, cost
FROM plan_table
WHERE plan_id IN (SELECT MAX(plan_id) FROM plan_table)
ORDER BY id;

 - 	0	SELECT STATEMENT	268	 - 	 - 	 - 	9	49

I tried a hierarchical query but didn’t find it too useful in the Live SQL environment. Also, you may want to set a unique identifier for your query so you can find it more easily in the plan table:

explain plan set statement_id='123qwerty' into plan_table for
select * from test_tab_1 where pk_id<10;

SELECT parent_id, id, operation, plan_id, options, object_name, object_type, cardinality, cost
FROM plan_table
WHERE statement_id = '123qwerty'
ORDER BY id;
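For what it’s worth, a typical hierarchical formatting query over the plan table looks like the sketch below, reusing the '123qwerty' statement_id from above. It indents each plan step under its parent, approximating the layout dbms_xplan would give:

```sql
-- Walk the plan tree from the root step (id = 0), indenting by depth.
SELECT lpad(' ', 2 * (level - 1)) || operation
         || nvl2(options, ' ' || options, '') AS operation,
       object_name, cardinality, cost
  FROM plan_table
 START WITH id = 0 AND statement_id = '123qwerty'
CONNECT BY PRIOR id = parent_id
       AND statement_id = '123qwerty';
```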


Now I have my plan_table script and queries saved in Live SQL and reuse them whenever I want to check the plan for a query. I posted feedback about the ability to use dbms_xplan, and an Oracle representative replied promptly and assured me they are already working on implementing the dbms_xplan feature and many others, including the ability to run only a selected SQL statement in the SQL worksheet (as we do in SQL Developer). It sounds really good and promising, and is going to make the service even better. Stay tuned.
