Pythian Group

Love Your Data

Log Buffer #522: A Carnival of the Vanities for DBAs

Mon, 2017-09-18 10:22

This edition of Log Buffer covers Cloud, Oracle, SQL Server and MySQL and much more.

Cloud:

Introducing managed SSL for Google App Engine

Using Cloud Foundry CUPS to inject Spring Security credentials into a Spring Boot Application

ClusterControl in the Cloud – All Our Resources

Monitoring Amazon Aurora Audit Events with Amazon CloudWatch

Integrating Teradata with Amazon Redshift Using the AWS Schema Conversion Tool

Oracle:

Demo App for REST enabled SQL

The Insurance of Oracle APEX – Can Your Low Code Framework Do This?

Oracle GoldenGate is now available on Docker! All the details you need to get up and running with an image are included in Oracle’s public GitHub.

Assign Pseudo Random Password for Oracle DB Users

Simple regexp to check that string contains word1 and word2 and doesn’t contain word3

SQL Server:

Automate MSBI deployments Using PowerShell

NEW – Machine Learning with SQL Server 2017 and R Tutorial

Optional Parameters and Missing Index Requests

Moving Dynamic Data Masked Data to New Locations

Indexing for Windowing Functions: WHERE vs. OVER

MySQL:

Lock, Stock and MySQL Backups: Data Guaranteed Webinar Follow Up Questions

gh-ost 1.0.42 released: JSON support, optimizations

Massive Parallel Log Processing with ClickHouse

cscope: Searching Code Efficiently

Updating InnoDB Table Statistics Manually

Categories: DBA Blogs

Log Buffer #521: A Carnival of the Vanities for DBAs

Tue, 2017-09-12 11:36

While cloud technologies are roaring ahead in full force, traditional RDBMS platforms like Oracle, Microsoft SQL Server and MySQL are adapting quickly. This Log Buffer edition covers blogs related to that and more.

Oracle:

Oracle JET Simple Table with Fake JSON Server

Every time any page is requested from your website by a human or another program or an automated bot, the event is tracked in a log file that is stored on the web server.

Gitora 2 enabled developers to manage database objects that support the CREATE OR REPLACE command, namely PACKAGES, PROCEDURES, FUNCTIONS, TRIGGERS, VIEWS, SYNONYMS and TYPES in a Git repository.

Oracle just released its first REST JDBC driver on OTN, in conjunction with the 17.3.0 Oracle REST Data Services Beta release.

When you are restrained in the JRE that you can use with SQLcl, you can embed your own in the sqlcl directory tree.

SQL Server:

Understanding Azure Data Factory – A Cloud Based Integration Service

The Shortcomings of Predictive Accuracy

Setting Variables in Calling T-SQL Code While Using sp_executesql

Azure DWH part 15: PolyBase and Azure Data Lake

Creating Basic Indexes on the Stack Overflow Public Database

MySQL:

Heads Up: The List of Replication Defaults That Have Changed in 8.0.2

IO-bound table scan performance for MyRocks, InnoDB and TokuDB

MyRocks Experimental Now Available with Percona Server for MySQL 5.7.19-17

Timing load & index for sysbench tables

Automatic Partition Maintenance in MariaDB

Categories: DBA Blogs

Oracle OpenWorld 2017 Bloggers Meetup

Tue, 2017-09-05 18:41

DON’T PANIC. Yes, we are doing the OOW Bloggers Meetup this year. Yes, it’s the same time. Yes, it’s the same location. Yes, it’s more fun every year.

What: Oracle Bloggers Meetup 2017.

When: Wed, 04-Oct-2017, 5:30pm.

Where: Main Dining Room, Jillian’s Billiards @ Metreon, 175 Fourth Street, San Francisco, CA 94103 (street view). Yes, it’s a good time to try taking our blog down with DoS attack of “COUNT ME IN” comments below – if you’re coming, comment away!

As usual, Pythian sponsors the venue, drinks and cool fun social stuff. The competition is on — you know what that means (new unique swag this year), and if not, come and learn. All blogger community participants are invited (self-qualifying).

As usual, vintage t-shirts, ties, or bandanas from previous meetups will make you look extremely cool — feel free to wear them. The new wearable will be a nice addition to your collection!

For those of you who don’t know the history: The Bloggers Meetup during Oracle OpenWorld was started by Mark Rittman and continued by Eddie Awad. Alex Gorbachev picked up the flag in 2009 and delegated to me in 2017.

The meetups have been a great success for making new friends and catching up with the old, so let’s keep them this way! To give you an idea, here are some photos and a video of past events.

All Oracle technologies — Oracle Database, MySQL, Apps, Sun technologies, Java and ALL THE CLOUD PEEPS — come and meet like-minded “techies”. All bloggers are welcome. We’ve reached 150 attendees in the last few years — maybe another record breaking gathering this year.

If you are planning to attend, please comment here with the phrase “COUNT ME IN”. This will help us ensure we have the attendance numbers right. Please provide your blog URL with your comment — it’s a Bloggers Meetup after all! Please do make sure you comment here if you are attending so that we have enough room, food, and (most important) drinks.

And even if you choose to not attend the meetup, check out all Oracle OpenWorld sessions with Pythian speakers and book your seat in these sessions today.

And you can now fill your twitter feeds (linkedin, G+, instagram, mailing lists) by talking about how excited you are to attend, and talk about this year’s meetup with fellow bloggers you’ll meet at the show. See you all there — it will be fun!

Categories: DBA Blogs

Chart Your Course With sar

Tue, 2017-08-15 12:02

sar (the system activity reporter) is a Linux utility that started life long ago in System V Unix.

In spite of its ancient origins, sar is still quite useful, and has even been enhanced in recent versions of Linux.

The use of sar is not limited to root: any user can run sar and its related utilities. Most Linux systems have sar installed and enabled by default, collecting data at 10-minute intervals and retaining it for 10 days.
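
If you want to verify how often collection runs and how long the data is kept on your own system, both settings are easy to inspect. The locations below are typical for RHEL/CentOS and are only a hedged example; Debian-family systems keep the equivalent settings under /etc/cron.d/sysstat and /etc/sysstat/sysstat.

cat /etc/cron.d/sysstat               # the sa1/sa2 cron jobs that collect and summarize the data
grep HISTORY /etc/sysconfig/sysstat   # retention period in days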

The reports output by sar will look familiar:

 

jkstill@poirot ~/tmp $ sar -d | head -20
Linux 4.4.0-53-generic (poirot.jks.com)         08/07/17        _x86_64_        (1 CPU)

00:00:01          DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
00:02:01       dev7-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-5      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev7-6      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:02:01       dev8-0      0.40      0.00      4.97     12.43      0.00      0.26      0.26      0.01
00:02:01     dev252-0      0.63      0.00      4.90      7.78      0.00      0.16      0.16      0.01
00:02:01     dev252-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-5      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
00:04:01       dev7-6      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

This report format is good for a quick look at different metrics to get an overall idea of system performance; the example shown above covers disk metrics.

However, what if you would like to collate this data into something more readily consumed for charting and analysis, such as a CSV file? The standard output of sar does not lend itself to such use without quite a bit of work.

The sar utility is part of the Linux sysstat package, which contains not only sar but also sadf. If sar is available, then so is sadf. What is sadf, you say? sadf is the sar data formatting utility, and it can output data in CSV, XML and other formats.

Using sadf, a bash script can extract sar data for many Linux subsystems and write it in the format of your choice, CSV in this case.
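
As a quick illustration of the raw material the script works with, you can run sadf by hand; the output is semicolon-delimited with a '# '-prefixed header row. This is only a sketch, and the exact columns depend on the sysstat version:

# the first line sadf emits is a '# '-prefixed, semicolon-delimited header row,
# followed by one row per sample
sadf -d -- -u | head -5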

The asp.sh script does just that, creating CSV files that can then be used for charting and analysis.

 

#!/bin/bash

# Jared Still - Pythian
# still@pythian.com jkstill@gmail.com
# 2017-07-20

# asp - another sar processor

# tested on Red Hat Enterprise Linux Server release 6.6 (Santiago)
# also tested on Linux Mint

help() {
	echo
	echo $0 -s source-dir -d dest-dir
	echo
}



# variables can be set to identify multiple sets of copied sar files
sarSrcDir='/var/log/sa' # RedHat, CentOS ...
#sarSrcDir='/var/log/sysstat' # Debian, Ubuntu ...

sarDstDir="sar-csv"

csvConvertCmd=" sed -e 's/;/,/g' "


while getopts s:d:h arg
do
	case $arg in
		d) sarDstDir=$OPTARG;;
		s) sarSrcDir=$OPTARG;;
		h) help; exit 0;;
		*) help; exit 1;;
	esac
done


cat << EOF

Source: $sarSrcDir
  Dest: $sarDstDir

EOF

#exit


mkdir -p $sarDstDir || {

	echo 
	echo Failed to create $sarDstDir
	echo
	exit 1

}

echo "lastSaDirEl: $lastSaDirEl"


# sar options
# -d activity per block device
  # -j LABEL: use label for device if possible. eg. sentryoraredo01 rather than /dev/dm-3
# -b IO and transfer rates
# -q load
# -u cpu
# -r memory utilization
# -R memory
# -B paging
# -S swap space utilization
# -W swap stats
# -n network
# -v kernel filesystem stats
# -w  context switches and task creation
#sarDestOptions=( '-d -j LABEL -p' '-b' '-q' '-u ALL' '-r' '-R' '-B' '-S' '-W' '-n DEV,EDEV,NFS,NFSD,SOCK,IP,EIP,ICMP,EICMP,TCP,ETCP,UDP' '-v' '-w')
# break up network into a separate file for each option
# not all options available depending on sar version
# for disk "-d" you may want one of ID, LABEL, PATH or UUID - check the output and the sar docs
sarDestOptions=( '-d -j ID -p' '-b' '-q' '-u ALL' '-r' '-R' '-B' '-S' '-W' '-n DEV' '-n EDEV' '-n NFS' '-n NFSD' '-n SOCK' '-n IP' '-n EIP' '-n ICMP' '-n EICMP' '-n TCP' '-n ETCP' '-n UDP' '-v' '-w')

#sarDestFiles=( sar-disk.csv sar-io.csv sar-load.csv sar-cpu.csv sar-mem-utilization.csv sar-mem.csv sar-paging.csv sar-swap-utilization.csv sar-swap-stats.csv sar-network.csv sar-kernel-fs.csv sar-context.csv)
sarDestFiles=( sar-disk.csv sar-io.csv sar-load.csv sar-cpu.csv sar-mem-utilization.csv sar-mem.csv sar-paging.csv sar-swap-utilization.csv sar-swap-stats.csv sar-net-dev.csv sar-net-ede.csv sar-net-nfs.csv sar-net-nfsd.csv sar-net-sock.csv sar-net-ip.csv sar-net-eip.csv sar-net-icmp.csv sar-net-eicmp.csv sar-net-tcp.csv sar-net-etcp.csv sar-net-udp.csv sar-kernel-fs.csv sar-context.csv)

lastSarOptEl=${#sarDestOptions[@]}
echo "lastSarOptEl: $lastSarOptEl"

#while [[ $i -lt ${#x[@]} ]]; do echo ${x[$i]}; (( i++ )); done;

# initialize files with header row
i=0
while [[ $i -lt $lastSarOptEl ]]
do
	CMD="sadf -d -- ${sarDestOptions[$i]}  | head -1 | $csvConvertCmd > ${sarDstDir}/${sarDestFiles[$i]} "
	echo CMD: $CMD
	eval $CMD
	#sadf -d -- ${sarDestOptions[$i]}  | head -1 | $csvConvertCmd > ${sarDstDir}/${sarDestFiles[$i]}
	echo "################"
	(( i++ ))
done

#exit

#for sarFiles in ${sarSrcDirs[$currentEl]}/sa??
for sarFiles in $(ls -1dtar ${sarSrcDir}/sa??)
do
	for sadfFile in $sarFiles
	do

		#echo CurrentEl: $currentEl
		# sadf options
		# -t is for local timestamp
		# -d : database semi-colon delimited output

		echo Processing File: $sadfFile

		i=0
		while [[ $i -lt $lastSarOptEl ]]
		do
			CMD="sadf -d -- ${sarDestOptions[$i]} $sadfFile | tail -n +2 | $csvConvertCmd  >> ${sarDstDir}/${sarDestFiles[$i]}"
			echo CMD: $CMD
			eval $CMD
			if [[ $? -ne 0 ]]; then
				echo "#############################################
				echo "## CMD Failed"
				echo "## $CMD"
				echo "#############################################

			fi
			(( i++ ))
		done

	done
done


echo
echo Processing complete 
echo 
echo files located in $sarDstDir
echo 


# show the files created
i=0
while [[ $i -lt $lastSarOptEl ]]
do
	ls -ld ${sarDstDir}/${sarDestFiles[$i]} 
	(( i++ ))
done

 

Following is an example execution of the script:

 

jkstill@poirot ~/pythian/blog/sar-tools-blog $ ./asp.sh -s /var/log/sysstat

Source: /var/log/sysstat
  Dest: sar-csv

lastSaDirEl:
lastSarOptEl: 23
CMD: sadf -d -- -d -j ID -p | head -1 | sed -e 's/;/,/g' > sar-csv/sar-disk.csv
################
CMD: sadf -d -- -b | head -1 | sed -e 's/;/,/g' > sar-csv/sar-io.csv
################
CMD: sadf -d -- -q | head -1 | sed -e 's/;/,/g' > sar-csv/sar-load.csv
################
CMD: sadf -d -- -u ALL | head -1 | sed -e 's/;/,/g' > sar-csv/sar-cpu.csv
################
CMD: sadf -d -- -r | head -1 | sed -e 's/;/,/g' > sar-csv/sar-mem-utilization.csv
################
CMD: sadf -d -- -R | head -1 | sed -e 's/;/,/g' > sar-csv/sar-mem.csv
################
CMD: sadf -d -- -B | head -1 | sed -e 's/;/,/g' > sar-csv/sar-paging.csv
################
CMD: sadf -d -- -S | head -1 | sed -e 's/;/,/g' > sar-csv/sar-swap-utilization.csv
################
CMD: sadf -d -- -W | head -1 | sed -e 's/;/,/g' > sar-csv/sar-swap-stats.csv
################
CMD: sadf -d -- -n DEV | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-dev.csv
################
CMD: sadf -d -- -n EDEV | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-ede.csv
################
CMD: sadf -d -- -n NFS | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-nfs.csv
################
CMD: sadf -d -- -n NFSD | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-nfsd.csv
################
CMD: sadf -d -- -n SOCK | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-sock.csv
################
CMD: sadf -d -- -n IP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-ip.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n EIP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-eip.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n ICMP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-icmp.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n EICMP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-eicmp.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n TCP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-tcp.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n ETCP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-etcp.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -n UDP | head -1 | sed -e 's/;/,/g' > sar-csv/sar-net-udp.csv
Requested activities not available in file /var/log/sysstat/sa07
################
CMD: sadf -d -- -v | head -1 | sed -e 's/;/,/g' > sar-csv/sar-kernel-fs.csv
################
CMD: sadf -d -- -w | head -1 | sed -e 's/;/,/g' > sar-csv/sar-context.csv
################
Processing File: /var/log/sysstat/sa30
CMD: sadf -d -- -d -j ID -p /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-disk.csv
CMD: sadf -d -- -b /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-io.csv
CMD: sadf -d -- -q /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-load.csv
CMD: sadf -d -- -u ALL /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-cpu.csv
CMD: sadf -d -- -r /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-mem-utilization.csv
CMD: sadf -d -- -R /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-mem.csv
CMD: sadf -d -- -B /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-paging.csv
CMD: sadf -d -- -S /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-swap-utilization.csv
CMD: sadf -d -- -W /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-swap-stats.csv
CMD: sadf -d -- -n DEV /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-dev.csv
CMD: sadf -d -- -n EDEV /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-ede.csv
CMD: sadf -d -- -n NFS /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-nfs.csv
CMD: sadf -d -- -n NFSD /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-nfsd.csv
CMD: sadf -d -- -n SOCK /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-sock.csv
CMD: sadf -d -- -n IP /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-ip.csv
Requested activities not available in file /var/log/sysstat/sa30
CMD: sadf -d -- -n EIP /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-eip.csv
Requested activities not available in file /var/log/sysstat/sa30
CMD: sadf -d -- -n ICMP /var/log/sysstat/sa30 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-net-icmp.csv
Requested activities not available in file /var/log/sysstat/sa30
...

CMD: sadf -d -- -v /var/log/sysstat/sa07 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-kernel-fs.csv
CMD: sadf -d -- -w /var/log/sysstat/sa07 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-context.csv

Processing complete

files located in sar-csv

Note that for some commands the result is "Requested activities not available". This means we asked for something sar may be capable of collecting, but there is no data for it; most likely sar has not been configured to collect that particular metric. These messages can be ignored unless, of course, they concern a metric you would like to see. (Configuring sar collection is outside the scope of this article.)
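
If one of those missing metrics is something you do want, the usual approach is to pass extra collection options to the sadc data collector. The snippet below is only a sketch: the file location and accepted keywords vary by distribution and sysstat version, so check 'man sadc' on your system.

# /etc/sysconfig/sysstat on RHEL/CentOS (Debian/Ubuntu: /etc/sysstat/sysstat)
SADC_OPTIONS="-S XALL"    # ask sadc to collect all optional activities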

Let's break down the following sadf command line:

sadf -d -- -d -j ID -p /var/log/sysstat/sa05 | tail -n +2 | sed -e 's/;/,/g' >> sar-csv/sar-disk.csv
The sadf ‘-d’ option

This option tells sadf to output data in CSV format. As CSV stands for comma-separated values, the use of a semi-colon as the delimiter may at first seem a little puzzling. However, it makes sense to use something other than a comma, as a comma could conceivably appear in the output; in that case sadf would need to surround the data with quotes. While many programs can deal with data that may or may not be quoted, it is simpler to use a semi-colon, which is unlikely to appear in sar data.

The ‘--’ option

This option doesn't appear to be too well known. When used on a command line, '--' signifies the end of options for the program. A typical use might be searching some text files for '-j' with grep. The following command fails because grep interprets the text we want to search for as an option; since there is no '-j' option for grep, the command fails.

 

jkstill@poirot ~/pythian/blog/sar-tools-blog $ grep '-j' *.sh
grep: invalid option -- 'j'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.

 

Using '--' tells grep to stop processing options, and so grep searches for '-j' as intended.

jkstill@poirot ~/pythian/blog/sar-tools-blog $ grep -- '-j' *.sh
  # -j LABEL: use label for device if possible. eg. sentryoraredo01 rather than /dev/dm-3
#sarDestOptions=( '-d -j LABEL -p' '-b' '-q' '-u ALL' '-r' '-R' '-B' '-S' '-W' '-n DEV,EDEV,NFS,NFSD,SOCK,IP,EIP,ICMP,EICMP,TCP,ETCP,UDP' '-v' '-w')
sarDestOptions=( '-d -j ID -p' '-b' '-q' '-u ALL' '-r' '-R' '-B' '-S' '-W' '-n DEV' '-n EDEV' '-n NFS' '-n NFSD' '-n SOCK' '-n IP' '-n EIP' '-n ICMP' '-n EICMP' '-n TCP' '-n ETCP' '-n UDP' '-v' '-w')

Why are we using '--' in this case? Because the options that follow it are not sadf options; they are sar options that sadf passes along when it calls sar.

The ‘-d’, ‘-j’ and ‘-p’ options

By reading the sar man page you can learn what each of these does (a quick comparison is sketched after the list):

  • -d block device activity
  • -j display persistent device names – options are ID, LABEL, PATH and UUID
  • -p pretty print device names
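
A quick, illustrative comparison of those options (the device names shown in the comments are examples only):

sar -d | head -5            # kernel device names such as dev8-0
sar -d -p | head -5         # pretty-printed names such as sda or dm-0
sar -d -j ID -p | head -5   # persistent IDs where the system provides them
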
tail -n +2

This keeps only the rows after the header row. asp.sh calls sadf once for each day's sar data, and the header row has already been written in the initialization step.
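
A quick illustration of what tail -n +2 does:

$ printf 'header\nrow1\nrow2\n' | tail -n +2
row1
row2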

 

sed

sed is being used to change semi-colons to commas.  So far I have not seen any commas in sar data, and using a comma delimiter is just more convenient.

 

Charting in Excel with Perl

 

Having the sar data in CSV format is convenient, as it will load directly into Excel or Google Sheets. What isn't convenient is all the work required to load the data, format it, and then create multiple charts from it, particularly when there are a number of CSV files to work through.

Perl to the rescue!  The Excel::Writer::XLSX Perl module can directly create formatted Excel files, complete with charts. I am going to make some basic use of this quite capable module via the dynachart.pl Perl script. This script reads CSV data and creates Excel files, optionally with line charts of selected columns.

If you would like to see some simple examples of using Perl to create Excel files, the documentation for the Perl module Excel::Writer::XLSX contains several good examples, such as this one: Example-1

There are several others to be found in the documentation for  Excel::Writer::XLSX

Following is the entire script.  This script and others can be found at the csv-tools github repo.

#!/usr/bin/env perl

# dynachart.pl
# Jared Still 2017-07-23
# still@pythian.com jkstill@gmail.com

# data is from STDIN

use warnings;
use strict;
use Data::Dumper;
use Pod::Usage;
use Getopt::Long;
use Excel::Writer::XLSX;

my $debug = 0;
my $combinedChart = 0;
my %optctl = ();
my ($help,$man);
my @chartCols;
my $categoryColName='';
my $categoryColNum=0;

Getopt::Long::GetOptions(
	\%optctl, 
	'spreadsheet-file=s',
	'debug!' => \$debug,
	'chart-cols=s{1,10}' => \@chartCols,
	'combined-chart!' => \$combinedChart,
	'worksheet-col=s',  # creates a separate worksheet per value of this column
	'category-col=s' => \$categoryColName,
	'h|help|?' => \$help, man => \$man
) or pod2usage(2) ;

pod2usage(1) if $help;
pod2usage(-verbose => 2) if $man;

my $xlFile = defined($optctl{'spreadsheet-file'}) ? $optctl{'spreadsheet-file'} : 'asm-metrics.xlsx';
my $workSheetCol = defined($optctl{'worksheet-col'}) ? $optctl{'worksheet-col'} : 0;

my %fonts = (
	fixed			=> 'Courier New',
	fixed_bold	=> 'Courier New',
	text			=> 'Arial',
	text_bold	=> 'Arial',
);

my %fontSizes = (
	fixed			=> 10,
	fixed_bold	=> 10,
	text			=> 10,
	text_bold	=> 10,
);

my $maxColWidth = 50;
my $counter = 0;
my $interval = 100;

# create workbook
my $workBook = Excel::Writer::XLSX->new($xlFile);
die "Problems creating new Excel file $xlFile: $!\n" unless defined $workBook;

# create formats
my $stdFormat = $workBook->add_format(bold => 0,font => $fonts{fixed}, size => $fontSizes{fixed}, color => 'black');
my $boldFormat = $workBook->add_format(bold => 1,font => $fonts{fixed_bold}, size => $fontSizes{fixed_bold}, color => 'black');
my $wrapFormat = $workBook->add_format(bold => 0,font => $fonts{text}, size => $fontSizes{text}, color => 'black');
$wrapFormat->set_align('vjustify');


my $labels=<>;
chomp $labels;
# sadf starts header lines with '# ' - remove that
$labels =~ s/^#\s+//;
my @labels = split(/,\s*/,$labels);

if ($debug) {

print qq{LABELS:\n};

print join("\n",@labels);

print "\n";

}

# get the X series category
if ( $categoryColName ) {

	my $want = $categoryColName;
	my $index = 0;
	++$index until ($labels[$index] eq $want) or ($index > $#labels);
	$categoryColNum = $index;	

}

#print join("\n",@labels);

# get the element number of the column used to segregate into worksheets
my $workSheetColPos;
if ($workSheetCol) {
	my $i=0;
	foreach my $label ( @labels)  {
		if ($label eq $workSheetCol) { 
			$workSheetColPos = $i;
			last;
		}
		$i++;
	}
}

print "\nworkSheetColPos: $workSheetColPos\n" if $debug;

# validate the columns to be charted
# use as an index into the labels
my @chartColPos=();
{
	my $i=0;
	foreach my $label ( @labels)  {
		foreach my $chartCol ( @chartCols ) {
			if ($label eq $chartCol) { 
				push @chartColPos, $i;
				last;
			}
		}
		$i++;
	}
}

if ($debug) {
	print "\nChartCols:\n", Dumper(\@chartCols);
	print "\nChartColPos:\n", Dumper(\@chartColPos);
	print "\nLabels:\n", Dumper(\@labels);
}

my %lineCount=();
my %workSheets=();


# the first worksheet is a directory
my $directoryName='Directory';
my $directory;
my $noDirectory=0;
my $directoryLineCount=0;

unless ($noDirectory) {
	$directory = $workBook->add_worksheet($directoryName)	;
	$directory->set_column(0,0,30);
	$directory->write_row($directoryLineCount++,0,['Directory'],$boldFormat);
}

while (<>) {

	chomp;
	my @data=split(/,/);

	my $currWorkSheetName;
	if ($workSheetCol) {
		$currWorkSheetName=$data[$workSheetColPos];
		if (length($currWorkSheetName) > 31 ) {
			# cut some out of the middle of the name as the Excel worksheet name has max size of 31
			$currWorkSheetName = substr($currWorkSheetName,0,14) . '...' . substr($currWorkSheetName,-14);
		}
	} else {
		$currWorkSheetName='DynaChart';
	}

	print "Worksheet Name: $currWorkSheetName\n" if $debug;

	unless (defined $workSheets{$currWorkSheetName}) {
		$workSheets{$currWorkSheetName} = $workBook->add_worksheet($currWorkSheetName);
		$workSheets{$currWorkSheetName}->write_row($lineCount{$currWorkSheetName}++,0,\@labels,$boldFormat);
		# freeze pane at header
		$workSheets{$currWorkSheetName}->freeze_panes($lineCount{$currWorkSheetName},0);
	}

	# setup column widths
	#$workSheet->set_column($el,$el,$colWidth);
	$workSheets{$currWorkSheetName}->write_row($lineCount{$currWorkSheetName}++,0,\@data, $stdFormat);

}

if ($debug) {
	print "Worksheets:\n";
	print "$_\n" foreach keys %workSheets;
	print Dumper(\%lineCount);
}

# each row consumes about 18 pixels
my $rowHeight=18; # pixels
my $chartHeight=23; # rows

# triple from default width of 480
my $chartWidth = 480 * 3;

=head1 Write the Charts

 The default mode is to create a separate chart for each metric
 By specifying the command line option --combined-chart the values will be charted in a single chart
 Doing so is probably useful only for a limited number of sets of values

 Some may question the apparent duplication of code in the sections to combine or not combine the charts
 Doing so would be a bit convoluted - this is easier to read and modify

=cut

foreach my $workSheet ( keys %workSheets ) {
	print "Charting worksheet: $workSheet\n" if $debug;

	my $chartNum = 0;

	if ($combinedChart) {
		my $chart = $workBook->add_chart( type => 'line', name => "Combined" . '-' . $workSheets{$workSheet}->get_name(), embedded => 1 );
		$chart->set_size( width => $chartWidth, height => $chartHeight * $rowHeight);
		
		# each chart consumes about 16 rows
		$workSheets{$workSheet}->insert_chart((($chartNum * $chartHeight) + 2),3, $chart);

		foreach my $colPos ( @chartColPos ) {
			my $col2Chart=$labels[$colPos];
			# [ sheet, row_start, row_end, col_start, col_end]
			$chart->add_series(
				name => $col2Chart,
				#categories => [$workSheet, 1,$lineCount{$workSheet},2,2],
				categories => [$workSheet, 1,$lineCount{$workSheet},$categoryColNum,$categoryColNum],
				values => [$workSheet, 1,$lineCount{$workSheet},$colPos,$colPos]
			);
		}
		
	} else {
		foreach my $colPos ( @chartColPos ) {
			my $col2Chart=$labels[$colPos];
			print "\tCharting column: $col2Chart\n" if $debug;
			my $chart = $workBook->add_chart( type => 'line', name => "$col2Chart" . '-' . $workSheets{$workSheet}->get_name(), embedded => 1 );
			$chart->set_size( width => $chartWidth, height => $chartHeight * $rowHeight);

			# each chart consumes about 16 rows
			$workSheets{$workSheet}->insert_chart((($chartNum * $chartHeight) + 2),3, $chart);
		

			# [ sheet, row_start, row_end, col_start, col_end]
			$chart->add_series(
				name => $col2Chart,
				#categories => [$workSheet, 1,$lineCount{$workSheet},2,2],
				categories => [$workSheet, 1,$lineCount{$workSheet},$categoryColNum,$categoryColNum],
				values => [$workSheet, 1,$lineCount{$workSheet},$colPos,$colPos]
			);

			$chartNum++;
		}
	}
}


# write the directory page
my $urlFormat = $workBook->add_format( color => 'blue', underline => 1 );
my %sheetNames=();

foreach my $worksheet ( $workBook->sheets() ) {
	my $sheetName = $worksheet->get_name();
	next if $sheetName eq $directoryName;
	$sheetNames{$sheetName} = $worksheet;
}

foreach my $sheetName ( sort keys %sheetNames ) {
	$directory->write_url($directoryLineCount++,0, qq{internal:'$sheetName'!A1} ,$urlFormat, $sheetName);
}

__END__

=head1 NAME

dynachart.pl

  --help brief help message
  --man  full documentation
  --spreadsheet-file output file name - defaults to asm-metrics.xlsx
  --worksheet-col name of column used to segregate data into worksheets 
    defaults to a single worksheet if not supplied
  --chart-cols list of columns to chart

 dynachart.pl accepts input from STDIN

 This script will read CSV data created by asm-metrics-collector.pl or asm-metrics-aggregator.pl


=head1 SYNOPSIS

dynachart.pl [options] [file ...]

 Options:
   --help brief help message
   --man  full documentation
   --spreadsheet-file output file name - defaults to asm-metrics.xlsx
   --worksheet-col name of column used to segregate data into worksheets 
     defaults to a single worksheet if not supplied
  --chart-cols list of columns to chart
  --category-col specify the column for the X vector - a timestamp is typically used 
    the name must exactly match that in the header
  --combined-chart create a single chart rather than a chart for each value specified in --chart-cols

 dynachart.pl accepts input from STDIN

 dynachart.pl --worksheet-col DISKGROUP_NAME < my_input_file.csv


 dynachart.pl --spreadsheet-file sar-disk-test.xlsx --combined-chart --worksheet-col DEV --category-col 'timestamp' --chart-cols 'rd_sec/s' --chart-cols 'wr_sec/s' < sar-disk-test.csv


=head1 OPTIONS

=over 8

=item B<-help>

Print a brief help message and exits.

=item B<-man>

Prints the manual page and exits.

=item B<--spreadsheet-file>

 The name of the Excel file to create.
 The default name is asm-metrics.xlsx

=item B<--worksheet-col>

 By default a single worksheet is created.
 When this option is used, the column supplied as an argument will be used to segregate the data into separate worksheets.

=item B<--chart-cols>

 List of columns to chart
 This should be the last option on the command line if used.

 It may be necessary to tell Getopt to stop processing arguments with '--' in some cases.

 eg.

 dynachart.pl --worksheet-col DISKGROUP_NAME --chart-cols READS WRITES -- logs/asm-oravm-20150512_01-agg-dg.csv

=item B<--category-col>

 Column to use as the category for the X axis in the chart - defaults to the first column
 The name must exactly match a column name from the CSV file
 Typically this is a timestamp

=back

=head1 DESCRIPTION

B<dynachart.pl> creates an Excel file with charts for the selected columns.

=head1 EXAMPLES

 dynachart.pl accepts data from STDIN
 
 dynachart.pl --worksheet-col DISKGROUP_NAME --spreadsheet-file mywork.xlsx

=cut


 

Creating Multiple Charts with Bash

 

A number of invocations of dynachart.pl can be found in sar-chart.sh as seen in the following example. This allows creating a number of Excel files from sar data for later analysis, all with one command.

 

 

#!/bin/bash


usage() {

	cat <<-EOF

	usage: $0 destination-directory

	example script that charts CSV data that has been generated from sar
	Excel XLSX files with charts are produced

	applicable to any CSV data

	
	example: sar-chart.sh data-dir

	This script is using data generated by asp.sh


	EOF

}


while getopts h arg
do
	case $arg in
		h) usage;exit 0;;
		*) usage;exit 1;;
	esac
done

destDir=$1

[ -d "$destDir" -a -w "$destDir" ] || {
	echo
	echo cannot read and/or write directory destDir: $destDir
	echo
	usage
	echo
	exit 1
}

### Template
#echo working on sar-
#dynachart.pl --spreadsheet-file ${destDir}/ --worksheet-col hostname --category-col 'timestamp' 
################


# if dynachart.pl is in the path then use it.
# if not then check for local directory copy or link
# otherwise error exit

dynaChart=$(which dynachart.pl)

if [[ -z $dynaChart ]]; then
	if [[ -f ./dynachart.pl ]]; then
		dynaChart='./dynachart.pl'
		[ -x "$dynaChart" ] || {
			echo
			echo $dynaChart is not executable	
			echo
			exit 2
		}
	else
		echo
		echo "dynachart.pl not found"
		echo
		exit 1
	fi
fi


# default of 1 chart per metric
echo working on sar-disk-default.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-disk-default.xlsx --worksheet-col DEV --category-col 'timestamp' --chart-cols 'rd_sec/s' --chart-cols 'wr_sec/s' < sar-disk.csv

# combine metrics into one chart
echo working on sar-disk-combined.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-disk-combined.xlsx --combined-chart --worksheet-col DEV --category-col 'timestamp' --chart-cols 'rd_sec/s' --chart-cols 'wr_sec/s' < sar-disk.csv


echo working on sar-network-device.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-network-device.xlsx --combined-chart --worksheet-col IFACE --category-col 'timestamp' --chart-cols 'rxkB/s' --chart-cols 'txkB/s' < sar-net-dev.csv

echo working on sar-network-error-device.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-network-error-device.xlsx --combined-chart --worksheet-col IFACE --category-col 'timestamp' --chart-cols 'rxerr/s' --chart-cols 'txerr/s' < sar-net-ede.csv

echo working on sar-network-nfs.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-network-nfs.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols 'call/s' --chart-cols 'retrans/' --chart-cols 'read/s' --chart-cols 'write/s' --chart-cols 'access/s' --chart-cols 'getatt/s'  < sar-net-nfs.csv

echo working on sar-network-nfsd.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-network-nfsd.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols 'scall/s' --chart-cols 'badcall/s' --chart-cols 'packet/s' --chart-cols 'udp/s' --chart-cols 'tcp/s' --chart-cols 'hit/s' --chart-cols 'miss/s' --chart-cols 'sread/s' --chart-cols 'swrite/s' --chart-cols 'saccess/s' --chart-cols 'sgetatt/s' < sar-net-nfsd.csv

echo working on sar-network-socket.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-network-socket.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols 'totsck' --chart-cols 'tcpsck' --chart-cols 'udpsck' --chart-cols 'rawsck' --chart-cols 'ip-frag' --chart-cols 'tcp-tw' < sar-net-sock.csv

echo working on sar-context.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-context.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols 'proc/s' --chart-cols 'cswch/s' < sar-context.csv

echo working on sar-cpu.xlsx
# extracted with -u ALL, so all CPU on one line
dynachart.pl --spreadsheet-file ${destDir}/sar-cpu.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols '%usr' --chart-cols '%nice' --chart-cols '%sys' --chart-cols '%iowait' --chart-cols '%steal' --chart-cols '%irq' --chart-cols '%soft' --chart-cols '%guest' --chart-cols '%idle' < sar-cpu.csv


echo working on sar-io-default.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-io-default.xlsx --worksheet-col hostname --category-col 'timestamp' --chart-cols 'tps' --chart-cols 'rtps' --chart-cols 'wtps' --chart-cols 'bread/s' --chart-cols 'bwrtn/s' < sar-io.csv

echo working on sar-io-tps-combined.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-io-tps-combined.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp' --chart-cols 'tps' --chart-cols 'rtps' --chart-cols 'wtps' < sar-io.csv

echo working on sar-io-blks-per-second-combined.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-io-blks-per-second-combined.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp' --chart-cols 'bread/s' --chart-cols 'bwrtn/s' < sar-io.csv


echo working on sar-load-runq-threads.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-load-runq-threads.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp' --chart-cols 'runq-sz' --chart-cols 'plist-sz' --chart-cols 'ldavg-1' --chart-cols 'ldavg-5' --chart-cols 'ldavg-15' < sar-load.csv

echo working on sar-load-runq.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-load-runq.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp' --chart-cols 'runq-sz' --chart-cols 'ldavg-1' --chart-cols 'ldavg-5' --chart-cols 'ldavg-15' < sar-load.csv

echo working on sar-memory.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-memory.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp' --chart-cols 'frmpg/s' --chart-cols  'bufpg/s' < sar-mem.csv


echo working on sar-paging-rate.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-paging-rate.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp'  --chart-cols 'pgpgin/s' --chart-cols  'pgpgout/s' < sar-paging.csv

echo working on sar-swap-rate.xlsx
dynachart.pl --spreadsheet-file ${destDir}/sar-swap-rate.xlsx --combined-chart --worksheet-col hostname --category-col 'timestamp'  --chart-cols 'pswpin/s' --chart-cols 'pswpout/s' < sar-swap-stats.csv

Filter, rename and aggregate the data

Perhaps you want to see only a subset of the data. Let's say you wish to see the data for just a select group of devices: those making up the DATA diskgroup in an Oracle database. In addition, you would like the sar report to use names that match those seen by ASM; rather than /dev/mapper/mp1, mp2,…, you would like to see DATA00, DATA01,…

You would also like to see the IOPS, read rate and write rate for that set of disks aggregated per timestamp.

We can do that. The remap.sh script has an example (a real-life one, by the way) of how to map the ASM disk names to the Linux device names and create a new CSV file.

This query was used to map device names to disk names:

 

select
	'dm-name-' || substr(d.path,instr(d.path,'/',-1)+1,length(d.path) - instr(d.path,'/',-1)) path
	, g.name || lpad(d.disk_number,2,'0') name
from v$asm_disk d
join v$asm_diskgroup g on g.group_number = d.group_number
order by 2

15:10:17 SYSDBA> /

PATH			                   NAME
------------------------------ --------------------
dm-name-mpathmp1	             DATA00
dm-name-mpathnp1	             DATA01
dm-name-mpathop1	             DATA02
dm-name-mpathqp1	             DATA03
dm-name-mpathrp1	             DATA04
dm-name-mpathsp1	             DATA05
dm-name-mpathtp1	             DATA06
dm-name-mpathvp1	             DATA07
dm-name-mpathwp1	             DATA08
dm-name-mpathcp1	             FRA00

10 rows selected.

 

This information was used to then create a sed command to rewrite the CSV data as needed.

 


originalFile=sar-disk-original.csv
newFile=sar-disk.csv

sed \
	-e 's/dm-name-mpathmp1/DATA00/g' \
	-e 's/dm-name-mpathnp1/DATA01/g' \
	-e 's/dm-name-mpathop1/DATA02/g' \
	-e 's/dm-name-mpathqp1/DATA03/g' \
	-e 's/dm-name-mpathrp1/DATA04/g' \
	-e 's/dm-name-mpathsp1/DATA05/g' \
	-e 's/dm-name-mpathtp1/DATA06/g' \
	-e 's/dm-name-mpathvp1/DATA07/g' \
	-e 's/dm-name-mpathwp1/DATA08/g' \
	-e 's/dm-name-mpathcp1/FRA00/g' \
< $originalFile > $newFile

Following that, csv-aggregator.sh and csv-aggregator.pl were used to filter and aggregate the data. The result is a much smaller CSV file containing only the DATA diskgroup, with the disk data rolled up to the diskgroup level.

Rather than post several hundred more lines of Perl here, I recommend following the links if you want to see how the aggregation and filtering are done. The command line, though, is shown here:

 


csvFile=sar-csv/sar-disk-test.csv

./csv-aggregator.pl --filter-cols DEV  --filter-vals 'DATA..'  --key-cols hostname --key-cols timestamp  --grouping-cols DEV  --agg-cols tps --agg-cols 'rd_sec/s' --agg-cols 'wr_sec/s'  --agg-cols 'avgrq-sz' --agg-cols 'avgqu-sz' < $csvFile

 

This is a chart taken directly from the Excel file created from this CSV data via dynachart.pl.

 

[Chart generated by dynachart.pl from the aggregated CSV data]

Installing Excel::Writer::XLSX

For the dynachart.pl script to work, the Excel::Writer::XLSX Perl module must be installed.

There are several possible methods of doing this, but if you are not already fairly familiar with the Perl environment, some of them are a little daunting. As Oracle by default installs Perl in ORACLE_HOME, you can make use of that existing installation. These instructions install the Excel::Writer::XLSX module and its dependencies directly into the Oracle Perl installation. Because of that, you would want to do this in a test environment, such as a virtual machine running on your laptop. It isn't too likely that this would break something in the Oracle home, but then again, why take the chance?

An even better option would be to install your own Perl, and not worry about affecting any other software. This is not too difficult to do via Perlbrew. Check out https://perlbrew.pl/ for the simple instructions.
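
For reference, a Perlbrew-based setup amounts to only a few commands. The sketch below is just an outline, not the official instructions, and the Perl version used is only an example:

\curl -L https://install.perlbrew.pl | bash   # install Perlbrew itself
perlbrew install perl-5.26.0                  # build a private Perl
perlbrew switch perl-5.26.0
perlbrew install-cpanm                        # cpanm simplifies module installation
cpanm Excel::Writer::XLSX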

The following example installs Excel::Writer::XLSX in Oracle's copy of Perl on a test server. For my own use I do use Perlbrew, but I am including the following as an example.

 

[oracle@oravm01 ~]$ $ORACLE_HOME/perl/bin/perl -MCPAN -e shell

...

Would you like me to configure as much as possible automatically? [yes] yes

...

These next commands tell CPAN to automatically answer appropriately rather than prompt you to continually type ‘yes’.
This lasts the duration of this CPAN session.  Following that is the command to install Excel::Writer::XLSX.

 

cpan> o conf prerequisites_policy 'follow'
cpan> o conf build_requires_install_policy yes


cpan> install Excel::Writer::XLSX

...


cpan> exit
Terminal does not support GetHistory.
Lockfile removed.

 

Now test the installation. This command simply tells Perl that we require version 99 of the module. The failure is normal, as only version 0.95 is installed; a different message would appear if no version of the module was installed.

[oracle@oravm01 ~]$ $ORACLE_HOME/perl/bin/perl -MExcel::Writer::XLSX=99
Excel::Writer::XLSX version 99 required--this is only version 0.95 at /u01/app/oracle/product/11.2.0/db_1/perl/lib/5.10.0/Exporter/Heavy.pm line 122.
BEGIN failed--compilation aborted.
Categories: DBA Blogs

How Pythian Helped Nav Canada Migrate Their Oracle Database to a Microsoft Azure SQL Database

Wed, 2017-08-09 15:20

A common concern for businesses is optimizing database speed and performance, without raising costs.

Nav Canada supports flight planning with aviation weather, aeronautical information, and online flight planning. To address the need for a more cost-efficient environment for their critical flight planning data while optimizing performance, they decided to migrate from an on-premises Oracle system to Microsoft Azure SQL Database. They turned to the database and cloud experts at Pythian to assist them.

“Pythian’s breadth of expertise spans both traditional on-premises and cloud environments, making Pythian the perfect partner as we migrated critical workloads to the cloud.”

Pascal Chamma, Manager, Nav Canada Database Technology Services.

Pythian was tasked with creating a Proof of Concept (PoC) on an Azure database, while simulating the central SQL database. The project team demonstrated that the same level of performance could be achieved within a more cost-effective cloud environment. The PoC proved that the new system could run a process (3000 lines of PL/SQL code) every 20 seconds, performing calculations on up to 3 minutes of collected data.

Nav Canada is currently implementing the system across a production environment, with Pythian advising them as needed.

Read the full case study to learn how Nav Canada was able to migrate to the cloud to reduce costs, without sacrificing performance.

Categories: DBA Blogs

What You Should Know About Docker Containers for Oracle Data Integrator

Tue, 2017-08-01 14:12

Not long ago, Oracle adopted the Docker engine as one of the accepted platforms for its products and published a set of scripts and examples in the Oracle GitHub repository. This includes sets for rapid deployment of Oracle Database, WebLogic and a number of other products. I tried some of the published Docker implementations, including the Oracle Database one, in my sandbox, and they worked pretty well for me. Among the published containers I didn't find an example for Oracle Data Integrator (ODI), so I decided to make one for myself. I created it and found it useful when you just need to test one ODI scenario or another and wipe everything out after the tests.

I've decided to share my scripts and describe how they were made. Just before publishing the post, I checked the Oracle git repository for Docker and found a new "OracleDataIntegrator" template there. So we now have a proper version from Oracle, but even with the "official" deployment available, you may find my version useful, and I hope it helps you understand the Docker engine.

To build my version, I used the Oracle-developed images for Oracle Linux, Oracle Java and Oracle Database as a basis for ODI, making only minor modifications to better suit my needs. You may find that other solutions are better adapted to your needs, but please keep in mind that my example is purely educational and not designed for any production use.

First, we need the Docker engine on the machine. Depending on your platform, download the appropriate engine from the Docker site and install it; the installation is straightforward. If you are on Linux, you may need to add your user to the "docker" group. For example, I am adding the oracle user to the group so that it can run Docker containers and access the Docker registry:

[root@vm129-132 ~]# gpasswd -a oracle docker
Adding user oracle to group docker
[root@vm129-132 ~]# 

And make sure the docker service is up and running:

[root@vm129-132 ~]# systemctl status  docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker-sysconfig.conf
   Active: active (running) since Tue 2017-06-06 12:29:35 EDT; 1 months 24 days ago

Then you need to clone or download the Oracle git repository for Docker containers in order to build the necessary base images before starting with ODI. On your Docker host, either run "git clone https://github.com/oracle/docker-images.git" or go to https://github.com/oracle/docker-images in your browser and download and unzip the full archive. There you will find scripts to build Docker images for all the different sets of software. Here is my listing after cloning it from the Oracle git repository:

[oracle@vm129-132 docker-images]$ ls -l
total 68
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 ContainerCloud
drwxr-xr-x. 6 oracle oinstall  4096 May 11 16:42 GlassFish
drwx------. 2 root   root     16384 May 11 16:54 lost+found
drwxr-xr-x. 2 oracle oinstall  4096 May 11 16:42 MySQL
drwxr-xr-x. 7 oracle oinstall  4096 May 11 16:42 NoSQL
drwxr-xr-x. 5 oracle oinstall  4096 May 11 16:42 OpenJDK
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleCoherence
drwxr-xr-x. 5 oracle oinstall  4096 Jul 18 14:40 OracleDatabase
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleHTTPServer
drwxr-xr-x. 6 oracle oinstall  4096 May 11 16:42 OracleJava
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleTSAM
drwxr-xr-x. 4 oracle oinstall  4096 May 11 16:42 OracleTuxedo
drwxr-xr-x. 5 oracle oinstall  4096 May 11 16:42 OracleWebLogic
-rw-r--r--. 1 oracle oinstall  1588 Jul 17 09:10 README.md

(The listing represents the state as of May 2017 and may look different now)

Let's continue and prepare our images step by step. To understand the process, let me recall what we need for ODI in standalone mode: a Linux box with a JDK or JRE installed and an Oracle database as a repository. Also, if we plan to use ODI Studio, it makes sense to have either the X Window System or a VNC server installed on the box. We start by building a Linux image with a Java JRE or JDK for our ODI. Oracle provides the "OracleJava" Docker configuration, from which we can build an image with Java 7 or 8:

[oracle@vm129-132 docker-images]$ ll OracleJava
total 20
drwxr-xr-x. 2 oracle oinstall 4096 May 11 16:42 java-7
drwxr-xr-x. 2 oracle oinstall 4096 May 12 10:56 java-8
-rw-r--r--. 1 oracle oinstall 1886 May 11 16:42 README.md
drwxr-xr-x. 4 oracle oinstall 4096 May 11 16:42 windows-java-7
drwxr-xr-x. 4 oracle oinstall 4096 May 11 16:42 windows-java-8

I used Oracle JDK 8 instead of a server JRE distribution. To make that happen, I slightly modified the Dockerfile in the "OracleJava/java-8" directory, replacing the server JRE distribution with the JDK. This is optional and you may choose to keep the JRE instead. In my case, the original string in the file was changed from:

ENV JAVA_PKG=server-jre-8u*-linux-x64.tar.gz

to:

ENV JAVA_PKG=jdk-8u*-linux-x64.tar.gz

After that, I downloaded the JDK from the Oracle OTN site, put it in the folder and ran the build.sh script. The script prepares an image with Oracle Linux 7 in minimal configuration with the Oracle JDK 8 installed.

[oracle@vm129-132 java-8]$ ll
total 181204
-rwxr-xr-x. 1 oracle oinstall        47 May 11 16:42 build.sh
-rw-r--r--. 1 oracle oinstall       644 May 11 16:42 Dockerfile
-rw-r--r--. 1 oracle oinstall 185540433 May 12 10:43 jdk-8u131-linux-x64.tar.gz
-rw-r--r--. 1 oracle oinstall       263 May 11 16:42 server-jre-8u101-linux-x64.tar.gz.download
[oracle@vm129-132 java-8]$ cp Dockerfile Dockerfile.orig
[oracle@vm129-132 java-8]$ vi Dockerfile
[oracle@vm129-132 java-8]$ ./build.sh 
Sending build context to Docker daemon 185.5 MB
Step 1 : FROM oraclelinux:7-slim
7-slim: Pulling from library/oraclelinux
...............
Successfully built 381e0684cea2
[oracle@vm129-132 java-8]$ 
[oracle@vm129-132 java-8]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/serverjre      8                   381e0684cea2        10 weeks ago        490.6 MB
oraclelinux           7-slim              442ebf722584        3 months ago        114.4 MB

The next step is optional, but it makes life a bit easier. We need a number of tools installed on top of our minimal installation. In Docker, you can create a container, modify it and save it as another image using the "commit" command. The beauty of this is that you are not really doubling your space consumption with these incremental images: Docker stores only the changes you have made, as a separate layer behind the scenes, and uses the original image plus your changes for the newly created image. You may think of it as a set of "snapshots". So, I created a container named "serverjrevnc" from the "oracle/serverjre:8" image and installed additional packages, including some diagnostic and configuration tools and other useful packages such as a VNC server, the vi editor and others.

[oracle@vm129-132 java-8]$ docker run --name serverjrevnc -p 5901:5901 -ti oracle/serverjre:8
bash-4.2# 
bash-4.2# yum -y install vim
bash-4.2# yum -y install net-tools
bash-4.2# yum -y install telnet
bash-4.2# yum -y install strace
bash-4.2# yum -y install gcc
bash-4.2# yum -y install xterm
.....

After that, I used the container to commit it as a new image and saved it as “oracle/serverjrevnc:8”.

[oracle@vm129-132 java-8]$ docker commit serverjrevnc oracle/serverjrevnc:8
sha256:ac5b4d85fccc5427c92e65a6c3b1c06e3e8d04ffbe7725bcca1759a2165353d7
[oracle@vm129-132 java-8]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/serverjrevnc   8                   ac5b4d85fccc        3 minutes ago       1.661 GB
oracle/serverjre      8                   381e0684cea2        34 minutes ago      490.6 MB
oraclelinux           7-slim              442ebf722584        3 weeks ago         114.4 MB
[oracle@vm129-132 java-8]$
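
Because the committed image only adds layers on top of oracle/serverjre:8, the extra space consumed is roughly the size of the packages installed in the container. If you want to see the layering for yourself, docker history shows it:

docker history oracle/serverjrevnc:8   # lists the layers that make up the committed image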

The container "serverjrevnc" can be deleted now:

[oracle@vm129-132 java-8]$ docker stop serverjrevnc
[oracle@vm129-132 java-8]$ docker rm serverjrevnc

Now we have a basic Oracle Linux image with Java and all the tools and utilities we need. The image can be used in the next step as the basis for our Oracle Database image; the database will serve as the repository for our ODI. We go to the "docker-images/OracleDatabase/dockerfiles/" folder, replace the line "FROM oraclelinux:7-slim" with "FROM oracle/serverjrevnc:8" in the Dockerfile, download the 12.2.0.1 EE database software from the Oracle site and build the image for Oracle Database 12.2.0.1.

[oracle@vm129-132 java-8]$ cd ../../OracleDatabase/dockerfiles/
[oracle@vm129-132 dockerfiles]$ ll
total 16
drwxr-xr-x. 2 oracle oinstall 4096 May 11 16:42 11.2.0.2
drwxr-xr-x. 2 oracle oinstall 4096 Jul 18 14:18 12.1.0.2
drwxr-xr-x. 2 oracle oinstall 4096 Jul 25 14:37 12.2.0.1
-rwxr-xr-x. 1 oracle oinstall 3975 May 11 16:42 buildDockerImage.sh
[oracle@vm129-132 dockerfiles]$ vi 12.2.0.1/Dockerfile.ee
[oracle@vm129-132 dockerfiles]$ ll 12.2.0.1/*.zip
total 3372836
-rw-r--r--. 1 oracle oinstall 3453696911 May 12 09:26 linuxx64_12201_database.zip

[oracle@vm129-132 dockerfiles]$ ./buildDockerImage.sh -v 12.2.0.1 -e 
Checking if required packages are present and valid...
......

  Build completed in 906 seconds.
  
[oracle@vm129-132 dockerfiles]$

Here is the new list of images we have after building the Oracle Database image.

[oracle@vm129-132 dockerfiles]$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/database       12.2.0.1-ee         91aaea30a651        28 minutes ago      16.18 GB
oracle/serverjrevnc   8                   2d7be7d163dc        50 minutes ago      1.543 GB
oracle/serverjre      8                   c2c247029798        About an hour ago   490.5 MB
oraclelinux           7-slim              08a01cc7be97        5 weeks ago         114.4 MB

Having all the necessary images prepared, we can start building our ODI image. I have prepared the scripts to build the Docker image and published them at https://github.com/gotochkin/docker-images; you can either download or clone them with git. Let's have a look at the scripts and discuss what they do:

[oracle@vm129-132 ~]$ ls -l docker-images/ODI/dockerfiles/
total 8
drwxr-xr-x. 2 oracle oinstall 4096 Jul 27 14:18 12.2.1.2.6
-rwxr-xr-x. 1 oracle oinstall 3195 Jul 27 11:54 buildDockerImage.sh
[oracle@vm129-132 ~]$ ls -l docker-images/ODI/dockerfiles/12.2.1.2.6/
total 2390372
-rw-r--r--. 1 oracle oinstall        277 May  9 14:42 Checksum.standalone
-rw-r--r--. 1 oracle oinstall       8572 Jul 17 12:43 createOdiDomainForStandaloneAgent.py
-rw-r--r--. 1 oracle oinstall       4270 Jul 26 10:41 Dockerfile.standalone
-rw-r--r--. 1 oracle oinstall       1289 May  9 14:49 install.file
-rw-r--r--. 1 oracle oinstall       6477 Jul 14 13:03 oracledi.gz
-rw-r--r--. 1 oracle oinstall         55 May  8 13:57 oraInst.loc
-rw-rw-r--. 1 oracle oinstall       2695 May 15 15:08 rcuResponseFile.properties
-rw-r--r--. 1 oracle oinstall       7920 Jul 27 14:18 README.md
-rwxr-xr-x. 1 oracle oinstall       5332 Jul 14 15:51 runOracle.sh
-rwxr-xr-x. 1 oracle oinstall       4406 Jul 27 11:08 startAgent.sh
[oracle@vm129-132 ~]$ 

• buildDockerImage.sh – A script to build the ODI image. It takes a -v parameter for the ODI version (so far only 12.2.1.2.6) and -t to indicate that we are configuring the ODI agent in standalone mode.
• Checksum.standalone – Used to verify the checksums of the downloaded installation files, which you will need to put into the docker-images/ODI/dockerfiles/12.2.1.2.6 directory.
• createOdiDomainForStandaloneAgent.py – A Python script to create a domain for an ODI standalone agent. The original script was taken from the "Oracle tutorial" and slightly modified for our needs.
• Dockerfile.standalone – The instructions telling Docker how to build the image for the ODI standalone agent.
• install.file – A response file for the ODI silent installation.
• oracledi.gz – Gzipped files with ODI connection properties; they will be uncompressed into the $HOME/.odi directory.
• oraInst.loc – The configuration for the Oracle Inventory.
• rcuResponseFile.properties – A parameter file for the Repository Creation Utility.
• README.md – Instructions on how to build the image.
• runOracle.sh – A startup script for the Oracle database that is going to be used as the repository.
• startAgent.sh – A script to configure and start the ODI agent.

We need to download the ODI installation files fmw_12.2.1.2.6_odi_Disk1_1of2.zip and fmw_12.2.1.2.6_odi_Disk1_2of2.zip from the "Oracle OTN site" and put them into the docker-images/ODI/dockerfiles/12.2.1.2.6 folder.

At last, everything is ready and we can build the image using the buildDockerImage.sh script. Of course, you can build it without the script, since it is only a wrapper around the "docker build" command (the equivalent direct call is sketched below); the script just makes it a bit easier.
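
A sketch of that direct invocation (assuming the Dockerfile.standalone shipped in the 12.2.1.2.6 directory; the tag mirrors what the script produces):

cd docker-images/ODI/dockerfiles/12.2.1.2.6
docker build -t oracle/odi:12.2.1.2.6-standalone -f Dockerfile.standalone .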

[oracle@vm129-132 dockerfiles]$ ./buildDockerImage.sh -v 12.2.1.2.6 -t

..............

Successfully built 239bdf178fbe

  ODI Docker Image for 'standalone' version 12.2.1.2.6 is ready to be extended: 
    
    --> oracle/odi:12.2.1.2.6-standalone

  Build completed in 1032 seconds.

We can see the built image on the list:

[oracle@vm129-132 dockerfiles]$ docker images
REPOSITORY            TAG                     IMAGE ID            CREATED             SIZE
oracle/odi            12.2.1.2.6-standalone   239bdf178fbe        24 seconds ago      23.73 GB
oracle/database       12.2.0.1-ee             91aaea30a651        About an hour ago   16.18 GB
oracle/serverjrevnc   8                       2d7be7d163dc        About an hour ago   1.543 GB
oracle/serverjre      8                       c2c247029798        About an hour ago   490.5 MB
oraclelinux           7-slim                  08a01cc7be97        5 weeks ago         114.4 MB

We are ready to create our container with ODI. When we create the container, it will go through several steps, which can broadly be summarized as:
• Create a container with Oracle Linux 7, JDK and other supplemental packages.
• Create or start an Oracle database.
• Create an ODI repository, if it has not been created already.
• Configure an ODI agent in the repository, or adjust the hostname for the agent if it has already been configured in the repository.
• Create a WebLogic domain for the standalone ODI agent.
• Start the agent.

We have the option of creating a fresh repository database every time we deploy a new container, using the command:

[oracle@vm129-132 dockerfiles]$ docker run --name oditest -p 1521:1521 -p 5500:5500 -p 5901:5901 -p 5902:5902 --env ORACLE_BASE=/opt/oracle --env ORACLE_HOME=/opt/oracle/product/12.2.0.1/dbhome_1 oracle/odi:12.2.1.2.6-standalone

In this case the database is created inside the Docker file system (docker FS). This is convenient when you want a fresh repository every time, but it takes time to create a new database and, as a result, your deployment will be delayed.

Alternatively, we can define a volume for the database files outside of the Docker file system, in which case the database can be reused across containers, which can save some time during deployment.

[oracle@vm129-132 dockerfiles]$ docker run --name oditest -p 1521:1521 -p 5500:5500 -p 5901:5901 -p 5902:5902 -v /home/oracle/docker-images/OracleDatabase/oradata:/opt/oracle/oradata --env ORACLE_BASE=/opt/oracle --env ORACLE_HOME=/opt/oracle/product/12.2.0.1/dbhome_1 oracle/odi:12.2.1.2.6-standalone

Just be aware that if you want to use more than one container with the same Oracle database, the scripts should be adapted; in the current implementation a new deployment will try to use the same repository.
After executing the command you will see the creation log and, in the end, you get a container with a running standalone ODI agent.

[oracle@vm129-132 ~]$ docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                                                                        NAMES
71969452f5b3        oracle/odi:12.2.1.2.6-standalone   "/u01/app/oracle/star"   4 days ago          Up 2 days           0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp, 0.0.0.0:5901-5902->5901-5902/tcp, 5903/tcp   oditest
[oracle@vm129-132 ~]$ 

Inside the container you can start a VNC server and run ODI Studio.

[oracle@vm129-132 ~]$ docker exec -ti oditest bash
[oracle@71969452f5b3 Middleware]$ vncserver

You will require a password to access your desktops.

Password:
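
You can then point a VNC client at the port mapped earlier (5901) on the Docker host and launch ODI Studio inside that session. A minimal sketch; $MW_HOME is a placeholder for wherever the image installed the Middleware home, and the odi.sh location is an assumption based on a typical ODI 12c standalone layout:

vncviewer vm129-132:5901
# inside the VNC session:
$MW_HOME/odi/studio/odi.sh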

After starting ODI Studio you get the usual questions, such as whether you want to import any settings or whether to allow sending ODI Studio usage data to Oracle. You don't have anything to import from a previous installation since this is the first one, so you can simply say "no". The studio eventually comes up and you need to connect to the repository. When you push the button to connect to the repository, you will be asked if you want to store the credentials in a wallet.

You have an option to refuse and use the pre-created connection and saved credentials.

Ideally this makes usage a bit easier and more convenient. Of course, for any kind of production development it is strongly recommended to use a wallet and a proper password.
If you're making your first steps in Docker, I hope this article has been helpful. In my opinion, Docker can be extremely useful for test deployments. The persistent database files make the deployment easy and quick. I have some reservations about using Docker for any production deployments of Oracle databases, but that discussion deserves a dedicated post. Stay tuned.

Categories: DBA Blogs

Oracle Scheduler Fail To Send Email Notifications

Fri, 2017-07-21 13:32

In this blog post I would like to share an interesting issue we encountered a couple of months ago related to scheduler job email notifications. As some of you may know, starting with Oracle 11.2 you can subscribe to receive notification emails from a scheduler job. You can define an email to be sent on various job events (job_started, job_completed, job_failed, etc.). The job email notification is defined with the DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION procedure.

I am assuming you already have a configured and working SMTP server. If not, that can be done with DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE (attributes: email_server and email_sender).
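
For reference, a minimal SMTP setup might look like this (the server name and sender address below are placeholders, not values from this environment):

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_server', 'smtp.example.com:25');
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_sender', 'noreply@example.com');
END;
/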

The issue we encountered was on database version 12.1. After configuring the scheduler jobs and email notification lists, emails were not sent out.

This blog post should give you some guidance on how you can troubleshoot and properly define email job notifications.

The problem:

In our case, we used one “system” account to manage the job email notification subscription. Some of you might use the same approach, having a single and separate account used to manage job email notifications for all other accounts.

Let’s assume we have a job called JOB1 defined in schema IARSOV.

exec dbms_scheduler.create_job(job_name => 'JOB1', job_type => 'PLSQL_BLOCK', job_action => 'begin null; end;');

PL/SQL procedure successfully completed.

If we now try to add a job notification email for IARSOV.JOB1 with the SYSTEM user, we receive an error: "ORA-24093: AQ agent SCHED$_AGT2$_xx not granted privileges of database user".

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

BEGIN dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com'); END;

*
ERROR at line 1:
ORA-24093: AQ agent SCHED$_AGT2$_101 not granted privileges of database user
SYSTEM
ORA-06512: at "SYS.DBMS_ISCHED", line 7847
ORA-06512: at "SYS.DBMS_SCHEDULER", line 4063
ORA-06512: at line 1

As a workaround, we can grant the necessary privileges with the DBMS_AQADM.ENABLE_DB_ACCESS procedure, part of the package used for managing Oracle Database Advanced Queuing (AQ).

exec DBMS_AQADM.ENABLE_DB_ACCESS(agent_name  => 'SCHED$_AGT2$_101', db_username => 'SYSTEM');

PL/SQL procedure successfully completed.

We can confirm the granted privileges via the DBA_AQ_AGENT_PRIVS dictionary view (note the SYSTEM row in the output below):

set lines 120
col agent_name for a40
col db_username for a40

select * from dba_aq_agent_privs;

AGENT_NAME                     DB_USERNAME                    HTTP SMTP
------------------------------ ------------------------------ ---- ----
DB12C_3938_ORCL11G             DBSNMP                         NO   NO
SCHED$_AGT2$_101               IARSOV                         NO   NO
SCHED$_AGT2$_101               SYSTEM                         NO   NO
SCHEDULER$_EVENT_AGENT         SYS                            NO   NO
SCHEDULER$_REMDB_AGENT         SYS                            NO   NO
SERVER_ALERT                   SYS                            NO   NO
HAE_SUB                                                       NO   NO

7 rows selected.

Let's now try to define a job email notification for IARSOV.JOB1:

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

set pages 200
set lines 200
col owner for a40
col job_name for a40
col recipient for a20
select owner, job_name, recipient, event from dba_scheduler_notifications where job_name = 'JOB1';

OWNER                          JOB_NAME                       RECIPIENT            EVENT
------------------------------ ------------------------------ -------------------- -------------------
IARSOV                         JOB1                           arsov@pythian.com    JOB_FAILED
IARSOV                         JOB1                           arsov@pythian.com    JOB_BROKEN
IARSOV                         JOB1                           arsov@pythian.com    JOB_SCH_LIM_REACHED
IARSOV                         JOB1                           arsov@pythian.com    JOB_CHAIN_STALLED
IARSOV                         JOB1                           arsov@pythian.com    JOB_OVER_MAX_DUR

The notification was successfully defined; however, upon testing, the emails were not sent. In our case, the events were ending up in the exception queue AQ$_SCHEDULER$_EVENT_QTAB_E, and there was not much information we could derive from the AQ$ scheduler-related tables.

Troubleshooting:

The DBA_SUBSCR_REGISTRATIONS view contains the mapping definitions for each schema, showing which event_queue:consumer_group it is subscribed to. If we check the subscription definition for the IARSOV user, we can see the event_queue:consumer_group is linked to USER# 5, which is the SYSTEM user. In other words, IARSOV's AQ agent SCHED$_AGT2$_101 is linked to the wrong user.

What we’re interested in is the association between SUBSCRIPTION_NAME and USER# columns.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                                                     USER# STATUS
-------------------------------------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"                                         5 DB REG


SQL> select username, user_id from dba_users where user_id = 5;

USERNAME                          USER_ID
------------------------------ ----------
SYSTEM                                  5

In this case the emails won't be sent out because the subscription registration is not properly initialized (linked) with the correct user (schema). In order for the notifications to work we need a proper link between the agent and the agent's owner; here, "SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101" and the IARSOV schema should be linked. Notice that the user's ID is also part of the agent name.

What we now need to do is drop all job email notifications (in this case only one) for IARSOV's jobs. When the last job email notification is dropped, the subscription registration is removed from DBA_SUBSCR_REGISTRATIONS.
However, note that you have to drop the job email notifications as the user under which the subscription registration is defined, in this case the SYSTEM user.

Hint: If you don't know the password for the schema you need to connect to, you can use the Proxy Authenticated Connection feature, as documented in the blog article The power of the Oracle Database "proxy authenticated" connections.

SQL> show user;

USER is "SYSTEM"

SQL>
SQL> exec dbms_scheduler.remove_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

no rows selected

Once we clear the subscription, we can initialize the link correctly by adding the first job notification as the job's schema owner. This links the event_queue:consumer_group to the correct user. After that, we can add further job notifications from other users, as long as the appropriate privileges have been granted.

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG

SQL>

--as IARSOV user.

SQL> show user;
USER is "IARSOV"
SQL>
SQL> exec DBMS_SCHEDULER.add_job_email_notification (job_name => 'IARSOV.JOB1', recipients => 'arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL>
SQL>

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"         101 DB REG

SQL>
SQL>
SQL> select username from dba_users where user_id = 101;

USERNAME
----------------
IARSOV

SQL>

Conclusion:

If you decide to use scheduler job email notifications, and you also prefer that notification management is done by a single user (such as "system"), I would advise you to create a "dummy" job notification (as the job owner's schema) as soon as you create the first scheduler job. This will link the event_queue:consumer_group to the proper user. Afterwards, once you define the rest of the scheduler job notifications (under the common "system" user), you can clean up that initial "dummy" job notification. A sketch of this workflow follows.
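
A rough sketch of that workflow (the job name and addresses are only illustrative):

-- as the job owner, right after creating the job: initializes the AQ agent link
exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','dummy@example.com');

-- as the common "system" user (once granted access to the owner's AQ agent)
exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

-- as the job owner, clean up the placeholder entry
exec dbms_scheduler.remove_job_email_notification('IARSOV.JOB1','dummy@example.com');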

This behavior (bug) is fixed in 12.2, where notifications always go through the AQ agent of the user that defines them.

Categories: DBA Blogs

Log Buffer #517: A Carnival of the Vanities for DBAs

Fri, 2017-07-21 11:47

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

Protecting Financial Data with Oracle WebCenter and Adobe LiveCycle

Oracle Forms 12c oracle.security.jps.JpsException Error after Database change

The Future of Content Management: Oracle Content & Experience Cloud

Today Oracle released a very large "monster" update. This July 2017 update includes, for the first time, the new RU ("Release Update") and RUR ("Release Update Revision") patches.

Cloud Ward: Who Will Win the Battle for IT’s Future?

SQL Server:

SQL Server Management Studio add-ins

Resolve Network Binding Order Warning in failover cluster

Queries to inventory your SQL Server Agent Jobs

SQL Server 2016 ColumnStore Index String Predicate Pushdown

The Fast Route from Raw Data to Analysis and Display

MySQL:

Group-Replication, sweet & sour

You QA Most of Your System — What About Your Database?

Multi-Threaded Slave Statistics

Protecting your data! Fail-safe enhancements to Group Replication.

InnoDB Basics – Compaction: when and when not

Categories: DBA Blogs

Log Buffer #516: A Carnival of the Vanities for DBAs

Wed, 2017-07-12 11:02

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

12.2 New Feature: the FLEX ASM disk group part 2

Oracle ASM in Azure corruption – follow up

Set-based processing

ADF 12c BC Proxy User DB Connection and Save Point Error

Enabling A Modern Analytics Platform

SQL Server:

Batch SSIS pkg execution from Business Intelligence Development Studio

Find Database Connection Leaks in Your Application

Troubleshooting CPU Performance on VMware

SQLskills Wait Types Library now shows SentryOne data

PowerShell Tool Time: The Tool Framework

MySQL:

Installing Zabbix into Azure using a MySQL PaaS

Streaming Global Cyber Attack Analytics with Tableau and Python

Thread Statistics and High Memory Usage

On slave_parallel_workers and the logical clock

RDS / Aurora OS monitoring with Monyog v8.1.0

Categories: DBA Blogs

Log Buffer #515: A Carnival of the Vanities for DBAs

Mon, 2017-06-26 09:15

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

You may wish to clone an Oracle Home (for example, you have all your databases on a single Oracle Home, but you want to separate Development from Test).

Removing Outliers using stddev()

Installing Scala and Apache Spark on a Mac

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part V) – Pig

More on Optimistic Locking with ORA_ROWSCN

SQL Server:

How to Decipher sysschedules

SQL Azure Reporting Automation

Database Code Analysis

Visual Studio 2017 and Swagger: Building and Documenting Web APIs

Techniques to Cleanse Bad Data in SQL Server

MySQL:

ClickHouse in a General Analytical Workload (Based on a Star Schema Benchmark)

Debian 9 and MySQL: Watch Out!

How to Install LAMP on Ubuntu 16.04

Linux perf and the CPU regression in MySQL 5.7

Secure Binlog Server: Encrypted binary Logs and SSL Communication

Categories: DBA Blogs

EM13cR2 Installation Failing on BI Publisher Configuration

Wed, 2017-06-21 16:07

This is going to be a short post, as there wasn't much information on this type of failure when I searched for it. Also, the IPs have been changed for security reasons.

I was working on an Oracle EM13cR2 installation and when the installation reached 78%, it failed on the BI Publisher configuration. First, I looked at the CfmLogger log.

INFO: oracle.sysman.top.oms:WLST offline debugging is in the file: /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log.wlst_20170613152122.trc
INFO: oracle.sysman.top.oms:Securing BI Publisher to work with Enterprise Manager
INFO: oracle.sysman.top.oms:Locking Enterprise Manager ...
INFO: oracle.sysman.top.oms:OMS Console is locked. Access the console over HTTPS ports.
INFO: oracle.sysman.top.oms:BI Publisher is locked. Access BI Publisher over HTTPS ports.
INFO: oracle.sysman.top.oms:Restart OMS.
INFO: oracle.sysman.top.oms:Restarting Enterprise Manager
INFO: oracle.sysman.top.oms:Stopping Enterprise Manager, this can take some time ...
INFO: oracle.sysman.top.oms:Starting BI Publisher ...
INFO: oracle.sysman.top.oms:Failed to start Enterprise Manager. Diagnostic code 1.
INFO: oracle.sysman.top.oms:See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms:Error extending domain
INFO: oracle.sysman.top.oms:See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms:java.lang.Exception: See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170613152059.log for details.
INFO: oracle.sysman.top.oms: at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.extendDomainWithBIP(BIPCA.java:3401)
INFO: oracle.sysman.top.oms: at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.main(BIPCA.java:1973)

This led me to the bipca.log, which referenced the details below (and really didn't tell me much):

[2017-05-09T21:55:17.585-07:00] [sysman] [ERROR] [] [oracle.sysman.bipca] [host: OracleEntMgr] [nwaddr: 192.168.51.10] [tid: 1] [userId: oracle] [ecid: a4412d2a-def5-49d8-ac27-1046c7243de9-00000001,0] [SRC_CLASS: oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA] [SRC_METHOD: handleFatalError] Error extending domain
[2017-05-09T21:55:17.586-07:00] [sysman] [ERROR] [] [oracle.sysman.bipca] [host: OracleEntMgr] [nwaddr: 192.168.51.10] [tid: 1] [userId: oracle] [ecid: a4412d2a-def5-49d8-ac27-1046c7243de9-00000001,0] [SRC_CLASS: oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA] [SRC_METHOD: handleFatalError] Fatal error:[[
java.lang.Exception: See log at /u01/app/oracle/middleware/cfgtoollogs/bip/bipca_20170509214828.log for details.
at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.extendDomainWithBIP(BIPCA.java:3401)
at oracle.sysman.sdkImpl.core.ip.model.bipca.BIPCA.main(BIPCA.java:1973)

A review of the bipca trace file didn’t tell me much about why the configuration was failing:

2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Selected DB vendor: Oracle
2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - adding normal datasource: emgc-mgmt_view-pool
2017-05-24 09:26:51,250 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - datasource: emgc-mgmt_view-pool component name: null
2017-05-24 09:26:51,251 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Found normal datasource: emgc-sysman-pool
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Decomposing url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SID=emrep))) for driver: oracle.jdbc.OracleDriver
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - There is an unsupported jdbc URL or JDBC driver is detected. See Stack trace:
java.lang.NullPointerException
at com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper.decomposeURL(DatasourceXBeanAspectHelper.java:316)
at java.lang.reflect.Method.invoke(Method.java:606)
at weblogic.WLST.main(WLST.java:29)
2017-05-24 09:26:51,253 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Found normal datasource: jvmd-ds
2017-05-24 09:26:51,255 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - Decomposing url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SID=emrep))) for driver: oracle.jdbc.OracleDriver
2017-05-24 09:26:51,255 FINE [1] com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper - There is an unsupported jdbc URL or JDBC driver is detected. See Stack trace:
java.lang.NullPointerException
at com.oracle.cie.domain.jdbc.DatasourceXBeanAspectHelper.decomposeURL(DatasourceXBeanAspectHelper.java:316)
at org.python.pycode._pyx95.updateDomain$21(/tmp/WLSTOfflineIni822569357732826272.py:103)
at org.python.pycode._pyx95.call_function(/tmp/WLSTOfflineIni822569357732826272.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyFunction.__call__(Unknown Source)

After reading for a while and working back and forth with Oracle, we identified the problem: the host wasn't resolving via nslookup. We were able to ping it, but nslookup queries a DNS name server to look up the IP address, and that lookup was failing, so what we had to do was fix the name resolution so that the host resolved correctly:

[oracle@dbhost ~]# nslookup dbhost
Server: 128.101.1.256
Address: 128.101.1.256#53

Name: dbhost
Address: 128.101.1.1

Once we fixed this, we were able to install EM13cR2 successfully.

Note: This was originally published on rene-ace.com

Categories: DBA Blogs

Log Buffer #514: A Carnival of the Vanities for DBAs

Fri, 2017-06-16 13:19

This Log Buffer Edition encompasses a few of the nifty blog posts from Oracle, SQL Server and MySQL.

Oracle:

Introduction to Oracle Big Data Cloud Service

Interpreting wpad.dat using jrunscript from JDK 8 to work out the right proxy.

Nice Trick to Get ADF LOV Description Text

Presentation: Dynamic Actions, Javascript, & CSS for APEX Beginners

E4 Session Developing Agnostic Data Services

SQL Server:

Setting up R in SQL 2016 – Avoiding Pitfalls

Setting Variables in Calling T-SQL Code While Using sp_executesql

To Fly, To Serve, To Fry Your Servers

Working with Windows Containers and Docker: Into your Stride

What is normal? Finding outliers with R

Two Approaches to Addressing Circular References in SSDT

MySQL:

Howto make MySQL point-in-time recovery faster ?

Three Methods of Installing Percona Monitoring and Management

Watch the tutorial: backup best practices for MySQL, MariaDB and Galera Cluster

MySQL 8.0: Kana-sensitive collation for Japanese

MySQL Document Store: Getting Started

Categories: DBA Blogs

Log Buffer #513: A Carnival of the Vanities for DBAs

Mon, 2017-06-12 08:57

This Log Buffer Edition includes blog posts from Oracle, SQL Server and MySQL.

Oracle:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

12c How to UN Expire Password for CDB Users

rman auxiliary (for duplicate)

How long will Oracle APEX remain an included feature of the Oracle Database?

How to easily delete files in the Oracle Cloud using CloudBerry Explorer

SQL Server:

Understanding OPENJSON and FOR JSON in Azure and SQL 2016

How to setup Machine Learning Services in SQL Server 2017 CTP2

The OUTPUT Clause for the MERGE Statements

Simple script to backup all SQL Server databases

Working with Windows Containers and Docker: Into your Stride

MySQL:

MySQL on Docker: Running Galera Cluster on Kubernetes

Summary of recent performance tests for MySQL 5.6, 5.7 and 8

A Quick Look at Parallel Rsync and How it Can Save a System Hours

Docker, MySQL and Experience in Containers

HopsFS running on top of MySQL Cluster 7.5 wins IEEE Scale Challenge 2017

Categories: DBA Blogs

Log Buffer #512: A Carnival of the Vanities for DBAs

Mon, 2017-06-05 09:50

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

Upgrade Existing TDE to Use New Unified Key Management in 12c Upgraded Database (non-cdb)

Instrumentation … not just for debugging

12.2 Index Deferred Invalidation (Atomica)

Collation in 12cR2 – when AA equals Å (or not?)

Configuring Oracle Traffic Director 12c with WebGate

SQL Server:

PowerShell Scripting to Replicate Cloud & SaaS Data to SQL Server

Comparing and Synchronizing Two Folders with Azure

How to Handle Hybrid Cloud Database Security with SQL Secure

Use T-SQL to Find Folders Using the Most Disk Space

Generate charts rapidly with SQL Server using R and T-SQL

MySQL:

Hybrid Data Encryption by Example using MySQL Enterprise Edition

Announcing Scotch Box 3.0 and Scotch Box Pro

Testing MySQL 8.0 – let me create a ton of undo files

MariaDB 10.1.24 and Connector/C 2.3.3 now available

Migration from MySQL Master-Slave pair to MySQL InnoDB Cluster: howto

Categories: DBA Blogs

Upgrade Existing TDE to Use New Unified Key Management in 12c Upgraded Database (non-cdb)

Tue, 2017-05-30 10:04

I'm really excited to be writing this post and I hope it serves as helpful content. When reviewing the new unified key management in RDBMS 12c, I noticed that the old commands used to manage the TDE keys, like ALTER SYSTEM, are still supported. As proof, I didn't see any TDE-related prerequisites in the following upgrade-related MOS note.

Complete Checklist for Manual Upgrades to Oracle Database 12c Release 1 (12.1) (Doc ID 1503653.1)

I had a test database running on RDBMS 11.2.0.4 with TDE configured (segments located in a dedicated tablespace created with the encryption clause). I upgraded this database to version 12.1.0.2 without any issues using the above note. No issues were found accessing the encrypted segments using the old keys, with the autologin wallet located in "/etc/oracle/WALLETS/Test/" on the server.
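
(For context, the wallet location in a setup like this is normally pointed to by an sqlnet.ora entry along these lines; the post doesn't show its sqlnet.ora, so treat this purely as an illustration.)

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /etc/oracle/WALLETS/Test)))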

I wanted to use the new unified key management on this upgraded database. I checked MOS and, surprisingly, found no direct notes covering this. After doing multiple tests, I was able to get the following method working.

1. First, I copied my existing wallet files into the "/home/oracle/keys" location as a backup (see the sketch after the query below) and confirmed my existing TDE keys worked fine.

SQL> select count(1) from test.tstbl;

  COUNT(1)
----------
    261440
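
The backup copy from step 1 can be as simple as the following (paths as used throughout this post; at a minimum the ewallet.p12 must be preserved for the merge later on):

mkdir -p /home/oracle/keys
cp -p /etc/oracle/WALLETS/Test/ewallet.p12 /home/oracle/keys/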

2. I brought down my database and removed the keys from the "/etc/oracle/WALLETS/Test/" location. I then started the database instance and created the dummy keystore using the new syntax.

SQL> startup
ORACLE instance started.

Total System Global Area 524288000 bytes
Fixed Size 2926320 bytes
Variable Size 197134608 bytes
Database Buffers 318767104 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
NOT_AVAILABLE                  UNKNOWN              SINGLE    UNDEFINED
         0
SQL> select key_id,activation_time from v$encryption_keys;

no rows selected

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/oracle/WALLETS/Test' IDENTIFIED BY "<keystore_password>";

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "<keystore_password>";

keystore altered.

SQL> select key_id,activation_time from v$encryption_keys;

no rows selected

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
OPEN_NO_MASTER_KEY             PASSWORD             SINGLE    UNDEFINED
         0

3. Next, I imported the keys from the files copied earlier, as per MOS note 2254960.1, but I still saw the same "OPEN_NO_MASTER_KEY" status in the data dictionary.

How to Merge a TDE Wallet From 11gR2 Into a New 12c Database Keystore? (Doc ID 2254960.1)

SQL> ADMINISTER KEY MANAGEMENT MERGE KEYSTORE '/home/oracle/keys' IDENTIFIED BY "<11g_wallet_password>" INTO EXISTING KEYSTORE '/etc/oracle/WALLETS/Test' IDENTIFIED BY "<keystore_password>" WITH BACKUP;

keystore altered.

SQL> !ls -ltr /etc/oracle/WALLETS/Test
total 16
-rw-r--r-- 1 oracle oinstall 3112 May 4 23:16 ewallet.p12
-rw-r--r-- 1 oracle oinstall 2408 May 4 23:16 ewallet_2017050503161274.p12

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
OPEN_NO_MASTER_KEY             PASSWORD             SINGLE    UNDEFINED
         0
SQL> select key_id,activation_time from v$encryption_keys;

no rows selected

4. I restarted my database instance and saw CLOSED status, which is expected as autologin was not enabled yet.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 524288000 bytes
Fixed Size 2926320 bytes
Variable Size 197134608 bytes
Database Buffers 318767104 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> select key_id,activation_time from v$encryption_keys;

no rows selected

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
CLOSED                         UNKNOWN              SINGLE    UNDEFINED
         0
SQL> !ls -ltr /etc/oracle/WALLETS/Test/
total 16
-rw-r--r-- 1 oracle oinstall 3112 May 4 23:16 ewallet.p12
-rw-r--r-- 1 oracle oinstall 2408 May 4 23:16 ewallet_2017050503161274.p12

SQL> select count(1) from test.tstbl;
select count(1) from test.tstbl
*
ERROR at line 1:
ORA-28365: wallet is not open

5. The only option available to open the wallet was using the old syntax.

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "<11g_wallet_password>";

System altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
OPEN                           PASSWORD             SINGLE    NO
         0
SQL> select count(1) from test.tstbl;

  COUNT(1)
----------
    261440

6. I configured the AUTOLOGIN feature using the new syntax and restarted the instance several times; no further issues were reported with key availability.

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/etc/oracle/WALLETS/Test' IDENTIFIED BY "<keystore_password>";

keystore altered.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 524288000 bytes
Fixed Size 2926320 bytes
Variable Size 197134608 bytes
Database Buffers 318767104 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/etc/oracle/WALLETS/Test/
OPEN                           AUTOLOGIN            SINGLE    NO
         0
SQL> select count(1) from test.tstbl;

  COUNT(1)
----------
    261440

SQL> !ls -ltr /etc/oracle/WALLETS/Test
total 24
-rw-r--r-- 1 oracle oinstall 3112 May 4 23:16 ewallet.p12
-rw-r--r-- 1 oracle oinstall 2408 May 4 23:16 ewallet_2017050503161274.p12
-rw-r--r-- 1 oracle oinstall 3157 May 4 23:21 cwallet.sso

Verification:
------------------
SQL> select key_id,activation_time from v$encryption_keys;

KEY_ID
------------------------------------------------------------------------------
ACTIVATION_TIME
---------------------------------------------------------------------------
AQ**************************************AA

SQL> select name,utl_raw.cast_to_varchar2( utl_encode.base64_encode('01'||substr(mkeyid,1,4))) || utl_raw.cast_to_varchar2( utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64 FROM (select t.name, RAWTOHEX(x.mkid) mkeyid from v$tablespace t, x$kcbtek x where t.ts#=x.ts#);

NAME                           MASTERKEYID_BASE64
------------------------------ ------------------------------------------------------------
TDE_TBS                        AQ*********************

Keys generated under an older version of the Oracle RDBMS can be made active using the new TDE unified key management interface in RDBMS 12c.

Categories: DBA Blogs

Automating Password Rotation for Oracle Databases

Fri, 2017-05-26 14:03

Password rotation is not the most exciting task in the world, and that's exactly why it's a perfect candidate for automation. Automating routine tasks like this is good for everyone – DBAs can work on something more exciting, companies save costs as less time is spent on changing the passwords, and there's no room for human error, either. At Pythian, we typically use Ansible for task automation, and I like it mainly because of its non-intrusive configuration (no agents need to be installed on the target servers) and its scalability (tasks are executed in parallel on the target servers). This post will briefly describe how I automated password rotation for Oracle database users using Ansible.

Overview

This blog post is not an intro to Ansible and how to use it; rather, it's an example of how a simple task can be automated with Ansible in a way that's scalable, flexible and easily reusable, and that also allows other tasks to pick up the new passwords from a secure password store.

  • Scalability – I'd like to take advantage of Ansible's ability to execute tasks on multiple servers at the same time. For example, in a large environment of tens or hundreds of machines, a solution that executes the password change tasks serially would not be suitable. Below is an example of such a "serial" task (not a real playbook, just an illustration): it "hardcodes" a few attributes (the environment file, the username and the hostname), so a separate task would be required for every user/database you'd want to change the password for:
    - hosts: ora-serv01
      remote_user: oracle
      tasks:
      - name: change password for SYS
        shell: | 
          . TEST1.env && \
          sqlplus / as sysdba @change_pasword.sql SYS \
          \"{{lookup('password','/dev/null length=8')}}\"
    
  • Flexible – I want to be able to adjust the list of users whose passwords are changed, and the list of servers/databases they are changed on, in a simple way that doesn't involve changing the main task list.
  • Reusable – this goes hand in hand with flexibility. The idea is that the playbook should be generic enough that it wouldn't require any changes when it's implemented in a completely separate environment (e.g. for another client of Pythian).
  • Secure password store – the new passwords are generated by the automated password rotation tool, so a method of storing passwords securely is required, allowing the new passwords to be picked up by the DBAs, the application owners or the next automated task that reconfigures the application.
The Implementation

Prerequisites

I chose to do the implementation using Ansible 2.3, because it introduces the passwordstore lookup, which enables interaction with the pass utility (read more about it at Passwordstore.org). pass is very cool. It stores passwords in gpg-encrypted files, and it can also be configured to automatically push changes to a git repository, which relieves us of the headache of password distribution: the passwords can be retrieved from git on the servers that need access to them.
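
If you haven't used pass before, initializing a store that syncs to git only takes a few commands (a sketch; the GPG key ID and the git remote below are placeholders):

pass init "ansible@example.com"      # GPG key that will encrypt the password files
pass git init                        # turn the store into a git repository
pass git remote add origin git@gitserver:passwords.git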

Ansible 2.3 runs on Python 2.6; unfortunately, the passwordstore lookup requires Python 2.7, which can be an issue if the control host for Ansible runs Oracle Linux 6 or RHEL 6, as they don't provide Python 2.7 in the official yum repositories. Still, there are ways of getting it done, and I'll write another blog post about it.

So, what we’ll need is:

  • Ansible 2.3
  • jmespath plugin on Ansible control host (pip install jmespath)
  • jinja2 plugin on Ansible control host (I had to update it using pip install -U jinja2 in few cases)
  • Python 2.7 (or Python 3.5)
  • pass utility
The Playbook

This is the whole list of files that are included in the playbook:

./chpwd.yml
./inventory/hosts
./inventory/orcl1-vagrant-private_key
./inventory/orcl2-vagrant-private_key
./roles/db_users/files/change_password.sql
./roles/db_users/files/exists_user.sql
./roles/db_users/defaults/main.yml
./roles/db_users/tasks/main.yml

Let’s take a quick look at all of them:

  • ./chpwd.yml – is the playbook and (in this case) it’s extremely simple as I want to run the password change against all defined hosts:
    $ cat ./chpwd.yml
    ---
    
      - name: password change automation
        hosts: all
        roles:
          - db_users
    
  • ./inventory/hosts, ./inventory/orcl1-vagrant-private_key, ./inventory/orcl2-vagrant-private_key – these files define the hosts and the connectivity. In this case we have 2 hosts – orcl1 and orcl2 – and we'll connect as the vagrant user using the private keys.
    $ cat ./inventory/hosts
    [orahosts]
    orcl1 ansible_host=127.0.0.1 ansible_port=2201 ansible_ssh_private_key_file=inventory/orcl1-vagrant-private_key ansible_user=vagrant
    orcl2 ansible_host=127.0.0.1 ansible_port=2202 ansible_ssh_private_key_file=inventory/orcl2-vagrant-private_key ansible_user=vagrant
  • ./roles/db_users/files/change_password.sql – A sql script that I'll execute on the database to change the passwords. It takes 2 parameters, the username and the password:
    $ cat ./roles/db_users/files/change_password.sql
    set ver off pages 0
    alter user &1 identified by "&2";
    exit;
  • ./roles/db_users/files/exists_user.sql – A sql script that allows verifying the existence of the users. It takes 1 argument – the username. It outputs "User exists." when the user is there, and "User {username} does not exist." when it's not.
    $ cat ./roles/db_users/files/exists_user.sql
    set ver off pages 0
    select 'User exists.' from all_users where username=upper('&1')
    union all
    select 'User '||upper('&1')||' does not exist.' from (select upper('&1') from dual minus select username from all_users);
    exit;
  • ./roles/db_users/defaults/main.yml – is the default file for the db_users role. I use this file to define the users for each host and database for which the passwords need to be changed:
    $ cat ./roles/db_users/defaults/main.yml
    ---
    
      db_users:
        - name: TEST1
          host: orcl1
          env: ". ~/.bash_profile && . ~/TEST1.env > /dev/null"
          pwdstore: "orcl1/TEST1/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp
            - system
        - name: TEST2
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST2.env > /dev/null"
          pwdstore: "orcl2/TEST2/"
          os_user: oracle
          become_os_user: yes
          users:
            - sys
            - system
            - ctxsys
        - name: TEST3
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST3.env > /dev/null"
          pwdstore: "orcl2/TEST3/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp

    In this data structure, we define everything that’s needed to be known to connect to the database and change the passwords. each entry to the list contains the following data:

    • name – just a descriptive name of the entry in this list, normally it would be the name of the database that’s described below.
    • host – the host on which the database resides. It should match one of the hosts defined in ./inventory/hosts.
    • env – how to set the correct environment to be able to connect to the DB (currently it requires sysdba connectivity).
    • pwdstore – the path to the folder in the passwordstore where the new passwords will be stored.
    • os_user and become_os_user – these are used in case sudo to another user on the target host is required. In a typical configuration, I connect to the target host using a dedicated user for Ansible, and then sudo to the DB owner. If Ansible connects to the DB owner directly, then become_os_user should be set to "no".
    • users – this is the list of all users for which the passwords need to be changed.

    As you see, this structure greatly enhances flexibility and reusability, because adding new databases, hosts or users comes down to a simple change to the "db_users:" structure in this defaults file. In this example, the dbsnmp and system passwords are rotated for TEST1@orcl1; the sys, system and ctxsys passwords are rotated for TEST2@orcl2; and dbsnmp is rotated on TEST3@orcl2.

  • ./roles/db_users/tasks/main.yml – this is the task file of the db_users role. The soul of the playbook and the main part that does the password change depending on the contents in the defaults file described above. Instead of pasting the whole at once, I’ll break it up task by task, and will provide some comments about what’s being done.
    • populate host_db_users – This task simply filters the whole db_users data structure defined in the defaults file and creates a host_db_users fact with only the DBs that belong to the host the task is currently running on. The list could also be filtered with the Ansible "when" conditional, but in that case a lot of "skipped" entries are displayed when the task is executed, so I prefer filtering the list before it's even passed to the Ansible task.
      ---
      
        - name: populate host_db_users
          set_fact: host_db_users="{{ db_users | selectattr('host','equalto',ansible_hostname) | list }}"
      
    • create directory for target on db hosts – for each unique combination of os_user and become_os_user on the target host, an "ansible" directory is created. A json_query is used here to filter out just the os_user and become_os_user attributes that are needed. It would also work with with_items: "{{ host_db_users }}", but in that case the outputs become cluttered, as all the attributes are displayed during the execution.
        - name: create directory for target on db hosts
          file:
            path: "ansible"
            state: directory
          become_user: "{{ item.os_user }}"
          become: "{{ item.become_os_user }}"
          with_items: "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
      
    • copy sql scripts to db_hosts – the missing scripts are copied from the Ansible control host to the target "ansible" directories. "with_nested" is the method used to create a nested loop in Ansible.
        - name: copy sql scripts to db_hosts
          copy:
            src="{{ item[1] }}"
            dest=ansible/
            mode=0644
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_nested:
            - "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
            - ['files/change_password.sql','files/exists_user.sql']
      
    • verify user existence – I'm using a shell module to execute the sql script after setting the environment. The outputs are collected in the "exists_output" variable. This task will not fail and will not show as "changed", because failed_when and changed_when are set to "false".
        - name: verify user existence
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/exists_user.sql {{ item[1] }}
          register: exists_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, become_os_user: become_os_user }') }}"
            - users
          failed_when: false
          changed_when: false
      
    • User existence results – this task fails when any of the users doesn't exist, and displays which user it was. This is done in a separate task to produce cleaner output; if you don't want the play to fail when some users don't exist (and instead continue changing passwords for the existing users), this task can simply be commented out, or the "failed_when: false" line can be uncommented.
        - name: User existence results
          fail: msg="{{ item }}"
          with_items: "{{ exists_output.results|rejectattr('stdout','equalto','User exists.')|map(attribute='stdout')|list }}"
          #failed_when: false
      
    • generate and change the user passwords – finally, this is the task that actually changes the passwords. A successful password change is detected by checking the output from the sql script, which should produce "User altered." The rather complex use of lookups is there for a reason: the passwordstore lookup can also generate passwords, but it doesn't allow defining the character classes that the new password should contain, while the "password" lookup does. Additionally, the 1st character is generated from "ascii_letters" only, as there are usually some applications that "don't like" passwords starting with numbers (this is why generating the 1st character is separated from the remaining 11 characters). And lastly, the "passwordstore" lookup is used with the "userpass=" parameter to pass and store the generated password into the passwordstore (which also keeps the previous passwords). This part could use some improvement, as in some cases different rules for the generated password complexity may be required. The password change outputs are recorded in "change_output", which is checked in the last task.
        - name: generate and change the user passwords
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/change_password.sql \
             {{ item[1] }} \"{{ lookup('passwordstore',item[0].pwdstore + item[1] + ' create=true overwrite=true userpass=' +
                                       lookup('password','/dev/null chars=ascii_letters length=1') +
                                       lookup('password','/dev/null chars=ascii_letters,digits,hexdigits length=11')) }}\"
          register: change_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, pwdstore: pwdstore, become_os_user: become_os_user}') }}"
            - users
          failed_when: false
          changed_when: "'User altered.' in change_output.stdout"
      
    • Password change errors – The “change_output” data are verified here, and failed password changes are reported.
         # fail if the password change failed.
        - name: Password change errors
          fail: msg="{{ item }}"
          with_items: "{{ change_output.results|rejectattr('stdout','equalto','\nUser altered.')|map(attribute='stdout')|list }}"
      
It really works!

Now that you know how it's built, it's time to show how it works!
Please pay attention to the following:

  • The password store is empty at first
  • The whole password change playbook completes in 12 seconds
  • The tasks on both hosts are executed in parallel (see the order of execution feedback for each task)
  • The passwordstore contains the password entries after the playbook completes, and they can be retrieved by using the pass command
$ pass
Password Store

$ time ansible-playbook -i inventory/hosts chpwd.yml

PLAY [pasword change automation] *******************************************************

TASK [Gathering Facts] *****************************************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : populate host_db_users] ***********************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : create directory for target on db hosts] ******************************
changed: [orcl1] => (item={'become_os_user': True, 'os_user': u'oracle'})
changed: [orcl2] => (item={'become_os_user': True, 'os_user': u'oracle'})

TASK [db_users : copy sql scripts to db_hosts] *****************************************
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])

TASK [db_users : verify user existance] ************************************************
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : User existance results] ***********************************************

TASK [db_users : generate and change the user passwords] *******************************
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST3/', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : Password change errors] ***********************************************

PLAY RECAP *****************************************************************************
orcl1                      : ok=6    changed=3    unreachable=0    failed=0
orcl2                      : ok=6    changed=3    unreachable=0    failed=0

real    0m12.418s
user    0m8.590s
sys     0m3.900s

$ pass
Password Store
|-- orcl1
|   |-- TEST1
|       |-- dbsnmp
|       |-- system
|-- orcl2
    |-- TEST2
    |   |-- ctxsys
    |   |-- sys
    |   |-- system
    |-- TEST3
        |-- dbsnmp

$ pass orcl1/TEST1/system
HDecEbjc6xoO
lookup_pass: First generated by ansible on 26/05/2017 14:28:50
Conclusions

For the past 2 months I've been learning Ansible and trying it out on various DBA tasks. It hasn't always been a smooth ride, as I had to learn quite a lot; I hadn't been exposed much to beasts like jinja2, json_query, YAML, python (very handy for troubleshooting) or Ansible itself before. I feel that my former PL/SQL coder's experience had created some expectations of Ansible that turned out not to be true. The biggest challenges for me were getting used to the linear execution of the playbook (whereas with PL/SQL I can call packages, functions, etc. to process the data "outside" the main linear code line), and the lack of execution feedback: one has to learn to create Ansible tasks in a way that they either succeed or fail (no middle states like 'this is a special case – process it differently'), and the amount of visual output is close to none – which does make sense to some degree, it's "automation" after all, right? Nobody should be watching :)
A separate struggle for me was working with the complex data structure that I created for storing the host/database/user information. It’s a mix of yaml “dictionary” and “list”, and it turned out to be difficult to process it in a way I wanted – this is why I used the json_query at times (although not in a very complex way in this case). There are probably simpler ways I didn’t know of (didn’t manage finding), and I’d be glad if you’d let me know of possible improvements or even other approaches to such tasks that you have worked on and implemented.
Despite all the complaining above, I think it's really worth investing time in automating tasks like this: it really works, and once done it doesn't require much attention. Happy Automating!

Categories: DBA Blogs

Log Buffer #511: A Carnival of the Vanities for DBAs

Tue, 2017-05-23 09:50

This Log Buffer Edition covers Oracle, SQL Server, MySQL.

Oracle:

A Sneak Peek at Oracle’s Chatbot Cloud Service and 5 Key Factors Necessary for Bot ROI

Oracle JET Hybrid – NavDrawer Template Menu/Header Structure

Oracle Enterprise Linux 6.6 AMI Available on AWS

Datascape Podcast Episode 9 – What’s Up with Oracle These Days?

Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises

SQL Server:

Fixing an SSRS Password Error while Changing Credentials

Azure DWH Part 8: Accessing Azure SQL Data Warehouse with C#

Personal Data, Privacy, and the GDPR

Performance Myths : Truncate Can’t Be Rolled Back

Troubleshooting CPU Performance on VMware

MySQL:

MySQL Shell: eye candy for a future release !

MySQL 8.0: It’s doxygen time for MTR

How to login in MariaDB with OS user without password

MySQL Enterprise backup : How to setup a slave for a standalone server with zero downtime?

Command Line Aficionados: Introducing s9s for ClusterControl

Categories: DBA Blogs

Datascape Podcast Episode 9 – What’s Up with Oracle These Days?

Thu, 2017-05-18 09:06

On today’s episode of the Datascape Podcast we are going to talk about the Oracle Database. With the hype around cloud and cloud first initiatives, automation, DevOps, and very specialized cloud-based databases, I wondered what was going on with one of the datascape’s old faithful technologies, Oracle Database.

Joining us is Simon Pane, who is an Oracle consultant and expert. Simon is very much immersed in the community and truly has his feet on the ground. He’s been working in the Oracle DBA space as a consultant for almost his entire career, with a focus around Oracle’s core database technologies. Simon often speaks at various conferences around North America and Europe. He was recently inducted as a board member to Independent Oracle User Group (IOUG), one of the main Oracle user groups.

You can find it on iTunes, Google Play, Sound Cloud, and just about anywhere that shares podcasts.

Key Points From This Episode:

  • Learn more about which types of companies and organizations use the Oracle Database.
  • Understand the different barriers of adoption for using the Oracle Database.
  • Simon shares more about the average size of the Oracle implementations that he sees.
  • Discover why the main use case for Oracle XE is for teaching and training purposes.
  • Learn more about the current version of Oracle, and the new exciting features.
  • Understand why it is complicated to upgrade an Oracle database, and the preparation needed.
  • Simon shares the top features and benefits of the Oracle database.
  • Find out what the ODA and Exadata Appliances are, and the differences between them.
  • Learn more about the Oracle Cloud and how they are working to become a major player.
  • Simon shares his thoughts on Oracle as a product, and whether or not it is dying/dead.
  • Hear about the project that Simon is most proud of, the best tool he uses, and much more!

Links Mentioned in Today’s Episode:

Oracle Database
Oracle Database Appliance
Oracle Exadata
DBA Course
Simon on Twitter
More about Simon
ODTUG
ODTUG Kscope17 Conference

Categories: DBA Blogs

Oracle E-Business Suite and Java Web Start. Finally!

Thu, 2017-04-27 20:36

Great things always seem to happen overnight – at least that’s how it feels for European Oracle Apps DBAs like myself.

This morning I read Steven Chan’s latest blog post about Java Web Start certification with Oracle E-Business Suite and its release to the public. We had heard about it during some informal discussions at the Collaborate 17 conference: everything was ready, the documentation was prepared, and they were just waiting for a final bug related to the Workflow Activity Monitor to be addressed…

…and about twenty days later,  it was released.

This is a superb evolutionary event! And, of course, I made time to fit in some good introductory, hands-on experience.

 

Initial thoughts while going through Doc ID 2188898.1

 

1. Java Runtime version for client.

  • JRE 8 Update 131 b31 or JRE 8 Update 121 b33 are required (as a minimum).
    This isn’t clearly visible – these are special update releases (note the b31 and b33 builds), available only from My Oracle Support via Patches 25767257 and 25450542. A public release containing this support is only scheduled for next quarter and is set to ship with Update 141.
  • JRE 6 and JRE 7 are out of scope! If your system doesn’t have JRE 8 support yet, now is the right time to think about it.

2. Support is available for R12.2 and also for R12.1.3. There is no support for 11i.

The patching requirement is minimal (three small patches for AD/TXK/FND), but you need to be on the latest AD/TXK.C.8 and ATG_PF.C.6, plus a new PSU overlay for the 10.1.2 Oracle Home, which nicely supersedes the previously released overlay.

Eight functional module patches have been posted so far, and there are still known issues listed for other, less popular modules; I expect patches for those to be implemented and added to the list at some point.
I would include this list in your patch maintenance, especially the Oracle Workflow patch for the Activity Monitor.

A new context variable, s_forms_launch_method, has been implemented; it sets the required profile options at Site level. However, you can still configure exceptions for individual users via User-level profile option values, so both methods are available: you can leave the browser plugin method as the default and enable JWS only for particular users, for testing purposes, for example.

3. “Only one instance per Java applet can be running on the client PC.”

Yes, it’s only possible to have one environment open at a time. This is a JWS limitation – even IE’s noframemerging option, or the “hey, I can use my second browser” trick, is not an option here.

4. Full Chrome support is finally available for Forms.

Browser support is basically no longer dependent on browser plugin availability. Microsoft Edge is also supported, and we can finally start using 64-bit Java, since there is no longer a browser plugin dependency.

5. Known issues section.

From a management and housekeeping perspective, the docs mention that the process is self-cleaning: for example, downloaded frmservlet.jnlp files should be automatically removed from disk after launch. There is a known issues section covering some nuances, open issues and workarounds for specific browsers and functional areas in EBS. As always, it will be kept updated and periodically reviewed by many of us Apps DBAs for news.

 

Now to talk about my hands-on…

I used my R12.2.6 Vision lab instance built on AWS. The patching exercise took something like 30 minutes in total (the 10.1.2 patch, the EBS patches, JAR regeneration). But I was doing this in hotpatch mode and not through an online patch cycle (yes, not the right way, but my goal was to avoid the time-consuming ADOP tasks).
I also installed the required JRE 8 update 131 b31 64-bit version on my laptop.

I made a huge mistake by starting this testing on my MacOS, which introduced a number of issues that almost led me to a huge “why, Oracle?” facepalm. And it’s not just the known Gatekeeper security feature issue mentioned for Safari on MacOS – but I’ll talk about this later.

I brought up my Windows 7 VM.
IE 11, Firefox’s latest public update, and Chrome all worked like a charm. I didn’t experience any of the issues I had faced previously on my Mac: “Save File” to the Downloads folder and then a double-click, or “Open with” to launch the applet immediately, and your Forms session is up. Look at the screenshot – CHROME!

The applet window opens transparently.

The example below is from the Workflow Activity Monitor using Firefox. No extra browser tab or window was opened in my case, although a known issue with Firefox is mentioned.

The downloaded applet files are automatically cleared once the applet is loaded – you will not find them on disk. Chrome, for example, updates the status of each item under its Downloads tab.

So again… the Windows platform is covered well. What about the rest?

Linux is not a certified platform for Oracle E-Business Suite end-user desktops. Although I have used it successfully and it should work, the Oracle team is still not testing or certifying it. You may play around with it at your own risk, but it isn’t the right direction for production. I personally haven’t tested it with JWS yet, but my guess is that it will have the same problems as MacOS. Maybe that’s a to-do for an update to this blog post at some point.

MacOS… Steven Chan’s blog post, as mentioned above, states that “Safari on MacOS” is not certified because of the MacOS Gatekeeper security feature, which is “making the Java Web Start user experience very challenging”. That’s fine – we know the workaround: go to the System Preferences app and click the Open Anyway button, although this is required every time we launch a Forms session.

But… we don’t even reach that point. In the example below, I will outline the sequence of “nuances” I faced. It is based on Safari screenshots, as only Safari is officially certified on MacOS, but I hit exactly the same issues in Firefox and in Chrome on the same MacOS.

First – we try to open Forms, but just get this familiar screen.

Why? Because the URL still goes to browser plugin mode – “/forms/frmservlet?config=browserMode&appletmode=…”

We can go and set the ICX_FORMS_LAUNCHER profile option to “/forms/frmservlet?config=jws” at Site level, as the MOS note outlines.

This works, but it will be required every time we run AutoConfig, as the profile option is always reset to just “/forms/frmservlet” at Site level; the Forms launch process is then supposed to follow the FND_ENABLE_JAVA_WEB_START profile value instead.

After the manual fix, this is what happens: Safari downloads the applet.

We launch it and, only now, face the known Gatekeeper issue.

The applet is loaded, but not Forms.

I would expect Safari to handle this itself, but in my case Firefox is configured as the system’s default web browser.
That caused a Firefox tab to be opened (called by the initial applet) and a second applet download to be requested.

Once the second applet is up, I finally get the Forms session running. In a way it’s a flow similar to the browser plugin one, but it just kills the user experience.
Initially the second applet was blocked by the same Gatekeeper and I had to use the known workaround, but in later testing MacOS consistently blocked only the first applet while launching Forms, and no longer the second one. I suspect the issue is that the file is missing the jnlp extension.

I got Forms running, but what a process it was…
And nothing is cleaned up: my Downloads folder is full of the downloaded frmservlet.jnlp files.

To summarize

This is a great moment: a long-awaited feature has finally been released by the Oracle EBS team. But is it ready?

I would say that if your company follows strict standards controlling what software is used, including browsers, you are most probably running the same IE on Windows – and this feature will work well. Forms will even keep working in good old browser plugin mode with your current JRE 6 or JRE 7, so you don’t need to rush this implementation.
The same goes for Firefox ESR – it will work as before, but not for long: only until the next ESR release branch merge.

If your company follows a BYOD approach, with multiple platforms such as MacOS in use, and you are looking for Chrome browser support, then at this stage only the Windows platform is covered well. Other platforms are not officially certified, including “Safari on MacOS”. You can still use them, but you need to accept the headache of the process described in the example above. And that’s not only about Safari: other browsers, like Firefox and Chrome, behave the same way on MacOS.

And not all Apps DBAs will agree to manually fix ICX_FORMS_LAUNCHER every time AutoConfig is executed, although User-level exceptions can be configured for a defined list of end users.

Categories: DBA Blogs

Simple Steps to Perform oPatch Maintenance with Ansible

Mon, 2017-04-24 10:44

Like the Universe, IT growth seems to be infinite: we always have more environments, more servers, more users, more disk usage and more databases to manage, and it won’t stop. In fact, we can be pretty sure that this expansion is only going to get faster and faster.

We then have to adapt to this new, ever-changing IT environment by being more productive, in order to manage more and more targets in less time.

How do we achieve this goal? The way human beings always have, from the early days – by using tools, and by making better tools with the tools we have.

1/ The Ansible Tool

 

1.1/ A Word on Ansible

Ansible is an open-source IT automation tool that was launched in early 2013 and bought by Red Hat in 2015. The most recent version, 2.3, was released a few days ago.

1.2/ Why Ansible?

All the automation tools profess to be easy, fast and able to manage thousands upon thousands of targets, etc., so why Ansible instead of Puppet or Chef? For me, it’s because Ansible is agentless and does everything through standard SSH (or Paramiko, which is a Python SSH implementation).

Indeed, ‘no agent’ really means easy to deploy and no agent to maintain (!), and it is very secure since it uses SSH. I am accustomed to working with companies that have tough security requirements and challenging approval processes for any kind of installation. Believe me, it is much easier to get everything deployed quickly when you can answer these questions:

  • Is it secure? Yes, it goes through SSH.
  • Anything to install on the targets? No.
  • Do you need root access? No, as long as what I need to do is doable with no root privilege.
  • Can it go through sudo? Yes, no worries.
  • What do you need then? An SSH key deployed on the targets (which also means it is very easy to undo – you just have to remove that SSH key from the target).

For more information on the differences between Ansible, Puppet and Chef, just perform an online search – you will find many in-depth comparisons.

2/ Manage oPatch with Ansible

To illustrate how quick and easy it is to use Ansible, I will demonstrate how to update oPatch with Ansible. oPatch is a very good candidate for Ansible as it needs to be frequently updated, exists in every Oracle home and also needs to be current every time you apply a patch (and for those who read my previous blogs, you know that I like to update opatch :))

2.1/ Install Ansible

The best way to install Ansible is to first refer to the official installation documentation. There you will find the specific commands for your favorite platform (note that Ansible is not designed to run its control machine on Windows).
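
For example, on a typical Linux control machine that already has Python and pip available, something along these lines is usually enough (your platform’s package manager may be the preferred route – check the docs; output trimmed):

oracle@control:~$ sudo pip install ansible
oracle@control:~$ ansible --version
ansible 2.3.0.0
...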

2.2/ Configure Ansible

To start, Ansible has to know the hosts you want to manage, via a “hosts” file like this:

oracle@control:~/work$ cat hosts_dev
[loadbalancer]
lb01

[database]
db01
db02 ansible_host=192.168.135.101
oracle@control:~/work$

We can split the hosts into groups like [loadbalancer] and [database]. It is also possible that the host you are running Ansible on cannot resolve a target’s name; we can then use the ansible_host parameter to specify its IP, as I did for the db02 server. In fact, ansible_host defines the host Ansible will connect to, and the name at the start of the line is an alias used when ansible_host is not defined.

Note that I named the hosts file “hosts_dev” in my example. I did this so I would not use the default Ansible hosts file, which makes the setup more modular. We then have to tell Ansible to use this file instead of the default one, via the ansible.cfg configuration file.

oracle@control:~/work$ cat ansible.cfg
[defaults]
inventory=./hosts_dev
oracle@control:~/work$

Please remember that Ansible uses SSH connectivity, so you’ll need to copy the SSH key of your “control” server to your targets. More extensive documentation on the subject can be found online. Here is an example with ssh-copy-id (if you don’t know the target user’s password, search for authorized_keys and you will find how to exchange an SSH key in that case):

  oracle@control:~$ ssh-keygen                          # This will generate your SSH keys

  ... press ENTER at all prompts) ...

  oracle@control:~$ ssh-copy-id oracle@db01
  ...
  Are you sure you want to continue connecting (yes/no)? yes
  ...
  oracle@db01's password:                             # You will be prompted for the target password once
  ...
  Now try logging into the machine, with:   "ssh 'oracle@db01'"
  and check to make sure that only the key(s) you wanted were added.

  oracle@control:~$ ssh oracle@db01                    # Try to connect now
  Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-112-generic x86_64)
  Last login: Thu Apr 20 02:17:24 2017 from control
  oracle@db01:~$                                       # We are now connected with no password
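
With the key in place, a quick sanity check is an ad-hoc run of the ping module against the whole inventory – every host should answer with a “pong”, along these lines:

  oracle@control:~/work$ ansible all -m ping
  db01 | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }
  db02 | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }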

 

2.3/ A First Playbook

A playbook is a collection of Ansible tasks that are used to orchestrate what you want to do. Ansible uses the YAML language for this purpose (please have a look at the official YAML website).

Let’s start with a first easy playbook that checks if the /etc/oratab file exists on my [database] hosts:

oracle@control:~/work$ cat upgrade_opatch.yml
---
- hosts: database                              # Specify only the hosts contained in the [database] group
  tasks:
  - name: Check if /etc/oratab exists          # A name for the task
    stat:                                      # I will use the stat module to check if /etc/oratab exists
      path: /etc/oratab                        # The file or directory I want to check the presence
    register: oratab                           # Put the return code in a variable named "oratab"

  - debug:                                     # A debug task to show an error message if oratab does not exist
      msg: "/etc/oratab does not exists"       # The debug message
    when: oratab.stat.exists == false          # The message is printed only when the /etc/oratab file does not exist

oracle@control:~/work$

Let’s run it now (we use ansible-playbook to run a playbook):

oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

PLAY [database] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
ok: [db02] => {
    "changed": false,
    "msg": "/etc/oratab does not exists"
}

PLAY RECAP ********************************************************************************************************************************************************************************************************
db01                       : ok=2    changed=0    unreachable=0    failed=0
db02                       : ok=3    changed=0    unreachable=0    failed=0

oracle@control:~/work$

Since I removed /etc/oratab from db02 on purpose, I received the “/etc/oratab does not exists” error message (as expected).

Before going further, let’s add a test to see if unzip exists (we’ll need unzip to unzip the opatch zipfile). Put the db02’s oratab file back where it should be and run the playbook again:

  oracle@control:~/work$ cat upgrade_opatch.yml
  ---
  - hosts: database
    tasks:
    - name: Check if /etc/oratab exists
      stat:
        path: /etc/oratab
      register: oratab

    - debug:
        msg: "/etc/oratab does not exists"
      when: oratab.stat.exists == false

    - name: Check if unzip exists (if not we wont be able to unzip the opatch zipfile)
      shell: "command -v unzip"
      register: unzip_exists
      failed_when: false                       # do not stop the play if unzip is missing

    - debug:
        msg: "unzip cannot be found"
      when: unzip_exists.rc != 0
  oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

  PLAY [database] ***************************************************************************************************************************************************************************************************

  TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
  ok: [db02]
  ok: [db01]

  TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
  ok: [db01]
  ok: [db02]

  TASK [debug] ******************************************************************************************************************************************************************************************************
  skipping: [db01]
  skipping: [db02]

  TASK [Check if unzip exists (if not we wont be able to unzip the opatch zipfile)] *********************************************************************************************************************************
  changed: [db02]
  changed: [db01]

  TASK [debug] ******************************************************************************************************************************************************************************************************
  skipping: [db01]
  skipping: [db02]

  PLAY RECAP ********************************************************************************************************************************************************************************************************
  db01                       : ok=3    changed=1    unreachable=0    failed=0
  db02                       : ok=3    changed=1    unreachable=0    failed=0

  oracle@control:~/work$

Please note that I used the built-in shell module to test whether unzip is present or not.

2.4/ Upgrade oPatch

To upgrade oPatch, we need to copy the zipfile to the target Oracle home and then unzip it — easy and straightforward. Let’s ask Ansible to do it for us.

First, let’s use the copy module to copy the oPatch zipfile to the target Oracle home:

- name: Copy the opatch zipfile to the target oracle home
  copy:
    src: p6880880_112000_Linux-x86-64.zip
    dest: /u01/oracle/11204

Next, unzip the zipfile in the target Oracle home. I use the shell module instead of the unarchive module on purpose; this will trigger a warning during the playbook execution, but I am not a big fan of the unarchive module… we can discuss that another time:

  - name: Upgrade opatch
    shell: unzip -o /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip -d /u01/oracle/11204
    register: unzip
    failed_when: unzip.rc != 0
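
For reference, a sketch of the same step using the unarchive module instead could look like this (remote_src tells Ansible the zipfile is already present on the target and needs a reasonably recent Ansible version); I still prefer the shell approach above:

  - name: Upgrade opatch (unarchive variant)
    unarchive:
      src: /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip
      dest: /u01/oracle/11204
      remote_src: yes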

Let’s clean up the zipfile we copied earlier using the file module – we do not want to leave any leftovers (note that it is the state: absent keyword that removes the file):

  - name: Cleanup the zipfile from the target home
    file:
      name: /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip
      state: absent

Now review the whole playbook:

  oracle@control:~/work$ cat upgrade_opatch.yml
---
- hosts: database
  tasks:
  - name: Check if /etc/oratab exists
    stat:
      path: /etc/oratab
    register: oratab

  - debug:
      msg: "/etc/oratab does not exists"
    when: oratab.stat.exists == false

  - name: Check if unzip exists (if not we wont be able to unzip the opatch zipfile)
    shell: "command -v unzip"
    register: unzip_exists
    failed_when: false                       # do not stop the play if unzip is missing

  - debug:
      msg: "unzip cannot be found"
    when: unzip_exists.rc != 0

  - name: Copy the opatch zipfile to the target oracle home
    copy:
      src: p6880880_112000_Linux-x86-64.zip
      dest: /u01/oracle/11204

  - name: Upgrade opatch
    shell: unzip -o /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip -d /u01/oracle/11204
    register: unzip
    failed_when: unzip.rc != 0

  - name: Cleanup the zipfile from the target home
    file:
      name: /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip
      state: absent

oracle@control:~/work$

and execute it:

oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

PLAY [database] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
ok: [db01]
ok: [db02]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
skipping: [db02]

TASK [Check if unzip exists (if not we wont be able to unzip the opatch zipfile)] *********************************************************************************************************************************
changed: [db02]
changed: [db01]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
skipping: [db02]

TASK [Copy the opatch zipfile to the target oracle home] **********************************************************************************************************************************************************
changed: [db01]
changed: [db02]

TASK [Upgrade opatch] *********************************************************************************************************************************************************************************************
 [WARNING]: Consider using unarchive module rather than running unzip

changed: [db01]
changed: [db02]

TASK [Cleanup the zipfile from the target home] *******************************************************************************************************************************************************************
changed: [db02]
changed: [db01]

PLAY RECAP ********************************************************************************************************************************************************************************************************
db01                       : ok=6    changed=4    unreachable=0    failed=0
db02                       : ok=6    changed=4    unreachable=0    failed=0

oracle@control:~/work$

We now have a playbook that can update all your oPatches in a blink!

Please note that this is a very basic example, since the goal is just to give an overview of how to manage oPatch with Ansible.
Many features could be implemented here (and are implemented in the code we use here at Pythian) like:

  • Check the list of Oracle homes on each server — there are often many.
  • Check the version of each Oracle home’s oPatch.
  • Manage different oPatch versions: 11, 12 and 13.
  • Use the Ansible roles to make the code more modular and reusable.
  • Upgrade opatch only if it actually needs it (see the sketch below), and more…
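
As a teaser for the version check and the conditional upgrade, a minimal and deliberately simplified sketch might look like this (the Oracle home path and target version are illustrative only):

  - name: Get the current opatch version of this Oracle home
    shell: /u01/oracle/11204/OPatch/opatch version
    register: opatch_version
    changed_when: false

  - name: Report homes that are not yet on the opatch version we ship
    debug:
      msg: "opatch needs an upgrade in /u01/oracle/11204"
    when: "'11.2.0.3.16' not in opatch_version.stdout"

The copy and unzip tasks shown earlier would then simply carry the same when: condition, so they only run where an upgrade is actually needed.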

I hope you enjoyed this Ansible overview!

Categories: DBA Blogs
