
Pythian Group

Love Your Data

What Are Your Options For Migrating Enterprise Applications to the Cloud?

Fri, 2016-04-01 08:16

Migrating your enterprise applications from on-premises infrastructure to the public cloud is attractive for a number of reasons. It eliminates the costs and complexities of provisioning hardware and managing servers, storage devices, and network infrastructure; it gives you more compute capacity per dollar without upfront capital investment; and you gain opportunities for innovation through easier access to new technologies, such as advanced analytical capabilities.

So how do you get there?

You have a few options. At one end of the spectrum, you could simply wait and rationalize, making continuous incremental changes to gain efficiencies. This is obviously a “slow burn” approach. In the middle is a “lift-and-shift” from your current environment into the public cloud. And at the far extreme, you could plunge right in and re-architect your applications—a costly and probably highly complex task.

 

In fact, a true migration “strategy” will involve elements of each of these. For example, you could perform short-term optimizations and migrations on a subset of applications that are ready for the cloud, while transforming the rest of your application stack over the longer term.

 

What to expect from the major public cloud platforms

There are three leading public cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). As Google doesn’t seem to be driving customers to lift-and-shift their applications to GCP, I’m going to focus on AWS and Azure as potential cloud destinations and, for specificity, take Oracle enterprise databases as the use case.

 

Amazon Web Services

You have two options for migrating Oracle databases to the AWS cloud: infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

 

Deploying Oracle applications in AWS IaaS is much like deploying them on your in-house infrastructure. You don’t get flexible licensing options, but you do have the ability to easily allocate more or less capacity as needed for CPU, memory, and storage. However, because AWS IaaS is virtualized infrastructure, you may experience slower performance due to suboptimal CPU core allocation or processor caches. You’ll also have less flexibility with instance sizes, network topology, storage performance tiers, and the like.

 

AWS Relational Database Service (RDS) for Oracle is a managed PaaS offering where, in addition to giving you the benefits of IaaS, Amazon takes on major DBA and system administrator tasks including provisioning, upgrades, backups, and multi-availability zone replication. This significantly simplifies your operations—but also results in less control over areas such as configuration, patching, and maintenance windows. AWS RDS for Oracle can also be used with a pay-as-you-go licensing model included in the hourly rate.

 

Microsoft Azure

Azure does not have a managed offering for Oracle databases, so the only way to run Oracle Database on Azure is through its IaaS platform. The benefits are very similar to AWS IaaS, but Azure offers additional licensing options (with Windows-based license-included images) and its instances are billed by the minute rather than by the hour. What’s important to keep in mind is that Azure is not as broadly adopted as AWS and offers less flexibility for storage performance tiers and instance sizes. Oracle Database software running on Windows is also not as common as running on Linux.

 

For more in-depth technical details on these options, I encourage you to read our white paper, Migrating Oracle Databases to Cloud. My next blog in this series will look at one other option not discussed here: migrating to Oracle Cloud.


Categories: DBA Blogs

5 Phases for Migrating to a Cloud Platform

Thu, 2016-03-31 13:11

Businesses today are increasingly looking to migrate to the cloud to realize lower costs and increase software velocity. They are now asking themselves "when" they should migrate rather than "if" they should, and with many vendors and solutions in the market, it can be difficult to take the first steps in creating a cloud strategy.

In our latest on-demand webinar, Chris Presley, Solution Architect at Pythian, and Jim Bowyer, Azure Solution Architect at Microsoft Canada, discuss a five-phase framework for cloud transformations, and the benefits of migrating to the cloud with Microsoft Azure.

The five phase framework helps businesses understand the journey to successfully migrate current applications to a cloud platform. Here is a snapshot of the five phases:

 

1. Assessment: Analysis and Planning

A majority of the time investment should be upfront in assessment and preparation because it sets the stage for the actual development and migration, resulting in faster projects, lower costs, and less risk.

In this phase, businesses want to begin understanding the performance and user characteristics of their applications, and any other additional information that will be important during the transformation, such as regulatory, compliance, and legal requirements.

 

2. Preparation: POC, Validation and Final Road Map

The preparation phase is meant to help understand what the rest of the migration is going to look like.

While beneficial in any project, proofs of concept (POCs) are increasingly simple to create and are a great strength when leveraging the cloud. POCs are used to show some functionality and advantage early so you can get everyone – especially business owners – excited about the migration.

 

3. Build: Construct Infrastructure

Once the expectations around the final migration road map are developed, the infrastructure can be built. Jim notes that it is important to begin thinking about automation during this phase, and Chris agrees, particularly about developing an automated test bed to help smooth out the migration.

 

4. Migration: Execute Transformation

The migration activity for cloud environments is very short. By this stage, if the planning and preparation has been done properly, “flicking the light switch” to the new environment should be seamless and feel like the easiest part.

Chris talks about creating both detailed success and rollback criteria and how they are both crucial for success in the migration phase. Jim mentions that Microsoft Azure provides a variety of tools to help make rollbacks easier and safer.

 

5. Optimization: IaaS Enhancements

Continually transforming and enhancing after the migration is complete is important for increasing software velocity, which is why businesses migrate to the cloud in the first place. While a piece of functionality may not be available today, it may be available tomorrow.

By going back to iterate and take advantage of new functionalities, businesses are able to squeeze out more improvements and create opportunities for differentiation.

 

Learn More

To learn about these five cloud transformation phases in more depth, and how to leverage the cloud with Microsoft Azure, download our free on-demand webinar.


Categories: DBA Blogs

GoldenGate 12.2 Big Data Adapters: part 3 – Kafka

Thu, 2016-03-31 09:39

This post continues my review of the GoldenGate Big Data adapters, which started with the HDFS and Flume adapters. Here is the list of all posts in the series:

  1. GoldenGate 12.2 Big Data Adapters: part 1 – HDFS
  2. GoldenGate 12.2 Big Data Adapters: part 2 – Flume
  3. GoldenGate 12.2 Big Data Adapters: part 3 – Kafka

In this article I will try the Kafka adapter and see how it works. First, it may be worth reminding readers what Kafka is. Kafka is a publish-subscribe streaming system. One can ask how it is different from Flume, a question I asked myself when I first heard about Kafka. I think one of the best comparisons between Flume and Kafka has been made by Gwen Shapira and Jeff Holoman in the blog post Apache Kafka for Beginners. In essence, Kafka is a general-purpose system where most of the control and consumer functionality relies on consumer programs you build yourself, whereas in Flume you have pre-created sources and sinks, and can use interceptors to change data in flight. So with Kafka you get at the destination exactly what you put in at the source. Kafka and Flume can work together quite well, and in this article I am going to use them both.
Let's recall what we have in our configuration. We have an Oracle database running as a source, and Oracle GoldenGate for Oracle capturing changes for one schema in this database. We have OGG 12.2 and an integrated extract on the source. The replication goes directly to trail files on the destination side, where we have OGG for Big Data installed on a Linux box. You can get more details about the installation on source and target from the first post in the series. I've kept the configuration as simple as possible, dedicating most attention to the Big Data adapter functionality, which is after all the main point of the article.

Having installed OGG for Big Data, we need to set up the Kafka adapter. As with the other adapters, we copy the configuration files from the $OGG_HOME/AdapterExamples/big-data directory.

bash$ cp $OGG_HOME/AdapterExamples/big-data/kafka/* $OGG_HOME/dirdat/

We need to adjust our kafka.props file to define the Kafka/ZooKeeper topics for data and schema changes (the TopicName and SchemaTopicName parameters) and the gg.classpath for the Kafka and Avro Java classes. I left the rest of the parameters at their defaults, including the format for the changes, which was defined as "avro_op" in the example.

[oracle@sandbox oggbd]$ cat dirprm/kafka.props

gg.handlerlist = kafkahandler
gg.handler.kafkahandler.type = kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.TopicName =oggtopic
gg.handler.kafkahandler.format =avro_op
gg.handler.kafkahandler.SchemaTopicName=mySchemaTopic
gg.handler.kafkahandler.BlockingSend =false
gg.handler.kafkahandler.includeTokens=false

gg.handler.kafkahandler.mode =tx
#gg.handler.kafkahandler.maxGroupSize =100, 1Mb
#gg.handler.kafkahandler.minGroupSize =50, 500Kb


goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=dirprm/:/u01/kafka/libs/*:/usr/lib/avro/*:

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

[oracle@sandbox oggbd]$

The next file we have to adjust is custom_kafka_producer.properties, which contains information about our running Kafka server and defines some additional parameters such as compression. I left all the parameters unchanged except "bootstrap.servers", where I put the address of my Kafka service.

[oracle@sandbox oggbd]$ cat dirprm/custom_kafka_producer.properties
bootstrap.servers=sandbox:9092
acks=1
compression.type=gzip
reconnect.backoff.ms=1000

value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
# 100KB per partition
batch.size=102400
linger.ms=10000
[oracle@sandbox oggbd]$

If we plan an initial load through Kafka, we can use something like the parameter file I prepared for a passive replicat:

[oracle@sandbox oggbd]$ cat dirprm/irkafka.prm
-- Trail file for this example is located in "dirdat" directory
-- Command to run passive REPLICAT
-- ./replicat paramfile dirprm/irkafka.prm reportfile dirrpt/irkafka.rpt
SPECIALRUN
END RUNTIME
EXTFILE /u01/oggbd/dirdat/initld
--
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;
[oracle@sandbox oggbd]$

Before starting any replicat we need to prepare our system to receive the data. Since Kafka itself is a pure streaming system, it cannot write files to HDFS without another program or connector. In this first case we will be using Kafka to pass data to Flume, and Flume's sink will write it to HDFS. Please be aware that you need a ZooKeeper to manage topics for Kafka. I am not going to discuss setting up ZooKeeper in this article; just assume that we already have it up and running on port 2181.
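If you want to confirm that ZooKeeper is reachable before going further, a quick check like the one below can help. This is just a hedged sketch, assuming ZooKeeper runs locally on the default port:

# ZooKeeper answers the four-letter command "ruok" with "imok" when it is healthy
echo ruok | nc localhost 2181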
I used Kafka version 0.9.0.1, downloading it from http://kafka.apache.org/downloads.html. After downloading the archive I unpacked it, slightly adjusted the configuration, and started it in standalone mode.

[root@sandbox u01]# wget http://apache.parentingamerica.com/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
--2016-03-15 15:22:09--  http://apache.parentingamerica.com/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
Resolving apache.parentingamerica.com... 70.38.15.129
Connecting to apache.parentingamerica.com|70.38.15.129|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35650542 (34M) [application/x-gzip]
Saving to: `kafka_2.11-0.9.0.1.tgz'

100%[=========================================================================================================================================>] 35,650,542  2.95M/s   in 16s

2016-03-15 15:22:26 (2.10 MB/s) - `kafka_2.11-0.9.0.1.tgz' saved [35650542/35650542]

[root@sandbox u01]# tar xfz kafka_2.11-0.9.0.1.tgz

[root@sandbox u01]# ln -s kafka_2.11-0.9.0.1 kafka

[root@sandbox u01]# cd kafka

[root@sandbox kafka]# vi config/server.properties
[root@sandbox kafka]# grep -v '^$\|^\s*\#' config/server.properties
broker.id=0
listeners=PLAINTEXT://:9092
num.network.threads=3

num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
[root@sandbox kafka]#
[root@sandbox kafka]# nohup bin/kafka-server-start.sh config/server.properties > /var/log/kafka/server.log &
[1] 30669
[root@sandbox kafka]# nohup: ignoring input and redirecting stderr to stdout

Now we need to prepare our two topics for the data received from GoldenGate. As you remember, we defined the topic "oggtopic" for our data flow using the gg.handler.kafkahandler.TopicName parameter in our kafka.props file, and the topic "mySchemaTopic" for schema changes. So, let's create the topic using the scripts supplied with Kafka:

[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --create --topic oggtopic --partitions 1 --replication-factor 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Created topic "oggtopic".
[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --list
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
oggtopic
[root@sandbox kafka]#

As a matter of fact, all the necessary topics will also be created automatically when you start your GoldenGate replicat. You only need to create a topic explicitly if you want to use custom parameters for it. You also have the option to alter the topic later to change its configuration parameters.
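For example, a topic that was created automatically with default settings can later be given more partitions or a per-topic retention override using the same tooling. A hedged sketch; the values here are only illustrative:

# Add partitions to the automatically created topic (partition count can only grow)
bin/kafka-topics.sh --zookeeper sandbox:2181 --alter --topic mySchemaTopic --partitions 3

# Override log retention for this topic only (value in milliseconds)
bin/kafka-topics.sh --zookeeper sandbox:2181 --alter --topic mySchemaTopic --config retention.ms=86400000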
Here is the list of the topics we have, where one of them was created manually and the second one was created automatically by the replicat process.

[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --describe --topic oggtopic
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:oggtopic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: oggtopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
[root@sandbox kafka]# bin/kafka-topics.sh --zookeeper sandbox:2181 --describe --topic mySchemaTopic
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/kafka_2.11-0.9.0.1/libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:mySchemaTopic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: mySchemaTopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
[root@sandbox kafka]#

In our configuration we have only one server and the simplest possible Kafka setup. In a real business case it can be far more complex. Our replicat is going to post data changes to oggtopic, and all schema changes and definitions to mySchemaTopic. We've already mentioned that we are going to use Flume to write to HDFS. I've prepared Flume with two sources and two sinks to write the changes to the /user/oracle/ggflume HDFS directory. We could have split data and schema changes into different directories if we wished. Here is my configuration for Flume:

[root@sandbox ~]# cat /etc/flume-ng/conf/flume.conf
# Name/aliases for the components on this agent
agent.sources = ogg1 ogg2
agent.sinks = hdfs1 hdfs2
agent.channels = ch1 ch2

#Kafka source
agent.sources.ogg1.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.ogg1.zookeeperConnect = localhost:2181
agent.sources.ogg1.topic = oggtopic
agent.sources.ogg1.groupId = flume
agent.sources.ogg1.kafka.consumer.timeout.ms = 100

agent.sources.ogg2.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.ogg2.zookeeperConnect = localhost:2181
agent.sources.ogg2.topic = mySchemaTopic
agent.sources.ogg2.groupId = flume
agent.sources.ogg2.kafka.consumer.timeout.ms = 100

# Describe the sink
agent.sinks.hdfs1.type = hdfs
agent.sinks.hdfs1.hdfs.path = hdfs://sandbox/user/oracle/ggflume
agent.sinks.hdfs2.type = hdfs
agent.sinks.hdfs2.hdfs.path = hdfs://sandbox/user/oracle/ggflume
#agent.sinks.hdfs1.type = logger

# Use a channel which buffers events in memory
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1001
agent.channels.ch1.transactionCapacity = 1000
agent.channels.ch2.type = memory
agent.channels.ch2.capacity = 1001
agent.channels.ch2.transactionCapacity = 1000

# Bind the source and sink to the channel
agent.sources.ogg1.channels = ch1
agent.sources.ogg2.channels = ch2
agent.sinks.hdfs1.channel = ch1
agent.sinks.hdfs2.channel = ch2

As you can see, we have a separate source for each of our Kafka topics, and we have two sinks pointing to the same HDFS location. The data is going to be written in Avro format.
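For completeness, the agent defined above can be started with something along these lines. This is a hedged sketch: the paths and the agent name ("agent", matching the property prefix in flume.conf) depend on your distribution:

# Start the Flume agent named "agent" with the configuration shown above
flume-ng agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/flume.conf \
  --name agent -Dflume.root.logger=INFO,console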
All preparations are complete: the Kafka server is running, the two topics exist, and Flume is ready to write data to HDFS. Our HDFS directory is still empty.

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
[oracle@sandbox oggbd]$

Let's run the passive replicat with our initial data load trail file:

[oracle@sandbox oggbd]$ cd /u01/oggbd
[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irkafka.prm reportfile dirrpt/irkafka.rpt
[oracle@sandbox oggbd]$

Now we can have a look at the results. We got three files on HDFS: the first two describe the structure of TEST_TAB_1 and TEST_TAB_2 respectively, and the third contains the data changes, or, more precisely, the initial data for those tables. You can see that the schema definitions were put into separate files, while the data changes were all posted to a single file.

[oracle@sandbox ~]$ hadoop fs -ls /user/oracle/ggflume/
Found 3 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
[oracle@sandbox ~]$

[oracle@sandbox ~]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691685
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?????k?\??????S?A?%?{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
.........................


[oracle@sandbox ~]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691686
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?*
?e????xS?A?%N{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {


...............................

[oracle@sandbox ~]$hadoop fs -cat  /user/oracle/ggflume/FlumeData.1458749691718
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable??????c?C n??S?A?b"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.373000(00000000-10000002012
PK_ID1371O62FX&2014-01-24:19:09:20RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405000(00000000-10000002155
PK_ID2371O62FX&2014-01-24:19:09:20HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405001(00000000-10000002298
PK_ID3RXZT5VUN&2013-09-04:23:32:56RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-23T12:14:35.405002(00000000-10000002441
PK_ID4RXZT5VUN&2013-09-04:23:32:56HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-03-23T12:14:35.408000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52[oracle@sandbox ~]$

Now we need to create our ongoing replication. Our extract was set up the same way as described in the first post of the series. It is up and running, passing changes to the replicat side into the ./dirdat directory.

GGSCI (sandbox.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     GGEXT       00:00:09      00:00:03


[oracle@sandbox oggbd]$ ls -l dirdat/
total 240
-rw-r-----. 1 oracle oinstall   3028 Feb 16 14:17 initld
-rw-r-----. 1 oracle oinstall 190395 Mar 14 13:00 or000041
-rw-r-----. 1 oracle oinstall   1794 Mar 15 12:02 or000042
-rw-r-----. 1 oracle oinstall  43222 Mar 17 11:53 or000043
[oracle@sandbox oggbd]$

I've prepared a parameter file for the Kafka replicat:

[oracle@sandbox oggbd]$ cat dirprm/rkafka.prm
REPLICAT rkafka
-- Trail file for this example is located in "AdapterExamples/trail" directory
-- Command to add REPLICAT
-- add replicat rkafka, exttrail dirdat/or, begin now
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP GGTEST.*, TARGET BDTEST.*;

[oracle@sandbox oggbd]$

We only need to add and start our rkafka replicat in the Big Data GoldenGate instance.

GGSCI (sandbox.localdomain) 1> add replicat rkafka, exttrail dirdat/or, begin now
REPLICAT added.


GGSCI (sandbox.localdomain) 2> start replicat rkafka

Sending START request to MANAGER ...
REPLICAT RKAFKA starting


GGSCI (sandbox.localdomain) 3> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 11:53   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           21041
Log Read Checkpoint  File dirdat/or000000000
                     2016-03-24 11:53:17.388078  RBA 0

You may remember that we don't have a dirdat/or000000000 file in our dirdat directory, so our replicat has to be slightly adjusted to work with the proper trail files. I am altering the sequence for my replicat to reflect the actual sequence number of my last trail file.

GGSCI (sandbox.localdomain) 10> stop replicat rkafka

Sending STOP request to REPLICAT RKAFKA ...
Request processed.


GGSCI (sandbox.localdomain) 11> alter replicat rkafka EXTSEQNO 43

2016-03-24 12:03:27  INFO    OGG-06594  Replicat RKAFKA has been altered through GGSCI. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start RKAFKA with NOFILTERDUPTRANSACTIONS option.

REPLICAT altered.


GGSCI (sandbox.localdomain) 12> start replicat rkafka

Sending START request to MANAGER ...
REPLICAT RKAFKA starting


GGSCI (sandbox.localdomain) 13> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 12:03   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:12 ago)
Process ID           21412
Log Read Checkpoint  File dirdat/or000000043
                     First Record  RBA 0


GGSCI (sandbox.localdomain) 14>

Let’s change some data:

orclbd> select * from test_tab_2;

           PK_ID RND_STR_1  ACC_DATE
---------------- ---------- ---------------------------
               7 IJWQRO7T   07/07/13 08:13:52


orclbd> insert into test_tab_2 values (8,'TEST_INS1',sysdate);

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 5 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
[oracle@sandbox oggbd]$

[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836268086
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?Q???n?y?1?R#S?j???"BDTEST.TEST_TAB_2I42016-03-24 16:17:29.00033642016-03-24T12:17:31.733000(00000000430000043889
PK_IDRND_STR_1ACC_DATE8TEST_INS1&2016-03-24:12:17:26[oracle@sandbox oggbd]$
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836268130
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?6F!?Z?-?ZA8r^S?j?oN{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",

We got our schema definition file and a file with data changes.
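If you want to inspect what lands in the topics independently of Flume, the console consumer shipped with Kafka can be pointed at either topic. A hedged sketch:

# Read everything posted to the data topic so far (Ctrl-C to stop)
bin/kafka-console-consumer.sh --zookeeper sandbox:2181 --topic oggtopic --from-beginning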

orclbd> update test_tab_2 set RND_STR_1='TEST_UPD1' where pk_id=8;

1 row updated.

orclbd> commit;

Commit complete.

orclbd>

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 6 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
-rw-r--r--   1 flume oracle        316 2016-03-24 12:28 /user/oracle/ggflume/FlumeData.1458836877420
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458836877420
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable]??u????????qS?t,??"BDTEST.TEST_TAB_2U42016-03-24 16:27:39.00035642016-03-24T12:27:42.177000(00000000430000044052
PK_IDRND_STR_1ACC_DATE8TEST_INS1&2016-03-24:12:17:268TEST_UPD1&2016-03-24:12:17:26[oracle@sandbox oggbd]$

You can see that we only got a file with data changes, since no DDL changes were made. The transactions will be grouped into files according to our Flume parameters, as we discussed in the previous blog post.

You can also see the old value for the updated record along with the new one. Using that information we can reconstruct the changes, but we need to apply certain logic to decode them.

For a delete operation we get the operation flag "F" and the values of the deleted record. Again, there is no schema definition file since no schema changes were made.

Let’s try some DDL.

orclbd> truncate table test_tab_2;

Table TEST_TAB_2 truncated.

orclbd>
GGSCI (sandbox.localdomain) 4> info rkafka

REPLICAT   RKAFKA    Last Started 2016-03-24 12:10   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           21803
Log Read Checkpoint  File dirdat/or000043
                     2016-03-24 12:40:05.000303  RBA 45760


GGSCI (sandbox.localdomain) 5>

No new files on HDFS.

orclbd> insert into test_tab_2 select * from test_tab_3;

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 8 items
-rw-r--r--   1 flume oracle       1833 2016-03-23 12:14 /user/oracle/ggflume/FlumeData.1458749691685
-rw-r--r--   1 flume oracle       1473 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691686
-rw-r--r--   1 flume oracle        981 2016-03-23 12:15 /user/oracle/ggflume/FlumeData.1458749691718
-rw-r--r--   1 flume oracle        278 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268086
-rw-r--r--   1 flume oracle       1473 2016-03-24 12:18 /user/oracle/ggflume/FlumeData.1458836268130
-rw-r--r--   1 flume oracle        316 2016-03-24 12:28 /user/oracle/ggflume/FlumeData.1458836877420
-rw-r--r--   1 flume oracle        278 2016-03-24 12:35 /user/oracle/ggflume/FlumeData.1458837310570
-rw-r--r--   1 flume oracle        277 2016-03-24 12:42 /user/oracle/ggflume/FlumeData.1458837743709
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458837743709
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable*?2??????>iS??\??"BDTEST.TEST_TAB_2I42016-03-24 16:42:04.00020042016-03-24T12:42:06.774000(00000000430000045760
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52[oracle@sandbox oggbd]$

Again, we got only a file with data changes. I tried to compare the file we got for the previous insert with the one for the insert after the truncate, but couldn't find any difference except in the binary part of the Avro file. This will require additional investigation, and maybe clarification from Oracle. In the current state it looks like it would be easy to miss a truncate command for a table on the destination side.

Let's alter the table and add a column to it.

orclbd> alter table test_tab_2 add test_col varchar2(10);
Table TEST_TAB_2 altered.

orclbd>

We don't get any new files with the new table definition until we perform some DML on the table. Both files (the new schema definition and the data changes) appear after we insert, delete, or update any rows.

orclbd> insert into test_tab_2 values (8,'TEST_INS1',sysdate,'TEST_ALTER');

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume/
Found 10 items
...................................................
-rw-r--r--   1 flume oracle       1654 2016-03-24 12:56 /user/oracle/ggflume/FlumeData.1458838582020
-rw-r--r--   1 flume oracle        300 2016-03-24 12:56 /user/oracle/ggflume/FlumeData.1458838584891
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458838582020
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable-??ip??/?w?S??/{
  "type" : "record",
  "name" : "TEST_TAB_2",
  "namespace" : "BDTEST",
................
        "name" : "TEST_COL",
        "type" : [ "null", "string" ],
        "default" : null
.................

[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1458838584891
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritabletr?V?_$???:2??S??/w?"BDTEST.TEST_TAB_2I42016-03-24 16:56:04.00026042016-03-24T12:56:08.370000(00000000430000047682
PK_IDRND_STR_1ACC_DATETEST_COL8TEST_INS1&2016-03-24:12:56:01TEST_ALTER

I used JMeter to generate some load, and the setup could easily, with almost no delays, replicate 225 transactions per second (30% inserts, 80% updates). This was not a test of Kafka or Flume themselves, which can sustain far more load, but rather of the combination of GoldenGate with the Big Data infrastructure. It was stable, without any errors. I do understand that this test is very far from any real production workflow, which might include an Oracle database (or any other RDBMS) + GoldenGate + Kafka + Storm + …, and the final data format may be completely different. So far the adapters are looking good and doing the job. In the next post I will look at the HBase adapter. Stay tuned.

Categories: DBA Blogs

Log Buffer #467: A Carnival of the Vanities for DBAs

Thu, 2016-03-31 08:40

This Log Buffer Edition brings some of the top blog posts from Oracle, SQL Server and MySQL.

Oracle:

An Exadata quarter rack has two database servers and three storage cells. In a typical setup, such a system would have three ASM disk groups, say DATA, RECO and DBFS_DG. Usually the disk group DATA would be high redundancy and the other two disk groups would be normal redundancy.

Best practice for calling web services from Oracle Process Cloud Service

2 Min Tech Tips at Oracle OpenWorld: Are You Ready for Your Close-Up?

Are your SQL Plus scripts going to ‘ell ?

New ways of input still on the verge of the enterprise

SQL Server:

Why Every SQL Server Installation Should Be a Cluster

When AUTO_UPDATE_STATISTICS Doesn’t Happen

Fixing Maintenance Plan Error code 0x534

SQL Server Table Smells

Some companies have been slow to acquire big data applications. They discovered that modern hardware platforms and database management systems were more than adequate for most of their business analytics needs.

MySQL:

Galera Cluster and Docker Swarm

MariaDB 10.1.13 and Connector/J 1.3.7 now available

Why an App-Centric View Isn’t Enough

How to Install and Configure MySQL Cluster on CentOS 7

Invalid datetime when converting to timestamp

Categories: DBA Blogs

In Depth: MySQL 5.6+ DDL

Tue, 2016-03-29 09:07
Overview

DDL (Data Definition Language) statements create, alter, and remove database objects. These types of changes can be very dangerous to make on such a critical piece of your infrastructure. You want to make sure that the command you are executing has been given proper thought and testing.

In this post I go through multiple versions of MySQL and verify the best course of action to take when executing DDL statements. There are many things you have to consider when making these types of changes, such as disk space, load on the database server, slave replication, the type of DDL statement you are executing, and whether it will lock the table.

Because of these risks, there are tools that can be used to help mitigate some of the dangers. But unless you have tested and verified their functionality, these tools in themselves can cause trouble. Whenever in doubt, take the time to test and verify any changes that you will make. In my testing I will be using pt-online-schema-change in particular, since it is a very popular tool, I have used it many times, and it was originally created before MySQL offered online DDL changes. In some cases, depending on your environment, the best course of action may be to remove the database server from being accessed, by failing over to a slave or taking a cluster node offline.

I will be focusing on the most common DDL statements, as I want to keep this post to a reasonable size. By default, many MySQL DDL statements use the INPLACE algorithm where they are able to, which is only available in MySQL 5.6 or later. Earlier versions (5.5, and 5.1 with the InnoDB plugin) had fast index creation, but all other table alters were blocking. Online DDL with the INPLACE algorithm allows MySQL to make a copy of the table in the background, copy the data to this table, make your table alters, and then swap the tables, all without locking the table. Some DDL statements can be done instantaneously, such as dropping an index or renaming a column. When MySQL isn't able to use the INPLACE algorithm, it has to revert to the COPY algorithm, which will in turn lock the table. An example of this is changing a column definition from VARCHAR to BLOB.

Whenever you are doing an INPLACE alter, you will want to specify the algorithm in your command. This protects you in the case that MySQL is unable to do an INPLACE alter: MySQL will return an error rather than running the command with the COPY algorithm.


ALTER TABLE employee_test ALGORITHM=INPLACE, CHANGE COLUMN first_name first_name BLOB NULL;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.

All of my testing was done without specifying the algorithm, allowing MySQL to determine the best algorithm to use. If there are any DDL statements that you want more information on, please refer to the documentation for the release of MySQL that you are using. I will not be going into foreign keys.

The Setup

All of my testing was done in virtual machines (VMs) on my laptop. I have a VM that runs mysqlslap to perform remote DML statements such as SELECT, UPDATE, DELETE, and INSERT, causing load on the database server. This allows me to see any potential table locks or performance impact. Here is the setup of the MySQL machine and its components. I created the table shown below and imported 10 million rows. While mysqlslap was running I performed each of the DDL statements and watched that the DML statements were being executed with no table locks. I then recorded the time as each completed.

MySQL Server Stats
  • CPU: 4x CPUs at 2.6 GHz Intel Core i7
  • Memory allocated to VM: 2 GB
  • Memory allocated to MySQL InnoDB buffer pool: 1 GB
  • Flash storage
  • Table has 10 million rows
  • DML (Data Manipulation Language) statements such as SELECT, INSERT, UPDATE, and DELETE that will be executed against the table during the DDL statements (see the load sketch after this list)
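The exact load-generation command is not shown here, but a hedged sketch of the kind of mysqlslap invocation that produces this sort of concurrent DML could look like the following (host, credentials, and the query are placeholders):

# Run a stream of updates against employee_test from the remote VM:
# 15 concurrent clients, repeated to keep load on the server during the DDL
mysqlslap --host=mysql-vm --user=slap --password \
  --create-schema=employees --concurrency=15 --iterations=100 \
  --delimiter=";" \
  --query="UPDATE employee_test SET first_name='BigPurpleDog' WHERE last_name='SmallGreenCat';"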
Table Structure
CREATE TABLE `employee_test` (
`emp_no` int(11) NOT NULL AUTO_INCREMENT,
`birth_date` date NOT NULL,
`first_name` varchar(14) NOT NULL,
`last_name` varchar(16) NOT NULL,
`gender` enum('M','F') NOT NULL,
`hire_date` date NOT NULL,
PRIMARY KEY (`emp_no`),
KEY `ix_lastname` (`last_name`),
KEY `ix_firstname` (`first_name`)
) ENGINE=InnoDB AUTO_INCREMENT=10968502 DEFAULT CHARSET=latin1
MySQL DDL Commands
CREATE INDEX ix_hire_date ON employee_test (hire_date); --CREATE INDEX
CREATE FULLTEXT INDEX ix_lastname_fulltext ON employee_test(last_name); --CREATE FULLTEXT INDEX
DROP INDEX ix_hire_date ON employee_test; --DROP INDEX
OPTIMIZE TABLE employee_test; --OPTIMIZE TABLE
ALTER TABLE employee_test ADD COLUMN test_column INT NULL; --ADD COLUMN
ALTER TABLE employee_test DROP COLUMN f_name; --DROP COLUMN
ALTER TABLE employee_test CHANGE first_name f_name varchar(14) NOT NULL; --RENAME COLUMN
ALTER TABLE employee_test MODIFY COLUMN emp_no BIGINT AUTO_INCREMENT NOT NULL; --CHANGE COLUMN TYPE
pt-online-schema-change DDL Commands
pt-online-schema-change --execute --alter 'ADD FULLTEXT INDEX ix_lastname_fulltext (last_name)' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'ENGINE=INNODB' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'ADD COLUMN test_column3 INT NULL' D=employees,t=employee_test
pt-online-schema-change --execute --alter 'MODIFY COLUMN gender BLOB NULL' D=employees,t=employee_test
Results

This matrix is a representation of the testing that I performed and how quickly the commands executed. Be careful with FULLTEXT indexes on your tables, since they can potentially cause additional locking by creating the necessary infrastructure in the background; please see MySQL InnoDB Fulltext Indexes for more details. This requirement causes a great deal of locking on the table.

DDL Matrix

pt-online-schema-change

For the DDL statements that cause locking of the table, we wanted to look at incorporating pt-online-schema-change to help us overcome this obstacle.

pt-online-schema-change results

pt-online-schema-change allowed us to perform the operations that previously locked the table with no locking. pt-online-schema-change also has many other features, such as helping to limit the impact on slave replication and handling foreign keys. But it also has its limitations, such as not being able to run against a table that already has triggers, and complications with foreign keys. There are also impacts on your environment if it is not properly tested and verified. One such example: every time I ran pt-online-schema-change in my test it caused a deadlock, causing mysqlslap to die and no longer perform any further statements.

mysqlslap: Cannot run query UPDATE employee_test SET first_name = 'BigPurpleDog' WHERE last_name = 'SmallGreenCat'; ERROR : Deadlock found when trying to get lock; try restarting transaction

This is why it is very important to determine what impact, if any, pt-online-schema-change may have on your environment before starting to use it. I did not encounter this behavior with any of the MySQL DDL statements that I ran.
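One low-risk way to check is to let the tool report what it would do without touching your data, using its --dry-run mode. A hedged sketch against the same test table:

# Create and alter the new empty table, but do not create triggers,
# copy data, or swap tables
pt-online-schema-change --dry-run --alter 'ADD COLUMN test_column3 INT NULL' D=employees,t=employee_test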

Performance Impact

While performing the changes there were consistent increases in CPU load, disk I/O, and disk usage as the new tables were being created for the table alters. We have to remember that when certain DDL statements are executed, a full copy of the table is performed, so you will want to make sure you have enough disk space to complete the change. This is why it is very important to take into consideration the size of the table you are altering and the load on the MySQL server while performing DDL statements. It is preferable to run any of the DDL statements that cause table copies during off hours, so as to avoid any delays or outages to the application that is using the data.
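A quick way to estimate how much space a table copy will need is to check the table's current footprint and compare it with the free space on the datadir volume. A hedged sketch (the schema name matches this test; the datadir path is an assumption):

# Approximate on-disk size of the table that will be copied
mysql -e "SELECT table_name, ROUND((data_length + index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema='employees' AND table_name='employee_test';"

# Free space on the volume holding the MySQL datadir
df -h /var/lib/mysql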

Query Execution Impact

Query Execution Baseline

Server Performance Impact

MySQL Alter Load

Conclusion

As I have observed in performing these tests, there are many things to consider when executing DDL statements in order to avoid potential pitfalls. Here is a summary of the recommendations for executing DDL statements or using pt-online-schema-change. Before considering any of this, determine whether the statement you are going to perform will copy the table, and if it does, make sure you have enough disk space.

Without Fulltext
With Fulltext

If you are going to make changes to your production servers, make sure that you run your DDL statements during off hours, when the server is at its lowest utilization for both CPU and disk.

As an added safety measure, when you are performing any of the MySQL DDL statements that you expect to be executed INPLACE without locking the table, make sure you specify ALGORITHM=INPLACE in your statement. If MySQL is unable to execute the command in place, it will simply return an error instead of executing the statement with the COPY algorithm, which would lock the table. Here are samples of the DDL statements that you should be able to run INPLACE without causing any locking of your table.

ALTER TABLE employee_test ALGORITHM=INPLACE, ADD INDEX ix_hire_date (hire_date); --CREATE INDEX
ALTER TABLE employee_test ALGORITHM=INPLACE, DROP INDEX ix_firstname; --DROP INDEX
ALTER TABLE employee_test ALGORITHM=INPLACE, ENGINE=INNODB; --OPTIMIZE TABLE
ALTER TABLE employee_test ALGORITHM=INPLACE, ADD COLUMN test_column INT NULL; --ADD COLUMN
ALTER TABLE employee_test ALGORITHM=INPLACE, DROP COLUMN f_name; --DROP COLUMN
ALTER TABLE employee_test ALGORITHM=INPLACE, CHANGE first_name f_name varchar(14) NOT NULL; --RENAME COLUMN

 

 


Categories: DBA Blogs

DataStax OpsCenter upgrade (4.1 to 5.1) for Cassandra – issue and resolution

Tue, 2016-03-29 08:42

For the Apache Cassandra cluster (version C* 1.2.16) that I've supported, monitoring is done through DataStax OpsCenter, version 4.1.2. As part of the effort to improve the monitoring capability for this cluster, my team decided to first upgrade OpsCenter to version 5.1.4, the latest available version of OpsCenter that is compatible with Cassandra 1.2.16. The same OpsCenter is also used to monitor another cluster running DataStax Enterprise (DSE) 4.5.2 (which corresponds to Apache Cassandra version 2.0.10).

During the upgrade we ran into an issue, and we couldn't find a similar problem described anywhere on Google. We'd like to use this post to document the problem we faced, as well as the solution and the findings we made during the upgrade.

 

Problem Overview

The OpsCenter upgrade procedure is as straightforward as what is described in the DataStax OpsCenter documentation. After the OpsCenter upgrade, the OpsCenter web portal detects mismatched versions of the DataStax agents on all nodes. We chose the "FixMe" option from the portal to upgrade the agents to version 5.1.4 on all monitored nodes. After the agent upgrade, we addressed some agent configuration issues in the "address.yaml" file to reflect the changes brought by the new version of OpsCenter/datastax-agent.

After all this was done, we double-checked the log files for OpsCenter and the DataStax agents. The OpsCenter log file was mostly clear, and the agent logs for the DSE 4.5.2 cluster were also clear, but the agent logs for the Cassandra 1.2.16 cluster were NOT. The corresponding OpsCenter web portal was not able to display Cassandra metrics for the C* 1.2.16 cluster.

In each of the datastax-agent log files in the C* 1.2.16 cluster, we saw a lot of repeating errors like the ones below:

          ERROR [async-dispatch-3] 2016-02-19 12:57:52,750 There was an error when attempting to load stored rollups.
          com.datastax.driver.core.exceptions.InvalidQueryException: Undefined name key in where clause (‘key EQ ‘<… …>”)
          at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
          at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:291)
          at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:205)
          at clojurewerkz.cassaforte.client$execute.invoke(client.clj:289)
          … …
          ERROR [cassandra-processor-1] 2016-02-19 13:00:02,295 Error when proccessing cassandra callcom.datastax.driver.core.exceptions.InvalidQueryException: Unknown identifier key

 

Problem Analysis and Solution

The fact that the error showed up in the datastax-agent log files gave me a hint that it might be related to the agent failing to write collected metrics into the OpsCenter tables. So, as the first step of the analysis, I compared the schema of the "OpsCenter" keyspace between the two monitored clusters. Below is an example comparing the definitions of two OpsCenter tables between the two clusters.

C* 1.2.16 Cluster:

CREATE TABLE events (
  "KEY" blob,
  column1 blob,
  value blob,
  PRIMARY KEY ("KEY", column1)
)

CREATE TABLE events_timeline (
  "KEY" blob,
  column1 bigint,
  value blob,
  PRIMARY KEY ("KEY", column1)
)

DSE 4.5.2 Cluster:

CREATE TABLE events (
  key text,
  action bigint,
  level bigint,
  success boolean,
  time bigint,
  PRIMARY KEY ((key))
)

CREATE TABLE events_timeline (
  key text,
  column1 bigint,
  value blob,
  PRIMARY KEY ((key), column1)
)

 

From this comparison we can clearly see that the upgrade of OpsCenter and datastax-agent to version 5.1.4 somehow did not migrate the OpsCenter schema properly for the C* 1.2.16 cluster. The theory for the error is that the upgraded datastax-agent in the C* 1.2.16 cluster is trying to query or update Cassandra metrics in the OpsCenter tables in a way that matches the OpsCenter schema of the DSE 4.5.2 cluster. But the actual OpsCenter schema in C* 1.2.16 still has the old definitions, causing the invalid query exceptions seen in the log file.

Once the problem is clear, the solution is straightforward. The steps are summarized below:

In C* 1.2.16 cluster,

  1. Take a snapshot of the OpsCenter keyspace on all nodes (see the sketch after this list).
  2. Stop the DataStax agents on all nodes, so they won't try to write metrics into the OpsCenter tables.
  3. Use CQL to drop the OpsCenter tables and re-create them, matching the OpsCenter schema of the DSE 4.5.2 cluster. Make sure that all table properties are the same.
  4. Once the OpsCenter schema is recreated, start the DataStax agents on all nodes.
  5. Verify in the agent log files that the error messages are gone.
  6. Restart the OpsCenter service.
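For steps 1 and 2, something like the following can be run on each node. This is a hedged sketch; the snapshot tag is arbitrary and the service name assumes a packaged datastax-agent install:

# Step 1: snapshot the OpsCenter keyspace before touching its schema
nodetool snapshot -t pre-opscenter-fix OpsCenter

# Step 2: stop the agent so it no longer writes metrics
sudo service datastax-agent stop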

 

After these steps, we double-checked the log files for all the DataStax agents and for OpsCenter, and we confirmed that there were no errors. The OpsCenter web portal was also able to display the Cassandra metrics properly.

Categories: DBA Blogs

Amazon Database Migration Service – first try

Mon, 2016-03-28 08:04

Recently, while checking Amazon Web Services, I stumbled upon a service I hadn't tested before: Database Migration Service (DMS). I read the documentation and checked other resources, and found a good, fresh blog post, AWS Database Migration Service, written by Jeff Barr. It was really interesting and I decided to give the service a try.

I created an Oracle RDS instance on AWS as the target, and an Oracle Linux box on Azure with Oracle 12c EE as the source database for the migration. The source database SID was "test" and the destination was "orcl". I created tablespaces and users named "testmig" on both sides and created a table on the source database. Initially I loaded 1000000 records into the table and created an index. The schema on the destination database was empty. I also enabled archivelog mode on the source database.

Creating user and table on the source:

test> create user testmig identified by welcome1 default tablespace testmig temporary tablespace temp;

User created.

test> grant connect,resource to testmig;

Grant succeeded.

test> conn test

test> create table test_tab_1 (pk_id number, rnd_str_1 varchar2(15),use_date date,rnd_str_2 varchar2(15), acc_date date);

Table created.

test>

Loading the data:

[oracle@oradb1 patchdepot]$ head test_tab_1.dat
340,MLBO07LV,10/30/13 15:58:04,NABCFVAQ,12/08/17 18:22:48
341,M48R4107,12/09/13 12:30:41,ACA79WO8,12/15/16 08:13:40
342,KARMF0ZQ,04/21/14 08:53:33,JE6SOE0K,06/18/17 07:12:29
343,8NTSYDIS,11/09/14 23:41:48,FBJXWQNX,08/28/15 20:47:39
344,0LVKBJ8T,09/28/12 06:52:05,VBX3FWQG,10/28/15 06:10:42
345,Z22W1QKW,06/06/13 11:14:32,26BCTA9L,08/21/17 08:35:15
346,CGGQO9AL,08/27/14 02:37:41,15SRXZSJ,11/09/17 19:58:58
347,WKHINIUK,07/02/13 14:31:53,65WSGVDG,08/02/15 10:45:50
348,HAO9X6IC,11/17/12 12:08:18,MUQ98ESS,12/03/15 20:37:20
349,D613XT63,01/24/15 16:49:11,3ELW98N2,07/03/16 11:03:40
[oracle@oradb1 patchdepot]$ export NLS_DATE_FORMAT="MM/DD/YY HH24:MI:SS"
[oracle@oradb1 patchdepot]$ sqlldr userid=testmig table=test_tab_1
Password:

SQL*Loader: Release 12.1.0.1.0 - Production on Wed Mar 16 13:07:50 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: TEST_TAB_1
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table TEST_TAB_1:
  100000 Rows successfully loaded.

Check the log files:
  test_tab_1.log
  test_tab_1_%p.log_xt
for more information about the load.
[oracle@oradb1 patchdepot]$

On the target system:

rdsorcl> create tablespace testmig;

Tablespace TESTMIG created.

rdsorcl> create user testmig identified by welcome1 default tablespace testmig;

User TESTMIG created.

rdsorcl>

In the blog post mentioned above, the migration was done without replication, and I was curious to test it with some ongoing DML activity on the source database. I set up a Linux box with JMeter and started a load at a pace of about 15 transactions per second. The transactions were inserts and updates on the created table.

Everything was working fine so far, so I switched to the Database Migration Service on AWS. The service has a pretty easy and clear workflow. You just push the "Create migration" button and it guides you through the process. In general, you need to create a replication instance, endpoints for source and target, and a task to start the initial load and replication.

I created a replication instance and, while it was being created (which took some time), I was asked to set up the endpoints for the source and target. I hit the first issue when I tried to use a DNS name for my Azure instance: the test connection failed with a timeout, and it was not clear where the problem was; it could have been either a connectivity or a DNS issue. The issue was solved by providing the IP address instead of the domain name for my Azure instance.
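A quick way to tell a DNS problem from a connectivity problem is to resolve the name and probe the listener port separately. A hedged sketch, with placeholders standing in for the actual Azure host:

# Does the name resolve at all from this network?
nslookup <azure-host-dns-name>

# Is the Oracle listener port reachable? (-z: scan only, -w 5: 5-second timeout)
nc -vz -w 5 <azure-host-ip> 1521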
The test for the target endpoint failed with the same timeout, but the reason was totally different: it was not DNS, but rather a connection issue. At first I couldn't figure it out, because I was able to connect to my RDS instance from my laptop using the server name and port, but the test endpoint in DMS was not working. Eventually I figured out that the problem was in the security group for the RDS instance. By default the AWS RDS instance was created with a security group allowing connections from outside but somehow restricting connections from DMS. I changed the security group for the RDS instance to "default" and was then able to successfully test the endpoint in DMS.
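Rather than switching to the "default" group, another option is to add an inbound rule to the RDS instance's security group allowing the replication instance to reach the Oracle listener port. A hedged sketch using the AWS CLI; the group IDs are placeholders:

# Allow Oracle listener traffic (1521) from the security group
# used by the DMS replication instance
aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp \
  --port 1521 \
  --source-group sg-22222222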

The next step was to create a task. I created a task with an initial load and ongoing replication for my testmig schema. The task was supposed to drop any tables on the target (you can choose truncate instead if you want), create the objects, move the data, and keep replication going until cutover day, when you can switch your applications to the new database. It tells you that you need to set up supplemental logging for replication. Unfortunately, it doesn't tell you what kind of supplemental logging you have to set up.

So, I enabled minimal data supplemental logging on my Azure test instance.

test> alter database add supplemental log data;
Database add SUPPLEMENTAL altered.

test> exec dbms_capture_adm.prepare_table_instantiation('testmig.test_tab_1','keys')

PL/SQL procedure successfully completed.

test>

It was not enough, and I got an error. By default you do not get logging for your task, only configuration and statistics about replicated and loaded objects; as a result, if you get an error, it is not clear where to look. I enabled supplemental logging for the primary key on my replicated table and recreated the task, this time checking the logging checkbox. I got an error again, but now I had a log and was able to see what was causing the issue.

2016-03-16T19:41:11 [SOURCE_CAPTURE  ]I:  Oracle compatibility version is 12.1.0.0.0  (oracle_endpoint_conn.c:86)
2016-03-16T19:41:11 [SOURCE_CAPTURE  ]I:  Oracle capture start time: now  (oracle_endpoint_capture.c:701)
2016-03-16T19:41:12 [SOURCE_CAPTURE  ]I:  New Log Miner boundaries in thread '1' : First REDO Sequence is '4', Last REDO Sequence is '4'  (oracdc_reader.c:589)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]W:  Supplemental logging is not defined for table with no key 'TESTMIG.TEST_TAB_1'  (oracle_endpoint_utils.c:831)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]E:  Supplemental logging for table 'TESTMIG.TEST_TAB_1' is not enabled properly [122310] Supplemental logging is not correct (oracle_endpoint_unload.c:245)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]I:  Unload finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 0 rows sent.  (streamcomponent.c:2567)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]E:  Failed to init unloading table 'TESTMIG'.'TEST_TAB_1' [122310] Supplemental logging is not correct (oracle_endpoint_unload.c:441)

It looked like my supplemental logging was not enough, so I added supplemental logging for all columns and for the entire testmig schema. I recreated the task and started it again.

test> exec dbms_capture_adm.prepare_table_instantiation('testmig.test_tab_1','all');
PL/SQL procedure successfully completed.

test> exec dbms_capture_adm.prepare_schema_instantiation('testmig');
PL/SQL procedure successfully completed.

test>

This time it worked fine and was able to perform the initial load.

2016-03-16T19:49:19 [SOURCE_CAPTURE  ]I:  Oracle capture start time: now  (oracle_endpoint_capture.c:701)
2016-03-16T19:49:20 [SOURCE_CAPTURE  ]I:  New Log Miner boundaries in thread '1' : First REDO Sequence is '4', Last REDO Sequence is '4'  (oracdc_reader.c:589)
2016-03-16T19:49:31 [SOURCE_UNLOAD   ]I:  Unload finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 100723 rows sent.  (streamcomponent.c:2567)
2016-03-16T19:49:31 [TARGET_LOAD     ]I:  Load finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 100723 rows received. 0 rows skipped. Volume transfered 45929688  (streamcomponent.c:2787)

What about ongoing changes? Replication kept running and the tables stayed in sync. Replication lag in my case was minimal, but keep in mind that it was just one table with a low transaction rate. In the end I switched my load to the AWS RDS database, then stopped and deleted the DMS task. The migration was complete. I compared the data in the tables by running a couple of simple checks on counts and rows, and by running one table "minus" the other. Everything matched.

rdsorcl> select max(pk_id) from testmig.test_tab_1;

      MAX(PK_ID)
----------------
         1000843

rdsorcl> select * from testmig.test_tab_1 where pk_id=1000843;

           PK_ID RND_STR_1       USE_DATE                    RND_STR_2       ACC_DATE
---------------- --------------- --------------------------- --------------- ---------------------------
         1000843 OUHRTHQ8        02/11/13 07:27:44           NFIAODAU        05/07/15 03:49:29

rdsorcl>

----------------

test> select max(pk_id) from testmig.test_tab_1;

      MAX(PK_ID)
----------------
         1000843

test> select * from testmig.test_tab_1 where pk_id=1000843;

           PK_ID RND_STR_1       USE_DATE                    RND_STR_2       ACC_DATE
---------------- --------------- --------------------------- --------------- ---------------------------
         1000843 OUHRTHQ8        02/11/13 07:27:44           NFIAODAU        05/07/15 03:49:29

test>

test> select count(*) from (select * from test_tab_1 minus select * from test_tab_1@rdsorcl);

        COUNT(*)
----------------
               0

test>

A summary of DMS:

    • You may need to adjust security groups for the target RDS or EC2 systems; the default groups may block connections from DMS.
    • It is better to use an IP address for source endpoints, since DNS resolution may not be reliable.
    • Enable logging when you create a task.
    • If you enable replication from an Oracle database, you have to set up full supplemental logging for the replicated schemas on your source system (see the recap after this list).
    • Understanding and fixing errors requires at least a basic knowledge of replication and how it works.
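
For reference, here is the supplemental logging that turned out to be sufficient in this test, consolidated into one SQL*Plus sketch (the schema and table names are the ones used in this example; adjust them for your own source database):

test> alter database add supplemental log data;
test> exec dbms_capture_adm.prepare_schema_instantiation('testmig');
test> exec dbms_capture_adm.prepare_table_instantiation('testmig.test_tab_1','all');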

Next time I will try heterogeneous replication from MySQL to Oracle and then the other way around.

Categories: DBA Blogs

MySQL Memory Usage Docs Get a FaceLift

Mon, 2016-03-28 07:59

The MySQL Documentation team recently gave these docs on how MySQL uses memory a much-needed face-lift. The new page provides a much clearer overview of how MySQL allocates memory, and provides many helpful links to be able to dig deeper.

For instance, if you weren’t aware of how Performance Schema memory utilization changed in 5.7, there is this helpful paragraph (emphasis mine):

The MySQL Performance Schema is a feature for monitoring MySQL server execution at a low level. As of MySQL 5.7, the Performance Schema dynamically allocates memory incrementally, scaling its memory use to actual server load, instead of allocating required memory during server startup. Once memory is allocated, it is not freed until the server is restarted. For more information, see Section 22.14, “The Performance Schema Memory-Allocation Model”.

Therefore, if you are starting a new project on MySQL 5.7, or upgrading an existing environment, and you have Performance Schema enabled, you might see your memory footprint rising inexplicably. According to the linked Performance Schema Memory-Allocation Model documentation, one reason might be the auto-scaling Performance Schema variables:


performance_schema_accounts_size
performance_schema_hosts_size
performance_schema_max_cond_instances
performance_schema_max_file_instances
performance_schema_max_index_stat
performance_schema_max_metadata_locks
performance_schema_max_mutex_instances
performance_schema_max_prepared_statements_instances
performance_schema_max_program_instances
performance_schema_max_rwlock_instances
performance_schema_max_socket_instances
performance_schema_max_table_handles
performance_schema_max_table_instances
performance_schema_max_table_lock_stat
performance_schema_max_thread_instances
performance_schema_users_size

Of course, you can limit each variable by supplying a value to prevent autoscaling beyond a point.
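
If you want to cap that growth, a minimal my.cnf sketch might look like the following (the variable names come from the list above; the values are arbitrary examples rather than recommendations, and the default of -1 lets 5.7 autosize them):

[mysqld]
# Cap a few Performance Schema sizing variables so they no longer autoscale.
performance_schema_max_table_instances = 1000
performance_schema_max_table_handles = 4000
performance_schema_max_index_stat = 2000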

There might be some areas missing, such as explicit MEMORY tables, but by and large it is a vast improvement.

Other honorable mentions where I've seen documentation updates include Limiting memory utilization of range optimizations and Configuring innodb_buffer_pool_size.

Happy reading!

Categories: DBA Blogs

Log Buffer #466: A Carnival of the Vanities for DBAs

Mon, 2016-03-28 07:47

This Log Buffer Edition covers a weekly round-up of blog posts from Oracle, SQL Server and MySQL.

Oracle:

The Universal Theme introduced with APEX 5 is immensely good looking and comes with a powerful Theme Roller to customize it.

The implementation of Ksplice has been greatly simplified. Now you just need to register your system(s) with Unbreakable Linux Network (ULN), subscribe to the appropriate Ksplice channel, use the yum command to install the uptrack package, and perform any custom configuration. Your systems will be automatically updated with the latest kernel and user space patches.

Every business book you read talks about delegation. It’s a core requirement for successful managers: surround yourself with good people, delegate authority and responsibility to them, and get out of their way.

Accelerating SQL Queries that Span Hadoop and Oracle Database

Oracle Big Data SQL 3.0 adds support for Hortonworks Data Platform and commodity clusters

SQL Server:

Instant File Initialization : Impact During Setup

Enumerate Windows Group Members

How to execute an SSIS package from the command line or a batch file

When AUTO_UPDATE_STATISTICS Doesn’t Happen

SQL Server Table Smells

MySQL:

MySQL replication primer with pt-table-checksum / pt-table-sync, part 2

How do you dig down into the JSON data, say like in comments on a blog post?

Percona XtraBackup 2.3.4 is now available

Connection timeout parameters in MySQL

What have we learnt in two decades of MySQL?

Categories: DBA Blogs

To apply or not to apply that Cumulative Update (CU)

Thu, 2016-03-24 11:56

Today the SQL Server Engineering team posted an important shift in their recommendations regarding applying Cumulative Updates (often referred to as CUs) on their blog. You can find it here.

About 4 months ago we had an internal debate regarding the best patch strategy and I noticed that our SQL Server DBAs were divided on the best approach.

Some insisted that installing CUs as they were released was the best practice, while others insisted that you should only patch if you needed the Hotfix. I don’t know what spurred it but I saw other discussions on the subject pop up in the community a few days later.

Throughout my career, I've been torn on the best strategy myself. I like to keep my systems up to date, but I'd always taken the approach that if you needed the hotfix, then with proper testing you should apply a CU. The release of a CU on its own has never been the trigger for me to patch all my SQL Servers, except when a lot of time had passed between Service Packs, which did happen.

I think that strategy of waiting a long time before applying a service pack is a flawed one and I don't recommend it. I don't think it's a good idea to be "one release behind" or wait a year. That said, as a career DBA I don't think I'll rush out and apply a CU unless it's fixing something. If I have the cycles, I may test it early on, but I'll probably wait a month or two and see what the community has to say about it before applying it to production.

I predict that you'll see Microsoft (and other vendors) move away from large service packs as they move to a more agile approach to their own software. I suspect it won't be long before we see a formal cancellation of large releases. This is all the more reason for us to ensure we have automation in place for testing and deployment, so that the release of an update isn't a significant topic in our systems-planning meetings.

Do you apply CUs right away or delay? What’s your patch-strategy?

Categories: DBA Blogs

Three essential practices for security compliance

Wed, 2016-03-23 13:40

No IT or business person needs to be told twice that a major security breach can have a devastating impact on a business. Yet enterprises routinely find themselves non-compliant with security best practices and even their own policies.

Why? First, there’s a lot of complexity to manage. And second, with IT teams constantly putting out fires, background functions like security tend to get shortchanged.

That said, there are a few simple things you can do to strengthen how you protect your data and your business.

1. Stay patched and monitor for unauthorized changes

You really aren't safe without up-to-date security patches for your vulnerable systems, which means most of them. Any software that faces outward or touches the Internet is definitely at risk. But internal personnel can pose threats as well, meaning even "inside" systems can be vulnerable.

The problem with patching is scale. If you’re a bank with 300 branches across the country, all with their own IT systems, you don’t have the time or the people to manually patch every system in a centralized, whole-enterprise way. Automation is essential: a mechanism for pushing patches out across all your departments and locations — and verifying successful installation.

Patching is essential, but it’s not enough. If you’re breached, the intruders will try to downgrade or otherwise weaken your defenses. So you need an automated auditing platform that: a) looks for unauthorized changes that could weaken your software systems; and b) reverts compromised systems back to the authorized version of software.

2. Only allow access that's strictly necessary

Mindset is a big part of security. When it comes to controlling access to system resources, data and applications, your default should be that no one has access to anything. “Permissions” then become very deliberate enablement of specific apps and services to specific users based on specific needs. People should only ever have access to the data and systems they need to do their immediate jobs.

Access rights should be linked to your provisioning systems so that when a person changes jobs or leaves your company, their old rights are immediately removed.

In general, security should match risk to systems, with levels of increasing verification when an employee’s behavior is unusual. For example, if someone has never logged in from a particular location but appears to be doing so now, serve them up an additional verifying question. If they’ve never logged into a system before, get them to verify their location and identity.

Users should also be prompted to confirm or deny unusual behaviour. Did you just log in from a new computer? Did you just change your password? These kinds of security health checks are being integrated into applications, periodically forcing users to review their settings and ensure their identity and security information is up to date.

What happens when you don’t have stringent rules like these? Weakly enforced access rules were at the root of a recent, headline-grabbing security breach at a major U.S. retailer. The intruders had access to one hacked device, but by exploiting weak permissions were able to access many other devices — and make off with 40 million credit card numbers.

3. Assume you've been hacked

It’s easy to have a defensive mindset about security: “We’ll stop the bad guys from getting in.” But the reality is they may already be in. The strongest security position comes from assuming you’ve already been hacked. Keep a vigilant watch for evidence of it.

This starts by imposing tight controls on systems that are key to your business operations. Audit all planned changes daily, recording these with approvals in a change log accessible only through off-site logging not connected to systems. If an intruder makes changes, the change log will be your first line of defense — it will be impossible for the hacker to cover their tracks because you will have a forensic change record in a protected location.

Security needs to be a priority in every area of your business. Business units should test the security of their operational practices as part of quarterly business continuity planning. You should regularly test your company-wide systems internally to identify vulnerabilities, and consider hiring professional security experts to attack or socially engineer access to your systems. When it comes to enterprise security, offense really is the best defense.

Categories: DBA Blogs

A Tale of Three Cities: Perspectives on innovation from New York, San Francisco and Sydney

Tue, 2016-03-22 11:29

Recently, Pythian hosted a number of Velocity of Innovation (Velocity) events. I moderated two of these: one last June in New York, and one in November in San Francisco. Another event in Sydney, Australia was moderated by Tom McCann, senior customer experience analyst with Forrester.

Our Velocity events have given us unique insights into what IT professionals in various regions see as their top priorities or concerns. And although we always framed our discussions with similar questions, it was interesting to see the different directions they took in each location — especially when it came to the topic of innovation.

So what makes a particular region fertile ground for innovation? And can you measure it?

The Global Innovation Index (GII) ranks countries based on a multitude of indicators of innovation. The United States ranks number 2 on the GII, behind Switzerland, while Australia is number 17, out of 141 countries. According to the GII website, the index aims to capture the multi-dimensional facets of innovation and provide the tools to assist in tailoring policies to promote long-term output growth, improved productivity and job growth.

The ideas discussed in the US and Australian locations seemed to align with the GII results, with US panelists expressing more positive attitudes and concrete ideas on how companies can improve agility and become more innovative. And while being at the forefront of technology in the Asia-Pacific region, the Australian panelists and audience members described more cautious approaches to achieving innovation.

Sydney: Cautiously moving forward

Early in the Sydney panel discussion, Chris Mendez, executive consultant big data and analytics from Industrie IT, sparked a lively discussion about innovation by asserting that innovation is lacking in that region.

“I actually don’t think there’s enough innovation in Australia, in particular. There’s a lot of talk about it, people are doing a lot of experiments, and there are some companies who’ve set up business purely based on tool sets that use data to innovate. But there are a few things that seem to be working against innovation, and I think one of those things is that it doesn’t stand on its own,” Mendez said.

According to Francisco Alvarez, vice president, APAC at Pythian, the risks associated with innovation might be holding companies back in Australia. “The main problem for most companies is that innovation equals risk,” Alvarez said.

Alvarez also commented on what it takes to make innovation work. “If you take a step back and look at the companies that are doing well in the market, you can see that there is one factor that differentiates them: they were not afraid to try to innovate. And because of that innovation they are getting their share of the market and gaining ground. Just look at the financial market. CBA was considered crazy a few years ago for all the investment they were making in technology, social media, apps and so on. They got ahead. And now everybody is trying to do the same,” he said.

Mendez thinks that innovation needs to start from the top. "I think there's also a very big misunderstanding at board levels about innovation because boards are there to actually stop you changing your business. The fundamental tenet is: 'We've got a great business model here, it's running well, we've got to make sure that any change to it doesn't damage that.' There's a natural caution at board levels and it's totally understandable," he said.

While cautious, the Sydney panelists expressed that they thought there is hope for more innovation in the future. They expressed a need to proceed slowly, watching what works for innovation leaders.

“The key is to have a balance,” Alvarez said.

New York: Early adopters

If you were to put our New York panelists on Geoffrey Moore's Technology Adoption Lifecycle (https://en.wikipedia.org/wiki/Geoffrey_Moore), you might classify them as early adopters rather than true innovators. Not surprising, since New York's competitive industries such as banking and publishing rely on innovative technologies, but they don't create them.

According to New York panelist, Forrester Analyst Gene Leganza, what makes an enterprise agile is the ability to sense what’s going on in the marketplace and to quickly respond to it. But, he said that innovation comes at a cost. “The flip side of agility is innovation. An interesting aspect of innovation is getting really hot talent into your environment. Getting the right talent and doing smart things and being leading edge are challenges. You have to figure out what level to drop in on, where you are in the industry. You need to determine if you are a startup or a state organization that needs to be a fast follower,” Leganza said.

Otto Toth, CTO at Huffington Post warned that innovating quickly is not always in the best interest of the business, or it may not be the way to do it properly. He asserted that quick innovation can actually work against the business, and that instead of making your business faster, being very agile can slow everything down.

“Too many decision-makers just slow down the process. It’s better to have a few people or a core team who make the decisions and come up with new features,” he added.

Leganza went on to describe what it takes at various levels of the organization. He said that there’s a notion at the engineer level that agility means bureaucracy won’t get in their way. Then there’s agility at the enterprise level, which is about reducing risk and understanding how soon change can be in production.

“The higher up you go, the more people are going to be receptive to what improves the whole portfolio rather than one project. This is where architects come in. They have been hands-on, but have the credibility and knowledge to guide the organization more strategically,” Leganza said.

San Francisco: The innovators

In San Francisco the narratives on innovation were quite different. Although cities don’t have their own GII ranking, you might assume that the West Coast IT leaders are the innovators. And judging by the discussion at the San Francisco event, this assumption seemed to be true.

Cory Isaacson, CTO at RMS, was one of our San Francisco panelists. His company runs catastrophe models for some of the world's largest insurance companies: scenarios that estimate what a disaster like an earthquake or hurricane might cost them. Isaacson has been working on bringing big data and scalable systems together to create a new cloud-based platform.

“At my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. But when you’re doing innovation, it is risky. There’s no way around it. There is a lot to evaluate: from different algorithms to the risk models and the catastrophe models,” said Isaacson.

Sean Rich, director of IT at Mozilla added to the San Francisco discussion by talking about some of the concrete innovations his company is working on. They’re taking a partnership approach to enable agility.

“Innovation is doing something new. In an effort toward achieving agility, one of the things that we’re doing is enabling the agility of our business partners, by changing our own operating model. Instead of traditional IT where we run all the services and infrastructure necessary to drive the business, we’re taking more of an enabler or partnership approach,” Rich said.

"We're now doing things like encouraging shadow IT, encouraging the use of SaaS applications and helping them really do that better through different service offerings like vendor management or change management of user adoption for certain platforms and data integration," he added.

“Overall, we’re looking at ourselves differently, and asking what new capabilities we need to develop, and what processes, tools and skills we need to enable agility for our marketing group or our product lines, as an example,” Rich said.

Aaron Lee, the Chief Data Officer at Pythian, runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics — he’s involved in some of the most leading edge initiatives for Pythian customers. He takes a practical approach to innovation with clients, and said that companies could improve innovation by looking at the root of the motivation for it.

“They need to ask: Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate?

“If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it starts to get more risk averse. Then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place,” Lee said.

Isaacson added that companies like Google have been known to allow an engineer to take a day a week or a day every two weeks to just look at things. “I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. Once we get more ahead of the curve, I think we could do that kind of thing,” he said.

Interested in being a part of discussions like these? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today's disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com. To view our schedule of upcoming events visit our Velocity of Innovation page.

Categories: DBA Blogs

Apache Cassandra 2.1 Incremental Repair

Mon, 2016-03-21 14:05

The "incremental repair" feature has been around since Cassandra 2.1. Conceptually the idea behind incremental repair is straightforward, but it can get complicated. The official Datastax document describes the procedure for migrating to incremental repair, but in my opinion it doesn't give the full picture. This post aims to fill that gap by summarizing and consolidating the information on Cassandra incremental repair.

Note: this post assumes the reader has a basic understanding of Apache Cassandra, especially the “repair” concept within Cassandra.

 

1. Introduction

The idea of incremental repair is to mark SSTables that are already repaired with a flag (a timestamp called repairedAt indicating when they were repaired), so that when the next repair operation begins, only previously unrepaired SSTables are scanned for repair. The goal of an "incremental repair" is two-fold:

1) It reduces the large expense involved in a repair operation that has to calculate the "merkle tree" on all SSTables of a node;

2) It also makes repair network-efficient, because only rows that are marked as "inconsistent" will be sent across the network.

2. Impact on Compaction

"Incremental repair" relies on an operation called anticompaction to fulfill its purpose. Basically, anticompaction means splitting an SSTable into two: one that contains repaired data and one that contains unrepaired data. With the separation of the two sets of SSTables, the compaction strategy used by Cassandra also needs to be adjusted accordingly, because a repaired SSTable cannot be compacted together with an unrepaired one; otherwise, the repaired state would be lost.

Please note that when an SSTable is fully covered by a repaired range, no anticompaction will occur. It will just rewrite the repairedAt field in SSTable metadata.

The SizeTiered compaction strategy takes a simple approach: size-tiered compaction is executed independently on the two sets of SSTables (repaired and unrepaired) that result from the anticompaction triggered by incremental repair.

For the Leveled compaction strategy, leveled compaction is executed as usual on the repaired set of SSTables, while SizeTiered compaction is executed on the unrepaired set.

For the DateTiered compaction strategy, "incremental repair" should NOT be used.

3. Migrating to Incremental Repair

By default, "nodetool repair" in Cassandra 2.1 does a full, sequential repair. We can use "nodetool repair" with the "-inc" option to enable incremental repair, for example as shown below.
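
A minimal sketch (the keyspace name is a placeholder; since sequential and incremental repair do not work together in 2.1, the parallel option is used):

nodetool repair -par -inc my_keyspace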

For the Leveled compaction strategy, incremental repair actually changes the compaction strategy to SizeTiered compaction for unrepaired SSTables. If nodetool repair is executed for the first time on the Leveled compaction strategy, it will do SizeTiered compaction on all SSTables, because until the first incremental repair is done, Cassandra doesn't know their repaired states. This is a very expensive operation, so it is recommended to migrate to incremental repair one node at a time, following the procedure below (a command-level sketch follows the list):

  1. Disable compaction on the node using nodetool disableautocompaction
  2. Run the default full, sequential repair.
  3. Stop the node.
  4. Use the tool sstablerepairedset to mark all the SSTables that were created before you disabled compaction.
  5. Restart Cassandra.
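
A minimal command-level sketch of the migration procedure, assuming a keyspace named my_keyspace and a default data directory (the paths, service commands and keyspace name are placeholders; check the usage output of sstablerepairedset on your build, as some versions also require a --really-set safety flag):

# 1. disable automatic compaction on the node
nodetool disableautocompaction
# 2. run the default full, sequential repair
nodetool repair my_keyspace
# 3. stop the node
sudo service cassandra stop
# 4. mark the SSTables created before compaction was disabled as repaired
find /var/lib/cassandra/data/my_keyspace -name "*Data.db" > /tmp/sstables.txt
sstablerepairedset --is-repaired -f /tmp/sstables.txt
# 5. restart Cassandra
sudo service cassandra start
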
3.1 Tools for managing SSTable repaired/unrepaired state

Cassandra offers two utilities for SSTable repaired/unrepaired state management:

  • sstablemetadata is used to check repaired/unrepaired state of an SSTable. The syntax is as below:

             sstablemetadata <sstable filenames>

  • sstablerepairedset is used to manually mark if an SSTable is repaired or unrepaired. The syntax is as below. Note that this tool has to be used when Cassandra is stopped.

             sstablerepairedset [--is-repaired | --is-unrepaired] [-f <sstable-list> | <sstables>]

Please note that with the sstablerepairedset utility you can also stop incremental repair on Leveled compaction and restore the data to be leveled again, using the "--is-unrepaired" option. Again, the node needs to be stopped first.
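
As a quick illustration of checking an SSTable's state with sstablemetadata (the file name is a placeholder and the exact output format may vary between versions):

$ sstablemetadata my_keyspace-my_table-ka-1-Data.db | grep -i "repaired at"
Repaired at: 0

A value of 0 means the SSTable is unrepaired; a non-zero value is the repairedAt timestamp set by incremental repair.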

4. Other Considerations with Incremental Repair

There are some other things to consider when using incremental repair.

  • For Leveled compaction, once incremental repair is used, it should be used continuously. Otherwise, only SizeTiered compaction will be executed. It is recommended to run incremental repair daily and run full repairs weekly to monthly.
  • Recovering from missing data or corrupted SSTables requires a non-incremental full repair.
  • The "nodetool repair" --local option should only be used with full repair, not with incremental repair.
  • In C* 2.1, sequential repair and incremental repair do NOT work together.
  • With an SSTable's repaired state being tracked via its metadata, some Cassandra tools can impact that state:
    1. Bulk loading will make loaded SSTables unrepaired, even if they were repaired in a different cluster.
    2. If scrubbing causes dropped rows, new SSTables will be marked as unrepaired. Otherwise, SSTables will keep their original repaired state.
Categories: DBA Blogs

Pythian at Collaborate 16

Mon, 2016-03-21 13:27

Collaborate is a conference for Oracle power users and IT leaders to discuss and find solutions and strategies based on Oracle technologies. Having this many Oracle experts in one place only happens once per year, and Pythian is excited to be attending. If you are attending this year, make sure to register for some of the sessions featuring Pythian's speakers, listed below.

Collaborate 16 is on April 10-14, 2016 at the Mandalay Bay Resort and Casino in Las Vegas, Nevada, US.

 

Pythian Collaborate 16 Speaker List:

 

Michael Abbey | Consulting Manager | Oracle ACE

Communications – the Good, the Bad, and the Best

Tues April 12 | 9:15 a.m. – 10:15 a.m. | North Convention, Room South Pacific D

Traditional DB to PDB: The Options

Tues April 12 | 2:15 p.m. – 3:15 p.m. | Room Jasmine A

Documentation – A Love/Hate Relationship (For Now)

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Palm A

 

Nelson Calero | Database Consultant | Oracle ACE

Exadata Maintenance Tasks 101

Tues April 12 | 10:45 a.m. – 11:45 a.m. | Room Palm C

Evolution of Performance Management: Oracle 12c Adaptive Optimization

Tues April 12 | 3:30 p.m. – 4:30 p.m. | Room Jasmine A

 

Subhajit Das Chaudhuri | Team Manager

Deep Dive Into SSL Implementation Scenarios for Oracle Application E-Business Suite

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Breakers E

 

Alex Gorbachev | CTO | Oracle ACE Director

Oaktable World: TED Talks

Wed April 13 | 12:00 p.m. – 12:30 p.m. | Room Mandalay Bay Ballroom

Oaktable World: Back of a Napkin Guide to Oracle Database in the Cloud

Wed April 13 | 4:15 p.m. – 5:15 p.m. | Room Mandalay Bay Ballroom

 

Gleb Otochkin | Principal Consultant

Two towers or story about data migration. Story about moving data and upgrading databases.

Mon April 11 | 4:30 p.m. – 5:30 p.m. | Room Jasmine A

 

Simon Pane | ATCG Principal Consultant | Oracle Certified Expert

Oracle Database Security: Top 10 Things You Could & Should Be Doing Differently

Mon April 11 | 2 p.m. – 3 p.m. | Room Palm A

Time to get Scheduling: Modernizing your DBA scripts with the Oracle Scheduler (goodbye CRON)

Tues April 12 | 10:45 a.m. – 11:45 a.m. | Room Palm A

 

Roopesh Ramklass | Principal Consultant

Oracle Certification Master Exam Prep Workshop

Sun April 10 | 9:00 a.m. – 3:00 p.m. | Room Jasmine C

Fast Track Your Oracle Database 12c Certification

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Jasmine A

 

Categories: DBA Blogs

The 5 Best Things That Will Happen to DBAs When SQL Server Moves to Linux

Fri, 2016-03-18 13:50

 

In the second half of 2017, SQL Server will start calling Linux its second home. Azure Data Lake for Ubuntu was the sign that Microsoft was serious about going Linux. Private preview is already available for SQL Server on Linux. This scribe is not part of it, but let me wildly guess what would warm the hearts of those DBAs who have played with Oracle on Linux and SQL Server on Windows:

 

  1. Cleanup won’t require sifting through registry entries and cleaning it up. The uninstall would be quick, simple, and a breeze.
  2. No confusion about Windows authentication or SQL Server authentication.
  3. Much, much better system utilities for monitoring and root cause analysis.
  4. Boasting and bragging rights about managing an enterprise database on Linux.

 

Probably the most serious thing that will happen with SQL Server on Linux is that more and more Oracle DBAs who prefer to use Linux, will start taking an interest in managing SQL Server.

Pythian is perfectly poised to leverage this change in technology from Microsoft. We have world-class SQL Server DBAs, Linux gurus, and some magnificent Oracle DBAs. Existing clients as well as new clients can count on these resources to get their SQL Server databases supported or migrated to Linux.

And of course, it doesn't really matter whether these databases are in the cloud or not come 2017, because Pythian has already covered that too.

Categories: DBA Blogs

Pythian Tech Chatter

Fri, 2016-03-18 12:23

At Pythian, the DevOps, Big Data, and Data Science teams use Slack for our IM system. We’re a diverse group drawn to a wide range of technologies so there’s some interesting and valuable chatter about what folks are reading. Here are some subjects that came across our channels the past couple of weeks:

DevOps

Pythian engineers support Solr and Elasticsearch for a number of clients. Here’s a great summary of the important tunables in elasticsearch:
https://tech.scrunch.com/blog/lessons-learned-from-a-year-of-running-elasticsearch-in-production/

We often are called into companies to help manage deployment infrastructure, and it is quite common to encounter an unwieldy monolithic application that has been cobbled together over the years. Our head of DevOps pointed out this book as a great resource for strategies to decompose those into microservices:
http://www.amazon.com/Microservices-Patterns-Applications-Designing-fine-grained/dp/069242427X

Big Data

As Data Lake architectures mature, we're seeing more comprehensive offerings from vendors. This blog talks about Microsoft's offerings. I like the integration of Active Directory for strong security and the U-SQL approach to pulling data from data lakes. While I'm not a huge fan of C#, the concept of having a library of extractors and outputters is a nice nuts-and-bolts approach:
http://tomkerkhove.ghost.io/2015/10/22/exploring-azures-data-lake/

Another thread from the Data Lakes discussion highlights the critical importance of Data Governance. Waterline’s Data Inventory tool is a strong player for MetaData/Governance automation:
http://blog.waterlinedata.com/blog/the-d-artagnan-of-hadoop-spoiler-alert-data-governance-for-hadoop

Data Science

Facebook released its implementation of deep learning neural nets last year. The Data Science team has been spending some time with it as they evaluate and build AI tools:
https://github.com/facebook/MemNN

The team has also been using some great Java tools for natural language processing from Stanford:
http://stanfordnlp.github.io/CoreNLP/index.html

Categories: DBA Blogs

GoldenGate 12.2 Big Data Adapters: part 2 – FLUME

Thu, 2016-03-17 11:36

In this blog post we continue our review of the new Oracle GoldenGate Big Data adapters. In the first part of the series I tested the basic HDFS adapter and checked how it worked with some DML and DDL. In this article I will try the Flume adapter and see how it works.

A quick reminder on what Flume is. It is not a topic about the popular Australian musician. Today we are talking about Apache Flume. In short, it is a pipeline, or a kind of streaming system, that allows you to move a large amount of data. It has a simple architecture and, in general, there are three main components:

a) Source: where data enters into Flume from an outside system.
b) Sink: responsible for passing data to the destination system whether it is the final destination, or another flow.
c) Channel: connects the Source and Sink.

I know that it is a rather simplistic description, but the main subject of this article is not what Flume can do, but how we can pass our data from Oracle to Flume using GoldenGate.
My first post discussed how to set up an Oracle source system, and how to start the GoldenGate initial load and extract. I am not repeating it here. Let's assume we have the source system: an Oracle database replicating DML and DDL for one particular schema, GGTEST, using Oracle GoldenGate 12.2 to write trail files on the box where we already have GoldenGate for Big Data. Have a look at the first part to see how to set up GoldenGate for Big Data (OGG BD).

So, we have our OGG BD setup and the manager up and running.

GGSCI (sandbox.localdomain) 1> info manager

Manager is running (IP port sandbox.localdomain.7839, Process ID 18521).


GGSCI (sandbox.localdomain) 2>

What we need now is to prepare our Flume agent to accept messages from OGG. I've already set up the Flume agent-ng service on my Linux box, and now we need to prepare the configuration file for the agent to handle the incoming stream and pass it to the destination system. We will set our source to "avro" and the sink will be writing to HDFS. The source can be either avro or thrift; according to the Oracle documentation, the Flume handler can stream data from a trail file to Avro or Thrift RPC Flume sources.

I have to admit that HDFS as the destination looks quite artificial, since we have a dedicated adapter for HDFS and don't need Flume to write there. But such a configuration can help us compare different adapters and what they can do.
I used Flume version 1.6.0:

[oracle@sandbox flume-ng]$ bin/flume-ng version
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
[oracle@sandbox flume-ng]$

Here is my configuration file for the Flume agent:

# Name/aliases for the components on this agent
agent.sources = ogg1
agent.sinks = hdfs1
agent.channels = ch1
#Avro source
agent.sources.ogg1.type = avro
agent.sources.ogg1.bind = 0.0.0.0
agent.sources.ogg1.port = 4141

# Describe the sink
agent.sinks.hdfs1.type = hdfs
agent.sinks.hdfs1.hdfs.path = hdfs://sandbox/user/oracle/ggflume
#agent.sinks.hdfs1.type = logger

# Use a channel which buffers events in memory
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 100000
agent.channels.ch1.transactionCapacity = 10000

# Bind the source and sink to the channel
agent.sources.ogg1.channels = ch1
agent.sinks.hdfs1.channel = ch1

I’ve made the configuration simple and clear. You may change agent.sources.ogg1.port and agent.sinks.hdfs1.hdfs.path  depending on your system.

On the target HDFS we have to create the directory defined in our sink configuration.

[oracle@sandbox ~]$ hadoop fs -mkdir /user/oracle/ggflume
[oracle@sandbox ~]$ hadoop fs -ls /user/oracle/ggflume
[oracle@sandbox ~]$

We can start our Flume agent now.

[root@sandbox conf]# service flume-ng-agent start
Starting Flume NG agent daemon (flume-ng-agent):           [  OK  ]
[root@sandbox conf]# service flume-ng-agent status
Flume NG agent is running                                  [  OK  ]
[root@sandbox conf]#
[root@sandbox conf]# tail /var/log/flume-ng/flume.log
25 Feb 2016 11:56:37,113 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
25 Feb 2016 11:56:37,121 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: CHANNEL, name: ch1 started
25 Feb 2016 11:56:37,122 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink hdfs1
25 Feb 2016 11:56:37,123 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source ogg1
25 Feb 2016 11:56:37,139 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:228)  - Starting Avro source ogg1: { bindAddress: 0.0.0.0, port: 4141 }...
25 Feb 2016 11:56:37,146 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SINK, name: hdfs1: Successfully registered new MBean.
25 Feb 2016 11:56:37,147 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SINK, name: hdfs1 started
25 Feb 2016 11:56:38,114 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SOURCE, name: ogg1: Successfully registered new MBean.
25 Feb 2016 11:56:38,115 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SOURCE, name: ogg1 started
25 Feb 2016 11:56:38,116 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:253)  - Avro source ogg1 started.
[root@sandbox conf]#

Flume is ready, and we can now prepare our OGG configuration. We have examples for the Flume adapter configuration files in $OGG_HOME/AdapterExamples/big-data/flume/ :

[oracle@sandbox oggbd]$ ll AdapterExamples/big-data/flume/
total 12
-rw-r--r--. 1 oracle oinstall 107 Dec  9 12:56 custom-flume-rpc.properties
-r-xr-xr-x. 1 oracle oinstall 812 Dec  9 12:56 flume.props
-rw-r--r--. 1 oracle oinstall 332 Dec  9 12:56 rflume.prm
[oracle@sandbox oggbd]$

We can copy the examples to our configuration directory and adjust them to our needs:

[oracle@sandbox oggbd]$ cp AdapterExamples/big-data/flume/* dirprm/

Here is configuration file for our adapter:

[oracle@sandbox oggbd]$ cat dirprm/flume.props

gg.handlerlist = flumehandler
gg.handler.flumehandler.type=flume
gg.handler.flumehandler.RpcClientPropertiesFile=custom-flume-rpc.properties
gg.handler.flumehandler.format=avro_op
gg.handler.flumehandler.mode=tx
#gg.handler.flumehandler.maxGroupSize=100, 1Mb
#gg.handler.flumehandler.minGroupSize=50, 500 Kb
gg.handler.flumehandler.EventMapsTo=tx
gg.handler.flumehandler.PropagateSchema=true
gg.handler.flumehandler.includeTokens=false
gg.handler.flumehandler.format.WrapMessageInGenericAvroMessage=true

goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=dirprm/:/usr/lib/flume-ng/lib/*:

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

You will need to adjust your gg.classpath depending on your system, as it has to include Flume java classes and a file with Flume source properties (custom-flume-rpc.properties file).
Here is my example of the custom-flume-rpc.properties file, which will be used by the OGG adapter to connect to the flume-ng agent. I've placed it in the dirprm directory along with the other parameter files.

[oracle@sandbox oggbd]$ cat dirprm/custom-flume-rpc.properties
client.type=default
hosts=h1
hosts.h1=localhost:4141
batch-size=100
connect-timeout=20000
request-timeout=20000

As you can see, my flume-ng agent is on the same host as the OGG, which may not be the case for you. You may need to provide the hostname and port of your running flume-ng agent. We also need to prepare the configuration file for our initial load. The OGG trail file is located in the dirdat/ directory and is named initld.

Here is our parameter file for initial load using passive replicat:

[oracle@sandbox oggbd]$ cat dirprm/irflume.prm
--initial
REPLICAT irflume
-- Command to run REPLICAT in passive mode
-- ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt
SPECIALRUN
END RUNTIME
EXTFILE /u01/oggbd/dirdat/initld
--DDLERROR default discard
DDL include all
TARGETDB LIBFILE libggjava.so SET property=dirprm/flume.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;

Let’s run the load and see what we get in the end:

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt

The command completed successfully and we got three new files on HDFS. The first two files had the schema description and the third one had the data for the replicated tables.

[root@sandbox ~]# hadoop fs -ls /user/oracle/ggflume
Found 12 items
-rw-r--r--   1 flume oracle       1833 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634620
-rw-r--r--   1 flume oracle       1762 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634621
-rw-r--r--   1 flume oracle       1106 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634622
[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634620
 {
        "name" : "PK_ID",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "PK_ID_isMissing",
        "type" : "boolean"
      }, {
        "name" : "RND_STR",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "RND_STR_isMissing",
        "type" : "boolean"
     ..................

[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634621
 "string"
  }, {
    "name" : "primary_keys",
    "type" : {
      "type" : "array",
      "items" : "string"
    }
  }, {
    "name" : "tokens",
    "type" : {
      "type" : "map",
      "values" : "string"
    },
...........................
[root@sandbox ~]# hadoop fs -tail /user/oracle/ggflume/FlumeData.1457626634622
:?v??8?????	SaQm?"BDTEST.TEST_TAB_1Ñ??
                                          ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.448000(00000000-10000002012
PK_ID1371O62FX&2014-01-24:19:09:20RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.459000(00000000-10000002155
PK_ID2371O62FX&2014-01-24:19:09:20HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.459001(00000000-10000002298
PK_ID3RXZT5VUN&2013-09-04:23:32:56RJ68QYM5&2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-03-10T11:17:14.460000(00000000-10000002441
PK_ID4RXZT5VUN&2013-09-04:23:32:56HW82LI73&2014-05-11:05:23:23"BDTEST.TEST_TAB_2?????"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-03-10T11:17:14.466000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&2013-07-07:08:13:52

The initial load has succeeded, and now we can create and start the proper ongoing replication to HDFS through Flume.
Let's prepare a new parameter file for our permanent Flume replicat and start it up.

GGSCI (sandbox.localdomain) 2> edit param rflume
REPLICAT rflume
-- Trail file for this example is located in "dirdat/" directory
-- Command to add REPLICAT
-- add replicat rflume, exttrail dirdat/or
TARGETDB LIBFILE libggjava.so SET property=dirprm/flume.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
HANDLECOLLISIONS
MAP ggtest.*, TARGET bdtest.*;


GGSCI (sandbox.localdomain) 1> add replicat rflume, exttrail dirdat/or, begin now
REPLICAT added.


GGSCI (sandbox.localdomain) 2> start replicat rflume

Sending START request to MANAGER ...
REPLICAT RFLUME starting

Let’s insert a row and see what we get on the target system.

orclbd> insert into ggtest.test_tab_1
  2  values (7,dbms_random.string('x', 8), sysdate-(7+dbms_random.value(0,1000)),
  3  dbms_random.string('x', 8), sysdate-(6+dbms_random.value(0,1000))) ;

1 row inserted.

orclbd> commit;

Commit complete.

orclbd>

As soon as the commit was executed, we received a couple of new files on HDFS: the first had the schema for the changed table, and the second had the data for the transaction, or "payload".

[root@sandbox ~]# hadoop fs -ls /user/oracle/ggflume
.................
-rw-r--r--   1 flume oracle       1833 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634620
-rw-r--r--   1 flume oracle       1762 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634621
-rw-r--r--   1 flume oracle       1106 2016-03-10 11:17 /user/oracle/ggflume/FlumeData.1457626634622
-rw-r--r--   1 flume oracle       1833 2016-03-10 12:43 /user/oracle/ggflume/FlumeData.1457631817021
-rw-r--r--   1 flume oracle        605 2016-03-10 12:43 /user/oracle/ggflume/FlumeData.1457631817022
[root@sandbox ~]#

[root@sandbox ~]# hadoop fs -cat /user/oracle/ggflume/FlumeData.1457631817021
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable???:]B?9?k??	]kTSa?m??{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {
    "name" : "op_type",
    "type" : "string"
  }, {
..............................
[root@sandbox ~]# hadoop fs -cat /user/oracle/ggflume/FlumeData.1457631817022
{EQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable~,?`?aHTZRB?Sa?ny
  "type" : "record",
  "name" : "generic_wrapper",
  "namespace" : "oracle.goldengate",
  "fields" : [ {
    "name" : "table_name",
    "type" : "string"
  }, {
    "name" : "schema_hash",
    "type" : "int"
  }, {
    "name" : "payload",
    "type" : "bytes"
  } ]
}Sa?nz?"BDTEST.TEST_TAB_1Ñ??
                            ?"BDTEST.TEST_TAB_1I42016-03-10 17:43:31.00169042016-03-10T12:43:33.464000(00000000080001408270
PK_ID7XYJN3Z31&2014-04-21:09:01:21FL6Z8RPN&2013-08-06:21:40:02

I prepared and executed a small regression test of inserts and updates to the table using JMeter, pushing inserts and updates at a rate of about 29 transactions per second. Even with one Flume channel and my small Hadoop environment, it had a pretty good response time without throwing any errors. Flume put about 900 transactions per HDFS file.

-rw-r--r--   1 flume oracle     123919 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485465
-rw-r--r--   1 flume oracle      35068 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485466
-rw-r--r--   1 flume oracle     145639 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485467
-rw-r--r--   1 flume oracle     178943 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485468
-rw-r--r--   1 flume oracle     103285 2016-03-10 14:52 /user/oracle/ggflume/FlumeData.1457639485469
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485467 | wc -l
804
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485468 | wc -l
988
[oracle@sandbox Downloads]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457639485469 | wc -l
570
[oracle@sandbox Downloads]$

I've also tried the "thrift" data source for Flume and it worked well too. To switch from "avro" to "thrift" I changed the value of the agent.sources.ogg1.type parameter in flume.conf and restarted the Flume agent. You also have to change client.type from default to thrift in your custom-flume-rpc.properties file; the two changed lines are sketched below. It worked fine, and I was able to get the information from the trail and write it to HDFS.
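
For clarity, these are the only two settings I had to change for the thrift test (values exactly as used in this environment):

# flume.conf
agent.sources.ogg1.type = thrift

# dirprm/custom-flume-rpc.properties
client.type=thrift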

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irflume.prm reportfile dirrpt/irflume.rpt
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/ggflume
Found 3 items
-rw-r--r--   1 flume oracle       1833 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311892
-rw-r--r--   1 flume oracle       1762 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311893
-rw-r--r--   1 flume oracle       1106 2016-02-25 16:05 /user/oracle/ggflume/FlumeData.1456434311894
[oracle@sandbox oggbd]$
[oracle@sandbox oggbd]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1456434311892
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritableOG?????$?{qS@]?{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
  }, {
.....


[oracle@sandbox oggbd]$ hadoop fs -cat  /user/oracle/ggflume/FlumeData.1456434311894
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable?????t?E?9j??S@??"BDTEST.TEST_TAB_1Ñ??
                                                                                                              ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.480000(00000000-10000002012
PK_ID1371O62FX&amp;2014-01-24:19:09:20RJ68QYM5&amp;2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.498000(00000000-10000002155
PK_ID2371O62FX&amp;2014-01-24:19:09:20HW82LI73&amp;2014-05-11:05:23:23"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.498001(00000000-10000002298
PK_ID3RXZT5VUN&amp;2013-09-04:23:32:56RJ68QYM5&amp;2014-01-22:12:14:30"BDTEST.TEST_TAB_1Ñ??
                                                                                   ?"BDTEST.TEST_TAB_1I42016-02-16 19:17:40.74669942016-02-25T16:05:11.499000(00000000-10000002441
PK_ID4RXZT5VUN&amp;2013-09-04:23:32:56HW82LI73&amp;2014-05-11:05:23:23"BDTEST.TEST_TAB_2?????"BDTEST.TEST_TAB_2I42016-02-16 19:17:40.76289942016-02-25T16:05:11.505000(00000000-10000002926
PK_IDRND_STR_1ACC_DATE7IJWQRO7T&amp;2013-07-07:08:13:52[oracle@sandbox oggbd]$

You can see from the output that in the FlumeData.1456434311892 file we are getting the schema description, and in FlumeData.1456434311894 we have the data from the tables TEST_TAB_1 and TEST_TAB_2.

Let’s try some simple DDL commands.
If we truncate a table:

orclbd> truncate table ggtest.test_tab_1;

Table GGTEST.TEST_TAB_1 truncated.

orclbd>

The truncate is not going to be replicated. If we alter the table, we don't see the DDL as a separate command, but it is reflected in the new schema definition for any subsequent transaction replicated to HDFS. You will get a file with the new schema definition, and the transaction itself in the next file.

orclbd> alter table ggtest.test_tab_1 add (new1 varchar2(10));

Table GGTEST.TEST_TAB_1 altered.

orcl> insert into ggtest.test_tab_1
  2  values (7,dbms_random.string('x', 8), sysdate-(7+dbms_random.value(0,1000)),
  3  dbms_random.string('x', 8), sysdate-(6+dbms_random.value(0,1000)),'new_col' );

1 row created.

orcl> commit;

Commit complete.

orcl>
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457117136700
SEQ!org.apache.hadoop.io.LongWritable"org.apache.hadoop.io.BytesWritable????)0???4(SB?Yc{
  "type" : "record",
  "name" : "TEST_TAB_1",
  "namespace" : "BDTEST",
  "fields" : [ {
    "name" : "table",
    "type" : "string"
........
........
        "name" : "NEW1",
        "type" : [ "null", "string" ],
        "default" : null
      }, {
        "name" : "NEW1_isMissing",
        "type" : "boolean"
........
[oracle@sandbox oggbd]$ hadoop fs -cat /user/oracle/ggflume/FlumeData.1457117136701
........
}SB???"BDTEST.TEST_TAB_1?????"BDTEST.TEST_TAB_1I42016-03-04 18:45:30.00131442016-03-04T13:45:34.156000(00000000000000014363
PK_ID7U09D0CTU&2013-08-15:12:53:50W0BSUWLL&2013-08-16:09:28:12new_col

As I mentioned in my previous post, a deeper investigation of supported DDL changes will be the subject of a dedicated blog post. Here we can conclude that the adapter worked as expected, and supported the flow of transactions from our Oracle database down to Flume using Avro and Thrift sources. Of course, this is not a production implementation; it serves only as basic functional and elementary regression testing. For a serious production workflow we would need to develop an appropriate architecture.

In my next few posts I plan to check Kafka and HBASE adapters and see how they work. Stay tuned!

Categories: DBA Blogs

Connection timeout parameters in MySQL

Thu, 2016-03-17 07:09
Introduction
  • wait_timeout
  • interactive_timeout
  • net_read_timeout
  • net_write_timeout

What do these timeouts do in MySQL? If you search the web for one or more of these, you may find complaints that no comprehensive explanation exists for all of these timeouts in one place (besides the obvious documentation of dynamic server system variables in MySQL). This blog post seeks to provide a central documentation source for timeouts and provide some practical explanation.

Knowing what timeouts do helps in a troubleshooting effort. It’s good to understand when an issue is timeout related and when it’s not, and to know the right reasons for changing timeout variables, or the right time to ask the developer or ad-hoc user to please tune the variables in the session, instead of asking the DBA to change the global variables.

Before diving into the meat of this topic, here are a few introductory concepts that I will touch on.

I mentioned session variables. The four timeout variables we are discussing (interactive_timeout, wait_timeout, net_read_timeout, and net_write_timeout) can all be set within the context of the session using the SET command, e.g. "set session wait_timeout=3600;". A general rule of thumb that I always try to follow: once I have what I think are the correct timeouts to protect my mission-critical application, I do not want to change them unless there is a really good reason. If I'm encountering a client disconnection issue and it's not coming from my main application, I check whether the problem can be alleviated by adjusting timeouts at the session level using SET commands.

Second, it’s important to note that timeouts are there to protect your server and your critical application. You don’t want too many clients connected to your server doing nothing (they can crowd out connections that need to do something). You don’t want clients that are in a failed state, due to a network problem or other interruption, to continue to consume resources such as locks on your database. Timeouts should be set thoughtfully, based on your particular environment. If you’re unsure, leave them at the defaults and adjust when needed.

Finally, I created a Perl script to illustrate wait_timeout situations. I made a great effort to also incorporate net_read_timeout and net_write_timeout into the script, and you’ll see empty functions there as placeholders. There are many layers to the “MySQL client/server onion”, involving buffers at the system, client, and server levels. In lieu of direct scripted examples of net_read_timeout and net_write_timeout, I hope you’ll find the other explanations below useful.

interactive_timeout

According to the MySQL manual, interactive_timeout is only used for connections from interactive clients, such as the command-line MySQL client.

The technical explanation: it applies to clients that use the CLIENT_INTERACTIVE option to mysql_real_connect(). Check the documentation for your favorite client to see whether it fits this description.

If your client fits this description, its session wait_timeout is set to the value of interactive_timeout. The only benefit of this parameter is flexibility: if your command-line or other interactive clients have a vastly different requirement than your application, feel free to set it differently than wait_timeout.
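A minimal sketch of how the two variables interact (the values are arbitrary examples): because interactive clients inherit interactive_timeout as their session wait_timeout, the global wait_timeout can stay low for application connections while interactive sessions get more breathing room.

-- Keep non-interactive application connections on a short leash...
SET GLOBAL wait_timeout = 120;
-- ...while interactive command-line sessions inherit a longer value
SET GLOBAL interactive_timeout = 28800;

-- Inside an interactive mysql session, the effective value shows up as:
SHOW SESSION VARIABLES LIKE 'wait_timeout';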

wait_timeout

Wait timeout is simply there to protect you in the common case of clients sitting there doing nothing but absorbing a connection. You want to set this to the lowest acceptable number of seconds in order to protect your server against an application malfunction or some other event that may cause too many connections to the database to be opened, crowding out other clients. If a client is doing nothing for wait_timeout seconds, the MySQL server will terminate the connection. The proper setting for this variable depends on the particular environment.

An environment I worked in for 3+ years set this to 120 seconds, because JBoss connection pooling (and its associated parameters) took care of making sure the mission-critical application in the environment had available connections.

A common misconception, and a common misuse, is trying to adjust wait_timeout for a situation in which a query has been interrupted. wait_timeout only applies to idle sessions; if the connection is in any other state, it does not apply. This is why utilities like pt-kill exist. It does not help that an idle connection interrupted by KILL produces the same error message.
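As an illustration, the idle connections that wait_timeout would eventually reap can be spotted like this (a sketch; the 600-second threshold is an arbitrary example):

-- Connections that have been sitting idle (Sleep) for more than 10 minutes
SELECT id, user, host, db, time
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 600;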

Example using my perl script and the MySQL processlist:


jscott@js-trusty1:~$ ./test_timeouts.pl -K
Doing Killable Wait Timeout Test
ATTN: You have 20 seconds to kill the MySQL thread!!

In another shell:

jscott@js-trusty1:~$ mysql -e "show processlist"
+----+------+-------------------+--------------+---------+-------+-------+------------------+
| Id | User | Host              | db           | Command | Time  | State | Info             |
+----+------+-------------------+--------------+---------+-------+-------+------------------+
| 36 | js   | 172.19.73.2:58892 | NULL         | Sleep   | 28514 |       | NULL             |
| 79 | root | localhost:43350   | timeout_test | Sleep   | 4     |       | NULL             |
| 80 | root | localhost         | NULL         | Query   | 0     | NULL  | show processlist |
+----+------+-------------------+--------------+---------+-------+-------+------------------+
jscott@js-trusty1:~$ mysql -e "kill 79";

Back to the Perl script output:

DBD::mysql::st execute failed: MySQL server has gone away at ./test_timeouts.pl line 134.
-------------
Err: 2006
ErrStr: MySQL server has gone away
State: HY000

 

Note that we got MySQL error code 2006 “MySQL server has gone away”.

Now we’ll allow wait_timeout to trigger instead of killing a process.


jscott@js-trusty1:~$ ./test_timeouts.pl -W
Doing Wait Timeout Test
DBD::mysql::st execute failed: MySQL server has gone away at ./test_timeouts.pl line 151.
-------------
Err: 2006
ErrStr: MySQL server has gone away
State: HY000

The fact that a killed idle process and a process receiving wait_timeout get the same error message and code (2006 “MySQL server has gone away”) causes much confusion. Rest assured, wait_timeout only kills idle connections, even if the error messages are the same.  If there is a running query, you will typically get error code 2013: “Lost connection to MySQL server during query”.

The most common reason a query is killed, in my experience, is a backup. Many backup utilities for MySQL have the default behavior of killing long-running queries.

net_read_timeout

net_read_timeout is the number of seconds the MySQL server will wait for more data from a connection before aborting the read, i.e. the time it will wait between bytes as you are sending it data. As mentioned above, I tried very hard to cause a net_read_timeout synthetically with a Perl script and was unsuccessful.

The best way to explain net_read_timeout is to use the example of an ETL (Extract, Transform, Load) job. At the beginning of an ETL job, you select data from one data source, then transform and load it into another data source, such as a MySQL data warehouse. Tools like “Pentaho Data Integration” (cited below) provide a GUI view of ETL.

[Image: Pentaho ETL Process]

If your ETL job is complex, and you use something similar to Pentaho Kettle, you have very little visibility into what is happening behind the scenes.

If the steps between the initial query in an ETL job and an output step consume more than net_read_timeout seconds, an error may occur because incomplete statements are being sent to the “output” steps. Mid-transaction, if the MySQL server fails to receive data within the timeout interval, it terminates the connection.

Another way net_read_timeout can occur: Normally all database connections used in a Pentaho ETL transformation are opened at the beginning of the transformation. If you have “use result streaming cursor” turned off on the database connection used for “table input”, the result set may take longer than “net_read_timeout” seconds to receive, causing an error on the database connection being used for the output steps.

Have you ever executed a simple query (with a large result set) on the MySQL command line and wondered why there is a pause of several seconds, or even minutes, before the results start to appear? Your client has actually begun receiving the results; they simply have not been presented to you yet. The reason is that it’s generally better for the MySQL server to be “finished” with a query as soon as possible, so the default behavior of the MySQL client libraries on a read-only query is to receive all the results first, before passing them on to the program. This default behavior is called “mysql_store_result”. The MySQL reference manual does a good job of explaining it in detail. As the manual states, you should not change the default behavior unless you are doing a minimal amount of processing on each result row.

Homemade ETL programs in other languages can have the same issue. Take care to use auto-reconnect features and/or streaming cursors (“mysql_use_result”) when appropriate.

ETLs and scripts are good examples of clients that can be modified with SET commands. “SET SESSION net_read_timeout=<longer value>;” should be considered for jobs outside of your main application.
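For instance, an ETL job or maintenance script could start its connection with something like the following (a sketch; the values are arbitrary examples for illustration, not recommendations):

-- Run once at the start of the ETL session; affects only this connection
SET SESSION net_read_timeout = 600;
SET SESSION net_write_timeout = 600;
SET SESSION wait_timeout = 7200;

-- ...long-running extract/transform/load statements follow...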

Lastly, net_read_timeout errors are also often caused by bad network connections. This is the default explanation offered in many blog posts on the topic.

The error returned to the client will be 2013: “Lost connection to MySQL server during query”.

net_write_timeout

net_write_timeout is the number of seconds MySQL will wait for the client to receive additional data before terminating the connection. Using the ETL example above, if steps in the middle of the transformation cause the “input” step to pause for longer than net_write_timeout seconds, it is possible to receive an error. I’ve never seen net_write_timeout hit because of an ETL job: the MySQL client buffers result sets by default, and I use the “use result streaming cursor” option in Pentaho, or “mysql_use_result” in the Perl DBI, sparingly.

The most typical case of hitting net_write_timeout is a backup such as mysqldump. During a long data transfer such as a mysqldump, a myriad of factors can come into play, not the least of which is network connectivity. Remember, if you run into problems with net_write_timeout during a mysqldump backup, mysqldump has its own section of the my.cnf; you can use an option file on the client or the server to set this and other variables specifically for a mysqldump session.
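One pragmatic server-side workaround (a sketch, assuming you can briefly change a global variable; the values are examples only) is to raise net_write_timeout for the duration of the backup window and put it back afterwards:

-- Before the long mysqldump run; SET GLOBAL affects new connections only
SET GLOBAL net_write_timeout = 600;

-- ...run the backup...

-- Afterwards, set it back to its previous value (the MySQL default is 60)
SET GLOBAL net_write_timeout = 60;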

Categories: DBA Blogs

Log Buffer #465: A Carnival of the Vanities for DBAs

Wed, 2016-03-16 11:17

This Log Buffer Edition covers tricks, tips, workarounds, and tech dives from various Oracle, SQL Server, and MySQL blog posts.

Oracle:

SQLcl can run Ruby script!!.. JRuby to be precise.

Financial Information Discovery Integration with Oracle Assets

Data Vault Modeling and Snowflake Elastic Data Warehouse

Couchbase Bucket Index Status in NetBeans IDE 8.1

Make SQL Developer Run Faster

SQL Server:

Find a String in a String

The Microsoft cloud platform allows you to provide resiliency for your on-premises SQL Server deployments. Marcin Policht describes this functionality, available as part of the Azure Site Recovery solution.

Trust, or Verify – How FKs Can (or Cannot) Help Performance

10 New Features Worth Exploring in SQL Server 2016

Validating a Windows Cluster Prior to Installing SQL Server 2014

MySQL:

When you think of cloud services for database administration, Database as a Service (DBaaS) is what tends to come to mind, but one of the ways the cloud has had an enormous impact on our work productivity is in the area of collaboration.

Introduction to MySQL Troubleshooting Webinar: Q & A for How to Create a Test Setup

MariaDB is a community-developed fork of MySQL and aims to be an enhanced, drop-in replacement for it.

The MySQL binary packages have been growing a lot over the last few GA versions.

Indexing JSON documents via Virtual Columns

Categories: DBA Blogs

Breaking Down Barriers with DevOps

Wed, 2016-03-16 08:53

In a recently recorded webinar, How To Deliver Faster, Better Software Releases with DevOps, Pythian’s Chief Digital Officer, Aaron Lee, and guest speaker Amy DeMartine, Senior Research Analyst at Forrester Research, discussed DevOps and the important role it plays in the race to deliver the best customer experience through faster and better software releases.

One of the webinar participants asked about the critical success factors when implementing a rapid release software model. 

The case for adopting agile operations is very compelling. Rapid release cycles can reduce costs, provide better and more stable solutions, increase customer and employee satisfaction, and most importantly, outpace competition and drive revenue.

Driving both efficiency and effectiveness of operations is easier said than done. DeMartine highlighted 7 habits of effective DevOps, but stressed that culture is the number one enemy.

She said, “This fear of change is a huge cultural gap. DevOps is at the point of cultural change, and this is one of the biggest ones that IT has to break.”

Historically, IT teams have held large and lengthy change boards, requiring a strong reliance on rigid approval processes. Combine that history with the complexity and degree of unknowns that operations faces, and it isn’t surprising that some teams are risk averse and consensus driven.

While new technologies, processes, and automation are critical to implementing rapid release models and enhancing DevOps, both DeMartine and Lee agreed that it boils down to the people, and the way Dev and Ops interact with each other.

“It’s clear what the responsibilities of a product manager are. It’s clear where the developer’s responsibilities start and stop,” said Lee. Often this leads to operations “picking up everything else” and facing an increased number of unknowns.

Part of the solution is understanding how to embrace and maximize the opportunity for change, as opposed to minimizing the occurrence of change.

Most importantly, Dev and Ops need to know and understand each other. By setting and balancing clear boundaries and expectations, each team can understand the role the other is playing, and begin to see the environment under a common goal.

For more on this topic, download the on-demand webinar below:

[Image: DevOps webinar call-to-action]

Categories: DBA Blogs