DBA Blogs

Loose Coupling and Discovery of Services With Consul — Part 2

Pythian Group - Fri, 2014-09-05 08:33
Creating a Consul Client Docker Image

In my previous post, I demonstrated how to create a cluster of Consul servers using a pre-built Docker image. I was able to do this because our use case was simple: run Consul.

In this post, however, we will be creating one or more Consul clients that will register services they provide, which can then be queried using Consul’s DNS and / or HTTP interfaces. As we are now interested in running Consul and an application providing a service, things are going to get a bit more complicated.

Before proceeding, I need to explain a little bit about how Docker works. Docker images, such as the progrium/consul image we used in the previous post, are built using instructions from a special file called a Dockerfile. Two related instructions in this file control the container’s running environment, that is, the process or shell that is run in the container: ENTRYPOINT and CMD.

There can be only one ENTRYPOINT instruction in a Dockerfile, and it has two forms: an array of strings, which is treated like an exec, or a plain string, which is executed with ‘/bin/sh -c’. When you specify an ENTRYPOINT, the whole container runs as if it were just that executable.

The CMD instruction is a bit different. It too can only be specified once, but it has three forms: the first two are the same as ENTRYPOINT, but the third form is an array of strings which will be passed as parameters to the ENTRYPOINT instruction. It’s important to note that parameters specified in an ENTRYPOINT instruction cannot be overridden, but ones in CMD can. Therefore, the main purpose of CMD is to provide defaults for an executing container.
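
To make the interaction concrete, here is a small hypothetical example (the image name and Dockerfile are invented for illustration and are not part of this demo):

# Image demo/echo built from a Dockerfile containing:
#   ENTRYPOINT ["/bin/echo", "hello"]
#   CMD ["world"]

$ docker run demo/echo            # prints "hello world": CMD supplies the default argument
$ docker run demo/echo universe   # prints "hello universe": the run arguments replace CMD

In both cases the ENTRYPOINT ("/bin/echo hello") is still what actually runs; only the CMD portion changed.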

It’s probably becoming clear to you by now that Docker images are designed to run one process or shell. We want to run two processes, however: the Consul agent and an application. Thankfully, there is an image available called phusion/baseimage that provides runit for service supervision and management, which will make it easy for me to launch Consul and another service (such as nginx) within my containers.

SIDEBAR: There is quite a bit of debate about the intended / recommended use of Docker, and whether the process run in the container should be your application or an init process that will spawn, manage and reap children. If you’re interested in reading more about the pros and cons of each of these approaches, please refer to Jérôme Petazzoni’s post, phusion’s baseimage-docker page, and / or Google the topics of ‘separation of concerns’ and ‘microservices’.

Now that I’ve provided some background, let’s get into the specifics of the Docker image for my Consul clients. I’ll begin with the full contents of the Dockerfile and then describe each section in detail.

FROM phusion/baseimage:latest
MAINTAINER Bill Fraser 

# Disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# Install nginx
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx zip && \
  chown -R www-data:www-data /var/lib/nginx

# Clean up apt
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Define mountable directories.
VOLUME ["/data", "/etc/nginx/sites-enabled", "/var/log/nginx"]

# Add runit configuration for nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ADD files/nginx/nginx-runit /etc/service/nginx/run
RUN chmod 755 /etc/service/nginx/run

# Install consul
RUN curl -s -L -O https://dl.bintray.com/mitchellh/consul/0.3.0_linux_amd64.zip
RUN unzip -d /usr/bin 0.3.0_linux_amd64.zip
RUN chmod 555 /usr/bin/consul

# Add service configuration
ADD files/consul/consul.json /etc/consul.d/consul.json
RUN chmod 644 /etc/consul.d/consul.json

# Add runit configuration for consul
ADD files/consul/consul-runit /etc/service/consul/run
RUN chmod 755 /etc/service/consul/run

# Expose nginx ports
EXPOSE 80 443

# Expose consul ports
EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

ENV HOME /root

ENTRYPOINT [ "/sbin/my_init" ]

The first section specifies that my image will be based on that of phusion/baseimage, and that I am the maintainer of my image. So far so good.

Next, I am removing the SSHD service from the container. This is part of phusion’s image and is not something I am interested in using for the purposes of my demonstration.

The next step installs nginx and should look fairly straightforward. I have taken the liberty of installing zip at the same time, as I will be using it later on to install Consul.

The VOLUME instruction lets me define mount points that can be used for mounting volumes in the container, passed as arguments of the docker run command. I am not actually using this in my demonstration; it is just there to make you aware of the capability.
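
If you did want to make use of them, volumes are supplied at run time with docker’s -v flag; a hypothetical invocation (host paths and image name invented for illustration) would look like this:

$ docker run -v /srv/nginx-sites:/etc/nginx/sites-enabled \
             -v /srv/nginx-logs:/var/log/nginx \
             <image-name>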

Next I am telling nginx not to daemonize itself, and am adding an nginx configuration for runit. The ADD instruction adds a local file to the image in the specified path. The runit configuration I am adding is pretty simple and looks like this:

#!/bin/sh
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf 2>&1

Now that I am done with nginx, I want to install and configure Consul. I simply retrieve the binary package and extract it into /usr/bin in the image. I then use another ADD instruction to supply a configuration file for Consul. This file is in JSON format and tells Consul to register a service named ‘nginx’.

{
	"service": {
		"name": "nginx",
		"port": 80
	}
}

I then use an ADD instruction to supply a runit configuration for Consul in the same manner I did for nginx. Its content is as follows:

#!/bin/sh
if [ -f "/etc/container_environment.sh" ]; then
  source /etc/container_environment.sh
fi

# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`

# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`

exec /usr/bin/consul agent \
  -config-dir="/etc/consul.d" \
  -data-dir="/tmp/consul" \
  -bind=$BIND \
  ${CONSUL_FLAGS} \
  >>/var/log/consul.log 2>&1

With all of the hard stuff out of the way, I now define the nginx and Consul ports to EXPOSE to other containers running on the host, and to the host itself.

And last but not least, I set the HOME environment variable to /root and set the init process of /sbin/my_init as the ENTRYPOINT of my container.

This creates a good foundation. If we were to run the image as is, we would end up with nginx running and listening on port 80, and Consul would be running as well. However, we haven’t provided Consul with any details of the cluster to join. As you have probably guessed, that’s what CONSUL_FLAGS is for, and we’ll see it in action in the next section.

Creating Consul Clients With Vagrant

So far we’ve gone to the trouble of creating a Docker image that will run Consul and nginx, and we’ve supplied configuration to Consul that will have it register nginx as a service. Now we’ll want to create some clients with Vagrant and see querying of services in action.

Let’s start by modifying our Vagrantfile. Just as was done with the Consul servers, we’ll want to create an array for the nginx members and tell Vagrant to use the Docker provider. This time, however, some additional configuration will be necessary. The full Vagrantfile is now going to look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  JOIN_IP = ENV['JOIN_IP']
  
  # A hash of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.image  = 'progrium/consul'
        docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end

  # Create an nginx container running the consul agent
  nginx_members = [ "nginx1", "nginx2", "nginx3" ]
  nginx_members.each do |member|
    config.vm.define member do |nginx_config|
      nginx_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]
        docker.build_dir = "."
        docker.cmd = [ "/bin/bash" ]
        docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]
        docker.env = { "CONSUL_FLAGS" => "-node=#{member} -join=#{JOIN_IP}" }
      end
    end
  end
end

Note that this time we are not using docker.image to supply the name of an existing Docker image to use for our containers. Instead, we are going to use docker.build_args and docker.build_dir to build our own.

docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]

This is a list of extra arguments to pass to the docker build command. Specifically, I am naming the image bfraser/consul-nginx and telling Docker to remove intermediate containers after a successful build.
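
For reference, outside of Vagrant the same image could be built by hand with something like the following, run from the directory containing the Dockerfile:

$ docker build -t bfraser/consul-nginx --rm=true .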

docker.build_dir = "."

This should be fairly self-explanatory: I am simply telling Docker to use the current working directory as the build directory. However, I have some files (including the Vagrantfile) that I do not want to be part of the resulting image, so it is necessary to tell Docker to ignore them. This is accomplished with a file called .dockerignore and mine looks like this:

.git
.vagrant
Vagrantfile

Next, I am using docker.cmd to pass /bin/bash as an extra parameter to the image’s ENTRYPOINT, which allows me to have a shell in the container. A little later, I will show you how this can be useful.

The next line:

docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]

is a list of extra arguments to pass to the ‘docker run’ command. Specifically, I am providing a custom DNS server and instructing Docker to allocate a TTY and keep STDIN open even if not attached to the container.

Lastly, I am supplying a hash to docker.env which will expose an environment variable named CONSUL_FLAGS to the container. The environment variable contains additional parameters to be used when starting Consul.
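
Putting the pieces together, the container Vagrant creates for, say, nginx1 is roughly equivalent to running the following by hand (a sketch of the assembled options, not the exact command Vagrant issues):

$ docker run -t -i --dns=172.17.42.1 \
    -e CONSUL_FLAGS="-node=nginx1 -join=172.17.42.1" \
    --name nginx1 bfraser/consul-nginx /bin/bash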

With this configuration in place, we can now use Vagrant to create three additional containers, this time running Consul and nginx.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker

This time, if we check the output of ‘consul members’, we should see our host and six containers: three Consul servers and three nginx servers functioning as Consul clients.

$ consul members -rpc-addr=172.17.42.1:8400
Node     Address           Status  Type    Build  Protocol
nginx1   172.17.0.18:8301  alive   client  0.3.0  2
nginx2   172.17.0.19:8301  alive   client  0.3.0  2
laptop   172.17.42.1:8301  alive   server  0.3.0  2
consul2  172.17.0.9:8301   alive   server  0.3.0  2
consul3  172.17.0.10:8301  alive   server  0.3.0  2
consul1  172.17.0.8:8301   alive   server  0.3.0  2
nginx3   172.17.0.20:8301  alive   client  0.3.0  2

Querying Services

As I mentioned in ‘Where Does Consul Fit In?’ in my original post, Consul is a tool for enabling discovery of services, and it provides two interfaces for doing so: DNS and HTTP. In this section, I’ll show you how we can use each of these interfaces to query for details of services being provided.

First, let’s use the HTTP interface to query which services are being provided by members of the Consul cluster.

$ curl http://172.17.42.1:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    25  100    25    0     0  12722      0 --:--:-- --:--:-- --:--:-- 25000
{
    "consul": [],
    "nginx": []
}

This returns JSON-encoded data which shows that ‘consul’ and ‘nginx’ services are being provided. Great, now let’s query for details of the ‘nginx’ service.

$ curl http://172.17.42.1:8500/v1/catalog/service/nginx | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   362  100   362    0     0   210k      0 --:--:-- --:--:-- --:--:--  353k
[
    {
        "Address": "172.17.0.18",
        "Node": "nginx1",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.19",
        "Node": "nginx2",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.20",
        "Node": "nginx3",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    }
]

We can see here that there are three nodes providing the nginx service, and we have details of the IP address and port they are listening on. Therefore, if we were to open http://172.17.0.18 in a web browser, we would see the ‘Welcome to nginx!’ page.

Notice how the REST endpoint changed between the first and second curl requests, from /v1/catalog/services to /v1/catalog/service/nginx. Consul provides extensive documentation of the various REST endpoints available via the HTTP API.
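
For instance (output omitted here), the same catalog can be asked for every node in the cluster with another GET against the HTTP API:

$ curl http://172.17.42.1:8500/v1/catalog/nodes | python -m json.tool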

While the HTTP API is the most powerful method of interacting with Consul, if we are only interested in querying for information about nodes and services, it is also possible to use its DNS server for simple name lookups. Querying for details of the nginx service via the DNS interface is as simple as running the following:

$ dig @172.17.42.1 -p 8600 nginx.service.consul

; <<>> DiG 9.9.5-3-Ubuntu <<>> @172.17.42.1 -p 8600 nginx.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3084
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;nginx.service.consul.		IN	A

;; ANSWER SECTION:
nginx.service.consul.	0	IN	A	172.17.0.19
nginx.service.consul.	0	IN	A	172.17.0.18
nginx.service.consul.	0	IN	A	172.17.0.20

;; Query time: 1 msec
;; SERVER: 172.17.42.1#8600(172.17.42.1)
;; WHEN: Sat Aug 16 22:35:51 EDT 2014
;; MSG SIZE  rcvd: 146

As you can see, while it is certainly possible to develop a client to tightly integrate with Consul through its API, it is also easy to take advantage of its DNS interface and not have to write a client at all.
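
If a consumer also needs the port a service listens on, the DNS interface can answer SRV queries as well; for example, the following lookup should return the service port (80) along with the node addresses:

$ dig @172.17.42.1 -p 8600 nginx.service.consul SRV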

Attaching To A Docker Container

I have one last tip, which is especially useful if you are new to Docker: how to attach to your containers.

I mentioned earlier in this post that I was including the following line in my Vagrantfile:

docker.cmd = [ "/bin/bash" ]

What this does is pass /bin/bash as an extra parameter to the image’s ENTRYPOINT instruction, resulting in the /sbin/my_init process spawning a bash shell.

I also instructed Vagrant, via the docker.create_args line, to have Docker allocate a TTY and keep STDIN open even if not attached to the container. This means I can attach to my containers and interact with them through a bash shell as follows (note: press the ‘Enter’ key following the command to get the prompt):

$ docker attach nginx1

root@4b5a98093740:/# 

Once you are done working with the container, you can detach from it by pressing ^P^Q (that’s CTRL-P followed by CTRL-Q).

Summary

With that, we have reached the end of my demonstration. Thanks for sticking with me!

First I described the importance of loose coupling and service discovery in modern service-oriented architectures, and how Consul is one tool that can be used for achieving these design goals.

Then I detailed, by way of a demonstration, how Vagrant and Docker can be used to form a Consul cluster, and how to create a custom Docker image that will run both your application and a Consul agent.

And, last but not least, I showed how you can make use of Consul’s HTTP API and DNS interface to query for information about services provided.

Hopefully you have found these posts useful and now have some ideas about how you can leverage these technologies for managing your infrastructure. I encourage you to provide feedback, and would be very interested in any tips, tricks or recommendations you may have!

Log Buffer #387, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-05 08:04

The benefits of blogs transcend any one technology: they not only let bloggers record their valued experiences, but also give readers first-hand practical information. This Log Buffer Edition shares those benefits from Oracle, SQL Server and MySQL.

Oracle:

Cloud Application Foundation is the innovator’s complete and integrated modern cloud application infrastructure, built using best of breed components, such as Oracle WebLogic Server 12c, the industry’s best application server for building and deploying enterprise Java EE applications.

Migrating Existing PeopleSoft Attachments into the Managed Attachments Solution.

How to identify SQL performing poorly on an APEX application?

How can you keep your Oracle Applications and systems running at peak performance? What will it take to get more out of your Oracle Premier Support coverage?

Projects Create Accounting Performance Issues With 11.2.0.3 Database Version.

SQL Server:

If your log restores aren’t happening when they’re meant to, you want to know about it. You’ll be relying on restoring from logs should anything happen to your databases, and if you can’t restore to a certain point in time, you risk losing valuable data.

Agile data warehousing can be challenging. Pairing the right methodologies and tools can help.

Introduction to Azure PowerShell Modules for the SQL Server DBA.

The Clustered columnstore index generates “unable to find index entry” error and a memory dump after few DMLs on the table.

With any application organizations face consistent key challenges such as high efficiency and business value, complex configuration, and low total cost of ownership. Extending applications to the cloud in hybrid scenarios addresses many of these challenges.

MySQL:

Analyzing Twitter Data using Datasift, MongoDB and Pig

Cloud storage for MySQL Enterprise Backup (MEB) users

Tracing down a problem, finding sloppy code.

MySQL Enterprise Backup (MEB) is a highly efficient tool for taking backups of your MySQL databases.

Implementation Training Initiative: ORACLE Sales Cloud and ORACLE HCM Cloud

Oracle Human Capital Management Cloud: Modern HR differentiates the business with a talent centric and consumer based strategy that leverages technology to provide a...

Partner Webcast – Oracle WebLogic 12c Enabling Development of Modern Applications: WebSocket and Maven Support

Cloud Application Foundation is the innovator’s complete and integrated modern cloud application infrastructure, built using best of breed components, such as Oracle WebLogic Server 12c, the...

OS Watcher Paging Example

Bobby Durrett's DBA Blog - Thu, 2014-09-04 13:13

I have an example of paging in some Exadata OS Watcher log files. We got an error in the alert log about a high load, but reviewing AWR reports it did not seem like we had an unusual number of active sessions. Also, the CPU use seemed low while the system load was high. Reviewing the OS Watcher logs and finding a similar case on Oracle’s support site convinced me that our issue was due to a large amount of memory being paged out to the swap file.
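
If you have not dug through OS Watcher output before, the archive is just a set of per-tool directories containing hourly text files, so plain grep is enough to pull out the window you care about. The path, hostname and file name below are assumptions based on the usual OS Watcher layout; adjust them for your installation:

$ cd /opt/oracle.oswatcher/osw/archive     # assumed archive location
$ ls                                       # per-tool directories such as oswtop, oswvmstat, oswmpstat, oswmeminfo
$ grep -F -A 30 "zzz ***Mon Sep 1 11:0" oswvmstat/myhost_vmstat_14.09.01.1100.dat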

Here is the top report from the problematic time:

top - 11:07:58 up 239 days, 14:57,  0 users,  load average: 204.84, 183.48, 106.74
Tasks: 1097 total,  71 running, 1022 sleeping,   0 stopped,   4 zombie
Cpu(s):  1.3%us, 98.0%sy,  0.0%ni,  0.2%id,  0.5%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  74026796k total, 72456380k used,  1570416k free,     2436k buffers
Swap: 16771852k total,  7884656k used,  8887196k free,   597336k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                   
  894 oracle    15   0  324m  26m  10m S 309.1  0.0 129:46.34 /u01/app/11.2.0.2/grid/bin/diskmon.bin -d -f                                             
 1053 root      20  -5     0    0    0 R 100.0  0.0  32:10.62 [kswapd0]                                                                                
 3366 oracle    16   0 1440m  54m  18m S 146.6  0.1   3:03.54 /u01/app/11.2.0.2/grid/bin/oraagent.bin                                                  
23809 oracle    18   0  992m 179m  16m S 128.1  0.2 158:45.67 /u01/app/oracle/Middleware/agent11g/bin/emagent                                          
28878 orarom    18   0  982m 360m  11m S 101.5  0.5  94:58.36 /opt/OracleHomes/agent_home/core/12.1.0.3.0/jdk/bin/java -Xmx154M -XX:MaxPermSize=96M -se
 4501 oracle    19   0 4423m  96m  24m R 89.6  0.1   2435:54 ora_dia0_ourprdfs1                                                                        
18318 oracle    20   0 26.3g 610m  70m R 86.3  0.8   2025:17 ora_p001_ourprd1                                                                          
18226 oracle    18   0 26.3g 595m  61m D 82.3  0.8 212:20.62 ora_p013_ourprd1                                                                          
 7619 oracle    19   0 1727m  16m  14m R 80.5  0.0   0:50.29 oracle+ASM1 (LOCAL=NO)                                                                    
 4389 oracle    25   0 25.9g 158m  27m R 78.5  0.2   2557:06 ora_dia0_ourprd1                                                                          
11008 oracle    18   0 26.3g 561m 133m D 77.9  0.8 373:34.37 ora_p010_ourprd1                                                                          
 8484 root      25   0  238m 6276 2600 R 77.3  0.0   0:46.60 python /usr/local/bin/dcli -s -q -l root -g cell_group exec 2>&1; cellcli -e list quaranti
 7937 oracle    18   0 26.3g 603m  86m R 76.5  0.8   1656:21 ora_p021_ourprd1                                                                          
24118 oracle    25   0 26.3g 602m 103m R 72.7  0.8   2997:08 ora_p007_ourprd1                                                                          
 4491 oracle    16   0 5912m 121m  25m D 72.6  0.2   2407:59 ora_dia0_obrprd1                                                                          
 7919 oracle    25   0 26.4g 680m 247m R 72.2  0.9   4256:57 ora_p011_ourprd1                                                                          
 4384 oracle    18   0 4407m  95m  25m D 70.8  0.1   2344:36 ora_dia0_otrprd1                                                                          
 4397 oracle    23   0 25.8g  44m  23m D 69.5  0.1   1695:21 ora_lmd0_ourprd1                                                                          
 2548 oracle    20   0 1809m 116m  44m R 69.3  0.2   2393:41 asm_dia0_+ASM1

Note the load of 204 and the system CPU at 98%. By 11:07 CPU utilization had gone back up, so I assume that the swapping I/O had completed and then a bunch of work in system (kernel) mode kicked off. Also, I assume that [kswapd0] has something to do with swapping. Here is the mpstat report:

             CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
10:59:00     all    3.49    0.00    2.05   93.81    0.01    0.16    0.00    0.48   4144.00
10:59:05     all    4.94    0.00    2.45   92.21    0.01    0.11    0.00    0.28   2548.00
10:59:16     all    1.42    0.00    1.82   95.64    0.02    0.13    0.00    0.97   3108.66
10:59:21     all    3.84    0.00    3.58   90.68    0.04    0.49    0.00    1.38   3868.40
10:59:26     all    1.99    0.00    1.85   93.56    0.01    0.13    0.00    2.46   2942.48
10:59:31     all    1.38    0.00    1.85   91.37    0.01    0.10    0.00    5.29   2479.04
10:59:36     all    1.75    0.00    2.81   94.19    0.01    0.16    0.00    1.08   2762.60
10:59:41     all    0.89    0.00    1.09   97.41    0.00    0.05    0.00    0.56   2116.80
10:59:46     all    1.19    0.00    1.21   93.91    0.03    0.11    0.00    3.55   2992.40
10:59:51     all    1.79    0.00    1.75   95.88    0.01    0.28    0.00    0.29   3820.00
10:59:56     all    2.09    0.00    1.55   92.85    0.03    0.21    0.00    3.28   4410.80
11:00:01     all    1.85    0.00    1.91   96.00    0.01    0.11    0.00    0.11   3433.47
11:00:06     all    1.50    0.00    1.45   89.07    0.01    0.11    0.00    7.85   3746.00
11:00:11     all    6.41    0.00   10.68   80.29    0.09    0.80    0.00    1.73   9269.46
11:00:16     all   17.14    0.00   19.42   57.12    0.14    0.83    0.00    5.35  17018.64
11:00:21     all    8.58    0.00   80.85    9.01    0.05    0.23    0.00    1.29   6951.70
11:00:27     all    6.21    0.00   91.77    1.12    0.04    0.09    0.00    0.78   5417.11
11:00:32     all    5.65    0.00   93.75    0.53    0.01    0.04    0.00    0.02   3289.82
11:00:38     all    2.72    0.00   96.81    0.35    0.01    0.02    0.00    0.09   2480.87
11:00:44     all    1.73    0.00   97.89    0.28    0.00    0.02    0.00    0.08   1745.89
11:01:00     all    1.85    0.00   97.72    0.16    0.01    0.04    0.00    0.22   2710.93
11:01:05     all    0.79    0.00   98.84    0.25    0.00    0.04    0.00    0.09   2543.39
11:02:23     all    1.00    0.00   98.72    0.15    0.00    0.02    0.00    0.10   1558.00
11:02:33     all    0.95    0.00   98.74    0.09    0.00    0.09    0.00    0.13   1488.49
11:02:39     all    1.64    0.00   98.13    0.08    0.01    0.05    0.00    0.08   1441.73
11:02:44     all    0.46    0.00   99.29    0.10    0.00    0.09    0.00    0.06   1465.73
11:03:20     all    1.54    0.00   97.96    0.25    0.01    0.07    0.00    0.17   2757.95
11:03:25     all    0.40    0.00   99.24    0.21    0.00    0.02    0.00    0.12   1550.60
11:03:30     all    0.39    0.00   99.22    0.23    0.00    0.01    0.00    0.15   1421.20
11:03:35     all    3.29    0.00   96.32    0.25    0.01    0.03    0.00    0.10   1952.51
11:03:40     all    5.90    0.00   93.76    0.28    0.00    0.01    0.00    0.05   1407.00
11:03:45     all    1.14    0.00   98.56    0.20    0.00    0.01    0.00    0.09   1418.00
11:03:50     all    0.19    0.00   99.55    0.21    0.00    0.03    0.00    0.03   1356.69
11:04:23     all    0.36    0.00   99.31    0.18    0.00    0.04    0.00    0.10   1621.67
11:06:13     all    0.58    0.00   99.13    0.14    0.00    0.04    0.00    0.11   1550.73
11:06:18     all    3.32    0.00   96.36    0.23    0.00    0.04    0.00    0.06   2055.30
11:06:24     all    4.39    0.00   95.32    0.24    0.00    0.02    0.00    0.03   1991.53
11:06:31     all    4.14    0.00   95.48    0.28    0.01    0.02    0.00    0.07   1598.53
11:07:21     all    1.38    0.00   98.32    0.19    0.00    0.01    0.00    0.09   1444.49

Notice how it is almost all I/O wait until around 11 am, and then it switches to almost all system CPU. I assume the I/O was swapping memory out. The meminfo report has these numbers:

zzz ***Mon Sep 1 11:00:11 CDT 2014
MemTotal:     74026796 kB
MemFree:        330888 kB
...
SwapTotal:    16771852 kB
SwapFree:      8153352 kB

I take this to mean that we have about 330 MB free out of 74 GB total, but still 8 GB of swap free. Presumably, when free memory gets low, idle processes get swapped out. A member of our Unix team pointed out the swapping in the vmstat output. Here is a section that shows a lot of pages being swapped out (the ‘so’ column). Each line represents 5 seconds, and I think the excerpt starts at 10:57:56.

zzz ***Mon Sep 1 10:05:31 CDT 2014 Sample interval 5 seconds
-- deleted 629 lines = 629*5 seconds = 3145 seconds
-- 10:57:56 am
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
86 16 7567956 328800    800 610940   56 5350   157  5381 1626 10013  4 96  0  0  0
44 32 7602804 327956    996 613940  222 10464   554 10510 2177 13818  0 99  0  0  0
 1 109 7632732 330412   2284 619780  290 7986  1738  8130 3382 14002  1 21  7 70  0
35 41 7702076 353744   2504 621392  623 16054  1290 16206 8776 19234  2  3  2 92  0
 1 103 7749296 327480   2692 622004  422 11307   789 11574 3839 16138  3  2  1 94  0
 4 109 7849812 330972   3036 624808 1974 24691  2844 25110 11705 36080  4  3  1 92  0
 0 66 7869628 346228   3108 626444  859 6309  1055  6309 5538 16922  3  1  1 94  0
23 104 7895176 1998796   3300 624864 1032 7582  1283  7784 4032 24443 56  3  1 40  0
 3 90 7918016 328212   4144 632880 4910 5939  6866  6406 13908 47820 40  8  3 49  0
 1 101 7983136 333008   4480 627416 1180 15655  1523 15960 3977 16945  5  3  1 91  0
 1 125 8033148 327376   4712 628312  922 12213  1302 12295 3545 16946  4  3  0 93  0
 1 141 8072048 329060   4856 626976  764 10862   986 11082 3684 21305  2  1  1 96  0
 2 76 8106056 327556   5056 627124  797 8855  1048  8946 3968 15044  2  2  0 96  0
 0 103 8181216 328348   5284 611140 2415 16583  3076 16938 4374 17445  4  5  2 90  0
 4 92 8222500 330044   5408 610708 1470 9698  1874  9898 2842 15110  2  2  2 94  0
 4 94 8285204 328932   5936 614128 1015 14754  1678 15213 3324 18240  2  2  5 91  0
 1 111 8326908 327348   6184 614600 1034 9759  1465 10012 2577 14809  2  2  1 95  0
 1 119 8338520 328352   6332 616076  610 4896   834  4952 2505 15373  1  1  1 97  0
 0 98 8378304 329308   6536 615944  975 9795  1334  9931 3802 15897  2  2  3 93  0
 0 89 8404456 327204   6800 618872 1232 8327  1662  8413 4557 16856  2  2  2 94  0
 0 123 8430092 327272   7092 619016 1004 7379  1393  7672 2852 14689  1  1  2 96  0
 1 92 8453420 327212   7284 622700 1738 7250  2259  7494 4241 17781  2  2  2 94  0
 7 76 8562308 329132   7860 628932 2616 23847  3409 25162 6833 20938  4  7  7 82  0
 5 60 8691628 334124   7012 635948 6066 28059  8119 32201 13584 32154 15 23  1 61  0
63 37 8683600 327404   2812 582200 8138  661 12947  2870 12507 26754 12 44  6 38  0
98 22 8696516 328056   1260 560572 10494 7070 15550  9394 28854 128944  3 96  0  0  0
72 14 8694008 327148   1960 557616 1693    3  3222   161 5014 29030  5 95  0  0  0
125 19 8693816 327172   1020 558488  226    1   569   227 1991 13841  0 99  0  0  0
97 15 8703600 327044   1032 554792 5083 3217 10447  3722 26436 202860  1 99  0  0  0
102 20 8733556 327272   1432 556224 2573 6713  5399  7194 16118 66262  1 98  0  0  0
84  9 8732984 327164   1100 555640  470    2  1090   117 1585 11922  0 99  0  0  0
85 13 8732940 327304    880 554024   60    0   317    50 1470 11247  0 99  0  0  0
149 24 8732808 327428    988 553988  133    0   814   334 1962 12802  4 95  0  0  0
153 22 8732640 327284    592 553576  187    0   875   548 4619 36573  2 98  0  0  0
159 15 8732584 327104    604 552956   50    0    96   118 2294 11239  0 99  0  0  0
142 14 8732432 328644   1524 554196  174    0  1706   260 6035 45843  0 99  0  0  0
158 12 8755436 327460   2704 555568 1130 4927  2564  5486 14539 103001  0 99  0  0  0
125 14 8754920 327980    748 554368  406    0   662     6 1583 10952  0 100  0  0  0
115 16 8752204 327472   1724 553740 2453    5  6014   532 11366 82957  1 99  0  0  0
124 19 8752276 327076   1092 554200   78   28   113    81 1767 11712  1 99  0  0  0
82  8 8757568 346356   2384 556132 1316 3202  4373  3837 6633 42986  1 99  0  0  0
151 57 8756348 329524   3632 559164  828    0  3718   429 2076 11613  3 96  0  0  0
159 49 8755120 328488   3248 557720 1033    9  4298   212 2248 12981  4 95  0  0  0
107 36 8771748 332068   1016 557272 2765 3977  9481  4370 14773 103588  2 98  0  0  0
117 16 8771592 327180   1168 558788  156    0   979    58 1466 9988  1 99  0  0  0
 3 101 7875688 1448712   5300 599300 29898 73754 52723 83528 62497 360936  2 94  0  4  0
 3 96 7874004 1260800   8856 624552 7470    0 14911   569 3814 17649  5  1 33 61  0
 1 92 7859256 558968  11228 649948 5590    0 11118  2798 10027 41964 20  5 12 63  0
 0 96 7915604 327600  14060 652636 4674 11822  7818 12173 9053 22067  8  7 20 65  0
 8 98 7963060 327240  11800 635164 1994 11097  3294 12031 6762 16549  3  7  1 90  0
 1 104 7969976 340320   8812 579564 9600 1558 12002  2998 8588 22984 17 34 10 39  0
 4 89 7642788 666372  10244 603740 7154    0 12148  2375 9681 23420  7  5 22 67  0
 1 94 7642548 327912  12556 625672 7011    3 12173  1704 7865 21465  6  5 18 71  0
 7 95 7630564 327572   3396 557488 5970   33 10358  1827 4756 15284  5 67  3 25  0
113 45 7630124 330244   1900 545524 3819    5  7540  1314 3781 13454  2 88  0 10  0
 0 88 6897380 1321012   3560 573532 3635    1  9076  2802 3414 11193  3 64  7 26  0
 1 94 6889436 981560   6188 624532 8463    0 16870  4122 4949 17243  2  1 31 65  0
 0 79 6889308 890800   8444 666940 7332    0 13390  3077 3337 15930  5  1 33 60  0
 1 74 6878180 797212  10720 715612 6776    0 14430  4653 3424 13069 11  6 37 47  0
 0 141 6876812 406704  12856 739628 7202    0 11406  2342 4737 23064  6  5 24 64  0
 2 66 6874848 336480  13428 656312 6296    1 10126  1848 4375 16755  7  9 32 52  0
 2 69 6874192 338016   8288 577196 7297    0 14230  1208 5771 16410 10  5 38 47  0
 1 72 6873696 342908   3796 576568 6161    1 12562  1149 5506 13421 11  5 34 50  0
 2 78 6873524 336716   3992 589684 5718    0 12693   280 5268 11889  9  2 31 58  0
97 48 6873180 331280   2884 559732 5110    4  7616   934 6994 14502  7 38 12 43  0
120 41 6872636 327192   1504 549972 4309   32  7965  1497 7029 14159  5 92  0  3  0
101 26 6870204 327620   1992 548548 4122   19  8110  1191 9559 20993  4 96  0  0  0
82 16 6886048 327184    984 547140 2050 3186  3662  3385 4978 16982  2 98  0  0  0
83  6 6886384 327096   1376 545516  563   69  1284   184 1946 7812  1 99  0  0  0
93 11 6886364 328100    836 545120  798    0   911    18 5495 23655  0 100  0  0  0
27 42 7008372 328376   3216 550820 7050 26176  9220 26390 9919 42056  0 100  0  0  0
 3 112 7044188 328628   4552 554876 2424 8493  3526  8570 2402 7432  2 78  1 19  0
 3 113 7108916 348172   5532 560420 3186 14382  4325 14512 9715 15529  4  6  4 87  0
 0 125 7136504 366528   7040 567140 4077 10618  5413 10906 15231 26777  4  1 12 83  0
 1 111 7149172 387580   7320 568044  391 4344   719  4394 2460 7519  1  1  0 99  0
 1 94 7178000 328136   9984 580152 5768 7927  8617  8990 20026 30064  8  3  9 80  0
 0 86 7204696 327100  10268 583912 1972 6991  2825  7515 5943 12124  3  2  8 87  0
 0 103 7218656 347300  10564 585360  457 4985   733  5144 1936 8120  0  1  6 93  0
 5 98 7236196 345460  10784 585224 1298 5318  1467  5477 7973 13715  3  2  6 89  0
 1 111 7243488 328920  10932 586500  488 3097   839  3136 1843 7555  1  1  6 92  0
 0 103 7265100 328016  11132 591452 1439 6088  2170  6241 3984 9707  2  1  2 95  0
 2 91 7274576 335116  11272 591240  709 3526   789  3750 2162 7734  3  1  2 95  0
 1 124 7303804 329200  11408 593516 1170 7348  1587  7614 4230 9955  3  1  6 90  0
 4 61 7403224 343940  11724 601684 3143 20878  4191 22518 7000 14053  4  4  3 89  0
11 62 7589208 349448  12504 607764 4350 38374  5784 38847 13038 21696  6  6  9 79  0
 3 56 7777608 338348  13328 614952 6506 38412  7919 38711 11993 18575  6  6 16 71  0
 2 49 8025300 331604  13568 618408 5642 50410  6727 50490 14740 19779  7  7 35 51  0
 1 69 8159548 977560  14304 624024 4725 27423  6727 28257 13122 22037 10  7 12 72  0
 5 50 8159156 869324  18188 647964 9421    0 13542  1034 14117 30070 16  4 24 55  0
 2 46 8158644 768600  20324 663440 10323    0 13395  2023 18356 27259 15  3 52 30  0
 3 43 8158308 551160  22344 678548 12241    0 16135   623 14080 23349 19  3 42 36  0

We got this message in the alert log:

Mon Sep 01 11:09:28 2014
 GES: System Load is HIGH.
 GES: Current load is 204.89 and high load threshold is 80.00

After this the instance crashed:

Instance terminated by LMS0, pid = 4532

Bug 17316830, “ROOT CAUSE ALL DATABASE INSTANCES CRASHED ON ONE OF OUR NODES”, relates high paging and low free memory to the same GES load messages and to instances crashing as ours did.  So, it seems that when you have a lot of paging it can cause an instance to crash.  Is this a bug or a feature? :)

It was interesting to look at the OS Watcher files to see information about the nature of the system when our issues occurred.  Looks like lots of paging.  The only question is what caused the paging?  We are trying to figure out if we just had some abnormally high memory use from our application or if some bug caused some system processes to eat up the normally free memory.  But, I thought I would share the investigation we did using the OS Watcher logs because it was interesting to see an example with paging, even though we have not gotten to the root of the problem.

– Bobby

 

Fixing Windows RAC Listener to Listen On IP Address of Hostname

Pythian Group - Wed, 2014-09-03 08:31

Recently, there was an issue after a node was added to an existing Windows RAC cluster. After everything was set up, the local listener was not listening to the IP address (192.168.0.37) corresponding to the node’s hostname. Oracle version was 11.2.0.4.

The listener.ora was configured properly with the following line:

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent

The above setting works properly in Linux, but it does not in Windows.

The listener.log showed the following error message when the listener started:

TNS-12542: TNS:address already in use
TNS-12560: TNS:protocol adapter error
TNS-00512: Address already in use
64-bit Windows Error: 48: Unknown error

The status of the listener was:

lsnrctl status

LSNRCTL for 64-bit Windows: Version 11.2.0.4.0 - Production on 29-AUG-2014 04:16:42

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.4.0 - Production
Start Date                19-AUG-2014 01:47:12
Uptime                    10 days 2 hr. 29 min. 33 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.38)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully

The node’s VIP address was 192.168.0.38.

To force the local listener to listen on the IP address corresponding to the hostname of the node, the listener.ora file was changed to:

LISTENER=
   (DESCRIPTION=
     (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))
        (address=(protocol=tcp)(host=192.168.0.37)(port=1521))
      )
    )

#LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))           # line added by Agent

After the listener was restarted using:

srvctl stop listener -n DEV-02

srvctl start listener -n DEV-02

The listener started listening on 192.168.0.37, which is the IP address for the hostname (DEV-02):

lsnrctl status

LSNRCTL for 64-bit Windows: Version 11.2.0.4.0 - Production on 01-SEP-2014 17:52:46

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.4.0 - Production
Start Date                01-SEP-2014 17:52:46
Uptime                    0 days 0 hr. 1 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.37)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.38)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully

Loose Coupling and Discovery of Services With Consul — Part 1

Pythian Group - Wed, 2014-09-03 08:16

Today I would like to demonstrate the use of Consul to help achieve loose coupling and discovery of services, two important principles of service-oriented architecture (SOA) present in modern, elastic infrastructures.

If you have ever designed an architecture to be deployed in a cloud computing environment, then you are probably already familiar with the idea that you should design for failure. Having the expectation that one or more components of your infrastructure may fail at any given time forces you to design your services in such a way that they are loosely coupled from one another. This usually means running multiple instances of web, application, and database servers, and making use of load balancers, message queues and / or APIs for communication between them.

The environment is scaled horizontally, by adding more instances to the pool. Because of this, instances need to be able to advertise the service they provide, and discover providers of other services.

Where Does Consul Fit In?

Consul’s introduction page does an excellent job of describing the features it provides and what its basic architecture looks like, but I’ll provide a brief summary here.

Consul is a tool for discovering and configuring services in your infrastructure. At its core, it provides service discovery, health checking, and a key/value store. It supports multiple data centers without having to add a layer of abstraction. Nodes that provide services run a Consul agent which talks to one or more Consul servers. Services or nodes can be discovered by querying any of the Consul servers or agents. Data is exposed through DNS and / or HTTP interfaces.
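
To give a feel for the key/value store in particular (using the HTTP interface of the server we will start further down; the key name is made up for the example), a couple of illustrative curl calls look like this:

$ curl -X PUT -d 'bar' http://172.17.42.1:8500/v1/kv/foo
$ curl http://172.17.42.1:8500/v1/kv/foo

The first call stores the value and returns true; the second returns a JSON document containing the key, its base64-encoded value, and some version metadata.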

Its capabilities overlap with a number of different types of software including: service discovery systems such as ZooKeeper and etcd; configuration management systems such as Puppet or Chef; and monitoring systems such as Nagios or Sensu. Again, there is an excellent page on the Consul website that describes how it compares with these tools.

The Demo Environment

For my Consul demonstration, I elected to use a combination of Docker and Vagrant. Docker because it makes it easy for me to run multiple lightweight containers on the same machine, and Vagrant because it gives me a great deal of flexibility in building and controlling the containers I will be creating. If you are unfamiliar with either of these tools, allow me to provide a brief overview.

Docker

Per Wikipedia, Docker “… is an open-source project that automates the deployment of applications inside software containers”. It consists of Docker Engine, a daemon that runs on the host operating system and manages containers, and Docker Hub, a cloud service similar to GitHub which allows users to share container images and automate workflows.

Docker makes use of the following features of the Linux kernel: cgroups (or control groups), which make it possible to limit and isolate resource usage (CPU, memory, etc) of process groups; and namespace isolation, where process groups’ view of operating environment resources such as process trees, network, and file systems are isolated from one another.

Vagrant

Vagrant is a tool for building complete, reproducible development environments, with a focus on automation. It started out as essentially a ‘wrapper’ for VirtualBox, but now supports VMware, Docker, kvm/libvirt and Amazon EC2 as providers, as well as a number of provisioners including Puppet, Chef, and Docker.

Vagrant was created by Mitchell Hashimoto, who initially maintained it in his free time. After it went on to become wildly successful, he formed HashiCorp so he could work on the product full time. HashiCorp has since released other products including Packer, Serf, Consul, and most recently Terraform.

Installing The Tools

Docker

Ubuntu 14.04 LTS includes Docker in its universe repository under the name docker.io, so it can be installed this way:

$ sudo apt-get update
$ sudo apt-get install docker.io

Keep in mind that this may not be the latest version of Docker. If you would like to try the latest version, you will need to add the Docker repository key and add the repository to apt’s sources list. A script has been made available to automate this process for you, so all that is necessary is to run the following:

$ curl -sSL https://get.docker.io/ubuntu/ | sudo sh

On Fedora, Docker is provided by the package named docker-io. If you are running Fedora 19 and have the (unrelated) docker package installed, you will need to remove it before continuing.

$ sudo yum -y remove docker

With Fedora 21 and later, the docker package’s functionality is provided by another package named wmdocker, and it does not conflict with docker-io. Use the following command to install Docker.

$ sudo yum -y install docker-io

Whichever operating system you are running Docker on, you will likely want to be able to use the commands as your regular, non-privileged user, without having to elevate privileges to root. Therefore, you will probably want to make yourself a member of the docker group so you have access to the socket file used by the various Docker commands.

$ sudo usermod -a -G docker your_name

Vagrant

For Vagrant, you will need at least version 1.6.0 as that is when the Docker provider was introduced. This demonstration was tested with version 1.6.3. To install Vagrant, visit its download page and obtain the appropriate package for your operating system. You can install the package on Ubuntu using the following command:

$ sudo dpkg -i vagrant_1.6.3_x86_64.deb

That’s it. In the next section, we will install Consul and continue with setting up our cluster.

Setting Up The Consul Cluster

Let’s begin by establishing the Consul cluster, which will be used by nodes to register themselves as providers of services, and can be queried to discover which services are being provided.

The first step is to download and install Consul on the host which will be running the Docker containers.

$ wget https://dl.bintray.com/mitchellh/consul/0.3.1_linux_amd64.zip
$ unzip -d /usr/local/bin/ 0.3.1_linux_amd64.zip

Now we can start a Consul server that will bind to the IP address of the docker0 interface on your host.

$ consul agent -server -bootstrap -bind=172.17.42.1 -client=172.17.42.1 -data-dir /tmp/consul

This server will elect itself leader of the cluster (since it is currently the only member), and is what the rest of the Consul servers will connect to when joining. Ultimately, the architecture will look like this:

Consul Demo Architecture

Before we can bring up the rest of the Consul servers, however, we’ll need to do some prep work. As mentioned above, the intent is to use Vagrant with the Docker provider to create this environment. Therefore, let’s begin by creating a working directory to contain our Vagrantfile.

$ mkdir -p vagrant/consul && cd vagrant/consul
$ vagrant init

The contents of the Vagrantfile should look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  JOIN_IP = ENV['JOIN_IP']
  
  # A hash of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.image = 'progrium/consul'
        docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end
end

What should be self-evident is that Vagrant is assigning the value of an environment variable called JOIN_IP to a local variable with the same name, and then iterating over an array of three Consul members, which it will create using the Docker provider.

What might not be so obvious, however, are the docker.image and docker.cmd lines, so I will explain them in greater detail.

docker.image = 'progrium/consul'

This line tells Docker to launch (or ‘run’) an instance of the progrium/consul image as found on the Docker Hub Registry. It is a small container based on BusyBox used to run Consul. The project’s source page can be found on GitHub if you’re interested in learning more. The next line:

docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]

is an array of strings which is used to build a custom command to run on the container. Vagrant will perform string interpolation on the member and JOIN_IP variables, replacing them with the current member’s name and the IP address that was provided via the JOIN_IP environment variable. The end result is that the container runs a command such as this:

/bin/consul agent -config-dir=/config -server -node=consul1 -join=172.17.42.1

Let’s see it in action by telling Vagrant to create our containers.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker
Bringing machine 'consul1' up with 'docker' provider...
Bringing machine 'consul2' up with 'docker' provider...
Bringing machine 'consul3' up with 'docker' provider...
==> consul2: Creating the container...
    consul2:   Name: consul2
    consul2:  Image: progrium/consul
    consul2:    Cmd: -server -node=consul2 -join=172.17.42.1
    consul2: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul2:  
    consul2: Container created: d85fbfacdb45cabc
==> consul2: Starting container...
==> consul2: Provisioners will not be run since container doesn't support SSH.
==> consul1: Creating the container...
    consul1:   Name: consul1
    consul1:  Image: progrium/consul
    consul1:    Cmd: -server -node=consul1 -join=172.17.42.1
    consul1: Volume: /home/bfraser/vagrant/consul:/vagrant
==> consul3: Fixed port collision for 22 => 2222. Now on port 2200.
==> consul3: Creating the container...
    consul3:   Name: consul3
    consul3:  Image: progrium/consul
    consul3:    Cmd: -server -node=consul3 -join=172.17.42.1
    consul3: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul1:  
    consul1: Container created: 413dfa1a63c94bcc
==> consul1: Starting container...
==> consul1: Provisioners will not be run since container doesn't support SSH.
    consul3:  
    consul3: Container created: fb54d80e8ce58a46
==> consul3: Starting container...
==> consul3: Provisioners will not be run since container doesn't support SSH.

This created three containers and substituted the node name and join address as expected. Now let’s see what Docker reports.

$ docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED              STATUS              PORTS                                                                          NAMES
fb54d80e8ce5        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul3             
413dfa1a63c9        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul1             
d85fbfacdb45        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul2             

There are in fact three containers running. Other details are provided, but they aren’t relevant at this point; I just wanted to show you how to view the status of your newly created Docker containers.
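
One thing that is often handy at this stage: since we never attach to these containers, docker logs will show what the Consul agent inside each one has been printing (a quick sketch; any of the container names from the Vagrantfile will do):

$ docker logs consul1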

Now if we check Consul, we should see each of the containers listed as members of the cluster (note: we can’t just run consul members as we need to tell it which IP address to use as the RPC address).

$ consul members -rpc-addr=172.17.42.1:8400
Node     Address           Status  Type    Build  Protocol
laptop   172.17.42.1:8301  alive   server  0.3.0  2
consul2  172.17.0.5:8301   alive   server  0.3.0  2
consul1  172.17.0.6:8301   alive   server  0.3.0  2
consul3  172.17.0.7:8301   alive   server  0.3.0  2

As you can see, we have successfully created a cluster of Consul servers, comprised of an initial server running on the host and three servers running as Docker containers created by Vagrant. In my next post, we will see how to add Consul clients providing services (‘providers’), register the services, and query available services from clients (‘consumers’).

12.1.0.2 Introduction to Zone Maps Part I (Map Of The Problematique)

Richard Foote - Wed, 2014-09-03 02:06
Zone Maps are new index-like structures that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table. A zone is simply a range of contiguous blocks within a table. Zone Maps are similar in concept to Exadata storage […]

Data Profile: Better Knowing Your Data

Pythian Group - Tue, 2014-09-02 08:40

Have you ever needed to start a new data analysis project or create a report for the business users querying a database you never worked before? Or simply know the data distribution of a database to create better indexing strategies?

Working as a consultant, I constantly face this challenge of having to work with a customer’s database that I don’t know very deeply. For instance, does that “Gender” column store data as “M” and “F”, or as “Male” and “Female”? Or do they even use a bit column for that? (Yeah, I have seen that a lot already.) Does that “Surname” column accept NULL values? If so, what percentage of the table contains NULL for that specific column? In a date/time column, what are the minimum and maximum values, so I can create my “Time” dimension in a data warehouse?

This data discovery process, where I need an overview of the data, usually takes a lot of time and a lot of query writing: DISTINCT, MIN, MAX, AVG kinds of queries, followed by analyzing the result of each one. Even with a lot of really good code and third-party tools out there, it is a cumbersome task, and sometimes the customer is not willing to wait while I learn everything about their environment before expecting results.

Today I want to show you a not-so-new feature that we have in SQL Server that will help with the data discovery process. The feature is the Data Profiler Task in SQL Server Integration Services and the Data Profile Viewer.

Now, that’s the time when you ask me, “Data what?!”

It’s easy, you’ll see. One of the several tasks in SQL Server Integration Services that you never use, and never took the time to Google what it is for, is called the Data Profiling Task. This task allows you to select a table and the kind of data analysis you want to run on that table and its columns. When you run the SSIS package, it will analyze the table and generate an XML file. Once you have the XML file, all you need to do is open it using the Data Profile Viewer, which takes care of creating a nice user interface for you to analyze the XML, as you can see in Figure 1.

 

DataProfile-Image1

Figure 1: Data Profile Viewer

Cool, now let’s see how to create our own analysis.

Step 1: Open SQL Server Data Tools, or BIDS if you’re using SQL Server 2008 R2 or below

Step 2: Create a new SSIS project

Step 3: Add the Data Profiling Task to your project

DataProfile-Image2

Step 4: Double-click the Data Profiling Task so we can configure it. In the General tab we have to set the Destination, that is, the location where you want to save the XML file. You can choose to save directly to the file system using a File Connection, or store the output in an XML variable inside your package in case you want to do something else with the XML, maybe store it in a database. Let’s leave the default FileConnection option for the Destination Type option and click New File Connection in the Destination option.

DataProfile-Image3

Step 5: Now we can choose the file location. In my example I am using one of the most-used folders ever on Windows: the “tmp” folder, sometimes also called “temp” or just “stuff”. (Note: the author doesn’t recommend storing everything in folders called temp, nor saving everything on the desktop.)

DataProfile-Image4

Step 6: OK, we’re back at the main window; now we have to choose which kind of analysis we want to run, along with the database and the table. We have two options. The first is to use the Profile Requests tab and choose the data analyses, table, and columns one by one. The other option, and also the simplest one, is to use the Quick Profile tab, where we can define one specific table and the analyses we want to run on it. If you want to run the analysis against multiple tables, you will have to go through the Quick Profile option for each one (nothing in this world is perfect).

DataProfile-Image5

As you can see in the image above, I have chosen the Production.Product table of the AdventureWorks2012 database. In the Compute option you choose which data analyses you want to run. The option names mostly explain what they do, but if you want a detailed explanation of each one, you can check the product documentation at this link: http://technet.microsoft.com/en-us/library/bb895263.aspx

Now all you have to do is to run the SSIS package to create the XML file. Once you’re done, you can use the Data Profile Viewer tool to open the XML and analyze its results.

DataProfile-Image6

The Data Profile Viewer is a simple tool that doesn’t need much explanation, just try it for yourself and you’ll certainly like the result.

I hope this can help you to save some time when you need to quickly learn more about the data inside a database. If you have any questions or want to share what your approach is when you need to complete this task, just leave a comment!

 

Categories: DBA Blogs

Oracle Database: Script to Purge aud$ Table Using dbms_audit_mgmt Package

Pythian Group - Tue, 2014-09-02 07:59

With increasing database security requirements, database auditing is part of most production databases nowadays. The growth of the Oracle database audit table aud$ always adds spice to a DBA’s life: because its growth directly impacts database performance, this table has a special place in every DBA’s heart.

Traditionally we follow various methods to purge old data from this table, and most of them require application downtime. Oracle introduced a package named DBMS_AUDIT_MGMT, which gives a DBA more control over the management of audit records.

I tested this package on my 11gR2 test database. It can also be used from Oracle RDBMS version 10.2.0.3.0 onward, but the required patches must be applied on 10.2.0.3 (Patch 6989148) and 10.2.0.4 (Patch 6996030); the package is installed by default on versions 10.2.0.5 and 11.1.0.7. I set the audit_trail parameter to db_extended and enabled database auditing. Later I moved the audit table and its associated LOB segments to a dedicated tablespace named AUDTBS, and confirmed that the audit functionality was running fine after the tablespace change.
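For reference, one supported way to do that relocation is DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION. A minimal sketch, assuming the AUDTBS tablespace mentioned above and run as SYS (moving a large audit trail can take a while):

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDTBS');  -- target tablespace for the standard audit trail
END;
/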

Thanks to MOS notes 1362997.1 and 1508787.1, I successfully tested this purge procedure. I configured the database scheduler jobs to run once every 12 hours and purge data older than 7 days from the aud$ table. Here is the script (purge_job.sql) I used to configure the required jobs.

Script: purge_job.sql ==> Run this script as SYS database user account.

prompt start of the script
set serveroutput on
prompt Change based on our customization done
update dam_config_param$ set string_value='AUDTBS' where audit_trail_type#=1 and param_id=22;
commit;

prompt First Step: init cleanup (if not already)

BEGIN
IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED
(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
THEN
dbms_output.put_line('Calling DBMS_AUDIT_MGMT.INIT_CLEANUP');
DBMS_AUDIT_MGMT.INIT_CLEANUP(
audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
default_cleanup_interval => 12);
else
dbms_output.put_line('Cleanup for STD was already initialized');
end if;
end;
/

prompt revert back to default values again
update dam_config_param$ set string_value='SYSAUX' where audit_trail_type#=1 and param_id=22;
commit;

prompt set last archive timestamp to older than 7 days

begin
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
last_archive_time => sysdate - 7);
end;
/

prompt setup a purge job

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
AUDIT_TRAIL_PURGE_INTERVAL => 12,
AUDIT_TRAIL_PURGE_NAME => 'Standard_Audit_Trail_PJ',
USE_LAST_ARCH_TIMESTAMP => TRUE );
END;
/

prompt call DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP regularly to advance the last archive timestamp

create or replace procedure set_archive_retention
(retention in number default 7) as
begin
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
last_archive_time => sysdate - retention);
end;
/

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'advance_archive_timestamp',
job_type => 'STORED_PROCEDURE',
job_action => 'SET_ARCHIVE_RETENTION',
number_of_arguments => 1,
start_date => SYSDATE,
repeat_interval => 'freq=hourly;interval=12',
enabled => false,
auto_drop => FALSE);
dbms_scheduler.set_job_argument_value
(job_name => 'advance_archive_timestamp',
argument_position => 1,
argument_value => 7);
DBMS_SCHEDULER.ENABLE('advance_archive_timestamp');
End;
/

BEGIN
DBMS_SCHEDULER.run_job (job_name => 'advance_archive_timestamp',
use_current_session => FALSE);
END;
/

prompt End of the script

To verify the purge status and the status of the configured jobs, execute the following queries.

SQL> select min(NTIMESTAMP#) from aud$;
SQL> select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE from dba_scheduler_jobs where job_name='ADVANCE_ARCHIVE_TIMESTAMP';
SQL> select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE from dba_scheduler_jobs where job_name='STANDARD_AUDIT_TRAIL_PJ';

We can definitely customize this script based on the requirements, which differ for each database, but testing on a cloned database is required before configuring these purge jobs on the production database.
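As one example of such a customization, the retention passed to the scheduler job can be changed without recreating anything. This is a minimal sketch, assuming the job created by the script above; the 14-day value is only an illustration:

BEGIN
  dbms_scheduler.set_job_argument_value(
    job_name          => 'advance_archive_timestamp',
    argument_position => 1,
    argument_value    => 14);  -- new retention in days (example value)
END;
/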

 

Categories: DBA Blogs

Azure Storage: Creating, Maintaining, and Deleting SQL Server Backups

Pythian Group - Tue, 2014-09-02 07:50

This post covers how to create, monitor, maintain, and automatically delete SQL Server backups.

What is Azure Storage and why should you use it?

Microsoft Windows Azure is Microsoft’s cloud offering for offsite storage. It offers the ability to seamlessly enable massive storage, Virtual Servers, SQL Server database instances, and many other options without having to worry about the hardware or maintenance in house.

Many companies are currently using Azure as offsite storage for their nightly Production backups. A company chooses one of 15 datacenters that Microsoft has around the world. This datacenter automatically and transparently maintains three copies of each backup file, and also replicates to a second datacenter in a different geographic location. The replication is not real-time, but in general there will always be six copies of each backup file available in case of an emergency.

In the event the Primary datacenter fails, Microsoft will decide when or if to fail over to the Secondary datacenter. However, in the coming months they plan to roll out an API which would allow individual clients to make that decision.

SQL Server 2012 SP1 CU6+ is required.

The current pricing is about $90 per month per TB of storage used.

Accessing the Azure front end
To access the Azure front end:

  • Open Internet Explorer and navigate to http://portal.azure.com.
    • You will be prompted to login with a Microsoft MSDN Account.
  • The Azure administrator in your company should have granted this account access.
  • Click on the Azure Portal icon to bring up the Azure Main Page for your account.
  • Click on the Storage Icon on the left.
  • Drill down into your storage account to open the Storage Main Page.
  • Click on the Containers tab
  • Drill down into your container. This is a list of all of the backups being written to the Production container.

The backups are ordered by their name, and unfortunately there is no way to sort by Date Modified or other field. You can see more backups than are listed on the front page by clicking on the arrow at the bottom left.

Checking the status of an Azure backup file
To check the status of a backup to Azure storage, you have two options:
1. The Azure Portal
2. SQL Server Management Studio

To use the Azure Portal, navigate to the container the backup is being written to, and find it in the list of files. If the size is 1 TB, then the backup is currently running. Using the Azure Portal, you can only see if a backup file is being created. You cannot see how much time is remaining until it is done.

To use SQL Server Management Studio, open SSMS and connect to the instance running the backups. Run the following command to get percent done & estimated time to completion:


SELECT
r.session_id,
r.command,
CONVERT(NUMERIC(6,2),r.percent_complete) AS [PercentComplete],
CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2, CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END) FROM sys.dm_exec_sql_text(sql_handle)))
FROM
sys.dm_exec_requests r
WHERE
command = 'BACKUP DATABASE'

Backing up to an Azure storage container
In order to write a backup file to a Microsoft Azure storage container, two items are needed:

      • A SQL Server credential to connect to the Azure Account
        • This requires the Account Name and a Storage Key.
      • The container URL

To find the Azure Account Name and Storage Key, navigate to the Azure Main Page. At the bottom of the page, there is a “Manage Access Keys” icon. Clicking on this icon brings up the Account Name and the Primary & Secondary Access Keys.

1. Copy the Account Name and one of the Access Keys. DO NOT REGENERATE THE KEYS.
2.    Copy the URL.
3.    Open SQL Server Management Studio and connect to the RLPRODMSSQL01 instance.
4.    From a new query window, run the following command:


CREATE CREDENTIAL AzureBackups
WITH
IDENTITY = 'Account Name',
SECRET = 'Storage Key'

5.    Run this same command on all instances that will back up to this container.
6.    Run the following command to back up a database to the Azure Storage container:


BACKUP DATABASE db_name
TO URL = 'Container URL + Backup File Name'
WITH
CREDENTIAL = 'AzureBackups'

Restoring from an Azure Storage Container
To restore from an Azure Storage Container, two items are needed:

1. A credential to connect to the Azure Account (See steps 1-4 of Backing up to an Azure Storage Container)
2. The backup file URL

To get the backup file URL, navigate to the container where the backup file is stored. The URL is to the left of the backup name.

1. Copy the URL.
2. Run the following command on the instance you want to restore the database onto:


RESTORE DATABASE db_name
FROM URL = 'Backup File URL'
WITH
CREDENTIAL = 'AzureBackups'

Deleting SQL Server backups from Azure storage

In SQL Server 2012, Azure storage is not fully integrated with Maintenance Plans and deleting old backups is a manual process. This causes issues, because there is no way to quickly delete a batch of backups, and if this is forgotten for a few days then the cost of storage begins to rise quickly.

I have written the below code to create an executable that will connect to the Azure storage container and delete any backups older than x days.

In addition, the code can check for any backups that have a “locked lease”, break the lease, and then delete them.

The parameters for the executable are:

  • Parameter 1 – MS Azure Account Name (string)
  • Parameter 2 – MS Azure Storage Key (string)
  • Parameter 3 – Azure Container Name (string)
  • Parameter 4 – Number of days backups to retain (positive integer)
  • Parameter 5 – File type to delete (.bak, .trn, etc..)
  • Parameter 6 – Delete backups with locked lease? (True/False)
    • Note that a True value for parameter 6 will cause the executable to ignore parameters 4 and 5.
    • This is meant to be run after a failed backup job.
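Putting these parameters together, a purely hypothetical invocation might look like DeleteAzureBackups.exe myaccount <storage key> production-backups 7 .bak False; the executable and container names here are assumptions, so substitute your own build output and container.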

In order to work, the executable will need the Windows Azure Storage Client Library.

Alternatively, you can download and run the executable using this ZIP file.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

namespace DeleteAzureBackups
{
class Program
{

static void Main(string[] args)
{
if (args.Length != 6) { Console.WriteLine("Please run with correct number of parameters. Type ? for help."); return; }

if (args[0] == "?" || args[0] == "help" || args[0] == "h")
{
Console.WriteLine("==============================================================");
Console.WriteLine("Pythian Azure Backup Delete Utility");
Console.WriteLine("");
Console.WriteLine("Parameter 1 : String : MS Azure Account Name");
Console.WriteLine("Parameter 2 : String : MS Azure Account Key");
Console.WriteLine("Parameter 3 : String : Container Name");
Console.WriteLine("Parameter 4 : Positive Integer : Number of days to retain backups");
Console.WriteLine("Parameter 5 : String : File type to delete (.bak, .trn, etc...)");
Console.WriteLine("Parameter 6 : True/False : Delete backups with locked leases (will ignore Parameters 4 & 5)");
Console.WriteLine("==============================================================");
return; // stop after printing help instead of treating "?" as an account name
}

// Account name and key.
string accountName = args[0].ToLower(); //Account Name
string accountKey = args[1]; //Account Key
string containerName = args[2]; //Container Name
int numberOfDays = Int16.Parse(args[3]); //Number of Days before deleting
string fileType = args[4];
bool deleteLockedBlobs = bool.Parse(args[5]);

try
{
CloudBlobContainer container = openConnection(accountName, accountKey, containerName);

if (!deleteLockedBlobs)
{ deleteAzureBackups(container, numberOfDays, fileType); }
else
{ unlockLeasedBlobs(container); }

}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}

return;
}

static CloudBlobContainer openConnection(string accountName, string accountKey, string containerName)
{
try
{
//Get a reference to the storage account, with authentication credentials
StorageCredentials credentials = new StorageCredentials(accountName, accountKey);
CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, true);

//Create a new client object.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference(containerName);

return container;
}
catch (StorageException ex)
{
Console.WriteLine(“Failed to create connection to MS Azure Storage.”);
Console.WriteLine(ex.Message);
return null;
}
}

static void deleteAzureBackups(CloudBlobContainer container, int numberOfDays, string fileType)
{
DateTimeOffset now = DateTimeOffset.Now;

foreach (IListBlobItem item in container.ListBlobs(null, false))
{
CloudPageBlob blob = (CloudPageBlob)item;

//If date blob was last modified is more than x days out, then it gets deleted.
if ((now - blob.Properties.LastModified.Value).Days >= numberOfDays &&
blob.Name.Substring(blob.Name.Length - fileType.Length) == fileType)
{
deleteBlob(blob);
}
}
}

static void unlockLeasedBlobs(CloudBlobContainer container)
{
foreach (IListBlobItem item in container.ListBlobs(null, false))
{
CloudPageBlob blob = (CloudPageBlob)item;

if (blob.Properties.LeaseStatus == LeaseStatus.Locked)
{
try
{
Console.WriteLine("Breaking lease on {0} blob.", blob.Name);
blob.BreakLease(new TimeSpan(), null, null, null);
Console.WriteLine("Successfully broken lease on {0} blob.", blob.Name);

deleteBlob(blob);
}
catch (StorageException ex)
{
Console.WriteLine("Failed to break lease on {0} blob.", blob.Name);
Console.WriteLine(ex.Message);
}
}
}
}

static void deleteBlob(CloudPageBlob blob)
{
try
{
Console.WriteLine("Attempting to delete {0}", blob.Name);
blob.Delete(DeleteSnapshotsOption.IncludeSnapshots);
Console.WriteLine("Successfully deleted {0}", blob.Name);
}
catch (StorageException ex)
{
Console.WriteLine("Failed to delete {0}.", blob.Name);
Console.WriteLine(ex.Message);
}
}
}
}

 

Categories: DBA Blogs

Webcast - Oracle Multitenant Option

On their road to adoption of Oracle Database 12c, many are considering the Multitenant Database Architecture, where a multitenant container database can hold many pluggable databases, enabling...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Windows 7 error “key not valid for use in specified state”

The Oracle Instructor - Sun, 2014-08-31 00:37

When you see that error upon trying to install or upgrade something on your Windows 7 64-bit machine, chances are that it is caused by a Windows Security update that you need to uninstall. There is probably no point in messing around with the registry or the application that you want to upgrade. Instead, remove the Windows update KB2918614 like this:

Open the control panel, then click  Windows Update

Windows 7 control panel

Click Update History and then Installed Updates:

Update History

Scroll down to Microsoft Windows and look for KB2918614 (I have removed it already before I took the screenshot):

Uninstall KB2918614

Finally, hide that update so you don’t get it installed later on again:

Hide KB2918614

I’m using a corporate notebook with automatic Windows security updates coming in from time to time, and I encountered that problem while trying to upgrade VirtualBox to version 4.3.12. It is not a VirtualBox issue, though; other installs or upgrades may fail for the same reason. For me, this was a serious problem, because I rely on virtual machines for many demonstrations. Kudos to the virtualbox.org forums! They helped me resolve that problem within a day. Thank you once again, guys! :-)
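As a side note, if you prefer the command line, the same update can usually be removed from an elevated command prompt with wusa.exe /uninstall /kb:2918614 (followed by a reboot); this is simply an alternative to the Control Panel route shown above.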


Tagged: #KB2918614
Categories: DBA Blogs

OSCON 2014: Complete Video Compilation

Surachart Opun - Sat, 2014-08-30 03:30
OSCON 2014 - Today, it is not only developers, system administrators, and organizations who use open source; businesses have embraced it as well, so you cannot ignore it. At OSCON, you'll encounter the open source ecosystem and dig deep into the business of open source.

Five reasons to attend OSCON: get straight to the epicenter of all things open source and get better at what you do; learn from the best and make valuable connections; get solutions to your biggest challenges that you can apply today; see the latest developments, products, services, and career trends; and hear it first at OSCON.

Attending OSCON is a very good idea, and if you missed OSCON 2014, I recommend OSCON 2014: Complete Video Compilation. You can download these videos or view them through O'Reilly's HD player, and learn about open source from more than 350 presenters, including Matthew McCullough (GitHub), Leslie Hawthorn (Elasticsearch), James Turnbull (Docker), Andrei Alexandrescu (Facebook), Tim Berglund (DataStax), Paco Nathan (Zettacap), Kirsten Hunter (Akamai), Matt Ray (Chef Software, Inc.), and Damian Conway (Thoughtstream). In these videos you are able to explore a wide range of tracks (Business, Cloud, Community, Computational Thinking, Databases & Datastores, Education, Emerging Languages, Geek Lifestyle, Java & JVM, JavaScript - HTML5 - Web, Mobile Platforms, Open Hardware, Operations & System Admin, Perl, PHP, Python, Security, Tools & Techniques, User Experience).

As mentioned, you will be able to learn from many tracks. In addition, O'Reilly has improved video streaming and downloading, with useful features such as playback speed control and mobile viewing.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Using #DB12c PDB as a Repository for #EM12c…. Not so fast!

DBASolved - Fri, 2014-08-29 14:51

Last year, I wrote a post on how to set up Oracle Enterprise Manager 12c (12.1.0.3) with Oracle Database 12c (12.1.0.1) and what you could expect. At the time the configuration was not supported by Oracle, and I only did it to see if it could be done. If you would like to read that post, it can be found here.

This past May (2014), I attended the Oracle Customer Advisory Board (CAB) for Oracle Enterprise Manager 12c. It was a great gathering of power users and customers of the Oracle Enterprise Manager product line, with a lot of useful information being shared. One thing that came out of the CAB is that Oracle Enterprise Manager 12c (12.1.0.4) is now fully supported against Oracle Database 12c (12.1.0.1 or later). During the discussion, I asked if Pluggable Databases (PDBs) would be supported for the repository database; I was surprised when the initial answer was “Yes”. I was excited, to say the least, and wanted to try it out. The ability to house multiple monitoring tools in a single CDB via multiple PDBs would shrink the footprint of many monitoring application databases within organizations.

Unfortunately, Oracle and I don’t share the same outlook on that use case. After doing more research, more installations, and discussing the use case with a few friends inside and outside of Oracle, I can safely say that Pluggable Databases (PDBs) are not supported as an Oracle Management Repository (OMR). Although a Pluggable Database (PDB) is not supported as the OMR, Oracle Enterprise Manager 12c (12.1.0.4) can still be run with an Oracle Database 12c (12.1.0.1 or later) OMR as long as it is a traditional (non-CDB) database.

In summary, I’ve answered two (2) questions for you.

1. Are Pluggable Databases (PDBs) supported as Oracle Management Repositories for EM12c?  -> NO

2. Can an Oracle Database 12c (12.1.0.1 or later), in traditional (non-CDB) mode, be used as an Oracle Management Repository? -> YES

In Image 1, you can see that I have EM12c (12.1.0.4) running with a DB12c repository.

Image 1:

12104_db12_oem.png

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Database, OEM
Categories: DBA Blogs

Log Buffer #386, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-29 08:16

If you ever wanted an easy-peasy way to get a few of the best blog posts of the week from Oracle, SQL Server, and MySQL, then the Log Buffer editions are the place to be.

Oracle:

The Product Management team have released a knowledge article for Enterprise Performance Management (EPM) 11.1.2.2.x and 11.1.2.3.x containing details for EPM support with Internet Explorer (IE) 11.

As if anyone needs to be reminded, there’s a ridiculous amount of hype surrounding clouds and big data. There’s always oodles of hype around any new technology that is not well understood.

By mapping an external table to some text file, you can view the file contents as if it were data in a database table.

Vikram has discovered the adopreports utility in R12.2.

As a lot of the new APEX 5 features are “by developers for developers”, this one is also a nifty little thing that make our lives easier.

SQL Server:

Data Mining: Part 15 Processing Data Mining components with SSIS.

SQL Server AlwaysOn Availability Groups Fail the Initial Failover Test.

Stairway to PowerPivot and DAX – Level 6: The DAX SUM() and SUMX() Functions.

Questions about T-SQL Expressions You Were Too Shy to Ask

SQL Server Service Engine fails to start after applying CU4 for SQL Server 2008 SP1.

MySQL:

MySQL for Visual Studio 1.2.x recently became a GA version. One of the main features included in this version was the new MySQL ASP.NET MVC Wizard.

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More.

Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL

InnoDB provides a custom mutex and rw-lock implementation.

You probably already know that Sphinx supports MySQL binary network protocol. But, if you haven’t heard– Sphinx can be accessed with the regular ol’ MySQL API.

Categories: DBA Blogs

Presenting at OOW 2014

DBASolved - Fri, 2014-08-29 07:26

This year I’ll be presenting at Oracle Open World with many of the best in the industry. If you are going to be in the San Francisco area between September 28 and October 2, 2014, stop by and check out the conference. Registration information can be found here.

The topics which I’ll be presenting or assisting with this year are:

  • OTN RAC Attack – Sunday, September 28, 2014 – 9 am – 3 pm PST @ OTN Lounge
  • How many ways can I monitor Oracle GoldenGate – Sunday, September 28, 2014 – 3:30 pm – 4:15 pm PST @ Moscone South 309
  • Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata – Thursday, October 2, 2014 10:45 am – 11:30 am PST @ Moscone South 310

Hope to see you there!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Pythian Group - Thu, 2014-08-28 12:45

A common concern in the MySQL community is how to best implement high availability for MySQL. There are various built-in mechanisms to accomplish this such as Master/Master and Master/Slave replication using binary logs as well as FOSS solutions such as Galera and Tungsten, just to name a few. Often times, IT Managers and DBAs alike opt to avoid implementing a third party solution due to the added administrative overhead without fully evaluating the available solutions. In today’s blog post, I would like to describe the process for configuring a Master/Slave topology and switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim in the area of MySQL enterprise database implementations; however, many teams tend to stay away from it to avoid over-complicating the replication topology. I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide covers a 2-node implementation, but I will describe the additional steps that would be required to implement these topologies for N nodes).

The 2 nodes I will be using are vm128-142 and vm129-117. The first part of the document contains the steps that need to be performed on both nodes, and the latter part describes the steps to be performed on either one of the two nodes. As soon as Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three” – all it requires is running the script that configures the topology of your choice. The main topologies available are:

  • Master – Slave: Replication flowing from 1 .. N nodes using Tungsten Replicator
  • Master – Master: Bi-directional replication for 1 .. N nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single slave node with replication from 1 .. N Master nodes

(Check out https://code.google.com/p/tungsten-replicator/wiki/TRCMultiMasterInstallation for further details)

So, let’s continue with the actual steps required (please note I’m using the “root” account with SSH passwordless authentication for the purposes of this article; it is best to define a dedicated user on production systems). The parameters and values that require customization for your system / topology are shown in red in the original post, and the configuration file contents appear as the indented blocks below:

### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 # http://mariadb.org/mariadb/repositories/
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/5.5/centos6-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 
yum install http://www.percona.com/downloads/XtraBackup/LATEST/binary/redhat/6/x86_64/percona-xtrabackup-2.2.3-4982.el6.x86_64.rpm
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1

wget http://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-2.2.1-403.tar.gz
tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/COMMON_NODES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 export NODE1=vm128-142.dlab.pythian.com
 export NODE2=vm129-117.dlab.pythian.com
 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8

vi cookbook/USER_VALUES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 ##############################################################################
 # Options used by the "direct slave " installer only
 # Modify only if you are using 'install_master_slave_direct.sh'
 ##############################################################################
 export DIRECT_MASTER_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_SLAVE_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_MASTER_MY_CNF=$MY_CNF
 export DIRECT_SLAVE_MY_CNF=$MY_CNF
 ##############################################################################

 ##############################################################################
 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup
 ##############################################################################
 [ -z "$STOP_REPLICATORS" ] && export STOP_REPLICATORS=1
 [ -z "$REMOVE_TUNGSTEN_BASE" ] && export REMOVE_TUNGSTEN_BASE=1
 [ -z "$REMOVE_SERVICE_SCHEMA" ] && export REMOVE_SERVICE_SCHEMA=1
 [ -z "$REMOVE_TEST_SCHEMAS" ] && export REMOVE_TEST_SCHEMAS=1
 [ -z "$REMOVE_DATABASE_CONTENTS" ] && export REMOVE_DATABASE_CONTENTS=0
 [ -z "$CLEAN_NODE_DATABASE_SERVER" ] && export CLEAN_NODE_DATABASE_SERVER=1
 ##############################################################################


 #
 # Local values defined by the user.
 # If ./cookbook/USER_VALUES.local.sh exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/USER_VALUES.local.sh ]
 then
 . $cookbook_dir/USER_VALUES.local.sh
 fi

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 #
 # See the examples of server my.cnf files in /usr/share/mysql/
 #

 # this is read by the standalone daemon and embedded servers
 [server]

 # this is only for the mysqld standalone daemon
 [mysqld]
 open_files_limit=65535
 innodb-file-per-table=1
 server-id=1 # make server-id unique per server
 log_bin
 innodb-flush-method=O_DIRECT
 max_allowed_packet=64M
 innodb-thread-concurrency=0
 default-storage-engine=innodb
 skip-name-resolve

 # this is only for embedded server
 [embedded]

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand
 [mysqld-5.5]

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
 [mariadb]

 [mariadb-5.5]

service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/id_rsa.pub | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 ~/.ssh/authorized_keys

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/NODES_MASTER_SLAVE.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/COMMON_NODES.sh ]
 then
 . $CURDIR/COMMON_NODES.sh
 else
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=
 fi

 export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)
 export SLAVES=($NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)

## The following commands should be performed on just one of the nodes
## In my case either vm128-142 OR 129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 --------------------------------------------------------------------------------------
 Topology: 'MASTER_SLAVE'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node vm129-117.dlab.pythian.com
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 --------------------------------------------------------------------------------------
 Topology: 'ALL_MASTERS'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node vm129-117.dlab.pythian.com
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE

Categories: DBA Blogs

12c: How to Restore/Recover a Small Table in a Large Database

Pythian Group - Thu, 2014-08-28 09:35

As a DBA, you will receive requests from developers or users, indicating that they deleted some data in a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring a full database using RMAN might take you over 10 hours, and you will need a spare server with big storage. Looks like it’s going to be a difficult and time consuming task for you.

In Oracle Database 12c, there is a method available that allows us to recover the table more efficiently and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces plus the individual tablespaces that contain the data we want to restore. After the restore is complete, we take any tablespaces that we did not restore offline, and then apply the archived redo logs to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we use Oracle Data Pump to export the objects and then import them into the original database, again using Oracle Data Pump. Oracle Database 12c introduces new functionality in RMAN that automates all of this and supports point-in-time restore of individual database tables and individual table partitions.

Here is an example of when I tested this new feature:

1. The database TEST has 9 tablespaces and a schema called Howie. I created a table with 19377 records called TEST1 which is in the tablespace DATA_HOWIE.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO  CON_ID INSTANCE_MO EDITION FAMILY
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- --- ---------- ----------- ------- --------------------------------------------------------------------------------
1 TEST             12cServer1                                                       12.1.0.1.0        17-AUG-14 OPEN         NO           1 STARTED                 ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMALNO            0 REGULAR     EE

SQL> select tablespace_name from dba_tablespaces order by tablespace_name;

TABLESPACE_NAME
------------------------------
DATA_HOWIE
DATA_TB1
DATA_TB2
DATA_TB3
SYSAUX
SYSTEM
TEMP
UNDOTBS1
USERS

9 rows selected.

SQL> conn howie
Enter password:
Connected.
SQL> create table test1 as select * from dba_objects;

Table created.

SQL> select count(*) from test1;

COUNT(*)
----------
19377

SQL> select table_name,tablespace_name from user_tables where table_name='TEST1';

TABLE_NAME                                                                                                                       TABLESPACE_NAME
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------
TEST1                                                                                                                            DATA_HOWIE

2. The database is in archivelog mode, and I took a full backup of the database.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 20:16:17 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> run
{
allocate channel d1 type disk format '/u01/app/oracle/RMAN/rmn_%d_t%t_p%p';
backup
incremental level 0
tag backup_level0
filesperset 1
(database)
plus archivelog ;
release channel d1;
}2> 3> 4> 5> 6> 7> 8> 9> 10> 11>

3. The data in the table howie.test1 has been deleted.

SQL> select sysdate,current_scn from v$database;

SYSDATE             CURRENT_SCN
------------------- -----------
08/17/2014 21:01:15      435599

SQL> delete test1;

19377 rows deleted.

SQL> commit;

Commit complete.

4. I ran the following command to recover the data, as of the point in time "08/17/2014 21:01:15", into an alternative table howie.test1_temp.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 21:01:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> recover table howie.test1
until time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u01/app/oracle/aux'
remap table howie.test1:test1_temp;2> 3> 4>

5. The command above takes care of everything, and you will see that the data has been restored to howie.test1_temp.

SQL> select count(*) from TEST1_TEMP;

COUNT(*)
----------
19377

SQL> select count(*) from TEST1;

COUNT(*)
----------
0

Let’s take a look at the log of RMAN recovery and find out how it works.

1. Creation of the auxiliary instance

Creating automatic instance, with SID='ktDA'

initialization parameters used for automatic instance:
db_name=TEST
db_unique_name=ktDA_pitr_TEST
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/aux
log_archive_dest_1='location=/u01/app/oracle/aux'
#No auxiliary parameter file used

2. Restore of the control file for the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
}

3. A list of datafiles that will be restored, followed by their restore and recovery in the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "DATA_HOWIE", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

4. Export of tables from the auxiliary instance via Oracle Data Pump

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_ktDA_BAkw":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 3 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "HOWIE"."TEST1"                             1.922 MB   19377 rows
EXPDP> Master table "SYS"."TSPITR_EXP_ktDA_BAkw" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_ktDA_BAkw is:
EXPDP>   /u01/app/oracle/aux/tspitr_ktDA_70244.dmp
EXPDP> Job "SYS"."TSPITR_EXP_ktDA_BAkw" successfully completed at Sun Aug 17 21:03:53 2014 elapsed 0 00:00:14
Export completed

5. Import of tables, constraints, indexes, and other dependent objects into the target database from the Data Pump export file

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_ktDA_lube" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_ktDA_lube":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "HOWIE"."TEST1_TEMP"                        1.922 MB   19377 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_ktDA_lube" successfully completed at Sun Aug 17 21:04:19 2014 elapsed 0 00:00:19
Import completed

6. Clean-up of the auxiliary instance

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_temp_9z2yqst6_.tmp deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_3_9z2yrkqm_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_2_9z2yrj35_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_1_9z2yrh2r_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/datafile/o1_mf_data_how_9z2yrcnq_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_sysaux_9z2yptms_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_undotbs1_9z2yq9of_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_system_9z2yp0mk_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/controlfile/o1_mf_9z2yos1l_.ctl deleted
auxiliary instance file tspitr_ktDA_70244.dmp deleted
Finished recover at 17-AUG-14
Categories: DBA Blogs