
DBA Blogs

Fixing Windows RAC Listener to Listen On IP Address of Hostname

Pythian Group - Wed, 2014-09-03 08:31

Recently, there was an issue after a node was added to an existing Windows RAC cluster. After everything was set up, the local listener was not listening to the IP address (192.168.0.37) corresponding to the node’s hostname. Oracle version was 11.2.0.4.

The listener.ora was configured properly with the following line:

LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent

The above setting works properly on Linux, but does not on Windows.

The listener.log showed the following error messages when the listener started:

TNS-12542: TNS:address already in use
TNS-12560: TNS:protocol adapter error
TNS-00512: Address already in use
64-bit Windows Error: 48: Unknown error

The status of the listener was:

lsnrctl status

LSNRCTL for 64-bit Windows: Version 11.2.0.4.0 - Production on 29-AUG-2014 04:16:42

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.4.0 - Production
Start Date                19-AUG-2014 01:47:12
Uptime                    10 days 2 hr. 29 min. 33 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.38)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully

The node’s VIP address was 192.168.0.38.

To force the local listener to listen on the IP address corresponding to the node’s hostname, the listener.ora file was changed to:

LISTENER=
   (DESCRIPTION=
     (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))
        (address=(protocol=tcp)(host=192.168.0.37)(port=1521))
      )
    )

#LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))           # line added by Agent

The listener was then restarted using:

srvctl stop listener -n DEV-02

srvctl start listener -n DEV-02

After the restart, the listener was listening on 192.168.0.37, which is the IP address for the hostname (DEV-02):

lsnrctl status

LSNRCTL for 64-bit Windows: Version 11.2.0.4.0 - Production on 01-SEP-2014 17:52:46

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.4.0 - Production
Start Date                01-SEP-2014 17:52:46
Uptime                    0 days 0 hr. 1 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   P:\oracle\app\11.2.0\grid\network\admin\listener.ora
Listener Log File         P:\oracle\app\11.2.0\grid\log\diag\tnslsnr\DEV-02\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.37)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.38)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+asm2", status READY, has 1 handler(s) for this service...
Service "DEV.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
Service "DEVXDB.WORLD" has 1 instance(s).
  Instance "dev2", status READY, has 1 handler(s) for this service...
The command completed successfully
Categories: DBA Blogs

Loose Coupling and Discovery of Services With Consul — Part 1

Pythian Group - Wed, 2014-09-03 08:16

Today I would like to demonstrate the use of Consul to help achieve loose coupling and discovery of services, two important principles of service-oriented architecture (SOA) present in modern, elastic infrastructures.

If you have ever designed an architecture to be deployed in a cloud computing environment, then you are probably already familiar with the idea that you should design for failure. Having the expectation that one or more components of your infrastructure may fail at any given time forces you to design your services in such a way that they are loosely coupled from one another. This usually means running multiple instances of web, application, and database servers, and making use of load balancers, message queues and / or APIs for communication between them.

The environment is scaled horizontally by adding more instances to the pool. Because of this, instances need to be able to advertise the service they provide and discover providers of other services.

Where Does Consul Fit In?

Consul’s introduction page does an excellent job of describing the features it provides and what its basic architecture looks like, but I’ll provide a brief summary here.

Consul is a tool for discovering and configuring services in your infrastructure. At its core, it provides service discovery, health checking, and a key/value store. It supports multiple data centers without having to add a layer of abstraction. Nodes that provide services run a Consul agent which talks to one or more Consul servers. Services or nodes can be discovered by querying any of the Consul servers or agents. Data is exposed through DNS and / or HTTP interfaces.
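
As a concrete illustration of those two interfaces, the queries below sketch what service discovery looks like against a local agent. They assume Consul's default ports (8500 for HTTP, 8600 for DNS) and a hypothetical registered service named "web"; nothing is actually registered until later in this series.

# HTTP interface: list all known services, then the nodes providing "web".
$ curl http://localhost:8500/v1/catalog/services
$ curl http://localhost:8500/v1/catalog/service/web

# DNS interface: resolve the same service by name.
$ dig @localhost -p 8600 web.service.consul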

Its capabilities overlap with a number of different types of software including: service discovery systems such as ZooKeeper and etcd; configuration management systems such as Puppet or Chef; and monitoring systems such as Nagios or Sensu. Again, there is an excellent page on the Consul website that describes how it compares with these tools.

The Demo Environment

For my Consul demonstration, I elected to use a combination of Docker and Vagrant. Docker because it makes it easy for me to run multiple lightweight containers on the same machine, and Vagrant because it gives me a great deal of flexibility in building and controlling the containers I will be creating. If you are unfamiliar with either of these tools, allow me to provide a brief overview.

Docker

Per Wikipedia, Docker “is an open-source project that automates the deployment of applications inside software containers”. It consists of two parts: Docker Engine, the container runtime that runs on the host operating system; and Docker Hub, a cloud service similar to GitHub which allows users to share containers and automate workflows.

Docker makes use of the following features of the Linux kernel: cgroups (or control groups), which make it possible to limit and isolate resource usage (CPU, memory, etc.) of process groups; and namespace isolation, where process groups’ views of operating environment resources such as process trees, network, and file systems are isolated from one another.

Vagrant

Vagrant is a tool for building complete, reproducible development environments, with a focus on automation. It started out as essentially a ‘wrapper’ for VirtualBox, but now supports VMware, Docker, kvm/libvirt and Amazon EC2 as providers, as well as a number of provisioners including Puppet, Chef, and Docker.

Vagrant was created by Mitchell Hashimoto, who initially maintained it in his free time. After it went on to become wildly successful, he formed HashiCorp so he could work on the product full time. HashiCorp has since released other products including Packer, Serf, Consul, and most recently Terraform.

Installing The Tools

Docker

Ubuntu 14.04 LTS includes Docker in its universe repository under the name docker.io, so it can be installed this way:

$ sudo apt-get update
$ sudo apt-get install docker.io

Keep in mind that this may not be the latest version of Docker. If you would like to try the latest version, you will need to add the Docker repository key and add the repository to apt’s sources list. A script has been made available to automate this process for you, so all that is necessary is to run the following:

$ curl -sSL https://get.docker.io/ubuntu/ | sudo sh

On Fedora, Docker is provided by the package named docker-io. If you are running Fedora 19 and have the (unrelated) docker package installed, you will need to remove it before continuing.

$ sudo yum -y remove docker

With Fedora 21 and later, the docker package’s functionality is provided by another package named wmdocker, and it does not conflict with docker-io. Use the following command to install Docker.

$ sudo yum -y install docker-io

Whichever operating system you are running Docker on, you will likely want to be able to use the commands as your regular, non-privileged user, without having to elevate privileges to root. Therefore, you will probably want to make yourself a member of the docker group so you have access to the socket file used by the various Docker commands.

$ sudo usermod -a -G docker your_name

Vagrant

For Vagrant, you will need at least version 1.6.0 as that is when the Docker provider was introduced. This demonstration was tested with version 1.6.3. To install Vagrant, visit its download page and obtain the appropriate package for your operating system. You can install the package on Ubuntu using the following command:

$ sudo dpkg -i vagrant_1.6.3_x86_64.deb

That’s it. In the next section, we will install Consul and continue with setting up our cluster.

Setting Up The Consul Cluster

Let’s begin by establishing the Consul cluster, which will be used by nodes to register themselves as providers of services, and can be queried to discover which services are being provided.

The first step is to download and install Consul on the host which will be running the Docker containers.

$ wget https://dl.bintray.com/mitchellh/consul/0.3.1_linux_amd64.zip
$ unzip -d /usr/local/bin/ 0.3.1_linux_amd64.zip

Now we can start a Consul server that will bind to the IP address of the docker0 interface on your host.

$ consul agent -server -bootstrap -bind=172.17.42.1 -client=172.17.42.1 -data-dir /tmp/consul

This server will elect itself leader of the cluster (since it is currently the only member), and is what the rest of the Consul servers will connect to when joining. Ultimately, the architecture will look like this:

Consul Demo Architecture
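
Before moving on, you can optionally confirm that the bootstrap server is up and has become leader by querying it directly. The commands below reuse the client address from the agent command above and assume Consul's default HTTP port of 8500:

# Ask the HTTP API which server is currently the leader.
$ curl http://172.17.42.1:8500/v1/status/leader

# List cluster members via the RPC interface (only the host itself at this point).
$ consul members -rpc-addr=172.17.42.1:8400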

Before we can bring up the rest of the Consul servers, however, we’ll need to do some prep work. As mentioned above, the intent is to use Vagrant with the Docker provider to create this environment. Therefore, let’s begin by creating a working directory to contain our Vagrantfile.

$ mkdir -p vagrant/consul && cd vagrant/consul
$ vagrant init

The contents of Vagrantfile should look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  JOIN_IP = ENV['JOIN_IP']

  # An array of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use the Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name  = member
        docker.image = 'progrium/consul'
        docker.cmd   = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end
end

What should be self-evident is that Vagrant is assigning the value of an environment variable called JOIN_IP to a variable with the same name, and then iterating over an array of three Consul members which it will create using the Docker provider.

What might not be so obvious, however, are the docker.image and docker.cmd lines, so I will explain them in greater detail.

docker.image = 'progrium/consul'

This line tells Docker to launch (or ‘run’) an instance of the progrium/consul image as found on the Docker Hub Registry. It is a small container based on BusyBox used to run Consul. The project’s source page can be found on GitHub if you’re interested in learning more. The next line:

docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]

is an array of strings which is used to build a custom command to run on the container. Vagrant will perform string interpolation on the member and JOIN_IP variables, replacing them with the current member’s name and the IP address that was provided via the JOIN_IP environment variable. The end result is that the container runs a command such as this:

/bin/consul agent -config-dir=/config -server -node=consul1 -join=172.17.42.1

Let’s see it in action by telling Vagrant to create our containers.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker
Bringing machine 'consul1' up with 'docker' provider...
Bringing machine 'consul2' up with 'docker' provider...
Bringing machine 'consul3' up with 'docker' provider...
==> consul2: Creating the container...
    consul2:   Name: consul2
    consul2:  Image: progrium/consul
    consul2:    Cmd: -server -node=consul2 -join=172.17.42.1
    consul2: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul2:  
    consul2: Container created: d85fbfacdb45cabc
==> consul2: Starting container...
==> consul2: Provisioners will not be run since container doesn't support SSH.
==> consul1: Creating the container...
    consul1:   Name: consul1
    consul1:  Image: progrium/consul
    consul1:    Cmd: -server -node=consul1 -join=172.17.42.1
    consul1: Volume: /home/bfraser/vagrant/consul:/vagrant
==> consul3: Fixed port collision for 22 => 2222. Now on port 2200.
==> consul3: Creating the container...
    consul3:   Name: consul3
    consul3:  Image: progrium/consul
    consul3:    Cmd: -server -node=consul3 -join=172.17.42.1
    consul3: Volume: /home/bfraser/vagrant/consul:/vagrant
    consul1:  
    consul1: Container created: 413dfa1a63c94bcc
==> consul1: Starting container...
==> consul1: Provisioners will not be run since container doesn't support SSH.
    consul3:  
    consul3: Container created: fb54d80e8ce58a46
==> consul3: Starting container...
==> consul3: Provisioners will not be run since container doesn't support SSH.

This created three containers and substituted the node name and join address as expected. Now let’s see what Docker reports.

$ docker ps
CONTAINER ID        IMAGE                    COMMAND                CREATED              STATUS              PORTS                                                                          NAMES
fb54d80e8ce5        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul3             
413dfa1a63c9        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul1             
d85fbfacdb45        progrium/consul:latest   /bin/start -server -   About a minute ago   Up About a minute   53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 8500/tcp   consul2             

There are in fact three containers running. Other details are provided but they aren’t relevant at this point; I just wanted to show you how to view the status of your newly created Docker containers.

Now if we check Consul, we should see each of the containers listed as members of the cluster (note: we can’t just run consul members as we need to tell it which IP address to use as the RPC address).

$ consul members -rpc-addr=172.17.42.1:8400
Node     Address           Status  Type    Build  Protocol
laptop   172.17.42.1:8301  alive   server  0.3.0  2
consul2  172.17.0.5:8301   alive   server  0.3.0  2
consul1  172.17.0.6:8301   alive   server  0.3.0  2
consul3  172.17.0.7:8301   alive   server  0.3.0  2

As you can see, we have successfully created a cluster of Consul servers, comprised of an initial server running on the host and three servers running as Docker containers created by Vagrant. In my next post, we will see how to add Consul clients providing services (‘providers’), register the services, and query available services from clients (‘consumers’).

Categories: DBA Blogs

12.1.0.2 Introduction to Zone Maps Part I (Map Of The Problematique)

Richard Foote - Wed, 2014-09-03 02:06
Zone Maps are new index-like structures that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table. A zone is simply a range of contiguous blocks within a table. Zone Maps are similar in concept to Exadata storage […]
Categories: DBA Blogs

Data Profile: Better Knowing Your Data

Pythian Group - Tue, 2014-09-02 08:40

Have you ever needed to start a new data analysis project or create a report for business users by querying a database you have never worked with before? Or simply needed to know the data distribution of a database to create better indexing strategies?

Working as a consultant, I constantly face this challenge of working with a customer’s database that I don’t know very deeply. For instance, does that “Gender” column store data as “M” and “F” or as “Male” and “Female”? Or do they even use a bit column for that? (Yes, I have seen that a lot already.) Does that “Surname” column accept NULL values? If so, what percentage of the table contains NULL for that specific column? In a date/time column, what are the minimum and maximum values, so I can create my “Time” dimension in a data warehouse?

This data discovery process, where I need an overview of the data, usually takes a lot of time and a lot of query writing: DISTINCT, MIN, MAX, AVG kinds of queries, analyzing the result of each one individually. Even with a lot of really good, complete third-party tools out there, it is a cumbersome task, and sometimes the customer is not willing to wait while I learn everything about their environment before expecting results.
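
To make that concrete, below is a rough sketch of the kind of one-off profiling command I end up running by hand from a command prompt with sqlcmd; the server name is a placeholder and the AdventureWorks2012 Production.Product table is used purely as an example:

sqlcmd -S MYSERVER -d AdventureWorks2012 -Q "SELECT COUNT(*) AS TotalRows, COUNT(DISTINCT Color) AS DistinctColors, SUM(CASE WHEN Color IS NULL THEN 1 ELSE 0 END) AS NullColors, MIN(SellStartDate) AS MinSellStart, MAX(SellStartDate) AS MaxSellStart FROM Production.Product;"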

Today I want to show you a not-so-new feature in SQL Server that will help with the data discovery process: the Data Profiling Task in SQL Server Integration Services, together with the Data Profile Viewer.

Now, that’s the time when you ask me, “Data what?!”

It’s easy, you’ll see. One of the several tasks in SQL Server Integration Services that you never use, and never took the time to google what it is for, is called the Data Profiling Task. This task allows you to select a table and the kind of data analysis you want to do on that table/column. When you run the SSIS package it will analyze the table and generate an XML file. Once you have the XML file, all you need to do is open it using the Data Profile Viewer, which takes care of creating a nice user interface for you to analyze the XML, as you can see in Figure 1.

 

DataProfile-Image1

Figure 1: Data Profile Viewer

Cool, now let’s see how to create our own analysis.

Step 1: Open SQL Server Data Tools, or BIDS if you’re using SQL Server 2008 R2 or below

Step 2: Create a new SSIS project

Step 3: Add the Data Profiling Task on your project

DataProfile-Image2

Step 4: Double-click the Data Profiling Task so we can configure it. In the General tab we have to set the Destination, that is, the location where you want to save the XML file. You can choose to save directly to the file system using a File Connection, or store it in an XML variable inside your package in case you want to do something else with the XML, maybe store it in a database. Let’s leave the default FileConnection option for the Destination Type and click New File Connection in the Destination option.

DataProfile-Image3

Step 5: Now we can choose the file location. In my example I am using one of the most used folders ever on Windows: the “tmp” folder, sometimes also called “temp” or just “stuff”. (Note: the author doesn’t recommend storing everything in folders called temp, nor saving everything on the desktop.)

DataProfile-Image4

Step 6: OK, we’re back in the main window; now we have to choose which kind of analysis we want to run, the database, and the table. We have two options: the first is to use the Profile Requests tab and choose the data analysis, table, and columns one by one. The other option, and also the simplest one, is to use the Quick Profile tab. Using this option we can define one specific table and what analysis we want to run on that table. If you want to run the analysis on multiple tables you will have to click the Quick Profile option and choose them one by one (nothing in this world is perfect).

DataProfile-Image5

As you can see in the image above, I have chosen the Production.Product table of the AdventureWorks2012 database. In the Compute option you have to choose what data analysis you want to run; the names of the options more or less explain what they’ll do, but if you want a detailed explanation of each option you can check the product documentation at this link: http://technet.microsoft.com/en-us/library/bb895263.aspx

Now all you have to do is to run the SSIS package to create the XML file. Once you’re done, you can use the Data Profile Viewer tool to open the XML and analyze its results.

DataProfile-Image6

The Data Profile Viewer is a simple tool that doesn’t need much explanation, just try it for yourself and you’ll certainly like the result.

I hope this can help you to save some time when you need to quickly learn more about the data inside a database. If you have any questions or want to share what your approach is when you need to complete this task, just leave a comment!

 

Categories: DBA Blogs

Oracle Database: Script to Purge aud$ Table Using dbms_audit_mgmt Package

Pythian Group - Tue, 2014-09-02 07:59

With the increase in database security requirements, database auditing is part of most production databases nowadays. The growth of the Oracle database audit table “aud$” always adds spice to a DBA’s life. Since this table’s growth directly impacts database performance, it has a special place in every DBA’s heart.

Traditionally we follow many methods to purge old data from this table, and most of them require application downtime. Oracle introduced a built-in package named “DBMS_AUDIT_MGMT”, which gives a DBA more control over the management of audit records.

I tested this package on my 11gR2 test database. It can be used from Oracle RDBMS version 10.2.0.3.0 onwards, but the required patches must be applied on 10.2.0.3 (Patch 6989148) and 10.2.0.4 (Patch 6996030); the package is installed by default on versions 10.2.0.5 and 11.1.0.7. I set the audit_trail parameter value to db_extended and enabled database auditing. Later I moved the db audit table and its associated LOB segments to a dedicated tablespace named “AUDTBS”, and confirmed that audit functionality was running fine after the tablespace change.
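
The tablespace move itself is not shown here; as a minimal sketch, assuming the AUDTBS tablespace already exists and running as SYS, the relocation could be done with the same package:

$ sqlplus / as sysdba <<'EOF'
-- Hedged example: relocate the standard audit trail (aud$ and its LOB segments)
-- to the AUDTBS tablespace using DBMS_AUDIT_MGMT.
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDTBS');
END;
/
EOF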

Thanks to MOS notes 1362997.1 and 1508787.1, I successfully tested this purge procedure. I configured the database scheduler jobs so that they run once every 12 hours and purge data older than 7 days from the aud$ table. Here is the script (purge_job.sql) I used to configure the required jobs.

Script: purge_job.sql ==> Run this script as SYS database user account.

prompt start of the script
set serveroutput on
prompt Change based on our customization done
update dam_config_param$ set string_value='AUDTBS' where audit_trail_type#=1 and param_id=22;
commit;

prompt First Step: init cleanup (if not already)

BEGIN
  IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
  THEN
    dbms_output.put_line('Calling DBMS_AUDIT_MGMT.INIT_CLEANUP');
    DBMS_AUDIT_MGMT.INIT_CLEANUP(
      audit_trail_type         => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 12);
  ELSE
    dbms_output.put_line('Cleanup for STD was already initialized');
  END IF;
END;
/

prompt revert back to default values again
update dam_config_param$ set string_value='SYSAUX' where audit_trail_type#=1 and param_id=22;
commit;

prompt set last archive timestamp to older than 7 days

BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => sysdate - 7);
END;
/

prompt setup a purge job

BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_purge_interval => 12,
    audit_trail_purge_name     => 'Standard_Audit_Trail_PJ',
    use_last_arch_timestamp    => TRUE);
END;
/

prompt call DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP regularly to advance the last archive timestamp

create or replace procedure set_archive_retention
  (retention in number default 7) as
begin
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => sysdate - retention);
end;
/

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name            => 'advance_archive_timestamp',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'SET_ARCHIVE_RETENTION',
    number_of_arguments => 1,
    start_date          => SYSDATE,
    repeat_interval     => 'freq=hourly;interval=12',
    enabled             => FALSE,
    auto_drop           => FALSE);
  DBMS_SCHEDULER.set_job_argument_value(
    job_name          => 'advance_archive_timestamp',
    argument_position => 1,
    argument_value    => 7);
  DBMS_SCHEDULER.ENABLE('advance_archive_timestamp');
END;
/

BEGIN
  DBMS_SCHEDULER.run_job(
    job_name            => 'advance_archive_timestamp',
    use_current_session => FALSE);
END;
/

prompt End of the script

To verify the purge status and the status of the configured jobs, execute the following queries:

SQL> select min(NTIMESTAMP#) from aud$;
SQL> select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE from dba_scheduler_jobs where job_name='ADVANCE_ARCHIVE_TIMESTAMP';
SQL> select LAST_START_DATE,LAST_RUN_DURATION,NEXT_RUN_DATE from dba_scheduler_jobs where job_name='STANDARD_AUDIT_TRAIL_PJ';

This script can definitely be customized based on requirements, which differ for each database, but it should be tested on a cloned database before the purge jobs are configured on the production database.

 

Categories: DBA Blogs

Azure Storage: Creating, Maintaining, and Deleting SQL Server Backups

Pythian Group - Tue, 2014-09-02 07:50

This post covers how to create, monitor, maintain, and automatically delete SQL Server backups.

What is Azure Storage and why should you use it?

Microsoft Windows Azure is Microsoft’s cloud offering for offsite storage. It offers the ability to seamlessly enable massive storage, Virtual Servers, SQL Server database instances, and many other options without having to worry about the hardware or maintenance in house.

Many companies are currently using Azure as offsite storage for their nightly Production backups. A company chooses one of 15 datacenters that Microsoft has around the world. This datacenter automatically and transparently maintains three copies of each backup file, and also replicates to a second datacenter in a different geographic location. The replication is not real-time, but in general there will always be six copies of each backup file available in case of an emergency.

In the event the Primary datacenter fails, Microsoft will decide when or if to failover to the Secondary datacenter. However, in the coming months they plan to roll out an API which would allow individual clients to make that decision.

SQL Server 2012 SP1 CU6+ is required.

The current pricing is about $90 per month per TB of storage used.

Accessing the Azure front end
To access the Azure front end:

  • Open Internet Explorer and navigate to http://portal.azure.com.
  • You will be prompted to log in with a Microsoft MSDN account. The Azure administrator in your company should have granted this account access.
  • Click on the Azure Portal icon to bring up the Azure Main Page for your account.
  • Click on the Storage icon on the left.
  • Drill down into your storage account to open the Storage Main Page.
  • Click on the Containers tab.
  • Drill down into your container. This is a list of all of the backups being written to the Production container.

The backups are ordered by their name, and unfortunately there is no way to sort by Date Modified or other field. You can see more backups than are listed on the front page by clicking on the arrow at the bottom left.

Checking the status of an Azure backup file
To check the status of a backup to Azure storage, you have two options:
1. The Azure Portal
2. SQL Server Management Studio

To use the Azure Portal, navigate to the container the backup is being written to, and find it in the list of files. If the size is 1 TB, then the backup is currently running. Using the Azure Portal, you can only see if a backup file is being created. You cannot see how much time is remaining until it is done.

To use SQL Server Management Studio, open SSMS and connect to the instance running the backups. Run the following command to get percent done & estimated time to completion:


SELECT
r.session_id,
r.command,
CONVERT(NUMERIC(6,2),r.percent_complete) AS [PercentComplete],
CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2, CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END) FROM sys.dm_exec_sql_text(sql_handle)))
FROM
sys.dm_exec_requests r
WHERE
command = 'BACKUP DATABASE'

Backing up to an Azure storage container
In order to write a backup file to a Microsoft Azure storage container, two items are needed:

      • A SQL Server credential to connect to the Azure Account. This requires the Account Name and a Storage Key.
      • The container URL

To find the Azure Account Name and Storage Key, navigate to the Azure Main Page. At the bottom of the page, there is a “Manage Access Keys” icon. Clicking on it brings up the Account Name and the Primary & Secondary Access Keys.

1. Copy the Account Name and one of the Access Keys. DO NOT REGENERATE THE KEYS.
2. Copy the URL.
3. Open SQL Server Management Studio and connect to the RLPRODMSSQL01 instance.
4. From a new query window, run the following command:


CREATE CREDENTIAL AzureBackups
WITH
IDENTITY = 'Account Name',
SECRET = 'Storage Key'

5. Run this same command on all instances that will back up to this container.
6. Run the following command to back up a database to the Azure Storage container:


BACKUP DATABASE db_name
TO URL = 'Container URL + Backup File Name'
WITH
CREDENTIAL = 'AzureBackups'

Restoring from an Azure Storage Container
To restore from an Azure Storage Container, two items are needed:

1. A credential to connect to the Azure Account (See steps 1-4 of Backing up to an Azure Storage Container)
2. The backup file URL

To get the backup file URL, navigate to the container where the backup file is stored. The URL is to the left of the backup name.

1. Copy the URL.
2. Run the following command on the instance you want to restore the database onto:


RESTORE DATABASE db_name
FROM URL = 'Backup File URL'
WITH
CREDENTIAL = 'AzureBackups'

Deleting SQL Server backups from Azure storage

In SQL Server 2012, Azure storage is not fully integrated with Maintenance Plans and deleting old backups is a manual process. This causes issues, because there is no way to quickly delete a batch of backups, and if this is forgotten for a few days then the cost of storage begins to rise quickly.

I have written the below code to create an executable that will connect to the Azure storage container and delete any backups older than x days.

In addition, the code can check for any backups that have a “locked lease”, break the lease, and then delete them.

The parameters for the executable are listed below; a usage example follows the list:

  • Parameter 1 – MS Azure Account Name (string)
  • Parameter 2 – MS Azure Storage Key (string)
  • Parameter 3 – Azure Container Name (string)
  • Parameter 4 – Number of days backups to retain (positive integer)
  • Parameter 5 – File type to delete (.bak, .trn, etc.)
  • Parameter 6 – Delete backups with locked lease? (True/False)
    • Note that a True value for parameter 6 will cause the executable to ignore parameters 4 and 5.
    • This is meant to be run after a failed backup job.
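
For example, assuming the compiled executable is named DeleteAzureBackups.exe (the name is arbitrary) and that the account name, storage key, and container below are placeholders, a nightly cleanup and a post-failure lease cleanup might look like this:

REM Delete .bak files older than 7 days from the "prodbackups" container.
DeleteAzureBackups.exe MyAccountName MyStorageKey prodbackups 7 .bak False

REM After a failed backup job: break locked leases and delete those blobs.
REM Parameters 4 and 5 are ignored in this mode, but all six must still be supplied.
DeleteAzureBackups.exe MyAccountName MyStorageKey prodbackups 7 .bak True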

In order to work, the executable will need the Windows Azure Storage Client Library.

Alternatively, you can download and run the executable using this ZIP file.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

namespace DeleteAzureBackups
{
    class Program
    {
        static void Main(string[] args)
        {
            // Print usage information and exit if help was requested.
            if (args.Length > 0 && (args[0] == "?" || args[0] == "help" || args[0] == "h"))
            {
                Console.WriteLine("==============================================================");
                Console.WriteLine("Pythian Azure Backup Delete Utility");
                Console.WriteLine("");
                Console.WriteLine("Parameter 1 : String : MS Azure Account Name");
                Console.WriteLine("Parameter 2 : String : MS Azure Account Key");
                Console.WriteLine("Parameter 3 : String : Container Name");
                Console.WriteLine("Parameter 4 : Positive Integer : Number of days to retain backups");
                Console.WriteLine("Parameter 5 : String : File type to delete (.bak, .trn, etc...)");
                Console.WriteLine("Parameter 6 : True/False : Delete backups with locked leases (will ignore Parameters 4 & 5)");
                Console.WriteLine("==============================================================");
                return;
            }

            if (args.Length != 6) { Console.WriteLine("Please run with correct number of parameters. Type ? for help."); return; }

            // Account name and key.
            string accountName = args[0].ToLower();  // Account Name
            string accountKey = args[1];             // Account Key
            string containerName = args[2];          // Container Name
            int numberOfDays = Int16.Parse(args[3]); // Number of days before deleting
            string fileType = args[4];
            bool deleteLockedBlobs = bool.Parse(args[5]);

            try
            {
                CloudBlobContainer container = openConnection(accountName, accountKey, containerName);

                if (!deleteLockedBlobs)
                { deleteAzureBackups(container, numberOfDays, fileType); }
                else
                { unlockLeasedBlobs(container); }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }

            return;
        }

        static CloudBlobContainer openConnection(string accountName, string accountKey, string containerName)
        {
            try
            {
                // Get a reference to the storage account, with authentication credentials.
                StorageCredentials credentials = new StorageCredentials(accountName, accountKey);
                CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, true);

                // Create a new client object.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to the container.
                CloudBlobContainer container = blobClient.GetContainerReference(containerName);

                return container;
            }
            catch (StorageException ex)
            {
                Console.WriteLine("Failed to create connection to MS Azure Storage.");
                Console.WriteLine(ex.Message);
                return null;
            }
        }

        static void deleteAzureBackups(CloudBlobContainer container, int numberOfDays, string fileType)
        {
            DateTimeOffset now = DateTimeOffset.Now;

            foreach (IListBlobItem item in container.ListBlobs(null, false))
            {
                CloudPageBlob blob = (CloudPageBlob)item;

                // If the blob was last modified more than x days ago and matches the file type, delete it.
                if ((now - blob.Properties.LastModified.Value).Days >= numberOfDays &&
                    blob.Name.Substring(blob.Name.Length - fileType.Length) == fileType)
                {
                    deleteBlob(blob);
                }
            }
        }

        static void unlockLeasedBlobs(CloudBlobContainer container)
        {
            foreach (IListBlobItem item in container.ListBlobs(null, false))
            {
                CloudPageBlob blob = (CloudPageBlob)item;

                if (blob.Properties.LeaseStatus == LeaseStatus.Locked)
                {
                    try
                    {
                        Console.WriteLine("Breaking lease on {0} blob.", blob.Name);
                        blob.BreakLease(new TimeSpan(), null, null, null);
                        Console.WriteLine("Successfully broke lease on {0} blob.", blob.Name);

                        deleteBlob(blob);
                    }
                    catch (StorageException ex)
                    {
                        Console.WriteLine("Failed to break lease on {0} blob.", blob.Name);
                        Console.WriteLine(ex.Message);
                    }
                }
            }
        }

        static void deleteBlob(CloudPageBlob blob)
        {
            try
            {
                Console.WriteLine("Attempting to delete {0}", blob.Name);
                blob.Delete(DeleteSnapshotsOption.IncludeSnapshots);
                Console.WriteLine("Successfully deleted {0}", blob.Name);
            }
            catch (StorageException ex)
            {
                Console.WriteLine("Failed to delete {0}.", blob.Name);
                Console.WriteLine(ex.Message);
            }
        }
    }
}

 

Categories: DBA Blogs

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 12:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:         UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:  Moscone South – 301
Date and Time:  9/28/14, 13:30 – 14:15

Session ID:          CON4493
Session Title:      “Regular Expressions in Oracle Database 101”
Venue / Room:   Moscone South – 303
Date and Time:   10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

Hot off the press: Latest Release of Oracle Enterprise Manager 12c (R4)

Pankaj Chandiramani - Tue, 2014-06-03 06:53

Read more here about the PRESS RELEASE:  Oracle Delivers Latest Release of Oracle Enterprise Manager 12c


Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption


In the coming weeks, I will be covering topics like:



  1. DbaaS Service Catalog incorporating High Availability and Disaster Recovery

  2. New Rapid Start kit

  3. Other new Features 


Stay tuned!

Categories: DBA Blogs

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 04:21


 Interesting info-graphics on Data-center / DB-Manageability



Categories: DBA Blogs

Tackling the challenge of Provisioning Databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 01:03

One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 database challenges as per the IOUG survey.

Most of the challenge comes in the form of either a lack of standardization or it being a long and error-prone process. This is where Enterprise Manager 12c can help by making this a standardized process using profiles and lock-downs, plus role and access separation where the lead DBA can lock certain properties of the database (like character set, Oracle Home location, or SGA) and junior DBAs can't change those during provisioning. The image below describes the solution:



In short:

  • It's fast

  • It's easy

  • And you have complete control over the lifecycle of your dev and production resources.


I actually wanted to show step-by-step details on how to provision an 11.2.0.4 RAC using the provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:


Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database


Other Resources : 


Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA


Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1


Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen



Categories: DBA Blogs

Nationwide Deploys Database Applications 600% Faster

Pankaj Chandiramani - Mon, 2014-04-28 03:37

Nationwide Deploys Database Applications 600% Faster





Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.


Key points:



  1. Provisioning Databases using Profiles  (aka Gold Images)

  2. Automated Patching

  3.  Config/Compliance tracking




Categories: DBA Blogs

EMCLI setup

Pankaj Chandiramani - Mon, 2014-04-28 02:15

A quick note on how to install EMCLI, which is used for various CLI operations against EM. I was looking to test some database provisioning automation via EMCLI and thus needed to set it up.


EMCLI Setup

To set up EMCLI on the host, follow these steps:

1. Download emcliadvancedkit.jar from the OMS using the URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2. Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
   setenv JAVA_HOME /usr/local/packages/j2sdk
   setenv PATH $JAVA_HOME/bin:$PATH
3. You can install the EMCLI with scripting option in any directory, either on the same machine on which the OMS is running or on any machine on your network (download emcliadvancedkit.jar to that machine):
   java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4. Run emcli help sync from the EMCLI Home (the directory where you have installed emcli) for instructions on how to use the "sync" verb to configure the client for a particular OMS. A short end-to-end example is sketched after these steps.
5. Navigate to the Setup menu, then Command Line Interface. See the Enterprise Manager Command Line Tools Download page for details on setting up EMCLI.
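
As an end-to-end illustration, a minimal first-time configuration might look like the following; the OMS host, port, and install directory are placeholders, and exact prompts vary by EM version:

# Download the EMCLI kit from the OMS (host and port are placeholders).
$ wget --no-check-certificate https://emoms.example.com:7799/em/public_lib_download/emcli/kit/emcliadvancedkit.jar

# Install the client with the scripting option into a local directory.
$ export JAVA_HOME=/usr/local/packages/j2sdk
$ export PATH=$JAVA_HOME/bin:$PATH
$ java -jar emcliadvancedkit.jar client -install_dir=/home/oracle/emcli

# Point the client at the OMS, synchronize verbs, and log in.
$ cd /home/oracle/emcli
$ ./emcli setup -url=https://emoms.example.com:7799/em -username=sysman -trustall -autologin
$ ./emcli sync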



Categories: DBA Blogs

Webcast: Database Cloning in Minutes using Oracle Enterprise Manager 12c Database as a Service Snap Clone

Pankaj Chandiramani - Thu, 2014-04-17 04:02

Since the demand from the business for IT services is non-stop, creating copies of production databases in order to develop, test, and deploy new applications can be labor-intensive and time-consuming. Users may also need to preserve private copies of the database, so that they can go back to a point prior to when a change was made in order to diagnose potential issues. Using Snap Clone, users can create multiple snapshots of the database and “time travel” across these snapshots to access data from any point in time.


Join us for an in-depth technical webcast and learn how Snap Clone, a capability of Oracle Cloud Management Pack for Oracle Database, can fundamentally improve the efficiency and agility of administrators and QA engineers while saving CAPEX on storage. Benefits include:



  • Agile provisioning (~2 minutes to provision a 1 TB database)

  • Over 90% storage savings

  • Reduced administrative overhead from integrated lifecycle management

Register Now!


April 24 — 10:00 a.m. PT | 1:00 p.m. ET

May 8 — 7:00 a.m. PT | 10:00 a.m. ET | 4:00 p.m. CET

May 22 — 10:00 a.m. PT | 1:00 p.m. ET





Categories: DBA Blogs
