
DBA Blogs

Managing Files with SaltStack

Pythian Group - Thu, 2014-08-14 09:06

Before we begin, take a look at my previous two blog posts, SaltStack for Remote Parallel Execution of Commands and Using SaltStack for Configuration Management.

We manage files using configuration management for much the same reasons that we manage packages and services – we want/need consistency across all of our boxes, and we want to spend our time on tasks that add more business value than logging into all of our servers to change a line in a config file.

Using Salt, I will show you how to manage the message of the day (MOTD) on a server.

Templates

There are many examples of configuration files which differ only by a few lines between staging and production. A great example of this is a config file which has database host/user/pass information. The drive for consistency tells us that we would like the environment that our release is tested on to match (as closely as possible) the production environment where the code will run.

Using templates allows us to change the few lines in a configuration file that we actually need to change, while leaving the rest alone. This also simplifies the management of our servers and the configuration repository, allowing us to maintain one config file for many servers.

Salt grains

The salt minions know a lot of information about the boxes they run on. Everything from the hostname, to the IP address, to the kernel, and more is available to be queried by the salt master. These pieces of information are called “grains” in the salt world and allow us to insert dynamic variables into our templates.

A great use case for grains would be the expansion of our Apache formula from my last post. On Red Hat-based machines the Apache package is called "httpd", but on Debian-based machines the package is called "apache2". Using the "os_family" grain we can dynamically redefine the package name for each minion while maintaining a single formula for all servers.
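As a rough sketch of that idea (this is not the formula from the earlier post, just an illustration), an apache/init.sls could pick the package and service name from the grain like this:

# /srv/salt/apache/init.sls -- a minimal sketch, assuming the os_family
# grain values 'RedHat' and 'Debian'; adjust for your distributions.
{% if grains['os_family'] == 'RedHat' %}
{% set apache_pkg = 'httpd' %}
{% else %}
{% set apache_pkg = 'apache2' %}
{% endif %}

apache:
  pkg.installed:
    - name: {{ apache_pkg }}
  service.running:
    - name: {{ apache_pkg }}
    - require:
      - pkg: apache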

Likewise, any configuration files which need to contain the current box's IP address can benefit from grains. As each minion installs that configuration file, it will see that the variable needs to be populated with a "grain" and will do so as requested. Rather than maintaining a separate Apache vhost file for each of your 10 web servers, where the only difference is the IP address declaration, you can maintain one dynamic template. Everything else in that config file will then match on each of the 10 nodes, apart from the one thing that needs to be dynamic (the IP address).
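To make that concrete, a vhost template along those lines might look like the following sketch. The grain names (fqdn_ip4, fqdn) and paths are assumptions rather than anything from the original post, so check what your minions actually report with salt '*' grains.items before relying on them:

# A hypothetical vhost.template rendered with jinja -- illustration only.
<VirtualHost {{ grains['fqdn_ip4'][0] }}:80>
    ServerName {{ grains['fqdn'] }}
    DocumentRoot /var/www/html
</VirtualHost>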

Putting it all together – the MOTD

In your /srv/salt dir we are going to create a subdir called motd. Inside that directory you will create two files: an init.sls, which is the default top-level formula for the directory, and a motd.template file, which is our config file. The init.sls looks like this:

/etc/motd: 
  file.managed: 
    - user: root 
    - group: root 
    - mode: 0644 
    - source: salt://motd/motd.template 
    - template: jinja

For the file /etc/motd we are telling Salt that we want to manage the file, that its owner and group should be root, and that we want the file to have 0644 permissions. We are also letting Salt know that it will find the config file (source) under the motd subdir (salt:// maps to /srv/salt) and that our template is in the jinja format.

Our template will look like:

------------------------------------------------------------------------------
Welcome to {{ grains['fqdn'] }}

Server Stats at a Glance:
------------------------

OS: {{ grains['osfullname'] }}
Kernel: {{ grains['kernelrelease'] }}
Memory: {{ grains['mem_total'] }} MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT. Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------

As each minion installs this MOTD file, it will see the variables in use. Because they are grains, the minion knows it already has the information required to populate them, and it will do so for each server. This will give you a final MOTD that looks like this:

[root@ip-10-0-0-172 ~]# cat /etc/motd

------------------------------------------------------------------------------
Welcome to ip-10-0-0-172.ec2.internal

Server Stats at a Glance:
------------------------

OS: Amazon Linux AMI
Kernel: 3.10.42-52.145.amzn1.x86_64
Memory: 996 MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT.  Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------
[root@ip-10-0-0-172 ~]#

As you can see each variable was populated with the information specific to the node.
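With the formula and template in place, you can push the state out from the master. A couple of standard ways to do that (assuming the motd directory sits under /srv/salt as above) are:

# Apply only the motd formula to every minion
salt '*' state.sls motd

# Or reference motd in your top.sls and apply the full highstate
salt '*' state.highstate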

If we wanted to add, remove, or change anything in the MOTD, rather than having to box-walk the entire infrastructure (which, depending on your size, could tie up a resource for days), we can edit the single template file on the master and allow the tool to propagate the change out to the boxes for us, reducing that task from a very boring day (or more) to a few minutes!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 2

Pythian Group - Thu, 2014-08-14 08:47

This blog post is the second in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. Today’s post will be about creating certificates.

CREATING CERTIFICATES

In this step, we will create and upload a certificate. This certificate will be used to authenticate the VPN clients, and the process is performed in a few steps:

  • Generate the certificate
  • Upload the root certificate to the Azure Management Portal
  • Generate a client certificate
  • Export and install the client certificate

Let’s start …

  1. We will need to use the MakeCert tool. MakeCert is part of “Microsoft Visual Studio Express” available here.
  2. After successfully downloading the tool, start the setup and follow the installation steps. Note that you can generate this certificate in any computer, not only in the computer where you are configuring the VPN.
    After the installation, you can find MakeCert at:

    • C:\Program Files (x86)\Windows Kits\8.1\bin\x64
    • C:\Program Files (x86)\Windows Kits\8.1\bin\x86
  3. Launch the command prompt as Administrator. Point the path to one of the folders referred to in the previous step and execute the following command (note: keep the command prompt open):
    makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"
    (where "RootCertificateMurilo" is the certificate name; both MakeCert commands are recapped in a single block after this list).
    This command will create and install a root certificate in the Personal certificate store, and create the defined RootCertificateMurilo.cer file in the same directory where you are executing the command. Note: Store this certificate in a safe location.
  4. Now, go to the Windows Azure Management Portal https://manage.windowsazure.com/ in order to upload the certificate.
  5. In the networks section, select the previously created network and go to the certificates page.
  6. Click Upload a root certificate, select your certificate, and click the check mark.
    • Depending on the time zone of the server where you created the certificate, you might receive an error message, "The certificate is not valid yet, effective date is [date and time]." To work around this, delete the created certificate and create another one, adding the following parameter (change the date): -b "07/30/2014". It will be valid from 00:00:00 hours on the day you set.
  7. Now we need to create a Client Certificate. We will use the Root Certificate to do this.
    In the same command line window opened before, execute the following command:
    makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1
    This certificate will be stored in your personal certificate store.
  8. Now we need to export this certificate, as it should be installed on each computer that needs to connect to the virtual network. To achieve this, enter the command "mmc", still in the open command line. The Microsoft Management Console will open.
    • Go to File->Add/Remove Snap-in.
    • Select "Certificates" and click on "Add >".
    • Select My user account and click Finish.
    • Click OK in the remaining window.
    • Now you will be able to see your certificates under the "Personal\Certificates" folder.
  9. To export the certificate, right-click the Client certificate and click on "All Tasks->Export…".
  10. A wizard will be presented. Choose "Yes, export the private key" and click Next.
  11. Leave the defaults as they are, and click Next.
  12. Choose a strong password (one you will remember) and click Next.
  13. Now you need to set the path where your .pfx file will be stored.
  14. Click Next, then Finish.
  15. To finalize the "Certificates part", we will need to install the certificate on all the servers where we want to set up the VPN. To accomplish this, you just need to:
    • Copy the exported .pfx file (step 13) to all the servers.
    • Double-click the pfx on all the servers.
    • Enter the password.
    • Proceed with the installation, maintaining the default location.
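For convenience, here are the two MakeCert commands from steps 3 and 7 again in one block; the certificate names are the examples used in this post, so substitute your own:

:: Root certificate (step 3); add -b "MM/DD/YYYY" if you hit the
:: "certificate is not valid yet" error mentioned in step 6
makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"

:: Client certificate (step 7), created from the root certificate above
makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1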

Stay tuned for my next blog post on how to configure the VPN client.

Categories: DBA Blogs

Michael Abbey: Still Presenting After All These Years

Pythian Group - Thu, 2014-08-14 07:50

A cool, wintery day in late 1989. This kid’s working for the Office of the Auditor General of Canada. I’d been working with Oracle and was in my fourth year. I had cut my teeth on 6.0.27.9.4 after first seeing V3 some four years prior. I stumbled across a well-placed ad for a show happening in Anaheim USA in September 1990. I’ve got the bug. I apply to go to the show and am told by my employer, “Just a sec, David and I were thinking of going to that show – let us get back to you.” Some three weeks later I am told it’s a go.

I am off to sunny California for six wonderful days of International Oracle User Week (IOUW); this was a joint effort put on by Oracle and the International Oracle User Group (IOUG). I had spent the better part of the summer of 1969 in southern Cali so this was shaping up to be a resurrection. I toddle off to Cali and have a wonderful time. It’s magic – such a learning opportunity. I even came away knowing how to place a database in archivelog mode. I was so pleased with myself and got to meet one of my heroes. I had only been working with the software for 4 years, but already knew of Ken Jacobs (a.k.a. Dr. DBA).

I had the bug to present almost from day one. I saw an ad in one of the bazillion pieces of paper I brought home from that IOUW about a show in DC – Sheraton Woodley Park to be exact. I don’t even think that it exists anymore. I figured I’d attend ECO then present an abstract for IOUW 1991 in Miami. Some of the history is described in a blog post I made in 2013 located here. Enough said about that. It was quite a whirlwind of activity on the presentation circuit in those days. Starting in 1992 I became very active in the IOUG holding a handful of board positions up to the 2006 or maybe 2007 time frame. I attended a gazillion conferences in those days and the pinnacle was a show in Philly in 1995. I had been on the board of the IOUW for a few years and the paid attendance count at that show was staggering. Chubby Checker played at the big bash and arrangements were made for me to sit in on the bass guitar for the Twist. That got cancelled at the last minute but it was close. My paper was in one of the biggest rooms in the convention centre. There were over 1,500 people in attendance and it was intoxicating. I was pleased when I got my evals to find out the attendees were as pleased as I was. It was all (or close to all) about the CORE database technology in those days. In 1995, Oracle7 was the hot item having been on the street for over 3 years.

As guests of Oracle, a handful of us had the pleasure of attending the launch of Oracle7 at the Hudson Theatre in the Hotel Macklowe on 44th St. in beloved NYC. We were thrilled to be very close in those days to Ray Lane, then President of Oracle Corp., and we introduced Ray to a lot of his direct reports at that “party.” A mere four years later we were back for the release launch of Oracle8 at Radio City Music Hall. Again, a pleasant time was had by all. There turned out to be surprisingly little coverage/mention of Oracle8 at that event. It was more concentrated on the Oracle Network Computer (NC), designed to bring computing power to every desktop at a low cost. Once during that Oracle8 launch, the operator of the boom mic in the pit swept the stage to get from one side to the other and almost hit LJE in the side of the head. I think I was the only one who heard what Larry said – “Watch out Bill.” Does anyone get the reference but me?

My torrid Oracle technology career was just that. Between 1991 and the date of this post I have probably given over 100 papers at shows from Ottawa to Hyderabad, Brighton to San Diego, and Vienna to Addis Ababa. There is still a voracious hunger out there for the heart of my expertise – anything that starts with an “O” and ends in an “E” and has the word database tagged on the end. After becoming very close to some of the kernel developers at Oracle, we discussed how they were still in the middle of their workday when the Loma Prieta quake hit in October 1989. Me and a few close friends hung out with the guys whose names litter the bottom of the “this change was done when” section of the ?/rdbms/admin directory on Oracle database software installs. We were in David Anderson’s office schmoozing and asked what he happened to be up to that day. He was ftp’ing source code from a VAX to a Sun box in preparation for the base-platform change that happened in the early 1990s. It was a magic carpet ride.

In some ways it still is. To finish off this year I am appearing at:

  • OOW (Oracle Open World) in San Francisco – September 29-October 2
  • ECO (East Coast Oracle) event in Raleigh/Durham – November 3-5
  • MOUS (Michigan Oracle User Summit) in Livonia – November 13
  • UKOUG in Liverpool – December 8-10

My personal top 10 moments (actually top 11 – the exchange rate) in my still developing tech career you say … drum roll:

Rank  Event                                                               Date
  11  First ever tech show                                                1990
  10  Longest lasting tech contact – Yossi Amor                           25 years
   9  Number of IOUG yearly events attended                               23
   8  Books published in Oracle Press series (including translations)     42
   7  Most attendees at a presentation – 1500 (Philadelphia)              1995
   6  Fewest attendees at a presentation – 1                              2013
   5  Most exciting event attended – CODA in Burlingame CA                1993
   4  First PL/SQL code block to compile – Oracle7                        1993
   3  Favourite version of SQL*Forms – 2.3                                1993
   2  First got hands wet with this famous technology – 5.1.22            1986
   1  Biggest thrill – the rush of speaking to live audiences             1991-??
Categories: DBA Blogs

keeping my fingers crossed just submitted abstract for RMOUG 2015 Training Days ...

Grumpy old DBA - Tue, 2014-08-12 12:06
The Rocky Mountain Oracle Users Group has been big and organized for a very long time. I have never been out there (my bad) but am hoping to change that situation in 2015.

Abstracts are being accepted for Training Days 2015 ... my first one is in there now. I am thinking about a second submission, but my Hotsos 2014 presentation needs some more work/fixing. OK, let's be honest: I need to shrink it considerably and tighten the focus of that one.

Information on RMOUG 2015 can be found here: RMOUG Training Days 2015

Keeping my fingers crossed!
Categories: DBA Blogs

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watch Oracle DB Session Activity With My Real-Time Session Sampler
Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, and modifiable, with no Oracle licensing
issues, and that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use. Based on a number of filtering command line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative time based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to file in /tmp. The script contains no dml statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate. For example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait", or "%" for both.
wait_event_partial, when the session is waiting, selects only sessions with a matching wait event. Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore...but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999) and by only selecting sessions in either a "cpu" or "wait" state.

A smart, lower-impact strategy would be to start with broader selection criteria but sample less often, perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, you can then stream its activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm

You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Offline Visualization of Azkaban Workflows

Pythian Group - Mon, 2014-08-11 07:51

As mentioned in my past adventures, I’m often working with the workflow management tool ominously called Azkaban. Its foreboding name is not really deserved; it’s relatively straightforward to use, and offers a fairly decent workflow visualization. For that last part, though, there is a catch: to be able to visualize the workflow, you have to (quite obviously) upload the project bundle to the server. Mind you, it’s not that much of a pain, and could easily be managed by, say, a Gulp-fueled watch job. But still, it would be nice to tighten the feedback loop there, and be able to look at the graphs without having to go through the server at all.

Happily enough, all the information we need is available in the Azkaban job files themselves, and in a format that isn’t too hard to deal with. Typically, a job file will be called ‘foo.job’ and look like

type=command
command=echo "some command goes here"
dependencies=bar,baz

So what we need to do to figure out a whole workflow is to begin at its final job, and recursively walk down all its dependencies.

use 5.12.0;

use Path::Tiny;

sub create_workflow {
  my $job = path(shift);
  my $azkaban_dir = $job->parent;

  my %dependencies;

  my @files = ($job);

  while( my $file = shift @files ) {
    my $job = $file->basename =~ s/\.job//r;

    next if $dependencies{$job}; # already processed

    my @deps = map  { split /\s*,\s*/ }
               grep { s/^dependencies=\s*// }
                    $file->lines( { chomp => 1 } );

    $dependencies{$job} = \@deps;

    push @files, map { $azkaban_dir->child( $_.'.job' ) } @deps;
  }

  return %dependencies;
}

Once we have that dependency graph, it’s just a question of drawing the little boxes and the little lines. Which, funnily enough, is a much harder job than one would expect. And better left to the pros. In this case, I decided to go with Graph::Easy, which outputs both text and SVG.

use Graph::Easy;

my $graph = Graph::Easy->new;

while( my( $job, $deps ) = each %dependencies ) {
    $graph->add_edge( $_ => $job ) for @$deps;
}

print $graph->as_ascii;

And there we go. We put those two parts together in a small script, and we have a handy cli workflow visualizer.

$ azkaban_flow.pl target/azkaban/foo.job

  +------------------------+
  |                        v
+------+     +-----+     +-----+     +-----+
| zero | --> | baz | --> | bar | --> | foo |
+------+     +-----+     +-----+     +-----+
               |                       ^
               +-----------------------+

Or, for the SVG-inclined,

$ azkaban_flow.pl -f=svg target/azkaban/foo.job

which gives us

Screen Shot 2014-08-10 at 3.09.42 PM
Categories: DBA Blogs

Partner Webcast - The Revolution of Oracle Java 8

Java 8, released in March 2014, is a revolutionary release of the world’s #1 development platform. It is the single largest upgrade ever to the programming model, with coordinated core code evolution...

We share our skills to maximize your revenue!
Categories: DBA Blogs

12c: Fun with WITH!

Pythian Group - Fri, 2014-08-08 11:30

Last night I couldn’t sleep and, well, what else are you going to do? I was thinking about Oracle stuff.

In Oracle version 12, Oracle has enhanced the WITH clause – traditionally used for sub-query factoring – to allow the declaration of functions and procedures. This can be (ab)used to create a very interesting scenario, that is not very common in Oracle: Reading data within the same SELECT statement, but from two different points in time. And the points in time are in the future, and not in the past.

Let’s say I want to take a snapshot of the current SCN, and then another one 5 or 10 seconds after that. Traditionally we’d have to store that somewhere. What if I could take two snapshots – at different SCNs – using a single SELECT statement? Without creating any objects?

col value for a50
set lines 200 pages 99

with  
procedure t (secs in number, scn out varchar2)
  is
    pragma autonomous_transaction;
  begin
    dbms_lock.sleep(secs);
    select 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                 || dbms_flashback.get_system_change_number 
      into scn 
      from dual;
  end;
function wait_for_it (secs in number) 
 return varchar2 is
    l_ret varchar2(32767);
  begin
    t(secs, l_ret);
    return l_ret;
  end;
select 1 as time, 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                || dbms_flashback.get_system_change_number as value 
  from dual
union all
select 5, wait_for_it(5) from dual
union all
select 10, wait_for_it(5) from dual
/

And the result is:

      TIME VALUE
---------- --------------------------------------------------
         1 at 09:55:49 SCN: 3366336
         5 at 09:55:54 SCN: 3366338
        10 at 09:55:59 SCN: 3366339

 


We can clearly see that the SCN is different, and that the times shown match the intervals we’ve chosen, 5 seconds apart. I think there could be some very interesting uses for this. What ideas can you folks come up with?

Categories: DBA Blogs

Log Buffer #383, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-08 07:34

This Log Buffer Edition picks a few of the informative blog posts from the Oracle, SQL Server, and MySQL database fields.


Oracle:

g1gc logs – Ergonomics -how to print and how to understand

In Solaris 11.2, svcs gained a new option, “-L”.  The -L option allows a user to easily look at the most recent log events for a service.

ADF Thematic Map component from DVT library was updated in ADF 12c with marker zoom option and area layer styling

When cloning pluggable databases Oracle gives you also SNAPSHOT COPY clause to utilize storage system snapshot capabilities to save on storage space.

It is normal for bloggers including myself to post about the great things they have done.

SQL Server:

In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.

A brief overview of Columnstore index and its usage with an example.

The Road To Hell – new article from the DBA Team

Encryption brings data into a state which cannot be interpreted by anyone who does not have access to the decryption key, password, or certificates.

How to test what a SQL Server application would do in the past or in the future with date and time differences.

MySQL:

MySQL for Visual Studio 1.2.3 GA has been released

An approach to MySQL dynamic cross-reference query.

The MySQL replication and load balancing plugin for PHP, PECL/mysqlnd_ms, aims to make using a cluster of MySQL servers instead of a single server as transparent as possible.

Picking the Right Clustering for MySQL: Cloud-only Services or Flexible Tungsten Clusters? New webinar-on-demand.

Collation options for new MySQL schemas and tables created in MySQL for Excel

Categories: DBA Blogs

Oracle Database RAC Diagnostics and Tuning

Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database based on a comprehensive high-availability stack that can be used as the foundation of a database cloud system...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Space used by objects

DBA Scripts and Articles - Thu, 2014-08-07 12:35

Calculate the space used by a single object

This script will help you calculate the size of a single object: [crayon-5405726dc695a691254306/]

Calculate the space used by a whole schema

If you want the space used by a whole schema, then here is a variation of the first query: [crayon-5405726dc6966571497805/]
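The embedded scripts do not render here, but a rough equivalent based on dba_segments (an assumption, not necessarily the author's exact queries) looks like this:

-- Space used by a single object, summed across its segments
SELECT owner,
       segment_name,
       segment_type,
       ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
 WHERE owner = UPPER('&owner')
   AND segment_name = UPPER('&object_name')
 GROUP BY owner, segment_name, segment_type;

-- Variation for a whole schema
SELECT owner,
       ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
 WHERE owner = UPPER('&owner')
 GROUP BY owner;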

The post Space used by objects appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ECEMEA Webcast - Getting Started with your Big Data project

Big data is a new kind of power that transforms everything it touches in business, government, and private life. As a result, bringing big data to your company has the potential to provide big...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Database 12.1.0.2 Released (22 July 2014)

Oracle in Action - Thu, 2014-08-07 00:54


Oracle Database 12.1.0.2 has been released on 22 July 2014.

Following are some of the new features and options:

• In-memory column store
• Attribute clustering on disk
• Oracle Flashback archive capabilities for pluggable databases
• Rapid Oracle home provisioning
• Centralized security key vault capabilities
• Storage and query capabilities for nonrelational data
• Advanced Index Compression
• Oracle Big Data SQL
• Oracle JSON Document Store
• Oracle REST Data Services
• Improvements to Oracle Multitenant
• Zone Maps
• Approximate Count Distinct
• Attribute Clustering
• Full Database Caching
…..

You can download it here.
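As a tiny illustration of the headline In-Memory column store feature from the list above, the new syntax looks roughly like this; the table name is hypothetical, and the option must be licensed and enabled (INMEMORY_SIZE set and the instance restarted) before it has any effect:

-- Confirm the release you are on
SELECT banner FROM v$version;

-- Mark a (hypothetical) table for population into the In-Memory column store
ALTER TABLE sales INMEMORY;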

Enjoy!!!




The post Oracle Database 12.1.0.2 Released (22 July 2014) appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

The Oracle Instructor - Wed, 2014-08-06 12:41

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because most customers are better suited that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we just save Flash capacity that way. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer will first go to the spinning drives, where it is mirrored according to the redundancy of the diskgroup holding the file being written to. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring has already been done sufficiently on the spinning drives, as the picture shows:

Flash Cache Mode WRITE-THROUGH

That changes with the Flash Cache Mode being Write-Back: now writes go primarily to the flash cards, and popular objects may never even get aged out onto the spinning drives. At the very least, that age-out may happen significantly later, so the writes on flash must now be mirrored. The redundancy of the diskgroup where the object in question is placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache at least by half.

Flash Cache Mode WRITE-BACK

Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom would be high numbers of the free buffer waits wait event. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason behind turning on the Write-Back Flash Cache Mode only on demand. I explained this in very similar terms during my current Oracle University Exadata class in Frankfurt, by the way :-)
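A quick, rough way to check for that symptom (instance-wide, since startup) is to look at the wait event directly; a simple sketch:

SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6) AS seconds_waited
  FROM v$system_event
 WHERE event = 'free buffer waits';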


Tagged: exadata
Categories: DBA Blogs

Alter Session Kill on Steroids

Pythian Group - Wed, 2014-08-06 10:27

Perhaps you have encountered something like this: A session that is consuming too many resources needs to be killed. You locate the session and use ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ to kill the session. As you continue to monitor the database you find that the status of the session in v$session is ‘KILLED’, but the session does not go away. You also notice that the SERIAL# is continually changing.

Now you find there is no OS process associated with the session, but the session continues as PMON is unable to finish cleanup for the session. Usually when this happens, the session will be holding a lock. When that happens, the only method to release the lock is to bounce the database. There are some bugs that may be responsible for this problem, such as this one described by Oracle Support:

Pmon Spins While Cleaning Dead Process (Doc ID 1130713.1)

This particular bug affects Oracle 10.2.0.1 – 11.1.0.7. I have personally seen this same behavior on many versions of the database, from 7.0 on. To avoid these hanging sessions, many DBAs have adopted the habit of first killing the OS process with an OS utility and, if the session is still visible in v$session, then issuing the ALTER SYSTEM KILL command.

The OS command used on Linux/UNIX is usually 'kill -9'. On Windows it is orakill. This method usually avoids the problems encountered when killing a session that is holding a lock and processing DML.
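For reference, that habit looks roughly like this; the SPID comes from v$process (as in the queries further down), the instance name is the js01 instance that appears later in this post, and the orakill syntax is quoted from memory, so double-check it against the tool's own usage text:

# Linux/UNIX: kill the dedicated server process by its OS PID
# (the SPID column from v$process, e.g. 22870 in the query below)
kill -9 22870

rem Windows: orakill takes the instance name and the thread id (the SPID);
rem run orakill with no arguments to confirm its usage
orakill js01 22870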

I don’t know just what circumstances trigger this behavior, as I have never been able to reproduce it at will. When it does happen though, it is more than annoying as the only way to clear locks held by the recalcitrant session is to bounce the database.

Quite some time ago (at least as far back as Oracle 8i) Oracle introduced the new IMMEDIATE keyword to use with ALTER SYSTEM KILL SESSION. Using this keyword removes the need to use an OS command to kill a session – Oracle will do it for you! To test this I am using Oracle 10.2.0.4 on Oracle Linux 5.5. I have previously run these same tests in 11.2.0.3 with the same results. Had I access to an 8i or 9i database I would have run the tests there. To start with let’s see what happens when a session is killed without the immediate keyword.

Login to the session to be killed:

$ sqlplus scott/tiger@10gr2

Login as SYSDBA from another terminal and check for scott’s session:

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL> /

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 133         35 22870

1 row selected.

When a session is killed this way (without IMMEDIATE), all that happens is that Oracle makes an internal call that disconnects the session. (Tracing that operation is a different topic.) The process on the server is not terminated. This can be seen by the following experiment:

Logon again as Scott.

In a SYSDBA session check for Scott’s:

 SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         81 23678

Now check for the shadow process associated with scott’s session on the server:


[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Kill the session and check the status:

SQL> alter system kill session '146,81';

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL>/

no rows selected

Check again on the server for the process:

[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Interesting, isn’t it? We know the process is still alive on the server, but the session information is no longer associated with the process. This happens because Oracle has disconnected the session, which allows the process to continue until the sqlplus session is terminated. The session information is still available in v$session, but is no longer associated with a server process:

select
  2     s.username,
  3     s.status,
  4     s.sid,
  5     s.serial#
  6  from v$session s
  7* where s.username = 'SCOTT'
SQL>/

USERNAME                       STATUS          SID    SERIAL#
------------------------------ -------- ---------- ----------
SCOTT                          KILLED          146         81

1 row selected.

 1* select pid,spid from v$process where pid = 146
SQL>/

no rows selected

When exiting the Scott session, I can see that the session was killed:

SQL> exit
ERROR:
ORA-00028: your session has been killed

Let’s perform the experiment again, but this time use the IMMEDIATE keyword.

Logon as scott:

> sqlplus scott/tiger@10gr2

SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 5 17:18:53 2014

Logon as SYSDBA and check for the scott session;

SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         83 23939

1 row selected.

Before killing scott’s session:

  • get my OS PID
  • enable 10046 trace

The OS PID will be used for strace on the SYSDBA session shadow process on the server.
The 10046 trace is so we can see what is happening in the strace output.

SQL> l
1 select
2 s.username,
3 s.sid,
4 s.serial#,
5 p.spid spid
6 from v$session s, v$process p
7 where s.username is not null
8 and p.addr = s.paddr
9 and userenv('SESSIONID') = s.audsid
10* order by username, sid
SQL>/

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SYS                                   145         65 23947

1 row selected.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

Now ssh to the db server , check for Scott session shadow process and start strace:

[root@ora10gR2 tmp]# strace -o 23947.strace -p 23947
^Z
[1]+ Stopped strace -o 23947.strace -p 23947
[root@ora10gR2 tmp]# bg
[1]+ strace -o 23947.strace -p 23947 &

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
23939 ? 00:00:00 oracle

Now kill Scott’s session and exit the SYSDBA session:

SQL> alter system kill session '146,83' immediate;

System altered.

The strace command will now have exited on the server.

First check again for Scott’s session:

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
[root@ora10gR2 tmp]#

So the Scott shadow process has terminated.

As the 10046 trace was enabled, the output to the oracle trace file will appear in the strace file, which allows searching for ‘alter system kill’ in the strace file.

From the strace file:

write(5, "alter system kill session '146,8"..., 44) = 44

Now searching for the PID of scott’s session 23939:

read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
kill(23939, SIGKILL) = 0
kill(23939, SIGCONT) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) Z 1 23939 23939 0"..., 999) = 178
close(10) = 0

From the previous text I can see that Oracle opened the status file for PID 23939.
Why it did so twice I am not sure.

What happens after that is the interesting part.

kill(23939, SIGKILL) = 0

That line means that the SIGKILL signal was successfully sent to Scott’s shadow process.

What does that mean? Run kill -l to get a list of signals:

kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Notice that signal 9 (kill -9) is SIGKILL.

So when killing a session with ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ IMMEDIATE, Oracle is actually doing the kill -9 for you, and has been for many years now.

Though not shown here, this same test was run when the session was killed without using the IMMEDIATE keyword, and no kill signal was sent to the OS process. This was also inferred from the fact that the process was still running on the server up until the time the Scott sqlplus session was exited.

Categories: DBA Blogs

Passed OCP 12c test

Bobby Durrett's DBA Blog - Tue, 2014-08-05 15:27

Well, I finally passed the Oracle Certified Professional (OCP) 12c upgrade exam, 1Z0-060.  I got 86% right on the new features section and 78% correct on the general DBA section.  So that means I missed roughly 7 questions in each section, which was more than I expected because I felt good about all of my answers.

I’m happy with the three resources I used which are:

  1. Self Test software – $99
  2. Matthew Morris book – $10
  3. Sam Alapati book – $44

I spent a lot of time in the 12c manuals.  Be careful, though, because the 12.1.0.2 manuals just came out and some things in 12.1.0.2 differ from what the test covers, since the test designers based the test on 12.1.0.1.

I built two Linux virtual machines for test preparation.  On one I installed the 12.1.0.1 database software and created a CDB and a non-CDB for testing.  On the other I installed the grid software and database software and created a CDB on ASM disks.  I spent many hours testing 12c features on these two environments and three databases.

I learned a ton both about new 12c features and features that exist in earlier versions, but it was a long and tedious process and I’m glad to finally be done.

- Bobby

Categories: DBA Blogs

Some Observations on Puppetrun with Foreman

Pythian Group - Tue, 2014-08-05 11:50

After joining Pythian I was introduced to several configuration management systems, and Puppet was one of them. Foreman is a systems management tool which can be integrated with Puppet to manage puppet modules and to initiate puppet runs on hosts from a web interface. This is very useful if you want to configure a large number of systems.

Puppet kick, which was previously used to initiate puppet runs from Foreman, is now deprecated.

To initiate puppet runs from the Foreman interface, I used MCollective. MCollective can be used to execute parallel jobs on remote systems. There are three main components:

    Client – Connects to the mcollective server and sends commands.
    Server – Runs on all managed systems and executes commands.
    Middleware – A message broker like ActiveMQ.

I used mcollective puppet module from Puppetlabs for my setup.

# puppet module install puppetlabs-mcollective

My setup includes the middleware (ActiveMQ) and the mcollective client on the puppet server, and mcollective servers on all managed systems.

After the implementation, I found that puppet runs initiated from the Foreman web interface were failing for some servers.

I found the following in /var/log/foreman-proxy/proxy.log:

D, [2014-04-18T07:20:54.392392 #4256] DEBUG -- : about to execute: /usr/bin/sudo /usr/bin/mco puppet runonce -I server.pythian.com
W, [2014-04-18T07:20:56.167627 #4256] WARN -- : Non-null exit code when executing '/usr/bin/sudo /usr/bin/mco puppet runonce -I reg-app-02.prod.tprweb.net'
E, [2014-04-18T07:20:56.175034 #4256] ERROR -- : Failed puppet run: Check Log files

You can see that the mco command is trying to execute a puppet run on server.pythian.com and failing. The mco command uses several sub-commands called 'applications' to interact with the systems, and 'puppet' is one of them.

When running the command on the command line, I received the following:

# mco puppet runonce -I server.pythian.com

| [ > ] 0 / 1

warn 2014/04/11 08:05:34: client.rb:218:in `start_receiver' Could not receive all responses. Expected : 1. Received : 0

Finished processing 0 / 1 hosts in 22012.79 ms

No response from:

server.pythian.com

I am able to ping the server.

When I ran 'mco ping', I found that the server with the issue is identified by its short hostname while the others use the FQDN.

$ mco ping
server time=89.95 ms
server3.pythian.com time=95.26 ms
server2.pythian.com time=96.16 ms

So mcollective is exporting a short hostname when foreman is expecting an FQDN (Fully Qualified Domain Name) from this server.

Foreman takes the node name from the puppet certificate name, and that is used for filtering when sending mco commands.

Mcollective exports identity differently. From http://docs.puppetlabs.com/mcollective/configure/server.html#facts-identity-and-classes,

identity
The node’s name or identity. This should be unique for each node, but does not need to be.
Default: The value of Ruby’s Socket.gethostname method, which is usually the server’s FQDN.
Sample value: web01.example.com
Allowed values: Any string containing only alphanumeric characters, hyphens, and dots — i.e. matching the regular expression /\A[\w\.\-]+\Z/

I passed the FQDN as the identity on the servers using the mcollective module, which resulted in the following setting:

# cat /etc/mcollective/server.cfg |grep identity
identity = server.pythian.com

This allowed the command to run successfully and got 'Puppet Run' from Foreman working.

# mco puppet runonce -I server.pythian.com

 * [ ============================================================> ] 1 / 1

Now ‘mco ping’ looks good as well.

$ mco ping
server.pythian.com time=91.34 ms
server3.pythian.com time=91.23 ms
server2.pythian.com time=82.16 ms

Now let us check why this was happening.

The mcollective identity is exported from the Ruby function Socket.gethostname.

From the Ruby source code you can see that Socket.gethostname gets its value from gethostname().

./ext/socket/socket.c

#ifdef HAVE_GETHOSTNAME
/*
 * call-seq:
 *   Socket.gethostname => hostname
 *
 * Returns the hostname.
 *
 *   p Socket.gethostname #=> "hal"
 *
 * Note that it is not guaranteed to be able to convert to IP address using gethostbyname, getaddrinfo, etc.
 * If you need local IP address, use Socket.ip_address_list.
 */
static VALUE
sock_gethostname(VALUE obj)
{
#if defined(NI_MAXHOST)
# define RUBY_MAX_HOST_NAME_LEN NI_MAXHOST
#elif defined(HOST_NAME_MAX)
# define RUBY_MAX_HOST_NAME_LEN HOST_NAME_MAX
#else
# define RUBY_MAX_HOST_NAME_LEN 1024
#endif

    char buf[RUBY_MAX_HOST_NAME_LEN+1];

    rb_secure(3);
    if (gethostname(buf, (int)sizeof buf - 1) < 0)
        rb_sys_fail("gethostname(3)");

    buf[sizeof buf - 1] = '\0';
    return rb_str_new2(buf);
}

gethostname is a glibc function which calls the uname system call and copies the value from the returned nodename.

So while Foreman uses the FQDN value which it collects from the puppet certificate name, mcollective exports the hostname returned by gethostname().

Now let us see how gethostname() gives different values in different systems.

When the complete FQDN is passed in the HOSTNAME parameter in /etc/sysconfig/network, we can see that Socket.gethostname returns the FQDN.

[root@centos ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos.pythian.com

[root@centos ~]# hostname -v
gethostname()=`centos.pythian.com'
centos.pythian.com

[root@centos ~]# irb
1.9.3-p484 :001 > require 'socket'
 => true
1.9.3-p484 :002 > Socket.gethostname
 => "centos.pythian.com"
1.9.3-p484 :003 >

The system which was having the problem had the following configuration.

[root@centos ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server

[root@centos ~]# hostname -v
gethostname()=`server'
server

[root@centos ~]# irb
1.9.3-p484 :001 > require 'socket'
 => true
1.9.3-p484 :002 > Socket.gethostname
 => "server"
1.9.3-p484 :003 >

Here Ruby returns only the short hostname for Socket.gethostname. But the system had the following entry in /etc/hosts.

192.168.122.249 server.pythain.com server

This allowed the system to resolve the FQDN.

[root@centos ~]# hostname -f -v
gethostname()=`server'
Resolving `server' ...
Result: h_name=`server.pythain.com'
Result: h_aliases=`server'
Result: h_addr_list=`192.168.122.249'
server.pythain.com

From ‘man hostname’.

The FQDN of the system is the name that the resolver(3) returns for the host name.

Technically: The FQDN is the name gethostbyname(2) returns for the host name returned by gethostname(2). The DNS domain name is the part after the first dot.

As the resolver is able to resolve the hostname from /etc/hosts, puppet is able to pick up the FQDN value for its certificate, which is later used by Foreman.
But mcollective exports the short hostname returned by gethostname().

To fix the issue on Red Hat based Linux distributions, we can try any of the following:

* Pass an FQDN in /etc/sysconfig/network like below.

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server.pythian.com

OR

* Use a short hostname as HOSTNAME but make sure that it would not resolve to an FQDN in /etc/hosts or DNS (not really suggested).

OR

* Pass a short hostname or FQDN as HOSTNAME, but make sure that there is an entry like below in /etc/hosts and that mcollective is exporting the FQDN as its identity.

192.168.122.249 server.pythian.com server
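After applying one of these fixes (and restarting the mcollective server where needed), a quick sanity check along these lines should show the FQDN everywhere; the output below is illustrative rather than captured from a real run:

$ ruby -rsocket -e 'puts Socket.gethostname'
server.pythian.com

$ mco ping
server.pythian.com time=90.12 ms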
Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 1

Pythian Group - Tue, 2014-08-05 06:24

This blog post is the first in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. Today’s post will teach you how to configure a virtual network and a dynamic routing gateway, and the following blog posts will demonstrate how to create the certificates, and how to configure the VPN client.

Nowadays we are opting to move parts of, or even entire systems to the cloud. In order to build a hybrid environment, we need to find a way to connect our enterprise/local network, also known as on-premises, and the cloud.

Currently, we have two options to connect Azure and On-Premises:

  1. Using a Point-to-Site VPN
  2. Using a Site-to-Site VPN

The first option, the Point-to-Site VPN, is the one I'll be demonstrating. It is recommended when you need to connect only some servers of your network to Azure. On the other hand, a Site-to-Site VPN connects your entire on-premises network to Azure.

CONFIGURE A VIRTUAL NETWORK AND A DYNAMIC ROUTING GATEWAY

To start, connect to your Azure account (https://manage.windowsazure.com/) and click the "add" button in the bottom left corner.

    1. Now follow the menu options to create a custom virtual network.
    2. Fill in the Virtual Network name and the location where you want to create it.
    3. Check "Configure a Point-to-Site VPN" (the DNS server is an optional setting, used for name resolution between this virtual network and your on-premises network).
    4. Set the IP range accordingly, after verifying that this range does not overlap with your on-premises network.
    5. Click the "add gateway subnet" button and then the finish button (check mark).
    6. Now you need to wait a few minutes while the virtual network is being created.
    7. You will see a confirmation message when the process is done.
    8. At this stage, you will be able to see the network you created under the networks section.
    9. Now we need to create a "Dynamic Routing Gateway". To do this, click on the network you just created and go to its Dashboard.
    10. Click the "CREATE GATEWAY" button at the bottom of the page and confirm your intention by selecting "Yes".
    11. This may take a few minutes. You will see the message "CREATING GATEWAY" while it runs.
    12. After a successful creation, you will see the result on the dashboard.

At this point, we are done with the Virtual Network creation. Now we can proceed to the certificate creation steps… Stay tuned for my next two posts.

Categories: DBA Blogs