
DBA Blogs

Storing Trailing NULLs in a table

Hemant K Chitale - Thu, 2014-03-27 09:49
Oracle has an optimization: if, for a row, all trailing columns are NULL, it does not have to store anything for those columns, saving space by reducing the physical size of the row in the block (in effect "compressing" the NULLs).  If, however, the NULLs appear in intermediate columns (with one or more non-NULL values after them), Oracle has to write one byte for each NULL, without any "compression".

Here is a simple demo with two tables holding 9 data columns each.  In 90% of the rows, 6 columns have NULL values.

In the first table, the first column, an intermediate column and the last column always have non-NULL values, resulting in NULLs interspersed with non-NULLs.

SQL> drop table intermediate_nulls;

Table dropped.

SQL>
SQL> create table intermediate_nulls
2 (
3 id_column number,
4 data_col_1 varchar2(25),
5 data_col_2 varchar2(25),
6 data_col_3 varchar2(25),
7 data_col_4 varchar2(25),
8 data_col_5 varchar2(25),
9 data_col_6 varchar2(25),
10 data_col_7 varchar2(25),
11 data_col_8 varchar2(25),
12 data_col_9 varchar2(25)
13 )
14 /

Table created.

SQL>
SQL> REM Insert 1 million rows
SQL> REM 9 in 10 rows have NULLs for cols 2 to 5, 7 to 8 but all have values in cols 1,6,9
SQL> insert into intermediate_nulls
2 select rownum,
3 'Col 1 Data',
4 decode(mod(rownum,10),0,'Col 2 Data', null),
5 decode(mod(rownum,10),0,'Col 3 Data', null),
6 decode(mod(rownum,10),0,'Col 4 Data', null),
7 decode(mod(rownum,10),0,'Col 5 Data', null),
8 'Col 6 Data',
9 decode(mod(rownum,10),0,'Col 7 Data', null),
10 decode(mod(rownum,10),0,'Col 8 Data', null),
11 'Col 9 data'
12 from dual
13 connect by level < 1000001
14 /

1000000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','INTERMEDIATE_NULLS');

PL/SQL procedure successfully completed.

SQL>
SQL> select num_rows, avg_row_len, blocks
2 from user_tables
3 where table_name = 'INTERMEDIATE_NULLS'
4 /

NUM_ROWS AVG_ROW_LEN BLOCKS
---------- ----------- ----------
1000000 50 7678

SQL>

In the second table, the 6 columns holding NULLs are all trailing columns.

SQL> drop table trailing_nulls;

Table dropped.

SQL>
SQL> create table trailing_nulls
2 (
3 id_column number,
4 data_col_1 varchar2(25),
5 data_col_2 varchar2(25),
6 data_col_3 varchar2(25),
7 data_col_4 varchar2(25),
8 data_col_5 varchar2(25),
9 data_col_6 varchar2(25),
10 data_col_7 varchar2(25),
11 data_col_8 varchar2(25),
12 data_col_9 varchar2(25)
13 )
14 /

Table created.

SQL>
SQL>
SQL> REM Insert 1 million rows
SQL> REM 9 in 10 rows have NULLs for cols 4 to 9 -- i.e. all trailing cols
SQL> insert into trailing_nulls
2 select rownum,
3 'Col 1 Data',
4 'Col 2 Data',
5 'Col 3 Data',
6 decode(mod(rownum,10),0,'Col 4 Data', null),
7 decode(mod(rownum,10),0,'Col 5 Data', null),
8 decode(mod(rownum,10),0,'Col 6 Data', null),
9 decode(mod(rownum,10),0,'Col 7 Data', null),
10 decode(mod(rownum,10),0,'Col 8 Data', null),
11 decode(mod(rownum,10),0,'Col 9 Data', null)
12 from dual
13 connect by level < 1000001
14 /

1000000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','TRAILING_NULLS');

PL/SQL procedure successfully completed.

SQL>
SQL> select num_rows, avg_row_len, blocks
2 from user_tables
3 where table_name = 'TRAILING_NULLS'
4 /

NUM_ROWS AVG_ROW_LEN BLOCKS
---------- ----------- ----------
1000000 50 6922

SQL>

We can see that the TRAILING_NULLS table consumes approximately 10% less disk space.
Note : The AVG_ROW_LEN is the same in both tables.  When the data is queried, NULLs (one byte each) are returned to the client.  It is in the data block that the "compression" is done.
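This storage model can be sketched in a few lines of Python. This is a deliberately simplified illustration of the rule described above (one byte per stored intermediate NULL, nothing for trailing NULLs), not Oracle's actual row format:

```python
def null_storage_bytes(row):
    """Bytes spent storing NULL markers in a row, under the simplified
    model above: trailing NULLs are not stored at all, while each
    intermediate NULL costs one length byte."""
    # Find the last non-NULL column; everything after it is "trailing".
    last_non_null = -1
    for i, col in enumerate(row):
        if col is not None:
            last_non_null = i
    # Count the NULLs that sit before the last non-NULL value.
    return sum(1 for col in row[:last_non_null + 1] if col is None)

# Row shaped like INTERMEDIATE_NULLS: values in data columns 1, 6 and 9.
intermediate = ['id', 'c1', None, None, None, None, 'c6', None, None, 'c9']
# Row shaped like TRAILING_NULLS: values in data columns 1-3, NULLs in 4-9.
trailing = ['id', 'c1', 'c2', 'c3', None, None, None, None, None, None]

print(null_storage_bytes(intermediate))  # 6 -> six NULL bytes stored
print(null_storage_bytes(trailing))      # 0 -> all NULLs are trailing
```

Over a million rows, those few extra bytes per row are roughly consistent with the ~10% difference in BLOCKS seen in the demo.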
.
.
.
Categories: DBA Blogs

MySQL Community Dinner at Pedro’s

Pythian Group - Thu, 2014-03-27 08:00

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members, as many of you will be in town for the MySQL Conference that week. Here are the details:

What: The MySQL Community Pay-Your-Own-Way Dinner

When: Wednesday April 2, 2014 – Meet us at 6:30 PM in the lobby of the Hyatt Santa Clara, or at 7 PM at Pedro’s (You are welcome to show up later, too!)

Cost: The meal will be $25 USD including tax and gratuities. Please bring cash. (See menu below)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

How: RSVP through Eventbrite

Please note: Due to the historically high attendance for this event, Pedro’s has asked that each person pay in cash to simplify billing. Pedro’s can handle large groups, but we would like to give them an accurate count of attendees so that they can be adequately prepared.

Attendees:

1. Paul Vallee (Pythian)

2. Wagner Bianchi (Pythian)

3. Danil Zburivsky (Pythian)

4. Alex Gorbachev (Pythian)

5. Derek Downey (Pythian)

6. Chad Scheiter (Pythian)

7. Andrew Moore

8. Lawrence Schwartz

9. Tom Diederich

10. Erin O’Neill

11. Your name here!!

Looking forward to seeing you all at the event!

[Image: Pedro’s menu]
Categories: DBA Blogs

Oracle 12c Auditing Chapters

Bobby Durrett's DBA Blog - Wed, 2014-03-26 17:51

Spent a good amount of time yesterday and today reading about auditing in Oracle 12c.  Can’t say I read every word, but I think it was worth reading the three chapters in the Security manual related to auditing:

Chapter 21 Introduction to Auditing
Chapter 22 Configuring Audit Policies
Chapter 23 Administering the Audit Trail

I haven’t used these features but it seems like a major new piece of code with the Unified Audit Trail.

I also read this chapter of the VLDB guide because it seemed to have a lot of things that were either new to 12c or new to me:

Chapter 5 Managing and Maintaining Time-Based Information

This chapter describes features that cause data to age out and get moved on to less expensive storage automatically over time.

Anyway, just wanted to pass on some full chapters that I’ve read and am pondering as I try to comprehend the new 12c features.

- Bobby

Categories: DBA Blogs

C14 OakTable World Las Vegas

Pythian Group - Wed, 2014-03-26 12:55

If you haven’t yet made the decision to attend COLLABORATE 14 – IOUG Forum in Las Vegas taking place on 7-11 April, 2014 at the Venetian Hotel, this might just help you to make the call. You know you want to be there.

OakTable Network will be holding its OakTable World for the very first time during the COLLABORATE conference. While it’s a little bit last minute, IOUG was able to provide a room for us to use for the whole day, and we at OakTable quickly put the schedule together. The agenda was selected by the OakTable speakers, on topics they are really passionate about. As history shows, this is generally what you also want to hear about.

The history of OakTable World goes back to an underground event started by Mogens Nørgaard during Oracle OpenWorld somewhere around 2007-2009. After several years of growing popularity, the event became known as OakTable World, ran during OOW12 (link) and OOW13 (link), and was a huge success. Last year we also ran OTWUK13 around the UKOUG TEBS 13 conference in Manchester. Needless to say, it was a success.

Thanks to the many good companies that sponsored the events all those years — you know who you are. This year, the sponsor is really IOUG, who managed to find a room for us. I’ll probably gather a few usual suspects to print special t-shirts again, so stay tuned for that update — I wanted to get the info out there ASAP so that people could plan on attending.

The up to date schedule is in Google Calendar. You can also sign up to the calendar feed using XML or iCal feeds.

Embedded schedule is below:

Note that we have two hours of TED-style lightning talks, with or without slides. Each talk is 10-15 minutes in which the author shares an innovative perspective relevant to the database industry. The talks are a mix of technical and non-technical topics. The format was first tried at the UKOUG conferences in 2011 and 2012, I believe, and the talks were very popular and well attended.

You will hear about specific technologies or methods that are under-utilized, about provocative approaches to the database tasks we have at hand, about disruptive trends, and about totally non-technical things that will make you a better IT professional — stories of someone’s life or professional career. It’s usually entertaining and thought-provoking. I strongly recommend them.

As you can see, the speakers for the rest of the day are stellar. Frits Hoogland of Enkitec Europe fame comes to C14 all the way from Holland, and he will start the day by going very, very deep inside Oracle execution code. Frits will demonstrate live use of Linux GDB debugger techniques to diagnose and troubleshoot very complex scenarios where you really need to understand the internals of certain database operations. You need your brain fresh for this session, so scheduling it first thing in the morning is the only reasonable way to reduce the risk of your brain being fried in the first 15 minutes. Hardcore folks will love this.

Next goes Kyle Hailey with his Agile Data story, to teach us about modern techniques for making Oracle database deployment, cloning and the like as easy as deploying a Linux RPM package with YUM. Moved to 3pm: Graham Wood from Oracle will be presenting Unreal World Performance use cases, themed after the popular Real World Performance Tour (but without the two other amigos), with a fresh look at bad practices in managing connections to Oracle databases — time for a refresher on those very popular RWP YouTube videos.

After lunch is when the Lightning Talks will run, along with a very cool session by Jeff Needham of Scalabilities, where you can learn about disruptive trends in modern data warehousing and what customers are doing today (and more will be doing tomorrow) to scale their data processing in a cost-effective way: how vendors respond to Big Data technologies disrupting their traditional products, and how customers handle it (note that I’m trying hard to be politically correct here — come, it will be very refreshing). By the way, Jeff will give away copies of his book (Disruptive Possibilities) during his Lightning Talk as well.

Unfortunately, Jeff won’t be able to be here. We might still get some of his books. However, we have Gwen Shapira presenting on loading Twitter data into Hadoop and analyzing it, with a complete live on-the-spot demo!

A late addition from Karl Arao is also on the schedule!

Here is the schedule for the Lightning Talks. It might change slightly between the two slots (and we still have one slot to be confirmed):

Lightning Talk Part I:

  • Kyle Hailey – “Importance of Data Visualization”
  • Jeff Needham – “Hadoop Myth-Busting (plus books giveaway)”
  • Tim Gorman – “Eagles and Seagulls” about being an independent IT consultant
  • Kellyn Pot’vin – “Finding Your Path”

Lightning Talk Part II:

  • Jonathan Gennick – “Seven Ways toward Better Writing”
  • Kellyn Pot’vin – “Snapclone with DBaaS in EM12c”
  • Graham Wood – How Hardware Timings Affect Your Oracle Database Performance
  • TBC

Lightning talk details are in the event entries in the calendar if you open each event. Remember that there might be some last-minute changes, so I recommend simply signing up for the calendar feed on the mobile device you carry with you at the conference.

Also, the OakTable folks and others will be hanging around that room for the whole day, so come over. I think it would be a good experiment to reintroduce a couple of things that Mogens had back in the old days:

  • FIVE-MINUTES-OF-FAME – Any attendee can get up and present a technical solution (5-minute time limit) to the group for admiration.
  • FIVE-MINUTES-OF-HELP – Any attendee can get up and pose a technical problem (5-minute time limit) to the group for help.

Well, that’s the news. If you are excited, post in the comments that you are coming, and share this on Twitter, Facebook, LinkedIn and whatnot. Looking forward to seeing as many of you there as possible. As Tim Gorman said — “more fun than a barrel of drunken DBAs, is the un-conference formerly known as “Oracle Closed World“, now known as “Oak Table World“.” And this year we are in Vegas!

Categories: DBA Blogs

Learned a couple of things from RMAN restore

Bobby Durrett's DBA Blog - Mon, 2014-03-24 14:37

An RMAN restore and recovery that I completed today answered a couple of questions that remained after the recovery that was the topic of my post from June.  Both today’s recovery and June’s involved restoring a production database on another host and recovering that database to a particular point in time.

Question 1: How to rename redo logs?

When doing a restore and recovery to a point in time, RMAN does not restore the redo logs.  So, the production redo log directory does not have to exist on your target.  All you have to do is rename the redo logs after the restore and recover RMAN commands and before the alter database open resetlogs command.

Oracle document 1338193.1 in step 8 titled “Relocate all the online redo logs” documents the needed command and when to run it.

For each production redo log you run a command like this on the mounted but not opened restored and recovered database:

alter database rename file 
'old redo log path and name' to 
'new redo log path and name';

Question 2: How do I prevent the restored archive logs from filling up the archive filesystem?

It turns out that the recover command has an option that limits the amount of space the restored archive logs will take up, and another option that causes the recover command to delete the archive logs after applying them:

recover database delete archivelog maxsize 50 G;

Otherwise this was the same case as the earlier blog post.  But at least this time I didn’t have to worry about archive logs filling up the filesystem, and I was able to put the redo logs where I wanted them.

- Bobby

Categories: DBA Blogs

Open and Migrate Microsoft Access in Oracle SQL Developer 4

DBASolved - Mon, 2014-03-24 12:40

For many people, getting started with databases revolves around using Microsoft Access (MS Access). MS Access is an entry-level “database” (if you can call it a database) that Microsoft has been putting out for years. Often people want to move older MS Access “databases” into enterprise databases as they become reliant on the information stored in them. Oracle has recognized this and has enabled Oracle SQL Developer to interact with MS Access and allow for a quick copy of data from MS Access to Oracle.

I have been a baseball fan for as long as I can remember; however, I don’t dwell on stats and win-loss records. I honestly just like to watch the game and watch my boys learn the game at a competitive level. With this in mind, I went looking for baseball stats that I could put into a database and use for demo purposes. What I found was an MS Access “database” full of information from 1996 up through 2013, thanks to Sean Lahman (here).

Now that I have data I want for testing, I really want to test it in Oracle! Using Oracle SQL Developer I’m able to access the data stored in MS Access and with a right-click of the mouse move the data into Oracle. Sounds simple, right!?  Let’s take a look.

The tools I’m using to do this migration are:

  • Microsoft Access 2010
  • Oracle SQL Developer 4.0  (4.0.1.14.48 w/ JDK 1.7.0_51)
  • Oracle Database 12c (12.1.0.1)
Setup Microsoft Access

In order to access the data in an MS Access “database”, you need to enable the viewing of system objects in Access. In MS Access 2010 this can be accomplished by doing the following once the MS Access database is open.

Open the options for the database.

clip_image002

Once the Access Options dialog is open, then go to the Navigation button.

image

After clicking on the navigation button, the Navigation Options dialog will open. On this dialog you want to enable “Show System Objects”.

image

After enabling “Show System Objects”, click OK all the way out to the “database”. You will notice in the All Access Objects tree there are some system tables that appear to be greyed out.

clip_image007

These are the system tables that Oracle SQL Developer needs access to in order to connect.
Connect to MS Access from SQL Developer

Setting up a connection to MS Access in Oracle SQL Developer is just like setting up a connection for Oracle. From the Connections pane, click on the green plus sign. This will open the new connection dialog box.

clip_image008

You will see a tab that says Access. Click on the tab to open the dialog for using an MS Access MDB file. Use the Browse button to locate the MDB file and give the connection a name. The dot in the password field just appears upon connection; a username is not needed since connections to MS Access are made as Admin by default.

clip_image010

Once connected to the MS Access database from SQL Developer, you will see the connection and tables that are in the database.

clip_image011

From here, you can see all the tables in the database by expanding the TABLE category of the connection.

clip_image013

With the connection to MS Access set and usable, I wanted to move all these tables into an Oracle Database 12c for testing.
Quickly moving data to Oracle

All the baseball data can be quickly moved to Oracle in a few simple steps. No, this process does not involve using the migration features of SQL Developer; instead it is a simple right-click.

Start by highlighting one or more tables you would like to copy to Oracle.

clip_image014

Next perform a right-click on the highlighted tables. You will see an option for Copy to Oracle. Click this option.

clip_image016

After clicking Copy To Oracle, a connection dialog will open. For my Oracle connection, I’m connecting to a PDB with a local user named SCOUT. Click Apply.

clip_image018

At this point, SQL Developer is copying all the metadata and data over to my Oracle 12c PDB under the user SCOUT.

clip_image019

When the process is done copying, I can verify that all the tables are there by looking at my Oracle connection pane and opening the Tables node on the tree.

image

Now I have all the baseball data I want to play with loaded into Oracle Database 12c (PDB). Let the fun times begin!

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database
Categories: DBA Blogs

Watch for Pythian Speakers at Collaborate 14

Pythian Group - Mon, 2014-03-24 07:50

This year COLLABORATE 14 is being held at The Venetian and Sands Expo Center in Las Vegas, April 7-11. Some Pythian folks will be attending and speaking at the event, so be sure to watch for their presentations:

  • April 9, 2014, 8:30 AM, Level 3, Lido 3101B: Database Private Clouds with Oracle Database 12c (Marc Fielding)
  • April 9, 2014, 8:30 AM, Level 3, Lido 3005: DBA 101 : Calling all New Database Administrators (René Antunez)
  • April 9, 2014, 4:30 PM, Sands, Level 1 – 309: What’s New in Oracle E-Business Suite R12.2 for Database Administrators? (Vasu Balla)
  • April 10, 2014, 9:45 AM, Level 3, Lido 3101B: 12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management (Fahd Mirza)
  • April 10, 2014, 11:00 AM, Level 3, San Polo 3403: My First 100 days with an Exadata (René Antunez)
  • April 10, 2014, 3:00 PM, Level 3, San Polo 3403: Accelerate Your Exadata Deployment with the Skills You Already Have (Marc Fielding)
  • April 10, 2014, 4:15 PM, Level 3, Murano 3306: Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden (Paul Vallee)
  • April 11, 2014, 8:30 AM, Level 3, Lido 3103: RMAN in 12c: The Next Generation (René Antunez)
  • April 11, 2014, 9:45 AM, Level 3, Lido 3005: Ensuring Your Physical Standby is Usable (Michael Abbey)
  • April 11, 2014, 9:45 AM, Level 3, San Polo 3405: Practical Machine Learning for DBAs (Alex Gorbachev)
Categories: DBA Blogs

Vim::X – VimL is Eldritch, Let’s Write Perl!

Pythian Group - Mon, 2014-03-24 07:49

Last week, I finally got around writing a few macros to help with conflict resolution in Vim:


" conflict resolution - pick this one / option 1 / option 2
map ,. $?\v^[<=]{7}jy/\v^[=>]{7}?\v^[<]{7}"_d/\v^\>{7}"_ddP
map ,<  $?\v^[<]{7}<,>.
map ,>  $?\v^[<]{7}/\v^[=]{7}<,>. 
" ... or keep'em both
map ,m  $?\v^[<]{7}"_dd/\v[=]{7}cc/\v[>]{7}"_dd

With that, I can go from conflict to conflict and pick sides with the ease of two keystrokes, never having to manually delete those <<<<<<<, ======= and >>>>>>> lines again. Sweet, eh?
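For readers who would rather see the pick-a-side logic spelled out than squint at the macros, here is a rough standalone sketch in Python (a hypothetical illustration only; it is not part of Vim::X and handles just the first conflict in a list of lines):

```python
def resolve_conflict(lines, side):
    """Resolve the first <<<<<<< / ======= / >>>>>>> conflict in `lines`.

    side: 'mine' keeps the first half, 'theirs' keeps the second half,
    'both' keeps both halves; the marker lines are always deleted.
    """
    start = next(i for i, l in enumerate(lines) if l.startswith('<<<<<<<'))
    mid = next(i for i, l in enumerate(lines) if l.startswith('======='))
    end = next(i for i, l in enumerate(lines) if l.startswith('>>>>>>>'))
    mine, theirs = lines[start + 1:mid], lines[mid + 1:end]
    keep = {'mine': mine, 'theirs': theirs, 'both': mine + theirs}[side]
    return lines[:start] + keep + lines[end + 1:]

text = ['a', '<<<<<<< HEAD', 'x', '=======', 'y', '>>>>>>> branch', 'b']
print(resolve_conflict(text, 'mine'))    # ['a', 'x', 'b']
print(resolve_conflict(text, 'theirs'))  # ['a', 'y', 'b']
print(resolve_conflict(text, 'both'))    # ['a', 'x', 'y', 'b']
```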

Now, any sane person would have stopped there. I found myself thinking it’d be nice to transform that line of garbage into a neat little function.

There is an obvious problem, though: my VimL-fu is pretty weak. However, my vim is always compiled with Perl support. Sure, the native interface is kinda sucky, but… maybe we can improve on that?

Interfacing Vim with Perl

That’s where Vim::X enters the picture (yes, I know, rather a poor name; does anybody have a better suggestion?). The module has two functions:

  1. give us a bunch of helper functions to interact with Vim as painlessly as possible.
  2. deal with all the fiddly bridgey things required to give us access to functions defined in Perl modules from Vim.
Putting the ‘V’ back in ‘DWIM’

Vim::X comes with a small but growing bag of helper functions, as well as helper classes — Vim::X::Window, Vim::X::Buffer, Vim::X::Line — that provide nice wrappers around the Vim entities. I still have to document them all, but the implementation of my ‘ResolveConflict’ function should give you an idea of how to use them:


package Vim::X::Plugin::ResolveConflict;

use strict;
use warnings;

use Vim::X;

sub ResolveConflict {
        my $side = shift;

        my $here   = vim_cursor;
        my $mine   = $here->clone->rewind(qr/^<{7}/);
        my $midway = $mine->clone->ff( qr/^={7}/ );
        my $theirs = $midway->clone->ff( qr/^>{7}/ );

        $here = $side eq 'here'   ? $here
              : $side eq 'mine'   ? $mine
              : $side eq 'theirs' ? $theirs
              : $side eq 'both'   ? $midway
              : die "side '$side' is invalid"
              ;

        vim_delete( 
                # delete the marker
            $mine, $midway, $theirs, 
                # and whichever side we're not on
            ( $midway..$theirs ) x ($here < $midway), 
            ( $mine..$midway )   x ($here > $midway),
        );
};

1;

Sure, it’s more verbose than the original macros. But now we have a fighting chance of understanding what is going on. As is my habit, I am overloading the heck out of my objects. For example, the line objects will be seen as their line number or their content, depending on the context. Evil? Probably. But it makes for nice, succinct code:



sub Shout {
    my $line = vim_cursor;
    $line <<= uc $line;
}

Fiddly bridgey things

This is where I expect a few ‘oooh’s and ‘aaaah’s. So we have ‘ResolveConflict’ in a Perl module. How do we make Vim see it?

First, you add a ‘:Vim’ attribute to the function:


sub ResolveConflict :Vim(args) {
    ...

Then, in your .vimrc:


" only if the modules aren't already in the path
perl push @INC, '/path/to/modules/';

perl use Vim::X::Plugin::ResolveConflict

map ,<  call ResolveConflict('mine')
map ,>  call ResolveConflict('theirs')
map ,.  call ResolveConflict('here')
map ,m  call ResolveConflict('both')

Isn’t that way more civilized than the usual dark invocations?

One more step down the rabbit hole

Once I had my new ‘ResolveConflict’ written, it goes without saying that I wanted to test it. At first, I wrote a vspec test suite:


describe 'basic'

    perl push @INC, './lib'
    perl use Vim::X::Plugin::ResolveConflict

    before
        new
        read conflict.txt
    end

    after
        close!
    end

    it 'here mine'
        normal 3G
        call ResolveConflict('here')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'here theirs'
        normal 6G
        call ResolveConflict('here')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'mine'
        normal 6G
        call ResolveConflict('mine')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'theirs'
        normal 6G
        call ResolveConflict('theirs')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'both'
        normal 6G
        call ResolveConflict('both')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
        Expect getline(4) == "d"
        Expect getline(5) == "e"
        Expect getline(6) == "f"
    end

end

But then I found myself missing my good ol’ TAP. If only there was an interface to run those Perl modules within v–

oh.

So I changed the test suite to now look like:


package ResolveConflictTest;

use Vim::X;
use Vim::X::Plugin::ResolveConflict;

use Test::Class::Moose;

sub test_setup {
    vim_command( 'new', 'read conflict.txt' );
}

sub test_teardown {
    vim_command( 'close!' );
}

sub here_mine :Tests {
    vim_command( 'normal 3G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'abc', "here, mine";
    is vim_buffer->size => 3, "only 3 lines left";
};

sub here_theirs :Tests { 
    vim_command( 'normal 6G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub mine :Tests {
    vim_call( 'ResolveConflict', 'mine' );

    is join( '', vim_lines(1..3) ) => 'abc';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub theirs :Tests {
    vim_call( 'ResolveConflict', 'theirs' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub both :Tests {
    vim_call( 'ResolveConflict', 'both' );

    is join( '', vim_lines(1..6) ) => 'abcdef';
    is vim_buffer->size => 6, "only 6 lines left";
};

__PACKAGE__->new->runtests;

I also wrote a little vim_prove script to run the show:


#!perl -s

exec 'vim', qw/ -V -u NONE -i NONE -N -e -s /,
    ( map { 1; '-c' => "perl push \@INC, '$_'" } split ":", $I ),
    '-c', "perl do '$ARGV[0]' or die $@",
    '-c', "qall!";

Aaaand whatdyaknow:


$ perl bin/vim_prove -I=lib contrib/test.vim
#
# Running tests for ResolveConflictTest
#
    1..5
        ok 1
        ok 2 - only 6 lines left
        1..2
    ok 1 - both
        ok 1 - here, mine
        ok 2 - only 3 lines left
        1..2
    ok 2 - here_mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 3 - here_theirs
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 4 - mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 5 - theirs
ok 1 - ResolveConflictTest
What’s Next?

The current prototype is on GitHub. I’ll try to push it to CPAN once I have a little bit of documentation and a little more order in the code. But if you are interested, please, fork away, write plugins, and PR like there is no tomorrow.

Categories: DBA Blogs

Plan HASH_VALUE remains the same for the same Execution Plan, even if ROWS and COST change

Hemant K Chitale - Sun, 2014-03-23 02:41
Here is a simple demo showing that the Plan Hash Value does not consider ROWS and COST, but only the shape of the Execution Plan.  Thus, even after more rows are added to a table, if the Execution Plan for a query remains the same, its Plan Hash Value is unaffected by the ROWS and COST changing.

SQL> -- create the table
SQL> create table branch_list
2 (country_code varchar2(3), branch_code number, branch_city varchar2(50));

Table created.

SQL>
SQL> -- create an index
SQL> create index branch_list_cntry_ndx
2 on branch_list(country_code);

Index created.

SQL>
SQL>
SQL>
SQL> -- populate it with 100 rows, one third being 'IN'
SQL> insert into branch_list
2 select decode(mod(rownum,3),0,'IN',1,'US',2,'US'), rownum, dbms_random.string('X',32)
3 from dual
4 connect by level < 101
5 /

100 rows created.

SQL>
SQL> -- gather statistics
SQL> exec dbms_stats.gather_table_stats('','BRANCH_LIST');

PL/SQL procedure successfully completed.

SQL>
SQL> -- get an execution plan
SQL> explain plan for
2 select branch_code, branch_city
3 from branch_list
4 where country_code = 'IN'
5 /

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 513528032

-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50 | 1950 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| BRANCH_LIST | 50 | 1950 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | BRANCH_LIST_CNTRY_NDX | 50 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("COUNTRY_CODE"='IN')

14 rows selected.

SQL>
SQL>
SQL> -- add another 400 rows, none of them being 'IN'
SQL> insert into branch_list
2 select decode(mod(rownum,6),0,'SG',1,'US',2,'US',3,'US',4,'AU',5,'UK'), rownum+100, dbms_random.string('X',32)
3 from dual
4 connect by level < 401
5 /

400 rows created.

SQL>
SQL> -- update statistics
SQL> exec dbms_stats.gather_table_stats('','BRANCH_LIST');

PL/SQL procedure successfully completed.

SQL>
SQL> -- get the execution plan again
SQL> explain plan for
2 select branch_code, branch_city
3 from branch_list
4 where country_code = 'IN'
5 /

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 513528032

-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 33 | 1320 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| BRANCH_LIST | 33 | 1320 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | BRANCH_LIST_CNTRY_NDX | 33 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("COUNTRY_CODE"='IN')

14 rows selected.

SQL>
SQL> select column_name, histogram
2 from user_tab_columns
3 where table_name = 'BRANCH_LIST';

COLUMN_NAME HISTOGRAM
------------------------------ ---------------
COUNTRY_CODE FREQUENCY
BRANCH_CODE NONE
BRANCH_CITY NONE

SQL> select count(*) from user_tab_histograms
2 where table_name = 'BRANCH_LIST'
3 and column_name = 'COUNTRY_CODE'
4 /

COUNT(*)
----------
5

SQL>


After the addition of 400 rows to a 100-row table, the distribution of rows has changed. At the second Gather_Table_Stats call, Oracle has properly computed a Frequency Histogram on the COUNTRY_CODE column for the 5 countries ('IN','US','SG','AU','UK').  The estimate for COUNTRY_CODE='IN' is now more accurate.

However, what I want to demonstrate here is that although "ROWS" (and "BYTES" for that many rows) and "COST" have changed in the new Execution Plan, the PLAN HASH VALUE ("513528032") remains the same.  Thus, the PLAN HASH VALUE is independent of changes to the ROWS/BYTES and COST.  The Execution Plan, per se, hasn't changed.
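The idea can be mimicked with a toy Python sketch: hash only the structural columns of the plan (operation and object name) and leave ROWS, BYTES and COST out of the digest. This is an illustration of the principle only, not Oracle's actual plan-hashing algorithm:

```python
import hashlib

def toy_plan_hash(plan_steps):
    """Hash only the structural parts of each plan step, ignoring the
    ROWS/BYTES/COST estimates, so re-costed plans hash identically."""
    h = hashlib.md5()
    for step in plan_steps:
        h.update(step['operation'].encode())
        h.update(step.get('name', '').encode())
    return h.hexdigest()[:8]

plan_100_rows = [
    {'operation': 'SELECT STATEMENT', 'rows': 50, 'cost': 2},
    {'operation': 'TABLE ACCESS BY INDEX ROWID',
     'name': 'BRANCH_LIST', 'rows': 50, 'cost': 2},
    {'operation': 'INDEX RANGE SCAN',
     'name': 'BRANCH_LIST_CNTRY_NDX', 'rows': 50, 'cost': 1},
]
# Same plan shape after the 400-row insert: only the estimates differ.
plan_500_rows = [dict(step, rows=33, cost=3) for step in plan_100_rows]

print(toy_plan_hash(plan_100_rows) == toy_plan_hash(plan_500_rows))  # True
```

A genuinely different plan (say, a full table scan) would of course produce a different hash, since the operations themselves change.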
.
.
.



Categories: DBA Blogs

BOGOF from Packt

Hemant K Chitale - Sat, 2014-03-22 07:14
Packt Publishing is celebrating the publication of its 2000th title with a Buy One Get One Free offer.
.
.
.
Categories: DBA Blogs

Oracle APEX Cookbook : Second Edition

Surachart Opun - Sat, 2014-03-22 05:18
Oracle Application Express is a great tool for developing web applications with SQL and PL/SQL. Users can develop and deploy web-based applications for desktops and mobile devices. Speaking of books about APEX, I would like to mention Oracle APEX Cookbook - Second Edition by Michel Van Zoest and Marcel Van Der Plas. I had the chance to serve as a technical reviewer on this book, and found it an interesting book about Oracle APEX. It was written to cover Oracle APEX 4.0 - 4.2 in 14 chapters.
  • Explore APEX to build applications with the latest techniques in AJAX and Javascript using features such as plugins and dynamic actions
  • With HTML5 and CSS3 support, make the most out of the possibilities that APEX has to offer
  • Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The examples in the book are very useful: readers can follow each topic and practice it. The book is for people who are new to Oracle APEX and want to get started. People who already know and work with Oracle APEX can use it as a reference and learn something new. So, I think it's a good idea to keep this book on your APEX shelf. I still believe a book is the easiest way to read about, and learn, APEX.
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Don Burleson

Bobby Durrett's DBA Blog - Fri, 2014-03-21 14:39

This is the second in a series of posts about different performance tuning authors and speakers who have had a big impact on my career.  My first post was on Craig Shallahamer.  I am ordering these posts chronologically.  Craig was the first to impact my career by improving my Oracle performance tuning knowledge.  The next person was Don Burleson.

I have heard Don Burleson speak at several user group conferences and got a lot out of those talks, but I got the most benefit from his book titled “Oracle 9i High-Performance Tuning with STATSPACK“.  Don’s 9i Statspack book helped me understand what is in the various v$ views that Statspack captures.  I mentioned in an earlier post that a coworker of mine asked how to learn about the different parts of an AWR report.  I learned about these initially from Don Burleson’s book.  Today, three major database versions later, on Oracle 12c and with the more full-featured AWR reports, I still use information I learned about v$ views from Don’s 9i Statspack book.  The idea of taking snapshots of v$ views like v$system_event and getting the difference between the values captured in two snapshots remains a critical component of Oracle database tuning.
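The snapshot-difference idea is simple enough to sketch in a few lines of shell. The counter file below is a hypothetical stand-in for a cumulative v$ counter such as TOTAL_WAITS in v$system_event, which only ever grows since instance startup.

```shell
#!/bin/sh
# Sketch of the Statspack/AWR snapshot idea: v$ views expose cumulative
# counters, so the interesting number is the difference between two
# snapshots. 'counter.txt' is a hypothetical stand-in for one counter.

echo "1000" > counter.txt      # snapshot 1: cumulative waits so far
snap1=$(cat counter.txt)

echo "1450" > counter.txt      # workload runs; the counter only grows
snap2=$(cat counter.txt)

delta=$((snap2 - snap1))
echo "waits during the interval: $delta"
```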

In addition to understanding the v$ views I learned how to add operating system metrics like CPU utilization, disk I/O latency, and network latency to the database metrics captured by Statspack.  A very cool connection occurred when I realized that Don Burleson’s approach to augmenting Statspack database v$ information with Unix operating system metrics mirrored the way Craig Shallahamer’s OSM Toolkit stores both v$ information and Unix OS metrics.  It has been a huge help to understand what the operating system tools like sar and vmstat are telling you and compare that to what Statspack is saying as you observe the changes in v$ view values over time.

Lastly, I remember learning about database internals and their impact on performance.  One example that stands out is that I learned from Don Burleson’s book that indexes are more effective if rows in the table are ordered by the indexed columns.  Later I learned more about “clustering factor” and how this value attempts to quantify how ordered the rows are relative to the indexed columns.  But, Don’s 9i book first opened my eyes to the relevance of the way the rows are sorted in the table relative to the columns in the index.  I’m sure I learned other key things but the relationship between the indexed columns and the order of rows really opened my eyes to how data organization on the disk can impact performance.

In conclusion, Don Burleson was the second major figure in my Oracle performance tuning journey.  His 9i Statspack book in particular improved my knowledge of v$ views, operating system performance metrics, and Oracle database tuning topics and internals.  Each of these three types of knowledge has proven invaluable in developing my performance tuning skills.

- Bobby

p.s. I believe this is the expanded more current AWR based version of the 9i Statspack book: Oracle Tuning: The Definitive Reference, Third Edition

Categories: DBA Blogs

RMAN – Validating Archivelog Backup

Pythian Group - Fri, 2014-03-21 13:07

There are many posts out there about validating backups. However, none seem to address some recent concerns a client of mine had.

Currently, backup validation is performed once a week, and the question asked was, “How do we validate all archivelog backups?”

List Backups - [D]atafile and [A]rchivelog backups from Incremental Level 0/1

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
43      B  A  A DISK        20-MAR-2014 21:02:30 1       1       YES        AINC0_THU20
44      B  0  A DISK        20-MAR-2014 21:02:39 1       1       YES        DINC0_THU20
45      B  0  A DISK        20-MAR-2014 21:02:51 1       1       YES        DINC0_THU20
46      B  0  A DISK        20-MAR-2014 21:02:56 1       1       YES        DINC0_THU20
47      B  0  A DISK        20-MAR-2014 21:02:59 1       1       YES        DINC0_THU20
48      B  0  A DISK        20-MAR-2014 21:03:00 1       1       YES        DINC0_THU20
49      B  0  A DISK        20-MAR-2014 21:03:04 1       1       YES        DINC0_THU20
50      B  A  A DISK        20-MAR-2014 21:03:07 1       1       YES        AINC0_THU20
51      B  F  A DISK        20-MAR-2014 21:03:11 1       1       NO         TAG20140320T210309
60      B  F  A DISK        21-MAR-2014 07:02:53 1       1       NO         TAG20140321T070249
61      B  A  A DISK        21-MAR-2014 11:27:47 1       1       YES        AINC1_FRI21
62      B  1  A DISK        21-MAR-2014 11:27:54 1       1       YES        DINC1_FRI21
63      B  1  A DISK        21-MAR-2014 11:27:55 1       1       YES        DINC1_FRI21
64      B  1  A DISK        21-MAR-2014 11:27:59 1       1       YES        DINC1_FRI21
65      B  1  A DISK        21-MAR-2014 11:28:00 1       1       YES        DINC1_FRI21
66      B  1  A DISK        21-MAR-2014 11:28:01 1       1       YES        DINC1_FRI21
67      B  1  A DISK        21-MAR-2014 11:28:06 1       1       YES        DINC1_FRI21
68      B  A  A DISK        21-MAR-2014 11:28:08 1       1       YES        AINC1_FRI21
69      B  F  A DISK        21-MAR-2014 11:28:14 1       1       NO         TAG20140321T112810

RMAN>

List Backups for Archivelog All : sequence 53 – 63  / scn  1013038 – 1048901

RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
43      2.49M      DISK        00:00:00     20-MAR-2014 21:02:30
        BP Key: 43   Status: AVAILABLE  Compressed: YES  Tag: AINC0_THU20
        Piece Name: /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0

  List of Archived Logs in backup set 43
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    53      1013038    20-MAR-2014 16:31:34 1019638    20-MAR-2014 20:58:44
  1    54      1019638    20-MAR-2014 20:58:44 1019722    20-MAR-2014 20:59:53
  1    55      1019722    20-MAR-2014 20:59:53 1019884    20-MAR-2014 21:01:19
  1    56      1019884    20-MAR-2014 21:01:19 1019955    20-MAR-2014 21:02:29

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
50      10.00K     DISK        00:00:00     20-MAR-2014 21:03:07
        BP Key: 50   Status: AVAILABLE  Compressed: YES  Tag: AINC0_THU20
        Piece Name: /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0

  List of Archived Logs in backup set 50
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    57      1019955    20-MAR-2014 21:02:29 1019981    20-MAR-2014 21:03:05

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
61      3.27M      DISK        00:00:01     21-MAR-2014 11:27:47
        BP Key: 61   Status: AVAILABLE  Compressed: YES  Tag: AINC1_FRI21
        Piece Name: /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1

  List of Archived Logs in backup set 61
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    58      1019981    20-MAR-2014 21:03:05 1020108    20-MAR-2014 21:04:05
  1    59      1020108    20-MAR-2014 21:04:05 1040311    21-MAR-2014 06:48:37
  1    60      1040311    21-MAR-2014 06:48:37 1041387    21-MAR-2014 07:01:35
  1    61      1041387    21-MAR-2014 07:01:35 1041425    21-MAR-2014 07:02:46
  1    62      1041425    21-MAR-2014 07:02:46 1048879    21-MAR-2014 11:27:44

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ --------------------
68      2.00K      DISK        00:00:00     21-MAR-2014 11:28:08
        BP Key: 68   Status: AVAILABLE  Compressed: YES  Tag: AINC1_FRI21
        Piece Name: /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1

  List of Archived Logs in backup set 68
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    63      1048879    21-MAR-2014 11:27:44 1048901    21-MAR-2014 11:28:07

RMAN>

Validating with “archivelog all” does not work, since RMAN also looks for archived logs whose backups have since been deleted.

RMAN> restore validate archivelog all;

Starting restore at 21-MAR-2014 09:45:09
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/21/2014 09:45:09
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 52 and starting SCN of 998131 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 51 and starting SCN of 998032 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 50 and starting SCN of 997986 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 49 and starting SCN of 997779 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 48 and starting SCN of 974617 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 47 and starting SCN of 974437 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 46 and starting SCN of 974348 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 45 and starting SCN of 973251 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 44 and starting SCN of 943517 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 43 and starting SCN of 942296 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 42 and starting SCN of 942262 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 41 and starting SCN of 941967 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 40 and starting SCN of 917029 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 39 and starting SCN of 916561 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 38 and starting SCN of 908363 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 37 and starting SCN of 907850 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 36 and starting SCN of 875257 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 35 and starting SCN of 837127 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 34 and starting SCN of 791810 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 33 and starting SCN of 749949 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 32 and starting SCN of 749893 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 31 and starting SCN of 749681 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 30 and starting SCN of 712625 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 29 and starting SCN of 672466 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 28 and starting SCN of 646365 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 27 and starting SCN of 616449 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 26 and starting SCN of 581487 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 25 and starting SCN of 540184 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 24 and starting SCN of 519475 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 23 and starting SCN of 494335 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 22 and starting SCN of 470043 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 21 and starting SCN of 432577 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 20 and starting SCN of 412641 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 19 and starting SCN of 353256 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 18 and starting SCN of 306546 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 17 and starting SCN of 306515 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 16 and starting SCN of 306404 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 15 and starting SCN of 306211 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 14 and starting SCN of 280433 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 13 and starting SCN of 253917 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 12 and starting SCN of 227620 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 11 and starting SCN of 227291 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 10 and starting SCN of 226756 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 9 and starting SCN of 226342 found to restore

Validating from an arbitrary date and time does not meet the requirement either.

RMAN> restore archivelog from time "TRUNC(sysdate)" until time "sysdate";

Starting restore at 21-MAR-2014 09:46:32
using channel ORA_DISK_1

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=59
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=60
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2ap3naeg_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2ap3naeg_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=61
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2hp3nagn_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2hp3nagn_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 09:46:37

RMAN> restore archivelog from time "TRUNC(sysdate-1)" until time "TRUNC(sysdate)";

Starting restore at 21-MAR-2014 09:51:21
using channel ORA_DISK_1

archived log for thread 1 with sequence 59 is already on disk as file /oradata/fra/DB01/archivelog/2014_03_21/o1_mf_1_59_9lrv79h1_.arc
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/21/2014 09:51:21
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 52 and starting SCN of 998131 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 51 and starting SCN of 998032 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 50 and starting SCN of 997986 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 49 and starting SCN of 997779 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 48 and starting SCN of 974617 found to restore

RMAN>

Validating from scn 1013038 until scn 1048901 works, but where is this information stored so that it can be automated in a shell script?

RMAN>  restore validate archivelog from scn 1013038 until scn 1048901;

Starting restore at 21-MAR-2014 11:34:02
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 11:34:06

RMAN>

The view v$backup_archivelog_summary provides the information needed.

ARROW:(SYS@db01):PRIMARY> select min_first_change#,max_next_change# from v$backup_archivelog_summary;

MIN_FIRST_CHANGE# MAX_NEXT_CHANGE#
----------------- ----------------
          1013038          1048901

ARROW:(SYS@db01):PRIMARY>

Run validate_archivelog.sh

$ ./validate_archivelog.sh

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Mar 21 11:41:44 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN>
echo set on

RMAN> connect target;
connected to target database: DB01 (DBID=1470673955)

RMAN> restore validate archivelog from scn        1013038 until scn      1048901;
Starting restore at 21-MAR-2014 11:41:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 11:41:49

RMAN> exit

Recovery Manager complete.

Script validate_archivelog.sh

#!/bin/sh
# NLS_DATE_FORMAT must be exported so the child sqlplus/rman processes inherit it
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export NLS_DATE_FORMAT
min_scn=`sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select min_first_change# from v\\$backup_archivelog_summary;
exit
END
`
if [ "$?" != "0" ]; then
  echo "*** ERROR: $min_scn"
  exit 1
fi
max_scn=$(sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select max_next_change# from v\$backup_archivelog_summary;
exit
END
)
if [ "$?" != "0" ]; then
  echo "*** ERROR: $max_scn"
  exit 1
fi
rman <<END
set echo on
connect target;
restore validate archivelog from scn ${min_scn} until scn ${max_scn};
exit
END
if [ "$?" != "0" ]; then
  echo "*** ERROR: RMAN restore validate"
  exit 1
fi
exit

UPDATE:  Based on a recent comment, the script has been improved to reduce the number of calls to the database.

Run validate_archivelog2.sh

$ ./validate_archivelog2.sh

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Mar 21 13:40:35 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN>
echo set on

RMAN> connect target;
connected to target database: DB01 (DBID=1470673955)

RMAN> restore validate archivelog from scn 1013038 until scn 1048901;
Starting restore at 21-MAR-2014 13:40:36
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_21p3m7b6_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140320_28p3m7cb_1_1.inc0 tag=AINC0_THU20
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2jp3nq1i_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1
channel ORA_DISK_1: piece handle=/oradata/backup/DB01_1470673955_20140321_2qp3nq28_1_1.inc1 tag=AINC1_FRI21
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished restore at 21-MAR-2014 13:40:40

RMAN> exit

Recovery Manager complete.

Script validate_archivelog2.sh

#!/bin/sh
# NLS_DATE_FORMAT must be exported so the child sqlplus/rman processes inherit it
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export NLS_DATE_FORMAT

cmd=`sqlplus -SL "/ as sysdba" <<END
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set pages 0 head off veri off feed off term off echo off pause off numw 32
select 'restore validate archivelog from scn '||min_first_change#||' until scn '||max_next_change# from v\\$backup_archivelog_summary;
exit
END
`
if [ "$?" != "0" ]; then
  echo "*** ERROR: $cmd"
  exit 1
fi
rman <<END
set echo on
connect target;
${cmd};
exit
END
if [ "$?" != "0" ]; then
  echo "*** ERROR: RMAN restore validate"
  exit 1
fi
exit

Categories: DBA Blogs

Partner Webcast – Oracle Endeca Information Discovery (EID) & Oracle Real-Time Decisions (RTD): Evolving IT and Customer Experience

Businesses today must have a fast, focused response when something unexpected happens, or risk losing customers and revenue. Top performers know how hard it can be to analyze information that is...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #364, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-21 07:32

For some, blogging brings the same pleasure as laying under the swaying palms, where on the white sandy beaches the turquoise waters kiss their feet. This passion oozes into their posts, and this edition of Log Buffer is a manifestation of exactly that. Enjoy.

Oracle:

DBRM for 12c Container Database in Multi-tenant Environment.

A discussion of ORA-01442 error.

Oracle Developer Diversity Realized: How to Get Started in a Career in Tech.

Simplified UI and the Oracle User Experience in Oracle Applications Cloud Release 8.

Issues with using latest release of ODM.

SQL Server:

Easily grant exec permission to all stored procedures to a database role in SQL 2005 or higher.

The PoSh DBA – Specifying and Gathering Performance Counters.

Stairway to T-SQL: Beyond The Basics Level 4: Using Views to Simplify Your Query.

Microsoft SQL Server 2014 released to manufacturing.

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

MySQL:

The MySQL 5.6 certification exams (Developer and DBA) are now GA!

A review of several MySQL developer tools.

Clustering SugarCRM with MySQL Galera for High Availability and Performance.

Buongiorno Relies on MySQL Enterprise Edition to Power its Mobile Services.

How to log slow queries on Slave in MySQL 5.0 with pt-query-digest.

Categories: DBA Blogs

MM-DD or DD-MM?

Pythian Group - Thu, 2014-03-20 10:19

4 … I repeat 4 … 4 digits that can cause mass confusion. Was that April 3 or March 4? Being a database guy, I have always had an ear for numbers. I was fascinated by those 10 digits from a very early age. A few years ago I stumbled onto one of my most enjoyable podcasts, courtesy of iTunes. It presented many examples of what it called “tricks” but which were really just alternative ways of doing things.

The traditional way of multiplying is done right to left. Picture 5*56. This is how we are used to doing it:

56
 5
--
First step:  5*6 = 30; write down the 0 and carry the 3
Next step :  5*5 + the 3 we carried = 28
Final step:  stick the 0 on the end of the 28
ANSWER       280

Let’s look at an alternative method to doing the above calculation:

56
 5
--
(5*50) + (5*6) = 250+30 = 280

Wasn’t that fun and different? Let’s try one more before moving on:

387
  8
---

(8*300) + (8*80) + (8*7) = 2400 + 640 + 56 = 3096
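The same left-to-right decomposition can be sketched as a small shell function. This is a toy illustration only, handling non-negative integers and a single-digit multiplier:

```shell
#!/bin/sh
# Left-to-right multiplication: n * d = the sum, over the digits of n,
# of (digit * place value * d), working from the highest place down.

lr_mult() {
  n=$1; d=$2; total=0
  len=${#n}
  i=0
  while [ $i -lt $len ]; do
    digit=$(echo "$n" | cut -c$((i + 1)))   # i-th digit from the left
    place=1                                  # compute its place value
    j=$((len - i - 1))
    while [ $j -gt 0 ]; do place=$((place * 10)); j=$((j - 1)); done
    total=$((total + digit * place * d))
    i=$((i + 1))
  done
  echo $total
}

echo "56*5  = $(lr_mult 56 5)"    # (5*50) + (5*6)          = 280
echo "387*8 = $(lr_mult 387 8)"   # (8*300) + (8*80) + (8*7) = 3096
```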

How is that relevant to the topic at hand? However you compute it, an arithmetic result is unambiguous: only ONE answer is right and ALL others are wrong. So why is it so complicated when numbers represent dates? Without worrying about the YYYY portion (yes, a 4-digit year; remember that mess at the millennium?), is 1201 December 1 or January 12? Is 0708 July 8 or August 7? There are only 12 days a year where this does not matter: 0101 is always January 1 and 1010 always October 10.
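One practical way out of the trap, for what it’s worth, is to standardize on ISO 8601 (YYYY-MM-DD), which is unambiguous and even sorts chronologically as plain text. A small sketch (assumes GNU date for the -d option):

```shell
#!/bin/sh
# ISO 8601 (YYYY-MM-DD) removes the MM-DD vs DD-MM ambiguity:
# 2014-12-01 can only mean December 1. It also sorts chronologically
# as plain text. Assumes GNU date for the -d option.

iso=$(date -u -d "2014-03-20" +%Y-%m-%d)
echo "$iso"

# A plain text sort puts ISO dates in chronological order:
printf '2014-12-01\n2014-01-12\n' | sort
```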

According to Wikipedia, at least one country in Europe insists on the DD-MM format, while many other countries use MM-DD. Need I say more? How could we possibly let ourselves fall into this trap? It is beyond me how we allow ourselves to introduce complexity and confusion into a very simple concept: dates. Numbers have always been fun in my lifetime. One of the first number games many of us played at a very young age was multiplying a 1-digit integer by 9 on our fingers.

Suppose each set of vertical lines below is a digit on your hand:

|  |  |  |  |      |  |  |  |  |
|  |  |  |  |      |  |  |  |  |

Let’s multiply 4*9. Start at the left and count 1, 2, 3, 4. That places us on the digit shown below with the * on top:

         *
|  |  |  |  |      |  |  |  |  |
|  |  |  |  |      |  |  |  |  |

Now fold the designated digit down, leaving:

|  |  |     |      |  |  |  |  |
|  |  |     |      |  |  |  |  |

Count the digits before the gap (3), then the digits after it (6). Lo and behold, the answer is 36.
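The trick works because 9*n = (n-1)*10 + (10-n): the digits to the left of the folded one count the tens, those to its right count the units. A quick sketch checking all nine cases:

```shell
#!/bin/sh
# The nines finger trick: fold finger n; (n-1) fingers remain to its
# left (the tens) and (10-n) to its right (the units), so
# 9*n = (n-1)*10 + (10-n).

n=1
while [ $n -le 9 ]; do
  trick=$(( (n - 1) * 10 + (10 - n) ))
  echo "9*$n = $trick"
  [ $trick -eq $((9 * n)) ] || echo "MISMATCH at n=$n"
  n=$((n + 1))
done
```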

I always took for granted that 1204 means a certain thing when expressing a calendar date, but I have learned not to take everything for granted. The next time you see a 4-digit date, think of me. Suffice it to say, once past the 12th day of the month the meaning becomes obvious. Enjoy the confusion, and maybe your life will be as simple as mine, and things as mundane as the subject of this blog post will amuse you to no end :).

Categories: DBA Blogs

Using hints with coe_xfr_sql_profile.sql

Bobby Durrett's DBA Blog - Wed, 2014-03-19 15:16

In an earlier blog post I showed how I used coe_xfr_sql_profile.sql from the sqlt toolkit from Oracle support to force a particular SQL statement to run with a more efficient plan that it had used in the past.

Today, I’m trying a slight variation.  Take the problem query, run it with hints, and use coe_xfr_sql_profile.sql to apply the resulting plan to the original query.  I built a quick and dirty test to make sure it works.

-- create test table with index and stats

SQL> create table test as select * from dba_tables;

SQL> create index testi on test(owner);

SQL> execute dbms_stats.gather_table_stats(NULL,'TEST');

-- run query unhinted

SQL> select sum(blocks) from test where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

-- show plan - uses the index

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  a0nw2xjnmjcxd, child number 0
-------------------------------------
select sum(blocks) from test where owner='SYS'

Plan hash value: 1551939256

--------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes |
--------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |
|   1 |  SORT AGGREGATE              |       |     1 |     9 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TEST  |  2122 | 19098 |
|*  3 |    INDEX RANGE SCAN          | TESTI |  2122 |       |
--------------------------------------------------------------

-- use hint to force full scan

SQL> select /*+full(test) */ sum(blocks) from test 
where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

-- get plan with full scan

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  5521zhmq67vun, child number 0
-------------------------------------
select /*+full(test) */ sum(blocks) from test 
where owner='SYS'

Plan hash value: 3467505462

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   295 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |     9 |            |
|*  2 |   TABLE ACCESS FULL| TEST |  2122 | 19098 |   295   (1)|
----------------------------------------------------------------

So, let’s pretend that sql_id=a0nw2xjnmjcxd is the real production query you are trying to tune.  Without hints it is doing an index range scan.  With a hint you run a similar query forcing a full scan.  The new query, sql_id=5521zhmq67vun has a plan with this hash value: 3467505462.  So, I used coe_xfr_sql_profile.sql to force the unhinted original query to run with the hinted query’s plan:

cd /var/opt/oracle/admin/sqlt/utl

-- login to sqlplus as system

@coe_xfr_sql_profile.sql a0nw2xjnmjcxd 3467505462

-- login to sqlplus as system

@coe_xfr_sql_profile_a0nw2xjnmjcxd_3467505462.sql

Now when I run the original query it runs with the new plan and indicates that the profile is in use:

SQL> select sum(blocks) from test where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  a0nw2xjnmjcxd, child number 0
-------------------------------------
select sum(blocks) from test where owner='SYS'

Plan hash value: 3467505462

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   295 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |     9 |            |
|*  2 |   TABLE ACCESS FULL| TEST |  2122 | 19098 |   295   (1)|
----------------------------------------------------------------

Note
-----
   - SQL profile coe_a0nw2xjnmjcxd_3467505462 used for 
     this statement

I edited the output to make it more readable.  See the zip with the full scripts and logs.

So, if you have a recurring production SQL query that runs faster with hints, just run that query once with the hints and then use coe_xfr_sql_profile.sql to force the real query to run unchanged but with the same plan as the hinted query.

- Bobby

Categories: DBA Blogs

Oracle Data Guard Switchover via DGMGRL vs. #em12c

DBASolved - Wed, 2014-03-19 13:12

When you start to look at high availability and replication of data, many people look at either Oracle Data Guard or Oracle GoldenGate.  Personally, I opt for Oracle GoldenGate; however, it is not always the best fit for smaller environments where cost is a concern.  In that case, Oracle Data Guard is a good choice and can be used with Enterprise (EE) and Standard (SE) editions of Oracle Database.  There are multiple options for Oracle Data Guard, namely Redo Apply, SQL Apply and Active Data Guard.  All of these options have their pros and cons.

Advice: I’ll say this up front; after configuring Oracle Data Guard from the command line (SQL) and from OEM 12c, I would take OEM 12c if I have the option.  Much easier, personal opinion.  I’ll write a post on this at some point.

In this post I want to cover the ways that Oracle Data Guard can switchover between Primary database and Physical Standby database from the command line and then from OEM12c. 

DGMGRL – Command Line

Let’s take a look at how to perform a switch over from the command line.

In order to perform a switchover, I will be using the Oracle Data Guard Broker (DGMGRL).  With DGMGRL, I can manage the configuration of Oracle Data Guard, and it provides an easier way to interact with the environment after it is set up.

To start working with DGMGRL, first log into the standby database server.  Then execute DGMGRL and connect to the local database (Image 1).

Image 1:
image

Before doing anything with Oracle Data Guard, it is good to check the status of the configuration.  This can be done by using the SHOW CONFIGURATION command (Image 2).

Image 2:
image

In order to switch over to the standby database, I simply need to execute the SWITCHOVER TO <standby database> command (Image 3).

Image 3:
image
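For readers who cannot make out the screenshots, the DGMGRL session up to this point amounts to something like the following (the standby name db11g2tst comes from the post; the connect credentials are placeholders, not the actual ones used):

```
$ dgmgrl
DGMGRL> connect sys/<password>
DGMGRL> show configuration;
DGMGRL> switchover to db11g2tst;
```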

Once the switchover starts, the broker tries to switch everything over to db11g2tst.  During the switchover, I get an error on shutting down the original primary database, and it disconnects me from the databases.  Upon trying to look at the configuration, I see that the configuration is in ERROR status (Image 4).

Image 4:
image

Given this configuration problem, let's try restarting the database it is complaining about and see if that clears the error.

Image 5:
image

Now that the database has been restarted in mount mode, the DGMGRL is reporting that the switchover is in progress (Image 5).  With the database bounced and placed in mount mode, the switchover was able to complete successfully (Image 6).

Image 6:
image

Although the switchover was successful, I had to intervene by rebooting the database that was becoming the new standby database.  Successful, yes, but I'm not completely happy with the process.  I thought this was supposed to be hands-free?  Let's take a look at the OEM 12c approach.

OEM12c Approach

Taking a look at how to administer Oracle Data Guard from OEM12c, I will be using the broker again; this time from within OEM 12c. 

Note: The nice thing is that when you configure a standby database with OEM12c, the broker is set up for you. :)

To access the broker items for Oracle Data Guard in OEM12c, I first have to go to the Database landing page (Targets –> Databases).  In Image 1, I'm looking for my test database (%tst%).  Once I have the databases listed, I click on the one that is the standby.

Image 1:

image

After clicking on the standby database, OEM takes me to the landing page for that standby database.  Just as with any database, I see a set of menus under the database name (Image 2). 

Image 2:image

At this point, I want to use the Availability menu to access the Data Guard Administration Page (Image 3) (Availability –> Data Guard Administration).

Image 3:
image

Before OEM will take me to the Data Guard Administration page, I have to log into the standby database.  Since the standby database is in mount mode, the only way to log in is using the SYS AS SYSDBA user.  Image 4 shows that I have already setup a Named Credential for my standby database.

Image 4:image

Once logged in to the standby database, the Data Guard Administration page provides a lot of information pertaining to the Data Guard environment (Image 5).

Image 5:

image

The important part on this page is the Standby Databases (Image 6) section.  This section provides all the information needed for the standby database.  In my example, I can quickly see the status of Data Guard, what role it is in, the last received and applied logs and the estimated time it will take to failover.

Image 6:

image

Now that I know what standby databases are available, I can quickly switch over to the selected standby by using the Switchover button (Image 7) in the Standby Databases section.

Image 7:

image

After clicking the Switchover button, OEM will ask you to log into both hosts that will partake in the switchover (not pictured).   Once logged into both hosts, a confirmation page for the switchover is provided (Image 8).  At the time of this switchover, I have the option to swap monitoring settings as well (check box).  Since I want to swing everything over, I check the box and then click the Yes button.

Image 8:

image

After clicking Yes, the switchover begins (Image 9).  I can monitor the status of the switchover from the output being displayed in OEM.

Image 9:image

Once the switchover is complete, OEM returns me to the Data Guard Administration page, where I can clearly see that everything was successful and that the new standby database is the old primary database (Image 10).

Image 10:image

Summary

I have shown two different ways of using the Data Guard Broker to perform a switchover.  Both methods are valid.  The main difference is that the OEM 12c approach took a few more steps due to the screens involved, whereas with the DGMGRL command-line option I only had to run one command to switch over.  In the end, everything switched over, and I would leave it to personal preference which approach is used.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Data Guard, Database, Replication
Categories: DBA Blogs

When RMAN Validate Creates New Files

Pythian Group - Wed, 2014-03-19 07:41

While doing some testing I found something happening with RMAN that was unexpected.

After making an RMAN backup, I would run the VALIDATE RECOVERY FILES command.

When it completed I found there were twice as many backup files as when I started.

Please note that this is Oracle 11.2.0.3 – that will be important later on.

Here is the list of current backup files:

RMAN> crosscheck backup;
 using channel ORA_DISK_1
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup RECID=112 STAMP=842454367
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup RECID=113 STAMP=842454432
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup RECID=114 STAMP=842454556
 Crosschecked 3 objects

Following are some pertinent parameters:

12:46:52 SYS@js01 AS SYSDBA show parameter db_recovery_file_dest

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest		     string	 /u01/app/oracle/fra
db_recovery_file_dest_size	     big integer 4G

12:47:00 SYS@js01 AS SYSDBA show parameter log_archive_dest_1

NAME				     TYPE	 VALUE
------------------------------------ ----------- ----------------------------------
log_archive_dest_1		     string	 LOCATION=USE_DB_RECOVERY_FILE_DEST

Now see what happens when VALIDATE RECOVERY FILES is run.
Listings may be edited for brevity.

RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any datafile copy in the repository
channel ORA_DISK_1: starting validation of archived log
channel ORA_DISK_1: specifying archived log(s) for validation
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1232 RECID=601 STAMP=842531265
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Archived Logs
=====================
Thrd Seq     Status Blocks Failing Blocks Examined Name
---- ------- ------ -------------- --------------- ---------------
1    1212    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc
1    1213    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc
...
1    1232    OK     0              13              /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1232_9lk7kkvh_.arc
1    1233    OK     0              1               /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc
channel ORA_DISK_1: input backup set: count=114, stamp=842454366, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:35
channel ORA_DISK_1: input backup set: count=115, stamp=842454431, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=117, stamp=842454556, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

Notice that for each existing backup file an exact copy was made.
This was verified by using md5sum to compare the file check sums.

== as shown by md5sum, these are exact duplicates

[oracle@dev ]$ md5sum /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
974bb354db9eb49770991334c891add5  /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup

[oracle@dev ]$ md5sum /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp
974bb354db9eb49770991334c891add5  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp

It then occurred to me that maybe this behavior was somehow due to creating backups outside the FRA, and Oracle for some reason wanted a copy of each file in the FRA. If so, this would probably be a bug, but I thought it interesting enough to run a test.

The following shows that all previous backups were removed and new ones created, as well as the space consumed in the FRA.

== Delete all backups, and create backups in FRA only

RMAN> list backup;
specification does not match any backup in the repository

RMAN> crosscheck backup;
using channel ORA_DISK_1
specification does not match any backup in the repository

====== create new backups in FRA

RMAN> backup database;

Starting backup at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/u01/oradata/JS01/datafile/o1_mf_users_8g69rzg7_.dbf
input datafile file number=00003 name=/u01/oradata/JS01/datafile/o1_mf_undotbs1_8g69rgd1_.dbf
input datafile file number=00002 name=/u01/oradata/JS01/datafile/o1_mf_sysaux_8g69qxt0_.dbf
input datafile file number=00001 name=/u01/oradata/JS01/datafile/o1_mf_system_8g69qb0g_.dbf
input datafile file number=00005 name=/u01/oradata/JS01/datafile/o1_mf_atg_data_8hk7kc7f_.dbf
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-MAR-14

RMAN> backup archivelog all delete input;

Starting backup at 18-MAR-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
input archived log thread=1 sequence=1234 RECID=603 STAMP=842532824
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp tag=TAG20140318T125344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc RECID=581 STAMP=842454820
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc RECID=582 STAMP=842454821
...
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc RECID=602 STAMP=842531265
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1234_9lk928kg_.arc RECID=603 STAMP=842532824
Finished backup at 18-MAR-14

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
Crosschecked 3 objects

12:54:40 SYS@js01 AS SYSDBA @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      35.24                         0               3
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.
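The @fra script itself is not shown in the post; a minimal stand-in that queries v$flash_recovery_area_usage and produces the same columns would look like this (my assumption, not necessarily the author's exact script):

```sql
-- assumed contents of fra.sql (not shown in the original post)
select file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
from   v$flash_recovery_area_usage;
```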

Again there are three backup files, this time in the FRA. The files are using 35% of the FRA space.

Let’s run another VALIDATE RECOVERY FILES and find out what happens.


RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
channel ORA_DISK_1: input backup set: count=140, stamp=842532782, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:15
channel ORA_DISK_1: input backup set: count=141, stamp=842532808, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=142, stamp=842532824, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp RECID=148 STAMP=842532917
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp RECID=149 STAMP=842532932
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp RECID=150 STAMP=842532933
Crosschecked 6 objects

12:54:41 SYS@js01 AS SYSDBA @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      70.47                     35.24               6
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

That is pretty clear: there are duplicates of each file. This is also shown by the FRA now being 70% consumed by backup pieces, whereas previously only 35% of the FRA was used.

This seems like a bug, and a brief search of My Oracle Support finds this relevant document:

Bug 14248496 RMAN ‘validate recovery files’ creates a piece copy for every execution

This fits the situation pretty well, and the version of this database, 11.2.0.3, is one of the affected versions.
As per the doc, this bug is fixed in 11.2.0.4.

The next step, of course, is to try this same operation in 11.2.0.4.
This is also a Linux database running on Linux 6; the only difference is that the database version is 11.2.0.4.

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup RECID=1 STAMP=842547637
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup RECID=2 STAMP=842547732
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup RECID=3 STAMP=842547838
Crosschecked 3 objects

RMAN> validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
skipping backup sets; RECOVERY FILES, RECOVERY AREA or DB_RECOVERY_FILE_DEST option cannot validate backup set
Finished validate at 18-MAR-14

That wasn’t exactly promising: the VALIDATE RECOVERY FILES command now just exits with a message that backup sets cannot be validated with this command.
Apparently ‘fixing’ the bug was just a matter of disabling this bit of functionality.
This is at odds with the Oracle 11g documentation for RMAN VALIDATE, which says in the section “RECOVERY FILES”:

Validates all recovery files on disk, whether they are stored in the fast recovery area or other locations on disk. Recovery files include full and incremental backup sets, control file autobackups, archived redo log files, and data file copies. Flashback logs are not validated.

The Oracle 12c Documentation for 12c RMAN VALIDATE says the same thing, that is that backup sets are included in the files to be validated.

Clearly the intent seems to have been for this to work with VALIDATE RECOVERY FILES, but for some reason the fix was simply to disable the functionality.

So, what can you use instead?

Now the VALIDATE BACKUPSET command must be used to validate the backups. This is not nearly as convenient as simply issuing the VALIDATE RECOVERY FILES command, as VALIDATE BACKUPSET takes a mandatory argument, which is the primary key of the backup set.

The documentation recommends using the LIST BACKUPSET command, but this is rather inconvenient, as the keys must be parsed from the report text, as seen here:

RMAN> list backupset;

List of Backup Sets
==================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1       Full    9.36M      DISK        00:00:04     18-MAR-14
        BP Key: 1   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170034
        Piece Name: /u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014016      Ckp time: 18-MAR-14

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.07G      DISK        00:01:36     18-MAR-14
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup
  List of Datafiles in backup set 2
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/system01.dbf
  2       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/sysaux01.dbf
  3       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/undotbs01.dbf
  4       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/users01.dbf
  5       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/example01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3       Full    9.36M      DISK        00:00:02     18-MAR-14
        BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014639      Ckp time: 18-MAR-14

This is fine for manually validating just a few files, but is really not a workable solution for programmatically validating backup sets. Fortunately there is a better method – just use the v$backup_set_details view.

  1  select session_key, session_recid, session_stamp, bs_key, recid
  2  from v$backup_set_details
  3* order by session_key
15:58:37 dev.jks.com - jkstill@js01 SQL /

SESSION_KEY SESSION_RECID SESSION_STAMP     BS_KEY	RECID
----------- ------------- ------------- ---------- ----------
	469	      469     842532214        106	  106
	469	      469     842532214        107	  107
	469	      469     842532214        105	  105

3 rows selected.

RMAN> validate backupset 105;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:07
Finished validate at 18-MAR-14

-- the same was done for BS_KEY values 106 and 107
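With the keys coming from v$backup_set_details, generating the VALIDATE BACKUPSET commands can be scripted rather than typed one at a time. A minimal sketch (the key list is hard-coded here; in practice it would be spooled from the query above, and the file path is my choice):

```shell
# Hypothetical helper: write an RMAN command file that validates each
# backup set key. In a real run the keys would be spooled from
# v$backup_set_details rather than hard-coded.
keys="105 106 107"
cmdfile=/tmp/validate_bs.rman
for k in $keys; do
  echo "validate backupset $k;"
done > "$cmdfile"
cat "$cmdfile"
# the file would then be run as: rman target / cmdfile=/tmp/validate_bs.rman
```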

It is usually a good idea to investigate when something is found to work differently than expected.
It was certainly beneficial in this case, as I was developing code on 11.2.0.3 that would later run on 11.2.0.4.
While that bit of code would not work as expected on 11.2.0.4, it would also not cause an error, and probably not be noticed until it caused a recovery problem.

Using VALIDATE BACKUPSET is a workable solution, but not nearly as convenient as using VALIDATE RECOVERY FILES.
Perhaps there will be a fix for it in future releases.

Categories: DBA Blogs

Use DCLI to check #em12c agents on Exadata – all at once

DBASolved - Tue, 2014-03-18 21:08

For all the benefits of Oracle Enterprise Manager 12c (OEM), there is one drawback when using OEM with Real Application Clusters or Exadata: checking the status of the EM Agent on each node.  When a node is bounced, sometimes the EM Agent does not restart.  When this happens on a single-instance box or a RAC, you normally have to check each node individually and restart the agent.  With Exadata, Oracle has made this check easier by providing the Distributed Command Line Interface (DCLI).  Using DCLI, an administrator can run commands across all the nodes of the Exadata.

Note: DCLI is a shell script that allows commands to be run on multiple machines with a single invocation.

The example in this post checks the Enterprise Manager Agents after rebooting the compute (db) nodes of an Exadata.

After rebooting each of the nodes, I needed to validate that the EM Agents had restarted successfully.  The agents were installed on every node under the same directory structure (/u01/app/oracle/product/agent12c), which makes using DCLI a breeze.  Knowing where the agents are installed, I next need a file that lists all the compute (db) nodes in the Exadata; this file is named dbs_group.  The last thing I need is a login for each of the nodes; on all Exadata systems, the oracle user can be used to log in to each node.
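Creating dbs_group is just a matter of listing one compute-node hostname per line. A minimal sketch (the hostnames below are the masked names used in the output later in this post; substitute your own, and note GROUP_FILE is just a variable for this example, defaulting to the ~/dbs_group location DCLI is given below):

```shell
#!/bin/sh
# Write the group file that DCLI reads: one compute-node hostname per line.
GROUP_FILE="${GROUP_FILE:-$HOME/dbs_group}"
cat > "$GROUP_FILE" <<'EOF'
e###db01
e###db02
e###db03
e###db04
EOF

# Quick sanity check: the entry count should match the number of db nodes.
wc -l < "$GROUP_FILE"
```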

With that information in hand, DCLI needs just two command-line switches: -l (login) and -g (group file).

The command that will be used to check all the EM Agents on the Exadata is:


dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent


Once I run the DCLI command, I get output from the EM Agents on every node of the Exadata.

[e###db01:oracle:F####1] /home/oracle
> dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent
e###db01: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db01: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db01: ---------------------------------------------------------------
e###db01: Agent Version     : 12.1.0.3.0
e###db01: OMS Version       : 12.1.0.3.0
e###db01: Protocol Version  : 12.1.0.1.0
e###db01: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db01: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db01: Agent Process ID  : 6638
e###db01: Parent Process ID : 6307
e###db01: Agent URL         :
https://e###db01.acme.com:3873/emd/main/
e###db01: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db01: Started at        : 2014-03-16 02:29:10
e###db01: Started by user   : oracle
e###db01: Last Reload       : (none)
e###db01: Last successful upload                       : 2014-03-17 10:15:41
e###db01: Last attempted upload                        : 2014-03-17 10:15:41
e###db01: Total Megabytes of XML files uploaded so far : 3.84
e###db01: Number of XML files pending upload           : 0
e###db01: Size of XML files pending upload(MB)         : 0
e###db01: Available disk space on upload filesystem    : 29.98%
e###db01: Collection Status                            : Collections enabled
e###db01: Heartbeat Status                             : Ok
e###db01: Last attempted heartbeat to OMS              : 2014-03-17 10:16:56
e###db01: Last successful heartbeat to OMS             : 2014-03-17 10:16:56
e###db01: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:56
e###db01:
e###db01: ---------------------------------------------------------------

e###db01: Agent is Running and Ready
e###db02: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db02: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db02: ---------------------------------------------------------------
e###db02: Agent Version     : 12.1.0.3.0
e###db02: OMS Version       : 12.1.0.3.0
e###db02: Protocol Version  : 12.1.0.1.0
e###db02: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db02: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db02: Agent Process ID  : 28588
e###db02: Parent Process ID : 28478
e###db02: Agent URL         :
https://e###db02.acme.com:3873/emd/main/
e###db02: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db02: Started at        : 2014-03-16 02:24:59
e###db02: Started by user   : oracle
e###db02: Last Reload       : (none)
e###db02: Last successful upload                       : 2014-03-17 10:15:40
e###db02: Last attempted upload                        : 2014-03-17 10:15:40
e###db02: Total Megabytes of XML files uploaded so far : 3.18
e###db02: Number of XML files pending upload           : 0
e###db02: Size of XML files pending upload(MB)         : 0
e###db02: Available disk space on upload filesystem    : 38.92%
e###db02: Collection Status                            : Collections enabled
e###db02: Heartbeat Status                             : Ok
e###db02: Last attempted heartbeat to OMS              : 2014-03-17 10:16:59
e###db02: Last successful heartbeat to OMS             : 2014-03-17 10:16:59
e###db02: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:59
e###db02:
e###db02: ---------------------------------------------------------------

e###db02: Agent is Running and Ready
e###db03: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db03: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db03: ---------------------------------------------------------------
e###db03: Agent Version     : 12.1.0.3.0
e###db03: OMS Version       : 12.1.0.3.0
e###db03: Protocol Version  : 12.1.0.1.0
e###db03: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db03: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db03: Agent Process ID  : 5834
e###db03: Parent Process ID : 5733
e###db03: Agent URL         :
https://e###db03.acme.com:3873/emd/main/
e###db03: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db03: Started at        : 2014-03-16 02:42:35
e###db03: Started by user   : oracle
e###db03: Last Reload       : (none)
e###db03: Last successful upload                       : 2014-03-17 10:12:39
e###db03: Last attempted upload                        : 2014-03-17 10:12:39
e###db03: Total Megabytes of XML files uploaded so far : 8.54
e###db03: Number of XML files pending upload           : 0
e###db03: Size of XML files pending upload(MB)         : 0
e###db03: Available disk space on upload filesystem    : 39.97%
e###db03: Collection Status                            : Collections enabled
e###db03: Heartbeat Status                             : Ok
e###db03: Last attempted heartbeat to OMS              : 2014-03-17 10:16:38
e###db03: Last successful heartbeat to OMS             : 2014-03-17 10:16:38
e###db03: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:38
e###db03:
e###db03: ---------------------------------------------------------------

e###db03: Agent is Running and Ready
e###db04: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db04: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db04: ---------------------------------------------------------------
e###db04: Agent Version     : 12.1.0.3.0
e###db04: OMS Version       : 12.1.0.3.0
e###db04: Protocol Version  : 12.1.0.1.0
e###db04: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db04: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db04: Agent Process ID  : 5853
e###db04: Parent Process ID : 5734
e###db04: Agent URL         :
https://e###db04.acme.com:3873/emd/main/
e###db04: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db04: Started at        : 2014-03-16 02:53:08
e###db04: Started by user   : oracle
e###db04: Last Reload       : (none)
e###db04: Last successful upload                       : 2014-03-17 10:16:33
e###db04: Last attempted upload                        : 2014-03-17 10:16:33
e###db04: Total Megabytes of XML files uploaded so far : 2.81
e###db04: Number of XML files pending upload           : 0
e###db04: Size of XML files pending upload(MB)         : 0
e###db04: Available disk space on upload filesystem    : 45.75%
e###db04: Collection Status                            : Collections enabled
e###db04: Heartbeat Status                             : Ok
e###db04: Last attempted heartbeat to OMS              : 2014-03-17 10:16:50
e###db04: Last successful heartbeat to OMS             : 2014-03-17 10:16:50
e###db04: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:50
e###db04:
e###db04: ---------------------------------------------------------------

e###db04: Agent is Running and Ready
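With four nodes, the full emctl dump gets long. Piping the dcli output through grep trims it to one status line per node (a sketch; filter_agent_status is a hypothetical helper name, and "Agent is Running and Ready" is the summary line emctl prints for a healthy agent):

```shell
#!/bin/sh
# Keep only the per-node summary line from the dcli output. Intended use:
#   dcli -l oracle -g ~/dbs_group \
#     /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent \
#     | filter_agent_status
filter_agent_status() {
  grep 'Agent is'
}

# Applied to a captured two-line sample of the dcli output:
printf 'e###db01: Heartbeat Status : Ok\ne###db01: Agent is Running and Ready\n' \
  | filter_agent_status
```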


From the output, I can clearly see that all the EM Agents are monitoring the Exadata and are running.  I really like DCLI; it makes life simple, especially when checking multiple agents within an environment.  It makes me wonder: can I put DCLI on a traditional RAC system?
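On that last thought: dcli itself ships with Exadata, but its behavior is easy to approximate on a traditional RAC with a plain ssh loop over the same kind of group file. A sketch (run_on_nodes is a hypothetical helper; it assumes passwordless ssh as oracle, and the RUN variable exists only so the loop can be exercised without real hosts):

```shell
#!/bin/sh
# Run a command on every node listed in a group file, prefixing each output
# line with the node name the way dcli does. RUN defaults to ssh (-n keeps
# ssh from swallowing the group file on stdin); it can be pointed at another
# runner for testing.
RUN="${RUN:-ssh -n}"

run_on_nodes() {
  group_file=$1; shift
  while read -r node; do
    $RUN "$node" "$@" 2>&1 | sed "s/^/$node: /"
  done < "$group_file"
}

# Usage (assumes passwordless ssh as oracle):
# run_on_nodes ~/dbs_group \
#   /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent
```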

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Exadata, OEM
Categories: DBA Blogs