
DBA Blogs

Log Buffer #364, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-21 07:32

For some, blogging brings the same pleasure as lying under the swaying palms, where on the white sandy beaches the turquoise waters kiss their feet. This passion oozes into their posts, and this edition of Log Buffer is a manifestation of exactly that. Enjoy.

Oracle:

DBRM for 12c Container Database in Multi-tenant Environment.

A discussion of ORA-01442 error.

Oracle Developer Diversity Realized: How to Get Started in a Career in Tech.

Simplified UI and the Oracle User Experience in Oracle Applications Cloud Release 8.

Issues with using latest release of ODM.

SQL Server:

Easily grant exec permission to all stored procedures to a database role in SQL 2005 or higher.

The PoSh DBA – Specifying and Gathering Performance Counters.

Stairway to T-SQL: Beyond The Basics Level 4: Using Views to Simplify Your Query.

Microsoft SQL Server 2014 released to manufacturing.

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

MySQL:

The MySQL 5.6 certification exams (Developer and DBA) are now GA!

A review of several MySQL developer tools.

Clustering SugarCRM with MySQL Galera for High Availability and Performance.

Buongiorno Relies on MySQL Enterprise Edition to Power its Mobile Services.

How to log slow queries on Slave in MySQL 5.0 with pt-query-digest.

Categories: DBA Blogs

MM-DD or DD-MM?

Pythian Group - Thu, 2014-03-20 10:19

4 … I repeat 4 … 4 digits that can cause mass confusion. Was that April 3 or March 4? Being a database guy, I have always had an ear for numbers. I was fascinated by those 10 digits from a very early age. A few years ago I stumbled into one of my most enjoyable podcasts courtesy of iTunes. It presented many examples of what it called “tricks”, which were really just another way of doing things.

The traditional way of multiplying is done right to left. Picture 5*56. This is how we are used to doing it:

56
 5
--
First step:  5*6 = 30; write down the 0 and carry the 3
Next step :  5*5 + the 3 we carried = 28
Final step:  stick the 0 on the end of the 28
ANSWER       280

Let’s look at an alternative method to doing the above calculation:

56
 5
--
(5*50) + (5*6) = 250+30 = 280

Wasn’t that fun and different? Let’s try one more before moving on:

387
  8
---

(8*300) + (8*80) + (8*7) = 2400 + 640 + 56 = 3096

How is that relevant to the topic at hand? With plain arithmetic there may be more than one way to work a problem, but every method lands on the same, single right answer. So why is it so complicated when numbers represent dates? Without worrying about the YYYY portion (yes, 4 digit year –> remember that mess at the millennium?), is 1201 December 1 or January 12? Is 0708 July 8 or August 7? There are only 12 days a year where this does not matter: 0101 is always January 1 and 1010 always October 10.
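Being database folks, the ambiguity is easy to demonstrate in SQL; the format mask decides everything. A quick sketch using Oracle's TO_DATE:

select to_date('0708','MMDD') from dual;  -- interpreted as July 8
select to_date('0708','DDMM') from dual;  -- interpreted as August 7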

According to Wikipedia, at least one country in Europe insists on the DD-MM format, while many other countries use MM-DD. Need I say more? How could we possibly let ourselves fall into this trap? It is beyond me how we allow ourselves to introduce complexities and confusion into a very simple concept … dates. Numbers have always been fun in my lifetime. One of the first number games we all may have played at a very young age was multiplying a 1 digit integer by 9s on our fingers.

Suppose each set of vertical lines below is a digit on your hand:

|  |  |  |  |      |  |  |  |  |
|  |  |  |  |      |  |  |  |  |

Let’s multiply 4*9. Start at the left and count 1, 2, 3, 4. That places us on the digit shown below with the * on top:

         *
|  |  |  |  |      |  |  |  |  |
|  |  |  |  |      |  |  |  |  |

Now put the designated digit down, leaving:

|  |  |     |      |  |  |  |  |
|  |  |     |      |  |  |  |  |

Count the digits before the space (3), then the digits after (6). Lo and behold, the answer is 36. (The trick works because 9*n = 10*(n-1) + (10-n).)

I always took for granted that 1204 means a certain thing when expressing a calendar date, but I have learned not to take everything for granted. The next time you see a 4 digit date, think of me. Suffice it to say, once past the 12th day of the month this becomes obvious. Enjoy the confusion, and maybe your life will be as simple as mine, and things as mundane as the subject of this blog post will amuse you to no end :).

Categories: DBA Blogs

Using hints with coe_xfr_sql_profile.sql

Bobby Durrett's DBA Blog - Wed, 2014-03-19 15:16

In an earlier blog post I showed how I used coe_xfr_sql_profile.sql from the sqlt toolkit from Oracle support to force a particular sql statement to run with a more efficient plan that it had used in the past.

Today, I’m trying a slight variation.  Take the problem query, run it with hints, and use coe_xfr_sql_profile.sql to apply the resulting plan to the original query.  I built a quick and dirty test to make sure it works.

-- create test table with index and stats

SQL> create table test as select * from dba_tables;

SQL> create index testi on test(owner);

SQL> execute dbms_stats.gather_table_stats(NULL,'TEST');

-- run query unhinted

SQL> select sum(blocks) from test where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

-- show plan - uses the index

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  a0nw2xjnmjcxd, child number 0
-------------------------------------
select sum(blocks) from test where owner='SYS'

Plan hash value: 1551939256

--------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes |
--------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |
|   1 |  SORT AGGREGATE              |       |     1 |     9 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TEST  |  2122 | 19098 |
|*  3 |    INDEX RANGE SCAN          | TESTI |  2122 |       |
--------------------------------------------------------------

-- use hint to force full scan

SQL> select /*+full(test) */ sum(blocks) from test 
where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

-- get plan with full scan

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  5521zhmq67vun, child number 0
-------------------------------------
select /*+full(test) */ sum(blocks) from test 
where owner='SYS'

Plan hash value: 3467505462

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   295 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |     9 |            |
|*  2 |   TABLE ACCESS FULL| TEST |  2122 | 19098 |   295   (1)|
----------------------------------------------------------------

So, let’s pretend that sql_id=a0nw2xjnmjcxd is the real production query you are trying to tune.  Without hints it is doing an index range scan.  With a hint you run a similar query forcing a full scan.  The new query, sql_id=5521zhmq67vun has a plan with this hash value: 3467505462.  So, I used coe_xfr_sql_profile.sql to force the unhinted original query to run with the hinted query’s plan:

cd /var/opt/oracle/admin/sqlt/utl

-- login to sqlplus as system

@coe_xfr_sql_profile.sql a0nw2xjnmjcxd 3467505462

-- login to sqlplus as system

@coe_xfr_sql_profile_a0nw2xjnmjcxd_3467505462.sql

Now when I run the original query it runs with the new plan and indicates that the profile is in use:

SQL> select sum(blocks) from test where owner='SYS';

SUM(BLOCKS)
-----------
    1042402

SQL> select * from 
table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  a0nw2xjnmjcxd, child number 0
-------------------------------------
select sum(blocks) from test where owner='SYS'

Plan hash value: 3467505462

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   295 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |     9 |            |
|*  2 |   TABLE ACCESS FULL| TEST |  2122 | 19098 |   295   (1)|
----------------------------------------------------------------

Note
-----
   - SQL profile coe_a0nw2xjnmjcxd_3467505462 used for 
     this statement

I edited the output to make it more readable.  See the zip with the full scripts and logs.

So, if you have a recurring production SQL query that runs faster with hints, just run that query once with the hints and then use coe_xfr_sql_profile.sql to force the real query to run unchanged, but with the same plan as the hinted query.

- Bobby


Categories: DBA Blogs

Oracle Data Guard Switchover via DGMGRL vs. #em12c

DBASolved - Wed, 2014-03-19 13:12

When you start to look at high availability and replication of data, many people look at either Oracle Data Guard or Oracle GoldenGate.  Personally, I opt for Oracle GoldenGate; however, it is not always the best fit for smaller environments where cost is a concern.  When cost is a concern, Oracle Data Guard is a good choice and can be used with Enterprise (EE) and Standard (SE) editions of Oracle Database.  There are multiple options for Oracle Data Guard, i.e. Redo Apply, SQL Apply and Active Data Guard.  All these options have their pros and cons.

Advice: I’ll say this up front; after configuring Oracle Data Guard from the command line (SQL) and from OEM 12c, I would take OEM 12c if I have the option.  Much easier, personal opinion.  I’ll write a post on this at some point.

In this post I want to cover the ways that Oracle Data Guard can switchover between Primary database and Physical Standby database from the command line and then from OEM12c. 

DGMRL – Command Line

Let’s take a look at how to perform a switch over from the command line.

In order to perform a switchover, I will be using the Oracle Data Guard Broker (DGMGRL).  With DGMGRL, I can manage the configuration of Oracle Data Guard, and it provides an easier way to interact with the environment after it is set up.

To start working with DGMGRL, first log into the standby database server.  Then execute DGMGRL and connect to the local database (Image 1).

Image 1: (screenshot)

Before doing anything with Oracle Data Guard, it is good to check the status of the configuration.  This can be done by using the SHOW CONFIGURATION command (Image 2).

Image 2: (screenshot)

In order to switch over to the standby database, I simply need to execute the SWITCHOVER TO <standby database> command (Image 3).

Image 3: (screenshot)
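In case the screenshots do not come through, the whole exchange is only a few commands. A sketch, where the password is a placeholder and db11g2tst is the standby from my environment:

$ dgmgrl sys/password
DGMGRL> show configuration;
DGMGRL> switchover to db11g2tst;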

Once the switchover starts, the broker tries to switch everything over to db11g2tst.  During the switchover, I get an error on shutting down the original primary database, and it disconnects me from the databases.  Upon trying to look at the configuration, I see that the configuration is in ERROR status (Image 4).

Image 4: (screenshot)

Looking at this configuration problem, let’s try restarting the database it is complaining about and see if that clears the error.

Image 5: (screenshot)

Now that the database has been restarted in mount mode, DGMGRL reports that the switchover is in progress (Image 5).  With the database bounced and placed in mount mode, the switchover was able to complete successfully (Image 6).

Image 6: (screenshot)

Although the switchover was successful, I had to intervene by rebooting the database that was becoming the new standby database.  Successful, yes, but I’m not completely happy with the process.  I thought this was supposed to be hands-free?  Let’s take a look at the OEM 12c approach.

OEM12c Approach

Taking a look at how to administer Oracle Data Guard from OEM12c, I will be using the broker again; this time from within OEM 12c. 

Note: The nice thing is that when you configure a standby database with OEM12c, the broker is set up for you.

To access the broker items for Oracle Data Guard in OEM12c, I first have to go to the Database landing page (Targets –> Databases).  In Image 1, I’m looking for my test database (%tst%).  Once I have them listed, I need to click on the database that is the standby.

Image 1: (screenshot)

After clicking on the standby database, OEM takes me to the landing page for that standby database.  Just as with any database, I see a set of menus under the database name (Image 2). 

Image 2: (screenshot)

At this point, I want to use the Availability menu to access the Data Guard Administration Page (Image 3) (Availability –> Data Guard Administration).

Image 3: (screenshot)

Before OEM will take me to the Data Guard Administration page, I have to log into the standby database.  Since the standby database is in mount mode, the only way to log in is using the SYS AS SYSDBA user.  Image 4 shows that I have already setup a Named Credential for my standby database.

Image 4: (screenshot)

Once logged in to the standby database, the Data Guard Administration page provides you with a lot of information pertaining to the Data Guard environment (Image 5).

Image 5: (screenshot)

The important part of this page is the Standby Databases section (Image 6).  This section provides all the information needed for the standby database.  In my example, I can quickly see the status of Data Guard, what role it is in, the last received and applied logs, and the estimated time it will take to fail over.

Image 6: (screenshot)

Now that I know what standby databases are available, I can quickly switch over to the selected standby by using the Switchover button (Image 7) in the Standby Databases section.

Image 7: (screenshot)

After clicking the Switchover button, OEM will ask you to log into both hosts that will partake in the switchover (not pictured).   Once logged into both hosts, a confirmation page for switching over is provided (Image 8).  At the time of this switchover, I have the option to swap monitoring settings as well (check box).  Since I want to swing everything over, I clicked the check box and then clicked the Yes button.

Image 8: (screenshot)

After clicking Yes, the switchover begins (Image 9).  I can monitor the status of the switchover from the output being displayed in OEM.

Image 9: (screenshot)

Once the switchover is complete, OEM returns me to the Data Guard Administration page, where I can clearly see that everything was successful and that the new standby database is the old primary database (Image 10).

Image 10: (screenshot)

Summary

I have shown two different ways of using the Data Guard Broker to perform a switchover.  Both methods are valid.  The main difference in the approaches is that the OEM 12c approach took a few more steps due to the screens involved, whereas with the DGMGRL command-line option I only had to run one command to switch over.  In the end, everything switched over, and I would leave it up to personal preference which approach is used.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Data Guard, Database, Replication
Categories: DBA Blogs

When RMAN Validate Creates New Files

Pythian Group - Wed, 2014-03-19 07:41

While doing some testing I found something happening with RMAN that was unexpected.

After making an RMAN backup, I would run the VALIDATE RECOVERY FILES command.

When it completed I found there were twice as many backup files as when I started.

Please note that this is Oracle 11.2.0.3 – that will be important later on.

Here is the list of current backup files:

RMAN crosscheck backup;
 using channel ORA_DISK_1
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup RECID=112 STAMP=842454367
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup RECID=113 STAMP=842454432
 crosschecked backup piece: found to be 'AVAILABLE'
 backup piece handle=/u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup RECID=114 STAMP=842454556
 Crosschecked 3 objects

Following are some pertinent parameters:

12:46:52 SYS@js01 AS SYSDBA show parameter db_recovery_file_dest

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest		     string	 /u01/app/oracle/fra
db_recovery_file_dest_size	     big integer 4G

12:47:00 SYS@js01 AS SYSDBA show parameter log_archive_dest_1

NAME				     TYPE	 VALUE
------------------------------------ ----------- ----------------------------------
log_archive_dest_1		     string	 LOCATION=USE_DB_RECOVERY_FILE_DEST

Now see what happens when VALIDATE RECOVERY FILES is run.
Listings may be edited for brevity.

RMAN validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any datafile copy in the repository
channel ORA_DISK_1: starting validation of archived log
channel ORA_DISK_1: specifying archived log(s) for validation
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1232 RECID=601 STAMP=842531265
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Archived Logs
=====================
Thrd Seq     Status Blocks Failing Blocks Examined Name
---- ------- ------ -------------- --------------- ---------------
1    1212    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc
1    1213    OK     0              97494           /u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc
...
1    1232    OK     0              13              /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1232_9lk7kkvh_.arc
1    1233    OK     0              1               /u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc
channel ORA_DISK_1: input backup set: count=114, stamp=842454366, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:35
channel ORA_DISK_1: input backup set: count=115, stamp=842454431, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=117, stamp=842454556, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

Notice that for each existing backup file an exact copy was made.
This was verified by using md5sum to compare the file checksums.

== as shown by md5sum, these are exact duplicates

[oracle@dev ]$ md5sum /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/rman/orcl-rman-db-3ip3dlau_1_1.bkup
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/rman/orcl-rman-db-3jp3dlcv_1_1.bkup
974bb354db9eb49770991334c891add5  /u01/app/oracle/rman/orcl-rman-arch-3lp3dlgs_1_1.bkup

[oracle@dev ]$ md5sum /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp
21b1c12d47216ce8ac2413e8c7e3fc6e  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140317T150606_9lk8nfr3_.bkp
7524091d41785c793ff7f3f504b76082  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140317T150606_9lk8ojtw_.bkp
974bb354db9eb49770991334c891add5  /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140317T150915_9lk8okwy_.bkp

It then occurred to me that maybe this behavior was due to creating backups outside the FRA, and Oracle for some reason wanted a copy of each file in the FRA. If so, this would probably be a bug, but I thought it interesting enough to run a test.

The following shows that all previous backups were removed, new ones created, as well as space consumed in the FRA.

== Delete all backups, and create backups in FRA only

RMAN list backup;
specification does not match any backup in the repository

RMAN crosscheck backup;
using channel ORA_DISK_1
specification does not match any backup in the repository

====== create new backups in FRA

RMAN backup database;

Starting backup at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/u01/oradata/JS01/datafile/o1_mf_users_8g69rzg7_.dbf
input datafile file number=00003 name=/u01/oradata/JS01/datafile/o1_mf_undotbs1_8g69rgd1_.dbf
input datafile file number=00002 name=/u01/oradata/JS01/datafile/o1_mf_sysaux_8g69qxt0_.dbf
input datafile file number=00001 name=/u01/oradata/JS01/datafile/o1_mf_system_8g69qb0g_.dbf
input datafile file number=00005 name=/u01/oradata/JS01/datafile/o1_mf_atg_data_8hk7kc7f_.dbf
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp tag=TAG20140318T125302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-MAR-14

RMAN backup archivelog all delete input;

Starting backup at 18-MAR-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1212 RECID=581 STAMP=842454820
input archived log thread=1 sequence=1213 RECID=582 STAMP=842454821
...
input archived log thread=1 sequence=1233 RECID=602 STAMP=842531265
input archived log thread=1 sequence=1234 RECID=603 STAMP=842532824
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp tag=TAG20140318T125344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1212_9lgwwng0_.arc RECID=581 STAMP=842454820
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_17/o1_mf_1_1213_9lgwwnqx_.arc RECID=582 STAMP=842454821
...
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1233_9lk7kkww_.arc RECID=602 STAMP=842531265
archived log file name=/u01/app/oracle/fra/JS01/archivelog/2014_03_18/o1_mf_1_1234_9lk928kg_.arc RECID=603 STAMP=842532824
Finished backup at 18-MAR-14

RMAN crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
Crosschecked 3 objects

12:54:40 SYS@js01 AS SYSDBA @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      35.24                         0               3
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.
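The fra script itself is not included here; judging by the column headings, it is essentially a query against V$RECOVERY_AREA_USAGE, along these lines:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$recovery_area_usage;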

Again there are three backup files, this time in the FRA. The files are using 35% of the FRA space.

Let’s run another VALIDATE RECOVERY FILES and find out what happens.


RMAN validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
channel ORA_DISK_1: input backup set: count=140, stamp=842532782, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:15
channel ORA_DISK_1: input backup set: count=141, stamp=842532808, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
channel ORA_DISK_1: input backup set: count=142, stamp=842532824, piece=1
channel ORA_DISK_1: starting piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp
piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp comment=NONE
channel ORA_DISK_1: finished piece 1 at 18-MAR-14
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:03
Finished validate at 18-MAR-14

RMAN crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp RECID=145 STAMP=842532783
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk955rv_.bkp RECID=148 STAMP=842532917
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk91s40_.bkp RECID=146 STAMP=842532809
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_ncsnf_TAG20140318T125302_9lk95nvg_.bkp RECID=149 STAMP=842532932
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk928t8_.bkp RECID=147 STAMP=842532824
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_annnn_TAG20140318T125344_9lk95oxv_.bkp RECID=150 STAMP=842532933
Crosschecked 6 objects

12:54:41 SYS@js01 AS SYSDBA @fra

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                          0                         0               0
BACKUP PIECE                      70.47                     35.24               6
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

That is pretty clear – there are duplicates of each file. This is also shown by the FRA now being 70% consumed by backup pieces, whereas previously only 35% of the FRA was used.

This seems like a bug, and a brief search of My Oracle Support finds this relevant document:

Bug 14248496 RMAN ‘validate recovery files’ creates a piece copy for every execution

This fits the situation pretty well, and the version of this database, 11.2.0.3, is one of the affected versions.
As per the doc, this bug is fixed in 11.2.0.4.

The next step, of course, is to try this same operation in 11.2.0.4.
This database is also running on Linux 6 – the only difference is that the database version is 11.2.0.4.

RMAN crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup RECID=1 STAMP=842547637
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup RECID=2 STAMP=842547732
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup RECID=3 STAMP=842547838
Crosschecked 3 objects

RMAN validate recovery files;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
specification does not match any archived log in the repository
specification does not match any datafile copy in the repository
skipping backup sets; RECOVERY FILES, RECOVERY AREA or DB_RECOVERY_FILE_DEST option cannot validate backup set
Finished validate at 18-MAR-14

That wasn’t exactly promising – VALIDATE RECOVERY FILES now just exits with a message that backup sets cannot be validated with this command.
Apparently ‘fixing’ the bug was just a matter of disabling this bit of functionality.
This is at odds with the Oracle 11g documentation for RMAN VALIDATE, from the section “RECOVERY FILES”:

Validates all recovery files on disk, whether they are stored in the fast recovery area or other locations on disk. Recovery files include full and incremental backup sets, control file autobackups, archived redo log files, and data file copies. Flashback logs are not validated.

The Oracle 12c Documentation for 12c RMAN VALIDATE says the same thing, that is that backup sets are included in the files to be validated.

Clearly the intent seems to have been for this to work with VALIDATE RECOVERY FILES, but for some reason the fix was simply to disable the functionality.

So, what can you use instead?

Now the VALIDATE BACKUPSET command must be used to validate the backups. This is not nearly as convenient as simply issuing the VALIDATE RECOVERY FILES command, as VALIDATE BACKUPSET takes a mandatory argument, which is the primary key of the backup set.

The documentation recommends using the LIST BACKUPSET command, but this is rather inconvenient, as the keys must be parsed from the report text, as seen here.

RMAN list backupset;

List of Backup Sets
==================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1       Full    9.36M      DISK        00:00:04     18-MAR-14
        BP Key: 1   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170034
        Piece Name: /u01/app/oracle/rman/rman-db-02p3ggdi_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014016      Ckp time: 18-MAR-14

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.07G      DISK        00:01:36     18-MAR-14
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-03p3gggk_1_1.bkup
  List of Datafiles in backup set 2
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/system01.dbf
  2       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/sysaux01.dbf
  3       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/undotbs01.dbf
  4       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/users01.dbf
  5       Full 1014604    18-MAR-14 /u02/app/oracle/oradata/orcl/example01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3       Full    9.36M      DISK        00:00:02     18-MAR-14
        BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20140318T170212
        Piece Name: /u01/app/oracle/rman/rman-db-04p3ggjt_1_1.bkup
  SPFILE Included: Modification time: 18-MAR-14
  SPFILE db_unique_name: ORCL
  Control File Included: Ckp SCN: 1014639      Ckp time: 18-MAR-14

This is fine for manually validating just a few files, but is really not a workable solution for programmatically validating backup sets. Fortunately there is a better method – just use the v$backup_set_details view.

  1  select session_key, session_recid, session_stamp, bs_key, recid
  2  from v$backup_set_details
  3* order by session_key
15:58:37 dev.jks.com - jkstill@js01 SQL /

SESSION_KEY SESSION_RECID SESSION_STAMP     BS_KEY	RECID
----------- ------------- ------------- ---------- ----------
	469	      469     842532214        106	  106
	469	      469     842532214        107	  107
	469	      469     842532214        105	  105

3 rows selected.

RMAN> validate backupset 105;

Starting validate at 18-MAR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fra/JS01/backupset/2014_03_18/o1_mf_nnndf_TAG20140318T125302_9lk90z0k_.bkp tag=TAG20140318T125302
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:07
Finished validate at 18-MAR-14

-- the same was done for BS_KEY values 106 and 107
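For truly programmatic validation, one approach (my own sketch, not something taken from the original test) is to let SQL*Plus generate the RMAN commands from the same view:

set pagesize 0 feedback off
spool validate_backupsets.rman
select distinct 'validate backupset ' || bs_key || ';'
from v$backup_set_details
order by 1;
spool off

-- then: rman target / cmdfile=validate_backupsets.rman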

It is usually a good idea to investigate when something is found to work differently than expected.
It was certainly beneficial in this case, as I was developing code on 11.2.0.3 that would later run on 11.2.0.4.
While that bit of code would not work as expected on 11.2.0.4, it would also not cause an error, and probably not be noticed until it caused a recovery problem.

Using VALIDATE BACKUPSET is a workable solution, but not nearly as convenient as using VALIDATE RECOVERY FILES.
Perhaps there will be a fix for it in future releases.

Categories: DBA Blogs

Use DCLI to check #em12c agents on Exadata – all at once

DBASolved - Tue, 2014-03-18 21:08

For all the benefits of Oracle Enterprise Manager 12c (OEM), there is one drawback when using OEM with Real Application Clusters or Exadata: checking the status of the EM Agent on each node.  When a node is bounced, sometimes the EM Agent does not restart.  When this happens on a single-instance box or a RAC, normally you would have to check each node individually and restart the agent.  With Exadata, Oracle has made this check easier by providing the Distributed Command Line Interface (DCLI).  Using DCLI, an administrator can run commands across all the nodes of the Exadata.

Note: The DCLI is a shell script that allows commands to be run on multiple machines with a single invocation.

The example I’m going to use in this post is an example of checking the Enterprise Management Agents after rebooting the  computer (db) nodes of an Exadata.

After rebooting each of the nodes, I needed to validate that the EM Agents were successfully restarted.  All the agents were installed on each of the nodes in the same directory structure (/u01/app/oracle/product/agent12c), which makes using DCLI a breeze.   Now that I know where the agents are installed, I need to create a file that lists all the compute (db) nodes in the Exadata; this file is named dbs_group and is shown below.  The last thing I need is a login for each of the nodes.  On all Exadata systems, the oracle user can be used to log in to each of the nodes.
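For reference, dbs_group is just a plain text file with one compute node name per line, something like this (names masked to match the output below):

e###db01
e###db02
e###db03
e###db04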

Now that I have all the information I need, I need two command line switches that will be used with the DCLI command.  These switches are -l (login) and -g (group).

The command that will be used to check all the EM Agents on the Exadata is:

 

dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent


Once I run the DCLI command, I get output for all the EM Agents across all the nodes of the Exadata.

[e###db01:oracle:F####1] /home/oracle
> dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emctl status agent
e###db01: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db01: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db01: —————————————————————
e###db01: Agent Version     : 12.1.0.3.0
e###db01: OMS Version       : 12.1.0.3.0
e###db01: Protocol Version  : 12.1.0.1.0
e###db01: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db01: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db01: Agent Process ID  : 6638
e###db01: Parent Process ID : 6307
e###db01: Agent URL         : https://e###db01.acme.com:3873/emd/main/
e###db01: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db01: Started at        : 2014-03-16 02:29:10
e###db01: Started by user   : oracle
e###db01: Last Reload       : (none)
e###db01: Last successful upload                       : 2014-03-17 10:15:41
e###db01: Last attempted upload                        : 2014-03-17 10:15:41
e###db01: Total Megabytes of XML files uploaded so far : 3.84
e###db01: Number of XML files pending upload           : 0
e###db01: Size of XML files pending upload(MB)         : 0
e###db01: Available disk space on upload filesystem    : 29.98%
e###db01: Collection Status                            : Collections enabled
e###db01: Heartbeat Status                             : Ok
e###db01: Last attempted heartbeat to OMS              : 2014-03-17 10:16:56
e###db01: Last successful heartbeat to OMS             : 2014-03-17 10:16:56
e###db01: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:56
e###db01:
e###db01: —————————————————————

e###db01: Agent is Running and Ready
e###db02: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db02: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db02: —————————————————————
e###db02: Agent Version     : 12.1.0.3.0
e###db02: OMS Version       : 12.1.0.3.0
e###db02: Protocol Version  : 12.1.0.1.0
e###db02: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db02: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db02: Agent Process ID  : 28588
e###db02: Parent Process ID : 28478
e###db02: Agent URL         : https://e###db02.acme.com:3873/emd/main/
e###db02: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db02: Started at        : 2014-03-16 02:24:59
e###db02: Started by user   : oracle
e###db02: Last Reload       : (none)
e###db02: Last successful upload                       : 2014-03-17 10:15:40
e###db02: Last attempted upload                        : 2014-03-17 10:15:40
e###db02: Total Megabytes of XML files uploaded so far : 3.18
e###db02: Number of XML files pending upload           : 0
e###db02: Size of XML files pending upload(MB)         : 0
e###db02: Available disk space on upload filesystem    : 38.92%
e###db02: Collection Status                            : Collections enabled
e###db02: Heartbeat Status                             : Ok
e###db02: Last attempted heartbeat to OMS              : 2014-03-17 10:16:59
e###db02: Last successful heartbeat to OMS             : 2014-03-17 10:16:59
e###db02: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:59
e###db02:
e###db02: —————————————————————

e###db02: Agent is Running and Ready
e###db03: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db03: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db03: —————————————————————
e###db03: Agent Version     : 12.1.0.3.0
e###db03: OMS Version       : 12.1.0.3.0
e###db03: Protocol Version  : 12.1.0.1.0
e###db03: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db03: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db03: Agent Process ID  : 5834
e###db03: Parent Process ID : 5733
e###db03: Agent URL         : https://e###db03.acme.com:3873/emd/main/
e###db03: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db03: Started at        : 2014-03-16 02:42:35
e###db03: Started by user   : oracle
e###db03: Last Reload       : (none)
e###db03: Last successful upload                       : 2014-03-17 10:12:39
e###db03: Last attempted upload                        : 2014-03-17 10:12:39
e###db03: Total Megabytes of XML files uploaded so far : 8.54
e###db03: Number of XML files pending upload           : 0
e###db03: Size of XML files pending upload(MB)         : 0
e###db03: Available disk space on upload filesystem    : 39.97%
e###db03: Collection Status                            : Collections enabled
e###db03: Heartbeat Status                             : Ok
e###db03: Last attempted heartbeat to OMS              : 2014-03-17 10:16:38
e###db03: Last successful heartbeat to OMS             : 2014-03-17 10:16:38
e###db03: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:38
e###db03:
e###db03: —————————————————————

e###db03: Agent is Running and Ready
e###db04: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db04: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db04: —————————————————————
e###db04: Agent Version     : 12.1.0.3.0
e###db04: OMS Version       : 12.1.0.3.0
e###db04: Protocol Version  : 12.1.0.1.0
e###db04: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db04: Agent Binaries    : /u01/app/oracle/product/agent12c/core/12.1.0.3.0
e###db04: Agent Process ID  : 5853
e###db04: Parent Process ID : 5734
e###db04: Agent URL         : https://e###db04.acme.com:3873/emd/main/
e###db04: Repository URL    : https://c#####01.acme.com:4904/empbs/upload
e###db04: Started at        : 2014-03-16 02:53:08
e###db04: Started by user   : oracle
e###db04: Last Reload       : (none)
e###db04: Last successful upload                       : 2014-03-17 10:16:33
e###db04: Last attempted upload                        : 2014-03-17 10:16:33
e###db04: Total Megabytes of XML files uploaded so far : 2.81
e###db04: Number of XML files pending upload           : 0
e###db04: Size of XML files pending upload(MB)         : 0
e###db04: Available disk space on upload filesystem    : 45.75%
e###db04: Collection Status                            : Collections enabled
e###db04: Heartbeat Status                             : Ok
e###db04: Last attempted heartbeat to OMS              : 2014-03-17 10:16:50
e###db04: Last successful heartbeat to OMS             : 2014-03-17 10:16:50
e###db04: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:50
e###db04:
e###db04: —————————————————————

e###db04: Agent is Running and Ready

 

From the output, I can clearly see that all the EM Agents are monitoring the Exadata and are in running status.  I really like the DCLI command option; it makes life simple, especially when you need to check on multiple agents within an environment.  Makes me wonder if I can put DCLI on a traditional RAC system?

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Exadata, OEM
Categories: DBA Blogs

some days it is harder to be grumpy than others ...

Grumpy old DBA - Tue, 2014-03-18 17:14
The official notification is out, and the nomination that was submitted to recognize me as an Oracle Ace has been approved.  This is certainly an honor, and along with my continued role at NEOOUG, I am hoping to leverage this status to continue to help grow our user group and our yearly conference, the Great Lakes Oracle Conference ( GLOC ).

Speaking of that, our abstract selection has been finalized, and you can see all of our great presentations and workshops from the link above.  We will have a session schedule out very soon.

Essentially we have four tracks, DBA : Developer : Applications : BI/DW, and each track has at least 1 session for each time slot, with the DBA track having 2 sessions running concurrently.

This conference is a great value with top presenters.  We have worked hard this year to give a stronger/deeper set of Applications and BI/DW offerings.  I could start rambling about our plans for next year, but it is probably best to focus on 2014 for now!

The Grumpy Old DBA hopes to see everyone in Cleveland!


Geez oh man, I briefly read through the pages in the Oracle Ace agreement on how to display this logo; hope I did not mess up my first attempt!  I can just see it already: "Grumpy Old DBA, please report to the principal's office immediately" ( hope not, keep your fingers crossed for me ha ha )!
 
Categories: DBA Blogs

DBRM for 12c Container Database in Multi-tenant Environment

Pakistan's First Oracle Blog - Mon, 2014-03-17 17:58
In a multi-tenant environment, Database Resource Manager (DBRM) at the Container Database (CDB) level enables us to manage resources like CPU usage and the number of parallel execution servers among the pluggable databases (PDBs). Then within each PDB, DBRM enables us to manage resources like CPU and parallelism, and to manage runaway queries which exceed certain thresholds of CPU, physical I/O, logical I/O or elapsed time.

In the 12c version of the Oracle database, a CDB plan is implemented at the CDB level. This CDB plan allocates resources to the PDBs. A CDB plan is made up of directives, with each directive aimed at a single PDB. The directives control the allocation of CPU and parallel execution servers. The default CDB plan is DEFAULT_CDB_PLAN. A CDB plan deals in share values and utilization limits.

Shares are like counters: the more shares a PDB has, the more resources it enjoys. The utilization limit for a PDB caps resource allocation to that PDB. By default, the share value for each PDB is 1 and the utilization limit is 100. The utilization limit restrains the system resource usage of a specific PDB, while parallel_server_limit and PARALLEL_SERVERS_TARGET are used to limit parallel servers.


Example of a CDB Plan:

The following example enforces a CDB plan 'cdb_plan' for two PDBs, 'PROD' and 'DEV'. PROD has 2 shares and a 100% utilization limit, whereas DEV has half the shares, i.e. 1, and a 50% utilization limit. The pending area is just a staging area used to create, edit, and validate plans.

exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'cdb_plan',
    comment => 'CDB plan for CDB');
END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'cdb_plan',
    pluggable_database    => 'Prod',
    shares                => 2,
    utilization_limit     => 100,
    parallel_server_limit => 100);
END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'cdb_plan',
    pluggable_database    => 'Dev',
    shares                => 1,
    utilization_limit     => 50,
    parallel_server_limit => 50);
END;
/

exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();


You may use the DBA_CDB_RSRC_PLANS view to display all of the CDB resource plans defined in the CDB.
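For example, a quick check of what is defined, followed by activating the plan from the root container (a sketch; column names per the 12c data dictionary):

select plan, status, comments from dba_cdb_rsrc_plans;

alter system set resource_manager_plan = 'cdb_plan';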
Categories: DBA Blogs

My slideshare site has had 1000 views

Hemant K Chitale - Mon, 2014-03-17 08:17
I have put up some presentations and articles on slideshare.  My site there has had 1,000 views.

Categories: DBA Blogs

Tuning with ASH Analytics in #em12c

DBASolved - Sun, 2014-03-16 21:06

Performance tuning of the Oracle database has always been a great thing to learn and a valuable skill for any database administrator.  In a previous post, I talked about how to use the Automatic Workload Repository (AWR) and Active Session History (ASH) to performance tune a database.  In this post, I want to take a look at the Active Session History (ASH) Analytics that can be found in Oracle Enterprise Manager 12c.

Active Session History (ASH) Analytics is used to help the database administrator determine the cause of spikes within database activity.  The ASH Analytics page provides stacked area charts to help visualize the ASH data from various dimensions, each backed by ASH data you can also query directly (see the sketch after this list).  Some of these dimensions are:

  • Wait Class
  • Module
  • Actions
  • SQL ID
  • Instance
  • User Session
  • Consumer Group
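These dimensions are built on the same ASH data exposed in V$ACTIVE_SESSION_HISTORY, so a rough command-line equivalent of the Wait Class view (my own sketch, not part of the page) is:

select nvl(wait_class, 'CPU') as wait_class, count(*) as samples
from v$active_session_history
where sample_time > sysdate - 30/1440   -- last 30 minutes
group by wait_class
order by samples desc;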

In order to use the ASH Analytics page, you first have to access the database home page.  From there, you can access ASH Analytics from the Performance menu (Performance –> ASH Analytics) (Image 1).

Note: If you are accessing the ASH Analytics page for the first time, it will ask you to install the PL/SQL package for it.

Image 1: (screenshot)

Once you are on the ASH Analytics page, you will notice three distinct sections of graphs and charts (Image 2). 

Image 2: (screenshot)

The top graph (Image 3) provides a high-level perspective of top activity during a selected period of time.  The grey shaded box, by default, shows a 5-minute window of activity.  If you need to see a larger timeframe, drag the box in either direction using the handles provided.

Image 3: (screenshot)

When you select a timeframe, the Activity graph below changes to show the activity for that time period (Image 4).   The Activity graph looks a lot like the Top Activity page graph. By highlighting or clicking on the graph or the legend (on the right-hand side of the graph), you can see which wait category is taking the longest.

Example: In the case of the graph in Image 4, the wait is a Configuration (brown) wait along with an Other (pink) wait.   This is due to log writer (LGWR) issues on my small server.

Image 4: (screenshot)

Notice in Image 4 that there are a few options in the upper left corner of the graph.  You see the words “Filter None”.  The graph can be filtered based on the dimensions stated earlier. If you change the view of the graph to the Load Map and then click on one of the waits, the filter will change from the Wait Class dimension to the wait you clicked on in the load map. Image 5 shows the load map being filtered by “Wait Event: log buffer space”.

Image 5: (screenshot)

So far, I have selected the timeframe I wanted to see and identified what was taking the most database time (LGWR and Log Buffer Space) during that timeframe.  The last section on the ASH Analytics page is actually two different listings that can be filtered by the dimensions already outlined (Image 6 & Image 7).

In Image 6, you see a list of SQL_IDs that are taking up database time.  You can correlate these SQL_ID activities back to the wait categories in the Activity graph.   At this point, if you wanted to create a SQL Tuning Set or tune the SQL directly, buttons are provided to accomplish these tasks.

Example: I don’t need to tune the SQL, because it is a SQL statement I’ve been working with and I already know that LGWR and Log Buffer Space are the issues.

Image 6: (screenshot)

In Image 7, the output was filtered with Wait Class as the dimension.  As you can tell, the wait taking the longest is the Configuration wait.  Again, this goes right along with the earlier findings shown on the page.

Image 7: (screenshot)

At this point, I have identified that the Configuration wait (LGWR) event is causing the problem with performance on the system.  In order to fix this, I would need to look at my redo log sizes and locations to make sure that I’m not trying to perform too many I/O operations over the same I/O interface.

Summary

By adding ASH Analytics to your DBA toolkit, you should be able to quickly identify performance problems.  Couple ASH Analytics with the ASH Report and AWR Report, and you have a solid performance tuning basis to begin diagnosing problems with the database.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, OEM, Performance
Categories: DBA Blogs

Create a tablespace in #DB12C

DBASolved - Sun, 2014-03-16 14:48

I’ve been messing with Oracle Database 12c (12.1.0.1) today.  As I’ve been working at setting up a few things to test, I realized I needed to create a tablespace or two, and I didn’t have a post on how to create tablespaces with Oracle Database 12c.

Before I can do any testing, I have to setup a few tablespaces for the SLOB testing tool.  Below is how to simply create tablespaces using Oracle Managed Files (OMF) within Oracle Database 12c.

Traditional/Container Database

This is a top-level database.  For a traditional style database (non-CDB), creating a tablespace is just like in previous versions.

create tablespace SLOB datafile size 1M autoextend on next 1m;

With a Container Database (CDB), first make sure you are in the correct container.  This can be done with either SHOW CON_ID or SHOW CON_NAME.

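(The original screenshot is omitted; from SQL*Plus in the root container, the check looks like this.)

SQL> show con_id

CON_ID
------------------------------
1

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT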

The CDB is always container id 1.  Using the CON_NAME option, the top-level container name is CDB$ROOT.

Now to create a tablespace in the CDB, you need to be logged in as a SYSDBA. Then you can run the same command to create the tablespace as you did for a traditional database.

create tablespace SLOB datafile size 1M autoextend on next 1m;

Pluggable Databases

To create tablespaces in the pluggable databases (PDBs), you need to ensure that you are in the container you want to create the tablespace in.  PDBs have container ids greater than 2.  The CON_NAME option is also a good way to ensure that you are in the correct container.

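(Screenshot omitted; switching into a PDB and confirming it looks like this, where PDB1 is a hypothetical PDB name.)

SQL> alter session set container = PDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB1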

Now that you have confirmed that you are in the correct container, you can create the tablespace.

create tablespace SLOB datafile size 1M autoextend on next 1m;

How to check where the tablespaces are located

Everyone’s first response for checking where tablespaces are located is to look at DBA_TABLESPACES. Within the Database 12c architecture, this view is only valid for the container you are in.  Oracle has provided a few new views to use with Database 12c, called the CDB views.  If you want to see the tablespaces you created above and what containers they are in, the CDB_TABLESPACES view needs to be used.

Set the container back to CDB$ROOT and verify that you are in the correct container.

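(Screenshot omitted; the switch back is the same ALTER SESSION.)

SQL> alter session set container = CDB$ROOT;

Session altered.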

Now that you are back in the CDB container, let’s use the CDB_TABLESPACES view to see which containers you created a SLOB tablespace in.

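(Screenshot omitted; a query along these lines shows the containers, with output matching what is described below.)

SQL> select con_id, tablespace_name
  2  from cdb_tablespaces
  3  where tablespace_name = 'SLOB'
  4  order by con_id;

    CON_ID TABLESPACE_NAME
---------- ------------------------------
         1 SLOB
         3 SLOB
         4 SLOB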

As you can see, the SLOB tablespace has been created in containers 1, 3 and 4.  If you want to find the names for the con_ids, these can be found in the CDB_PDBS view.  CON_ID 2 is the seed PDB, which is used for cloning PDBs.

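(Screenshot omitted; a minimal query is shown instead. Your PDB names will differ.)

SQL> select pdb_id, pdb_name from cdb_pdbs order by pdb_id;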

Oracle Enterprise Manager 12c view

Everything that has been shown has used SQL and SQL Developer.  The same information can be found in Oracle Enterprise Manager 12c (OEM) under the container database.  The screenshot below shows the SLOB tablespaces in OEM.

(screenshot)

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database
Categories: DBA Blogs

Oracle PL/SQL Programming, 6th Edition

Surachart Opun - Sat, 2014-03-15 03:48
PL/SQL is a procedural language that is very useful when you work with Oracle Database. As a DBA, you might have to write PL/SQL to do some tasks, and developers on Oracle Database must know PL/SQL.
In this post I mention a book: Oracle PL/SQL Programming by Steven Feuerstein (@stevefeuerstein) and Bill Pribyl. This new edition covers PL/SQL on Oracle Database 12c. Readers can use the examples in the book for practice and pick up many ideas for programming in PL/SQL.

The book is easy to read and makes PL/SQL easy to understand. Readers can use it as guidance and work through a lot of real-world problems alongside these great PL/SQL authors. PL/SQL developers should not miss this book.
In the book, readers will find 28 chapters in 6 parts.
Part I: Programming in PL/SQL
Part II: PL/SQL Program Structure
Part III: PL/SQL Program Data
Part IV: SQL in PL/SQL
Part V: PL/SQL Application Construction
Part VI: Advanced PL/SQL Topics
Free Sampler. Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Two performance analytics examples this week

Bobby Durrett's DBA Blog - Fri, 2014-03-14 17:00

I attended two events this week that used the term “analytics” to describe a performance tuning feature that displays metrics in various useful ways.

On Tuesday, I attended a Delphix 4.0 webinar where Delphix’s CTO Adam Leventhal described the new Delphix Analytics, including the very interesting Latency Heat Map. The latency heat map looked like a customer-usable front end to the sort of performance information that Delphix support had collected when I worked with them on some network issues. In this case, the word analytics seems to mean the ability to look at performance information in an easy-to-understand visual way that you can customize to see various perspectives, such as NFS latency, disk latency, etc.

Yesterday, Oracle gave us a four-hour on-site OEM 12c seminar which included “ASH Analytics”. It was similar to the Delphix analytics in that you could see performance information visually and choose among various dimensions – machine, wait, session id, etc.

Perhaps Delphix and Oracle drew from business reporting – cubes, dimensions, drilling down – and applied the approach to performance information. I have not used either product to solve a real problem, so I cannot yet vouch for their usefulness, but I thought it was an interesting coincidence that I ran into two such similar features in the same week.

- Bobby

 

Categories: DBA Blogs

is this old school ... using dbms_shared_pool keep to pin plsql code in shared pool?

Grumpy old DBA - Fri, 2014-03-14 09:05
These days, with the shared pool area getting so large, many people (including me) tend to think of reserving/pinning PL/SQL code in the shared pool as an old-school, "probably no longer necessary" approach. Of course, many 11g systems are now running with 4 GB shared pools (and sometimes much more).

In my recent Hotsos session I polled the attendees and (kind of as expected) confirmed that most people are not doing this any more.

However, after I finished, someone came up and recommended that I add material to my presentation about an Oracle-provided toolkit (PIND) that accomplishes this for you. He noted that Oracle E-Business Suite deployments in particular may still need this kind of setup. (This information has since been added to my presentation.)

Of course, dbms_shared_pool.keep is the way to do it yourself, and there is no shortage of web posts about doing this; a minimal sketch follows.
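A minimal sketch, with a hypothetical owner and package name; note that on some installations you may first need to run the ?/rdbms/admin/dbmspool.sql script as SYS to create the dbms_shared_pool package:

-- pin a package in the shared pool ('P' covers packages, procedures and functions)
exec sys.dbms_shared_pool.keep('SCOTT.MY_PKG', 'P');

-- verify what is currently kept in the library cache
select owner, name, type
from   v$db_object_cache
where  kept = 'YES';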

If you want to look at this Oracle toolkit, see Oracle Doc IDs 301171.1 and 311689.1.
Categories: DBA Blogs

Log Buffer #363, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-14 07:52

This Log Buffer Edition encompasses the wide arena of Oracle, SQL Server, and MySQL databases and what bloggers are saying about them.

Oracle:

David Peake recently announced an updated APEX 5.x Statement of Direction.

Finished reading 12c Concepts manual.

What MLB Season Scheduling Means to Your Supply Chain.

SQL for date ranges, gaps and overlaps.

Mandated Third Party Static Analysis: Bad Public Policy, Bad Security.

SQL Server:

If you need to architect a multi-tenant Analysis Services project, what are your options?

With the right tools, it is possible to rapidly deploy upgraded databases.

Using the Statistical Program R for Running Statistics.

Why I Developed SQL Code Guard: A Fairy Tale from a DBA.

Adding SQL Server AlwaysOn to existing Failover Clusters.

MySQL:

Ghosts of MySQL past, part 8.1: Five Years.

Insert benchmark for InnoDB, MongoDB and TokuMX and flash storage.

Database benchmarks are hard but not useless.

MySQL Workbench 6.1: Performance Schema Reports.

How to use the ClusterControl REST API to automate your Database Cluster.

Categories: DBA Blogs

#EM12c Browser Security – Certificate Installation

DBASolved - Thu, 2014-03-13 09:38

One thing that bugs me is browser security. When I access Oracle Enterprise Manager 12c (OEM12c) through a browser and get the certificate error (Image 1), it just pushes my buttons. Why not ship a valid certificate with OEM12c? In reality, the problem is not with OEM12c itself; it is that the browser you choose to use does not trust the certificate OEM12c comes with. In my case, I use Google Chrome a lot with OEM12c. So how can I get rid of this security certificate warning?

Image 1:

To get rid of this warning message, the security certificate needs to be installed in the browser.  Where can I get the security certificate? 

Within Google Chrome, I right-click on the lock icon in the address bar (Image 2).

Image 2:

I’m presented with a menu (Image 3).  From this menu, I can see that the connection has not been verified.

Image 3:

If I click on the “Certificate Information” link, I’m taken to an information dialog about the certificate (Image 4). The information provided clearly states that the certificate could not be verified back to a trusted certification authority.

Image 4:

Obviously, I need to add the certificate as a trusted authority. In order to do this, I first need to save the certificate to my hard drive. This is done from the Details tab (Image 5), where you will see a Copy to File option.

Image 5:

Clicking on the Copy to File option takes me to the Certificate Export wizard (Image 6), which I use to export the certificate to my desktop.

Image 6:

I typically export the certificate in Base-64 encoded X.509 format (Image 7).

Image 7:

Lastly, I save the certificate to my desktop (Image 8):

Image 8:

Finally, I close the certificate export wizard (Image 9).

Image 9:

Now that I have the certificate saved to my desktop, I can import it into the browser as a trusted authority. In order to do this in Google Chrome, I need to go to the Settings page (Image 10).

Image 10:


Once on the Settings page, I expand Show Advanced Settings (Image 11) and look for the HTTPS/SSL section (Image 12), where certificates are managed.

Image 11:

Image 12:

Clicking the Manage Certificates button will open the Certificates dialog (Image 13). I then need to go to the Trusted Root Certification Authorities tab. On this tab is an Import button that I can use to import the certificate I saved to my desktop.

Image 13:

Using the import wizard (not pictured), I can import the certificate as a trusted certificate. As part of the import, I receive a Security Warning; clicking Yes installs the certificate (Image 14).

Image 14:

Now that the certificate is installed, the next time I attempt to access fred.acme.com to work with OEM12c, I go directly to the login page (Image 15).

Image 15:

 

If you want to know how to install the certificate for Internet Explorer or Firefox, the steps can be found here.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

The 5 Worst Things About Working at Pythian

Pythian Group - Thu, 2014-03-13 08:12

Have you ever seen an employee post negatively about their company using his or her real name? If not, then you are about to for the first time — you can count that as number 1 out of the 5 worst things about working at Pythian. There are several great reasons to work at Pythian, but if I write about those things, many would brush it aside as unctuous. I think people want to know the dirt, and I’m here to tell you about it. Let’s look at the remaining 4 worst things about working at Pythian:

2. Like many other companies, Pythian encourages and pays for its employees to attend and present at technical conferences. But how many companies spoil their employees by allowing them to actually enjoy those conferences at the same time? For instance, during conferences Pythian provides quality opportunities to socialize and network.

3. If you work at Pythian, then you are very good at what you do, which means you are in big trouble. Why? Pythian will make sure you remain with them even if you switch continents. Here I am, still with Pythian, even after moving from Asia to Australia.

4. When was the last time your boss told you to take a vacation? Not only do my bosses at Pythian insist I take time off, they help me plan my holiday. In a previous job, I worked five straight years without a break, not even statutory holidays, let alone a vacation, and nobody noticed. I admit, I’m a terrible workaholic and I brought that habit with me to Pythian. Now I have my boss and my family chalking out the days I should take off.

5. This last one is the worst… I call it Pythian’s zero-gravity culture. There is nothing at Pythian that pulls you down, and nothing that holds you back. Nothing prevents you from being innovative, making changes, and suggesting improvements. You are free to let your inner force propel you.

Some may argue that none of these points are actually negative. Rather, to some they are a dream come true. However, I consider them problematic because now there is no way I could work anywhere else. I could look for something better, or at least similar, but the truth is that there is no place like Pythian.

Categories: DBA Blogs

Oracle IT Architecture Certification Program

Years of experience in implementing the world’s most complex business and technology projects have provided Oracle the expertise to meet any strategic goal. Helping you evaluate products is just one...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Impressions from #oug_ire 2014

The Oracle Instructor - Thu, 2014-03-13 01:53

The Oracle User Group Ireland had its annual conference for 2014 at the Convention Centre in Dublin – which is a very nice location:

Convention Centre in Dublin

There have been many well-known Oracle experts speaking, as you can see from the agenda. Oracle University was represented by Joel Goodman and myself – we really enjoyed talking there and listening to the other speakers :-)

Joel Goodman and Uwe Hesse at the OUG Ireland Conference 2014

The first presentation I attended was delivered by Tom Kyte, who, in the absence of Maria Colgan, delivered an introduction to the new Oracle Database In-Memory option. This extra-cost option will probably be available with 12.1.0.2, which we expect to be released this calendar year. Safe Harbor etc. as usual. One key point that I took away here was that it is extremely easy to implement the In-Memory option: just set one initialization parameter and designate its use at the segment level, e.g. with ALTER TABLE. Notably, the storage structure on disk remains unchanged, and the whole thing is completely transparent to the application layer.
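Based on what was presented (and with the caveat that the option was not yet released at the time, so the exact syntax could still change), enabling it might look roughly like this:

alter system set inmemory_size = 2G scope=spfile;  -- the one initialization parameter (instance restart required)
alter table sales inmemory;                        -- designate a segment; table name is illustrative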

Next talk was delivered by fellow Oakie Tim Hall: An Oracle DBA’s Guide to WebLogic Server.

Tim Hall

He tried to pack all the information that he wished he had known before dealing with WebLogic for the first time into that presentation. Very useful indeed for starters, I suppose. There was no single key point to highlight here; instead, we got a collection of small but helpful tips that I will definitely come back to when implementing WebLogic myself.

Next I went to Simon Haslam, who talked about What’s So Special about the Oracle Database Appliance?

Simon Haslam

Some key points here: the ODA is definitely to be considered for rapid RAC deployment, especially for small to mid-sized projects with less demand for extremely good I/O performance. Also, it is probably a clever move to install it as a virtualized platform based on Oracle VM, even if at first only databases are supposed to run there. This is easy to set up initially (but an effort to change after the fact) and has no negative performance impact.

Afterwards I did my own two presentations: Real-Time Query for Data Guard 11g in Action and Data Guard 12c New Features in Action

Uwe Hesse presenting about Data Guard

Many thanks to my dear colleague Mina Sagha Zadeh, who took the picture and introduced my talk in the most charming and flattering way I have ever experienced :-)

All in all it was a fantastic event in Dublin, kudos to the Oracle User Group Ireland for organizing and running it so well – great job!


Tagged: OUG Ireland 2014
Categories: DBA Blogs

I think you should submit an abstract

This is not your typical posting from me, but I just received a LinkedIn message and it got me motivated enough to write this.

A colleague, who has been working with Oracle for over 15 years, sent me a message about the perils of working for a consulting company that has kept him on the road for about a year now. He's had enough and is looking for something else that will keep him close to home. While this type of life can be painful, it can also be plentiful.

I challenged him to submit an abstract to IOUG/Collaborate next year. While this year's big IOUG conference is only a few weeks away (April 7-11) in Las Vegas, he should start thinking about submitting an abstract now.

He has tons of experience. I wrote, "You have tons of experience that younger folks need to hear about. Perhaps taking just one topic/problem that you found really interesting and had a chance to dig into this past year. People love that kind of stuff."

Most people think that their abstract will not get accepted. I've been involved with Oracle user groups since I started working with Oracle technology in 1989! And I can tell you from experience that I always rank a well-written abstract, with a well-thought-out outline and a very small yet deep scope, very high. In fact, I recruit people for my IOUG track who I know will do this.

So I wrote him, "There are a lot of papers accepted in the DBA area, especially when they are about real life experiences that you turn into a rock'n conference presentation!"

Think about your DBA experiences; the real specific ones that you feel proud about, the ones that you're really interested in. That's what people love to hear about, because they feel your passion!

I feel so strongly about this, I'll offer this: I will pre-review your abstract and help make it a better one.

Hope to see your abstract at next year's IOUG!

All the best in your Oracle performance endeavors,

Craig.


If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. Go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
  

Categories: DBA Blogs