DBA Blogs

Learn a bit Oracle Scheduler with BROKEN state

Surachart Opun - Mon, 2014-01-27 03:15
On Oracle Database, DBAs can check for broken jobs created with DBMS_JOB in the *_JOBS.BROKEN column. Most DBAs, however, have moved from DBMS_JOB to DBMS_SCHEDULER, so I was curious: how do you check for a broken job in Oracle Scheduler (DBMS_SCHEDULER)? After digging around, the answer is the *_SCHEDULER_JOBS.STATE column (a quick query is sketched after the list of states below).

STATE    VARCHAR2(15)    Current state of the job:
  • DISABLED
  • RETRY SCHEDULED
  • SCHEDULED
  • RUNNING
  • COMPLETED
  • BROKEN
  • FAILED
  • REMOTE
  • SUCCEEDED
  • CHAIN_STALLED
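
For example, a quick check for jobs that the Scheduler has marked broken could look like the sketch below. This is my own minimal query based on the view and columns described above, not something from the original post:

select owner, job_name, state, failure_count, max_failures
  from dba_scheduler_jobs
 where state = 'BROKEN';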

When does Oracle Scheduler change STATE to be BROKEN?
To answer that, DBAs should know a couple of other columns as well.

FAILURE_COUNT    NUMBER    Number of times the job has failed to run
MAX_FAILURES     NUMBER    Number of times the job will be allowed to fail before being marked broken
The *_SCHEDULER_JOBS.STATE column changes to "BROKEN" when the *_SCHEDULER_JOBS.FAILURE_COUNT value reaches the *_SCHEDULER_JOBS.MAX_FAILURES value. Really? Yes... but I have some examples to show how it works. I tested on 11.2.0.1.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL>
SQL>
SQL> show user;
USER is "DEMO"
SQL>
SQL> BEGIN
  2    DBMS_SCHEDULER.create_job (
  3      job_name        => 'test_my_job',
  4      job_type        => 'PLSQL_BLOCK',
  5      job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT1''); END;',
  6      start_date      => SYSTIMESTAMP,
  7      repeat_interval => 'freq=daily',
  8      end_date        => NULL,
  9      enabled         => TRUE);
 10  END;
 11  /

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              2

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3

SQL>
SQL>

I created the Oracle Scheduler job above; now let's set the max_failures attribute. First idea: set MAX_FAILURES = FAILURE_COUNT.
 SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 3);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

SQL>
SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>  select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

Case 1: setting MAX_FAILURES = FAILURE_COUNT after the job has already run... nothing changed! Not in the user_scheduler_jobs view; I also checked the *_scheduler_job_log and *_scheduler_job_run_details views, and nothing changed there either.
I believe DBAs should disable the job, then set the MAX_FAILURES attribute, and then enable the job again, because after enabling, the FAILURE_COUNT value is reset to 0.
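
In PL/SQL, that disable / set / enable sequence would look roughly like the sketch below. It is only a compact summary of the commands this post demonstrates step by step near the end (the value 5 is just an example):

BEGIN
  -- disable first, so the attribute change is applied to a quiesced job
  DBMS_SCHEDULER.disable(name => 'test_my_job');
  DBMS_SCHEDULER.set_attribute(name      => 'test_my_job',
                               attribute => 'max_failures',
                               value     => 5);
  -- enabling the job resets FAILURE_COUNT to 0
  DBMS_SCHEDULER.enable(name => 'test_my_job');
END;
/
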
Anyway, let's first try a MAX_FAILURES value of 4.
SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session=> FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            3

SQL>
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 4);
PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              3            4

SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    BROKEN          FALSE             4            4

Now *_SCHEDULER_JOBS.STATE = "BROKEN". How do we fix the "BROKEN" state? Just enable the job.
SQL> EXEC DBMS_SCHEDULER.enable(name=> 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              0            4

SQL>
SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1            4

How do we unset the MAX_FAILURES value?
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value=> NULL);
BEGIN DBMS_SCHEDULER.set_attribute (name=> 'test_my_job', attribute=> 'max_failures',value=> NULL); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00307: too many declarations of 'SET_ATTRIBUTE' match this call
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored


SQL>
SQL>
SQL>
SQL>
SQL>
SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value=> 0);
BEGIN DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures',value => 0); END;

*
ERROR at line 1:
ORA-27465: invalid value 0 for attribute MAX_FAILURES
ORA-06512: at "SYS.DBMS_ISCHED", line 4370
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2850
ORA-06512: at line 1

SQL> exec DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute=> 'max_failures',value => '');

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              1

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              5

So to unset MAX_FAILURES, just run "DBMS_SCHEDULER.set_attribute (name => 'test_my_job', attribute => 'max_failures', value => '')".

Going back to Case 1: the right way to set the MAX_FAILURES attribute is to disable the job, set the attribute, and enable it again.
SQL> EXEC DBMS_SCHEDULER.disable(name => 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    DISABLED        FALSE             5

SQL> exec DBMS_SCHEDULER.set_attribute (name =>  'test_my_job', attribute =>  'max_failures',value => 5);

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    DISABLED        FALSE             5            5

SQL>
SQL> EXEC DBMS_SCHEDULER.enable(name=> 'test_my_job');

PL/SQL procedure successfully completed.

SQL>
SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    SCHEDULED       TRUE              0            5

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name           => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> exec  DBMS_SCHEDULER.run_job (job_name            => 'test_my_job',use_current_session => FALSE);

PL/SQL procedure successfully completed.

SQL> select job_name, state,enabled, failure_count, max_failures from user_scheduler_jobs;

JOB_NAME                       STATE           ENABL FAILURE_COUNT MAX_FAILURES
------------------------------ --------------- ----- ------------- ------------
TEST_MY_JOB                    BROKEN          FALSE             5            5

The job is BROKEN again, exactly when FAILURE_COUNT reached MAX_FAILURES. Read more in the Oracle documentation.
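
When a job does go BROKEN, it also helps to see why the individual runs failed. A minimal sketch of my own (not from the original post), using the run-details view mentioned earlier:

select log_date, status, error#, additional_info
  from user_scheduler_job_run_details
 where job_name = 'TEST_MY_JOB'
 order by log_date;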

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

LAST_CALL_ET in V$SESSION

Hemant K Chitale - Sun, 2014-01-26 09:21
For an ACTIVE session, the LAST_CALL_ET column in V$SESSION "represents the elapsed time (in seconds) since the session has become active."

So, if you are running a DML operation, it represents how long the DML has been running.

What if you are running a SELECT that is sending a large number of rows to a client?  If the query runs for 10 minutes and you repeatedly query it, does LAST_CALL_ET show the time since the query began?

What if you are running a PL/SQL procedure or an anonymous PL/SQL block which calls one or more SQL statements?  At any instant in time the session may be running one of those SQLs.  Does LAST_CALL_ET, queried at that instant, show the time that SQL has been running?


Think about it.
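
If you want to observe the behaviour yourself, a simple query to run repeatedly while your test workload executes might be the following (my own sketch, not from this post):

select sid, status, sql_id, last_call_et
  from v$session
 where status = 'ACTIVE'
   and type = 'USER';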

UPDATE 01-Feb-14 :  Here are two discussions around LAST_CALL_ET :

https://community.oracle.com/thread/1115312?start=0&tstart=0
https://community.oracle.com/thread/2584291?start=0&tstart=0

Three earlier posts by me :

16-Aug-2010

17-Aug-2010

22-Aug-2010

.
.
.

Categories: DBA Blogs

My Article on SearchOracle about Exadata Administration

Pakistan's First Oracle Blog - Sat, 2014-01-25 20:48
The massive explosion in data, combined with an ever-increasing hunger to boost processing speeds while keeping costs low, has led to engineered systems in data centers.

Read More
Categories: DBA Blogs

Partner Webcast - Oracle Mobility Business Awareness - 29 Jan 2014

According to many analysts, the number of mobile devices users will exceed the number of computer users by the end of 2013, and will become the dominant platform for accessing applications and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SS Enqueue and sort segment request waits on 8.1.7.4 after trying to drop temp tablespace

Bobby Durrett's DBA Blog - Fri, 2014-01-24 16:48

Participated in a long forum thread that I started with this title:

SS Enqueue and sort segment request waits on 8.1.7.4 after trying to drop temp tablespace

We have had quite the saga on an old 8.1.7 database.  Really, it has been going on since December 26th.  It was pretty cool to see the interaction with the Oracle community while at the same time getting some good help from Oracle support on such an old release.  I definitely appreciate everyone’s efforts!

The core issue relates to dropping objects in a dictionary managed tablespace that have many small extents.  I had similar issues a number of years ago that I mentioned on my old blog:

April 24, 2008 post

May 19, 2008 post

June 2, 2008 post

Bottom line: if you still have a system with dictionary-managed tablespaces, don't let your tables and indexes have small extent sizes, or you may end up with an object that is difficult to drop, and various issues will ensue if you do drop it.
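
One way to spot such objects before they bite you is to look for segments with very many extents in dictionary-managed tablespaces. This is my own rough sketch (the 10,000-extent threshold is arbitrary), written with pre-9i join syntax since the database in question is 8.1.7:

select s.owner, s.segment_name, s.segment_type, s.tablespace_name, s.extents
  from dba_segments s, dba_tablespaces t
 where t.tablespace_name = s.tablespace_name
   and t.extent_management = 'DICTIONARY'
   and s.extents > 10000
 order by s.extents desc;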

- Bobby

Categories: DBA Blogs

Log Buffer #356, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-01-24 07:38

This Log Buffer edition drops right from the clouds to check what bloggers are doing with their Oracles, SQL Servers, and MySQLs. Enjoy!

Oracle:

The Oracle E-Business Suite provides a large number of diagnostic and monitoring solutions.

A free e-book: Consolidate Your Applications on Oracle WebLogic Server.

How to read the values of bind variables for currently executing statements real time monitoring kicks in.

Problems when doing a V2V or P2V of a windows server to Oracle VM.

Oracle Social Relationship Management: Access the Specialization Exams Today.

SQL Server:

Skipping Columns (& other tricks) Using BCP Format Files (SQL Spackle).

Search multiple SQL Server Error Logs at the same time.

An Overview of Project Conversion – Level 14 of the Stairway to Integration Services.

Free e-book: Performance Tuning with SQL Server Dynamic Management Views.

Calculating the Median Value within a Partitioned Set Using T-SQL.

MySQL:

Percona Server: Improve Scalability with Percona Thread Pool.

MySQL and PostgreSQL JSON functions: Do they differ much?

Select into outfile and load data infile are not complementary by default.

MySQL Workbench 6.1.1 BETA has been released.

Noel Herrick has covered some strange default behavior around nulls in MySQL.

Categories: DBA Blogs

Merge Replication Identity Range Management, an identity disorder? Part 1

Pythian Group - Fri, 2014-01-24 07:32

No, no, we are not opening a new psychology practice at Pythian; unless we consider something like picking an optimal execution plan a form of schizophrenia!

Those who work with Merge replication know that replicating articles with identity columns needs some planning, especially if the identity column is part of the primary key or if there are many subscribers. Merge replication provides automatic identity range management across replication nodes that can offset the overhead of managing them manually. This blog will briefly shed some light on identity range management and how it works. In part 2, we will explore a specific case where batch inserts can affect identity range management.

Why Identity Columns?

Identity columns are very popular (and even touted by DBAs) as primary keys rather than GUIDs, for multiple reasons, including:

- They are smaller in size; GUIDs are four times larger than INT.

- Indexes over GUIDs tend to be larger. If the GUID is the clustered index then it will affect every non-clustered index as well. Check out these excellent blogs about GUIDs as primary keys and as clustered indexes: http://www.sqlskills.com/blogs/kimberly/category/guids/

- Because they are random, index fragmentation is common on indexes over GUIDs. You can reduce this by using the NEWSEQUENTIALID() function, but it has a security disadvantage: http://technet.microsoft.com/en-us/library/ms189786.aspx

Here’s a very good blog about GUIDs comparing them to INT/BIGINT
http://blogs.msdn.com/b/sqlserverfaq/archive/2010/05/27/guid-vs-int-debate.aspx



How Does Merge Replication Manage Identity Ranges?

So, you have a few articles with identity columns that you want to replicate between a publisher and a few subscriptions in a merge replication setup, so that the subscribers can also push data changes back. Each "node" in the replication topology should be assigned a range which the user processes will use and which is then replicated to the other nodes. It is important, especially if the identity column is part of the primary key, that no duplicates happen.

Let's say we want to publish one article with an identity column as a primary key to two subscribers. If we do not assign the correct ranges to each "node" here, then it's possible that user processes will create the same identity values. Once the merge agent tries to synchronize, it will insert duplicate values into the tables on the other nodes, and it will fail if the identity column is part of a primary key.

If we designate that each "node" gets an isolated range of identity values, then the user application will write values that cannot be used elsewhere. This is simply what automatic identity range management does!

- On the publisher, an identity pool is used to assign ranges when adding new articles. There are two kinds of pools (ranges):

1) A pool assigned to the publisher and to subscribers, used directly when inserting data on those nodes.

This is controlled through the @identity_range parameter of sp_addmergearticle.

2) If any of the subscribers are SERVER subscriptions, which can republish data, they are assigned a pool (range) for their own subscribers. Usually this pool is larger than the pool in point one because it can serve multiple subscribers. This is controlled through the @pub_identity_range parameter of sp_addmergearticle.


Example: If we configure the publisher to have the range 1-1000, subscriber 1 to have 1100-2100, and subscriber 2 to have 2200-3200, then merge replication should NOT insert any duplicate values on any node. Kindly note that merge agent processes do NOT increase the identity values when synchronizing; they just insert the plain values, like the case when we use "set identity_insert ON"!

How the Ranges are Enforced

Through the use of constraints on the identity column. Starting with SQL Server 2005, each node is assigned two ranges, primary and secondary. Both primary and secondary ranges are equal in size, and once the primary range is exhausted, the secondary one is used and the merge agent assigns a new range to the subscriber (which becomes the new secondary range); this happens when automatic range management is chosen when the article is added (please see the @identityrangemanagementoption parameter of sp_addmergearticle).

If you need more documentation, please refer to the following links:
http://blogs.msdn.com/b/suhde/archive/2010/03/21/inf-automatic-identity-range-management-with-merge-replication-in-microsoft-sql-server-2005-2008.aspx

Now some hands on!

A Simple Tutorial

Let us demonstrate a very simple merge replication topology with a publisher and two subscriptions. I am doing this on a single instance on my machine. The instance has to have SQL Server Agent to run jobs, so Standard, Developer, or Enterprise edition is needed.

Let’s create three user databases and one table under the database that will be published:

Use Master
GO
Create database Pub;
GO
Create database sub1;
GO
Create database sub2;
GO
create table pub.dbo.tbl_pub (col1 int identity(1,1), col2 datetime)
GO

Enable distributor feature and create distribution database:

use master
GO
Declare @instance nvarchar(1000)

Set @instance = @@servername;

exec sp_adddistributor @distributor = @instance, @password = N''
exec sp_adddistributiondb @database = N'distribution', @log_file_size = 2, @min_distretention = 0, @max_distretention = 72, @history_retention = 48, @security_mode = 1
exec sp_adddistpublisher @publisher = @instance , @distribution_db = N'distribution', @security_mode = 1,@trusted = N'false', @thirdparty_flag = 0, @publisher_type = N'MSSQLSERVER'
GO

Create publication and add article:

use [Pub]
exec sp_replicationdboption @dbname = N'Pub', @optname = N'merge publish', @value = N'true'
GO
-- Adding the merge publication
use [Pub]
exec sp_addmergepublication @publication = N'Pub1',
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'false', @publication_compatibility_level = N'100RTM', @replicate_ddl = 1
GO
exec sp_addpublication_snapshot @publication = N'Pub1', @frequency_type = 1, @frequency_interval = 0, @frequency_relative_interval = 0,@publisher_security_mode = 1

use [Pub]
exec sp_addmergearticle @publication = N'Pub1',
@article = N'tbl_pub', @source_owner = N'dbo', @source_object = N'tbl_pub', @type = N'table', @pre_creation_cmd = N'drop', @schema_option = 0x000000010C034FD1, @identityrangemanagementoption = N'auto', @pub_identity_range = 1000, @identity_range = 100, @threshold = 80, @destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false',  @allow_interactive_resolver = N'false'
GO

The important parameters here are @pub_identity_range = 1000 and @identity_range = 100.
@identity_range is what is allocated to the publisher and subscribers (i.e., this is the range that controls the values inserted on the publisher and on the subscribers).

@pub_identity_range has a role when the subscribers also republish data, because those republishing subscribers will synchronize data with their own subscribers and eventually with the original publisher. It's not a common practice, so we won't talk about it here.

Let’s create the subscriptions and run the snapshot:

use [Pub]
exec sp_addmergesubscription @publication = N'Pub1', @subscriber = N'MYINSTANCE',
@subscriber_db = N'sub1', @subscription_type = N'Push', @sync_type = N'Automatic',
@subscriber_type = N'Global', @subscription_priority = 75, @use_interactive_resolver = N'False'

Declare @instance nvarchar(1000)
Set @instance = @@servername;

exec sp_addmergepushsubscription_agent @publication = N'Pub1', @subscriber =@instance,
@subscriber_db = N'sub1', @subscriber_security_mode = 1, @publisher_security_mode = 1;
GO

exec sp_addmergesubscription @publication = N'Pub1', @subscriber = N'MYINSTANCE',
@subscriber_db = N'sub2', @subscription_type = N'Push', @sync_type = N'Automatic',
@subscriber_type = N'Global', @subscription_priority = 75, @use_interactive_resolver = N'False'

Declare @instance nvarchar(1000)
Set @instance = @@servername;

exec sp_addmergepushsubscription_agent @publication = N'Pub1', @subscriber =@instance,
@subscriber_db = N'sub2', @subscriber_security_mode = 1, @publisher_security_mode = 1;
GO

-- Start snapshot agent

Exec pub..sp_startpublication_snapshot 'pub1'
GO
Waitfor delay '00:00:15'
GO

Get snapshot agent output:

Exec sp_MSenum_replication_agents @type = 1
GO

dbname        name                   status  publisher   publisher_db  publication  start_time             time                   duration  comments
distribution  MYINSTANCE-Pub-Pub1-2  2       MYINSTANCE  Pub           Pub1         20140118 18:05:30.750  20140118 18:05:40.523  10        [100%] A snapshot of 1 article(s) was generated.

Start Merge agent for each subscription to push initial snapshot:
-- job_name = ----

Exec msdb..sp_start_job @job_name='MYINSTANCE-Pub-Pub1-MYINSTANCE-3';
GO
Exec msdb..sp_start_job @job_name='MYINSTANCE-Pub-Pub1-MYINSTANCE-4';
GO

Waitfor delay '00:00:15'
-- Get output of the merge agent
use [master]
exec [distribution].sys.sp_replmonitorhelpmergesession @agent_name = N'MYINSTANCE-Pub-Pub1-MYINSTANCE-3', @hours = -1, @session_type = 1
GO
Session_id  Status  StartTime           EndTime             Duration  UploadedCommands  DownloadedCommands  ErrorMessages  PercentageDone  LastMessage
3           2       1/18/2014 18:14:42  1/18/2014 18:14:54  13        0                 0                   0              100             Applied the snapshot and merged 0 data change(s) (0 insert(s), 0 update(s), 0 delete(s), 0 conflict(s)).

This is where Merge replication metadata tracks the history allocated ranges:
http://technet.microsoft.com/en-us/library/ms182607.aspx

select
publication,
article,
subscriber,
subscriber_db,
range_begin,
range_end,
is_pub_range,
next_range_begin,
next_range_end
from
Distribution..MSmerge_identity_range_allocations
order by range_begin
GO
publication  article  subscriber  subscriber_db  range_begin  range_end  is_pub_range  next_range_begin  next_range_end
Pub1         tbl_pub  MYINSTANCE  Pub            1            101        0             101               201
Pub1         tbl_pub  MYINSTANCE  sub2           201          301        0             301               401
Pub1         tbl_pub  MYINSTANCE  sub1           401          501        0             501               601
Pub1         tbl_pub  MYINSTANCE  sub2           601          1601       1             1601              2601
Pub1         tbl_pub  MYINSTANCE  sub1           2601         3601       1             3601              4601

is_pub_range=1 is reserved for the subscribers in case they will re-publish the data.

Let’s check the schema on each table of the replication nodes:


select 'Publisher' [Node], name, definition from pub.sys.check_constraints where type = 'C' and parent_object_id =
(select object_id from pub.sys.objects where name ='tbl_pub')
union all
select 'Sub1', name, definition from Sub1.sys.check_constraints where type = 'C' and parent_object_id =
(select object_id from Sub1.sys.objects where name ='tbl_pub')
union all
select 'Sub2', name, definition from Sub2.sys.check_constraints where type = 'C' and parent_object_id =
(select object_id from Sub2.sys.objects where name ='tbl_pub')
GO
Node       name                                                       definition
Publisher  repl_identity_range_21A0E50F_D0F6_4E43_9FE1_349C1210247F  ([col1]>=(1) AND [col1]<=(101) OR [col1]>(101) AND [col1]<=(201))
Sub1       repl_identity_range_21A0E50F_D0F6_4E43_9FE1_349C1210247F  ([col1]>(401) AND [col1]<=(501) OR [col1]>(501) AND [col1]<=(601))
Sub2       repl_identity_range_21A0E50F_D0F6_4E43_9FE1_349C1210247F  ([col1]>(201) AND [col1]<=(301) OR [col1]>(301) AND [col1]<=(401))

Here we can see that each node has a primary and a secondary range enforced through the constraints.

Let’s add some data!


Insert into pub.dbo.tbl_pub (col2) select getdate();
Go 25
Insert into sub1.dbo.tbl_pub (col2) select getdate();
Go 25
Insert into sub2.dbo.tbl_pub (col2) select getdate();
Go 25

Start sync:


Exec msdb..sp_start_job @job_name='MYINSTANCE-Pub-Pub1-MYINSTANCE-3';
GO
Exec msdb..sp_start_job @job_name='MYINSTANCE-Pub-Pub1-MYINSTANCE-4';
GO

Count the rows in each table; each should be 75 (3 * 25 rows):
select (select count(*) from pub.dbo.tbl_pub),(select count(*) from sub1.dbo.tbl_pub), (select count(*) from sub2.dbo.tbl_pub)

75 — 75 — 75

Now, let's see what happens when we insert more rows than the range allows. Let's start with the publisher. We had inserted 25 rows there, and the primary + secondary ranges together cover 200 values.

Insert into pub.dbo.tbl_pub (col2) select getdate();
Go 177

Let’s see the new range:

publication  article  subscriber  subscriber_db  range_begin  range_end  is_pub_range  next_range_begin  next_range_end
Pub1         tbl_pub  MYINSTANCE  Pub            4601         4701       0             4701              4801

How? Well, the highest range value allocated so far was 4601 (from the republishing ranges), so the new publisher range starts there.

How the new range was allocated

Under each table, there are three triggers, one each for INSERT, UPDATE and DELETE. The INSERT trigger ***on the publisher*** has an extra part to check the identity range and use the secondary range if the primary range is exhausted. If both the primary and secondary ranges are exhausted, a new range is allocated.

Here's what the trigger code looks like:


ALTER trigger [dbo].[MSmerge_ins_21A0E50FD0F64E439FE1349C1210247F] on [Pub].[dbo].[tbl_pub] for insert   as
    declare @is_mergeagent bit, @at_publisher bit, @retcode smallint

    set rowcount 0
    set transaction isolation level read committed

        select @is_mergeagent = convert(bit, sessionproperty('replication_agent'))
        select @at_publisher = 0
    if (select trigger_nestlevel()) = 1 and @is_mergeagent = 1
        return
    if is_member('db_owner') = 1
    begin
        -- select the range values from the MSmerge_identity_range table
        -- this can be hardcoded if performance is a problem
        declare @range_begin numeric(38,0)
        declare @range_end numeric(38,0)
        declare @next_range_begin numeric(38,0)
        declare @next_range_end numeric(38,0)

        select @range_begin = range_begin,
               @range_end = range_end,
               @next_range_begin = next_range_begin,
               @next_range_end = next_range_end
            from dbo.MSmerge_identity_range where artid='21A0E50F-D0F6-4E43-9FE1-349C1210247F' and subid='95208516-4B98-451D-B264-C1F27B20449A' and is_pub_range=0

        if @range_begin is not null and @range_end is not NULL and @next_range_begin is not null and @next_range_end is not NULL
        begin
            if IDENT_CURRENT('[dbo].[tbl_pub]') = @range_end
            begin
                DBCC CHECKIDENT ('[dbo].[tbl_pub]', RESEED, @next_range_begin) with no_infomsgs
            end
            else if IDENT_CURRENT('[dbo].[tbl_pub]') >= @next_range_end
            begin
                exec sys.sp_MSrefresh_publisher_idrange '[dbo].[tbl_pub]', '95208516-4B98-451D-B264-C1F27B20449A', '21A0E50F-D0F6-4E43-9FE1-349C1210247F', 2, 1
                if @@error<>0 or @retcode<>0
                    goto FAILURE
            end
        end
    end

--

The important part is this:

if @range_begin is not null and @range_end is not NULL and @next_range_begin is not null and @next_range_end is not NULL
        begin
            if IDENT_CURRENT('[dbo].[tbl_pub]') = @range_end
            begin
                DBCC CHECKIDENT ('[dbo].[tbl_pub]', RESEED, @next_range_begin) with no_infomsgs
            end
            else if IDENT_CURRENT('[dbo].[tbl_pub]') >= @next_range_end
            begin
                exec sys.sp_MSrefresh_publisher_idrange '[dbo].[tbl_pub]', '95208516-4B98-451D-B264-C1F27B20449A', '21A0E50F-D0F6-4E43-9FE1-349C1210247F', 2, 1
--

A very important note: only members of db_owner will be able to allocate a new range. This is a limitation because, in many instances, you want to limit the permissions of your connected users. If the user inserting the data is not a member of db_owner, then the merge agent has to run, OR you need to manually allocate a new range using sp_adjustpublisheridentityrange: http://msdn.microsoft.com/en-us/library/ms181527.aspx
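
For reference, manually pushing the publisher onto a fresh range might look like the sketch below. It is only an illustration based on the documentation link above, reusing the publication and article names from this tutorial:

-- Run at the publisher (as db_owner/sysadmin) to allocate a new publisher
-- identity range without waiting for the merge agent.
USE Pub;
GO
EXEC sp_adjustpublisheridentityrange @publication = N'Pub1', @article = N'tbl_pub';
GO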

What about subscribers? Should we expect a new range to be generated if we insert more rows than the allowed range? Let’s see

Insert into sub2.dbo.tbl_pub (col2) select getdate();
Go 175

use sub2
GO
Dbcc checkident (tbl_pub , noreseed)

--Checking identity information: current identity value '401', current column value '426'.

--We just reached the limit of the secondary range, any new insert will violate the constraint unless a new range is allocated

Insert into sub2.dbo.tbl_pub (col2) select getdate();
GO


Msg 548, Level 16, State 2, Line 1
The insert failed. It conflicted with an identity range check constraint in database 'sub2', replicated table 'dbo.tbl_pub', column 'col1'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Why wasn't this handled like the publisher? Simply put, it's by design. New ranges are allocated for subscribers when they sync with the publisher. This is even reflected in the trigger code: the following part is missing from the trigger on the subscriber side.

exec sys.sp_MSrefresh_publisher_idrange '[dbo].[tbl_pub]', '95208516-4B98-451D-B264-C1F27B20449A', '21A0E50F-D0F6-4E43-9FE1-349C1210247F', 2, 1

Some people complained about this, but, no dice!

http://connect.microsoft.com/SQLServer/feedback/details/330476/identity-range-not-working-for-merge-replication-in-sql-server-2005



Conclusion

Here we learned a few points:

- Merge replication can handle identity ranges automatically.

- Starting with SQL Server 2005, there are two range pools assigned to each publisher and subscriber.

- A republishing subscriber will have a “republishing” range to use for its own subscribers.

- The size of each pool can be configured.

Some recommendations here as well:

1- You need to choose the datatype of your identity column carefully, because you do not want the subscribers to run out of range. SMALLINT, for example, allows -2^15 (-32,768) to 2^15-1 (32,767), which is quite low for any production system, while BIGINT is generous with a range of -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807); it consumes more storage though.
2- When adding articles, choose a range that can cover your operations without the need to frequently assign new ones. If you choose a datatype like BIGINT then you can be a bit generous as a safety factor.
3- Plan the subscriber syncs before the subscribers consume all their range values so that new ranges are assigned. You can monitor range consumption and kick off a sync manually (a rough monitoring sketch follows this list).
4- On the publisher, if the processes inserting data are not members of db_owner, then you will need to run the merge agent or manually run sp_adjustpublisheridentityrange.
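
Here is one way such monitoring could look. This is my own rough sketch, not part of the original tutorial; it assumes the tbl_pub example above and simply compares the current identity value on a node with the replication-generated range constraint:

-- Current identity value versus the allocated ranges on this node
SELECT IDENT_CURRENT('dbo.tbl_pub') AS current_identity_value,
       cc.name                      AS range_constraint,
       cc.definition                AS allocated_ranges
FROM sys.check_constraints cc
JOIN sys.objects o ON o.object_id = cc.parent_object_id
WHERE o.name = 'tbl_pub'
  AND cc.name LIKE 'repl_identity_range%';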

What about you folks? How do you manage identities on Merge replication? Did you face any problems?

Happy reading!

Categories: DBA Blogs

how to read the values of bind variables for currently executing statements real time monitoring kicks in

Grumpy old DBA - Thu, 2014-01-23 18:19
As usual Tanel Poder has done an excellent job of writing up this approach.

Caveats: I have tested this on 11.2.0.2.x and 11.2.0.3.x; earlier versions might have some issues, not quite sure, but hey, Tanel probably has this documented also.

This came in really handy recently looking at some SQL chewing up large amounts of LIO.

Here is Tanel's writeup: bind variable sql monitor leading over to here ...


Categories: DBA Blogs

Script to Collect Database Information Quickly

Pythian Group - Thu, 2014-01-23 15:38
As a DBA there are occasions where we are required to collect details from a database which can 
be used for further analysis. For example, getting the status of the database (i.e. File Size, 
Free Space, Status of the database, and who is the current owner of the database). This kind of 
information is required, and very useful, for auditing purposes in addition to tracking the database/database 
file's size for various reasons. I had a script which does this job for me, exactly the way 
I want it; however, I have to run it for each database separately.  

One fine day, while answering some questions on a forum, I found a script by Dan Guzman which 
retrieved most of the information I needed, and it does this for all the databases. I have 
adapted Dan G.'s script for my use and modified it by adding some more details to it. 

Please review the script below. Let me know if you like it or dislike it. I will try to 
make further improvements on this script.
--==================================================================================================================
-- Script Originally Written By: Dan Guzman | http://www.dbdelta.com/ 
-- Modified by: Hemantgiri S. Goswami 
-- Reference: 
-- http://social.msdn.microsoft.com/Forums/en/transactsql/thread/226bbffc-2cfa-4fa8-8873-48dec6b5f17f
--==================================================================================================================
DECLARE
    @SqlStatement NVARCHAR(MAX)
    ,@DatabaseName SYSNAME;

IF OBJECT_ID(N'tempdb..#DatabaseSpace') IS NOT NULL
    DROP TABLE #DatabaseSpace;

CREATE TABLE #DatabaseSpace
(
    SERVERNAME        SYSNAME,
    DBID            INT,
    DATABASE_NAME    SYSNAME,
    Recovery_Model    VARCHAR(15),
    DBOWNER            VARCHAR(25),
    LOGICAL_NAME    SYSNAME,
    FILE_PATH        SYSNAME,
    FILE_SIZE_GB    DECIMAL(12, 2),
    SPACE_USED_GB   DECIMAL(12, 2),
    FREE_SPACE_GB   DECIMAL(12, 2),
    GROWTH_OPTION    VARCHAR(15),
    MAXIMUM_SIZE    INT,
    AUTOGROWTH        INT,
    DB_STATUS        VARCHAR(100)
);

DECLARE DatabaseList CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE STATE = 0;

OPEN DatabaseList;
WHILE 1 = 1
BEGIN
    FETCH NEXT FROM DatabaseList INTO @DatabaseName;
    IF @@FETCH_STATUS = -1 BREAK;
    SET @SqlStatement = N'USE '
        + QUOTENAME(@DatabaseName)
        + CHAR(13)+ CHAR(10)
        + N'INSERT INTO #DatabaseSpace
                SELECT
                [ServerName]         = @@ServerName
                ,[DBID]             = SD.DBID
                ,[DATABASE_NAME]    = DB_NAME()
                ,[Recovery_Model]    = d.recovery_model_desc
                ,[DBOwner]             = SUSER_SNAME(sd.sid)
                ,[LOGICAL_NAME]     = f.name
                ,[File_Path]         = sf.filename
                ,[FILE_SIZE_GB]     = (CONVERT(decimal(12,2),round(f.size/128.000,2))/1024)
                ,[SPACE_USED_GB]     = (CONVERT(decimal(12,2),round(fileproperty(f.name,''SpaceUsed'')/128.000,2))/1024)
                ,[FREE_SPACE_GB]     = (CONVERT(decimal(12,2),round((f.size-fileproperty(f.name,''SpaceUsed''))/128.000,2))/1024)
                ,[Growth_Option]     = case sf.status 
                                        & 0x100000
                                        WHEN 1048576    THEN    ''Percentage''                                        
                                        WHEN 0            THEN    ''MB''
                                      END
                ,[Maximum_Size]     = SF.MaxSize
                ,[AutoGrowth(MB)]     = (SF.Growth*8/1024)
                ,[DB_Status]        =
                                    CASE SD.STATUS
                                        WHEN 0 THEN ''Normal''
                                        WHEN 1 THEN ''autoclose'' 
                                        WHEN 2 THEN ''2 not sure'' 
                                        WHEN 4 THEN ''select into/bulkcopy'' 
                                        WHEN 8 THEN ''trunc. log on chkpt'' 
                                        WHEN 16 THEN ''torn page detection'' 
                                        WHEN 20 THEN ''Normal'' 
                                        WHEN 24 THEN ''Normal'' 
                                        WHEN 32 THEN ''loading'' 
                                        WHEN 64 THEN ''pre recovery'' 
                                        WHEN 128 THEN ''recovering'' 
                                        WHEN 256 THEN ''not recovered'' 
                                        WHEN 512 THEN ''offline'' 
                                        WHEN 1024 THEN ''read only'' 
                                        WHEN 2048 THEN ''dbo use only'' 
                                        WHEN 4096 THEN ''single user'' 
                                        WHEN 8192 THEN ''8192 not sure'' 
                                        WHEN 16384 THEN ''16384 not sure'' 
                                        WHEN 32768 THEN ''emergency mode'' 
                                        WHEN 65536 THEN ''online'' 
                                        WHEN 131072 THEN ''131072 not sure'' 
                                        WHEN 262144 THEN ''262144 not sure'' 
                                        WHEN 524288 THEN ''524288 not sure'' 
                                        WHEN 1048576 THEN ''1048576 not sure'' 
                                        WHEN 2097152 THEN ''2097152 not sure'' 
                                        WHEN 4194304 THEN ''autoshrink'' 
                                        WHEN 1073741824 THEN ''cleanly shutdown''
                                    END 
            FROM SYS.DATABASE_FILES F
            JOIN 
            MASTER.DBO.SYSALTFILES SF
            ON F.NAME = SF.NAME
            JOIN 
            MASTER.SYS.SYSDATABASES SD
            ON 
            SD.DBID = SF.DBID
            JOIN
            MASTER.SYS.DATABASES D
            ON 
            D.DATABASE_ID = SD.DBID
            AND DATABASEPROPERTYEX(SD.NAME,''Updateability'') <> ''OFFLINE''
            ORDER BY [File_Size_GB] DESC';
    EXECUTE(@SqlStatement);

END
CLOSE DatabaseList;
DEALLOCATE DatabaseList;

SELECT * FROM #DatabaseSpace;

DROP TABLE #DatabaseSpace;
GO
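
As a small usage sketch (my own, not part of the original script): if you only want files that are nearly full, you could swap the plain SELECT * above, before the temp table is dropped, for something like this.

SELECT DATABASE_NAME, LOGICAL_NAME, FILE_SIZE_GB, FREE_SPACE_GB, DB_STATUS
FROM #DatabaseSpace
WHERE FREE_SPACE_GB < 1      -- files with less than 1 GB free
ORDER BY FREE_SPACE_GB ASC;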

-- Hemantgiri S. Goswami |Linkedin | Twitter

Categories: DBA Blogs

Log Buffer: Marching from 2013 to 2014

Pythian Group - Thu, 2014-01-23 15:15

Pythian’s cherished Log Buffer carnival is celebrating the year 2013, while looking passionately towards yet another promising year for 2014. So much digital water has flown under the bridges in 2013, and Log Buffer covered it all from the eyes of bloggers across database technologies like Oracle, SQL Server, MySQL, and big data.

Almost 50 Log Buffer episodes in 2013 threw light on new software releases, cool features, obscure technical corners, conference stories, database news, quirks, nifty tricks, trivia, and much more.

Log Buffer remained focused on every facet of database technology, and while doing so, noticed that there were a few salient technologies which interested bloggers and their readers a lot. Some of these technologies and areas were old and some new.

One evergreen area of interest for the database bloggers is discussing performance tuning. This is the one area of databases where the interest carried on from the past year, remaining hot throughout the year and will be the center of discussion next year.

The hot topics this year among the database bloggers will include big data, Cloud Computing, Engineered Systems, and the new version of Oracle database 12c.  Regardless of the database vendor, bloggers loved to explore, discuss, share, and present about these brave new bleeding edge technologies which are still evolving.

One prime takeaway from the Log Buffer carnival is the unique ability of the database bloggers to set the tone of the technology. Database vendors take these bloggers very seriously and improve their offerings based on the blog posts and resulting discussions. There is so much to learn, and database blog posts are a quick and efficient way to stay on top of the ever-changing and ever-evolving data oriented technologies.

Log Buffer is all geared up to keep encompassing all things data related in 2014. Stay tuned.

Categories: DBA Blogs

Configure OEM12c to perform checkups on EXADATA (EXACHK)

DBASolved - Wed, 2014-01-22 21:10

Warning: This one may be longer than normal

How many times have you run an Exachk, produced the text file, and then had to either read the file on the compute node or copy it to your local machine?  Well, there is another way to view the Exachk report; it can even be run on a schedule so you have current Exachk information on a regular basis.  What I'm talking about is using Oracle Enterprise Manager 12c for scheduling and producing the Exachks.

If you look at the Oracle Enterprise Manager 12c (OEM) documentation, you will eventually find information on the plug-in for "Oracle Engineered System Healthchecks".  I'll make it easy for you, here it is.  This plug-in allows OEM to process the XML output from the Exachk tool, which is part of OEM by default.  This approach can be used to automate the assessment of Exadata V2, X2-2, X2-8, SPARC Supercluster, Exalogic, Exalytics, and Availability machine systems for known configuration problems and best practices.

So why would you want to use OEM to review Exachk results?  The OEM interface presents the output in a format that is easy to review.  It allows you to set up metrics against the report to keep track of the health of your engineered system.  Lastly, the report can be scheduled to run on a predetermined timeframe so you don't forget to run it.

Prerequisites

As with anything Oracle, there are prerequisites that need to be met.  For the healthcheck plug-in there are four simple ones.

  • Review Oracle Exadata Best Practices and Oracle Database Machine Monitoring Best Practices (757552.1/1110675.1)
  • Verify and enable the Exachk Tool.  The plug-in supports Exachk 2.2.2 and later
  • Operating systems and platforms (docs has the list, here)
  • Oracle Enterprise Manager 12c (12.1.0.3)
Plug-In

Once the prerequisites have been confirmed, you will need to deploy the plug-in.  Before the plug-in can be deployed, in most environments, it needs to be applied to the software library.  Downloading or applying the plug-in to the software library is done from the Setup –> Extensibility –> Self Update menu.  Figure 1 shows that the plug-in has already been downloaded and applied to the software library.

Figure 1: Plug-in applied to software library


Add Healthcheck Targets

Ever wonder what the "Add Targets Declaratively by Specifying Target Monitoring Properties" option is for when adding targets manually?  Well, this is one of those times where you get to use it.  When adding Healthcheck targets, you will use the Setup –> Add Targets –> Add Targets Manually menu items.  When you reach the Add Targets Manually page, select the "Add Targets Declaratively by Specifying Target Monitoring Properties" option; then select "Oracle Engineered System Healthchecks" from the drop-down menu, and add the monitoring agent for one of the compute nodes in the Exadata machine.  Lastly, click the Add Manually button.  Once all this is completed, you should end up on the "Add Oracle Engineered System Healthchecks" page (Figure 2).

Figure 2: Add Oracle Engineered System Healthchecks page


A few things to notice on this page.  First, the Target Name: this can be any name you want to give the healthcheck target.  It will be used to look the target up on the All Targets page.  Other than All Targets, there is no direct way to get to the healthcheck targets (I've looked, still searching).

In the properties dialog area, notice the Max Interval Allowed.  This is the number of days between consecutive Exachk runs.  You can make this longer or shorter, depending on your environment.  31 is the default and recommended by Oracle.

The Exachk Results Directory is where OEM will read the Exachk information from.  In most environments, I usually use /home/oracle.  Fill in what you need.

Prerequisites for Exachk Tool

I know, I know, you thought you took care of the prerequisites earlier before starting.  Well, this is another set of prerequisites.  This time, you need to make changes to the .profile (.bash_profile) for the oracle user.  There are two entries that you need to add to the .profile.  Table 1 below provides the details and values that should be set for these parameters.

Table 1: Environment parameters for profiles

Parameter               Value                        Why
RAT_COPY_EM_XML_FILES   1                            Setting this environment variable will enable copying of results between all nodes in the cluster
RAT_OUTPUT              <exachk output directory>    Location where results will be copied to for all nodes in the cluster

Run Exachk Tool

Here comes the fun part; you get to run the Exachk tool.  When I say run, I mean configure it to run in the background on the Exadata machine.  What, there is a way to keep Exachk running in the background?  Yep!

The way this is done is by using the daemon (-d start) option to keep it running (Listing 1).  Although the Exachk process will start running as a daemon, you will still be prompted for the required information.  Once you have provided all the information (passwords and such), the daemon will remember it until the daemon is stopped.

Listing 1: Start Exachk as daemon

./exachk -d start

Now with Exachk running in the background as a process, how do you get Exachk to run?  Well, you initially have to run Exachk.  When you do, everything you entered at the prompts previously will be used.

To run Exachk in silent mode, use the -daemon option.  For the purpose of OEM, it is best to gather all the information the report can obtain.  From the command prompt run Exachk (Listing 2).

Listing 2: Run Exachk silently

./exachk -daemon -a

Metrics Collected

Now that the Exachk is done running, you want to see the results in OEM.  In order to do this, you need to tell the agent to upload the results.  Over time, the results will be uploaded; however, on the initial setup, you will want to run the upload process manually.  Listing 3 provides the command used to force the upload.

Listing 3: Force agent to upload Exachk results

./emctl control agent runCollection <target_name>:oracle_exadata_hc ExadataResults

Checking to see what is in OEM

Now that everything has been configured, Exachk has run, and the management agent has uploaded the results, where can you see them?  The easiest way is from All Targets –> Engineered Systems –> Oracle Engineered System Healthchecks.  Then you need to click on the target name.

Once you are on the healthchecks page, you will notice a lot of information.  There are 3 main areas: Summary, Incidents and Problems, and Results Summary.  This is all the information that the Exachk report generated and the management agent uploaded.  Figure 3 is a small snapshot of the overall screen.

Figure 3: partial view of healthchecks page


Summary

Once again, Oracle Enterprise Manager 12c has proven to be a tool that can do more than is expected.  Although those of us command-line junkies who like to run reports and various other items from the command line may look down on this, the Healthcheck plug-in is a cool way to review Exachk information.  It is also a slick way to remember to run it each month... because it is scheduled.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Exadata, OEM
Categories: DBA Blogs

Java Performance: The Definitive Guide By Scott Oaks

Surachart Opun - Wed, 2014-01-22 12:26
Java is a programming language and computing platform. You will see lots of applications and websites that are written in Java. Java is fast, secure and reliable. How about performance? Java performance is a matter of concern because lots of business software has been written in Java.

I'd like to mention a book titled Java Performance: The Definitive Guide by Scott Oaks. Readers will learn about the world of Java performance, and it will help them get the best possible performance from a Java application.
In the book, Chapter 2 covers testing Java applications, including the pitfalls of Java benchmarking, and Chapter 3 gives an overview of some of the tools available to monitor Java applications.
If you are someone who is interested in Java or develops applications in Java, performance is very important to you. This book focuses on how to best use the JVM and Java platform APIs so that programs run faster. If you are interested in improving your Java applications, this book can help.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Bug 13930580 Workaround Effective

Bobby Durrett's DBA Blog - Mon, 2014-01-20 14:24

We put the workaround for Bug 13930580 in on Friday and the results have been much better than I expected.  Here is when the workaround went in as reported in the alert log:

Fri Jan 17 18:38:26 2014
ALTER SYSTEM SET _use_adaptive_log_file_sync='FALSE' SCOPE=BOTH;
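
The per-snapshot averages below presumably come from AWR. A rough sketch of how such numbers can be pulled (my own query, not the author's exact script; the DBA_HIST views require Diagnostics Pack licensing, and this simple form assumes a single instance):

select sn.end_interval_time,
       e.total_waits
         - lag(e.total_waits) over (order by e.snap_id)        number_of_waits,
       (e.time_waited_micro
         - lag(e.time_waited_micro) over (order by e.snap_id))
       / nullif(e.total_waits
         - lag(e.total_waits) over (order by e.snap_id), 0)    ave_microseconds
  from dba_hist_system_event e
  join dba_hist_snapshot sn
    on sn.snap_id = e.snap_id
   and sn.dbid = e.dbid
   and sn.instance_number = e.instance_number
 where e.event_name = 'log file sync'
 order by e.snap_id;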

Here are the log file sync average times.  Notice how they go down after 7 pm Friday:

END_INTERVAL_TIME          number of waits ave microseconds
-------------------------- --------------- ----------------
17-JAN-14 12.00.49.669 AM            78666       15432.6923
17-JAN-14 01.00.27.862 AM            13380       15509.9778
17-JAN-14 02.00.11.834 AM            15838       17254.2949
17-JAN-14 03.00.56.429 AM            10681       29832.4282
17-JAN-14 04.00.39.502 AM            26127       14880.2097
17-JAN-14 05.00.22.716 AM            21637       10952.5322
17-JAN-14 06.00.01.558 AM            67162       9756.44207
17-JAN-14 07.00.45.358 AM           123705       11755.6535
17-JAN-14 08.00.29.811 AM           223799       11341.2467
17-JAN-14 09.00.19.275 AM           319051       13651.4647
17-JAN-14 10.00.09.089 AM           507335       13991.5543
17-JAN-14 11.00.59.502 AM           583835       11609.8432
17-JAN-14 12.00.44.044 PM           627506       10857.4556
17-JAN-14 01.00.30.133 PM           610232       11233.9348
17-JAN-14 02.00.18.961 PM           664368       10880.3887
17-JAN-14 03.00.05.694 PM           647896       9865.96367
17-JAN-14 04.00.44.694 PM           538270       10425.6479
17-JAN-14 05.00.24.376 PM           343863       9873.98468
17-JAN-14 06.00.11.481 PM           169654       9735.80996
17-JAN-14 07.00.03.087 PM            87590       7046.92633
17-JAN-14 08.00.52.390 PM            69297       2904.62955
17-JAN-14 09.00.29.888 PM            38244       3017.15969
17-JAN-14 10.00.09.436 PM            28166       3876.77469
17-JAN-14 11.00.54.765 PM            23220       11109.3063
18-JAN-14 12.00.33.790 AM            13293       9749.99428
18-JAN-14 01.00.17.853 AM            15332       3797.76839
18-JAN-14 02.00.56.050 AM            16137       6167.15127
18-JAN-14 03.00.33.908 AM            14621       9664.63108
18-JAN-14 04.00.12.383 AM             9708        6024.9829
18-JAN-14 05.00.56.348 AM            14565       3618.76938
18-JAN-14 06.00.39.683 AM            14323       3517.45402
18-JAN-14 07.00.29.535 AM            38243       3753.46422
18-JAN-14 08.00.16.778 AM            44878       2280.22924
18-JAN-14 09.00.01.176 AM            73082       9689.52484
18-JAN-14 10.00.45.168 AM            99302       2094.03293
18-JAN-14 11.00.35.070 AM           148789       1898.40424
18-JAN-14 12.00.23.344 PM           151780       1932.64997
18-JAN-14 01.00.08.631 PM           186040       2183.18563
18-JAN-14 02.00.59.839 PM           199826       2328.87331
18-JAN-14 03.00.45.441 PM           210098        1335.9759
18-JAN-14 04.00.36.453 PM           177331       1448.39219
18-JAN-14 05.00.21.669 PM           150837       1375.07256
18-JAN-14 06.00.59.959 PM           122234       1228.21767
18-JAN-14 07.00.37.851 PM           116396       1334.64569
... skip a few to find some higher load times...
19-JAN-14 10.00.01.434 AM           557020       2131.02737
19-JAN-14 11.00.42.786 AM           700781       1621.16596
19-JAN-14 12.00.31.934 PM           715327       1671.72335
19-JAN-14 01.00.10.699 PM           718417       1553.98083
19-JAN-14 02.00.51.524 PM           730149        2466.6241
19-JAN-14 03.00.38.088 PM           628319       2465.45829

When the system is busy we are now seeing less than 3000 microseconds = 3 milliseconds for log file sync, which is good.  Before the change we were seeing 10 milliseconds or more, which isn't that great.
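For reference, here is a sketch of the kind of query against the standard DBA_HIST (AWR) views that could produce numbers like the ones above; it is not necessarily the exact script used for the listing, and it assumes a single instance with no restart inside the range:

-- average 'log file sync' per AWR snapshot (deltas between snapshots)
select sn.end_interval_time,
       e.total_waits
         - lag(e.total_waits) over (order by e.snap_id) "number of waits",
       (e.time_waited_micro
         - lag(e.time_waited_micro) over (order by e.snap_id))
         / nullif(e.total_waits
         - lag(e.total_waits) over (order by e.snap_id), 0) "ave microseconds"
from dba_hist_system_event e
join dba_hist_snapshot sn
  on sn.snap_id = e.snap_id
 and sn.dbid = e.dbid
 and sn.instance_number = e.instance_number
where e.event_name = 'log file sync'
order by sn.snap_id;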

Oracle support has been pushing this for a long time but our own testing wasn’t able to recreate the problem.  Have to hand it to them.  They were right!
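If you want to confirm that the workaround is in place, the hidden parameter can be read back from the X$ views as SYS. This is only a sketch; underscore parameters and the X$ layout are unsupported and version dependent:

-- current value of _use_adaptive_log_file_sync (run as SYS)
select i.ksppinm  parameter_name,
       v.ksppstvl current_value
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_use_adaptive_log_file_sync';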

Here is a link to my previous post on this issue: url

- Bobby

Categories: DBA Blogs

2013 in Review — SQL Server

Pythian Group - Mon, 2014-01-20 13:24

It’s that time of year, when I take some time away from work, hide out, and reflect on the past year.

2013 was a big year for Microsoft, particularly SQL Server, Windows Azure, and HDInsight. While I’d love to recap every Microsoft moment that excited me, there are too many to count, so I’ll just list the highlights and leave you with some things to look forward to in 2014.

HDInsight

HDInsight went from a selective beta to a general release. Microsoft’s Hadoop partner, HortonWorks, released version 2.0. Although Windows Azure has not upgraded to HDP 2.0 yet, I’m told it will upgrade soon. I think there is a lot of potential to use Hadoop in conjunction with SQL Server and this is something I’ll be experimenting with soon.

SQL Server 2012

Microsoft released several patches for SQL Server 2012. I’ve seen a lot of organizations begin migrating to the platform, which is excellent. There is a lot of interest in AlwaysOn Availability Groups, which is one of my favorite new(er) technologies in SQL Server.

SQL Server 2014

Microsoft announced SQL Server 2014, and then proceeded to release not one but two community preview versions (CTP1 and CTP2) to the public. A full release (RTM) is expected sometime in 2014. I’m very excited about SQL Server 2014 and its full list of features. My favorites are the expanded replicas for AlwaysOn Availability Groups, the change in how SQL Server responds if a replica is offline, and in-memory tables. The SQL Server 2014 CTP releases were placed in the image gallery, making it easy to explore the new product in minutes. There are also some new features slated to be added to the full product that will solve a lot of technical challenges for DBAs.

Windows Azure upgrades

Microsoft continued to enhance Windows Azure, and also announced support for Oracle databases. I’m a regular Windows Azure user and a big fan of the platform. Gartner predicts that cloud adoption will continue to grow and accelerate, so it’s time to accept the shift. If you haven’t checked out Windows Azure, I recommend you spend some time on the platform. It’s a strong player in the market and will only get better in the future.

These are a few of my highlights. What are yours?

Categories: DBA Blogs

OTNYathra 2014

Hemant K Chitale - Mon, 2014-01-20 10:18
The February 2014 OTNYathra in India.


Categories: DBA Blogs

Speed up Import with TRANSFORM=DISABLE_ARCHIVE_LOGGING in #Oracle 12c

The Oracle Instructor - Mon, 2014-01-20 07:47

A very useful 12c New Feature is the option to suppress the generation of redo during Data Pump import. I was talking about it during my recent 12c New Features class in Finland and would like to share that info with the Oracle Community here. My usual demo user ADAM owned a table named BIG with one index on it. Both were in LOGGING mode when I exported them. The Data Pump export did not use any 12c New Feature and is therefore not shown.

SQL> select log_mode,force_logging from v$database;

LOG_MODE     FORCE_LOGGING
------------ ---------------------------------------
ARCHIVELOG   NO

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                             0                         0               0          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The database is not in force logging mode; otherwise the new Data Pump parameter would be ignored. Archive log mode was just turned on, so there are no archived log files yet. First I will show the redo-generating way to import, which is the default, and afterwards the new feature for comparison.

SQL> host impdp adam/adam directory=DPDIR tables=big

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:50:42 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:54:32 2014 elapsed 0 00:03:48

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The conventional way with redo generation took almost 4 minutes and generated 12 archive logs – my online logs are 100 megabytes in size. Now let’s see the new feature:

SQL> drop table big purge;

Table dropped.

SQL> host impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:57:19 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big transform=disable_archive_logging:y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:58:21 2014 elapsed 0 00:01:01

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

SQL> select table_name,logging from user_tables;

TABLE_NAME                                                   LOG
------------------------------------------------------------ ---
BIG                                                          YES

SQL> select index_name,logging from user_indexes;

INDEX_NAME                                                   LOG
------------------------------------------------------------ ---
BIG_IDX                                                      YES

Note that the segment attributes are not permanently changed to NOLOGGING by the Data Pump import.
The comparison shows a striking improvement in run time, and because the second run did not generate additional archive logs, we still see the same number of archived log files as before the second import.

Another option is to suppress redo generation only for the import of indexes, in my example with the command

impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y:index

That is a safer choice because indexes are always reproducible. Keep in mind that any NOLOGGING operation is a risk; that is the price to pay for the speedup.
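One way to manage that risk is to check which datafiles now carry unrecoverable (NOLOGGING) changes right after such an import and back them up straight away. A quick sketch using the UNRECOVERABLE_* columns of V$DATAFILE:

-- datafiles that have had NOLOGGING (unrecoverable) changes; consider a fresh backup
select file#, name, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null
order  by unrecoverable_time;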
As always: Don’t believe it, test it! :-)


Tagged: 12c New Features
Categories: DBA Blogs

wow this looks interesting and not too far away world Information Architecture day 2014 ( Ann Arbor MI )

Grumpy old DBA - Sun, 2014-01-19 12:37
Just saw this posted ... hmm road trip time from Cleveland on Saturday Feb 15?  Only possibly dicey thing is driving/traveling on Saturday in mid February in the Midwest you can never predict what that is going to be like.  But hey 4 wheel drive in the Honda CRV so ... time to check schedule and see who else might want to attend.

See this link for the Ann Arbor event World Information Architecture Day 2014 Ann Arbor MI here is the overall site maybe there is an event planned near you WIAD 2014 ?

Categories: DBA Blogs

Yet another way to monitor Oracle GoldenGate with OEM12c

DBASolved - Sat, 2014-01-18 22:06

In a recent post on monitoring Oracle GoldenGate, I talked about how to configure the JAGENT to use Oracle Enterprise Manager 12c to monitor Oracle GoldenGate.  If you would like to review that post, you can find it here.  For this post, I’ll show you how you can use Metric Extensions within Oracle Enterprise Manager 12c to monitor Oracle GoldenGate. 

Note: Another good post on this topic can be found from my friend Maaz.  Here is his post.

As Maaz and I were talking about the finer aspects of the JAGENT, we started to talk about Metric Extensions to monitor Oracle GoldenGate.  This got me to thinking how this could be accomplished. 

One of the first things you need to decide before you can set up a Metric Extension is how you are going to monitor the processes. Since most of the Oracle GoldenGate instances that I monitor are on Unix/Linux, I decided to use a Perl script to grab the basic information associated with Oracle GoldenGate. Essentially, I just wanted to see the current status of the processes, which is what GGSCI reports with the INFO ALL command (Figure 1).

Figure 1: Info All

As you can tell, all of the Oracle GoldenGate processes are running. To monitor these processes, they need to be put into a format that Oracle Enterprise Manager 12c can understand. To do this, I used a Perl script that turns the output in Figure 1 into a pipe-delimited string. The Perl script I used is shown in Listing 1.

Listing 1: Perl Script
#!/usr/bin/perl -w
#
# Report the status of GoldenGate extract/replicat processes
# as pipe-delimited lines for OEM 12c.
use strict;
use warnings;

# Static variables

my $gghome = "/oracle/app/product/12.1.2/ggate";

# Program: capture the GGSCI "info all" output

my @buf = `$gghome/ggsci << EOF
info all
EOF`;

# Keep only the EXTRACT/REPLICAT lines and turn whitespace runs into pipes
foreach (@buf)
{
        if(/EXTRACT/||/REPLICAT/)
        {
                s/\s+/\|/g;
                print $_."\n";
        }
}

The Perl script in Listing 1 reads the output of the INFO ALL command into a buffer. For every line in the buffer that contains EXTRACT or REPLICAT, it replaces runs of whitespace with a pipe (|) and prints the result. When the script is run, you should get output similar to Figure 2.

Figure 2: Output from Perl script

Now that the output is in a pipe (|) format, I can use this in the Metric Extension.

If you have never used Metric Extensions, they are a great way to extend Oracle Enterprise Manager 12c and monitor things at a more granular level. The Oracle documentation on Metric Extensions is very good, and they are also covered in the Expert Oracle Enterprise Manager 12c book from Apress.

To begin setting up a Metric Extension for monitoring Oracle GoldenGate, go to the Metric Extensions page within Oracle Enterprise Manager 12c (Enterprise –> Monitoring –> Metric Extensions). Once on the page, you are presented with a Create button about halfway down the page (Figure 3). The Create option can also be accessed from the Actions drop-down.

Figure 3: Metric Extensions Page


Once you click the Create button, you will be taken to a wizard to develop the Metric Extension. On the General Properties page, fill out the required fields. Since you are going to monitor this with a Perl script, the Target Type will be Host. Then name the Metric Extension (ME$) and provide the display name; this is the name that will show up under All Metrics later. Finally, from the Adapter drop-down, select OS Command – Multiple Columns so that the metric will read the pipe (|) delimiter and break the string into columns. Figure 4 provides a screenshot.

Figure 4: General Properties

On the next screen, you need to provide the command that will run the script and the script that you want run. The delimiter field is a pipe (|) by default. Fill in the required information (Figure 5).

Figure 5: Adapter

The Columns screen is next. This screen is where you set up the names of the columns and map what goes where. You can also set the Alert Threshold for the column that will be monitored. One key thing to point out here: you will need a PK for the data, that is, a unique way of identifying an Oracle GoldenGate process. For this configuration, the PK is the combination of Program and Object. Figure 6 illustrates how I have the columns configured for this setup.

Figure 6: Columns

Once you have the columns established, Oracle Enterprise Manager 12c will ask you what Credentials you want to use with this metric.  The default monitoring credentials are fine (Figure 7).

Figure 7: Credentials

Finally, you can test the Metric Extension and see how it is going to look. This is your chance to make sure that all the values from the script are returned and appear as you expect. Figure 8 shows a successful test run. If you have everything configured as expected, your output should come up in the Test Results without the pipe (|) signs. If an error message is returned, investigate why you received it.

In my testing, the errors I received were due to the PK issue with the columns when defined for the metric.

Figure 8: Test

Lastly, when you make it to the review screen, check everything and click Finish. You will then be taken back to the Metric Extension landing page, from which you can deploy the metric to the target it should be associated with.

To deploy the Metric Extension, use the Actions menu and select Deploy to Targets. Select the target to which it should be deployed (Figure 9).

Figure 9: Deploy to Targets….

Once your Metric Extension has been deployed to the selected target, you can go to that target and check All Metrics to see how it looks. For a host, do this by going to Targets –> Hosts and selecting the host you want; then, from the Host menu, select Monitoring –> All Metrics. Once on the All Metrics landing page, look for the name of the metric in the tree and click on it. You will be presented with the current status of the metric in the right-hand pane (Figure 10).

Figure 10: Metric Extension status in All Metrics

At this point, the Metric Extension for monitoring Oracle GoldenGate processes has been successfully developed, tested and deployed to a target. The next step is to make sure you are notified when something happens to a process. This is done with Notification Rules (not covered here). Once the notification rules are established, monitoring Oracle GoldenGate using Metric Extensions is a great way to work around any JAGENT issues you may have.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, OEM
Categories: DBA Blogs

Log Buffer #355, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-01-17 07:55

It’s no secret anymore that social networks like Facebook are fast losing their hyperactive charm. But ask anybody about blogging, and it’s still going strong, with a future as solid as rock. Innovation, creation, and quality content with public engagement are what make it last. This Log Buffer acknowledges that fact for the database bloggers.
Oracle:

The  Critical Patch Update (CPU) for January 2014 was released on January 14, 2014.  Oracle strongly recommends applying the patches as soon as possible.

Inject filesystem I/O latency. Oracle Linux 5 and 6

What to expect at the Oracle Value Chain Summit

Get a Closer Look at Oracle Database Appliance

EBS Release 12 Certified with Mac OS X 10.9 with Safari 7 and JRE 7

SQL Server:

Calculating Duration Using DATETIME Start and End Dates (SQL Spackle)

Learn SQL Server in Arabic

There is nothing mysterious about SQL Server backups. They are essential, however you use your databases.

Stairway to Advanced T-SQL Level 1: The TOP Clause

Free eBook: Troubleshooting SQL Server: A Guide for the Accidental DBA

MySQL:

Multi-master, multi-region MySQL deployment in Amazon AWS

In fact, Docker is a wrapper around LXC. It is fun to use. Docker has the philosophy to virtualize single applications using LXC.

Analyzing WordPress MySQL queries with Query Analytics

MySQL Cluster: The Latest Developments in Management, Free webinar

MySQL Proxy lives – 0.8.4 alpha released

Categories: DBA Blogs

Getting started with 12c

Bobby Durrett's DBA Blog - Thu, 2014-01-16 15:12

Back in July I finally got Oracle 12c installed on my laptop as documented in this post: url

But, that was as far as I got.  The last thing I did was get an error message creating a user.  Well, I figured out how to create a new user and a few other things.  I’m working with the ORCL database that comes with the install and all the parameters, etc. that come with it.

Evidently the default install comes with a PDB called PDBORCL.  So, I have two tnsnames entries, one for the parent CDB and one for the child PDB, and they look like this:

ORCL.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.133.128)
(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.mydomain.com)
    )
  )

PDB.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.133.128)
(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdborcl.mydomain.com)
    )
  )

I guess the service name has the name of the PDB in it.
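Since the PDB is just exposed as a database service, it should also be reachable without a tnsnames entry, for example with EZCONNECT syntax. A quick sketch using the same host, port and service name as above (assuming EZCONNECT is allowed in sqlnet.ora, which it usually is by default):

SQL> connect system/password@//192.168.133.128:1521/pdborcl.mydomain.com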

So, if I connect as SYSTEM/password@orcl I’m connected to the CDB, and if I connect as SYSTEM/password@pdb I’m connected to the PDB.  When I connected to the PDB I could create a new user without getting an error.
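For what it’s worth, the difference seems to be that the root container only accepts common user names (prefixed with C## by default), which would explain an error on a plain CREATE USER there. A minimal sketch (the user names are just examples):

-- connected to the root container (CDB$ROOT): a plain name fails with
-- ORA-65096, so a common user needs the C## prefix
create user c##demo identified by demo;

-- connected to the PDB (e.g. via the pdb tns entry): a local user works
create user demo identified by demo;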

But, when I first tried connecting to the PDB I got this error, even though the database was up:

ORA-01033: ORACLE initialization or shutdown in progress

So, to bring the database up (by the way, I’m on 64 bit Linux) after booting the Linux VM the following steps were required:

lsnrctl start

sqlplus / as sysdba

startup

alter session set container=PDBORCL;

startup

Probably this could all be scripted but that’s what I did today.
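One way the PDB part could be scripted (a sketch, untested on this install) is a system trigger in the CDB that opens all the pluggable databases whenever the instance starts:

-- as SYS in the CDB root: open every PDB automatically after startup
create or replace trigger open_all_pdbs
  after startup on database
begin
  execute immediate 'alter pluggable database all open';
end;
/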

Interestingly there is only one pmon:

$ ps -ef | grep pmon
oracle   29495     1  0 06:52 ?        00:00:00 ora_pmon_orcl

But you get different results when you query dba_data_files depending on whether you are connected to the CDB or the PDB:

CDB

FILE_NAME                                 
------------------------------------------
/u01/app/oracle/oradata/orcl/system01.dbf 
/u01/app/oracle/oradata/orcl/sysaux01.dbf 
/u01/app/oracle/oradata/orcl/undotbs01.dbf
/u01/app/oracle/oradata/orcl/users01.dbf

PDB

FILE_NAME                                                     
--------------------------------------------------------------
/u01/app/oracle/oradata/orcl/pdborcl/system01.dbf             
/u01/app/oracle/oradata/orcl/pdborcl/sysaux01.dbf             
/u01/app/oracle/oradata/orcl/pdborcl/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/orcl/pdborcl/example01.dbf

So, I guess each PDB has its own SYSTEM and SYSAUX tablespaces?
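The container-wide picture is easier to see from the root using the CDB_* views, which add a CON_ID column. A quick sketch:

-- from the CDB root: datafiles for every container, tagged with CON_ID
select con_id, tablespace_name, file_name
from   cdb_data_files
order  by con_id, file_name;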

Lastly when running my scripts to poke around I edited my sqlplus header script to report which container you are in.  It looks like this now:

set linesize 32000
set pagesize 1000
set long 2000000000
set longchunksize 1000
set head off;
set verify off;
set termout off;

column u new_value us noprint;
column n new_value ns noprint;
column c new_value cs noprint;

select name n from v$database;
select user u from dual;
SELECT SYS_CONTEXT('USERENV', 'CON_NAME') c FROM DUAL;

set sqlprompt &ns:&cs:&us>

set head on
set echo on
set termout on
set trimspool on

spool &ns..&cs..logfilename.log

Replace “logfilename” with whatever you want for your script name.

It puts out a prompt like this:

CDB

ORCL:CDB$ROOT:SYSTEM>

PDB

ORCL:PDBORCL:SYSTEM>

And the log file names:

ORCL.CDB$ROOT.sessions.log

ORCL.PDBORCL.sessions.log

Anyway, this is just a quick post about my first attempts to get around in 12c.

- Bobby

Categories: DBA Blogs