
DBA Blogs

Identify difference in CBO parameters across two executions of a SQL

Oracle in Action - Fri, 2015-04-10 01:16


Various initialization parameters influence the optimizer's choice of execution plan. Multiple executions of the same SQL with different optimizer parameter values may employ different execution plans and hence differ in performance. In this post, I will demonstrate how we can identify the CBO parameters which had different values during two executions of the same SQL.

I will explore two scenarios:

  • the cursors are still available in shared pool
  • the cursors have been flushed to AWR

Overview:

  •  Flush shared pool
  •  Execute the same statement in two sessions with different values of OPTIMIZER_MODE
  •  Verify that the cursors for the statement are still in the shared pool
  •  Find out the execution plans employed by two cursors
  •  Find out the optimizer parameters which had different values during two executions of the SQL
  •  Take snapshot so that cursors are flushed to AWR
  •  Verify that the cursors for the statement are in AWR
  •  Find out the execution plans employed by two cursors
  •  Find out the value of parameter OPTIMIZER_MODE during two executions of the SQL

Implementation

- Flush shared pool

SQL>conn / as sysdba

alter system flush shared_pool;

- Execute the same statement in two sessions with different values of OPTIMIZER_MODE

– Note that there is a difference in the elapsed time across the two executions of the same statement

– Session – I –

SQL>alter session set optimizer_mode = 'ALL_ROWS';

set timing on
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id;
set timing off

918843 rows selected.

Elapsed: 00:01:19.57

-- Session - II  --
SQL>alter session set optimizer_mode = 'FIRST_ROWS_1';

set timing on
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id;
set timing off

918843 rows selected.

Elapsed: 00:01:16.63

 – Verify that the cursors for the statement are still in the shared pool and find out the SQL_ID of the statement

SQL>set pagesize 200
col sql_text for a50

select sql_id, child_number, sql_text
from v$sql
where sql_text like '%select  s.quantity_sold, s.amount_sold%';

SQL_ID CHILD_NUMBER SQL_TEXT
------------- ------------ ----------------------------------------------
394uuwwbyjdnh 0 select sql_id, child_number, sql_text from v$sql w
here sql_text like '%select s.quantity_sold, s.am
ount_sold%'

6y2xaw3asr6xv 0 select s.quantity_sold, s.amount_sold, p.prod_nam
e from sh.sales s, sh.products p where s.prod_id =
p.prod_id

6y2xaw3asr6xv 1 select s.quantity_sold, s.amount_sold, p.prod_nam
e from sh.sales s, sh.products p where s.prod_id =
p.prod_id

 – Find out the execution plans employed by two cursors

Note that

  • child 0 uses a hash join whereas child 1 employs a nested loops join
  • Outline Data shows that optimizer_mode was ALL_ROWS during the first execution and FIRST_ROWS(1) during the second.
SQL>select * from table (dbms_xplan.display_cursor('6y2xaw3asr6xv', 0, format => 'TYPICAL +OUTLINE'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 6y2xaw3asr6xv, child number 0
-------------------------------------
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id

Plan hash value: 1019954709
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 495 (100)| || |
|* 1 | HASH JOIN | | 918K| 36M| 495 (3)| 00:00:06 | | |
| 2 | TABLE ACCESS FULL | PRODUCTS | 72 | 2160 | 3 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE ALL| | 918K| 10M| 489 (2)| 00:00:06 | 1 | 28 |
| 4 | TABLE ACCESS FULL | SALES | 918K| 10M| 489 (2)| 00:00:06 | 1 | 28 |
-------------------------------------------------------------------------

Outline Data
-------------
 /*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
 DB_VERSION('11.2.0.1')
 ALL_ROWS
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "P"@"SEL$1")
 FULL(@"SEL$1" "S"@"SEL$1")
 LEADING(@"SEL$1" "P"@"SEL$1" "S"@"SEL$1")
 USE_HASH(@"SEL$1" "S"@"SEL$1")
 END_OUTLINE_DATA
 */

Predicate Information (identified by operation id):
---------------------------------------------------
 1 - access("S"."PROD_ID"="P"."PROD_ID")
select * from table (dbms_xplan.display_cursor('6y2xaw3asr6xv', 1, format => 'TYPICAL +OUTLINE'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 6y2xaw3asr6xv, child number 1
-------------------------------------
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id

Plan hash value: 3603960078
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| | | |
| 1 | NESTED LOOPS | | | | | | | |
| 2 | NESTED LOOPS | | 1 | 42 | 3 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE ALL | | 1 | 12 | 2 (0)| 00:00:01 | 1 | 28 |
| 4 | TABLE ACCESS FULL | SALES | 1 | 12 | 2 (0)| 00:00:01 | 1 | 28 |
|* 5 | INDEX UNIQUE SCAN | PRODUCTS_PK | 1 | | 0 (0)| | | |
| 6 | TABLE ACCESS BY INDEX ROWID| PRODUCTS | 1 | 30 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------

Outline Data
-------------
 /*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
 DB_VERSION('11.2.0.1')
 FIRST_ROWS(1)
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "S"@"SEL$1")
 INDEX(@"SEL$1" "P"@"SEL$1" ("PRODUCTS"."PROD_ID"))
 LEADING(@"SEL$1" "S"@"SEL$1" "P"@"SEL$1")
 USE_NL(@"SEL$1" "P"@"SEL$1")
 NLJ_BATCHING(@"SEL$1" "P"@"SEL$1")
 END_OUTLINE_DATA
 */

Predicate Information (identified by operation id):
---------------------------------------------------
 5 - access("S"."PROD_ID"="P"."PROD_ID")

– It can also be verified from v$sql_shared_cursor that there was an optimizer mode mismatch between the two executions of the SQL

SQL>select SQL_ID, child_number, OPTIMIZER_MODE_MISMATCH
from v$sql_shared_cursor
where sql_id = '6y2xaw3asr6xv';

SQL_ID        CHILD_NUMBER O
------------- ------------ -
6y2xaw3asr6xv            0 N
6y2xaw3asr6xv            1 Y

 – We can also employ v$sql_optimizer_env to find out the optimizer parameters which had different values during the two executions

SQL>set line 500
col name for a15

select child1.name, child1.value child1_value, child2.value child2_value
from v$sql_optimizer_env child1, v$sql_optimizer_env child2
where child1.sql_id = '6y2xaw3asr6xv' and child1.name like 'optimizer%'
and child2.sql_id = '6y2xaw3asr6xv' and child2.name like 'optimizer%'
and child1.name = child2.name
and child1.value <> child2.value;

NAME CHILD1_VALUE CHILD2_VALUE
--------------- ------------------------- -------------------------
optimizer_mode first_rows_1 all_rows
optimizer_mode all_rows first_rows_1
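
Note that each mismatched parameter is reported twice because the self-join matches the two child cursors in both orders. Pinning each side of the join to a specific child number returns one row per parameter; a small variation of the same query:

select child1.name,
       child1.value child0_value,
       child2.value child1_value
from v$sql_optimizer_env child1, v$sql_optimizer_env child2
where child1.sql_id = '6y2xaw3asr6xv' and child1.child_number = 0
and child2.sql_id = '6y2xaw3asr6xv' and child2.child_number = 1
and child1.name = child2.name
and child1.value <> child2.value;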

 – Take snapshot so that cursors are flushed to AWR

SQL>exec dbms_workload_repository.create_snapshot ('ALL');

 – Verify that the statement is  in AWR

SQL> set pagesize 200
SQL> col sql_text for a50
SQL>  select sql_id, sql_text from dba_hist_sqltext where sql_text like '%select s.quantity_sold, s.amount_sold%';
SQL_ID SQL_TEXT

------------- --------------------------------------------------
6y2xaw3asr6xv select s.quantity_sold, s.amount_sold, p.prod_nam
e from sh.sales s,
sh.products

9fr7ycjcfqydb select sql_id, sql_text from v$sql where sql_text
like '%select s.quantity_sold

 – Find out the execution plans that were  employed by various  cursors of the same statement

Note that

  • one cursor uses a hash join whereas the other cursor employs a nested loops join
  • The value of parameter OPTIMIZER_MODE was ALL_ROWS during one execution and FIRST_ROWS(1) during the second
SQL>select * from table(dbms_xplan.display_awr ('6y2xaw3asr6xv', format => 'TYPICAL +OUTLINE'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 6y2xaw3asr6xv
--------------------
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id

Plan hash value: 1019954709
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 495 (100)| || |
| 1 | HASH JOIN | | 918K| 36M| 495 (3)| 00:00:06 | | |
| 2 | TABLE ACCESS FULL | PRODUCTS | 72 | 2160 | 3 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE ALL| | 918K| 10M| 489 (2)| 00:00:06 | 1 | 28 |
| 4 | TABLE ACCESS FULL | SALES | 918K| 10M| 489 (2)| 00:00:06 | 1 | 28 |
-------------------------------------------------------------------------

Outline Data
-------------
 /*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
 DB_VERSION('11.2.0.1')
 ALL_ROWS
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "P"@"SEL$1")
 FULL(@"SEL$1" "S"@"SEL$1")
 LEADING(@"SEL$1" "P"@"SEL$1" "S"@"SEL$1")
 USE_HASH(@"SEL$1" "S"@"SEL$1")
 END_OUTLINE_DATA
 */

SQL_ID 6y2xaw3asr6xv
--------------------
select s.quantity_sold, s.amount_sold, p.prod_name from sh.sales s,
sh.products p where s.prod_id = p.prod_id

Plan hash value: 3603960078
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| | | |
| 1 | NESTED LOOPS | | | | | | | |
| 2 | NESTED LOOPS | | 1 | 42 | 3 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE ALL | | 1 | 12 | 2 (0)| 00:00:01 | 1 | 28 |
| 4 | TABLE ACCESS FULL | SALES | 1 | 12 | 2 (0)| 00:00:01 | 1 | 28 |
| 5 | INDEX UNIQUE SCAN | PRODUCTS_PK | 1 | | 0 (0)| | | |
| 6 | TABLE ACCESS BY INDEX ROWID| PRODUCTS | 1 | 30 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------

Outline Data
-------------
 /*+
 BEGIN_OUTLINE_DATA
 IGNORE_OPTIM_EMBEDDED_HINTS
 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
 DB_VERSION('11.2.0.1')
 FIRST_ROWS(1)
 OUTLINE_LEAF(@"SEL$1")
 FULL(@"SEL$1" "S"@"SEL$1")
 INDEX(@"SEL$1" "P"@"SEL$1" ("PRODUCTS"."PROD_ID"))
 LEADING(@"SEL$1" "S"@"SEL$1" "P"@"SEL$1")
 USE_NL(@"SEL$1" "P"@"SEL$1")
 NLJ_BATCHING(@"SEL$1" "P"@"SEL$1")
 END_OUTLINE_DATA
 */

 — We can also use dba_hist_sqlstat to check the optimizer mode during two executions

SQL>select sql_text, optimizer_mode   
   from dba_hist_sqlstat h, dba_hist_sqltext t
     where h.sql_id = '6y2xaw3asr6xv'
       and t.sql_id = '6y2xaw3asr6xv';

SQL_TEXT OPTIMIZER_
-------------------------------------------------- ----------
select s.quantity_sold, s.amount_sold, p.prod_nam ALL_ROWS
e from sh.sales s,
sh.products

select s.quantity_sold, s.amount_sold, p.prod_nam FIRST_ROWS
e from sh.sales s,
sh.products
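
Since dba_hist_sqlstat also records the snapshot and the plan hash value, a small variation (just a sketch) ties each optimizer mode to the plan and the snapshot it was captured with:

select h.snap_id, h.plan_hash_value, h.optimizer_mode
from dba_hist_sqlstat h
where h.sql_id = '6y2xaw3asr6xv'
order by h.snap_id, h.plan_hash_value;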

Summary:

The differences in the values of CBO parameters across executions of the same statement can be identified both while the statement is still in the shared pool and after it has been flushed to AWR.


The post Identify difference in CBO parameters across two executions of a SQL appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

LPAR and Oracle Database

Pakistan's First Oracle Blog - Tue, 2015-04-07 19:30
What is LPAR?

LPAR stands for Logical Partitioning, a feature of IBM's AIX operating system (also available on Linux). By abstracting all the physical devices in a system, LPAR creates a virtualized computing environment.

In a server, the processor, memory, and storage are divided into multiple sets. Each set consists of resources like processor, memory, and storage, and each such set is called an LPAR.

One server can have many LPARs operating at the same time. These LPARs communicate with each other as if they are on separate machines.

What is DLPAR?

DLPAR stands for Dynamic Logical Partitioning; with DLPAR, LPARs can be reconfigured dynamically without a restart, and memory, CPU, and storage can be moved between LPARs on the fly.

What is HMC?

HMC stands for Hardware Management Console. The HMC is the interface used to manage LPARs. It is Java-based and can be used to manage many systems.
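
As a side note, from inside an AIX LPAR you can check how the partition is configured (partition type, shared vs. dedicated processor mode, entitled capacity, online memory) with standard AIX tooling, for example:

# print the partition's configuration from within the LPAR
lparstat -i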

If an LPAR is in shared processor mode, without the following fix the LPAR may see excessive CPU usage:


APARs for WAITPROC IDLE LOOPING CONSUMES CPU:
IV01111 AIX 6.1 TL05 if before SP08 (fixed in SP08)
IV06197 AIX 6.1 TL06 if before SP07 (fixed in SP07)
IV10172 AIX 6.1 TL07 if before SP02 (fixed in SP02)
IV09133 AIX 7.1 TL00 if before SP05 (fixed in SP05)
IV10484 AIX 7.1 TL01 if before SP02 (fixed in SP02)

This problem can affect POWER7 systems running any level of Ax720 firmware prior to Ax720_101, but it is recommended to update to the latest available firmware. If required, AIX and firmware fixes can be obtained from IBM Support Fix Central:
http://www-933.ibm.com/support/fixcentral/main/System+p/AIX
Categories: DBA Blogs

OakTable World at IOUG COLLABORATE15

Pythian Group - Thu, 2015-04-02 18:52

Update history:
5-Apr: WIT panel added, Alex removed, Gwen and Pete schedule shifted.
11-Apr: Gwen and Pete swapped sessions.
13-Apr: Jonathan off lightning talks.

Guess what? OakTable World at IOUG C15 is happening again! Last year, we had awesome sessions and wonderful attendees. The sessions were so successful, in fact, that we needed a bigger room this year (there were other reasons too, but hey we can fit more people now!).

What: OakTable World C15
When: Wednesday, April 15, 2015, 8:00am – 5:30pm
Where: Mandalay Ballroom K

I really hope that, if you are reading this, you are planning to attend COLLABORATE 15 – IOUG Forum at the Mandalay Bay Resort & Casino in Vegas from April 12-16. If you haven’t yet planned your trip, this might just help you make the call. You know you want to be there!

OakTable Network will be holding its highly anticipated OakTable World during COLLABORATE 15! As always, IOUG was able to provide a room for us to use for the whole day (and boy what a big room it is!). The agenda is determined by the OakTable speakers, who choose topics they are passionate about. And if history is any indicator, these are also the topics you really want to hear about.

For those of you who aren’t familiar with OakTable World, Mogens Nørgaard started it as an underground event during Oracle OpenWorld—somewhere between 2007 and 2009. After several successful years and increasing popularity, the event became known as OakTable World during OOW12 and OOW13. Last year, we hosted OTWC14 at IOUG COLLABORATE 14 in Vegas. Needless to say, it was a success. So…Vegas here we come again!

Thank you to all the great companies who have sponsored this event over the years—you know who you are. This year, the usual suspects have pitched in to make it happen again—Pythian, Enkitec and Delphix. Once again, we will be printing unique t-shirts with cool graphics and awesome sponsors’ logos. Be part of history!

The OTW sessions are (mostly) aligned with conference sessions, except we start a tad later (you will appreciate it) and we’ve shifted a few sessions by 15 minutes to pack in as many as possible. Don’t worry, though, we don’t run anything during lunch or afternoon nap. :)

The current schedule is below, but check back regularly as it may change due to random events.

  • 8:10-8:15 – someone authorized – Opening Notes
  • 8:15-9:00 – Tim Gorman – Augmenting SQL Monitor
  • 9:15-10:15 – guest session – Women in Technology Panel
  • 10:30-11:10 – Pete Sharman – Knowledge Sharing – Why Do It?
  • 11:15-12:00 – Gwen Shapira – Kafka for DBAs – Because Inquiring Minds Need to Know
  • 14:00-15:00 – see below – Lightning Talks!
  • 15:15-16:15 – Cary Millsap – The Go/No-Go Matrix for Thinking Clearly About Testing
  • 16:30-17:30 – Jared Still – Knowledge Builds Intuition

Lightning Talks are 10-minute presentations done in a rapid-fire fashion. They are always a huge success—you’ve got to be there! They may be technical, motivational, or inspirational, but regardless they are always cool speeches. The sequence of the talks may change, but everything will be presented within the hour.

  • Kyle Hailey – What is DevOps
  • Kellyn Pot’Vin-Gorman – SQLT in AWR Warehouse
  • Alex Gorbachev – #100miles
  • Pete Sharman – SnapClones++
  • Jonah H. Harris – Performing MongoDB-Compatible NoSQL on Top of Oracle SQL

The OakTable Network folks and other great people will be hanging around, so make sure you drop by! This is an awesome place to grow your network. Remember that the presenters determine the agenda. Our passion to share and educate is what drives us. Come join us.

Vegas, here we come!

Categories: DBA Blogs

SQLcl, a revolution for SQL*Plus users

DBA Scripts and Articles - Thu, 2015-04-02 08:34

What is SQLcl? SQLcl is a new command-line interface, like SQL*Plus, that comes along with SQL Developer 4.1 Early Adopter. It is a lightweight tool (only 11MB) developed by the SQL Developer team, fully compatible with Windows and Unix/Linux. You don’t need to install it, so it is totally portable. The tool does not need … Continue reading SQLcl, a revolution for SQL*Plus users →

The post SQLcl, a revolution for SQL*Plus users appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

IOUG Collaborate 2015 – #C15LV

DBASolved - Wed, 2015-04-01 19:31

Like many other Oracle professionals and speakers, I will be attending IOUG Collaborate 2015 this year in Las Vegas.  I’m not a big fan of Las Vegas, but hey, I cannot turn down an opportunity to speak, especially when IOUG asked me to do more than one session.

This year my schedule is going to keep me busy, yet full of good topics that cover both EM12c and GoldenGate.  If you are going to be at Collaborate, come check out my sessions and many others.

My sessions this year:

4/12/2015
09:00 am – 03:00 pm – RAC SIG Function (RAC Attack)

4/13/2015:  
10:30 am – Writing to Lead Panel discussion
12:00 pm – Exadata Exachk and EM12c: Keeping up with Exadata                 
5:30 pm – IOUG Data Integration SIG Meeting

4/14/2015:  
11:00 am – Enable Oracle GoldenGate Monitoring for the Masses with EM12c                     

4/15/2015:  
08:00 am – Examine Oracle GoldenGate Trail Files: How and When to use Logdump Utility
10:45 am – Extreme Replication: Performance Tuning Oracle GoldenGate for the Real World

If you are going to be at Collaborate, I look forward to seeing you there and hopefully in one of my sessions.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database, Golden Gate, OEM
Categories: DBA Blogs

sar -d on Linux

Bobby Durrett's DBA Blog - Wed, 2015-04-01 17:21

I started using sar -d to look at disk performance on a Linux system this week and had to look up what some of the returned numbers meant.  I’ve used sar -d on HP-UX but the format is different.

Here is an edited output from a Linux VM that we are copying files to:

$ sar -d 30 1
Linux 2.6.32-504.3.3.el6.x86_64 (myhostname)  04/01/2015      _x86_64_        (4 CPU)

05:26:55 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
05:27:25 PM  dev253-9   7669.55      2.44  61353.93      8.00     35.39      4.61      0.03     19.80

I edited out the real host name and I removed all the lines with devices except the one busy device, dev253-9.

Earlier today I got confused and thought that rd_sec/s meant read I/O requests per second, but it does not.  Here is how the Linux man page describes rd_sec/s:

Number  of  sectors  read from the device. The size of a sector is 512
bytes.

In the example above all the activity is writing so if you look at wr_sec/s it is the same kind of measure of activity:

Number of sectors written to the device. The size of a sector  is  512
bytes.

So in the example you have 61353.93 512 byte sectors written per second.  Divide by 2 to get kilobytes = 30676 KB/sec.  Divide by 1024 and round-up to get 30 megabytes per second.
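
If you want to sanity-check that arithmetic at the prompt (assuming bc is installed):

$ echo "61353.93 / 2" | bc -l          # sectors/s -> KB/s (a sector is 512 bytes)
30676.96500000000000000000
$ echo "61353.93 / 2 / 1024" | bc -l   # KB/s -> MB/s
29.95797363281250000000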

But, how many write I/O operations per second does this translate to?  It looks like you can’t tell in this listing.  You can get overall I/O operations per second including both reads and writes from the tps value which the man page defines as:

Total number of transfers per second  that  were  issued  to  physical
devices.   A transfer is an I/O request to a physical device. Multiple
logical requests can be combined into a  single  I/O  request  to  the
device.  A transfer is of indeterminate size.

Of course there aren’t many read requests, so we can assume all the transfers are writes, which makes 7669.55 write IOPS.  Also, you can find the average I/O size by dividing rd_sec/s + wr_sec/s by tps.  This comes out to just about 8, which is the same as avgrq-sz, which the man page defines as

The  average size (in sectors) of the requests that were issued to the
device.

So, avgrq-sz is kind of superfluous since I can calculate it from the other values but it means that our average I/O is 8 * 512 bytes = 4 kilobytes.  This seems like a small I/O size considering that we are copying large data files over NFS.  Hmmm.
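
Here is a rough one-liner that does the same average-I/O-size math for every device in a sar -d sample. It is only a sketch: it assumes the default devM-N device naming and a 12-hour clock with AM/PM (the field positions shift otherwise):

sar -d 30 1 | awk '$3 ~ /^dev/ && $4 > 0 {
    # $4 = tps, $5 = rd_sec/s, $6 = wr_sec/s; sectors per transfer / 2 = KB per I/O
    printf "%-12s tps=%-10s avg_io_KB=%.1f\n", $3, $4, ($5 + $6) / $4 / 2
}'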

Also, the disk device is queuing the I/O requests but the device is only in use 19% of the time.  Maybe there are bursts of 4K writes which queue up and then gaps in activity?  Here are the definitions for the remaining items.

avgqu-sz

The average queue length of the  requests  that  were  issued  to  the
device.

await

The  average  time  (in  milliseconds)  for I/O requests issued to the
device to be served. This includes the time spent by the  requests  in
queue and the time spent servicing them.

svctm

The  average service time (in milliseconds) for I/O requests that were
issued to the device.

%util

Percentage of CPU time during which I/O requests were  issued  to  the
device  (bandwidth  utilization  for  the  device).  Device saturation
occurs when this value is close to 100%.

The service time is good – only .03 milliseconds – so I assume that the I/Os are writing to a memory cache.  But the total time is higher – 4.61 – which is mostly time spent waiting in the queue.  The average queue length of 35.39 makes sense given that I/Os spend so much time waiting in the queue.  But it’s weird that utilization isn’t close to 100%.  That’s what makes me wonder if we are having bursts of activity.

Anyway, I have more to learn but I thought I would pass along my thoughts on Linux’s version of sar -d.

– Bobby

P.S. Here is the output on HP-UX that I am used to:

HP-UX myhostname B.11.31 U ia64    04/02/15

11:27:14   device   %busy   avque   r+w/s  blks/s  avwait  avserv
11:27:44    disk1    1.60    0.50       3      95    0.00   10.27
            disk6    0.03    0.50       1       6    0.00    0.64
           disk15    0.00    0.50       0       0    0.00    3.52
           disk16  100.00    0.50     337    5398    0.00    5.52

r+w/s on HP-UX sar -d seems to be the equivalent of tps on Linux.  blks/s on HP-UX appears to be the same as rd_sec/s  + wr_sec/s on Linux.  The other weird difference is that in HP-UX avwait is just the time spent in the queue which I believe is equal to await – svctm on Linux.  I am more accustomed to the HP-UX tool so I needed to get up to speed on the differences.

Categories: DBA Blogs

Pillars of Powershell #2: Commanding

Pythian Group - Tue, 2015-03-31 07:28
Introduction

This is the second blog post in the series on the Pillars of PowerShell. In the initial blog post we went over the various interfaces that can be used to work with PowerShell. In this post we are going to start out by going through a few terms you might find when you start reading up on PowerShell. Then I will go over three of the cmdlets you can use to discover and get documentation on the cmdlets available to you in PowerShell.

Pillar 2: Commanding

The following are a few terms I will use throughout this series, and ones you might find referenced in any reading material, which I wanted to introduce so we start out on the same page:

  • Session
    When you open PowerShell.exe or PowerShell_ISE.exe it will create a session for you, essentially a blank slate for you to build and create. You can think of this as a query window within the SQL Server Management Studio.
  • Cmdlets
    Pronounced “command-lets”, these are the bread and butter of PowerShell that allow you to do everything from getting information to manipulating it. Microsoft has coined the Verb-Noun format for cmdlet names and it has pretty much stuck. Each new version of PowerShell comes with additional cmdlets, and product teams like SQL Server and Active Directory also release cmdlets that allow you to interact with their products through PowerShell. Each time you open a session with PowerShell, a set of core cmdlets is automatically loaded for you.
  • Module
    A module is basically just a set of cmdlets that can be added within your session of PowerShell. When you load a module into your session, the commands are then made available to you. If you close that session and open a new one, you will then have to reload that module to access the commands again.
  • Objects
    PowerShell is based on .NET; that is what is behind the scenes, more or less. Since .NET is an object-oriented language, PowerShell treats the data that is returned as objects. So, if I use a cmdlet to return the processes running on a machine, each process it returns is an object.
  • The Pipeline
    This is named after the symbol used to connect cmdlets together, “|” (the vertical bar on your keyboard). You can think of this like a train: each cmdlet in the chain receives a set of objects and can do something to each object before passing them along, until you reach the end.
  • PowerShell Profile
    This is basically the ability to customize your PowerShell session each time you open or start PowerShell. You can do things such as pre-load modules, create custom bits of PowerShell code for reuse or easy access, and many other things. I would compare this to your profile in Windows that keeps up with things like the icons or applications you have pinned to the taskbar or the default browser.

Now I want to take you through a few core cmdlets that are used most commonly to discover what is available in the current version of PowerShell or the module you might be working with in it. I tend to use these commands almost every time I open PowerShell. I do not try to memorize everything, especially when I can look it up so quickly in PowerShell.

Get-Command

This cmdlet does exactly what you think it does: it gets a list of the commands that are available in your current PowerShell session. You can use the parameters of this cmdlet to filter the list down to what interests you, say all the “Get” cmdlets:

Get-Command -Verb Get -CommandType Cmdlet
Get-Help

Now you might be wondering where the documentation is for all of the cmdlets you saw with Get-Command. Where is the Books Online equivalent to what you get with SQL Server? Well, unlike SQL Server, you can actually get the documentation via the cmdlet Get-Help. This cmdlet can return the information to you, or you can use a parameter to open it up in the browser, if it is available. So, for example, one of the best things to look up documentation on initially is the Get-Help cmdlet itself:

Get-Help

The output of this command is good to read through but the main items I want to pull out are three particular parameters:

  1. Online: This will take you to the TechNet page of the documentation for the cmdlet. This may not work with every cmdlet you come across but if Microsoft owns it there should be something.
  2. Examples: This is going to provide a few examples and descriptions of how you can use the cmdlet and the more common parameters.
  3. Full: This will show you pretty much the same document that is online. This just keeps you in PowerShell instead of viewing it in the browser.

So let me try bringing up the examples of the Get-Help cmdlet itself:

Get-Help Get-Help -Examples
[screenshot: message indicating the local help files need updating]

If you are using Windows 7 or higher, you may receive something similar to this screenshot. This is because of something that was added in PowerShell version 3.0: the cmdlet Update-Help. This cmdlet is used to update the help files on a computer as needed. In the event Microsoft updates the help files, or the online TechNet page, you can use it to download a current version of them. Microsoft has moved to this method in place of trying to do the updates locally with cumulative updates or service packs. It does require Internet access to execute the cmdlet; if your machine is not on the Internet you can download the help files from Microsoft’s download center. In order to fix the above message I just need to issue the command: Update-Help.

You should see a progress bar as shown below while it is running through updating all the files (that progress bar is itself rendered using PowerShell):

[screenshot: Update-Help progress bar]

I ended up getting two errors for certain modules and this is because I am not running the cmdlet with elevated privileges. If you open PowerShell.exe with the “Run As Administrator” option and then execute the cmdlet again it will be able to update all help files without error.
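
A related tidbit: Update-Help only downloads the files once per day by default; if you want to force a refresh you can use the -Force parameter (run from an elevated session):

# re-download the help files even if they were updated in the last 24 hours
Update-Help -Force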

Now if you run the previous command again you should see the actual examples, although you may notice it can be annoying to scroll back up to read all that information. A tidbit I did not know about right away was that there is a parameter in Get-Help called “-ShowWindow” that opens up a separate window, which makes it easier to read. It is basically the “-Full” output but with the option to filter out sections that do not interest you.

Get-Help Get-Help -ShowWindow
[screenshot: Get-Help -ShowWindow output window]

You actually can use Get-Help to search for cmdlets as well. I tend to do this more than trying to use Get-Command just because it is a bit quicker. You can just issue something like this to find all the “Get” cmdlets:

Get-Help get*

One more thing about the help system in PowerShell: it also includes things called “about files” that are basically concept topics that go deeper into certain areas. They offer a wealth of information, and you can also get to these online. To see what is available, just issue this command:

Get-Help about*
Get-Member

This cmdlet is a little gem that you will use more than anything. If you pipe any cmdlet (or one-liner) to Get-Member it will provide you a list of the properties or methods available to you for the object(s) passed. This cmdlet also includes a filter of “-MemberType” that I can use to only return the properties available to me. The properties are those that we can “select” to return as output or pass to other cmdlets down the pipe.

Get-Command | Get-Member -MemberType Property
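
As a quick illustration, once Get-Member has shown you which properties exist, you can select just the ones you care about down the pipeline (a simple sketch):

# list cmdlets showing only two of the properties Get-Member revealed
Get-Command -CommandType Cmdlet | Select-Object Name, ModuleName | Sort-Object ModuleName
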
Out-GridView

I am only going to touch on this cmdlet. It can be used to output objects into a table-like view that also offers some filtering attributes. There are a few different Out-* cmdlets available to you for outputting information to various destinations; you can find these using the Get-Command or Get-Help cmdlets. To use Out-GridView on a Windows Server OS you will have to add the PowerShell ISE feature; you will get an error stating as much if you do not.

Get-Command | Out-GridView

[screenshot: Get-Command output in Out-GridView]

Summary

The three cmdlets Get-Command, Get-Help, and Get-Member that I spoke on above are ones I think you should become very familiar with and explore deeply. Once you master these, you will have the ability to find out anything and everything about a cmdlet or module that you are trying to use. When you start working with various modules such as Azure or SQL Server PowerShell (SQLPS), these cmdlets are quite useful in discovering what is available.

Categories: DBA Blogs

Pillars of PowerShell #1: Interacting

Pythian Group - Tue, 2015-03-31 06:55
Introduction

PowerShell is a tool that if adopted can be used to help automate and standardize processes in your Windows and SQL Server environment (among other things). This blog series is intended to show you some of the basics (not all of them) that will get you up and running with PowerShell. I say not all of them, because there are areas in PowerShell that you can go pretty deep in, just like SQL Server. I want to just give you the initial tools to get you on your way to discovering the awesomeness within PowerShell. I decided to go with a Greek theme, and just break this series up into pillars. In this first blog post I just wanted to show you the tools that are available to allow you to interact with PowerShell itself.

Pillar 1: Interacting

Interacting with PowerShell most commonly means issuing commands directly on the command line interface (CLI); the step above that would be building out a script that contains multiple commands. The first two options below are available “out-of-the-box” on a Windows machine that has PowerShell installed. After this, you have a few third-party options available to you that I will point out.

  1. PowerShell.exe
    This is the command prompt (or console, as some may call it) where most folks will spend their day-to-day life entering what are referred to as “one-liners”. This is the CLI for PowerShell. You can access this in Windows by going through the Start Menu, or just type it into the Run prompt.
    powershell.exe
  2. PowerShell_ISE.exe
    This is the PowerShell Integrated Scripting Environment and is included in PowerShell 2.0 and up. This tool gives you the ability to have a script editor and CLI in one place. You can find out more about this tool and the various features that come with each version here. You can access it the same way you would PowerShell.exe. In Windows Server 2008 R2 and above, though, it is a Windows Feature that has to be added or activated before you can use it.
    powershell_ise
  3. Visual Studio (VS) 2013 Community Edition + PowerShell Tools for Visual Studio 2013
    VS 2013 Community is the free version of Visual Studio that includes the equivalent functionality of Visual Studio Professional Edition. Microsoft opened up the door for many things when they did this, the main one being that you can now develop PowerShell scripts alongside your C# or other .NET projects. Adam Driscoll (PowerShell MVP) developed and released an add-on specifically for VS 2013 Community that you can get from GitHub, here.
  4. Third party ISE/Editors
    The following are the main players in the third party offerings for PowerShell ISE or script editors. I have tried all of them before, but since they only exist on the machine you install them on I tend to stick with what is in Windows. They have their place and if you begin to develop PowerShell heavily (e.g. full project solutions) they can be very useful in the management of your scripts:

Summary

This was a fairly short post that just started out with showing you what your options are to start working and interacting with PowerShell. PowerShell is a fun tool to work with and discover new things that it can do for you. In this series I will typically stick with using the CLI (PowerShell.exe) for examples.

One more thing to point out: the versions of PowerShell currently released (as of this blog post) are 2.0, 3.0, and 4.0. The basic commands I am going to go over will work in any version, but where specific nuances exist between versions I will try to point them out as needed.
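
If you are not sure which version a given machine is running, the built-in $PSVersionTable variable (available since version 2.0) will tell you:

# show the engine version of the current PowerShell session
$PSVersionTable.PSVersion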

Categories: DBA Blogs

PowerShell Script to Manipulate SQL Server Backup Files

Pythian Group - Tue, 2015-03-31 06:39
Scenario

I use Ola Hallengren’s famous backup solution to back up my SQL Server databases. The destination for full backups is a directory on local disk; let’s say D:\SQLBackup\

If you are familiar with Ola’s backup scripts, you know the full path for backup file looks something like:

D:\SQLBackup\InstanceName\DatabaseName\FULL\InstanceName_DatabaseName_FULL_yyyymmdd_hhmiss.bak

Where InstanceName is a placeholder for the name of the SQL server instance, similarly, DatabaseName is for the Database Name.

Problem

Depending upon my retention period settings, I may have multiple copies of full backup files under the said directory. The directory structure is complicated too (backup file for each database is under two parent folders). I want to copy the latest backup file (only) for each database to a UNC share and rename the backup file scrubbing everything but the database name.

Let’s say the UNC path is \\RemoteServer\UNCBackup. The end result would have the latest full backup file for all the databases copied over to \\RemoteServer\UNCBackup with files containing their respective database names only.

Solution

I wrote a PowerShell script to achieve the solution. This script can be run from a PowerShell console or PowerShell ISE. A more convenient way would be to use the PowerShell subsystem and schedule a SQL Server Agent job to run this script. As always, please run this on a test system first and use at your own risk. You may want to tweak the script depending upon your requirements.
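
For example, a SQL Server Agent job step of type PowerShell can simply invoke the script file (the path here is illustrative):

# SQL Agent PowerShell job step (sketch; adjust the path to where you saved the script)
& 'D:\Scripts\CopyLatestBackupandRename.ps1'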

 

<#################################################################################

Script Name: CopyLatestBackupandRename.ps1
Author     : Prashant Kumar
Date       : March 29th, 2015

Description: The script is useful for those using Ola Hallengren's backup solution.
             This script takes the SQL Server full backup parent folder as an input,
             a remote UNC path as another input, copies the latest backup file
             for each database to the remote UNC path, and renames the copied file.

This Sample Code is provided for the purpose of illustration only and is not
intended to be used in a production environment. THIS SAMPLE CODE AND ANY
RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

##################################################################################>

#Clear screen
cls

#Specify the parent folder where full backup files are originally being taken
$SourcePath = 'D:\SQLBackup\InstanceName'

#Specify the UNC path to the network share where backup files have to be copied
$UNCpath = '\\RemoteServer\UNCBackup'

#Browse through subfolders (identical to database names) inside $SourcePath
$SubDirs = dir $SourcePath -Recurse | Where-Object {$_.PSIsContainer} | ForEach-Object -Process {$_.FullName}

#Browse through each sub-directory inside the parent folder
ForEach ($Dirs in $SubDirs)
{
    #List the most recent file (only one) within the sub-directory
    $RecentFile = dir $Dirs | Where-Object {!$_.PSIsContainer} | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1

    #Perform operations on each file (listed above) one by one
    ForEach ($File in $RecentFile)
    {
        $FilePath = $File.DirectoryName
        $FileName = $File.Name
        $FileToCopy = $FilePath + '\' + $FileName
        $PathToCopy = ($FilePath -replace [regex]::Escape($SourcePath), $UNCpath) + '\'

        #Forcefully create the desired directory structure at the destination if one doesn't exist
        New-Item -ItemType Directory -Path $PathToCopy -Force

        #Copy the backup file
        Copy-Item $FileToCopy $PathToCopy

        #Trim the date/time from the copied file name, store in a variable
        $DestinationFile = $PathToCopy + $FileName
        $RenamedFile = ($DestinationFile.Substring(0, $DestinationFile.Length - 20)) + '.bak'

        #Rename the copied file
        Rename-Item $DestinationFile $RenamedFile
    }
}

Categories: DBA Blogs

SQL Server 2012 SP2 Cumulative Update 5

Pythian Group - Tue, 2015-03-31 06:20

Hey folks,

Microsoft released the 5th Cumulative Update for SQL Server 2012 SP2. This update package contains fixes for 27 different issues.

One very important issue that was fixed in this CU release was KB3038943 – Error 4360 when you restore the backup of a secondary replica to another server in AlwaysOn Availability Groups.

If you use SQL Server 2012 SP2 Always On and you offload your log backups to the secondary node, it is recommended that you apply this patch!

The full Cumulative Update release notes and the download links can be found here: http://support.microsoft.com/en-us/kb/3037255/en-us

 

Categories: DBA Blogs

Log Buffer #416, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-30 12:29

This log buffer edition sprouts from the beauty, glamour and intelligence of various blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle Exadata Performance: Latest Improvements and Less Known Features

Exadata Storage Index Min/Max Optimization

Oracle system V shared memory indicated deleted

12c Parallel Execution New Features: Concurrent UNION ALL

Why index monitoring makes Connor scratch his head and charge off to Google so many times.

SQL Server:

Learn how to begin unit testing with tSQLt and SQL Server.

‘Temporal’ tables contain facts that are valid for a period of time. When they are used for financial information they have to be very well constrained to prevent errors getting in and causing incorrect reporting.

As big data application success stories (and failures) have appeared in the news and technical publications, several myths have emerged about big data. This article explores a few of the more significant myths, and how they may negatively affect your own big data implementation.

When effective end dates don’t align properly with effective start dates for subsequent rows, what are you to do?

In order to automate the delivery of an application together with its database, you probably just need the extra database tools that allow you to continue with your current source control system and release management system by integrating the database into it.

MySQL:

Ronald Bradford on SQL, ANSI Standards, PostgreSQL and MySQL.

How to Manage the World’s Top Open Source Databases: ClusterControl 1.2.9 Features Webinar Replay

A few interesting findings on MariaDB and MySQL scalability, multi-table OLTP RO

MariaDB: The Differences, Expectations, and Future

How to Tell If It’s MySQL Swapping

Categories: DBA Blogs

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

Richard Foote - Sun, 2015-03-29 21:17
I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]
Categories: DBA Blogs

Seriously proud of this and it doesn't make me grumpy!

Grumpy old DBA - Fri, 2015-03-27 18:27
So the GLOC 2015 conference registration is open (GLOC 2015) (has been for a while) and recently we completed posting all the speakers/topics.  That's been darn good.

Just out today is our SAG (schedule at a glance), which demonstrates just how good our conference will be.  Low cost, high quality, and just an event that you really should think about being in Cleveland for in May.

The schedule at a glance does not include our 4 top-notch half-day workshops going on Monday, but you can see them from the regular registration.

I am so grateful for the speakers we have on board.  It's a lot of work behind the scenes getting something like this rolling, but when you see a lineup like this, just wow!
Categories: DBA Blogs

Pythian at Collaborate 15

Pythian Group - Fri, 2015-03-27 15:05

Make sure you check out Pythian’s speakers at Collaborate 15. Stop by booth #1118 for a chance to meet some of Pythian’s top Oracle experts, talk shop, and ask questions. This many Oracle experts in one place only happens once a year; have a look at our list of presenters, we think you’ll agree.

Click here to view a PDF of our presenters

 

Pythian’s Collaborate 15 Presenters | April 12 – 16 | Mandalay Bay Resort and Casino, Las Vegas

 

Christo Kutrovsky | ATCG Senior Consultant | Oracle ACE

 

Maximize Exadata Performance with Parallel Queries

Wed. April 15 | 10:45 AM – 11:45 AM | Room Banyan D

 

Big Data with Exadata

Thu. April 16 | 12:15 PM – 1:15 PM | Room Banyan D

 

Deiby Gomez Robles | Database Consultant | Oracle ACE

 

Oracle Indexes: From the Concept to Internals

Tue. April 14 | 4:30 PM – 5:30 PM | Room Palm C

 

Marc Fielding | ATCG Principal Consultant | Oracle Certified Expert

 

Ensuring 24/7 Availability with Oracle Database Application Continuity

Mon. April 13 | 2:00 PM – 3:00 PM | Room Palm D

 

Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases

Tue. April 14 | 11:00 AM – 12:00 PM | Room Palm C

 

Maris Elsins | Oracle Application DBA | Oracle ACE

Mining the AWR: Alternative Methods for Identification of the Top SQLs in Your Database

Tue. April 14 | 3:15 PM – 4:15 PM | Room Palm B

 

Ins and Outs of Concurrent Processing Configuration in Oracle e-Business Suite

Wed. April 15 | 8:00 AM – 9:00 AM | Room Breakers B

 

DB12c: All You Need to Know About the Resource Manager

Thu. April 16 | 9:45 AM – 10:45 AM | Room Palm A

 

Alex Gorbachev | CTO | Oracle ACE Director

 

Using Hadoop for Real-time BI Queries

Tue, April 14 | 9:45 AM – 10:45 AM | Room Jasmine E

 

Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases

Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C

 

Anomaly Detection for Database Monitoring

Thu, April 16 | 11:00 AM – 12:00 PM | Room Palm B

 

Subhajit Das Chaudhuri | Team Manager

 

Deep Dive: Integration of Oracle Applications R12 with OAM 11g, OID 11g , Microsoft AD and WNA

Tue, April 14 | 3:15 PM – 4:15 PM | Room Breakers D

 

Simon Pane | ATCG Senior Consultant | Oracle Certified Expert

 

Oracle Service Name Resolution – Getting Rid of the TNSNAMES.ORA File!

Wed, April 15 | 9:15 AM – 10:15 AM | Room Palm C

 

René Antunez | Team  Manager | Oracle ACE

 

Architecting Your Own DBaaS in a Private Cloud with EM12c

Mon. April 13 | 9:15 AM – 10:15 AM | Room Reef F

 

Wait, Before We Get the Project Underway, What Do You Think Database as a Service Is…

Mon, Apr 13 | 03:15 PM – 04:15 PM | Room Reef F

 

My First 100 days with a MySQL DBMS

Tue, Apr 14 | 09:45 AM – 10:45 AM | Room Palm A

 

Gleb Otochkin | ATCG Senior Consultant | Oracle Certified Expert

 

Your Own Private Cloud

Wed. April 15 | 8:00 AM – 9:00 AM | Room Reef F

 

Patching Exadata: Pitfalls and Surprises

Wed. April 15 | 12:00 PM – 12:30 PM | Room Banyan D

 

Second Wind for Your exadata

Tue. April 14 | 12:15 PM – 12:45 PM | Room Banyan C

 

Michael Abbey | Team Manager, Principal Consultants | Oracle ACE

 

Working with Colleagues in Distant Time Zones

Mon, April 13 | 12:00 PM – 12:30 PM | Room North Convention, South Pacific J

 

Manage Referential Integrity Before It Manages You

Tue, April 14 | 2:00 PM – 3:00 PM | Room Palm C

 

Nothing to BLOG About – Think Again

Wed, April 15 | 7:30 PM – 8:30 PM | Room North Convention, South Pacific J

 

Do It Right; Do It Once. A Roadmap to Maintenance Windows

Thu, April 16 | 11:00 AM – 12:00 PM | Room North Convention, South  Pacific J

Categories: DBA Blogs

1 million page views in less than 5 years

Hemant K Chitale - Fri, 2015-03-27 10:26
My Oracle Blog has recorded 1 million page views in less than 5 years.

Although the blog began on 28-Dec-2006, the first month with recorded page view counts was July-2010 -- 8,176 page views.



Categories: DBA Blogs

Configuring Oracle #GoldenGate Monitor Agent

DBASolved - Thu, 2015-03-26 14:06

In a few weeks I’ll be talking about monitoring Oracle GoldenGate using Oracle Enterprise Manager 12c at IOUG Collaborate in Las Vegas.  This is one of the few presentations I will be giving that week (it is going to be a busy week).  Although this posting kinda mirrors a previous post on how to configure the Oracle GoldenGate JAgent, it is relevant because:

1. Oracle changed the name of the JAgent to Oracle Monitor Agent
2. Steps are a bit different with this configuration

Most people running Oracle GoldenGate who want to monitor the processes with EM12c will try to use the embedded JAgent.  This JAgent will work with the OGG Plug-in 12.1.0.1.  To get many of the new features and use the new plug-in (12.1.0.2), the new Oracle Monitor Agent (12.1.3.0) needs to be downloaded and installed.  Finding the binaries for this is not that easy though.  In order to get the binaries, download Oracle GoldenGate Monitor v12.1.3.0.0 from OTN.oracle.com.

Once downloaded, unzip the file to a temp location:

$ unzip ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip -d ./oggmonitor
Archive: ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip
 inflating: ./oggmonitor/fmw_12.1.3.0.0_ogg.jar

In order to install the agent, you need to have Java 1.8 installed somewhere it can be used, because the 12.1.3.0.0 software is built using JDK 1.8.

$ ./java -jar ../../ggmonitor/fmw_12.1.3.0.0_ogg.jar

After executing the command, the OUI installer will start.  As you walk through the OUI, when the selection page comes up, select the option to only install the Oracle GoldenGate Monitor Agent.


Then proceed through the rest of the OUI and complete the installation.

After the installation is complete, the JAgent needs to be configured.  In order to do this, navigate to the directory where the binaries were installed.

$ cd /u01/app/oracle/product/jagent/oggmon/ogg_agent

In this directory, look for a file called create_ogg_agent_instance.sh.  This file has to be run first to create the JAgent instance that will be associated with Oracle GoldenGate.  In order to run this script, the $JAVA_HOME variable needs to point to the JDK 1.8 location as well.  Inputs that will need to be provided are the Oracle GoldenGate home and where to install the JAgent instance (this is different from where the OUI installed the binaries).

$ ./create_ogg_agent_instance.sh
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Sucessfully created OGG Agent instance.

Next, go to the directory of the OGG Agent instance (JAgent), then to the configuration (cfg) directory.  In this directory, the Config.properties file needs to be edited.  Just like with the old embedded JAgent, the same variables have to be changed.

$ cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./cfg
$ vi ./Config.properties

Change the following or keep the defaults, then save the file:

jagent.host=fred.acme.com (default is localhost)
jagent.jmx.port=5555 (default is 5555)
jagent.username=root (default oggmajmxuser)
jagent.rmi.port=5559 (default is 5559)
agent.type.enabled=OEM (default is OGGMON)

Then create the password that will be stored in the wallet directory under $OGG_HOME.  

$ cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./bin
$ ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent:
Please confirm password for Java Agent:
Mar 26, 2015 3:18:46 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Mar 26, 2015 3:18:47 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Now, enable monitoring in the GLOBALS file in $OGG_HOME.

$ cd /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
$ vi ./GLOBALS
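
Enabling monitoring amounts to adding a single parameter to the GLOBALS file (ENABLEMONITORING is the documented GLOBALS parameter for this):

-- GLOBALS: turn on the monitoring data store
ENABLEMONITORING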


After enabling monitoring, the JAgent should appear when doing an info all inside of GGSCI.


Before starting the JAgent, create a datastore.  What I’ve found works is to delete any existing datastore, restart GGSCI, and create a new one.

$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> info all
Program           Group Lag at Chkpt Time Since Chkpt
MANAGER  RUNNING
JAGENT   STOPPED

GGSCI (fred.acme.com)> stop mgr!
Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (fred.acme.com)> delete datastore
Are you sure you want to delete the datastore? yes
Datastore deleted.

GGSCI (fred.acme.com)> exit

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> create datastore
Datastore created.

GGSCI (fred.acme.com)> start mgr
Manager started.

GGSCI (fred.acme.com)> start jagent
Sending START request to MANAGER ...
JAGENT starting

GGSCI (fred.acme.com)> info all

Program  Group Lag at Chkpt Time Since Chkpt
MANAGER  RUNNING
JAGENT   RUNNING

With the JAgent running, now configure Oracle Enterprise Manager 12c to use the JAgent.

Note: In order to monitor Oracle GoldenGate with Oracle Enterprise Manager 12c, you need to deploy the Oracle GoldenGate Plug-in (12.1.0.2).

To configure discovery of the Oracle GoldenGate process, go to Setup -> Add Target -> Configure Auto Discovery

Select the Host where the JAgent is running.

Ensure that the Discovery Module for GoldenGate Discovery is enabled, then click Edit Parameters to provide the username and RMI port specified in the Config.properties file, along with the password that was set up in the wallet.  Then click OK.

At this point, force a discovery of any new targets that need to be monitored by using the Discover Now button.

If the discovery was successful, the Oracle GoldenGate Manager process should be able to be seen and promoted for monitoring.

After promoting the Oracle GoldenGate processes, they can then be seen in the Oracle GoldenGate Interface within Oracle Enterprise Manager 12c (Target -> GoldenGate).

At this point, Oracle GoldenGate is being monitored by Oracle Enterprise Manager 12c.  The new plug-in for Oracle GoldenGate is way better than the previous one; however, there are still a few things that could be better.  More on that later.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Oracle Database 12c In-Memory Q&A Webinar

Pythian Group - Thu, 2015-03-26 09:21

Today I will be debating Oracle 12c’s In-Memory option with Maria Colgan of Oracle (aka the optimizer lady, now the In-Memory lady).

This will be in a debate form with lots of Q&A from the audience. Come ask the questions you always wanted to ask.

Link to register and attend:
https://attendee.gotowebinar.com/register/7874819190629618178

Starts at 12:00pm EDT.

Categories: DBA Blogs

Parallel Execution -- 3 Limiting PX Servers

Hemant K Chitale - Tue, 2015-03-24 09:05
In my previous posts, I have demonstrated how Oracle "auto"-computes the DoP when using the PARALLEL Hint by itself, even when PARALLEL_DEGREE_POLICY is set to MANUAL.  This "auto"-computed value is CPU_COUNT x PARALLEL_THREADS_PER_CPU.

How do we limit the DoP ?

1.  PARALLEL_MAX_SERVERS is an instance-wide limit, not usable at the session level.

2.  Resource Manager configuration can be used to limit the number of PX Servers used

3.  PARALLEL_DEGREE_LIMIT, unfortunately, is not usable when PARALLEL_DEGREE_POLICY is MANUAL

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 24 22:57:18 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 4
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel_degree_policy

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy string MANUAL
SYS>show parameter parallel_max

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers integer 64
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

As expected, the query uses 16 PX Servers (and not the table-level definition of 4).  Can we use PARALLEL_DEGREE_LIMIT ?

HEMANT>alter session set parallel_degree_limit=4;

Session altered.

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
2 32 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

No, it actually still used 16 PX servers for the second execution.

What about PARALLEL_MAX_SERVERS ?

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_max_servers=4;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
3 36 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Yes, PARALLEL_MAX_SERVERS restricted the next run of the query to 4 PX Servers.  However, this parameter limits the total concurrent usage of PX Servers at the instance level.  It cannot be applied at the session level.


UPDATE: Another method, pointed out to me by Mladen Gogala, is to use the Resource Manager to limit the number of PX Servers a session can request.
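
As a sketch of that approach, a resource plan directive with the PARALLEL_DEGREE_LIMIT_P1 attribute caps the DoP for sessions mapped to a consumer group (the plan and group names here are illustrative):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan('LIMIT_PX_PLAN', 'Cap DoP for selected sessions');
  dbms_resource_manager.create_consumer_group('PX_LIMITED', 'Sessions with capped DoP');
  -- cap the degree of parallelism at 4 for this group
  dbms_resource_manager.create_plan_directive(
    plan                     => 'LIMIT_PX_PLAN',
    group_or_subplan         => 'PX_LIMITED',
    comment                  => 'DoP capped at 4',
    parallel_degree_limit_p1 => 4);
  -- every plan must also cover OTHER_GROUPS
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_PX_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else');
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/

Sessions still have to be mapped to the PX_LIMITED group (for example with dbms_resource_manager.set_consumer_group_mapping) and the plan activated via the RESOURCE_MANAGER_PLAN parameter.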

Categories: DBA Blogs

NYOUG Spring General Meeting

The Oracle Instructor - Mon, 2015-03-23 08:42

The New York Oracle User Group held their Spring General Meeting recently and I was presenting there about the Data Guard Broker and also about the Recovery Area.

Many thanks to the board for organizing this event, I really enjoyed being there! Actually, the Broker demonstration did not go so smoothly – it is always dangerous to do things live – but I managed to get out of the mess in time and without losing too much of the message I wanted to get through. At least that’s what I hope ;-)

I took the opportunity to do some sightseeing in New York as well:

Me on Liberty Island


Tagged: NYOUG
Categories: DBA Blogs

Free Apache Cassandra Training Event in Cambridge, MA March 23

Pythian Group - Fri, 2015-03-20 14:24

I’ll be speaking, along with DataStax and Microsoft representatives, at Cassandra Essentials Day this coming Monday (March 23) in Cambridge, MA. This free training event will cover the basics of Apache Cassandra and show you how to try it out quickly, easily, and free of charge on the Azure cloud. Expect to learn about the unique aspects of Cassandra and DataStax Enterprise and to dive into real-world use cases.

Space is limited, so register online to reserve a spot.

Categories: DBA Blogs