
DBA Blogs

final version of my Hotsos 2014 presentation

Grumpy old DBA - Sat, 2014-03-08 12:38
This is the "really really" final version because it was updated and some additional references were added even after the presentation was delivered.

Hoping to do it at Open World 2014 so ... I guess this is the early version of that presentation eh?

You can find it here: Three approaches to shared pool monitoring Hotsos 2014
Categories: DBA Blogs

So you think you have disabled ASMM/AMM oracle automatic SGA sizing behavior?

Grumpy old DBA - Sat, 2014-03-08 12:34
I polled my session attendees at Hotsos 2014 ( Three approaches to shared pool monitoring ) and asked them who was using automatic stuff versus who was doing it all manually?

My guess is that there were about 100 people in the room and it looked like automatic "won" but not by much.  Maybe 55 percent automatic and 45 manual ... so there are still a whole bunch of people nervous about using that functionality.

My presentation was updated ( after delivering it ) to add in some additional doc id's and warnings ( both pro and con ) about manual memory settings.

Specifically worth noting is the non-obvious fact that EVEN IF you have disabled oracle from doing SGA resizing operations automatically ... well it may do them anyhow.  Makes me a little grumpy eh?

So you can set SGA_TARGET to zero ( and/or MEMORY_TARGET to zero ) and still have oracle make decisions for you?  It turns out that at some point you have to set yet another hidden underscore parameter to tell the oracle software "yeah thanks but I really really mean it" ...

_MEMORY_IMM_MODE_WITHOUT_AUTOSGA

Oracle doc id 1269139.1 ... a pretty good writeup here ... really turn off AMM/ASMM
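For reference, here is a hedged sketch of the statement that note describes for really turning the behavior off (it is a hidden parameter, so treat this as an example only and verify it against Doc ID 1269139.1 and Oracle Support for your version before using it):

-- disable "immediate mode" SGA resize operations when automatic SGA management is off
-- (a restart picks up the spfile change; the note explains when this is appropriate)
SQL> alter system set "_memory_imm_mode_without_autosga" = FALSE scope = spfile;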
Categories: DBA Blogs

Sqlplus script to check for row chaining

Bobby Durrett's DBA Blog - Fri, 2014-03-07 15:10

Here is the sqlplus scripting I use to check whether row chaining is degrading a query’s performance:

VARIABLE monitored_sid number;

begin

SELECT sid into :monitored_sid 
from v$session 
where audsid=USERENV('SESSIONID');

end;
/

select b.value "table fetch continued rows"
from
V$SESSTAT b
where b.SID = :monitored_sid
and b.STATISTIC# = (SELECT STATISTIC# FROM V$STATNAME 
WHERE NAME='table fetch continued row');

I create a test script with the problem query and put this code after it to see how much chaining affected it.
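For illustration, here is a hedged sketch of what such a test script can look like (the query and table name are placeholders; run it in a fresh session so the session-level counter mostly reflects the test query):

set timing on

-- the problem query goes here (placeholder):
select count(*) from my_problem_table where some_column = 'SOMEVALUE';

-- then the chained-row check from above:
VARIABLE monitored_sid number;

begin
SELECT sid into :monitored_sid
from v$session
where audsid=USERENV('SESSIONID');
end;
/

select b.value "table fetch continued rows"
from V$SESSTAT b
where b.SID = :monitored_sid
and b.STATISTIC# = (SELECT STATISTIC# FROM V$STATNAME
WHERE NAME='table fetch continued row');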

Here is an example of a fast query:

Elapsed: 00:00:15.87

table fetch continued rows
--------------------------
                     19723

Here is an example of a slow query:

Elapsed: 00:03:17.46

table fetch continued rows
--------------------------
                  23775056

This was the same query for two different date ranges.  The second range had a lot of row chaining and the query was much slower.

- Bobby

Categories: DBA Blogs

Log Buffer #362, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-03-07 09:11

This Log Buffer Edition rounds up, yet again, some cool blog posts from across the planet. Sit back, relax and enjoy.

Oracle:

Marc has completed his review of the SQL enhancements for RMAN in 12c.

This weekend we have finalized the latest update for our ADF runtime performance audit tool – Red Samurai Performance Audit Tool v 2.4.

This isn’t a SQL Developer post per-se, however if you are using the Oracle Developer Day VirtualBox 12c image, you will find this useful.

aud$ and ASSM – a good combination.

Why You Don’t Want to Code Validation in Before Commit?

SQL Server:

Basit A. Farooq talks about Different approaches of counting number of rows in a table.

The Top 5 SQL Server Features You Aren’t Using And Why.

A CLR Application: Deleting Three Billion Rows.

A simple query to find the database backup status. The select statement returns ServerName, dbname, BackupStartDate, BackupFinishDate, BackupAge, Size, status, and Type. The query will only run on SQL Server 2005 and later versions.

Characters you should never ever use in an object or schema name!

MySQL:

Migration of MTR suites to use InnoDB.

50 tips to boost MySQL Performance Webinar follow-up.

MySQL Workbench is a great tool when someone asks you to look at their schemas.

MySQL for Excel Main Features Video.

Many-table joins in MySQL 5.6.

Categories: DBA Blogs

Blog second anniversary

Bobby Durrett's DBA Blog - Wed, 2014-03-05 09:52

Today is the second anniversary of my first post on this blog.

Some time ago I played with a free blog that came with a yahoo email account that my wife and I share but didn’t get very far with that.  Finally for this blog I decided to spend a small amount of money to get my own hosting and domain name.

I’m using iPage for the hosting.  I’ve been happy with their service.  There were a couple of billing issues, but I opened tickets with them and they were quickly resolved in a fair way.

I evaluated a few blog sites and chose WordPress because it allowed me to do what was most important.  WordPress’s software enabled me to upload sqlplus scripts and their logs along with PowerPoint, Microsoft Word, and PDF files.  WordPress also enabled me to include output from sqlplus and have the columns line up nicely like this:

SQL> select table_name, blocks
  2  from dba_tables
  3  where owner='SYS'
  4  order by table_name;

TABLE_NAME                         BLOCKS
------------------------------ ----------
ACCESS$                               759
ALERT_QT
APPLY$_CHANGE_HANDLERS                  0
APPLY$_CONF_HDLR_COLUMNS                0
APPLY$_CONSTRAINT_COLUMNS               0
APPLY$_DEST_OBJ                         0
APPLY$_DEST_OBJ_CMAP                    0
APPLY$_DEST_OBJ_OPS                     0
APPLY$_ERROR                            0

I’m pretty happy with the results.  I’ve had some nice interaction with the Oracle community and I’ve used links to my posts with coworkers on the job.  Plus, I actually query the site myself when I forget how to do something I wrote about.

For example, if I forget where to put the plus sign on an outer join, I search for it on my blog!

Overall it has been an enjoyable experience and I think helpful to me and my employer as I try to communicate with others the things I’m learning about Oracle database technology, especially in the performance tuning area.

- Bobby

Categories: DBA Blogs

Combat with an enqueue wait event (enq: HW–Contention)

DBASolved - Wed, 2014-03-05 07:24

Recently I've been combating a high water mark enqueue wait (enq: HW – contention) on a single node within an Exadata I'm supporting.   I first noticed the wait when I was looking at the performance page for the node in Oracle Enterprise Manager 12c.  What I noticed was a lot of brown-looking spikes (Image 1).  These spikes correspond to a Configuration wait.

Image 1:


When I clicked on Configuration in the legend on the side of the graph, I was taken to the Active Session Waiting: Configuration page (Image 2).  On this page, I can clearly see that the wait event I'm waiting on is the "enq: HW – contention" wait.

Image 2:


Now that I know what wait event I'm waiting on, I needed to figure out what object was causing the wait.  Everyone has a different approach to how they performance tune; I find it easier to identify the objects causing the wait by using the Active Session History (ASH) report.  ASH reports can be run directly from the OEM Active Session Waiting page (the page with the graphs) or from the command line.   Since I like the pretty graphs that OEM provides, I prefer to run ASH from the command line and use the graphs to help identify what direction my tuning is going.

Active Session History (ASH) Report

The Active Session History (ASH) report is handy in helping to identify what object(s) the wait is on.  In order to run the ASH Report from the command line, there are a few requirements.  They are:

  • Set ORACLE_HOME
  • Move to the directory where you would like to save the report

Once these requirements have been met, the ASH Report can be run from the command line using the ashrpt.sql script.

                SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql

When the script starts to run, it will ask for input on what format you would like the report in. For searchability purposes (from the command line), I go with the "Text" option.   Next, it asks for a timeframe (Image 3); provide a timeframe as per the layout listed.

Image 3:

Lastly, it will ask you what you would like to name the report.  I normally prefix it with my initials and an underscore (bc_) (Image 4). 

Image 4:

Once I press enter, the report runs and the output will be stored in the directory you started SQL*Plus from.

Reading the ASH Report

In reading the ASH Report, I want to identify the primary object that is holding the “enq: HW – contention” wait.   In order to do this, I need to look at the “Top Event P1/P2/P3 Values” section of the report (Image 5).  Reviewing this section, I see that the Top Event is the “enq: HW – contention”, the same as I saw from the OEM performance pages.  Next, I need to find the values of P2 and P3.   

Image 5:

The values for P2 and P3 will tell me what data file (P2) to look at for the block (P3) holding the wait.  The values P2 & P3 for the wait are currently “85” and “16018”.
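As an alternative to digging the values out of the text report, the same P2/P3 information can be pulled straight from the ASH data with a quick query. This is only a hedged sketch (adjust the time window to your spike, and remember that querying the ASH views requires the Diagnostics Pack license):

select event, p2 file_id, p3 block_id, count(*) samples
from v$active_session_history
where event = 'enq: HW - contention'
and sample_time > sysdate - 1/24
group by event, p2, p3
order by samples desc;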

Finding the Object

Armed with the values for P2 and P3 from the ASH Report, I can now go and locate the object via SQL.  The SQL used to identify the object is as follows:

select /*+ parallel(a, 4) */ a.owner, a.segment_type, a.segment_name
from DBA_EXTENTS a
where a.FILE_ID = &P2
and &P3 between a.block_id and a.block_id + a.blocks - 1;

Note: I'm using a parallel hint to speed up the identification of the block; it can help in VLDBs.

Once the script is done running, it returns the owner, segment_type, and segment_name of the object holding the enq: HW – contention wait (Image 6).  In this case, the wait is on a system-generated LOB index.

Image 6:

Resolving the Wait

At this point, I have identified the following:

  • The wait: enq: HW – contention
  • The object holding the wait

Next, I want to resolve the wait as fast as I can.  Knowing that the wait is a high watermark (enq: HW – contention) wait event, a basic understanding of high watermarks and how extents are allocated is needed.  The basic reason the 'enq: HW – contention' wait arises is that additional extents are not being allocated in a timely manner for the data being inserted into the object.

In order to resolve this wait, additional extents need to be allocated.  The environment I'm working in is using big file data files with automatic segment space management (ASSM).  There are many different ways to increase the extents, but because this database is using ASSM, the extent allocation is simpler.  To allocate another extent for the object identified, use the following:

             SQL> alter index <owner>.SYS_IL0000253792C00002$$ allocate extent;

The above alter statement is correct; however, the problem is that I was trying to allocate an extent on a system generated index.  The below error message states that this is not allowed:

Error starting at line : 32 in command -
alter index <OWNER>.SYS_IL0000253792C00002$$ allocate extent
Error report -
SQL Error: ORA-22864: cannot ALTER or DROP LOB indexes
22864. 00000 -  "cannot ALTER or DROP LOB indexes"
*Cause:    An attempt was made to ALTER or DROP a LOB index.
*Action:   Do not operate directly on the system-defined LOB index.
           Perform operations on the corresponding LOB column.

In order to add extents, I needed to identify the object that is using the system generated LOB index.  This information can be found in DBA_LOBS (SQL below).

select owner, table_name, column_name, segment_name, index_name
from DBA_LOBS
where index_name = 'SYS_IL0000253792C00002$$';

The SQL returns the name of the owner, table_name and LOB column within the table that is using the system generated LOB index (Image 7).   Using this information, I can now allocate additional extents to the LOB.

Image 7:

In order to allocate an extent on the object identified, the following was used:

alter table <owner>.<table> modify lob(<column>) (allocate extent (size 5M));

Once this completed, I was expecting the wait (enq: HW – contention) to go away.  After some time, I saw no improvement.  This led me to open a service request (SR) with Oracle Support to try to resolve it.

Research

Opening the SR led me to dig a bit and do some research on extent allocation and LOBs while waiting on Oracle's answer.  Prior to Oracle Database 11g Release 2 (11.2.0.1), there was a known bug (6376915) related to high watermark enqueue contention for ASSM LOB segments (Note ID: 6376915.8).  This bug was actually fixed in release 11.2.0.1, but the fix needs to be "turned on" in later releases.  To turn the fix for this bug on in 11.2.0.2+, an event needs to be set in the spfile.

SQL> ALTER SYSTEM SET EVENT="44951 TRACE NAME CONTEXT FOREVER, LEVEL <1 .. 1024>" scope=spfile;

Setting this event to a level between 1 and 1024 specifies the number of chunks to be cleaned up each time a reclamation operation is performed, which in turn reduces the number of requests against the high watermark enqueue.

Once the event has been set in the spfile, the database instance needs to be rebooted to make the change effective. 

Conclusion

The steps in resolving this enqueue event are not difficult; however, when extents need to be allocated manually it becomes very interesting.  The downside to setting the event, per Oracle Support, is the need to reboot the database instance to make the event active.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Exadata, Performance
Categories: DBA Blogs

Index Rebuild, the Need vs the Implications Support Note 989093.1 (Getting Better)

Richard Foote - Tue, 2014-03-04 23:49
Once upon a time, Oracle Support had a note called Script: Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1) which, let's just say, I didn't view in a particularly positive light :) Mainly because it gave dubious advice which included that indexes should be rebuilt if: Deleted entries represent 20% or more of […]
Categories: DBA Blogs

Pythian Raises $6 Million to Help Retailers Eliminate Data Security Breaches

Pythian Group - Tue, 2014-03-04 15:46

Pythian announced today that it raised $6 million to expand the market reach of Adminiscope, its data security solution, to help retailers eliminate data security breaches. As global corporations continue to collect, store, and monetize massive amounts of sensitive data, they’re increasingly turning to Pythian to secure their infrastructures, especially from internal threats. Investors include the Business Development Bank of Canada (BDC) and Royal Bank of Canada (RBC).

“It’s clear there’s an appetite out in the marketplace for greater data security measures among all organizations, especially retailers who have to protect sensitive customer financial information,” said Paul Vallée, Founder and Executive Chairman at Pythian. “The recent challenges continue to underscore the need for solutions like Adminiscope, and strengthen our resolve to ensure our clients operate with the highest level of security and accountability. Our technology offers organizations and their customers unwavering confidence in their data’s security. With this investment, we will bring our solution to more retailers large and small, and help them increase consumer confidence and protect their brand.”

Adminiscope, a software-as-a-service (SaaS) privileged session management solution, secures access to enterprise systems, cloud infrastructures, databases, servers, and applications, and enables stakeholders to monitor and record in real time all privileged activity performed across an IT infrastructure. Database Trends and Applications magazine recently named Adminiscope to its list of Trend-Setting Products for 2014 in the data management and analysis space.

See the full press release here.

Categories: DBA Blogs

Finished reading multitenant parts of db admin manual

Bobby Durrett's DBA Blog - Tue, 2014-03-04 14:27

I'm trying to get up to speed on 12c, especially the multitenant features.  So, I decided to read the sections of the manuals that relate to using multitenant.  Here is the name of the manual I was looking at:

Oracle® Database Administrator’s Guide 12c Release 1 (12.1)

It isn't great reading.  It is kind of slow-paced and really more of a reference, but since it is a large new feature I wanted to go over the practical details of using it.  Some of the sections had a command line sqlplus version and a web-based cloud control version, so I skipped the cloud version.  I'm more of a command line and v$ view kind of DBA.

Here are the chapter numbers and titles of the sections I read:

36 Overview of Managing a Multitenant Environment
37 Creating and Configuring a CDB
38 Creating and Removing PDBs with SQL*Plus
40 Administering a CDB with SQL*Plus
42 Administering PDBs with SQL*Plus
43 Viewing Information About CDBs and PDBs with SQL*Plus
44 Using Oracle Resource Manager for PDBs with SQL*Plus
46 Using Oracle Scheduler with a CDB

I haven’t really used what I’ve learned yet but I feel like I have a good overview and know where to go to get more information.

I'm also working on reading through the 12c concepts manual on my Kindle, but I haven't gotten to the multitenant part.  Once I'm done with that I hope to get serious about studying up for the 12c OCP upgrade exam.  It looks like you can sign up for the tests now, but I don't have any practice exam software or a book, and I usually get one of those before I take a test.  I see one book that is supposed to be available in June, so I may have to wait until then.  I have plenty to do between now and then anyway, so I probably won't be ready to take the test before the book is available, and that should work out.

In any case, certification or no, it was helpful to read the db admin guide multitenant sections to get a feel for how it works.  Probably chapter 44 was the most interesting because it talked about using resource manager to divide resource usage between the pluggable databases that are sharing the same instance.

- Bobby

Categories: DBA Blogs

managed to make it to Dallas sunday morning ... Hotsos 2014 is rolling

Grumpy old DBA - Mon, 2014-03-03 19:01
It was a little dicey Sunday morning with the weather in Ohio, but I caught a very early flight and made it down to Dallas well before noon.  Lots of turbulence on the airplane ride and we bounced around some, but all in all not bad.  Lots of people traveling ran into hitches one way or another, but by the end of the day Monday most everyone seems to be here now.

Even better, I had a pretty good turnout for my session, maybe even close to 1/2 the attendees, dunno exactly, but the room was pretty full.  The presentation went pretty well; I only blanked out once and then jumped into the next bullet point.  Actually, all in all, for as many times as I had practiced this thing and thought I was going to fail at various points, I think it went over pretty well.

Managed one really cheesy comparison of the shared pool to a picture that I will probably get in trouble for eventually.

Besides some of the people I already knew, I have met several other nice speakers and am now much more relaxed with my presentation over with.  I even managed to update my presentation slightly with some material from this morning's sessions by Kerry Osborne and Tim Gorman.
Categories: DBA Blogs

Good blog post on learning Oracle performance

Bobby Durrett's DBA Blog - Mon, 2014-03-03 17:44

Saw this blog post about how Cary Millsap learned about Oracle performance tuning and I thought it was interesting:

http://carymillsap.blogspot.com/2014/02/how-did-you-learn-so-much-stuff-about.html

It is interesting because I’ve learned so much from Cary Millsap and Jeff Holt’s book and it is cool to see how Cary got to the position he is in as a well-known Oracle performance expert.  I guess I’ve always found the performance tuning part of my Oracle DBA job the most interesting so it is intriguing to see someone else who feels the same way about it and how they pursued their interest and made a living doing it.

Cary’s blog post contains good advice to someone who wants to be an Oracle DBA/performance analyst.  Read the Concepts manual and other relevant manuals.  Read books by the performance experts.  Find opportunities to work on real performance problems.  Pretty much these are the steps I’ve taken, although I’ve always worked as an employee for a company instead of being a consultant.  Still, working for a large company has been a good situation for me because we have a variety of systems and performance is consistently a problem that needs work.

- Bobby

Categories: DBA Blogs

Effective Customer Communications

Pythian Group - Mon, 2014-03-03 15:44

How well do you communicate with your customers? Are you listening, does your customer hear what you are saying? Every day we interact with our customers, team mates, colleagues and managers. Is your message getting delivered?

1. Actively listen.

It is easy to be on a call, but no one can see you “nod”. Make sure to acknowledge the person by replying with “I see” or paraphrase what they said. Side benefit: you look impressive!

2. Ask targeted questions.

Once you’ve practiced #1, you can now ask intelligent, pointed questions. Consider what your customer said, and more importantly what they didn’t say.

3. Show respect.

Respect your audience. Stop talking, try not to interrupt and focus. Avoid distractions such as typing on your computer or checking email. Be in the moment. When you demonstrate respect for your audience you show them you respect yourself as well as them!

4. Tell the truth.

“The whole truth, and nothing but the truth…” Are you on a call with a very upset customer because the DB crashed, and we were at fault? Escalate the call to your manager, and then listen. If we are at fault, we’ll always admit it. It’s counterproductive to try to shift blame. Even in times of stress, try not to lose sight of that fact that we’re in the solutions business. The customer can handle the truth, and they will respect you more for delivering it.

5. Understand what your customer values.

What systems or DBs are most important to your customer? Are they cost driven? React appropriately.

6. Be candid.

Be straightforward — don’t cover things up. State facts and avoid excuses. This, in turn, builds trust and a sense of partnership.

7. Be consistent.

“Say what you mean, and mean what you say.” Nothing could be truer when dealing with customers. Enough said.

8. Dedicate yourself.

Relationships aren’t built overnight. Dedicate yourself to the process, and you will reap the benefits in the long term. Patience is key.

9. Ask for feedback.

How do we get better if we don’t know where we need to improve? Asking for feedback will open the doors of honesty, so be open to the response that you’ve requested.

10. Be Persistent.

Still not sure on the requirements? Seek clarification by rephrasing your questions and paraphrasing what you’ve heard. Before the end of the conversation, be sure that you are clear on next steps. Ask until you know.

11. Build rapport.

Find common ground and let the customer know you can relate. Compliment them and focus on areas of agreement. They have lives outside of the office, so try to get to know them.

12. Smile.

When you smile while you’re talking, your customer can hear it in your voice.

13. You’re an expert — act like one.

Don’t undermine your expertise by asking questions that can be answered internally. Write a note and ask your team.

14. Be flexible.

Be creative in finding solutions. The only limits are the ones we place on ourselves by thinking small. Customers needs change, we need to change and adapt with it. Don’t get stuck in the past.

15. Maintain constant communication.

Don’t restrict yourself to only talking to your customer when there is bad news. Share the good news, too!

16. Be Careful What You Say.

Customers often take things literally. Avoid words such as outage or crash, etc. Continue to be honest, but these words and ones like them trigger panic. Just use different ones.

Effective communication takes work and patience. Commit yourself to improving every day, not just with your customers but with your peers, managers & even your family! 

What do you think?  Have something to add, let me know what works for you!

Please note: Not all of these ideas were my own, but some are. Still I must give credit to that class I took called “Encouraging Upward Communication “. Author/teacher unknown.

Categories: DBA Blogs

Automating DataPump Export

Pythian Group - Mon, 2014-03-03 09:20

What’s the most elaborate thing you have done with DataPump?

So there I was, given the requirement to export multiple partitions for multiple tables where each partition has its own dump file with the format "tablename_partitionname.dmp", pondering how this could be done efficiently.

With the following metadata and requirements, what approach would you take?

If you are curious about the approach I used, then read on.

TABLE_OWNER                    TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------ ------------------------------
MDINH                          A_TAB                          P001
MDINH                          A_TAB                          P002
MDINH                          A_TAB                          P003
MDINH                          A_TAB                          P004
MDINH                          A_TAB                          P005
MDINH                          B_TAB                          P001
MDINH                          B_TAB                          P002
MDINH                          B_TAB                          P003
MDINH                          B_TAB                          P004
MDINH                          B_TAB                          P005
Here’s the demo:
$ nohup sqlplus "/ as sysdba" @exp_api.sql > exp_api.log 2>&1 &

$ cat exp_api.log
nohup: ignoring input

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 26 20:28:07 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> -- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
ARROW:(SYS@db01):PRIMARY> -- Work around for the above mentioned error
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context forever, level 32';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> declare
  2      h1 number;
  3      dir_name varchar2(30);
  4  begin
  5      dir_name := 'DPDIR';
  6      for x in (
  7          select table_owner, table_name, partition_name
  8          from   dba_tab_partitions
  9          where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
 10          order  by table_owner, table_name, partition_position
 11      ) loop
 12
 13          h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');
 14
 15          dbms_datapump.add_file (
 16              handle    => h1,
 17              filename  => x.table_name||'_'||x.partition_name||'.dmp',
 18              reusefile => 1,
 19              directory => dir_name,
 20              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
 21
 22          dbms_datapump.add_file (
 23              handle    => h1,
 24              filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
 25              directory => dir_name,
 26              filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
 27
 28          dbms_datapump.set_parameter (
 29              handle => h1,
 30              name   => 'INCLUDE_METADATA',
 31              value  => 0);
 32
 33          dbms_datapump.metadata_filter (
 34              handle => h1,
 35              name   => 'SCHEMA_EXPR',
 36              value  => 'IN ('''||x.table_owner||''')');
 37
 38          dbms_datapump.metadata_filter (
 39              handle => h1,
 40              name   => 'NAME_EXPR',
 41              value  => 'IN ('''||x.table_name||''')');
 42
 43          dbms_datapump.data_filter (
 44              handle      => h1,
 45              name        => 'PARTITION_LIST',
 46              value       => x.partition_name,
 47              table_name  => x.table_name,
 48              schema_name => x.table_owner);
 49
 50          dbms_datapump.start_job (handle => h1);
 51          dbms_datapump.detach (handle => h1);
 52      end loop;
 53  end;
 54  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.92
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context off';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Review export log:
$ ls -l exp*.log
-rw-r--r--. 1 oracle oinstall 2888 Feb 26 20:28 exp_api.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P004.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P004.log
Review export dump:
$ ls -l *.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 A_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 A_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 A_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 A_TAB_P004.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 B_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 B_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 B_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 B_TAB_P004.dmp
Review job status:
$ grep "successfully completed" exp*.log
exp_api.log:PL/SQL procedure successfully completed.
exp_A_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 26 20:28:09 2014 elapsed 0 00:00:01
exp_A_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Wed Feb 26 20:28:10 2014 elapsed 0 00:00:02
exp_A_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Wed Feb 26 20:28:11 2014 elapsed 0 00:00:02
exp_A_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Wed Feb 26 20:28:13 2014 elapsed 0 00:00:02
exp_B_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02
exp_B_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_07" successfully completed at Wed Feb 26 20:28:16 2014 elapsed 0 00:00:02
exp_B_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_08" successfully completed at Wed Feb 26 20:28:17 2014 elapsed 0 00:00:03
exp_B_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_09" successfully completed at Wed Feb 26 20:28:19 2014 elapsed 0 00:00:02
Review exported partition:
$ grep "exported" exp*.log
exp_A_TAB_P001.log:. . exported "MDINH"."A_TAB":"P001"                      6.351 KB       9 rows
exp_A_TAB_P002.log:. . exported "MDINH"."A_TAB":"P002"                      14.89 KB      90 rows
exp_A_TAB_P003.log:. . exported "MDINH"."A_TAB":"P003"                      101.1 KB     900 rows
exp_A_TAB_P004.log:. . exported "MDINH"."A_TAB":"P004"                      963.3 KB    9000 rows
exp_B_TAB_P001.log:. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
exp_B_TAB_P002.log:. . exported "MDINH"."B_TAB":"P002"                      14.89 KB      90 rows
exp_B_TAB_P003.log:. . exported "MDINH"."B_TAB":"P003"                      101.1 KB     900 rows
exp_B_TAB_P004.log:. . exported "MDINH"."B_TAB":"P004"                      963.3 KB    9000 rows
Example of completed log:
$ cat exp_B_TAB_P001.log
Starting "SYS"."SYS_EXPORT_TABLE_06":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
. . exported "MDINH"."B_TAB":"P001"                      6.351 KB       9 rows
Master table "SYS"."SYS_EXPORT_TABLE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_06 is:
  /tmp/B_TAB_P001.dmp
Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02
SQL Scripts:

exp_api.sql:

set timing on echo on
-- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
-- Work around for the above mentioned error
alter system set events '10298 trace name context forever, level 32';
declare
    h1 number;
    dir_name varchar2(30);
begin
    dir_name := 'DPDIR';
    for x in (
        select table_owner, table_name, partition_name
        from   dba_tab_partitions
        where  table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
        order  by table_owner, table_name, partition_position
    ) loop

        h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');

        dbms_datapump.add_file (
            handle    => h1,
            filename  => x.table_name||'_'||x.partition_name||'.dmp',
            reusefile => 1, -- REUSE_DUMPFILES=Y
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

        dbms_datapump.add_file (
            handle    => h1,
            filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
            directory => dir_name,
            filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

        -- CONTENT = DATA_ONLY    
        dbms_datapump.set_parameter (
            handle => h1,
            name   => 'INCLUDE_METADATA',
            value  => 0);

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'SCHEMA_EXPR',
            value  => 'IN ('''||x.table_owner||''')');

        dbms_datapump.metadata_filter (
            handle => h1,
            name   => 'NAME_EXPR',
            value  => 'IN ('''||x.table_name||''')');

        dbms_datapump.data_filter (
            handle      => h1,
            name        => 'PARTITION_LIST',
            value       => x.partition_name,
            table_name  => x.table_name,
            schema_name => x.table_owner);

        dbms_datapump.start_job (handle => h1);
        dbms_datapump.detach (handle => h1);
    end loop;
end;
/
alter system set events '10298 trace name context off';
exit
Reference: DBMS_DATAPUMP
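Because each job is detached right after it starts, one way to keep an eye on the exports while they run is to query DBA_DATAPUMP_JOBS. A minimal sketch (not part of the demo above):

select owner_name, job_name, operation, job_mode, state, degree
from dba_datapump_jobs
order by job_name;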
Categories: DBA Blogs

Tips on TempDB

Pythian Group - Mon, 2014-03-03 08:50

Are you curious about TempDB? Allow me to help make your understanding of the subject permanent. TempDB is an interesting system database that doesn’t behave quite like the others. It’s kind of like a quirky friend that you’ve come to rely on when you need a hand with getting something done.

Who is TempDB for?

TempDB is accessible to all users that are connected to an instance of SQL Server.

What is TempDB?

TempDB is a system database that’s used to store temporary objects. It utilizes minimal logging, meaning it only stores the information required to recover a transaction – this means no point-in-time recovery. You can’t perform a backup or restore on TempDB, and each time SQL Server is restarted, TempDB gets re-created with the last configured settings.

How does TempDB get used?

TempDB stores a number of different temporary objects:

  • User-created temporary objects: Such as global (prefixed by ##) or local (prefixed by #) temporary tables, temporary stored procedures and cursors.
  • Internal temporary objects: Like work tables for intermediate results (any sorting, such as GROUP BY or ORDER BY)
  • Version stores: Row versions for data-modification transactions when certain isolation levels are in use.
  • …And certain features like online index operations
Needless to say, TempDB gets used a lot. In certain cases, even READ ONLY operations benefit from the use of TempDB.

Size Matters

Think about it like this: I’m 5’7” and 150 pounds. Would you rather have me or Dwayne “The Rock” Johnson helping you move your furniture? If it’s just a chair or two, I’m happy to help, but if you want me to move a sofa and don’t plan on doing any heavy-lifting yourself, you’ll probably want to get a bigger guy.

Just like when you’re moving furniture, when using TempDB, the size configurations are very important.

Common Issues

TempDB typically has three common issues that a DBA can run into: I/O bottlenecks, low disk space, and page contention. These issues and their resolutions are often interrelated.

If you really want my help, and I see that I’m not going to be big enough to move your sofa, I’ll try my hardest to bulk up so that I can perform the task. TempDB feels the same way, and will automatically grow in size (the default autogrowth setting is by 10% of its current size) in order to accomplish whatever job it set out to do. It’s great that TempDB is so willing to help, but in some cases, it can become so concerned with getting bigger that it causes I/O pressure on the disk, and performance can suffer. In even worse scenarios, TempDB might decide that it needs to be so huge that it consumes the whole disk. How am I going to help you move your furniture if my muscles can’t even fit through the doorframe anymore?!

Since TempDB is so handy, everyone wants a piece of it, and TempDB will put a lot of work on its plate, trying to satisfy everyone’s demands.  This can cause page contention, which we witness in the form of the PAGELATCH wait type (Note: This is not PAGEIOLATCH) denoting that a page is protected and already in memory. Queries needing TempDB’s help will have to wait for their turn.

Solving Common Issues

So, how do we keep TempDB from feeling over-worked and checking into the Smack-Down Hotel? The answer lies in capacity planning.

The basic concepts of capacity planning for TempDB include:

  • Set autogrowth to a pre-set amount: If TempDB gets to be the size of The Rock we don’t want it to automatically grow by a percentage of its current size each time. Set the number to a reasonable size to avoid the continuous need for growth, but keep it low enough to avoid wasted space.
  • Capture and replay activity using a trace or run individual queries: Monitor TempDB’s growth as it is used.
  • The tricky part: Estimate the workload (while accounting for concurrent user activity)

You can use SYS.DM_DB_SESSION_SPACE_USAGE and SYS.DM_DB_TASK_SPACE_USAGE to identify the sessions and queries that are consuming TempDB (see the sketch after this list).

  • Configure the appropriate number of TempDB DATA files:  If you thought having one Dwayne “The Rock” Johnson helping you move furniture was cool, imagine having two of him! Increasing the number of TempDB data files will allocate work in a round-robin form.  This will relieve some of the pressure on TempDB. It’s like having one Dwayne “The Rock” Johnson Feng Shui your living room while another does the same to your kitchen. It is best practice to set the number of TempDB data files equal to the number of logical CPU cores. For example, if you have a dual-core processor, then set the number of TempDB data files equal to two.  If you have more than 8 cores, start with 8 files and add four at a time as needed.
  • All TempDB files are created equal: This isn’t George Orwell’s Animal Farm. Ensure that the size and growth settings for all TempDB data files are configured in the same manner.
  • Disk Placement: If possible, spread TempDB data files across different disks to reduce I/O contention. Putting TempDB data files on a different disk than the user data files will further reduce I/O contention. Be sure to use fast disks whenever possible.
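Here is the sketch mentioned above: a minimal query against one of those DMVs, summing tempdb page allocations per session (assumes VIEW SERVER STATE permission; extend it with sys.dm_db_task_space_usage and sys.dm_exec_requests if you want the statement text):

-- tempdb page allocations per session
SELECT s.session_id,
       SUM(s.user_objects_alloc_page_count)     AS user_object_pages,
       SUM(s.internal_objects_alloc_page_count) AS internal_object_pages
FROM sys.dm_db_session_space_usage AS s
GROUP BY s.session_id
ORDER BY internal_object_pages DESC;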

Keep in mind, that TempDB configurations are largely environment-specific. You might not actually need to have the same number of TempDB files as your logical CPU cores. In fact, having too many TempDB data files can cause performance problems due to slowing down the round-robin style allocation. Use your judgement and knowledge of your environment to determine what’s right for your system. If it’s configured properly, then you and your TempDB can be tag-team wrestling (or moving furniture) with queries like heavyweight champions.
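As a hedged example of the sizing advice above, a fixed size and a fixed growth increment can be set with ALTER DATABASE (the logical file names and drive path here are assumptions; check sys.master_files for your instance and size to your own workload):

-- give the primary tempdb data file a fixed size and a fixed growth increment
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- add a second, identically configured data file
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);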

For some more information, check out the following links:

Microsoft Technet: tempdb Database

Microsoft Technet: Capacity Planning for tempdb

SQL Skills: Paul Randal’s A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor core

Categories: DBA Blogs

ORA-27054: NFS file system not mounted with correct options

VitalSoftTech - Mon, 2014-03-03 07:30

For about a week I have been getting an “ORA-27054: NFS file system not mounted with correct options” error when running an RMAN backup to an NFS mount point. The system administrator has not been successful in identifying the cause, as it shows that the correct mount point options are being used. Metalink Doc […]

The post ORA-27054: NFS file system not mounted with correct options appeared first on VitalSoftTech.

Categories: DBA Blogs

"Dropping" an Index Partition

Hemant K Chitale - Sat, 2014-03-01 09:06
Here is a simple way to "drop" an Index Partition  using the 11.2 behaviour of DEFERRED_SEGMENT_CREATION  by dropping the segment for the Index Partition.

CORRECTION : Although I posted this as DEFERRED_SEGMENT_CREATION=TRUE behaviour, Marko has pointed out that it applies even when D_S_C is set to FALSE.  So I am posting two test cases, one with D_S_C set to TRUE, and one with it set to FALSE.

This is the first case with DEFERRED_SEGMENT_CREATION=TRUE

HEMANT>show parameter deferre

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE
HEMANT>
HEMANT>l
1 create table sales_history
2 (sale_date date, product_id number, customer_id number, quantity number, price number, remarks varchar2(125))
3 partition by range (sale_date)
4 (partition P_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
5 partition P_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
6 partition P_2012 values less than (to_date('01-JAN-2013','DD-MON-YYYY')),
7 partition P_2013 values less than (to_date('01-JAN-2014','DD-MON-YYYY')),
8 partition P_2014 values less than (to_date('01-JAN-2015','DD-MON-YYYY')),
9 partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY')),
10 partition P_MAX values less than (MAXVALUE)
11* )
HEMANT>/

Table created.

HEMANT>
HEMANT>l
1 create bitmap index sales_history_prdct_ndx
2 on sales_history(product_id)
3* local
HEMANT>/

Index created.

HEMANT>
HEMANT>select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name = 'SALES_HISTORY_PRDCT_NDX'
5 /

no rows selected

HEMANT>

Since deferred_segment_creation is set to TRUE, no index partition segments exist till data is loaded.

HEMANT>l
1 insert into sales_history
2 select to_date('01-JUL-2010','DD-MON-RR')+ (rownum*365), mod(rownum,5)+1,mod(rownum,1000)+1, 100, 12,'Sale Done'
3 from dual
4* connect by level < 5
HEMANT>/

4 rows created.

HEMANT>
HEMANT>l
1 select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4* and segment_name = 'SALES_HISTORY_PRDCT_NDX'
HEMANT>/

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SALES_HISTORY_PRDCT_NDX P_2011
SALES_HISTORY_PRDCT_NDX P_2012
SALES_HISTORY_PRDCT_NDX P_2013
SALES_HISTORY_PRDCT_NDX P_2014

HEMANT>

So, now 4 index partitions are populated for the 4 rows in the 4 table partitions.

What happens when I make an Index Partition UNUSABLE ?

HEMANT>commit;

Commit complete.

HEMANT>alter index sales_history_prdct_ndx modify partition P_2011 unusable;

Index altered.

HEMANT>
HEMANT>l
1 select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4* and segment_name = 'SALES_HISTORY_PRDCT_NDX'
HEMANT>/

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SALES_HISTORY_PRDCT_NDX P_2012
SALES_HISTORY_PRDCT_NDX P_2013
SALES_HISTORY_PRDCT_NDX P_2014

HEMANT>

The corresponding Index Partition Segment has also "disappeared".  I have released the space that was used by the Index Partition without actually deleting rows from the table.
This is possible with deferred_segment_creation set to TRUE.

Next is the test case with DEFERRED_SEGMENT_CREATION set to FALSE.
Here you will notice that Partitions with zero rows (i.e. those that are empty) still have Segments -- thus all the Partitions are created upfront even in the absence of rows in the table (and index).

HEMANT> show parameter deferred

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     FALSE
HEMANT>
HEMANT> create table transactions_history
2 (txn_id number, txn_date date, txn_product_id number, txn_value number, remarks varchar2(50))
3 partition by range (txn_date)
4 (partition P_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
5 partition P_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
6 partition P_2012 values less than (to_date('01-JAN-2013','DD-MON-YYYY')),
7 partition P_2013 values less than (to_date('01-JAN-2014','DD-MON-YYYY')),
8 partition P_2014 values less than (to_date('01-JAN-2015','DD-MON-YYYY')),
9 partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY')),
10 partition P_MAX values less than (MAXVALUE)
11 )
12 /

Table created.

HEMANT>
HEMANT> create bitmap index txn_hist_prdct_id on
2 transactions_history(txn_product_id)
3 local
4 /

Index created.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2011
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

7 rows selected.

HEMANT>
HEMANT> insert into transactions_history
2 select rownum, to_date('01-JUL-2010','DD-MON-RR')+ (rownum*365), mod(rownum,5)+1,rownum*100,'Txn Done'
3 from dual
4 connect by level < 5
5 /

4 rows created.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2011
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

7 rows selected.

HEMANT>
HEMANT> alter index txn_hist_prdct_id modify partition P_2011 unusable;

Index altered.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

6 rows selected.

HEMANT>
HEMANT> select *
2 from transactions_history
3 order by txn_date
4 /

TXN_ID TXN_DATE TXN_PRODUCT_ID TXN_VALUE
---------- --------- -------------- ----------
REMARKS
--------------------------------------------------
1 01-JUL-11 2 100
Txn Done

2 30-JUN-12 3 200
Txn Done

3 30-JUN-13 4 300
Txn Done

4 30-JUN-14 5 400
Txn Done


HEMANT>


Here we see that setting the P_2011 Partition UNUSABLE results in its segment being dropped. The segments for Partitions P_2010, P_2015 and P_MAX persist even though they are empty.
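If the partition is needed again later, rebuilding it marks the index partition USABLE and recreates its segment (when there are rows to index). A quick sketch using the index from the first test case:

HEMANT> alter index sales_history_prdct_ndx rebuild partition P_2011;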

Categories: DBA Blogs

Partner Webcast – Foundation for Innovation: Oracle Fusion Middleware

Oracle Fusion Middleware is the leading business innovation platform for the enterprise and the cloud. It enables you to create and run agile, intelligent business applications while maximizing...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Database 12c Security: New Unified Auditing

According to the  2012 Data Breach Investigations Report from the Verizon RISK Team, more than 1 billion database records were breached around the world from 2004 to 2011. As companies...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Updates of tables with query high compression slow

Bobby Durrett's DBA Blog - Fri, 2014-02-28 14:22

In an earlier post about how query high compression eliminated row chaining in tables with more than 255 columns, I got a question about whether updates to tables with more than 255 columns and query high compression ran slower with this type of compression than on an uncompressed table.

The answer is that, at least in the tests I ran, the updates of a table with more than 255 columns took much longer on a table with query high compression than on the uncompressed table.

Here is a zip of my test scripts and their logs: zip

I took a table with 256 columns and populated it with 1,000,000 rows filling all columns with data.  Then I updated the first column, the last column, and then all columns in three separate transactions.

Here are the results:

                   Column 1    Column 256  All Columns

Uncompressed run 1 00:00:11.81 00:00:24.20 00:00:57.89
Uncompressed run 2 00:00:12.25 00:00:23.94 00:00:59.28

Compressed run 1   00:02:27.20 00:02:27.65 00:02:50.08
Compressed run 2   00:02:26.83 00:02:26.33 00:02:47.88

I don’t have the time or motivation to dig any deeper into the causes of this, but here is what I think is going on based on these results.

  1. Row chaining in the uncompressed table with more than 255 columns causes the update of the last column or all columns to be slower than updating just the first column.
  2. Updating any or all columns of the table when it is compressed for query high uncompresses the table (I think it remains compressed for OLTP but did not check).
  3. The time it takes to uncompress the table during the update is much more than the time taken to update the uncompressed table, even in situations where we have row chaining.

Maybe someone could construct a situation where the time taken by row chaining exceeds the time taken to uncompress the rows for the update, but in my test case it isn’t close.  The uncompress takes more than twice the time of the updates that experience row chaining.
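For anyone who wants to reproduce a scaled-down version of the comparison without downloading the zip, here is a hedged sketch (illustrative names, only a few columns instead of the 256 in my test, and COMPRESS FOR QUERY HIGH requires storage that supports Hybrid Columnar Compression, such as Exadata):

create table t_uncomp as
select level id, rpad('x',100,'x') c1, rpad('y',100,'y') c2
from dual connect by level <= 1000000;

create table t_comp compress for query high as
select * from t_uncomp;

set timing on
update t_uncomp set c1 = 'a';
commit;
update t_comp set c1 = 'a';  -- expect this one to run noticeably longer
commit;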

- Bobby

Categories: DBA Blogs

Fast Recovery Area Monitoring – Where are the metrics! #em12c

DBASolved - Fri, 2014-02-28 09:10

Oracle Enterprise Manager 12c is a great monitoring tool for the enterprise; I think I’ve said that more than once over the last two years. However, with every release, small yet simple things change.  It is always the small things that will get you.  I had set up monitoring for a client using monitoring templates within OEM12c; everything was being monitored, or so I thought!  I got a call from my client asking why nobody was alerted when the Fast Recovery Area (FRA) was filled due to archive logs.  My initial response was that it should have alerted and that I would look into what happened.

Naturally, the first place I started was with the monitoring template (Enterprise –> Monitoring –> Monitoring Templates –>View/Edit desired template) to check and make sure that the Archive Area Used (%) metric is set. 


The monitoring template for the database instances had the Archive Area Usage (%) metric, and it was set to email a warning at 80% full and a critical at 90% full.  Why were the emails not triggered?  The template had been applied to all database instances.


The easiest way to find out what this metric is supposed to do is to look at the reference documentation on supported metrics (here).  This particular metric is listed under Database Instance.  In reading the description of the Archive Area Used (%) metric, I found a note that leads directly to what the issue was.


As the note says, if the database is using the Fast Recovery Area (FRA) for archive logs, then the metrics associated with archive logs do not apply.  The metric Recovery Area Free Space (%) has to be used to monitor the Fast Recovery Area.  Ok, simple enough; let’s just add the metric to the template.

When trying to add Recovery Area Free Space (%) to the template using the Database Instance Target Type, there are no metrics for Fast Recovery Area (the image shows a partial list of metric categories).  Where are the Fast Recovery Area metrics?


Again, I go back to the reference guide and look up Fast Recovery metrics.  Section 5.34 of the reference guide has a good bit of information on the metrics related to the Fast Recovery Area, but no definitive answers on where these metrics are stored or how to add them to a template.

At this point, what do I know? 

  1. Archive Area Usage (%) cannot be used to monitor the Fast Recovery Area.
  2. I know which metrics are needed to monitor the Fast Recovery Area, but I cannot find them to add to a template.

Maybe “All Metrics” under a database target would shed some light on the situation.

To access “All Metrics” for a database instance, follow Targets –> Databases –> Database Instance.  Once I was at the database instance I wanted to look at, I went to Oracle Database –> Monitoring –> All Metrics.


Once in “All Metrics”, I can see every metric that is associated with an Oracle Database Instance.  At the top of the metric tree, there is a search box for finding a metric.  When I search for “Fast”, I find all the Fast Recovery metrics.


Great, I found all the metrics I want related to the Fast Recovery Area.  Now how do I get them into a template so I can set thresholds for monitoring?  Back to the template (Enterprise –> Monitoring –> Monitoring Templates).

When I edit the template, I noticed (have always noticed) the tabs at the top: General, Metric Thresholds, Other Collected Items, Access.  Normally, I’m only worried about the metrics on the Metric Thresholds tab; since I haven’t had any luck adding the metrics I wanted, let’s take a look at the “Other Collected Items” tab.


Scrolling down through the “Other Collected Items” tab, I find the Fast Recovery category for metrics.


Apparently, the Fast Recovery metrics are already added to the template; but how do the metrics on the “Other Collected Items” tab work, and can they be alerted against?  Again, back to the documentation.

This time when looking at the documentation, I needed to look up templates.  In section 8.2 of the Oracle Enterprise Manager Cloud Control Administrator’s Guide, I found the answer I needed.  Here is why the Fast Recovery Area metrics are not configurable with thresholds:


Oracle has made all the metrics related to Fast Recovery Area non-metric!  That is right, OEM is gathering the information but not allowing you to alert on it with thresholds!  Although it is part of the template, the template will gather the information; but in the end I would need to go to “All Metrics” to see the results.

Workaround

If you want to monitor the Fast Recovery Area and have thresholds against metrics, the solution is to use Metric Extensions.  Metric Extensions allow the end user to create custom metrics for monitoring.  Once a Metric Extension is created, it will be seen in “All Metrics” and can then be added to a monitoring template with thresholds assigned.
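For reference, the SQL behind such a metric extension can be as simple as the classic percent-used calculation against V$RECOVERY_FILE_DEST (a hedged sketch; the posts linked below walk through the full metric extension setup and thresholds):

select round((space_used - space_reclaimable) / space_limit * 100, 2) as pct_fra_used
from v$recovery_file_dest;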

Instead of going into how to develop Metric Extensions in this post, I have provided some really great posts below on how to implement and use Metric Extensions.  I have also provided a link to a similar post by Courtney Llamas of Oracle which shows how the metric extensions are set up.

https://blogs.oracle.com/oem/entry/fast_recovery_area_for_archive

http://dbasolved.com/2014/01/19/yet-another-way-to-monitor-oracle-goldengate-with-oem12c/

http://www.slideshare.net/Enkitec/em12c-monitoring-metric-extensions-and-performance-pages

Summary

Almost everyone now is using the Fast Recovery Area to store their backups and archive logs.  Monitoring of this area is critical; however, out of the box, Oracle Enterprise Manager 12c needs to be adjusted to monitor the Fast Recovery Area with the correct metrics.  This slight change in metric monitoring came as a surprise compared to previous editions of OEM.  In the end, OEM is still a good monitoring tool for the enterprise; we just now need to make some small adjustments.

References

Friendly Oracle Employees – Pete Sharman (would say find him on twitter as well but he doesn’t tweet)

Oracle Enterprise Manager 12c Documentation (http://docs.oracle.com/cd/E24628_01/index.htm)

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, OEM
Categories: DBA Blogs