
DBA Blogs

"Dropping" an Index Partition

Hemant K Chitale - Sat, 2014-03-01 09:06
Here is a simple way to "drop" an Index Partition  using the 11.2 behaviour of DEFERRED_SEGMENT_CREATION  by dropping the segment for the Index Partition.

CORRECTION : Although I posted this as DEFERRED_SEGMENT_CREATION=TRUE behaviour, Marko has pointed out that it applies even when D_S_C is set to FALSE.  So I am posting two test cases, one with D_S_C set to TRUE, and one with it set to FALSE.

This is the first case with DEFERRED_SEGMENT_CREATION=TRUE

HEMANT>show parameter deferre

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
HEMANT>
HEMANT>l
1 create table sales_history
2 (sale_date date, product_id number, customer_id number, quantity number, price number, remarks varchar2(125))
3 partition by range (sale_date)
4 (partition P_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
5 partition P_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
6 partition P_2012 values less than (to_date('01-JAN-2013','DD-MON-YYYY')),
7 partition P_2013 values less than (to_date('01-JAN-2014','DD-MON-YYYY')),
8 partition P_2014 values less than (to_date('01-JAN-2015','DD-MON-YYYY')),
9 partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY')),
10 partition P_MAX values less than (MAXVALUE)
11* )
HEMANT>/

Table created.

HEMANT>
HEMANT>l
1 create bitmap index sales_history_prdct_ndx
2 on sales_history(product_id)
3* local
HEMANT>/

Index created.

HEMANT>
HEMANT>select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name = 'SALES_HISTORY_PRDCT_NDX'
5 /

no rows selected

HEMANT>

Since deferred_segment_creation is set to TRUE, no index partition segments exist till data is loaded.

HEMANT>l
1 insert into sales_history
2 select to_date('01-JUL-2010','DD-MON-RR')+ (rownum*365), mod(rownum,5)+1,mod(rownum,1000)+1, 100, 12,'Sale Done'
3 from dual
4* connect by level < 5
HEMANT>/

4 rows created.

HEMANT>
HEMANT>l
1 select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4* and segment_name = 'SALES_HISTORY_PRDCT_NDX'
HEMANT>/

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SALES_HISTORY_PRDCT_NDX P_2011
SALES_HISTORY_PRDCT_NDX P_2012
SALES_HISTORY_PRDCT_NDX P_2013
SALES_HISTORY_PRDCT_NDX P_2014

HEMANT>

So, now 4 index partition segments have been created for the 4 rows that went into 4 of the table partitions.

What happens when I make an Index Partition UNUSABLE ?

HEMANT>commit;

Commit complete.

HEMANT>alter index sales_history_prdct_ndx modify partition P_2011 unusable;

Index altered.

HEMANT>
HEMANT>l
1 select segment_name, partition_name
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4* and segment_name = 'SALES_HISTORY_PRDCT_NDX'
HEMANT>/

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
SALES_HISTORY_PRDCT_NDX P_2012
SALES_HISTORY_PRDCT_NDX P_2013
SALES_HISTORY_PRDCT_NDX P_2014

HEMANT>

The corresponding Index Partition Segment has also "disappeared".  I have released the space that was used by the Index Partition without actually deleting rows from the table.
This is possible with deferred_segment_creation set to TRUE.
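If the partition is ever needed again, rebuilding it re-creates the segment and makes the index partition usable once more (a quick sketch, not part of the original test; while the partition is UNUSABLE the optimizer simply ignores it when SKIP_UNUSABLE_INDEXES is TRUE, the default):

alter index sales_history_prdct_ndx rebuild partition P_2011;

-- the segment should then reappear in USER_SEGMENTS
select segment_name, partition_name
from   user_segments
where  segment_type = 'INDEX PARTITION'
and    segment_name = 'SALES_HISTORY_PRDCT_NDX';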

Next is the test case with DEFERRED_SEGMENT_CREATION set to FALSE.
Here you will notice that Partitions with zero rows (i.e. those that are empty) still have Segments -- thus all the Partition Segments are created upfront even in the absence of rows in the table (and index).

HEMANT> show parameter deferred

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean FALSE
HEMANT>
HEMANT> create table transactions_history
2 (txn_id number, txn_date date, txn_product_id number, txn_value number, remarks varchar2(50))
3 partition by range (txn_date)
4 (partition P_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
5 partition P_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
6 partition P_2012 values less than (to_date('01-JAN-2013','DD-MON-YYYY')),
7 partition P_2013 values less than (to_date('01-JAN-2014','DD-MON-YYYY')),
8 partition P_2014 values less than (to_date('01-JAN-2015','DD-MON-YYYY')),
9 partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY')),
10 partition P_MAX values less than (MAXVALUE)
11 )
12 /

Table created.

HEMANT>
HEMANT> create bitmap index txn_hist_prdct_id on
2 transactions_history(txn_product_id)
3 local
4 /

Index created.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2011
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

7 rows selected.

HEMANT>
HEMANT> insert into transactions_history
2 select rownum, to_date('01-JUL-2010','DD-MON-RR')+ (rownum*365), mod(rownum,5)+1,rownum*100,'Txn Done'
3 from dual
4 connect by level < 5
5 /

4 rows created.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2011
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

7 rows selected.

HEMANT>
HEMANT> alter index txn_hist_prdct_id modify partition P_2011 unusable;

Index altered.

HEMANT>
HEMANT> select segment_name, partition_name
2 from user_segments
3 where segment_name = 'TXN_HIST_PRDCT_ID'
4 and segment_type = 'INDEX PARTITION'
5 order by 2
6 /

SEGMENT_NAME PARTITION_NAME
------------------------------ ------------------------------
TXN_HIST_PRDCT_ID P_2010
TXN_HIST_PRDCT_ID P_2012
TXN_HIST_PRDCT_ID P_2013
TXN_HIST_PRDCT_ID P_2014
TXN_HIST_PRDCT_ID P_2015
TXN_HIST_PRDCT_ID P_MAX

6 rows selected.

HEMANT>
HEMANT> select *
2 from transactions_history
3 order by txn_date
4 /

TXN_ID TXN_DATE TXN_PRODUCT_ID TXN_VALUE
---------- --------- -------------- ----------
REMARKS
--------------------------------------------------
1 01-JUL-11 2 100
Txn Done

2 30-JUN-12 3 200
Txn Done

3 30-JUN-13 4 300
Txn Done

4 30-JUN-14 5 400
Txn Done


HEMANT>


Here we see that setting the P_2011 Partition UNUSABLE results in its segment being dropped. The segments for Partitions P_2010, P_2015 and P_MAX persist even though they are empty.
.
.
.

Categories: DBA Blogs

Partner Webcast – Foundation for Innovation: Oracle Fusion Middleware

Oracle Fusion Middleware is the leading business innovation platform for the enterprise and the cloud. It enables you to create and run agile, intelligent business applications while maximizing...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Database 12c Security: New Unified Auditing

According to the  2012 Data Breach Investigations Report from the Verizon RISK Team, more than 1 billion database records were breached around the world from 2004 to 2011. As companies...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Updates of tables with query high compression slow

Bobby Durrett's DBA Blog - Fri, 2014-02-28 14:22

In an earlier post about how query high compression eliminated row chaining in tables with more than 255 columns, I got a question about whether updates to such tables run slower with query high compression than on an uncompressed table.

The answer is that, at least in the tests I ran, the updates of a table with more than 255 columns took much longer on a table with query high compression than on the uncompressed table.

Here is a zip of my test scripts and their logs: zip

I took a table with 256 columns and populated it with 1,000,000 rows filling all columns with data.  Then I updated the first column, the last column, and then all columns in three separate transactions.
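For reference, the test followed roughly this pattern (a scaled-down sketch with illustrative object names; the full scripts with all 256 columns are in the zip above, and COMPRESS FOR QUERY HIGH assumes storage that supports Hybrid Columnar Compression, such as Exadata):

-- scaled-down sketch: the real test used 256 populated columns and 1,000,000 rows
create table comp_test
(c1 number, c2 number, /* ... c3 through c255 in the real test ... */ c256 number)
compress for query high;

-- direct-path insert so the compression actually applies
insert /*+ append */ into comp_test (c1, c2, c256)
select rownum, rownum, rownum from dual connect by level <= 1000000;
commit;

set timing on
update comp_test set c1 = c1 + 1;                                -- first column
commit;
update comp_test set c256 = c256 + 1;                            -- last column
commit;
update comp_test set c1 = c1 + 1, c2 = c2 + 1, c256 = c256 + 1;  -- all columns
commit;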

Here are the results:

                   Column 1    Column 256  All Columns

Uncompressed run 1 00:00:11.81 00:00:24.20 00:00:57.89
Uncompressed run 2 00:00:12.25 00:00:23.94 00:00:59.28

Compressed run 1   00:02:27.20 00:02:27.65 00:02:50.08
Compressed run 2   00:02:26.83 00:02:26.33 00:02:47.88

I don’t have the time or motivation to dig any deeper into the causes of this, but here is what I think is going on based on these results.

  1. Row chaining in the uncompressed table with more than 255 columns causes the update of the last column or all columns to be slower than updating just the first column.
  2. Updating any or all columns of the table when it is compressed for query high uncompresses the table (I think it remains compressed for OLTP but did not check).
  3. The time it takes to uncompress the table during the update is much more than the time taken to update the uncompressed table, even in situations where we have row chaining.

Maybe someone could construct a situation where the time taken by row chaining exceeds the time taken to uncompress the rows for the update, but in my test case it isn’t close.  The uncompress takes more than twice the time of the updates that experience row chaining.

- Bobby

Categories: DBA Blogs

Fast Recovery Area Monitoring – Where are the metrics! #em12c

DBASolved - Fri, 2014-02-28 09:10

Oracle Enterprise Manager 12c is a great monitoring tool for the enterprise; I think I’ve said that more than once over the last two years. However, with every release small yet simple things change, and it is always the small things that will get you.  I had set up monitoring for a client using monitoring templates within OEM12c; everything was being monitored, or so I thought!  I got a call from my client asking why nobody was alerted when the Fast Recovery Area (FRA) filled up with archive logs.  My initial response was that it should have alerted, and that I would look into what happened.

Naturally, the first place I started was with the monitoring template (Enterprise –> Monitoring –> Monitoring Templates –> View/Edit desired template) to check that the Archive Area Used (%) metric was set.


The monitoring template for the database instances had the Archive Area Used (%) metric, set to email a warning at 80% full and a critical alert at 90% full, and the template had been applied to all database instances.  Why were the emails not triggered?


The easiest way to find out what this metric is supposed to do is to look at the reference documentation on supported metrics (here).  This particular metric is listed under Database Instance.  In reading the description of the Archive Area Used (%) metric, I found a note that led directly to the issue.


As the note says, if the database is using the Fast Recovery Area (FRA) for archive logs, then the metrics associated with the archive area do not apply.  The metric Recovery Area Free Space (%) has to be used to monitor the Fast Recovery Area.  OK, simple enough; let's just add the metric to the template.
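Outside of OEM, the same information is easy to check directly in the database, which is a useful sanity check while sorting out the monitoring (a quick sketch against V$RECOVERY_FILE_DEST):

select name,
       round(space_limit/1024/1024)       limit_mb,
       round(space_used/1024/1024)        used_mb,
       round(space_reclaimable/1024/1024) reclaimable_mb,
       round((space_used - space_reclaimable) * 100 / space_limit, 2) pct_full
from   v$recovery_file_dest;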

When trying to add Recovery Area Free Space (%) to the template using the Database Instance target type, however, there are no metrics listed for the Fast Recovery Area anywhere in the list of metric categories.  Where are the Fast Recovery Area metrics?


Again, I went back to the reference guide and looked up the Fast Recovery metrics.  Section 5.34 of the reference guide has a good bit of information on the metrics related to the Fast Recovery Area, but no definitive answer on where these metrics are stored or how to add them to a template.

At this point, what do I know? 

  1. Archive Area Used (%) cannot be used to monitor the Fast Recovery Area.
  2. I know which metrics are needed to monitor the Fast Recovery Area, but I cannot find them to add to a template.

Maybe “All Metrics” under a database target would shed some light on the situation.

To access “All Metrics” for a database instance, follow Targets –> Databases –> Database Instance.  Once I was at the database instance I wanted to look at, I went to Oracle Database –> Monitoring –> All Metrics.


Once in “All Metrics”, I can see every metric that is associated with an Oracle Database Instance.  At the top of the metric tree, there is a search box for finding a metric.  When I search for “Fast”, I find all the Fast Recovery metrics.


Great, I found all the metrics I wanted related to the Fast Recovery Area.  Now how do I get them into a template so I can set thresholds for monitoring?  Back to the template (Enterprise –> Monitoring –> Monitoring Templates).

When I edited the template, I noticed (have always noticed) the tabs at the top: General, Metric Thresholds, Other Collected Items, Access.  Normally, I’m only worried about the metrics on the Metric Thresholds tab; since I hadn’t had any luck adding the metrics I wanted, let's take a look at the “Other Collected Items” tab.


Scrolling down through the “Other Collected Items” tab, I find the Fast Recovery category for metrics.


Apparently, the Fast Recovery metrics are already part of the template; but how do the metrics on the “Other Collected Items” tab work, and can they be alerted against?  Again, back to the documentation.

This time when looking at the documentation, I needed to look up templates.  In section 8.2 of the Oracle Enterprise Manager Cloud Control Administrator’s Guide, I found the answer.  Here is why the Fast Recovery Area metrics are not configurable with thresholds:


Oracle has made all the metrics related to the Fast Recovery Area collection-only, with no thresholds!  That is right: OEM is gathering the information but not allowing you to alert on it with thresholds.  Although they are part of the template, the template will only collect the information; in the end I would need to go to “All Metrics” to see the results.

Workaround

If you want to monitor the Fast Recovery Area and have thresholds against the metrics, the solution is to use Metric Extensions.  Metric Extensions allow the end user to create custom metrics for monitoring.  Once a Metric Extension is created, it will be seen in “All Metrics” and can then be added to a monitoring template with thresholds assigned.
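As a rough illustration, the query behind such a Metric Extension can be as simple as a select against V$RECOVERY_AREA_USAGE (a sketch only; the posts linked below walk through the full setup in OEM):

-- space consumed in the FRA by file type; an overall figure can be derived by
-- summing percent_space_used - percent_space_reclaimable across the rows
select file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
from   v$recovery_area_usage;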

Instead of going into how to develop Metric Extensions in this post, I have provided links to some really great posts on how to implement and use them below, including a similar post by Courtney Llamas of Oracle that shows how the metric extensions are set up.

https://blogs.oracle.com/oem/entry/fast_recovery_area_for_archive

http://dbasolved.com/2014/01/19/yet-another-way-to-monitor-oracle-goldengate-with-oem12c/

http://www.slideshare.net/Enkitec/em12c-monitoring-metric-extensions-and-performance-pages

Summary

Almost everyone now is using the Fast Recovery Area to store their backups and archive logs.  Monitoring of this area is critical; however, out of the box Oracle Enterprise Manager 12c needs to be adjusted to monitor the Fast Recovery Area with the correct metrics.  This slight change in metric monitoring came as a surprise compared to previous editions of OEM.  In the end, OEM is still a good monitoring tool for the enterprise; we just need to make some small adjustments.

References

Friendly Oracle Employees – Pete Sharman (would say find him on twitter as well but he doesn’t tweet)

Oracle Enterprise Manager 12c Documentation (http://docs.oracle.com/cd/E24628_01/index.htm)

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, OEM
Categories: DBA Blogs

Log Buffer #361, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-02-28 08:44

Winter is slowly receding, but the fragrance of blooming blog posts is already here in this Log Buffer Edition which covers Oracle, SQL Server and MySQL.

Oracle:

Patrick Barel is trying something with the types in Oracle.

Weijun is talking about a bug in Kerberos used by Java’s HTTP.

Paul Anderson has told us that the BIApps 11.1.1.7.1 ODI Cumulative Patch 1 has been released.

Courtney Llamas writes about the fast recovery area for archive destination.

Stand Apart From Your Peers With Oracle Database Performance and Tuning 2015 Certified Implementation Specialist Certification.

SQL Server:

This article will help us identify the backup which was used to restore the database.

Steve Jones is putting out an open call for guest editorial pieces in the spring of 2014.

DBAs Checklist for Designing and Reviewing SQL Server System Architectures.

Views are a valuable tool for the SQL Server Developer, because they hide complexity and allow for a readable style of SQL expression.

SQL SERVER – 28 Links for Learning SQL Wait Stats from Beginning.

MySQL:

Real-time data loading from MySQL to Hadoop.

Upcoming EMEA Events with MySQL 2014.

Repos and Distros: Upstream and Downstream.

One of Michael McLaughlin's students asked how to convert MySQL Workbench’s default Crow’s Foot (IE) diagram to one of the other supported formats – Classic, Connect to Columns, UML, and IDEF1X.

The MySQL client has some functionality some of us never use. Why would you use it, and what is the added value?

Categories: DBA Blogs

Pretty close to getting out speaker notifications for GLOC 2014 ...

Grumpy old DBA - Thu, 2014-02-27 18:51
Our official notifications should be out by mid-week (next Wednesday), and we appreciate all of the people who submitted abstracts.  Thank you, every one of you, from all of us involved in the conference.

Wow, this is going to be another outstanding conference.  Unfortunately, we are also going to have to turn down a large number of quality presentations; we simply do not have the time or room to run all the sessions that we would like to present.  So my apologies for that ... our only good option is to grow the conference even larger in the future!

Now I am personally just hoping for a semi-quiet weekend to finish final preparations for Dallas/Hotsos 2014, and also hoping to have no problems flying down there Sunday morning!

Stay tuned for more news on GLOC 2014!
Categories: DBA Blogs

Production RMAN recovery

Bobby Durrett's DBA Blog - Thu, 2014-02-27 10:31

I was on call again last week and again had to do a database recovery using RMAN.  I thought I might as well document some lessons learned even though it went quite smoothly.

First, here are the steps I took.  I’ve modified the commands and script so that our real usernames, passwords,  database names, and host names are obscured, but otherwise it is exactly what I ran.

Environment wise we are on 11.2.0.3 Oracle database running on HP-UX 11.31 on 64 bit Itanium.

First I had to build the Unix script that I would run to do the recovery:

rman target / catalog CATUSER/CATPASSWORD@CATDB <<EOF

run {
allocate channel 'dev_0' type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OURDB,OB2BARLIST=VTL_ourhost_OURDB_ora4)';
allocate channel 'dev_1' type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OURDB,OB2BARLIST=VTL_ourhost_OURDB_ora4)';
allocate channel 'dev_2' type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OURDB,OB2BARLIST=VTL_ourhost_OURDB_ora4)';
allocate channel 'dev_3' type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OURDB,OB2BARLIST=VTL_ourhost_OURDB_ora4)';
allocate channel 'dev_4' type 'sbt_tape'
parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OURDB,OB2BARLIST=VTL_ourhost_OURDB_ora4)';
set until time "to_date('Feb 20 2014 21:38:10','Mon DD YYYY HH24:MI:SS')";
restore database;
recover database;
}

exit
EOF

In my previous recovery scenario, which I blogged about, I didn’t use the catalog because I was recovering to a different database just to retrieve data from before the failure.  In this case I was recovering the production database in place to our Thursday night backup, which we took just before embarking on a PeopleTools upgrade.  So, the first thing I had to do was look up the user, password, and connect string for the catalog.  Then I just tested connecting without doing anything.

rman target / catalog CATUSER/CATPASSWORD@CATDB
exit

Next I had to figure out the allocate channel commands.  In our case we have a graphical front end to our data protector backup system and I was able to extract the log from Thursday night’s backup.  I copied the allocate channel commands unchanged from the log to my script.

The hardest part of setting up this script was knowing what time to use in the set until time command.  I decided to use the last time that appeared in the backup output.  I knew that the application was down, so not much updating was occurring, and I wasn’t worried about going past the earliest possible point that I could use.  I knew that our upgrade didn’t start until after the backup finished, so I reasoned that the last backup message would be safe.  But the backup program spits out messages in a different time zone than our database is in (I’m in Arizona, but my database is on Chicago time…).  So, getting the time zone right was key; otherwise I could be off by an hour.

Time: 2/20/2014 8:38:10 PM

I had to translate it to

Feb 20 2014 21:38:10

This got it into central time and 24 hour format.
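A quick way to double-check a conversion like this is to let the database do it (a sketch; Arizona stays on MST year-round, and our database runs on US Central time):

select to_char(
         new_time(to_date('20-FEB-2014 20:38:10','DD-MON-YYYY HH24:MI:SS'),
                  'MST','CST'),
         'Mon DD YYYY HH24:MI:SS') as until_time
from dual;

-- UNTIL_TIME
-- --------------------
-- Feb 20 2014 21:38:10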

Lastly, I considered whether we had enough space for all the archive logs that would be retrieved for the recovery.  Since we were recovering from just after a hot backup and with the application down, I decided that it couldn’t be more than one or two archive logs.  As it turned out, the archive logs were still on disk when the recovery ran, so they didn’t even need to be restored from tape.
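Before shutting the database down I could also have confirmed that the needed archived logs were still on disk with a quick query (a sketch; the one-day window is just an example):

select name, sequence#, completion_time, deleted, status
from   v$archived_log
where  completion_time > sysdate - 1
order  by completion_time;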

To actually run the script I first manually put the database in mount mode:

sqlplus / as sysdba
shutdown immediate
startup mount

Then I ran the recovery nohup in the background:

nohup ./recover.sh > recover.out &

Lastly, after carefully reviewing the output from the recovery to make sure it looked right I opened the database resetting logs:

sqlplus / as sysdba
alter database open resetlogs;

It went very smoothly.  It was a little nerve-racking recovering the production system in place, but we did make two different backups of it before doing the recovery so presumably we could have gone back to the way it was before the recovery.

- Bobby

Categories: DBA Blogs

Modify Primary Key Using Index (Learning To Fly)

Richard Foote - Thu, 2014-02-27 00:34
One of the things I love about working with Oracle Database technologies is that there’s just so much one can learn. I make it an active goal of mine to try to learn something new at work each and every day, no matter how big or small. One of these days, I might blog about a presentation I put together […]
Categories: DBA Blogs

Explaining Foreign Key Index in One Illustration

Pythian Group - Wed, 2014-02-26 08:45

Think of the leash as an index:


Categories: DBA Blogs

SYSBACKUP, SYSDG and SYSKM Administrative Users in Oracle 12c

VitalSoftTech - Wed, 2014-02-26 07:00

Large enterprise database environments have dedicated DBAs performing specific database roles. Some of these specific roles include database storage management, tuning, backup and recovery, etc. Inside these organizations there has been growing concern to ensure that the SOD (Segregation of Duties) concept is applied when fulfilling these roles.

The post SYSBACKUP, SYSDG and SYSKM Administrative Users in Oracle 12c appeared first on VitalSoftTech.

Categories: DBA Blogs

WebLogic Suite: Foundation Infrastructure for Oracle iAS (Internet App Server) Implementations

WebLogic has become the strategic application server infrastructure for Oracle Fusion Middleware and Applications. It is designed to provide a standards-based, mission critical and secure...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Recover a table with RMAN (Oracle 12c new feature)

DBA Scripts and Articles - Tue, 2014-02-25 14:06

Before Oracle 12c, recovering a table from loss or corruption was a difficult affair. Most of the time you notice that the table is lost or corrupted too late to use the flashback technology, and then the only way to go is to duplicate your database to a time the table was still valid, export [...]

The post Recover a table with RMAN (Oracle 12c new feature) appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OEM Agent Core directory deleted/re-add–What Happen? (EXADATA)

DBASolved - Mon, 2014-02-24 14:43

Ever have one of those days when someone calls and says “We/I accidentally deleted the whole directory; can you get it back for me”?  Well, over the weekend I had that happen with an OEM 12c agent on an Exadata, where the core directory for the agent was deleted by mistake.  Before I could evaluate the situation, I had to reassure the end user that the removal of the core directory under the agent home wasn’t a major issue: the agent was still running in memory and reporting to OEM.  For all intents and purposes, the agent was still monitoring the Exadata node.

After the end user was assured that the problem could be fixed, the question became:  How can the missing core directory be replaced?

The simplest way is to do a reinstall of the agent silently; however, this takes a lot of time and effort to get it working again.  I wanted the shortest possible way to recover the directory so there would not be a huge window of unmonitored time. 

In this post, I want to show you how I recovered the agent’s core directory without losing any targets or having to resync the agent from OEM afterwards.

Note:  I have a good post on agent silent installs here, which is helpful for understanding some of the process that was used.  Although I cover every environment in my silent install post, Maaz Anjum covers silent installs for Windows pretty well too; check it out here.

As I mention in the note above, I needed to pull the correct agent binaries from the OMS library (outlined in the silent install post).  Once I had the binaries extracted to a temporary location, I needed to edit the response file (agent.rsp).  The response file was edited according to the silent install post.

The values that were changed within the response file were for:

OMS_HOST
EM_UPLOAD_PORT
AGENT_REGISTRATION_PASSWORD
AGENT_INSTANCE_HOME
AGENT_PORT
ORACLE_HOSTNAME
s_agentHomeName

All the values for these variables need to match what the existing agent had (information can be found in OEM under Setup –> Manage Cloud Control –> Agents).


With the response file ready, but before I could run a silent install, the currently running agent needed to be stopped.  With the core directory gone, the only way to stop the running agent is to use “ps -ef | grep agent” and “kill -9 <process id>”.

Reminder: there will be two (2) processes that need to be killed; one is a Perl process and the other a Java process.  Once the processes are killed, the agent that was running is down.

Once the agent is down, the agent software can be installed using the silent install method.  Now, here is where the install process becomes different from a normal agent installation.  Earlier I said the core directory under the agent home was deleted.  This means that everything else is still in place; I only have to put the core directory of the binaries back and relink it.   How can I get only the core directory out of the binaries?

In trying to answer this question, I used the -help option of the agentDeploy.sh script, which provides a few examples of how to use it.  I have listed these examples below:

Example1:
         agentDeploy.sh AGENT_BASE_DIR=/scratch/agent OMS_HOST=hostname.domain.com EM_UPLOAD_PORT=1000 INVENTORY_LOCATION=/scratch AGENT_REGISTRATION_PASSWORD=2Bor02B4
        This command is expected to do the complete agent install and configuration with the provided inputs.

Example2:
         agentDeploy.sh AGENT_BASE_DIR=/scratch/agt RESPONSE_FILE=/scratch/agent.rsp -softwareOnly -invPtrLoc /scratch/agent/oraInst.loc -debug
        This command is expected to copy the agent bits to the agent base directory.

Example3:
         agentDeploy.sh AGENT_BASE_DIR=/scratch/agent OMS_HOST=hostname.domain.com EM_UPLOAD_PORT=1000 -forceConfigure
        This command is expected to do the agent install and also force the agent configuration even though the oms host and port are not available.

Example4:
         agentDeploy.sh AGENT_BASE_DIR=/scratch/agent AGENT_INSTANCE_HOME=/scratch/agent/agent_inst -configOnly
        This command is expected to do the agent configuration only with the provided inputs.

Example5:
         agentDeploy.sh AGENT_BASE_DIR=/scratch/agent s_agentHomeName=myAgent -ignorePrereqs
        This command is expected to skip the prereqs and then continue with the agent deployment also notice in the inventory that instead of the default home name, myAgent home name will be assigned for the agent home.

As I looked at the examples, I noticed Example 2, a software-only install, and decided to give that a try.  Keep in mind all I needed was the core directory.  The command I used to do a software-only install was:

./agentDeploy.sh AGENT_BASE_DIR=/u01/app/oracle/product/agent12c RESPONSE_FILE=/tmp/agent_12030/agent.rsp -softwareOnly

As the deployment started, I noticed that the rebuilds and relinks for all the binaries were being performed against the agent home.  Once the deployment was done updating all the dependencies, it completed successfully and returned me to the command prompt.


The software-only deployment of the silent install replaced the missing core directory in the agent home.  Now the only question left was: would the agent start?

To test if the agent would start, I needed to go into the agent home:

$ cd /u01/app/oracle/product/agent12c/core/12.1.0.3.0/bin

$ ./emctl status agent

In running the above commands, I was expecting to see the agent status reported as down, since I had just completed the agent deployment.  What I received instead was an unusual error:

$ ./emctl status agent

EM Configuration issue. #DEFAULT_EMSTATE# not found.

In researching this error (#DEFAULT_EMSTATE#) in My Oracle Support (MOS), I found only two notes (1607805.1/1543473.1).  From reading the notes and reviewing the emctl file under the core directory, I identified that this was a configuration problem.  So what needed to be done to fix it?

To make a long story short, the simplest way to fix this issue was to copy an emctl file from another Exadata node.  The reason this was the simplest is that all the nodes have the same agent home configuration.  Once the updated emctl was put in place, I was able to start the agent and get all the information I wanted from it.


With the agent running, my next question was: what would OEM’s reaction be to the agent being reconfigured/rebuilt this way?   To my surprise, OEM didn’t have a problem. The agent was able to upload with no issues and OEM reported that no re-syncing was needed.  The only thing I can conclude from this is that, since the configuration files were never deleted, once the core directory was relinked OEM saw everything exactly as it was before the directory was deleted.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Exadata, Golden Gate
Categories: DBA Blogs

Nothing to Blog About? Think Again!

Pythian Group - Mon, 2014-02-24 08:46

How often do you run off to your favourite sites and devour information put together by others? It’s quite remarkable how hard stuff is until you’ve done it one or more times. The seemingly insurmountable task becomes second nature once mastered. Hey, Alex Gorbachev or Jonathan Lewis were once beginners just like you and me. In the days when I got started there was no internet, no Twitter, no Metalink (MOS), and little, if any, email. We used the good old-fashioned phone … a traditional landline at that. Phones used to have a round apparatus with holes in it called a “dial.”

Something you have done may be a mystery to others and the seemingly most menial tasks (to you) may be like a vertical wall to others. Think back to 10 things you have done in the past few weeks. Estimate the number of those 10 things that would be “news” to others… Got that right… All 10. We owe it to others to blog about what we do on a daily basis and go out of our ways to find time to educate the masses.

To shed some light on an example that went down in the early to mid ’90s, picture the following (purveyors of SQL*Forms 2 and 3 may remember this). Triggers were used in a way similar to how they are today. Events occurred as we moved around a screen (character-based at that :)). A common trigger was called POST-CHANGE, and I became frustrated as we went to Forms 3, remembering that this trigger would not fire until the cursor left a field. I needed a way to execute a trigger while the cursor still resided in a field. Along comes a developer with 3 months’ experience. She suggests coding an ON-VALIDATE-FIELD trigger. Swell, I said to myself, knowing well that this trigger too would not fire until the cursor left the field. So as not to offend her, I did just that. She also suggested placing the text “ENTER;” in the trigger code, and all would proceed exactly as hoped.

I tried it out after chuckling to myself, based on what I already knew about Forms, and it WORKED. There is the rub… No matter how little you may know, your expertise may lie in a corner of technology others have not experienced yet. Your experiences are valuable to others and it is your obligation to blog. Nothing to blog about — think again.

Categories: DBA Blogs

Log Buffer #360, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2014-02-24 08:40

This week, there are some announcements from the database arena and then there are some tips and tricks coupled with some cool techniques, and this Log Buffer covers some of them.

Oracle:

startCD options to install Oracle E-Business Suite Release 12.2.

IDC Big Data in the Enterprise: When Worlds Collide.

Friday prank: select from join join join.

Friday Spotlight: Virtual Sysadmin Day and Hands On Labs.

Are You “Millennial Ready”?

SQL Server:

Show the results for a date even if there is no data.

Hosting SQL Server in Hyper-V Replica Environment.

SQL Saturday is coming to Vienna on March 6 with a full-day of technical training and networking, featuring international speakers.

Understanding Graphical Execution Plans – Part 2: How to Create One.

SQL Server security related tasks can be divided into four main categories.

MySQL:

Before every release: A glimpse into Percona XtraDB Cluster CI testing.

MySQL Connector/Python v1.1.6 GA

No Hadoop Fun for Me at SCaLE 12X :(

FromDual.en: Why FromDual customers are using Galera Cluster for MySQL

Percona XtraDB Cluster 5.6.15-25.4 is now available.

Categories: DBA Blogs

RMAN Image Copy File Names

Hemant K Chitale - Sun, 2014-02-23 09:38
RMAN provides a convenient way to name Image Copies (backups) of datafiles so that they retain the same file name, without the path.  This is possible with the "%b" FORMAT modifier.

Here's a demo :

[oracle@localhost ~]$ cd /IMA*/ORCL*
[oracle@localhost ORCL_DB]$ pwd
/IMAGE_BACKUP/ORCL_DB
[oracle@localhost ORCL_DB]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Feb 23 23:24:55 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup as copy database format '/IMAGE_BACKUP/ORCL_DB/%b';


Starting backup at 23-FEB-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/sysaux01.dbf tag=TAG20140223T232527 RECID=3 STAMP=840324368
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/home/oracle/app/oracle/oradata/orcl/system01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/system01.dbf tag=TAG20140223T232527 RECID=4 STAMP=840324400
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/home/oracle/app/oracle/oradata/orcl/users01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/users01.dbf tag=TAG20140223T232527 RECID=5 STAMP=840324414
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_hemant_8pnowslc_.dbf
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 02/23/2014 23:26:55
ORA-01276: Cannot add file /IMAGE_BACKUP/ORCL_DB/o1_mf_hemant_8pnowslc_.dbf. File has an Oracle Managed Files file name.
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/undotbs01.dbf tag=TAG20140223T232527 RECID=6 STAMP=840324420
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/home/oracle/app/oracle/oradata/orcl/example01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/example01.dbf tag=TAG20140223T232527 RECID=7 STAMP=840324428
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
copying current control file
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 02/23/2014 23:27:12
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 23-FEB-14
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 02/23/2014 23:27:12
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn

RMAN>
RMAN> quit


Recovery Manager complete.
[oracle@localhost ORCL_DB]$ pwd
/IMAGE_BACKUP/ORCL_DB
[oracle@localhost ORCL_DB]$ pwd
/IMAGE_BACKUP/ORCL_DB
[oracle@localhost ORCL_DB]$ ls -l
total 2641496
-rw-rw---- 1 oracle oracle 85991424 Feb 23 23:27 example01.dbf
-rw-rw---- 1 oracle oracle 1320165376 Feb 23 23:26 sysaux01.dbf
-rw-rw---- 1 oracle oracle 881860608 Feb 23 23:26 system01.dbf
-rw-rw---- 1 oracle oracle 178266112 Feb 23 23:27 undotbs01.dbf
-rw-rw---- 1 oracle oracle 235937792 Feb 23 23:26 users01.dbf
[oracle@localhost ORCL_DB]$


However, as the errors above show, this fails with OMF files and controlfile autobackups.

Also, you have to be careful to check that you do not have two datafiles with the same file name in two different folders: RMAN copies the first datafile it encounters and then raises an error on the second, because the output file already exists.

For example :

SQL> create tablespace test_tbs datafile '/home/oracle/app/oracle/oradata/orcl/test_tbs_01.dbf' size 100M;

Tablespace created.

SQL> alter tablespace test_tbs add datafile '/tmp/test_tbs_01.dbf' size 50M;

Tablespace altered.

SQL>
SQL> !ls -l /home/oracle/app/oracle/oradata/orcl/test_tbs_01.dbf
-rw-rw---- 1 oracle oracle 104865792 Feb 23 23:34 /home/oracle/app/oracle/oradata/orcl/test_tbs_01.dbf

SQL> !ls -l /tmp/test_tbs_01.dbf
-rw-rw---- 1 oracle oracle 52436992 Feb 23 23:34 /tmp/test_tbs_01.dbf

SQL> exit

RMAN> backup as copy tablespace test_tbs format '/IMAGE_BACKUP/ORCL_DB/%b';

Starting backup at 23-FEB-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=47 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/orcl/test_tbs_01.dbf
output file name=/IMAGE_BACKUP/ORCL_DB/test_tbs_01.dbf tag=TAG20140223T233608 RECID=8 STAMP=840324970
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/tmp/test_tbs_01.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 02/23/2014 23:36:13
ORA-19504: failed to create file "/IMAGE_BACKUP/ORCL_DB/test_tbs_01.dbf"
ORA-27038: created file already exists
Additional information: 1

RMAN>
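Before relying on %b for a whole-database copy, one way to spot datafiles that share the same base file name is a quick query against V$DATAFILE (a sketch; on Windows the path separator would differ):

select substr(name, instr(name, '/', -1) + 1) as base_name,
       count(*)                               as datafiles
from   v$datafile
group  by substr(name, instr(name, '/', -1) + 1)
having count(*) > 1;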

The %b format for datafiles is also supposed to be usable with the SET NEWNAME clause in a RESTORE run.
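If it works as documented, a restore that relocates the datafiles while preserving their base file names might look something like this (a sketch only, not tested here; the destination path is hypothetical):

run {
  set newname for database to '/RESTORE_DEST/%b';
  restore database;
  switch datafile all;
  recover database;
}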

.
.
.
Categories: DBA Blogs

emergency monitoring and forcing a flush of the shared pool ... use with caution ONLY if really needed

Grumpy old DBA - Sat, 2014-02-22 13:11
Flushing the shared pool has an impact on systems, forcing at least the re-parsing of SQL statements and new (perhaps even changed) execution plans.  Use with caution on any production environment and test, and test again, before deploying anything like this.

The usual approach for a system suffering from shared pool fragmentation and/or ORA-04031 errors is to identify SQL not using bind variables and consider implementing CURSOR_SHARING = FORCE (best done via an after-logon, session-based database trigger) ... but at times even that may need to be supplemented with some emergency monitoring and flushing.

The code below needs to be looked at and tested in your environment (test system first) ... it may need setup and grants put in place to get it operational (likely a direct grant on V_$SGASTAT and the ALTER SYSTEM privilege for the owning schema).  It is currently hard-coded to flush when less than 256 MB of free memory is available ... it could be kicked off every minute by some kind of scheduler, cron job, or database job (a scheduler sketch follows the procedure below).

CREATE OR REPLACE PROCEDURE FLUSH_SHARE_POOL is
   v_free_space_meg NUMBER;
BEGIN
   SELECT round(bytes/1024/1024,0) INTO v_free_space_meg FROM v$sgastat
    WHERE name = 'free memory' AND pool =  'shared pool';
   IF ( v_free_space_meg < 256 ) THEN
      EXECUTE IMMEDIATE 'alter system flush shared_pool';
   END IF;    
END FLUSH_SHARE_POOL;
/
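If you want the database itself to run this check regularly, one option is a scheduler job (a sketch; the job name and the one-minute interval are just examples):

begin
  dbms_scheduler.create_job(
    job_name        => 'FLUSH_SHARE_POOL_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'FLUSH_SHARE_POOL',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE);
end;
/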
Categories: DBA Blogs

Oracle SOA Suite 11g - The Ingredient for Innovation: New Integration Adapters

Enterprise of all sizes, are evaluating cloud computing. Many businesses are making new economic investments in public cloud, while they continue to rely on their existing on-premise IT investments....

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQL Analytics

Hemant K Chitale - Fri, 2014-02-21 22:56
Oracle Learning Library video series on SQL Analytics
SQL Analytics Overview Video Series.
.
.
Categories: DBA Blogs