
Feed aggregator

How to Configure an Azure Point-to-Site VPN – Part 1

Pythian Group - Tue, 2014-08-05 06:24

This blog post is the first in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. Today’s post will teach you how to configure a virtual network and a dynamic routing gateway, and the following blog posts will demonstrate how to create the certificates, and how to configure the VPN client.

Nowadays we are opting to move parts of our systems, or even entire systems, to the cloud. In order to build a hybrid environment, we need to find a way to connect our enterprise/local network, also known as on-premises, and the cloud.

Currently, we have two options to connect Azure and On-Premises:

  1. Using a Point-to-Site VPN
  2. Using a Site-to-Site VPN

The first option, a Point-to-Site VPN, is the one I’ll be demonstrating. It is recommended when you need to connect only some servers of your network to Azure. On the other hand, a Site-to-Site VPN connects your entire on-premises network to Azure.

CONFIGURE A VIRTUAL NETWORK AND A DYNAMIC ROUTING GATEWAY

To start, connect to your Azure account (https://manage.windowsazure.com/) and click the “add” button in the bottom left corner.

    1. Choose the option to create a custom virtual network.
    2. Fill in the virtual network name and the location where you want to create it.
    3. Check “Configure a Point-to-Site VPN” (the DNS server is an optional setting, used for name resolution between this virtual network and your on-premises network).
    4. Set the IP range accordingly, after verifying that this range does not overlap with your on-premises network.
    5. Click the “add gateway subnet” button and then the finish button (check mark).
    6. Now you need to wait a few minutes while the virtual network is being created.
    7. You will see a confirmation message when the process is done.
    8. At this stage, you will be able to see the network you created under the network section.
    9. Now we need to create a “Dynamic Routing Gateway”. To do this, click on the network you just created and go to the Dashboard.
    10. Click the “CREATE GATEWAY” button at the bottom of the page and confirm your intention by selecting “Yes”.
    11. It may take a few minutes. You will see the message “CREATING GATEWAY” while the gateway is being created.
    12. After a successful creation, the gateway is shown on the network dashboard.

At this point, we are done with the Virtual Network creation. Now we can proceed to the certificate creation steps… Stay tuned for my next two posts.

Categories: DBA Blogs

Cascading Shuttle - Keep Selected Values

Denes Kubicek - Tue, 2014-08-05 06:14
There are probably several good ways to solve the following problem:

    1. you have a simple select list which cascades a shuttle element
    2. you select something and the shuttle is populated
    3. after that, you pick a value in the shuttle and move it to the right side
    4. then you change the select list again
    5. the values you selected in your shuttle are now lost


This example shows how this can be done. This question appears in the APEX Forum from time to time, and I decided to provide one simple solution for it.

Categories: Development

What Is Oracle DB Time, DB CPU, Wall Time and Non-Idle Wait Time

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

The key Oracle Database time parameters are elapsed time, database time (DB Time), non-idle wait time and server process CPU consumption (DB CPU) time.

This first post is pretty basic, yet core fundamental stuff. So in the following two posts I'll introduce elapsed time, add parallelism into the mix and revisit wall time. What initially seems simple can take some very interesting twists!

You probably know that I am all about quantitative Oracle performance analysis. I research, write, teach, and speak about it. I even have an OraPub Online Institute seminar about how to tune your Oracle Database systems from a standard AWR or Statspack report using an Oracle Time Based Analysis (OTBA) framework.

So let's get started!

Wall Time & Run Time
I'll start with Wall Time because that is close (hopefully) to what a user experiences. In fact, if there is no time gap between the Oracle Database and the user, then we can do a little math and figure out what the users are, on average, experiencing. I'll get back to wall time in the next post, where I include elapsed time and parallelism into the equation.

DB CPU
DB CPU is Oracle server/foreground/shadow process CPU consumption. Each Oracle server process gathers its own CPU consumption using the time and/or getrusage C function system call. So unless there is a major screw-up by either the operating system or the Oracle kernel developers, the time will be good... very good. The name DB CPU is taken from the actual statistic name, which is found in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB CPU. The value shown will be all server process CPU consumption within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.)
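
If you want to see DB CPU outside of an AWR or Statspack report, you can also query the time model directly. Here is a minimal query (a sketch, not one of the OSM scripts) against v$sys_time_model; remember the raw values are in microseconds:

-- time model values are stored in microseconds; convert to seconds
select stat_name, value/1000000 as seconds
from   v$sys_time_model
where  stat_name in ('DB CPU', 'background cpu time');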

Below is an example Time Model Statistics screen shot from a standard AWR report. I've highlighted DB CPU.



If you run one of my OraPub System Monitor (OSM) time related tools like ttpctx.sql or rtpctx.sql you see a CPU time statistic. That contains both the DB CPU (i.e., server process) and "background process cpu" statistics. Here's an example.

SQL> @ttpctx.sql
Remember: This report must be run twice so both the initial and
final values are available. If no output, press ENTER about 11 times.

Database: prod35                                                  31-JUL-14 12:09pm
Report:   ttpctx.sql              OSM by OraPub, Inc.                         Page 1
                      Total Time Activity (142 sec interval)

                                                          Avg Time        Time      Wait
Time Component                           % TT    % WT  Waited (ms)       (sec)  Count(k)
------------------------------------- ------- ------- ----------- ----------- ---------
CPU consumption: Oracle SP + BG procs    95.95    0.00       0.000     347.212         0
PX Deq: Slave Session Stats               1.45   35.74       0.113       5.240        47
library cache: mutex X                    0.58   14.26       0.136       2.090        15
PX Deq: Slave Join Frag                   0.43   10.57       0.067       1.550        23
PX Deq: Signal ACK EXT                    0.29    7.16       0.045       1.050        23
control file parallel write               0.28    7.03      20.600       1.030         0
PX qref latch                             0.27    6.75       0.012       0.990        85
latch free                                0.20    4.91       0.090       0.720         8
log file parallel write                   0.16    4.02      12.826       0.590         0

Non-Idle Wait Time
When an Oracle process cannot consume CPU, it will pause. As Oracle DBAs, we know this as wait time. Sometimes a process waits and it's not a performance problem, so we call this Idle Wait Time. Oracle background processes typically have lots of idle wait time. However, when a user is waiting for something to complete and way down deep their Oracle server process is waiting to get perhaps a lock or latch, this is Non-Idle Wait Time. Obviously, when tuning Oracle we care a lot about non-idle wait time.

Below is a simple query showing wait event classifications. In this system there are 119 Idle wait events, so all the rest would be classified as non-idle wait events.
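
(The query itself appears as a screenshot in the original post; a query along these lines, shown here only as a sketch, produces the same breakdown.)

-- count wait events per wait class; the 'Idle' class is the one we usually ignore
select wait_class, count(*) as events
from   v$event_name
group by wait_class
order by wait_class;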

Oracle uses a variety of methods to determine wait time. I have a number of postings and educational content available about this. You'll see them if you do an OraPub or blog search for "time".

When working with non-idle wait time, remember the 80/20 rule. Most of the wait time we care about will be contained within the largest ("top") two to four wait events. Don't waste YOUR time focusing on the 20%.

Here's an example. In the screen shot below, the total wait time (not shown) is 1966 seconds. If you add up the displayed "top" four wait events, their combined wait time is 1857 seconds. This is about 95% of all the non-idle wait time, a good example demonstrating that most of the wait time is found in the top two to four events.

My OSM toolkit has many wait time related tools. Most start with "sw" for "session wait", but both ttpctx.sql and rtpctx.sql also show the non-idle wait time as well as CPU consumption. This is a good time to transition into DB Time.

DB Time
DB Time is a time model statistic that is the sum of Oracle process CPU consumption and non-idle wait time. When optimizing Oracle systems we typically focus on reducing "time", though many times database work is also part of the equation. This "time" is essentially DB Time, though sometimes I take control over what I consider idle wait time.

The name DB Time comes from the actual statistic name in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB Time.
The DB time value is technically all server process CPU consumption plus the non-idle wait time within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.) Surprisingly, Oracle does not include "background cpu time" in the DB Time statistic. There are both good and not so good reasons the background CPU time is not included, but that's a topic for another posting.

A Little Math
We have enough detail to relate DB Time, DB CPU and non-idle wait time together... using a little math.

DB Time = DB CPU + non_idle_wait_time

And of course,

non_idle_wait_time = DB Time - DB CPU

This is important, because there is no single statistic that shows all the non-idle wait time. This must be derived. Shown above is one way to derive the non-idle wait time. Take a look at the AWR report snippet below.

In the Non-Idle Wait Time section above, I stated that the total non-idle wait time was 1966 seconds. I derived this from the Time Model screen shown above. I simply did:

non_idle_wait_time = DB Time - DB CPU
1966.16 = 4032.03 - 2065.87
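
If you prefer to pull these numbers straight from the instance rather than an AWR report, the same arithmetic can be done in SQL. This is only a sketch, not part of the OSM toolkit; also note the second query will not match the first exactly, because v$system_event includes background process waits as well:

-- non-idle wait time derived from the time model (microseconds converted to seconds)
select (select value from v$sys_time_model where stat_name = 'DB time')/1000000
     - (select value from v$sys_time_model where stat_name = 'DB CPU')/1000000
       as non_idle_wait_sec
from dual;

-- cross-check: sum the waits for all non-idle wait classes since instance startup
select sum(time_waited_micro)/1000000 as non_idle_wait_sec
from   v$system_event
where  wait_class <> 'Idle';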

Coming Up Next
I wanted to keep this post short, which means I left out the more interesting topics. So in the next post I'll merge into the picture elapsed time along with parallelism and revisit wall time. Then in the third post (that's my guess at this point), I'll actually demonstrate this in two different systems.

Thanks for reading,

Craig.

https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm

You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Advantages & Benefits of BI in your business: Additional Services Series pt. 2 [VIDEO]

Chris Foot - Tue, 2014-08-05 05:13

Transcript

To help our customers make the best possible business decisions, we offer a complete set of Business Intelligence support services for Microsoft’s Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS) product sets.

Why Business Intelligence you might ask? It helps you quickly identify business trends, empower your staff to use relevant information, gain insight into customer behavior for upsell opportunities – and more. Plus, the initial low-cost entry point of SSIS, SSAS, and SSRS requires no third party software and comes with everything needed to create a robust BI environment.

Microsoft’s BI product stack’s compatibility with other applications combined with having all your support provided by a single vendor helps to simplify your processes and take maximum advantage of your BI applications.

For more details on the importance of BI and how our product set can enhance your business, click on the link below to download our BI whitepaper.

In our next video, we’ll dive deeper into our specific BI products. See you there!
 

The post Advantages & Benefits of BI in your business: Additional Services Series pt. 2 [VIDEO] appeared first on Remote DBA Experts.

Get Up Offa That Thing

Doug Burns - Tue, 2014-08-05 04:00
No, no, no ... not *that* JB!
As regular readers will know, the JB who tends to get mentioned most often around these parts is John Beresniewicz who, up until recently, worked at Oracle on all the cool OEM Performance Pages and related instrumentation (alongside others, of course, such as Graham Wood, Uri Shaft, Kyle Hailey and probably a cast of thousands for all I know). Over recent years, JB has become a friend and has always posted deeply insightful comments whenever I’ve blogged about DB Time, Top Activity, Load Maps, ASH Analytics or Adaptive Thresholds. There can be few people who understand those subjects as well and he also has a great way of communicating how such powerful tools can be used to actually make things a lot simpler. (Click On The Big Stuff!, to pick one example) There can be a lot of unexpected complexity behind simplicity, believe me ;-)
So when the opportunity comes to learn from the best, I feel it’s only right I share it. The ASH Architecture and Advanced Usage presentation is a collective effort between Graham Wood, Uri Shaft and JB that has been refined over a number of years. This is the version that JB and Graham delivered at the RMOUG Training Days 2014. It’s excellent stuff from some true Oracle Performance experts.
JB might try to play the grumpiest man in California at times but, after all the work he has contributed to improving the performance analysis tools available to jobbing DBAs, I for one hope he sticks around on this Oracle Performance stuff and isn't distracted by Big Data or Anything-as-a-Service because he’d be too much of a loss (although I wouldn't have to listen to his whining so much, so maybe every cloud etc .... ;-))
Anyway – check out the presentation. It’s well worth your time. Better still, give that man a job so he doesn't have too much time on his hands and isn't reduced to using Social Media!

SLOB Deployment – A Picture Tutorial.

Kevin Closson - Mon, 2014-08-04 19:31

SLOB can be obtained at this link: Click here.

This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on a 12.1.0.2 Oracle Database but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases as well.

Synopsis
  1. Create a tablespace for SLOB.
  2. Run setup.sh
  3. Verify user schemas
  4. Create The SLOB procedure In The USER1 Schema
  5. Execute runit.sh. An Example Of Wait Kit Failure and Remedy
  6. Execute runit.sh Successfully
  7. Using SLOB With SQL*Net
    1. Test SQL*Net Configuration
    2. Execute runit.sh With SQL*Net
  8. More About Testing Non-Linux Platforms

 

Create a Tablespace for SLOB

If you already have a tablespace to load SLOB schemas into please see the next step in the sequence.

SLOB-deploy-1

Run setup.sh

Provided database connectivity works with ‘/ as sysdba’, this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect since this actually controls the degree of concurrency of SLOB schema creation/loading. Underneath the concurrency there may be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as to not flood the system until you’ve practiced with setup.sh for a while.

 

SLOB-deploy-2

Verify Users’ Schemas

After taking a quick look at cr_tab_and_load.out, as per setup.sh instruction, feel free to count the number of schemas. Remember, there is a “zero” user so setup.sh with 16 will have 17 SLOB schema users.

SLOB-deploy-3

Create The SLOB Procedure In The USER1 Schema

After setup.sh and counting user schemas please create the SLOB procedure in the USER1 schema.

SLOB-deploy-4

Execute runit.sh. An Example Of Wait Kit Failure and Remedy

This is an example of what happens if one misses the detail to create the semaphore wait kit as per the documentation. Not to worry, simply do what the output of runit.sh directs you to do.

SLOB-deploy-5

Execute runit.sh Successfully

The following is an example of a healthy runit.sh test.

SLOB-deploy-6

Using SLOB with SQL Net

Strictly speaking this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured in a Windows, AIX, or Solaris box this is how one tests SLOB. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop or a VM running on the system you intend to test!) and SQL*Net.

Test SQL*Net Configuration

We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.

SLOB-deploy-7

Execute runit.sh With SQL*Net

The following is an example of a successful runit.sh test over SQL*Net.

SLOB-deploy-8

More About Testing Non-Linux Platforms

Please note, loading SLOB over SQL*Net has the same configuration requirements as what I’ve shown for data loading (i.e., running setup.sh). Consider the following screenshot which shows an example of loading SLOB via SQL*Net.

SLOB-deploy-9

Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.

SLOB-deploy-10

 

Summary

This short post shows the simple steps needed to deploy SLOB in both the simple Linux host-only scenario as well as via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net there are no barriers to testing SLOB databases running on any platform to include Windows, AIX and Solaris.

 

 

 

 

 


Filed under: oracle

PeopleTools 8.54: Accessibility

PeopleSoft Technology Blog - Mon, 2014-08-04 15:46

This entry is one of a series of posts that will introduce readers to important features of this landmark release.

PeopleTools has long supported accessibility, and with PeopleTools 8.54 we are able to conform to WCAG 2.0 AA standards.  This means that upcoming PeopleSoft applications that are built on PeopleTools 8.54 will be designed to conform to those standards.  Customers that build pages, add-on applications, or customizations can also create conforming pages using PeopleTools 8.54. Most enterprises--especially those with a global footprint--are requiring WCAG conformance for the software they use.  In fact many companies that operate in the U.S. only are using the WCAG standards as the basis for their requirements.

Web Content Accessibility Guidelines (WCAG) are part of a series of guidelines published by the W3C's Web Accessibility Initiative. Their goal is to make content accessible, primarily for disabled users but also for all user agents, including highly limited devices, such as mobile phones. The current version (2.0) of the guidelines is also an ISO standard: ISO/IEC 40500:2012. WCAG 2.0 is an international standard used in most countries that enforce accessibility regulations. The new standard provides additional requirements to address new web technologies and has become the standard by which many organizations monitor software compliance. WCAG 2.0 extends standards described in WCAG 1.0 as well as the U.S.-specific regulations described in section 508 of Federal access standards.

Accessing remote databases from Oracle MAF with the TopLink/EclipseLink REST CRUD Services

Shay Shmeltzer - Mon, 2014-08-04 15:17

In the last post I showed you how simple it is to expose CRUD REST operations on your database with TopLink/EclipseLink.

The next logical step is to then consume those with Oracle MAF to build a mobile application.

This is quite simple with the REST data control. All you need to do is just map the right URLs and create the operation.

Here is a quick demo:

One trick I show in the demo is how to delay the call to a REST service until the user actually provides a value for a parameter. This is a common issue when the parameter form and the results are on the same page. The solution is easy: use the refresh condition of the executables of the page with the "ne null" check on the parameter value.

Categories: Development

R12.2 URL Validation failed. The error could have been caused through the use of the browser's navigation buttons ( the browser Back button or refresh, for example). If the error persists,

Vikram Das - Mon, 2014-08-04 12:32
Sravan came across this error while using an SSO-enabled URL for R12.2.


URL validation failed. The error could have been caused through the use of the browser's navigation buttons ( the browser Back button or refresh, for example). If the error persists, Please contact system administrator.


This occurs for any user that is newly created. The user_guid column is not populated. Here is the Solution:

ssoreg_r12_linux erppgai1 DEV appspassword cleanup
ssoreg_r12_linux erppgai1 DEV appspassword dereg

./oamreg_linux erppgai1 DEV appspassword undeployag

./oamreg_linux erppgai1 DEV appspassword deployag

./oamreg_linux erppgai1 DEV appspassword reg
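
To verify which accounts are affected before and after re-registering, a quick check along these lines can help (a sketch; it assumes the standard FND_USER table in the APPS schema):

-- users whose SSO GUID was never populated
select user_name, user_guid
from   fnd_user
where  user_guid is null;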
Categories: APPS Blogs

Contributions by Angela Golla

Oracle Infogram - Mon, 2014-08-04 11:18
Contributions by Angela Golla, Infogram Deputy Editor

My Oracle Support Video Training Series
From the basics to more advanced topics, learn what you need to know about My Oracle Support.  See Note:603505.1 for a complete list. 

Why automation is a key component of database administration

Chris Foot - Mon, 2014-08-04 10:54

When considering outsourcing to a remote database support service, look for professionals capable of automating key SQL Server tasks. 

Administrators have a variety of tasks they need to perform on a daily basis. In a world rife with IT security woes, it's imperative for DBAs to be able to dedicate enough time and resources to deterring sophisticated cyberattacks. Ordering rudimentary management jobs to occur without manual attention can help them succeed in this endeavor. 

Using SQL Server Agent 
According to MSDN, thorough knowledge of SQL Server Agent enables professionals to automatically execute jobs on a predefined schedule, respond to particular occurrences and execute backups whenever they're needed. The source outlined a number of additional benefits DBAs often capitalize on:

  • Multiple tasks can be executed at the same time
  • Jobs can be ordered to initiate when a central processing unit isn't conducting any operations
  • SQL Server performance orders can be automatically recorded 
  • Cookies can be inserted that monitor the actions of authorized operators, as well as intruders 
  • Logs of all security-related activities performed by administrators can be created

For more complex jobs, managers need to be able to receive notifications pertaining to mission-critical instances. For example, whenever hardware fails (due to a storm, or some other disruption), organizations need to be able to offload databases to other equipment or deploy backup strategies. For this reason, manually initiating alerts isn't conducive to business continuity. 
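
As a concrete illustration (a sketch, not part of the original article; the operator and job names are made up, and Database Mail is assumed to be configured), wiring up an operator and a failure notification for an existing Agent job looks roughly like this:

-- create an operator to receive alerts
EXEC msdb.dbo.sp_add_operator
     @name = N'DBA Team',                      -- hypothetical operator name
     @email_address = N'dba-team@example.com'; -- hypothetical address

-- have an existing job e-mail that operator when it fails
EXEC msdb.dbo.sp_update_job
     @job_name = N'Nightly Backup',            -- hypothetical job name
     @notify_level_email = 2,                  -- 2 = notify on failure
     @notify_email_operator_name = N'DBA Team';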

What can they do? 
Remote DBA experts recognize the benefits of automation, but where does the magic come from? Database Journal outlined several notifications these professionals can utilize through SQL Server Agent. For instance, a job's "lifecycle" can be reported as a success, failure or completion. 

While receiving alerts for all three of these outcomes is possible, it's not necessarily recommended. Being notified only on failure is usually more constructive, because DBAs' inboxes won't get cluttered with a barrage of emails.

Yet, there are some jobs administrators may want to follow from execution to completion. In order to figure out which jobs are set up to send notifications, DBAs can review the notify_level_email value in the sysjobs table in the msdb database.

If the value is zero, personnel know that no email alert has been set up for that specific job. A value of 1, 2 or 3 means the notification is sent when the job succeeds, fails or completes, respectively; the companion notify_level_page and notify_level_netsend columns control pager and NET SEND notifications in the same way.
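
A query along these lines (a sketch) lists the jobs that have an e-mail notification configured, and at what level:

-- 0 = never, 1 = on success, 2 = on failure, 3 = on completion
SELECT name, notify_level_email
FROM   msdb.dbo.sysjobs
WHERE  notify_level_email <> 0;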

Essentially, automation enables experts to optimize the thoroughness and speed of active database monitoring.

The post Why automation is a key component of database administration appeared first on Remote DBA Experts.

Oracle 12.1.0.2: Wait event histograms in μs

Yann Neuhaus - Mon, 2014-08-04 08:01

When an Oracle Database spends a high percentage of its DB time in User I/O, I usually check the wait event histograms in order to see if the storage system is working well. But today, with storage moving to SSD, most I/Os take less than 1 millisecond and we have no details about those wait times.

Here is what is exposed by V$EVENT_HISTOGRAM:

 

select event,wait_time_milli,wait_count from v$event_histogram where event like 'db file sequential read' order by event,wait_time_milli;
EVENT                          WAIT_TIME_MILLI WAIT_COUNT
------------------------------ --------------- ----------
db file sequential read                      1      27140
db file sequential read                      2          6
db file sequential read                      4          1

 

The latest Oracle 12c patchset, 12.1.0.2, besides changing the future of the database world with the In-Memory option, comes with a small new feature that helps us in our day-to-day tasks: the introduction of the V$EVENT_HISTOGRAM_MICRO view:

 

select event,wait_time_micro,wait_count,wait_time_format from v$event_histogram_micro where event like 'db file sequential read' order by event,wait_time_micro;
EVENT                          WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
------------------------------ --------------- ---------- ----------------
db file sequential read                      1        120   1 microsecond
db file sequential read                      2         24   2 microseconds
db file sequential read                      4         51   4 microseconds
db file sequential read                      8        212   8 microseconds
db file sequential read                     16      19600  16 microseconds
db file sequential read                     32       5958  32 microseconds
db file sequential read                     64        550  64 microseconds
db file sequential read                    128        492 128 microseconds
db file sequential read                    256         98 256 microseconds
db file sequential read                    512         14 512 microseconds
db file sequential read                   1024         21   1 millisecond
db file sequential read                   2048          6   2 milliseconds
db file sequential read                   4096          1   4 milliseconds

 

Here it is: the wait events are detailed down to the microsecond. It's good for I/O on SSD. It's good for In-Memory events as well.

Unfortunately, this has not yet been introduced into the AWR reports (I made an enhancement request for that).

Now, if you wonder which disk I'm using to get the microsecond i/o above...

...

...

...

...

...

...

...

...

...

...

...

...

...

...

...

...

... here is how I created that database:

 

mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
$ORACLE_HOME/bin/dbca -createDatabase -templateName General_Purpose.dbc -gdbName RAMDB -sid RAMDB -sysPassword oracle -systemPassword oracle -storageType FS -characterSet AL32UTF8 -listeners LISTENER -sampleSchema  true -totalMemory 600 -databaseType MULTIPURPOSE -silent -datafileDestination /mnt/ramdisk

Avoid UTL_FILE_DIR Security Weakness &#8211; Use Oracle Directories Instead

Eddie Awad - Mon, 2014-08-04 07:00

Integrigy:

The UTL_FILE database package is used to read from and write to operating system directories and files. By default, PUBLIC is granted execute permission on UTL_FILE. Therefore, any database account may read from and write to files in the directories specified in the UTL_FILE_DIR database initialization parameter [...] Security considerations with UTL_FILE can be mitigated by removing all directories from UTL_FILE_DIR and using the Directory functionality instead.
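
A minimal sketch of the directory-object approach the article recommends (the directory name, path, and grantee below are placeholders):

-- create a directory object and grant access only to the accounts that need it
CREATE DIRECTORY app_out_dir AS '/u01/app/output';
GRANT READ, WRITE ON DIRECTORY app_out_dir TO app_user;

-- UTL_FILE then references the directory object by name instead of a raw path
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('APP_OUT_DIR', 'report.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'hello');
  UTL_FILE.FCLOSE(f);
END;
/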


12c : Transport Database Over Network

Oracle in Action - Mon, 2014-08-04 05:18


Oracle 12c introduces full transportable database import over the network. It employs

  • Oracle Data Pump import to extract all of the system, user, and application metadata needed to transport the database from the source database over the network.
  • the transportable tablespaces mechanism to move user and application data, i.e. the datafiles containing user and application data are physically copied to the target. This results in a migration that is very fast, even for very large volumes of data.

This post focuses on the use of full transportable import to transport user and application tablespaces from a non-CDB to another non-CDB over the network. The intermediate dump file containing the metadata need not be created, as the metadata is transferred over the network by means of a database link.

– source database : orcl on RHEL5.4 64-bit server
. with sample schemas
. filesystem
. noarchivelog

-- destination database dest on same server
. no sample schemas
. filesystem
. noarchivelog

Overview:

- Create a source non-CDB orcl with sample schemas
- Create a destination non-CDB dest on the same server without sample schemas
- Set the user and application tablespaces in the source database (orcl) to READ ONLY
- Copy the data files for the tablespaces containing user/application data to the destination location
- Set the system tablespace as the default permanent tablespace in the destination database dest
- Drop the users tablespace from the destination database
- Create a database link from the destination to the source database orcl which connects as a user with the DATAPUMP_IMP_FULL_DATABASE privilege (system)
- Using an account that has the DATAPUMP_IMP_FULL_DATABASE privilege (system), import into the target database using impdp with the NETWORK_LINK, FULL=Y and TRANSPORT_DATAFILES parameters
- Restore the user tablespaces in the source database to read/write mode

Implementation :

– Put application tablespaces (example and users) in read only mode in source database (orcl)

ORCL>conn / as sysdba

select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/orcl/system01.dbf
/u01/app/oracle/oradata/orcl/example01.dbf
/u01/app/oracle/oradata/orcl/sysaux01.dbf
/u01/app/oracle/oradata/orcl/undotbs01.dbf
/u01/app/oracle/oradata/orcl/users01.dbf

ORCL> alter tablespace example read only;
      alter tablespace users  read only;

-  Copy the  data files for tablespaces containing user/application data to the destination

ORCL> ho cp /u01/app/oracle/oradata/orcl/example01.dbf /u01/app/oracle/oradata/dest/example01_orcl.dbf

ho cp /u01/app/oracle/oradata/orcl/users01.dbf /u01/app/oracle/oradata/dest/users01_orcl.dbf

– Create a database link from destination database (dest) to source database (orcl) which connects as the user (system) with datapump_imp_full_database privilege

DEST>create public database link orcl_link connect to system identified by oracle using 'orcl';

- Using an account(system)  that has the DATAPUMP_IMP_FULL_DATABASE privilege , import into the target database using impdp with  network_link , FULL=Y, TRANSPORT_DATAFILES parameters

DEST> ho impdp system/oracle full=y network_link=orcl_link transportable=always transport_datafiles='/u01/app/oracle/oradata/dest/users01_orcl.dbf','/u01/app/oracle/oradata/dest/example01_orcl.dbf' version=12 logfile=import.log

Import: Release 12.1.0.1.0 - Production on Tue Aug 5 04:19:47 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** full=y network_link=orcl_link transportable=always directory=imp_dir transport_datafiles=/u01/app/oracle/oradata/dest/users01_orcl.dbf,/u01/app/oracle/oradata/dest/example01_orcl.dbf version=12 logfile=import.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29349: tablespace 'USERS' already exists

Job "SYSTEM"."SYS_IMPORT_FULL_01" stopped due to fatal error at Tue Aug 5 04:20:30 2014 elapsed 0 00:00:30

DEST> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

DEST>drop tablespace users including contents and datafiles;

drop tablespace users including contents and datafiles
*
ERROR at line 1:
ORA-12919: Can not drop the default permanent tablespace

– Assign system as default permanent tablespace and drop users tablespace.

– Perform import

DEST> alter database default tablespace system;

      drop tablespace users including contents and datafiles;

       ho impdp system/oracle full=y network_link=orcl_link transportable=always transport_datafiles='/u01/app/oracle/oradata/dest/users01_orcl.dbf','/u01/app/oracle/oradata/dest/example01_orcl.dbf' version=12 logfile=import.log

Import: Release 12.1.0.1.0 - Production on Tue Aug 5 04:25:58 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** full=y network_link=orcl_link transportable=always directory=imp_dir transport_datafiles=/u01/app/oracle/oradata/dest/users01_orcl.dbf,/u01/app/oracle/oradata/dest/example01_orcl.dbf version=12 logfile=import.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE

...

Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
Processing object type DATABASE_EXPORT/AUDIT
Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 30 error(s) at Tue Aug 5 04:36:40 2014 elapsed 0 00:10:38

– Verify that tablespaces example and users have been successfully imported into target database

DEST> col tablespace_name for a10
col file_name for a50
set pagesize 200
set line 500

select tablespace_name, file_name, status, online_status from dba_data_files;

TABLESPACE FILE_NAME                                          STATUS    ONLINE_
---------- -------------------------------------------------- --------- -------
EXAMPLE    /u01/app/oracle/oradata/dest/example01_orcl.dbf    AVAILABLE ONLINE
UNDOTBS1   /u01/app/oracle/oradata/dest/undotbs01.dbf         AVAILABLE ONLINE
SYSAUX     /u01/app/oracle/oradata/dest/sysaux01.dbf          AVAILABLE ONLINE
SYSTEM     /u01/app/oracle/oradata/dest/system01.dbf          AVAILABLE SYSTEM
USERS      /u01/app/oracle/oradata/dest/users01_orcl.dbf      AVAILABLE ONLINE

– Restore the user tablespaces in source database to read/write mode

ORCL> alter tablespace example read write;
           alter tablespace users read write;

I hope this post was useful. Your comments and suggestions are always welcome!

References : Oracle Documentation

—————————————————————————————

Related Links:

Home

Database 12c

 12c: Transportable Database

———————————————————————————




The post 12c : Transport Database Over Network appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

SQL Server: DBCC CHECKDB does not detect corruption

Yann Neuhaus - Mon, 2014-08-04 02:12

During my audits at customer sites, I still very often find SQL Server databases with the page verify option configured to "none". I always alert my customers about this configuration point because it can have an impact on the overall integrity of their databases. One of my customers told me that the integrity task of his maintenance plan would detect the corruption anyway and alert him by email - but is that really the case?

Of course, my response to the customer is that it depends on the corruption type and that a dbcc checkdb is not an absolute guarantee in this case. I like to show the following example to my customers to convince them to change the page verify option to something other than "none".

Let me show this to you:

 

use [master];
go

-- create the demo database
if DB_ID('corrupt') is not null
       drop database corrupt;

create database corrupt;
go

-- force the page verify option to none
alter database corrupt set page_verify none;
go

use corrupt;
go

-- create two tables t1 and t2
create table dbo.t1 (id int identity primary key, col1 varchar(50));
create table dbo.t2 (id int identity primary key, col1 int, col2 int, col3 varchar(50));

-- insert some sample data
declare @random int;
declare @i int = 1;

while @i <= 1000   -- the loop bound was lost in the original formatting; 1000 iterations is an assumption
begin
       set @random = 10000/(rand()* 1000);
       if @random > 50 set @random = 50;

       insert t1(col1) values (REPLICATE('t', 50));
       insert t2(col1, col2, col3) values (@i, @i + 1, REPLICATE('t', @random));

       set @i = @i + 1;
end

-- add a foreign key on column id of table t2 that references table t1 on column id
alter table dbo.t2 add constraint FK_t2_id foreign key (id) references dbo.t1(id);
go

-- create a nonclustered covering index on table t2
create nonclustered index idx_t2_col1 on dbo.t2 (col1, col2) include (col3);
go

 

At this point we have two tables named t1 and t2. Table t2 has a foreign key constraint on the id column that references the table t1 on the column with the same name.

Now let’s corrupt a data page in the clustered index on the table t1. First, we will find the first data page in the clustered index of the table t1:

 

-- get the first data page of the t1 table clustered index
dbcc ind('corrupt', 'dbo.t1', 1);
go

 

blog_14_-_dbcc_checkb_corrupt_-_1

 

Then we will find the first row. The first row is stored in slot 0 which is located at offset 0x060.

 

-- display a dump of page id = 15
dbcc traceon(3604);
go
dbcc page ('corrupt', 1, 15, 3);
go

 

blog_14_-_dbcc_checkb_corrupt_-_2

 

Now it's time to corrupt the id column (id = 1) located at offset 0x4 in the row. That means we have to go to offset 0x60 + 0x4 to corrupt this column.

We will use the DBCC WRITEPAGE undocumented command to corrupt our page (again, a big thanks to Paul Randal for showing us how to use this command for testing purposes).

 

-- corrupt the concerned page
alter database corrupt set single_user;
go

dbcc writepage('corrupt', 1, 15, 100, 1, 0x00, 1)

alter database corrupt set multi_user;
go

 

Now if we take a look at the page id=15, we notice that the id column value is now changed from 1 to 0.

 

blog_14_-_dbcc_checkb_corrupt_-_3

 

Ok, let’s run a DBCC CHECKDB command:

 

-- perform an integrity check with dbcc checkdb
dbcc checkdb('corrupt') with no_infomsgs, all_errormsgs;
go

 

blog_14_-_dbcc_checkb_corrupt_-_4

 

As you can see, the dbcc checkdb command does not detect any corruption! Now, let’s run the following statements:

 

-- first query
select t2.col2, t2.col3
from dbo.t2
       join dbo.t1
             on t1.id = t2.id
where t2.col1 = 1

 

blog_14_-_dbcc_checkb_corrupt_-_5

 

Do you notice that reading the corrupted page does not trigger an error in this case?

 

-- second query
select t2.col2, t2.col3, t1.col1
from dbo.t2
       join dbo.t1
             on t1.id = t2.id
where t2.col1 = 1

 

blog_14_-_dbcc_checkb_corrupt_-_6

 

As you can notice, adding the t1.col1 column to the query gives a different result for the two queries. Strange behavior, isn't it? In fact, the two queries don't use the same execution plan, as shown below:

Query 1:

 

blog_14_-_dbcc_checkb_corrupt_-_7

 

Query 2:

 

blog_14_-_dbcc_checkb_corrupt_-_8

 

In query 1, due to the foreign key constraint, the query execution engine doesn't need to join t2 to t1 to retrieve data, because we only need data already covered by the idx_t2_col1 index on table t2. However, the story is not the same with query 2. Indeed, we want to retrieve an additional value provided by the col1 column from table t1. In this case SQL Server has to join t1 and t2 because the covering index idx_t2_col1 cannot provide all the data we need. But remember that we corrupted the id column of the primary key of table t1 by changing the value from 1 to 0. This is why query 2 doesn't display any results.

The main question here is: why doesn't dbcc checkdb detect the corruption? Well, in this case the corruption occurred directly on the data value and dbcc checkdb doesn't have a verification mechanism to detect such a corruption issue. Having a checksum value stored in the page would help the dbcc checkdb operation in this case, because it could compare a checksum computed while reading the page with the checksum stored on it.

Below is the output provided by the dbcc checkdb command if the checksum page verify option is enabled for the database …

 

blog_14_-_dbcc_checkb_corrupt_-_9

 

… or when we ran the query used earlier:

 

blog_14_-_dbcc_checkb_corrupt_-_10

 

My conclusion:

Do not hesitate to change your page verify option value when it is configured to "none".
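
A quick way to find and fix the databases concerned (a sketch; adapt the database name to your environment):

-- list databases whose page verify option is not CHECKSUM
SELECT name, page_verify_option_desc
FROM   sys.databases
WHERE  page_verify_option_desc <> 'CHECKSUM';

-- switch a database to CHECKSUM; pages are protected as they are rewritten from then on
ALTER DATABASE corrupt SET PAGE_VERIFY CHECKSUM;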

PostgreSQL for Oracle DBAs - an introduction

Yann Neuhaus - Sun, 2014-08-03 22:16

Having worked for several years as an Oracle DBA, I decided to have a look at the PostgreSQL database and see how it functions in comparison to the Oracle Database.

The "Enterprise DB" graphical installation of PostgreSQL 9.3 is quite easy and rather fast. Under Linux you run the graphical installer and dialog boxes lead you through the installation process. You enter the specific information for your system and, at the end of the PostgreSQL installation, the Stack Builder package is invoked if you need to install applications, drivers, agents or utilities.

You can download the Enterprise DB utility using the following URL:

http://www.enterprisedb.com/downloads/postgres-postgresql-downloads

I have installed PostgreSQL 9.3 using Enterprise DB as described below:

 

pg1

 

Choose Next.

 

pg2

 

Specify the installation directory where PostgreSQL 9.3 will be installed.

 

pg3

 

Select the directory that will store the data.

 

pg4

 

Provide a password to the PostgreSQL database user.

 

pg5

 

Select a port number.

 

pg6

 

Choose the locale for the new database cluster.

 

pg7

 

PostgreSQL is now ready to be installed.

 

pg8

 

You can choose to launch or not the Stack Builder - if not, the installation process will begin.

If you encounter any problem during the installation phase, the log files are generated in /tmp.

Under Linux, a shell script named uninstall-postgresql is created in the PostgreSQL home directory to de-install the software.

The installation phase is very quick, your PostgreSQL cluster database is ready to use. Furthermore, the Enterprise DB installation creates the automatic startup file in /etc/init.d/postgresql-9.3 to start PostgreSQL in case of a server reboot.

Once the Enterprise DB installation is complete, a database storage area (a database cluster) has been initialized on disk. After the installation, this database cluster contains a database named postgres that will be used by utilities or users:

 

postgres=# \list
                                  List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres+
           |          |          |            |            | =c/postgres

 

By default, a new database is created by cloning the standard system database named template1. template0 allows you to create a database containing only the pre-defined standard objects.

The PostgreSQL equivalent of the Oracle sqlplus command is psql. As you will see in this document, the PostgreSQL meta-commands begin with the \ sign. The “\?” command lists all of them.

For example, the following command connects to the psi database:

 

-bash-3.2$ psql -d psi
Password:
psql.bin (9.3.4)
Type "help" for help.
No entry for terminal type "xterm";
using dumb terminal settings.

psi=# \q


If you do not want the system to ask for a password, you simply have to create a .pgpass file in the postgres home directory with 0600 permissions and the following syntax:

 

-bash-3.2$ more .pgpass

localhost:5432:PSI:postgres:password

 

-bash-3.2$ su - postgres

Password:

-bash-3.2$ psql -d psi

psql.bin (9.3.4)

Type "help" for help.

No entry for terminal type "xterm";

using dumb terminal settings.

psi=#

psi-# \q

 

At first you probably need to create a database. As an Oracle DBA, I was wondering about some typical problems such as character set or default tablespace. With PostgreSQL, it is quite easy to create a database.

As the locale en_US.utf8 has been chosen during the installation phase to be used by the cluster database, every database you will create will use it.

When you create a database you can specify a default tablespace and an owner. At first we create a tablespace:

 

postgres=# create tablespace psi location '/u01/postgres/data/psi';

CREATE TABLESPACE

 

The tablespace data is located in /u01/postgres/data/psi:

 

-bash-3.2$ ls

PG_9.3_201306121

-bash-3.2$ ls PG_9.3_201306121/

16526

-bash-3.2$ ls PG_9.3_201306121/16526/

12547     12587_vm  12624     12663     12728     12773

12547_fsm 12589     12625     12664     12728_fsm 12774

12664_vm  12730   12774_vm     12627     12666     12731     12776

 

Then we create the database:

 

postgres=# create database psi owner postgres tablespace psi;

CREATE DATABASE

 

We can list all databases with the \list command:

 

postgres=# \list
                                  List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 psi       | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres+
           |          |          |            |            | =c/postgres

 

Now, we can connect to the psi database and create objects, the syntax is quite similar to Oracle:

 

postgres=# \c psi

You are now connected to database "psi" as user "postgres".


We create a table and an index:

 

psi=# create table employe (name varchar);

CREATE TABLE

psi=# create index employe_ix on employe (name);

CREATE INDEX

 

We insert values in it:


psi=# insert into employe values ('bill');

INSERT 0 1

 

We reconnect to the psi database:


-bash-3.2$ psql -d psi

Password:

psql.bin (9.3.4)

Type "help" for help.

No entry for terminal type "xterm";

using dumb terminal settings.


The following command lists the tables:


psi=# \dt[+]                    

             List of relations

Schema | Name   | Type | Owner   | Size | Description

--------+---------+-------+----------+-------+-------------

public | employe | table | postgres | 16 kB |

(1 row)

psi=# select * from employe;

name

------

bill

(1 row)

 

The \d+ PostgreSQL command is the equivalent of the Oracle desc command:


psi=# \d+ employe                            

                Table "public.employe"

Column |       Type       | Modifiers | Storage | Stats target | Description

--------+-------------------+-----------+----------+--------------+-------------

name   | character varying |          | extended |             |

Indexes:

   "employe_ix" btree (name)

Has OIDs: no


Obviously we also have the possibility to create a schema and create objects in this schema.

Let's create a schema:


psi=# create schema psi;

CREATE SCHEMA


Let's create a table, insert objects in it and create a view:


psi=# create table psi.salary (val integer);

CREATE TABLE

psi=# insert into psi.salary values (10000);

INSERT 0 1

psi=# select * from psi.salary;

val

-------

10000

psi=# create view psi.v_employe as select * from psi.salary;

CREATE VIEW

 

If we list the tables we can only see the public objects:


psi=# \d        

        List of relations

Schema | Name   | Type | Owner  

--------+---------+-------+----------

public | employe | table | postgres

(1 row)


If we modify the search path, all schemas are visible:


psi=# set search_path to psi,public;

SET

psi=# \d 

        List of relations

Schema | Name   | Type | Owner  

--------+---------+-------+----------

psi   | salary | table | postgres

public | employe | table | postgres


Oracle DBAs are familiar with SQL queries for this kind of information - e.g. getting the list of tables in a schema by typing select table_name from user_tables, etc.

What is the equivalent query in PostgreSQL?

PostgreSQL uses a schema named information_schema available in every database. The owner of this schema is the initial database user in the cluster. You can drop this schema, but the space saving is negligible.

You can easily query the tables of this schema to get precious informations about your database objects:

Here is a list of the schemas tables:


psi=# select table_name, table_schema from information_schema.tables where table_schema in ('public','psi');

table_name | table_schema

------------+--------------

employe   | public

salary     | psi


We can display the database character set:


psi=# select character_set_name from information_schema.character_sets;

character_set_name

--------------------

UTF8

 

We can display schema views:


psi=# select table_name from information_schema.views where table_schema='psi';

table_name

------------

v_employe


Using the information_schema schema helps us to display information about a lot of different database objects (tables, constraints, sequences, triggers, table_privileges …)

Like in Oracle, you can run a query from the SQL or the UNIX prompt. For example, if you want to know the index name of the table employe, you should use the index.sql script:


select
       t.relname as table_name,
       i.relname as index_name,
       a.attname as column_name
from
       pg_class t,
       pg_class i,
       pg_index ix,
       pg_attribute a
where
       t.oid = ix.indrelid
       and i.oid = ix.indexrelid
       and a.attrelid = t.oid
       and a.attnum = ANY(ix.indkey)
       and t.relkind = 'r'
       and t.relname = 'employe'
order by
       t.relname, i.relname;


If you want to display the employee index from the SQL prompt, you run:


psi=# \i index.sql

table_name | index_name | column_name

------------+------------+-------------

employe   | employe_ix | name


If you want to run the same query from the UNIX prompt:


-bash-3.2$ psql -d psi -a -f index.sql

Password:

table_name | index_name | column_name

------------+------------+-------------

employe   | employe_ix | name


However, typing SQL queries might be interesting, but - like many Oracle DBAs - I like using an administration console because I think it increases efficiency.

I have discovered pgAdmin, an administration tool designed for Unix or Windows systems. pgAdmin is easy to install on a PostgreSQL environment and enables many operations for the administration of a cluster database.

pgAdmin3 is installed in the home directory of the user postgres - in my case in /opt/PostgreSQL/9.3.

To successfully enable pgAdmin3, it is necessary to correctly initialize the LD_LIBRARY_PATH variable:

 

export LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:/opt/PostgreSQL/9.3/pgAdmin3/lib

 

The pgadmin3 console:

 

pg9

 

As you can see, you can administer every database object (tables, functions, sequences, triggers, views…).

You can visualize the table creation scripts:

 

pg9

 

You can edit / change / modify the privileges of an object:

 

pg11

 

You also have the possibility to generate scripts for the database creation:

 

pg12

 

Or even to backup the database:

 

pg13

 

This tool seems to be very powerful, even if, for the moment, I have not found any performance tool comparable to what is available in Cloud Control 12c.

 

Conclusion

Discovering PostgreSQL as an Oracle DBA, I realized how close the two products are. The PostgreSQL database has a lot of advantages such as the easy installation, the general usage and the price (because it’s free!).

For the processing of huge amounts of data, Oracle certainly has advantages; nevertheless, the choice of an RDBMS always depends on what your application and business needs are.

#GoldenGate Bound Recovery

DBASolved - Sun, 2014-08-03 19:40

Every once in a while when I restart an extract, I see entries in the report file that reference “Bounded Recovery”. What exactly is “Bounded Recovery”?

First, keep in mind that “Bounded Recovery” is only for Oracle databases!

Second, according to the documentation, “Bounded Recovery” is a component of the general extract checkpointing facility. This component of the extract guarantees an efficient recovery after an extract stops for any reason, no matter how many uncommitted transactions are currently outstanding. The Bounded Recovery parameter sets an upper boundary for the maximum amount of time that an extract needs to recover to the point where it stopped before continuing normal processing.

The default setting for “Bounded Recovery” is 4 hours, and the needed recovery information is cached in the OGG_HOME/BR/<extract name> directory. This is verified when I look at the report file for my extract named EXT.


2014-07-21 17:26:30 INFO OGG-01815 Virtual Memory Facilities for: BR
 anon alloc: mmap(MAP_ANON) anon free: munmap
 file alloc: mmap(MAP_SHARED) file free: munmap
 target directories:
 /oracle/app/product/12.1.2/oggcore_1/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR = /oracle/app/product/12.1.2/oggcore_1

According to the documentation, the default settings for “Bounded Recovery” should be sufficient for most environments. It is also noted that the “Bounded Recovery” settings shouldn’t be changed without the guidance of Oracle Support.

Now that the idea of a “Bounded Recovery” has been established, lets try to understand a bit more about how a transaction is recovered in Oracle GoldenGate with the “Bounded Recovery” feature.

At the start of a transaction, Oracle GoldenGate must cache the transaction (even if it contains no data). The reason for this is the need to support future operations of that transaction. If the extract hits a committed transaction, the cached transaction is written to the trail file and cleared from memory. If the extract hits a rollback, the cached transaction is discarded from memory. As long as an extract is processing a transaction, before a commit or rollback, the transaction is considered an open transaction and will be collected. If the extract is stopped before it encounters a commit or rollback, the extract needs all of the cached transaction information recovered before the extract can start. This approach applies to all transactions that were open at the time the extract was stopped.

There are three ways that an extract performs recovery:

  1. If there are no open transactions when the extract is stopped, recovery begins at the current extract read checkpoint (normal recovery)
  2. If the open transactions have start points in the log very close in time to when the extract was stopped, the extract begins its recovery by re-reading the logs from the beginning of the oldest open transaction (also considered a normal recovery)
  3. If one or more open transactions have been qualified by the extract as long-running open transactions, the extract begins a Bounded Recovery

What defines a long-running transaction for Oracle GoldenGate?

A transaction in Oracle GoldenGate is long-running if it has been open longer than one (1) “Bounded Recovery” interval.

A “Bounded Recovery interval” is the amount of time between “Bounded Recovery checkpoints”, which persist the current state and data of the extract to disk.  “Bounded Recovery checkpoints” are used to identify a recovery position between two “Bounded Recovery intervals”.  On restart, the extract picks up from the last “Bounded Recovery checkpoint”, instead of processing from the log position where the open long-running transaction first appeared.
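
To see exactly where an extract would restart from, the checkpoint detail can be inspected from GGSCI. A minimal sketch, again assuming an extract named EXT; the recovery checkpoint in the output is the position recovery falls back to:

-- show the extract's checkpoint detail, including its recovery checkpoint
GGSCI> INFO EXTRACT EXT, SHOWCH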

What is the maximum Bounded Recovery time?

The maximum bounded recovery time is no more than twice the current “Bounded Recovery checkpoint” interval.  However, the actual recovery time will be dictated by the following:

  1. The time from the last valid Bounded Recovery interval to when the extract was stopped
  2. Utilization of the extract in that period
  3. The percentage of that utilization that was for transactions previously written to the trail

Now that the basic details of “Bounded Recovery” have been discussed, how can the settings for “Bounded Recovery” be changed?

“Bounded Recovery” can be changed by updating the extract parameter file with the following parameter:


BR
[, BRDIR directory]
[, BRINTERVAL number {M | H}]
[, BRKEEPSTALEFILES]
[, BROFF]
[, BROFFONFAILURE]
[, BRRESET]

As noted, there are a few options that can be set with the BR parameter.  If I wanted to shorten my “Bounded Recovery” interval and change the directory where the cached information is stored, I could do something similar to this:


--Bounded Recovery
BR BRDIR ./dirbr BRINTERVAL 20M

In the example above, I’m changing the directory to a new directory called DIRBR (created manually, alongside the other OGG subdirectories).  I also changed the interval from 4 hours to 20 minutes.

Note: 20 minutes is the smallest accepted time for the BRINTERVAL parameter.

After adding the BR parameter and options to the extract’s parameter file, the extract needs to be restarted.  Once the extract is up and running, the report file for the extract can be checked to verify that the new parameters have taken effect.
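
A minimal sketch of that restart-and-verify cycle from GGSCI, assuming the extract is named EXT:

-- restart the extract so the new BR settings are picked up
GGSCI> STOP EXTRACT EXT
GGSCI> START EXTRACT EXT
-- confirm it is RUNNING, then check the report for the Bounded Recovery section
GGSCI> INFO EXTRACT EXT
GGSCI> VIEW REPORT EXT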


2014-08-02 22:20:54  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /oracle/app/product/12.1.2/oggcore_1/dirbr/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 20M
BRDIR      = ./dirbr

Hopefully, this post provided a better understanding of one of the least understood options within Oracle GoldenGate.

Enjoy!!

About me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Viewing Figures

Jonathan Lewis - Sun, 2014-08-03 17:05

The day has just started in Singapore – though it’s just coming up to midnight back home – and the view counter has reached 4,00,009 despite the fact that I’ve not been able to contribute much to the community for the last couple of months. Despite the temporary dearth of writing, it’s time to have a little review to see what’s been popular and how things have changed in the 10 months it took to accumulate the last 500,000 views, so here are some of the latest WordPress stats.

All time ratings

  AWR / Statspack: 80,997 (Updated from time to time)
  NOT IN: 51,673 (February 2007)
  Cartesian Merge Join: 39,265 (December 2006)
  dbms_xplan in 10g: 37,725 (November 2006)
  Shrink Tablespace: 31,184 (November 2006)

Ratings over the last year

  AWR / Statspack: 13,905 (Updated from time to time)
  AWR Reports: 9,494 (February 2011)
  Shrink Tablespace: 8,402 (February 2010)
  Lock Modes: 8,221 (June 2010)
  NOT IN: 6,388 (February 2007)

The figures for the previous half million views are very similar for most of the top 5, although “Analysing Statspack (1)” has been pushed from 5th place to 6th place in the all-time greats, and “Lock Modes” has swapped places with “NOT IN” in the annual ratings. As the annual WordPress summary says: “… your posts have staying power, why not write more about …”.

The number of followers has gone up from about 2,500 to just over 3,900 but, as I said last time, I suspect that there’s a lot of double counting related to Twitter.

 


A Quick Trip To The Mother Ship

Floyd Teter - Sun, 2014-08-03 15:11
The title of this post notwithstanding, I was not abducted by aliens last week.  Take off your tin-foil hat, it's all cool.  I spent a few days last week with a few different teams at Oracle HQ, mostly digging into the progress of some cool new work in progress.  Thought I'd share what I learned.

One caveat before I start sharing.  My agreement with Oracle prevents me from talking about specific details and delivery dates.  Personally, I don't have much of a problem with that - product development news on Oracle's products is Oracle's news to share, if and when they decide to share it. Now that we're clear about that, let's get to the good stuff.

I was fortunate enough to have a good chunk of the brain trust from the Sierra-Cedar Oracle Higher Education Practice (that's the former Io Consulting group) with me:  Steve Kish, Elizabeth Malmborg, Anastasia Metros and Ted Simpson (yes, he of HEUG fame).  It was cool to watch them consider the new things coming for the Higher Education marketplace.  Gave me a measure of how the Higher Ed marketplace will respond.

Most of day one was spent with the leadership of the Oracle Higher Education development team, reviewing their progress in building the new Oracle Student Cloud product.  They're further along in the development lifecycle than I'd expected, which was a pleasant surprise.  And one thing became very clear to me as a result of the review:  planning to throw away PeopleSoft Campus Solutions should not be a part of anyone's short-term game plan.   Oracle Student Cloud is focused on offering a solution for managing continuing education.  Expectations are that early adopters of Oracle Student Cloud will be using the product as a value-added enhancement to the Campus Solutions product.

Don't get confused here.  Oracle has both the Oracle Student Cloud and the Oracle Higher Education Cloud in their development pipeline.  But we're talking about two different products here, with two different sets of target customers, development life cycles and release dates.  The latter product will have a much larger focus than the former.

So, what's the best strategy for a higher ed institution that preserves their investment and offers maximum flexibility going forward?  Get to the latest release of whatever you're currently using, whether it's an Oracle product or not.  Make sure you're up to date - it's the best platform for moving forward.  And yes, there are other elements to the strategy as well, but that's not my main purpose for writing this particular post.

Day two was spent with the Oracle User Experience team.  Great stuff as usual.  A special thanks to Oracle's Michael LaDuke for putting the day together.  And it was fun to see the understanding of UX take shape in the minds of the Sierra-Cedar leadership team, especially during a discussion around wire framing practices.  We also saw some soon-to-be-released incremental progress with Simplified UI.  And, finally, we saw some cool new products in the works.  On this final note, it's pretty obvious that the UX team is now focused on innovating by applying Fusion Middleware technology to mobile use cases (both tablet and phone).  We saw some pretty good stuff with the potential for adding high value to day-to-day business processing (both in terms of automation and collecting business intelligence).

I only got two days this trip...wasn't nearly enough.  The upshot?  Lots of cool stuff on the horizon.

GI Commands : 2 -- Managing the Local and Cluster Registry

Hemant K Chitale - Sun, 2014-08-03 07:51
In 11gR2

There are essentially two registries, the Local Registry and the Cluster Registry.

Let's check the Local Registry :

[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0/cdata/node1.olr
crs_home=/u01/app/grid/11.2.0
[root@node1 ~]#
[root@node1 ~]# file /u01/app/grid/11.2.0/cdata/node1.olr
/u01/app/grid/11.2.0/cdata/node1.olr: data
[root@node1 ~]#

So, like the Cluster Registry, the Local Registry is a binary file.  It is on a local filesystem on the node, not on ASM/NFS/CFS.  Each node in the cluster has its own Local Registry.

The Local Registry can be checked for consistency (corruption) using ocrcheck with the "-local" flag.  Note : As demonstrated in my previous post, the root account must be used for the check.

[root@node1 ~]# su - grid
-sh-3.2$ su
Password:
[root@node1 grid]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2696
Available space (kbytes) : 259424
ID : 1388021147
Device/File Name : /u01/app/grid/11.2.0/cdata/node1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#
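
As a side note, if all you need is the configured OLR location (rather than a full integrity check), ocrcheck also accepts a -config flag; a hedged sketch, assuming your 11gR2 patch level supports it:

-sh-3.2$ ocrcheck -local -config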

Now let's look at the Cluster Registry :

[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : /fra/ocrfile
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

The Cluster Registry is distributed across two ASM DiskGroups (+DATA and +FRA) and one filesystem (/fra/ocrfile). Yes, this is a special case that I've created to distribute the OCR in this manner.
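
If you want to see where these OCR locations are registered on the node itself, the registry configuration can be read directly; a minimal sketch (the exact entry names in ocr.loc can vary between versions):

[root@node1 grid]# cat /etc/oracle/ocr.loc
[root@node1 grid]# ocrcheck -config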

I cannot add the OCR to a location which is an ASM diskgroup with a lower compatible.asm setting.

[root@node1 grid]# ocrconfig -add +DATA2
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +DATA2.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

[root@node1 grid]#
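
If +DATA2 really had to hold an OCR copy, its compatible.asm attribute would have to be raised first. A minimal sketch, run as the grid owner with the ASM environment set; note that raising compatibility attributes cannot be rolled back:

-sh-3.2$ sqlplus / as sysasm

SQL> -- current compatibility settings of all diskgroups
SQL> select name, compatibility from v$asm_diskgroup;
SQL> -- raise compatible.asm on +DATA2 (irreversible)
SQL> alter diskgroup DATA2 set attribute 'compatible.asm' = '11.2.0.0.0';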

I now remove the filesystem copy of the OCR.

[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]#

Note, however, that the ocrconfig delete doesn't actually remove the filesystem file that I had created.

[root@node1 grid]# ls -l /fra/ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:27 /fra/ocrfile
[root@node1 grid]# rm /fra/ocrfile
rm: remove regular file `/fra/ocrfile'? yes
[root@node1 grid]#

I will now add a filesystem location for the OCR.

[root@node1 grid]# touch /fra/new_ocrfile
[root@node1 grid]# ocrconfig -add /fra/new_ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3668
Available space (kbytes) : 258452
ID : 605940771
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +FRA
Device/File integrity check succeeded
Device/File Name : /fra/new_ocrfile
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@node1 grid]# ls -l /fra/new_ocrfile
-rw-r--r-- 1 root root 272756736 Aug 3 21:30 /fra/new_ocrfile
[root@node1 grid]#

What about OCR Backups ?  (Note : Oracle does frequent automatic backups of the OCR, but *not* of the OLR).
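
The OLR can still be backed up on demand, though. A minimal sketch, using the -local variants of the same ocrconfig commands (run as root, and on each node, since every node has its own OLR):

[root@node1 grid]# ocrconfig -local -manualbackup
[root@node1 grid]# ocrconfig -local -showbackup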
N.B. : This listing doesn't show all the OCR backups you'd expect because I don't have my cluster running continuously through all the days.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr

node1 2011/11/09 23:20:25 /u01/app/grid/11.2.0/cdata/rac/backup_20111109_232025.ocr
[root@node1 grid]#

Let me run an additional backup from node2.

[root@node2 grid]# ocrconfig -manualbackup

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node2 grid]#

We can see that the backup done today (03-Aug) is listed at the top.  Let's check a listing from node1.

[root@node1 grid]# ocrconfig -showbackup

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1 2011/10/22 03:09:03 /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1 2011/10/21 23:06:39 /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1 2014/07/06 21:53:25 /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1 2014/06/16 22:14:05 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node1 grid]#

Yes, the backup of 03-Aug is also listed.  But wait! Why is it on node1?  Let's go back to node2 and do a filesystem listing.

[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
ls: /u01/app/grid/11.2.0/cdata/rac/backup*: No such file or directory
[root@node2 grid]#

Yes, as we noticed, the backup doesn't really exist on node2.

[root@node1 grid]# ls -lt /u01/app/grid/11.2.0/cdata/rac/
total 114316
-rw------- 1 root root 8024064 Aug 3 21:37 backup_20140803_213717.ocr
-rw------- 1 root root 8003584 Jul 6 22:39 backup_20140706_223955.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 day.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 week.ocr
-rw------- 1 root root 8003584 Jul 6 21:53 backup00.ocr
-rw------- 1 root root 8003584 Jul 5 17:30 backup_20140705_173025.ocr
-rw------- 1 root root 7708672 Jun 16 22:15 backup_20140616_221507.ocr
-rw------- 1 root root 7708672 Jun 16 22:14 backup_20140616_221405.ocr
-rw------- 1 root root 7688192 Nov 9 2011 backup_20111109_232025.ocr
-rw------- 1 root root 7667712 Nov 9 2011 backup_20111109_230940.ocr
-rw------- 1 root root 7647232 Nov 9 2011 backup_20111109_230916.ocr
-rw------- 1 root root 7626752 Nov 9 2011 backup_20111109_224725.ocr
-rw------- 1 root root 7598080 Nov 9 2011 backup_20111109_222941.ocr
-rw------- 1 root root 7593984 Oct 22 2011 backup01.ocr
-rw------- 1 root root 7593984 Oct 21 2011 backup02.ocr
[root@node1 grid]#

Yes, *ALL* the OCR backups to date have been created on node1 -- even when executed from node2.  node1 is still the "master" node for OCR backups as long as it is up and running.  I shut down Grid Infrastructure on node1.

[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.new_svc.svc' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.gns' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2676: Start of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.new_svc.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.gns' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'node1'
CRS-2677: Stop of 'ora.gns.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.gns.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2676: Start of 'ora.gns' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node2'
CRS-2676: Start of 'ora.oc4j' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]#

So, all the Grid Infrastructure services are down on node1. I will run an OCR backup from node2 and verify its location.

[root@node2 grid]# ocrconfig -manualbackup

node2 2014/08/03 21:49:02 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr

node1 2014/08/03 21:37:17 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1 2014/07/06 22:39:55 /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1 2014/07/05 17:30:25 /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1 2014/06/16 22:15:07 /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr
[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
-rw------- 1 root root 8024064 Aug 3 21:49 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr
[root@node2 grid]#

Yes, the backup got created on node2 now.

Question : Would there have been a way to create a backup on node2 without shutting down node1 ?

.
.
.

Categories: DBA Blogs