Fuad Arshad

This is just stuff I find on Oracle, from various sources and from my own personal experience. I can also be found at http://www.twitter.com/fuadar

#ThanksOTN OTN Appreciation Day: Recovery Appliance - Database Recovery on Steroids

Tue, 2016-10-11 15:36
Tim Hall of ORACLE-BASE.com came up with a brilliant idea: appreciate OTN for all the ways it has helped shape the Oracle community. I have to say that I wholeheartedly agree, and here is my contribution for #ThanksOTN.

Recovery Appliance, or RA, or ZDLRA, is something I've been very passionate about since its release, hence this very biased post on the RA. The Recovery Appliance is database backup and recovery on steroids. The ability to do full and incremental backups is something every product boasts, so what's special about the ZDLRA? It's the ability to sleep in peace; it's the ability to know my backups are good.
To quote this article from DBTA, which is about SQL Server and dates back to 2009:
 "To summarize, data deduplication is a great feature for backing up desktops, collaboration systems, web applications, and email systems. If I were a DBA or storage administrator, however, I'd skip deduplicating database files and backups and devote that expensive technology to the areas of my infrastructure where it can offer a strong ROI."


This notion really hasn't changed much, though de-duplication software has come a long way.
Why de-dup when you don't even send what you don't need in the first place? That's what the Recovery Appliance brings to the table. Send less data, and recover as a whole: no more restoring L0s and then applying L1s and redo. Just ask to recover a virtual full, and the redo needed to get to that point will be sent. This makes the restore and recovery process automated and much faster than traditional backups.
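From the DBA's side, recovering from the RA looks like any other RMAN restore. A minimal sketch, assuming the protected database is already registered in the RA catalog and the RA backup module (SBT channel) is configured; the catalog connect string is illustrative:

rman target / catalog ravpc1/ratest@ra-scan:1521/zdlra
RMAN> RESTORE DATABASE;   # the RA serves a pre-constructed virtual full
RMAN> RECOVER DATABASE;   # only the redo needed beyond that point is sent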
That, coupled with automatic block checking and built-in validation, makes the RA a product I am personally proud to work with, and it truly puts my database recovery on steroids.

REDO_TRANSPORT_USER and Recovery Appliance (ZDLRA)

Tue, 2016-05-17 10:14

“REDO_TRANSPORT_USER” is an Oracle Database parameter, introduced in release 11.1, that helps transport redo from a primary to a standby using a user designated for log transport. The default configuration assumes the user “SYS” is performing the transport.
This distinction is very important, since the user “SYS” is available on every Oracle database, and as such most Data Guard environments created with default settings use “SYS” for log transport services.
The Zero Data Loss Recovery Appliance (ZDLRA) adds an interesting twist to this configuration. In order for real-time redo to work on a ZDLRA, “REDO_TRANSPORT_USER” needs to be set to the Virtual Private Catalog (VPC) user of the ZDLRA. For databases that are not participating in a Data Guard configuration, this is not an issue, and a user does not need to be created on the protected database, i.e. the database being backed up to the ZDLRA. The important distinction comes into play if you already have a standby configured to receive redo: that process will break, since we have switched “REDO_TRANSPORT_USER” to a user that doesn't exist on the protected database. To avoid this issue if you already have a Data Guard configuration, you will need to create the VPC user as a user in the primary database with the “create session” and “sysoper” privileges, plus an optional “sysdg” (12c).
An example configuration is detailed below.
SQL> select * from v$pwfile_users;

USERNAME        SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM CON_ID
--------------- ----- ----- ----- ----- ----- ----- ------
SYS             TRUE  TRUE  FALSE FALSE FALSE FALSE      0
SYSDG           FALSE FALSE FALSE FALSE TRUE  FALSE      0
SYSBACKUP       FALSE FALSE FALSE TRUE  FALSE FALSE      0
SYSKM           FALSE FALSE FALSE FALSE FALSE TRUE       0

SQL> create user ravpc1 identified by ratest;
User created.

SQL> grant sysoper,create session to ravpc1;
Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME        SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM CON_ID
--------------- ----- ----- ----- ----- ----- ----- ------
SYS             TRUE  TRUE  FALSE FALSE FALSE FALSE      0
SYSDG           FALSE FALSE FALSE FALSE TRUE  FALSE      0
SYSBACKUP       FALSE FALSE FALSE TRUE  FALSE FALSE      0
SYSKM           FALSE FALSE FALSE FALSE FALSE TRUE       0
RAVPC1          FALSE TRUE  FALSE FALSE FALSE FALSE      0

SQL> spool off


Once you have ensured that the password file has the entries, copy the password file to the standby node(s), and then reset the destination state on the primary by deferring and re-enabling it:

SQL> alter system set log_archive_dest_state_X=defer scope=both sid='*';
SQL> alter system set log_archive_dest_state_X=enable scope=both sid='*';
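To confirm the destinations are healthy again after re-enabling, a quick check on the primary can help (v$archive_dest_status is a standard view; X is your destination number):

SQL> select dest_id, status, error from v$archive_dest_status where status <> 'INACTIVE';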

This will ensure that you have redo transport working to both the Data Guard standby and the ZDLRA.


References

Data Guard Standby Database log shipping failing reporting ORA-01031 and Error 1017 when using Redo Transport User (Doc ID 1542132.1)
MAA White Paper - Deploying a Recovery Appliance in a Data Guard environment
REDO_TRANSPORT_USER Reference
Redo Transport Services
Real-Time Redo for Recovery Appliance


Enterprise Manager 13c And Database Backup Cloud Service

Mon, 2016-03-21 10:35

The Oracle Database Backup Cloud Service allows for backup of an Oracle Database to the Oracle Cloud using RMAN. Enterprise Manager 13c provides a very easy way to configure the Oracle Database Backup Cloud Service. This post will walk you through setup of the Oracle Database Backup Cloud Service as well as running backups from EM.


There is a new menu item to configure the Database Backup Cloud Service (DBCS) in the Backup & Recovery drop-down.


This will show you how to set up the Database Backup Cloud Service. If nothing was configured before, you will see the screen below.

Once you click on Configure Database Backup Cloud Service, you will be asked for the Service (Storage) and the Identity Domain that you want the backups to go to. This identity domain comes as part of the DBCS, or as part of DBaaS, which can be purchased from Oracle Cloud.


Once the settings are saved, a popup will confirm that they have been saved.


After saving the settings, submit the configuration job. This will download the Oracle Backup Module to the hosts as well as configure the media management settings. The job will provide details and confirm all configuration is complete, and it will configure this on all nodes of a RAC, which can save a lot of time.

We have now completed the setup and can validate it by looking at the Configure Cloud Backup setup, which also has an option to test a cloud backup.

Let's ensure the settings are there by checking in Backup Settings: the Media Management settings will show the location of the library, environment, and wallet. The Database Backup Cloud Service requires that all backups sent to it are encrypted.


You can also validate this by connecting to RMAN on the command line and running a "SHOW ALL".
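A sketch of what to look for in the output; the SBT_LIBRARY and OPC_PFILE paths below are illustrative, as the actual values are written by the EM configuration job:

$ rman target /
RMAN> SHOW ALL;
...
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libopc.so, ENV=(OPC_PFILE=/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/opcORCL.ora)';
CONFIGURE ENCRYPTION FOR DATABASE ON;
...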

As you can see, we have confirmed that the media management setup is complete, having run a job to download the Cloud Backup Module and configure it.
Now, as a final step, we will configure a backup and run an RMAN backup to the cloud. In the Backup and Recovery menu, schedule a backup. Fill out the pertinent settings, and make sure you encrypt via a password, a wallet, or both. The backup that I scheduled was encrypted using a password.

On the second page, select the destination, which is the cloud in our case, and schedule it.


Validate that the settings are right and execute the job. You can monitor the job by clicking View Job. The new job interface in EM13c is really nice and allows you to see a graphical representation of execution time as well as a log of what is happening, side by side, like below.

Once the backup is completed, you can see it not only through EM but also with RMAN on the command line.
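For example, a quick summary from the RMAN prompt will list the new backup pieces on the SBT_TAPE device type:

RMAN> LIST BACKUP SUMMARY;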

There are a couple of things that I didn't show during the process: parallelism during backups is important, as is compression.
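Both can be configured persistently from the RMAN prompt. A minimal sketch (note that the MEDIUM compression algorithm requires the Advanced Compression Option):

RMAN> CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> BACKUP DEVICE TYPE SBT AS COMPRESSED BACKUPSET DATABASE;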
Enterprise Manager 13c makes the already simple process of setting up backups to the Database Backup Cloud Service even easier.

Zero Data Loss Recovery Appliance - Basics

Mon, 2016-03-07 09:17

Oracle released the Zero Data Loss Recovery Appliance in 2014. The Recovery Appliance was designed to ensure efficient and consistent Oracle Database backups, with a very key focus on recovery.
I am going to write a series of blogs, starting with this one, to discuss the fundamental architecture of the Recovery Appliance, the business case for it, and the deployment and operational strategies around it.
So let's start with: why an appliance? Oracle has had a very interesting strategy, going back to well before the Sun acquisition. Exadata was a prime example of a database machine optimized for database workloads. The engineered systems family has since grown to include the smaller Oracle Database Appliance and, most recently, the newest member of the family, the Zero Data Loss Recovery Appliance.

Now let's start with the basics. The Recovery Appliance, as the name suggests, is an appliance built to close the data protection gaps most customers face when trying to protect their critical data, which most often resides in an Oracle Database. So why a recovery appliance, and why now? Over the years data storage has continued to grow, and so has the amount of data stored in databases; where once a couple of GBs of data was a big deal, today organizations are dealing with petabytes of database storage. Database backups are getting harder and harder to manage, and modern backup appliances focus on getting more out of the storage rather than ensuring recoverability; they don't have a good enough method to validate that backups are usable. The Recovery Appliance is designed to solve these challenges and give customers an autopilot for their backups.
The name Recovery Appliance suggests how much emphasis was put on ensuring recoverability of the database; hence there are controls in place to ensure everything is validated, not just once but on a regular basis, with extensive reporting available. Backups are a very important part of every enterprise, and the Recovery Appliance brings the ability to perform an incremental-forever backup strategy. The incremental-forever strategy, as the name suggests, provides for one full backup (level 0) followed by subsequent incremental (level 1) backups. This, in conjunction with protection policies that ensure a recovery window is maintained, provides the autopilot that keeps backups successful with very little overhead on the machine taking the backup. This is done by offloading the de-duplication and compression activities to the Recovery Appliance.
So far I've used terminology like protection policies, de-duplication, compression, etc. While these terms are common in the backup space, too often people have a hard time making the connection, so let's start with a brief definition of each term.
Full Backups
When a complete backup of the database is taken, this is called a full backup; in a traditional environment this can be done daily or weekly, depending on the backup strategy. Traditional backup appliances rely on these fulls to provide de-duplication capabilities. Full backups require a lot of overhead, since all blocks have to be read from the I/O subsystem and processed by the database host.
Incremental Backups
Incremental backups, as the name suggests, are backups of only the data blocks that have changed since the previous backup. The Oracle Backup and Recovery User's Guide is the best place to understand incremental backups and how they can be employed as part of a backup strategy.
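In RMAN terms, the heart of the strategy is just two commands; a minimal sketch:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # one-time full (level 0)
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # ongoing changed-blocks-only (level 1) backups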
De-duplication
De-duplication is a technique to eliminate duplicate copies of repeating data. This technique is typically employed with flat files or text-based data, since repeating patterns are easier to find there. Incremental backups are a poor source for de-duplication, since there is not much repeating data in them, and the unique structure of the Oracle block makes it hard to achieve much de-duplication.
Compression
Compression is the act of shrinking data, and Oracle provides various methods of compressing data, both within the database and within the RMAN backup process itself.
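Two illustrative examples, one of each kind (the table name is hypothetical):

SQL> create table sales_hist compress for oltp as select * from sales;   -- in-database compression
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;                           -- backup-level compression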
In part 2 of this blog series I will talk about some of the terminology, like protection policies and the incremental-forever strategy, as well as discuss the architecture of the Recovery Appliance.


Exadata 12c New Features RMOUG Slides

Mon, 2015-02-23 08:33
I've finally gotten around to posting my RMOUG slide deck on Slideshare. Hopefully it is helpful to folks looking at new features in Exadata.

Compliance and File Monitoring in EM12c

Mon, 2014-12-29 14:36
I was recently asked to help a customer set up file monitoring in Enterprise Manager, and I thought, since I haven't blogged in a while, this could be a good way to start back up again.
Enterprise Manager 12c provides a very nice compliance and file monitoring framework. There are many built-in frameworks, including ones for PCI DSS and STIG, but this how-to will only focus on a custom file monitoring framework.
Prior to setting up the compliance features, ensure that privilege delegation is set to sudo (or whatever privilege delegation provider you are using) and that credentials for real-time monitoring are set up for the hosts. All the prerequisites are explained here: http://docs.oracle.com/cd/E24628_01/em.121/e27046/install_realtime_ccc.htm#EMLCM12307
Also important in the above link is how every OS interacts with these features.
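For reference, a typical sudo privilege delegation template, as entered under Setup → Security → Privilege Delegation, looks like the line below; the sudo path is illustrative, and %RUNAS%/%COMMAND% are EM placeholders:

/usr/bin/sudo -E -u %RUNAS% %COMMAND%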


Go to Enterprise → Compliance → Library

Create a New Compliance Standard



Name and Describe the Framework


You will see the framework created.


Now let's add some facets to monitor. In this example I selected a tnsnames.ora from my RDBMS home.


Below is a finished facet


Next, let's create a rule that uses that facet.

After selecting the right rule, let's add more color.

Let's add the facet that defines what file(s) will be monitored.

For this example I will select all aspects for testing, but ensure that you have sized your repository appropriately and understand the consequences of each aspect.





After defining the monitoring actions, you have the option to filter and create monitoring rules based on specific events.
I will skip this for now
As we inch towards the end, we can authorize changes and each event manually, or incorporate a change management system that has a connector available in EM12c.

After we have completed this, we have an opportunity to review the settings and then make the rule production.
Now let's create a standard. We are creating a custom file monitoring standard of the Real-Time Monitoring (RTM) type, applicable to hosts.

We will add rules to the file monitor; in this case we will add the tnsnames rule we created to the standard. You can add standards as well as rules to a standard.

Next, let's associate targets with this standard.
You will be asked to confirm.

Optionally, you can now add this to the compliance framework for one-stop monitoring.

Now that we have set everything up, let's test it. Here is the original tnsnames.ora.
Let's add another TNS entry, along the lines shown below.
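The entry I added looks something like the following; the host and service names are hypothetical:

TEST_WATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb))
  )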

Prior to the change, here is what the Compliance Results page looks like. As you can see, the evaluation was successful and we are 100% compliant.



Now, if I go to Compliance → Real-time Observations, I can see that I didn't install the kernel module needed for granular control, so certain functionality cannot be used.

So I'm going to remove those aspects from my rule for now.
Now I have made a whole bunch of changes, including even moving the file. It is all captured.

There are many changes here, and we can actually compare what changed.
If you select "unauthorized" as the audited event for the change, the compliance score drops, and you can use it to see how many violations of a given rule occurred.

In summary, EM12c provides a very robust framework for monitoring built-in compliance standards as well as custom-created frameworks, ensuring your auditors and IT managers are happy.


HeartBleed and Oracle

Fri, 2014-04-11 08:23
There are a lot of people asking about Heartbleed and how it has impacted the web.
Oracle has published MOS Note 1645479.1, which covers all the products impacted and if and when fixes will be available.
The following blog post is also a good reference on the vulnerability: https://blogs.oracle.com/security/entry/heartbleed_cve_2014_0160_vulnerability



User Groups and Speaking About Support and diag tools.

Tue, 2014-04-01 08:58
The Chicago Oracle Users Group (COUG) is finally in reboot mode. Thanks to Alfredo Abate for taking on the responsibility and bringing the enthusiasm to pull the community back together. Jeremy Schneider has blogged about this here. There is a LinkedIn group now open for business; I would recommend everyone contribute, and let's make this reboot a success.

I am also going to be presenting at the Ohio Oracle Users Group on April 17th along with Jeremy Schneider. The details of the event can be found at http://www.ooug.org. If you are in the area, please stop by and say hi. I'll be talking about the various support tools that Oracle has and how to use them effectively.



Collaborate 14 and Vegas

Mon, 2014-02-03 12:30
Collaborate 14 is coming soon, and I can tell you that it is an excellent content, learning, and networking opportunity. I have been going to Collaborate for a while and have found it to be not only a place to learn but also a place to network with my peers. We built the team that wrote Practical Oracle Database Appliance (available here) at Collaborate 13, and were able to deliver a book with authors from across the world as a team effort.
I would highly encourage everyone to consider Collaborate 14 as a way to be part of the wonderful Oracle community: talk to people, listen to people, learn and teach others, and foremost, volunteer. Hey, did I mention it's in VEGAS?
Early bird registration ends February 12, so please pass this along and use my name, Fuad Arshad, as a referrer. Adam Savage is the keynote speaker, which is going to be AWESOME.

Changes and Book

Mon, 2014-01-27 16:35
I just realized that I have not blogged in a very long time. This has been partly because I switched jobs and started working for Oracle. It has been an interesting six months, and I have been enjoying the challenge of working with various customers and helping them solve problems.
The other thing that has been an important milestone in my career is the publishing of a book I collaborated on with a very fine team of individuals. The book is a collection of our experiences with, and passion for, the Oracle Database, and is called Practical Oracle Database Appliance. You can pre-order the book at Amazon via the link below. I will be trying to blog more about various aspects of my new job and interesting stuff about Exadata as I learn it.


EM 12.1.0.3 interesting feature - Deploying Oneoff patches to Agents

Fri, 2013-07-05 09:19
Enterprise Manager 12c already allows for deploying one-off patches through the provisioning and patching module, but what if you need to deploy a lot of agents and don't want to keep patching them after the fact? EM 12.1.0.3 has a new feature that lets you keep management agent and plugin patches on the OMS for a particular agent version; the patches will automatically be applied when the agent or plugin is deployed or upgraded. This is a particularly useful feature when you need to deploy or upgrade in bulk and would otherwise have to apply one-off patches across the environment as well.

The documentation for this feature is available in the Technet EM docs. The feature allows for generic patches by putting them on each OMS; in the case of a multi-OMS setup, this needs to be done on all OMSs. Create a directory like the following:
$/install/oneoffs/<agent_version>/Generic/
e.g.
$/install/oneoffs/12.1.0.3.0/Generic/
or
$/install/oneoffs/<agent_version>/<platform>/
e.g.
$/install/oneoffs/12.1.0.3.0/linux_x64/


On deployment or upgrade, the patches will automatically be applied. They can be validated using the usual methods: either by looking at the Manage Cloud Control → Agents screens, or with opatch lsinventory on the agent.
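For example, on the agent host (the agent home path below is illustrative):

$ /u01/app/oracle/agent12c/core/12.1.0.3.0/OPatch/opatch lsinventory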

This is a very useful feature and will allow for rapid deployment and upgrades of agents without having to worry about applying one-off patches later.



EM12c Disk Busy Alert for Oracle Database Appliance V1 & X3-2

Wed, 2013-06-05 10:03
Oracle just published Document ID 1558550.1, which covers an issue that I've had an SR open on for 6 months now.
Due to a Linux iostat bug (BUG 1672511, unpublished: "oda - disk device sdau1 & sdau2 are 100% busy due to avgqu-sz value"), host-level monitoring reports critical Disk Busy alerts. This bug will be fixed in an upcoming release of the Oracle Database Appliance software.
The workaround is to disable the Disk Activity Busy alert in EM12c. After the issue is resolved, the user then has the responsibility to remember to re-enable this alert.

The note in the document makes me laugh though:

Note:  Once you apply the iostat fix through an upcoming ODA release, make sure that you re-enable this metric by adding the Warning and Critical threshold values and applying the changes.
 

Patching an Exadata Compute Node

Tue, 2013-05-28 12:10
An Oracle Exadata full rack consists of 8 DB compute nodes. From 11.2.3.2.0 onwards, Oracle has shifted its strategy for patching Exadata to using Yum; the quarterly full stack patch does not include the DB compute node patches anymore, so those have to be done separately.
So where does someone start when they are new to Exadata and need to patch to a newer release of the software?
For the Compute Nodes Start here
Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades [ID 1473002.1]

This note walks you through either setting up a direct connection to ULN and building a repository, or using an ISO image that you can download to set up the repository. Best practice is to set up a repository external to the Exadata and then add the repo info on the Exadata compute nodes. Once the repository is created and updated (or the ISO downloaded), you will need to create:
/etc/yum.repos.d/Exadata-computenode.repo
[exadata_dbserver_11.2_x86_64_latest]
name=Oracle Exadata DB server 11.2 Linux $releasever - $basearch - latest
baseurl=http:///yum/unknown/EXADATA/dbserver/11.2/latest/x86_64/
gpgcheck=1
enabled=0
This needs to be added to all Exadata compute nodes. Then ensure all repositories are disabled, to avoid any accidents:
sed -i 's/^[\t ]*enabled[\t ]*=[\t ]*1/enabled=0/g' /etc/yum.repos.d/*
Download and stage patch 13741363 in a software directory on each node; this has the helper scripts needed. Always make sure to get the updated versions. You will need to disable and stop CRS on the node you are patching (as root) and then perform a server backup:
$GRID_HOME/bin/crsctl disable crs
$GRID_HOME/bin/crsctl stop crs -f
/13741363//dbserver_backup.sh
This will create a backup, and results similar to those below will show up.
[INFO] Unmount snapshot partition /mnt_snap
[INFO] Remove snapshot partition /dev/VGExaDb/LVDbSys1Snap
Logical volume "LVDbSys1Snap" successfully removed
[INFO] Save partition table of /dev/sda in /mnt_spare/part_table_backup.txt
[INFO] Save lvm info in /mnt_spare/lvm_info.txt
[INFO] Unmount spare root partition /mnt_spare
[INFO] Backup of root /dev/VGExaDb/LVDbSys1 and boot partitions is done successfully
[INFO] Backup partition is /dev/VGExaDb/LVDbSys2
[INFO] /boot area back up named boot_backup.tbz (tar.bz2 format) is on the /dev/VGExaDb/LVDbSys2 partition.
[INFO] No other partitions were backed up. You may manually prepare back up for other partitions.
Once the backup is complete, you can proceed with the update:
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest repolist   # this is the official channel for all updates
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest update
This will download the appropriate RPMs, update the compute node, and reboot it. The process can take between 10 and 30 minutes. Once the node is up, the clusterware will not come up (it is still disabled). Validate the image using imageinfo:
[root@exa]# imageinfo
Kernel version: 2.6.32-400.21.1.el5uek #1 SMP Wed Feb 20 01:35:01 PST 2013 x86_64
Image version: 11.2.3.2.1.130302
Image activated: 2013-05-27 14:41:45 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1
This confirms that the compute node has been upgraded to 11.2.3.2.1. Now unlock CRS as root and relink the database home:
$GRID_HOME/bin/crsctl disable crs   # already done before the backup; shown for completeness
$GRID_HOME/crs/install/rootcrs.pl -unlock
su - oracle
. oraenv
# select the oracle database home to set the environment
relink all
make -C $ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle   # relink with RDS for the InfiniBand interconnect
su root
$GRID_HOME/crs/install/rootcrs.pl -patch
$GRID_HOME/bin/crsctl enable crs
This concludes a compute node patch application. Rinse and repeat for all compute nodes (8 in a full rack).
Now, if you have read through all this, you can see how many manual steps are involved. Fortunately, Oracle just released a utility to automate all these tasks for you: Rene Kundersma of Oracle talks about this new utility, called dbnodeupdate.sh, in his blog post here.
Andy Colvin has published on his blog his take on these scripts, plus a demo, here.

Oracle Database Appliance 2.6 Is now Available

Tue, 2013-05-07 08:01
Oracle Database Appliance software version 2.6 is now available for download. ODA 2.6 is the first version that contains combined software for the ODA V1 and ODA X3-2.
This release also has an offline configuration tool that works with virtualized and non-virtualized ODA configurations. This provides a lot of help in upfront planning and configuration of the Database Appliance.
The offline configurator is available on Oracle Technology Network and can be downloaded here.
The configurator now asks a set of questions, covering environment and hardware, to determine the deployment and network structure to use.
From the patching perspective, ODA 2.6 offers a few enhancements.
For the Virtualized Platform:
1. Remote template support
2. Support for assemblies, i.e. .ova support
3. A GUI VM console that can be accessed via oakcli
For Bare Metal (aka Non-Virtualized):
1. SAP application deployment is supported
2. PSU patch
3. Unified patch for both V1 and X3-2

I'm still testing the patch and will put up something on how to patch a virtualized ODA, as well as the bare metal steps, shortly.
The Information Center and various notes on MOS have not been updated with ODA 2.6 information yet.
As always, please test before deploying to production.

Collaborate 13 Presentation - Oracle Database Appliance RAC In a box some strings attached.

Tue, 2013-04-09 22:36
I just uploaded my presentation on the Oracle Database Appliance, "RAC in a box, some strings attached." Thank you to all the people who attended; I hope I was able to convey my points about the Database Appliance in a manner that was objective and factual.



Collaborate 2013 - Lots Of learning Lots of Fun

Tue, 2013-03-26 15:56
Collaborate 2013 is right around the corner, and it is the place to be this April in Denver.
There is a lot of learning and a lot of social activity. I'm going to be down there presenting a paper on the Oracle Database Appliance: http://bit.ly/YFmPpu.
I will be talking about how to deploy the Database Appliance and how it has changed how I do my day-to-day work. If you are interested in hearing me talk about the Oracle Database Appliance, how it works, how easy it is to deploy, etc., please join me at http://bit.ly/YFmPpu.
There will be a lot of tracks and sessions, and even time with a lot of influential people.
Below is a sneak peek of what I will be talking about. So if the topic interests you, please come and join me, or just come by to say hi.


Moving Datafile from Physical Standby to Primary via Rman

Tue, 2013-03-26 10:08
RMAN is an interesting tool, and it seems every day you learn something new. As part of a production issue we lost a diskgroup. This diskgroup held only a small subset of the actual data, but due to an issue in the underlying disk (external redundancy), the database crashed. While we were debugging the issue with our vendor, we decided to use RMAN and our physical standby to bring back the datafiles and help mitigate the outage. Now, we could have failed over to our standby, but since our standby hardware was not sized to handle the production workload, that was a risk the business was not willing to take.
As of Oracle 11.1, you can take files from a physical standby database and move them to the primary.
We went to the standby, determined which datafiles were in the affected diskgroup, then connected from the standby to the primary as an auxiliary and copied the files over the network to a different diskgroup.

connect target sys@standby
connect auxiliary sys@primary
BACKUP AS COPY DATAFILE 2 AUXILIARY FORMAT '+DATA/COPY_FILE/COPY_FILE.dbf';
..
..
Once the files are copied, you can switch the datafiles on the primary:
CATALOG DATAFILECOPY '+DATA/COPY_FILE/COPY_FILE.dbf';

RUN {
SET NEWNAME FOR DATAFILE 2 TO '+DATA/COPY_FILE/COPY_FILE.dbf';
SWITCH DATAFILE 2;
}
This will allow for a move of the datafile as well as a rename on the primary.
You will then have to perform media recovery on the datafile(s) in question and bring them online:

recover datafile 2;
sql 'alter database datafile 2 online';
You have just copied a file over the network from the standby and plugged it into your primary. This is pretty convenient when accidents happen.

Changing Enterprise Manager 12c Default inactive Timeout

Tue, 2013-02-26 14:27
Oracle Enterprise Manager defaults to a session timeout of 45 minutes. Depending on your organization or its security policies, you might want to change that to a shorter amount of time, or in my case a longer one.
To check what the timeout is set to:
em@emap1>./emctl get property -name oracle.sysman.eml.maxInactiveTime
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
SYSMAN password:
Value for property oracle.sysman.eml.maxInactiveTime for oms All Management Servers is null (null = the default of 45 minutes)
After you have checked the value, you can change it:
/u01/app/oraem/oem12c/oms/bin
em@emap1>./emctl set property -name oracle.sysman.eml.maxInactiveTime -value 90
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
SYSMAN password:
Property oracle.sysman.eml.maxInactiveTime has been set to value 90 for all Management Servers
OMS restart is required to reflect the new property value
This will set the inactive timeout to 90 minutes; you will need an OMS bounce for it to take effect. Please ensure that whatever value you set meets your requirements and the security requirements of your company.

Deploying a Virtualized Oracle Database Appliance ODA 2.5

Mon, 2013-02-11 17:12


So I finally got the opportunity to deploy ODA 2.5 on a development Oracle Database Appliance. The documentation is very lacking and needs more refinement for the masses.
Here are the steps to deploy ODA 2.5 by reimaging the ODA. Concepts and procedures for bare-metaling the box remain the same.
    1. Use the ILOM to connect to the box via remote control and mount the VM Image  ( Need to do this on both hosts individually)
    2. After the reboot, the imaging process will start; it should take between 1-2 hrs (it took about 2 hrs 30 minutes for me).
    3. Once you get the Oracle VM Server 3.1.1 screen, your box has been imaged with a dom0 image.
    4. If you are using the ILOM, you can press ALT-F2 to get a login prompt.
    5. Login as root
    6. Ensure that both boxes have been reimaged before starting the next step
    7. Run /opt/oracle/oak/bin/oakcli configure firstnet (on the first node).
    8. You have 2 options (Local or Global).
    9. Global should be selected if both nodes are ready to be IP'ed.
    10. Select the network (net1, net2, net3, net4).
    11. Please note it is a little confusing, but here is a breakdown:
        a. priv1=bond0 (Interconnect)
        b. Net1=bond1
        c. Net2=bond2
        d. Net3=bond3
        e. Net4=xbond0
    12. Oracle failed to mention this, but the startup does display the MAC addresses as well as the ethernet names and bond info, so be careful and ensure that you understand your network topology prior to installing.
    13. You do want to make sure you have a DNS entry and a new IP address for the dom0 on each server node (2 x dom0).
    14. Needless to say, the network should be the same on both nodes, e.g. public should be cabled on net1 on both nodes for consistency.
    15. The network config will configure the public network for dom0 only, on both nodes.
    16. After the config, scp patch 16186172 into /OVS on the dom0 of box 0.
    17. Unzip patch files 1 & 2 and cat the pieces together: cat a b > template.
    18. Deploy the oda_base:
 [root@odadb1-dom0 bin]# ./oakcli deploy oda_base
Enter the template location: /OVS/templateBuild-2013-01-15-08-53.tar.gz
Core Licensing Options:
1. 2 CPU Cores
2. 4 CPU Cores
3. 6 CPU Cores
4. 8 CPU Cores
5. 10 CPU Cores
6. 12 CPU Cores
Selection[1 : 6] : 5
ODA base domain memory in GB(min 8, max 88)[default 80] :
INFO: Using default memory size i.e. 80 GB
INFO: Node 0
INFO: Deployment in non local mode
INFO: Running the command to copy the template /OVS/templateBuild-2013-01-15-08-53.tar.gz to remote node 1
templateBuild-2013-01-15-08-53.tar.gz 100% 4620MB 47.6MB/s 01:37
INFO: Node 0
INFO: Spawned the process 26679 in the deployment node 0
INFO: Trying to setup on deployment node 0
INFO: Spawned the process 26680 in the node 1
INFO: Trying to setup on node 1
templateBuild-2013-01-15-08-53/swap.img
......
templateBuild-2013-01-15-08-53/u01.img
Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
Started domain oakDom1 (id=1)
INFO: Deployment in local mode
INFO: Node 1
INFO: Extracted the image files on node 1
INFO: Node 1
INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg file
INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/System.img command to mount the image file
INFO: Mount is successfully completed on /dev/loop0
INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/tmpmnt/boot/grub/grub.conf file
INFO: Node 1
INFO: Node 1
INFO: Assigning IP to the second node...
INFO: Node 1
INFO: Created oda base pool
INFO: Starting ODA Base...
Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
Started domain oakDom1 (id=1)
INFO: Deployment in local mode
INFO: Node 0
INFO: Extracted the image files on node 0
INFO: Node 0
INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg file
INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/System.img command to mount the image file
INFO: Mount is successfully completed on /dev/loop0
INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/tmpmnt/boot/grub/grub.conf file
INFO: Node 0
INFO: Node 0
INFO: Assigning IP to the first node...
INFO: Node 0
INFO: Created oda base pool
INFO: Starting ODA Base…
19. Once oda_base is deployed:
                
 [root@podadb31-dom0 bin]# ./oakcli show oda_base
ODA base domain
ODA base CPU cores :10
ODA base domain memory :80
ODA base template :/OVS/templateBuild-2013-01-15-08-53.tar.gz
20. Once oda_base is installed, you will have to VNC into the dom0 on port 5900 to get access to the database server. (Due to a bug, you will need to VNC into both servers first and hit "press any key to continue".)
21. Once logged in, you will need to IP the oda_base.
22. You can use /opt/oracle/oak/bin/oakcli configure firstnet (please note it detects the VM environment and gives eth1, eth2, eth3, and eth4 as options).
23. It is better to use ./oakcli deploy.
24. oakcli now has an option to change the proxy port for ASR, as well as to configure an external ASR server.
25. An external ASR server needs a server name as well as a port (there is no definition of what the port is supposed to be).
26. Also, due to a bug, if the VM manager is bounced you will have to VNC in and hit "press any key to continue"; you can see that below.
The deployment process has not changed and will follow the same deployment steps.
This is the first release of the ODA on a virtualized platform, and glitches are to be expected, but it does seem to have been rushed out.
Please feel free to comment or ask questions. I have only deployed the dom0 and oda_base here, but I will deploy an app shortly and post my experience.

Update: Edited steps and changed the network names
