Syed Jaffar

Whatever topic is discussed on this blog reflects my own findings and views, which may not necessarily match those of others. I strongly recommend that you test any piece of advice given on this blog before you implement it.

Oracle EBS Suite blank Login page - Post Exadata migration

Tue, 2017-11-07 08:35
As part of an EBS database migration to Exadata, we recently deployed a brand new Exadata X5-2L (eighth rack) and migrated an Oracle EBS database from typical storage/server technologies.

Below are the environment details:

EBS Suite 12.1.3 (running on two application servers behind a hardware LOAD BALANCER)
Database: 11.2.0.4 RAC database with 2 instances

After the database migration, the buildxml and autoconfig procedures went well on both the application and database tiers. However, when the EBS login page was launched, it came up as just a blank page; moreover, the apps passwords could not be changed through the typical procedure. We wondered what had gone wrong, as none of the procedures gave any significant failure indication: everything ran fine, and we could see the successful completion messages.

After a quick initial investigation, we found that there was an issue with the GUEST user, and also that the profile was not loaded when autoconfig was run on the application server. In the autoconfig log file, we could see that the process had failed to update the password (ORACLE). We then tried all the workarounds recommended on Oracle Support and other websites. Unfortunately, none of them helped.

After spending almost a whole day investigating and analyzing the root cause, we looked at the DB components and their status through DBA_REGISTRY, and found the JServer JAVA Virtual Machine component INVALID. I then realized the issue had happened during the Exadata software deployment: there was an error during the DB software installation while applying a patch, due to a conflict between patches, because of which catbundle was never executed.

Without wasting a single second, we ran catbundle.sql, followed by utlrp.sql.
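For reference, a minimal sketch of the check and the fix (the catbundle argument depends on the bundle patch that was applied, e.g. psu or exa):

SQL> -- check component status; JServer JAVA Virtual Machine should be VALID
SQL> select comp_name, status from dba_registry;

SQL> -- re-run the bundle patch SQL steps (argument per the bundle applied)
SQL> @?/rdbms/admin/catbundle.sql psu apply

SQL> -- recompile invalid objects
SQL> @?/rdbms/admin/utlrp.sql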

Guess what: all the issues disappeared. We ran autoconfig on the application servers, after which we could change the apps user password, and the login page appeared too.

It was quite a nice experience.


Exadata X5-2L deployment & Challenges

Sun, 2017-11-05 07:01
A brand new Exadata X5-2L eighth rack (I know the latest is the X7 now, but this was for a POC, so no worries) was recently deployed at a customer site for an Oracle EBS Exadata migration POC. It wasn't the easy walk in the park I had initially presumed: some challenges (network, configuration) were thrown at us during the migration, but we happily overcame them, got the machine installed, and completed the EBS database migration.

So, I am going to share yet another Exadata bare metal deployment story, explaining the challenges I faced and how they were fixed.

Issue 1) DB network cable issues:
After successful execution of elasticConfig, all the Exadata factory IP addresses were set to client IPs. Though the management network was accessible from outside, the client network was not. When we checked with the network team about enabling the ports on the corporate switch, they confirmed that the ports were enabled, but the connection was showing as not active, and they asked us to investigate the network cables connected to the DB nodes. When we verified the network cable ports, we didn't find any lights flashing, and after an extensive investigation (switch ports, SFPs on the Exadata and corporate switches, cable status), it was found that the cable pins were not properly seated. We also found that the network bonding interfaces (eth1 and eth2) were down, as confirmed by the ethtool eth1 command. After fixing the cables and bringing up the interfaces (ifup eth1 and ifup eth2), we could see that the cables were connected properly, and the lights on the ports came on.

$ ethtool eth1   (output showing the interface was not connected)
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)

        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: no
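For reference, a minimal sketch of the fix (illustrative output):

# ifup eth1
# ifup eth2
# ethtool eth1 | grep "Link detected"
        Link detected: yes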

Issue 2) Wrong Netmask selection for client network:
After fixing the cable issues, we continued with the onecommand execution. During validation, it failed because of a differing netmask for the client network (under the cluster information section). The customer had unfortunately made a mistake in the client network netmask selection for the cluster settings, so the client netmask value differed between the client and cluster definitions. This was fixed by modifying the netmask value in the ifcfg-bondeth0 file (/etc/sysconfig/network-scripts) and restarting the network services, as sketched below.
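A sketch of the change, with an illustrative netmask value:

# vi /etc/sysconfig/network-scripts/ifcfg-bondeth0
NETMASK=255.255.255.0    <-- corrected to match the cluster definition
# service network restart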

Issue 3) Failed eighth rack configuration (rack type and disk size):
Since the system had been delivered sometime around the end of August 2015, no one actually knew the exact disk size and rack model; the BOQ (bill of quantities) for the order only showed X5-2 HC storage. So the wrong Exadata rack and disk size were selected in OEDA: 8TB disks instead of 4TB, and a fixed eighth rack instead of an elastic configuration. This was fixed by re-running OEDA with the correct options.

Issue 4) Cell server IP issues:
Another obstacle was faced while doing the cell connectivity step (part of onecommand): the cell server IPs had not been modified by elasticConfig. Fortunately, I found a friend's blog on this topic and quickly fixed the issue. This is why I like to blog about all the technical issues I encounter; who knows, it could solve someone's pain.

http://blog.umairmansoob.com/exadata-deployment-error-cell-02559/

Issue 5) SCAN Listener configuration:
Cluster validation failed due to inconsistent values for the SCAN name. While investigating the various issues, the private, public & SCAN IPs had been put into the /etc/hosts file. So, while configuring LISTENER_SCAN2 and LISTENER_SCAN3, this issue appeared. This was fairly understandable: it happened because of the 3 SCAN entries in the /etc/hosts file. A quick Google search turned up the following blog, which helped me fix the issue:

https://learnwithme11g.wordpress.com/2010/09/03/how-to-add-scan-listener-in-11gr2-2/

Finally, I managed to deploy the Exadata successfully and perform the Oracle EBS database migration. No doubt, this experience really strengthened my networking and other skills. Every challenge comes with an opportunity to learn.

I thank those individuals who write blogs and share their experiences to help the Oracle community.

There is still one open issue yet to be resolved: slow sqlplus and DB startup. I presume this is due to heavy resource utilization on the server; the mystery is yet to be solved. Stay tuned for more updates.






Oracle R Enterprise Server Configuration

Fri, 2017-10-27 11:17


Oracle R Enterprise Server Overview

Oracle R Enterprise includes several components on the server. Together these components enable an Oracle R Enterprise client to interact with Oracle R Enterprise Server.
The server-side components of Oracle R Enterprise are: 
  • Oracle Database Enterprise Edition (64bit)
  • Oracle R Distribution or open source R
  • Oracle R Enterprise Server
Oracle R Enterprise Server consists of the following: 
    • The rqsys schema
    • Metadata and executable code in sys
    • Oracle R Enterprise Server libraries in $ORACLE_HOME/lib (Linux and UNIX) or %ORACLE_HOME%\bin (Windows)
    • Oracle R Enterprise R packages in $ORACLE_HOME/R/library (%ORACLE_HOME%\R\library on Windows)
Oracle R Enterprise Server Requirements

Before installing Oracle R Enterprise Server, verify your system environment, and ensure that your user ID has the proper permissions.

Oracle R Enterprise runs on 64-bit platforms only.
·        Linux x86-64
·        Oracle Solaris
·        IBM AIX

Oracle Database must be installed and configured as described in the Oracle documentation:
·        Oracle R Enterprise requires the 64-bit version of Oracle Database Enterprise Edition.

Installing Oracle R Enterprise Server
Verify that the ORACLE_HOME, ORACLE_SID, R_HOME, PATH, and LD_LIBRARY_PATH environment variables are properly set. For example, you could specify values like the following in a .bashrc file:

export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=ORCL
export R_HOME=/usr/lib64/R
export PATH=$PATH:$R_HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$R_HOME/lib

Download Oracle R Enterprise Server

Go to the Oracle R Enterprise home page on the Oracle Technology Network:
http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/
Select Oracle R Enterprise Downloads. On the Downloads page, select Oracle R Enterprise Server and the Supporting Packages for Linux. The following files are downloaded for Oracle R Enterprise 1.4.1:

ore-server-linux-x86-64-1.4.1.zip
ore-supporting-linux-x86-64-1.4.1.zip

Log in as root, and copy the installers for Oracle R Enterprise Server and the supporting packages across the nodes. For example:

$ dcli -g nodes -l oracle mkdir -p /home/oracle/ORE
$ dcli -g nodes -l oracle -f ore-server-linux-x86-64-1.4.1.zip -d /home/oracle/ORE/ore-server-linux-x86-64-1.4.1.zip
$ dcli -g nodes -l oracle -f ore-supporting-linux-x86-64-1.4.1.zip -d /home/oracle/ORE/ore-supporting-linux-x86-64-1.4.1.zip

Unzip the supporting packages on each node:
$ dcli -t -g nodes -l oracle unzip /home/oracle/ORE/ore-supporting-linux-x86-64-1.4.1.zip -d /my_destination_directory/

 Install Oracle R Enterprise server components:
$ dcli -t -g nodes -l oracle "cd /my_destination_directory; ./server.sh -y
      --admin --sys syspassword --perm permtablespace
      --temp temptablespace --rqsys rqsyspassword
      --user-perm usertablespace --user-temp usertemptablespace
      --pass rquserpassword --user RQUSER"


Installing Oracle R Enterprise Server interactively:
 
$ ./server.sh -i

Oracle R Enterprise 1.4.1 Server.

Copyright (c) 2012, 2014 Oracle and/or its affiliates. All rights reserved.

Checking platform .................. Pass
Checking R ......................... Pass
Checking R libraries ............... Pass
Checking ORACLE_HOME ............... Pass
Checking ORACLE_SID ................ Pass
Checking sqlplus ................... Pass
Checking ORACLE instance ........... Pass
Checking CDB/PDB ................... Pass
Checking ORE ....................... Pass

Choosing RQSYS tablespaces
  PERMANENT tablespace to use for RQSYS [list]: DATA
  TEMPORARY tablespace to use for RQSYS [list]: TEMP1
Choosing RQSYS password
  Password to use for RQSYS:

Choosing ORE user
  ORE user to use [list]:

ORE user to use [list]: OREUSER1

Current configuration
  R Version ........................ Oracle Distribution of R version 3.1.1  (--)
  R_HOME ........................... /usr/lib64/R
  R_LIBS_USER ...................... /u01/app/oracle/product/12.1.0.2/dbhome_1/R/library
  ORACLE_HOME ...................... /u01/app/oracle/product/12.1.0.2/dbhome_1
  ORACLE_SID ....................... OFSAPRD1

  Existing R Version ............... None
  Existing R_HOME .................. None
  Existing ORE data ................ None
  Existing ORE code ................ None
  Existing ORE libraries ........... None

  RQSYS PERMANENT tablespace ....... DATA
  RQSYS TEMPORARY tablespace ....... TEMP1

  ORE user type .................... Existing
  ORE user name .................... OREUSER1
  ORE user PERMANENT tablespace .... DATA
  Grant RQADMIN role ............... No

  Operation ........................ Install/Upgrade/Setup

Proceed? [yes] y

Removing R libraries ............... Pass
Installing R libraries ............. Pass
Installing ORE libraries ........... Pass
Installing RQSYS data .............. Pass
Configuring ORE .................... Pass
Installing RQSYS code .............. Pass
Installing ORE packages ............ Pass
Creating ORE script ................ Pass
Installing migration scripts ....... Pass
Installing supporting packages ..... Pass
Granting ORE privileges ............ Pass

Done



How to install Oracle R Enterprise (ORE) on Exadata

Sun, 2017-10-22 08:38
Very recently, we deployed the ORE (R Distribution and R Enterprise) 3.1.1 packages on a 4-node Exadata environment. This blog discusses the prerequisites and the procedure to deploy Oracle R Distribution v3.1.1.

Note: Ensure you have a recent system (root and /u01) backup before you deploy the packages on the DB server.

What are R and Oracle R Enterprise?

R is third-party, open source software. Open source R is governed by the GNU General Public License (GPL) and not by Oracle licensing. Oracle R Enterprise requires an installation of R on the server computer and on each client computer that interacts with the server.

Why Oracle R Distribution? 
  • Oracle R Distribution simplifies the installation of R for Oracle R Enterprise.
  • Oracle R Distribution is supported by Oracle for customers of Oracle Advanced Analytics, Oracle Linux, and Oracle Big Data Appliance.

What is needed for R Distribution deployment for Oracle Linux 6?
The Oracle R Distribution RPMs for Oracle Linux 6 are listed as follows:

http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/R-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/R-core-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/R-devel-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/libRmath-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/libRmath-devel-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage
/libRmath-static-3.1.1-2.el6.x86_64.rpm
 
If the following dependent RPM is not automatically included, then download and install it explicitly:
texinfo-tex-4.13a-8.el6.x86_64.rpm

(Figure 1-2 in the Oracle documentation depicts the ORE client/server installation steps.)

Oracle R Distribution on Oracle Linux Using RPMs

Oracle recommends that you use yum to install Oracle R Distribution, because yum automatically resolves RPM dependencies. However, if yum is not available, then you can install the RPMs directly and resolve the dependencies manually. Download the required RPMs and their dependent RPMs from the link below:


http://yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/index.html

To know more about the RPMs and their dependencies, visit the following Oracle website:

https://docs.oracle.com/cd/E57012_01/doc.141/e57007/install_r.htm#BABECIBB

You can install the rpms in the following order:

yum localinstall libRmath-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-static-3.1.1-2.el6.x86_64.rpm
yum localinstall R-core-3.1.1-2.el6.x86_64.rpm
yum localinstall R-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall R-3.1.1-2.el6.x86_64.rpm

Once the RPMs are installed, you can validate the installation using the procedure below: go to the /usr/lib64/R directory on the database server and, as the oracle user, type R. You should see the R startup banner; type q() to exit the R interface.
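A minimal sketch of the validation session (the banner text is abbreviated and illustrative):

$ cd /usr/lib64/R
$ R

Oracle Distribution of R version 3.1.1  (--)
...
Type 'q()' to quit R.

> q()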

Repeat this on the rest of the DB nodes, if you are on RAC.

To uninstall Oracle R Distribution, remove the RPMs in the reverse order of installation:

rpm -e R-<Rversion>
rpm -e R-devel
rpm -e R-core
rpm -e libRmath-static
rpm -e libRmath-devel
rpm -e libRmath
 
 
In the next blog post, I will demonstrate how to configure Oracle R Enterprise.





How to expand Exadata Database Storage capacity on demand

Thu, 2017-10-19 04:27


Exadata Storage expansion

Most of us know the capabilities that the Exadata Database Machine delivers. It's a known fact that Exadata comes in fixed rack-size capacities: 1/8 rack (2 DB nodes, 3 cells), quarter rack (2 DB nodes, 3 cells), half rack (4 DB nodes, 7 cells) and full rack (8 DB nodes, 14 cells). When you want to expand the capacity, it must be in fixed sizes as well: 1/8 to quarter, quarter to half, and half to full.

With the Exadata X5 elastic configuration, one can also have customized sizing, extending the capacity of the rack by adding any number of DB servers or storage servers, or a combination of both, up to the maximum allowed capacity of the rack.

In this blog post, I will summarize and walk through the procedure for extending Exadata storage capacity, i.e., adding a new cell to an existing Exadata Database Machine.

Preparing to Extend Exadata Database Machine

·        Ensure the hardware is placed in the rack and all necessary network and cabling requirements are completed (2 IP addresses from the management network are required for the new cell).
·        Re-image or upgrade the image:
o   Extract the imageinfo from one of the existing cell servers.
o   Log in to the new cell through ILOM, connect to the console as the root user, and get the imageinfo.
o   If the image version on the new cell doesn't match the existing image version, either download the exact image version and re-image the new cell, or upgrade the image on the existing servers.

Review "Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)" if you want to reimage the new cell.
  • Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node (a sample cellip.ora entry is shown after these steps). To do this, perform the steps below from the first DB server in the cluster:
    • cd /etc/oracle/cell/network-config
    • cp cellip.ora cellip.ora.orig
    • cp cellip.ora cellip.ora-bak
 
    • Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
    • /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
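For reference, cellip.ora simply lists one cell= entry per storage cell; the IP addresses below are illustrative:

cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"    <-- entry added for the new cell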

  • If ASR alerting was set up on the existing storage cells, configure cell ASR alerting for the cell being added.
    • List the cell attributes required for configuring cell ASR alerting. Run the following command from any existing storage grid cell:
o   CellCLI> list cell attributes snmpsubscriber
    • Apply the same SNMP values to the new cell by running the command below as the celladmin user, as shown in the below example:
o   CellCLI> alter cell snmpSubscriber=((host='10.20.14.21',port=162,community=public))
  • Configure cell alerting for the cell being added.
    • List the cell attributes required for configuring cell alerting. Run the following command from any existing storage grid cell:
o   CellCLI> list cell attributes
o    notificationMethod,notificationPolicy,smtpToAddr,smtpFrom,
o    smtpFromAddr,smtpServer,smtpUseSSL,smtpPort
    • Apply the same values to the new cell by running the command below as the celladmin user, as shown in the example below:
o   CellCLI> alter cell notificationmethod='mail,snmp',notificationpolicy='critical,warning,clear',smtptoaddr= 'dba@email.com',smtpfrom='Exadata',smtpfromaddr='dba@email.com',smtpserver='10.20.14.21',smtpusessl=FALSE,smtpport=25
  • Create cell disks on the cell being added.
    • Log in to the cell as celladmin and run the following command:
o   CellCLI> create celldisk all
    • Check that the flash log was created by default:
o   CellCLI> list flashlog
You should see the name of the flash log. It should look like cellnodename_FLASHLOG, and its status should be "normal".
If the flash log does not exist, create it using:
CellCLI> create flashlog all
    • Check the current flash cache mode and compare it to the flash cache mode on existing cells:
o   CellCLI> list cell attributes flashcachemode
To change the flash cache mode to match the flash cache mode of existing cells, do the following:
i. If the flash cache exists and the cell is in WriteBack flash cache mode, you must first flush the flash cache:
CellCLI> alter flashcache all flush
Wait for the command to return.
ii. Drop the flash cache:
CellCLI> "drop flashcache all"
iii. Change the flash cache mode:
CellCLI> "alter cell flashCacheMode=writeback_or_writethrough"
The value of the flashCacheMode attribute is either writeback or writethrough. The value must match the flash cache mode of the other storage cells in the cluster.
iv. Create the flash cache:
cellcli -e create flashcache all
  • Create grid disks on the cell being added.
    • Query the size and cachingpolicy of the existing grid disks from an existing cell.
o   CellCLI> list griddisk attributes name,asmDiskGroupName,cachingpolicy,size,offset
    • For each disk group found by the above command, create grid disks on the new cell that is being added to the cluster. Match the size and the cachingpolicy of the existing grid disks for the disk group reported by the command above. Grid disks should be created in the order of increasing offset to ensure similar layout and performance characteristics as the existing cells. For example, the "list griddisk" command could return something like this:
o   DATAC1          default         5.6953125T         32M
o   DBFS_DG         default         33.796875G         7.1192474365234375T
o   RECOC1          none            1.42388916015625T  5.6953582763671875T
When creating grid disks, begin with DATAC1, then RECOC1, and finally DBFS_DG using the following command:
CellCLI> create griddisk ALL HARDDISK PREFIX=DATAC1, size=5.6953125T, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DATAC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=RECOC1,size=1.42388916015625T, cachingpolicy='none', comment="Cluster cluster-clux6 DR diskgroup RECOC1"

CellCLI> create griddisk ALL HARDDISK PREFIX=DBFS_DG,size=33.796875G, cachingpolicy='default', comment="Cluster cluster-clux6 DR diskgroup DBFS_DG"
CAUTION: Be sure to specify the EXACT size shown along with the unit (either T or G).
  • Verify the newly created grid disks are visible from the Oracle RAC nodes. Log in to each Oracle RAC node and run the following command:
·        $GI_HOME/bin/kfod op=disks disks=all | grep cellName_being_added
This should list all the grid disks created in step 7 above.
  • Add the newly created grid disks to the respective existing ASM disk groups (an illustrative example follows this step).
·        alter diskgroup disk_group_name add disk 'comma_separated_disk_names';
The command above kicks off an ASM rebalance at the default power level. Monitor the progress of the rebalance by querying gv$asm_operation:
SQL> select * from gv$asm_operation;
Once the rebalance completes, the addition of the cell to the Oracle RAC is complete.
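For illustration, a hedged example of the add-disk command with hypothetical grid disk paths (use the kfod output from the verification step to obtain the exact names in your environment):

SQL> alter diskgroup DATAC1 add disk
     'o/192.168.10.5;192.168.10.6/DATAC1_CD_00_exacell04',
     'o/192.168.10.5;192.168.10.6/DATAC1_CD_01_exacell04';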
  • Download and run the latest exachk to ensure that the resulting configuration implements the latest best practices for Oracle Exadata.
References:

http://docs.oracle.com/cd/E80920_01/DBMMR/extending-exadata.htm#DBMMR21158
Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)

 







K21 Technologies - Upcoming Oracle trainings

Tue, 2017-10-17 08:46
Atul's K21 Technologies is offering some quality Oracle training. Below are the upcoming Oracle trainings:

Apps DBA : Install | Patch | Clone | Maintain :
https://k21technologies.samcart.com/referral/appsdba/062687

Apps DBA Webinar: 
https://k21technologies.samcart.com/r…/appsdbawebinar/062687

Oracle GoldenGate Training

Whats new in Exadata X7

Tue, 2017-10-17 07:58
Oracle announced its new Exadata Database Machine X7 during OOW 2017. Let's quickly walk through the key features of the X7.


Key features
  • Up to 912 CPU cores and 28.5TB of memory per rack
  • 2 to 19 DB servers per rack
  • 3 to 18 storage servers per rack
  • Maximum of 920TB flash capacity
  • 2.1PB of disk capacity
  • Delivers 20% faster throughput than earlier models
  • 50% more memory capacity than earlier models
  • 10TB disks (10TB x 12 = 120TB raw per storage server); the only system in the market today with 10TB disk capacity
  • Increased OLTP performance: about 4.8 million reads and about 4.3 million writes per second
  • Features Intel Skylake processors with 24 cores
  • Enhanced Ethernet connectivity: supports 25GbE
  • Delivers in-memory performance from shared storage
  • OEDA CLI interface
  • New Exadata Smart Software: Exadata 18c
 

Switchover and Switchback simplified in Oracle 12c

Fri, 2017-10-13 07:51


Business continuity (disaster recovery) has become a very critical factor for every business, especially in the financial sector. Most banks tend to run regular DR tests to meet the central bank's regulations on DR testing capabilities.

Very recently, there was a request from one of our clients to perform reverse replication and rollback (i.e., switchover & switchback) between the HO and DR sites for one of their business-critical databases. Similar activities were performed with ease on pre-12c databases; however, this was my first such experience with Oracle 12c. After spending a bit of time exploring what's new in 12c switchover, it was amazing to learn how much 12c simplified the procedure. So, I decided to write a post on my experience.

This post demonstrates how the switchover and switchback procedures are simplified in Oracle 12c.

The following is used in the scenario:

·        A 2-instance Oracle 12c RAC primary database (IMMPRD)
·        A single-instance Oracle 12c standby database (IMMSDB)

Look at the current status of both databases:
-- Primary
IMMPRD> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         IMMPRD1           PRIMARY

-- Standby
IMMSDB> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         IMMSDB1           PHYSICAL STANDBY

Before getting into the real action, validate the following to avoid any failures during the role transition:

·        Ensure log_archive_dest_2 is configured on the PRIMARY and STANDBY databases
·        The Media Recovery Process (MRP) is active on the STANDBY and in sync with the PRIMARY database
·        Create STANDBY REDO logs on the PRIMARY, if they don't exist
·        FAL_CLIENT & FAL_SERVER parameters are set on both databases
·        Verify the TEMP tablespaces on the STANDBY and add tempfiles if required, as TEMPFILES created after the STANDBY creation won't be propagated to the STANDBY site

Pre-Switchover in 12c

For a smooth role transition, it is important to have everything in place and in sync. Prior to Oracle 12c, a set of commands had to be run on the PRIMARY and STANDBY to validate the readiness of the systems. With Oracle 12c, this is simplified by the ALTER DATABASE SWITCHOVER VERIFY command, which performs the following set of actions:

·        Verifies the minimum Oracle version, i.e., Oracle 12.1
·        Verifies redo shipping from the PRIMARY database
·        Verifies the MRP status on the STANDBY database

Let’s run the command on the primary database to validate if the environments are ready for the role transition.

IMMPRD> alter database switchover to IMMSDB verify;
alter database switchover to IMMSDB verify
*
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details

When the command was executed, an ORA-16475 error was encountered. For more details, let's walk through the PRIMARY and STANDBY database alert.log files and pay attention to the SWITCHOVER VERIFY WARNING.

--primary database alert.log

Fri Oct 13 11:16:00 2017
SWITCHOVER VERIFY: Send VERIFY request to switchover target IMMSDB
SWITCHOVER VERIFY COMPLETE
SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter. If the switchover target is converted to a primary database, the new primary database will not be protected.

ORA-16475 signalled during:  alter database switchover to IMMSDB verify...

The LOG_ARCHIVE_DEST_2 parameter was not set on the STANDBY database, which caused the VERIFY command to produce the warning. After setting the parameter on the STANDBY, the verify command was re-run, and this time it went well.

IMMPRD> alter database switchover to IMMSDB verify;

Database altered.

The PRIMARY database alert.log confirms there are no WARNINGS:

alter database switchover to IMMSDB verify
Fri Oct 13 08:49:20 2017
SWITCHOVER VERIFY: Send VERIFY request to switchover target IMMSDB
SWITCHOVER VERIFY COMPLETE
Completed: alter database switchover to IMMSDB verify

Switchover in 12c 

After successful validation and confirmation of the databases' readiness for the role transition, execute the actual switchover command on the primary database (it is advisable to watch the alert.log files of the PRIMARY and STANDBY instances):

IMMPRD> alter database switchover to IMMSDB;

Database altered.

Let’s walk through the PRIMARY and STANDBY database alert.log files to review what Oracle has internally done.

--primary database alert.log

alter database switchover to IMMSDB
Fri Oct 13 08:50:21 2017
Starting switchover [Process ID: 302592]
Fri Oct 13 08:50:21 2017
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY [Process Id: 302592] (IMMPRD1)
Waiting for target standby to receive all redo
Fri Oct 13 08:50:21 2017
Waiting for all non-current ORLs to be archived...
Fri Oct 13 08:50:21 2017
All non-current ORLs have been archived.
Fri Oct 13 08:50:21 2017
Waiting for all FAL entries to be archived...
Fri Oct 13 08:50:21 2017
All FAL entries have been archived.
Fri Oct 13 08:50:21 2017
Waiting for dest_id 2 to become synchronized...
Fri Oct 13 08:50:22 2017
Active, synchronized Physical Standby switchover target has been identified
Preventing updates and queries at the Primary
Generating and shipping final logs to target standby
Switchover End-Of-Redo Log thread 1 sequence 24469 has been fixed
Switchover End-Of-Redo Log thread 2 sequence 23801 has been fixed
Switchover: Primary highest seen SCN set to 0x960.0x8bcd0f48
ARCH: Noswitch archival of thread 2, sequence 23801
ARCH: End-Of-Redo Branch archival of thread 2 sequence 23801
ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log switch
ARCH: Standby redo logfile selected for thread 2 sequence 23801 for destination LOG_ARCHIVE_DEST_2
ARCH: Noswitch archival of thread 1, sequence 24469
ARCH: End-Of-Redo Branch archival of thread 1 sequence 24469
ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log switch
ARCH: Standby redo logfile selected for thread 1 sequence 24469 for destination LOG_ARCHIVE_DEST_2
ARCH: Archiving is disabled due to current logfile archival
Primary will check for some target standby to have received all redo
Waiting for target standby to apply all redo
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/imprd/IMPRD1/trace/IMPRD1_ora_302592.trc
Converting the primary database to a new standby database
Clearing standby activation ID 627850507 (0x256c3d0b)
The primary database controlfile was created using the
'MAXLOGFILES 192' clause.
There is space for up to 186 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl5.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl6.f' SIZE 104857600;
ALTER DATABASE ADD STANDBY LOGFILE 'srl7.f' SIZE 104857600;
Archivelog for thread 1 sequence 24469 required for standby recovery
Archivelog for thread 2 sequence 23801 required for standby recovery
Switchover: Primary controlfile converted to standby controlfile succesfully.
Switchover complete. Database shutdown required
USER (ospid: 302592): terminating the instance
Fri Oct 13 08:50:44 2017
Instance terminated by USER, pid = 302592
Completed: alter database switchover to IMMSDB
Shutting down instance (abort)

--standby database alert.log

SWITCHOVER: received request 'ALTER DATABASE COMMIT TO SWITCHOVER  TO PRIMARY' from primary database.
Fri Oct 13 08:50:32 2017
ALTER DATABASE SWITCHOVER TO PRIMARY (IMMSDB1)
Maximum wait for role transition is 15 minutes.
Switchover: Media recovery is still active
Role Change: Canceling MRP - no more redo to apply

SMON: disabling cache recovery
Fri Oct 13 08:50:41 2017
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/imsdb/IMMSDB1/trace/IMMSDB1_rmi_120912.trc
SwitchOver after complete recovery through change 10310266982216
Online logfile pre-clearing operation disabled by switchover
Online log +DATAC1/IMMSDB/ONLINELOG/group_1.3018.922980623: Thread 1 Group 1 was previously cleared
Standby became primary SCN: 10310266982214
Switchover: Complete - Database mounted as primary
SWITCHOVER: completed request from primary database.
Fri Oct 13 08:51:11 2017

At this point in time, the new PRIMARY database is in MOUNT state, so you need to OPEN the database:

IMMSDB> alter database open;

Then start up the new STANDBY database and enable MRP (below is the active data guard standby command):

IMMPRD> startup
IMMPRD> recover managed standby database using current logfile disconnect from session;

Post switchover, run through the following checks on the new primary (IMMSDB):

IMMSDB> alter system switch logfile;

IMMSDB> select dest_id,error,status from v$archive_dest where dest_id=2;

IMMSDB> select max(sequence#),thread# from v$log_history group by thread#;
IMMSDB> select max(sequence#)  from v$archived_log where applied='YES' and
dest_id=2;

On the new standby database (IMMPRD):

IMMPRD> select thread#,sequence#,process,status from gv$managed_standby;
-- in 12.2, use gv$dataguard_status instead of gv$managed_standby view

IMMPRD> select max(sequence#),thread# from v$archived_log group by thread#;

You can also enable tracing on the primary and standby before performing the role transition, to analyze any failures during the procedure. Use the commands below on the PRIMARY database to enable/disable tracing:

SQL> alter system set log_archive_trace=8191;  -- enabling trace

SQL> alter system set log_archive_trace=0;      -- disabling trace

Switchback

To revert (switch back) to the previous setup, perform the same actions. Remember: now your primary is your previous STANDBY, and your standby is your previous PRIMARY.
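For illustration, the switchback is the same verify-then-switchover sequence, now issued from the new primary (IMMSDB):

IMMSDB> alter database switchover to IMMPRD verify;
IMMSDB> alter database switchover to IMMPRD;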


References:

12c Data guard Switchover Best Practices using SQLPLUS (Doc ID 1578787.1)

A few useful Oracle 12cR2 MOS Docs

Thu, 2017-07-06 07:33
A few useful MOS Docs are listed below, in case a 12cR2 upgrade is around the corner.



  • How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)
  • 12.2 Grid Infrastructure Installation: What's New (Doc ID 2024946.1)
  • Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 (Doc ID 2180188.1)
  • Differences Between Enterprise, Standard Edition 2 on Oracle 12.2 (Doc ID 2243031.1)
  • 12.2 gridSetup.sh Does Not List Disks Unless the Discovery String is Provided (Doc ID 2244960.1)


Oracle Clusterware 12cR2 - deprecated and desupported features

Thu, 2017-07-06 04:27


Having a clear understanding of the deprecated and desupported features in a new release is just as important as knowing its new features. In this short blog post, I would like to highlight the following features that are either deprecated or desupported in 12cR2.

Deprecated
·        config.sh will no longer be used for the Grid configuration wizard; instead, gridSetup.sh is used in 12cR2;
·        Placement of OCR and Voting files directly on a shared filesystem is now deprecated;
·        The diagcollection.pl utility is deprecated in favor of Oracle Trace File Analyzer;

Desupported
·        You are no longer able to use Oracle Clusterware commands that are prefixed with crs_.


In my next blog post, I will go over some of the important Oracle Clusterware features in 12cR2. Stay tuned.


SQL Tuning Advisor against sql_id's in AWR

Tue, 2017-05-23 04:23
Very recently, we were in a situation where we had to run SQL Tuning Advisor against a bunch of SQL statements that appeared in the AWR ADDM recommendations report. The initial attempt to launch SQL Tuning Advisor against a SQL_ID didn't go through, as the SQL no longer existed in the shared pool.

Since the sql_id was present in the AWR report, we thought of running the advisor against the AWR data, and found it very nicely and precisely explained on the following blog:

http://www.redstk.com/running-sql-tuning-advisor-against-awr-data/


---- Example how to run SQL Tuning advisor against sql_id in AWR

variable stmt_task VARCHAR2(64);
SQL> exec :stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK (begin_snap => 4118, end_snap => 4119, sql_id => 'caxcavmq6zkv9' , scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'sql_tuning_task01' );

SQL> exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'sql_tuning_task01');

SQL> SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'sql_tuning_task01';

set long 50000
set longchunksize 500000
SET LINESIZE 150
Set pagesize 5000
 

SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('sql_tuning_task01') FROM DUAL;


SQL> exec DBMS_SQLTUNE.drop_tuning_task(task_name =>'sql_tuning_task01');



References:
https://docs.oracle.com/database/121/ARPLS/d_sqltun.htm#ARPLS220
https://uhesse.com/2013/10/11/oracle-sql-tuning-advisor-on-the-command-line/



Happy reading/learning.

Migrating data from on-premises to cloud

Mon, 2017-05-15 14:37
No doubt everyone talks about cloud technologies, and the cloud certainly holds the future for various reasons. Oracle doesn't want to be left behind in the competition and has shifted into top gear with its cloud offerings.

This blog explores various Oracle options to migrate on-premises data to the cloud. Typically, once a database is created in the cloud, the next challenge is loading the data into it. The good thing about data migration is that the methods and procedures remain the same as before. All the usual data migration constraints still apply, such as the following:
  • OS versions of on-premises and cloud machine
  • DB versions
  • Character set
  • DB Size
  • data types
  • Network bandwidth
The well-known, DBA-friendly Oracle methods are still valid for cloud data migration too:
  • Logical method (conventional data pumps)
  • TTS
  • Cross platform TTS
  • Unplugging/Plugging/Cloning/Remote Cloning of PDBs
  • SQL Developer and SQL Loader
  • Golden Gate
Usually, you take the data backup using the method that suits your requirements, and upload the backup files to the cloud machine where the database is hosted. Good network and internet speed will help expedite the data migration process.

In the example below, a Data Pump dump file is copied from the on-premises machine to the cloud host machine:
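A minimal sketch of the copy step using scp (paths and host name are hypothetical):

$ scp /u01/exports/expdp_sales.dmp oracle@cloud-host:/u01/app/oracle/admin/ORCL/dpdump/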


Once the backup files are transferred to the cloud host, you use the typical method to restore the data.

For more options, read the URL below:

https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/mig-12c-non-cdb-12c.html

For examples using SQL Developer and SQL Loader, read the URLs below:

http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/dbaas/OU/MigratingToDBaaS/LoadingData/LoadingData.html
http://docs.oracle.com/cloud/latest/dbcs_schema/CSDBU/GUID-3B14CF7A-637B-4019-AAA7-A6DC5FF3D2AE.htm#CSDBU179

http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/schema/50/DataLoad_SQLDev/DataLoad_SQLDev.html

Golden Gate
http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/ggcs/Replicate_On-Premises_Data_to_Cloud_with_OGGCS/Replicate_on_premises_data_to_cloud_with_Oracle_GoldenGate_cloud_service.html



Transforming a heap table to a partitioned table - how and whats new in 12c R1 & R2

Sun, 2017-05-14 04:13
As part of daily operational work, one of the typical requests we DBAs get is to convert a regular (heap) table into a partitioned table. This can be achieved either offline or online. This blog demonstrates some of the pre-12c methods and the enhancements in Oracle 12c R1 and R2.

I remember that when I had such requests in the past, I used the following offline/online methods, whichever best fit my application needs.

The offline method involves the following action sequence:
  1. Create empty interim partitioned table, indexes and etc
  2. Stop the application services if the non-partitioned table involved in any operations
  3. Migrate the data from the non-partitioned table to partitioned table
  4. Swap the table names
  5. Drop the non-partitioned table
  6. Compile any invalid package/procedure/functions/triggers
  7. Gather table stats
Note: If any integrity references or dependencies exist, the above procedure differs slightly, with a couple of additional actions. The downside of this workaround is the service interruption during the transformation.

To avoid any service interruption, Oracle provides the redefinition feature to perform the action online, without impacting ongoing DML operations on the table. The redefinition option involves the following action sequence (a minimal sketch follows the list):
  1.  Validate if the table can use redefinition feature or not (DBMS_REDEFINITION.CAN_REDEF_TABLE procedure)
  2.  Create interim partition table and all indexes
  3. Start the online redefinition process (DBMS_REDEFINITION.START_REDEF_TABLE procedure)
  4. Copy dependent objects (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure)
  5. Perform data synchronization (DBMS_REDEFINITION.SYNC_INTERIM_TABLE procedure)
  6.  Stop the online redefinition process (DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure)
  7. Swap the table names
  8. Compile any invalid package/procedure/functions/triggers
  9. Gather table stats
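A minimal sketch of the redefinition calls, assuming a schema SCOTT, a heap table SALES, and a pre-created interim partitioned table SALES_INTERIM (all names hypothetical):

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- validate that the table can be redefined (raises an error if it cannot)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'SALES');
  -- start the online redefinition process
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'SALES', 'SALES_INTERIM');
  -- copy dependent objects (indexes, triggers, constraints, grants)
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'SALES', 'SALES_INTERIM',
      num_errors => num_errors);
  -- synchronize the interim table with DML that ran meanwhile
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'SALES', 'SALES_INTERIM');
  -- finish: this step also swaps the table definitions
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'SALES', 'SALES_INTERIM');
END;
/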
However, this sort of action is simplified in Oracle 12c R1 and made even easier in R2. The following demonstrates the 12c R1 and R2 methods.

12cR1 EXCHANGE PARTITION
With the EXCHANGE PARTITION feature, data can be quickly loaded from a non-partitioned table into a partitioned table:

Once you have the partitioned table, use the following example to exchange the data of the heap table into the partitioned table. In this example, the existing data is loaded into a single partition of the partitioned table.

ALTER TABLE sales EXCHANGE PARTITION p1 WITH TABLE non_sales_part;

12cR2 MODIFY 
With the 12cR2 ALTER TABLE ... MODIFY option, a non-partitioned table can easily be transformed into a partitioned table, either offline or online. The examples below demonstrate interval partitioning:

Offline procedure:
ALTER TABLE sales MODIFY
PARTITION BY RANGE (column_name) INTERVAL (1)
(partition p1 values less than (100),
 partition p2 values less than (1000));

Online procedure:
ALTER TABLE sales MODIFY
PARTITION BY RANGE (column_name) INTERVAL (1)
(partition p1 values less than (100),
 partition p2 values less than (1000)) ONLINE UPDATE INDEXES (index1, index2 LOCAL);


References:
https://uhesse.com/2010/02/15/partitioning-a-table-online-with-dbms_redefinition/
https://docs.oracle.com/database/122/VLDBG/evolve-nopartition-table.htm#VLDBG-GUID-5FDB7D59-DD05-40E4-8AB4-AF82EA0D0FE5
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:869096700346654484
https://oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition




Oracle Private Cloud Appliance (PCA) - when and why?

Sun, 2017-04-30 13:44

What has become critical in today's competitive business is the ability to fulfill sudden and unpredictable demands as they arise. This requires data center agility, rapid deployments and cloud-ready solutions. To succeed in today's modern business, companies must be ready to deploy innovative applications and quickly adopt changes in the market.

Oracle Private Cloud Appliance (PCA) is an integrated, 'wire once' converged system designed for fast cloud and rapid application deployments in the data center. PCA is a one-stop system for all your applications, where mixed operating system workloads (Linux, Solaris, RHEL and Windows) can be consolidated onto a single machine.

It has been observed of late, especially here in the GCC, that more and more organizations are moving towards PCA adoption. Hence, I thought of writing a blog explaining the prime features and functionality of PCA. Once I get some hands-on experience (in the very near future), I would love to write about some more advanced PCA concepts and how organizations really benefit from it.


Here are the key features of PCA:
  • Engineered system that comes fully prebuilt and preconfigured
  • Cost-effective solution for most Oracle and non-Oracle workloads
  • Automated installation and configuration software controller
  • Prebuilt OVM templates to speed up Oracle deployments
  • Single-button DR solutions through OEM
  • Pay-only-for-what-you-use policy
  • Flexibility to use Oracle storage or any pre-existing storage
  • PCA certifies all Oracle software that is certified to run on OVM
  • Deployment of PCA at the data center is very straightforward and simple; the system will be ready within minutes
  • You can add virtual machines (OVM) either with some basic configuration or using the standard OVM templates
  • No additional software licenses are required on PCA
  • Greatly reduces the time required for deployments; a new deployment can be achieved in hours rather than days, in contrast to traditional infrastructure
  • Easy integration into existing data center models
  • OVM included at no additional cost

The typical PCA architecture comprises the following components:
A pair of management servers is installed in an active/standby configuration for HA. The master management node runs the full set of services, whereas the standby node runs only a subset of services.

The compute nodes (Oracle Server X series) constitute the virtualization platform and provide the processing power and memory capacity for the virtual servers they host. The entire functionality is orchestrated by the (master) management node.




References:
http://www.oracle.com/us/products/servers/private-cloud-appliance/oracle-private-cloud-appliance-ds-2595915.pdf
http://www.oracle.com/us/products/servers/private-cloud-appliance/oracle-private-cloud-appliance-faq-2595945.pdf
http://www.oracle.com/us/products/engineered-systems/coalfire-oracle-pca-pci-dss-3671919.pdf

http://docs.oracle.com/cd/E71897_01/
 

Stay tuned for more updates on this.

 

How to stabilize, improve and tune Oracle EBS Payroll and Retropay process

Sat, 2017-04-15 05:45
I visited a few customers of late to review performance issues pertaining to their Oracle EBS Payroll and RetroPay processes. I am not sure how many are aware of the tools Oracle provides to analyze and improve Oracle EBS modules, including Payroll and RetroPay. To get proactive with Oracle EBS, refer to the following note:

  • Get Proactive with Oracle E-Business Suite - Product Support Analyzer Index (Doc ID 1545562.1)

I must say, after running the analyzers (RetroPay and Payroll) and implementing the suggestions, a significant performance improvement was achieved without making any change to the queries. I would strongly recommend running the analyzers on the different Oracle EBS modules to get proactive and achieve performance improvements and stability. The Payroll analyzer report lays out the findings and recommendations.


 



A few good MOS notes to stabilize, improve and tune the RetroPay and Payroll processes in an Oracle EBS environment:

  • EBS Payroll RetroPay Analyzer (Doc ID 1512437.1)
  • EBS Database Parameter Settings Analyzer (Doc ID 1953468.1)
  • EBS Payroll Analyzer (Doc ID 1631780.1)
  • EBS HRMS Payroll - RetroPay Advisor (Doc ID 1482827.1)
  • RetroPay Analyzer Tool FAQ (Doc ID 1568129.1)

12cR1 RMAN Restore/Duplicate from ASM to Non ASM takes a longer time waiting for the ASMB process

Tue, 2017-03-14 08:51
Yet another exciting journey with Oracle bugs and challenges. Here is the story for you.

One of our recent successful migrations was of a single-instance Oracle EBS 12cR1 database to an Oracle SuperCluster M7, as a RAC database with 2 instances on the same DB version (12.1.0.2). Subsequently, the customer wanted to run through EBS cloning and set up an Oracle Active Data Guard configuration.

The target systems are not SuperCluster. The requirement for the clone and the Oracle Data Guard setup was to configure a single-instance database on a filesystem (non-ASM). After initiating the cloning procedure using the RMAN DUPLICATE TARGET DATABASE method, we noticed that RMAN was taking a significant time to restore (ship) the data files to the remote server. Also, the following warning messages appeared in the alert.log:



ASMB started with pid=63, OS id=18085
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:24 2017
ASMB started with pid=63, OS id=18087
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:27 2017
ASMB started with pid=63, OS id=18089
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
 

The situation raised a couple of concerns in our minds:
  1. Why is the restore so slow through RMAN? (There was no network latency, and the DB files were not particularly large.)
  2. Why is Oracle looking for an ASM instance in a non-cluster home? (Not even a standard Grid home.)
After some initial investigation, we came across the following MOS Docs:
  • '12c RMAN Operations from ASM To Non-ASM Slow (Doc ID 2081537.1)'. 
  • WARNING: failed to start ASMB after RAC Database on ASM converted to Single Instance Non-ASM Database (Doc ID 2138520.1)
According to the above MOS Docs, this is expected behavior due to unpublished BUG 19503821: RMAN CATALOG EXTREMELY SLOW WHEN MIGRATING DATABASE FROM ASM TO FILE SYSTEM.

You need to apply patch 19503821 to overcome the bug.


If you have a similar requirement, make sure you apply the patch in your environment before you proceed with the restore/duplicate procedure.

-- Excerpt from the above notes:

APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.1.0.1 to 12.1.0.2 [Release 12.1]
 Information in this document applies to any platform.
 
SYMPTOMS:

1*. RAC Database with ASM has been converted or restored to Standalone Single Instance Non-ASM Database.
2*. From the RDBMS alert.log, it is showing continuous following messages.

3*.RMAN Restore/Duplicate from ASM to Non ASM in 12.1 take a longer time waiting for the ASMB process.
4*.Any RMAN command at the mount state which involves Non ASM location can take more time.

 SOLUTION:


Apply the patch 19503821, if not available for your version/OS then please log a SR with the support to get the patch for your version.

12cR2 new features for Developers and DBAs - Here is my pick (Part 2)

Sat, 2017-03-04 10:29
In Part 1, I outlined a few (my pick of) 12cR2 new features useful for Developers and DBAs. In Part 2, I am going to discuss a few more.
Read/Write and Read-Only Instances
Read-write and read-only database instances of the same primary database can coexist in an Oracle Flex Cluster.
Advanced Index Compression
Prior to this release, the only form of advanced index compression was low compression. Now you can also specify high compression. High compression provides even more space savings than low compression.
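For illustration, the 12cR2 syntax for high compression looks like this (table and index names are hypothetical):

CREATE INDEX sales_ix ON sales (cust_id, time_id) COMPRESS ADVANCED HIGH;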
PDBs Enhancements
  • I/O Rate Limits for PDBs
  • Different character sets of PDBs in a CDB
  • PDB refresh to periodically propagate changes from a source PDB to its cloned copy
  • CONTAINERS hint: When a CONTAINERS() query is submitted, recursive SQL statements are generated and executed in each PDB. Hints can be passed to these recursive SQL statements by using the CONTAINERS statement-level hint.
  • Cloning PDB no longer to be in R/W mode : Cloning of a pluggable database (PDB) resolves the issue of setting the source system to read-only mode before creating a full or snapshot clone of a PDB.
  • Near Zero Downtime PDB Relocation:This new feature significantly reduces downtime by leveraging the clone functionality to relocate a pluggable database (PDB) from one multitenant container database (CDB) to another CDB. The source PDB is still open and fully functional while the actual cloning operation is taking place.
  • Proxy PDB: A proxy pluggable database (PDB) provides fully functional access to another PDB in a remote multitenant container database (CDB). This feature enables you to build location-transparent applications that can aggregate data from multiple sources that are in the same data center or distributed across data centers.
Oracle Data Pump Parallel Export of Metadata: The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, is extended to include metadata export operations. The performance of Oracle Data Pump export jobs is improved by enabling the use of multiple processes working in parallel to export metadata.
Renaming Data Files During Import
Oracle RAC :
  • Server Weight-Based Node Eviction :Server weight-based node eviction acts as a tie-breaker mechanism in situations where Oracle Clusterware needs to evict a particular node or a group of nodes from a cluster, in which all nodes represent an equal choice for eviction. In such cases, the server weight-based node eviction mechanism helps to identify the node or the group of nodes to be evicted based on additional information about the load on those servers. Two principle mechanisms, a system inherent automatic mechanism and a user input-based mechanism exist to provide respective guidance.
  • Load-Aware Resource Placement : Load-aware resource placement prevents overloading a server with more applications than the server is capable of running. The metrics used to determine whether an application can be started on a given server, either as part of the startup or as a result of a failover, are based on the anticipated resource consumption of the application as well as the capacity of the server in terms of CPU and memory.
Enhanced Rapid Home Provisioning and Patch Management
TDE Tablespace Live Conversion: You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. A TDE tablespace can be easily deployed, performing the initial encryption that migrates to an encrypted tablespace with zero downtime. This feature also enables automated deep rotation of data encryption keys used by TDE tablespace encryption in the background with zero downtime.
Fully Encrypted Database: Transparent Data Encryption (TDE) tablespace encryption is applied to database internals including SYSTEM, SYSAUX, and UNDO.
TDE Tablespace Offline Conversion: This release introduces new SQL commands to encrypt tablespace files in place with no storage overhead. You can do this on multiple instances across multiple cores. Using this feature requires downtime, because you must take the tablespace temporarily offline. With Data Guard configurations, you can either encrypt the physical standby first and switchover, or encrypt the primary database, one tablespace at a time.
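As a sketch, the new offline conversion commands look like the following (tablespace name hypothetical; the tablespace must be taken offline first):

SQL> alter tablespace users offline normal;
SQL> alter tablespace users encryption offline encrypt;
SQL> alter tablespace users online;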

12cR2 new features for Developers and DBAs - Here is my pick (Part 1)

Thu, 2017-03-02 14:07
Since the announcement of 12cR2 on-premises availability, the Oracle community has become energetic, busy tweeting and blogging about the new features and demonstrating installations & upgrades. Hence, I have decided to pick my favorite 12cR2 new features for Developers and DBAs. Here is the high-level summary, until I write a detailed post for each feature (excerpts from the Oracle 12cR2 new features documentation).
Command history for SQL*Plus: Pre-12cR2, this could only be achieved through a workaround; now, the history command does the magic for you.
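A quick sketch of the feature:

SQL> set history on
SQL> select sysdate from dual;
SQL> history
  1  select sysdate from dual;
SQL> history 1 run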
Materialized Views: Real-Time Materialized Views: Materialized views can be used for query rewrite even if they are not fully synchronized with the base tables and are considered stale. Using materialized view logs for delta computation together with the stale materialized view, the database can compute the query and return correct results in real time.
Materialized views can therefore be used for query rewrite all of the time, with accurate results computed in real time, giving optimized and fast query processing. This alleviates the stringent requirement of always having fresh materialized views for best performance.
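A minimal sketch (table, column, and MV names are illustrative; a fast-refreshable aggregate MV needs a materialized view log covering the referenced columns):

    SQL> CREATE MATERIALIZED VIEW LOG ON sales
           WITH ROWID, SEQUENCE (prod_id, amount)
           INCLUDING NEW VALUES;

    SQL> CREATE MATERIALIZED VIEW sales_mv
           REFRESH FAST ON DEMAND
           ENABLE QUERY REWRITE
           ENABLE ON QUERY COMPUTATION
         AS SELECT prod_id, SUM(amount) AS total_amt,
                   COUNT(amount) AS cnt_amt, COUNT(*) AS cnt
            FROM sales GROUP BY prod_id;

ENABLE ON QUERY COMPUTATION is the new clause; it lets the optimizer combine the stale MV with its log to return fresh results.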
Materialized Views: Statement-Level Refresh: In addition to ON COMMIT and ON DEMAND refresh, the materialized join views can be refreshed when a DML operation takes place, without the need to commit such a transaction. This is predominantly relevant for star schema deployments.
The new ON STATEMENT refresh capability provides more flexibility to the application developers to take advantage of the materialized view rewrite, especially for complex transactions involving multiple DML statements. It offers built-in refresh capabilities that can replace customer-written trigger-based solutions, simplifying an application while offering higher performance.
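A simplified sketch of an ON STATEMENT join MV (names are illustrative; Oracle imposes several restrictions, e.g. the defining query must be a fast-refreshable join, so the base table ROWIDs are carried along here):

    SQL> CREATE MATERIALIZED VIEW order_items_mv
           REFRESH FAST ON STATEMENT
         AS SELECT o.rowid AS o_rid, i.rowid AS i_rid,
                   o.order_id, o.order_date, i.product_id
            FROM orders o JOIN order_items i
              ON o.order_id = i.order_id;

No materialized view logs are needed for ON STATEMENT refresh, which is one of its attractions over ON COMMIT.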
Oracle Data Guard Database Compare: This new tool compares data blocks stored in an Oracle Data Guard primary database and its physical standby databases. Use this tool to find disk errors (such as lost writes) that cannot be detected by other tools like the DBVERIFY utility.
Subset Standby: A subset standby enables users of Oracle Multitenant to designate a subset of the pluggable databases (PDBs) in a multitenant container database (CDB) for replication to a standby database. 
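The subset is controlled by a new initialization parameter on the standby; a sketch (PDB names are illustrative):

    SQL> ALTER SYSTEM SET ENABLED_PDBS_ON_STANDBY = 'PDB1', 'PDB3'
           SCOPE = SPFILE;

PDBs not matched by the list are excluded from replication to that standby.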
Automatically Synchronize Password Files in Oracle Data Guard Configurations: This feature automatically synchronizes password files across Oracle Data Guard configurations. When the passwords of SYS, SYSDG, and so on, are changed, the password file at the primary database is updated and then the changes are propagated to all standby databases in the configuration.
Preserving Application Connections to An Active Data Guard Standby During Role Changes: Currently, when a role change occurs and an Active Data Guard standby becomes the primary, all read-only user connections are disconnected and must reconnect, losing their state information. This feature enables a role change to occur without disconnecting the read-only user connections. Instead, the read-only user connections experience a pause while the state of the standby database is changed to primary. Read-only user connections that use a service designed to run in both the primary and physical standby roles are maintained; connections using a service that runs only in the physical standby role are still disconnected.
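The key prerequisite is a service defined to run in both roles; a sketch with illustrative database and service names:

    srvctl add service -db proddb -service ro_app \
      -role "PRIMARY,PHYSICAL_STANDBY"

Read-only sessions connected through such a service can then ride through the role change with a pause instead of a disconnect.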
Oracle Data Guard for Data Warehouses: The use of NOLOGGING for direct loads on a primary database has always been difficult to correct on an associated standby database. On a physical standby database the data blocks were marked unrecoverable, and any SQL operation that tried to read them would return an error; for a logical standby database, SQL Apply would stop upon encountering the invalidation redo.
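If my reading of the 12cR2 documentation is right, the standby now records the nonlogged block ranges, and they can be repaired afterwards with a new RMAN command along the lines of:

    RMAN> RECOVER DATABASE NONLOGGED BLOCK;

which fetches the affected blocks from the primary rather than leaving them unrecoverable.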
Rolling Back Redefinition: There is a new ROLLBACK parameter for the FINISH_REDEF_TABLE procedure that tracks DML on a newly redefined table so that changes can be easily synchronized with the original table using the SYNC_INTERIM_TABLE procedure.
The new V$ONLINE_REDEF view displays runtime information related to the current redefinition procedure being executed based on a redefinition session identifier.
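A sketch of how this might be used (schema and table names are illustrative; the parameter and procedure names follow the excerpt above, and the exact signatures may differ):

    SQL> BEGIN
           DBMS_REDEFINITION.FINISH_REDEF_TABLE(
             uname      => 'SCOTT',
             orig_table => 'EMP',
             int_table  => 'EMP_INT',
             rollback   => TRUE);   -- track DML for a possible rollback
         END;
         /

    -- keep the original table in sync while deciding
    SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'EMP', 'EMP_INT');

    -- monitor a running redefinition session
    SQL> SELECT * FROM V$ONLINE_REDEF;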
Online Conversion of a Nonpartitioned Table to a Partitioned Table: Nonpartitioned tables can be converted to partitioned tables online. Indexes are maintained as part of this operation and can be partitioned as well. The conversion has no impact on the ongoing DML operations.
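For example (table, column, and partition names are illustrative):

    SQL> ALTER TABLE sales MODIFY
           PARTITION BY RANGE (sale_date)
           ( PARTITION p2016 VALUES LESS THAN (DATE '2017-01-01'),
             PARTITION pmax  VALUES LESS THAN (MAXVALUE) )
           ONLINE UPDATE INDEXES;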
Online SPLIT Partition and Subpartition: The partition maintenance operations SPLIT PARTITION and SPLIT SUBPARTITION can now be executed as online operations for heap-organized tables, allowing concurrent DML operations while the partition maintenance operation is ongoing.
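A quick sketch (names are illustrative):

    SQL> ALTER TABLE sales SPLIT PARTITION pmax
           AT (DATE '2018-01-01')
           INTO (PARTITION p2017, PARTITION pmax) ONLINE;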
Online Table Move: Nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.
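For example (table and tablespace names are illustrative):

    SQL> ALTER TABLE customers MOVE ONLINE TABLESPACE users;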
Oracle Database Sharding: Sharding in Oracle Database 12c Release 2 (12.2) is an architecture suitable for online transaction processing (OLTP) applications in which data is horizontally partitioned across multiple discrete Oracle databases, called shards, which share no hardware or software. The collection of shards is presented to an application as a single logical Oracle database.
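Once the shard catalog and shards are configured (through GDSCTL, not shown here), tables are declared sharded; a sketch with illustrative names:

    SQL> CREATE SHARDED TABLE customers
         ( cust_id  NUMBER NOT NULL,
           name     VARCHAR2(100),
           CONSTRAINT customers_pk PRIMARY KEY (cust_id) )
         PARTITION BY CONSISTENT HASH (cust_id)
         PARTITIONS AUTO
         TABLESPACE SET ts_set_1;

Rows are then spread across the shards by a consistent hash on cust_id, while the application continues to see one logical database.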
Stay tuned for Part 2..

Oracle Enterprise Manager 13c R2 configuration on Win 2008 R2 server stops at 78%

Sun, 2017-01-08 05:13
Oracle Enterprise Manager (OEM) 13cR2 configuration on a Windows 2008 R2 server was stopping at 78% completion while performing the BI Publisher configuration. Apparently, the problem exists pre-13cR2 as well.

All OMS components (WebTier, Oracle Management Server, JVMD Engine) are stopped and restarted during the BI Publisher configuration. Unfortunately, the Windows service for the OMS was taking too long to start the WebTier (HTTP) component, and the installation stalled at 78% and didn't move forward. Initially, I started looking at WebTier startup issues; in the process I disabled the firewall and excluded the installation directory from the antivirus on the Windows server, but the result remained the same.

I cleaned up the previous installation and started the OEM 13cR2 installation over on the server, but this time I didn't check the BI Publisher configuration option, as I wanted to exclude the BI Publisher configuration and complete the OEM installation without any issues. Despite the fact that I didn't check the option, OEM started configuring BI Publisher and stopped at exactly 78%; the issue remained.

The error messages in the sysman and other OMS logs didn't provide any useful hints; in fact, they were misleading and took me in the wrong direction.

I then came across MOS note 1943820.1, and after applying the solution, the OEM configuration completed successfully.

Here is an excerpt from the MOS note:

On some occasions, httpd.exe will fail to start if the Microsoft Visual C++ Redistributable 64-bit package is missing or damaged.
It may report the error above, or give <SEVERE> <OHS-0> <Failed to start the server ohs_1>, with 0 bytes of details in the OHS log.
Install the Microsoft Visual C++ Redistributable Package (x64) as follows.

1. You can obtain this file at:

http://www.microsoft.com/en-us/download/details.aspx?id=2092

2. Download the Microsoft Visual C++ Redistributable Package (x64)

3. You should now have a file called vcredist_x64.exe. Run the installation.

4. Try starting OHS again.

Note:
I now understand why Oracle still performs the BI Publisher configuration even when the option is not selected: BI Publisher is configured but left disabled, so that you can easily enable it in the future.

References

OHS 12c Fails To Start On Windows Server 2008 X64, with no detailed errors. (Doc ID 1943820.1)

https://community.oracle.com/thread/3889503


Exadata migration

Sun, 2016-11-27 12:21
Had a wonderful Sangam16 conference in India, and received much applause for the two presentations I delivered: Oracle 12c Multitenancy and Exadata Migration Best Practices.

After the very short trip to India, life became business as usual again, and busy. I was fully occupied with multiple assignments: a two-day Oracle EBS database health check assessment at a client site, GI/RDBMS/PSU deployments on an Oracle Sun SuperCluster M7, Exadata configuration preparation, and the migration of 9 databases to Exadata over the weekend.

Over the last weekend, my colleague and I were pretty busy migrating 9 databases to Exadata. There were a few challenges, and we learned a few new things too. I would like to discuss a couple of interesting scenarios:

One of the databases had corrupted blocks, and expdp kept failing with ORA-01555: snapshot too old: rollback segment number with name "" too small. Our initial thoughts were to tune undo_retention, increase the undo tablespace, set an event, and so on. Unfortunately, none of those workarounds helped. We then came across a MOS note explaining that an ORA-01555 with an empty rollback segment name ("") is most likely due to corrupted blocks. After applying the solution explained in the note, we managed to export/import the database successfully. My colleague has blogged about the scenario at http://bit.ly/2fBOxm7
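For anyone hitting the same symptom, one quick way to confirm block corruption (not necessarily what the note prescribes) is:

    RMAN> VALIDATE DATABASE;
    SQL>  SELECT file#, block#, blocks, corruption_type
          FROM   v$database_block_corruption;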

Another database was running on Windows x86 64-bit and was full of LOBs; hence the Data Pump export (expdp) took significant time, as an NFS filesystem was used to store the dump file. We then thought of doing a direct RMAN restore from source to target, since Windows x86 64-bit and Linux x86 64-bit share the same (little) endian format. According to one of the MOS notes, we could also set up Data Guard and do an RMAN restore; however, RMAN recovery would fail with an ORA-600, as cross-platform redo conversion is not possible. We are now planning to take a cold (consistent) backup and do a complete restore with the RESETLOGS option.
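If we go that route, the restore on the Exadata side would look roughly like this (paths are illustrative; RECOVER ... NOREDO because a consistent backup needs no redo):

    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE CONTROLFILE FROM '/backup/prod_ctl.bkp';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> CATALOG START WITH '/backup/';
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE NOREDO;
    RMAN> ALTER DATABASE OPEN RESETLOGS;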

Stay tuned for more updates on this.


