Feed aggregator

Accepting SQL profiles

Tom Kyte - Mon, 2017-04-24 22:46
Hi - we are doing some data conversion of our database associated with a vendor product. This means migrating from one version of the vendor schema to another, so remapping the data. During a performance run, one of the SQLs was taking longer to run....
Categories: DBA Blogs

Correctly identifying Dynamic Sampling queries run by Optimizer

Tom Kyte - Mon, 2017-04-24 22:46
It is clear that dynamic sampling queries run by the optimizer contain the /* DS_SVC */ comment (when traced), e.g. SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel...
Categories: DBA Blogs

Pivot and null values

Tom Kyte - Mon, 2017-04-24 22:46
Hi team, I have a table test having columns name, id, language.... name id language a 1 eng b 2 eng c 3 fer d 4 (null) select * from TEST pivot (min(id) for language in('eng' as "english",'fer' as "french",)) ...
Categories: DBA Blogs

RANGE Partition in DATE column DD/MM/YYYY HH24:MM:SS

Tom Kyte - Mon, 2017-04-24 22:46
Hi team, I need to partition the TEST_PARTITIONS table on the basis of end_date using RANGE INTERVAL partitioning. create table TEST_PARTITIONS partition by range(end_date) ( partition p2010 values less than (to_date('01-Jan-2011','dd-mon-yyy...
Categories: DBA Blogs

Benefits of Analytics for Non-Profit Organizations

Nilesh Jethwa - Mon, 2017-04-24 14:49

Analytics is a process that uses tools to collect those ever-increasing volumes of diverse types of data from multiple sources, sort them out at record speeds, analyze them, and use them to gain new insights. This concept has existed for decades, and it has been recreated with modern and more powerful tools to consolidate today's data upsurge.

Non-profits run the gamut in terms of their use of analytics. Some are only getting started, utilizing business intelligence and dashboard software for their budgeting and forecasting procedures, while others have gone far along the continuum and are considering effective ways to bring in more unstructured data to further enhance their existing analytics models.
 

Read more at http://www.infocaptor.com/dashboard/non-profit-dashboards-benefits-of-analytics-for-non-profit

Java Web Start Now Available for EBS 12.1 and 12.2

Steven Chan - Mon, 2017-04-24 12:20

Java Web Start (JWS) is now available for Oracle E-Business Suite 12.1 and 12.2:

What is Java Web Start?

Java Web Start launches E-Business Suite Java-based functionality as Java Web Start applications instead of as applets.  Java Web Start is part of the Java Runtime Environment (JRE).

Does EBS use Java on desktop clients?

Yes.  The E-Business Suite requires Oracle Forms.  Oracle Forms requires Java. 

Other EBS products also have functionality that requires Java.

What is the new approach with Java Web Start?

It's not technically "new" (it is a mature Java technology originally released in 2004), but we're using it for the first time with the E-Business Suite.  This approach launches EBS Forms-based screens and other functionality as Java Web Start applications instead of as applets.

What prerequisites are needed for Java Web Start?

  Oracle E-Business Suite Release    Minimum JRE Release
  12.2                               JRE 8 Update 121 b33
  12.1.3                             JRE 8 Update 121 b33

A small number of server-side patches for Forms and EBS are also needed.

Why is this important?

Until now, E-Business Suite's Java-based content required a browser that supports Netscape Plug-in Application Programming Interface (NPAPI) plug-ins.

Some browsers are phasing out NPAPI plug-in support.  Some browsers were released without NPAPI plug-in support.  This prevents the Java plug-in from working.

With the release of Java Web Start, E-Business Suite 12.1 and 12.2 users can launch Java-based content (e.g. Oracle Forms) from browsers that do not support Java plug-ins via NPAPI.  Java Web Start in EBS works with:

  • Microsoft Internet Explorer
  • Microsoft Edge
  • Firefox Rapid Release (32-bit and 64-bit)
  • Firefox Extended Support Release (32-bit and 64-bit)
  • Google Chrome

How does the technology architecture change?

Java Web Start changes the way that Java runs on end-users' computers, but this technical change is generally invisible to end-users.

Java Web Start applications are launched from browsers using the Java Network Launching Protocol (JNLP).

E-Business Suite Java Web Start architecture diagram
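
For illustration, a JNLP descriptor is a small XML file that tells the JRE what to download and how to launch it. The sketch below is generic and hypothetical: the codebase, jar name, main class, and arguments are placeholders, not the actual descriptor that EBS generates.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Hypothetical JNLP descriptor: all names and URLs are placeholders -->
  <jnlp spec="1.0+" codebase="https://ebs.example.com:4443/forms/java" href="launcher.jnlp">
    <information>
      <title>Oracle E-Business Suite Forms (example)</title>
      <vendor>Example Corp</vendor>
    </information>
    <security>
      <all-permissions/>
    </security>
    <resources>
      <j2se version="1.8+"/>
      <jar href="forms_client.jar" main="true"/>
    </resources>
    <application-desc main-class="example.FormsLauncher">
      <argument>serverArgs=...</argument>
    </application-desc>
  </jnlp>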

Will the end-user's experience change?

Generally not. We have worked hard to ensure that your end-users' experience with Java Web Start applications is as similar as possible to that of applets run via the Java browser plug-in.  The differences between the Java Plug-in and Java Web Start are expected to be almost invisible to end-users.

Will E-Business Suite still require Java in the future?

Yes.  It is expected that our ongoing use of Oracle Forms for high-volume professional users of the E-Business Suite means that EBS will continue to require Java.  We replicate, simplify, or migrate selected Forms-based flows to OA Framework-based (i.e. web-based HTML) equivalents with every EBS update, but Oracle Forms is expected to continue to be part of the E-Business Suite technology stack for the foreseeable future. 

Does the E-Business Suite have other Java applet dependencies?

Yes.  In addition to Oracle Forms, various E-Business Suite products have functionality that runs as Java applets.  These Java applets require browsers that offer plug-in support.  Products with applet-based functionality include:

  • Oracle General Ledger (GL): Account Hierarchy Manager
  • Oracle Customers Online (IMC): Party Relationships
  • Oracle Call Center Technology (CCT)
  • Oracle Sourcing (PON): Auction Monitor
  • Oracle Installed Base (CSI): Visualizer
  • Oracle Process Manufacturing (OPM): Recipe Designer
  • Oracle Advanced Supply Chain Planning (MSC): Plan Editor (PS/SNO)
  • Workflow (WF): Status Diagram, Notification Signing with Digital Signatures
  • Scripting (IES): Script Author

What is the roadmap for browser support for plug-ins?

Plug-in support goes by various names depending on the browser and vendor, for example NPAPI plug-ins and Browser Helper Objects (BHOs).

This article will simply use the term "plug-in support" to refer to all of these.

Some browsers are phasing out plug-in support. Some browsers were never released with plug-in support.

Some organizations may wish to use browsers that do not offer plugin support.  The Java Web Start approach works with all browsers, regardless of whether they have plugin support. 

What is the roadmap for Java's support for plug-ins?

The Java team recently published their plans for removing the Java browser plug-in in a future version of Java. The announcement states:

Oracle plans to deprecate the Java browser plugin in JDK 9. This technology will be removed from the Oracle JDK and JRE in a future Java SE release.

What does "deprecate" mean?

In this context, "deprecate" means that the Java Plug-in will still be present in JRE 9, but it is planned for removal in a later Java SE release.

In other words, JRE 9 will include the Java Browser Plug-in and Java Web Start.  Users will still be able to run Java-based applications using the Java Plug-in and Java Web Start in JRE 9.

What does this mean for E-Business Suite users running the Java Plug-in with JRE 9?

The release of Java 9 is not expected to affect E-Business Suite users.

JRE 9 is expected to continue to work with the E-Business Suite in browsers that support the Java Browser Plug-in via the NPAPI protocol.

JRE 9 is expected to work with the E-Business Suite in browsers that support Java Web Start.

What browsers are expected to support the JRE 9 plug-in?

Internet Explorer, Firefox ESR 32-bit, and Safari are expected to continue to support NPAPI -- and, therefore, Java and Forms. 

Firefox Rapid Release, Firefox ESR 64-bit, Google Chrome, and Microsoft Edge do not support NPAPI, so Java-based apps cannot run in those browsers using the Java Plug-in.  EBS users can run Java-based content using Java Web Start with JRE 9.

What are the timelines for browsers' plugin support?

Individual browser vendors have been updating their plans regularly.  Here's a snapshot of what some browser vendors have stated as of today:

Microsoft Internet Explorer (IE)

Microsoft has indicated that they intend to continue to offer plug-in support in IE.

Microsoft Edge

Microsoft Edge was released in Windows 10 without Browser Helper Object (BHO, aka. plugin) support.  Microsoft has no plans to add plugin support to Edge.

Mozilla Firefox Extended Support Release (ESR)

Mozilla indicated in early 2016 that Firefox ESR 52 32-bit will be the last version to offer NPAPI (and JRE) support.  Firefox ESR 52 32-bit was released in March 2017 and will be supported until May 2018. 

Mozilla removed NPAPI support from Firefox ESR 52 64-bit in March 2017.  

Mozilla Firefox Rapid Release

Mozilla removed NPAPI support from the Firefox 52 Rapid Release version in March 2017. 

Apple Safari for macOS

Safari offers Internet plug-in support for macOS users.  Apple has not made any statements about deprecating plugin support for macOS users.

Google Chrome for Windows

Chrome offered support for plug-ins until version 45, released in September 2015.  Google removed NPAPI support in later Chrome releases.

Will I need to change browsers for EBS 12.1 or 12.2?

Not generally, but it depends on your choice of browsers and whether you wish to use Java Plug-in or Java Web Start.

Here's the compatibility matrix for EBS 12.1 and 12.2 certified combinations:

  Browser                                    Java Plug-in   Java Web Start
  Microsoft Internet Explorer                Yes            Yes
  Microsoft Edge                             -              Yes
  Firefox Rapid Release 32-bit               -              See Note 1
  Firefox Rapid Release 64-bit               -              See Note 1
  Firefox Extended Support Release 32-bit    Yes            Yes
  Firefox Extended Support Release 64-bit    -              Yes
  Google Chrome                              -              Yes
  Safari on macOS                            Yes            See Note 2

Note 1: Expected to work but not tested.

New personal versions of Firefox on the Rapid Release channel are released roughly every six weeks.  It is impractical for us to certify these new personal Rapid Release versions of Firefox with the Oracle E-Business Suite because a given Firefox release is generally obsolete by the time we complete the certification.

From Firefox 10 onwards, Oracle E-Business Suite is certified only with selected Firefox Extended Support Release versions. Oracle has no current plans to certify new Firefox personal releases on the Rapid Release channel with the E-Business Suite.

Note 2: Not certified.

Apple changed the Gatekeeper permissions in macOS Sierra 10.12.  These changes prevent JNLP execution, making the Java Web Start user experience very challenging.  We are investigating options right now. 

Will Oracle release its own browser for the E-Business Suite?

No.  Long-time Oracle users may remember the Oracle PowerBrowser. The industry has since moved away from software that requires proprietary browsers.  We have no plans to release a browser specifically for E-Business Suite users. 

Will this work on Android or iOS?

No. Neither of these operating systems is compatible with Java. 

E-Business Suite users who need to run Oracle Forms-based content or other Java-based functionality should use Windows or macOS.

Will Java Web Start be mandatory?

Not immediately. It is expected that the use of Java Web Start will be optional at least up to, and including, Java 9, which may be the last Java release to include the JRE browser plugin. 

Will Java Web Start coexist with JRE?

Yes.  You can have a mixed environment where some end-users launch Java Web Start applications, while others use applets via the Java plug-in.  This mixed group of end-users can connect to the same E-Business Suite environment.

EBS system administrators have full server-side control over these choices.

Will this affect EBS customizations?

Maybe. It depends upon which of the following apply to your environment:

  • Scenario 1: You have modified standard EBS screens running in Forms:
    No actions needed. These customizations are expected to work with Java Web Start without any additional changes.
  • Scenario 2: You have built custom Java applets of your own to extend the E-Business Suite:
    These will continue to run with the Java plug-in, but you may wish to update those applets to use Java Web Start.
  • Scenario 3: You have third-party extensions or products that depend upon the Java plug-in:
    These will continue to run with the Java plug-in, but you may wish to contact your third-party vendor for details about their plans for Java Web Start.

Are there any additional licensing costs?

No. Java Web Start is included with EBS licenses and does not introduce any new licensing costs.


Disclaimer

The preceding is intended to outline our general product direction.  It is intended for information purposes only, and may not be incorporated into any contract.  It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

 

Categories: APPS Blogs

Simple Steps to Perform oPatch Maintenance with Ansible

Pythian Group - Mon, 2017-04-24 10:44

Like the Universe, IT growth seems to be infinite: we always have more environments, more servers, more users, more disk usage, and more databases to manage, and it won't stop. In fact, we are pretty sure that this expansion is going to get faster and faster.

We therefore have to adapt to this new, ever-changing IT environment by becoming more productive, in order to manage more and more targets in less time.

How do we achieve this goal? The same way human beings always have, from the earliest days: by using tools, and by making better tools with the tools we have.

1/ The Ansible Tool

 

1.1/ A Word on Ansible

Ansible is an open source IT automation tool that was launched in early 2013 and bought by Red Hat in 2015. The most recent version, 2.3, was released a few days ago.

1.2/ Why Ansible?

Other automation tools also profess to be easy, fast, and able to manage thousands upon thousands of targets, so why Ansible instead of Puppet or Chef? For me, it's because Ansible is agentless and does everything through standard SSH (or Paramiko, which is a Python SSH implementation).

Indeed, 'no agent' really means easy to deploy, no agent to maintain (!), and good security since everything goes through SSH. I am accustomed to working with companies that have tough security policies and challenging approval processes for any kind of installation. With these features, it is much easier to get everything deployed quickly:

  • Is it secure? Yes, it goes through SSH.
  • Anything to install on the targets? No.
  • Do you need root access? No, as long as what I need to do is doable with no root privilege.
  • Can it go through sudo? Yes, no worries.
  • What do you need then? An SSH key deployed on the targets (which also means it is very easy to undo: you just remove that SSH key from the target).

For more information on the differences between Ansible, Puppet and Chef, just perform an online search.  You will find many in-depth comparatives.

2/ Manage oPatch with Ansible

To illustrate how quick and easy it is to use Ansible, I will demonstrate how to update oPatch with Ansible. oPatch is a very good candidate for Ansible as it needs to be frequently updated, exists in every Oracle home and also needs to be current every time you apply a patch (and for those who read my previous blogs, you know that I like to update opatch :))

2.1/ Install Ansible

The best way to install Ansible is to first refer to the official installation documentation.  There you will find the specific commands for your favorite platform (note that the Ansible control machine is not designed to run on Windows).

2.2/ Configure Ansible

To start, Ansible has to know the hosts you want to manage; they are listed in a "hosts" (inventory) file like this:

oracle@control:~/work$ cat hosts_dev
[loadbalancer]
lb01

[database]
db01
db02 ansible_host=192.168.135.101
oracle@control:~/work$

We can split the hosts into groups like [loadbalancer] and [database]. It is also possible that the host you are running Ansible on cannot resolve a target's name; in that case we can use the ansible_host parameter to specify its IP, as I did for the db02 server. In fact, ansible_host defines the host Ansible will connect to, and the name at the start of the line is an alias used when ansible_host is not defined.

Note that I named the hosts file "hosts_dev" in my example. This avoids using the default Ansible hosts file, which makes things more modular. We then have to tell Ansible to use this file instead of the default one, via the ansible.cfg configuration file:

oracle@control:~/work$ cat ansible.cfg
[defaults]
inventory=./hosts_dev
oracle@control:~/work$

Please remember that Ansible uses SSH connectivity, so you'll need to copy the SSH key of your "control" server to your targets. More extensive documentation on the subject can be found online. Here is an example with ssh-copy-id (if you don't know the target user's password, search for authorized_keys and you will find how to exchange an SSH key without it):

  oracle@control:~$ ssh-keygen                          # This will generate your SSH keys

  ... press ENTER at all prompts) ...

  oracle@control:~$ ssh-copy-id oracle@db01
  ...
  Are you sure you want to continue connecting (yes/no)? yes
  ...
  oracle@db01's password:                             # You will be prompted for the target password once
  ...
  Now try logging into the machine, with:   "ssh 'oracle@db01'"
  and check to make sure that only the key(s) you wanted were added.

  oracle@control:~$ ssh oracle@db01                    # Try to connect now
  Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-112-generic x86_64)
  Last login: Thu Apr 20 02:17:24 2017 from control
  oracle@db01:~$                                       # We are now connected with no password
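
Once the key is in place, a quick sanity check (an optional aside, not shown in the transcript above) is to run an ad-hoc command against the inventory group before writing any playbook:

  # Ping every host in the [database] group defined in hosts_dev
  oracle@control:~/work$ ansible database -m ping

  # Or run an arbitrary command on all of them
  oracle@control:~/work$ ansible database -a "uptime"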

 

2.3/ A First Playbook

A playbook is a collection of Ansible tasks used to orchestrate what you want to do. Playbooks are written in YAML (please have a look at the official YAML website).

Let’s start with a first easy playbook that checks if the /etc/oratab file exists on my [database] hosts:

oracle@control:~/work$ cat upgrade_opatch.yml
---
- hosts: database                              # Specify only the hosts contained in the [database] group
  tasks:
  - name: Check if /etc/oratab exists          # A name for the task
    stat:                                      # I will use the stat module to check if /etc/oratab exists
      path: /etc/oratab                        # The file or directory whose presence I want to check
    register: oratab                           # Store the task result in a variable named "oratab"

  - debug:                                     # A debug task to show an error message if oratab does not exist
      msg: "/etc/oratab does not exists"       # The debug message
    when: oratab.stat.exists == false          # The message is printed only when the /etc/oratab file does not exist

oracle@control:~/work$

Let’s run it now (we use ansible-playbook to run a playbook):

oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

PLAY [database] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
ok: [db02] => {
    "changed": false,
    "msg": "/etc/oratab does not exists"
}

PLAY RECAP ********************************************************************************************************************************************************************************************************
db01                       : ok=2    changed=0    unreachable=0    failed=0
db02                       : ok=3    changed=0    unreachable=0    failed=0

oracle@control:~/work$

Since I removed /etc/oratab from db02 on purpose, I received the “/etc/oratab does not exists” error message (as expected).

Before going further, let's add a test to see if unzip exists (we'll need unzip to unzip the opatch zipfile). Put db02's oratab file back where it should be and run the playbook again:

  oracle@control:~/work$ cat upgrade_opatch.yml
  ---
  - hosts: database
    tasks:
    - name: Check if /etc/oratab exists
      stat:
        path: /etc/oratab
      register: oratab

    - debug:
        msg: "/etc/oratab does not exists"
      when: oratab.stat.exists == false

    - name: Check if unzip exists (if not we wont be able to unzip the opatch zipfile)
      shell: "command -v unzip"
      register: unzip_exists

    - debug:
        msg: "unzip cannot be found"
      when: unzip_exists == false
  oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

  PLAY [database] ***************************************************************************************************************************************************************************************************

  TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
  ok: [db02]
  ok: [db01]

  TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
  ok: [db01]
  ok: [db02]

  TASK [debug] ******************************************************************************************************************************************************************************************************
  skipping: [db01]
  skipping: [db02]

  TASK [Check if unzip exists (if not we wont be able to unzip the opatch zipfile)] *********************************************************************************************************************************
  changed: [db02]
  changed: [db01]

  TASK [debug] ******************************************************************************************************************************************************************************************************
  skipping: [db01]
  skipping: [db02]

  PLAY RECAP ********************************************************************************************************************************************************************************************************
  db01                       : ok=3    changed=1    unreachable=0    failed=0
  db02                       : ok=3    changed=1    unreachable=0    failed=0

  oracle@control:~/work$

Please note that I used the built-in shell module to test whether unzip is present.
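
One caveat with this check (my own aside, not part of the playbook above): as written, if unzip really is missing, the shell task itself returns a non-zero code and the play aborts before the debug task runs, and the when condition compares the whole registered result to false instead of its return code. A more defensive sketch could be:

  - name: Check if unzip exists (if not we wont be able to unzip the opatch zipfile)
    shell: "command -v unzip"
    register: unzip_exists
    failed_when: false                         # do not abort the play when unzip is missing

  - debug:
      msg: "unzip cannot be found"
    when: unzip_exists.rc != 0                 # test the return code of the registered result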

2.4/ Upgrade oPatch

To upgrade oPatch, we need to copy the zipfile to the target Oracle home and then unzip it — easy and straightforward. Let’s ask Ansible to do it for us.

First, let’s use the copy module to copy the oPatch zipfile to the target Oracle home:

  - name: Copy the opatch zipfile to the target oracle home
    copy:
      src: p6880880_112000_Linux-x86-64.zip
      dest: /u01/oracle/11204

Next, unzip the zipfile in the target Oracle home. I use the shell module to unzip instead of the unarchive module on purpose; this will trigger a warning during the playbook execution, but I am not a big fan of the unarchive module (an alternative using unarchive is sketched right after this task):

  - name: Upgrade opatch
    shell: unzip -o /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip -d /u01/oracle/11204
    register: unzip
    failed_when: unzip.rc != 0
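
For completeness, here is what an unarchive-based alternative might look like (a sketch only, reusing the same paths as above); it copies and extracts in a single task and avoids the warning:

  - name: Copy and unzip opatch in one step (unarchive alternative)
    unarchive:
      src: p6880880_112000_Linux-x86-64.zip    # zipfile on the control node
      dest: /u01/oracle/11204                  # target oracle home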

Finally, let's clean up the zipfile we copied earlier using the file module (note the keyword state: absent, which removes the file); we do not want to leave any leftovers:

  - name: Cleanup the zipfile from the target home
    file:
      name: /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip
      state: absent

Now review the whole playbook:

oracle@control:~/work$ cat upgrade_opatch.yml
---
- hosts: database
  tasks:
  - name: Check if /etc/oratab exists
    stat:
      path: /etc/oratab
    register: oratab

  - debug:
      msg: "/etc/oratab does not exists"
    when: oratab.stat.exists == false

  - name: Check if unzip exists (if not we wont be able to unzip the opatch zipfile)
    shell: "command -v unzip"
    register: unzip_exists

  - debug:
      msg: "unzip cannot be found"
    when: unzip_exists == false

  - name: Copy the opatch zipfile to the target oracle home
    copy:
      src: p6880880_112000_Linux-x86-64.zip
      dest: /u01/oracle/11204

  - name: Upgrade opatch
    shell: unzip -o /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip -d /u01/oracle/11204
    register: unzip
    failed_when: unzip.rc != 0

  - name: Cleanup the zipfile from the target home
    file:
      name: /u01/oracle/11204/p6880880_112000_Linux-x86-64.zip
      state: absent

oracle@control:~/work$

and execute it:

oracle@control:~/work$ ansible-playbook upgrade_opatch.yml

PLAY [database] ***************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [db02]
ok: [db01]

TASK [Check if /etc/oratab exists] ********************************************************************************************************************************************************************************
ok: [db01]
ok: [db02]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
skipping: [db02]

TASK [Check if unzip exists (if not we wont be able to unzip the opatch zipfile)] *********************************************************************************************************************************
changed: [db02]
changed: [db01]

TASK [debug] ******************************************************************************************************************************************************************************************************
skipping: [db01]
skipping: [db02]

TASK [Copy the opatch zipfile to the target oracle home] **********************************************************************************************************************************************************
changed: [db01]
changed: [db02]

TASK [Upgrade opatch] *********************************************************************************************************************************************************************************************
 [WARNING]: Consider using unarchive module rather than running unzip

changed: [db01]
changed: [db02]

TASK [Cleanup the zipfile from the target home] *******************************************************************************************************************************************************************
changed: [db02]
changed: [db01]

PLAY RECAP ********************************************************************************************************************************************************************************************************
db01                       : ok=6    changed=4    unreachable=0    failed=0
db02                       : ok=6    changed=4    unreachable=0    failed=0

oracle@control:~/work$

We now have a playbook that can update oPatch across all your servers in a blink!

Please note that this is a very basic example, intended to give an overview of how to manage oPatch with Ansible.
Many more features could be implemented here (and are implemented in the code we use at Pythian), such as:

  • Check the list of Oracle homes on each server — there are often many.
  • Check the version of each Oracle home’s oPatch.
  • Manage different oPatch versions: 11, 12, and 13.
  • Use Ansible roles to make the code more modular and reusable (see the sketch below).
  • Upgrade oPatch only when it actually needs to be upgraded, and more…
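
To illustrate the roles idea (a sketch only, not the actual code we use), the tasks above could move into a role and the playbook would shrink to a simple role assignment:

  # Hypothetical layout
  roles/opatch/tasks/main.yml      <- the tasks shown earlier
  roles/opatch/files/p6880880_112000_Linux-x86-64.zip

  # upgrade_opatch.yml
  ---
  - hosts: database
    roles:
      - opatch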

I hope you enjoyed this Ansible overview!

Categories: DBA Blogs

Why Exadata can not go for redundancy in Disk Storage ?

Tom Kyte - Mon, 2017-04-24 04:26
Hi, all MPP systems suffer from re-distribution at run time. Fast data loading is a myth, as you load once but read for as long as you wish. Exadata with storage cells is still constrained by shared disk if we consider a RAC env. Why Oracle can n...
Categories: DBA Blogs

How to remove orphaned breadcrumb ?

Tom Kyte - Mon, 2017-04-24 04:26
Hello Tom, I have deleted a page in an APEX application. That page was built with a breadcrumb, but I forgot to specify that I wanted the breadcrumb to be deleted with the page. The result is I have an orphaned breadcrumb... How can I delete it? ...
Categories: DBA Blogs

ODBC does not support interval data type

Tom Kyte - Mon, 2017-04-24 04:26
Hi, when querying a column of type INTERVAL DAY TO SECOND through the ODBC API, calling SQLBindCol returns an error saying the data type is not supported; when I bind the column as varchar, calling SQLFetch crashes directly. Check some ...
Categories: DBA Blogs

DR during database migration

Tom Kyte - Mon, 2017-04-24 04:26
We currently have a production database 11.2.0.4 (A) and an active physical standby (PSA). We are in the process of developing migration code for this database because it contains vendor-supplied schemas. The target database is 12.1.0.2 (B). When we go to ...
Categories: DBA Blogs

distributed query join local table want to run local sub select first

Tom Kyte - Mon, 2017-04-24 04:26
I have a view (huge data set) on a remote database, and a local table. When I join them and restrict the local table down to one row, it does not run the local part first; it always runs the complete view on the remote side, which takes forever. Tried a couple of different hints, which did not he...
Categories: DBA Blogs

Install oracle RAC 11Gr2

Tom Kyte - Mon, 2017-04-24 04:26
Hello, can you help me install Oracle RAC 11gR2? I give you the steps that I followed. I use Oracle VM VirtualBox and the OS is Oracle Linux 6.5; the memory is 3072Go, swap 3072Go. Create groups: groupadd -g 501 oinstall groupadd -g 502 dba g...
Categories: DBA Blogs

Calculating partial table size

Tom Kyte - Mon, 2017-04-24 04:26
Hello, I need to copy 30% of SOME_TABLE data, which occupies 3TB in total. Is there a way to estimate "actual" size of 30% of table records? I know that it is ~900GB by using simple math, but this may vary due to CLOB datatype and etc... so I w...
Categories: DBA Blogs

12cR2 RMAN> REPAIR

Yann Neuhaus - Sun, 2017-04-23 15:39

Do you know the RMAN Recovery Advisor? It detects problems, and then you run:

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure;

You need to have a failure detected. You can run Health Check if it was not detected automatically (see https://blog.dbi-services.com/oracle-12c-rman-list-failure-does-not-show-any-failure-even-if-there-is-one/). In 12.2 you can run the repair directly, by specifying what you want to repair.
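
As a reminder, a health check can also be triggered manually with a VALIDATE command, for example (a generic sketch, not taken from this demo):

RMAN> validate database;
RMAN> validate check logical database;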

Syntax

There is no online help in RMAN, but you can see which keywords are expected by supplying a wrong one:
RMAN> repair xxx;
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "failure"
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

This is 12.1.0.2, where the only option is REPAIR FAILURE. In 12.2 we have many more:


RMAN> repair xxx
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "database, database root, datafile, failure, pluggable, tablespace, ("
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

When you know what is broken, you can repair it without having to know what to restore and what to recover. You can repair:

  • database: the whole database
  • database root: the CDB$ROOT container, which means all its tablespaces
  • pluggable database: all of the PDB's tablespaces
  • a specific datafile

Repair pluggable database

I corrupt one datafile from PDB01:


RMAN> host "> /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf";
host command complete

And I repair the pluggable database:


RMAN> repair pluggable database PDB01;
 
Starting restore at 23-APR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=203 device type=DISK
Executing: alter database datafile 21 offline
Executing: alter database datafile 22 offline
Executing: alter database datafile 23 offline
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00021 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf
channel ORA_DISK_1: restoring datafile 00022 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_sysaux_d8k2t4wn_.dbf
channel ORA_DISK_1: restoring datafile 00023 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_users_d8kbmy6w_.dbf
channel ORA_DISK_1: reading from backup piece /u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp
channel ORA_DISK_1: piece handle=/u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp tag=B
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 23-APR-17
 
Starting recover at 23-APR-17
using channel ORA_DISK_1
 
starting media recovery
media recovery complete, elapsed time: 00:00:00
 
Executing: alter database datafile 21 online
Executing: alter database datafile 22 online
Executing: alter database datafile 23 online
Finished recover at 23-APR-17

The good thing is that it automatically restores and recovers the datafiles with only one command.
But we see here that all three datafiles have been restored. As I knew that only one datafile was corrupted, it would have been faster to use REPAIR DATAFILE for just that one.
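
Based on the syntax listed above, targeting just the corrupted file would look something like this (a sketch; I did not run it here):

RMAN> repair datafile 21;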

However, doing the same and calling the recovery advisor is not better: it advises to:

1 Restore and recover datafile 21; Restore and recover datafile 23; Recover datafile 22

When dealing with recovery, you need to understand how it works, what the scope of the failure was, and how to repair it. The advisors and automatic actions can help, but they do not remove the need to understand.

 

Cet article 12cR2 RMAN> REPAIR est apparu en premier sur Blog dbi services.

Benefits of Dashboards for Sales and Profit

Nilesh Jethwa - Sun, 2017-04-23 12:55

A dashboard works by connecting with your business systems, including your email system, customer relationship management (CRM) system, accounting software, and website analytics program, among others.
A dashboard pulls all this data into one place so you won't need to log into several systems. It allows business owners to better manage their business and consequently increase sales and profits.

Using the wrong kind of dashboard approach, such as building dashboards in Excel, writing them from scratch with charting engines, or using desktop-only dashboard software, can give you cheap results, but in the long run these are throwaway techniques.

Using the right BI tools can transform both your enjoyment and success in running your own business. In more specific terms, the right dashboard will provide you with these six key benefits:

Read more at http://www.infocaptor.com/dashboard/how-can-bi-tools-like-dashboards-help-with-sales-and-profit

Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c

Yann Neuhaus - Sat, 2017-04-22 16:28

Data Pump is a powerful way to save data or metadata, move it, migrate, etc. Here is an example showing few new features in 12cR1 and 12cR2.

New parameters

Here is the result of a diff between the 12.1 and 12.2 'impdp help=y' output (see the screenshot in the original post).

But for this post, I’ll show the parameters that existed in 12.1 but have been enhanced in 12.2

LOGTIME

This is a 12.1 feature. The parameter LOGTIME=ALL displays the system timestamp in front of the messages, both on screen and in the logfile. The default is NONE, and you can also set it to STATUS for screen only or LOGFILE for logfile only.


[oracle@vmreforanf12c01 tmp]$ expdp system/manager@PDB01 parfile=impdp.par logfile=impdp.log
 
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:29.671: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.505: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.586: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.706: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.622: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

You will always appreciate finding timestamps in the log file. But remember that your import/export is processed by multiple workers and it is difficult to estimate duration between the different lines. I explained this in https://blog.dbi-services.com/datapump-processing-object-type-misleading-messages/

DUMPFILE

You can see that my DUMPFILE also contains the timestamp in the file name. This is possible in 12.2 with the %T substitution variable. Here is my PARFILE, where DUMPFILE mentions %T (in addition to %U, which numbers the pieces when there are multiple files):

[oracle@vmreforanf12c01 tmp]$ cat impdp.par
schemas=SCOTT
logtime=all
dumpfile=SCOTT_%T.%U.dmp
reuse_dumpfiles=yes
filesize=1M
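
For comparison, the same export without a PARFILE would pass everything on the command line; here is a sketch using the values above (the % substitutions usually need quoting so the shell leaves them alone):

expdp system/manager@PDB01 schemas=SCOTT logtime=all \
  dumpfile='SCOTT_%T.%U.dmp' reuse_dumpfiles=yes filesize=1M logfile=impdp.log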

PARFILE parameters

I don't usually use a PARFILE and prefer to pass all parameters on the command line, even if this requires escaping a lot of quotes, because I like to ship the log file with the DUMPFILE. Before 12.2, the LOGFILE mentioned only the parameters passed on the command line. In 12.2, the PARFILE parameters are written to the LOGFILE as well (but not to the screen):


;;;
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:24.899: ;;; **************************************************************************
22-APR-17 22:20:24.901: ;;; Parfile values:
22-APR-17 22:20:24.903: ;;; parfile: filesize=1M
22-APR-17 22:20:24.905: ;;; parfile: reuse_dumpfiles=Y
22-APR-17 22:20:24.907: ;;; parfile: dumpfile=SCOTT_%T.%U.dmp
22-APR-17 22:20:24.909: ;;; parfile: logtime=all
22-APR-17 22:20:24.911: ;;; parfile: schemas=SCOTT
22-APR-17 22:20:24.913: ;;; **************************************************************************
22-APR-17 22:20:29.654: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.469: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.535: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.620: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.621: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

Now the LOGFILE shows all export information. Only the password is hidden.

DATA_PUMP_DIR

In 12.1 multitenant, you cannot use the default DATA_PUMP_DIR. It is there, but you just cannot use it, either implicitly or explicitly. With my PARFILE above, when DIRECTORY is not mentioned, I would get the following error:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

This means that there is no default possible and we need to mention DIRECTORY.
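
The workaround in 12.1 is simply to create and grant your own directory object and pass it explicitly; a minimal sketch (the path is hypothetical):

SQL> create directory my_dp_dir as '/u90/dpdump/PDB01';
SQL> grant read, write on directory my_dp_dir to system;

expdp system/manager@PDB01 parfile=impdp.par directory=MY_DP_DIR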

But in 12.2 it worked, going to /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/ which is the default DATA_PUMP_DIR:

SYSTEM@PDB01 SQL> select * from dba_directories;
 
OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID
----- -------------- -------------- -------------
SYS TSPITR_DIROBJ_DPDIR /u90/tmp_data_restore 3
SYS PREUPGRADE_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS XMLDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml 1
SYS ORA_DBMS_FCP_LOGDIR /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs 1
SYS ORA_DBMS_FCP_ADMINDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS ORACLE_OCM_CONFIG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS ORACLE_OCM_CONFIG_DIR2 /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS XSDDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml/schema 1
SYS DATA_PUMP_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2 1
SYS OPATCH_INST_DIR /u01/app/oracle/product/12.2.0/dbhome_1/OPatch 1
SYS OPATCH_SCRIPT_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS OPATCH_LOG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS ORACLE_BASE / 1
SYS ORACLE_HOME / 1

Of course, don't leave it under ORACLE_HOME, which is on a filesystem meant for binaries where you don't want to put variable-size files. But it is good to have a default.

 

Cet article Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c est apparu en premier sur Blog dbi services.

Question on Buffer Cache Reads Avg Time in AWR

Tom Kyte - Sat, 2017-04-22 15:46
Hi Tom, in AWR reports, under the section "IOStat by Function summary", we find a statistic called "Buffer Cache Reads Avg Time". According to my understanding it is the avg time taken to do a "Buffer Get". Am I right...
Categories: DBA Blogs

CPU Utilisation

Tom Kyte - Sat, 2017-04-22 15:46
Hi team, every Saturday the DB server CPU becomes very high, and even after restarting the DB it stays high. I checked the AWR report, where I found the "cpu quantum" event, then I checked: SQL> select client_name, status from dba_autotask_client; CLIENT_NAME ...
Categories: DBA Blogs

MV log missing records

Tom Kyte - Sat, 2017-04-22 15:46
Hi, during maintenance of fast refresh materialized views, 10000 MLOG$ records were lost. Can you suggest what our options are for restoring those missing records to our MV? Making a dummy update of those records on the master table is not allowed due...
Categories: DBA Blogs
