Feed aggregator

Partner Webcast – Practical use cases of Oracle API Platform Cloud Service

Why do Application Programming Interfaces (APIs) and API Management matter? What would happen if we woke up tomorrow and APIs didn’t exist? We know they are a part of modern technological...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Cursor parameters versus global bind value

Tom Kyte - Thu, 2017-12-21 19:06
Tom, we are having a bit of a religious war at work regarding the declaration and usage of cursors. Some developers will write a cursor such as: cursor c_cur1 is select * from foo where foo_id = myID; Where myID is a...
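For illustration, a minimal sketch of the two styles under debate (foo, foo_id and myID come from the question; the rest is hypothetical):

declare
  myID number := 1;
  -- style 1: the cursor references the enclosing variable directly
  cursor c_cur1 is
    select * from foo where foo_id = myID;
  -- style 2: the cursor takes an explicit parameter
  cursor c_cur2 (p_id number) is
    select * from foo where foo_id = p_id;
begin
  for r in c_cur2(myID) loop
    null; -- process the row
  end loop;
end;
/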
Categories: DBA Blogs

New Release of Node.js Module for Oracle Database: node-oracledb 2.0 is out

OTN TechBlog - Thu, 2017-12-21 17:41

It's been perhaps the most requested feature, and it's been delivered! You can now get pre-built binaries with all the required dependencies to connect your Node.js applications to an Oracle Database instance. Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use, and version 2.0 is the first release to have pre-built binaries. These are provided for convenience and will make life a lot easier, particularly for Windows users.

With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure.

Binaries for Node 4, 6, 8 and 9 are also available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6).

Simply add oracledb to your package.json dependencies, or manually install with:

$ npm install oracledb

Review the CHANGELOG for all changes. For information on migrating, see Migrating from node-oracledb 1.13 to node-oracledb 2.0. To learn more about this release, check out the detailed announcement.


"Oracle Container Runtime for Docker" (17. ...

Wim Coekaerts - Thu, 2017-12-21 15:33

Basically - added 17.09.1 and 1.8.4-2.0.1 to http://yum.oracle.com/repo/OracleLinux/OL7/preview/x86_64/index.html

Enjoy.

12cR2 Subquery Elimination

Yann Neuhaus - Thu, 2017-12-21 15:27

More and more we can see crazy queries generated by ORM frameworks or BI query generators. They are built to be easily generated rather than to be optimal. The optimizer then has to implement more and more transformations to get an efficient execution plan. Here is a new one that appeared in Oracle 12cR2: subquery elimination, when the subquery does not filter any rows.

A semi-join is a join where we do not need to match all rows, but only find one match. We write it with an EXISTS subquery, or with an =ANY or =SOME subquery, which are equivalent.
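For example, the following two queries are equivalent ways to write the same semi-join (using the same EMP table as in the demo below):

select * from EMP e where exists (select null from EMP s where s.ename = e.ename);
select * from EMP e where e.ename = any (select ename from EMP);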

12.1

Here is the behaviour in 12.1.0.2 when the subquery does not filter any rows because it reads the same table as the outer query, without any predicate:
SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
-----------------
SQL_ID az1jcra46h5ua, child number 1
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 977554918
 
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 13 | | | |
|* 1 | HASH JOIN SEMI | | 1 | 14 | 14 |00:00:00.01 | 13 | 1098K| 1098K| 889K (0)|
| 2 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 7 | | | |
| 3 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | | | |
----------------------------------------------------------------------------------------------------------------

We read the same table twice, join all rows, and finally return exactly the same rows as those coming from the first full scan. This is not efficient.

12.2

Here is the same query in 12.2, where we can see that the table is scanned only once because the optimizer knows that the subquery does not filter anything:

SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
SQL_ID az1jcra46h5ua, child number 0
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 9 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 9 |
------------------------------------------------------------------------------------

This is mentioned in the outline hints with ELIMINATE_SQ:

Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$D0DB0F9A")
ELIMINATE_SQ(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
FULL(@"SEL$D0DB0F9A" "EMP"@"SEL$1")
END_OUTLINE_DATA
*/

This feature can be disabled with the following parameter:

SQL> alter session set "_optimizer_eliminate_subquery"=false;
Session altered.

Or with the following hint:

SQL> select * from EMP where ename in (select /*+ NO_ELIMINATE_SQ */ ename from EMP);

Finally here is what you can see in the CDB trace:

SQE: Trying SQ elimination.
SQE: Trying SQ elimination.
SQE: Query valid for SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE "EMP"."ENAME"=ANY (SELECT "EMP"."ENAME" "ENAME" FROM "SCOTT"."EMP" "EMP")
Registered qb: SEL$D0DB0F9A 0xe7035778 (SUBQUERY ELIMINATE SEL$1; SEL$2)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$D0DB0F9A nbfros=1 flg=0
fro(0): flg=0 objn=73253 hint_alias="EMP"@"SEL$1"
 
SQE: Query after SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE 0=0 AND "EMP"."ENAME" IS NOT NULL

This example is simple, and it is obvious that the SQL should be rewritten. But with large generated queries on complex views, this is the kind of pattern that can appear in the resulting query, and this transformation helps to avoid unnecessary work.


The post 12cR2 Subquery Elimination appeared first on Blog dbi services.

Business Intelligence for the Finance Industry

Nilesh Jethwa - Thu, 2017-12-21 15:20

The banking and finance industry is experiencing continuous change brought about by advances in technology. The changes are welcome, though not without accompanying challenges.

As a result, the market we now have is dynamic, and Wall Street is making the necessary adjustments, not to dampen the vibrancy of the market, but to improve strategies in line with the changing technology. That requires developing new tools and implementing them correctly.

Business Intelligence (BI) Tools for Financial Institutions

Many tools have been developed; the ones worth discussing here are business intelligence tools, which help managers identify better management practices and make smarter business decisions. A finance management dashboard is one of these tools.

Here’s a quick look at how these instruments can help in those areas and in others such as profitability, risk reduction and competitive advantage.

  • Operational Efficiencies

Improving efficiency is key to staying ahead of the competition. BI tools can analyze how efficient operational processes are, leading to better use of resources and expertise and to lower operational costs.

How good are you at customer experience? A dashboard can help you improve this at the point where your employees and customers come face to face.

  • Products and Services

BI tools not only allow managers to track which products and services are performing well but also give them insight into the needs and experience of their customers.

  • Marketing

Do you want to know the best market segment to target? A simple dashboard is all you need to have a good picture of it. The tool can also show you which customer profile gives you the highest revenue and the one that should be put lowest in the list of priorities.

  • Customer Retention

There are customers who are loyal to you. You need to know the reasons why they are staying with your business, but you also need to identify why others are leaving.

Business analytics tools can help you determine the reasons why and lead you to take steps to improve your retention rates.

  • Risk Reduction

This is an ever-changing world where nothing is 100% certain. How would you keep track of your customers and internal organization so that you can detect possible fraud early?

You can use BI tools for this. There are also lots of benefits from knowing your clients’ financial capacities, which a dashboard can provide you 24/7.

  • Investment

At some point, managers run out of investment ideas and strategies. The good thing about BI tools is that they can bring in “new” results from real-time data.

New results mean new things that can be explored further for better decision making and development of new approaches to investing. That’s the kind of data and analysis only a dashboard can bring to your organization.

As an example, the number of cars in a shopping mall’s parking lot can be useful in triangulating consumer research. Recently, satellite images have been used to gather data on the global fuel supply.

These and more examples clearly show how indispensable BI tools can be in the conduct of modern business.

Read more at http://www.infocaptor.com/dashboard/business-intelligence-for-the-finance-industry

Recipe for JQuery Menu integration into ADF Faces

Andrejus Baranovski - Thu, 2017-12-21 12:32
Maybe you are thinking of adding a fresh-looking element to your ADF app? I would propose a JQuery menu. The advantage of such a menu is that it is a purely client-side implementation, and it allows the user to navigate through items really fast. Besides fast navigation, it looks great too and can be easily customized.

In this post I will describe how to integrate a JQuery menu into ADF Faces and provide a sample ADF application. I'm using the JQuery menu from here - Animated Responsive Drop Down Menu. There is no need to implement a JQuery menu component ourselves; we can use one of the existing implementations.

I have recorded a short example, so you can see how convenient client-side menu navigation is. Each menu item loads an ADF region:


JQuery menu integrated into my ADF sample:


It can display menu items for multiple levels:


Integration into ADF Faces is very simple: you need to copy the JS and CSS files into the ADF application:


Define the JS and CSS resources in the ADF main page, and make sure to reference JQuery too:


The menu structure is defined in an ADF fragment using HTML tags. Once a menu item is selected, a call to ADF is made through a combination of client/server listeners (check the WBClient.menuNavigation method). In my future posts I will describe how to construct a similar menu structure dynamically. If your application is secured, make sure to define ADF Security permissions for TFs properly; any unprotected TF can be loaded from the menu:


The client listener method menuNavigation constructs params with the target TF address and calls the server listener, which in turn displays the ADF dynamic region:


The menu fragment with HTML tags is integrated into the ADF page template:


Download sample application - ADFJQueryMenuApp.zip.

How Blockchain Technology Will Impact Accounting.

OracleApps Epicenter - Thu, 2017-12-21 11:13
Since this data is unchallengeable and absolute, accountants and auditors can save valuable time, which can also decrease cost. He highlights that, from an audit perspective, using blockchain, artificial intelligence, and cognitive tools can increase audit volume to help get through massive volumes of data. The technology also extends to helping accounting firms prevent […]
Categories: APPS Blogs

Oracle MOOC: Java Virtual Machine Troubleshooting

Oracle Java Virtual Machine Technology is an implementation of the Java platform (JDK): a complete Java Runtime Environment (JRE) plus tools for developing, debugging, and monitoring Java...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Licensing (R)evolution

Yann Neuhaus - Thu, 2017-12-21 08:49

In 2015 I wrote a blog named “All you need to know about Oracle Database licensing with VMware”. This blog generated lots of comments and hopefully helped some DBAs and IT managers understand the potential issues they can face in case of an Oracle LMS audit.

Over the last two years I have gained some new experience with Oracle LMS audits, and I'd like to share it with you to provide up-to-date information. What is written in this blog is nothing more than a summary of facts encountered with Swiss Oracle customers. The situation described in this blog relates to one of our customers, the SITel (Service de l'Informatique et des Télécommunication) of Fribourg in Switzerland, and has also been encountered by some of our other customers. I cannot guarantee that Oracle applies the same strategy worldwide. Please note that this article has been written with the approval of all customers named in this blog.

What didn’t change?

Oracle on VMware not certified: Oracle still does not certify any of its products on VMware virtualized environments, as explained in My Oracle Support Note ID 249212.1. In summary, it is supported but not certified, meaning that Oracle does not guarantee the proper working of these products on a VMware architecture but will support you for incidents not related to the VMware infrastructure. In cases where VMware could be involved, Oracle support may ask you to reproduce the incident on a certified environment. VMware, on its side, has its own Oracle support policy, available here.

Oracle Audit reputation: As mentioned on clearlicensing.org, Oracle has done nothing to improve its audit reputation. Indeed, according to this website, Oracle was voted the worst vendor during the audit process (cf. graph below).

Clearlicensing

Who is the least helpful vendor in terms of audits? (Aggressive behavior, focused on short-term revenue)

Soft and hard partitioning policy: VMware is still a soft partitioning technology according to Oracle. The distinction between hard and soft partitioning is explained in this Oracle pdf file. In addition, VMware still explains to its customers that “DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software” (cf. UNDERSTANDING ORACLE CERTIFICATION, SUPPORT AND LICENSING FOR VMWARE ENVIRONMENTS). That is certainly true, but Oracle does not recognize this feature as making VMware a hard partitioning solution, and it won't be taken into consideration during an LMS audit.

What changed?

Oracle Edition: As you know, Standard Edition One and Standard Edition have been discontinued, and if you want to upgrade to 12.1.0.2 (12c patchset 1) you have to go to Standard Edition Two. You can find a good blog from Franck Pachot related to the installation of Standard Edition Two here, as well as tests related to the 16-thread limitation here. Many questions related to Oracle Standard Edition 2, and the differences between Standard Edition One and Standard Edition Two, are answered in this Oracle pdf.

Oracle LMS Auditors: In Switzerland we now have a company (registered on 15.02.2017), named Seven Eighths Schweiz GmbH, doing Oracle audits. The JPE program has definitively stopped.

Oracle commercial approach for VMware customers: This is perhaps the most important point, and it inspired the title of this blog. Up to the middle of last year, Oracle customers using VMware to host their Oracle products had to be licensed for all processors where Oracle products are installed and/or running, but Oracle basically took into consideration all processors where Oracle products could be running. This has been well illustrated by the following famous image created by House of Brick.

House Of Brick Oracle Parking

This means that since VMware 5.1, all the physical hosts managed by a vCenter Server instance have to be licensed, whatever the Oracle footprint on the virtual servers. This decision was probably taken by Oracle because of the new features coming with VMware vSphere 5.1, particularly vSphere vMotion (zero-downtime migration) without the need for shared storage configurations. Of course, with VMware 6.x, by the same principle, all the vSphere environments of the company have to be licensed. Since last year, some customers have come to me requesting my advice on a proposal made to them by Oracle. Basically, Oracle proposed to validate their Oracle/VMware infrastructure and license only the processors that are really used to run Oracle products, under these two conditions:

  1. Totally isolating Oracle products in their VMware infrastructure
  2. Oracle requires a “certain volume of business” (can be in terms of Oracle Cloud Credit) to validate the customer infrastructure schema

It is important to take into consideration that Oracle validated these infrastructures only for some specific versions of VMware, usually the customer's current VMware version and the next one.

What does “totally isolating Oracle products” mean?

Customers sent me the infrastructure examples that were provided to them by Oracle. You can find them below. (Note that the customers gave me approval to publish these pictures.)

Oracle VMware VLAN and storage configuration

vCenter, VLAN and storage configuration

What you can see on these infrastructure schemas is:

  1. A dedicated vCenter Server Instance with dedicated physical hosts
  2. A dedicated VLAN
  3. A Storage isolation through LUN Masking, Zoning or any approved restriction
What does a “certain volume of business” mean?

In the cases we encountered, Oracle asked for a “certain volume of business” to validate the Oracle infrastructure on VMware. This volume of business was in some cases realized by acquiring Oracle Cloud Credits. I cannot provide an exact budget for this compensation, since it seems to vary according to the customer's infrastructure. In the cases we encountered with our customers, we are speaking of amounts above US$ 40,000 (CHF 40,000).

Conclusion

It seems that Oracle is no longer so restrictive regarding the licensing of Oracle on a VMware infrastructure. VMware is still not certified with any Oracle products, but at least Oracle customers can speak with Oracle salespeople in order to avoid having to license all the processors of their vSphere environments. In our experience, this discussion can lead, in some circumstances, to the acquisition of some other Oracle products or Cloud Credits to get the infrastructure validated by Oracle for specific VMware versions.

The other alternative to VMware is Oracle VM Server. Although support is chargeable, Oracle VM itself is free. Unfortunately, even though Oracle has made some great improvements over the last years, Oracle VM does not have the same functionality as VMware, but it can fit perfectly depending on your needs. Since Oracle VM Server has to be configured specifically in order to be considered a hard partitioning technology, you can find the documentation for configuring it as hard partitioning, with CPU pinning, here.

I do hope that you find this blog helpful, and I would like to thank the SITel for its precious collaboration in reviewing this blog.


The post Oracle Licensing (R)evolution appeared first on Blog dbi services.

Visual Studio Code Extensions I use

Dimitri Gielis - Thu, 2017-12-21 07:36
In my post List of the tools I use and why I use them I already mentioned that I use Visual Studio Code as my main editor. Before, I used different editors, but VSC replaced them all (core + extensions), so it's easier for me to just use one editor. Next to that, VSC is backed by a company, gets monthly updates, and there's a huge community behind it that provides extensions.

Here's a screenshot of the updates in version 1.19


These are the extensions I've installed:



If you want to search for those extensions this list might be easier:
  • DavidAnson.vscode-markdownlint
  • DotJoshJohnson.xml
  • HookyQR.minify
  • PKief.material-icon-theme
  • PeterJausovec.vscode-docker
  • Shan.code-settings-sync
  • alefragnani.project-manager
  • anseki.vscode-color
  • apng.orclapex-autocomplete
  • buianhthang.xml2json
  • christian-kohler.npm-intellisense
  • christian-kohler.path-intellisense
  • dbaeumer.vscode-eslint
  • donjayamanne.githistory
  • eamodio.gitlens
  • eg2.vscode-npm-script
  • formulahendry.auto-close-tag
  • formulahendry.code-runner
  • gerane.Theme-Blackboard
  • humao.rest-client
  • johnpapa.azure-functions-tools
  • kisstkondoros.vscode-codemetrics
  • mdickin.markdown-shortcuts
  • ms-vscode.azure-account
  • ms-vscode.Theme-MarkdownKit
  • ms-vscode.wordcount
  • msjsdiag.debugger-for-chrome
  • nodesource.vscode-for-node-js-development-pack
  • rafaelmaiolla.remote-vscode
  • robertohuertasm.vscode-icons
  • streetsidesoftware.code-spell-checker
  • wix.vscode-import-cost
  • xyz.plsql-language
If you install the VSC shell command line, you can get the list of extensions by typing: code --list-extensions

In the Oracle APEX community many people are using Visual Studio Code; you can read a nice article by Morten, Using VS Code for PL/SQL development, and by Christophe, Compile PL/SQL with VS Code using SSH. Adrian also created a nice extension for Oracle APEX. I highlighted the two extensions above.

Other editors that are often used by Oracle APEX developers are Sublime Text and Atom, which have features similar to VSC, so if you read about something nice those editors can do, you can most likely do it in Visual Studio Code too, e.g. Jorge's excellent post about Multi-Cursor Editing or Martin's truth about developing with Atom.


Just as a reminder for myself: as there were too many extensions to fit on one screen, I took two screenshots and glued the files together with the following command:
convert -append vscode_1.png vscode_2.png vscode_extensions.png
If you want to glue them horizontally, you can use +append (I'm on a Mac).

Categories: Development

Run SQLcl from ANT

Darwin IT - Thu, 2017-12-21 04:29
I think it was about a year ago that Oracle released SQLcl, which can be seen as the command-line variant of SQL Developer. But even better: a replacement for SQL*Plus.

A few years ago I created what I called an InfraPatch framework, to do preparations on an infrastructure as a prerequisite for the deployment of services and/or applications. It can run WLST scripts for creating datasources, JMS queues, etc. It also supported running database scripts, but that required a SQL*Plus installation, for instance using the instant client. Since it was part of a release/deploy toolset, where the created release is to be deployed by an IT admin on a test, acceptance or production environment, I had to rely on a correct Oracle/instant client installation in an agreed location.

I'm in the process of revamping that framework, and I renamed it to InfraPrep, since preparing an infrastructural environment makes it clearer what it does. (It does not patch a system with Oracle patches...)

Now I'm at the point where I have to implement support for running database scripts. The framework uses ANT, which in fact is Java. And SQLcl has two big advantages that make it ideal for use in my InfraPrep framework:
  • It is incredibly small: it's only 19MB! And that includes the ojdbc and xmlparser jars. Since I used ANT from a FusionMiddleware home, I could make it even smaller!
  • It is Java, so I can leverage the java ant task.
 So, how to call SQLcl from ANT? I need a few ingredients:
  • Download and unzip SQLcl into my Ant project and add a sqlcl.home property:
    sqlcl.home=${basedir}/sqlcl
  • The actual sqlcl jar file and add the sqlcl.jar property for that:
    sqlcl.jar=oracle.sqldeveloper.sqlcl.jar
  • The main class file = oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli
These ingredients can be found in the sql.bat in the bin folder of the SQLcl download.

Then of course in my environment property file I need the user name, password and database url.
Something like:
DWN.dbUrl=(description=(address=(host=darlin-vce-db.org.darwinit.local)(protocol=tcp)(port=1521))(connect_data=(service_name=orcl)))
DWN.dbUserName=dwn_owner
DWN.dbPassword=dwn_owner

I used a TNS-style database URL, since it is the same as used in the creation of the corresponding DataSource. And it can be reused to connect with SQLcl.
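For example, the same URL can be reused verbatim to connect interactively (assuming the sql launcher script from the SQLcl bin folder is on the PATH; the credentials are the sample ones above):

sql dwn_owner/dwn_owner@"(description=(address=(host=darlin-vce-db.org.darwinit.local)(protocol=tcp)(port=1521))(connect_data=(service_name=orcl)))"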

Now, to make it easier to use and to abstract the plumbing into a sort of ANT task, I created a macrodef:


 <!-- Run a database script using SQLcl -->
<macrodef name="runDbScript">
<attribute name="dbuser"/>
<attribute name="dbpassword"/>
<attribute name="dburl"/>
<attribute name="dbscript"/>
<sequential>
<logMessage message="DatabaseUrl: @{dburl}" level="info"/>
<logMessage message="DatabaseUser: @{dbuser}" level="info"/>
<logMessage message="DatabasePassword: ****" level="info"/>
<property name="dbConnectStr" value='@{dbuser}/@{dbpassword}@"@{dburl}"'/>
<property name="dbScript.absPath" location="@{dbscript}"/>
<property name="dbScriptArg" value="@${dbScript.absPath}"/>
<logMessage message="Run Database script: ${dbScriptArg}" level="info"/>
<record name="${log.file}" action="start" append="true"/>
<java classname="oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli" failonerror="true" fork="true">
<arg value="${dbConnectStr}"/>
<arg value="${dbScriptArg}"/>
<classpath>
<pathelement location="${sqlcl.home}/lib/${sqlcl.jar}"/>
</classpath>
</java>
<record name="${log.file}" action="stop"/>
</sequential>
</macrodef>
</project>

In this macro definition, I first build up a database connect string using the username, password and database URL:
      <property name="dbConnectStr" value='@{dbuser}/@{dbpassword}@"@{dburl}"'/>
Then I use a little trick to create an absolute path of the dbscript path:
      <property name="dbScript.absPath" location="@{dbscript}"/>
The trick is in the location attribute of the property.
And since that now is a property instead of an attribute, I circumvented the need for escaping the @ character:
      <property name="dbScriptArg" value="@${dbScript.absPath}"/>
The logMessage task you see is another macrodef I use:
      <macrodef name="logMessage">
<attribute name="message" default=""/>
<attribute name="level" default="debug"/>
<sequential>
<echo message="@{message}" level="@{level}"></echo>
<echo file="${log.file}" append="true"
message="@{message}${line.separator}" level="@{level}"></echo>
</sequential>
</macrodef>

It echoes the output both to the console and to a log file.
Since I want the output of the java task in the same log file, I enclosed the java task in record tasks that start and stop appending the output stream to the log file.

The java task is pretty simple, referencing the jar file in the classpath and providing the connect string and the script argument as two separate arguments.
There are however two important properties:
  • failonerror="true": I want to quit my ANT scripting when the database script fails.
  • fork="true": when the exit statement is provided in the SQL script, SQLcl tries to quit the JVM. This is not allowed, because by default it runs in the same JVM as ANT. But not providing the exit statement in the script would leave the thread stuck in SQLcl, which is not acceptable either. So, forking the JVM allows SQLcl to quit properly.
Now, the macro can be called as follows:
    <propertycopy name="dbUser" from="${database}.dbUserName"/>
<propertycopy name="dbUrl" from="${database}.dbUrl"/>
<propertycopy name="dbPassword" from="${database}.dbPassword"/>
<runDbScript dbuser="${dbUser}" dbpassword="${dbPassword}" dburl="${dbUrl}" dbscript="${prep.folder}/${dbScript}"/>

Where these properties are used:
database=DWN
dbScript=sample.sql

And the sample.sql file:
select * from global_name;
exit;

And this works like a charm:
runPrep:
[echo] Script voor uitvoeren van database script.
[echo] Environment:
[echo] Prep folder: ../../infraPreps/BpmDbS0004
[echo] Load prep property file ../../infraPreps/BpmDbS0004/BpmDbS0004.properties
[echo] Run Script
[echo] DatabaseUrl: (description=(address=(host=darlin-vce-db.org.darwinit.local)(protocol=tcp)(port=1521))(connect_data=(service_name=orcl)))
[echo] DatabaseUser: dwn_owner
[echo] DatabasePassword: ****
[echo] Run Database script: @c:\temp\FMWReleaseAll\DWN\1.0.0\infraPreps\BpmDbS0004\sample.sql
[java]
[java] SQLcl: Release 17.3.0 Production on do dec 21 11:18:50 2017
[java]
[java] Copyright (c) 1982, 2017, Oracle. All rights reserved.
[java]
[java] Connected to:
[java] Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
[java] With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
[java] Data Mining and Real Application Testing options
[java]
[java]
[java] GLOBAL_NAME
[java] --------------------------------------------------------------------------------
[java] ORCL
[java]
[java]
[java] Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
[java] With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
[java] Data Mining and Real Application Testing options
[echo] Done running preperations.

BUILD SUCCESSFUL
Total time: 12 seconds

One thing still to be arranged, though, is fetching the username/password from the command line instead of from properties. This can be done as follows:
    <input message="Enter database user for environment ${database}: " addproperty="db.user"/>
<input message="Enter password for user ${db.user}: " addproperty="db.password">
<handler classname="org.apache.tools.ant.input.SecureInputHandler"/>
</input>

Conclusion
SQLcl is great, since it is small and written in Java. So it turns out to be incredibly easy to distribute within your own framework.

node.js 4, node.js 6, nodejs 8, node-oracledb12, php7.0 php 7.1 - php-oci

Wim Coekaerts - Thu, 2017-12-21 02:00

Just in time for the holidays.

A bunch of new RPMs released in a bunch of new channels on http://yum.oracle.com.

- node.js 4, node.js 6 and node.js 8 for both OL6 and OL7, along with the node-oracledb-12c add-on that lets you connect to Oracle databases out of the box.

- php 7.0 and php 7.1

more stuff in EPEL


have fun...


Latest Updates + FREE Training This Week

Online Apps DBA - Thu, 2017-12-21 01:11

Santa is busy with elves packing gifts, and so are we; keep an eye on the mails in your inbox on 24th & 25th Dec. This week, you will find: 1. Oracle Identity & Access Manager (OAM/SSO) 1.1 Multi-Factor Authentication possibilities in OAM 11g 1.2 OAM Interview Questions Guide 2. Oracle Cloud 2.1 1Z0-160 Oracle Database Cloud […]

The post Latest Updates + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Using CSS Frameworks with PeopleSoft Fluid

Jim Marion - Wed, 2017-12-20 15:20

Fluid is Oracle's strategic direction. If you have experience with Fluid development, you know that dragging, dropping, and aligning fields on a canvas isn't enough to develop a multi-form factor Fluid user interface. With Fluid comes an emphasis on CSS3 for multi-form factor support. This means you must add CSS class names to page fields to support different screen sizes. PeopleTools comes with an extensive list of predefined CSS class names. The challenge is identifying which CSS class to use. Fortunately, Oracle published two extremely helpful documents:

If you learn the style classes in these two documents, you will do quite well with Fluid layout (actually, just learn a few dozen of these classes and you will do quite well). But what if you want to use features of CSS that don't exist in Fluid-delivered CSS classes? Flexbox, for example, is one of the most powerful features of CSS3. When I find myself terribly annoyed with Fluid layout, I throw a Flexbox at my layout issue and the problem is solved. If we want to apply styling using CSS attributes that PeopleSoft hasn't already defined, we have two options:

  • Define our own CSS classes by creating a free-formed stylesheet or
  • Borrow someone else's CSS class names from a CSS framework.

Writing your own CSS can be a rewarding experience. I can often solve layout problems with just a few lines of CSS. My concern, however, is maintenance. What starts as a one-time CSS "fix" (or hack) for a layout often turns into a copy/paste exercise replicated at least a dozen times, with each page using some of the same CSS and some different CSS. Then what? Do we create a separate CSS file for each Fluid page? Do we refactor common CSS, moving similar code into a shared library?

Given the age and history of the internet, most web layout problems have already been solved. Since PeopleSoft is just another web application, we can leverage the work of the world wide web's pioneers. The solutions to most of our layout problems exist in today's common CSS frameworks, Bootstrap being the most popular. There are many PeopleSoft consultants happily using Bootstrap to enhance PeopleSoft Fluid pages. Here is how they do it:

  1. Import Bootstrap into a Freeform Stylesheet
  2. Use AddStylesheet to insert Bootstrap into a Peoplesoft page
  3. Apply Bootstrap style classes to Fluid page elements
  4. Create a "reset stylesheet" to fix everything Bootstrap broke.

Yes, you read that last line correctly, "... fix everything Bootstrap broke." Please don't misread this. There are many developers successfully using Bootstrap with PeopleSoft. But here is the problem: Most CSS frameworks directly style HTML elements. This is actually good. Developers call this a "reset" stylesheet. What makes this a problem for PeopleSoft is that PeopleTools ALSO applies CSS directly to HTML elements. PeopleTools includes its own reset stylesheet. In a sense, we could say that Fluid is a CSS framework itself. The end result is a mixture of styles applied to HTML elements by two competing and complementing CSS frameworks. I call this "Fluid-strap." Consultants work around this problem by creating a reset for the competing reset stylesheets — a reset for the reset.

Here is another alternative: Use a CSS framework that does NOT style HTML elements, but instead relies on class names. This type of CSS framework was designed for compatibility. This type of framework understands that another CSS framework is in charge. My personal favorite CSS compatibility library is Oracle JET. In Oracle JET's GitHub repository, you will find oj-alta-notag.css, a CSS file containing a lot of CSS class names and no element declarations. To use this library, follow the first three steps described above, skipping the final step:

  1. Import oj-alta-notag.css into a Freeform Stylesheet
  2. Use AddStylesheet to insert the Oracle JET Stylesheet into a Peoplesoft page
  3. Apply Oracle JET style classes to Fluid page elements

The key difference is we don't have to create a reset for the reset. The Oracle JET stylesheet silently loads into a PeopleSoft page without changing any styling unless specifically asked to style an element through the element's Default Style Name (Style Classes on 8.56) property.

Consider a PeopleSoft page built with 4 group boxes aligned horizontally as demonstrated in the following screenshot.

In Classic, what you see is mostly what you get, so the online rendering would look nearly the same as the Application Designer screenshot. In Fluid when viewed online, however, each group box will render vertically as follows:

We can fix this by applying a CSS Flexbox to the 4 group boxes. With Flexbox, the 4 group boxes will align horizontally as long as the device has enough horizontal real estate. If the display is too small, any group boxes that don't fit horizontally will move to the next row. For this example, we will use Oracle JET's Flex Layout. Here are the steps:

  1. Add a Layout only Group Box around the 4 horizontal group boxes and mark the container group box as Layout Only
  2. While still setting container group box properties, set the group box's style class to oj-flex oj-sm-flex-items-1
  3. Likewise, to each of the 4 horizontal group boxes, add the Style Class oj-flex-item
  4. Create a Freeform stylesheet definition containing oj-alta-notag-min.css
  5. Use the AddStylesheet PeopleCode function in PageActivate to insert the Stylesheet into your page

The end result will look something like this:

Several years ago, I read the book Test Driven Development by Kent Beck. In that book, Kent identifies the first step of each project as the hardest step. Why? Because each new project contains significant uncertainty. Software development seems to involve a lot of unknowns (if the solution were known, someone would have created it, automated it, and published it). His advice? Start with what you know and work towards what you don't know. This is how we teach Fluid at JSMPros. Your developers understand Classic development and we use that knowledge to springboard students into a higher level of Fluid understanding. If you are ready to take the Fluid challenge, I encourage you to register for one of our monthly Fluid classes at www.jsmpros.com/training-live-virtual. Have a group of eight or more developers? Contact us to schedule your own personalized Fluid training event.

Benefits and Use of Analytics for Insurance

Nilesh Jethwa - Wed, 2017-12-20 15:11

The world in which we live is a volatile one, a fact that doesn’t bode well for the insurance industry, whose greatest ally is “certainty”. Analytics, however, can help fill the gap. In this regard, BI reporting tools such as data visualization software are extremely useful.

Advanced analytics for insurance allows all stakeholders in the industry to identify new growth opportunities and risk factors as soon as they take shape. Insurance companies such as yours can benefit from analytics in two ways.

  • Protect your enterprise.
  • Optimize your business’s growth.

Three important things to remember

Read more at http://www.infocaptor.com/dashboard/benefits-and-use-of-analytics-for-insurance

Online datafile move in a 12c dataguard environment

Yann Neuhaus - Wed, 2017-12-20 12:13

Oracle 12c introduced online datafile move. One question we might ask is how moving a datafile online behaves in a Data Guard environment. In this blog we will do some tests.
Below is our configuration; we are using Oracle 12.2.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 8 seconds ago)
DGMGRL>

The StandbyFileManagement property is set to auto for both the primary and the standby database.

DGMGRL> show database 'MYCONT_SITE' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL> show database 'MYCONT_SITE1' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL>

Below are the datafiles of the pluggable database PDB1 on both the primary and the standby

SQL> show con_name;
CON_NAME
------------------------------
PDB1
SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

Now let’s move, for example, /u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf to a new location on the primary PDB1

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

We can verify the new location on the primary

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

As StandbyFileManagement is set to auto for both databases, we might think that the datafile is also moved on the standby, so let’s check

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

The answer is no.
Ok, but is my Data Guard configuration still working? Let’s query the broker

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 0 seconds ago)

Yes, the configuration is fine.
Ok, now can I move my datafile online to the new location on the standby server? Let’s try

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

And we can verify that the datafile was moved.

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

And we can also verify that my Data Guard configuration is still fine

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 46 seconds ago)
DGMGRL>

Conclusion
We can see that:
1- The StandbyFileManagement property does not apply to online datafile move.
2- Moving a datafile online on the primary does not move the datafile on the standby.
3- An online datafile move can be performed on the standby database.


The post Online datafile move in a 12c dataguard environment appeared first on Blog dbi services.

Oracle 12.2 Dataguard : PDB Flashback on the Primary

Yann Neuhaus - Wed, 2017-12-20 12:12

The other day I was discussing database flashback for a pluggable database in a Data Guard environment with a colleague. I did some tests and present the results in this blog.
Below is our broker configuration. Oracle 12.2 is used.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 35 seconds ago)
DGMGRL>

The primary database has flashback database set to YES.

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE READ WRITE YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

Same for the standby database

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE1 READ ONLY WITH APPLY YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ ONLY
PDB2 READ ONLY

For the tests we are going to do a database flashback of the primary PDB1.
Let’s connect to PDB1

10:15:59 SQL> alter session set container=pdb1;
Session altered.
.
10:16:15 SQL> show con_name;
CON_NAME
------------------------------
PDB1
10:16:22 SQL>

And let’s create a table article with some data for reference

10:16:22 SQL> create table article (idart number);
Table created.
.
10:18:12 SQL> insert into article values (1);
1 row created.
10:18:31 SQL> insert into article values (2);
1 row created.
.
10:18:34 SQL> select * from article;
IDART
----------
1
2
.
10:18:46 SQL> commit;

Now let’s do a database flashback of the primary PDB1 to a point before the creation of the table article.

10:28:12 SQL> show con_name
CON_NAME
------------------------------
PDB1
.
10:28:16 SQL> shut immediate;
Pluggable Database closed.
.
10:28:28 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 MOUNTED
10:28:54 SQL>
.
10:28:54 SQL> FLASHBACK PLUGGABLE DATABASE PDB1 TO TIMESTAMP TO_TIMESTAMP('2017-12-20 10:16:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
10:30:14 SQL>

Now let’s open PDB1 with the resetlogs option

10:31:08 SQL> alter pluggable database PDB1 open resetlogs;
Pluggable database altered.
10:32:15 SQL>

And let’s query the table article. As expected, the table is no longer present

10:32:15 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 READ WRITE
.
10:32:59 SQL> select * from article;
select * from article
*
ERROR at line 1:
ORA-00942: table or view does not exist
10:33:06 SQL>

Now if we check the status of our Data Guard configuration in the broker, we have errors

12/20/2017 10:23:07 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 48 seconds ago)
12/20/2017 10:34:40 DGMGRL>

The status of the Primary database is fine

12/20/2017 10:34:40 DGMGRL> show database 'MYCONT_SITE';
Database - MYCONT_SITE
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYCONT
Database Status:
SUCCESS

But the standby status is returning some errors
12/20/2017 10:35:11 DGMGRL> show database 'MYCONT_SITE1';
Database - MYCONT_SITE1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 3 minutes 10 seconds (computed 1 second ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: OFF
Instance(s):
MYCONT
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
12/20/2017 10:35:15 DGMGRL>

And if we check the alert log of the standby database, we can find the following errors

(3):Recovery of pluggable database PDB1 aborted due to pluggable database open resetlog marker.
(3):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 2518041, or timestamp before 12/20/2017 10:16:01, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2017-12-20T10:32:05.565085+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 2520607
2017-12-20T10:32:05.612394+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
2017-12-20T10:32:05.612511+01:00
MRP0: Background Media Recovery process shutdown (MYCONT)

On the primary PDB, we can query the current INCARNATION_SCN in the v$pdb_incarnation view. And we can note that the current SCN is the same as the one specified in the standby alert log: 2518041

11:08:11 SQL> show con_name
CON_NAME
------------------------------
PDB1
11:08:56 SQL> select status,INCARNATION_SCN from v$pdb_incarnation;
STATUS INCARNATION_SCN
------- ---------------
CURRENT 2518041
PARENT 2201909
PARENT 1396169
11:08:59 SQL>

And then, as specified in the alert log, we have to flash back the standby PDB to an SCN lower than 2518041.
First let’s stop the redo apply on the standby

12/20/2017 11:13:14 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-OFF';
Succeeded.
12/20/2017 11:13:59 DGMGRL>

And then let’s flash back to 2518039 (i.e. 2518041 - 2), for example.
Let’s shut down the standby container MYCONT and start it up in mount state

11:18:42 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
.
11:19:10 SQL> startup mount
ORACLE instance started.
Total System Global Area 956301312 bytes
Fixed Size 8799656 bytes
Variable Size 348129880 bytes
Database Buffers 595591168 bytes
Redo Buffers 3780608 bytes
Database mounted.
11:19:50 SQL>
.
11:19:50 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED MOUNTED
PDB1 MOUNTED
PDB2 MOUNTED

Now let’s flash back PDB1 on the standby

11:20:19 SQL> flashback pluggable database PDB1 to SCN 2518039;
Flashback complete.
11:20:40 SQL>

The last step is to re-enable redo apply for the standby container

12/20/2017 11:13:59 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-ON';
Succeeded.
12/20/2017 11:23:08 DGMGRL>

And then we can verify that the configuration is now fine

12/20/2017 11:25:05 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
12/20/2017 11:25:07 DGMGRL>

Conclusion
In this article we saw that flashback in a Data Guard environment works the same way for a container as for a non-container database. The only difference is the SCN we must consider to flash back the pluggable database: this SCN should be queried from the v$pdb_incarnation view, and not from v$database as we usually do for a non-container database.


The post Oracle 12.2 Dataguard : PDB Flashback on the Primary appeared first on Blog dbi services.

nVision Performance Tuning 12: Hinting nVision with SQL Profiles

David Kurtz - Wed, 2017-12-20 10:00
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision. The dynamic nature of the nVision SQL means that it is not possible to use SQL Patches. nVision SQL statements contain literal values and never use bind variables. When dynamic selectors are used, the SELECTOR_NUM will be different for every execution. A SQL_ID found in one report will not be seen again in another report. Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match the force matching signature of a SQL statement. SQL statements that differ only in the literal values they contain will have different SQL_IDs but will have the same force matching signature. Although you will still have a lot of force matching signatures, you should find that you have far fewer force matching signatures than SQL_IDs. Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
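As a sketch of that triage step (my own illustrative query, not from the original article; the RPTBOOK module filter matches the nVision instrumentation used later in this post), the heaviest force matching signatures can be identified from AWR:

SELECT s.force_matching_signature
,      COUNT(DISTINCT s.sql_id) sql_ids
,      SUM(s.elapsed_time_delta)/1e6 elapsed_secs
FROM   dba_hist_sqlstat s
WHERE  s.module = 'RPTBOOK' /*Application Specific*/
AND    s.force_matching_signature > 0
GROUP BY s.force_matching_signature
ORDER BY elapsed_secs DESC
FETCH FIRST 10 ROWS ONLY
/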
Note: SQL Profiles require the Tuning Pack to be licensed.
As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning. However, you should not simply put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG, because that will invoke parallelism in many other processes. I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries. On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly and in a highly controlled and targeted fashion.
Example
Let's take the following nVision query as an example.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
/
We can tell from the equality join conditions that the two selectors still joined in the query are dynamic selectors.
A third selector, on DEPTID, has been suppressed with the 'use literal values' performance option. The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report. Note that if these change, the statement will no longer force match the same profile. SQL profiles might suddenly cease to work due to a tree or selection criteria change.
This is the plan I get initially and without a profile. It doesn't perform well.
Plan hash value: 808840077
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 10408 (100)| | | |
| 1 | HASH GROUP BY | | 517 | 50666 | 10408 (1)| 00:00:01 | | |
| 2 | HASH JOIN | | 517 | 50666 | 10407 (1)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 4 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 5 | HASH JOIN | | 518 | 41440 | 10404 (1)| 00:00:01 | | |
| 6 | PARTITION RANGE SINGLE | | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 7 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 8 | PARTITION RANGE ITERATOR | | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_X_LEDGER_ACCTS | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 10 | SORT CLUSTER BY ROWID BATCHED | | 5373 | | 5177 (1)| 00:00:01 | | |
| 11 | INDEX SKIP SCAN | PS_X_LEDGER_ACCTS | 5373 | | 5177 (1)| 00:00:01 | 28 | 40 |
-----------------------------------------------------------------------------------------------------------------------------------
These are the hints I want to introduce (on Oracle 12c).
SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
  • Use automatic parallel degree, statement queuing and in-memory parallel execution.
  • Invoke parallelism if the statement is estimated to run for at least 2 seconds.
  • However, I will also limit the automatic parallelism to a degree of 4.
  • Force materialized view rewrite (see the sketch after this list).
  • Use a Bloom filter when joining to the materialized view.
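If the REWRITE hint does not take effect, DBMS_MVIEW.EXPLAIN_REWRITE will report why not. The following is only a sketch: it assumes the REWRITE_TABLE has been created from utlxrw.sql, and for brevity it omits the DEPTID criteria from the example statement.
DECLARE
  l_sql VARCHAR2(4000) := q'[SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_USMGT' AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND L2.SELECTOR_NUM=10228 AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231 AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345' AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM]';
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(query => l_sql, mv => 'PS_XX_SUM_GCNSL_MV', statement_id => 'nvision');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'nvision'
/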
I have created a data-driven framework to create the profiles, starting with a working storage table that holds details of each force matching signature for which I want to create a profile.
CREATE TABLE dmk_fms_profiles
(force_matching_signature NUMBER NOT NULL
,sql_id VARCHAR2(13)
,plan_hash_value NUMBER
,module VARCHAR2(64)
,action VARCHAR2(64)
,report_id VARCHAR2(32) /*Application Specific*/
,tree_list CLOB /*Application Specific*/
,sql_profile_name VARCHAR2(30)
,parallel_min_time_threshold NUMBER
,parallel_degree_limit NUMBER
,other_hints CLOB
,delete_profile VARCHAR2(1)
,sql_text CLOB
,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
)
/
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, but limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those hints; any other hints are injected via the OTHER_HINTS string. I identified the problematic SQL by analysis with ASH, and hence I also obtained the FORCE_MATCHING_SIGNATURE, by which the metadata is keyed. I have specified a meaningful name for each SQL profile.
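For reference, this is a sketch of the kind of ASH query I use to find the most expensive signatures (DBA_HIST_ACTIVE_SESS_HISTORY is sampled every 10 seconds, hence the multiplication; it requires the Diagnostics Pack):
WITH x AS (
SELECT DISTINCT sql_id, force_matching_signature
FROM dba_hist_sqlstat)
SELECT x.force_matching_signature
, COUNT(*)*10 ash_secs
FROM dba_hist_active_sess_history h
, x
WHERE x.sql_id = h.sql_id
AND h.module = 'RPTBOOK' /*Application Specific*/
AND h.action LIKE 'PI=%:%:%' /*Application Specific*/
GROUP BY x.force_matching_signature
ORDER BY ash_secs DESC
/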
INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
COMMIT;
Profiles are created using the text of a SQL statement rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
UPDATE dmk_fms_profiles a
SET (module, action, sql_id, plan_hash_value, sql_text)
= (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
FROM dba_hist_sqlstat s
, dba_hist_sqltext t
WHERE t.dbid = s.dbid
AND t.sql_id = s.sql_id
AND s.force_matching_signature = a.force_matching_signature
AND s.snap_id = (
SELECT MAX(s1.snap_id)
FROM dba_hist_sqlstat s1
WHERE s1.force_matching_signature = a.force_matching_signature
AND s1.module = 'RPTBOOK' /*Application Specific*/
AND s1.action LIKE 'PI=%:%:%' /*Application Specific*/)
AND s.module = 'RPTBOOK' /*Application Specific*/
AND s.action LIKE 'PI=%:%:%' /*Application Specific*/
AND ROWNUM = 1)
WHERE sql_id IS NULL
/

MERGE INTO dmk_fms_profiles u
USING (
SELECT a.sql_id, a.force_matching_signature, p.name
FROM dmk_fms_profiles a
, dba_sql_profiles p
WHERE p.signature = a.force_matching_signature
) s
ON (s.force_matching_signature = u.force_matching_signature)
WHEN MATCHED THEN UPDATE
SET u.sql_profile_name = s.name
/
Columns REPORT_ID and TREE_LIST contain application-specific information extracted from the application instrumentation and tree selector logging.
/*Application Specific - extract report ID from ACTION*/
UPDATE dmk_fms_profiles a
SET report_id = substr(regexp_substr(a.action,':([A-Za-z0-9_-])+',1,1),2)
WHERE report_id IS NULL
AND action IS NOT NULL
/
/*Application Specific - extract financial analysis tree from application logging*/
UPDATE dmk_fms_profiles a
SET tree_list =
(SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
FROM (select l.tree_name, MAX(l.length) length
FROM dba_hist_sql_plan p
, ps_nvs_treeslctlog l
WHERE p.plan_hash_value = a.plan_hash_value
AND p.sql_id = a.sql_id
AND p.object_name like 'PS%TREESELECT__'
AND p.partition_start = p.partition_stop
AND p.partition_start = l.selector_num
AND l.tree_name != ' '
GROUP BY l.tree_name)
)
WHERE tree_list IS NULL
/

Now I can produce a simple report of the metadata in order to see what profiles should be created.
column sql_text format a110 word_wrapped
column module format a8
column report_id heading 'nVision|Report ID'
column tree_list format a20 word_wrapped
column plan_hash_value heading 'SQL Plan|Hash Value' format 9999999999
column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
column parallel_degree_limit heading 'Parallel|Degree|Limit' format 999
set long 500
SELECT * FROM dmk_fms_profiles
/

SQL Plan
FORCE_MATCHING_SIGNATURE SQL_ID Hash Value MODULE ACTION
------------------------ ------------- ----------- -------- ----------------------------------------------------------------
Parallel Parallel
nVision Min Time Degree
Report ID TREE_LIST SQL_PROFILE_NAME Threshold Limit D
-------------------------------- -------------------- ------------------------------ --------- -------- -
OTHER_HINTS
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------
12803175998948432502 5pzxhha3392cs 988048519 RPTBOOK PI=3186222:USGL233I:10008
USGL233I BU_GAAP_CONSOL, NVS_GBGL123I_BU_CONSOL_ACCOUNT 1 4
GAAP_ACCOUNT
REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
set serveroutput on
DECLARE
l_signature NUMBER;
h SYS.SQLPROF_ATTR;
e_no_sql_profile EXCEPTION;
PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
l_description CLOB;
BEGIN

FOR i IN (
SELECT f.*, s.name
FROM dmk_fms_profiles f
LEFT OUTER JOIN dba_sql_profiles s
ON f.force_matching_signature = s.signature
) LOOP

BEGIN
IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
dbms_sqltune.drop_sql_profile(name => i.name);
END IF;
EXCEPTION WHEN e_no_sql_profile THEN NULL;
END;

IF i.delete_profile = 'Y' THEN
NULL;
ELSIF i.sql_text IS NOT NULL THEN
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
CASE WHEN i.parallel_degree_limit >=0 THEN 'OPT_PARAM(''parallel_degree_limit'',' ||i.parallel_degree_limit ||') ' END||
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
i.other_hints,
q'[END_OUTLINE_DATA]');

l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
dbms_output.put_line(i.sql_profile_name||' '||l_description);

DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => i.sql_text,
profile => h,
name => i.sql_profile_name,
description => l_description,
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

END IF;
END LOOP;
END;
/
I can verify that the profile has been created, and the hints that it contains, thus:
SELECT x.profile_name,
xmltype(x.comp_data) as xmlval
FROM dmk_fms_profiles p
, dba_sql_profiles s
, dbmshsxp_sql_profile_attr x
WHERE x.profile_name = p.sql_profile_name
AND s.name = p.sql_profile_name
AND s.status = 'ENABLED'
ORDER BY 1
/

PROFILE_NAME
------------------------------
XMLVAL
------------------------------------------------------------------------------------------------
NVS_GBGL123I_BU_CONSOL_ACCOUNT
<![CDATA[BEGIN_OUTLINE_DATA]]>
<![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>
<![CDATA[END_OUTLINE_DATA]]>
And now when the application runs, I get the plan that I wanted.
  • The query runs in parallel.
  • The SQL is rewritten to use the materialized view.
  • There are no indexes on the materialized view, so it is full scanned.
  • It generates a Bloom filter from PSTREESELECT10 and applies it to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2219 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 4 | PX RECEIVE | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 5 | PX SEND HASH | :TQ10003 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | P->P | HASH |
| 6 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 7 | HASH JOIN | | 536 | 47704 | 2218 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 8 | PX RECEIVE | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10002 | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | | | Q1,02 | PCWC | |
| 11 | HASH JOIN | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 13 | JOIN FILTER CREATE | :BF0000 | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 14 | PX RECEIVE | | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 236 | 3776 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 16 | PARTITION RANGE SINGLE | | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 17 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 18 | JOIN FILTER USE | :BF0000 | 8859 | 475K| 2213 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 19 | PX BLOCK ITERATOR | | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWC | |
| 20 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_XX_SUM_GCNSL_MV | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWP | |
| 21 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 22 | PX RECEIVE | | 731 | 13158 | 3 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | 731 | 13158 | 3 (0)| 00:00:01 | | | | S->P | HYBRID HASH|
| 24 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
| 25 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
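It is also easy to confirm from the cursor cache that running statements really are picking up the profile, because V$SQL reports the profile name against each matching child cursor (a sketch):
SELECT sql_id, child_number, plan_hash_value, sql_profile
FROM v$sql
WHERE sql_profile = 'NVS_GBGL123I_BU_CONSOL_ACCOUNT'
/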
Conclusion
SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning Pack to be licensed, whereas SQL Patches and Baselines do not.
Applying force matching SQL profiles to nVision is an effective, though reactive, tactic.   Tree changes can result in changes to the number of literal criteria in nVision SQL statements, which may therefore cease to match existing profiles.  nVision will always require ongoing monitoring and the introduction of new profiles.
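For sites without the Tuning Pack, a SQL patch can carry the same hints, although it cannot force match and so must be created against the exact statement text. This is only a sketch: the patch name is made up, it takes the hints from the OTHER_HINTS column only (the OPT_PARAM hints could be concatenated in the same way as in the profile code above), and it uses DBMS_SQLDIAG.CREATE_SQL_PATCH, which is documented from Oracle 12.2.
DECLARE
  l_name VARCHAR2(128);
BEGIN
  FOR i IN (SELECT sql_text, other_hints FROM dmk_fms_profiles
            WHERE sql_profile_name = 'NVS_GBGL123I_BU_CONSOL_ACCOUNT') LOOP
    l_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
      sql_text    => i.sql_text,
      hint_text   => i.other_hints,
      name        => 'NVS_GBGL123I_PATCH', /*hypothetical name*/
      description => 'nVision patch created without Tuning Pack');
  END LOOP;
END;
/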

Introducing Pixel Perfect Reporting in Oracle Analytics Cloud

Tim Dexter - Wed, 2017-12-20 08:30


For all you BI Publisher fans, here is the good news - BI Publisher is now available with Oracle Analytics Cloud!

Oracle Analytics Cloud (OAC) is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for your enterprise. You can take data from any source, explore with Data Visualization and collaborate with real-time data. It is available in three flavors - Standard Edition, Data Lake Edition and Enterprise Edition, with Standard Edition giving the base ability to explore data, Data Lake Edition allowing insights into big data, and Enterprise Edition offering the full platter of data exploration, big data analytics, dashboard, enterprise reporting, Essbase etc. Refer to this documentation for additional details on different editions.

With OAC 17.4.5 Enterprise Edition, you can now create pixel-perfect reports and deliver them to a variety of destinations such as email, printer, fax, a file server using FTP or WebDAV, WebCenter Content, and Content & Experience Cloud. The version of BI Publisher here is 12.2.4.0.

If you have used BI Publisher on-prem, the experience will be very similar in both features and look-and-feel, so you will find it easy to get on board. If you are new to BI Publisher, you will now be able to create pixel-perfect, highly formatted business documents in OAC such as Invoices, Purchase Orders, Dunning Letters, Marketing Collateral, EFT & EDI documents, Financial Statements, Government Forms, Operational Reports, Management Reports, Retail Reports, Shipping Labels with barcodes, Airline boarding passes with PDF417 barcode, Market-to-Mobile content using QR code, Contracts with fine print on alternate pages, Cross-tab reports, etc.

You can connect to a variety of data sources including BI Subject Areas, BI Analyses and the RPD; schedule your report to run once or as a recurring job; and even burst documents to render in multiple formats and deliver them to multiple destinations.


Can we move from BI Publisher on-prem to BI Publisher on OAC?

Well yes, you can. You will have to understand your on-prem deployment and plan accordingly. If your data can be migrated to OAC, that will be best; otherwise, you can plan to extend your network to Oracle Cloud, allowing OAC to access your on-prem data. The repository can be migrated using the archive and unarchive mechanism. User data management will be another task, where application roles from on-prem will need to be added to OAC application roles. Details on this will be coming soon.


Benefits of BI Publisher on OAC

First of all, OAC comes with many great features for data exploration and visualization with advanced analytics capabilities. BI Publisher complements this environment with pixel-perfect reporting. So now you have an environment packed with industry-leading BI products, providing an end-to-end solution for an enterprise.

Managing server instances will be a cakewalk now: with just a few clicks you will be able to scale up/down to a different compute shape, or scale out/in to manage nodes in the cluster, saving you both time and money.

There are also many self-service features to manage reports and server-related resources.


What's new in BI Publisher 12.2.4.0?

BI Publisher in OAC includes all features of 12.2.1.3 and has the following new features in this release:

  • Accessible PDF Support (Tagged PDF & PDF/UA-1)
  • New Barcodes - QR Code and PDF417
  • Ability to purge Job History
  • Ability to view diagnostic log for online report
  • Widow-orphan support for RTF template


So why wait? You can quickly check this out by creating a free trial account here. Once you log in, you land on the OAC home page. To get to BI Publisher, click the Page Menu at the top right of the page and select "Open Classic Home". BI Publisher options are available under Published Reporting on the classic home page.

For further details on pixel perfect reporting, check the latest Oracle Analytics Cloud Documentation.


Stay tuned for more updates on upgrades and new features!

Categories: BI & Warehousing
