Feed aggregator

Jonathan Lewis

Bobby Durrett's DBA Blog - Tue, 2016-04-19 18:09

I am finally getting around to finishing my four-part blog series on people who have had the most influence on my Oracle performance tuning work. The previous three people were Craig Shallahamer, Don Burleson, and Cary Millsap. The last person is Jonathan Lewis. These four people, listed and blogged about in chronological order, had the most influence on my understanding of how to do Oracle database performance tuning. There are many other great people out there and I am sure that other DBAs would produce their own, different, list of people who influenced them. But this list reflects my journey through my Oracle database career and the issues that I ran into and the experiences that I had. I ran into Jonathan Lewis’ work only after years of struggling with query tuning and getting advice from others. I ran into his material right around the time that I was beginning to learn about how the Oracle optimizer worked and some of its limits. Jonathan was a critical next step in my understanding of how Oracle’s optimizer worked and why it sometimes failed to pick the most efficient way to run a query.

Jonathan has produced many helpful tuning resources including his blog, his participation in online forums, and his talks at user group conferences, but the first and most profound way he taught me about Oracle performance tuning was through his query tuning book Cost-Based Oracle Fundamentals. It’s $30 on Amazon and that is an incredibly small amount of money to pay compared to the value of the material inside the book. I had spent many hours over several years trying to understand why the Oracle optimizer sometimes chooses the wrong way to run a query. In many cases the fast way to run something was clear to me and the optimizer’s choices left me stumped. The book helped me better understand how the Oracle optimizer chooses what it thinks is the best execution plan. Jonathan’s book describes the different parts of a plan – join types, access methods, etc. – and how the optimizer assigns a cost to the different pieces of a plan. The optimizer chooses the plan with the least cost, but if some mistake causes the optimizer to calculate an unrealistic cost then it might choose a poor plan. Understanding why the optimizer would choose a slow plan helped me understand how to resolve performance issues or prevent them from happening, a very valuable skill.
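As a small illustration of the costing the book explains (a generic sketch of my own, not from the book; the table name is hypothetical), you can see the cost figures the optimizer compared when picking a plan:

-- Ask the optimizer to cost a statement without running it
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;

-- The COST column shows the estimates the optimizer compared
-- when it chose this plan over the alternatives
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);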

There is a lot more I could say about what I got from Jonathan Lewis’ book, including just observing how he operated. Jonathan filled his book with examples that show the concepts he was teaching. I think that I have emulated the kind of building of test scripts that you see throughout his book and on his blog and community forums. I think I have emulated not only Jonathan’s approach but the approaches of all four of the people who I have spotlighted in this series. Each has provided me with profoundly helpful technical information that has helped me in my career. But they have also provided me with a pattern of what an Oracle performance tuning practitioner looks like. What kind of things do they do? To this point in my career I have found the Oracle performance tuning part of my job to be the most challenging and interesting and probably the most valuable to my employers. Jonathan Lewis and the three others in this four-part series have been instrumental in propelling me along this path and I am very appreciative.

Bobby

Categories: DBA Blogs

Critical Patch Update for April 2016 Now Available

Steven Chan - Tue, 2016-04-19 16:39

The Critical Patch Update (CPU) for April 2016 was released on April 19, 2016. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents.

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find pertinent information.

The Critical Patch Update Advisory is available at the following location:

The next four Critical Patch Update release dates are:

  • July 19, 2016
  • October 18, 2016
  • January 17, 2017
  • April 18, 2017

Oracle E-Business Suite References

My Oracle Support Knowledge Documents:

Web Library Book:

Categories: APPS Blogs

JRE 1.8.0.91/92 Certified with Oracle EBS 12.x

Steven Chan - Tue, 2016-04-19 15:56

Java Runtime Environment 1.8.0_91 (a.k.a. JRE 8u91-b14) and its corresponding Patch Set Update (PSU) JRE 1.8.0_92 and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

What's new in this release?

Oracle now releases a Critical Patch Update (CPU) at the same time as the corresponding Patch Set Update (PSU) release for Java SE 8.

  • CPU Release:  JRE 1.8.0_91
  • PSU Release:  JRE 1.8.0_92
Oracle recommends that Oracle E-Business Suite customers use the CPU release (JRE 1.8.0_91) and only upgrade to the PSU release (1.8.0_92) if they require a specific bug fix. For further information and bug fix details, see Java CPU and PSU Releases Explained.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in July 2016
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that, despite the availability of ongoing support for JRE 6 and 7 for EBS customers, enabling JRE Auto-Update has different impacts depending on the JRE release currently installed; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer results in a delay of around 20 seconds before the applet starts to load (the Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are advised to move to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

JRE 1.7.0_101 Certified with Oracle E-Business Suite 12.x

Steven Chan - Tue, 2016-04-19 15:50

Java Runtime Environment 1.7.0_101 (a.k.a. JRE 7u101-b14) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.x for Windows-based desktop clients.

This certification addresses Oracle Security Alert for CVE-2016-0636.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

JRE 7 End of Public Updates

The JRE 7u79 release was the last JRE 7 update available to the general public.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 7 updates until the end of Java SE 7 Premier Support in July 2016.

How can EBS customers obtain Java 7 updates after the public end-of-life?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

References

Related Articles
Categories: APPS Blogs

JRE 1.6.0_115 Certified with Oracle E-Business Suite

Steven Chan - Tue, 2016-04-19 15:43
The latest Java Runtime Environment 1.6.0_115 (a.k.a. JRE 6u115-b12) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 12.x for Windows-based desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

New EBS installation scripts

This JRE release is the first with a 3-digit Java version. Installing this in your EBS 11i and 12.x environments will require new installation scripts.  See the documentation listed in the 'References' section for more detail.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Deploying JRE documentation for your EBS release for details.

Implications of Java 6 End of Public Updates for EBS Users

The Support Roadmap for Oracle Java is published here:

The latest updates to that page (as of Sept. 19, 2012) state:

Java SE 6 End of Public Updates Notice

After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.

What does this mean for Oracle E-Business Suite users?

EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017.

In other words, nothing changes for EBS users after February 2013. 

EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017. 

How can EBS customers obtain Java 6 updates after the public end-of-life?

Java 6 is now available only via My Oracle Support for E-Business Suite users.  You can find links to this release, including Release Notes, documentation, and the actual Java downloads, here:

Both JDK and JRE packages are contained in a single combined download after 6u45.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

Mac users running Mac OS X 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

References

Related Articles
Categories: APPS Blogs

April 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-04-19 15:02

Oracle today released the April 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (MOS Note 2126904.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuapr2016v3-2985753.html

My Oracle Support Note 2126904.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2126904.1 (MOS account required).

USING SELECT 'X' in query/subqueries.

Learn DB Concepts with me... - Tue, 2016-04-19 14:26
 



--------------------------------------------------------
--  DDL for Table TAB1
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB1"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB1 (ID,NAME) values (1,'AAA');
Insert into ATEST.TAB1 (ID,NAME) values (2,'BBB');
Insert into ATEST.TAB1 (ID,NAME) values (3,'EEE');
Insert into ATEST.TAB1 (ID,NAME) values (4,'FFF');


--------------------------------------------------------
--  DDL for Table TAB2
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB2"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB2 (ID,NAME) values (1,'CCC');
Insert into ATEST.TAB2 (ID,NAME) values (2,'DDD');


Get records that exist in TAB1 and not in TAB2 using SELECT 'X':


select * from TAB1 f where not exists (select 'X' from TAB2 where id=f.id);
ID    NAME
--    ---- 
4    FFF
3    EEE

In the above query we get all the records from TAB1 whose IDs do not match any ID in TAB2.
Hence we do not get the records with IDs 1 & 2, as those IDs also exist in TAB2.
This is just like using "select * from TAB1 f where not exists (select ID from TAB2 where id=f.id);"


Get records that exist in TAB1 and in TAB2 using SELECT 'X':


select * from TAB1 f where exists (select 'X' from TAB2 where id=f.id);

ID    NAME
--    ---- 
1    AAA
2    BBB

In the above query we get all the records from TAB1 that exist with the same ID in TAB2.
Hence we get only the records with IDs 1 & 2, as they exist in both tables.
This is just like using "select * from TAB1 f where exists (select ID from TAB2 where id=f.id);"
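Since the select list of an EXISTS or NOT EXISTS subquery is never actually projected, 'X', 1, NULL or a real column all behave the same; a quick sketch to convince yourself is to compare the plans of these equivalent queries:

select * from TAB1 f where exists (select 'X'  from TAB2 where id=f.id);
select * from TAB1 f where exists (select 1    from TAB2 where id=f.id);
select * from TAB1 f where exists (select NULL from TAB2 where id=f.id);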
Categories: DBA Blogs

XA Transactions with SOASuite JMS Adapter

Darwin IT - Tue, 2016-04-19 13:39
JMS is perfect for setting transaction boundaries, and in OSB it is pretty clear how JMS transactions are handled. However, in SOASuite using the JMS adapter, the SOA Infrastructure is handling your JMS transactions by default, and messages are removed from the queue right away because the Gets are auto-acknowledged. If something fails, you would expect messages to be rolled back to the JMS queue and eventually moved to the error queue. But, again by default, not with the SOASuite/JMS Adapter. In that case the BPEL process, for instance, fails and gets into a recovery state, to be handled in the 'Error Hospital' in Enterprise Manager. But I want JMS to handle it! (Says the little boy...)

So how do we accomplish that? Today I got the chance to figure that out.

Start with a JMS setup with a JMS Server, Module and a Queue with an Error Queue that is configured to be the error destination on the first queue. On the first queue, set a redelivery limit of 3 and a redelivery delay of, for instance, 60000 ms. I'm not going into that here.
Also create a Connection Factory in the JMS Module with a proper JNDI, something like 'jms/myApplicationCF'.

In the JMS adapter on SOASuite there are several OutboundConnectionFactories already pre-configured. It is quite convenient to use the one with JNDI 'eis/wls/Queue'. But if you look into that, you'll see that it uses the default WebLogic JMS connection factory 'weblogic.jms.XAConnectionFactory'. Not much wrong with that, but you can't configure it for your own particular situation. Moreover, it is configured with 'AcknowledgeMode' = 'AUTO_ACKNOWLEDGE'. As you can read in the docs there are three values for the AcknowledgeMode:
  • DUPS_OK_ACKNOWLEDGE, for consumers that are not concerned about duplicate messages
  • AUTO_ACKNOWLEDGE, in which the session automatically acknowledges the receipt of a message
  • CLIENT_ACKNOWLEDGE, in which the client acknowledges the message by calling the message's acknowledge method
So create a new outbound connection factory, with a JNDI like 'eis/jms/MyApp'. 
Now, apparently we don't want 'AUTO_ACKNOWLEDGE', because that would cause the message to be acknowledged on the Get. So you could roll back until 'Saint Juttemis' (as we say in our family) but it won't go back on the queue. Dups aren't ok with me, so I'll choose 'CLIENT_ACKNOWLEDGE' here. Then there's another option: 'IsTransacted'. I want that one on 'true'. Then in ConnectionFactoryLocation, you'd put the JNDI of your JMS connection factory, in my example 'jms/myApplicationCF'.

So you'll get something like:
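A sketch of the resulting outbound connection pool properties (the property names are the ones discussed above, as they appear in the connection factory configuration):

JNDI Name                 : eis/jms/MyApp
AcknowledgeMode           : CLIENT_ACKNOWLEDGE
ConnectionFactoryLocation : jms/myApplicationCF
IsTransacted              : true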

On the tab Transaction, validate that the transaction support is set to XA Transaction:

Having done that, you can update/redeploy your JMS Adapter with the changed plan. I figure that how to do that is straightforward, especially when you've done it with DB Adapters already.

I created two SOA Projects (actually I adapted those created by a co-worker). The first one is TestPutJMS:

The project is straightforward, with a WSDL referring to an XSD with two fields:

The bpel is then as follows:

It assigns the request to the input variable of the invoke of the JMSPut. The JMS_Put is a JMS adapter configuration, referring to the JNDI 'eis/jms/myApp', defined in the JMS Adapter.

After that there's an if on the action field, where in the case of a certain value a fault is thrown, to validate that the Put is rolled back.

In my case it's more interesting to look at the Get part. That project is as follows:

In this case there's a mediator wired to the get adapter config, also referring to the 'eis/jms/myApp' JNDI. The mediator routes to the BPEL process. The transaction handling of a mediator is simple and straightforward:
  • If there's a transaction it will subscribe to that,
  • if there isn't, a new transaction is created.
The JMS Adapter creates a new XA transaction. On the JMS adapter on WLS we configured that no auto-acknowledge should occur, and that we want a transaction. Thus, this is the transaction that is re-used by the Mediator. But how about the BPEL? The BPEL is asynchronous, request-only, since it has no way to reply with a response, unless it would be on a response queue.
By default you would have the property 'bpel.config.oneWayDeliveryPolicy' set to 'async.persist'. But that would mean that a new thread is started. Setting it to 'sync' causes the thread that is started by the Adapter to be reused. I also want to subscribe to the already running transaction of the JMS Adapter as it is passed through by the mediator. Setting the property 'bpel.config.transaction' to 'required' will take care of that. Summarized, I set the following properties on the BPEL (see the composite.xml sketch after this list):
  • bpel.config.transaction: required => subscribe to already opened transaction
  • bpel.config.oneWayDeliveryPolicy: sync => reuse existing running thread
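In composite.xml this boils down to two property elements on the BPEL component; a minimal sketch (the component name is hypothetical, the property names and values are the ones discussed above):

<component name="TestJMSGetProcess">
  <implementation.bpel src="BPEL/TestJMSGetProcess.bpel"/>
  <!-- join the XA transaction opened by the JMS Adapter -->
  <property name="bpel.config.transaction">required</property>
  <!-- reuse the adapter's thread instead of persisting to the delivery store -->
  <property name="bpel.config.oneWayDeliveryPolicy">sync</property>
</component>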


The process looks like:


Here I have an if with a conditional throw of an exception as well. Based on the value of the action element I can have it throw a custom exception, which causes the BPEL to fail and the transaction to be rolled back.
With a redelivery limit of 3, I'll get three retries, so in total 4 tries of the BPEL process. After that, the message is moved to the JMS Error Queue.

A nice article on JMS transactions from the A-Team can be found here. However, the setup above leaves the redelivery handling to JMS. So, in 12cR2 that is, I find that the properties of the JMS Queue apparently take precedence over the settings I made in the TestJMSGet service on the composite:

I hope this article clears things up regarding the JMS Adapter configuration for transactions.

Timeout Values for Enterprise Manager Components

Arun Bavera - Tue, 2016-04-19 13:35
Requirement: to have the dashboard display continuously without logging in again.

There are browser plugins that refresh the current page automatically at a given frequency. Try this approach first.
I searched “browser plugin to refresh page”



The EM components that affect timeouts are:
http://docs.oracle.com/cd/E63000_01/EMADM/appdx_timeout_settings.htm#EMADM15477
Component: Apache timeout
Description: Number of seconds that an Apache session is kept active. If the Apache timeout is set beyond the operating system TCP timeout, it will cause unpredictable results. The operating system timeout is set to 2 hours by default.
Timeout Value: 5 minutes by default
Command: Run the following command:
$ omsvfy show tcp
Parameters Incoming Value
--------------------
tcp_keepalive_time 7200
tcp_keepalive_intvl 75
tcp_fin_timeout 60
--------------------

Component: OMS timeout or Login timeout
Description: This is the oracle.sysman.eml.maxInactiveTime parameter that can be set per OMS. To prevent unauthorized access to Cloud Control, Enterprise Manager will automatically log you out of Cloud Control when there is no activity for a predefined period of time, for example if you leave your browser open and leave your office. This default behavior prevents unauthorized users from using your Enterprise Manager administrator account. If you make changes to the login timeout value, be sure to consider the security implications of leaving your session open for longer than the default timeout period. Note: the default timeout value does not apply when you restart the web server or the OMS. In both of those cases, you will be asked to log in to the Cloud Control Console, regardless of the default timeout value.
Timeout Value: 45 minutes by default
Command: Run the following command, then restart the OMS for the value to take effect:
emctl set property -name oracle.sysman.eml.maxInactiveTime -value time_in_minutes -module emoms

Component: ADF timeout
Description: This is controlled by the variable oracle.adf.view.rich.poll.timeout, which applies to pages that have auto poll enabled. After a page does not receive any keyboard or mouse event for the duration of the oracle.adf.view.rich.poll.timeout variable, the poll stops. From that point on, the page participates in the standard server-side session timeout.
Timeout Value: 10 minutes
Command: None

At my previous project we used these values as per client requirements:
--We are setting to 2700 ---> 45minX60= 2700 Seconds
emctl set property -name oracle.sysman.eml.maxInactiveTime -value 2700
cd /u01/app/mw_12cR3/oms/sysman/archives/emgc/deployments/GCDomain/emgc.ear/em.war/WEB-INF  
<!-- Workaround for Bug 18141467 - CONFIGURE PERFORMANCE HOME PAGE TIMEOUT
You need to specify the value in milliseconds. Setting to 40 minutes on OCT-23-2014:
-->
<context-param>
<param-name>oracle.adf.view.rich.poll.TIMEOUT</param-name>
<param-value>2400000</param-value>
</context-param>
………………..
…………………
</web-app>
Also make sure the settings at the load balancer are proper, if one is used.
Note:
  • In case of multiple OMSs, this must be done on all OMSs.
  • If the OMSs are behind a SLB, and if the SLB name is used in the URL provided to login to the EM Console, check the Whitepaper:
    Enterprise Manager 12c Cloud Control - Configuring OMS High Availability with F5 BIG-IP Local Traffic Manager
    Topic Create the Persistence Profiles - Change the values related to the Console
    The value must be provided there in seconds (So 3600 for a timeout of 60 minutes).
Refer:
12c OEM: Console Session Expires with "Because of inactivity, your session has timed out and is no longer active, Click OK to reload the page" (Doc ID 1544516.1)
How to change EM 12c Performance Home page default timeout (Doc ID 1644004.1)


Categories: Development

nVision Performance Tuning: Coalescing Tree Leaves

David Kurtz - Tue, 2016-04-19 11:09
I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
However, a consequence of the Tree Performance Access Method 'suppress join; use literal values' is that the resulting SQL in nVision will have a criterion for every leaf on each of the selected nodes on that tree.
nVision Tree Performance Options


There will be an equality condition for each single value leaf. I normally set Selector Options to Ranges of values (BETWEEN), so I get a between condition for each ranged leaf. Behind the scenes, Oracle rewrites between as a pair of inequalities, so there is no difference, but the SQL generated by nVision is slightly shorter.
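For example, a ranged leaf such as

A.DEPTID BETWEEN 'A135' AND 'A138'

is processed by the optimizer as the pair of inequalities

A.DEPTID >= 'A135' AND A.DEPTID <= 'A138'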
The following is typical of nVision SQL with these performance options set.
SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) 
FROM
PS_LEDGER A WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND
A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND
A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=

) AND (
A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR
A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR
A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR
A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR
A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR
A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR
A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR
A.DEPTID BETWEEN 'A135' AND 'A138'

A consequence of all the criteria is that Oracle can take a long time to parse the SQL statement. It may only be a few seconds in the case of a single SQL statement, but an nVision report book can consist of thousands of SQL statements.
To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace. SQL Parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and can vary widely.
Event Name                            % Time       Seconds       Calls   Avg/Call   Min/Call    Max/Call
-----------------------------------   ------   -----------   ---------  ---------  ---------  ----------
FETCH calls [CPU]                      48.2%   3,699.8440s      16,068    0.2303s    0.0000s   178.0640s
db file sequential read                22.5%   1,728.2101s   4,413,352    0.0004s    0.0002s     0.1294s
SQL*Net message from client [idle]      8.0%     617.7042s         926    0.6671s    0.0002s    61.3147s
PARSE calls [CPU]                       7.9%     605.9340s       5,383    0.1126s    0.0000s    11.0500s
...
Total                                 100.0%   7,681.4428s
Reducing the number of criteria in the SQL will reduce the parse time, but that is determined by the way the tree leaves are defined.
The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf where possible. It performs two checks before merging adjacent leaves on the same tree node:
  • There is not an intermediate value on the detail field defined in the tree structure record. So if the detail field was DEPT_TBL.DEPTID, the script checks that there are no values of DEPTID on PS_DEPT_TBL that are not currently selected by existing leaves that would be included in the merged leaf.
  • There is not another leaf on another node on the tree that would intersect with the merged leaf.
Instructions

leafcoal.sql was written as an anonymous PL/SQL block, so there is nothing to install. It should be run in SQL*Plus connected as the PeopleSoft owner ID (usually SYSADM). It is expected that there are some adjustments to the script that the user may need to make. As delivered, it runs in a test mode that does not update the database but reports on what it would do. Change k_testmode to FALSE to make it update the database.
k_testmode     CONSTANT BOOLEAN := FALSE; /*set this false to perform update*/
The level of output emitted depends on the variable l_debug_level:
l_debug_level  INTEGER := 4;
  • 1. end of processing message 
  • 2. start of processing for tree 
  • 3. number of leaves in tree and number of leaves coalesced 
  • 4. details of leaves being compressed 
  • 5. start and end of each procedure 
  • 6. parameters passed to functions 
  • 7. number of rows updated/deleted during coalesce 
  • 8. dynamic SQL statement 
The script reports on the work it has done. It does not commit its updates. That is left for the user to either commit or rollback.

.(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
.(4)634 nodes, 2636 leaves
.(4)1358 leaves coalesced (52%)

(1)Commit changes or rollback
The query at the end of the script determines which trees will be processed and may need to be changed as required. For example, you might choose to coalesce the leaves on

  • specific trees,
  • most recent effective dated trees,
  • trees with literal values performance option

  FOR i IN (
SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
FROM pstreedefn d
, pstreestrct s
, psrecfielddb f
WHERE d.tree_strct_id = s.tree_strct_id
AND s.node_fieldname = 'TREE_NODE'
-- AND d.TREE_ACC_METHOD = 'L' --literal values
AND s.dtl_recname = f.recname
AND s.dtl_fieldname = f.fieldname
-- AND tree_name = 'XXX_ACCOUNT'
) LOOP
Conclusion

The number of leaves coalesced depends entirely on how the trees have been built. At one customer it produced a reduction of over 50%; at another it was only 10%. The reduction in the number of leaves produces a corresponding reduction in SQL parse time during nVision reports.

Data is Everything, and Everything is Data

Pythian Group - Tue, 2016-04-19 10:33
Exploring the phenomenon of “datafication”

In the year 2000, only a quarter of the world’s stored information was digital; the rest was on paper, film, and other analog media. Today, less than two percent of all stored information is nondigital. (1)

This is largely the result of “datafication”, a process that turns all aspects of life—preferences, opinions, telephone calls and sensor-driven information—into data.

Datafication is the driving force behind Big Data. It’s also causing a threefold shift in how we look for meaning in the information available to us: away from traditional sampling approaches, toward greater tolerance of messy, unstructured data, and into the search for correlations rather than absolute, singular causes to explain trends and events. These changes are already having major impacts in every area of our lives—from scientific research to business and finance, to healthcare, education and transportation.

Watch this video of Pythian President and CEO Paul Vallée, as he talks about datafication and generating revenue.

From sampling to knowing

Representative sampling is based on the idea that, within a certain margin of error, we can make inferences about a total population from a small, randomized subset. This works well for simple questions like, “Which of our customers generate the most revenue?” but lacks the detail to effectively answer queries like, “Which customers are most profitable?” or, “Which customers are considering leaving us for another vendor?”

Inexpensive computer memory, powerful processors and sophisticated algorithms now allow us to analyze vast amounts of data rather than small samples. Using Big Data in this way has the considerable advantage of predictive capability—it can identify patterns and trends that aren’t detectable in a small sample, giving an unprecedented view of future behavior.

From clean to messy

What’s new about Big Data isn’t just that there’s lots of it. Because it comes from many different sources in many different formats, it’s not tidy like traditional datasets. Tolerating some inaccuracy may require data analysts to shift their outlooks a little, but when you’re trying to answer big, complex questions, the gain in data scope is a good trade-off against using smaller amounts of very exact data. Here’s an example.

In 2009, Google showed it’s possible to predict locations of seasonal outbreaks of the flu using nothing more than archived records of Google searches. The sheer size of the data set (think a billion searches a day in the U.S. alone) more than compensated for its messiness. After running nearly half a billion calculations against the data, Google identified 45 terms—words such as “headache” and “runny nose”—that had a strong correlation with the CDC’s data on flu outbreaks.

From cause to correlation

The Google example points to a third change brought about by datafication and Big Data: abandoning the search for certainty. Instead of looking for causes, innovative data users are looking for correlations. For example, automotive and aviation engineers are collecting and analyzing massive quantities of information on engines that have failed, looking for patterns that will help them predict when other engines might be at risk of failing in the future. They’re not seeking a single cause for a single event; they’re mapping correlations between huge numbers of events to recognize patterns that can be put to practical, preventative use.

The correlation approach has been used to spot infections in premature babies before overt symptoms appear and to predict everything from manhole cover failures to consumer purchasing habits.

Big Data insights require Big Thinking

Harnessing the powerful, often unpredictable, insights available from Big Data requires three things: as complete a dataset as possible, people with the skills required to collect, manage and analyze that data, and people who know how to ask unexpected, even visionary questions. It’s not just a matter of the right technologies—it’s about a fundamental shift in how we relate to data and what can be done with it.
Sources
1. https://www.foreignaffairs.com/articles/2013-04-03/rise-big-data

Categories: DBA Blogs

Partition Storage -- 1 : Default Partition Sizes in 12c

Hemant K Chitale - Tue, 2016-04-19 10:17
Oracle 11g (11.2.0.2) introduced a change whereby the default Initial Extent of a Table Partition is 8MB.  However, this did not apply to Index Partitions, which could still start with 64KB extents in an AutoAllocate Tablespace.

12cR1 now introduces a parameter to enable large Initial Extent for Index Partitions as well.

SQL> connect / as sysdba                          
Connected.
SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show parameter deferred

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table my_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range(id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> create index my_part_tbl_ndx on my_part_tbl(id_column) local;

Index created.

SQL>
SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

no rows selected

SQL> 
SQL> insert into my_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into my_part_tbl values (151,'One Hundred Fifty One');

1 row created.

SQL> insert into my_part_tbl values (251, 'Two Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64

6 rows selected.

SQL>


I can enable large Index Partition Extents with a parameter (which can be set with ALTER SESSION):


SQL> alter session set "_index_partition_large_extents"=TRUE;

Session altered.

SQL> insert into my_part_tbl values (351,'Three Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 8192

8 rows selected.

SQL>


However, I can rebuild the Index Partition with a 64KB Initial Extent as well:

SQL> alter index my_part_tbl_ndx rebuild partition p_400 storage (initial 64K);

Index altered.

SQL> select segment_name, partition_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2;

SEGMENT_NAME PARTITION_NA SEGMENT_TYPE BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL P_100 TABLE PARTITION 8192
MY_PART_TBL P_200 TABLE PARTITION 8192
MY_PART_TBL P_300 TABLE PARTITION 8192
MY_PART_TBL P_400 TABLE PARTITION 8192
MY_PART_TBL_NDX P_100 INDEX PARTITION 64
MY_PART_TBL_NDX P_200 INDEX PARTITION 64
MY_PART_TBL_NDX P_300 INDEX PARTITION 64
MY_PART_TBL_NDX P_400 INDEX PARTITION 64

8 rows selected.

SQL>


In the next post, we'll see more Extents for the Partitions.
Categories: DBA Blogs

SQL Server Dates, Dates and More Dates

Pythian Group - Tue, 2016-04-19 09:44

 

Working with SQL Server date functions can be frustrating. The purpose of this blog is to share some date statements I use regularly, especially when doing Business Intelligence and Data Warehouse solutions.

I hope you find them useful, and if you have any questions or any more useful statements in relation to dates in SQL Server, please feel free to leave them in the comments below.

----Today
SELECT GETDATE() 'Today'
----Yesterday
SELECT DATEADD(d , -1 , GETDATE()) 'Yesterday'
----First Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 0) 'First Day of Current Week'
----Last Day of Current Week
SELECT DATEADD(wk , DATEDIFF(wk , 0 , GETDATE()) , 6) 'Last Day of Current Week'
----First Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 0) 'First Day of Last Week'
----Last Day of Last Week
SELECT DATEADD(wk , DATEDIFF(wk , 7 , GETDATE()) , 6) 'Last Day of Last Week'
----First Day of Current Month
SELECT DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0) 'First Day of Current Month'
----Last Day of Current Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Month'
----First Day of Last Month
SELECT DATEADD(mm , -1 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0)) 'First Day of Last Month'
----Last Day of Last Month
SELECT DATEADD(ms , -3 , DATEADD(mm , 0 , DATEADD(mm , DATEDIFF(mm , 0 , GETDATE()) , 0))) 'Last Day of Last Month'
----First Day of Current Year
SELECT DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0) 'First Day of Current Year'
----Last Day of Current Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) + 1 , 0))) 'Last Day of Current Year'
----First Day of Last Year
SELECT DATEADD(yy , -1 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0)) 'First Day of Last Year'
----Last Day of Last Year
SELECT DATEADD(ms , -3 , DATEADD(yy , 0 , DATEADD(yy , DATEDIFF(yy , 0 , GETDATE()) , 0))) 'Last Day of Last Year'
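If you are on SQL Server 2012 or later, some of these can be written more directly; a minimal sketch, assuming the EOMONTH and DATEFROMPARTS built-ins are available (these avoid the DATEADD(ms, -3, ...) trick, which is tied to DATETIME's 1/300-second precision):

----Last Day of Current Month (returns a DATE, no time portion)
SELECT EOMONTH(GETDATE()) 'Last Day of Current Month'
----Last Day of Last Month
SELECT EOMONTH(GETDATE() , -1) 'Last Day of Last Month'
----First Day of Current Year
SELECT DATEFROMPARTS(YEAR(GETDATE()) , 1 , 1) 'First Day of Current Year'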
Categories: DBA Blogs

VirtualBox 5.0.18

Tim Hall - Tue, 2016-04-19 09:22

VirtualBox 5.0.18 has been released.

The downloads and changelog are in the usual places.

So far I’ve only installed it on Windows 7, but I’ll no doubt be doing an install on OS X El Crapitan and Oracle Linux tonight.

Internals of Querying the Concurrent Requests’ Queue – Revisited for R12.2

Pythian Group - Tue, 2016-04-19 09:01

Once upon a time I wrote about the Internal Workflow of an E-Business Suite Concurrent Manager Process. Many things have changed since that blog post, the most obvious change being the release of Oracle e-Business Suite R12.2. I decided to check whether the way the concurrent manager queues are processed by concurrent manager processes is still the same. My main goal was to see if the manager processes still don’t attempt any kind of coordination to distribute the requests among themselves.

This is how I did the testing:

  • I used the VM templates provided by Oracle to build my R12.2.4 test environment. By the way, I didn’t expect that the process of getting the environment up would be so simple! Downloading the media files from edelivery.oracle.com was the most time-consuming step; once that was done, it took me just 1 hour to uncompress everything, import the Virtual Assembly file and bring up the R12.2.4 environment on my laptop.
  • 3 Standard managers are defined by default
  • Sleep seconds were left as is = 30 seconds
  • Cache size was increased from 1 to 5.
  • Identified the 3 DB processes that belong to the Standard managers:
    select sid, serial# from v$session where module='e:FND:cp:STANDARD'
  • I enabled tracing with binds and waits for each of them like this:
    exec dbms_monitor.session_trace_enable(sid,serial#,true,true);
  • Once that was done I submitted one concurrent program – “Active users” and waited for it to complete.
  • I disabled the tracing and collected the trace files.
    exec dbms_monitor.session_trace_disable(sid,serial#);
  • Collected the trace files

I found 2 of the trace files to be very interesting. To make things more simple, the manager process “A” will be the one that executed the concurrent request, and process “B” will be the one that didn’t.

Before the “Active Users” Request Was Submitted

No other requests were running at the time I did the testing, so I clearly observed how both Managers A and B queried the FND_CONCURRENT_REQUESTS table. BOTH of the trace files displayed the same method of picking requests up from the queue. Note, I’m showing only the lines relevant to display the main query, and I have formatted the query text to make it more readable:

PARSING IN CURSOR #139643743645920 len=1149 dep=0 uid=100 oct=3 lid=100 tim=1460211399835915 hv=3722997734 ad='d275f750' sqlid='cd23u4zfyhvz6'
SELECT R.Rowid
FROM Fnd_Concurrent_Requests R
WHERE R.Hold_Flag                             = 'N'
AND R.Status_Code                             = 'I'
AND R.Requested_Start_Date                   <= Sysdate
AND (R.Node_Name1                            IS NULL
OR (R.Node_Name1                             IS NOT NULL
AND FND_DCP.target_node_mgr_chk(R.request_id) = 1))
AND (R.Edition_Name                          IS NULL
OR R.Edition_Name                            <= sys_context('userenv', 'current_edition_name'))
AND EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Programs P
  WHERE P.Enabled_Flag         = 'Y'
  AND R.Program_Application_Id = P.Application_Id
  AND R.Concurrent_Program_Id  = P.Concurrent_Program_Id
  AND EXISTS
    (SELECT NULL
    FROM Fnd_Oracle_Userid O
    WHERE R.Oracle_Id = O.Oracle_Id
    AND EXISTS
      (SELECT NULL
      FROM Fnd_Conflicts_Domain C
      WHERE P.Run_Alone_Flag = C.RunAlone_Flag
      AND R.CD_Id            = C.CD_Id
      )
    )
  AND (P.Execution_Method_Code                          != 'S'
  OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757)))
  )
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628)))
ORDER BY NVL(R.priority, 999999999),
  R.Priority_Request_ID,
  R.Request_ID
END OF STMT
EXEC #139643743645920:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399835910
FETCH #139643743645920:c=0,e=546,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211399836507
WAIT #139643743645920: nam='SQL*Net message to client' ela= 3 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211399836572

*** 2016-04-09 10:17:09.837
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000367 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429836965
...
EXEC #139643743645920:c=0,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429838767
FETCH #139643743645920:c=0,e=689,p=0,cr=106,cu=0,mis=0,r=0,dep=0,og=1,plh=3984653669,tim=1460211429839587
WAIT #139643743645920: nam='SQL*Net message to client' ela= 4 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211429839652

*** 2016-04-09 10:17:39.840
WAIT #139643743645920: nam='SQL*Net message from client' ela= 30000325 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1460211459840003
...

It’s important to observe that:

  • All manager’s processes still compete for the same requests. If the query is executed at the same time, the same list of concurrent requests will be retrieved by all processes.
  • The constants literals used in lines 30-32 mean that the query for checking the queue is still built when the concurrent manager process starts up. These constants are mainly used to implement the specializations rules in the query.
  • Only rowid for the pending requests’ rows in FND_CONCURRENT_REQUESTS are fetched.
  • The sleep time is clearly visible on lines 41,42 and 48,49
After the “Active Users” Request Was Submitted – Starting the Concurrent Request

The manager process A was the first to pick up the submitted requests and it could be observed by the “r=1” (1 row fetched) in the FETCH call for the query we just reviewed:

FETCH #139643743645920:c=0,e=437,p=0,cr=113,cu=0,mis=0,r=1,dep=0,og=1,plh=3984653669,tim=1460211519844640

Immediately after this, the manager process A locked the row in the FND_CONCURRENT_REQUESTS table; this way, the request got assigned to this process. Notice the similar where predicates used in this query; these are actually required to make sure that the request has still not been picked up by another manager process. However, the main thing here is the fact that the request row is accessed by the “rowid” retrieved earlier (row 45; the value of the bind variable “:reqname” is “AAAjnSAA/AAAyn1AAH” in this case). Locking of the row is done by the “FOR UPDATE OF R.status_code NoWait” clause on line 49:

PARSING IN CURSOR #139643743640368 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519864113 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
SELECT R.Conc_Login_Id,
  R.Request_Id,
  ... excluded other 156 columns for brevity...
FROM fnd_concurrent_requests R,
  fnd_concurrent_programs P,
  fnd_application A,
  fnd_user U,
  fnd_oracle_userid O,
  fnd_conflicts_domain C,
  fnd_concurrent_queues Q,
  fnd_application A2,
  fnd_executables E,
  fnd_conc_request_arguments X
WHERE R.Status_code             = 'I'
AND (R.Edition_Name            IS NULL
OR R.Edition_Name              <= sys_context('userenv', 'current_edition_name'))
AND R.Request_ID                = X.Request_ID(+)
AND R.Program_Application_Id    = P.Application_Id(+)
AND R.Concurrent_Program_Id     = P.Concurrent_Program_Id(+)
AND R.Program_Application_Id    = A.Application_Id(+)
AND P.Executable_Application_Id = E.Application_Id(+)
AND P.Executable_Id             = E.Executable_Id(+)
AND P.Executable_Application_Id = A2.Application_Id(+)
AND R.Requested_By              = U.User_Id(+)
AND R.Cd_Id                     = C.Cd_Id(+)
AND R.Oracle_Id                 = O.Oracle_Id(+)
AND Q.Application_Id            = :q_applid
AND Q.Concurrent_Queue_Id       = :queue_id
AND (P.Enabled_Flag            IS NULL
OR P.Enabled_Flag               = 'Y')
AND R.Hold_Flag                 = 'N'
AND R.Requested_Start_Date     <= Sysdate
AND ( R.Enforce_Seriality_Flag  = 'N'
OR ( C.RunAlone_Flag            = P.Run_Alone_Flag
AND (P.Run_Alone_Flag           = 'N'
OR NOT EXISTS
  (SELECT NULL
  FROM Fnd_Concurrent_Requests Sr
  WHERE Sr.Status_Code         IN ('R', 'T')
  AND Sr.Enforce_Seriality_Flag = 'Y'
  AND Sr.CD_id                  = C.CD_Id
  ))))
AND Q.Running_Processes                                     <= Q.Max_Processes
AND R.Rowid                                                  = :reqname
AND ((P.Execution_Method_Code                               != 'S'
OR (R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID)       IN ((0,98),(0,100),(0,31721),(0,31722),(0,31757))))
AND ((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN ((510,40032),(510,40033),(510,42156),(510,42157),(530,43793),(530,43794),(535,42626),(535,42627),(535,42628))) 
FOR UPDATE OF R.status_code NoWait

The behavior of the manager process B was a little more interesting. It too managed to fetch the same rowid from the FND_CONCURRENT_REQUESTS table belonging to the submitted "Active Users" request. However, when it tried to lock the row in FND_CONCURRENT_REQUESTS (by using exactly the same query), this happened:

PARSING IN CURSOR #139690311998256 len=4530 dep=0 uid=100 oct=3 lid=100 tim=1460211519900924 hv=4239777398 ad='cde86338' sqlid='6ya6bzgybbrmq'
...
BINDS #139690311998256:
...
Bind#2
  oacdty=01 mxl=32(18) mxlc=00 mal=00 scl=00 pre=00
  oacflg=20 fl2=1000001 frm=01 csi=873 siz=0 off=64
  kxsbbbfp=7f0c2f713f20  bln=32  avl=18  flg=01
  value="AAAjnSAA/AAAyn1AAH"
EXEC #139690311998256:c=1000,e=1525,p=0,cr=25,cu=1,mis=0,r=0,dep=0,og=1,plh=4044729389,tim=1460211519902727
ERROR #139690311998256:err=54 tim=1460211519902750

The query failed with "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired".
This is how access to pending concurrent requests is serialized, to make sure only one of the manager processes can run each request. And, I think, relying on the well-tuned and highly efficient locking mechanism of Oracle Database is a very smart idea.
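
This serialization is easy to reproduce in any two database sessions. A minimal sketch of the NOWAIT behavior (the request_id below is hypothetical; this illustrates the mechanism, not the manager's actual code):

-- Session A: lock a pending request row and hold the lock
SELECT status_code
FROM fnd_concurrent_requests
WHERE request_id = 12345
FOR UPDATE OF status_code NOWAIT;

-- Session B: the same statement, run before session A commits, does not
-- queue behind the lock; it fails immediately with:
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The NOWAIT keyword is what turns a potential lock wait into an instant error, so a manager process that loses the race can move on to the next pending request right away.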

Conclusions
  • There is still no coordination between manager processes to distribute the requests; the managers all query the queue the same way and then compete with each other to lock the requests' entries in the table first. The process that gets the lock gets to execute the concurrent request.
  • The cache size variable couldn't be observed in the trace files, but as far as I remember from my previous research, the process only fetches a "cache size" number of rowids using the first query in this post. This could be tested by submitting a larger volume of requests simultaneously.
  • The "sleep seconds" setting kicks in only when the manager process doesn't fetch any rowids from the queue. After all the cached requests have been attempted or executed by the manager process, the queue is checked again immediately, without waiting for the "sleep seconds" (not explained in detail in this post, but it's revealed in the trace files).
  • The DMLs used to query FND_CONCURRENT_REQUESTS and to lock the row are very similar to those in pre-R12.2 releases of E-Business Suite (another sign that the process hasn't changed, though one change I do see is the addition of WHERE clause predicates for checking the Editions).
Categories: DBA Blogs

April 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-04-19 07:00

Oracle today released the April 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (MOS Note 2126904.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuapr2016v3-2985753.html

My Oracle Support Note 2126904.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2126904.1 (MOS account required).

COLLABORATE16: Bots & Virtual Reality

Oracle AppsLab - Tue, 2016-04-19 03:00

Last week, Ben (@goldenmean1618) and I were in Las Vegas for COLLABORATE. We ran two studies focused on two trending topics in tech: bots and virtual reality!

Bot Focus Group

Ready for the focus group fun to begin. #oaux #CLV16 pic.twitter.com/sG0amqMGJx

— The AppsLab (@theappslab) April 12, 2016

Our timing for the bot study was perfect! The morning we were set to run our focus group on bots in the workplace, Facebook launched its bot platform for Messenger. They are not the only ones with a platform: Microsoft, Telegram, and Slack have their own platforms too.

The goal of our focus group was to generate ideas for useful bots in the workplace, ranging from a concierge bot like Facebook's to workflow bots like Slack's. To generate as many ideas as we could without groupthink, we had everyone silently write down their ideas using the "I WANT [PAT] TO…SO I CAN…" Tower of Want framework I stumbled upon at the GDC16 conference last March.

Not only does this distill the participants' motivations, intents, and needs, it also yields soft goals to guide the bot's development. Algorithms are extremely literal. The Harvard Business Review notes how social media sites were once "quickly filled with superficial and offensive material."

The algorithm was simple: find the articles with the most clicks and feed them to the users. Somewhere along the way, the goal of QUALITY highly engaged articles was lost to highly engaged articles at the expense of QUALITY. Intention is everything.

“Algorithms don’t understand trade-offs; they pursue objectives single-mindedly.”

Soft goals are in place to steer a bot away from unintended actions.

After the ideas were generated and shared, we had everyone place their bot tasks on a pain/frequency chart: How painful is this task for you to do? And how frequently do you need to do it?

Focus group preparation underway #oaux #CLV16 pic.twitter.com/74tccV5kLZ

— The AppsLab (@theappslab) April 12, 2016

Then it was time for business origami! Business origami is similar to a task flow analysis that uses folded paper cutouts as memory nudges. We now have our bot tasks, but we do not know (a) what triggers the task, (b) what the bot needs to know to do its job, and (c) what the desired output is. We modified the business origami activity with the inputs and outputs that a resource flow activity demands.

Before our customers created their own flows based on their best bot task ideas, we did a group warm-up. The flow below illustrates scheduling and booking meeting rooms. Everyone was involved as they talked about the myriad ways scheduling a meeting can be triggered, the mediums of communication used, what they would need to know in order to schedule it, and what feedback is needed when the task is done.

Business origami taking shape #oaux #CLV16 pic.twitter.com/PJARBrZGka

— The AppsLab (@theappslab) April 12, 2016

Virtual Reality Guerrilla Test

For three days, Ben and I ran a guerrilla study to get customers' and partners' thoughts on VR and where they might find it useful in their work or industry.

Getting ready for some #VR #oaux research #Collaborate16 w @iheartthannie and Ben pic.twitter.com/qfVjs9QadE

— The AppsLab (@theappslab) April 12, 2016

Our customers experienced virtual reality through the Samsung Gear VR, which relies on our Samsung Note 5 to deliver the immersive experience.

$99 Samsung Gear VR. The consumer version of the Oculus-powered head-mounted display (HMD).

Because of the makeup of our audience at the demo pod, we had to ensure that our study took approximately 5 minutes. We had two experiences to show them: an underwater encounter with a blue whale in the Arctic Ocean (theBlu) and the heart-pounding task of defusing a bomb (Keep Talking and Nobody Explodes).

Everyone really wanted to reach out and touch the sea animals. Two reached out, accidentally touched Ben and me, and freaked out at how realistic the experience was! Another case for haptic gloves?

Storage in the Oracle Cloud

Pat Shuff - Tue, 2016-04-19 02:07
This week we are going to focus on storage. Storage is a slippery slope and a difficult conversation to have. Are we talking about file synchronization like dropbox.com, google.com/docs, or box.com? Are we talking about raw block storage or long-term archive storage? There are many services available from many vendors. We are going to focus on block storage in the cloud, which can be used for files if desired or for backups of databases and virtual machines. Some cloud vendors have specialized storage offerings, like Azure Tables, which provides a NoSQL-type storage, or Amazon S3, which allows you to run a website without a web server. Today we will look at the Oracle IaaS Storage set of products. This is different from the Oracle PaaS Documents option, which is more of a Google Docs-like solution. IaaS Storage is a block of storage that you pay for on either a metered or non-metered usage basis.

From the cloud.oracle.com web page, we click on Infrastructure and follow the Storage route. We see that we get raw block storage or archive storage as options. We also have the option of an on-site cache front end that reduces latency and offers an NFS front end to users, providing more of a document management strategy than a raw block option.

Before we dive a little deeper into the options and differences between the storage appliance, spinning disk, and spinning tape in the cloud, we need to have a discussion about pricing and usage models. If you click on the Pricing tab at the top of the screen you see the screens below.

Metered pricing consists of three parts: 1) how much storage are you going to start with, 2) how much storage are you going to grow to, and 3) how much are you going to read back? Metered usage is difficult to guesstimate, and unfortunately there is a significant cost associated with being wrong. Many long-term customers of AWS S3 understand this and have gotten sticker shock when the first bill comes in. The basic cost for outbound transfer is measured on a per-GB basis: the more that you read across the internet, the more you pay. You can circumvent this by reading into a compute server in the Oracle cloud and not paying the outbound transfer at all. If, for example, you are backing up video surveillance data and uploading 24 hours of video at the end of the day, you can read the 24-hour bundle into a compute server, extract the 10-15 minutes that you are interested in, and pay the outbound charges only for the smaller video file.

Non-metered pricing consists of one part: how much storage are you going to use over the year. Oracle does not charge for the amount of data transferred inbound or outbound with this storage. You can read and write as much as you want, and there is no charge for data transfer across the internet. In the previous example, you could read the 24 hours of video from the cloud storage, throw away 90% of it from a server in your data center, and not incur any charges for the volume of transfer.

Given that pricing is difficult to calculate, we created our own spreadsheet to estimate pricing as well as part numbers that should be ordered when consuming Oracle cloud resources. The images below show the cost of 120 TB of archive storage, metered block storage, and non-metered block storage.

Note that the data transfer price is non-trivial. Reading the data back from the cloud can get significantly more expensive than the cost of the storage itself. A good rule of thumb is that the cost of spinning disk in the cloud should not exceed $30/TB/month, or about $360/TB/year. If you look at the cost of a NetApp or EMC storage system, you are looking at a $3K-$4K/TB purchase price with 10% annual maintenance ($300-$400/TB/year). If you are currently running out of storage and your NFS filer is filling up, you can purchase cloud resources for a few months and see if it works. It won't cost you much more than what you already pay in support, and you can grow your cloud storage as needed rather than buying three years ahead as you would with a filer in your data center. The key issues with cloud storage are latency and access times. Access to a filer in your data center is typically 10ms, whereas access to cloud storage is typically 80+ms. All cloud storage vendors have on-site appliance solutions that act as cache front ends to address this latency problem. Oracle has one that talks NFS. Amazon has one that talks iSCSI. Microsoft has one that talks SMB. There truly is no single vendor with a generic solution that addresses all problems.
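
As a quick sanity check of that rule of thumb, here is the arithmetic for the 120 TB example (the $3,500/TB filer price is an assumed midpoint of the $3K-$4K range above):

TB=120
echo "cloud storage per year:    $(( TB * 30 * 12 ))"    # $30/TB/month rule of thumb -> $43,200
echo "filer purchase price:      $(( TB * 3500 ))"       # one-time, assumed $3,500/TB -> $420,000
echo "filer maintenance per yr:  $(( TB * 3500 / 10 ))"  # 10% annual maintenance -> $42,000

In other words, a year of cloud storage at the rule-of-thumb price is roughly the same as the annual maintenance alone on an equivalent filer, which is what makes the try-for-a-few-months argument work.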

Enough with the business side of storage. Unfortunately, storage is a commodity, so the key conversation is economics, reliability, and security. We have already addressed economics. When it comes to reliability, the three cloud vendors address data replication and availability in different ways. Oracle triple-mirrors the data and provides public-private key encryption of all data uploaded to the cloud. Data can be mirrored to another data center in the same geography but cannot be mirrored across an ocean. This selection is done post-configuration and is tied to your account as a storage configuration.

Now to the ugly part of block storage. Traditionally, block storage has been addressed through an operating system as a logical unit or an aggregation of blocks on a disk drive. Terms like tracks and sectors bleed into the conversation. With cloud storage, none of that is part of the discussion. Storage in the cloud is just storage. It is accessed through an interface called a REST API, and the data can be created, read, updated, and deleted using HTTP calls. All of this is documented in the Oracle Documents - Cloud Storage web site.

The first step is to authenticate to the cloud site with an instance name, username, and password; what is passed back is an authentication token. Fortunately, there are a ton of tools to help read and write REST calls, specifically tuned to help create headers and JSON-structured data packets for REST API interfaces. The screen below shows the Postman interface available through Chrome. A similar one exists for Firefox, called RESTClient. Unfortunately, there is no extension for Internet Explorer.

In Postman, we get the auth header by typing the username and password into the Basic Authentication screen.

Once we are authorized, we connect to the service by going to https://storage.us2.oraclecloud.com/v1/Storage-(identity domain), where identity domain is the cloud provider account that we have been assigned. In our example we are connecting to metcsgse00029 as our identity domain and logging in as the user cloud.admin. We can see what "containers" are available by sending a GET call, or create a new container by sending a PUT call with the new container name at the end of the URL. I use the word container because the top level of storage consists of different areas. These areas are not directories. They are not file systems. They are containers that hold special properties. We can create a container that is standard storage, which represents spinning disk in the cloud, or a container that is archive storage, which represents a tape unit in the cloud. This is done by sending the X-Storage-Class header. If there is no header, the default is block storage and spinning disk; if X-Storage-Class is set to Archive, it is tape in the cloud. Some examples of creating a container are shown below, via Postman inside Chrome or from the command line.

From the command line, this would look like:

export OUID=cloud.admin
export OPASS=mypassword
export ODOMAIN=metcsgse00029
curl -is -X GET -H "X-Storage-User:Storage-$ODOMAIN:$OUID" \
     -H "X-Storage-Pass:$OPASS" \
     https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

This should return an HTTP 200 OK response with an embedded X-Auth-Token header: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706. Note that the value after "X-Auth-Token:" is what we will pass into all other requests. This token will change with each request and is good for 30 minutes from first execution. Once authentication is finished, we change the request type from a GET to a PUT and append the container name to the end. The screen above shows how to do this with Postman. The results should look like the screen below. We can do this from the command line as shown below as well.
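
Before that, to avoid copying the token by hand between calls, here is a small sketch that captures it into a shell variable (reusing the exports from the authentication example above):

# Pull the X-Auth-Token header out of the authentication response
export OTOKEN=$(curl -is -X GET -H "X-Storage-User:Storage-$ODOMAIN:$OUID" \
                     -H "X-Storage-Pass:$OPASS" \
                     https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0 \
                | awk '/X-Auth-Token/ {print $2}' | tr -d '\r')

The container examples that follow could then use -H "X-Auth-Token: $OTOKEN" instead of the pasted literal value.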

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area
In this example we create a new container from the command line called new_area. We can verify the container exists by changing the PUT to a GET.

curl -is -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN
Both of these methods allow us to see the storage that we created. I personally do not like this interface; it is not intended to be human-consumable. Uploading and downloading a file is difficult at best. A user interface that makes dragging and dropping files easy is desirable, and this is where Dropbox and Google Docs shine: they allow you to drag and drop as well as synchronize directories to cloud storage. The Oracle Storage Cloud is not intended to be that solution. It is designed so that you can drop a new library into your RMAN backup and back up straight from your database to the cloud. You can point your Commvault or Legato backup software at a cloud instance and replicate your data to the cloud. If you want a human-readable interface, you need to purchase something like Cloudberry Explorer from Cloudberry. This gives you a Windows Explorer-like interface and allows you to drag and drop files, create containers and directories, and schedule archives or backups as desired.
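
For reference, a raw upload and download through the REST interface looks like the following sketch, reusing the token and the new_area container from above (backup.dmp is a hypothetical file):

# Upload: PUT the local file as an object inside the container
curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -T backup.dmp \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area/backup.dmp

# Download: GET the same object back (-s without -i so headers don't end up in the file)
curl -s -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -o backup.dmp \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area/backup.dmp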

Note that choosing between a block storage container and an archive container is a simple menu selection. Retrieving archive storage is more involved, because the tape unit must stage the file from tape to disk and then notify you that the restoration has completed. We will defer that discussion to a later blog.
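
On the REST side, the only difference between the two container types is the X-Storage-Class header described earlier. A sketch (the container name archive_area is hypothetical):

# Create an archive (tape) container instead of the default block storage
curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -H "X-Storage-Class: Archive" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/archive_area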

Copying files is little more than dragging and dropping a file between sections of a window in Cloudberry.

For completeness, I have included the command line screen shots so that you can see the request/response of a command line interaction.

It is important to remember our objective. We can use the cloud block storage as a repository for things like databases and as a holding point for our backups. When we configure a database in the cloud, we back up and restore from this storage. This is configured in the database provisioning screen; Storage-metcsgse00029/backup is the location of RMAN backups and restores, and the backup container is created through the REST API or the Cloudberry interface. We can also attach to the cloud storage through the cloud storage appliance software, which runs inside a virtual machine, listens for NFS requests, and translates them into REST API calls. A small disk is attached to the virtual machine and acts as a cache front end to the cloud storage. As files are written via NFS, they are copied to the cloud storage. As the cache fills up, file contents are dropped from local storage and the metadata pointing to where the files are located is updated, relocating the storage to the cloud rather than the cache disk. If a file is retrieved via NFS, it is read from cache or retrieved from the cloud and inserted into the cache as it is written to the client that requested it.
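
From the client's point of view, the appliance then looks like any other NFS export. A sketch with hypothetical host and export names:

# Mount the appliance's NFS export like a normal filer share
sudo mount -t nfs storage-appliance:/cloudfs /mnt/cloud
cp backup.dmp /mnt/cloud/   # lands in the local cache, then is copied to cloud storage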

In summary, we covered the economics behind why you would select cloud storage over on-site storage. We talked about how to access the storage from a browser-based interface or the command line, and about improving latency and security. Overall, cloud-based storage is something that everyone is familiar with. Products like Facebook, Picasa, or Instagram do nothing more than store photos in cloud storage for you to retrieve when you want; you pay for these services through advertisements injected into the web page. Corporations are turning more and more toward cloud storage as a way to consume long-term storage at a much lower price. The Oracle Storage Cloud service is the first of three that we will evaluate this week.
