
Feed aggregator

Blackboard’s Messaging Problems

Michael Feldstein - Fri, 2015-07-31 15:07

By Michael Feldstein

There are a lot of things that are hard to evaluate from the outside when gauging how a company is doing under new management in the midst of a turnaround with big new products coming out. For example, how good is Ultra, Blackboard’s new user experience? (At least, I think the user experience is what they mean by “Ultra.” Most of the time.) We can look at it from the outside and play around with it for a bit, but the best way to judge it is to talk to a lot of folks who have spent time living with it and delivering courses in it. There aren’t that many of those at the moment. Blackboard has offered to put us in touch with some of them, and we will let you know what we learn from them after we talk to them.

How likely is Blackboard to deliver the promised functionality on their Ultra to-do list to other customers on schedule (or at all)? Since this is a big initiative and the company doesn’t have much of a track record, it’s hard to tell in advance of them actually releasing software. We’ll watch and report on it as it comes out.

How committed is Blackboard to self-hosted customers on the current platform? We have their word, and logical reasons why we believe they mean it when they say they want to support those customers, but we have to talk to a bunch of customers to find out what they think of the support that they are getting, and even then, we only know about Blackboard’s current execution, which is not the same as their future commitment.

So there are a lot of critical aspects about the company that are just hard and time-consuming to evaluate and will have to wait on more data.

But not everything is hard to evaluate. Communication, for example, is pretty easy to judge. Last year I mocked Jay Bhatt pretty soundly for his keynote. (Of course, we have hit D2L a lot harder for their communication issues because theirs have been a lot worse.) In some ways, it is so easy to critique communication that we have to be careful not to just take cheap shots. Everybody loves to mock vendors in general and LMS vendors in particular. We’re mainly interested in communications problems that genuinely threaten to hurt their relationship with their customers. Blackboard does have serious customer communication problems at the moment, and they do matter. I’m going to hit on a few of them.

Keynote Hits Sour Notes

Since I critiqued last year’s keynote, an update in that department is as good a place to start as any. It’s sort of emblematic of the problem. This year’s keynote was better than last year’s, but that doesn’t mean it was good. Of the two-hour presentation, only the last twenty minutes or so directly addressed the software. The rest was about values and process. I get why the company is doing this. As I said in last year’s review, they are nothing if not earnest. So, for example, when Jay Bhatt says that we need to start a “revolution” in education and that Blackboard is inviting “you”—presumably the educators in the room—to join them, it doesn’t carry the sinister tone of the slick Sillycon Valley startup CEO talking about “disrupting” education (by which they generally mean replacing dumb, mean, unionized bad people teachers with slick, nice, happy-faced software). Jay comes across as a dad and a former teacher who honestly cares about education and wants very much to do his part to improve it.

But his pitch is tone-deaf. No matter how earnest you are, you can’t take center stage as the CEO of a software company that has a long and infamous reputation for disregarding customers and making education worse rather than better and then, giant-face projected on the jumbotron and simulcast on the web, convince people that you are just a dad who wants to make education better. It doesn’t work. It’s not going to win over skeptical customers, never mind skeptical prospective customers. No matter how much you sincerely mean it. No matter how much it is said with the best of intentions.

You also can’t spend the first 90+ minutes of the keynote talking about process and then get around to admitting that your revolutionary software is a year late. Phil and I both give Jay and Blackboard tons of credit for being forthright about the delay in the keynote, and for generally showing a kind of honesty and openness that we don’t see very often from big ed tech vendors. Really, it’s rare, it’s important, and it deserves more credit than it will probably be given by a lot of people. But in terms of having the intended effect on the audience, owning up to your delivery problems in the last 10 minutes of a two-hour keynote, most of which was also not spent talking about the stuff that customers most immediately care about, will not have the desired effect. The reason Blackboard went through that first 90 minutes is that they really, really want to tell you, with all their hearts, that “Gee whiz, gang, we really do care and we really are trying super-hard to create something that will make students’ lives better.” But if the punchline, after 90+ minutes, is “…and…uh…we know we told you we’d have it done a year ago, but honestly, we mean it, we’re still working on it,” you will not win converts.

The one thing I did like very much, besides the honesty about missing their delivery dates, was the day-in-the-life walk-throughs of the software. They very compactly and effectively conveyed the quality of thought and concern for the student that the first 90 minutes of process talk did not. If you want to convince me that you really care about students, then don’t talk to me about how much you really care about the students. Show me what you have learned from them. Because talk is cheap. I won’t believe that you really care about students in a way that affects what you do in your business until you show me that you have developed a clear and actionable understanding of what students need and want and care about. That is what the walk-throughs accomplished (although they would have been even more effective with just a smidge less “golly gee” enthusiasm).

There’s one simple thing Blackboard could do that would vastly improve their keynotes and make a host of rhetorical sins more forgivable. They could bring back Ray Henderson’s annual report card. Every year, Ray would start the keynote by laying out last year’s publicly declared goals, providing evidence of progress (or not) toward those goals—quantitative evidence, whenever possible—and setting the goals for the new year. This set the right tone for the whole conference. “I made you some promises. Here’s how I did on those promises. Here’s what I’m going to do better this year. And here are some new promises.” As a customer, I will hear whatever else you have to say to me next much more charitably if you do that first. For example, Phil and I have heard a number of customers express dissatisfaction with the length of time it takes to fix bugs. At a time when Blackboard is trying to convince self-hosted customers that they will not be abandoned, it is particularly important not to let this get out of hand, because every customer who has an unaddressed bug will be tempted to read it as evidence that the company is secretly abandoning 9.1 and just lying about it. But if Blackboard leadership got up on stage—as they used to—and said, “Here’s the number of new bugs we had in the past year, here’s the average length of time that P1s go unaddressed, here’s the trend line on it, here’s our explanation of why that trend line is what it is, and here’s our promise that we will give you an update on this next year, even if it looks bad for us,” then customers are going to be much more likely to give the company the benefit of the doubt. If you’ve addressed my concerns as a customer and said your “mea culpas” first, then I’m going to be more inclined to believe that anything else you want to tell me is truthful and meant for my benefit.

What Is Ultra and What Does It Mean For Me?

Ultra Man

Another problem Blackboard has is that it is very hard to understand what they mean by “Ultra.” Sometimes they mean a user experience enabled by an architecture. Sometimes they mean a user experience that may or may not require the architecture. And at no time do they fully clarify what it means for hosting.

Here’s a webinar from last December that provides a pretty representative picture of what Blackboard’s Ultra talk is like:

Most of the “Ultra Talk” is about the user experience. So it makes sense to infer that Ultra is a new user experience which, for those with any significant experience with Blackboard or many of the other LMS providers, would suggest a new skin (or “lipstick on a pig,” as Phil recently put it). And yet, Ultra doesn’t run on the self-hosted version of Blackboard. Why is that? A cynical person would say (and cynical customers have said) that Blackboard is just trying to push people off of self-hosting. No, says Blackboard, not at all. Actually, the reason we can’t do self-hosted Ultra is because Ultra requires the new cloud architecture, which you can’t self-host.

Except for Ultra on mobile. You can experience Ultra on mobile today, even if you are running self-hosted 9.1.


OK, so if I want to run Ultra, I can’t run it self-hosted (except for mobile, which is fine). What if I’m managed hosted? Here’s the slide from that webinar:


There you go. Clear as mud. What is “Premium SaaS”? Is it managed hosting? Is it private cloud? What does it mean for current managed hosting customers? What we have found is that there doesn’t seem to be complete shared understanding even among the Blackboard management team about what the answers to these questions are. Based on what Phil and I have been able to glean about the true state of affairs, here’s how I would explain the situation if I were a Blackboard executive:

  • Ultra is Blackboard’s new product philosophy and user interface. Rather than just sticking in a new tab or drop-down menu and a new bill from a new sales team every time we add new capabilities, we’re trying to design these capabilities into the core product experience in ways that fit with how customers would naturally use them. So rather than thinking about separate products living in separate places—like Collaborate, Community, Analytics, and Content, for example—you can think about synchronous collaboration, non-course groups, student progress tracking, and content sharing naturally when and where you need those capabilities in your daily academic life.

  • Blackboard Learn Cloud [Note: This is my made-up name, not Blackboard’s official product name] is the new architecture that makes Ultra possible for Learn. It also enables you to gain all of the benefits of being in the cloud, like being super-reliable and less expensive. But with regard to Ultra, we can’t create that nifty integrated experience without adding some new technical infrastructure. Learn Cloud enables us to do that. Update: Ultra is still a work in progress and may not be ready for all professors and all courses. Luckily, Learn Cloud also runs the traditional Learn experience that is available on Learn Enterprise. So you can run Learn Cloud now without impacting your faculty and have them switch over to the Ultra experience—on the same platform—whenever they are ready for it and it is ready for them.

  • Blackboard Learn Enterprise [another Feldstein-invented name] is the classic architecture for Learn, currently on version 9.1. We think that a significant number of customers, both in the US and abroad, will continue to want to use the current architecture for a long time to come, in part because they want or need to self-host. We are committed to actively developing Learn Enterprise for as long as a significant number of customers want to use it. Our published road maps go out two years, but that doesn’t mean we only plan to develop it for another two years. It just means that it’s silly to create technology road maps that are longer than two years, given how much technology changes. Because Learn Enterprise shares a lot of code with Learn Cloud, we actually can afford to continue supporting both as long as customers are buying both in numbers. So we really do mean it when we say we plan to keep supporting Enterprise for the foreseeable future. We will also bring as much of the Ultra experience to Enterprise as the technology allows. That won’t be all or most, but it will be some. The product will continue moving forward and continue to benefit from our best thinking.

  • Self-hosted Learn Cloud isn’t going to happen any time soon, which means that self-hosted Ultra isn’t going to happen any time soon. It is possible that the technologies that we are using for Blackboard Cloud will mature enough in the future that we will be able to provide you with a self-hosted version that we feel confident that we can support. (This is a good example of why it is silly to create technology road maps that are more than two years long. Who knows what the Wizards of the Cloud will accomplish in two years?) But don’t hold your breath. For now and the foreseeable future, if you are self-hosting, you will use Learn Enterprise, and we will keep supporting and actively developing it for you.

  • Mobile is a special case because a lot of the functionality of the mobile app has lived in the cloud from Day 1 (unlike Learn Enterprise). So we can deliver the Ultra experience to your mobile apps even if you are running Learn Enterprise at home.

  • Managed hosted customers cannot run Ultra on Learn for the same reason that self-hosted customers cannot: They are currently using Learn Enterprise. They can continue to use Learn Enterprise on managed hosting for as long as they want, as long as they don’t need Ultra. We will, eventually, offer Learn Private Cloud [yet another Feldstein-invented name]. Just as it sounds, this will be a private, Blackboard-hosted instance of Blackboard Cloud. Managed Hosted clients are welcome to switch to Learn Private Cloud when it becomes available, but it is not the same as managed hosting and may or may not meet the client’s needs as well as other options. Please be sure to discuss it with your representative when it becomes available. In the meantime, we’ll provide you with detailed information about what would change if you moved from managed hosting of Blackboard Enterprise to Blackboard Cloud, along with detailed information about what the migration process would be like.

To be clear, I’m not 100% certain that what I’ve described above is factually correct (particularly the made-up names), in part because Phil and I have heard slightly different versions of the story from different Blackboard executives. The main point is that, whatever the truth is, Blackboard needs to lay it out more clearly. Right now, they are missing easy wins because they are not communicating well.

Time will tell whether Ultra pays off. I’m actually pretty impressed with what I’ve seen so far. But no matter how good it turns out to be, Blackboard won’t start winning RFPs in real numbers until they start telling their story better.

The post Blackboard’s Messaging Problems appeared first on e-Literate.

Oracle Mobile Cloud Service First Hands-On Experience

Andrejus Baranovski - Fri, 2015-07-31 12:50
Thanks to SOA Community and Jurgen Kress, I had a chance to play with Oracle MCS (Mobile Cloud Service). This new Oracle product is being promoted with full force by the Oracle PM team; there is a dedicated YouTube channel with videos to watch and learn from - Oracle Mobile Platform. Mobile Cloud Service offers a mobile enterprise repository to organize and support your mobile development. Mobile backend services, security, connectors, storage, etc. can be defined and managed in MCS. Web Services published in MCS can be monitored to track performance and errors. All this should simplify the implementation of mobile solutions.

This was my first encounter with MCS, and I would like to describe the test I did. The MCS UI is implemented with the familiar ADF Alta UI 12c. There are options to monitor and administer the MCS instance. I'm more interested in the development options:

I will not go through all the available options, but will focus only on the Mobile Backend. Basically we can define a group, where we can include various reusable business logic artefacts (APIs). Mainly these will be different Web Service calls. The same Web Service calls can then be reused by the mobile application developer.

In Mobile Backend section we can edit existing groups or create a new one:

You should think of a Mobile Backend as a group of reusable code artefacts (APIs). There is an option to create a new API or reuse an existing one. I decided to reuse an existing API for incident registration:

This API implements a REST Web Service call to register a new incident; it also allows querying information about previously reported incidents. This can be tested directly in the MCS environment: we can define sample payload data and simulate the Web Service call to register a new incident:
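
For readers who want to picture what such a call could look like from client code, here is a minimal sketch in plain Java. This is my illustration, not part of the original test: the endpoint path, the backend ID, the credentials and the JSON payload fields are placeholders - the real values come from the Mobile Backend configured in your own MCS instance.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: endpoint path, backend ID, credentials and payload fields are placeholders.
public class RegisterIncident {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://mymcs.example.com/mobile/custom/incidentreport/incidents");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        // An MCS mobile backend is identified by an ID and protected by basic auth or OAuth
        conn.setRequestProperty("Oracle-Mobile-Backend-Id", "YOUR_BACKEND_ID");
        conn.setRequestProperty("Authorization", "Basic " + "BASE64_ENCODED_CREDENTIALS");
        conn.setDoOutput(true);

        String payload = "{\"title\":\"Leaking water heater\",\"customer\":\"John Doe\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // A 2xx status means the incident was registered; the response body carries the new ID
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}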

The Web Service call is successful, and we can observe this from the log - a new incident is registered and an ID is assigned. The same Web Service will be reused from the mobile application. With MCS we can monitor Web Service usage, number of invocations, errors, etc. - this makes it easier to manage the entire infrastructure for mobile solutions:

To make sure the new incident was successfully registered, I could run another REST call against the same Web Service - to get incident information by ID:

The result shows the incident data, which means the incident was located successfully:

The incident registration service is registered in the APIs group; we can edit and test this Web Service online in MCS:

The Red Samurai mobile backend service is live - invocation statistics and processing time metrics are aggregated by MCS:

Less Performance Impact with Unified Auditing in #Oracle 12c

The Oracle Instructor - Fri, 2015-07-31 04:56

There is a new auditing architecture in place with Oracle Database 12c, called Unified Auditing. Why would you want to use it? Because it has significantly less performance impact than the old approach. Audit records are now buffered in the SGA and written asynchronously to disk; that's the trick.

Other benefits of the new approach are that we now have one centralized way (and one syntax) to deal with all the various auditing features that have been introduced over time, like Fine Grained Auditing etc. But the key improvement in my opinion is the reduced performance impact, because that was often hurting customers in the past. Let's see it in action! First, I will record a baseline without any auditing:


[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release Production on Fri Jul 31 08:54:32 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select value from v$option where parameter='Unified Auditing';


SQL> @audit_baseline

Table truncated.

Noaudit succeeded.

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Elapsed: 00:00:06.07

PL/SQL procedure successfully completed.
SQL> host cat audit_baseline.sql
connect / as sysdba
truncate table aud$;
noaudit select on adam.sales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

So that is just 100k SELECTs against a 600 MB table with an index on ID, without any auditing so far. Key sections of the AWR report for the baseline:


The most resource consuming SQL in that period was the AWR snapshot itself. Now let’s see how the old way to audit impacts performance here:

SQL>  show parameter audit_trail

NAME                                     TYPE        VALUE
---------------------------------------- ----------- ----------------------------------------
audit_trail                              string      DB, EXTENDED
SQL> @oldaudit

Table truncated.

Audit succeeded.

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Elapsed: 00:00:56.42

PL/SQL procedure successfully completed.
SQL> host cat oldaudit.sql
connect / as sysdba
truncate table aud$;
audit select on adam.sales by access;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was almost 10 times slower! The AWR report confirms that and shows why it is so much slower now:


It’s because of the 100k inserts into the audit trail, done synchronously to the SELECTs. The audit trail is showing them here:


SQL> select sql_text,sql_bind from dba_audit_trail where rownum<=10; 
SQL_TEXT                                           SQL_BIND 
-------------------------------------------------- ---------- 
10 rows selected. 
SQL> select count(*) from dba_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';


Now I will turn on Unified Auditing – that requires a relinking of the software while the database is down. Afterwards:

SQL> select value from v$option where parameter='Unified Auditing';


SQL> @newaudit

Audit policy created.

Audit succeeded.

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Elapsed: 00:00:11.90

PL/SQL procedure successfully completed.
SQL> host cat newaudit.sql
connect / as sysdba
create audit policy audsales actions select on adam.sales;
audit policy audsales;
exec dbms_workload_repository.create_snapshot

connect adam/adam
set timing on
declare v_product adam.sales.product%type;
begin
for i in 1..100000 loop
select product into v_product from adam.sales where id=i;
end loop;
end;
/
set timing off

connect / as sysdba
exec dbms_workload_repository.create_snapshot

That was still slower than the baseline, but much better than with the old method! Let’s see the AWR report for the last run:



Similar to the first (baseline) run, the snapshot is the most resource consuming SQL during the period. DB time as well as elapsed time are shorter by far than with the old audit architecture. The 100k SELECTs together with the bind variables have been captured here as well:

SQL> select sql_text,sql_binds from unified_audit_trail where rownum<=10; 
SQL_TEXT                                                     SQL_BINDS 
------------------------------------------------------------ ---------- 
create audit policy audsales actions select on adam.sales 
audit policy audsales 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):1 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):2 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):3 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):4 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):5 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):6 
SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1                   #1(1):7 
10 rows selected. 
SQL> select count(*) from unified_audit_trail where sql_text like '%SELECT PRODUCT FROM ADAM.SALES WHERE ID=:B1%';


The first three lines above show that sys operations are also recorded in the same (Unified!) Audit Trail, by the way. There is much more to say and to learn about Unified Auditing of course, but this may give you a kind of motivation to evaluate it, especially if you have had performance issues in the past related to auditing. As always: Don’t believe it, test it! :-)

Tagged: 12c New Features, Performance Tuning, security
Categories: DBA Blogs

Oracle Cloud - Modern & Flexible Cloud for Modern Business

Peeyush Tugnawat - Fri, 2015-07-31 01:07

Oracle offers the most comprehensive portfolio of cloud computing solutions in the industry today. Whatever your cloud needs, Oracle has the complete solution for you.

Discoverer and Windows 10

Michael Armstrong-Smith - Thu, 2015-07-30 22:33
Hi everyone
Has anyone had the courage to upgrade to Windows 10 and see if Discoverer Plus still works?

How about the Discoverer server? Anyone tried that.

If you have drop me a reply


August 6, 2015: Oracle ERP Cloud Customer Forum―The Rancon Group

Linda Fishman Hoyle - Thu, 2015-07-30 17:57

Join us for another Oracle Customer Reference Forum on August 6, 2015, at 9:00 a.m. PT to hear Steven Van Houten, CFO at The Rancon Group. The company is a leader in Southern California community development, commercial building, and land use.

During this Customer Forum call, Van Houten will share with you The Rancon Group’s lessons learned during its implementation and the benefits it is receiving by using Oracle ERP Cloud. He will explain how Oracle ERP Cloud helps The Rancon Group make intelligent decisions, get information out to its mobile workforce, and meet its needs now and in the future.

Register now to attend the live Forum on Thursday, August 6, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time.

CVSS Version 3.0 Announced

Oracle Security Team - Thu, 2015-07-30 16:04

Hello, this is Darius Wiles.

Version 3.0 of the Common Vulnerability Scoring System (CVSS) has been announced by the Forum of Incident Response and Security Teams (FIRST). Although there have been no high-level changes to the standard since the Preview 2 release which I discussed in a previous blog post, there have been a lot of improvements to the documentation.

Soon, Oracle will be using CVSS v3.0 to report CVSS Base scores in its security advisories. In order to facilitate this transition, Oracle plans to release two sets of risk matrices, both CVSS v2 and v3.0, in the first Critical Patch Update (Oracle’s security advisories) to provide CVSS version 3.0 Base scores. Subsequent Critical Patch Updates will only list CVSS version 3.0 scores.

While Oracle expects most vulnerabilities to have similar v2 and v3.0 Base Scores, certain types of vulnerabilities will experience a greater scoring difference. The CVSS v3.0 documentation includes a list of examples of public vulnerabilities scored using both v2 and v3.0, and this gives an insight into these scoring differences. Let’s now look at a couple of reasons for these differences.

The v3.0 standard provides a more precise assessment of risk because it considers more factors than the v2 standard. For example, the important impact of most cross-site scripting (XSS) vulnerabilities is that a victim's browser runs malicious code. v2 does not have a way to capture the change in impact from the vulnerable web server to the impacted browser; basically v2 just considers the impact to the former. In v3.0, the Scope metric allows us to score the impact to the browser, which in v3.0 terminology is the impacted component. v2 scores XSS as "no impact to confidentiality or availability, and partial impact to integrity", but in v3.0 we are free to score impacts to better fit each vulnerability. For example, a typical XSS vulnerability, CVE-2013-1937 is scored with a v2 Base Score of 4.3 and a v3.0 Base Score of 6.1. Most XSS vulnerabilities will experience a similar CVSS Base Score increase.
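
To make the difference concrete, a typical reflected XSS flaw is scored under v2 with a vector along the lines of AV:N/AC:M/Au:N/C:N/I:P/A:N, which works out to 4.3, while the usual v3.0 scoring captures the scope change to the victim's browser with a vector along the lines of CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N, which works out to 6.1. These vectors are offered here only as an illustration of the typical XSS pattern; the authoritative scoring for any particular CVE is the one published in the CVSS v3.0 examples list mentioned above.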

Until now, Oracle has used a proprietary Partial+ metric value for v2 impacts when a vulnerability "affects a wide range of resources, e.g., all database tables, or compromises an entire application or subsystem". We felt this extra information was useful because v2 always scores vulnerabilities relative to the "target host", but in cases where a host's main purpose is to run a single application, Oracle felt that a total compromise of that application warrants more than Partial. In v3.0, impacts are scored relative to the vulnerable component (assuming no scope change), so a total compromise of an application now leads to High impacts. Therefore, most Oracle vulnerabilities scored with Partial+ impacts under v2 are likely to be rated with High impacts and therefore more precise v3.0 Base scores. For example, CVE-2015-1098 has a v2 Base score of 6.8 and a v3.0 Base score of 7.8. This is a good indication of the differences we are likely to see. Refer to the CVSS v3.0 list of examples for more details on how this vulnerability was scored.

Overall, Oracle expects v3.0 Base scores to be higher than v2, but bear in mind that v2 scores are always relative to the "target host", whereas v3.0 scores are relative to the vulnerable component, or the impacted component if there is a scope change. In other words, CVSS v3.0 will provide a better indication of the relative severity of vulnerabilities because it better reflects the true impact of the vulnerability being rated in software components such as database servers or middleware.

For More Information

The CVSS v3.0 documents are located on FIRST's web site at

Oracle's use of CVSS [version 2], including a fuller explanation of Partial+ is located at

My previous blog post on CVSS v3.0 preview is located at

Eric Maurice's blog post on Oracle's use of CVSS v2 is located at

Oracle Priority Support Infogram for 30-JUL-2015

Oracle Infogram - Thu, 2015-07-30 13:09

Open World
Oracle OpenWorld 2015 - Registrations Open, from Business Analytics - Proactive Support.
Oracle Support
Top 5 Ways to Personalize My Oracle Support, from the My Oracle Support blog.
A set of three updates from Upgrade your Database - NOW! in this issue:
ORAchk - How to log SRs and ERs for ORAchk
Things to consider BEFORE upgrading to Oracle to AVOID poor performance and wrong results
Optimizer Issue in Oracle "Reduce Group By"
Upgrade your SES Database From to for the PeopleSoft Search Framework, from the PeopleSoft Technology Blog.
JShell and REPL in Java 9, from The Java Source.
Modifying the run configuration for the JUnit test runner, from Andreas Fester's Blog.
Learn About Queries, Stored Routines, and More MySQL Developer Skills, from Oracle's MySQL Blog.
Fusion Applications
Careful Use of Aggregate Functions, from the Fusion Applications Developer Relations blog.
ADF Goodies – Conveyor Belt Component and Alta UI, from WebLogic Partner Community EMEA.
And from the same source:
Create and set clientAttribute to ADF Faces component programmatically to pass value on client side JavaScript
Docker coming to Oracle Solaris, from the Oracle Solaris blog.
Live storage migration for kernel zones, from The Zones Zone blog.
Ops Center
Recovering LDoms From a Failed Server, from the Ops Center blog.
From the Oracle E-Business Suite Support blog:
Webcast: Setup & Troubleshooting Dunning Plans in Oracle Advanced Collections
Troubleshooting the Closing of Work Orders in EAM and WIP
From the Oracle E-Business Suite Technology blog:
Database Certified with EBS 11i on Additional Platforms
Transportable Database 12c Certified for EBS 12.2 Database Migration
Quarterly EBS Upgrade Recommendations: July 2015 Edition

Why Move to Cassandra?

Pythian Group - Thu, 2015-07-30 12:05

Nowadays Cassandra is getting a lot of attention, and we’re seeing more and more examples of companies moving to Cassandra. Why is this happening? Why are companies with solid IT structures and internal knowledge shifting, not only to a different paradigm (Read: NoSQL vs SQL), but also to completely different software? Companies don’t simply move to Cassandra because they feel like it. A drive or need must exist. In this post, I’m going to review a few use cases and highlight some of the interesting parts to explain why these particular companies adopted Cassandra. I will also try to address concerns about Cassandra in enterprise environments that have critical SLAs and other requirements. And at the end of this post, I will go over our experience with Cassandra.

Cassandra Use Cases

Instagram

Cutting costs. How? Instagram was using an in-memory database before moving to Cassandra. Memory is expensive compared to disk. So if you do not need the advanced performance of an in-memory datastore, Cassandra can deliver the performance you need and help you save money on storage costs. Plus, as mentioned in the use case, Cassandra allows Instagram to continually add data to the cluster.  They also loved Cassandra’s reliability and availability features.


eBay

Cassandra proved to be the best technology, among the ones they tested, for their scaling needs. With Cassandra, eBay can look up historical behavioral data quickly and update its recommendation models with low latency. eBay has deployed Cassandra across multiple data centers.


Spotify

Spotify moved to Cassandra because it's a highly reliable and easily scalable datastore. Their old datastore was not able to keep up with the volume of writes and reads they had. Cassandra's scalability, with its multi-datacenter replication, plus its reliability, proved to be a hit for them.


They were looking for three things: scale, availability, and active-active deployment. Only Cassandra provided all of them. Their transition to Cassandra went smoothly, and they enjoy the ease of development Cassandra offers.

Cassandra brings something new to the game

NoSQL existed before Cassandra. There were also mature technologies available when Cassandra was released. So why didn't companies move to those technologies?

Like the subtitle says, Cassandra brings something new to the game. In my experience, and as discussed in some of the use cases above, one of its strongest points is Cassandra's ease of use. Once you know how to configure Cassandra, it's almost "fire-and-forget"! It just works. In an era like ours, where you see new technologies appear every day, on different stacks, with different dependencies, Cassandra's easy installation and basic configuration are refreshingly simple, which leads us to…

Scalability!! Yes it scales linearly. This level of scalability, combined with its ease of deployment, takes your infrastructure to another level.

Last but not least, Cassandra is highly flexible. You can tweak your consistency settings per transaction. You need more speed? Pick less consistency. You want data integrity? Push those consistency settings up. It is up to you, your project, and your requirements. And you can easily change it.
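
As a rough illustration of how little code that tuning takes, here is a minimal sketch using the DataStax Java driver (the 3.x API is assumed); the contact point, keyspace and table names are placeholders rather than anything taken from the use cases above.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch only: per-statement consistency tuning with the DataStax Java driver.
public class TunableConsistencyDemo {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo_ks")) {

            // Fast, best-effort read: an answer from a single replica is enough
            SimpleStatement fastRead = new SimpleStatement("SELECT * FROM users WHERE id = 42");
            fastRead.setConsistencyLevel(ConsistencyLevel.ONE);
            Row quick = session.execute(fastRead).one();

            // Same query with a stronger guarantee: a majority of replicas must answer
            SimpleStatement safeRead = new SimpleStatement("SELECT * FROM users WHERE id = 42");
            safeRead.setConsistencyLevel(ConsistencyLevel.QUORUM);
            Row safe = session.execute(safeRead).one();

            System.out.println(quick + " / " + safe);
        }
    }
}

The same application can mix both levels freely, which is exactly the flexibility described above: the trade-off between speed and consistency is made per request, not per cluster.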

Also don’t forget its other benefits: open source, free, geo-replication, low latency etc…

Pythian’s Experience with Cassandra

Cassandra is not without its challenges. Like I said earlier, it is a new technology that makes you think differently about databases. And because it's easy to deploy and work with, it can lead to mistakes that could seriously impact scalability and application/service performance when they start to scale.

And that is where we come in. We ensure that companies just starting out with Cassandra have well built and well designed deployments, so they don’t run into these problems. Starting with a solid architecture plan for a Cassandra deployment and the correct data model can make a whole lot of difference!

We've seen some deployments that started out well but, without proper maintenance, fell into some of the pitfalls or edge cases mentioned above. We help out by fixing the problem and/or providing recommended changes to the original deployment, so it will keep performing well without issues! And because Cassandra delivers high resilience, many of these problems can be solved without having to deal with downtime.

Thinking about moving to Cassandra? Not sure if open source or enterprise is right for you? Need project support? Schedule a free assessment so we can help you with next steps!

The post Why Move to Cassandra? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Advantages of using REST-based Integrations in PeopleSoft

Javier Delgado - Thu, 2015-07-30 08:49
Support for REST-based services was introduced in PeopleTools 8.52, although you may also build your own REST services using IScripts in previous releases (*). With PeopleTools 8.52, Integration Broker includes support for REST services, enabling PeopleSoft to act as both a consumer and a provider.

What is REST?
There is plenty of documentation on the Web about REST, its characteristics and benefits. I personally find the tutorial published by Dr. Elkstein particularly illuminating.

In a nutshell, REST can be seen as a lightweight alternative to other traditional Web Services mechanisms such as RPC or SOAP. A REST integration has considerably less overhead than the two previously mentioned methods, and as a result is more efficient for many types of integrations.

Today, REST is the dominant standard for mobile applications (many of which use REST integrations to interact with the backend) and Rich Internet Applications using AJAX.

PeopleSoft Support
As I mentioned before, PeopleSoft support was included in PeopleTools 8.52. This included the possibility to use the Provide Web Service Wizard for REST services on top of the already supported SOAP services. Also, the Send Master and Handler Tester utilities were updated so they could be used with REST.

PeopleTools 8.53 delivered support for one of the most interesting features of REST GET integrations: caching. Using this feature, PeopleSoft can, as a service provider, indicate that the response should be cached (using the SetRESTCache method of the Message object). In this way, the next time a consumer asks for the service, the response will be retrieved from the cache instead of executing the service again. This is particularly useful when the returned information does not change very often (ie.: list of countries, languages, etc.), and can lead to performance gains over a similar SOAP integration.

PeopleTools 8.54 brought, as in many other areas, significant improvements to the PeopleSoft support. In the first place, the security of inbound services (in which PeopleSoft acts as the provider) was enhanced to require that the services are consumed using SSL, basic HTTP authentication, basic HTTP authentication and SSL, or none of these.

On top of that, Query Access Services (QAS) were also made accessible through REST, so the creation of new provider services can be as easy as creating a new query and exposing it to REST.
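
To give a feel for how light a REST consumer can be, here is a minimal sketch in plain Java of calling such a provider service. The URL below is a made-up placeholder (take the real target location from the service operation in Integration Broker), and basic HTTP authentication over SSL is assumed, as described above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch only: URL, user and password are placeholders, not a documented QAS path.
public class PeopleSoftRestClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://ps.example.com/PSIGW/RESTListeningConnector/MY_QUERY.v1/json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Basic HTTP authentication, one of the inbound security options mentioned above
        String credentials = Base64.getEncoder()
                .encodeToString("PSUSER:password".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            System.out.println("HTTP " + conn.getResponseCode() + ": " + body);
        }
    }
}

Because it is a plain HTTP GET returning JSON, the same call works unchanged from a mobile app, an AJAX page, or any scripting language.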

Finally, the new Mobile Application Platform (an alternative to FLUID for mobilising PeopleSoft contents) also uses REST as a cornerstone.

Although REST support is relatively new compared to SOAP web services, it has been supported by PeopleSoft for a while now. Its efficiency and performance (remember GET service caching) make it an ideal choice for multiple integration scenarios. I'm currently building a mobile platform that interacts with PeopleSoft using REST services. This is keeping me busy, and you may have noticed that I'm not posting so regularly on this blog, but hopefully some time from now I will be able to share with you some lessons learned from a large-scale REST implementation.

(*) Although it's possible to build REST services using IScripts, the Integration Broker solution introduced in PeopleTools 8.52 is considerably easier to implement and maintain. So, if you are on PeopleTools 8.52 or higher, Integration Broker would be the preferred approach. If you are on an earlier release, a PeopleTools upgrade would actually be the preferred approach, but I understand there might be other constraints. :)

Using Shared AM to Cache and Display Table Data

Andrejus Baranovski - Wed, 2015-07-29 23:12
This post is based on Steve Muench sample Nr. 156. In my personal opinion, the ADF samples implemented by Steve Muench still remain one of the best sources of examples and solutions for various ADF use cases. Sample Nr. 156 describes how to use a Shared AM to display cached data in a UI table. Typically Shared AMs are used to implement cached LOVs (session or application scope). But it can go beyond LOVs: based on the use case, we could display cached data in a table or form. I have tested this approach with 12c and it works fine.

Download the sample application - the AM is defined with application scope cache level, which means cached data will be available to multiple users:

In order to display cached data on the UI and pass it through the ADF bindings layer, we need to use the Shared AM configuration in the bindings:

You should create the new Data Control reference entry manually in the DataBindings.cpx file. JDeveloper doesn't provide an option to select a Shared AM configuration. Simply change the configuration property to the shared one (HrModuleShared as per my example):

Make sure to use the correct Data Control entry for the iterator in the Page Definition. The cached table iterator binding should point to the shared Data Control configuration:

This is how it looks on the UI - read-only table data is fetched once and cached in the application scope cache. Other users will reuse the cached data, without re-fetching it from the DB:

The Jobs VO is set with the AutoRefresh = true property. This turns on the DB change notification listener mechanism and keeps VO data in sync when changes happen in the DB. This helps to auto-refresh the cached VO (read more about it in Auto Refresh for ADF BC Cached LOV):

Here is the test. Let's change Job Title attribute value in DB:

Click on any row from the Jobs table, or use any button (make a new request). The cached VO will be re-executed and new data will be fetched from the DB, including the latest changes:

You should see in the log that the DB change notification was received, the VO was re-executed, and VO data was re-fetched:

The role of Coherence in Batch

Anthony Shorten - Wed, 2015-07-29 19:48

Lately I have been talking to partners and customers on older versions of the Oracle Utilities Application Framework, and they are considering upgrading to the latest version. One of the major questions they ask is about the role of Oracle Coherence in our architecture. Here are some clarifications:

  • We supply a subset of the runtime Oracle Coherence libraries we use in the batch architecture with the installation. It does not require a separate Oracle Coherence license (unless you are intending to use Coherence for some customizations, which does require the license).
  • We only use a subset of the Oracle Coherence API, around the cluster management and load balancing of the batch architecture. If you are a customer who uses the Oracle Coherence Pack within Oracle Enterprise Manager for monitoring the batch component, it is not recommended at the present time. The Coherence Pack will report that components are missing and therefore give erroneous availability information. We have developed our own monitoring API within the framework that is exposed via the Oracle Application Management Pack for Oracle Utilities.
  • The idea behind the use of Oracle Coherence is as follows:
    • The Batch Architecture uses a Coherence based Cluster. This can be configured to use uni-cast or multi-cast to communicate across the cluster.
    • A cluster has a number of members (also known as nodes to some people). In our case members are threadpools and job threads.
    • A threadpool is basically a running Java Virtual Machine, preloaded with the Oracle Utilities Application Framework, ready to accept work. The reason we use threadpools is that when you execute java processes in Oracle Utilities Application Framework, there is an overhead in memory of loading the framework cache and objects, as well as java itself, before a job can execute. By creating a threadpool, this overhead is minimized and the threadpool can be used across lots of job threads.
    • Threadpools are named (this is important) and have a thread limit (this is a batch limit, not a Java limit, as batch threads are heavier than online threads; "heavier" is used to describe batch because batch threads are long running, while online threads are typically short running).
    • When a threadpool is started, locally or remotely, it is added to the cluster. A named threadpool can have multiple instances (even on the same machine). The threadpool limit is the sum of the limits across all its instances.
    • When a batch job thread is executed (some jobs are single threaded, others multi-threaded) it is submitted to the cluster. Oracle Coherence then load balances those threads across the named threadpool allocated in the job thread parameters.
    • Oracle Coherence tracks the threadpools and batch job threads so that if any failure occurs then the thread and threadpool are aware. For example, if a threadpool crashes the cluster is made aware and the appropriate action can be taken. This keeps the architecture in synchronization at all times.
  • We have built a wizard (bedit) to help build the properties files that drive the architecture. This covers clusters, threadpools and even individual batch jobs.
  • When building a cluster we tend to recommend the following:
    • Create a cache threadpool per machine to minimize member-to-member network traffic. A cache threadpool does not run jobs; it just acts as a co-ordination point for Oracle Coherence. Without a cache threadpool, each member communicates with every other member, which can be quite a lot of networking when you have a complex network of members (including lots of active batch job threads).
    • Create an administration threadpool with no threads to execute. This is just a configuration concept where you can connect to JMX via this member. The JMX API is available from any active threadpool, but it is a good practice to isolate JMX traffic from other traffic.
    • Create a pool of threadpools to cover key jobs and other pools for other jobs. The advantage is for monitoring and controlling resources within the JVM.

For more information about this topic and other advice on batch refer to the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

Upgrade your SES Database From to for the PeopleSoft Search Framework

PeopleSoft Technology Blog - Wed, 2015-07-29 17:33
An Oracle database Upgrade from to is available for Secure Enterprise Search (SES) with PeopleSoft.  This document on My Oracle Support provides step by step instructions for performing the upgrade.  Note that this upgrade is available for PeopleTools 8.53 or higher on Unix/Linux environments.

I Wish I Sold More

Cary Millsap - Wed, 2015-07-29 17:26
I flew home yesterday from Karen’s memorial service in Jacksonville, on a connecting flight through Charlotte. When I landed in Charlotte, I walked with all my stuff from my JAX arrival gate (D7) to my DFW departure gate (B15). The walk was more stressful than usual because the airport was so crowded.

The moment I set my stuff down at B15, a passenger with expensive clothes and one of those permanent grins established eye contact, pointed his finger at me, and said, “Are you in First?”

Wai... Wha...?

I said, “No, platinum.” My first instinct was to explain that I had a right to occupy the space in which I was standing. It bothers me that this was my first instinct.

He dropped his pointing finger, and his eyes went no longer interested in me. The big grin diminished slightly.

Soon another guy walked up. Same story: the I’m-your-buddy-because-I’m-pointing-my-finger-at-you thing, and then, “First Class?” This time the answer was yes. “ALRIGHT! WHAT ROW ARE YOU IN?” Row two. “AGH,” like he’d been shot in the shoulder. He holstered his pointer finger, the cheery grin became vaguely menacing, and he resumed his stalking.

One guy who got the “First Class?” question just stared back. So, big-grin guy asked him again, “Are you in First Class?” No answer. Big-grin guy leaned in a little bit and looked him square in the eye. Still no answer. So he leaned back out, laughed uncomfortably, and said half under his breath, “Really?...”

I pieced it together watching this big, loud guy explain to his traveling companions so everybody could hear him, he just wanted to sit in Row 1 with his wife, but he had a seat in Row 2. And of course it will be so much easier to take care of it now than to wait and take care of it when everybody gets on the plane.

Of course.

This is the kind of guy who sells things to people. He has probably sold a lot of things to a lot of people. That’s probably why he and his wife have First Class tickets.

I’ll tell you, though, I had to battle against hoping he’d hit his head and fall down on the jet bridge (I battled coz it’s not nice to hope stuff like that). I would never have said something to him; I didn’t want to be Other Jackass to his Jackass. (Although people might have clapped if I had.)

So there’s this surge of emotions, none of them good, going on in my brain over stupid guy in the airport. Sales reps...

This is why Method R Corporation never had sales reps.

But that’s like saying I’ve seen bad aircraft engines before and so now in my airline, I never use aircraft engines. Alrighty then. In that case, I hope you like gliders. And, hey: gliders are fine if that makes you happy. But a glider can’t get me home from Florida. Or even take off by itself.

I wish I sold more Method R software. But never at the expense of being like the guy at the airport. It seems I’d rather perish than be that guy. This raises an interesting question: is my attitude on this topic just a luxury for me that cheats my family and my employees out of the financial rewards they really deserve? Or do I need to become that guy?

I think the answer is not A or B; it’s C.

There are also good sales people, people who sell a lot of things to a lot of people, who are nothing like the guy at the airport. People like Paul Kenny and the honorable, decent, considerate people I work with now at Accenture Enkitec Group who sell through serving others. There were good people selling software at Hotsos, too, but the circumstances of my departure in 2008 prevented me from working with them. (Yes, I do realize: my circumstances would not have prevented me from working with them if I had been more like the guy at the airport.)

This need for duality—needing both the person who makes the creations and the person who connects those creations to people who will pay for them—is probably the most essential of the founder’s dilemmas. These two people usually have to be two different people. And both need to be Good.

In both senses of the word.

Three Steps to Get Big Data Ready for HR

Linda Fishman Hoyle - Wed, 2015-07-29 13:02

A Guest Post by Melanie Hache-Barrois, Oracle HCM Strategy Director, Southern Europe

Big data will revolutionize HR practices―here is how to hit the ground running with your implementation.

Big data analytics promises to deliver new insights into the workforce; these insights can help HR better predict trends and policy outcomes, and thereby, make the right decisions. It has the power to help HR to predict and plan organizational performance, to minimize the cost, time, and risk of taking on new HR initiatives, and to understand, develop, and maintain a productive workforce over a single technology platform, and much more.

Big data analytics has a huge role to play in the future of HR, but it is important that HR teams get prepared in the right way. Here are our tips to make sure that your data is ready for the big data revolution.


1.  Remove data ‘islands’

The first step is to identify what kind of data you need for a truly successful HR strategy. Too often, HR teams experience data organized in silos, cut off from the rest of the organization. The migration to big data provides the perfect opportunity to identify data islands within your HR systems and define a strategy to integrate and reorganize them.

2.  Use a single interface

Understanding how data is collected within your organization is fundamental to a successful big data strategy. You have to avoid ‘copy and paste’ practices, and instead, make sure that data is collected automatically, and seamlessly integrated in one interface. The less you have to manually record information and integrate it into your HR systems, the better. It is therefore crucial to choose a single simple interface that will collect all your data and make it easily accessible to your team.

3.  Start simple

Once you have chosen the type of data you need and the way and where you will collect it, you can decide the kind of analytics you need. To be efficient and keep it simple, you can start with simple correlations to understand how big data analytics works and what kind of results you can get. You can then slowly increase the analytical complexity, heading to predictive analytics.

These three steps will ensure that Oracle's big data solution will help you deliver an enhanced HR strategy that meets your corporate goals.

My Friend Karen

Cary Millsap - Wed, 2015-07-29 11:54
My friend Karen Morton passed away on July 23, 2015 after a four-month battle against cancer. You can hear her voice here.

I met Karen Morton in February 2002. The day I met her, I knew she was awesome. She told me the story that, as a consultant, she had been doing something that was unheard-of. She guaranteed her clients that if she couldn’t make things on their systems go at least X much faster on her very first day, then they wouldn’t have to pay. She was a Give First person, even in her business. That is really hard to do. After she told me this story, I asked the obvious question. She smiled her big smile and told me that her clients had always paid her—cheerfully.

It was an honor when Karen joined my company just a little while later. She was the best teammate ever, and she delighted every customer she ever met. The times I got to work with Karen were bright spots in my life, during many of the most difficult years of my career. For me, she was a continual source of knowledge, inspiration, and courage.

This next part is for Karen’s family and friends outside of work. You know that she was smart, and you know she was successful. What you may not realize is how successful she was. Your girl was famous all over the world. She was literally one of the top experts on Earth at making computing systems run faster. She used her brilliant gift for explaining things through stories to become one of the most interesting and fun presenters in the Oracle world to go watch, and her attendance numbers proved it. Thousands of people all over the world know the name, the voice, and the face of your friend, your daughter, your sister, your spouse, your mom.

Everyone loved Karen’s stories. She and I told stories and talked about stories, it seems like, all the time we were together. Stories about how Oracle works, stories about helping people, stories about her college basketball career, stories about our kids and their sports, ...

My favorite stories of all—and my family’s too—were the stories about her younger brother Ted. These stories always started out with some middle-of-the-night phone call that Karen would describe in her most somber voice, with the Tennessee accent turned on full-bore: “Kar’n: This is your brother, Theodore LeROY.” Ted was Karen’s brother Teddy Lee when he wasn’t in trouble, so of course he was always Theodore LeROY in her stories. Every story Karen told was funny and kind.

We all wanted to have more time with Karen than we got, but she touched and warmed the lives of literally thousands of people. Karen Morton used her half-century here on Earth with us as well as anyone I’ve ever met. She did it right.

God bless you, Karen. I love you.

August 12: Atradius Collections Oracle Sales Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2015-07-29 11:06

Join us for another Oracle Customer Reference Forum on August 12th, 2015 at 8:00 a.m. PT / 11:00 a.m. ET / 5:00 p.m. CEST.

Sonja van Haasteren, Global Customer Experience Manager of Atradius Collections, will talk about the company’s journey with Oracle CX products focused on Oracle Sales Cloud with Oracle Marketing Cloud and its path to expand with Oracle Data Cloud.

Atradius Collections is a global leader in trade-invoice-collection services. It provides solutions to recover domestic and international trade invoices. Atradius Collections handles more than 100,000 cases a year for more than 14,500 customers, covering over 200 countries.

Register now to confirm your attendance for this informative event on August 12.

TekStream Reduces Project Admin Costs by 30% with Oracle Documents Cloud

WebCenter Team - Wed, 2015-07-29 07:54

Read this latest announcement from Oracle to find out how TekStream Solutions, a solution services company in North America, streamlined project management and administration and improved client project delivery with Oracle Documents Cloud Service, an enterprise-grade cloud collaboration and file sync and share solution. Within the first month of use, TekStream cut project administration costs by 30% and reduced complexity, delivering client results faster and giving both its consultants and its clients a better project experience.

And here's a brief video with Judd Robins, executive vice president of Consulting Services at TekStream Solutions, in which he discusses the specific areas where TekStream was looking to improve and how Oracle Documents Cloud enabled easy yet secure cloud collaboration, not only among its consultants, who are always on the go, but also with its clients.

To learn more about Oracle Documents Cloud Service and how it can help your enterprise, visit us at


Jonathan Lewis - Wed, 2015-07-29 06:05

A recent question on the OTN Database Forum asked:

I need to check if at least one record present in table before processing rest of the statements in my PL/SQL procedure. Is there an efficient way to achieve that considering that the table is having huge number of records like 10K.

I don’t think many readers of the forum would consider 10K to be a huge number of records; nevertheless it is a question that could reasonably be asked, and it should prompt a little discussion.

The first question to ask, of course, is: how often do you do this, and how important is it to be as efficient as possible? We don’t want to waste a couple of days of coding and testing to save five seconds every 24 hours. Some context is needed before charging into high-tech geek solution mode.

The next question is: what’s wrong with writing code that just does the job? If it finds that the job is complete after zero rows then you haven’t wasted any effort. This seems reasonable in (say) a PL/SQL environment, where we might compare the following pair of strategies (a concrete sketch of Option 1 follows the two outlines):

Option 1:
-- execute a select statement to see if any rows exist

if (flag is set to show rows) then
    for r in (select all the rows) loop
        do something for each row
    end loop;
end if;

Option 2:
for r in (select all the rows) loop
    do something for each row;
end loop;
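
For concreteness, here is a minimal sketch of how Option 1 might look in PL/SQL, using a rownum stopper to keep the probe cheap; the table and column names (t1, n1) are purely illustrative and simply echo the demo data used later in this note:

declare
        m_flag  number;
begin
        -- cheap probe: the count can stop at the first qualifying row because of rownum
        select  count(*)
        into    m_flag
        from    t1
        where   n1 = 100
        and     rownum = 1;

        -- only run the (potentially expensive) full pass if the probe found a row
        if m_flag = 1 then
                for r in (select * from t1 where n1 = 100) loop
                        null;   -- do something for each row
                end loop;
        end if;
end;
/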

If this is the type of activity you have to do then it does seem reasonable to question the sense of putting in an extra statement to see if there are any rows to process before processing them. But there is a possible justification for doing this: the query to find just one row may produce a very efficient execution plan, while the query to find all the rows may have to do something much less efficient even when (eventually) it finds that there is no data. Think of the differences you often see between a first_rows_1 plan and an all_rows plan; think about how Oracle can use index-only access paths and table elimination – if you’re only checking for existence you may be able to produce a MUCH faster plan than you can for selecting the whole of the first row.

Next question, if you think that there is a performance benefit from the two-stage approach: is the performance gain worth the cost (and risk) of adding a near-duplicate statement to the code? That’s two statements that have to be maintained every time you make a change. Maybe it’s worth “wasting” a few seconds on every execution to avoid getting the wrong results (or spending an odd extra hour of programmer time) once every few months. Bear in mind, also, that the optimizer now has to optimize two statements instead of one – you may not notice the extra CPU usage in testing, but in the live environment the execution benefit may be eroded by the optimization cost.

Next question, if you still think that the two-stage process is a good idea: will it result in an inconsistent database state? If you select and find a row, then run the main query and find that there are no rows to process because something modified and “hid” the row you found on the first pass – what are you going to do? Will this make the program crash? Will it produce an erroneous result on this run, or will a silent side effect be that the next run produces the wrong results? (See Billy Verreynne’s comment on the original post.) Should you set the session to “serializable” before you start the program, or maybe lock a critical table to make sure it can’t change?
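
If consistency between the two passes really matters, you can pin things down before the first statement runs. Here is a hedged sketch of the two options just mentioned – both are standard Oracle statements, but whether either is appropriate depends entirely on your application:

-- option A: make the whole transaction see (and require) a single consistent snapshot
set transaction isolation level serializable;

-- option B: heavier-handed - block anyone else from modifying the table until you commit
lock table t1 in share mode;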

So, assuming you’ve decided that some form of “check for existence then do the job” is both desirable and safe, what’s the most efficient strategy? Here’s one of the smarter solutions, one that minimises risk and effort (in this case in a PL/SQL environment):

select  count(*)
into    m_counter
from    dual
where   exists ({your original driving select statement})
;

if m_counter != 0 then          -- i.e. at least one row exists
    for c1 in {your original driving select statement} loop
        -- do whatever
    end loop;
end if;

The reason I describe this solution as smarter, with minimum risk and effort, is that (a) you use EXACTLY the same SQL statement in both locations so there should be no need to worry about making the same effective changes twice to two slightly different bits of SQL and (b) the optimizer will recognise the significance of the existence test and run in first_rows_1 mode with maximum join elimination and avoidance of redundant table visits. Here’s a little data set I can use to demonstrate the principle:

create table t1
as
select
        mod(rownum,200)         n1,     -- scattered data
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    dual
connect by
        level <= 10000
;

delete from t1 where n1 = 100;

create index t1_i1 on t1(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

It’s just a simple table with index, but the index isn’t very good for finding the data – it’s repetitive data widely scattered through the table: 10,000 rows with only 200 distinct values. But check what happens when you do the dual existence test – first we run our “driving” query to show the plan that the optimizer would choose for it, then we run with the existence test to show the different strategy the optimizer takes when the driving query is embedded:

alter session set statistics_level = all;

select  *
from    t1
where   n1 = 100
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

select  count(*)
from    dual
where   exists (
                select * from t1 where n1 = 100
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

Notice how I’ve enabled rowsource execution statistics and pulled the execution plans from memory with their execution statistics. Here they are:

select * from t1 where n1 = 100

--------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    38 (100)|      0 |00:00:00.01 |     274 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     50 |    38   (3)|      0 |00:00:00.01 |     274 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   1 - filter("N1"=100)

select count(*) from dual where exists (   select * from t1 where n1 = 100  )

-----------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE    |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |
|*  2 |   FILTER           |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|   3 |    FAST DUAL       |       |      0 |      1 |     2   (0)|      0 |00:00:00.01 |       0 |
|*  4 |    INDEX RANGE SCAN| T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - filter( IS NOT NULL)
   4 - access("N1"=100)

For the original query the optimizer did a full tablescan – that was the most efficient path. For the existence test the optimizer decided it didn’t need to visit the table for “*” and it would be quicker to use an index range scan to access the data and stop after one row. Note, in particular, that the scan of the dual table didn’t even start – in effect we’ve got all the benefits of a “select {minimum set of columns} where rownum = 1” query, without having to work out what that minimum set of columns was.

But there’s an even more cunning option – remember that we didn’t scan dual when there were no matching rows:

for c1 in (

        with driving as (
                select  /*+ inline */
                        *
                from    t1
        )
        select  /*+ track this */
                *
        from    driving d1
        where   n1 = 100
        and     exists (
                        select  *
                        from    driving d2
                        where   n1 = 100
                )
) loop

    -- do your thing

end loop;

In this specific case the subquery would automatically go inline, so the hint here is actually redundant; in general you’re likely to find the optimizer materializing your subquery and bypassing the cunning strategy if you don’t use the hint. (One of the cases where subquery factoring doesn’t automatically materialize is when you have no WHERE clause in the subquery.)

Here’s the execution plan pulled from memory (after running this SQL through an anonymous PL/SQL block):

SQL_ID  7cvfcv3zarbyg, child number 0
WITH DRIVING AS ( SELECT /*+ inline */ * FROM T1 ) SELECT /*+ track

-----------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |    39 (100)|      0 |00:00:00.01 |       2 |
|*  1 |  FILTER            |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|*  2 |   TABLE ACCESS FULL| T1    |      0 |     50 |    38   (3)|      0 |00:00:00.01 |       0 |
|*  3 |   INDEX RANGE SCAN | T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   1 - filter( IS NOT NULL)
   2 - filter("T1"."N1"=100)
   3 - access("T1"."N1"=100)

You’ve got just one statement – and you’ve only got one version of the complicated text because you put it into a factored subquery; but the optimizer manages to use one access path for one instantiation of the text and a different one for the other. You get an efficient test for existence and only run the main query if some suitable data exists, and the whole thing is entirely read-consistent.
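
As an aside, if you want to see the materialization behaviour mentioned a few paragraphs back (the thing the /*+ inline */ hint is there to prevent), swapping the hint over is a quick experiment; /*+ materialize */ is the widely used, though undocumented, counterpart of /*+ inline */, and the table and column names below are just the same demo data:

with driving as (
        select  /*+ materialize */
                *
        from    t1
)
select  *
from    driving d1
where   n1 = 100
and     exists (
                select  null
                from    driving d2
                where   n1 = 100
        )
;

With the subquery materialized into an internal temporary table the optimizer has to use the same copy of the data for both references, so the cheap index-driven existence test disappears and you should see a TEMP TABLE TRANSFORMATION step in the plan instead.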

I have to say, though, I can’t quite make myself 100% enthusiastic about this code strategy – there’s just a nagging little doubt that the optimizer might come up with some insanely clever trick to try and transform the existence test into something that’s supposed to be faster but does a lot more work; but maybe that’s only likely to happen on an upgrade, which is when you’d be testing everything very carefully anyway (wouldn’t you) and you’ve got the “dual/exists” fallback position if necessary.


Does anyone remember the thing about reading execution plans “first child first”? This existence test is one of the interesting cases where it’s not the first child of a parent operation that runs first: it’s the case I call the “constant subquery”.