
Feed aggregator

Mechanism level: GSSHeader did not find the right tag, Error when accessing OAM WNA resources

Online Apps DBA - Wed, 2015-07-22 01:04

Hi All,

After a long gap I’m starting to write blogs again, and it feels good.

Today I faced a login issue in a WNA setup environment.

The requirement is that users log in via WNA fallback authentication and access the OAM WNA-protected resources, but the login request landed on the error page “Account locked or disabled”.

From the oam-server1.out logs:

Note: If you do not see the entries below, enable Kerberos trace-level logging.

 <Jul 21, 2015 6:27:52 PM AEST> <Error> <oracle.oam.plugin> <BEA-000000> <Defective token detected (Mechanism level: GSSHeader did not find the right tag) GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
         at ...
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(...)

Normally this issue means that something other than a Kerberos or NTLM token is being sent by the Microsoft IE browser on the client machine.

OAM only accepts Kerberos or NTLM tokens for now.

We noticed the browser was sending the following token when accessing the resource from within the company network domain, and it kept sending this same “Authorization: Negotiate” string over and over:

Authorization: Negotiate




This is not a standard NTLM value; normally, when we review the headers, we would expect to see either:

Authorization: Negotiate TlRMTVNTUAABAAA…. (NTLM)

Authorization: Negotiate YIIGeAYGK…(Kerberos)

This will still not work for OAM WNA fallback, since the token received by the OAM server is not an NTLM-like token, but appears to be a NEGOEXTS token, which Windows 7 clients sometimes send.

So, the token was not sent correctly by the browser to the OAM server.
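Since the two legitimate token types have distinctive base64 prefixes (NTLM tokens decode to bytes beginning with "NTLMSSP", while Kerberos/SPNEGO tokens are DER-encoded and begin with byte 0x60), a quick script can classify what the browser is actually sending. This is only an illustrative sketch, not part of OAM; the function name is my own:

```python
import base64

def negotiate_token_type(b64token):
    """Classify the value of an 'Authorization: Negotiate' header."""
    # Decode only a prefix that is a multiple of 4 chars, so a
    # truncated capture still decodes cleanly.
    prefix = b64token[: len(b64token) // 4 * 4]
    raw = base64.b64decode(prefix)
    if raw.startswith(b"NTLMSSP\x00"):
        return "NTLM"
    if raw[:1] == b"\x60":  # DER application tag used by GSS-API/SPNEGO
        return "Kerberos/SPNEGO"
    return "unknown (possibly NEGOEXTS or another token type)"

print(negotiate_token_type("TlRMTVNTUAABAAAA"))  # NTLM
print(negotiate_token_type("YIIGeAYGKw=="))      # Kerberos/SPNEGO
```

Running it against the captured header value tells you immediately whether OAM could ever have accepted the token.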


On the UNIX host, run kinit for your user account and use klist to verify whether you have a ticket for the HTTP/DOMAIN.NAME@REALM.NAME principal.

In our case we encountered the exception below:

kinit(v5): Client not found in Kerberos database while getting initial credentials

We found a DNS issue with the OAM application hostname. The OAM VIP hostname was resolving to a different hostname, and the keytab had been created using the VIP hostname rather than the actual DNS-resolved frontend hostname, which is critical when creating a keytab.


We re-generated the keytab for the DNS-resolved hostname as follows (the service principal is shown as a placeholder):


ktpass -princ HTTP/<dns-resolved-hostname>@<REALM> -mapuser aurdev\srv-oam-iap1 -pass <Password> -out master.keytab -kvno 0


Copy the new keytab into <Oracle Home>/server/config/ and restart the OAM server.

I hope the above information helps you resolve this issue.

The post Mechanism level: GSSHeader did not find the right tag, Error when accessing OAM WNA resources appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

NetBeans 8.1 Remote Deployment for WebLogic

Steve Button - Tue, 2015-07-21 23:41
Stoked that NetBeans 8.1 will have support for deployment to remote WebLogic Server instances.


That Time I Killed My Phone

Oracle AppsLab - Tue, 2015-07-21 15:19

I don’t particularly like protective cases for phones because they ruin the industrial design aesthetics of the device.

Here at the ‘Lab, we’ve had spirited debates about cases or not, dating back to the original team and continuing with our current team.

I am not careful with phones, and the death of my Nexus 5, which I’ve only had since October 2014, was my fault. It was also very, very bad luck.

I usually run with a Bluetooth headset, the Mpow Swift, which I quite like (hey Ultan, it’s green), specifically because I had a few instances where my hand caught the headset cord and pulled my phone off the treadmill deck and onto the belt, causing the phone to fly off the back like a missile.

Yes, that happened more than once, but in my defense, I’ve seen it happen to other people too.

However, on July 8, I was running on the treadmill talking to Tony on the phone, using a wired headset. I’ve found the Mpow doesn’t have a very strong microphone, or maybe I wasn’t aiming my voice in the right direction. Whatever the reason, the Mpow hasn’t been good for talking on the phone.

While talking to Tony, possibly mid-sentence, I caught the cord and pulled the phone off the deck.

Unlike the other times, this time, the phone slipped under the treadmill belt, trapping it between the belt and housing, sliding it the length of the belt, and dragging it over the back drum.

I stopped the treadmill and looked under, but it was trapped inside the machine. After sheepishly asking for help, we were able to get the machine to spit up my mangled phone.


Interestingly, the screen is completely intact, which gives an idea of how tough it really is. The phone’s body is sadly bent in an aspect that describes its journey over that drum. Luckily, its battery hasn’t leaked.

The device didn’t die right away. While it wouldn’t boot, when I connected it to my Mac via USB, it was recognized, although it wouldn’t mount the storage like it normally would. Something about the device consuming too much power for USB.

I tried with a powered USB hub, but I think the battery gave up the ghost.

Happily for me, I had recently bought a second generation Moto X on sale, and I’d been postponing the switch.

Unhappily, every time I switch phones, I lose something, even though I keep backups. When my Nexus 4 died mysteriously, I lost all my photos. This time, I lost my SMS/MMS history.

Like I said, I’m careless with phones.

NFL Play by play analysis using Cloudera Impala

Nilesh Jethwa - Tue, 2015-07-21 15:13

Who won the most games against which losing team?


Read More at:

Lessons Learned with Kubernetes

Pythian Group - Tue, 2015-07-21 13:00
Kubernetes Logo Trend Towards Kubernetes

Three trends in computing have come together to make container orchestration the next obvious evolution of internet service delivery.  The first is the trend to pack an increasing number of segregated services into larger and larger servers for efficiency gains.  The second trend is the rapid build->test->release cycle of modern microservices that can see hundreds or thousands of updates each day.  And, the third trend is infrastructure-as-code which abstracts the actual hardware of servers and networking equipment away into text files that describe the desired infrastructure.  These files can be tested and version controlled in exactly the same way as code, and deployed just as quickly.  At the convergence point sits Kubernetes from Google which uses flat files to describe the infrastructure and containers needed to deliver a service, which can be built, tested, and deployed incredibly quickly.

Pythian has been working with container orchestration using Kubernetes since it was announced to the public in June of 2014.  We have used it to deploy microservices faster while also speeding up the development cycle.  With the advent of v1.0, we decided to revisit some of what we learned implementing Kubernetes internally and with clients.

Develop Locally

Google and others provide hosted Kubernetes solutions that are fast and easy to use.  In fact, you can use them for your whole build->test->deploy workflow.  Keep in mind that with hosted Kubernetes, the containers are exposed to the internet from very early in your development cycle.  If that’s not desirable, or if local development is important, go faster with a local cluster.  Kubernetes can run on as few as three VMs, and the Vagrant install is well supported.  Our workflow involves sharing the YAML files among the team and developing everything locally before pushing blessed containers for deployment on a production cluster.

Pay Attention to API Versions in Examples

Since the Kubernetes team has been developing their API in public for the last year, there have been a number of fairly large breaking changes to the API.  Now that v1 of the API is stable, we can depend on it. However, many of the tutorials and examples online use earlier versions.  Be sure to check which version an example uses before trying to experiment with it.

Get to know Volumes at Cluster Scale

In Kubernetes, volumes are an outgrowth of the Docker concept of a volume: a filesystem that can be mounted and isn’t tied to the lifecycle of a specific container.  Kubernetes re-imagines them at cluster scale and, through plugins, allows containers to mount all kinds of things as filesystems.  One plugin adds a git repository as a mountable filesystem, which opens the door to some particularly interesting use cases.

Leverage Etcd

At the heart of the Kubernetes cluster is a distributed, shared-state system called etcd.  Built on the Raft protocol, it stores key->value pairs in a tiered structure and exposes a simple REST API.  Etcd also provides a level of access control sufficient to securely store shared secrets for use throughout the cluster without making them available to all etcd consumers.  This feature underpins the concept of a Secret in Kubernetes.  But your application can also talk directly to the etcd cluster in Kubernetes.  Using confd, your application can use the Kubernetes etcd instance as a data storage layer.  For example, here’s a simple URL shortener gist using just nginx, confd, and etcd.
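To illustrate the idea (this is a sketch of the approach, with hypothetical paths and key names, not the actual contents of that gist): confd watches a key range in etcd and rewrites an nginx config whenever a short URL is added. A template resource and its template might look like:

```
# /etc/confd/conf.d/shorturls.toml -- hypothetical confd template resource
[template]
src        = "shorturls.conf.tmpl"
dest       = "/etc/nginx/conf.d/shorturls.conf"
keys       = ["/shorturls"]
reload_cmd = "nginx -s reload"

# /etc/confd/templates/shorturls.conf.tmpl -- hypothetical template
server {
  listen 80;
  {{range gets "/shorturls/*"}}
  location = /{{base .Key}} { return 301 {{.Value}}; }
  {{end}}
}
```

With something like this in place, setting a key (e.g. `etcdctl set /shorturls/docs http://example.com/docs`) would cause confd to regenerate the nginx config and reload it, turning etcd into the shortener’s data store.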
Happy experimentation!

Schedule a free assessment with a Pythian Kubernetes expert.

Learn more about Pythian’s Cloud expertise.

If this sounds like the kind of thing you’d like to work on, we’re hiring too!

The post Lessons Learned with Kubernetes appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

New PeopleSoft Technology Spotlight Series Available

PeopleSoft Technology Blog - Tue, 2015-07-21 11:37

The PeopleSoft Spotlight Series is a new video-based learning resource that will help you get a deeper understanding of our latest Oracle PeopleSoft technologies, features and enhancements.  Watch these videos to gain insight on how you can take advantage of these features in your enterprise.

Senior Strategy Director Jeff Robbins is your host for the Tools and Technology PeopleSoft Spotlight Series.  Jeff introduces each video and summarizes key points covered in the session. Members of the PeopleSoft development organization take you into detail on each subject, helping you plan for, roll out, and gain value from these PeopleSoft enhancements.

The first sessions in our series focus on the game-changing Selective Adoption and the cutting edge PeopleSoft Fluid User Interface. Selective Adoption is the new method by which customers will update and maintain their PeopleSoft systems.  The Fluid UI enables PeopleSoft users to use PeopleSoft applications across form factors. 

Each video takes less than an hour and contains helpful resources. Look for more sessions of the PeopleSoft Spotlight Series in the near future.

Apache Drill : How to Create a New Function?

Tugdual Grall - Tue, 2015-07-21 11:04
Read this article on my new blog.

Apache Drill allows users to explore any type of data using ANSI SQL. This is great, but Drill goes even further than that and allows you to create custom functions to extend the query engine. These custom functions have all the performance of any of the Drill primitive operations, but achieving that performance makes writing these functions a little trickier.

Oracle Process Cloud Application Player

Andrejus Baranovski - Tue, 2015-07-21 10:12
With Oracle Process Cloud you can forget the days when you waited a long time to deploy a BPM process and test the Human Task UI. No need to wait anymore: in Oracle Process Cloud you can use the Application Player feature, which allows you to run the process and test the Human Task UI almost instantly.

To demonstrate Application Player functionality, I have implemented a basic Data Entry process for Claim Handling:

This process uses a Web Form in the Start activity to capture data; the human activity Claim Expense Review allows the entered data to be reviewed before it is submitted:

This is how the Web Form UI design looks:

When you build a Web Form and arrange UI components, the business type is constructed automatically; there is no need to define anything separately. The business type can be used in the process without changing it:

A data object variable is assigned to the process activity element through the Association dialog in Process Cloud; this is where you can map input/output with the business type:

Once the process is ready to be tested, all you need to do is invoke the Application Player. Click the Test Application button in the top right corner:

In the Test Application window, select the Play option:

To show how it works, I have recorded a video - see for yourself how helpful the Application Player is:

Password properties in SoapUI

Darwin IT - Tue, 2015-07-21 01:05
By accident I encountered the following behaviour of SoapUI. I wanted to register a username/password combination in SoapUI.
Normally in SoapUI property values are shown as plain text. Here I mistyped the property for the password on purpose:
But see what happens if I correctly type the word "Password":
Apparently, if the property name ends with the word "Password", SoapUI treats the property as a password field and masks its value. Cool!

By the way, the word "Password" must come last in the name. For example, if you post-fix the property with "-Dev", like "ContentServerPassword-Dev", the content becomes visible again. In that case, you should name it like "ContentServer-Dev-Password".

node-oracledb 0.7.0 now supports Result Sets and REF CURSORS

Christopher Jones - Mon, 2015-07-20 16:58

A new release of the Node.js driver for Oracle Database is now available on GitHub.

node-oracledb 0.7 connects Node.js 0.10, Node.js 0.12, and io.js to Oracle Database. It runs on a number of platforms. For more information about node-oracledb see the node-oracledb GitHub page.

The changes in 0.7 are:

  • Added result set support for fetching large data sets. Rows from queries can now be fetched in batches using a ResultSet class. This allows large query results to be fetched without requiring all values to be in memory at once. New getRow() and getRows() methods can be called repeatedly to scroll through the query results.

    The original node-oracledb behavior of returning all rows at once remains the default. To return a resultSet, use the new execute() option { resultSet: true }. For example:

    //  (See the full code in examples/resultset2.js)
    . . .
    var numRows = 10;  // number of rows to return from each call to getRows()
    connection.execute(
      "SELECT employee_id, last_name FROM employees ORDER BY employee_id",
      [], // no bind variables
      { resultSet: true }, // return a result set.  Default is false
      function(err, result)
      {
        if (err) { . . . }
        fetchRowsFromRS(connection, result.resultSet, numRows);
      });
    . . .
    function fetchRowsFromRS(connection, resultSet, numRows)
    {
      resultSet.getRows( // get numRows rows
        numRows,
        function (err, rows)
        {
          if (err) {
             . . .                        // close the result set and release the connection
          } else if (rows.length == 0) {  // no rows, or no more rows
            . . .                         // close the result set and release the connection
          } else if (rows.length > 0) {
            . . .                         // process the rows
            fetchRowsFromRS(connection, resultSet, numRows);  // get next set of rows
          }
        });
    }

    It's important to use the new resultSet close() method to close the result set when no more data is available or required.

    There is more information on Result Sets in the manual.

  • Added REF CURSOR support for returning query results from PL/SQL. PL/SQL code that returns REFCURSOR results via bind parameters can now bind a new node-oracledb type Oracledb.CURSOR and fetch the results using the new ResultSet class.

    //  (See the full code in examples/refcursor.js)
    var oracledb = require('oracledb');
    . . .
    var numRows = 10;  // number of rows to return from each call to getRows()
    var bindvars = {
      sal:  6000,
      cursor:  { type: oracledb.CURSOR, dir: oracledb.BIND_OUT }
    };
    connection.execute(
      "BEGIN get_emp_rs(:sal, :cursor); END;",  // The PL/SQL has an OUT bind of type SYS_REFCURSOR
      bindvars,
      function(err, result)
      {
        if (err) { . . . }
        fetchRowsFromRS(connection, result.outBinds.cursor, numRows);
      });
    . . .
    function fetchRowsFromRS(connection, resultSet, numRows)
    {
      resultSet.getRows( // get numRows rows
        numRows,
        function (err, rows)
        {
          if (err) {
             . . .                        // close the result set and release the connection
          } else if (rows.length == 0) {  // no rows, or no more rows
            . . .                         // close the result set and release the connection
          } else if (rows.length > 0) {
            . . .                         // process the rows
            fetchRowsFromRS(connection, resultSet, numRows);  // get next set of rows
          }
        });
    }

    There is more information on using REF CURSORS in the manual.

  • Added row prefetching support. The new ResultSet class supports prefetching via a new attribute oracledb.prefetchRows and a new execute() option prefetchRows. Each time the application fetches query or REF CURSOR rows in a ResultSet from Oracle Database, prefetching allows the underlying Oracle libraries to transfer extra rows. This allows better use of database and network resources, improving performance and scalability. Regardless of the prefetch size, the number of rows returned to the application does not change. Buffering is handled by the underlying Oracle client library.

    The default prefetch size is 100 extra rows. Applications should tune the prefetch size used by each execute() for desired performance and/or to avoid allocating and initializing unused memory. There are some more tips in the manual.

    With node-oracledb 0.7.0, non-ResultSet queries now use prefetching with a fixed size of 2. This should reduce the number of round trips required for these queries.

  • Added a test suite. Yay! See the README in the tests directory for how to run the tests. When you run the test suite, you'll notice each test has a unique number for ease of identification. The numbers are not necessarily sequential.

    We do most testing on Linux and Windows. If you see test output differences due to environment or version differences, please sign the OCA and submit a pull request with the fix and an explanation of why it is needed. See CONTRIBUTING.

    If you submit new tests (after signing the OCA), assign each one a unique number in the documented range that applies to the area being tested.

  • Fixed error handling for SQL statements using RETURNING INTO. A bug causing all errors with DML RETURNING statements to report the same error message was fixed.

  • Fixed INSERT of a date when the SQL has a RETURNING INTO clause. When using an INSERT to insert a date or timestamp and the SQL clause had a RETURNING INTO clause for character or number columns, then an error was being thrown. This has been fixed.

  • Renumbered the values used by the Oracledb Constants. If your application uses constant names such as Oracledb.OBJECT or Oracledb.BIND_INOUT then you won't notice the change. However if, for some reason, code has hardcoded numbers like 2, then you will have to update to use the new numbers, see lib/oracledb.js. Or, better, change the code to use the constants' names.

Reading System Logs: SQL Server – Part 2

Pythian Group - Mon, 2015-07-20 13:52


Last time I talked about reading system logs on the SQL Server box, explaining why it is really important that DBAs scan through the logs once a day on a critical production system. As I mentioned in my previous blog post, sometimes messages are logged as information, yet they can be treated as an early warning before the system hits actual warnings or errors. That is why it is important to read the information-level messages. Let me tell you about yet another case, where a disk subsystem issue was reported as information in the system logs.

In this case, the system was suffering from high disk I/O. The disk in question was used for writing backups. For a few days we observed that writing backups took longer than before.  The number of databases was the same, and the size of those databases had not drastically increased, yet the time taken to write backups had increased significantly. Looking at the system logs, I noticed some messages related to the disk. Searching for those messages led me to some links pointing toward a disk issue. After I worked with the storage admins, they confirmed the issue too, and a new disk is now being procured.

So, here is what I would say: when you start your day, spare a few minutes to read the system logs.  At Pythian, we have our home-grown monitoring tool Avail, which does this job for us, reporting information, warnings, and errors as a report.

Excerpts from the System Log:

Log Name:      System
Source:        Server Administrator
Date:          6/18/2015 10:55:55 PM
Event ID:      2271
Task Category: Storage Service
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      SQLServer
The Patrol Read corrected a media error.:  Physical Disk 0:0:10 Controller 0, Connector 0
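Even Information-level entries like the one above deserve a look. As a toy illustration of the daily scan (this is not Avail, just a hypothetical keyword filter over an exported log excerpt):

```python
# Illustrative sketch: scan an exported system-log excerpt for entries
# worth a DBA's attention, including Information-level storage messages
# such as the Patrol Read event above.
KEYWORDS = ("error", "warning", "patrol read", "media error", "disk")

def interesting_lines(log_text):
    """Return lines of an exported event-log excerpt that match keywords."""
    hits = []
    for line in log_text.splitlines():
        lowered = line.lower()
        if any(k in lowered for k in KEYWORDS):
            hits.append(line.strip())
    return hits

excerpt = """Level:         Information
The Patrol Read corrected a media error.:  Physical Disk 0:0:10 Controller 0, Connector 0"""
for hit in interesting_lines(excerpt):
    print(hit)
```

A filter like this only surfaces candidates; the point is that it runs every morning, so an "informational" media error never sits unread for weeks.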

photo credit: Ubicación del disco duro (antiguo) a desmontar via photopin (license)


Learn more about our expertise in SQL Server.

The post Reading System Logs: SQL Server – Part 2 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

RMAN 11g : How to Restore / Duplicate to a More Recent Patchset

Pythian Group - Mon, 2015-07-20 13:24

In an Oracle DBA’s life, you’ll be regularly asked to work on applying a new patchset on a database and then you will apply it starting from the development database to the production database and this process can be quite long depending on the organization you are working for.

In an Oracle DBA’s life, you’ll be regularly asked to refresh a development database (or any environment before the production) with the production data for development, test or whatever needs. For years now, RMAN has helped us a lot to perform this kind of task easily.

And what should happen always happens: one day you will be asked to refresh your DEV database, which runs a more recent patchset, with data from your PROD database, which runs an older one. And let’s call a spade a spade, that could be a bit tricky, especially if you only discover that the versions are different once the RESTORE / DUPLICATE is finished, because you launched the usual refresh scripts forgetting this little detail…

A solution could be to ask the sysadmin team for some GB of space, copy an ORACLE_HOME from another server, quickly clone it on the DEV server, start an RMAN DUPLICATE / RESTORE DATABASE from PROD to DEV and then upgrade the result to the newer patchset. But this will probably take quite a while, and if adding some GB to a server requires procedures, validations, etc., it could take many days to refresh the DEV database, which is obviously not what anybody wants. And this option does not exist at all if you only discover the issue after the RESTORE / DUPLICATE is finished.

Hopefully, there’s a way to achieve this goal by directly performing a RESTORE / DUPLICATE of a database to a more recent patchset (note that this method also works for 10g databases). Let’s explore the two cases you can face when doing a direct RESTORE / DUPLICATE to a more recent patchset database.



Whether we are restoring or duplicating the production database from a backup, here is what will happen on the DEV database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/11/2015 22:38:59
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 17 and starting SCN of 2232530

Here, we can’t open the database with the RESETLOGS option due to the patchset version difference. We have to use a slightly different command:

SQL> alter database open resetlogs upgrade ;
Database altered.

Now that the database is open in upgrade mode, we can apply the patchset and then open the database normally.

SQL> @?/rdbms/admin/catupgrd


SQL> startup
ORACLE instance started.

Total System Global Area 626327552 bytes
Fixed Size 2255832 bytes
Variable Size 243270696 bytes
Database Buffers 377487360 bytes
Redo Buffers 3313664 bytes
Database mounted.
Database opened.

This one is in fact quick and easy.



Starting from 11g, we have the cool DUPLICATE FROM ACTIVE DATABASE feature that we can also use to perform this kind of refresh. When you perform a DUPLICATE FROM ACTIVE DATABASE operation from an older patchset to a newer one, the procedure differs from the previous case: the RESETLOGS will begin but will not be able to finish properly, and you will face this error:

RMAN-08161: contents of Memory Script:
 Alter clone database open resetlogs;
RMAN-08162: executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10041: Could not re-create polling channel context following failure.
RMAN-10024: error setting up for rpc polling
RMAN-10005: error opening cursor
RMAN-10002: ORACLE error: ORA-03114: not connected to ORACLE
RMAN-03002: failure of Duplicate Db command at 03/25/2015 20:22:56
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Process ID: 24341
Session ID: 1 Serial number: 9

At this stage, it’s not possible to open the database in UPGRADE mode nor RECOVER the database and not even generate a BACKUP CONTROLFILE TO TRACE.

SQL> recover database using backup controlfile until cancel ;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.


So we have to recreate the controlfile. By using these queries, we can easily create a new CREATE CONTROLFILE statement (or we could generate a BACKUP CONTROLFILE TO TRACE from the source database and then adapt it for the destination database).

SQL> select name from v$datafile order by file#;
SQL> select group#, member from v$logfile;
SQL> select name, bytes from v$tempfile order by file#;

And then recreate the controlfile:

SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL11204" RESETLOGS NOARCHIVELOG
 2     MAXLOGFILES 16
 3     MAXLOGMEMBERS 3
 4     MAXDATAFILES 100
 5     MAXINSTANCES 8
 6     MAXLOGHISTORY 292
 7 LOGFILE
 8 GROUP 1 '/u01/app/oracle/data/orcl11204/redo01.log' SIZE 50M BLOCKSIZE 512,
 9 GROUP 2 '/u01/app/oracle/data/orcl11204/redo02.log' SIZE 50M BLOCKSIZE 512,
 10 GROUP 3 '/u01/app/oracle/data/orcl11204/redo03.log' SIZE 50M BLOCKSIZE 512
 11 DATAFILE
 12 '/u01/app/oracle/data/orcl11204/system01.dbf',
 13 '/u01/app/oracle/data/orcl11204/sysaux01.dbf',
 14 '/u01/app/oracle/data/orcl11204/undotbs01.dbf',
 15 '/u01/app/oracle/data/orcl11204/users01.dbf'
 16 ;

Control file created.


To finish the recover and open the database in UPGRADE mode, we would need to apply the current redolog (and not any archivelog — we don’t have any archivelog as the RESETLOGS didn’t happen yet).

SQL> select * from v$logfile ;

    GROUP# STATUS  TYPE    MEMBER                                                  IS_
---------- ------- ------- ------------------------------------------------------- ---
         3 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo03.log               NO
         2 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo02.log               NO
         1 STALE   ONLINE  /u01/app/oracle/data/orcl11204/redo01.log               NO

SQL> recover database using backup controlfile until cancel ;

ORA-00279: change 2059652 generated at 03/25/2015 20:22:54 needed for thread 1
ORA-00289: suggestion :
ORA-00280: change 2059652 for thread 1 is in sequence #1

Specify log: {<ret>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.
SQL> alter database open resetlogs upgrade ;

Database altered.


Now we can apply the patchset:

SQL> @?/rdbms/admin/catupgrd



And check that everything is good:

SQL> select * from v$version ;

Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

SQL> select comp_name, version, status from dba_registry ;

COMP_NAME                                     VERSION                        STATUS
--------------------------------------------- ------------------------------ -----------
Oracle Application Express VALID
Oracle Enterprise Manager VALID
Spatial VALID
Oracle Multimedia VALID
Oracle XML Database VALID
Oracle Text VALID
Oracle Expression Filter VALID
Oracle Rules Manager VALID
Oracle Workspace Manager VALID
Oracle Database Catalog Views VALID
Oracle Database Packages and Types INVALID
JServer JAVA Virtual Machine VALID
Oracle Database Java Packages VALID
OLAP Analytic Workspace INVALID

18 rows selected.




This saved me a lot of time, have a good day :)


Discover more about our expertise in Oracle.

The post RMAN 11g : How to Restore / Duplicate to a More Recent Patchset appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

DevOps: Applied Empathy

Pythian Group - Mon, 2015-07-20 12:09

I enjoyed participating in a panel on DevOps culture by Electric Cloud last week. Our commendable hosts kept things light and productive despite the number of participants and breadth of topic.

It wouldn’t be a culture discussion if we had failed to review the motivations for that (DevOps) culture: namely the evolve-or-die progression of tech business in general and software systems of customer engagement in particular. So the logic goes, any non-trivial business is now (also) a software business – thus, being “good at software” (and rapidly deriving high quality, constantly improving, large-scale services from it) is a fundamental market success factor and must be(come) part of the corporate DNA.

I doubt the above is controversial, but the degree to which it feels true probably depends on the scale of opportunity in your sector(s) and the governing regulatory regime(s). Those factors have a big impact not only on the intensity of motivation, but the challenges and resistance to change that must be overcome in a successful program of transformation.

The discussion reminded me that empathy is important for more than just being nice. It’s also a great way to understand what motivates people and how to incorporate their success drivers into transformation efforts.

top of the world in a typical software engineering org

Consider Katniss, for example: she has to deliver to Rex (and consumers like you and me) the “and” sandwich, Velocity (new features) and Operational Excellence, or we (consumers) will find a service that does. She may at times prioritize Velocity over other initiatives, and the stress on Bill grows under this pressure. If, as an agent of transformational change, you propose methods of increasing Velocity to Bill, you are likely to face rejection: Bill is already drowning at the present pace.

If, on the other hand, one approaches Bill to explain that pervasive, intelligent automation strategies can give his team their weekends back, and make his team a proactive contributor and valued partner in growth of the business, one will likely find a different sort of audience.

All this means, to me, is that DevOps is a useful context for improving a complex sort of collaboration that’s called a software product lifecycle. Understanding the motivations and needs of the players in an organization is a key enabler for planning and executing successful programs of change.


Discover more about our expertise in DevOps and the author Aaron Lee.

The post DevOps: Applied Empathy appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Why Oracle ERP Cloud? by Terrance Wampler

Linda Fishman Hoyle - Mon, 2015-07-20 11:00

In this video, Terrance Wampler (pictured left), VP of Oracle ERP Cloud Strategy and Product Development, talks candidly and fluently about Oracle ERP Cloud.

He hits on three topics.

1) The Changing Role of the CFO: CFOs need to know how to create value in the organization, not just how to control it. Theirs is an expanded role where they’re being asked to provide guidance to other LOB business leaders and those involved with customer engagement. Ideally, back office technology and processes are integrated with the front office—bringing analytics to the forefront, improving the customer experience and the bottom line.

2) Modernizing ERP: The cloud is very appealing from a cost savings perspective, but other key drivers of cloud adoption include the digital transformation technologies (mobile, social, embedded analytics, collaboration) and a modern user experience. Also, the ability to take on two to three releases a year injects innovation into the business.

3) Social Brings Speed and Accountability: Social networking tools are very powerful inside a business operation. Nothing needs more collaboration than a business process to solve exceptions. Specific social conversations with those who need to take action move faster. In a secure environment, users can share documents, get approvals, produce an audit trail, and drive accountability.

Take a look at the video and share it freely with your colleagues.

A Lot To Listen To

FeuerThoughts - Mon, 2015-07-20 06:40
A Lot To Listen To

Sometimes, if you're lucky,
there is nothing to hear
but the sound of the wind
blowing through trees.

Now you could say:
"That's not much to listen to."
Or you could listen...

Listen
to the rustling, hissing, whispering, sometimes angry sound
of thousands of almost silent brushings of leaf against leaf,
of feather-light taps of twig striking twig,
any single act nothing to hear at all
but when the tree is big enough
and the leaves are numerous enough
and the branches reach out thinner and thinner
poking out toward the sun
carrying leaves to their destiny,

then you might be able to hear
the sound of the wind
blowing through trees.

It's a lot to listen to,
if you can hear it.

Copyright 2015 Steven Feuerstein
Categories: Development

12c Downgrade

Jonathan Lewis - Mon, 2015-07-20 06:12

No, not really – but sometimes the optimizer gets better and gives you worse performance as a side effect when you upgrade. Here’s an example where 11g recognised (with a few hints) the case for a nested loop semi-join and 12c went a bit further and recognised the opportunity for doing a cunning “semi_to_inner” transformation … which just happened to do more work than the 11g plan.

Here’s a data set to get things going, I’ve got “parent” and “child” tables, but in this particular demonstration I won’t be invoking referential integrity:

create table chi
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1                              id,
        trunc((rownum-1)/10)                    n1,
        trunc(dbms_random.value(0,1000))        n2,
        rpad('x',1000)                          padding
from
        generator
;

create table par
as
with generator as (
        select  --+ materialize
                rownum  id
        from dual
        connect by
                level <= 1e4
)
select
        rownum - 1      id,
        rpad('x',1000)  padding
from
        generator
where
        rownum <= 1e3
;

alter table par modify id not null;
alter table par add constraint par_pk primary key(id)
-- deferrable
;

-- Now gather stats on the tables.

The code uses my standard framework that could generate a few million rows even though it’s only generating 1,000 in par and 10,000 in chi. The presence of the commented “deferrable” for the primary key constraint is for a secondary demonstration.
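The stats-gathering step noted in the comment might be done along these lines (a sketch; the method_opt and other parameter choices are my assumptions, not taken from the original):

```sql
begin
        -- basic stats, no histograms, on both tables
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'CHI',
                method_opt => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'PAR',
                method_opt => 'for all columns size 1'
        );
end;
/
```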

You’ll notice that the 1,000 values that appear in chi.n1 and chi.n2 are matched by the 1,000 rows that appear in the primary key of par – in some other experiment I’ve got two foreign keys from chi to par. Take note that the values in n1 are very well clustered because of the call to trunc() while the values in n2 are evenly scattered because of the call to dbms_random() – the data patterns are very different although the data content is very similar (the randomised data will still produce, on average, 10 rows per value).

So here’s the test code:

set serveroutput off
set linesize 156
set trimspool on
set pagesize 60

alter session set statistics_level = all;

prompt  =============================
prompt  Strictly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where = chi.n1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

prompt  =============================
prompt  Randomly ordered driving data
prompt  =============================

select
        /*+
                leading(@sel$5da710d3 chi@sel$1 par@sel$2)
                full   (@sel$5da710d3 chi@sel$1)
                use_nl (@sel$5da710d3 par@sel$2)
                index  (@sel$5da710d3 par@sel$2 (
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where = chi.n2
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline alias cost'));

set serveroutput on
alter session set statistics_level = typical;

In both cases I’ve hinted the query quite heavily, using internally generated query block names, into running with a nested loop semi-join from chi to par. Since there are 10,000 rows in chi with no filter predicates, you might expect to see the probe into the par table starting 10,000 times returning (thanks to our perfect data match) one row for each start. Here are the run-time plans with rowsource execution stats from the 11g run:

Strictly ordered driving data

| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.14 |    1450 |   1041 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.14 |    1450 |   1041 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.12 |    1450 |   1041 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.07 |    1434 |   1037 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   1000 |   1048 |     0   (0)|   1000 |00:00:00.01 |      16 |      4 |

Randomly ordered driving data

| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT    |        |      1 |        |   190 (100)|      1 |00:00:00.12 |    5544 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.12 |    5544 |
|   2 |   NESTED LOOPS SEMI |        |      1 |  10065 |   190   (4)|  10000 |00:00:00.10 |    5544 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10065 |   186   (2)|  10000 |00:00:00.02 |    1434 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |   4033 |   1048 |     0   (0)|   4033 |00:00:00.02 |    4110 |

Notice how we do 1,000 starts of operation 4 when the data is well ordered, and 4,033 starts when the data is randomly ordered. For a semi-join nested loop the run-time engine uses the same caching mechanism as it does for scalar subqueries – a fact you can corroborate by removing the current hints and putting the /*+ no_unnest */ hint into the subquery so that you get a filter subquery plan, in which you will note exactly the same number of starts of the filter subquery.
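The filter-subquery comparison described above might look like this (a sketch, with the earlier hints removed and the subquery blocked from unnesting; the exact query shape is assumed from the description):

```sql
select
        count(*)
from
        chi
where   exists (
                select  /*+ no_unnest */
                        null
                from    par
                where = chi.n1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
```

With the no_unnest hint in place the plan should show a FILTER operation driving the subquery, and the Starts figure for the subquery should match the semi-join probe counts reported above.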

As an extra benefit you’ll notice that the index probes for the well-ordered data have managed to take advantage of buffer pinning (statistic “buffer is pinned count”) – keeping the root block and most recent leaf block of the par_pk index pinned almost continually through the query; while the randomised data access unfortunately required Oracle to unpin and repin the index leaf blocks (even though there were only 2 in the index) as the scan of chi progressed.

Time to upgrade to 12c and see what happens:

Strictly ordered driving data

| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |    1448 |   1456 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |    1448 |   1456 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.20 |    1448 |   1456 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.06 |      16 |     27 |

Randomly ordered driving data

| Id  | Operation           | Name   | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT    |        |      1 |        |   189 (100)|      1 |00:00:00.22 |   11588 |   1429 |
|   1 |  SORT AGGREGATE     |        |      1 |      1 |            |      1 |00:00:00.22 |   11588 |   1429 |
|   2 |   NESTED LOOPS      |        |      1 |  10000 |   189   (4)|  10000 |00:00:00.19 |   11588 |   1429 |
|   3 |    TABLE ACCESS FULL| CHI    |      1 |  10000 |   185   (2)|  10000 |00:00:00.03 |    1432 |   1429 |
|*  4 |    INDEX UNIQUE SCAN| PAR_PK |  10000 |      1 |     0   (0)|  10000 |00:00:00.07 |   10156 |      0 |

Take a close look at operation 2 – it’s no longer a NESTED LOOP SEMI, the optimizer has got so smart (recognising the nature of the primary key on par) that it’s done a “semi_to_inner” transformation. But a side effect of the transformation is that the scalar subquery caching mechanism no longer applies so we probe the par table 10,000 times. When the driving data is well-ordered this hasn’t made much difference to the buffer gets (and related latch activity), but when the data is randomised the extra probes ramp the buffer gets up even further.

The timings (A-time) on these experiments are not particularly trustworthy – the differences between cached reads and direct path reads introduced more variation than the difference in Starts and Buffers, and the total CPU load is pretty small anyway – and I suspect that this difference won’t make much difference to most people most of the time. No doubt, though, there will be a few cases where a small change like this could have a noticeable effect on some important queries.


There is a hint /*+ no_semi_to_inner(@queryblock object_alias) */ that I thought might persuade the optimizer to stick with the semi-join, but it didn’t have any effect. Since the “semi to inner” transformation (and the associated hints) are available in 11g I was a little puzzled that (a) I didn’t see the same transformation in the 11g test, and (b) I couldn’t hint the transformation. This makes me wonder if there’s a defect in 11g that might be fixed in a future patch.
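The hinted attempt mentioned above might have looked something like the following (a sketch only; the query block name @sel$5da710d3 is taken from the hinted outline earlier in the post, and, as noted, the hint had no effect in this test):

```sql
select
        /*+
                no_semi_to_inner(@sel$5da710d3 par@sel$2)
        */
        count(*)
from
        chi
where   exists (
                select null
                from par
                where = chi.n1
        )
;
```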

It’s also nice to think that the scalar subquery caching optimisation used in semi-joins might eventually become available to standard joins (in cases such as “join to parent”, perhaps).

Get to Know the Latest Feature Updates in Documents Cloud Service

WebCenter Team - Mon, 2015-07-20 05:00

Updates to Oracle Documents Cloud Service are pushed out automatically, with no action required from users or customer organizations. Still, we thought you might want to learn about the additional features and capabilities in the latest release of Oracle Documents Cloud Service, and why Oracle's cloud collaboration solution is fast becoming an industry benchmark. Our product expert and member of the product management team, Ellen Gravina, discusses.

by: Ellen Gravina, Principal Product Manager, Oracle Documents Cloud Service



Oracle Documents Cloud Service was upgraded to include the latest features for our web, mobile, and desktop clients. Features include:

Oracle Documents Presenter



Wow your customers and prospects with beautiful presentations that deliver maximum impact and results!
• Present PPTs, review PDF documents, play videos, and use many more presentation formats.
• All your presentations are stored on your tablet–no need to find a WiFi hot spot.
• Author presentations by organizing material in folders on your desktop. Customize the look to your brand by using folder background and icon images.

Multiple Account Support
• Synchronize content from multiple accounts to your desktop.

• Access content from multiple accounts on your mobile device. 

Enhancements to Public Link Policy

• Administrators can set a maximum role allowed for public links.

• Folder owners can disable public links on a per-folder basis.

Customize Oracle Documents with Your Own Branding

• Change the logo in the header.
• Control where users look for help, share feedback, and get information about client downloads.

iOS Touch ID Support
• Make use of your fingerprint to protect access to your content.
• Available in the native mobile app and Oracle Documents Presenter app.

Access Version History from Your Mobile Device
• View old versions of a document.
• Delete old versions.
• Make an old version the current version.

Enterprise Installation Support for Desktop Client
• Roll out the Desktop Sync Client software to multiple client machines with the help of the EXE and MSI installer packages.
• Deploy the MSI installer through Active Directory’s group policy.
• See the Administrator’s Guide for details.

Learn More
Check out the Oracle Documents Cloud Service Getting Started page to learn more. And visit  for additional solution and market information.

Please contact Oracle Support for any support questions.  Feedback is always welcome in our Documents Cloud Service discussion forum.

SaaS and traditional software from the same vendor?

DBMS2 - Mon, 2015-07-20 03:09

It is extremely difficult to succeed with SaaS (Software as a Service) and packaged software in the same company. There were a few vendors who seemed to pull it off in the 1970s and 1980s, generally industry-specific application suite vendors. But it’s hard to think of more recent examples — unless you have more confidence than I do in what behemoth software vendors say about their SaaS/”cloud” businesses.

Despite the cautionary evidence, I’m going to argue that SaaS and software can and often should be combined. The “should” part is pretty obvious, with reasons that start:

  • Some customers are clearly better off with SaaS. (E.g., for simplicity.)
  • Some customers are clearly better off with on-premises software. (E.g., to protect data privacy.)
  • On-premises customers want to know they have a path to the cloud.
  • Off-premises customers want the possibility of leaving their SaaS vendor’s servers.
  • SaaS can be great for testing, learning or otherwise adopting software that will eventually be operated in-house.
  • Marketing and sales efforts for SaaS and packaged versions can be synergistic.
    • The basic value proposition, competitive differentiation, etc. should be the same, irrespective of delivery details.
    • In some cases, SaaS can be the lower cost/lower commitment option, while packaged product can be the high end or upsell.
    • An ideal sales force has both inside/low-end and bag-carrying/high-end components.

But the “how” of combining SaaS and traditional software is harder. Let’s review why. 

Why is it hard for one vendor to succeed at both packaged software and SaaS?

SaaS and packaged software have quite different development priorities and processes. SaaS vendors deliver and support software that:

  • Runs on a single technology stack.
  • Is run only at one or a small number of physical locations.
  • Is run only in one or a small number of historical versions.
  • May be upgraded multiple times per month.
  • Can be assumed to be operated by employees of the SaaS company.
  • Needs, for customer acquisition and retention reasons, to be very easy for users to learn.

But traditional packaged software:

  • Runs on technology the customer provides and supports, at the location of the customer’s choice.
  • Runs in whichever versions customers have not yet upgraded from.
  • Should — to preserve the sanity of all concerned — have only a few releases per year.
  • Is likely to be operated by less knowledgeable or focused staff than a SaaS vendor enjoys.
  • Can sometimes afford more of an end-user learning curve than SaaS.

Thus, in most cases:

  • Traditional software creates greater support and compatibility burdens than SaaS does.
  • SaaS and on-premises software have very different release cycles.
  • SaaS should be easier for end-users than most traditional software, but …
  • … traditional software should be easier to administer than SaaS.

Further — although this is one difference that I think has at times been overemphasized — SaaS vendors would prefer to operate truly multi-tenant versions of their software, while enterprises less often have that need.

How this hard thing could be done

Most of the major problems with combining SaaS and packaged software efforts can be summarized in two words — defocused development. Even if the features are substantially identical, SaaS is developed on different schedules and for different platform stacks than packaged software is.

So can we design an approach to minimize that problem? I think yes. In simplest terms, I suggest:

  • A main development organization focused almost purely on SaaS.
  • A separate unit adapting the SaaS code for on-premises customers, with changes to the SaaS offering being concentrated in three aspects:
    • Release cadence.
    • Platform support.
    • Administration features, which are returned to the SaaS group for its own optional use.

Certain restrictions would need to be placed on the main development unit. Above all, because the SaaS version will be continually “thrown over the wall” to the sibling packaged-product group, code must be modular and documentation must be useful. The standard excuses — valid or otherwise — for compromising on these virtues cannot be tolerated.

There is one other potentially annoying gotcha. Hopefully, the SaaS group uses third-party products and lots of them; that’s commonly better than reinventing the wheel. But in this plan they need to use ones that are also available for third-party/OEM kinds of licensing.

My thoughts on release cadence start:

  • There should be a simple, predictable release cycle:
    • N releases per year, for N approximately = 4.
    • Strong efforts to adhere to a predictable release schedule.
  • A reasonable expectation is that what’s shipped and supported for on-premises use is 6-9 months behind what’s running on the SaaS service. 3-6 months would be harder to achieve.

The effect would be that on-premises software would lag SaaS features to a predictable and bounded extent.

As for platform support:

  • You have to stand ready to install and support whatever is needed. (E.g., in the conversation that triggered this post, the list started with Hadoop, Spark, and Tachyon.)
  • You have to adapt to customers’ own reasonably-current installations of needed components (but help them upgrade if they’re way out of date).
  • Writing connectors is OK. Outright porting from your main stack to another may be unwise.
  • Yes, this is all likely to involve significant professional services, at least to start with, because different customers will require different degrees of adaptation.

That last point is key. The primary SaaS offering can be standard, in the usual way. But the secondary business — on-premises software — is inherently services-heavy. Fortunately, packaged software and professional services can be successfully combined.

And with that I’ll just stop and reiterate my conclusion:

It may be advisable to offer both SaaS and services-heavy packaged software as two options for substantially the same product line.

Related link

  • Point #4 of my VC overlord post is relevant  — and Point #3 even more so. :)
Categories: Other

Release of University of California at Davis Case Study on e-Literate TV

Michael Feldstein - Sun, 2015-07-19 16:55

By Phil Hill

Today we are thrilled to release the fifth and final case study in our new e-Literate TV series on “personalized learning”. In this series, we examine how that term, which is heavily marketed but poorly defined, is implemented on the ground at a variety of colleges and universities. We plan to cap off this series with two analysis episodes looking at themes across the case studies.

We are adding three episodes from the University of California at Davis (UC Davis), a large research university with a strong emphasis on science, technology, engineering, and math (STEM) fields. The school has determined that the biggest opportunity to improve STEM education is to improve success rates in introductory science classes – the ones typically taught in large lecture format at universities of its size. Can you personalize this most impersonal of academic experiences? What opportunities and barriers do institutions face when they try to extend personalized learning approaches?

You can see all the case studies (either 2 or 3 episodes per case study) at the series link, and you can access individual episodes below.

UC Davis Case Study: Personalizing the Large Lecture Class

UC Davis Case Study: Intro to Biology and Intro to Chemistry Examples

UC Davis Case Study: Opportunities and Barriers to Extending Personalization

e-Literate TV, owned and run by MindWires Consulting, is funded in part by the Bill & Melinda Gates Foundation. When we first talked about the series with the Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

As with the previous series, we are working in collaboration with In the Telling, our partners providing the platform and video production. Their Telling Story platform allows people to choose their level of engagement, from just watching the video to accessing synchronized transcripts and accessing transmedia. We have added content directly to the timeline of each video, bringing up further references, like e-Literate blog posts or relevant scholarly articles, in context. With In The Telling’s help, we are crafting episodes that we hope will be appealing and informative to those faculty, presidents, provosts, and other important college and university stakeholders who are not ed tech junkies.

We welcome your feedback, either in comments or on Twitter using the hashtag #eLiterateTV. Enjoy!

The post Release of University of California at Davis Case Study on e-Literate TV appeared first on e-Literate.