Feed aggregator

Taleo Interview Evaluations, Part 1

Oracle AppsLab - Wed, 2014-07-02 10:50

Time to share a new concept demo, built earlier this Spring by Raymond and Anthony (@anthonyslai), both of whom are ex-Taleo.

Back in April, I got my first exposure to Taleo during a sales call. I was there with the AUX contingent, talking about Oracle HCM Cloud Release 8, featuring Simplified UI, our overall design philosophies and approaches, i.e. simplicity-mobility-extensibility, glance-scan-commit, and our emerging technologies work and future cool stuff.

I left that meeting with an idea for a concept demo, streamlining the interview evaluation process with a Google Glass app.

The basic pain point here is that recruiters have trouble moving the hiring managers they support through the hiring process, because these managers have other job responsibilities.

It’s the classic Catch-22 of hiring; you need more people to help do work, but you’re so busy doing the actual work, you don’t have time to do the hiring.

Anyway, Taleo Recruiting has the standard controls, approvals and gating tasks that any hiring process does. One of these gating tasks is completing the interview evaluation; after interviewing a candidate, the interviewer, typically the hiring manager and possibly others, completes an evaluation of the candidate that determines her/his future path in the process.

Good evaluation, the candidate moves on in the process. Poor evaluation, the candidate does not.

Both Taleo’s web app and mobile app provide the ability to complete these evaluations, and I thought it would be cool to build a Glass app just for interview evaluations.

Having a hands-free way to complete an evaluation would be useful for a hiring manager walking between meetings on a large corporate campus or driving to a meeting. The goal here is to bring the interview evaluation closer to the actual end of the interview, while the chat is still fresh in the manager’s mind.

Imagine you’re the hiring manager. Rather than delaying the evaluation until later in the day (or week), walk out of an interview, command Glass to start the evaluation, have the questions read directly into your ear, dictate your responses and submit.

Since the Glass GDK dropped last Winter, Anthony has been looking for a new Glass project, and I figured he and Raymond would run with a Taleo project. They did.

The resulting concept demo is a Glass app and an accompanying Android app that can also be used as a dedicated interview evaluation app. Raymond and Anthony created a clever way to transfer data using the Bluetooth connection between Glass and its parent device.

Here’s the flow, starting with the Glass app. The user can either say “OK Glass” and then say “Start Taleo Glass,” or tap the home card, swipe through the cards and choose the Start Taleo Glass card.


The Glass app will then wait for its companion Android app to send the evaluation details.


Next, the user opens the Android app to see all the evaluations s/he needs to complete, and then selects the appropriate one.


Tapping Talk to Google Glass sends the first question to the Glass over the Bluetooth connection. The user sees the question in a card, and Glass also dictates the question through its speaker.


Tapping Glass’ touchpad turns on the microphone so the user can dictate a response, either choosing an option for a multiple choice question or dictating an answer for an open-ended question. As each answer is received by the Android app, the evaluation updates, which is pretty cool to watch.


The Glass app goes through each question, and once the evaluation is complete, the user can review her/his answers on the Android app and submit the evaluation.

The guys built this for me to show at a Taleo and HCM Cloud customer expo, similar to the one AMIS hosted in March. After showing it there, I decided to expand the concept demo to tell a broader story. If you want to read about that, stay tuned for Part 2.

Itching to sound off on this post? Find the comments.

Update: The standard disclaimer applies here. This is not a product of any kind. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.


Jonathan Lewis - Wed, 2014-07-02 10:09

Catching up (still) from the Trivadis CBO days, here’s a little detail which had never crossed my mind before.

where   (col1, col2) < (const1, const2)

This isn’t a legal construct in Oracle SQL, even though it’s legal in other dialects of SQL. The logic is simple (allowing for the usual anomaly with NULL): the predicate should evaluate to true if (col1 < const1), or if (col1 = const1 and col2 < const2). The thought that popped into my mind when Markus Winand showed a slide with this predicate on it – and then pointed out that equality was the only option that Oracle allowed for multi-column operators – was that, despite not enabling the syntax, Oracle does implement the mechanism.

If you’re struggling to think where, it’s in multi-column range partitioning: (value1, value2) belongs in the partition with high value (k1, k2) if (value1 < k1) or if (value1 = k1 and value2 < k2).
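The same row-value logic maps directly onto lexicographic tuple comparison in many languages. Here's a small Python sketch (with illustrative values) showing that built-in tuple ordering agrees with the expanded predicate:

```python
# (a, b) < (x, y) is true when a < x, or when a == x and b < y --
# the same rule Oracle applies for multi-column range partitioning.
def row_value_less(col1, col2, const1, const2):
    return col1 < const1 or (col1 == const1 and col2 < const2)

# Python's built-in tuple comparison implements exactly this logic.
for row in [(1, 9), (2, 3), (2, 5), (3, 0)]:
    assert (row < (2, 5)) == row_value_less(row[0], row[1], 2, 5)
```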

Restore datafile from service: A cool #Oracle 12c Feature

The Oracle Instructor - Wed, 2014-07-02 09:02

You can restore a datafile directly from a physical standby database to the primary. Over the network. With compressed backupsets. How cool is that?

Here’s a demo from my present class Oracle Database 12c: Data Guard Administration. prima is the primary database on host01, physt is a physical standby database on host03. There is an Oracle Net configuration on both hosts that enables host01 to tnsping physt and host03 to tnsping prima.


[oracle@host01 ~]$ rman target sys/oracle@prima

Recovery Manager: Release - Production on Wed Jul 2 16:43:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2084081935)

RMAN> run
{
set newname for datafile 4 to '/home/oracle/stage/users01.dbf';
restore (datafile 4 from service physt) using compressed backupset;
catalog datafilecopy '/home/oracle/stage/users01.dbf';
}

executing command: SET NEWNAME

Starting restore at 02-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=47 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service physt
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /home/oracle/stage/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-JUL-14

cataloged datafile copy
datafile copy file name=/home/oracle/stage/users01.dbf RECID=8 STAMP=851877850

This does not require backups taken on the physical standby database.

Tagged: 12c New Features, Backup & Recovery, Data Guard
Categories: DBA Blogs

What’s New with Apps Password Change in R12.2 E-Business Suite ?

Pythian Group - Wed, 2014-07-02 08:39

The apps password change routine in Release 12.2 of E-Business Suite has changed a little. We now have extra options for changing the password, as well as some manual steps to perform after changing it using FNDCPASS.

There is a new utility called AFPASSWD. Unlike FNDCPASS, this utility doesn't require you to enter the apps and system user passwords, which makes it possible to separate duties between the database administrator and the application administrator. In most cases, both of these roles are handled by the same DBA, but in large organizations there may be different teams managing the database and the application. You can read about the different options available in AFPASSWD in the EBS Maintenance guide.

Whether you use FNDCPASS or AFPASSWD to change the APPLSYS/APPS password, you must also perform some additional steps. This is because in R12.2, the old AOL/J connection pooling is replaced with a Weblogic connection pool (JDBC Datasource). Currently this procedure is not yet automated; it would be good if it could be automated using some WLS scripting.

  • Shut down the application tier services
  • Change the APPLSYS password, as described for the utility you are using.
  • Start AdminServer using the script from your RUN filesystem
  • Do not start any other application tier services.
  • Update the “apps” password in WLS Datasource as follows:
    • Log in to WLS Administration Console.
    • Click Lock & Edit in Change Center.
    • In the Domain Structure tree, expand Services, then select Data Sources.
    • On the “Summary of JDBC Data Sources” page, select EBSDataSource.
    • On the “Settings for EBSDataSource” page, select the Connection Pool tab.
    • Enter the new password in the “Password” field.
    • Enter the new password in the “Confirm Password” field.
    • Click Save.
    • Click Activate Changes in Change Center.
  • Start all the application tier services using the script.
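The console steps above could in principle be scripted with WLST (Weblogic's Jython-based scripting tool). The sketch below is purely illustrative and untested: the admin URL, credentials, and MBean path are assumptions to verify against your own domain before use.

```
# Hypothetical WLST sketch (untested): update the EBSDataSource password.
# The connection details and the MBean path are assumptions -- confirm
# them against your domain and the WLST reference before running.
connect('weblogic', adminPassword, 't3://adminhost:7001')
edit()
startEdit()
cd('/JDBCSystemResources/EBSDataSource/JDBCResource/EBSDataSource'
   + '/JDBCDriverParams/EBSDataSource')
cmo.setPassword(newAppsPassword)
save()
activate()
disconnect()
```

Even if scripted, the surrounding procedure (stopping application tier services, changing the APPLSYS password first, restarting services) would still need to happen in the documented order.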

I will be posting more of these What’s New with R12.2 articles in the future. Post your experiences changing passwords in Oracle EBS in the comments section. I will be happy to hear your stories and give my input.

Categories: DBA Blogs

Essential Hadoop Concepts for Systems Administrators

Pythian Group - Wed, 2014-07-02 08:38

Of course, everyone knows Hadoop as the solution to Big Data. What’s the problem with Big Data? Well, mostly it’s just that Big Data is too big to access and process in a timely fashion on a conventional enterprise system. Even a really large, optimally tuned, enterprise-class database system has conventional limits in terms of its maximum I/O, and there is a scale of data that outstrips this model and requires parallelism at a system level to make it accessible. While Hadoop is associated in many ways with advanced transaction processing pipelines, analytics and data sciences, these applications are sitting on top of a much simpler paradigm… that being that we can spread our data across a cluster and provision I/O and processor in a tunable ratio along with it. The tune-ability is directly related to the hardware specifications of the cluster nodes, since each node has processing, I/O and storage capabilities in a specific ratio. At this level, we don’t need Java software architects and data scientists to take advantage of Hadoop. We’re solving a fundamental infrastructure engineering issue, which is “how can we scale our I/O and processing capability along with our storage capacity”? In other words, how can we access our data?

The Hadoop ecosystem at its core is simply a set of RESTfully interacting Java processes communicating over a network. The base system services, such as the data node (HDFS) and task tracker (MapReduce) run on each node in the cluster, register with an associated service master and execute assigned tasks in parallel that would normally be localized on a single system (such as reading some data from disk and piping it to an application or script). The result of this approach is a loosely coupled system that scales in a very linear fashion. In real life, the service masters (famously, NameNode and JobTracker) are a single point of failure and potential performance bottleneck at very large scales, but much has been done to address these shortcomings. In principle, Hadoop uses the MapReduce algorithm to extend parallel execution from a single computer to an unlimited number of networked computers.

MapReduce is conceptually a very simple system. Here’s how it works. Given a large data set (usually serialized), broken into blocks (as for any filesystem) and spread among the HDFS cluster nodes, feed each record in a block to STDIN of a local script, command or application, and collect the records from STDOUT that are emitted. This is the “map” in MapReduce. Next, sort each record by key (usually just the first field in a tab-delimited record emitted by the mapper, but compound keys are easily specified). This is accomplished by fetching records matching each specific key over the network to a specific cluster node, and accounts for the majority of network I/O during a MapReduce job. Finally, process the sorted record sets by feeding the ordered records to STDIN of a second script, command or application, collecting the result from STDOUT and writing them back to HDFS. This is the “reduce” in MapReduce. The reduce phase is optional, and usually takes care of any aggregations such as sums, averages and record counts. We can just as easily pipe our sorted map output straight to HDFS.
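The map, sort-by-key, and reduce phases described above can be sketched in a few lines of Python. This is a toy word count on a single machine, not Hadoop code, but the three phases are the same ones the cluster runs in parallel:

```python
from itertools import groupby

# Toy word count illustrating MapReduce's three phases on one machine.
records = ["apple banana apple", "banana cherry"]

# Map phase: emit one (key, value) pair per word.
mapped = [(word, 1) for line in records for word in line.split()]

# Shuffle/sort phase: bring all values for the same key together.
mapped.sort(key=lambda kv: kv[0])

# Reduce phase: aggregate each key's values (here, a sum).
counts = {key: sum(v for _, v in group)
          for key, group in groupby(mapped, key=lambda kv: kv[0])}

print(counts)  # {'apple': 2, 'banana': 2, 'cherry': 1}
```

In a real job, the sort phase is what moves records between cluster nodes, which is why it accounts for most of a job's network I/O.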

Any Linux or Unix systems administrator will immediately recognize that using STDIO to pass data means that we can plug any piece of code into the stream that reads and writes to STDIO… which is pretty much everything! To be clear on this point, Java development experience is not required. We can take advantage of Linux pipelines to operate on very large amounts of data. We can use ‘grep’ as a mapper. We can use the same pipelines and commands that we would use on a single system to filter and process data that we’ve stored across the cluster. For example,

grep -i "${RECORD_FILTER}" | cut -f2 | cut -d'=' -f2 | tr '[:upper:]' '[:lower:]'

We can use Python, Perl and any other languages with support configured on the task tracker systems in the cluster, as long as our scripts and applications read and write to STDIO. To execute these types of jobs, we use the Hadoop Streaming jar to wrap the script and submit it to the cluster for processing.
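To make the STDIO contract concrete, here's a minimal streaming-style mapper and reducer sketched in Python. Each reads lines on stdin and writes tab-delimited records on stdout, which is all Hadoop Streaming requires of a mapper or reducer program; the local "map | sort | reduce" dry run at the bottom mimics what the cluster does at scale:

```python
import io

def mapper(stdin, stdout):
    # Emit one tab-delimited (word, 1) record per word, lowercased.
    for line in stdin:
        for word in line.split():
            stdout.write(word.lower() + "\t1\n")

def reducer(stdin, stdout):
    # Input arrives sorted by key; sum each key's run of values.
    current, total = None, 0
    for line in stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                stdout.write("%s\t%d\n" % (current, total))
            current, total = key, 0
        total += int(value)
    if current is not None:
        stdout.write("%s\t%d\n" % (current, total))

# Local dry run of the job: map | sort | reduce.
map_out = io.StringIO()
mapper(io.StringIO("Apple banana APPLE\nbanana cherry\n"), map_out)
sorted_in = io.StringIO("".join(sorted(map_out.getvalue().splitlines(True))))
reduce_out = io.StringIO()
reducer(sorted_in, reduce_out)
print(reduce_out.getvalue())  # apple 2, banana 2, cherry 1
```

On a cluster, the same two programs would be saved as scripts and submitted with the Hadoop Streaming jar's -mapper and -reducer options; Hadoop handles the sort between them.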

What does this mean for us enterprise sysadmins? Let’s look at a simple, high level example. I have centralized my company’s log data by writing it to a conventional enterprise storage system. There’s lots of data and lots of people want access to it, including operations teams, engineering teams, business intelligence and marketing analysts, developers and others. These folks need to search, filter, transform and remodel the data to shake out the information they’re looking for. Conventionally, I can scale up from here by copying my environment and managing two storage systems. Then 4. Then 8. We must share and mount the storage on each system that requires access, organize the data across multiple appliances and manage access controls on multiple systems. There are many business uses for the data, and so we have many people with simultaneous access requirements, and they’re probably using up each appliance’s limited I/O with read requests. In addition, we don’t have the processor available… we’re just serving the data at this point, and business departments are often providing their own processing platforms from a limited budget.

Hadoop solves this problem of scale above the limits of conventional enterprise storage beautifully. It’s a single system to manage that scales in a practical way to extraordinary capacities. But the real value is not the raw storage capacity or advanced algorithms and data analytics available for the platform… it’s about scaling our I/O and processing capabilities to provide accessibility to the data we’re storing, thereby increasing our ability to leverage it for the benefit of our business. The details of how we leverage our data is what we often leave for the data scientists to figure out, but every administrator should know that the basic framework and inherent advantages of Hadoop can be leveraged with the commands and scripting tools that we’re already familiar with.

Categories: DBA Blogs

Differences Between R12.1 and R12.2 Integration with OAM

Pythian Group - Wed, 2014-07-02 08:37

With the revamp of the technology stack in R12.2 of Oracle E-Business Suite (EBS), the way we integrate Oracle Access Manager (OAM) has changed. R12.2 is now built on the Weblogic tech stack, which drastically changes how it integrates with other Fusion Middleware products like OAM.

Here is an overview of the steps to configure OAM with EBS R12.1:

  • Install Oracle HTTP Server (OHS) 11g
  • Deploy & Configure Webgate on OHS 11g
  • Install Weblogic
  • Deploy & Configure Accessgate on Weblogic
  • Integrate Webgate and Accessgate with EBS and OAM/OID

R12.2 has both OHS and Weblogic built in, so we no longer have to install OHS and Weblogic for Webgate and Accessgate. All we have to do is deploy and configure Webgate and Accessgate. Webgate is deployed on top of the R12.2 OHS 11g home. Accessgate is deployed as a separate managed server (oaea_server1) on top of R12.2 Weblogic.

Here is the pictorial version of the same

R12.1 and 11i EBS integration with OAM/OID



R12.2 Integration with OAM/OID


Basically, R12.2 reduces the number of moving parts in the OAM integration with EBS. It saves DBAs a lot of time, as it reduces the number of servers to manage.


Integrating Oracle E-Business Suite Release 12.2 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1576425.1)

Integrating Oracle E-Business Suite Release 12 with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Doc ID 1484024.1)

Images are courtesy of Oracle from note “Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Doc ID 1388152.1)”


Categories: DBA Blogs

How Oracle WebCenter Customers Build Digital Businesses: Contending with Digital Disruption

WebCenter Team - Wed, 2014-07-02 07:00
Guest Blog Post by: Geoffrey Bock, Principal, Bock & Company
Customer Conversations

What are Oracle WebCenter customers doing to exploit innovative digital technologies and develop new sources of value? How are they mobilizing their enterprise applications and leveraging opportunities of the digital business revolution?
To better understand the landscape for digitally powered businesses, I talked to several Oracle WebCenter customers and systems integrators across a range of industries -- including hospitality, manufacturing, life sciences, and the public sector. Through in depth conversations with IT and business leaders, I collected a set of stories about their mobile journeys -- how they are developing next-generation enterprise applications that weave digital technologies into their ongoing operations.
In this and two subsequent blogs, I will highlight several important points from my overall roadmap for developing digital businesses.
Beyond an Aging Infrastructure

As a first step, successful customers are contending with digital disruption, and leveraging their inherent strengths to transform operations. Today they are web-aware, if not already web-savvy. Most organizations launched their initial sites more than fifteen years ago. They have steadily added web-based applications to support targeted initiatives.
Yet the customers I interviewed are now at a crossroads. They realize that they need to refresh, modernize, and mobilize their enterprise application infrastructure to build successful digital businesses.
  • One IT leader describes how her firm implemented a cutting-edge enterprise portal ten years ago. Designed for order processing and resources management, the portal now runs outdated technologies and is unable to support needed employee-facing applications.
  • Another business leader has a similar story. The company still relies on a custom designed web-based application. The technology is obsolete and the people knowledgeable about maintaining the application are difficult to find.
  • A third IT leader describes how her organization collects information through several Cold Fusion sites, and needs to replace them in order to deliver more flexible self-service applications.
From my perspective, these leaders are recognizing the power of digital disruption. To create new value, they must deliver seamless customer-, partner-, and employee-facing experiences. They are confronting the limitations of their current application infrastructure and are turning to Oracle for long-term solutions.
Rather than simply enhance what they have, leaders are opting for modernization. They need to develop and deploy native digital experiences. Web-based applications that are bolted onto an aging infrastructure are no longer sufficient.
Change and Continuity

Yet there is also continuity around integrating the end-to-end experiences. Let’s take the case of a large manufacturing firm now mobilizing its digital business around Oracle WebCenter. The business leaders identified the multiple steps in the buying process – the information customers and partners need to have to assess alternatives and make purchasing decisions.
The firm had developed multiple web sites to publish product information, offer design advice, and schedule follow-up meetings. But the end result was a fragmented and disconnected set of activities, relying first on information from one system, then from another, and lacking an end-to-end view for measuring results.
The leaders realized that they needed to connect the dots and deliver a seamless experience. In the case of this manufacturing firm, a key step blends online with real-time – helping customers schedule appointments with designers who advise them about design alternatives and product options. (From the manufacturer’s perspective, designers are channel partners who sell the finished goods and deliver support services.)
The breakthrough that accelerates the buying process focuses on these customer/designer interactions -- assembling all of the necessary information into a seamless experience, and making it easy for customers to engage with designers to finalize designs and place orders. As a result, this manufacturing firm mitigates the threat of digital disruption by mobilizing resources to complete a high-value task.

The firm empowers its partner channel by reinventing a key business process for the digital age. This becomes a win-win opportunity that increases customer satisfaction while also improving sales opportunities.

 Delivering Moments of Engagement Across the Enterprise

Leveraging Collective Knowledge and Subject Matter Experts to Improve the Quality of Database Support

Chris Foot - Wed, 2014-07-02 06:10

The database engine plays a strategic role in the majority of organizations. It provides the mechanism to store physical data along with business rules and executable business logic. The database’s area of influence has expanded to a point where it has become the heart of the modern IT infrastructure. Because of its importance, enterprises expect their databases to be reliable, secure and available.

Rapid advances in database technology combined with relatively high database licensing and support costs compel IT executives to ensure that their organization fully utilizes the database product’s entire feature set. The more solutions the database inherently provides, the more cost effective it becomes. These integrated features allow technicians to solve business problems without the additional costs of writing custom code and/or integrating multiple vendor solutions.

The issue then becomes one of database complexity. As database vendors incorporate new features into the database, it becomes more complex to administer. Modern database administrators require a high level of training to be able to effectively administer the environments they support. Without adequate training, problems are commonplace, availability suffers and the database’s inherent features are not fully utilized.

The Benefits of Collective Knowledge

Successful database administration units understand that providing better support to their customers not only comes from advances in technology but also from organizational innovation. The selection of support-related technologies is important, but it is the effective implementation and administration of those technologies that is critical to organizational success.

Database team managers should constantly leverage the collective knowledge of their entire support staff to improve the quality of support the team provides and reduce the amount of time required to solve problems.

One strategy to build the team’s expertise is to motivate individual team members to become Subject Matter Experts in key database disciplines. This strategy is performed informally hundreds of times in IT daily. A support professional is required to perform a given task and “gets stuck”. They spin their wheels and then decide to run down the hall and find someone they feel can provide them with advice. They consult with one or more fellow team members to solve the problem at hand.

The recommendation is to have a more formal strategy in place so that each team member, in addition to their daily support responsibilities, becomes a deep-dive specialist in a given database discipline. Their fellow team members are then able to draw from that expertise.

Increasing the Efficiency of Support: Subject Matter Experts

The database environment has become so complex that it precludes database administrators from becoming true experts in all facets of database technology. RDX’s large administrative staff allows it to increase efficiency by creating specialists in key database disciplines. In addition to expertise in providing day-to-day support, each of RDX’s support staff members is required to become an expert in one or more database disciplines including backup and recovery, highly available architectures, SQL tuning, database performance, database monitoring, UNIX/Windows scripting and database security.

RDX allocates the support person with the highest-level skill sets in that particular task to provide the service requested by the customer. This methodology ensures that the customer gets the most experienced person available to perform complex tasks. Who do you want to install that 5-node Oracle RAC cluster? A team member that has limited knowledge, or one that has extensively studied Oracle’s high availability architecture and performs RAC installations on a daily basis?

Although your team may only consist of a half-dozen administrators, that doesn’t mean that you aren’t able to leverage the benefits that the Subject Matter Experts strategy provides. Identify personnel on the team that are interested in a particular database support discipline (i.e. security, database performance, SQL performance, scripting, etc.) and encourage them to build their expertise in those areas. If they are interested in high availability, send them to classes, offer to reimburse them for books on that topic and/or allocate time for them to review HA-specific websites. Focus on the areas that are most critical to the needs of your shop. For instance, is your company having lots of SQL statement performance problems? A sound strategy is to have one of your team members focus on SQL tuning and support them throughout the entire educational process.

Also consider special skills during the DBA interview and selection process. At RDX, we always look for candidates that are able to provide deep-dive expertise in key database support disciplines. We have several DBAs on staff that have strong application development backgrounds including SQL performance tuning. This was in addition to possessing a strong background in database administration. We use the same strategy for HA architectures, and we look for candidates that have strong skills in any database advanced feature. We’re able to leverage that expertise for the customer’s benefit. The same strategy can be applied to any size team. Look for candidates that excel in database administration but are also strong in key areas that will improve your ability to support your internal customers.

In addition, you can also draw expertise from other teams. For example, you may have access to an application developer who is strong in SQL coding and tuning or an operating system administrator that excels in scripting. Build relationships with those personnel and leverage their experience and skill sets when needed. Ask them to provide recommendations on training to your team or assist when critical problems occur. Technicians are usually more than happy to be asked to help. Just make sure to be courteous when asking and thank them (and their manager) when they do help out.

Reducing Downtime Duration by Faster Problem Resolution

RDX’s large staff also reduces the amount of time spent on troubleshooting and problem solving. RDX is able to leverage the expertise of a very large staff of database, operating system and middle-tier administrators. Additionally, RDX is able to leverage the team’s expertise to provide faster resolution to database performance issues and outages. Since the support staff works with many different companies, they have seen a number of different approaches to most situations.

Ninety-nine percent of our support technicians work at the same physical site. This allows RDX to create a “war room” strategy for brainstorming activities and problem solving. All technicians needed to create a solution or solve a problem are quickly brought to bear when the need arises. Support technicians come from varied backgrounds and have many different skill sets. RDX is able to leverage these skills without having to search for the right person or wait for a return call. Work can take place immediately.

This “war room” strategy works for any size team. When a significant issue occurs, leverage the entire team’s skill sets. Appoint yourself to be the gate keeper to ensure that the team remains focused on the goal of quick problem resolution and that the conversation continues to be productive. Brainpower counts, and the more collective knowledge you have at your disposal, the more effective your problem resolution activities become.


Corporate information technology executives understand that their success relies upon their ability to cut costs and improve efficiency. Decreasing profit margins and increased competition in their market segment force them to continuously search for creative new solutions to reduce the cost of the services they provide. They also realize that this reduction in cost must not come at the expense of the quality of services their organization delivers.

RDX invites you to compare the benefits of our organizational architecture and quality improvement initiatives to our competitors, your in-house personnel or your on-site consultants. We firmly believe that our Collective Knowledge Support Model allows us to provide world class support.

The post Leveraging Collective Knowledge and Subject Matter Experts to Improve the Quality of Database Support appeared first on Remote DBA Experts.

Oracle E-Business Suite Security, Java 7 and Auto-Update

Maintaining a secure Oracle E-Business Suite implementation requires constant vigilance. For the desktop clients accessing Oracle E-Business Suite, Integrigy recommends running the latest version of Java 7 SE.  Java 7 is fully supported by Oracle with Public Updates through April 2015 and is patched with the latest security fixes. We anticipate that Oracle will release and certify Java 8 with the Oracle E-Business Suite, most likely in late 2014.

Most corporate environments utilize a standardized version of Java, tested and certified for corporate and mission critical applications. As such the Java auto-update functionality cannot be used to automatically upgrade Java on all desktops. These environments require new versions of Java to be periodically pushed to all desktops. For more information on how to push Java updates through software distribution see MOS Note 1439822.1. This note also describes how to download Java versions with the Java auto-update functionality disabled.

Keep in mind too that the version of Java used with the E-Business Suite should be obtained from My Oracle Support. Your Desktop support teams may or may not have Oracle support accounts.

Other points to keep in mind:

  • To support Java 7, the Oracle E-Business Suite application servers must be updated per the instructions in MOS Note 393931.1
  • “Non-Static Versioning” should be used with the E-Business Suite to allow later versions of the JRE plug-in to be installed on the desktop client. For example, with Non-Static versioning, JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. With Non-Static versioning, the web server’s version of Java is the minimum version that can be used on the desktop client.
  • You will need to implement Enhanced JAR File Signing for the later versions of Java 7 (refer to the Integrigy blog posting for more information).
  • Remember to remove all versions of Java that are no longer needed – for example, JInitiator.

You may continue using Java 6. As an Oracle E-Business Suite customer, you are entitled to Java 6 updates through Extended Support. The latest Java 6 update (6u75) may be downloaded from My Oracle Support. This version (6u75) is equivalent to 7u55 in terms of security fixes.
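For desktop teams auditing which Java updates are installed, the comparison logic above (update levels within a major version, plus the 6u75/7u55 security-fix pairing) can be sketched in a few lines. This is a hypothetical helper, not an Oracle-provided tool; only the version strings and the 6u75-to-7u55 equivalence come from the text above.

```python
import re

def parse_java_version(version_string):
    """Parse a Java version string like '1.6.0_75' into (major, update)."""
    match = re.match(r"1\.(\d+)\.\d+_(\d+)$", version_string)
    if not match:
        raise ValueError("unrecognized Java version: %s" % version_string)
    return int(match.group(1)), int(match.group(2))

# Security-equivalent update pair noted above: 6u75 carries the same
# security fixes as 7u55.
SECURITY_EQUIVALENTS = {(6, 75): (7, 55)}

def meets_baseline(installed, baseline):
    """True if the installed (major, update) is at or above the baseline,
    treating the listed cross-major pairs as equivalent."""
    if SECURITY_EQUIVALENTS.get(installed) == baseline:
        return True
    return installed[0] == baseline[0] and installed[1] >= baseline[1]
```

For example, `meets_baseline(parse_java_version("1.6.0_75"), (7, 55))` would report the Extended Support build as acceptable against a 7u55 security baseline.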

If you have questions, please contact us at



Tags: Security Strategy and Standards, Oracle E-Business Suite, IT Security
Categories: APPS Blogs, Security Blogs

New Batch Configuration Wizard (BatchEdit) available

Anthony Shorten - Tue, 2014-07-01 16:36

A new configuration facility, called Batch Edit, is available as part of Oracle Utilities Application Framework V4. One of the concerns customers and partners asked us to address was making configuration of the batch architecture simpler and less error prone. A new command line utility has been introduced to allow customers to quickly and easily implement a robust technical architecture for batch. The facility provides the following:

  • A simple command interface to create and manage configurations for clusters, threadpools and submitters in batch.
  • A set of optimized templates to simplify configuration but also promote stability amongst configurations. Complex configurations can be error prone, which can cause instability. These templates, based on optimal configurations from customers, partners and Oracle's own performance engineering group, attempt to simplify the configuration process whilst supporting enough flexibility to cover implementation scenarios.
  • The cluster interface supports multi-cast and unicast configurations and adds a new template optimized for single server clusters. The single server cluster is ideal for use in non-production situations such as development, testing, conversion and demonstrations. The cluster templates have been optimized to use the high availability facilities in Oracle Coherence and to optimize network operations.
  • The threadpoolworker interface allows implementers to configure all the attributes from a single interface, including, for the first time, the ability to create cache threadpools. This special type of threadpoolworker does not run submitters but allows implementations to reduce the network overhead of individual components communicating across a cluster. Cache threadpools provide a mechanism for Coherence to store and relay the state of all the components in a concentrated format, and they also serve as a convenient conduit for the Global JMX capability.
  • The submitter interface allows customers and implementors to create global and job specific properties files.
  • Tagging is supported in the utility to allow groups of threadpools and submitters to share attributes.
  • The utility provides helpful context sensitive help for all its functions, parameters and configurations with advanced help also available.
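The template-plus-tagging idea in the list above can be illustrated with a small sketch. Nothing below is the actual Batch Edit implementation or its file format; the template name, property keys and merge order are hypothetical, meant only to show how a base template, shared tag attributes and per-submitter overrides could combine into one set of properties.

```python
def build_properties(template, tags, overrides):
    """Merge a base template, then shared tag attributes, then
    submitter-specific overrides (later sources win on conflicts)."""
    merged = dict(template)
    for tag in tags:
        merged.update(tag)
    merged.update(overrides)
    return merged

# Hypothetical values, for illustration only.
SINGLE_SERVER_TEMPLATE = {"cluster.mode": "single-server", "threads": "1"}
NIGHTLY_BATCH_TAG = {"threads": "4", "log.level": "INFO"}

props = build_properties(
    SINGLE_SERVER_TEMPLATE,
    tags=[NIGHTLY_BATCH_TAG],
    overrides={"job.name": "BILLING"},
)
```

The appeal of this layering is that a group of submitters sharing a tag pick up attribute changes in one place, which is exactly the stability-through-fewer-hand-edits goal the templates aim for.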

Over the next few weeks I will be publishing articles highlighting features and functions of this new facility.

More information about Batch Edit is available from the Batch Server Administration Guide shipped with your product and the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

Paper on PeopleSoft Search Technology

PeopleSoft Technology Blog - Tue, 2014-07-01 13:31

This paper has been around for a while, but some people may not have seen it. As customers move to the latest releases of PeopleTools, they are adopting our new search technology, and this paper can help with understanding all aspects of the new search. The paper covers the following subjects:

  1. Getting Started with PeopleSoft Search Technology
  2. Understanding PeopleSoft Search Framework
  3. Defining Search Definition Queries
  4. Creating Query and Connected Query Search Definitions
  5. Creating File Source Search Definitions
  6. Creating Web Source Search Definitions
  7. Creating Search Categories
  8. Administering PeopleSoft Search Framework
  9. Working with PeopleSoft Search Framework Utilities
  10. Working with PeopleSoft Search Framework Security Features
  11. Working with PeopleSoft Search (once it is deployed)

As you can see, this paper covers it all.  It is a valuable resource for anyone deploying and administering the new PeopleSoft search.

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Tue, 2014-07-01 12:38
Contributions by Angela Golla, Infogram Deputy Editor

Advisor Webcast Recordings
Did you know that Advisor Webcasts are recorded and available for download? Topics covered include many Oracle products as well as My Oracle Support. Note 740966.1 has the details on how the program works.

July 1, 1858: Co-discovery of Evolution by Natural Selection

FeuerThoughts - Tue, 2014-07-01 11:51
On this day in 1858, members of the Linnaean Society of London listened to the reading of a composite paper, with two authors, announcing the discovery of evolution by natural selection.
One author you've probably heard of: Charles Darwin
The other? Famous in his time, but in the 20th and 21st centuries largely forgotten: Alfred Russel Wallace.
Darwin was a Big Data scientist, spending 20 years after his trip to the Galapagos gathering data from his own experiments and from botanists around the world, to make his theory unassailable. Wallace was a field naturalist, studying species and variation, up close and very personal.
Both ended up in the same place at roughly the same time, driven by the inescapable conclusion from these three facts:
1. More organisms are born than can survive (for their full "normal" lifespan).
2. Like father, like son: we inherit characteristics from our parents.
3. NOT like father, like son: each offspring varies in some way from its parents.
So who/what survives to reproduce and pass on its genes? Or rather, who dies and why? You can die purely by accident. You are the biggest, strongest lion. Nothing can beat you. But a tree falls on you. Dead and gone.
Or you can survive because you have an advantage, however slight, that another in your species lacks. Your beak is slightly more narrow and lets you get at all the nuts on the tree. Your legs are slightly longer so you can avoid the tiger. And so on, everything sorting out how to eat, how to survive long enough to reproduce, from bacteria to coral to fish to mammals.
And with each passing generation, the mutations that help you survive get passed along, and so we (humans and everyone, everything) change - sometimes slowly, sometimes quickly. But change we do. 
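The logic of those three facts can even be written down as a toy model. The sketch below is a standard textbook selection equation, not anything from Darwin or Wallace: each generation, a variant with a slight survival advantage (here, fitness 1.05 versus 1.00) makes up a larger share of the population, and over many generations even a rare variant comes to dominate.

```python
def next_frequency(p, fitness_advantaged=1.05, fitness_other=1.00):
    """One generation of selection: the advantaged variant's share grows
    in proportion to its relative fitness."""
    advantaged = p * fitness_advantaged
    other = (1 - p) * fitness_other
    return advantaged / (advantaged + other)

# A rare variant (1% of the population) with a 5% survival advantage:
p = 0.01
for generation in range(200):
    p = next_frequency(p)
# After 200 generations, the once-rare variant makes up most of the population.
```

Sometimes slowly, sometimes quickly, as the post says: the speed depends only on how large that fitness edge is.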
With this announcement on July 1, 1858, humans now had a way of understanding how the world works without having to fall back on some unknowable god or gods. And we have also been able to build on Wallace's and Darwin's insight to now understand, perhaps too well, how life works on our planet, and how similar we are to so many other species.
Which means - to my way of thinking - that we no longer have any excuses, we humans, for our ongoing devastation and depletion of our world and our co-inhabitants.
In a more rational world, in which humans shared their planet with everything around them, instead of consuming everything in sight, July 1 would be an international day of celebration.
Well, at least I posted a note on my blog! Plus I will go outside later and cut back invasives, to help native trees grow.
How will you celebrate International Evolution Day?
Here are some links to information about evolution, about the way these two men got to the point of announcing their discoveries, and more.
You will read in some of these articles about Wallace being "robbed" of his just fame and recognition; I must tell you that Wallace, in his own words and the way he lived his life, was gracious and generous in spirit. He always saw Darwin as the one who fully elaborated the theory, making its acceptance so instantly widespread across Europe. He did not seem the least bit jealous.
And Wallace was, in many ways, a far more interesting human being than Darwin. I encourage you to check out his autobiography, My Life, as a way of being introduced to one of my heroes.
Categories: Development

Microsoft’s database administration strengthens BYOD security

Chris Foot - Tue, 2014-07-01 11:02

The prevalence of the bring-your-own-device trend has incited new database security concerns while simultaneously improving employee performance. 

Enterprises don't want to sacrifice worker productivity and happiness simply because server activity can't be properly managed. There's no reason to abandon BYOD. All it requires is assiduous surveillance, new usage protocols and network optimization. 

The biggest concern comes from within 
Every organization has at least one staff member who couldn't be more dissatisfied with his or her current work situation. The idea of the disgruntled employee may seem somewhat cartoonish, but it's important that businesses consider the situation as a serious threat to data confidentiality. 

Chris DiMarco, a contributor to InsideCounsel, acknowledged that mobile devices can be useful assets to personnel harboring ill intentions. David Long-Daniels, co-chairman of Greenberg Traurig's Global Labor & Employment Practice, noted that malicious activity can be carried out through smartphones in a number of ways, and it all starts with willingly sharing information. 

"What happens if an individual leaves and you don't have a policy that allows you to wipe their device?" Long-Daniels posited, as quoted by the source. 

Set up a protocol 
Thankfully, there's a way you can deter malevolent employees from stealing critical information. Bret Arsenault, CIO of Microsoft and contributor to Dark Reading, noted that the software developer has successfully deterred deviancy by implementing database active monitoring and segregating personal and corporate data. He acknowledged that any device accessing company email must:

  • Encrypt the information on the mechanism
  • Be activated by a PIN
  • Enable remote management and application updates to protect Microsoft's programs

Handling transactions off-premise has been a significant boon for Microsoft. The organization consistently deploys products that act as intermediaries between its own databases and the personal devices of employees. In addition, the solution allows Microsoft to remove any corporate intelligence from devices in the event the user leaves the enterprise. 

Implement an access strategy 
Depending on what hardware an employee is using and how trustworthy a worker is deemed to be, Microsoft defines how much database access a person will receive. Arsenault maintained that the business asks the following questions:

  • What kind of email solution is an individual using? Is it personal or corporate?
  • Is his or her device managed and authenticated by Microsoft or handled solely by the employee?
  • Is the mechanism being used from a known or unidentified location?
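Those three questions amount to a simple decision table. The sketch below is a hypothetical illustration of that kind of tiering, not Microsoft's actual policy engine; the tier names and the exact rules are invented for the example. Each answer either keeps a device at full access or steps it down.

```python
def access_tier(corporate_email, device_managed, known_location):
    """Map the three device questions to an access tier.
    Hypothetical tiers for illustration: 'full', 'limited', 'blocked'."""
    if corporate_email and device_managed and known_location:
        return "full"       # corporate mail, managed device, known location
    if corporate_email or device_managed:
        return "limited"    # partially trusted: restrict database access
    return "blocked"        # unmanaged, personal, unknown: no access
```

A real policy would add more signals (PIN enforcement, encryption status from the earlier list), but the shape is the same: access is a function of device trust, not a flat yes/no.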

With the aforementioned approaches in mind and sound remote database support at their backs, enterprises will be able to benefit from the flexible workflow BYOD offers without suffering from security woes. 

The post Microsoft’s database administration strengthens BYOD security appeared first on Remote DBA Experts.


Catherine Devlin - Tue, 2014-07-01 10:37

Yesterday was my first day at 18F!

What is 18F? We're a small, little-known government organization that works outside the usual channels to accomplish special projects. It involves black outfits and a lot of martial arts.

Kidding! Sort of. 18F is a new agency within the GSA that does citizen-focused work for other parts of the U.S. Government, working on small, quick projects to make information more accessible. We're using all the tricks: small teams, agile development, rapid iteration, open-source software, test-first, continuous integration. We do our work in the open.

Sure, this is old hat to you, faithful blog readers. But bringing it into government IT work is what makes it exciting. We're hoping that the techniques we use will ripple out beyond the immediate projects we work on, popularizing them throughout government IT and helping efficiency and responsiveness throughout. This is a chance to put all the techniques I've learned from you to work for all of us. Who wouldn't love to get paid to work for the common good?

Obviously, this is still my personal blog, so nothing I say about 18F counts as official information. Just take it as my usual enthusiastic babbling.

Oracle Global FY15 Global Partner Kickoff

Last week, during the 25th and 26th of June, Oracle PartnerNetwork had the FY15 Global Partner Kickoff where you, hopefully, got to meet Oracle executives, including Rich Geraffo, SVP, Worldwide Alliances...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Configure shared storage for #em12c Business Intelligence Publisher (BIP)

DBASolved - Tue, 2014-07-01 07:52

Oracle Enterprise Manager 12c Release 4 has a lot of new features; however, I quickly want to focus on a feature that has been in OEM12c for a while: Business Intelligence Publisher (BIP). BIP has been a part of OEM12c since it was initially released; at first it was a bit cumbersome to get it installed. With more recent releases, the OEM team has done a great job of making the process a lot easier. Although this post is not directly about BIP installation, just understand that the process is easier and details can be found here.

What I want to focus on is how to configure BIP, once installed, to use shared storage. I don’t recall if shared storage was required in earlier versions of OEM12c; however, if you want to share BIP reports between OMS nodes in a high-availability configuration, a shared location is required. The initial directions for reconfiguring BIP for shared storage can be found here.

In order to allow multiple OMS nodes to support multiple BIP servers, the following command needs to be run:

emctl config oms -bip_shared_storage -config_volume <directory location> -cluster_volume <directory location>

Note: The directory location supplied for the shared location has to be accessible by both OMS nodes.

For example:
emctl config oms -bip_shared_storage -config_volume /oms/BIP/config -cluster_volume /oms/BIP/cluster

When the reconfiguring of BIP begins, you will be asked for the Admin User’s password (Weblogic User) and the SYSMAN password.  Supply these and then wait for the completion of the script. Once completed the CONFIG and CLUSTER directories for BIP will be moved to the location specified.

The new directory locations can be verified from the BIP web page under Administration -> Server Configuration.

In the end, reconfiguring BIP to use shared storage is quite simple.


twitter: @dbasolved


Filed under: OEM
Categories: DBA Blogs

Enterprise Manager Cloud Control 12cR4 Production Upgrade

Tim Hall - Tue, 2014-07-01 04:12

I’ve already written about the 12cR3 to 12cR4 upgrade here. I did a few run-throughs at home to practice it and it all seemed good.

Setting The Scene

Just to set the scene, for our production environment we run Cloud Control in a VMware virtual machine, using Oracle Linux 6.5 as the guest OS. With that setup, we can use a simple installation (DB and OMS on the same VM) and use VMware to provide our failover, rather than having to worry about multiple OMS installations and any DB failover technology etc. If there’s one thing I’ve learned about Cloud Control, it’s Keep It Simple, Stupid (KISS)! As far as our managed servers go, most of our databases and all our middle tier stuff runs on VMware and Oracle Linux too. We have a handful of things still hanging around on HP-UX and Solaris, which will hopefully be migrated soon…

Upgrade Attempt 1 : Non-Starter

Yesterday I started the upgrade of our production system. Pretty much straight out of the blocks I hit a roadblock. It didn’t like the agents running on our HP-UX servers. The upgrades of the HP-UX agents are so painful. Every time so far I’ve had to reinstall them. As a result, I didn’t bother to upgrade them last time and kept running with the previous version of the agents. The upgrade wouldn’t have anything to do with that, so I forgot about the Cloud Control upgrade and spent yesterday attempting to upgrade the HP-UX agents to 12cR3, before I could attempt the 12cR4 Cloud Control upgrade.

As usual, the upgrade of the agents on HP-UX involved me uninstalling, removing all the targets, installing, discovering all the targets and setting up the backups etc. Not all of it is scripted yet, so it is an annoying and painful process. I’m not sure if other HP-UX users suffer this, but it seems pretty consistently bad for us. The sooner we get rid of these straggling HP-UX servers the better!

So this wasn’t so much a failure of the upgrade. It was really down to me being lazy and not bothering to upgrade some agents.

Fast forward to this morning and I was actually ready to start the upgrade. :)

Upgrade Attempt 2 : Success

With the 12cR3 agents in place on HP-UX, the upgrade ran past that step with no problems and on to the main body of the installation. The install and upgrade were textbook.

I’ve upgraded the agent on the cloud control server, but I’m not going to upgrade any of the other agents until I know things are working fine.

Happy days!



Enterprise Manager Cloud Control 12cR4 Production Upgrade was first posted on July 1, 2014 at 11:12 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Big Data in the Cloud at Google I/O

William Vambenepe - Tue, 2014-07-01 00:55

Last week was a great party for the entire Google developer family, including Google Cloud Platform. And within the Cloud Platform, Big Data processing services. Which is where my focus has been in the almost two years I’ve been at Google.

It started with a bang, when our fearless leader Urs unveiled Cloud Dataflow in the keynote. Supported by a very timely demo (streaming analytics for a World Cup game) by my colleague Eric.

After the keynote, we had three live sessions:

In “Big Data, the Cloud Way“, I gave an overview of the main large-scale data processing services on Google Cloud:

  • Cloud Pub/Sub, a newly-announced service which provides reliable, many-to-many, asynchronous messaging,
  • the aforementioned Cloud Dataflow, to implement data processing pipelines which can run either in streaming or batch mode,
  • BigQuery, an existing service for large-scale SQL-based data processing at interactive speed, and
  • support for Hadoop and Spark, making it very easy to deploy and use them “the Cloud Way”, well integrated with other storage and processing services of Google Cloud Platform.

The next day, in “The Dawn of Fast Data“, Marwa and Reuven described Cloud Dataflow in a lot more details, including code samples. They showed how to easily construct a streaming pipeline which keeps a constantly-updated lookup table of most popular Twitter hashtags for a given prefix. They also explained how Cloud Dataflow builds on over a decade of data processing innovation at Google to optimize processing pipelines and free users from the burden of deploying, configuring, tuning and managing the needed infrastructure. Just like Cloud Pub/Sub and BigQuery do for event handling and SQL analytics, respectively.
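The core of that demo pipeline — a constantly updated lookup table of the most popular hashtags for a given prefix — can be sketched in a few lines outside of any framework. This is a plain in-memory sketch, not Cloud Dataflow code and not the code from the talk; the real pipeline distributes the same counting and lookup across workers in streaming mode.

```python
from collections import Counter

class HashtagIndex:
    """Keep running counts of hashtags and answer 'top tags for a prefix'."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, hashtag):
        # Called once per incoming tweet's hashtag; normalizes case.
        self.counts[hashtag.lower()] += 1

    def top_for_prefix(self, prefix, n=3):
        # Filter to tags matching the prefix, then rank by count.
        prefix = prefix.lower()
        matching = {tag: c for tag, c in self.counts.items()
                    if tag.startswith(prefix)}
        return [tag for tag, _ in Counter(matching).most_common(n)]

index = HashtagIndex()
for tag in ["worldcup", "worldcup", "worldcup2014", "wimbledon"]:
    index.observe(tag)
```

After those four observations, querying the prefix "world" would rank "worldcup" ahead of "worldcup2014"; the streaming version simply keeps updating the same table as events arrive.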

Later that afternoon, Felipe and Jordan showed how to build predictive models in “Predicting the future with the Google Cloud Platform“.

We had also prepared some recorded short presentations. To learn more about how easy and efficient it is to use Hadoop and Spark on Google Cloud Platform, you should listen to Dennis in “Open Source Data Analytics“. To learn more about block storage options (including SSD, both local and remote), listen to Jay in “Optimizing disk I/O in the cloud“.

It was gratifying to see well-informed people recognize the importance of these announcements and partners understand how this will benefit their customers. As well as some good press coverage.

It’s liberating to now be able to talk freely about recent progress on our quest to equip Google Cloud users with easy to use data processing tools. Everyone can benefit from Google’s experience making developers productive while efficiently processing data at large scale. With great power comes great productivity.

Categories: Other

Happy News for Readers - Packt Pub offers $10 Discount on Books on celebrating 10 glorious years

Senthil Rajendran - Mon, 2014-06-30 22:47
Would like to pass on the below message to Readers.
Packt Publishing are celebrating 10 glorious years of publishing books. To celebrate this huge milestone, from June 26th Packt is offering all of its eBooks and Videos at just $10 each for 10 days. This promotion covers every title and customers can stock up on as many copies as they like until July 5th. Explore this offer here