
Feed aggregator

General Availability announcement for Oracle Application Management Pack 12.1.0.2.0 for PeopleSoft

PeopleSoft Technology Blog - Tue, 2014-10-07 23:48


Oracle PeopleSoft is pleased to announce the General Availability of Oracle Application Management Pack 12.1.0.2.0 for PeopleSoft.

Oracle Application Management Pack, or AMP, is also known as the Oracle PeopleSoft Plug-in for Oracle Enterprise Manager. The Oracle PeopleSoft Plug-in is licensed as part of the Application Management Suite for PeopleSoft.

This release of Application Management Pack supports PeopleTools Releases 8.54, 8.53 and 8.52.

Here are some of the new features of Application Management Pack for PeopleSoft:

System Management Enhancements

Administration/Configuration/Monitoring:

· New ADF-Based UI: All of the administration, configuration, and monitoring UI is now available as an ADF UI with advanced dashboards.

· Improved PeopleTools System Discovery: Allows users to discover the PeopleTools database from one of the PeopleTools/Tuxedo domain targets.

· New Aggregate Target Homes: Aggregate Target Home pages ease inter-target navigation for users. They also come with new menu-based navigation, helping users navigate within PeopleTools targets with fewer hops.

· Configuration Comparison Templates: Configuration comparison templates allow customers to compare configurations of two or more PeopleTools environments.

· Diagnostic Framework: Enables users to collect extensive diagnostic logs, leading to faster resolution of target discovery, configuration, and monitoring issues.

· Performance Monitoring: PeopleTools customers will now be able to proactively monitor targets using new JMX-enabled monitoring metrics.

Change Management Enhancements

· Cloning: Supports cloning of Web Server and Application Server Domain Configurations.

Release Details

Downloading PeopleSoft Application Management Suite

PeopleSoft Application Management Suite can be downloaded from the Oracle Software Delivery Cloud by using the following instructions:

1. Go to the Oracle Software Delivery Cloud site.

2. Choose a language and click Continue.

3. Answer export validation questions.

4. Select PeopleSoft Enterprise from the list of product/media packs.

5. Choose the respective platform and click Go.

6. Select Oracle Application Management Suite 1.0.0.5.0 for PeopleSoft from the list and click Continue.

7. Select PeopleSoft Application Management Plug-in 12.1.0.2 for Oracle Enterprise Manager 12c and download.

Note: Starting with Enterprise Manager 12c, customers can download and install the application management pack via Self Update from the EM Store. For more details on the EM Store and the Self Update feature, see the Oracle Enterprise Manager Cloud Control Administrator's Guide.

Supported Releases and Platforms

The following Oracle Enterprise Manager Cloud Control releases, PeopleTools releases, and platforms are supported:

  • Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3.0)
  • Oracle Enterprise Manager Cloud Control 12c Release 4 (12.1.0.4.0)
  • PeopleTools Release 8.54
  • PeopleTools Release 8.53
  • PeopleTools Release 8.52
  • Supported Platforms: Oracle Application Management Pack for PeopleSoft is available on Linux, IBM AIX, Oracle Solaris, HP-UX Itanium, and Windows. For a complete list of supported platforms and operating systems, refer to the PeopleTools certification pages. For a complete list of Enterprise Manager supported platforms and operating systems, refer to the Oracle Enterprise Manager certification pages.

Installing Oracle Application Management Pack for PeopleSoft Release 12.1.0.2.0

Installation and Implementation Guides are available on OTN and on OSDC.

  • The PeopleSoft Application Management Plug-in 12.1.0.2.0 for Oracle Enterprise Manager 12c Install Guide is available as Part No. E57421-01.
  • The PeopleSoft Application Management Plug-in 12.1.0.2.0 for Oracle Enterprise Manager 12c Implementation Guide is available as Part No. E55343-01.
  • The Oracle Application Management Pack for PeopleSoft can be downloaded and installed by using the Self Update feature of Oracle Enterprise Manager.
    Please refer to the following documentation to understand more about the Self Update Feature:
    Oracle® Enterprise Manager Cloud Control Administrator's Guide

New Alta UI for ADF UI Shell Application

Andrejus Baranovski - Tue, 2014-10-07 23:14
I have applied the new Alta UI to a customised ADF UI Shell application. The customised version of the ADF UI Shell is taken from my previous blog post - ADF UI Shell Usability Improvement - Tab Contextual Menu. The old application with the new Alta UI looks fresh and clean. Runtime performance is improved as well - ADF transfers less content to the browser, which makes the application load and run faster.

Here you can download my sample application with Alta UI applied to ADF UI Shell - MultiTaskFlowApp_v12c.zip.

All three ADF UI Shell tabs are opened and Master-Detail data is displayed in this example:


The new style is applied to the LOV component and buttons, making all buttons and controls more visible and natural:


The customised ADF UI Shell supports a tab menu - the user can close the current tab or other tabs:


There was a change in 12c related to the tab menu - we need to set the align ID property differently. You can see this change in the ADF UI Shell template file - the JavaScript function gets the tab ID to align from the component client ID property directly:


Alta UI is applied simply by changing the skin name in the trinidad config file:


This hidden gem is packaged with the current JDEV 12.1.3 release; you don't need to download anything extra.

Bringing Clarity To The Avalanche Part 1 - OOW14

Floyd Teter - Tue, 2014-10-07 15:50
Since the prior post here, I've had some people ask why I compared Oracle OpenWorld this year to an avalanche.  Well, to be honest, there are two reasons.  First, it was certainly an avalanche of news. You can check all the Oracle press releases related to the conference here (warning: it's pages and pages of information).  Second, I'm tired of using the analogy of sipping or drinking from a firehose...time to try something new.

So let's talk about some User Experience highlights from the conference.  Why am I starting with UX?  Because I like it and it's my blog ;)

Alta UI

OK, let's be clear.  Alta is more of a user interface standard than a full UX, as it focuses strictly on UI rather than the entire user experience.  That being said, it's pretty cool.  It's a very clean and simplified look, and applies many lessons learned through Oracle's (separate) UX efforts.  I could blab on and on about Oracle Alta, but you can learn about it for yourself here.

Beacons

We all love gadgets.  I had the opportunity to get a sneak peek at some of the "projects that aren't quite products yet" in the works at the Oracle UX Labs.  Beacons are a big part of that work.  Turns out that the work has already progressed beyond mere gadgetry.  The beacons were used to help guide me from station to station within the event space - this booth is ready for you now.  The AppsLab team talks about beacons on a regular basis.  I'm much more sold now on the usefulness of beacon technology than I was before OOW.  This was one of the better applications I've seen at the intersection of Wearables and the Internet of Things.

Simplified UI

I like the concepts behind Simplified UI because well-designed UX drives user acceptance and increases productivity.  Simplified UI was originally introduced for Oracle Cloud Applications back when they were known as Fusion Applications.  But now we're seeing Simplified UI propagating out to other Oracle Applications.  We now see Simplified UI patterns applied to the E-Business Suite, JD Edwards and PeopleSoft.  Different underlying technology for each, but the same look and feel.  Very cool to see the understanding growing within Oracle development that user experience is not only important, but is a value-add product in and of itself.

Simplified UI Rapid Development Kit

Simplified UI is great for Oracle products, but what if I want to extend those products?  Or, even better, what if I want to custom-build products with the same look and feel?  Well, Oracle has made it easy for me to literally steal...in fact, they want me to steal...their secret sauce with the Simplified UI Rapid Development Kit.  Yeah, I'm cheating a bit.  This was actually released before OOW.  But most folks, especially Oracle partners, were unaware prior to the conference.  If I had a nickel for every time I saw a developer's eyes light up over this at OOW, I could buy my own yacht and race Larry across San Francisco Bay.  Worth checking out if you haven't already.

Student Cloud

I'll probably get hauled off to the special prison Oracle keeps for people who toy with the limits of their NDA for this, but it's too cool to keep to myself.  I had the opportunity to work hands-on with an early semi-functional prototype of the in-development Student Cloud application for managing Higher Education continuing education students.  The part that's cool:  you can see great UX design throughout the application.  Very few clicks, even fewer icons, a search-based navigation architecture, and very, very simple business processes for very specific use cases.  I can't wait to see and hear reactions when this app rolls out to the Higher Education market.

More cool stuff next post...

Little script for finding tables for which dynamic sampling was used

XTended Oracle SQL - Tue, 2014-10-07 14:42

You can always download latest version here: http://github.com/xtender/xt_scripts/blob/master/dynamic_sampling_used_for.sql
Current source code:

col owner         for a30;
col tab_name      for a30;
col top_sql_id    for a13;
col temporary     for a9;
col last_analyzed for a30;
col partitioned   for a11;
col nested        for a6;
col IOT_TYPE      for a15;
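-- Lists tables referenced by the optimizer's dynamic sampling queries
-- (OPT_DYN_SAMP) still present in the shared pool, together with execution
-- counts, total elapsed time, the heaviest sql_id, and each table's
-- stats-related attributes from dba_tables.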
with tabs as (
      select 
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))  owner
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))  tab_name
        ,count(*)                                                                    cnt
        ,sum(executions)                                                             execs
        ,round(sum(elapsed_time/1e6),3)                                              elapsed
        ,max(sql_id) keep(dense_rank first order by elapsed_time desc)               top_sql_id
      from v$sqlarea a
      where a.sql_text like 'SELECT /* OPT_DYN_SAMP */%'
      group by
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))
)
select tabs.* 
      ,t.temporary
      ,t.last_analyzed
      ,t.partitioned
      ,t.nested
      ,t.IOT_TYPE
from tabs
    ,dba_tables t
where 
     tabs.owner    = t.owner(+)
 and tabs.tab_name = t.table_name(+)
order by elapsed desc
/
col owner         clear;
col tab_name      clear;
col top_sql_id    clear;
col temporary     clear;
col last_analyzed clear;
col partitioned   clear;
col nested        clear;
col IOT_TYPE      clear;

PS: if you want to find the queries that used dynamic sampling, you can use a query like this:

select s.*
from v$sql s
where 
  s.sql_id in (select p.sql_id 
               from v$sql_plan p
               where p.id=1
                 and p.other_xml like '%dynamic_sampling%'
              );
Categories: Development

OOW : Edition 2015

Jean-Philippe Pinte - Tue, 2014-10-07 14:22
Mark your calendar with the dates for the 2015 edition!
October 25-29, 2015

Presentations Available from OpenWorld

Anthony Shorten - Tue, 2014-10-07 11:38

Last week I conducted three sessions on a number of topics. The presentations used in those sessions are now available from Sessions --> Content Catalog on the Oracle OpenWorld site. Just search for my name (Anthony Shorten) to download the presentations in PDF format.

The sessions available are:

I know a few customers and partners came to me after each session to get a copy of the presentation. They are now available as I pointed out.

Objects versus Insert Statements

Anthony Shorten - Tue, 2014-10-07 11:06

A few times I have encountered issues and problems at customers that can defy explanation. After investigation I usually find out the cause and in some cases it is the way the implementation has created the data in the first place. In the majority of these types of issues, I find that interfaces or even people are using direct INSERT statements against the product database to create data. This is inherently dangerous for a number of reasons and therefore strongly discouraged:

  • Direct INSERT statements frequently miss important data in the object.
  • Direct INSERT statements ignore any product business logic which means the data is potentially inconsistent from the definition of the object. This can cause the product processing to misinterpret the data and may even cause data corruption in extreme cases.
  • Direct INSERT statements ignore product managed referential integrity. We do not use the referential integrity of the data within the database as we allow extensions to augment the behavior of the object and determine the optimal point of checking data integrity. The object has inbuilt referential integrity rules.

To avoid this situation, we highly recommend that you only insert data through the object and NOT use direct INSERT statements. The interface to the object can be direct within the product or via Web Services (either directly or through your favorite middleware) to create data from an external source. Going through the object interface ensures not only that the data is complete, but also that it takes into account product referential integrity and conforms to the business rules that you configure for your data.
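As a purely hypothetical illustration of the first two bullets above (the table and column names below are invented for this example and do not come from any actual product schema):

-- A direct INSERT only fills the columns the writer happens to know about;
-- audit fields, denormalized values, child rows and validations that the
-- object's business logic would normally handle are silently skipped.
INSERT INTO ci_example_obj (obj_id, obj_status, descr)
VALUES ('1001', 'ACTIVE', 'Created by a direct interface insert');
-- By contrast, creating the same record through the object (online or via a
-- web service call) runs the full validation and populates the related data.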

Take care and create data through the objects.

12c Upgrade and Concurrent Stats Gathering

Jason Arneil - Tue, 2014-10-07 07:50

I was upgrading an Exadata test database from 11.2.0.4 to 12.1.0.2 and I came across a failure scenario I had not encountered before. I’ve upgraded a few databases to both 12.1.0.1 and 12.1.0.2 for test purposes, but this was the first one I’d done on Exadata. And the first time I’d encountered such a failure.

I started the upgrade after checking with the pre-upgrade script that everything was ready to upgrade. And I ran with the maximum amount of parallelism:

$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql
.
.
.
Serial Phase #:81 Files: 1 A process terminated prior to completion.

Died at catcon.pm line 5084.

That was both annoying and surprising. The line in catcon.pm is of no assistance:

   5080   sub catcon_HandleSigchld () {
   5081     print CATCONOUT "A process terminated prior to completion.\n";
   5082     print CATCONOUT "Review the ${catcon_LogFilePathBase}*.log files to identify the failure.\n";
   5083     $SIG{CHLD} = 'IGNORE';  # now ignore any child processes
   5084     die;
   5085   }

But of more use was the bottom of the catupgrd.log file:

11:12:35 269  /
catrequtlmg: b_StatEvt     = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152

This error comes from catrequtlmg.sql. My first thought was to check whether the resource_manager_plan parameter was set, and it turned out it wasn't. However, setting the default plan and running this piece of SQL by itself produced the same error:

SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152



PL/SQL procedure successfully completed.

I then started thinking about what it meant by gathering statistics concurrently, and I noticed that I had indeed set this database to gather stats concurrently (it's off by default):

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
TRUE

I then proceeded to turn off this concurrent gathering and rerun the failing SQL:


SQL> exec dbms_stats.set_global_prefs('CONCURRENT','FALSE');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
FALSE


SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
catrequtlmg: Gathering Table Stats USER$MIG
catrequtlmg: Gathering Table Stats COL$MIG
catrequtlmg: Gathering Table Stats CLU$MIG
catrequtlmg: Gathering Table Stats CON$MIG
catrequtlmg: Gathering Table Stats TAB$MIG
catrequtlmg: Gathering Table Stats IND$MIG
catrequtlmg: Gathering Table Stats ICOL$MIG
catrequtlmg: Gathering Table Stats LOB$MIG
catrequtlmg: Gathering Table Stats COLTYPE$MIG
catrequtlmg: Gathering Table Stats SUBCOLTYPE$MIG
catrequtlmg: Gathering Table Stats NTAB$MIG
catrequtlmg: Gathering Table Stats REFCON$MIG
catrequtlmg: Gathering Table Stats OPQTYPE$MIG
catrequtlmg: Gathering Table Stats ICOLDEP$MIG
catrequtlmg: Gathering Table Stats TSQ$MIG
catrequtlmg: Gathering Table Stats VIEWTRCOL$MIG
catrequtlmg: Gathering Table Stats ATTRCOL$MIG
catrequtlmg: Gathering Table Stats TYPE_MISC$MIG
catrequtlmg: Gathering Table Stats LIBRARY$MIG
catrequtlmg: Gathering Table Stats ASSEMBLY$MIG
catrequtlmg: delete_props_data: No Props Data

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

It worked! I was able to upgrade my database in the end.

I wish the preupgrade.sql script would check for this, or indeed that catrequtlmg.sql would disable concurrent gathering during the upgrade.

I would advise checking for this before any upgrade to 12c, and turning it off if you find it enabled in a database that is about to be upgraded.
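A minimal sketch of that check (my own consolidation of the steps shown above; re-enabling afterwards is my assumption, so adjust to your own preference):

-- Before starting catctl.pl/catupgrd.sql: check whether concurrent statistics
-- gathering is enabled, and disable it if so.
SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');

-- After the upgrade completes: re-enable it if it was TRUE beforehand.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');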


iBeacons or The Physical Web?

Oracle AppsLab - Tue, 2014-10-07 06:55

For the past year at the AppsLab we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple days ago, Google (unofficially) announced that one of their Chrome teams is working on what I’m calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:

“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we're developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback."

Here is a short rundown of how iBeacon works vs. Physical Web beacons:

iBeacon

The iBeacon profile advertises a 30-byte packet containing three values that, combined, make a unique identifier: UUID, Major, and Minor. The mobile device will actively listen for these packets. When it gets close to one of them, it will query a database (cloud) or use hard-coded values to determine what it needs to do or show for that beacon. Generally the UUID is set to identify a common organization, the Major value is an asset within that organization, and the Minor is a subset of assets belonging to the Major.
For example, if I'm close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I get within reach of any beacon my app can trigger certain interactions related to the whole organization ("Hello Noel, welcome to Oracle.") The application had to query a database to know what that UUID represents. As I reach building 200, my application picks up another beacon that contains a Major value of, let's say, 200. Then my app will do the same and query to see what it represents ("You are in building 200.") Finally, when I get close to our new Cloud UX Lab, a beacon inside the lab will broadcast a Minor ID that represents the lab ("This is the Cloud UX lab, want to learn more?")

iBeacons are designed to work as a fully closed ecosystem where only the deployed devices (app+beacons+db) know what a beacon represents. Today I can walk into the Apple Store and use a Bluetooth app to "sniff" BLE devices, but unless I know what their UUID/Major/Minor values represent, I cannot do anything with that information. Only the official Apple Store app knows what to do when it is near the beacons around the store ("Looks like you are looking for a new iPhone case.")

As you can see, the iBeacon approach is a "push" method where the device will proactively push actions to you. In contrast, the Physical Web beacon proposes to act as a "pull" or on-demand method.

Physical Web

The Physical Web gBeacon will advertise a 28-byte packet containing an encoded URL. Google wants to use the familiar and established mechanism of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to rank what might be most important to you at the moment and display it.


Image from https://github.com/google/physical-web/blob/master/documentation/introduction.md

The Physical Web approach is designed to be a "pull" discovery service where, most likely, the user will initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that will scan for nearby gBeacons, or I can open my Chrome browser and do a search.  The application or browser will use context to rank nearby objects higher among the results. It can also use calendar data, email or Google Now to narrow down interests.  A background process with "push" capabilities could also be implemented. This process could have filters that alert the user to nearby objects of interest.  These interest rules could be predefined or inferred by using Google's intelligence-gathering systems like Google Now.

The main difference between the two approaches is that iBeacon is a closed ecosystem (app+beacons+db), while the Physical Web is intended to be a public, self-discovered (app/os+beacons+www) physical extension of the web. That said, the Physical Web could also be restricted by using protected websites and encrypted URLs.

Both approaches account for the common misconception about these technologies: "Am I going to be spammed as soon as I walk inside a mall?"  The answer is NO. iBeacon is an opt-in service within an app, and Physical Web beacons will mostly work on demand or will have filter subscriptions.

So there you have it. Which method do you prefer?

Oracle OpenWorld 2014 Highlights

WebCenter Team - Tue, 2014-10-07 06:28

As Oracle OpenWorld 2014 comes to a close, we wanted to reflect on the week and provide some highlights for you all!

We say this every year, but this year's event was one of the best ones yet. We had more than 35 scheduled sessions, plus user group sessions, 10 live product demos, and 7 hands-on labs devoted to Oracle WebCenter and Oracle Business Process Management (Oracle BPM) solutions. This year's Oracle OpenWorld provided broad and deep insight into next-generation solutions that increase business agility, improve performance, and drive personal, contextual, and multichannel interactions. 

Oracle WebCenter & BPM Customer Appreciation Reception

Our 8th annual Oracle WebCenter & BPM Customer Appreciation Reception was held for the second year at San Francisco’s Old Mint, a National Historic Landmark. This was a great evening of networking and relationship building, where the Oracle WebCenter & BPM community had the chance to mingle and make new connections. Many thanks to our partners Aurionpro, AVIO Consulting, Bezzotech, Fishbowl Solutions, Keste, Redstone Content Solutions, TekStream & VASSIT for sponsoring!

Oracle Fusion Middleware Innovation Awards 

The Oracle Fusion Middleware Innovation Awards honor Oracle customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year's winners for WebCenter were Bank of Lebanon and McAfee.


This year’s winners for the BPM category were State Revenue Office, Victoria and Vertafore.


Congratulations winners! 

Oracle Appreciation Event at Treasure Island

We stayed up past our bedtimes rocking to Aerosmith and hip-hopping to Macklemore & Ryan Lewis and Spacehog at the Oracle Appreciation Event. These award-winners—plus free-flowing networking, food, and drink—made Wednesday evening magical at Treasure Island. Once we arrived on Treasure Island, we saw that it had been transformed and we were wowed by the 360-degree views of Bay Area skylines (with an even better view from the top of the Ferris wheel). We tested our skills playing arcade games between acts, and relaxed and enjoyed ourselves after a busy couple of days.

Cloud

Cloud was one of the shining spotlights of OOW this year. For WebCenter and BPM, we had dedicated hands-on labs for Documents Cloud Service and Process Cloud Service at the InterContinental. In addition, we had live demos including Documents Cloud Service, Process Cloud Service and Oracle Social Network (OSN) throughout the week. Documents Cloud Service and OSN were featured prominently in the Thomas Kurian OOW Keynote (from the 46-minute mark) and the FMW General Session (from the 40-minute mark).

The Oracle WebCenter & BPM Community

Oracle OpenWorld is unmatched in providing you with opportunities to interact and engage with other WebCenter & BPM customers and experts from among our partner and employee communities. It was great to see everyone, make new connections and reconnect with old friends. We look forward to seeing you all again next year!

BI Applications in Cloud

Dylan's BI Notes - Mon, 2014-10-06 18:28
Prepackaged analytics applications are available as cloud services. The idea is that the client company does not need to use its own hardware and does not need to install the software or apply patches itself. What it needs is simply a browser. For end users, there should not be much difference. The BI apps built […]
Categories: BI & Warehousing

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs 10 seconds in production and 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time in production is 10 seconds and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 
And in QA the SQL ran in 20 seconds… so really they ran "the same" in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining that "production is 2 times as fast as QA" and then creatively using that information.
Determining The "Speed Value"
Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.
Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape or form. It is simply a measure of brute-force Oracle buffer cache logical IO processing speed based on a number of factors. If you are architecting a system, other tests will be required.
As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used in the speed tool differs on the various systems, then obviously we can't expect the "true speeds" to be comparable.
You can download the tool HERE.
How To Analyze The Risk
Before we can analyze the risk, we need the "speed value" for both systems. Suppose a faster system means its speed rating is larger. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."
Now let's put this all together and quickly go through three examples.
This is the core math:
standardized elapsed time = sql elapsed time * system speed value
So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000, which is 25*200. The "standardized elapsed time" is simply a way to compare SQL elapsed times, not what users will feel and not the true SQL elapsed time.
To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.
1. The SQL truly runs the same in both systems.
Here is the math:
QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the true speed situation is, QAT = PRD. This means, the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and therefore this presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
2. The SQL runs faster in production.
Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. If someone were to say, "Well, of course it runs slower in QAT because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."
Here are the scenario numbers:
QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the QAT SQL is truly running slower in QAT compared to PRD. Specifically, this means the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL in QAT to run slower than in the PRD system, we didn't expect it to be quite so slow in QAT. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.
3. The SQL runs faster in QAT.
In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone was to say, "Well of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at their standardized elapsed times.
QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the QAT SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the QAT SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
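As a worked illustration (a sketch of my own, using the sample numbers from the three scenarios above), the standardized elapsed times can be computed with a simple query:
with samples as (
  select 1 scenario, 'QAT' env, 20 elapsed_sec, 300 speed_value from dual union all
  select 1, 'PRD', 10, 600 from dual union all
  select 2, 'QAT', 30, 300 from dual union all
  select 2, 'PRD', 10, 600 from dual union all
  select 3, 'QAT', 15, 300 from dual union all
  select 3, 'PRD', 10, 600 from dual
)
select scenario, env, elapsed_sec, speed_value,
       elapsed_sec * speed_value as standardized_elapsed
from samples
order by scenario, env;
-- Scenario 1: 6000 vs 6000 (equal); scenario 2: QAT 9000 > PRD 6000 (risk);
-- scenario 3: QAT 4500 < PRD 6000 (no risk).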
In Summary...
Identifying risk is extremely important while planning for an upgrade. It is unlikely the QAT and production systems will be identical in every way. This mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated was a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.
What's Next?
Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

New ADF Alta UI for ADF UI Shell

Andrejus Baranovski - Mon, 2014-10-06 15:55
The new skin for ADF in 12c looks great. I have applied it to one of my sample applications with the ADF UI Shell and it works smoothly. Check the Oracle documentation for how to apply Alta UI - it is really easy.

ADF UI Shell with Alta UI - clean and light:


Upgrading PeopleTools with Zero Downtime (2/3)

Javier Delgado - Mon, 2014-10-06 14:37
Continuing with my previous blog entry, the requirement from our customer was to be able to move users back and forth between the old and new PeopleTools releases until the latter was stabilised.

This naturally required both PeopleTools versions to coexist. Now, as you know, you cannot just install the new PeopleTools release binaries and point them at the existing database. Each PeopleTools release can only connect to a database for which the PSSTATUS.TOOLSREL field contains the corresponding version value. But this is not the only problem: part of the data model, and the values in some tables, also change from one PeopleTools version to the other.
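As a quick illustration, the release a given database expects can be checked with a trivial query:

SELECT TOOLSREL FROM PSSTATUS;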

Therefore, we needed a database for each PeopleTools release, with its full stack of Application Server, Web Server, Process Scheduler, etc. The idea was to give users either the new or the old URL to access the environment, being able to rapidly switch from one instance to the other. Now, in order to maintain the data in sync between both instances, we needed to implement some kind of data replication between them, which should only cover the tables not impacted by the PeopleTools upgrade process.

There are a couple of ways in which the PeopleTools tables could be identified. For instance, the PPLTLS84CUR project probably contains all of them. Another source could be the mvprdexp.dms script. Instead of using those methods, we decided to search for the impacted tables using a regular expression search tool, looking at the logs and traces of a PeopleTools upgrade done against a copy of the Demo environment. Although it required more work, and a few test iterations until we got it right, it allowed us to keep the number of non-replicated tables to a minimum.

When we finally got the list of tables, we let the key users know which functionalities would not be shared by both environments. As it turned out, Process Monitor, Query and Report Manager would need to be used separately. Fortunately enough, those functionalities did not pose a big issue from a user perspective, so we could move forward.

The next step was to decide which replication method we would use. Both databases were Oracle, although on different versions (no single version was supported by both PeopleTools releases) (*). For many of the tables we needed bidirectional replication, as users were expected to enter transactions in either of the two environments.

There are many products and solutions that provide data replication with Oracle databases. We finally opted for a very simple one, which is not strictly replication: an Oracle DB Link. We kept the application tables in the old PeopleTools instance, and then replaced the same tables in the new PeopleTools instance with synonyms pointing to the other instance through the DB Link. Once the new PeopleTools release was stabilised, we would move the physical tables to the target instance and create the DB Link on the other side.
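A minimal sketch of that arrangement (the link name, credentials and table name below are invented placeholders, not taken from the actual implementation):

-- In the new PeopleTools instance: keep the shared application table in the
-- old instance and expose it locally through a synonym over a database link.
CREATE DATABASE LINK old_tools_db
  CONNECT TO sysadm IDENTIFIED BY a_password
  USING 'OLDTOOLSDB';

DROP TABLE ps_example_table;
CREATE SYNONYM ps_example_table FOR ps_example_table@old_tools_db;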

Once we implemented this approach, we started testing. During testing, we faced some challenges, but we will cover them in the next and final post.

(*) This was unlucky. If we were using the same database version, we could have used a different schema for each PeopleTools release, and instead of creating a DB Link, we could have just used synonyms and avoid some of the issues brought by DB Links.

Upcoming Webinar Series: Using Google Search with your Oracle WebCenter or Liferay Portal

Fishbowl will host a series of webinars this month about integrating the Google Search Appliance with an Oracle WebCenter or Liferay Portal. Our new product, the GSA Portal Search Suite, fully exposes Google features within portals while also maintaining the existing look and feel.

The first webinar, “The Benefits of Google Search for your Oracle WebCenter or Liferay Portal”, will be held on Wednesday, October 15 from 12:00-1:00 PM CST. This webinar will focus on the benefits of using the Google Search Appliance, which has the best-in-class relevancy and impressive search features, such as spell check and document preview, that Google users are used to.

Register now

The second webinar, “Integrating the Google Search Appliance and Oracle WebCenter or Liferay Portal”, further explains how Fishbowl’s GSA Portal Search Suite helps improve the process of setting up a GSA with a WebCenter or Liferay Portal. This product uses configurable portlets so users can choose which Google features to enable and provides single sign-on between the portal and the GSA. The webinar will be held on Wednesday, October 22 from 12:00-1:00 PM CST.

Register now

For more information on the GSA Portal Search Suite, read our previous blog post on the topic.

The post Upcoming Webinar Series: Using Google Search with your Oracle WebCenter or Liferay Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part Three

Pythian Group - Mon, 2014-10-06 11:57

Today’s blog post completes our three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In the first two posts, we discussed the impact of big data on today’s organizations, and its challenges.

Today, we’ll be sharing what organizations can accomplish by using the Microsoft Hadoop solution:

  1. Improve agility. Because companies now have the ability to collect and analyze data essentially in real time, they can more quickly discover which business strategies are working and which are not, and make adjustments as necessary.
  2. Increase innovation. By integrating structured and unstructured data sources, the solution provides decision makers with greater insight into all the factors affecting the business and encouraging new ways of thinking about opportunities and challenges.
  3. Reduce inefficiencies. Data that currently resides in conventional data management systems can be migrated into Parallel Data Warehouse (PDW) for faster information delivery.
  4. Better allocate IT resources. The Microsoft Hadoop solution includes a powerful, intuitive interface for installing, configuring, and managing the technology, freeing up IT staff to work on projects that provide higher value to the organization.
  5. Decrease costs. Previously, because of the inability to effectively analyze big data, much of it was dumped into data warehouses on commodity hardware, which is no longer required thanks to Hadoop.

Download our full white paper to learn which companies are currently benefiting from Hadoop, and how you can achieve the maximum ROI from the Microsoft Hadoop solution.

Don’t forget to check out part one and part two of our Microsoft Hadoop blog series.

Categories: DBA Blogs

Clarity In The Avalanche

Floyd Teter - Mon, 2014-10-06 10:04
So I've spent the days since Oracle OpenWorld 14 decompressing...puttering in the garden, BBQing for family, running errands.  The idea was to give my mind time to process all the things I saw and heard at OOW this year.  Big year - it was like trying to take a sip from a firehose.  Developing any clarity around the avalanche of news has been tough.

If you average out all of Oracle's new product development, it comes to a rate of one new product release every working day of the year.  And I think they saved up bunches for OOW. It was difficult to keep up.

It was also difficult to physically keep up with things at OOW, as Oracle utilized the concept of product centers and spread things out over even more of downtown San Francisco this year. For example, Cloud ERP products were centered in the Westin on Market Street.  Cloud HCM was located at the Palace Hotel.  Sales Cloud took over the 2nd floor of Moscone West.  Higher Education focused around the Marriott Marquis. Anything UX, as well as many other hands-on labs, happened at the InterContinental Hotel.  And, of course, JavaOne took place at the Hilton on Union Square along with the surrounding area.  The geographical separation required even more in the way of making tough choices about where to be and when to be there.

With all that, I think I've figured out a way to organize my own take on the highlights from OOW - with a tip o' the hat to Oracle's Thomas Kurian.  Thomas sees Oracle as based around five product lines:  engineered systems, database, middleware, packaged applications, and cloud services. The more I consider this framework, the more it makes sense to me.  So my plan is to organize the news from OOW around these five product lines over the next few posts here.  We'll see if we can't find some clarity in the avalanche.

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 04:12
A good day for learning something new. After reading a Flume book, something popped into my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems. We can set rsyslog like this:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP
For this rsyslog setup:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777
Coming back to Flume, I used the Simple Example for reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]
Then I tested by logging in over ssh.
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root
Looks good... Anyway, it needs more adaptation...



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Why the In-Memory Column Store is not used (II)

Karl Reitschuster - Mon, 2014-10-06 03:10
Now, after some research, I detected one simple rule for provoking In-Memory scans: