
Feed aggregator

How do I type e acute (é) on Windows 8

Rob Baillie - Wed, 2014-10-08 08:27

I keep on forgetting how to type é on Windows 8 (I used to press CTRL+ALT+e, but that's now often reserved for the Euro symbol).

I then tend to run a search on Google, and end up being pointed towards 8-year-old answers that point you to Character Map, options in old versions of Word, or the old way of typing the extended ASCII character code.

They all suck.

And then I remember - it's easy.

You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.

For example, CTRL + ' followed by e gives you é.

Brilliant!

The great thing about using this technique is that the characters you use (dead keys) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.

Here are the ones I know about:

  • CTRL + ' : acute (é)
  • CTRL + ` : grave (è)
  • CTRL + SHIFT + 6 / CTRL + ^ : circumflex (ê)
  • CTRL + , : cedilla (ç)
  • CTRL + ~ : perispomene (õ)
  • CTRL + SHIFT + 7 / CTRL + & : diphthongs / others (a = æ, o = œ, s = ß)

It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.
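Under the hood, each of these shortcuts produces a single precomposed Unicode code point, which is a handy fallback to know when an app doesn't support the dead keys. A minimal Python sketch (the character-to-accent mapping simply mirrors the shortcuts discussed above):

```python
# Each accented character is one precomposed Unicode code point; the code
# point (e.g. U+00E9 for é) is a portable fallback when the CTRL dead-key
# shortcuts aren't supported by an application.
accents = {"é": "acute", "è": "grave", "ê": "circumflex",
           "ç": "cedilla", "õ": "perispomene", "æ": "ae ligature"}
for ch, name in accents.items():
    print(f"{ch}  U+{ord(ch):04X}  {name}")
```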

Deploying a Private Cloud at Home — Part 1

Pythian Group - Wed, 2014-10-08 08:17

Today’s blog post is part one of seven in a series dedicated to Deploying a Private Cloud at Home. In my day-to-day activities, I come across various scenarios where I’m required to do sandbox testing before proceeding further on the production environment—which is great because it allows me to sharpen and develop my skills.

My home network consists of an OpenFiler NAS which also serves DNS, DHCP, iSCSI, NFS, and Samba. My home PC is a Fedora 20 workstation, where I do most of my personal work. A KVM hypervisor running on CentOS 6.2 x86_64 hosts sandbox VMs for testing.

Recently I decided to move this setup to the cloud and create a private cloud at home. There are plenty of open source cloud solutions available, but I decided to use OpenStack for two reasons.

  1. I am already running Red Hat-compatible distros (CentOS and Fedora), so I just need to install OpenStack on top to get started.
  2. Most of the clients I support have RHEL-compatible distros in their environments, so it makes sense to have RHEL-compatible distros to play around with.

Ideally, an OpenStack cloud consists of a minimum of three nodes, with at least two NICs on each node.

  • Controller: As the name suggests, this is the controller node which runs most of the control services.
  • Network: This is the network node which handles virtual networking.
  • Compute: This is the hypervisor node which runs your VMs.

However, due to the small size of my home network, I decided to use legacy networking, which only requires controller and compute nodes with a single NIC.
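For context, the "legacy networking" mentioned here is the older nova-network service. Enabling it typically looked something along these lines in nova.conf (option names taken from Icehouse-era OpenStack documentation; this is an illustrative sketch, not the actual config used in this series):

```ini
# Illustrative nova.conf fragment for legacy nova-network (FlatDHCP);
# option names follow Icehouse-era docs and may differ per release.
network_api_class = nova.network.api.API
security_group_api = nova
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100
flat_interface = eth0
public_interface = eth0
```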

Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part two of seven, I will be demonstrating configuration and setup.

Categories: DBA Blogs

First-timer tips for Oracle Open World

Rittman Mead Consulting - Wed, 2014-10-08 07:16

Last week I had the great pleasure to attend Oracle Open World (OOW) for the first time, presenting No Silver Bullets – OBIEE Performance in the Real World at one of the ODTUG user group sessions on the Sunday. It was a blast, as the saying goes, but the week before OOW I was more nervous about the event itself than my presentation. Despite having been to smaller conferences before, OOW is vast in its scale and I felt like the week before going to university for the first time, full of uncertainty about what lay ahead and worrying that everyone would know everyone else except me! So during the week I jotted down a few things that I’d have found useful to know ahead of going and hopefully will help others going to OOW take it all in their stride from the very beginning.

Coming and going

I arrived on the Friday at midday SF time, and it worked perfectly for me. I was jetlagged so walked around like a zombie for the remainder of the day. Saturday I had the chance to walk around SF and get my bearings geographically, culturally, and climatically. Sunday is “day zero” when all the user group sessions are held, along with the opening OOW keynote in the evening. I think if I’d arrived Saturday afternoon instead I’d have felt a bit thrust into it all straight away on the Sunday.

In terms of leaving, the last formal day is Thursday and it’s a full day of sessions too. I left straight after breakfast on Thursday and I felt I was leaving too early. But, OOW is a long few days & nights so chances are by Thursday you’ll be beat anyway, so check the schedule and plan your escape around it.

Accommodation

Book in advance! Like, at least two months in advance. There are 60,000 people descending on San Francisco, all wanting some place to stay.

Get an Airbnb - a lot more for your money than a hotel. Wifi is generally going to be a lot better, and having a living space in which to exist is nicer than just a hotel room. Don’t fret about the “perfect” location – anywhere walkable to Moscone (where OOW is held) is good because it means you can drop your rucksack off at the end of the day etc, but other than that the events are spread around so you’ll end up walking further to at least some of them. Or, get an Uber like the locals do!

Sessions

Go to Oak Table World (OTW), it’s great, and free. Non-marketing presentations from some of the most respected speakers in the industry. Cuts through the BS. It’s also basically on the same site as the rest of OOW, so easy to switch back and forth between OOW/OTW sessions.

Go and say hi to the speakers. In general they’re going to want to know that you liked it. Ask questions — hopefully they like what they talk about so they’ll love to speak some more about it. You’ll get more out of a five minute chat than two hours of keynote. And on that subject, don’t fret about dropping sessions — people tweet them, the slides are usually available, and in fact you could be sat at your desk instead of OOW and have missed the whole lot so just be grateful for what you do see. Chance encounters and chats aren’t available for download afterwards; most presentations are. Be strict in your selection of “must see” sessions, lest you drop one you really really did want to see.

Use the schedule builder in advance, but download it to your calendar (watch out for line-breaks in the exported file that will break the import) and sync it to your mobile phone so you can see rapidly where you need to head next. Conference mobile apps are rarely that useful and frequently bloated and/or unstable.

Don’t feel you need to book every waking moment of every day to sessions. It’s not slacking if you go to half as many but are twice as effective from not being worn out!

Dress

Dress-wise, jeans and a polo are fine, with a company polo or a shirt for delivering presentations. Day wear is fine for evenings too, no need to dress up. Some people do wear shorts but they’re in the great minority. There are lots of suits around, given it is a customer/sales conference too.

Socialising

The sessions and random conversations with people during the day are only part of OOW — the geek chat over a beer (or soda) is a big part too. Look out for the Pythian blogger meetup, meetups from your country’s user groups, companies you work with, and so on.

Register for the evening events that you get invited to (ODTUG, Pythian, etc) because often if you haven’t pre-registered you can’t get in if you change your mind, whereas if you do register but then don’t go that’s fine as they’ll bank on no-shows. The evening events are great for getting to chat to people (dare I say, networking), as are the other events that are organised like the swim in the bay, run across the bridge, etc.

Sign up for stuff like the swim in the bay, it’s good fun – and I can’t even swim, really. The run and bike ride across the bridge are two other organised events. Hang around on Twitter for details; people like Yury Velikanov and Jeff Smith are usually in the know if not doing the actual organising.

General

When the busy days and long evenings start to take their toll don’t be afraid to duck out and go and decompress. Grab a shower, get a coffee, do some sight seeing. Don’t forget to drink water as well as the copious quantities of coffee and soda.

Get a data package for your mobile phone in advance of going, e.g. £5 per day for unlimited data. Conference wifi is just about OK at best, often flaky. Trying to organise short-notice meetups with other people by IM/Twitter/email gets frustrating if you only get online half an hour after the time they suggested to meet!

Don’t pack extra clothes ‘just in case’. Pack minimally because (1) you are just around the corner from Market Street with Gap, Old Navy etc so can pick up more clothes cheaply if you need to and (2) you’ll get t-shirts from exhibitors, events (eg swim in the bay) and you’ll need the suitcase space to bring them all home. Bring a suitcase with space in or that expands, don’t arrive with a suitcase that’s already at capacity.

Food

So much good food and beer. Watch out for some of the American beers; they seem to start at about 5% ABV and go upwards, compared to around 3.6% ABV here in the UK. Knocking them back at the same rate will get messy.

In terms of food you really are spoilt; some of my favourites were:

  • Lori’s diner (map) : As a Brit, I loved this American diner, and great food - yum yum. 5-10 minutes walk from Moscone.
  • Mel’s drive-in (map) : Just round the corner from Moscone, very busy but lots of seats. Great american breakfast experience! yum
  • Grove (map) : Good place for breakfast if you want somewhere a bit less greasy than a diner (WAT!)

 

Categories: BI & Warehousing

select pdf from sqlplus

Laurent Schneider - Wed, 2014-10-08 06:48

sqlplus 10gR2 and later allows you to select from a BLOB. If you use Linux, you can convert the hex output to binary with xxd:


sqlplus -s scott/tiger <<EOF |xxd -p -r >doc.pdf
set pages 0 lin 17000 long 1000000000 longc 16384
select document from emp where ename=user;
EOF

Obviously, it could also be a sound, a video or an image !
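If xxd isn't installed, the same hex-to-binary step can be sketched in Python with bytes.fromhex, which since Python 3.7 ignores the whitespace and line breaks that sqlplus inserts into the spooled output:

```python
# A stand-in for `xxd -p -r`: convert sqlplus's plain hex dump back to binary.
def hex_to_binary(hex_text: str) -> bytes:
    # bytes.fromhex ignores ASCII whitespace (Python 3.7+), so line breaks
    # in the spooled sqlplus output are harmless.
    return bytes.fromhex(hex_text)

# The hex for "Hello", split across lines as sqlplus might emit it:
print(hex_to_binary("48656c\n6c6f"))  # b'Hello'
```

Writing the result with `open("doc.pdf", "wb")` would replace the shell redirection in the example above.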

Spring XD Pivotal Gemfire Sink Demo

Pas Apicella - Wed, 2014-10-08 03:39
Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export. The project's goal is to simplify the development of big data applications.

There are two implementations of the gemfire sink: gemfire-server and gemfire-json-server. They are identical except the latter converts JSON string payloads to a JSON document format proprietary to GemFire and provides JSON field access and query capabilities. If you are not using JSON, the gemfire-server module will write the payload using Java serialization to the configured region.

In the example below we show how to connect to an existing GemFire 7.0.2 cluster using a locator to add some JSON trade symbols to an existing region in the cluster.

1. Start a GemFire cluster with an existing region as shown below. The following cache.xml files are for "server1" and "server2" of the cluster. They are identical configs, just using different ports.

server1 cache.xml
  
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
"http://www.gemstone.com/dtd/cache7_0.dtd">

<cache>
<cache-server bind-address="localhost" port="40404" hostname-for-clients="localhost"/>

<region name="springxd-region">
<region-attributes data-policy="partition">
<partition-attributes redundant-copies="1" total-num-buckets="113"/>
<eviction-attributes>
<lru-heap-percentage action="overflow-to-disk"/>
</eviction-attributes>
</region-attributes>
</region>

<resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>

</cache>

server2 cache.xml
  
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
"http://www.gemstone.com/dtd/cache7_0.dtd">

<cache>
<cache-server bind-address="localhost" port="40405" hostname-for-clients="localhost"/>

<region name="springxd-region">
<region-attributes data-policy="partition">
<partition-attributes redundant-copies="1" total-num-buckets="113"/>
<eviction-attributes>
<lru-heap-percentage action="overflow-to-disk"/>
</eviction-attributes>
</region-attributes>
</region>

<resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>

</cache>

2. Verify using GFSH that you have two members, a locator, and a region as follows:
  
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ v7.0.2.10

Monitor and Manage GemFire
gfsh>connect --locator=localhost[10334];
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.98.94.88, port=1099] ..
Successfully connected to: [host=10.98.94.88, port=1099]

gfsh>list members;
Name | Id
-------- | ---------------------------------------
server1 | 10.98.94.88(server1:10161)<v1>:15610
server2 | 10.98.94.88(server2:10164)<v2>:39300
locator1 | localhost(locator1:10159:locator):42885

gfsh>list regions;
List of regions
---------------
springxd-region

3. Start a single-node Spring XD server
  
[Wed Oct 08 14:51:06 papicella@:~/vmware/software/spring/spring-xd/spring-xd-1.0.1.RELEASE ] $ xd-singlenode

_____ __ _______
/ ___| (-) \ \ / / _ \
\ `--. _ __ _ __ _ _ __ __ _ \ V /| | | |
`--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | |
/\__/ / |_) | | | | | | | (_| | / / \ \ |/ /
\____/| .__/|_| |_|_| |_|\__, | \/ \/___/
| | __/ |
|_| |___/
1.0.1.RELEASE eXtreme Data


Started : SingleNodeApplication
Documentation: https://github.com/spring-projects/spring-xd/wiki

....

4. Start the Spring XD shell
  
$ xd-shell
_____ __ _______
/ ___| (-) \ \ / / _ \
\ `--. _ __ _ __ _ _ __ __ _ \ V /| | |
`--. \ '_ \| '__| | '_ \ / _` | / ^ \| | | |
/\__/ / |_) | | | | | | | (_| | / / \ \ |/ /
\____/| .__/|_| |_|_| |_|\__, | \/ \/___/
| | __/ |
|_| |___/
eXtreme Data
1.0.1.RELEASE | Admin Server Target: http://localhost:9393
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>

5. Create a stream as follows
  
xd:>stream create --name gemfiredemo --definition "http --port=9090 | gemfire-json-server --host=localhost --port=10334 --useLocator=true --regionName=springxd-region --keyExpression=payload.getField('symbol')" --deploy
Created and deployed new stream 'gemfiredemo'

6. Post some entries via HTTP which will be inserted into the GemFire Region
  
xd:>http post --target http://localhost:9090 --data {"symbol":"ORCL","price":38}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"ORCL","price":38}
> 200 OK

xd:>http post --target http://localhost:9090 --data {"symbol":"VMW","price":94}
> POST (text/plain;Charset=UTF-8) http://localhost:9090 {"symbol":"VMW","price":94}
> 200 OK
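The same POSTs can be scripted outside the xd-shell; a minimal Python sketch using only the standard library (the host and port come from the http source in the stream definition of step 5; nothing here is Spring XD-specific):

```python
import json
from urllib import request

def build_trade_post(symbol, price, url="http://localhost:9090"):
    """Build the same POST the xd-shell `http post` examples above send."""
    body = json.dumps({"symbol": symbol, "price": price}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = build_trade_post("ORCL", 38)
# request.urlopen(req) would actually send it once the stream is deployed
```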

7. Verify via GFSH that the data has been inserted into the GemFire region. JSON data in GemFire regions is stored using PDX.
  
gfsh>query --query="select * from /springxd-region";

Result : true
startCount : 0
endCount : 20
Rows : 2

symbol | price
------ | -----
ORCL | 38
VMW | 94

NEXT_STEP_NAME : END

More Information

SpringXD
http://projects.spring.io/spring-xd/

GemFire Sinks
http://docs.spring.io/spring-xd/docs/1.0.1.RELEASE/reference/html/#gemfire-server
Categories: Fusion Middleware

OOW: 2014 Presentations

Jean-Philippe Pinte - Wed, 2014-10-08 03:12
The OOW 2014 presentations can be downloaded as PDFs from https://www.oracle.com/openworld/index.html (Sessions / Content Catalog tab).


OOW 2014: Day 2

Doug Burns - Wed, 2014-10-08 03:02
Having been awake for so many hours, I was along at Oak Table World bright and early because :-
1) I wanted to make damn sure I got one of the T-shirts. The courier had let down poor Kyle Hailey so they weren't there at first, but I accosted him to remind him that I was one of the first group of people there ;-) (Oh, and it worked later when they turned up.)
2) Because Mogens Norgaard was on first with a 30 minute opening talk. Mmmm, at 08:30? Who came up with *that* moment of scheduling genius?! LOL ... Sure enough, Kyle had to implement a last-minute schedule change and Riyaj Shamsudeen helped out by stepping up to deliver his 9:00 slot 30 minutes early. 
Which was a shame for those who showed up at 09:00 and missed the first half of his In-memory Internals presentation, which I loved. Riyaj always works at a deep level but in those areas that are practically important, rather than just showing off his smarts!

I picked up a few extremely useful things from this presentation but I think the most important one was the journaling area used when rows in the standard row-orientated buffer cache have been updated. Which, for starters, means that only 80% of the allocated memory will be available for your original data. Not a problem, but worth knowing.

What really jumped out at me though was when he discussed how the number of updated rows could affect the optimiser's decision to use In-Memory or not. I might not have explained that very well, but I believe the effect would be that the optimiser is likely to flip between using In-Memory or not depending on quite a few variables. Which means one thing to me. Potential Execution Plan instability. I'm not sure how Oracle could get around this because cost-based decisions are the sensible approach but I foresee lots of new performance analysis and tuning opportunities! Not quite "flick a switch and it just works", but who would ever believe that kind of thing anyway?

Great presentation, though. Exactly what Oak Table World is all about so thanks to Kyle Hailey and the various sponsors and speakers for making it happen!
When Mogens eventually showed up, he was on top form for his enormously entertaining Conference Opening where he delved into that new Big Data thingy. The strange thing about his presentations is that although they're very quotable, I always find I've been enjoying it too much to remember a damn thing he said! LOL But I managed to have many interesting talks with him later in the week about how unstoppable this Big Data thing is for those who need it. You could question who really needs it, but I personally remember the days of 'why would anyone need their own personal computer' too.
Next up was Andy Mendelsohn's Database General Session in an extremely frosty Marriott. I've become more of a fan of air-conditioning over the past 3 months but this was ridiculous! The presentation was very cloudy at first, then came the In-memory stuff including Maria Colgan giving the cool demo which I've seen before but seems to have been polished up. The other thing that struck me for the first time in this presentation was just how much better Oracle's new slide template is! As anyone who has used it would confirm, the old one was *very* red and blocky and intense and the new one is so much cleaner and spacious and uses colours that don't kill your eyes. I thought the difference was staggering and actually found myself wanting to look at them for a change! ;-) But, on the whole, it was a relatively sober and honest presentation without any great announcements, but plenty of focus on delivering the meat of the previous year's announcements.

Judge for yourself. No need to go to San Francisco!
Then I was straight over to the first Real World Performance group presentation with Andy Holdsworth and Graham Wood talking about some of the higher level application design issues they have discovered via AWR reports. But first they kicked off with their usual dose of performance analysis and design reality, reflecting on the daft way that customers approach performance (and those last words are mine, based on my own experiences).
They talked about the obsession people seem to have with identifying and treating narrow symptoms of problems that are, in reality, application design problems that need to be treated from the top down in order to relieve the low-level symptoms.
For example, right at the top of the report is the number of sessions. Imagine 3,300 sessions on a 32 core server. Well you don't need to because this was an AWR report from a real system so no imagination is necessary. Does that make any sense to anyone? Then why do we still see that kind of thing all the time? 

... or how about finding open_cursors set to 2000? A per-session limit of 2000 cursors? As Graham pointed out - good luck keeping track of the state of all of those! As soon as you stop and think about these things sensibly, you realise that it's almost certainly a sign of an application leaking cursors. 

There were lots of similar examples but the interesting overall approach that I would say they were illustrating is something that I tend to do when I first arrive at a new client site and I've watched other experienced Oracle techies do the same.

An AWR report is not just the top 5 timed events and the sections at the top are a pretty good description of the actual system workload which, in turn, can tell you a lot about the application design. Then, based on potential application design issues, you can drill down into the report and look at later sections to see where all those leaked cursors or transaction rollbacks or (whatever) ... are coming from.

Updated Later: As Toon Koppelaars highlighted on Twitter later, you can see this version of exactly what I'm talking about here, for free. I should hang my head in shame because Andrew and Graham made a point of all the RWP videos being available online here. Watch and enjoy!

Lucky boy that I am, I was able to retire to the comforting surroundings of the Thirsty Bear to continue the conversation about all things performance related with Graham and JB, much of the conversation being me whining about why people don't use the *full* range of tools that come with the Diagnostics and Tuning Packs that they've paid Oracle good money for. That's why I've been slowly developing a presentation on that very subject. 

Then it was back to Oak Table World to catch Greg Rahn talking about all that Hadoop stuff *again*! :-) Even though I only caught part of the presentation, I do keep managing to pick up bits and pieces on the subject although I wonder when it'll become relevant to my day to day work. Probably whenever I'm too late to the party, as usual ;-)

But my main reason for showing up was to see Kevin Closson talking about using SLOB in some less obvious ways. Because SLOB is a good all-round Oracle workload generator, it shouldn't be seen as simply a tool for testing storage performance, and that's probably its main strength. Kevin is always a great speaker and I find listening to him a very different experience to reading his blog, but I'm not sure I can put my finger on why. Oh, he also had the most ridiculously bright SLOB buttons! (As I found out by making the mistake of looking too closely at one as I tried to switch it on ;-))

At some point, all of the slides for the Oak Table World presentations should be available on the site, so keep a look out for those! (Oh, and I got my T-shirt which is deeply cool and was one of the few items of non-ACE swag I managed to pick up all week)

From there on, it was more or less party all the way.

- First quiet beers and snacks with lots of Oak Table and Oracle types.

- Then my very first ever Customer event that wasn't for a specific technology area, but a sales region. Man, *that* was a mistake! Suits *everywhere*! ;-) but I suppose it was useful to build contacts with the senior support managers in my new region. 

- Instead, I headed towards the OTN night in Howard Street (until I realised I'd just dropped my bag with the entry ticket back at my hotel room)

- So instead I landed at one of the events of this and any other OOW - the 'Friends of Pythian' party. As always, beautifully organised, very generous on the liquid refreshments and the coolest crowd in town. Just because I find myself thanking Vanessa Simmons, Paul Vallee and all of the Pythian crew every year doesn't make it any less sincere.

I have to be honest, though, and say that the highlight of the night for me was spending much more time with Kevin's punchy, beautiful and fun wife Lori. If you think Kevin's smart, wait until you meet his wife! There's a lady who can hold her own and make me chuckle :-) Problem is that I think she's used to scaring people but us Scots don't scare so easily ;-)

It was a great night anyway, as always, and although this is entirely unconnected to the Pythian party but might have had a *lot* to do with jet lag, I didn't wake up until 11:45 the next morning :-(

General Availability announcement for Oracle Application Management Pack 12.1.0.2.0 for PeopleSoft

PeopleSoft Technology Blog - Tue, 2014-10-07 23:48


Oracle PeopleSoft is pleased to announce the General Availability of Oracle Application Management Pack 12.1.0.2.0 for PeopleSoft.

Oracle Application Management Pack, or AMP, is also known as the Oracle PeopleSoft Plug-in for Oracle Enterprise Manager. The Oracle PeopleSoft Plug-in is licensed as part of the Application Management Suite for PeopleSoft.

This release of Application Management Pack supports PeopleTools Releases 8.54, 8.53 and 8.52.

Here are some of the new features of Application Management Pack for PeopleSoft:

System Management Enhancements

Administration/Configuration/Monitoring:

· New ADF-Based UI: All of the administration, configuration, and monitoring UI is now available as an ADF UI with advanced dashboards.

· Improved PeopleTools System Discovery: Allows users to discover the PeopleTools database from one of the PeopleTools/Tuxedo domain targets.

· New Aggregate Target Homes: Aggregate Target Home pages ease inter-target navigation for users. They also come with new menu-based navigation, helping users navigate within PeopleTools targets with fewer hops.

· Configuration Comparison Templates: Configuration comparison templates allow customers to compare configurations of two or more PeopleTools environments.

· Diagnostic Framework: Enables users to collect extensive diagnostic logs, leading to faster resolution of target discovery, configuration, and monitoring issues.

· Performance Monitoring: PeopleTools customers will now be able to proactively monitor targets using new JMX-enabled monitoring metrics.

Change Management Enhancements

· Cloning: Supports cloning of Web Server and Application Server Domain Configurations.

Release Details

Downloading PeopleSoft Application Management Suite

PeopleSoft Application Management Suite can be downloaded from the Oracle Software Delivery Cloud by using the following instructions:

1. Go to the Oracle Software Delivery Cloud site.

2. Choose a language and click Continue.

3. Answer export validation questions.

4. Select PeopleSoft Enterprise from the list of product/media packs.

5. Choose the respective platform and click Go.

6. Select Oracle Application Management Suite 1.0.0.5.0 for PeopleSoft from the list and click Continue.

7. Select PeopleSoft Application Management Plug-in 12.1.0.2 for Oracle Enterprise Manager 12c and download.

Note: Starting with Enterprise Manager 12c, customers can download and install the application management pack as a Self Update from the EM store. For more details on the EM Store and the Self Update feature, see the Oracle Enterprise Manager Cloud Control Administrator's Guide.

Supported Releases and Platforms

The following Oracle Enterprise Manager Cloud Control releases, PeopleTools releases, and platforms are supported:

  • Oracle Enterprise Manager Cloud Control 12c Release 3 (12.1.0.3.0)
  • Oracle Enterprise Manager Cloud Control 12c Release 4 (12.1.0.4.0)
  • PeopleTools Release 8.54
  • PeopleTools Release 8.53
  • PeopleTools Release 8.52
  • Supported Platforms: Oracle Application Management Pack for PeopleSoft is available on Linux, IBM AIX, Oracle Solaris, HP-UX Itanium, and Windows. For a complete list of supported platforms and operating systems, refer to the certification pages of PeopleTools. Also, for a complete list of Enterprise Manager supported platforms and operating systems, refer to the certification pages of Oracle Enterprise Manager.

Installing Oracle Application Management Pack for PeopleSoft Release 12.1.0.2.0

Installation and Implementation Guides are available on OTN and on OSDC.

  • PeopleSoft Application Management Plug-in 12.1.0.2.0 for Oracle Enterprise Manager 12c Install Guide is available as Part No. E57421-01.
  • PeopleSoft Application Management Plug-in 12.1.0.2.0 for Oracle Enterprise Manager 12c Implementation Guide is available as Part No. E55343-01.
  • The Oracle Application Management Pack PeopleSoft can be downloaded and installed by using the Self Update feature of Oracle Enterprise Manager.
    Please refer to the following documentation to understand more about the Self Update Feature:
    Oracle® Enterprise Manager Cloud Control Administrator's Guide

New Alta UI for ADF UI Shell Application

Andrejus Baranovski - Tue, 2014-10-07 23:14
I have applied the new Alta UI to a customised ADF UI Shell application. The customised version of the ADF UI Shell is taken from my previous blog post - ADF UI Shell Usability Improvement - Tab Contextual Menu. The old application with the new Alta UI looks fresh and clean. Runtime performance is improved - ADF transfers less content to the browser, which makes the application load and run faster.

Here you can download my sample application with Alta UI applied to ADF UI Shell - MultiTaskFlowApp_v12c.zip.

All three ADF UI Shell tabs are opened and Master-Detail data is displayed in this example:


New style is applied for LOV component and buttons, making all buttons and controls more visible and natural:


Customized ADF UI Shell supports tab menu - user can close current tab or other tabs:


There was a change in 12c related to the tab menu: we need to set the align ID property differently. You can see this change in the ADF UI Shell template file - a JavaScript function gets the tab ID to align directly from the component client ID property:


The Alta UI is applied simply by changing the skin name in the trinidad-config.xml file:


This hidden gem is packaged with the current JDev 12.1.3 release; you don't need to download anything extra.
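For reference, the skin is selected with the standard ADF skin-family element; the change amounts to something along these lines in WEB-INF/trinidad-config.xml (a sketch based on the 12.1.3 documentation, not the sample application's exact file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative trinidad-config.xml: switching the application to Alta -->
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <skin-family>alta</skin-family>
  <skin-version>v1</skin-version>
</trinidad-config>
```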

Bringing Clarity To The Avalanche Part 1 - OOW14

Floyd Teter - Tue, 2014-10-07 15:50
Since the prior post here, I've had some people ask why I compared Oracle OpenWorld this year to an avalanche.  Well, to be honest, there are two reasons.  First, it was certainly an avalanche of news. You can check all the Oracle press releases related to the conference here (warning: it's pages and pages of information).  Second, I'm tired of using the analogy of sipping or drinking from a firehose...time to try something new.

So let's talk about some User Experience highlights from the conference.  Why am I starting with UX?  Because I like it and it's my blog ;)

Alta UI

OK, let's be clear.  Alta is more of a user interface standard than a full UX, as it focuses strictly on UI rather than the entire user experience.  That being said, it's pretty cool.  It's a very clean and simplified look, and applies many lessons learned through Oracle's (separate) UX efforts.  I could blab on and on about Oracle Alta, but you can learn about it for yourself here.

Beacons

We all love gadgets.  I had the opportunity to get a sneak peek at some of the "projects that aren't quite products yet" in the works at the Oracle UX Labs.  Beacons are a big part of that work.  It turns out that the work has already progressed beyond mere gadgetry.  The beacons were used to guide me from station to station within the event space - this booth is ready for you now.  The AppsLab team talks about beacons on a regular basis.  I'm much more sold now on the usefulness of beacon technology than I was before OOW.  This was one of the better applications I've seen at the intersection of wearables and the Internet of Things.

Simplified UI

I like the concepts behind Simplified UI because well-designed UX drives user acceptance and increases productivity.  Simplified UI was originally introduced for Oracle Cloud Applications back when they were known as Fusion Applications.  But now we're seeing Simplified UI propagating out to other Oracle Applications.  We now see Simplified UI patterns applied to the E-Business Suite, JD Edwards and PeopleSoft.  Different underlying technology for each, but the same look and feel.  Very cool to see the understanding growing within Oracle development that user experience is not only important, but is a value-add product in and of itself.

Simplified UI Rapid Development Kit

Simplified UI is great for Oracle products, but what if I want to extend those products?  Or, even better, what if I want to custom-build products with the same look and feel?  Well, Oracle has made it easy for me to literally steal...in fact, they want me to steal...their secret sauce with the Simplified UI Rapid Development Kit.  Yeah, I'm cheating a bit.  This was actually released before OOW.  But most folks, especially Oracle partners, were unaware of it prior to the conference.  If I had a nickel for every time I saw a developer's eyes light up over this at OOW, I could buy my own yacht and race Larry across San Francisco Bay.  Worth checking out if you haven't already.

Student Cloud

I'll probably get hauled off to the special prison Oracle keeps for people who toy with the limits of their NDA for this, but it's too cool to keep to myself.  I had the opportunity to work hands-on with an early semi-functional prototype of the in-development Student Cloud application for managing Higher Education continuing education students.  The part that's cool:  you can see great UX design throughout the application.  Very few clicks, even fewer icons, a search-based navigation architecture, and very, very simple business processes for very specific use cases.  I can't wait to see and hear reactions when this app rolls out to the Higher Education market.

More cool stuff next post...

Little script for finding tables for which dynamic sampling was used

XTended Oracle SQL - Tue, 2014-10-07 14:42

You can always download latest version here: http://github.com/xtender/xt_scripts/blob/master/dynamic_sampling_used_for.sql
Current source code:

col owner         for a30;
col tab_name      for a30;
col top_sql_id    for a13;
col temporary     for a9;
col last_analyzed for a30;
col partitioned   for a11;
col nested        for a6;
col IOT_TYPE      for a15;
with tabs as (
      select 
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))  owner
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))  tab_name
        ,count(*)                                                                    cnt
        ,sum(executions)                                                             execs
        ,round(sum(elapsed_time/1e6),3)                                              elapsed
        ,max(sql_id) keep(dense_rank first order by elapsed_time desc)               top_sql_id
      from v$sqlarea a
      where a.sql_text like 'SELECT /* OPT_DYN_SAMP */%'
      group by
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))
)
select tabs.* 
      ,t.temporary
      ,t.last_analyzed
      ,t.partitioned
      ,t.nested
      ,t.IOT_TYPE
from tabs
    ,dba_tables t
where 
     tabs.owner    = t.owner(+)
 and tabs.tab_name = t.table_name(+)
order by elapsed desc
/
col owner         clear;
col tab_name      clear;
col top_sql_id    clear;
col temporary     clear;
col last_analyzed clear;
col partitioned   clear;
col nested        clear;
col IOT_TYPE      clear;

P.S. If you want to find the queries that used dynamic sampling, you can use a query like this:

select s.*
from v$sql s
where 
  s.sql_id in (select p.sql_id 
               from v$sql_plan p
               where p.id=1
                 and p.other_xml like '%dynamic_sampling%'
              )
Categories: Development

OOW : Edition 2015

Jean-Philippe Pinte - Tue, 2014-10-07 14:22
Note the dates of the 2015 edition in your agenda!
October 25 to 29, 2015

Presentations Available from OpenWorld

Anthony Shorten - Tue, 2014-10-07 11:38

Last week I conducted three sessions on a number of topics. The presentations used in those sessions are now available from Sessions --> Content Catalog on the Oracle OpenWorld site. Just search for my name (Anthony Shorten) to download the presentations in PDF format.

The sessions available are:

I know a few customers and partners came to me after each session to get a copy of the presentation. They are now available as I pointed out.

Objects versus Insert Statements

Anthony Shorten - Tue, 2014-10-07 11:06

A few times I have encountered issues and problems at customers that defy explanation. After investigation I usually find the cause, and in some cases it is the way the implementation created the data in the first place. In the majority of these cases, I find that interfaces or even people are using direct INSERT statements against the product database to create data. This is inherently dangerous for a number of reasons and is therefore strongly discouraged:

  • Direct INSERT statements frequently miss important data in the object.
  • Direct INSERT statements ignore any product business logic, which means the data can be inconsistent with the definition of the object. This can cause the product to misinterpret the data and may even cause data corruption in extreme cases.
  • Direct INSERT statements ignore product-managed referential integrity. We do not use database-level referential integrity because we allow extensions to augment the behavior of the object and to determine the optimal point for checking data integrity. The object has inbuilt referential integrity rules.

To avoid this situation, we strongly recommend that you only insert data through the object and NOT use direct INSERT statements. The interface to the object can be direct within the product, or via Web Services (either directly or through your favorite middleware) when creating data from an external source. Going through the object interface ensures not only that the data is complete, but also that product referential integrity is respected and that the data conforms to the business rules you have configured.

Take care and create data through the objects.

12c Upgrade and Concurrent Stats Gathering

Jason Arneil - Tue, 2014-10-07 07:50

I was upgrading an Exadata test database from 11.2.0.4 to 12.1.0.2 and I came across a failure scenario I had not encountered before. I’ve upgraded a few databases to both 12.1.0.1 and 12.1.0.2 for test purposes, but this was the first one I’d done on Exadata. And the first time I’d encountered such a failure.

I started the upgrade after checking with the pre-upgrade script that everything was ready to upgrade, and I ran with the maximum amount of parallelism:

$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql
.
.
.
Serial Phase #:81 Files: 1 A process terminated prior to completion.

Died at catcon.pm line 5084.

That was both annoying and surprising. The line in catcon.pm is of no assistance:

   5080   sub catcon_HandleSigchld () {
   5081     print CATCONOUT "A process terminated prior to completion.\n";
   5082     print CATCONOUT "Review the ${catcon_LogFilePathBase}*.log files to identify the failure.\n";
   5083     $SIG{CHLD} = 'IGNORE';  # now ignore any child processes
   5084     die;
   5085   }

But of more use was the bottom of a catupgrd.log file:

11:12:35 269  /
catrequtlmg: b_StatEvt     = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152

This error is coming from catrequtlmg.sql, but my first thought was to check whether the parameter resource_manager_plan was set, and it turned out it wasn't. However, setting it to the default plan and running this piece of SQL by itself produced the same error:

SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152



PL/SQL procedure successfully completed.

I then started thinking about what it meant by gather statistics concurrently and I noticed that I had indeed set this database to gather stats concurrently (it’s off by default):

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
TRUE

I then proceeded to turn off this concurrent gathering and rerun the failing SQL:


SQL> exec dbms_stats.set_global_prefs('CONCURRENT','FALSE');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
FALSE


SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
catrequtlmg: Gathering Table Stats USER$MIG
catrequtlmg: Gathering Table Stats COL$MIG
catrequtlmg: Gathering Table Stats CLU$MIG
catrequtlmg: Gathering Table Stats CON$MIG
catrequtlmg: Gathering Table Stats TAB$MIG
catrequtlmg: Gathering Table Stats IND$MIG
catrequtlmg: Gathering Table Stats ICOL$MIG
catrequtlmg: Gathering Table Stats LOB$MIG
catrequtlmg: Gathering Table Stats COLTYPE$MIG
catrequtlmg: Gathering Table Stats SUBCOLTYPE$MIG
catrequtlmg: Gathering Table Stats NTAB$MIG
catrequtlmg: Gathering Table Stats REFCON$MIG
catrequtlmg: Gathering Table Stats OPQTYPE$MIG
catrequtlmg: Gathering Table Stats ICOLDEP$MIG
catrequtlmg: Gathering Table Stats TSQ$MIG
catrequtlmg: Gathering Table Stats VIEWTRCOL$MIG
catrequtlmg: Gathering Table Stats ATTRCOL$MIG
catrequtlmg: Gathering Table Stats TYPE_MISC$MIG
catrequtlmg: Gathering Table Stats LIBRARY$MIG
catrequtlmg: Gathering Table Stats ASSEMBLY$MIG
catrequtlmg: delete_props_data: No Props Data

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

It worked! I was able to upgrade my database in the end.

I wish the preupgrade.sql script would check for this, or indeed that catrequtlmg.sql would disable concurrent gathering itself during the upgrade.

I would advise checking for this before any upgrade to 12c, and turning it off if you find it enabled in one of your about-to-be-upgraded databases.


iBeacons or The Physical Web?

Oracle AppsLab - Tue, 2014-10-07 06:55

For the past year at the AppsLab we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple of days ago, Google (unofficially) announced that one of their Chrome teams is working on what I'm calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:

“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we're developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback."

Here is a short rundown of how iBeacon works versus the Physical Web beacons:

iBeacon

The iBeacon profile advertises a 30-byte packet containing three values that combined make a unique identifier: UUID, Major, Minor. The mobile device actively listens for these packets. When it gets close to one, it queries a database (in the cloud) or uses hard-coded values to determine what it needs to do or show for that beacon. Generally the UUID identifies a common organization, the Major value identifies an asset within that organization, and the Minor value identifies a subset of assets belonging to the Major.
For example, if I'm close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I come within reach of any beacon my app can trigger interactions related to the whole organization ("Hello Noel, Welcome to Oracle."). The application had to query a database to know what that UUID represents. As I reach building 200, my application picks up another beacon containing a Major value of, let's say, 200. My app will again query to see what it represents ("You are in building 200."). Finally, when I get close to our new Cloud UX Lab, a beacon inside the lab broadcasts a Minor ID that represents the lab ("This is the Cloud UX lab, want to learn more?").
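As a sketch of that closed-ecosystem lookup (the identifiers, messages and fallback order here are hypothetical illustrations, not Apple's actual API):

```python
# Hypothetical registry for one organization's deployed beacons.
# An iBeacon advertises (UUID, Major, Minor); the IDs mean nothing
# without a registry like this behind the app.
REGISTRY = {
    ("ORG-UUID", None, None): "Hello Noel, Welcome to Oracle.",
    ("ORG-UUID", 200, None): "You are in building 200.",
    ("ORG-UUID", 200, 7): "This is the Cloud UX lab, want to learn more?",
}

def resolve(uuid, major, minor):
    """Return the most specific action registered for a sighted beacon,
    falling back from (UUID, Major, Minor) to (UUID, Major) to (UUID,)."""
    for key in ((uuid, major, minor), (uuid, major, None), (uuid, None, None)):
        if key in REGISTRY:
            return REGISTRY[key]
    return None  # a beacon from an ecosystem we know nothing about
```

A beacon from an unknown organization resolves to nothing, which is exactly the "sniffing BLE devices outside the Apple Store" situation: without the registry, the raw identifiers are useless.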

iBeacons are designed to work as a fully closed ecosystem, where only the deployed pieces (app+beacons+db) know what a beacon represents. Today I can walk into the Apple Store and use a Bluetooth app to "sniff" BLE devices, but unless I know what their UUID/Major/Minor values represent, I cannot do anything with that information. Only the official Apple Store app knows what to do when it is near the beacons around the store ("Looks like you are looking for a new iPhone case.").

As you can see, the iBeacon approach is a "push" method, where the device proactively pushes actions to you. In contrast, the Physical Web beacon proposes a "pull," or on-demand, method.

Physical Web

The Physical Web gBeacon advertises a 28-byte packet containing an encoded URL. Google wants to use the familiar and established mechanism of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to rank what is likely most important to you at the moment and display it.


Image from https://github.com/google/physical-web/blob/master/documentation/introduction.md
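To make the 28-byte constraint concrete, here is a sketch of how a URL can be squeezed into such a packet: one byte for the scheme prefix, one byte for each common suffix, raw characters otherwise. The byte codes follow the UriBeacon/Eddystone-URL convention as I understand it; treat the exact values as assumptions:

```python
# Hypothetical URL compressor in the spirit of UriBeacon / Eddystone-URL.
SCHEMES = {0x00: "http://www.", 0x01: "https://www.", 0x02: "http://", 0x03: "https://"}
SUFFIXES = {0x00: ".com/", 0x01: ".org/", 0x07: ".com", 0x08: ".org"}

def encode_url(url):
    """Encode a URL into a compact byte payload for a beacon advertisement."""
    # Replace the scheme prefix with a single code byte (longest match first).
    for code, prefix in sorted(SCHEMES.items(), key=lambda kv: -len(kv[1])):
        if url.startswith(prefix):
            out, rest = bytearray([code]), url[len(prefix):]
            break
    else:
        raise ValueError("unsupported scheme")
    # Replace common suffixes with code bytes; copy other characters as-is.
    i = 0
    while i < len(rest):
        for code, text in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[1])):
            if rest.startswith(text, i):
                out.append(code)
                i += len(text)
                break
        else:
            out.append(ord(rest[i]))
            i += 1
    if len(out) > 28:
        raise ValueError("URL too long for the advertisement packet")
    return bytes(out)
```

With this scheme, "https://www.example.com/" collapses from 24 characters to 9 bytes, which is why ordinary URLs fit into a tiny BLE advertisement at all.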

The Physical Web approach is designed to be a "pull" discovery service where the user will most likely initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that scans for nearby gBeacons, or I can open my Chrome browser and do a search.  The application or browser uses context to rank nearby objects alongside the results. It can also use calendar data, email or Google Now to narrow down interests.  A background process with "push" capabilities could also be implemented; it could have filters that alert the user to nearby objects of interest.  These interest rules could be predefined or inferred using Google's intelligence-gathering systems such as Google Now.

The main difference between the two approaches is that iBeacon is a closed ecosystem (app+beacons+db), while the Physical Web is intended to be a public, self-discovered (app/os+beacons+www) physical extension of the web. That said, the Physical Web could also be restricted by using protected websites and encrypted URLs.

Both approaches account for the common worry about these technologies: "Am I going to be spammed as soon as I walk inside a mall?"  The answer is NO. iBeacon is an opt-in service within an app, and Physical Web beacons will mostly work on demand or through filtered subscriptions.

So there you have it. Which method do you prefer?

Oracle OpenWorld 2014 Highlights

WebCenter Team - Tue, 2014-10-07 06:28

As Oracle OpenWorld 2014 comes to a close, we wanted to reflect on the week and provide some highlights for you all!

We say this every year, but this year's event was one of the best ones yet. We had more than 35 scheduled sessions, plus user group sessions, 10 live product demos, and 7 hands-on labs devoted to Oracle WebCenter and Oracle Business Process Management (Oracle BPM) solutions. This year's Oracle OpenWorld provided broad and deep insight into next-generation solutions that increase business agility, improve performance, and drive personal, contextual, and multichannel interactions. 

Oracle WebCenter & BPM Customer Appreciation Reception

Our 8th annual Oracle WebCenter & BPM Customer Appreciation Reception was held for the second year at San Francisco’s Old Mint, a National Historic Landmark. This was a great evening of networking and relationship building, where the Oracle WebCenter & BPM community had the chance to mingle and make new connections. Many thanks to our partners Aurionpro, AVIO Consulting, Bezzotech, Fishbowl Solutions, Keste, Redstone Content Solutions, TekStream & VASSIT for sponsoring!

Oracle Fusion Middleware Innovation Awards 

Oracle Fusion Middleware Innovation honors Oracle customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year’s winners for WebCenter were Bank of Lebanon and McAfee.


This year’s winners for the BPM category were State Revenue Office, Victoria and Vertafore.


Congratulations winners! 

Oracle Appreciation Event at Treasure Island

We stayed up past our bedtimes rocking to Aerosmith and hip-hopping to Macklemore & Ryan Lewis and Spacehog at the Oracle Appreciation Event. These award-winners—plus free-flowing networking, food, and drink—made Wednesday evening magical at Treasure Island. Once we arrived on Treasure Island, we saw that it had been transformed and we were wowed by the 360-degree views of Bay Area skylines (with an even better view from the top of the Ferris wheel). We tested our skills playing arcade games between acts, and relaxed and enjoyed ourselves after a busy couple of days.

Cloud

Cloud was one of the shining spotlights of OOW this year. For WebCenter and BPM, we had dedicated hands-on labs for Documents Cloud Service and Process Cloud Service at the InterContinental. In addition, we ran live demos throughout the week, including Documents Cloud Service, Process Cloud Service and Oracle Social Network (OSN). Documents Cloud Service and OSN were featured prominently in the Thomas Kurian OOW Keynote (from the 46-minute mark) and the FMW General Session (from the 40-minute mark). 

The Oracle WebCenter & BPM Community

Oracle OpenWorld is unmatched in providing you with opportunities to interact and engage with other WebCenter & BPM customers and experts from among our partner and employee communities. It was great to see everyone, make new connections and reconnect with old friends. We look forward to seeing you all again next year!

BI Applications in Cloud

Dylan's BI Notes - Mon, 2014-10-06 18:28
Prepackaged analytics applications are available as cloud services. The idea is that the client company does not need to use its own hardware and does not need to install the software or apply patches itself. All it needs is a browser. For the end users, there should not be much difference.   The BI apps built […]
Categories: BI & Warehousing

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle Database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs 10 seconds in production and 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time in production is 10 seconds and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 
And in QA the SQL ran in 20 seconds… so really they ran "the same" in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining that "production is 2 times as fast as QA" and then creatively using that information.
Determining The "Speed Value"
Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.
Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape or form. It is simply a measure of brute-force Oracle buffer cache logical IO processing speed, based on a number of factors. If you are architecting a system, other tests will be required.
As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used by the speed tool is different on the various systems, then obviously we can't expect the "true speeds" to be comparable.
You can download the tool HERE.
How To Analyze The Risk
Before we can analyze the risk, we need the "speed value" for both systems. Suppose a faster system means its speed rating is larger. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."
Now let's put this all together and quickly go through three examples.
This is the core math:
standardized elapsed time = sql elapsed time * system speed value
So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000 (25 × 200). The "standardized elapsed time" is simply a way to compare SQL elapsed times; it is not what users will feel and not the true SQL elapsed time.
To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.
1. The SQL truly runs the same in both systems.
Here is the math:
QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the true speed situation is QAT = PRD. This means the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and that this therefore presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
2. The SQL runs faster in production.
Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. Someone might say, "Well of course it runs slower in QAT, because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."
Here are the scenario numbers:
QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the QAT SQL is truly running slower in QAT compared to PRD. Specifically, the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL in QAT to run slower than in the PRD system, we didn't expect it to be quite so slow. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.
3. The SQL runs faster in QAT.
In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone says, "Well of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at the standardized elapsed times.
QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the QAT SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the QAT SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
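The arithmetic behind the three scenarios above is trivial to script. Here is a small sketch (the function and variable names are mine, not part of the downloadable tool):

```python
def standardized(elapsed_s, speed_value):
    """Standardized elapsed time: wall-clock seconds scaled by the system's speed value."""
    return elapsed_s * speed_value

def qat_sql_is_risky(qat_elapsed, qat_speed, prd_elapsed, prd_speed):
    """Flag the SQL only when it is *truly* slower in QAT, i.e. after
    removing the CPU speed difference between the two systems."""
    return standardized(qat_elapsed, qat_speed) > standardized(prd_elapsed, prd_speed)

# Scenario 1: 20s x 300 = 6000 vs 10s x 600 = 6000 -> not risky
# Scenario 2: 30s x 300 = 9000 vs 10s x 600 = 6000 -> risky
# Scenario 3: 15s x 300 = 4500 vs 10s x 600 = 6000 -> not risky
```

Note that the comparison only ever uses the product of elapsed time and speed value, so the absolute scale of the speed ratings doesn't matter, only their ratio.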
In Summary...
Identifying risk is extremely important when planning for an upgrade. It is unlikely the QAT and production systems will be identical in every way, and this mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated was a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.
What's Next?
Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs