Skip navigation.

Feed aggregator

Miranda's Customer Panel is "The One to Beat Going Forward"

Linda Fishman Hoyle - Tue, 2014-10-28 14:02

EVP Steve Miranda admittedly talks fast, but he's the farthest thing from a fast talker. In his OpenWorld 2014 general session, Oracle Applications―Don’t Sit On the Sidelines, Miranda (known for his credibility) took conference goers on a vigorous sprint through Oracle Applications.

He stated that the main objective of his presentation was to answer four of the most common questions that he gets from customers:

  • Is Oracle going to force me to go the cloud? (The short answer is no.)
  • When is the Oracle Cloud going to be ready? (The short answer is today.)
  • Why should I move to the cloud? (That required a bit longer answer. Refer to Steve's video.)
  • OK, I’m going to move to the cloud. How should I do it and what should I be thinking about on a go-forward basis?

Miranda emphasized the speed at which Oracle is getting innovation into the hands of its customers. In the past, a customer had a problem and two years later the software vendor delivered the solution. But by then, the problem had changed and the cycle started all over. It took much longer to turn the crank. Now we’re delivering updates every three to four months in a cloud-based model that is faster, cheaper, and better. Plus we can prune the apps based on how customers are using them. That precise and immediate feedback leads to more modernization―and much happier customers.

Speaking of customers, Miranda's panel had a great mix of them from Atradius (CX Cloud), BG Group (HCM Cloud), GE (ERP Cloud), and Marriott (Social and Marketing Cloud). The discussion covered the companies’ business imperatives, approaches to the cloud, execution strategies, and most importantly, compelling results. The panelists were simply outstanding. Oracle's Rajan Krishnan said, "This is the one to beat going forward."

Here's the video link to Miranda's 2014 General Session (1:06).

November 5: Wave Broadband ERP Cloud Reference Forum

Linda Fishman Hoyle - Tue, 2014-10-28 13:58

Join us for another Oracle Customer Reference Forum on Wednesday, November 5, 2014, at 9:00 a.m. PT / 12:00 p.m. ET. These Reference Forums are a great vehicle for you to advance later-stage deals.

Julie Caldwell, VP of Accounting at Wave Broadband, is a seasoned finance professional with more than 25 years' experience in accounting and finance.

She will talk about what prompted the search for a new ERP system, why Wave chose Oracle, and how the implementation project is going. She will host a Q&A session after the overview.

Wave Division Holdings is currently a live ERP Cloud customer. It operates leading broadband cable systems under the trade names Wave Broadband and Astound Broadband in the Tier 1 suburban markets of Washington, Oregon, and California. The company currently serves more than 232,000 customers. Wave has 1000+ employees.

Invite your prospects and customers. You can register now to attend the live Forum on Wednesday, November 5 at 9:00 a.m. PT / 12:00 p.m. ET and learn more from Wave directly.

OGG-01742 when trying to start extract process

DBASolved - Tue, 2014-10-28 13:57

Every once in a while I do something in my test environments to test something else, then I go back to test core functions of the product; in this case I was testing a feature of Oracle GoldenGate 12c.  Earlier in the day, I had set the $ORACLE_HOME environment variable to reference the Oracle GoldenGate home.  Instead of closing my session and restarting, I just started GGSCI and the associated manager.  To my surprise, the extracts that I had configured wouldn’t start.  GGSCI was issuing an OGG-01742 error about the “child process is no longer alive” (Image 1).

Image 1:

As I was trying to figure out what was going on, I checked the usual files (ggserr.log and report files) for any associated errors.  I didn’t find anything that was out of place or led me to believe there was a problem with Oracle GoldenGate.  The next thing I needed to identify was whether the environment was configured correctly.  To do this, I first checked the session environment variables (Image 2) related to the Oracle database.

Image 2:

As you can see, I had set the ORACLE_HOME equal to my OGG_HOME.  After seeing this, I remembered that I had set it this way for a specific test.  Since the ORACLE_HOME was set to OGG_HOME, the Oracle GoldenGate Extract (capture) process didn’t know where to get the needed libraries for the database.  

The Fix:

To fix this issue, I just needed to reset my environment to reference the ORACLE_HOME and ORACLE_SID correctly.  In a Linux environment, I like to use “. oraenv” to do this (Image 3).

Image 3:

Now that I have reset my Oracle environment to point to the database, I should be able to start my extracts and not get any error messages related to OGG-01742 (Image 4).

Note: It appears that if the manager process has AUTOSTART and/or AUTORESTART set, the manager process needs to be restarted before the OGG-01742 message will go away.
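Putting the note above together with the fix, the restart sequence inside GGSCI would look something like the following sketch (the extract name EXT1 is hypothetical; substitute your own):

GGSCI> STOP MANAGER!
GGSCI> START MANAGER
GGSCI> START EXTRACT EXT1
GGSCI> INFO ALL

The INFO ALL at the end confirms that the manager and extract processes are back in the RUNNING state.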

Image 4:

With my extracts started now, I’ve got all my Oracle GoldenGate processes back up and running (Image 5).

Image 5:

In a nutshell, if you get an OGG-01742 error while trying to start or restart an extract (capture) process, just reset your Oracle environment parameters!


Filed under: Golden Gate
Categories: DBA Blogs

PeopleSoft's paths to the Cloud - Part I

Javier Delgado - Tue, 2014-10-28 13:54

Nowadays, all paths seem to lead to cloud computing. In the business applications world, Oracle is pushing hard to position the Oracle Cloud Applications in an increasingly competitive market. The reasons that favor Software as a Service (SaaS) applications over their on-premise counterparts are significant, even though there are still a good number of circumstances under which the latter should normally be the preferred option.

Our beloved PeopleSoft (yes, I like PeopleSoft, so what?) is clearly not a SaaS application. Still, my view is that we can benefit from many cloud computing features without migrating to another application.

In this post, and a few more to come, I will focus on the aspects of cloud computing that could be incorporated into your PeopleSoft application.

Infrastructure as a Service

Infrastructure as a Service (IaaS) is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers and networking components. The service provider owns the equipment and is responsible for housing, running and maintaining it. The client typically pays on a per-use basis.

Probably the best known service in this category is Amazon EC2, but there are many other providers with similar features. We have installed PeopleSoft quite a few times under Amazon EC2, and the advantages are visible immediately:

  • CPU, memory and disk space can be dynamically allocated. This is particularly useful when facing system usage peaks, for instance close to the evaluations submission deadline when using the PeopleSoft ePerformance module.
  • Servers can be seamlessly cloned, which enormously reduces the time needed to set up new environments.
  • The instance cloning can also take place between different geographical areas, providing a perfect solution for contingency environments.
  • As mentioned before, the allocated servers are paid for on a per-use basis. The only exception is storage, for which you will be charged even while the server is down (assuming you keep the storage space allocated for the next time the instance is booted).
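As an illustration of the points above, and assuming the AWS command line tools are installed and configured (the AMI and instance IDs below are made up), the cloning and per-use lifecycle could be scripted like this:

# Create an AMI (a reusable template) from an existing PeopleSoft server
aws ec2 create-image --instance-id i-0123456 --name "psoft-hcm-dev"

# Launch a clone from that AMI, picking a larger instance type for a usage peak
aws ec2 run-instances --image-id ami-0abcdef --instance-type m3.xlarge

# Stop the instance outside business hours; compute billing stops, but storage is still charged
aws ec2 stop-instances --instance-ids i-0123456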

Use Case: Development Environments

One of the most typical uses of IaaS with PeopleSoft is for non-production environments. In many cases, these environments do not need to be up and running 24x7, so the solution provided by Infrastructure as a Service is not only more flexible, but also normally more cost effective.

The flexibility of IaaS is a major advantage when a sandbox environment is needed. Cloning any existing environment takes just a few minutes, allowing developers to build prototypes in a new, isolated environment that is out of the migration path.

Use Case: Test a New Release

Another benefit of IaaS is the ability to use templates from which a new instance can rapidly be created. The Amazon name for these templates is AMI. In the past, Oracle used to provide AMIs for PeopleSoft 9.1, so if you wanted to test that release, it was just a couple of minutes away.

However, currently there are no AMIs provided by Oracle for PeopleSoft 9.2. Luckily, you may still contact consulting companies like BNB to provide you with the AMI, as long as you have a valid PeopleSoft license (the Oracle-provided AMIs are under a trial license, so even if you are not currently a PeopleSoft customer you can use them).

Note: An alternative way to test a new release is to download the latest PeopleSoft Update Manager image, but that takes considerable time due to the size of the files (over 30 GB).

Use Case: Training

IaaS can also be used to quickly deploy PeopleSoft instances for internal user training. We actually use this approach at BNB for training our consultants. We have created an AMI for each course; before a training session starts, we create one instance per student, so each student has a completely isolated environment to learn and play with.

Coming Next...

In the next post, I will cover the value that cloud computing brings to PeopleSoft Production environments. But that's not the end of it, so stay tuned.

phpBB 3.1 Ascraeus Released

Tim Hall - Tue, 2014-10-28 13:41

Just a quick heads-up for those that use it: phpBB 3.1 Ascraeus has been released. It’s a feature release, so the upgrade is a bit messy. I did the “automatic” upgrade, but there was so much manual work involved that I would recommend taking the approach of deleting the old files, replacing them with the new ones, then running the database upgrade from there. I’ve not tried that approach, but the docs say it is OK to do it that way…

I figured I might as well upgrade, even though the forum is locked. :)



phpBB 3.1 Ascraeus Released was first posted on October 28, 2014 at 8:41 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Use OPatch to check Oracle GoldenGate version

DBASolved - Tue, 2014-10-28 11:08

Recently I was strolling the OTN message boards and came across a question about identifying the version of Oracle GoldenGate using OPatch.  This was the second time I had come across this question, so I decided to take a look and see if Oracle GoldenGate information could be retrieved using OPatch.

Initially I thought that identifying the Oracle GoldenGate version could only be done by logging into GGSCI and reviewing the header information.  To do this, just set up the Oracle environment using “. oraenv”.

Note: “. oraenv” will use the /etc/oratab file to set the ORACLE_HOME and ORACLE_SID parameters and ensure that Oracle GoldenGate has access to the library files needed.

Once the environment is set, GGSCI can be used to start the interface.

[oracle@db12cgg ogg]$ . oraenv
ORACLE_SID = [oragg] ?
The Oracle base has been set to /u01/app/oracle
[oracle@db12cgg ogg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version OGGCORE_12.
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI ( 1>

Notice in the code above that the version of Oracle GoldenGate being run is for Linux x64.

How can this be done through OPatch?  The same information can be gathered using the opatch utility.  Ideally, you will want to use opatch from the $GG_HOME/OPatch directory.

Note: $ORACLE_HOME needs to be set to $OGG_HOME before the correct opatch inventory will be listed.  If $ORACLE_HOME is set for the database, opatch will return information about the database, not Oracle GoldenGate.
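For example, using the GoldenGate home from my environment (adjust the path to your own installation):

[oracle@db12cgg ogg]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg
[oracle@db12cgg ogg]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.2/ogg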

After making sure that the $ORACLE_HOME directory is pointed to the correct $GG_HOME,  the inventory for Oracle GoldenGate can be retrieved using “./opatch lsinventory”.

[oracle@db12cgg ogg]$ pwd
[oracle@db12cgg ogg]$ cd OPatch
[oracle@db12cgg OPatch]$ ./opatch lsinventory
Invoking OPatch

Oracle Interim Patch Installer version
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home      : /u01/app/oracle/product/12.1.2/ogg
Central Inventory : /u01/app/oraInventory
  from          : /etc/oraInst.loc
OPatch version    :
OUI version      :
Log file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/opatch2014-10-28_11-18-49AM.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/lsinv/lsinventory2014-10-28_11-18-49AM.txt

Installed Top-level Products (1):

Oracle GoldenGate Core                                    

There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.


OPatch succeeded.

As you can tell, I was able to find the same information using OPatch without having to go to the GGSCI utility.

Note: I have not had a chance to check this against Oracle GoldenGate 11g and earlier. This may be something specific to Oracle GoldenGate 12c.  Will verify at a later time.



Filed under: Golden Gate
Categories: DBA Blogs

Part 4: DBA's guide to managing sandboxes

Keith Laker - Tue, 2014-10-28 08:11

This is the next part in my on-going series of posts on the topic of how to successfully manage sandboxes within an Oracle data warehouse environment. In Part 1 I provided an overview of sandboxing (key characteristics, deployment models) and introduced the concept of a lifecycle called BOX’D (Build, Observe, X-Charge and Drop). In Part 2 I briefly explored the key differences between data marts and sandboxes. Part 3 explored the Build-phase of our lifecycle.

Now, in this post I am going to focus on the Observe-phase. At this stage in the lifecycle we are concerned with managing our sandboxes. Most modern data warehouse environments will be running hundreds of data discovery projects so it is vital that the DBA can monitor and control the resources that each sandbox consumes by establishing rules to control the resources available to each project both in general terms and specifically for each project.  

In most cases, DBAs will set up a sandbox with dedicated resources. However, this approach does not make efficient use of resources, since sharing of unused resources across other projects is just not possible. The key advantage of Oracle Multitenant is its unique approach to resource management. The only realistic way to support thousands of sandboxes, which in today’s analytically driven environments is entirely possible if not inevitable, is to allocate one chunk of memory and one set of background processes for each container database. This provides much greater utilisation of existing IT resources and greater scalability as multiple pluggable sandboxes are consolidated into the multitenant container database.



Using multitenant we can now expand and reduce our resources as required to match our workloads. In the example below we are running an Oracle RAC environment, with two nodes in the cluster. You can see that only certain PDBs are open on certain nodes of the cluster and this is achieved by opening the corresponding services on these nodes as appropriate. In this way we are partitioning the SGA across the various nodes of the RAC cluster. This allows us to achieve the scalability we need for managing lots of sandboxes. At this stage we have a lot of project teams running large, sophisticated workloads which is causing the system to run close to capacity as represented by the little resource meters.


Expand 1


It would be great if our DBA could add some additional processing power to this environment to handle this increased workload. With 12c what we can do is simply drop another node into the cluster, which allows us to spread the processing of the various sandbox workloads out across the expanded cluster.

Expand 2

Now our little resource meters are showing that the load on the system is a lot more comfortable. This shows that the new multitenant feature integrates really well with RAC. It’s a symbiotic relationship whereby Multitenant makes RAC better and RAC makes Multitenant better.

So now we can add resources to the cluster how do we actually manage resources across each of our sandboxes? As a DBA I am sure that you are familiar with the features in Resource Manager that allow you to control system resources: CPU, sessions, parallel execution servers, Exadata I/O. If you need a quick refresher on Resource Manager then check out this presentation by Dan Norris “Overview of Oracle Resource Manager on Exadata” and the chapter on resource management in the 12c DBA guide.

With 12c Resource Manager is now multitenant-aware. Using Resource Manager we can configure policies to control how system resources are shared across the sandboxes/projects. Policies control how resources are utilised across PDBs creating hard limits that can enforce a “get what you pay for” model which is an important point when we move forward to the next phase of the lifecycle: X-Charge. Within Resource Manager we have adopted an “industry standard” approach to controlling resources based on two notions:

  1. a number of shares is allocated to each PDB
  2. a maximum utilization limit may be applied to each PDB

To help DBAs quickly deploy PDBs with a pre-defined set of shares and utilisation limits there is a “Default” configuration that works, even as PDBs are added or removed. How would this work in practice? Using a simple example this is how we could specify resource plans for the allocation of CPU between three PDBs:

RM 1


As you can see, there are four total shares, 2 for the data warehouse and one each for our two sandboxes. This means that our data warehouse is guaranteed 50% of the CPU whatever else is going on in the other sandboxes (PDBs). Similarly each of our sandbox projects is guaranteed at least 25%. However, in this case we did not specify settings for maximum utilisation. Therefore, our marketing sandbox could use 100% of the CPU if both the data warehouse and the sales sandbox were idle.
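As a sketch of how such a plan could be defined with the 12c Resource Manager PL/SQL API (the plan and PDB names here are invented for the example):

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_cdb_plan(
    plan    => 'sandbox_plan',
    comment => 'CPU shares for the DW and two sandboxes');
  dbms_resource_manager.create_cdb_plan_directive(
    plan => 'sandbox_plan', pluggable_database => 'dw_pdb',        shares => 2);
  dbms_resource_manager.create_cdb_plan_directive(
    plan => 'sandbox_plan', pluggable_database => 'sales_pdb',     shares => 1);
  dbms_resource_manager.create_cdb_plan_directive(
    plan => 'sandbox_plan', pluggable_database => 'marketing_pdb', shares => 1);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

Note that, as in the example above, no utilization_limit is set, so on an otherwise idle system any one PDB may use all of the CPU.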

By using the “Default” profile we can simplify the whole process of adding and removing sandboxes/PDBs. As we add and remove sandboxes, the system resources are correctly rebalanced across all the plugged-in sandboxes/PDBs, using the settings of the default profile, as shown below.

RM 2



In this latest post on sandboxing I have examined the “Observe” phase of our BOX’D sandbox lifecycle. With the new  multitenant-aware Resource Manager we can configure policies to control how system resources are shared across sandboxes. Using Resource Manager it is possible to configure a policy so that the first tenant in a large, powerful server experiences a realistic share of the resources that will eventually be shared as other tenants are plugged in.

In the next post I will explore the next phase of our sandbox lifecycle, X-charge, which will cover the metering and chargeback services for pluggable sandboxes. 



Categories: BI & Warehousing

Cedar Wins Gold – PeopleSoft Partner of the Year

Duncan Davies - Tue, 2014-10-28 08:00

If you follow any number of those in the Partner community on LinkedIn you’ll have seen many of us asking that you vote for us in the UKOUG’s annual Partner of the Year competition. All of the partners are really grateful for your votes as winning an award selected by end-users carries significant prestige.

I’m delighted that the company which I now work for – Cedar Consulting – were awarded 1st place (Gold) for PeopleSoft Partner of the Year for 2014/2015.


Simon (right), collecting the award from David Warburton-Broadhurst – the UKOUG’s President

Cedar were also thrilled to win Silver in the Fusion Partner of the Year awards, further establishing our reputation as the go-to partner for Fusion/Taleo for all existing PeopleSoft customers.

Simon Wragg, Director at Cedar Consulting, said: “We are honoured to receive both the PeopleSoft and Fusion Partner of the Year awards amongst such a strong group of finalists. Cedar Consulting are delighted to be recognised as one of the leading partners within the UK Oracle User Group community. Winning these awards and knowing that so many votes were cast by Oracle customers is a real testament to the service we have provided over the last 12 months.”

We’d like to thank all of you who took the time to vote for us, we’re very grateful for your support.


Getting started with XQuery Update Facility 1.0

Marco Gralike - Tue, 2014-10-28 07:15
DeleteXML, InsertXML, UpdateXML, appendChildXML, insertChildXML, insertchildXMLafter, insertChildXMLbefore, insertXMLafter and insertXMLbefore are dead (& deprecated) from…

Demo User Community in BPM 12c Quickstart

Darwin IT - Tue, 2014-10-28 05:34
When you want to do demo-ing or perform the BPM12c workshops on a BPM12c QuickStart developers installation you'll need the Oracle BPM Demo User community. You know: with Charles Dickens, John Steinbeck and friends.

How to do this is described on the page behind this link. You'll need the demo-community scripting, which can be found by following this link and then downloading 'workflow-001-DemoCommunitySeedApp'.

However, besides adapting the file, there are a few changes to make in the build.xml.

First, find the property declaration for 'wls.home', and change it to:
<property name="wls.home" value="${bea.home}/wlserver"/>
This is needed, since the folder for the WebLogic server was renamed in the FMW12c home. Then, after the 'End Default values for params' comment, add the following:

<!-- Import task defs for the ant-contrib functions, such as if -->
<property name="ant-contrib.jar" value="${bea.home}/oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar"/>
<taskdef resource="net/sf/antcontrib/antlib.xml">
  <classpath>
    <pathelement location="${ant-contrib.jar}"/>
  </classpath>
</taskdef>
This is because the script lacks a definition for the ant-contrib library, which is needed, among other things, for the 'if' tasks. After this change, it worked for me.

GitHub DMCA takedown notices on the rise!

Nilesh Jethwa - Tue, 2014-10-28 04:54

GitHub maintains a list of all DMCA takedown notices along with counteractions and retractions if any.

Analysing all the notices since 2011, it seems that the takedown notices are on the rise.

Year View : Notice the sharp increase in 2014



Quarterly view: Now looking at the quarterly breakup, it seems like the takedowns are cooling off in the later quarters.



So who is issuing these DMCA takedowns?

Here is the complete list of all companies who issued DMCA takedowns


NOTE: The names were extracted from the description text

And here are the counteractions and retractions


See the full list of companies with notice type

So the important question is: “Why have the DMCA takedown notices increased?”

One important thing to note is that sites like Stack Overflow encourage replicating the content of the web page from which the original idea, algorithm, or source code was copied. To be honest, this is a good thing, because a lot of the time these referenced sites become zombies and you don’t want to lose that knowledge. But could it be the case that such copied, unattributed source code ends up on GitHub, causing the increase in takedown notices as companies start discovering it?

Oracle Grid Infrastructure: fixing the PRVG-10122 error during installation

Yann Neuhaus - Tue, 2014-10-28 02:22

Making errors is human, and when you configure a new Oracle Grid Infrastructure environment (especially one with a large number of nodes), mistakes can happen when configuring ASMLIB on all nodes. If you get an error looking like "PRVG-10122 : ASMLib configuration...does not match with cluster nodes", there is a simple solution to fix it.

When you are installing Grid Infrastructure, the following error can occur in the cluvfy output or directly in the pre-requisites check step of the OUI:


PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "srvora01" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "srvora02" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "srvora02" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "srvora02" does not match with cluster nodes
Result: Check for ASMLib configuration failed.


The three parameters ORACLEASM_UID, ORACLEASM_GID and ORACLEASM_ENABLED displayed in the error should be defined when configuring ASMLIB on the system (see Grid Infrastructure 12c pre-requisites for installation, configure ASM step). To check if the configuration is coherent between the nodes specified in the error above, run the following command as root on all concerned nodes. In my case, srvora01 and srvora02 are the involved servers:


On srvora01

[root@srvora01 ~]# oracleasm configure


On srvora02

[root@srvora02 ~]# oracleasm configure


As we can see, it seems that ASMLIB has not been configured on srvora02: ORACLEASM_ENABLED is false, and no UID or GID is provided. These are the default values! The parameters are different between the two nodes.
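As a side note, rather than logging into each node by hand, the same comparison can be scripted over ssh (this sketch assumes root ssh equivalence between the nodes and the standard /usr/sbin/oracleasm location):

[root@srvora01 ~]# for node in srvora01 srvora02
> do
>   echo "--- $node ---"
>   ssh root@$node /usr/sbin/oracleasm configure
> done

Any parameter that differs between the two blocks of output is a candidate for the PRVG-10122 error.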

To solve the issue, simply reconfigure ASMLIB on the second node by running the following command with the right parameters:


On srvora02


[root@srvora02 ~]# oracleasm configure -i


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.


Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done


Now, the parameters are the same between the two nodes:


[root@srvora02 ~]# oracleasm configure


The Grid Infrastructure installation can now continue. I know the error message was quite explicit, but after having spent hours configuring all the nodes of my cluster, it took me some time to understand my big mistake. I hope this will help you.

First Rows

Jonathan Lewis - Tue, 2014-10-28 01:01

Following on from the short note I published about the first_rows optimizer mode yesterday here’s a note that I wrote on the topic more than 2 years ago but somehow forgot to publish.

I can get quite gloomy when I read some of the material that gets published about Oracle; not so much because it’s misleading or wrong, but because it’s clearly been written without any real effort being made to check whether it’s true. For example, a couple of days ago [ed: actually some time around May 2012] I came across an article about optimisation in 11g that seemed to be claiming that first_rows optimisation somehow “defaulted” to first_rows(1), or first_rows_1, optimisation if you didn’t supply a final integer value.

For at least 10 years the manuals have described first_rows (whether as a hint or as a parameter value) as being available for backwards compatibility; so if it’s really just a synonym for first_rows_1 (or first_rows(1)) you might think that the manuals would actually mention this. Even if the manuals didn’t mention it you might just consider a very simple little test before making such a contrary claim, and if you did make such a test and found that your claim was correct you might actually demonstrate (or describe) the test so that other people could check your results.

It’s rather important, of course, that people realise (should it ever happen) that first_rows has silently changed into first_rows_1 because any code that’s using it for backwards compatibility might suddenly change execution path when you did the critical upgrade where the optimizer changed from “backwards compatibility” mode to “completely different optimisation strategy” mode. So here’s a simple check (run from – to make sure I haven’t missed the switch):


create table t2 as
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from    all_objects
where   rownum <= 3000
;

begin
        dbms_stats.gather_table_stats(
                user,
                't2',
                method_opt => 'for all columns size 1'
        );
end;
/

create index t2_i1 on t2(n1);

SQL> select /*+ all_rows */ n2 from t2 where n1 = 15;

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |    15 |   120 |    12   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    12   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("N1"=15)

You’ll notice that I’ve created my data in a way that means I’ll have 15 rows with the value 15, scattered evenly through the table. As a result of the scattering the clustering_factor on my index is going to be similar to the number of rows in the table, and the cost of fetching all the rows by index is going to be relatively high. Using all_rows optimization Oracle has chosen a tablescan.

So what happens if I use the first_rows(1) hint, and how does this compare with using the first_rows hint?

SQL> select /*+ first_rows(1) */ n2 from t2 where n1 = 15;

| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |       |     2 |    16 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |     2 |    16 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |       |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("N1"=15)

SQL> select /*+ first_rows */ n2 from t2 where n1 = 15;

| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |       |    15 |   120 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2    |    15 |   120 |    16   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_I1 |    15 |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("N1"=15)

You might not find it too surprising to see that Oracle used the indexed access path in both cases. Does this mean that first_rows really means (or defaults to) first_rows(1) ?

Of course it doesn’t - you need only look at the estimated cost and cardinality to see this. The two mechanisms are clearly implemented through different code paths. The first_rows method uses some heuristics to restrict the options it examines, but still gives us the estimated cost and cardinality of fetching ALL the rows using the path it has chosen. The first_rows(1) method uses arithmetic to decide on the best path for getting the first row, and adjusts the cost accordingly to show how much work it thinks it will have to do to fetch just that one row.

Of course, no matter how inane a comment may seem to be, there’s always a chance that it might be based on some (unstated) insight. Is there any way in which first_rows(n) and first_rows are related ? If so could you possibly manage to claim that this establishes a “default value” link?

Funnily enough there is a special case: if you try hinting with first_rows(0) – that’s the number zero – Oracle will use the old first_rows optimisation method – you can infer this from the cost and cardinality figures, or you can check the 10053 trace file, or use a call to dbms_xplan() to report the outline.  It’s an interesting exercise (left to the reader) to decide whether this is the lexical analyzer deciding to treat the “(0)” as a new – and meaningless – token following the token “first_rows”, or whether it is the optimizer recognising the lexical analyzer allowing “first_rows(0)” as a token which the optimizer is then required to treat as first_rows.
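If you want to check the first_rows(0) special case on your own system, here is a minimal sketch (t2, t2_i1 and n1 are the demo table, index and column from the examples above; the exact figures will depend on your data and version):

```sql
-- If first_rows(0) really falls back to the old first_rows method,
-- the cost and cardinality of these two plans should match.
set autotrace traceonly explain
select /*+ first_rows(0) */ n2 from t2 where n1 = 15;
select /*+ first_rows */ n2 from t2 where n1 = 15;
set autotrace off
```

A 10053 trace of the same pair of statements will show which optimisation mode the optimizer actually reported.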

Mind you, if you only want the first zero rows of the result set there’s a much better trick you can use to optimise the query – don’t run the query.


SQL Server & memory leak: Are you sure?

Yann Neuhaus - Mon, 2014-10-27 22:02

I have recently come across an issue with one of my customers, who told me that their SQL Server 2008 R2 instance had a memory leak. These are strong words! The server has 64 GB of RAM and the concerned SQL Server instance is limited to 54 GB by the max server memory (GB) server option. However, he noticed that SQL Server used 60 GB of memory, which did not correspond to the configured max server memory option. What’s going on? Let’s investigate what is causing this memory overhead.

I admit to working mainly on SQL Server 2012 now, because most of my customers have upgraded to this version, so I had to brush up on the memory architecture of SQL Server 2008 R2.

Let’s start with the following question: Is it possible for SQL Server to consume more memory than a configured memory limit? For SQL Server versions older than 2012 the answer is yes, and this is directly related to the memory architecture design. From this point on, we will talk exclusively about the SQL Server 2005 / 2008 versions. So, as you certainly know, SQL Server caps only the buffer pool memory area, which mainly concerns the single page allocator mechanism.

Let’s have a little deep dive on SQL Server memory architecture here to understand where the single page allocator comes from. My goal is not to provide a complete explanation of the SQL Server memory management with SQLOS but just the concept to understand how to troubleshoot the issue presented in this article.

First of all, threads in SQL Server cannot interface with memory directly. They must go through memory allocator routines of a memory node, which knows which Windows APIs to use to honor a memory request. The memory node is a hidden component and provides locality of allocation. Page allocators are one type of memory allocator, and the most commonly used with the SQLOS memory manager, because they allocate memory in multiples of the SQLOS page (payload = 8KB). Because the memory node is a hidden component, a thread cannot use it directly; instead it creates a memory object, which is served by its own memory clerk depending on its type.

A memory clerk is another component that provides data caching, memory control (in case of memory pressure, for instance) and memory usage statistics tracking, and supports the same types of memory allocators as a memory node. There are several types of memory allocators, such as the single page allocator, which can only provide one page at a time, the multipage allocator, which provides a set of pages at a time, and others.

To summarize, when a thread requests memory to create a memory object, the request goes to the concerned memory clerk, which in turn calls the appropriate memory allocator. Here is a simple representation of memory allocation:


{Thread} -> {Memory object} --> {Memory Clerk} --> {Memory allocator in Memory Node}


The most interesting part here is that the buffer pool in SQL Server versions older than 2012 acts as both a memory clerk and a consumer. This means it can provide single pages to other consumers and track its own memory consumption. You can read the detailed explanations in the excellent articles from Slava Oks here.

This is why we only control the buffer pool size when configuring the min / max server memory options: the buffer pool serves single page allocations, unlike other memory clerks that use multi-page allocations. Note that SQL Server 2012 memory management is completely different and, fortunately, gives us better control over the memory limit.
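As a quick sanity check before going further, the configured cap can be read from sys.configurations (a standard catalog view; shown here as a sketch):

```sql
-- Check the configured memory caps; on SQL Server 2005/2008 these
-- limit only the buffer pool (single page allocations).
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');
```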

So, now that we know exactly what is capped by the max server memory option, let's go back to my problem. Unfortunately I cannot reveal the real context of my customer here, but no problem: I am able to simulate the same issue.

In my scenario, I have a SQL Server 2008 R2 instance capped at 2560 MB on a server with 8192 MB of total memory. If I take a look at the task manager panel, as my customer did, I can see that SQL Server uses more than the configured value:




Some quick math tells us that SQL Server is using 3.4 GB rather than 2.5 GB. So where does the remaining part come from? If we take a look at the DMV sys.dm_os_sys_info, we can confirm that the buffer pool is capped at 2560 MB as expected (single page allocations):


SELECT
       physical_memory_in_bytes / 1024 / 1024 AS physical_memory_MB,
       bpool_committed / 128 AS bpool_MB,
       bpool_commit_target / 128 AS bpool_target_MB,
       bpool_visible / 128 AS bpool_visible_MB
FROM sys.dm_os_sys_info (nolock);





Go ahead and take a look at the multipage allocator statistics related to memory clerks by using the DMV sys.dm_os_memory_clerks (remember that memory clerks have memory usage tracking capabilities):


SELECT
       name AS clerk_name,
       memory_node_id,
       SUM(single_pages_kb) / 1024 AS single_page_total_size_mb,
       SUM(multi_pages_kb) / 1024 AS multi_page_total_size_mb,
       SUM(awe_allocated_kb) / 1024 AS awe_allocated_size_MB
FROM sys.dm_os_memory_clerks (nolock)
WHERE memory_node_id < 64
GROUP BY memory_node_id, name
HAVING SUM(multi_pages_kb) > 0
ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) + SUM(awe_allocated_kb) DESC;




Note that in my case (the same as my customer's, but with a different order of magnitude), there is one memory clerk whose multipage allocator size is far greater than the others. Some quick math (2560 + 863 = 3423 MB) accounts for most of the memory overhead in this case. So, at this point we can claim that SQL Server does not suffer from any memory leak. This is normal behavior, 'by design'.

Finally, let's go back to the root cause at my customer's site: the memory clerk related to the TokenAndPermUserStore cache, which grows over time (approximately 6 GB of memory overhead). What exactly is the TokenAndPermUserStore cache?

Well, this is a security cache that maintains the different security token types (LoginToken, TokenPerm, UserToken, SecContextToken, and TokenAccessResult) generated when a user executes a query. The problem is that as this cache store grows, the time needed to search it for existing security entries to reuse increases, potentially slowing down queries, because access to this cache is controlled by only one thread (please refer to Microsoft KB 927396). In my demo we can also see a lot of different entries related to the TokenAndPermUserStore cache by using the following query:
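Before drilling into individual entries, the overall size of this clerk can be checked directly; a sketch using the SQL Server 2008 R2 column names:

```sql
-- Total single-page and multi-page memory held by the
-- security token cache (TokenAndPermUserStore).
SELECT name,
       single_pages_kb / 1024 AS single_page_MB,
       multi_pages_kb / 1024 AS multi_page_MB
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_TOKENPERM';
```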


WITH memory_cache_entries AS (
       SELECT
             name AS entry_name,
             [type],
             in_use_count,
             pages_allocated_count,
             CAST(entry_data AS XML) AS entry_data
       FROM sys.dm_os_memory_cache_entries (nolock)
       WHERE type = 'USERSTORE_TOKENPERM'
),
memory_cache_entries_details AS (
       SELECT
             entry_data.value('(/entry/@class)[1]', 'bigint') AS class,
             entry_data.value('(/entry/@subclass)[1]', 'int') AS subclass,
             entry_data.value('(/entry/@name)[1]', 'varchar(100)') AS token_name,
             pages_allocated_count,
             in_use_count
       FROM memory_cache_entries
)
SELECT
       class,
       subclass,
       token_name,
       COUNT(*) AS nb_entries
FROM memory_cache_entries_details
GROUP BY token_name, class, subclass
ORDER BY nb_entries DESC;





The situation above was very similar to my customer's issue, but we didn't find any related performance problems, only a large number of SQL Server logins and ad-hoc queries used by the application concerned.

The fix consisted of flushing entries from the TokenAndPermUserStore cache, according to the workaround provided by Microsoft in KB 927396. I hope this will only be a temporary solution while the application code is investigated, but that is another story!
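For reference, one of the options discussed in KB 927396 is to flush this cache manually; a sketch (run with care, since the evicted entries will be rebuilt on demand):

```sql
-- Flush only the security token cache; expect a short burst of
-- cache misses while entries are re-created.
DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');
```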

However, here is the moral of my story: check carefully how SQL Server uses memory before concluding that it suffers from a memory leak.

New Central Resource for Info on PeopleSoft Update Manager

PeopleSoft Technology Blog - Mon, 2014-10-27 18:39
You probably have already heard that PeopleSoft is moving to a new model for providing updates and maintenance to customers.  This new life cycle model enables customers to adopt changes incrementally and on their schedule.  It is much less disruptive, is cheaper and simpler, makes updates available sooner, and gives customers greater control.  An essential part of the new model is PeopleSoft Update Manager.  Now there is a new central resource for everything about PUM and PeopleSoft Update Images.  With this resource you can find information about the methodology and purpose of PUM, the update image home pages, and specific information about each image, troubleshooting links, best practices, and more.  Check it out!

How To Change The Priority Of Oracle Background Processes

Before you get in a huff, it can be done! You can change an Oracle Database background process priority through an instance parameter! I'm not saying it's a good idea, but it can be done.

In this post I explore how to make the change, just how far you can take it, and when you may want to consider changing an Oracle background process priority.

To get your adrenaline going, check out the instance parameter _high_priority_processes on one of your production Oracle systems running version 11 or greater. Here is an example using my OSM tool, ipx.sql, on my Oracle Database:
SQL> @ipx _highest_priority_processes
Database: prod40 27-OCT-14 02:22pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_highest_priority_processes = VKTM Highest Priority TRUE
Process Name Mask
Then at the Linux prompt, I did:
$ ps -eo pid,class,pri,nice,time,args | grep prod40
2879 TS 19 0 00:00:00 ora_pmon_prod40
2881 TS 19 0 00:00:01 ora_psp0_prod40
2883 RR 41 - 00:02:31 ora_vktm_prod40
2889 TS 19 0 00:00:01 ora_mman_prod40
2903 TS 19 0 00:00:00 ora_lgwr_prod40
2905 TS 19 0 00:00:01 ora_ckpt_prod40
2907 TS 19 0 00:00:00 ora_lg00_prod40
2911 TS 19 0 00:00:00 ora_lg01_prod40
Notice the "pri" for priority of the ora_vktm_prod40 process? It is set to 41 while all the rest of the Oracle background processes are set to the default of 19. Very cool, eh?

Surprised By What I Found
Surprised? Yes, surprised, because changing Oracle process priority is a Pandora's box. Just imagine if an Oracle server (i.e., foreground) process had its priority lowered just a little and then attempted to acquire a latch or a mutex. If it doesn't get the latch quickly, it might never get it!

From a user experience perspective, sometimes performance is really quick and other times the application just hangs.

This actually happened to a customer of mine years ago when the OS started reducing a process's priority after it consumed a certain amount of CPU. I learned that Oracle processes are programmed to expect an even process-priority playing field. If you try to "game" the situation, do so at your own risk... not Oracle's.

Then why did Oracle Corporation allow background process priority to be changed? And why did Oracle Corporation actually change a background process's priority?!

Doing A Little Exploration
It turns out there are a number of "priority" related underscore instance parameters! Depending on the Oracle version, my systems show 6, 8, and 13 "priority" parameters, so clearly Oracle is making changes! In all cases, the parameter I'm focusing on, "_high_priority_processes", exists.

In this posting, I'm going to focus on my Oracle Database 12c version system. While you may see something different in your environment, the theme will be the same.

While I'll be blogging about all four of the below parameters, in this posting my focus will be on the _high_priority_processes parameter. Below are the defaults on my system:
_high_priority_processes        LMS*
_highest_priority_processes VKTM
_os_sched_high_priority 1
_os_sched_highest_priority 1

Messing With The LGWR Background Processes
I'm not testing this on a RAC system, so I don't have an LMS background process. When I saw the "LMS*" I immediately thought, "regular expression." Hmmm... I wonder if I can change the LGWR background process priority. So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:36pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LGWR High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
5521 RR 41 - 00:00:00 ora_vktm_prod40
5539 TS 19 0 00:00:00 ora_dbw0_prod40
5541 RR 41 - 00:00:00 ora_lgwr_prod40
5545 TS 19 0 00:00:00 ora_ckpt_prod40
5547 TS 19 0 00:00:00 ora_lg00_prod40
5551 TS 19 0 00:00:00 ora_lg01_prod40

How Far Can I Take This?
At this point in my journey, my mind was ablaze! The log file sync wait event can be really difficult to deal with, especially when there is a CPU bottleneck. Hmmm... Perhaps I can increase the priority of all the log writer background processes?

So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:44pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LG* High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
5974 TS 19 0 00:00:00 ora_psp0_prod40
5976 RR 41 - 00:00:00 ora_vktm_prod40
5994 TS 19 0 00:00:00 ora_dbw0_prod40
5996 RR 41 - 00:00:00 ora_lgwr_prod40
6000 TS 19 0 00:00:00 ora_ckpt_prod40
6002 RR 41 - 00:00:00 ora_lg00_prod40
6008 RR 41 - 00:00:00 ora_lg01_prod40
6014 TS 19 0 00:00:00 ora_lreg_prod40

So now all the log writer background processes have a high priority. My hope would be that if there is an OS CPU bottleneck and the log writer background processes wanted more CPU, I now have the power to give that to them! Another tool in my performance tuning arsenal!
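The parameter change itself is not shown above; it would look something like the following sketch. Underscore parameters are unsupported, so treat this as test-system-only material and involve Oracle Support before touching production:

```sql
-- Hypothetical sketch of the change; 'LMS*|LG*' is the value shown
-- by ipx.sql above. An instance restart is needed to take effect.
alter system set "_high_priority_processes" = 'LMS*|LG*' scope=spfile;
shutdown immediate
startup
```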

Security Hole?
At this point, my exuberance began to turn into paranoia. I thought, "Perhaps I can increase the priority of an Oracle server process or perhaps any process." If so, that would be a major Oracle Database security hole.

With fingers trembling, I changed the instance parameters to match an Oracle server process and recycled the instance. Below shows the instance parameter change:

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:52pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|oracleprod40 Process Name Mask

Below is an operating system perspective using the ps command:

$ ps -eo pid,class,pri,nice,time,args | grep prod40
6360 TS 19 0 00:00:00 ora_psp0_prod40
6362 RR 41 - 00:00:00 ora_vktm_prod40
6366 TS 19 0 00:00:00 ora_gen0_prod40
6382 RR 41 - 00:00:00 ora_lgwr_prod40
6386 TS 19 0 00:00:00 ora_ckpt_prod40
6388 RR 41 - 00:00:00 ora_lg00_prod40
6394 RR 41 - 00:00:00 ora_lg01_prod40
6398 TS 19 0 00:00:00 ora_reco_prod40
6644 TS 19 0 00:00:00 oracleprod40...

OK, that didn't work so how about this?

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:55pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|*oracle* Process Name Mask

Let's see what happened at the OS.

$ ps -eo pid,class,pri,nice,time,args | grep prod40
6701 RR 41 - 00:00:00 ora_vktm_prod40
6705 RR 41 - 00:00:00 ora_gen0_prod40
6709 RR 41 - 00:00:00 ora_mman_prod40
6717 RR 41 - 00:00:00 ora_diag_prod40
6721 RR 41 - 00:00:00 ora_dbrm_prod40
6725 RR 41 - 00:00:00 ora_vkrm_prod40
6729 RR 41 - 00:00:00 ora_dia0_prod40
6733 RR 41 - 00:00:00 ora_dbw0_prod40
6927 RR 41 - 00:00:00 ora_p00m_prod40
6931 RR 41 - 00:00:00 ora_p00n_prod40
7122 TS 19 0 00:00:00 oracleprod40 ...
7124 RR 41 - 00:00:00 ora_qm02_prod40
7128 RR 41 - 00:00:00 ora_qm03_prod40

Oh Oh... That's not good! Now EVERY Oracle background process has a higher priority and my Oracle server process does not.

So my "*" wildcard caused all the Oracle background processes to be included. If all the processes have a high priority, then the log writer processes have no advantage over the others. And to make matters even worse, my goal of increasing the server process priority was not achieved.

However, this is actually very good news, because it appears this is not an Oracle Database security hole! It looks like the priority parameter is applied during instance startup to just the background processes. Since my server process was started after the instance, and is certainly not in the list of background processes, its priority was not affected. Good news for security; not such good news for a performance-optimizing fanatic such as myself.

Should I Ever Increase A Background Process Priority?
Now that we know how to increase an Oracle Database background process priority, when would we ever want to do this? The short answer is probably never. But the long answer is the classic, "it depends."

Let me give you an example. Suppose there is an OS CPU bottleneck and the log writer background processes are consuming lots of CPU while handling all the associated memory management when server processes issue commits. In this situation, performance may benefit from making it easier for the log writer processes to get CPU cycles. But don't even think about doing this unless there is a CPU bottleneck, and even then, be very, very careful.

In my next blog posting, I'll detail an experiment where I changed the log writer background processes' priority.

Thanks for reading!


Categories: DBA Blogs

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-10-27 13:22
Contributions by Angela Golla, Infogram Deputy Editor

Texas Delivers Citizen Services in Secure Cloud
The first US state to deliver a complete portfolio of citizen services in the Cloud, Texas is the leader in Open Government. With a "my government, my way" promise, Texas relies on Oracle to help 30+ agencies deliver services to nearly 30M citizens.  This is a brief video on how Texas accomplished this. 

MariaDB 10.0 Multi-source Replication at Percona Live UK 2014

Pythian Group - Mon, 2014-10-27 12:12

Percona Live UK is upon us, and I have the privilege of presenting a tutorial on setting up multi-source replication in MariaDB 10.0 on Nov 3, 2014.

If you’re joining me at PLUK14, we will go over setting up two different topologies that incorporate the multi-source features in MariaDB. The first is a mirrored topology:

Replication Topologies - Mirrored

Replication Topologies – Mirrored

This basically makes use of an existing DR environment by setting it up so that either master can be written to. Please be advised that this is normally not recommended, due to the complexity of making your application resolve conflicts and the data sync issues that can arise from writing to multiple masters.

The second topology is a basic fan-in topology:

Replication Topologies - Fan-in

Replication Topologies – Fan-in

This use case is more common, especially for unrelated datasets that can be gathered onto a single machine for reporting purposes or as part of a backup strategy. It was previously available in MySQL only through external tools such as Tungsten Replicator.
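As a taste of what the tutorial covers, a fan-in setup in MariaDB 10.0 boils down to one named CHANGE MASTER statement per upstream master. The hosts, credentials and connection names below are placeholders, not the tutorial's actual values:

```sql
-- Sketch: two named replication connections feeding one slave (fan-in).
CHANGE MASTER 'master1' TO
  MASTER_HOST = 'db1.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_USE_GTID = slave_pos;

CHANGE MASTER 'master2' TO
  MASTER_HOST = 'db2.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_USE_GTID = slave_pos;

START ALL SLAVES;
SHOW ALL SLAVES STATUS \G
```

Each connection keeps its own relay logs and position, which is what makes the fan-in topology possible on a single slave.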

As promised in the description of the tutorial, I am providing a Vagrantfile for the tutorial. This can be downloaded/cloned from my PLUK14 repository

The vagrant environment requires at least Vagrant 1.5 to make use of Vagrant Cloud.

I hope to see you next week!

Categories: DBA Blogs

Developing Your First Oracle Alta UI page with Oracle ADF Faces

Shay Shmeltzer - Mon, 2014-10-27 11:20

At Oracle OpenWorld this year, Oracle announced the new Oracle Alta UI, a set of UI guidelines that will help you create better looking and functioning applications. We use these guidelines to build all our modern cloud-based applications and products, and you can use them too today if you are on JDeveloper 12.1.3.

The Alta UI site is at

Take a look for example at one page from the master details pattern page:


You might be wondering how to go about creating such an Alta page layout.

Below is a quick video that shows you how to build such a page from scratch.

A few things you'll see during the demo:

  • Basic work with the panelGridLayout - for easier page structure
  • Working with the new tablet first page template 
  • Enabling selection on a listView component
  • Working with the circular status meter
  • The new AFAppNavbarButton style class
  •  Hot-swap usage to reduce page re-runs

One point to raise about this video is that it focuses more on getting the layout and look right than on the Alta way of designing an application flow and content. With a more complete Alta mind-set, you'll also identify fields that probably don't need to be shown (such as the employee_id), and you'll think more about "why is the user on this page and what will he want to do here?" That might mean adding things like a way to see all the employees in a department in a hierarchy viewer, rather than a form that scrolls one record at a time. There are more things you can do to this page to get even better functionality, but more on those in future blog entries...

Categories: Development

Fastest growing and rapidly declining job industry

Nilesh Jethwa - Mon, 2014-10-27 09:17

Data source :


Fastest growing job industry


Original Visualization

Most rapidly declining job industry


Rapidly declining jobs link