Feed aggregator

Oracle OpenWorld 2011 Schedule Builder is Now Live

OCP Advisor - Fri, 2011-08-05 01:59
The Oracle OpenWorld 2011 Content Management team announced today that the conference Schedule Builder is now online. Registered attendees can log in to search hundreds of sessions, partner exhibits, and Oracle demos, find content of interest, enroll in sessions, and build their conference agendas.

A recommendation engine powered by Oracle Data Mining provides a list of the sessions, demos and exhibits most relevant to you. Once a session is added to your personal agenda, any session change information or communication is sent to you automatically. Enrollments also give the conference organizers an idea of a session's popularity: if the designated room is overbooked in Schedule Builder, the session is often moved to a location with more seating capacity. It is always a good idea to use Schedule Builder to enroll in sessions and get early access. To gain early access, attendees must be enrolled in the session and arrive at least ten minutes before the session start time; for the most popular sessions, enrolled attendees are seated first, followed by non-enrolled attendees.

If you are yet to register, please view the Content Catalog to see the list of session topics, demos and exhibits. Please add session #8042 to your agenda for a session on Oracle Certification at Moscone Center West, Room 3000, from 5:00pm to 6:00pm. One lucky attendee will win an Oracle Certification exam voucher as the audience prize!

Amazon Web Services is Ready for the Enterprise

Brent Martin - Thu, 2011-08-04 15:49

Amazon has been steadily moving toward making their web service offering ready for the enterprise. Over the last year or so they've received certification for Oracle database, they've broken down the barriers that would prevent PCI certification, and they've improved their pricing structure to make it more corporation-friendly.

Today they may have finally broken down the last barriers to large-scale enterprise adoption with the following announcements:

Virtual Private Cloud is now out of beta and allows you to "provision a private section of the AWS cloud where you can create a virtual network that you control, including selection of an IP address range, creation of subnets, and configuration of route tables and network gateways. You can connect your Amazon VPC directly to the Internet while also extending your corporate data center to the cloud using encrypted VPN connections."

But the announcement of Amazon Direct Connect might be my favorite. "Amazon Direct Connect is a new service that enables you to bypass the internet and deliver data to and from AWS via private network connection. With a private connection, you can reduce networking latency and costs, and provide a more consistent network experience while moving data between AWS and your datacenters. With pay-as-you-go pricing and no minimum commitment, you pay only for the network ports used and the data transferred out from AWS over the private connection."

There's also new functionality for AWS Identity and Access Management that lets you use your existing corporate identity management system to grant secure and direct access to AWS resources without creating a new AWS identity for those users.

I'm excited about the possibilities this opens up in terms of on demand computing capacity in the enterprise.

ORA-00020: maximum number of processes (n) exceeded in ASM Instance

Madan Mohan - Thu, 2011-08-04 08:02

Increase the PROCESSES parameter in the ASM parameter file

Processes = 25 + 15 * n, where n is the number of instances on the box using ASM for their storage.
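As a quick worked example of the formula (the value n=4 below is a hypothetical box running 4 database instances that use ASM, not a recommendation):

```shell
# Sizing the ASM PROCESSES parameter: processes = 25 + 15 * n,
# where n is the number of instances using ASM for storage.
n=4                                # assumption: 4 instances on this box
processes=$((25 + 15 * n))         # 25 + 15*4 = 85
echo "Set PROCESSES to at least $processes in the ASM parameter file"
```

Remember this is the base figure only; per the note below, add headroom for multiple ARCH or LGWR processes.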

NOTE: this formula is for a basic instance and does not account for:

* Multiple ARCH processes
* Multiple LGWR processes

Should ORA-00020 occur even after implementing this formula, add additional processes for any multiples of these background processes.

Refer to MOS Note 265633.1, "ASM Technical Best Practices", for more information.

Quick Start: Git for personal use

Vattekkat Babu - Tue, 2011-08-02 13:50

Problem: I needed a way to keep my config and research files under version control, accessible from the various machines I work with and updatable from anywhere.

  • I have an OpenBSD server account (no root though)
  • I don't want to run any additional daemon process or expose it via http
  • I can download and compile source if it can be installed as a non root account
  • Transport must be via SSH
  • I don't need GUI tools, speed should be reasonable

I tried Mercurial, Darcs, Bazaar and Git (Fossil is also a great tool, providing wiki, version control and ticket management). Darcs was the easiest, but for some reason extremely slow. I finally chose Git. Download and compile were easy; read the top 3 lines of the INSTALL file in the source distribution for the steps. The rest of this post explains how I set it up. Note that this may not be the best possible Git workflow; it is merely one that works for me. Note that I've installed Git in ~/software/git.
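For reference, the core of such a setup is a bare repository on the server, cloned over SSH from each workstation. The sketch below is a minimal, self-contained illustration (it uses a local path in place of an ssh://user@server/... URL, and hypothetical repo names), not the author's exact steps:

```shell
# Create the "server-side" bare repository. On a real server this would be:
#   ssh server 'git init --bare ~/repos/notes.git'
# and clones would use an SSH URL instead of the local path.
workdir=$(mktemp -d)
git init --bare "$workdir/notes.git"

# Clone it, add a file, and push back, exactly as you would over SSH.
git clone "$workdir/notes.git" "$workdir/notes" 2>/dev/null
cd "$workdir/notes"
git config user.email "you@example.com"   # required on a fresh account
git config user.name  "Your Name"
echo "config and research files" > README
git add README
git commit -q -m "initial import"
git push -q origin HEAD                   # publish to the bare repo

# The bare repo now holds the commit, ready to clone from any other machine.
git --git-dir="$workdir/notes.git" log --oneline
```

Since the transport is plain SSH, nothing beyond the git binaries needs to run on the server: no daemon, no HTTP exposure, no root.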

Task flows: Sayonara auto AM nesting in 11.1.2.0.0. Hello, ah, let's call it Bruce.

Chris Muir - Mon, 2011-08-01 23:58
In my post from last week I documented the changing behaviour of task flows and Application Module nesting between the and 11.1.1.X.0 series of ADF & JDeveloper. In that post I detected a distinct change in the underlying behaviour of how ADF works with ADF BC Application Modules with certain task flow options and I was concerned this would destroy the scalability of our applications. To understand those concerns and the rest of this post you need to read that post to comprehend where I was coming from.

One of the fortunate things about being a part of the Oracle ACE Director program is that, behind the scenes, we and Oracle staff are often chatting and helping each other out, which I'm incredibly appreciative of. In this case I must raise my hat to Steven Davelaar and John Stegeman for their out-of-hours assistance in one form or another.

Regarding my last post, John made a reasonable point that I was drawing my conclusions far too early; in fact I needed to test the complete task flow transaction life cycle to see if the behaviour of the task flows had changed in 11.1.2.0.0. In particular John's good memory led him back to this OTN forum post by Steve Muench, which stated:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.

Steve's point from late 2010, and the one that John was re-affirming, is that even though the underlying implementation may change, from the Bounded Task Flow (BTF) programmer's point of view everything can still work the same. And this is what I needed to check: looking at which AM methods are called is not enough, I needed to check the actual database connections and transaction behaviour.

From my post I was concerned that without auto AM nesting, a page comprised of several BTFs with separate AMs would spawn as many database connections, compromising the scalability of the application. From the logs I thought this was the case, as I could see two root AMs created and a separate (i.e. two) call to prepareSession() for each. My assumption was that this meant two connections were being raised with the database under 11.1.2.0.0, whereas under 11.1.1.X.0 there was only one database connection thanks to the auto AM nesting feature.

However a query on the v$session table in the database:

SELECT * FROM v$session WHERE username = 'HR';

...showed only 1 connection. So regardless of the fact that there are 2 root AMs instantiated under 11.1.2.0.0, they share connections (and therefore transactions too). In other words, while the end result is the same, the underlying implementation has changed.

I don't have a snazzy name for this new implementation vs the older auto AM nesting, so I figure we should call it Bruce to keep it simple (with apologies to Monty Python).

The only discrepancy between the implementations we can see is that prepareSession() and similar AM methods that deal with the connection or transaction (e.g. afterConnect(), afterCommit()) are now called on the secondary AM, as it's treated as a root AM rather than a nested AM. This was not the behaviour under 11.1.1.X.0, as nested AMs delegate that responsibility back to the root AM. This in turn may cause you a minor hiccup if you've overridden these methods in a framework extension of the AppModuleImpl, as they'll now be called across all your AMs, including those that used to be auto nested.

In returning to Steve Muench's point:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.

Via http://localhost:7101/dms/Spy I've verified this is the case with Bruce: where under 11.1.1.X.0 there used to be a single AM pool, under 11.1.2.0.0 there are now 2 AM pools & 1 defined connection. The end effect, and my primary concern from the previous blog post, is now moot; the scalability of database connections is maintained. Bruce is a winner.

The interesting change under 11.1.2.0.0 is the 1 AM pool vs many AM pools. Ignoring the case where at design time you create nested AMs under the 1 root AM, traditionally with the runtime auto nesting AM feature you'd still have 1 root AM pool. Now, for each root AM you'll end up with an associated AM pool. If your app is made up of hundreds of AMs that were nested and used the AM pool of their parent, you'll now end up with hundreds of AM pools. Whether this is a problem is hard to tell without load testing an application with this setup, but Steve Muench does comment in a follow-up to the OTN forum post: "The AM pool in and of itself is not additional overhead, no."

So potentially the midtier is less scalable, as it needs to maintain more pools and process them; to what degree we don't know. Yet, more importantly, we now have a relatively more flexible solution in terms of tuning the AM pools. Previously it was an all-or-nothing affair with 1 AM pool under the 11.1.1.X.0 auto AM nesting approach (again ignoring design-time nesting of AMs under the root AM); now with Bruce we have lots of fine-grained AM pools to tune. As such, each pool can be tuned individually: an AM pool for a little-used BTF can be set to consume fewer resources than one for a BTF that is hit frequently.

So, besides a couple of minor implementation changes (and if you find more please post a comment), it looks like Bruce is a winner.

Again thanks must go to John Stegeman for his assistance in working through these issues, and to Steven Davelaar for his offer of support.

Document Theft - IRM as a Last Line of Defense

Simon Thorpe - Mon, 2011-08-01 23:54

I haven't had much time to update the blog recently, but just time to post before going on holiday. Over recent weeks there have been numerous stories relating to document theft: the Pentagon commentary on systematic theft of thousands of documents from defense contractors, the reports of journalists hacking into not just phones but the email systems of public and private citizens, and the smug announcements by “cyber terrorists” that they’ve stolen files from various organisations.

The relevance of IRM is clear. Protect your perimeter, your applications, your file systems and repositories, of course, but protect your sensitive documents too. In the end, there are so many ways to gain digital possession of documents – but only one way to actually make use of them if they are protected by IRM. Anyone stealing a sealed document by whatever means has another substantial line of defense to overcome.

And that line of defense is designed to audit and authenticate access attempts as well as consider a number of other risk factors. It can also be rapidly reconfigured to deny access completely in the event of calamity – a single rule change can prevent all access from compromised user accounts or for whole classifications of information. The audit trail can also provide valuable clues as to the source of the attack.

In a cloudy world, where perimeters are of diminishing relevance, you need to apply controls to the assets themselves. And the scalable, manageable, intuitive way to achieve that control is Oracle IRM.

Log Directory Structure in Cluster Ready Services

Ayyappa Yelburgi - Sun, 2011-07-31 10:12
$ORA_CRS_HOME/crs/log ---> contains trace files for the CRS resources.
$ORA_CRS_HOME/crs/init ---> contains the trace files of the CRS daemon during startup. A good place to start with any CRS login problems.
$ORA_CRS_HOME/css/log ---> the Cluster Synchronization Services (CSS) logs record all actions such as reconfigurations, missed check-ins, and connects and disconnects from the client CSS listener.

What I learned at Quest Northeast – Part 1

Andrews Consulting - Fri, 2011-07-29 08:19
Quest’s annual Northeast conference, held last week at the Mohegan Sun in Connecticut, is always a great source of news, information and gossip.  This year was no exception.  There was no earth-shaking news about either JD Edwards or PeopleSoft, but the era when dramatic things happen to either of them is long past.  Instead, executives […]
Categories: APPS Blogs

JDev 11.1.2: Differences in Table Behavior

JHeadstart - Thu, 2011-07-28 23:03

While building a simple ADF application in JDev 11.1.2 I encountered some strange runtime behavior. I had built another application with the same behavior in exactly the same way in a previous JDev 11.1.1.x release, and there things worked smoothly. However, in JDev 11.1.2 the addRow and deleteRow functions didn't work as expected. In this post I will share my tough journey in finding out what was happening, and discuss the difference in behavior and the changes required to make it work in JDev 11.1.2.

When using the add row button (the green plus icon in the screen shot below) an error message for the required JobId dropdown list was shown immediately.

Some investigation using the Google Chrome Developer tools revealed that two requests instead of one are sent to the server, with apparently the second request causing the validation error to appear. (Although, the validation error is a client-side error, so still not sure how the second request can trigger the error.)

At first I thought this was caused by the partialSubmit property on the addRow button, which was set to true. Setting this property to false (or removing it) fixed this problem, but caused table rendering to hang. Weird, but I didn't investigate that further. I decided to build the same app in the previous JDev 11.1.1.x release, where it worked smoothly, and then opened this app in JDev 11.1.2. After the auto-migration, I ran the app, but much to my surprise the "Selection required" message didn't show up. I compared the page and page definition of both apps over and over again, and couldn't see any difference. Eventually, I started comparing all the files in both projects. This led me to the adf-config.xml file, located in the .adf directory under the root directory of the application, also visible under the Resources panel. In this file, one property existed in the JDev 11.1.2 application that was not present in the JDev 11.1.1.x version: changeEventPolicy="ppr".

By removing this property, things started to work again, and only one request was sent again.

Note that the really tricky thing here is that when you upgrade an application from a previous JDev release this property does not get added, but new JDev 11.1.2 apps will have this property set, causing a difference in behavior between a migrated app and a new app. At this point, my recommendation is to remove this property (or set it to none) for new JDev 11.1.2 apps. If memory serves me well, in some JDev 11.1.1.x version, dragging and dropping a data table onto a page added the changeEventPolicy="ppr" property to the iterator binding in the page def. In a later JDev 11.1.1.x release this property was gone again. It looks like it is back in a different form (this time in adf-config.xml), but still with undesirable implications.

The next error I hit was in the delete confirmation dialog, when trying to delete a row. Regardless of which button I pressed (Yes or No), I got validation errors on the underlying new row, and the dialog was not closed, nor was the row removed.

Now, I think this error has to do with the ADF Faces optimized JSF lifecycle. Since the table needs to be refreshed when the row is removed by clicking Yes in the dialog, the af:table component requires a partialTriggers property that refers to the af:dialog element. With this partialTriggers property in place, the optimized ADF JSF lifecycle causes the table items to be submitted (and validated) as well when clicking the Yes or No button in the dialog. Now, I am speculating here, but maybe this wasn't supposed to work at all in JDev 11.1.1.x, and it did only because of a bug in the optimized lifecycle code that has now been fixed in JDev 11.1.2...?

Anyway, what feels like the most logical and easy way for me to solve this issue, is setting the immediate property on the af:dialog to true, so the dialog listener method would skip the JSF validation phase. However, the af:dialog element does not have such a property (logged enhancement request).  Two other solutions remain:

  • No longer use the dialoglistener property, but instead define custom Yes/No buttons using the toolbar facet on the af:dialog. On these buttons I can set the immediate property to true, bypassing client-side and server-side validation.
  • Do not specify the af:dialog as a partial trigger on the af:table component; instead, add the table or a surrounding layout container element as a partial target programmatically after deleting the row. This is the solution I chose, since it only required one line of code in the managed bean class that deletes the row.

Categories: Development

Task Flows: Sayonara automated nesting of Application Modules in JDev 11.1.2.0.0

Chris Muir - Thu, 2011-07-28 22:59
-- Post edit --

Any readers of this post should also read the following follow-up post.

-- End post edit --

In a previous blog post I discussed the concept of automated nesting of Application Modules (AMs) when using Bounded Task Flows (BTFs) with a combination of the transactional options Always Begin New Transaction, Always Use Existing Transaction and Use Existing Transaction if Possible. The automated nesting of AMs is a very important feature: when you have a page made up of disparate regions containing task flows, and those regions have their own AMs, without the auto-nesting feature you end up with the page creating as many connections as there are independent region AMs. Thus your application is less scalable, and architecturally your application must be built in a different manner to avoid this issue in the first place.

This automated nesting of AMs is exhibited in the 11.1.1.X.0 series of JDeveloper and the ADF framework, including JDev 11.1.1.4.0 & 11.1.1.5.0. Unfortunately, either by error or design, this feature is gone in 11.1.2.0.0. Having checked the JDev 11.1.2.0.0 release notes and what's new notes, I can't see any mention of this change.

In turn I don't believe (please correct me if I'm wrong) there is a section in the Fusion Guide that specifically talks about the interactions of the task flow transaction options and Application Module creation. The documentation talks about one or the other, not both in combination. This is a somewhat frustrating documentation omission to me, because it means Oracle can change the behaviour without being held accountable to any documentation stating how it was meant to work in the first place. All I have is a number of separate posts and discussions with Oracle Product Managers describing the behaviour, which cannot be considered official.

In the rest of this post I'll demonstrate the changing behaviour between versions. If any readers find errors in the code, or even factual errors, please follow up with a comment on this blog. I'm always wary of misleading others; I write my blog to inform and educate, not to lead people down the garden path.

4 test applications

For this post you can download a single zip containing 4 different test applications to demonstrate the changing behaviour:

a) ByeByeAutoAMNestingJSPX111140
b) ByeByeAutoAMNestingJSPX111150
c) ByeByeAutoAMNestingJSPX111200
d) ByeByeAutoAMNestingFacelets111200

Why so many versions? My current client site is using 11.1.1.4.0, not 11.1.1.5.0, so I wanted to check there was consistent behaviour across the pre-11.1.2.0.0 releases. For this blog post I'll talk about the 11.1.1.5.0 version, but exactly the same behaviour is demonstrated under 11.1.1.4.0.

In addition I know that in the 11.1.2.0.0 release, because of the support for both JSPX & Facelets, the controllers have different implementations, so it is necessary to see if the issue differs between the two VDL implementations.

Besides the support for 4 different versions of JDev, and in the 11.1.2.0.0 release the two different VDLs, each application is constructed in exactly the same fashion, using a near identical Model and ViewController setup. The following sections describe what has been set up in both these projects across all the example applications.

The Model project

Each application has exactly the same ADF BC setup connecting to Oracle's standard HR schema. The Model project in each application includes EOs and VOs that map to the employees and locations tables in the HR schema. The tables and the data they store are inconsequential to this post; we simply need some Entity Objects (EOs) and View Objects (VOs) to expose through our Application Modules (AMs) to describe the AM nesting behaviour.

In the diagram above you can see the EOs and VOs. In addition I've defined 2 root level AMs EmployeesAppModule and LocationsAppModule. The EmployeesAppModule exposes the EmployeesView VO and the LocationsAppModule exposes the LocationsView.

To be clear, note I've defined these as separate root level AMs. So at the ADF BC level there is no nesting of the AMs defined. What we'll attempt to do is show the automatic nesting of AMs at the task flow level, or not, as the case might be.

In order to comprehend if the automated AM nesting is working at runtime, it's useful to add some logging to the ADF Business Components to show us such things as:

1) When our Application Modules are being created
2) If the AMs are created as root AMs or nested AMs

As such in each of the AMs we'll include the following logging code. The following example shows the EmployeesAppModuleImpl changes. Exactly the same would be written into the LocationsAppModuleImpl, with the exception of changing the log messages:
public class EmployeesAppModuleImpl extends ApplicationModuleImpl {
    // Other generated methods

    public static ADFLogger logger = ADFLogger.createADFLogger(EmployeesAppModuleImpl.class);

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            logger.info("EmployeesAppModuleImpl created as ROOT AM");
        else
            logger.info("EmployeesAppModuleImpl created as NESTED AM under " + this.getRootApplicationModule().getName());
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            logger.info("EmployeesAppModuleImpl prepareSession() called as ROOT AM");
        else
            logger.info("EmployeesAppModuleImpl prepareSession() called as NESTED AM under " + this.getRootApplicationModule().getName());
    }
}
View Controller project

Each application has a near identical ViewController project with the same combination of task flows, pages & fragments. The only exception being the applications, where the Facelets application doesn't use JSPX pages or fragments, but rather Facelets equivalents. This section describes the commonalities across all applications.

Each application essentially is made up of 3 parts:

a) A Start page
b) A Bounded Task Flow (BTF) named ParentTaskFlow comprised of a single page ParentPage
c) A BTF named ChildTaskFlow comprised of a single fragment ChildFragment

Start Page

1) The start page is designed to call the ParentTaskFlow through a task flow call.


1) The ParentTaskFlow is set to Always Begin New Transaction and Isolated data control scope.

2) The ParentTaskFlow page contains an af:table showing data from the EmployeesView of the EmployeesAppModuleDataControl.

3) The ParentTaskFlow page also contains a region that embeds the ChildTaskFlow


1) The ChildTaskFlow is set to Use Existing Transaction if Possible and Shared data control scope

2) The ChildFragment fragment contains an af:table showing data from the LocationsView of the LocationsAppModuleDataControl

The behaviour under 11.1.1.X.0

When we run our application, and navigate from the Start Page to the ParentTaskFlow BTF showing the ParentPage, in the browser we see a page showing data from both the Employees VO and Locations VO. Of more interest this is what we see in the logs:
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as ROOT AM
<EmployeesAppModuleImpl> <prepareSession> EmployeesAppModuleImpl prepareSession() called as ROOT AM
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as NESTED AM under EmployeesAppModule
<LocationsAppModuleImpl> <create> LocationsAppModuleImpl created as NESTED AM under EmployeesAppModule
Based on my previous blog post on investigating and explaining the automated Application Module nesting feature in the 11.1.1.X.0 JDeveloper series, this is what I believe is occurring.

As the ParentTaskFlow is designed to start a new transaction, when the first binding in the page exercises the EmployeesAppModule via the associated Data Control and the View Object embedded in the table, ADF instantiates the AM as the root AM and attaches it to the Data Control Frame.

The Data Control Frame exists for chained BTFs who are joining transactions. So in this example the Employees AM is the first AM to join the Data Control Frame and it becomes the root AM. A little oddly we see the EmployeesAppModuleImpl then created again and nested under a root instance of itself. I'm not really sure why this occurs, but it might just be some sort of algorithmic consistency required for the Data Control Frame. Maybe readers might have something to share on this point?

It's worth noting the significance of a root AM unlike a nested AM, is only the root AM connects to the database and manages the transactions through commits and rollbacks. Nested AMs delegate these responsibilities back to the root AM. This is why we can see the prepareSession() call for the EmployeesAppModule.

When the page processing gets to the bindings associated with the ChildTaskFlow, within the fragment of the ChildTaskFlow it discovers the LocationsAppModule via the associated Data Control and View Object embedded in the table. Now we must remember that the ChildTaskFlow has the Use Existing Transaction if Possible and Shared data control scope options set. From the Fusion Guide this transaction option says:

"Use Existing Transaction if possible - When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist."

In order for the BTF to be part of the same transaction, it must use the same database connection too. As such, regardless of the fact that in ADF Business Components we defined the two separate Application Modules as root AMs (which by definition implies they have separate transactions & database connections), it's expected that the task flow transaction options override this and force the second AM to nest under the first AM, as it wants to "participate in the existing transaction if it exists."

So to be clear, this is the exact behaviour we see in the logs of our 11.1.1.X.0 JDeveloper series of applications. The end result is our application takes out 1 connection with the database rather than 2.

The behaviour under 11.1.2.0.0

Under JDeveloper 11.1.2.0.0, regardless of whether we run the JSPX or Facelets application, with exactly the same combination of task flow elements and transaction options, this is what we see in the log:
<EmployeesAppModuleImpl> <create> EmployeesAppModuleImpl created as ROOT AM
<EmployeesAppModuleImpl> <prepareSession> EmployeesAppModuleImpl prepareSession() called as ROOT AM
<LocationsAppModuleImpl> <create> LocationsAppModuleImpl created as ROOT AM
<LocationsAppModuleImpl> <prepareSession> LocationsAppModuleImpl prepareSession() called as ROOT AM
As such, regardless that the Oracle task flow documentation for the latest release says that the second task flow with the Use Existing Transaction if Possible option should join the transaction of the calling BTF, it doesn't. As we can see from the logs, both AMs are now treated as root and prepare their own session/connection with the database.

The effect of this is our application now uses 2 connections rather than 1, and in turn the BTF transaction options don't appear to be working as prescribed.

Is it just a case of philosophy?

Maybe this is just a case of philosophy? Prior to JDev 11.1.2.0.0 the ADFc controller (which implements the task flows) was the winner in how the underlying ADF BC Application Modules were created and nested. Maybe in 11.1.2.0.0 Oracle has decided that no, in fact the ADFm model layer should control its own destiny?

Who knows; I don't see any documentation in the release notes or what's new notes to tell me this has changed. The task flow transaction option documentation is all I have. As such, if you've relied on this feature and your architecture is built on it, as ours is, you're now in a position where you can't upgrade your application to 11.1.2.0.0 without major rework.

To get clarification from Oracle I'll lodge an SR and will keep the blog up to date on any information discovered.

-- Post edit --

Any readers of this post should also read the following follow-up post.

-- End post edit --

MySQL Group By is a little too indulgent

Nigel Thomas - Thu, 2011-07-28 07:54
After 30 years of Oracle, I've found myself using MySQL recently. I came across a little thing that surprised me. I'm by no means the first to trip over this; I found this 2006 post from Peter Zaitsev on the same topic.

MySQL lets you write a GROUP BY statement that references columns that aren't in the GROUP BY and aren't aggregates. For example:

mysql> select table_name, column_name, count(*)
-> from information_schema.columns
-> where table_schema = 'information_schema'
-> group by table_name
-> limit 5;
| table_name | column_name | count(*) |
5 rows in set (0.07 sec)

A similar query from any version of Oracle would fail:

SQL> select table_name, column_name, count(*)
2 from dba_tab_columns
3 group by table_name;
select table_name, column_name, count(*)
ERROR at line 1:
ORA-00979: not a GROUP BY expression

In effect MySQL is doing the GROUP BY as requested, and giving you the first value it comes across for the un-aggregated columns (COLUMN_NAME in this example). A near equivalent Oracle query would be:

SQL> select table_name, min(column_name), count(*)
2 from dba_tab_columns
3* group by table_name

TABLE_NAME                     MIN(COLUMN_NAME)                 COUNT(*)
------------------------------ ------------------------------ ----------
ICOL$                          BO#                                    14

But in the Oracle case we are explicitly selecting MIN(column_name), whereas MySQL's laxer behaviour just picks the first column value it comes across (or rather, one dependent on the execution plan).
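If you'd rather have MySQL reject such queries the way Oracle does, the ONLY_FULL_GROUP_BY setting of MySQL's sql_mode does exactly that. A sketch:

```sql
-- Enable strict GROUP BY checking for this session.
SET sql_mode = 'ONLY_FULL_GROUP_BY';

-- The original query is now rejected (ER_WRONG_FIELD_WITH_GROUP)
-- instead of silently returning an arbitrary column_name per group:
select table_name, column_name, count(*)
from information_schema.columns
where table_schema = 'information_schema'
group by table_name;
```

With the mode enabled you are forced to either add column_name to the GROUP BY or wrap it in an aggregate, just as Oracle requires.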

So: when grouping in MySQL, make double certain that your SQL is really returning the number of rows you expected. In our example it is possible that the intention was actually the very different:

mysql> select table_name, column_name, count(*)
-> from information_schema.columns
-> where table_schema = 'information_schema'
-> group by table_name, column_name
-> limit 20;
| table_name | column_name | count(*) |
20 rows in set (0.06 sec)

Happy debugging everyone!

Oracle Unified Directory Webcast Q&A Results Posted

Mark Wilcox - Thu, 2011-07-28 07:10
We have posted the answers to the questions from the Q&A from the OUD introduction webcast.

Debug mode for MOS

Charles Schultz - Wed, 2011-07-27 08:06
Had an SR in which I learned about a debug mode for the Flash version of MOS (tried it in the HTML version, no go *grin*). Hold down the Control key and click the My Oracle Support logo in the upper left-hand corner:

Here is a short video, using Oracle's recommendation of CamStudio:

"The Year of the ADF developer" at Oracle Open World 2011

Chris Muir - Wed, 2011-07-27 07:58
What's one of the worst things about attending Oracle Open World? From my point of view it's the huge amount of marketing. Booooorrrring. I'm a developer, I want to hear technical stuff, not sales talk!!

For ADF developers attending OOW in 2011 this is all set to change. Not only has Oracle lined up a number of ADF presentations during the mainstream conference, but the ADF Enterprise Methodology Group (ADF EMG) has a whole day of sessions on the user group Sunday October 2nd!

Think about it. That's a mini ADF conference just for ADF programmers! Even better, it will be hosted by ADF experts from around the world sharing their day-to-day ADF experiences with you - not in one brief 1hr session, but in 6 sessions in total. That's a lot of ADF content, at no extra cost on top of your OOW ticket.

So I officially declare OOW'11 "The Year of the ADF developer".

Who have we got lined up for you? I'm glad you asked. We have such A1 ADF presenters as:

* Sten Vesterli - Oracle ACE Director, author of the latest ADF book "Oracle ADF Enterprise Application Development - Made Simple" and best speaker at the 2010 ODTUG Kscope conference.

* Frank Nimphius - Oracle Corp's own superstar ADF product manager, who produces near-1000 blog posts a day, the ADF code harvests, and articles in Oracle Magazine, and is a top contributor to the OTN forums, helping others write successful ADF applications.

* Maiko Rocha - part of Oracle Corp's own WebCenter A-Team who solves some of the most complex and challenging issues Oracle customers throw at ADF and WebCenter.

* Andrejus Baranovskis - ADF blogging wiz whose detailed posts on ADF architecture & best practices have shown many an ADF novice how to put together a working, optimised application using a huge range of ADF features.

* Wilfred van der Deijl - the author of potentially the most important ADF plug-in, OraFormsFaces, which gives you the ability to integrate Oracle Forms & ADF into the same running web pages.

* Steven Davelaar - one of the key brains behind Oracle's JHeadstart, and a well-known ADF presenter who shows how to push ADF to the extreme for productive development.

* Lucas Jellema - the Fusion Middleware blogging powerhouse from AMIS in the Netherlands, showing how to solve just about any problem in the ADF and FMW space.

Excited? You should be.

But more importantly what are they presenting?

- 09:00 - Sten - Session 32460 - Oracle ADF Enterprise Methodology Group: Starting an Enterprise Oracle ADF Project

- 10:15 - Frank & Maiko - Session 32480 - Oracle ADF Enterprise Methodology Group: Learn Oracle ADF Task Flows in Only 60 Minutes

- 11:30 - Andrejus - Session 32481 - Oracle ADF Enterprise Methodology Group: A+-Quality Oracle ADF Code

- 12:45 - Wilfred - Session 32500 - Oracle ADF Enterprise Methodology Group: Transitioning from Oracle Forms to Oracle ADF

- 14:00 - Steven - Session 32501 - Oracle ADF Enterprise Methodology Group: Empower Multitasking with an Oracle ADF UI Powerhouse

- 15:15 - Lucas - Session 32502 - Oracle ADF Enterprise Methodology Group: Gold Nuggets in Oracle ADF Faces

All sessions will be held on Sunday October 2nd, so if you traditionally attend only the main part of the conference, make sure you turn up a day early.

All sessions are in Moscone West room 2000, though remember to check on the day in case the sessions have been moved.

I hope you’re as excited as we are about the ADF EMG sessions at Oracle Open World 2011. We really hope you can attend and spread the word about what we’ve got going this year. Remember, the ADF EMG is only as good as its members’ participation – it’s your group.

(Thanks must go to Bambi Price and the APOUC for giving us the room to hold these presentations at OOW'11).

Managing Chartfields and Trees Across PeopleSoft and Hyperion

Brent Martin - Tue, 2011-07-26 04:07

If you’re implementing Hyperion applications to complement your PeopleSoft Financials application, one decision you’ll have to make relatively early is which tool to use to manage your core dimensions and their associated hierarchies.  Here are the options:

* Native functionality
* Hyperion EPMA
* Hyperion Data Relationship Management (DRM)

So which one is the right choice?  Based on my research and discussions with Christopher Dwight, a member of Oracle’s Master Data Management practice, here’s what I have learned:

The native functionality basically means you’ll maintain your dimensions in each application separately.  So if you want to add a department, you’ll have to add it to PeopleSoft, then Hyperion Financial Management, then Planning separately.

Hyperion EPMA provides a robust, single point of administration for EPM applications.  It allows you to create a dimension library which allows several EPM dimensions to be centrally stored and re-used across multiple EPM applications.  Basic dimension editing capabilities are provided.  Individual dimension elements ("nodes" or "members") can be flagged for use within a specific application, supporting slightly different application requirements while promoting dimension re-use.  Although this feature has potential, each member must be individually flagged, limiting the usability for large dimensions.  EPMA is intended to support only Hyperion EPM applications, and to be utilized by system administrators, not the typical end user.

DRM is different in that it was conceived from the start as an application-agnostic enterprise dimension management platform, not beholden to the Hyperion EPM applications alone.  As such, DRM can be deployed to support financial metadata and dimensions in a myriad of systems, ranging from PeopleSoft to GEAC to SAP to Cognos to Teradata to Hyperion and many more.  It was also designed to support not only system administrators, but also to allow business users to become direct contributors to the dimension management process.

Moving OVD 11g Test to Production Configurations

Mark Wilcox - Mon, 2011-07-25 03:43
Just back from vacation - during which we launched our new Oracle Unified Directory (OUD). I'll be spending a lot of time writing about that since it's a new product. But here's a useful 11g OVD piece of information. If you need to migrate test-to-production configurations on 11g OVD and you apply the latest patchset (aka Patchset 4), there are new Movement Scripts that are particularly useful for off-line migrations. For an off-line test-to-production migration of OVD, customers can use the Movement Scripts to:
  1. Create a configuration archive of OVD instance using 'copyConfig' script.
  2. Extract the move plan using 'extractMovePlan' script & edit the move plan appropriately.
  3. Copy the configuration archive & move plan to Production server(s) & execute 'pasteConfig' script.


Configuring FTP on Exadata

Alejandro Vargas - Sun, 2011-07-24 23:30

Exadata is installed with the minimum set of rpms required to make it work as a database server.
In many cases you will need to install additional rpms yourself to enable specific functions, like FTP.

Exadata is installed with either Oracle Enterprise Linux or Solaris Express. These instructions match the Linux distribution, and can be used on any RH-compatible Linux, not only OEL on Exadata.

You can find the rpms on the Oracle Enterprise Linux Distribution Disk, downloadable from edelivery.oracle.com.

Install the Following rpms:

[root@exand02 rpms]# ls
ftp-0.17-35.el5.x86_64.rpm pam-rpms
lftp-3.7.11-4.el5.x86_64.rpm tftp-server-0.49-2.0.1.x86_64.rpm

The Command to Install

[root@exand02 rpms]# rpm -Uivh vsftpd-2.0.5-16.el5_4.1.x86_64.rpm ftp-0.17-35.el5.x86_64.rpm lftp-3.7.11-4.el5.x86_64.rpm

Start Service vsftpd

[root@exand02 rpms]# service vsftpd start
Starting vsftpd for vsftpd: [ OK ]
[root@exand02 rpms]# service vsftpd status
vsftpd (pid 9274) is running...

Configure Automatic vsftp Start

[root@exand02 rpms]# chkconfig vsftpd on

[root@exand02 rpms]# chkconfig --list | grep vsftpd
vsftpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

echo "service vsftpd start" >> /etc/rc.local

[root@exand02 rpms]# tail -2 /etc/rc.local
########### END DO NOT REMOVE Added by Oracle Exadata ###########
service vsftpd start

Edit /etc/vsftpd.conf

Set the following parameters on vsftpd.conf

#anonymous_enable=YES (changed to NO to allow Exadata users to ftp)
anonymous_enable=NO

#userlist_enable=YES (changed to NO to allow Exadata users to ftp)
userlist_enable=NO

Test

[root@exand02 vsftpd]# ftp exand02

Connected to exand02 (
220 (vsFTPd 2.0.5)
Name (exand02:root): oracle
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.

ftp> pwd
257 "/home/oracle"

ftp> ls
227 Entering Passive Mode (10,25,104,130,85,192)
150 Here comes the directory listing.
drwxr-xr-x 3 1001 500 4096 May 20 19:47 local
drwxr----- 3 1001 500 4096 May 03 12:20 oradiag_oracle
-rw-r--r-- 1 1001 500 1020 Jun 01 14:41 ~oraclec
226 Directory send OK.

ftp> bye
221 Goodbye.
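The vsftpd.conf edits above can also be scripted rather than made by hand. Below is a minimal sed sketch, demonstrated on a scratch copy of the file; the scratch path and the stand-in file contents are illustrative assumptions, and on a real Exadata node you would point CONF at /etc/vsftpd.conf instead:

```shell
# Work on a scratch copy; on a real system set CONF=/etc/vsftpd.conf.
CONF=/tmp/vsftpd.conf.demo

# Stand-in for the stock file (hypothetical minimal contents).
printf 'anonymous_enable=YES\nuserlist_enable=YES\nlocal_enable=YES\n' > "$CONF"

# Keep a .bak backup, then flip exactly these two keys from YES to NO.
sed -i.bak \
    -e 's/^anonymous_enable=YES/anonymous_enable=NO/' \
    -e 's/^userlist_enable=YES/userlist_enable=NO/' \
    "$CONF"

# Verify: both keys should now read NO, everything else untouched.
grep -E '^(anonymous|userlist)_enable=' "$CONF"
```

Remember to restart the service afterwards (service vsftpd restart) so the new settings take effect.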

Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator