Feed aggregator

JDev 11g, Task Flows & ADF BC – the Always use Existing Transaction option – it's not what it seems

Chris Muir - Mon, 2011-05-16 18:46
JDev 11.1.1.5.0

Oracle's JDeveloper 11g introduces the powerful concept of task flows to the Application Development Framework (ADF). Task flows enable "Service Oriented Development" (akin to "Service Oriented Architecture") allowing developers to align web application development closely to the concept of business processes, rather than a disparate set of web pages strung loosely together by URLs.

Yet as the old saying goes, "with great power comes great responsibility", or alternatively, "the devil is in the detail". Developers need to have a good grasp of the task flow capabilities and options in order to not paint themselves into a corner. This is particularly true of the transaction and data control scope behavioural options provided by "bounded" task flows.

The transaction and data control scope behavioural options available to bounded task flows provide a sophisticated set of functionality for spawning and managing one or more transactions during an ADF user's session. Straight from the Fusion Developer's Guide the transaction options are:

• <No Controller Transaction>: The called bounded task flow does not participate in any transaction management.

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

• Use Existing Transaction If Possible: When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist.

• Always Begin New Transaction: A new transaction starts when the bounded task flow is entered, regardless of whether or not a transaction is in progress. The new transaction completes when the bounded task flow exits.

In recently discussing the task flow transaction options on the OTN Forums (with the kind assistance of Frank Nimphius), it's become apparent that the transaction options described in the Fusion Guide are written from the limited perspective of the ADF controller (ADFc). Why a limited perspective? Because the documentation doesn't consider how these transaction options are dealt with by the underlying business services layer – the controller makes no assumptions about the underlying layers; it is deliberately an abstraction that sits on top. As such, if we consider ADF Business Components (ADF BC), ADFc can interpret the task flow transaction options as it sees fit. The inference is that ADF BC can introduce subtle nuances in how the transaction options work as called by the controller.

The vanilla "Always Use Existing Transaction" option

The Fusion Guide is clear in the use of the task flow "Always Use Existing Transaction" option:

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

The inference here is that the task flow won't create its own transaction, but rather will attach itself to an existing transaction established by its calling task flow (let's refer to this as the "parent" task flow), or a "grandparent" task flow somewhere up the task flow call stack.

To test this let's demonstrate how ADFc enforces this option.

In our first example application we have an extremely simple ADF BC model of a single Entity Object (EO), single View Object (VO) and Application Module (AM), serving data from a table of Organisations in my local database:


From the ViewController side we have a single Bounded Task Flow (BTF) OrgTaskFlow1 comprised of a single page:


....where the single page displays a table of Organisations via the underlying ADF Business Components:


...and the transaction options of the BTF are set to Always Use Existing Transaction. By default the framework enforces that the data control scope must be Shared:


In order to call the BTF, from our Unbounded Task Flow (UTF) configured in the adfc-config.xml file, we have a simple Start.jspx page, which via a button invokes a Task Flow Call to the BTF OrgTaskFlow1:


On starting the application, running the Start page, selecting the button to navigate to the Task Flow Call, we immediately hit the following error:
oracle.adf.controller.activity.ActivityLogicException: ADFC-00006: Existing transaction is required when calling task flow '/WEB-INF/OrgTaskFlow1.xml#OrgTaskFlow1'.

Via this error we can see ADFc is enforcing at runtime that the OrgTaskFlow1 BTF is unable to run, as it requires its parent or grandparent task flow to have established a transaction on its behalf. With this enforcement we can (incorrectly?) conclude that Oracle's controller will never allow the BTF to run if a transaction hasn't already been established. However as you can probably guess, this post will demonstrate this isn't always the case.

A side note on transactions

Before showing how to create a transaction with the Always Use Existing Transaction option, a discussion on how we can identify transactions created via ADF BC is required.

Readers familiar with ADF Business Components will know that root Application Modules (AM) are responsible for the establishment of connections and transactional processing with the database. Ultimately the concept of transactions in the context of the ADF Controller is that of the underlying business services, and by inference, when ADF Business Components are used, it's the root Application Modules that provide this functionality.

It should also be noted that, by inference, the concepts of a transaction and a connection are one and the same, in the sense that a connection with the database is what allows you to support a transaction, and if you have multiple transactions you therefore have multiple connections. Simply put, you can't have one without the other.

Yet given it's the Application Module that provides the ability to create connections and transactions, how do we know when an AM actually creates a connection? Without knowing this, in our trials with the transaction options supported by Bounded Task Flows, unless ADFc explicitly throws an error we'll have trouble discerning what the ADF BC layer is actually doing underneath the task flow transaction options.

While external tools like Fusion Middleware Control will give you a good insight into this, the easiest mechanism is to extend the framework's ApplicationModuleImpl class with our own AppModuleImpl and override the create() and prepareSession() methods:
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class AppModuleImpl extends ApplicationModuleImpl {

    // Other generated methods

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = false");
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = false");
    }
}
Overriding the create() method allows us to see when the Application Module is not just instantiated, but ready to be used. This doesn't tell us when a transaction and connection are established with the database, but is useful in identifying situations where the framework creates a nested AM (which is useful for another discussion about task flows, stay tuned for another blog post).

The prepareSession() method is a chokepoint method the framework uses to set database session state when a connection is established with the database. As such overriding this method allows us to see when the AM does establish a new connection and transaction.
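As a purely optional extra (my own sketch, not something the demonstrations below rely on), you could also give the same AppModuleImpl a helper method that reports on demand whether the root AM currently holds a live database connection. It leans on the isConnected() method of the AM's transaction, so double-check that call against the oracle.jbo.Transaction javadoc for your JDeveloper release:

import oracle.jbo.server.ApplicationModuleImpl;

public class AppModuleImpl extends ApplicationModuleImpl {

    // create() and prepareSession() overrides as shown above ...

    // Illustrative helper only: true when this root AM currently holds an open
    // database connection, i.e. a transaction has genuinely been established.
    public boolean hasLiveConnection() {
        return isRoot() && getDBTransaction().isConnected();
    }
}

Exposing such a method on the AM's client interface would let you log the connection state from a backing bean at any point you choose, rather than waiting for the prepareSession() message to appear.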

Bending the "Always Use Existing Transaction" option to create a transaction

Now that we have a mechanism for seeing when transactions are established, let's show a scenario where the Always Use Existing Transaction option does create a new transaction.

In our previous example our Unbounded Task Flow called our OrgTaskFlow1 Bounded Task Flow directly. This time let's introduce an intermediate Bounded Task Flow called the PregnantTaskFlow. As such our UTF Start page now calls the PregnantTaskFlow:


The PregnantTaskFlow will set its transaction option to Always Begin New Transaction and an Isolated data control scope:


By doing this we are setting up a scenario where the parent task flow will establish a transaction, which will be used by the OrgTaskFlow1 later on. Next within the PregnantTaskFlow we include a single page to land on called Pregnant.jspx, which includes a simple button to then navigate to the OrgTaskFlow1 task flow via a Task Flow Call in the PregnantTaskFlow itself:


The Pregnant.jspx page is only necessary as it gives a useful landing page when the task flow is called, to see what the task flow has done with transactions before we call the OrgTaskFlow1 BTF.

The transaction options of the OrgTaskFlow1 remain the same, Always Use Existing Transaction and a Shared data control scope:


With the moving parts of our application established, if we now run our application starting with the Start page:


...clicking on the button we arrive on the Pregnant.jspx page within the PregnantTaskFlow BTF:


(Oops, looks like this picture has been lost... I'll attempt to restore this picture soon)

Remembering that our PregnantTaskFlow is responsible for establishing the transaction, we should see our Application Module create() and prepareSession() methods write their System.out.println messages to the console in the JDev log window:


Hmmm, interesting, the log window is bare, no sign of our messages? So our PregnantTaskFlow was set to create a new transaction, but no such transaction, or connection with the database for that matter, was established?

Here's the interesting point of our demonstration. If we then select the button in the Pregnant.jspx page which will navigate to the OrgTaskFlow1 task flow call activity in the PregnantTaskFlow, firstly we see in the browser our OrgList.jspx page:


According to our previous tests at the beginning of this post we may have expected the ADFC-00006 error "Existing transaction is required", but instead the page has rendered?

In addition if we look at our log window:


...we now see our System.out.println messages in the console, showing that the AM create() methods were called and a new connection was established to the database via the prepareSession() method being called too.

(Why are there 2 calls to create() for AppModuleImpl? The following blog post on root AM interaction with task flows explains all.)

The contradictory result here is that even though we set the Always Use Existing Transaction option for the OrgTaskFlow1 BTF, and expected the ADFC-00006 error, OrgTaskFlow1 did in fact establish a new transaction.

What's going on?

An easy but incorrect conclusion to make is that this is an ADF bug. However, if you think through how the ADF framework works with bindings to the underlying services layer, in our context ADF BC, this actually makes sense.

From the point of view of a task flow, there is no inherent, directly configured relationship between the task flow and the business services layer/ADF BC. For example, there is no option in the task flow properties to say which Data Control, mapping to an ADF BC Application Module, the task flow will use. The only point in the framework where the ADF view and controller layers touch the ADF BC side is through the pageDef binding files, which are used by the individual task flow activities (including pages and page fragments) as we navigate through the task flow, not by the task flow itself. As such, until the task flow hits an activity whose bindings indirectly call the ADF BC Application Module via a Data Control, the task flow has no way of actually establishing the transaction.

That's why in the demonstrations above I referred to the intermediate task flow as the "pregnant" task flow. This task flow knows it wants to establish a transaction with the underlying business service's Application Module through a binding layer Data Control; it's effectively pregnant, waiting for that event, but it can't deliver until one of its child activities exercises a pageDef file with a call to the business service (to take the analogy too far, you're in labour expecting your first child, you've rushed to the hospital, but you're told you'll have to wait as the midwife hasn't arrived yet ... you know at this point you're going to have this damned kid, but you've got to desperately wait until the midwife arrives ;-)

By chance in our example, the first activity in the PregnantTaskFlow that does have a pageDef file is the OrgList.jspx page that resides in the OrgTaskFlow1 task flow, called via a task flow call in the PregnantTaskFlow. So in this sense, even though OrgTaskFlow1 says it won't create a transaction, it in fact does.

Why does this matter?

At this point you might think this is all very interesting, but rather an academic exercise. Logically there's still only one transaction established for the combination of the PregnantTaskFlow and OrgTaskFlow1, regardless of where the transaction is actually established. So why does it matter?

Recently on the ADF Enterprise Methodology Group I started a discussion on building task flows for reuse. Of specific interest, I asked what are the most flexible data control scope and transaction options to pick such that we don't limit the reusability of our task flows. If we set the wrong options, such as Always Use Existing Transaction, errors like ADFC-00006 may make the task flow unreusable, or at least limit its reuse to specific scenarios.

The initial conclusion from the ADF EMG post was that only the Use Existing Transaction if Possible and Shared data control scope options should be used, as this combination will reuse an existing transaction if one is available from the calling task flow, or establish a new transaction if one isn't.

However, from the conclusion of this post we can see the Always Use Existing Transaction option is in fact more flexible than first thought, as long as at some point we wrap it in a task flow that starts a transaction, giving us another option when building reusable task flows.

Some caveats

A caveat, also shared by the next blog post on task flow transactions, is that both posts describe the transaction behaviours in the context of interaction with ADF Business Components. Readers should not assume that the same transaction behaviour will be exhibited by different underlying business services such as EJBs, POJOs or Web Services. For example, Web Services don't have the concept of transactions, so we can probably guess that there's no point using anything but the No Controller Transaction option ... however, again, you need to experiment with these alternatives yourself; don't base your conclusions on this post.

Further reading

If you've got this far, I highly recommend you follow up this post by reading my next blog post on root Application Modules and how the transaction options of task flows change their behaviour.

CP10 for Discoverer 10.1.2.3

Michael Armstrong-Smith - Mon, 2011-05-16 16:56
Just wanted to let you know that on April 18, 2011, Oracle released CP10 for Discoverer 10.1.2.3. You will find it on MetaLink as patch number 11674847. Compared to CP9, 10 bugs have been fixed.


Note: when you download the readme from My Oracle Support, you will see that since CP9 Oracle has placed the new bug fixes at the top of the list.

So far this cumulative patch has been released for the following platforms:
  • IBM AIX on POWER systems (64-bit)
  • Linux x86
  • Microsoft Windows 32-bit
  • Oracle Solaris on SPARC (64-bit)
If you are upgrading to CP10 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply the cumulative patch.

Note: please take a look at the comments posted below and if anyone has any experience of CP10, good or bad, please let me know.

alert.log appears not to be updated

Charles Schultz - Mon, 2011-05-16 15:09
After a few days of spinning my wheels and subjecting the poor recipients of oracle-l to multiple posts, I have identified an issue in Oracle code that I believe needs to be looked at.

First, some background.
We are running Oracle EE 11.1.0.7 on Solaris 10. We also have a job that occasionally bzips (compresses) the alert.log. The logic in the job is supposed to check whether the file is actively being written to before zapping it, but by pure chance (so it would seem), in this particular case the alert.log was still open by the database when the file was scorched. This led to the appearance of the alert.log not receiving any more updates from the database. We attempted to bounce the database, which had no discernible effect. I also changed the diagnostic_dest, which took us from slightly strange to absolutely bizarre, and which opens the door for the rest of this post.


What I found
After changing diagnostic_dest several times, posting on oracle-l, the Oracle Community forums and playing tag with an Oracle Support Analyst, and doing lots of truss commands against sqlplus, I started to focus on this result from truss:
access("./alert.log", F_OK)              = 0

Now, you may notice that this "access" command is saying that the file in question ("./alert.log") is legit. This caused no small amount of head-scratching. I got the same results no matter which directory I ran the commands from. In my system, I only had two files with this name, one in $ORACLE_HOME/dbs and one in $DIAG/trace. Neither were actively updated by the database. It was not clear to me, at first, that Oracle was finding one of these log files. Especially since it never did anything with it. I searched file descriptors in /proc/*/fd and found nothing. I even grepped keywords from all text files looking for strings that should show up in this particular alert.log.

For the life of me, I could not figure out what directory ./alert.log was in. When I compared to other databases, this same access always returned Err#2 ENOENT. So I knew this must be key, but not sure exactly how. On a whim, I decided to delete the alert.log in $ORACLE_HOME/dbs. Lo and behold, the problem seemed to go away magically.
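As a side note, and nothing to do with Oracle's internals, the ambiguity of a relative path like "./alert.log" is easy to reproduce for yourself. This throwaway Java sketch just shows that the same relative name resolves against whatever the current working directory of the process happens to be, which is exactly why the truss output alone couldn't tell me which file was being probed:

import java.io.File;
import java.io.IOException;

public class RelativePathDemo {
    public static void main(String[] args) throws IOException {
        // "./alert.log" is resolved against the process's working directory,
        // just like the access("./alert.log", F_OK) call seen in truss.
        File f = new File("./alert.log");
        System.out.println("working directory = " + System.getProperty("user.dir"));
        System.out.println("resolves to       = " + f.getCanonicalPath());
        System.out.println("exists            = " + f.exists());
    }
}

Run it from different directories and the "resolves to" line changes accordingly.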

The BUG
So here is the root problem, in my opinion. The Oracle code line is looking for $ORACLE_HOME/dbs/alert.log, but completely fails to write to the file if it is found. Instead, the branch simply exits out. How is that helpful?

In retrospect....
I believe when I changed diagnostic_dest to a non-existing directory, Oracle automatically created alert.log in $ORACLE_HOME/dbs. I guess I learned a few things from this. :) One can use KSDWRT to write messages to the alert.log; Dan Morgan's library (still hosted by PSOUG) shows this. I also learned a little more about truss and dtrace as I was researching this issue.
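For anyone who wants to try KSDWRT for themselves, here is a minimal JDBC sketch of my own (not taken from Dan Morgan's library). The connection details are placeholders, you need the Oracle JDBC driver on the classpath, and the account needs EXECUTE on SYS.DBMS_SYSTEM, which is not granted by default:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class AlertLogWriter {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - point these at your own database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
            // Destination 2 writes to the alert log (1 = trace file, 3 = both).
            try (CallableStatement cs =
                     conn.prepareCall("begin sys.dbms_system.ksdwrt(2, ?); end;")) {
                cs.setString(1, "Test message written via KSDWRT");
                cs.execute();
            }
        }
    }
}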

Now the hard part; convincing Oracle that this is a problem and needs to be corrected.

The JDE alliance with IBM gets stronger

Andrews Consulting - Mon, 2011-05-16 14:03
One of the more intriguing aspects of the IT industry is that businesses are often both great friends and bitter enemies at the same time. Oracle and IBM provide a great example. Oracle CEO Larry Ellison could hardly have been more vitriolic in his attacks on IBM’s hardware business at OpenWorld last September. Ellison used […]
Categories: APPS Blogs

Client wants to go for a DR test. The changes during the DR test should not reflect on Prod (meaning the changed data during the DR test should not reflect on primary)

Ayyappa Yelburgi - Sun, 2011-05-15 22:53
Possibility 1: Planned failover. Note: the primary database will be down until the DR test completes.
a. Take a cold/hot/RMAN backup of the primary database before the DR test.
b. Take a cold/hot/RMAN backup of the standby database before the DR test.
c. Shut down the primary database.
d. On the standby database fire the below command:
sql> alter database activate standby database;
e. Once the standby database is activated, execute the below command.

CP9 for Discoverer 10.1.2.3

Michael Armstrong-Smith - Thu, 2011-05-12 14:28
Just wanted to let you know that on January 11, 2011, Oracle released CP9 for Discoverer 10.1.2.3. You will find it on MetaLink as patch number 10233659. Compared to CP8, 6 bugs have been fixed.


Note: when you download the readme from My Oracle Support, you will see that from this release Oracle has started to place the new bug fixes at the top of the list.

So far this cumulative patch has been released for the following platforms:
  • HP-UX Itanium
  • HP-UX PA-RISC (64-bit)
  • IBM AIX on Power Systems (64-bit)
  • Linux x86
  • Microsoft Windows 32-bit
  • Oracle Solaris on SPARC (64-bit)
If you are upgrading to CP9 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply the cumulative patch.
This patch needs to be applied to all Oracle Homes, i.e. Infrastructure home as well as all related midtier homes.
Bug 4398431 - HANG WHEN RETRIEVING A CONNECTION FROM THE IMPLICIT CONNECTION CACHE

CP2 for Discoverer 11g released

Michael Armstrong-Smith - Thu, 2011-05-12 14:16
Just wanted to let you know that on January 11, 2011, Oracle released CP2 for Discoverer 11.1.1.2.0. This is applicable for both Discoverer Plus and Viewer. You will find it on My Oracle Support (formerly MetaLink) as patch number 10409451. There are 5 bugs fixed in this cumulative patch.

So far this cumulative patch has been released for the following 5 platforms:
  • Linux x86
  • Linux x86-64 bit
  • Microsoft Windows (32-bit)
  • Microsoft Windows x64 (64-bit)
  • Oracle Solaris on SPARC (64-bit)

Configuring Discoverer Plus to pre-populate login credentials

Michael Armstrong-Smith - Thu, 2011-05-12 13:20
Have you ever noticed how Discoverer does not remember your user name, database and EUL whenever you log out and wished there was a way to make it do so?

Well, there is a way but you need to add some parameters to your URL to make it do so.

Let's assume the following:
  • User Name is michael
  • Database is prod
  • EUL is eul5_us
All you need to do is to add switches to your URL and then save it in your favorites. The switches you need are:
  • For User Name  use us=
  • For Database use database=
  • For EUL use eul=
Putting this all together, I can use: http://myserver.com:7779/discoverer/plus?us=michael&database=prod&eul=eul5_us

If you are using E-Business Suite you can pre-populate this setting too by adding lm=applications, like this:

http://myserver.com:7779/discoverer/plus?lm=applications&us=michael&database=prod&eul=eul5_us
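If you generate these URLs programmatically, say to hand out per-user bookmarks, it is just query-string assembly. Here is a small illustrative sketch; the server, port and switch values are simply the examples from above, so substitute your own:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class DiscovererUrlBuilder {
    public static void main(String[] args) {
        String base = "http://myserver.com:7779/discoverer/plus";

        // Switch values from the example above - replace with your own.
        Map<String, String> switches = new LinkedHashMap<>();
        switches.put("lm", "applications");   // only needed for E-Business Suite
        switches.put("us", "michael");        // user name
        switches.put("database", "prod");     // database
        switches.put("eul", "eul5_us");       // End User Layer

        StringBuilder url = new StringBuilder(base);
        char sep = '?';
        for (Map.Entry<String, String> e : switches.entrySet()) {
            url.append(sep).append(e.getKey()).append('=')
               .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            sep = '&';
        }

        // Prints the same URL shown above.
        System.out.println(url);
    }
}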

Running Plus in IE8

Michael Armstrong-Smith - Thu, 2011-05-12 13:15
If you are experiencing issues running Discoverer Plus inside Microsoft IE8 the following comments may help.

So far, I have noticed that under no circumstances will Discoverer run in IE8 when it is configured to use JInitiator. If your company has enabled Discoverer to run primarily using JInitiator, try adding the following parameter to your URL: _jvm_name=sun

Your URL should look something like this: http://myserver.com:7778/discoverer/plus?_jvm_name=sun

Now all this assumes that your Discoverer administrator has enabled a more recent Sun Java than Discoverer comes installed with, namely 1.4.0_06

Should you find that you have this version installed please upgrade the server Java and try again.

OWB runtime repository

Klein Denkraam - Thu, 2011-05-12 07:25

I have been looking around the OWB runtime repository from time to time, mainly because the Control Center isn't always the speedy friend you need when things get tough. It shows a lot of white screen a lot of the time while waiting for results to show. So I made myself a view on the runtime repository. I have been meaning to share it for some time, but did not get around to it, until I recently saw a bit of much needed and long overdue OWB 11gR2 documentation for the runtime repository. I have not yet checked whether I have taken any shortcuts through the model, but if checking leads to improvements, I will publish them here. So here it is.


CREATE OR REPLACE VIEW VW_RT_AUDIT_INFO
  (EXECUTION_NAME, RETURN_RESULT, STARTTIME, ENDTIME, ELAPSE_TIME,
   ELAPSE_FORMAT, SELECTED, INSERTED, UPDATED, DELETED,
   DISCARDED, MERGED, CORRECTED, ERROR#, EXECUTION_AUDIT_STATUS,
   MESSAGE_SEVERITY, MESSAGE_TEXT, PARAMETER_NAME, VALUE, CREATION_DATE,
   OBJECT_NAME, OBJECT_LOCATION_NAME, TASK_NAME, TOP_LEVEL_EXECUTION_AUDIT_ID,
   EXECUTION_AUDIT_ID, PARENT_EXECUTION_AUDIT_ID)
AS
SELECT e.execution_name,
       e.return_result,
       e.created_on                    AS starttime,
       e.updated_on                    AS endtime,
       e.elapse_time,
       TO_CHAR (TRUNC (SYSDATE, 'DD') + e.elapse_time / (24 * 3600),
                'HH24:MI:SS')          AS elapse_format,
       DECODE (x.sel, NULL, 0, x.sel)  AS selected,
       DECODE (x.ins, NULL, 0, x.ins)  AS inserted,
       DECODE (x.upd, NULL, 0, x.upd)  AS updated,
       DECODE (x.del, NULL, 0, x.del)  AS deleted,
       DECODE (x.dis, NULL, 0, x.dis)  AS discarded,
       DECODE (x.mer, NULL, 0, x.mer)  AS merged,
       DECODE (x.cor, NULL, 0, x.cor)  AS corrected,
       DECODE (x.err, NULL, 0, x.err)  AS error#,
       e.execution_audit_status,
       m.message_severity,
       m.message_text,
       p.parameter_name,
       p.VALUE,
       m.created_on                    AS creation_date,
       e.object_name,
       e.object_location_name,
       e.task_name,
       e.top_level_execution_audit_id,
       e.execution_audit_id,
       e.parent_execution_audit_id
FROM   all_rt_audit_executions e
LEFT JOIN all_rt_audit_exec_messages m
       ON e.execution_audit_id = m.execution_audit_id
LEFT JOIN all_rt_audit_execution_params p
       ON e.execution_audit_id = p.execution_audit_id
       -- AND p.parameter_name LIKE '%SPEELR%'
       AND p.parameter_name NOT IN ('PROCEDURE_NAME', 'PURGE_GROUP', 'OPERATING_MODE',
                                    'MAX_NO_OF_ERRORS', 'AUDIT_LEVEL', 'BULK_SIZE',
                                    'COMMIT_FREQUENCY', 'ODB_STORE_UOID', 'PACKAGE_NAME')
LEFT JOIN (SELECT e.execution_audit_id,
                  SUM (a.number_errors)            AS err,
                  SUM (a.number_records_selected)  AS sel,
                  SUM (a.number_records_inserted)  AS ins,
                  SUM (a.number_records_updated)   AS upd,
                  SUM (a.number_records_deleted)   AS del,
                  SUM (a.number_records_discarded) AS dis,
                  SUM (a.number_records_merged)    AS mer,
                  SUM (a.number_records_corrected) AS cor
           FROM   all_rt_audit_executions e
           LEFT JOIN all_rt_audit_map_runs a
                  ON e.execution_audit_id = a.execution_audit_id
           GROUP BY e.execution_audit_id) x
       ON e.execution_audit_id = x.execution_audit_id

Note:

I have included error messages for each execution. This means rows will be duplicated when more than one error message is found for an execution.

Note 2:

I excluded the ‘default’ parameters for each execution because they too would lead to duplication of rows, and most parameters will have default values anyway. This way, custom parameter values used during an execution will still be shown.
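If you want to pull these audit figures into a script or monitoring tool rather than browse them interactively, the view can be queried like any other. A minimal JDBC sketch, where the connection details are placeholders for your own runtime repository owner:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RuntimeAuditReport {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - point these at the runtime repository schema.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//owbhost:1521/OWBREP", "owb_runtime", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT execution_name, return_result, elapse_format, inserted, updated, error# " +
                 "FROM vw_rt_audit_info WHERE starttime > SYSDATE - 1 ORDER BY starttime DESC");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%-40s %-10s %8s ins=%d upd=%d err=%d%n",
                    rs.getString(1), rs.getString(2), rs.getString(3),
                    rs.getLong(4), rs.getLong(5), rs.getLong(6));
            }
        }
    }
}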


Is ChromeBook nothing but Larry's old idea of Network Internet Computer?

Khanderao Kand - Thu, 2011-05-12 01:44
Google announced ChromeBooks at the Google I/O 2011 conference today with great fanfare. It is definitely an idea appropriate to the current Web-centric world, and it seems to be giving off the right vibes: it's slick, fast to start, connected to the web, secure, perhaps free from viruses, and consumes little battery. It is consistent with today's cloud computing. In other words, it is the perfect client device for a Cloud Computing world, or the new web. However, is it a real innovation? Larry had started the Network Computer concept and launched a separate company for it; maybe it was ahead of its time. Isn't ChromeBook a recycling of the same idea? Anyway, though it has an innovative subscription model for education and businesses, the cost is higher: $499 for WiFi and $599 for wireless. That is especially notable against the background of various efforts going on to introduce a slick Netbook at $100. Moreover, at this price, the ChromeBook would get sandwiched between tablets and PCs. Read my detailed blog at:
http://texploration.wordpress.com/
http://texploration.wordpress.com/2011/05/12/is-chromebook-recycled-idea-of-netcomputing-would-it-be-sandwitched-between-tablets-and-laptops/

Hadoop is building a good momentum...

Khanderao Kand - Tue, 2011-05-10 23:58
At EMC World this week, many new products based on Hadoop were launched.

EMC announced enterprise and community distributions of Apache Hadoop, as well as an appliance. This puts it in competition with Cloudera, which has very good traction in the Hadoop market. Moreover, Yahoo, which has been a pioneer in the original contribution of Hadoop and is a heavy user, is rumoured to be launching a Hadoop spin-off. It has also contributed Pig as a layer above Hadoop.

During the conference other products were also announced, like Brisk, which combines Hadoop with Cassandra as a node, and SnapReduce from SnapLogic. Overall, all of these are good indications of Hadoop's traction. A more detailed note is in my other blog, which is dedicated to emerging technologies and apps.

http://texploration.wordpress.com/2011/05/10/hadoop-based-products-launche/

ASM – It's not just for RAC anymore

alt.oracle - Tue, 2011-05-10 21:43

I'm super critical of Oracle when they screw stuff up or try to push technology in a direction that's bad for DBAs. You'll be hearing some rants about it in upcoming posts. But I also think that Oracle is a company that is actually good for the direction that technology is heading, unlike some companies whose names begin with "Micro" and end with "soft". Yes, they're a vast, stone-hearted corporation that would sell their grandmothers to raise their stock price. So is every other technology company – get used to it. But when they do something right, I'll be fair and sing their praises. Once every version or so, Oracle does something that really changes the game for DBAs. In version 8 it was RMAN. In 9i it was locally managed tablespaces. In 10g, it's definitely ASM - Automatic Storage Management. Yeah, I know this is kinda old news - ASM has been out for a good long while. What surprises me, though, is how many DBAs think that ASM is only useful for RAC architectures. "I don't run RAC, why would I need ASM?"

When ASM came out, it both intrigued and terrified me. The claim that it could produce I/O performance almost on par with raw devices without all the grief that comes with using them was exciting. But the idea of putting your production data on a completely new way of structuring files was pretty scary. I trust filesystems like UFS and ext2/3 (maybe even NTFS a little, but don't quote me) because they've stood the test of time. If there's one thing a DBA shouldn't screw around with, it's the way that the bits that represent your company's data are written to disk. I'm skeptical of any new way to store Oracle data on disk, since I'm the loser that has to recover the data if everything goes south. So I entered into my new relationship with ASM the way you should – with a whole lot of testing.

I originally moved to ASM out of sheer necessity. I was running RAC and using a woeful product called OCFS – Oracle Clustered Filesystem – to store the data. Performance was bad, weird crashes happened when there was heavy I/O contention, it wasn't pretty. Nice try, Oracle. It's cool that it was an open source project, but eventually it became clear that Oracle was pushing toward ASM as their clustered filesystem of choice. To make a long story short, we tested the crap out of it and ASM came through with flying colors. Performance was outstanding and the servers used a lot less CPU, since ASM bypasses that pesky little filesystem cache thing. In the end, we moved our single instance databases to ASM as well and saw similar results. It's true that, since you give Oracle control of how reads and writes are done, ASM is a very effective global filesystem for RAC. But the real strength of ASM is in the fact that it's a filesystem built specifically for Oracle databases. You don't use it to store all your stolen mp3 files (unless you're storing them as blobs in the database, wink), you use it for Oracle datafiles. You give Oracle control of some raw partitions and let it go. And it does a good job. Once you go ASM, you never go back.

I'm not going to do a sell job on the features of ASM, since I don't work for the sales department at Oracle. Really, the positives for ASM boil down to three key features. 1) It bypasses the filesystem cache, thus going through fewer layers in the read/write process. This increases performance in essentially the same way that raw devices do. 2) It works constantly to eliminate hot spots in your Oracle data. This is something that your typical filesystem doesn't do, since it takes an intimate knowledge of how the particular application (in this case Oracle) is going to use the blocks on disk. Typical filesystems are designed to work equally well with all sorts of applications, while ASM is specialized for Oracle. 3) It works (with Oracle) as a global filesystem. In clustered systems, your filesystem is crucial. It has to be "globally aware" that two processes from different machines might try to modify the same block of data at the same time. That means that global filesystems need to have a "traffic cop" layer of abstraction that prevents data integrity violations. Normally this layer would impact performance to a certain degree. But ASM gives control to Oracle, which has a streamlined set of rules about what process can access a certain block and prevents this performance loss.

So consider using ASM. Even if you don't run RAC, benefits #1 and #2 make it worth your while. Our DBA team has been using it religiously on both RAC and non-RAC systems for years without any problems.

Of course, we're talking about Oracle here, so leave it to them to take the wonderful thing that is ASM and screw it up. Next time I'll tell you how they did just that in version 11g.
Categories: DBA Blogs

"The Bridge": Day 3 (part 2)

Charles Schultz - Tue, 2011-05-10 08:53
I have received some pictures (not all, but most the important ones).

First, a recap of Day 2:
Our "realistic" picture evolved a little bit; Ahjay added some grouping tags ("WHERE", "WHAT") which we incorporated from there on out.


And here is what our OBJECT list finally looked like; complete with attributes and verbs:



Day 3
Hard at work.


After hashing things out in the morning, we finally had something akin to a prototype forming at our fingertips.

I really struggled with the overall complexity; I wanted simplicity. As a compromise, we worked very hard to make as much optional as possible, attempting to capitalize on pre-filled defaults and "quickfill" options, trying to use the technology and data that should already be available to reduce user interaction. For instance, the user might be presented with the most recent Products at the top of their list, or set a default QuickFill option (Previous SR, Profile or OCM) in their global Preferences. You will see, also, at the top left blue stickies for "Support Recommended" and "Product specific tips"; these are to be dynamically populated as you type and fill in information - the more information the user provides, the more relevant and specific the search becomes. I do not have any pictures, but on one of our white sheets we put in a meter as a gimmick to relate how more information upfront helps the user and the analyst focus on the problem (akin to the Password Strength Meter).

Near the end of the day, our final draft prototype was looking like this:


Again, you can see how "insta search" is being populated in the right-hand side, hopefully not too distracting, but also hopefully to be filled with information that would perhaps prevent an SR or guide a customer down the right path. Again, we are assuming huge improvements to Search. :)  This picture also demonstrates one possible "multi-screen" approach, trying to cram in as much as possible "above the fold". I argued for the "one-screen" approach, but compromised and suggested that a Preference be added to allow either one-page or multiple pages.

Another thing that might be slightly less obvious is that we are trying to keep the big picture in mind, or "tell a story" as Kelli put it. We are trying to describe a problem, which has a beginning (ie, the environment), a middle or body (the Description) and an ending (optional files, template questions, further elaboration, etc).

In the end, it still feels like way too much complexity to me. I noted earlier that I really want to talk to a human to route the issue (which obviates the whole "Category" mess). I do not mind filling in all the technical details, but what if you had a "Contact Analyst" button that, like Amazon and many other companies, auto-dialed you (the user) and attempted to get an IHUB person on the phone asap? Yes, I realize from Oracle's standpoint this is impractical. But does anyone else want that?

It will be interesting to see what comes out of this project. I think I am excited. The workshop itself was definitely very productive, eye-opening and an awesome experience that I am fully thankful to Oracle for.

Before we all parted ways, we did get a group photo. Say "Cheese!"

RAC, ASM and Linux Forum, May 18, 2011: EXADATA Production Customers updates

Alejandro Vargas - Sun, 2011-05-08 08:11

Exadata is changing the world of database performance; on this forum we will have updates from two Exadata production customers.

Turkcell, a telecom with 75 million customers, will be represented by Ferhat Sengonul, Senior OS Administrator, DBA and Data Warehouse Project Leader, who led the Exadata implementation and scale-out to their current three full Exadata racks with 24 database nodes.

Ferhat will present his experience with a very large data warehouse on Exadata, including online high-performance reporting, VLDB backup and recovery best practices, and upgrading from a traditional 11-rack data warehouse (1 Sun M9000 Sparc 7; 10 storage racks, 250 TB uncompressed) to a full Exadata rack and then to multiple racks. We will also hear about his project to consolidate all data warehouse databases on Exadata.

Golden Pages, the first consolidation-on-Exadata project implemented in Israel, will be presented by Shimi Nahum, Senior DBA and Exadata Project Leader. Shimi will tell us about the challenges the Exadata environment presented to him as a DBA and how he faced them, and the impact of using Oracle Exadata to consolidate multiple customer databases, including Siebel and ERP databases.

A practical dive into the technology will be presented by Oracle's Ophir Manor, who is responsible for the several POCs being run by different Israeli customers.

And finally, I will talk about experiences from the field, installing and implementing Exadata at different customers around the world.

Exadata is radically changing the rules and expectations a DBA can have of an Oracle Database; these first-hand experiences promise to make this one of the most interesting conferences in Israel this year.

The conference will be held on May 18 at the Sharon Conference Center 09 starting at 14:00

REGISTRATION: ILOUG RAC, ASM and Linux Forum Registration,

SCHEDULE:

14:00 – 14:30 Registration

14:30 – 14:40 Welcome

14:40 – 15:25 Shimi Nahum, Dapei Zahab, Senior Oracle DBA, responsible of the Exadata project.

The first production Oracle Exadata in Israel, challenges for the DBA, speedup impact of Exadata on the end Customer

15:25 – 16:15 Ferhat Sengönül, Senior OS and DBA Turkcell, Responsible for the DW project

A very large Data-Warehouse in Exadata, the migration process, backup and recovery strategies, scaling up from 1 Exadata rack to 3

16:15 – 16:30 Refreshments Break

16:30 – 17:15 Ofir Manor. Oracle Senior Sales Consultant and Exadata Expert.

Preparing the IT infrastructure for Exadata. Lifetime maintenance procedures.

17:15 – 17:45 Alejandro Vargas, Oracle Principal Support Consultant and EMEA Exadata Core Team Member.

Inside the Oracle Database Machine, secrets about the configuration, install and support procedures

17:45 – 18:15 Questions and Answers
Categories: DBA Blogs

IRM Desktop for 64-bit Systems

Simon Thorpe - Sat, 2011-05-07 05:14
Quick product update – the IRM Desktop now formally supports 64 bit Windows. Oracle has just released Oracle Fusion Middleware 11g R1 PS4 (11.1.1.5.0), which includes a fresh IRM build. Some of our customers have been using earlier IRM Desktops on 64 bit systems for various reasons, but there were some known restrictions. The PS4 release gives us a build that is formally certified for 64 bit. The new kit is available from the Oracle Tech Network and elsewhere.

APEX 4.1 Early Adopter Released

David Peake - Fri, 2011-05-06 16:51


As Joel announced here, the APEX 4.1 Early Adopter is now available at http://tryapexnow.com.


This release is not nearly as huge as APEX 4.0; however, our hyperactive development team has been cranking out lots of new features for you to enjoy.


To find out what you can try out for yourself, go to the Feature Description application. There are a number of other features that we are still working on, the most notable being Mobile Templates and Data Upload. Never fear, these will be coming to an APEX Early Adopter near you soon -- read: when we do the next major build of our EA.


Make sure you check out our new and improved Websheets interface. This feature is not 100% complete but we couldn't wait to show off the new look and feel!


Enjoy our latest offerings and be sure to provide feedback.

"The Bridge": Day 3 (part 1)

Charles Schultz - Thu, 2011-05-05 21:33
Still no pictures yet, so this is Part 1 of Day 3.

Day 3 was crunch time; by 5 pm we were aiming to have a working prototype. Because we expanded our scope (rather significantly) and spent so much time on tangential (but very important and sometimes relevant) details, the idea of getting a working prototype seemed rather dubious. But I think we did it. To a degree.

Picking up where we left off, we started to tackle the actual UI design itself. We had already done a lot of work on Search, so we needed to focus on the SR part of it. I came in a little earlier and drew up my own mock ups - they are horribly cluttered, but I personally think they are kinda cool. :) Basically, my mockup capitalizes on the vast similarities between Search and Creating an SR; providing keywords (ie, title), a product (and version) and you can start going to town. Category is a bit tricky, and I will cover it a little more in the last paragraph, but if you can nail down Category you can potentially narrow down your Search (called "Task Intent") rather dramatically and better yet, you are primed to punch in and route an SR. So why not do both in parallel? Maybe even on the same screen. You start filling in information, and in one pane you start seeing search results aggregated by facets (like what Advanced Search does now, but much more dynamic and insta-search), while at the same time your "Create SR" button lights up. And maybe even a "Post to Forums" button. I briefly argued for this approach, and I readily admitted that the huge downside is that the screen gets very cluttered very fast. I think we adopted a hybrid (eg, compromise), where the "Related articles" shows up insta-matically in a somewhat unobtrusive region floating off to the side.

We did a couple of usability tests; frankly, I think we need specific "Usability Test" training to learn how to do these better. :) I was not entirely satisfied with the particular way we approached this topic. But the good news is that we discovered many holes in our current prototype. Late in the day, we voted and started to tackle some of the more critical (or easy-to-fix) issues. Near the top of that list was whether or not to display the entire SR Creation process as one page or multiple pages. Again, some were very concerned about cluttering the screen and wanted "screen-sized" sections. I want everything on one page. In the end I posited that the user should have a preference for how he/she wants to view this process. We will see what happens with that.

Actually, this topic consumed a bit of time. After we green-lighted the idea of multiple pages, we got to work going through several permutations of possible screen layouts. Again, I found it ironic that we kept coming back to a design that is very similar to what we have today in MOS. Granted, we added a lot of behind-the-scenes features that auto-fill (and insta-search) as much as possible - that is not to be overlooked. But our final "look and feel" does not diverge much from the current design, in my opinion. In fact, if I count correctly, our final design may actually look more complicated. It is hard to say without having a real GUI to step through. Even though it looks more complicated, we are actively working to allow the user to input as little as possible to get the SR filed.

I have mentioned this previously, but it bears repeating. We were very much biased by the current implementation. In some ways, we spent a huge chunk of time trying to "fix" and patch current brokeness, instead of redesigning from the ground up. This is not to say we did not think out of the box (or at least try to).  And right now as I type this, I cannot think of one single "out of the box" new thing we pushed. Maybe I am simply tired and not remembering well.

Another point of discussion that came up, and in retrospect I wished we spent more time on, is the current super-criticality of "categories". Currently, SRs are routed based on the sub-category (or category if no sub exists). These are currently filtered by which product one chooses. In our experience, choosing the most appropriate sub/category is often tedious and seems like a relatively useless step from the user's point of view. We briefly talked about driving the sub/category off keywords in the Description field, and to be done in the "insta-search" way (you start typing, and the list of possible sub/categories to choose from grows smaller). But the bigger issue, in my opinion, is all about the routing in the first place. Oracle has placed a lot of emphasis on building automated logic to get the SR to a specialist team. I have a problem with that, at least how it is done currently. In my personal "Bleu Sky" vision (Day 1), I created a big easy "Create SR" button, with no requirements whatsoever. How the heck is that any good? Well, think about it, what happens? Rather, what if you changed the button to say "Chat with a human being"? By the end, we made comparisons to various other companies (ie, Amazon) that allow you to fill in call-back information, a computer actually calls you 1 second later, and then attempts to connect you to a live person. I love that concept!! As you can imagine, the managers and directors and support representatives at the meeting hated that idea. :) Yes, currently, it is hugely impractical - the IHub would be drowned to oblivion. Currently. But if we are thinking Utopian thoughts.... There are other ideas to simplify routing. For instance, drastically reduce the number of routes. How? Well.... we didn't talk about that, yet. :)
