Feed aggregator

How long did Oracle materialized view refresh run?

Ittichai Chammavanijakul - Mon, 2013-01-21 09:38

The LAST_REFRESH_DATE column of DBA_MVIEWS, or the LAST_REFRESH column of DBA_MVIEW_REFRESH_TIMES, indicates the refresh start time. But what if we’d like to find out how long the refresh of a materialized view actually takes? We can query DBA_MVIEW_ANALYSIS.

For a Complete Refresh, the refresh duration is in the FULLREFRESHTIM column of DBA_MVIEW_ANALYSIS. For a Fast Refresh, it is in the INCREFRESHTIM column.

Both values are in seconds.

SELECT mview_name, last_refresh_date, fullrefreshtim, increfreshtim
FROM dba_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME               LAST_REFRESH_DATE      FULLREFRESHTIM INCREFRESHTIM
------------------------ ---------------------- -------------- -------------
MV_CHANGE_HISTORY        07-JAN-13 04.36.58 PM               0            36
MV_ITEM_HISTORY          07-JAN-13 04.36.58 PM               0             9

This shows that the most recent refreshes of MV_CHANGE_HISTORY and MV_ITEM_HISTORY were fast refreshes taking 36 and 9 seconds, respectively.

Putting it all in one query to calculate and display the end time:

SELECT mview_name,
       last_refresh_date "START_TIME",
       CASE
          WHEN fullrefreshtim <> 0 THEN
             last_refresh_date + fullrefreshtim/60/60/24
          WHEN increfreshtim <> 0 THEN
             last_refresh_date + increfreshtim/60/60/24
          ELSE last_refresh_date
       END "END_TIME",
       fullrefreshtim,
       increfreshtim
FROM all_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME              START_TIME             END_TIME               FULLREFRESHTIM INCREFRESHTIM
----------------------- ---------------------- ---------------------- -------------- -------------
MV_CHANGE_HISTORY       07-JAN-13 04.36.58 PM  07-JAN-13 04.37.34 PM               0            36
MV_ITEM_HISTORY         07-JAN-13 04.36.58 PM  07-JAN-13 04.37.07 PM               0             9

Reference: How To Calculate MVIEW Refresh Duration? What Does DBA_MVIEWS.LAST_REFRESH_DATE and DBA_MVIEW_REFRESH_TIMES.LAST_REFRESH Indicate? [ID 1513554.1]

Categories: DBA Blogs

Geek quotient

Andrew Clarke - Sun, 2013-01-20 16:57
I only scored 9/10 on the 'How big a David Bowie fan are you?' quiz. And I scored 20/20 on the 'can you tell Arial from Helvetica?' quiz. But I only scored 32.1032% on the Geek Test. So I still have some way to go.

Changing Your PeopleSoft Database Platform

Brent Martin - Sun, 2013-01-20 01:43

There are several reasons why you might decide to migrate PeopleSoft to a new database platform.  It could be that you need to move to a more scalable platform.  Or you may be looking to reduce your annual license costs and want to switch to a more cost-effective platform.  Whatever the reason, you can certainly take advantage of PeopleSoft's multiple-database support and change your database platform.  This article will give you some ideas about how to plan the effort.

One of the first things to consider is whether you want to upgrade to the latest version of PeopleTools. This may be required, especially if you want to deploy the latest version on the database platform you’re migrating to. Doing the upgrade in advance makes the actual replatforming easier, but it requires its own testing and deployment cycle, and depending on the age of your current database platform it might be impossible. In that case you will have to do the PeopleTools upgrade along with the database replatforming.

If you’re upgrading PeopleTools as part of the effort, decide whether you want to introduce the new PeopleTools look and feel and roll out new PeopleTools enhancements to your users. Review the release notes to get a good list of new features and enhancements, and carefully choose what will be deployed as part of your scope.

You’ll probably need to purchase new hardware to run your new database. This may also be a good time to do a larger hardware refresh on your web/app/batch tiers. If you’re adding new hardware, be sure you size it to run the new database according to your hardware vendor’s recommendations. They all have PeopleSoft-specific sizing questionnaires and will be happy to assign an engineer to assist you with your needs. Also give yourself adequate time in your project plan to procure the hardware and have it installed and configured.

Minimize non-technical changes. This isn’t the best time to change business processes or implement new functionality. It’s not even the best time to de-customize. If you have to do any of these, plan for the additional testing and change management (training/communication) that will be required.

One of the first things you should do to plan out this effort is to get a good list of the objects that will need to be updated as part of the replatforming effort.  We can assume that the delivered code will work on whatever supported database platform you want to migrate to, so we can focus much of our level of effort analysis on custom and customized code.

Processes, Reports and Queries

Get a list of all processes and reports from the process definition tables. Use the process scheduler tables to flag which processes have been executed in the last two years or so. This will serve as your active process list and will define the in-scope processes for this effort. Any processes, reports, and queries discussed should be filtered against this active process list to avoid spending time on unused or unneeded processes.

To capture which nVision and Crystal reports are being run you’ll need to join PSPRCSRQST to PSPRCSPARAM and pick out the report name from the command line stored in the ORIGPARMLIST field.
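As a sketch, the join might look like the following; the table and column names follow the article and standard process scheduler records, and the process-type values and 24-month window are illustrative assumptions, so verify everything against your PeopleTools release:

```sql
-- List nVision and Crystal runs together with the command line
-- (ORIGPARMLIST) that contains the report name.
SELECT r.prcsinstance,
       r.prcsname,
       p.origparmlist
  FROM psprcsrqst  r
  JOIN psprcsparam p
    ON p.prcsinstance = r.prcsinstance
 WHERE r.prcstype IN ('nVision-Report', 'Crystal')
   AND r.rqstdttm  > ADD_MONTHS(SYSDATE, -24);
```

The report name then still has to be picked out of the ORIGPARMLIST string, since it is stored as part of the command line rather than in a column of its own.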

PeopleCode Analysis

Extract the custom PeopleCode from PSPCMPROG to a flat file using DecodePeopleCode.sqr. Run another program to extract all SQLEXEC statements, and execute each one against the target database platform. Be sure to flag any App Engines as used in your process list if you find active Application Engine PeopleCode.

Views Analysis

Analyzing views is a bit more straightforward. Use App Designer or Data Mover to build all views on the target database platform, and identify which ones error out during the build.

SQL Object Analysis

Extract the custom SQL from PSSQLTEXTDEFN into a flat file and search it for keywords that aren’t going to work on your new platform. For example, if you’re migrating from SQL Server to Oracle you might look for keywords like getdate, nolock, datediff, dateadd, day, month, year, str, left, right, +, .., *=, datepart, isnull, convert, select top, len, inner, outer, .dbo., xact_abort, implicit, round.

This approach still requires a human touch. Some of the SQL flagged for modification may be fine, because the keywords may be valid PeopleSoft meta-SQL functions, which are prefixed with “%”, e.g. %DateAdd.
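A first pass over the exported flat files can be as simple as a grep. This is just a sketch: the file layout (one SQL object per file under sqltext/) is an assumption, and the keyword list is the article’s, trimmed of short words like day and left that would flag nearly everything:

```shell
# Flag exported SQL definitions that contain SQL Server-specific syntax.
# -E: extended regex, -i: case-insensitive, -l: print file names only.
grep -Eil 'getdate|nolock|datediff|dateadd|datepart|xact_abort|select top|\.dbo\.' sqltext/*.sql \
  | sort > flagged_sql_objects.txt
```

Each flagged file still needs the human review described above, since %DateAdd and friends are legitimate PeopleSoft meta-SQL.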

Also keep in mind that different database platforms have different conventions for outer joins, subquery joins, etc. This type of syntax difference is very… Read More...

All Software Development is Schema Management

Kenneth Downs - Wed, 2013-01-16 20:36
Before you automatically disagree, stop a bit and think about it.  Can you think of any code you have ever written that did not handle data of some sort?  Of course not.  This is not about relational data either, it is about any data.  There is no software that does not process data.  Even the textbook example of some function that squares a number is processing the parameter and returning a value.  The first line of that function, in most languages, names the input parameter and its type.  That is a schema.

This remains true as you climb out of the toy textbook examples into simple one-off programs into small packages and ultimately to apps that build to megabytes of executables running on a rack (or a floor) of servers.  Just as each method call has parameters, so does each class have its attributes, every table its columns, every XML file its XSD (or so we hope) and that code works if and only if everybody on the team understood what was supposed to happen to the data.

Are We Talking UI or Platform or Middleware?

Generally today we are talking about the server side of things.  This is about any project that is going to take a user request and consult a data store: the file system, a search engine, a database, a NoSQL database, or an XML database.  If you go to the disk, that is what we are talking about.

New Versions Are (Almost Always) About Schema Changes

So imagine you've got some working code.  Maybe a single script file, or perhaps a library or package, or maybe some huge application with hundreds of files.  It is time to change it.  In my experience new code means some kind of change to the schema.

I cannot prove that, nor do I intend to try.  It is a mostly-true not an always-true.

Schema Here Does Not Mean Relational

This is not about relational schemas, though they are included.  If you are using Mongo or some other NoSQL Database which does not manage the schema for you, it just means you are managing the schema yourself somewhere in code.   But since you cannot write code that has no knowledge of the structure of the data it handles, that schema will be there somewhere, and if the code changes the schema generally changes.

Does It Need To Be Said?

Sometimes you will find yourself in a situation where people do not know this. You will notice something is wrong; the first symptom is that the conversation does not appear to be proceeding toward a conclusion. Even worse, people do not seem to know what conclusion they are even seeking. They do not know that they are trying to work out the schema, so they wander about the requirements trying to make sense of them.

Order and progress can be restored when somebody ties the efforts down to the discovery and detailing of the schema.  The question is usually, "What data points are we handling and what are we doing to them?"
Categories: Development

Code Today's Requirements Today

Kenneth Downs - Tue, 2013-01-15 20:06
In Part 1 of this series, Do You Know What Day It Is? (written a mere 6 months ago, I really ought to update this blog more often), we looked at Ken's First Law of Architecture:

Today's Constant is Tomorrow's Variable

If you do not know this, there are two mistakes you can make.

Mistake 1: You Ain't Gonna Need It
Mistake 1 is creating a variable, option, switch, parameter, or other control for something which, as far as we know today, is a constant.  You can avoid this mistake if you remember that You Ain't Gonna Need It.

It is this simple: you can invent an infinite number of variables that the user may wish to control in the future, but the chance of guessing which ones they will actually want to control is near zero.  It is a complete waste of time.

Let's take an example.  Imagine you are asked to do a task.  Any task.  It does not even have to be programming.  The task will be loaded with points where programmers invent decisions that nobody asked them to make.  For example, consider this blog post.

1) Should I have used a different background color up there in that green box?
2) Should I have used a sub-section heading by this point, maybe I should go back and do that?
3) Should this list be numbered or maybe just bullets instead?

Notice that this list is kind of arbitrary.  I did not think of it very hard, I just made up those questions.  Another person might have made up different questions.  The problem is that the list never ends.

This can never be eradicated, it is fundamental to the human condition.  Well meaning individuals will raise irrelevancies in an effort to be helpful and the rules of polite society work against weeding these out.  Committees will put two heads on a horse and you will end up with a bunch of YAGNI options.  But we can still fight against them.

Mistake 2: Constants in Code - Nobody Does That!
It is possible to go the other way, which is to embed constants into your code so that when you need to change them you find it is expensive, difficult and error prone.  Most of us learn not to do this so early that it seems unthinkable to us.  Why even write a blog post about it in 2013?

Well the very good reason to write about it is that we all still embed "constants in code" in ways that we never think about.

For example, if you are using a relational database, then the structure of your table is a "constant in code", it is something that is woven through every line.  Your code "knows" which tables are in there and what columns they have.  When you change the structure you are changing something very much "constant" in your system.   This is why it is so important to use a lot of library code that itself reads out of data dictionaries, so that you have removed this particular "constant in code."
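As a small Oracle-flavored illustration of that idea (the table name is hypothetical), library code can ask the data dictionary for a table's structure at runtime instead of baking it in:

```sql
-- Discover a table's columns at runtime rather than hard-coding them.
SELECT column_name, data_type, nullable
  FROM user_tab_columns
 WHERE table_name = 'CUSTOMERS'
 ORDER BY column_id;
```

Code built on queries like this keeps working when columns are added, which is exactly the "constant in code" being removed.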

The great problem here is that we cannot see the future, and you never think that a basic feature of your system is an embedded constant in code.  In a simpler period in my life a system lived on a server.  One server, that was it, who ever heard of a server farm?  That was a constant: number of servers=1, and all of my code assumed it would always be true.  I had to weed out that constant from my code.

Those who understand this basic inability to see the future are most at risk of slipping into over-generalization and generating YAGNI features.  It is an attempt to fight the very universe itself, and you cannot win.

Keep it Simple, Keep it Clean

In the end you can only write clean code, architect clean systems, follow the old knowledge and be the wise one who learns to expect the unexpected.
Categories: Development

Oracle NoSQL Database Storage Node Capacity Parameter

Charles Lamb - Tue, 2013-01-15 11:16

I noticed in this article about Cassandra 1.2 that they have added the concept of vnodes, which allow you to have multiple nodes on a piece of hardware. This is pretty much the same as Oracle NoSQL Database's capability to place multiple Rep Nodes per Storage Node using the Capacity parameter. In general, the recommended starting point in configuring multiple Replication Nodes per Storage Node is one Rep Node per spindle or IO Channel.

The article also talks about Atomic Batching, which has been available in Oracle NoSQL Database since R1 through the various oracle.kv.KVStore.execute() methods. This capability allows an application to batch multiple operations against multiple records with the same major key in one atomic operation (transaction). Our users have all said that this is an important capability.

OTN Yathra

Hans Forbrich - Sun, 2013-01-13 18:45
What does Yathra mean?

Yathra is a Sanskrit word which means Journey.  In the northern states of India Yathra is spelled as Yatra.

Why is this relevant?

Oracle Technology Network, and the Oracle ACE and ACE Director programs are sponsoring a 6 city OTN tour, or Yathra, in India this February. 
  • February 16 in Delhi
  • February 18 in Mumbai
  • February 20 in Pune
  • February 22 in Bangalore
  • February 25 in Hyderabad
  • February 27 in Chennai
And I'm going!  (Assuming my visa application is approved.)  The speaker list is:
  • Vinay Agrawal (India)
  • Hans Forbrich (Canada)
  • PS Janakiram (India)
  • Lucas Jellema (Netherlands)
  • Satyendra Kumar (India)
  • Raj Matamall (USA)
  • Harshad Oak (India)
  • Edward Roske (USA)
  • Vijay Seghal (India)
  • Aman Sharma (India)
  • Vivek Sharma (India)
  • Ganapthy Subramanian (India)
  • Murali Vallath (India)
For more details, check out 
Categories: DBA Blogs

Upgrading the JasperReports libraries to 5.0.1

Dietmar Aust - Wed, 2013-01-09 17:04
Would you like to upgrade your existing JasperReportsIntegration with the latest 5.0.1 libraries of JasperReports?

Here you go ...

Step 1: Download the files for 5.0.1
You can download the files here:

Step 2: Shut down the Apache Tomcat J2EE server

Step 3: Remove the existing JasperReports libraries from your existing installation
Typically, after you have installed a previous version of the JasperReportsIntegration toolkit on your Apache Tomcat J2EE server, the files are located in the directory $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib (for example, version 4.7.0 of JasperReports), where $CATALINA_HOME is the path to your Tomcat installation.

You have to remove these libraries first. In this directory you should find two removal scripts: _jasper-reports-delete-libs-4.7.0.sh and _jasper-reports-delete-libs-4.7.0.cmd, for *nix and Windows respectively. On *nix systems you have to make the script executable first, e.g.: chmod u+x _jasper-reports-delete-libs-4.7.0.sh. Then you can call it and it will remove all files for version 4.7.0, but it will NOT remove the JasperReportsIntegration file itself or any other libraries that YOU might have placed there deliberately.

You can always find the required removal scripts here: http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/ . 
Step 4: Install the new 5.0.1 libraries

Now you can just copy the new libraries from JasperReportsLibraries-5.0.1.zip into $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib.

Step 5: Start the Apache Tomcat J2EE server again

Now your system should be upgraded to the current JasperReports 5.0.1!

Just drop me a note when you need updated libraries for 5.0.2, 5.0.3, ... 6.0.0, etc. I have scripts in place to create a new package of the libraries. 

Here are the notes from my last upgrade (4.5.0 => 4.8.0); I hope they make sense:

# download the libraries from:
#   http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/4.8.0/JasperReportsLibraries-4.8.0.zip
# to /home/jasper/JasperReportsLibraries
cd /home/jasper
mkdir JasperReportsLibraries

# unzip them
cd JasperReportsLibraries
unzip JasperReportsLibraries-4.8.0.zip -d JasperReportsLibraries-4.8.0

# stop tomcat server

# remove libraries of the current JasperReports release
cd /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib
dos2unix _jasper-reports-delete-libs-4.5.0.sh
chmod +x _jasper-reports-delete-libs-4.5.0.sh
./_jasper-reports-delete-libs-4.5.0.sh

# copy libraries of the new release to the WEB-INF/lib directory
cp /home/jasper/JasperReportsLibraries/JasperReportsLibraries-4.8.0/* /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib

# restart tomcat


Getting started with Couchbase and node.js on Windows

Tugdual Grall - Fri, 2013-01-04 09:56
In a previous post I explained how to use Couchbase and Node.js on OS X. Since it is quite different on Windows, here is another article about it. Install Couchbase Server 2.0: if you have not installed Couchbase Server already, do the following: download Couchbase Server from here, run the installer, and configure the database at http://localhost:8091 (if you have an issue, take a look at this…

PaaS lets you pick the right tool for the job, without having to worry about the additional operational complexity

William Vambenepe - Sun, 2012-12-30 02:06

In a recent blog post, Dan McKinley explains “Why MongoDB Never Worked Out at Etsy”. In short, the usefulness of using MongoDB in addition to their existing MySQL didn’t justify the additional operational complexity of managing another infrastructure service.

This highlights the least appreciated benefit of PaaS: PaaS lets you pick the right tool for the job, without having to worry about the additional operational complexity.

I tried to explain this a year ago in this InfoQ article. But the title was cringe-worthy and the article was too long.

So this blog will be short. I even made the main point bold; and put it in the title.


Categories: Other

TROUG 2012 in review

H.Tonguç Yılmaz - Fri, 2012-12-28 15:02
To help light the way for 2013, I am curious about your feedback based on your experiences with TROUG during 2012. As a reminder of which activities took place in 2012, you can browse the “Past Events” section on our site’s home page: http://troug.org/

UKOUG 2012

Rob van Wijk - Sat, 2012-12-22 06:39
My third time visiting the annual UKOUG conference in Birmingham started all wrong. At Schiphol Airport, the usual luggage check routine took place: laptop out of the suitcase, wallet/keys/belt apart, toothpaste apart. And afterwards putting everything back in. But I forgot to close the wheeled suitcase, and when putting it on the ground, my MacBook Pro fell out. A quick inspection revealed that…

Poem: "Wrong Address" - a tribute to victims of Newtown massacre

Debu Panda - Fri, 2012-12-21 13:49

When I dropped you off at the school last Friday
Your eyes were so bright
Face so charming;
You told me with your sweet voice
The weekend plans of your choice
The ballet class on Saturday
Bowling on friend’s birthday;
Helping your mom in Christmas shopping
Putting more decorations and lightings
Every itty-bitty thing that ran through your mind;
Without which I am deaf, dumb and blind.
You were so excited
To paint your walls blue and roof with red
And a new white bed
To match with the furry cat.
All your dreams and wishes
Shattered like glass
When the devil sprayed bullets in your class  
Death arrived at the wrong address
Making you an angel and driving us hapless.

I found a piece of your shattered dream
In your Barbie bag
 The incomplete Christmas drawing
Vivid and enigmatic;
Holding that your mom
Inconsolable and desolate
Sitting by the Christmas tree
Decorated by thee
 With colorful ornaments
Candy canes and reindeers
The shiny golden stars
How proud they are
That you have become a tiny star in the sky
To give light to the stranger who lost his way.

The mailman returned your letter to Santa Claus
Marking “Undeliverable address”;
You wrote in big letters
With your tiny hands 
Asked for a cute little brother
And a new dollhouse for you
A purse for your mommy
A new job for me
Happiness for your grandma
And world peace for your grandpa
Probably your letter arrived at wrong address
Saint of Death delivered your sacrifice as our present.
We are hapless
Powerless living cadavers
We forgive the devil for the sin
We cannot stop his toy named “Gun”!

I still miss
Your goodnight kiss
I want to cuddle you again
But I don’t know the address for heaven.

(Note: The recent massacre at Sandy Hook Elementary has probably broken everybody’s heart. I wrote this poem to express my feelings as a father, as a tribute to the victims of that mindless killing. I do not know any of the victims personally.)

Dark Reading – Database Security

Slavik Markovich - Thu, 2012-12-20 12:16
I was interviewed for a nice article about database security on Dark Reading. The interesting question, I think, is not whether to invest in DB security. To me, it’s a given that you have to do it (even though some customers still don’t agree). The question is: how will the threat landscape change if […]

Oracle NoSQL Database R2 Released

Charles Lamb - Thu, 2012-12-20 11:20

It's official: we've shipped Oracle NoSQL Database R2.

Of course there's a press release, but if you want to cut to the chase, the major features this release brings are:

  • Elasticity - the ability to dynamically add more storage nodes and have the system rebalance the data onto the nodes without interrupting operations.
  • Large Object Support - the ability to store large objects without materializing those objects in the NoSQL Database (there's a stream API to them).
  • Avro Schema Support - Data records can be stored using Avro as the schema.
  • Oracle Database External Table Support - A NoSQL Database can act as an Oracle Database External Table.
  • SNMP and JMX Support
  • A C Language API

There are both an open-source Community Edition (CE), licensed under AGPLv3, and an Enterprise Edition (EE), licensed under a standard Oracle EE license. This is the first release where the EE has additional features and functionality.

Congratulations to the team for a fine effort.

External table and preprocessor for loading LOBs

Antonio Romero - Wed, 2012-12-19 14:18

I was using the COLUMN TRANSFORMS syntax to load LOBs into Oracle using an Oracle external table, which is a handy way of doing several things, from loading LOBs from the filesystem to having constants as fields. In OWB you can use unbound external tables to define an external table with your own arbitrary access parameters; I blogged a while back on using this for preprocessing before it was added into OWB 11gR2.

For loading LOBs using the COLUMN TRANSFORMS syntax, have a read through this post on loading CLOBs, BLOBs or any LOB: the files to load can be specified in a filename field, and the content of each file becomes the LOB data.

So using the example from the linked post, you can define the columns, then define the access parameters. If you go the unbound external table route you can put whatever you want in here (your external table get-out-of-jail-free card).

This will let you read the LOB files from the filesystem and use the external table in a mapping.
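The inline examples from the original post did not survive here, but a combined definition along the lines of the linked post might look roughly like this (the directory, file, and column names are made up for illustration; verify the exact COLUMN TRANSFORMS syntax against your release):

```sql
-- External table that loads one CLOB per row; the data file lists an id
-- and the name of the file holding that row's LOB content.
CREATE TABLE ext_docs (
  id  NUMBER,
  doc CLOB
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    ( id CHAR(10), doc_name CHAR(100) )
    COLUMN TRANSFORMS ( doc FROM LOBFILE (doc_name) FROM (data_dir) )
  )
  LOCATION ('docs.csv')
);
```

Note that doc_name appears only in the field list, not in the table columns: it names the file whose contents are loaded into the doc column.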

Pushing the envelope a little further, I then thought about marrying the preprocessor with COLUMN TRANSFORMS. This would have let me use, for example, a shell script as the preprocessor that lists the contents of a directory, letting me read those files as LOBs via an external table. Unfortunately that doesn't quite work; there is now a bug/enhancement logged, so one day maybe. So I'm afraid my blog title was a little bit of a teaser...

APEX: Running multiple version on single Weblogic Server

Marc Kelderman - Wed, 2012-12-19 07:51
Sometimes you would like to have multiple versions of APEX running on one Weblogic Server, or Weblogic Cluster. For example, you would like to run APEX version 4.0 and version 4.2 alongside each other.

Configuring this is rather simple and straightforward. The trick is to deploy the apex.war and images.war files under different names and use deployment plan files to manipulate the root and image directories.

The following actions should be carried out:

make sure you have different WAR files of the APEX application in a single directory on your Admin server:

$ cd /data/deploy
$ ls -1

Create a plan directory to store the plan files that overrule the web properties:

$ mkdir -p /data/user_projects/domains/APEX/plan/apex
$ ls -1

Make sure you have these files in this directory. Here is an example of the plan-images file for the 4.2 images deployment, to create your own plan files from (the override shown maps the context root of images.4.2.war to /apex-images-42):

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/deployment-plan http://xmlns.oracle.com/weblogic/deployment-plan/1.0/deployment-plan.xsd" global-variables="false">
  <application-name>images.4.2.war</application-name>
  <variable-definition>
    <variable>
      <name>ContextRoot</name>
      <value>/apex-images-42</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>images.4.2.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>weblogic-web-app</root-element>
      <uri>WEB-INF/weblogic.xml</uri>
      <variable-assignment>
        <name>ContextRoot</name>
        <xpath>/weblogic-web-app/context-root</xpath>
      </variable-assignment>
    </module-descriptor>
  </module-override>
  <config-root>/data/tmp/apex</config-root>
</deployment-plan>

Here is an example of the APEX plan file; it follows the same pattern, overriding the context root of apex.4.2.war (the value /apex42 is just an example):

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/deployment-plan http://xmlns.oracle.com/weblogic/deployment-plan/1.0/deployment-plan.xsd" global-variables="false">
  <application-name>apex.4.2.war</application-name>
  <variable-definition>
    <variable>
      <name>ContextRoot</name>
      <value>/apex42</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>apex.4.2.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>weblogic-web-app</root-element>
      <uri>WEB-INF/weblogic.xml</uri>
      <variable-assignment>
        <name>ContextRoot</name>
        <xpath>/weblogic-web-app/context-root</xpath>
      </variable-assignment>
    </module-descriptor>
  </module-override>
  <config-root>/data/tmp/apex</config-root>
</deployment-plan>
Now comes the deployment part, which is straight forward as a normal application.
  • Logon to Adminconsole
  • Lock & Edit to open session
  • Deploy images.4.0.war with plan file plan-images.4.0.xml
  • Deploy images.4.2.war with plan file plan-images.4.2.xml
  • Deploy apex.4.0.war with plan file plan.4.0.xml
  • Deploy apex.4.2.war with plan file plan.4.2.xml
  • Activate changes
  • Start running the images applications for all requests
  • Start running the apex applications for all requests
Because we have overruled the default images directory from '/i' to '/apex-images-42' or '/apex-images-40', we have to run some SQL to update the APEX data. The following SQL should be executed in the database for the specific APEX version (4.2 shown here):

declare
    l_stmt varchar2(4000);
begin
    l_stmt := 'create or replace package wwv_flow_image_prefix
as
    g_image_prefix       constant varchar2(255) := ''/apex-images-42/'';
end wwv_flow_image_prefix;';

    execute immediate l_stmt;
end;
/

update wwv_flows
   set flow_image_prefix = '/apex-images-42/'
 where flow_image_prefix = '/i/';

commit;

begin
    dbms_utility.compile_schema(schema => 'APEX_040200', compile_all => FALSE);
end;
/
Now you are able to run multiple APEX versions from one Weblogic Server/Cluster:


Learn from the smaller ones: Native support for ENUM types

Karl Reitschuster - Tue, 2012-12-18 08:30
Gerhard, a colleague, works with PostgreSQL databases and was very surprised how many of its features are Oracle-like, for example the same concept of sequences as in Oracle. But then he discovered PostgreSQL's native enum type support and asked me whether the same feature exists in Oracle.
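For reference, the PostgreSQL feature he found looks like this (the object names are invented for the example):

```sql
-- PostgreSQL: a native enum type used as a column type.
CREATE TYPE order_status AS ENUM ('new', 'paid', 'shipped');

CREATE TABLE orders (
    id     integer PRIMARY KEY,
    status order_status NOT NULL DEFAULT 'new'
);
```

In Oracle the usual stand-in is a check constraint or a lookup table rather than a first-class enum type.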

When a Query runs slower on second execution - a possible side effect of cardinality feedback

Karl Reitschuster - Tue, 2012-12-18 01:13
After a successful migration from Oracle 10gR2 to Oracle 11gR2, I observed very odd behavior executing a query: on first execution the query ran fast, about 0.3 s; on the second it was not faster but tremendously slower, approximately 20 s. This is the opposite of what I have experienced with complex queries (lots of joins, lots of predicates), where the first execution pays the extra cost of hard parsing and disk reads if the data is not in the cache, and the second execution, even if the query initially ran for several seconds, completes in a fraction of a second.
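On 11gR2, one way to check whether cardinality feedback caused the second, slower execution is to look at the shared cursor flags; this is a sketch, and in practice you would also filter by your statement's SQL_ID:

```sql
-- Child cursors re-optimized because of cardinality feedback are
-- marked with USE_FEEDBACK_STATS = 'Y'.
SELECT sql_id, child_number, use_feedback_stats
  FROM v$sql_shared_cursor
 WHERE use_feedback_stats = 'Y';
```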


Subscribe to Oracle FAQ aggregator