Feed aggregator

Dark Reading – Database Security

Slavik Markovich - Thu, 2012-12-20 12:16
I was interviewed for a nice article about database security on Dark Reading. The interesting question, I think, is not whether to invest in DB security. To me, it’s a given that you have to do it (even though some customers still don’t agree). The question is: how will the threat landscape change if […]

Oracle NoSQL Database R2 Released

Charles Lamb - Thu, 2012-12-20 11:20

It's official: we've shipped Oracle NoSQL Database R2.

Of course there's a press release, but if you want to cut to the chase, the major features this release brings are:

  • Elasticity - the ability to dynamically add more storage nodes and have the system rebalance the data onto the nodes without interrupting operations.
  • Large Object Support - the ability to store large objects without materializing those objects in the NoSQL Database (there's a stream API to them).
  • Avro Schema Support - Data records can be stored using Avro as the schema.
  • Oracle Database External Table Support - A NoSQL Database can act as an Oracle Database External Table.
  • SNMP and JMX Support
  • A C Language API

There are both an open-source Community Edition (CE), licensed under AGPLv3, and an Enterprise Edition (EE), licensed under a standard Oracle EE license. This is the first release where the EE has additional features and functionality.

Congratulations to the team for a fine effort.

External table and preprocessor for loading LOBs

Antonio Romero - Wed, 2012-12-19 14:18

I was using the COLUMN TRANSFORMS syntax to load LOBs into Oracle via an Oracle external table, which is a handy way of doing several things, from loading LOBs from the filesystem to having constants as fields. In OWB you can use unbound external tables to define an external table using your own arbitrary access parameters; I blogged a while back on using this for preprocessing before it was added into OWB 11gR2.

For loading LOBs using the COLUMN TRANSFORMS syntax, have a read through this post on loading CLOBs, BLOBs or any LOBs: the files to load can be specified via a filename field, and the content of each file becomes the LOB data.

So using the example from the linked post, you first define the columns.

Then define the access parameters. If you go the unbound external table route you can put whatever you want in here (your external table get-out-of-jail-free card).

This will let you read the LOB files from the filesystem and use the external table in a mapping.
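As a rough sketch of what such a definition can look like (the table, directory, and file names here are invented for illustration; DATA_DIR must be an existing Oracle directory object), the COLUMN TRANSFORMS clause maps a filename field onto a LOB column:

```sql
CREATE TABLE docs_ext (
  doc_id   NUMBER,
  doc_body CLOB
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    ( doc_id   CHAR(10),
      doc_file CHAR(255) )
    -- doc_file holds a filename; the contents of that file become the CLOB
    COLUMN TRANSFORMS ( doc_body FROM LOBFILE (doc_file) FROM (data_dir) CLOB )
  )
  LOCATION ('docs.dat')
)
REJECT LIMIT UNLIMITED;
```

Each row of docs.dat then supplies an id plus a filename, and selecting from docs_ext streams the file contents in as the LOB value.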

Pushing the envelope a little further, I then thought about marrying the preprocessor together with COLUMN TRANSFORMS. This would have let me use a shell script, for example, as the preprocessor, which listed the contents of a directory and let me read those files as LOBs via an external table. Unfortunately that doesn't quite work; there is now a bug/enhancement logged, so one day maybe. So I'm afraid my blog title was a little bit of a teaser...

APEX: Running multiple versions on a single WebLogic Server

Marc Kelderman - Wed, 2012-12-19 07:51
Sometimes you would like to have multiple versions of APEX running on one Weblogic Server, or Weblogic Cluster. For example, you would like to run APEX version 4.0 and version 4.2 along each other.

Configuring this is rather simple and straightforward. The trick is to deploy the various WAR files, apex.war and images.war, under different names and use plan files to manipulate the root and image directories.

The following actions should be carried out:

Make sure you have the different WAR files of the APEX application in a single directory on your Admin Server:

$ cd /data/deploy
$ ls -1

Create a plan directory to store your plan files, which overrule the web properties:

$ mkdir -p /data/user_projects/domains/APEX/plan/apex
$ ls -1

Make sure you have these files in this directory. Here is an example of the plan file for the images application (plan-images); adapt the values to create your own plan file:

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/deployment-plan http://xmlns.oracle.com/weblogic/deployment-plan/1.0/deployment-plan.xsd" global-variables="false">
  <!-- Example reconstruction: variable names and values are illustrative -->
  <application-name>images.4.2.war</application-name>
  <variable-definition>
    <variable>
      <name>ContextRoot</name>
      <value>/apex-images-42</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>images.4.2.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>weblogic-web-app</root-element>
      <uri>WEB-INF/weblogic.xml</uri>
      <variable-assignment>
        <name>ContextRoot</name>
        <xpath>/weblogic-web-app/context-root</xpath>
      </variable-assignment>
    </module-descriptor>
  </module-override>
  <config-root>/data/tmp/apex</config-root>
</deployment-plan>

Here is an example of the plan file for the APEX application itself:

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/deployment-plan http://xmlns.oracle.com/weblogic/deployment-plan/1.0/deployment-plan.xsd" global-variables="false">
  <!-- Example reconstruction: the context root is illustrative; pick distinct roots per APEX version -->
  <application-name>apex.4.2.war</application-name>
  <variable-definition>
    <variable>
      <name>ContextRoot</name>
      <value>/apex-42</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>apex.4.2.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>weblogic-web-app</root-element>
      <uri>WEB-INF/weblogic.xml</uri>
      <variable-assignment>
        <name>ContextRoot</name>
        <xpath>/weblogic-web-app/context-root</xpath>
      </variable-assignment>
    </module-descriptor>
  </module-override>
  <config-root>/data/tmp/apex</config-root>
</deployment-plan>

Now comes the deployment part, which is as straightforward as for a normal application.
  • Log on to the Admin Console
  • Lock & Edit to open an edit session
  • Deploy images.4.0.war with plan file plan-images.4.0.xml
  • Deploy images.4.2.war with plan file plan-images.4.2.xml
  • Deploy apex.4.0.war with plan file plan.4.0.xml
  • Deploy apex.4.2.war with plan file plan.4.2.xml
  • Activate changes
  • Start the images applications, servicing all requests
  • Start the apex applications, servicing all requests
Since we have overruled the default images directory from '/i' to '/apex-images-42' or '/apex-images-40', we should run some SQL to update the APEX metadata. The following SQL code should be executed on the database for the specific APEX version:

    declare
        l_stmt varchar2(4000);
    begin
        l_stmt := 'create or replace package wwv_flow_image_prefix
    as
        g_image_prefix constant varchar2(255) := ''/apex-images-42/'';
    end wwv_flow_image_prefix;';

        execute immediate l_stmt;
    end;
    /

    update wwv_flows
       set flow_image_prefix = '/apex-images-42/'
     where flow_image_prefix = '/i/';

    commit;

    begin
        dbms_utility.compile_schema(schema => 'APEX_040200', compile_all => FALSE);
    end;
    /
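As a quick sanity check afterwards (assuming the APEX 4.2 schema name APEX_040200), this shows which image prefix each application now uses:

```sql
select flow_image_prefix, count(*) as flows
  from apex_040200.wwv_flows
 group by flow_image_prefix;
```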

Now you are able to run multiple APEX versions from one WebLogic Server/Cluster:


Learn from the smaller ones: Native support for ENUM Types

Karl Reitschuster - Tue, 2012-12-18 08:30
Gerhard, a colleague, deals with PostgreSQL databases and was very surprised how many features are very Oracle-like, for example the same concept of sequences as in Oracle. But then he discovered PostgreSQL's native ENUM type support and asked me whether the same feature exists in Oracle.
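For illustration (the table and type names here are invented), this is what the PostgreSQL feature looks like, next to the check-constraint workaround commonly used in Oracle, which has no native ENUM type:

```sql
-- PostgreSQL: a native enum type
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE person_pg (name text, current_mood mood);

-- Oracle: no native ENUM; a check constraint is the usual substitute
CREATE TABLE person_ora (
  name         VARCHAR2(100),
  current_mood VARCHAR2(10)
               CHECK (current_mood IN ('sad', 'ok', 'happy'))
);
```

The PostgreSQL version gives you ordering and type safety for free; the Oracle version only restricts the values.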

When a Query runs slower on second execution - a possible side effect of cardinality feedback

Karl Reitschuster - Tue, 2012-12-18 01:13
After a successful migration from Oracle 10gR2 to Oracle 11gR2, I observed very odd behavior executing a query. On first execution the query ran fast, about 0.3 s. On second execution it was not faster but tremendously slower, approximately 20 s. This is the opposite of the behavior I usually experience with complex queries (lots of joins, lots of predicates): the first execution carries the extra cost of hard parsing, and of disk reads if the data is not in the cache, while the second execution, even if the query initially ran for several seconds, completes in a fraction of a second.
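Cardinality feedback can be confirmed, and if necessary switched off per session. A sketch of both steps in 11gR2 (the second uses an underscore parameter, so treat it as a last resort and test first):

```sql
-- Was the cursor re-optimized because of cardinality feedback?
select sql_id, child_number, use_feedback_stats
  from v$sql_shared_cursor
 where sql_id = '&sql_id';

-- Disable cardinality feedback for the current session
alter session set "_optimizer_use_feedback" = false;
```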

Running Teradata Aster Express on a MacBook Pro

Donal Daly - Sat, 2012-12-15 05:26
Those who know me know that I am a complete Apple geek. Teradata supports BYOD, so naturally I have a MacBook Pro. Why wouldn't I want to run Aster Express on it? :-)

The configuration was surprisingly easy once I worked out how!

  • 4 GB memory - (I have 8GB in mine)
  • At least 20 GB free disk space
  • OS: I am running Mac OS X 10.8.2
  • VMware: buy VMware Fusion. I have version 5.0.2 (900491). Make sure to order the Professional version, as only that version has the new Network Editor feature.
  • 7-Zip: to extract (uncompress) the Aster Express package.

You get to the Network Editor from Preferences. Create a new network adapter vmnet2 as shown in the screen shot below:

Then make sure that for both the Queen and Worker VMWare images you assign vmnet2 as your network adapter as illustrated in the screenshot below:

Those are really the only changes you need to make. Follow the rest of the instructions as outlined in Getting Started with Aster Express 5.0 to get your Aster nCluster up and running.

If you have 8GB of memory you might decide to allocate 2GB of memory to each VM instead of the default 1GB. Again, you can set this in the settings for each VMware image. I also run the utility Memory Clean (available for free from the App Store); you would be amazed what a memory hog Firefox and Safari can be. I normally shut down most other running programs when I am working with Aster Express to give me the best user experience.

To run the Aster Management Console, point your favourite browser at the Queen VM's address. You may ignore any website security certificate warnings and continue to the website.

You will also find Mac versions of act and ncluster_loader in /home/beehive/clients_all/mac. I just copy them to my host. In fact, once I start up the VMware images, I do almost everything natively from the Mac.

In future posts I plan to cover the following topics:
  • How to scale your Aster Express nCluster and make it more reliable
  • Demo: From Raw Web log to Business Insight
  • Demo: Finding the sentiment in Twitter messages
If there are topics you would like me to cover in the future, then just let me know.

APEX 4.2.1 Patch Set Released

David Peake - Fri, 2012-12-14 15:13

Today we released APEX 4.2.1 on the APEX OTN Download page.
This is a cumulative patch set for APEX 4.2.0.
If you already have APEX 4.2.0 installed then you will need to download the patch from My Oracle Support, Patch # 14732511. {The filename will be p14732511_420_Generic.zip until Oracle Support has 4.2.1 loaded, at which time the name will change to p14732511_421_Generic.zip.}

There were quite a number of APEX 4.2.0 known issues, fixes for the majority of which have been incorporated into APEX 4.2.1.

It is important to read the APEX 4.2.1 Patch Set Notes and also the APEX 4.2 Release Notes when installing APEX 4.2.1.

... But Wait - That's Not All
On the APEX OTN Home Page I have added a new button.
This leads to a collection of pages on Installation, Upgrades, Deploying Applications, User Interface, Security, and Performance. These pages are designed to answer the questions our team often gets asked and to provide additional information to that available in the documentation.

.. Still More
On the Collateral - White Papers page, the Extending Oracle E-Business Suite Release 12 using Oracle Application Express white paper has been revised to include some corrections and to cover OAM, which is the new Oracle standard for single sign-on integration across Oracle Applications.

... And Last But Not Least
On the OTN APEX Home Page there are a number of new Case Studies. One of the best case studies I have seen for Application Express comes from APEX Consulting Partner Inoapps, who delivered a system for INEOS in Scotland: INEOS Group Cuts 80% off Application Design and Build Costs for Managing Hydrocarbon Accounting and Refinery Information.

There is also a new book listed - Oracle APEX Best Practices.

Oracle NoSQL Database: Cleaner Performance

Charles Lamb - Fri, 2012-12-14 15:03

In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database.

Many contemporary NoSQL Databases utilize log based (i.e. append-only) storage systems and it is well-understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. 10 years ago when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE") we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's where he observes a 20x improvement validate the maturity of JE's cleaner.

Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput. A cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. The cleaner has to:

  1. Work well out of the box, and
  2. Be configurable so that customers can tune it for their specific workloads and requirements.

The JE Cleaner has been field tested in production for many years managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.

Pick Bex's Deep Dive Talk for Collaborate 2013

Bex Huff - Tue, 2012-12-11 21:54

How would you like to leave Collaborate knowing exactly what you wanted to learn? Here's your chance...

Like last year, the WebCenter SIG at IOUG Collaborate 2013 (April 7-11 in Denver) will have a deep dive session on Sunday. Bezzotech was asked to deliver two hours of the deep dive, and we were batting around ideas for what to talk about. Security? Performance? Integrations?

Then it hit us: why not let the attendees pick our talk?

If you always wanted to know something crazy about how WebCenter works, please take our survey so we know what to present. You can also leave a comment, email us at info@bezzotech.com, or send it to me directly. We'll tally up the requests and let the WebCenter faithful decide what our talk will be!

I'm genuinely curious about what you are curious about ;-)

read more

Categories: Fusion Middleware

What Customers expect in a new generation APM (2.0) solution

Debu Panda - Tue, 2012-12-11 13:31
As an application owner, architect, or application support person, you want to exceed service levels and avoid costly, reputation-damaging application failures through improved visibility into the end-user experience. The blog post discusses some of the top features that new generation APM solutions provide to help you achieve your business objectives.

Here is the direct link to the blog.

Error Handling: Or, How to Start an Argument Among Programmers

Tahiti Views - Tue, 2012-12-11 01:18
Lately I see a lot of discussions come up about error handling. For example: Dr. Dobb's "The Scourge of Error Handling", covering mainly C-descended languages, and this blog post stemming from OTN discussions of Oracle exception handling in PL/SQL. I've been on both sides of the fence (cowboy coder and database programmer) so I'll offer my take. I'll write mostly from the perspective of PL/SQL stored procedures.

Big Data career adviser says you should be a… Big Data analyst

William Vambenepe - Tue, 2012-12-11 00:13

LinkedIn CEO Jeff Weiner wrote an interesting post on “the future of LinkedIn and the economic graph“. There’s a lot to like about his vision. The part about making education and career choices better informed by data especially resonates with me:

With the existence of an economic graph, we could look at where the jobs are in any given locality, identify the fastest growing jobs in that area, the skills required to obtain those jobs, the skills of the existing aggregate workforce there, and then quantify the size of the gap. Even more importantly, we could then provide a feed of that data to local vocational training facilities, junior colleges, etc. so they could develop a just-in-time curriculum that provides local job seekers the skills they need to obtain the jobs that are and will be, and not just the jobs that once were.

I consider myself very lucky. I happened to like computers and enjoy programming them. This eventually led me to an engineering degree, a specialization in Computer Science and a very enjoyable career in an attractive industry. I could have been similarly attracted by other domains which would have been unlikely to give me such great professional options. Not everyone is so lucky, and better data could help make better career and education choices. The benefits, both at the individual and societal levels, could be immense.

Of course, as with every Big Data example, you can’t expect a crystal ball either. It’s unlikely that the “economic graph” for France in 1994 would have told me: “this would be a good time to install Linux Slackware, learn Python and write your first CGI script”. It’s also debatable whether that “economic graph” would have been able to avoid one of the worst talent wastes of recent times, when too many science and engineering graduates went into banking. The “economic graph” might actually have encouraged that.

But, even under moderate expectations, there is a lot of potential for better informed education and career decisions (both on the part of the training profession and the students themselves), and I am glad that LinkedIn is going after that. Along with the choice of a life partner (and other companies are after that problem), these are maybe the most important and least informed decisions people will make in their lifetime.

Jeff Weiner also made proclamations of openness in that same article:

Once realized, we then want to get out of the way and allow all of the nodes on this network to connect seamlessly by removing as much friction as possible and allowing all forms of capital, e.g. working capital, intellectual capital, and human capital, to flow to where it can best be leveraged.

I’m naturally suspicious of such claims. And a few hours later, I got a nice email from LinkedIn, announcing that as of tomorrow they are dropping the “blog link” application which, as far as I can tell, fetches recent posts from my blog and includes them on my LinkedIn profile. It seems to me that this was a nice and easy way to “allow all of the nodes on this network to connect seamlessly by removing as much friction as possible”…

Categories: Other

Webinar: Using XMLA with Cognos and Oracle OLAP Cubes

Keith Laker - Mon, 2012-12-10 10:01
When:  Thursday, Dec 13, 2012 at 9:00am PST / 12:00pm EST / 6:00pm CET.

To attend:    Sign up here.
If you use a business intelligence tool such as IBM Cognos, MicroStrategy or SAP BusinessObjects Analysis that uses XMLA to connect to multidimensional data sources, check out a free webinar by Simba Technologies which offers a "sneak peek" of the Simba XMLA Provider for Oracle OLAP. The Simba XMLA Provider for Oracle OLAP is an XMLA version of the Simba MDX Provider for Oracle OLAP, the gold standard in MDX connectivity to Oracle OLAP. (The Simba MDX Provider for Oracle OLAP allows MDX-based clients such as Microsoft Excel PivotTables to query Oracle OLAP cubes. The XMLA version allows clients that use XMLA rather than ODBO to connect to Oracle OLAP.)

Simba will demonstrate IBM Cognos using the XMLA provider to query Oracle OLAP cubes. Here's a brief outline of the session.

See how:
  • Familiar business intelligence applications such as IBM Cognos can connect to an Oracle OLAP cube.
  • Ad-hoc querying and data analysis can be performed directly in IBM Cognos on your OLAP data.
  • The most advanced XMLA-capable applications enable users to interactively build reports, drill into details, and slice and dice data.
  • Connectivity can be established without the need to install any software on the client machine.
    Simply connect to the XMLA service and everything works!
See you there!

Categories: BI & Warehousing

Missing controlfiles?

Bas Klaassen - Mon, 2012-12-10 03:36
During the backup check this morning, I noticed something strange. The backup logfile was showing the following error:
Categories: APPS Blogs

Error Wrangling

Andrew Clarke - Mon, 2012-12-10 01:50
Last week the OTN SQL and PL/SQL Forum hosted one of those threads which generate heat and insight without coming to a firm conclusion; this one was titled "WHEN OTHERS is a bug". Eventually Rahul, the OP, complained that he was as confused as ever. The problem is, his question asked for proof of Tom Kyte's opinion that, well, WHEN OTHERS is a bug. We can't prove an opinion, even an opinion from a well-respected source like Tom. All we can do is weigh in with our own opinions on the topic.

One of the most interesting things in the thread was Steve Cosner's observations on designing code "to be called by non-database, and ... non-Oracle software (Example: Oracle Forms, or a web process)". He uses WHEN OTHERS to produce a return code. Now, return codes are often cited as an example of bad practice: they disguise the error and allow the calling program to proceed as though nothing had gone wrong in the called unit. Exceptions, on the other hand, cannot be ignored.

But Steve goes on to make a crucial point: we must make allowances for how our code will be called. For instance, one time I was writing PL/SQL packages for an E-Business Suite application and I was doing my usual thing, coding procedures which raised exceptions on failure. This was fine for the low level routines which were only used in other PL/SQL routines. But the procedures which ran as concurrent jobs had to finish cleanly, no matter what happened inside; each passed a return code to the concurrent job manager, and logged the details. Similarly the procedures used by Forms passed back an outcome message, prefixed with "Success" or "Error" as appropriate.
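A sketch of that pattern (all names here are invented for illustration, not taken from E-Business Suite): the same routine is exposed twice, once raising exceptions for PL/SQL callers, and once trapping WHEN OTHERS to hand a return code to the concurrent manager.

```sql
create or replace package order_api as
    -- for PL/SQL callers: raises on failure
    procedure post_order (p_order_id in number);
    -- for the concurrent manager: never raises
    procedure post_order_conc (errbuf     out varchar2,
                               retcode    out varchar2,
                               p_order_id in  number);
end order_api;
/

create or replace package body order_api as

    procedure post_order (p_order_id in number) is
    begin
        update orders
           set status = 'POSTED'
         where order_id = p_order_id;
        if sql%rowcount = 0 then
            raise_application_error(-20001, 'No such order: ' || p_order_id);
        end if;
    end post_order;

    procedure post_order_conc (errbuf     out varchar2,
                               retcode    out varchar2,
                               p_order_id in  number) is
    begin
        post_order(p_order_id);
        errbuf  := 'Success';
        retcode := '0';      -- 0 = success, by convention
    exception
        when others then
            errbuf  := 'Error: ' || sqlerrm;
            retcode := '2';  -- 2 = error, by convention
    end post_order_conc;

end order_api;
/
```

The WHEN OTHERS here is not swallowing the error; it is translating it into the contract the caller actually supports.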

This rankled at first, but it is undoubtedly true that exceptions make demands on the calling program. Although the demands are not as high in PL/SQL as in, say, C++ or Java, raising exceptions still changes how the calling function must be coded. We have to work with the grain of the existing code base when introducing new functionality. (Interestingly, Google has an edict against exceptions in its C++ code base, and developed its Go language to support return codes as the default error-handling mechanism. Find out more.)

The point is APIs are a contract. On its side the called program can enforce rules about how it is called - number and validity of input parameters, return values. But it cannot impose rules about what the calling program does with the outcome. So there's no point in exposing a function externally if its behaviour is unacceptable to the program which wants to call it. When the calling program wants to use return codes there's little point in raising exceptions instead. Sure the coder writing the calling program can ignore the value in the return code, but that is why we need code reviews.

So, is WHEN OTHERS a bug? The answer is, as so often: it depends.

Install failing for Pre-requisite Checks (error: configured=unknown)

Madan Mohan - Sat, 2012-12-08 06:50
All the pre-requisite checks fail for kernel parameters:

Current = "250"  configured=unknown

The cause: /etc/sysctl.conf does not have read permission for the user installing the software. The fix:

chmod +r /etc/sysctl.conf

Challenges with APM 1.0 product

Debu Panda - Tue, 2012-12-04 11:38
Customers have been managing application performance since early days of mainframe evolution. However, Application Performance Management as a discipline has gained popularity in the past decade.

See my blog in BMC communities for challenges with old generation of APM products.

Here is the direct link : https://communities.bmc.com/communities/community/bsm_initiatives/app_mgmt/blog/2012/12/04/challenges-with-apm-10-products

GOTOs, considered

Andrew Clarke - Mon, 2012-12-03 00:12
Extreme programming is old hat now, safe even. The world is ready for something new, something tougher, something that'll... break through. You know? And here is what the world's been waiting for: Transgressive Programming.

The Transgressive Manifesto is quite short: it's okay to use GOTO. The single underlying principle is that we value willful controversy over mindless conformity.

I do have a serious point here. Even programmers who haven't read the original article (because they can't spell Dijkstra and so can't find it through Google) know that GOTOs are "considered harmful". But as Marshall and Webber point out, "the problem lies ... in the loss of knowledge and experience. If something is forbidden for long enough, it becomes difficult to resurrect the knowledge of how to use it."

How many Oracle developers even realise PL/SQL supports GOTO? It does, of course. Why wouldn't it? PL/SQL is a proper programming language.
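A minimal sketch of the syntax, for anyone who has never seen it in PL/SQL:

```sql
begin
    goto punchline;
    dbms_output.put_line('this line is skipped');
    <<punchline>>
    dbms_output.put_line('jumped straight here');
end;
/
```

The label goes in double angle brackets and must be followed by an executable statement.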

The standard objection is that there is no role for GOTO because PL/SQL has loops, procedures, CASE, etc. But sometimes we need to explicitly transfer control. In recent times I have come across these examples:

  • a loop which raised a user-defined exception to skip to the END LOOP; point when the data was in certain states, thus avoiding a large chunk of processing. A GOTO would have been cleaner, because it is poor practice to represent normal business states as exceptions.
  • a huge function with more than a dozen separate RETURN statements. GOTOs directing flow to a single RETURN call would have been really helpful, because I needed to log the returned value.
  • a condition which set a counter variable to a large number so as to short-circuit a loop. Here a GOTO would simply have been more honest.
These examples are all explicit control transfers: they cause exactly the sort of random paths through the code which Dijkstra inveighed against. But the coders didn't honour the principle underlying his fatwa, they just lacked the moxie to invoke the dread statement. Instead they kludged. I'm not saying that using a GOTO would have redeemed a function with 800 LOC; clearly there's a lot more refactoring to be done there. But it would have been better.

Here is a situation I have come across a few times. The spec is to implement a series of searches of increasing coarseness, depending on which arguments are passed; the users want the most focused set of records available, so once a specific search gets some hits we don't need to run the more general searches.

Nested IF statements provide one way to do this:

result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);

if result_set.count() = 0 then
    result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);

    if result_set.count() = 0 then
        result_set := sieve_3(p1=>var3, p2=>var5);

        if result_set.count() = 0 then
            null; -- coarser searches would continue nesting here
        end if;
    end if;
end if;
return result_set;
Obviously as the number of distinct searches increases the nested indentation drives the code towards the right-hand side of the page. Here is an alternative implementation which breaks the taboo and does away with the tabs.

result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);
if result_set.count() > 0 then
    goto return_point;
end if;

result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);
if result_set.count() > 0 then
    goto return_point;
end if;

result_set := sieve_3(p1=>var3, p2=>var5);
if result_set.count() > 0 then
    goto return_point;
end if;

<< return_point >>
return result_set;
I think the second version has a clearer expression of intent. Did we find any records? Yes we did, job's a good'un, let's crack on.

GOTO: not as evil as triggers.


Subscribe to Oracle FAQ aggregator