Feed aggregator

Let’s keep Manston

Pete Scott - Mon, 2014-03-24 17:22
It is said that aviation is in the blood (or in the genes). My partner worked as cabin crew with Air New Zealand before becoming their senior HR manager responsible for all off-shore based Air New Zealand staff; her cousin is a senior pilot with Cathay Pacific; of her uncles, one was managing director of […]

Chicago Oracle User Community Restart

Jeremy Schneider - Mon, 2014-03-24 13:47

Chicago is the third largest city in the United States. There are probably more professional Oracle users here than in most other areas of the country – and yet for many years now there hasn’t been a cohesive user group.

But right now there’s an opportunity for change – if the professional community of Chicago Oracle users steps up to the plate.

Chicago Oracle User Group

First, the Chicago Oracle User Group has just elected a new president. Alfredo Abate is bringing a level of enthusiasm and energy to the position which we’ve been missing for a long time. He’s trying to figure out how to restart the COUG and re-engage the professional community here – but he needs input and assistance from you! If you’re an administrator or developer anywhere near Chicago and you have Oracle software anywhere in your company, then please help Alfredo get the user group going! Here are a few specific things you can do:

  1. Send Alfredo an email saying congrats and offering suggestions for the COUG. You can find him on LinkedIn or the COUG site below.
  2. Join the LinkedIn group that Alfredo set up for the COUG.
  3. Sign up for a free account at the COUG site: chicago.oracle.ioug.org
  4. Complete the survey at the COUG website (must sign up for free account, then look for “survey” link in the top navigation bar). This will help Alfredo think about planning the next event.
Lunch Huddles

A few years ago, I was part of a group of Oracle database users from different companies in Chicago who started hanging out regularly for lunches downtown. It was never a big event, but it was a lot of fun to get together and catch up regularly. However, I stopped organizing the lunches after a job change back into travel consulting and the birth of our daughter. I live on the north side of the city, I worked from home when I wasn’t traveling, and I wasn’t able to make trips downtown anymore.

Ever since, I’ve missed hanging out with friends downtown and I’ve always wanted to do these group lunches again. Besides the fact that I really enjoy catching up with people, I think that face-to-face meetups really help strengthen our sense of community as a whole in Chicago.

So – after far too long – I started the lunches again last week.

Oracle DB Lunch Downtown

But it’s improved – there are now lunches happening all over Chicagoland!

Tomorrow: Deerfield
This Wednesday: Des Plaines
Next Wednesday: Downtown

Coming soon: Naperville?

Please join us for a lunch sometime! I promise you’ll find it to be both beneficial and fun! And also, please join the group on meetup.com – then you’ll get reminders about upcoming lunches in Chicago.

Spread the Word

Even if you don’t live in Chicago, you can help me out with this – send a brief tweet or quick email to any Oracle professionals you know around Chicago and direct them to this blog post. I hope to see some new life in the Oracle professional community here. It won’t happen by accident.

What's New in Dodeca 6.7.1?

Tim Tow - Mon, 2014-03-24 11:28
Last week, we released Dodeca version 6.7.1.4340, which focuses on some new relational functionality. The major new features in 6.7.1 are:
  • Concurrent SQL Query Execution
  • Detailed SQL Timed Logging
  • Query and Display PDF format
  • Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution

Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database.  The SQLPassthroughDataSet functionality was engineered such that a SQLPassthroughDataSet object can include multiple queries that get executed and returned on a single trip to the server, and several of our customers have taken great advantage of that functionality.  We have at least one customer, in fact, that has some SQLPassthroughDataSets that execute up to 20 queries in a single trip to the server.  The functionality was originally designed to run the queries sequentially, but in some cases it would be better to run the queries concurrently.  Because this is Dodeca, of course, concurrent query execution is configurable at the SQLPassthroughDataSet level.

Detailed SQL Timed Logging

In Dodeca version 6.0, we added detailed timed logging for Essbase transactions.  In this version, we have added similar functionality for SQL transactions and have formatted the logs in pipe-delimited format so they can easily be loaded into Excel or into a database for further analysis.  The columns of the log include the log message level, timestamp, sequential transaction number, number of active threads, transaction GUID, username, action, description, and time to execute in milliseconds.

The easiest way to understand the format is to look at a sample line.
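As a purely hypothetical sketch (every value below is invented), a line in this pipe-delimited format might look like:

INFO|2014-03-24 11:28:03.125|1042|3|0f8a2c6e-1d4b-4a7e-9c3d-5e6f7a8b9c0d|ttow|ExecuteQuery|SQLPassthroughDataSet query 1 of 3|184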

Query and Display PDF format

The PDF View type now supports the ability to load the PDF directly from a relational table via a tokenized SQL statement.  This functionality will be very useful for those customers who have contextual information, such as invoice images, stored relationally and need a way to display that information.  We frequently see this requirement as the end result of a drill-through operation from either Essbase or relational reports.

Ability to Launch Local External Processes from Within Dodeca

Certain Dodeca customers store data files for non-financial systems in relational data stores and use Dodeca as a central access point.  The new ability to launch a local process from within Dodeca is implemented as a Dodeca Workbook Script method, which provides a great deal of flexibility in how the process is launched.

The new 6.7.1 functionality follows closely on the 6.7.0 release, which introduced proxy server support and new MSAD and LDAP authentication services.  If you are interested in seeing all of the changes in Dodeca, highly detailed release notes are available on our website at http://www.appliedolap.com/resources/downloads/dodeca-technical-docs.

Categories: BI & Warehousing

Packt Publishing Buy One Get One Free Offer

Antony Reynolds - Thu, 2014-03-20 14:35
Packt Publishing celebrates their 2000th title with a Buy One Get One Free Offer

Great time to get those Packt books you’ve been thinking of buying, like the SOA Suite 11g Developer’s Guide or the SOA Suite 11g Developer’s Cookbook.


Yet Another Post How to Link to Download a File or Display an Image from a BLOB column

Joel Kallman - Thu, 2014-03-20 13:00
On an internal mailing list, an employee (Richard, a long-time user of Oracle Application Express) asked:

"...we are attempting to move to storing (the images) in a BLOB column in our own application tables.  Is there no way to display an image outside of page items and reports? "

Basically, he has a bunch of images stored in the BLOB column of the common upload table, APEX_APPLICATION_FILES (or WWV_FLOW_FILES).  He wishes to move them to a table in his workspace schema, but it's unclear to him how they can be displayed.  While there is declarative support for BLOBs in Application Express, there are times where you simply wish to get a link which would return the image - and without having to add a form and report against the table containing the images.

I fully realize that this question has been answered numerous times in various books and blog posts, but I wish to reiterate it here again.

Firstly, a way not to do this is via a PL/SQL procedure that is called directly from a URL.  I see this "solution" commonly documented on the Internet, and in general, it should not be followed.  The default configuration of Oracle Application Express has a white list of entry points, callable from a URL.  For security reasons, you absolutely want to leave this restriction in place and not relax it.  This is specified as the PlsqlRequestValidationFunction for mod_plsql and security.disableDefaultExclusionList for Oracle REST Data Services (née APEX Listener).  With this default security measure in place, you will not be able to invoke a procedure in your schema from a URL.  Good!

The easiest way to return an image from a URL in an APEX application is either via a RESTful Service or via an On-Demand process.  This blog post will cover the On-Demand process.  It's definitely easier to implement via a RESTful Service, and if you can do it via a RESTful call, that will always be much faster - Kris has a great example of how to do this. However, one benefit of doing this via an On Demand process is that it will also be constrained by any conditions or authorization schemes that are in place for your APEX application (that is, if your application requires authentication and authorization, someone won't be able to access the URL unless they are likewise authenticated to your APEX application and fully authorized).

  1. Navigate to Application Builder -> Shared Components -> Application Items
  2. Click Create
    • Name:  FILE_ID
    • Scope:  Application
    • Session State Protection:  Unrestricted
  3. Navigate to Application Builder -> Shared Components -> Application Processes
  4. Click Create
    • Name: GETIMAGE
    • Point:  On Demand: Run this application process when requested by a page process.
  5. Click Next
  6. For Process Text, enter the following code:


begin
    for c1 in (select *
                 from my_image_table
                where id = :FILE_ID) loop
        sys.htp.init;
        sys.owa_util.mime_header( c1.mime_type, FALSE );
        sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( c1.blob_content ));
        sys.htp.p('Content-Disposition: attachment; filename="' || c1.filename || '"' );
        sys.htp.p('Cache-Control: max-age=3600'); -- tell the browser to cache for one hour, adjust as necessary
        sys.owa_util.http_header_close;
        sys.wpg_docload.download_file( c1.blob_content );

        apex_application.stop_apex_engine;
    end loop;
end;

Then, all you need to do is construct a URL in your application which calls this application process, as described in the Application Express Application Builder Users' Guide.  You could manually construct a URL using APEX_UTIL.PREPARE_URL, or specify a link in the declarative attributes of a Report Column.  Just be sure to specify a Request of 'APPLICATION_PROCESS=GETIMAGE' (or whatever your application process name is).  The URL will look something like:


f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID:<some_valid_id>
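If you prefer to build the link in code, here is a minimal sketch using APEX_UTIL.PREPARE_URL, which also applies the session state protection checksum when one is required.  It assumes the FILE_ID item and GETIMAGE process created above, plus the illustrative my_image_table from the process text:

select apex_util.prepare_url(
          p_url => 'f?p=' || :APP_ID || ':0:' || :APP_SESSION
                || ':APPLICATION_PROCESS=GETIMAGE:::FILE_ID:' || id) as image_link
  from my_image_table;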

That's all there is to it.

A few closing comments:
  1. Be mindful of the authorization scheme specified for the application process.  By default, the Authorization Scheme will be "Must Not Be Public User", which is normally acceptable for applications requiring authentication.  But also remember that you could restrict these links based upon other authorization schemes too.
  2. If you want to display the image inline instead of being downloaded by a browser, just change the Content-Disposition from 'attachment' to 'inline'.
  3. A reasonable extension and optimization to this code would be to add a version number to your underlying table, increment it every time the file changes, and then reference this file version number in the URL (see the sketch after this list). Doing this, in combination with a Cache-Control directive in the MIME header, would let the client browser cache it for a long time without ever running your On Demand Process again (and thus, saving your valuable database cycles).
  4. Application Processes can also be defined on the page-level, so if you wished to have the download link be constrained by the authorization scheme on a specific page, you could do this too.
  5. Be careful how this is used. If you don't implement some form of browser caching, then a report which displays 500 images inline on a page will result in 500 requests to the APEX engine and database, per user per page view! Ouch! And then it's a matter of time before a DBA starts hunting for the person slamming their database and reports that "APEX is killing our database". There is an excellent explanation of cache headers here.
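For item 3, a purely hypothetical versioned link might look like the following (it assumes you create a second unrestricted application item, say FILE_VERSION, alongside FILE_ID):

f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID,FILE_VERSION:<some_valid_id>,<file_version>

Because the URL changes whenever the version changes, the browser naturally stops using its stale cached copy after an update.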

Don't use INC and DEC in PL/SQL Libraries

Gerd Volberg - Wed, 2014-03-19 06:01
In my oldest PL/SQL library, back in Forms 4, my first procedures were:

PROCEDURE Inc (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number + 1;
END;

PROCEDURE Dec (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number - 1;
END;

In some variations with one or two parameters, I have used this Increment and Decrement for 20 years, without any trouble in all that time.

Now I have found out that DEC no longer works in newer Oracle Forms versions...

And why? Oracle added a new subtype of DECIMAL to PL/SQL, and it is called "DEC". So you can use it in this way:
DECLARE
   V_Value DEC;
BEGIN
   ...
 
And this behaviour kills my procedure in the PL/SQL library :-(
Many other languages know INC and DEC for incrementing and decrementing. Not Oracle!

Be careful when creating new functions which you want to use for a long time :-)
Gerd

Collaborate 14 Paper: Balancing by 2 Segments

David Haimes - Tue, 2014-03-18 11:55

It is a very common requirement to automatically balance by two different segments of the chart of accounts, typically the Legal Entity and some sort of Management Entity (department, line of business, cost center, etc).  Fusion Financials allows you to balance by up to three segments, which is always a well-received feature.  However, at Collaborate you can learn how one company built a custom solution to do this in E-Business Suite.  I’m honored to have been asked to co-present with GE on this topic; they have done some really good work here and I’m sure it will be of great interest to others too.  So come along and join us – you’ll get the specifics of what they did; the details are below:

Session ID: 13941
When: 10 Apr 11:00 AM-12:00 PM
Room: Level 1, Marco Polo – 801
Speakers: Sangeeta Sameer, General Electric (GE),  David Haimes, Oracle Corporation
Abstract: This paper describes the Custom solution that General Electric (GE) implemented in collaboration with Oracle to achieve Balancing by 2 segments in Release 12.1.3 of Oracle E-Business Suite (EBS). GE has the business need to Balance by both the Legal Entity (LE) and Management Entity (ME) segments in the Chart of Accounts. While Fusion Applications allow Balancing by 3 segments, Oracle EBS Applications do not have this Functionality. This paper covers the requirements and solution in detail for 2 Segment Balancing.


Categories: APPS Blogs

Microservices is SOA, for those who know what SOA is.

Steve Jones - Tue, 2014-03-18 10:05
Ok, so it's started a bit of debate on Twitter and now there have been emails, but in the spirit of openness I thought I'd better blog.  Now it's good that Martin has added a sidebar on SOA to his article on Microservices, but that really makes it worse in many ways.  I'll get to that at the end, but first off let's explain why Microservices is just another SOA implementation pattern.  It's SOD
Categories: Fusion Middleware

Why Is My MView Log Not Purging?

Don Seiler - Sun, 2014-03-16 15:50
A few weeks ago we saw one of our tablespaces growing at a rate much higher than the others. Taking a look, we saw that the biggest users of space were two materialized view logs, one being 110 GB and the other 60 GB. These logs were in place to facilitate the fast refresh of two materialized views, one for each log/table. These materialized views did some aggregations (sum) throughout the day on some important base table data. The fast refreshes were completing successfully many times a day, but the logs were not being purged as expected.

In our case, there was only one mview performing a fast refresh on those base tables, so the mview logs should have been completely purged after each refresh. They certainly shouldn't be growing to over 100 GB. Looking at the data in the mview log, all records had a SNAPTIME$$ value of "4000/01/01 00:00:00", which is the default value for records in the mview log that have not been refreshed. Once they are refreshed, the SNAPTIME$$ value gets set to SYSDATE and can then be evaluated for purging.

But why was this value not being updated after refresh?

 
For those of you unfamiliar with the role of materialized view logs, I'll share this primer from Tim Hall via his excellent Oracle-Base article:

Since a complete refresh involves truncating the materialized view segment and re-populating it using the related query, it can be quite time consuming and involve a considerable amount of network traffic when performed against a remote table. To reduce the replication costs, materialized view logs can be created to capture all changes to the base table since the last refresh. This information allows a fast refresh, which only needs to apply the changes rather than a complete refresh of the materialized view.
Digging deeper led me to MOS DocId 236233.1, which tells us that Oracle compares the MLOG$_<table_name>.SNAPTIME$$ value against SYS.SLOG$.SNAPTIME:

Rows in the MView log are unnecessary if their refresh timestamps MLOG$_<table_name>.SNAPTIME$$ are older than or equal to the oldest entry in SLOG$.SNAPTIME for this log.

MLOG$_<table_name>.SNAPTIME$$ <= MIN (SLOG$.SNAPTIME)
Here's where we saw the real problem.

SQL> select MOWNER,MASTER,SNAPSHOT,SNAPID,SNAPTIME,SSCN,USER#
 2 from sys.slog$
 3 where mowner='FOO' and master='BAR';

 no rows selected

If the purge mechanism checks SLOG$.SNAPTIME then of course nothing is going to happen, as the materialized view is NOT registered in SYS.SLOG$!

We re-created the MVIEW from scratch on our development database and had the same results, which indicates it's something systemic in Oracle, so we opened an SR. After the standard back-and-forth of trying the same things over and over, Oracle Support said that this was actually expected behavior:
This mview is defined as fast refreshable with aggregates. The mv log is defined with PRIMARY KEY INCLUDING NEW VALUES.

In order to support fast refresh the mv log should include ROWID as well. Please review the Restrictions on Fast Refresh on Materialized Views with Aggregates located here:
http://docs.oracle.com/cd/E11882_01/server.112/e25554/basicmv.htm#DWHSG8203 There are additional restrictions depending on the operations performed. As an example, SEQUENCE should also need to be added to the mv log if direct loads are performed on new_invoice_record.

This turned out to be the case. We recreated the mview log with the ROWID specification, then re-created the materialized view and, sure enough, the mview was registered in SYS.SLOG$ and refreshes were purging the log as expected.
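For reference, a minimal sketch of the kind of corrected log definition involved (the table name follows the BAR example above; the exact options you need depend on your own refresh requirements):

DROP MATERIALIZED VIEW LOG ON bar;

CREATE MATERIALIZED VIEW LOG ON bar
  WITH ROWID, PRIMARY KEY, SEQUENCE
  INCLUDING NEW VALUES;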

I was more than a little frustrated then that Oracle would let us create the MVIEW without any warnings or errors in the first place. The database obviously detected something wrong since it wouldn't register them in SYS.SLOG$. Their last response was that, since the MVIEW itself was refreshing successfully, no error should be reported. This fails to address the question for me, so I'm going to push back a little harder and will share what I find.

For now, though, we need to schedule a maintenance window to recreate these materialized views and their logs and see if we can reclaim some disk space afterward (perhaps a future post!).
Categories: DBA Blogs

Find the enabled events using ORADEBUG EVENTDUMP

Mihajlo Tekic - Sat, 2014-03-15 09:46

Just a quick note on how to use ORADEBUG in order to find events that are enabled on system or session level.

Starting from Oracle 10.2 you could use ORADEBUG EVENTDUMP in order to get all events enabled either on session or system level. (not sure if this is available in 10.1)


The synopsis is:


oradebug eventdump <level>


Where level is either system, process or session:
 


SQL> oradebug doc event action eventdump
eventdump
- list events that are set in the group
Usage
-------
eventdump( group < system | process | session >)


For demonstration purposes I will set three events, two on session and one on system level.
 


SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> alter session set events '942 trace name errorstack';

Session altered.

SQL> alter system set events 'trace[px_control][sql:f54y11t3njdpj]';

System altered.


Now, let's check what events are enabled on SESSION or SYSTEM level using ORADEBUG EVENTDUMP

SQL> oradebug setmypid

SQL> oradebug eventdump session;
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
sql_trace level=8
942 trace name errorstack

SQL> oradebug eventdump system
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]


As you may already know, ORADEBUG requires the SYSDBA privilege. In order to check the events set for another session, one could attach to that session's process using oradebug setospid or oradebug setorapid.
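For example, to dump the events of another session by attaching to its server process (the OS process id here is made up):

SQL> oradebug setospid 12345
SQL> oradebug eventdump session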

This note was more for my own reference. I hope someone else finds it useful too.

What to Expect When You're Changing the DB_UNIQUE_NAME

Don Seiler - Wed, 2014-03-12 18:31
I recently had to change the db_unique_name of a database to make it jibe with our typical database/DataGuard naming policy of appending the datacenter location. For the sake of this post, let's say it was changed from ORCL to ORCL_NYC, since this database is in our fictional New York City datacenter.

I did a quick set of tests and thought I'd share the findings to save anyone any unpleasant surprises. Here are the things to expect when changing DB_UNIQUE_NAME.

Change the Parameter Value
First we obviously have to change the value. How we do so is important.
The command:
alter system set db_unique_name=orcl_nyc scope=spfile;

will result in a db_unique_name of ORCL_NYC, in all uppercase, which is used for the path changes we'll discuss later. However, using quotes instead:
alter system set db_unique_name='orcl_nyc' scope=spfile;

will result in a lowercase orcl_nyc value used in the parameter and some paths. In either case, the fun begins when you next restart the instance!



ADR Location (i.e. alert log)
The ADR location appends the DB_UNIQUE_NAME to the location specified by the DIAGNOSTIC_DEST initialization parameter (which defaults to the $ORACLE_BASE environment variable value). When you restart after setting the DB_UNIQUE_NAME, your ADR files will be written to a new location based on the new value. Probably of most interest to you is that your alert log will move, so any tools or scripts that reference that file directly (e.g. error-scraping scripts or log-rotation jobs) will need to be updated.

Regardless of quotes or not, the ADR path always uses a lowercase string.

Before:

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL
SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/app/diag/rdbms/orcl/or
                                                 cl/trace

After:

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL_NYC
SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/app/diag/rdbms/orcl_ny
                                                 c/orcl/trace


Datafile and Online Redo Log OMF Location
If you have the db_create_file_dest (and optionally db_create_online_log_dest_N) parameter set, then Oracle will use the DB_UNIQUE_NAME value in the OMF location for any new datafiles and redo logs created after the change, assuming a full path isn't specified.

SQL> alter tablespace users add datafile size 10m;

Tablespace altered.

SQL> select file_name from dba_data_files
  2  where tablespace_name='USERS';

FILE_NAME
--------------------------------------------------------------------------------
/oracle/app/oradata/ORCL/datafile/data_D-ORCL_I-3995253326_TS-USERS_FNO-4_1r
p30nup
/oracle/app/oradata/ORCL_NYC/datafile/o1_mf_users_9l1r8dx9_.dbf

In this case, Oracle will use an uppercase string regardless of whether or not the DB_UNIQUE_NAME is in upper or lower case. Note that existing files won't be affected; this will only apply to new files.

If this database is part of a DataGuard configuration, you'll want to be sure to update your db_file_name_convert and log_file_name_convert parameters to point to the new location.
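A minimal sketch of that change (the paths follow the layout shown above; which site you set them on, and in which direction, depends on your configuration):

alter system set db_file_name_convert =
  '/oracle/app/oradata/ORCL/', '/oracle/app/oradata/ORCL_NYC/' scope=spfile;

alter system set log_file_name_convert =
  '/oracle/app/oradata/ORCL/', '/oracle/app/oradata/ORCL_NYC/' scope=spfile;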

FRA (Backups, Archivelogs, Flashback Logs)
In the same spirit of OMF, the FRA will also change locations. Similar to the previous case, the uppercase value of the DB_UNIQUE_NAME is used in the path, regardless of the original case. So, after a change and a couple of log switches, you would see something like this:

SQL> select name from v$archived_log;

NAME
--------------------------------------------------------------------------------

/oracle/app/fast_recovery_area/ORCL/archivelog/2014_03_12/o1_mf_1_13_9l1sgt20_.a
rc

/oracle/app/fast_recovery_area/ORCL/archivelog/2014_03_12/o1_mf_1_14_9l1shfjo_.a
rc

/oracle/app/fast_recovery_area/ORCL_NYC/archivelog/2014_03_12/o1_mf_1_15_9l1sj59
6_.arc

/oracle/app/fast_recovery_area/ORCL_NYC/archivelog/2014_03_12/o1_mf_1_16_9l1sjw4
k_.arc

Again, this will only affect newly created files. Existing backups, archivelogs and flashback logs will not be affected: they remain cataloged in their present locations and will be accessed just fine. RMAN will delete them when needed (or commanded) and then you could choose to delete the empty directories.

Oracle Wallet
I admit I didn't think of this one. If you use an Oracle Wallet and do not specify a location in the sqlnet.ora file, Oracle looks in the default location of $ORACLE_BASE/admin/<DB_UNIQUE_NAME>/wallet/. Even more interesting, the DB_UNIQUE_NAME value in the path is case-sensitive. So you'll need to be aware of this when moving your wallet files to the new location. Here is a quick look at my findings on this matter:

-- db_unique_name set with no quotes
SQL> select wrl_parameter from v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
/oracle/app/admin/ORCL_NYC/wallet

-- db_unique_name set with quotes, lowercase
SQL> select wrl_parameter from v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
/oracle/app/admin/orcl_nyc/wallet

I can't say if this makes a difference when running on Windows, which I believe is case-insensitive. But on Linux it makes a difference.

Conclusion
That wraps this one up. I hope it helps a few people out and saves them an hour or two of head-scratching or worse. If there's anything I forgot, please let me know and I'll update this post.
Categories: DBA Blogs

What is real-time? Depends on who you ask

Steve Jones - Wed, 2014-03-12 13:29
"Real-time" its a word that gets thrown about a lot in IT and its worth documenting a few of the different ways it gets used Hard Real-time This is what Real-time Java was created to address (along with Soft Real-time) what is this?  Easiest way to say it is that often in Hard Real-time environments the following statement is true If it doesn't finish in X milliseconds then people might die So
Categories: Fusion Middleware

Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Mary Ann Davidson - Tue, 2014-03-11 16:08
Many commercial off-the-shelf (COTS) vendors have recently seen an uptick of interest by their customers in third party static analysis or static analysis of binaries (compiled code). Customers who are insisting upon this in some cases have referenced the (then-)SANS Top Twenty Critical Controls (http://www.sans.org/critical-security-controls/) to support their position, specifically, Critical Control 6, Application Software Security:

"Configuration/Hygiene: Test in-house developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software. If source code is not available, these organizations should test compiled code using static binary analysis tools (emphases added). In particular, input validation and output encoding routines of application software should be carefully reviewed and tested."

Recently, the "ownership" of the 20 Critical Controls has passed to the Council on CyberSecurity and the particular provision on third party static analysis has been excised. Oracle provided feedback on the provision (more on which below) and appreciates the responsiveness of the editors of the 20 Critical Controls to our comments.

The argument for third party code analysis is that customers would like to know that they are getting “reasonably defect-free” code in a product. All things being equal, knowing you are getting better quality code than not is a good thing, while noting that there is no defect-free or even security defect-free software – no tool finds all problems and they generally don’t find design defects. Also, a product that is “testably free” of obvious defects may still have significant security flaws – like not having any authentication, access control, or auditing. Nobody is arguing that static analysis isn’t a good thing – that’s why a lot of vendors already do it and don’t need a third party to “attest” to their code (assuming there is a basis for trusting the third party other than their saying “trust us”).

Oracle believes third party static analysis is at best infeasible for organizations with mature security assurance practices and – well, a bad idea, not to put too fine a point on it. The reasons why it is a bad idea are expanded upon in detail below, and include: 1) worse, not better, security; 2) increased security risk to customers; 3) an increased risk of intellectual property theft; and 4) increased costs for commercial software providers without a commensurate increase in security. Note that this discussion does not address the use of other tools – such as so-called web vulnerability analysis tools – that operate against “as installed” object code. These tools also have challenges (i.e., a high rate of false positives) but do not in general pose the same security threats, risks and high costs that static analysis as conducted by third parties does.

Discussion: Static analysis tools are one of many means by which vendors of software, including commercial off-the-shelf (COTS) software, can find coding defects that may lead to exploitable security vulnerabilities. Many vendors – especially large COTS providers - do static analysis of their own code as part of a robust, secure software development process. In fact, there are many different types of testing that can be done to improve security and reliability of code, to include regression testing (ensuring that changes to code do not break something else, and that code operates correctly after it has been modified), “fuzzing” tools, web application vulnerability tools and more. No one tool finds all issues or is necessarily even suitable for all technologies or all programming languages. Most companies use a multiplicity of tools that they select based on factors such as cost, ease-of-use, what the tools find, how well and how accurately, programming languages the tool understands, and other factors. Note of course that these tools must be used in a greater security assurance context (security training, ethical hacking, threat modeling, etc.), echoing the popular nostrum that security has to be “baked in, not bolted on.” Static analysis and other tools can’t “bake in” security – just find coding errors that may lead to security weaknesses. More to the point, static analysis tools should correctly be categorized as “code analysis tools” rather than “code testing tools,” because they do not automatically produce accurate or actionable results when run and cannot be used, typically, by a junior developer or quality assurance (QA) person.

These tools must in general be “tuned” or “trained” or in some cases “programmed” to work against a particular code base, and thus the people using them need to be skilled developers, security analysts or QA experts. Oracle has spent many person-years evaluating the tools we use, and has made a significant commitment to a particular static analysis tool which works the best against much – but not all – of our code base. We have found that results are not typically repeatable from code base to code base even within a company. That is, just because the tool works well on one code base does not mean it will work equally well on another product – another reason to work with a strong vendor who will consider improving the tool to address weaknesses. In short, static analysis tools are not a magic bullet for all security ills, and the odds of a third party being able to do meaningful, accurate and cost-effective static code analysis are slim to none.

1. Third party static analysis is not industry-standard practice.
Despite the marketing claims of the third parties that do this, “third party code review” is not “industry best practice.” As it happens, it is certainly not industry-standard practice for multiple reasons, not the least of which is the lack of validation of the entities and tools used to do such “validation” and the lack of standards to measure efficacy, such as what does the tool find, how well, and how cost effectively? As Juvenal so famously remarked, “Quis custodiet ipsos custodes?” (Who watches the watchmen?) Any third party can claim, and many do, that “we have zero false positives” but there is no way to validate such puffery – and it is puffery. (Sarcasm on: I think any company that does static analysis as a service should agree to have their code analyzed by a competitor. After all, we only have Company X’s say-so that they can find all defects known to mankind with zero false positives, whiten your teeth and get rid of ring-around-the-collar, all with a morning-fresh scent!)

The current International Standards Organization (ISO) standard for assurance (which encompasses the validation of secure code development), the international Common Criteria (ISO-15408), is, in fact, retreating from the need for source code access currently required at higher assurance levels (e.g., Evaluation Assurance Level (EAL) 4). While limited vulnerability analysis has been part of higher assurance evaluations currently being deprecated by the U.S. National Information Assurance Partnership (NIAP), static analysis has not been a requirement at commercial assurance levels. Hence, “the current ISO assurance standard” does not include third party static code analysis and thus, “third party static analysis” is not standard industry practice. Lastly, “third party code analysis” is clearly not “industry best practice” if for no other reason than all the major COTS vendors are opposed to it and will not agree to it. We are already analyzing our own code, thanks very much.

(It should be noted that third party systematic manual code review is equally impractical for the code bases of most commercial software. The Oracle database, for example, has multiple millions of lines of code. Manual code review for the scale of code most COTS vendors produce would accomplish little except pad the bank accounts of the consultants doing it without commensurate value provided (or risk reduction) for either the vendor or the customers of the product. Lastly, the nature of commercial development is that the code is continuously in development: the code base literally changes daily. Third party manual code review in these circumstances would accomplish absolutely nothing. It would be like painting a house while it is under construction.)

2. Many vendors already use third party tools to find coding errors that may lead to exploitable security vulnerabilities.
As noted, many large COTS vendors have well-established assurance programs that include the use of a multiplicity of tools to attempt to find not merely defects in their code, but defects that lead to exploitable security vulnerabilities. Since only a vendor can actually fix a product defect in their proprietary code, and generally most vulnerabilities need a “code fix” to eliminate the vulnerability, it makes sense for vendors to run these tools themselves. Many do.

Oracle, for example, has a site license for a COTS static analysis tool and Oracle also produces a static analysis tool in-house (Parfait, which was originally developed by Sun Labs). With Parfait, Oracle has the luxury of enhancing the tool quickly to meet Oracle-specific needs. Oracle has also licensed a web application vulnerability testing tool, and has produced a number of in-house tools that focus on Oracle’s own (proprietary) technologies. It is unlikely that any third party tool can fuzz Oracle PL/SQL as well as Oracle’s own tools, or analyze Oracle’s proprietary SQL networking protocol as well as Oracle’s in-house tools do. The Oracle Ethical Hacking Team (EHT) also develops tools that they use to “hack” Oracle products, some of which are “productized” for use by other development and QA teams. As Oracle runs Oracle Corporation on Oracle products, Oracle has a built-in incentive to write and deliver secure code. (In fact, this is not unusual: many COTS vendors run their own businesses on their own products and are thus highly motivated to build secure products. Third party code testers typically do not build anything that they run their own enterprises on.)

The above tool usage within Oracle is in addition to extensive regression testing of functionality to high levels of code coverage, including regression testing of security functionality. Oracle also uses other third party security tools (many of which are open source) that are vetted and recommended by the Oracle Software Security Assurance (OSSA) team. Additionally, Oracle measures compliance with “use of automated tools” as part of the OSSA program. Compliance against OSSA is reported quarterly to development line-of-business owners as well as executive management (the company president and the CEO). Many vendors have similarly robust assurance programs that include static analysis as one of many means to improve product security.

Several large software vendors have acquired static analysis (or other) code analysis tools. HP, for example, acquired both Fortify and WebInspect, and IBM acquired Ounce Labs. This is indicative of these vendors’ commitment both to “the secure code marketplace” and, one assumes, to secure development within their own organizations. Note that while both vendors have service offerings for the tools, neither is pushing “third party code testing,” which says a lot. Everything, actually.

Note that most vendors will not provide static analysis results to customers for valid business reasons, including ensuring the security of all customers. For example, a vendor who finds a vulnerability may often fix the issue in the version of product that is under development (i.e., the “next product train leaving the station”). Newer versions are more secure (and less costly to maintain since the issue is already fixed and no patch is required). However, most vendors do not - or cannot - fix an issue in all shipping versions of product and certainly not in versions that have been deprecated. Telling customers the specifics of a vulnerability (i.e., by showing them scan results) would put all customers on older, unfixed or deprecated versions at risk.

3. Testing COTS for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software increases costs without a commensurate return on investment (ROI).
The use of static code analysis software is a highly technical endeavor requiring skilled development personnel. There are skill requirements and a necessity for detailed operational knowledge of how the software is built to help eliminate false positives, factors that raise the cost of this form of “testing.” Additionally, static code analysis tools are not the tool of choice for detecting malware or backdoors. (It is, in fact, trivial to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers were told where in the code to look for it. They could not find it – even knowing where to look.)

If the real concern of a customer insisting on a third party code scan is malware and backdoor detection: it won’t work and thus represents an extremely expensive – and useless – distraction.

4. Third party code analysis will diminish overall product security.
It is precisely leading vendors’ experience with static analysis tools that contributes to their unwillingness to have third parties attempt to analyze code – emphasis on “attempt.” None of these tools are “plug and play": in some cases, it has taken years, not months, to be able to achieve actionable results, even from the best available static analysis tools. These are in fact code analysis tools and must be “tuned” – and in some cases actually “programmed”- to understand code, and must typically be run by an experienced developer (that is, a developer who understand the particular code base being analyzed) for results to be useful and actionable. There are many reasons why static analysis tools either raise many false positives, or skip entire bodies of code. For example, because of the way Oracle implements particular functionality (memory management) in the database, static analysis tools that look for buffer overflows either do not work, or raise false positives (Oracle writes its own checks to look for those issues).

The rate of false positives from use of a “random” tool run by inexperienced operators – especially on a code base as large as that of most commercial products – would put a vendor in the position of responding to unsubstantiated fear, uncertainty, and doubt (FUD). In a code base of 10,000,000 lines of code, even a false positive rate of one per 1000 lines of code would yield 10,000 “false positives” to chase down. The cost of doing this is prohibitive. (One such tool run against a large Oracle code base generated a false positive for every 3.4 lines of code, or about 160,000 false positives in toto due to the size of the code base.)

This is why most people using these tools must “tune” them to drown out “noise.” Many vendors have already had this false positive issue with customers running web application vulnerability tools and delivering in some cases hundreds of pages of “alarms” in which there were, perhaps, a half page of actionable issues. The rate of false positives is the single biggest determinant whether these tools are worth using or an expensive distraction (aka “rathole”).

No third party firm has to prove that their tool is accurate – especially not if the vendor is forced to use a third party to validate their code – and thus there is little to no incentive to improve their tool. Consultants get paid more the longer they are on site and working. A legislative or “standards” requirement for “third party code analysis” is therefore a license for the third party doing it to print money. Putting it differently, if the use of third party static analysis was accurate and cost effective, why wouldn’t vendors already be doing it? Instead, many vendors use static analysis tools in-house, because they own the code, and are willing to assume the cost of going up the learning curve for a long term benefit to them of reduced defects (and reduced cost of fixing these defects as more vulnerabilities are found earlier in the development cycle).

In short, the use of a third party is the most expensive, non-useful, high-cost attempt at “better code” most vendors could possibly use, and would result in worse security, not better security as in-house “security boots on the ground” are diverted to working with the third party. It is unreasonable to expect any vendor to in effect tune a third party tool and train the third party on their code – and then have to pay the third party for the privilege of doing it. Third party static analysis represents an unacceptably high opportunity cost caused by the “crowding out effect” of taking scarce security resources and using them on activity of low value to the vendor and to their customers. The only “winner” here is the third party. Ka-chink. Ka-chink.

5. Third party code analysis puts customers at increased risk.
As noted, there is no standard for what third party static analysis tools find, let alone how well and how economically they find it. More problematically, there are no standards for protection of any actual vulnerabilities these tools find. In effect, third party code analysis allows the third party to amass a database of unfixed vulnerabilities in products without any requirements for data protection or any recourse should that information be sold, incorporated into a hacking tool or breached. The mere fact of a third party amassing such sensitive information makes the third party a hacker target. Why attempt to hack products one by one if you can break into a third party’s network and get a listing of defects across multiple products – in the handy “economy size?” The problem is magnified if the “decompiled” source code is stored at the third party: such source code would be an even larger hacker target than the list of vulnerabilities the third party found.

Most vendors have very strict controls not merely on their source code, but on the information about product vulnerabilities that they know about and are triaging and fixing. Oracle Corporation, for example, has stringent security vulnerability handling policies that are promulgated and “scored” as part of Oracle’s software and hardware assurance program. Oracle uses its own secure database technology (row level access control) to enforce “need to know” on security vulnerabilities, information that is considered among the most sensitive information the company has. Security bugs are not published (meaning, they are not generally searchable and readable across the company or accessible by customers). Also, security bug access is stringently limited to those working on a bug fix (and selected others, such as security analysts and the security point of contact (SPOC) for the development area).

One of the reasons Oracle is stringent about limiting access to security vulnerability information is that this information often does leak when “managed” by third parties, even third parties with presumed expertise in secret-keeping. In the past, MI5 circulated information about a non-public Oracle database vulnerability among UK defense and intelligence entities (it should be noted that nobody reported this to Oracle, despite the fact that only Oracle could issue a patch for the issue). Oracle was only notified about the bug by a US commercial company to whom the information had leaked. As the saying goes, two people can’t keep a secret.

There is another risk that has not generally been considered in third party static analysis, and that is the increased interest in cyber-offense. There is evidence that the market for so-called zero-day vulnerabilities is being fueled in part by governments seeking to develop cyber-offense tools. (StuxNet, for example, allegedly made use of at least four “zero-day” vulnerabilities: that is, vulnerabilities not previously reported to a vendor.) Coupled with the increased interest of military suppliers/system integrators in getting into the “cyber security business,” it is not a stretch to think that at least some third parties getting into the “code analysis” business can and would use that as an opportunity to “sell to both sides” – use legislative fiat or customer pressure to force vendors to consent to static analysis, and then surreptitiously sell the vulnerabilities they found to the highest bidder as zero-days. Who would know?

Governments in particular cannot reasonably simultaneously fuel the market in zero days, complain at how irresponsible their COTS vendors are for not building better code and/or insist on third party static analysis. This is like stoking the fire and then complaining that the room is too hot.

6. Equality of access to vulnerability information protects all customers.
Most vendors do not provide advance information on security vulnerabilities to some customers but not others, or more information about security vulnerabilities to some customers but not others. As noted above, one reason for this is the heightened risk that such information will leak, and put the customers “not in the know” at increased risk. Not to mention, all customers believe their secrets are as worthy of protection as any other customer: nobody wants to be on the “Last Notified” list.

Thus, third party static analysis is problematic because it may result in violating basic fairness and equality in terms of vulnerability disclosure in the rare instances where these scans actually find exploitable vulnerabilities. The business model for some vendors offering static analysis as a service is to convince the customers of the vendor that the vendor is an evil slug and cannot be trusted, and thus the customer should insist on the third party analyzing the vendors’ code base.

There is an implicit assumption that the vendor will fix vulnerabilities that the third party static analysis finds immediately, or at least, before the customer buys/installs the product. However, the reality is more subtle than that. For one thing, it is almost never the case that a vulnerability exists in one and only one version of product: it may also exist on older versions. Complicating the matter: some issues cannot be “fixed” via a patch to the software but require the vendor to rearchitect functionality. This typically can only be done in so-called major product releases, which may only occur every two to three years. Furthermore, such issues often cannot be fixed on older versions because the scope of change is so drastic it will break dependent applications. Thus, a customer (as well as the third party) has information about a “not-easily-fixed” vulnerability which puts other customers at a disadvantage and at risk to the extent that information may leak.

Therefore, allowing some customers access to the results of a third party code scan in advance of a product release would violate most vendors’ disclosure policies as well as actually increasing risk to many, many customers, and potentially that increased risk could exist for a long period of time.

7. Third party code analysis sets an unacceptable precedent that risks vendors’ core intellectual property (IP).
COTS vendors maintain very tight control over their proprietary source code because it is core, high-value IP. As such, most COTS vendors will not allow third parties to conduct static analysis against source code (and for purposes of this discussion, this includes static analysis against binaries, which typically violates standard license agreements).

Virtually all companies are aware of the tremendous cost of intellectual property theft: billions of dollars per year, according to published reports. Many nation states, including those that condone if not encourage wholesale intellectual property theft, are now asking for source code access as a condition of selling COTS products into their markets. Most COTS vendors have refused these requests. One can easily imagine that for some nation states, the primary reason to request source code access (or, alternatively, “third party analysis of code”) is for intellectual property theft or economic espionage. Once a government-sanctioned third party has access to source code, so may the government. (Why steal source code if you can get a vendor to gift wrap it and hand it to you under the rubric of “third party code analysis?”)

Another likely reason some governments may insist on source code access (or third party code analysis) is to analyze the code for weaknesses they then exploit for their own national security purposes (e.g., more intellectual property theft). All things being equal, it is easier to find defects in source code than in object code. Refusing to accede to these requests – in addition to, of course, a vendor doing its own code analysis and defect remediation – thus protects all customers. In short, agreeing to any third party code analysis involving source code – either static analysis or static analysis of binaries - would make it very difficult if not impossible for a vendor to refuse any other similar requests for source code access, which would put their core intellectual property at risk. Third party code analysis is a very bad idea because there is no way to “undo” a precedent once it is set.

Summary
Software should have a wide variety of tests performed before it is shipped and additional security tests (such as penetration tests) should be used against “as-deployed” software. However, the level of testing should be commensurate with the risk, which is both basic risk management and appropriate (scarce) resource management. A typical firm has many software elements, most probably COTS, and to suggest that they all be tested with static analysis tools begs a sanity check. The scope of COTS alone argues against this requirement: COTS products run the gamut from operating systems to databases to middleware, business intelligence and other analytic tools, business applications (accounting, supply chain management, manufacturing) as well as specialized vertical market applications (e.g., clinical trial software), representing a number of programming languages and billions – no, hundreds of billions – of lines of code.

The use of static analysis tools in development to help find and remediate security vulnerabilities is a good assurance practice, albeit a difficult one because of the complexity of software and the difficulty of using these tools. These tools are only truly useful when used by the producer of the software in a cost-effective way geared towards sustained vulnerability reduction over time. The mandated use of third party static analysis to “validate” or “test” code is unsupportable, for reasons of cost (especially opportunity cost), precedence, increased risk to vendors’ IP and increased security risk to customers. The third party static code analysis market is little more than a subterfuge for enabling the zero-day vulnerability market: bad security, at a high cost, and very bad public policy.

Book of the Month
It’s been so long since I blogged, it’s hard to pick out just a few books to recommend. Here are three, and a "freebie":

Hawaiki Rising: Hōkūle’a, Nainoa Thompson and the Hawaiian Renaissance by Sam Low
Among the most amazing tidbits of history are the vast voyages that the Polynesians made to settle (and travel among) Tahiti, Hawai’i and Aotearoa (New Zealand) using navigational methods largely lost to history. (Magellan – meh – he had a compass and sextant.) This book describes the re-creation of Polynesian wayfinding in Hawai’i in the 1970s via the building of a double-hulled Polynesian voyaging canoe, the Hōkūle’a, and how one amazing Hawaiian (Nainoa Thompson) – under the tutelage of one of the last practitioners of wayfinding (Mau Piailug) – made an amazing voyage from Hawai’i to Tahiti using only his knowledge of the stars, the winds, and the currents. (Aside: one of my favorite songs is “Hōkūle’a Hula,” which describes this voyage, and is so nicely performed by Erik Lee.) Note: the Hōkūle’a is currently on a voyage around the world.

The Korean War by Max Hastings
Max Hastings is one of the few historians who I think is truly balanced: he looks at the moral issues of history, weighs them, and presents a fair analysis – not “shove-it-down-your-throat revisionism.” He also makes use of a lot of first-person accounts, which makes history come alive. The Korean War is in so many ways a forgotten war, not least in the fact that it literally is a war that never ended. It’s a good lesson of history, as it is made clear that the US drew down its military so rapidly and drastically after World War II that we were largely (I am trying not to say “completely”) unprepared for Korea. (Moral: there is always another war.)

Code Talker by Chester Nez
Many people now know of the crucial role that members of the Navajo Nation played in the Pacific War: the code they created that provided a crucial advantage (and was never broken). This book is a first-person account of the experiences of one Navajo code talker, from his experiences growing up on the reservation to his training as a Marine, and his experiences in the Pacific Theater. Fascinating.

Securing Oracle Database 12c: A Technical Primer
If you are a DBA or security professional looking for more information on Oracle database security, then you will be interested in this book. Written by members of Oracle's engineering team and the President of the International Oracle User Group (IOUG), Michelle Malcher, the book provides a primer on capabilities such as data redaction, privilege analysis and conditional auditing. If you have Oracle databases in your environment, you will want to add this book to your collection of professional information. Register now for the complimentary eBook and learn from the experts.


Mandated Third Party Static Analysis: Bad Public Policy, Bad Security

Mary Ann Davidson - Tue, 2014-03-11 16:08

Many commercial off-the-shelf (COTS) vendors have recently seen an uptick of interest by their customers in third party static analysis or static analysis of binaries (compiled code). Customers who are insisting upon this in some cases have referenced the (then-)SANS Top Twenty Critical Controls (http://www.sans.org/critical-security-controls/) to support their position, specifically, Critical Control 6, Application Software Security:

"Configuration/Hygiene: Test in-house developed and third-party-procured web and other application software for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software. If source code is not available, these organizations should test compiled code using static binary analysis tools (emphases added). In particular, input validation and output encoding routines of application software should be carefully reviewed and tested."


Recently, the "ownership" of the 20 Critical Controls has passed to the Council on CyberSecurity and the particular provision on third party static analysis has been excised. Oracle provided feedback on the provision (more on which below) and appreciates the responsiveness of the editors of the 20 Critical Controls to our comments.


The argument for third party code analysis is that customers would like to know that they are getting “reasonably defect-free” code in a product. All things being equal, knowing you are getting better-quality code is a good thing – while noting that there is no defect-free, or even security-defect-free, software: no tool finds all problems, and these tools generally don’t find design defects. Also, a product that is “testably free” of obvious defects may still have significant security flaws – like not having any authentication, access control, or auditing. Nobody is arguing that static analysis isn’t a good thing – that’s why a lot of vendors already do it and don’t need a third party to “attest” to their code (assuming there is a basis for trusting the third party other than their saying “trust us”).


Oracle believes third party static analysis is, at best, infeasible for organizations with mature security assurance practices and – not to put too fine a point on it – a bad idea. The reasons why it is a bad idea are expanded upon in detail below, and include: 1) worse, not better, security; 2) increased security risk to customers; 3) an increased risk of intellectual property theft; and 4) increased costs for commercial software providers without a commensurate increase in security. Note that this discussion does not address the use of other tools – such as so-called web vulnerability analysis tools – that operate against “as installed” object code. These tools also have challenges (e.g., a high rate of false positives) but do not in general pose the same security threats, risks and high costs that static analysis as conducted by third parties does.


Discussion: Static analysis tools are one of many means by which vendors of software, including commercial off-the-shelf (COTS) software, can find coding defects that may lead to exploitable security vulnerabilities. Many vendors – especially large COTS providers - do static analysis of their own code as part of a robust, secure software development process. In fact, there are many different types of testing that can be done to improve security and reliability of code, to include regression testing (ensuring that changes to code do not break something else, and that code operates correctly after it has been modified), “fuzzing” tools, web application vulnerability tools and more. No one tool finds all issues or is necessarily even suitable for all technologies or all programming languages. Most companies use a multiplicity of tools that they select based on factors such as cost, ease-of-use, what the tools find, how well and how accurately, programming languages the tool understands, and other factors. Note of course that these tools must be used in a greater security assurance context (security training, ethical hacking, threat modeling, etc.), echoing the popular nostrum that security has to be “baked in, not bolted on.” Static analysis and other tools can’t “bake in” security – just find coding errors that may lead to security weaknesses. More to the point, static analysis tools should correctly be categorized as “code analysis tools” rather than “code testing tools,” because they do not automatically produce accurate or actionable results when run and cannot be used, typically, by a junior developer or quality assurance (QA) person.


These tools must in general be “tuned,” “trained” or in some cases “programmed” to work against a particular code base, and thus the people using them need to be skilled developers, security analysts or QA experts. Oracle has spent many person-years evaluating the tools we use, and has made a significant commitment to a particular static analysis tool which works the best against much – but not all – of our code base. We have found that results are not typically repeatable from code base to code base, even within a company. That is, just because the tool works well on one code base does not mean it will work equally well on another product – another reason to work with a strong vendor who will consider improving the tool to address weaknesses. In short, static analysis tools are not a magic bullet for all security ills, and the odds of a third party being able to do meaningful, accurate and cost-effective static code analysis are slim to none.


1. Third party static analysis is not industry-standard practice.
Despite the marketing claims of the third parties that do this, “third party code review” is not “industry best practice.” As it happens, it is certainly not industry-standard practice for multiple reasons, not the least of which is the lack of validation of the entities and tools used to do such “validation” and the lack of standards to measure efficacy, such as what does the tool find, how well, and how cost effectively? As Juvenal so famously remarked, “Quis custodiet ipsos custodes?” (Who watches the watchmen?) Any third party can claim, and many do, that “we have zero false positives” but there is no way to validate such puffery – and it is puffery. (Sarcasm on: I think any company that does static analysis as a service should agree to have their code analyzed by a competitor. After all, we only have Company X’s say-so that they can find all defects known to mankind with zero false positives, whiten your teeth and get rid of ring-around-the-collar, all with a morning-fresh scent!)


The current International Organization for Standardization (ISO) standard for assurance (which encompasses the validation of secure code development), the international Common Criteria (ISO-15408), is, in fact, retreating from the need for source code access currently required at higher assurance levels (e.g., Evaluation Assurance Level (EAL) 4). While limited vulnerability analysis has been part of higher assurance evaluations currently being deprecated by the U.S. National Information Assurance Partnership (NIAP), static analysis has not been a requirement at commercial assurance levels. Hence, “the current ISO assurance standard” does not include third party static code analysis and thus, “third party static analysis” is not standard industry practice. Lastly, “third party code analysis” is clearly not “industry best practice” if for no other reason than that all the major COTS vendors are opposed to it and will not agree to it. We are already analyzing our own code, thanks very much.


(It should be noted that third party systematic manual code review is equally impractical for the code bases of most commercial software. The Oracle database, for example, has multiple millions of lines of code. Manual code review for the scale of code most COTS vendors produce would accomplish little except pad the bank accounts of the consultants doing it without commensurate value provided (or risk reduction) for either the vendor or the customers of the product. Lastly, the nature of commercial development is that the code is continuously in development: the code base literally changes daily. Third party manual code review in these circumstances would accomplish absolutely nothing. It would be like painting a house while it is under construction.)


2. Many vendors already use third party tools to find coding errors that may lead to exploitable security vulnerabilities.
As noted, many large COTS vendors have well-established assurance programs that include the use of a multiplicity of tools to attempt to find not merely defects in their code, but defects that lead to exploitable security vulnerabilities. Since only a vendor can actually fix a product defect in their proprietary code, and generally most vulnerabilities need a “code fix” to eliminate the vulnerability, it makes sense for vendors to run these tools themselves. Many do.


Oracle, for example, has a site license for a COTS static analysis tool and Oracle also produces a static analysis tool in-house (Parfait, which was originally developed by Sun Labs). With Parfait, Oracle has the luxury of enhancing the tool quickly to meet Oracle-specific needs. Oracle has also licensed a web application vulnerability testing tool, and has produced a number of in-house tools that focus on Oracle’s own (proprietary) technologies. It is unlikely that any third party tool can fuzz Oracle PL/SQL as well as Oracle’s own tools, or analyze Oracle’s proprietary SQL networking protocol as well as Oracle’s in-house tools do. The Oracle Ethical Hacking Team (EHT) also develops tools that they use to “hack” Oracle products, some of which are “productized” for use by other development and QA teams. As Oracle runs Oracle Corporation on Oracle products, Oracle has a built-in incentive to write and deliver secure code. (In fact, this is not unusual: many COTS vendors run their own businesses on their own products and are thus highly motivated to build secure products. Third party code testers typically do not build anything that they run their own enterprises on.)


The above tool usage within Oracle is in addition to extensive regression testing of functionality to high levels of code coverage, including regression testing of security functionality. Oracle also uses other third party security tools (many of which are open source) that are vetted and recommended by the Oracle Software Security Assurance (OSSA) team. Additionally, Oracle measures compliance with “use of automated tools” as part of the OSSA program. Compliance against OSSA is reported quarterly to development line-of-business owners as well as executive management (the company president and the CEO). Many vendors have similarly robust assurance programs that include static analysis as one of many means to improve product security.


Several large software vendors have acquired static analysis (or other) code analysis tools: HP, for example, acquired both Fortify and WebInspect, and IBM acquired Ounce Labs. This is indicative both of these vendors’ commitment to “the secure code marketplace” and also, one assumes, of their commitment to secure development within their own organizations. Note that while both vendors have service offerings for the tools, neither is pushing “third party code testing,” which says a lot. Everything, actually.


Note that most vendors will not provide static analysis results to customers for valid business reasons, including ensuring the security of all customers. For example, a vendor who finds a vulnerability may often fix the issue in the version of product that is under development (i.e., the “next product train leaving the station”). Newer versions are more secure (and less costly to maintain since the issue is already fixed and no patch is required). However, most vendors do not - or cannot - fix an issue in all shipping versions of product and certainly not in versions that have been deprecated. Telling customers the specifics of a vulnerability (i.e., by showing them scan results) would put all customers on older, unfixed or deprecated versions at risk.


3. Testing COTS for coding errors and malware insertion, including backdoors, prior to deployment using automated static code analysis software increases costs without a commensurate return on investment (ROI).
The use of static code analysis software is a highly technical endeavor requiring skilled development personnel. There are skill requirements and a necessity for detailed operational knowledge of how the software is built to help eliminate false positives, factors that raise the cost of this form of “testing.” Additionally, static code analysis tools are not the tool of choice for detecting malware or backdoors. (It is, in fact, trivial to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers were told where in the code to look for it. They could not find it – even knowing where to look.)


If the real concern of a customer insisting on a third party code scan is malware and backdoor detection: it won’t work and thus represents an extremely expensive – and useless – distraction.


4. Third party code analysis will diminish overall product security.
It is precisely leading vendors’ experience with static analysis tools that contributes to their unwillingness to have third parties attempt to analyze code – emphasis on “attempt.” None of these tools are “plug and play”: in some cases, it has taken years, not months, to achieve actionable results, even from the best available static analysis tools. These are in fact code analysis tools and must be “tuned” – and in some cases actually “programmed” – to understand code, and must typically be run by an experienced developer (that is, a developer who understands the particular code base being analyzed) for results to be useful and actionable. There are many reasons why static analysis tools either raise many false positives or skip entire bodies of code. For example, because of the way Oracle implements particular functionality (memory management) in the database, static analysis tools that look for buffer overflows either do not work or raise false positives (Oracle writes its own checks to look for those issues).


The rate of false positives from use of a “random” tool run by inexperienced operators – especially on a code base as large as that of most commercial products – would put a vendor in the position of responding to unsubstantiated fear, uncertainty, and doubt (FUD). In a code base of 10,000,000 lines of code, even a false positive rate of one per 1000 lines of code would yield 10,000 “false positives” to chase down. The cost of doing this is prohibitive. (One such tool run against a large Oracle code base generated a false positive for every 3.4 lines of code, or about 160,000 false positives in toto due to the size of the code base.)


This is why most people using these tools must “tune” them to drown out “noise.” Many vendors have already had this false positive issue with customers running web application vulnerability tools and delivering in some cases hundreds of pages of “alarms” in which there were, perhaps, a half page of actionable issues. The rate of false positives is the single biggest determinant whether these tools are worth using or an expensive distraction (aka “rathole”).


No third party firm has to prove that their tool is accurate – especially not if the vendor is forced to use a third party to validate their code – and thus there is little to no incentive to improve their tool. Consultants get paid more the longer they are on site and working. A legislative or “standards” requirement for “third party code analysis” is therefore a license for the third party doing it to print money. Putting it differently, if the use of third party static analysis was accurate and cost effective, why wouldn’t vendors already be doing it? Instead, many vendors use static analysis tools in-house, because they own the code, and are willing to assume the cost of going up the learning curve for a long term benefit to them of reduced defects (and reduced cost of fixing these defects as more vulnerabilities are found earlier in the development cycle).


In short, the use of a third party is the most expensive, non-useful, high-cost attempt at “better code” most vendors could possibly use, and would result in worse security, not better security as in-house “security boots on the ground” are diverted to working with the third party. It is unreasonable to expect any vendor to in effect tune a third party tool and train the third party on their code – and then have to pay the third party for the privilege of doing it. Third party static analysis represents an unacceptably high opportunity cost caused by the “crowding out effect” of taking scarce security resources and using them on activity of low value to the vendor and to their customers. The only “winner” here is the third party. Ka-chink. Ka-chink.


5. Third party code analysis puts customers at increased risk.
As noted, there is no standard for what third party static analysis tools find, let alone how well and how economically they find it. More problematically, there are no standards for protection of any actual vulnerabilities these tools find. In effect, third party code analysis allows the third party to amass a database of unfixed vulnerabilities in products without any requirements for data protection or any recourse should that information be sold, incorporated into a hacking tool or breached. The mere fact of a third party amassing such sensitive information makes the third party a hacker target. Why attempt to hack products one by one if you can break into a third party’s network and get a listing of defects across multiple products – in the handy “economy size?” The problem is magnified if the “decompiled” source code is stored at the third party: such source code would be an even larger hacker target than the list of vulnerabilities the third party found.


Most vendors have very strict controls not merely on their source code, but on the information about product vulnerabilities that they know about and are triaging and fixing. Oracle Corporation, for example, has stringent security vulnerability handling policies that are promulgated and “scored” as part of Oracle’s software and hardware assurance program. Oracle uses its own secure database technology (row level access control) to enforce “need to know” on security vulnerabilities, information that is considered among the most sensitive information the company has. Security bugs are not published (meaning, they are not generally searchable and readable across the company or accessible by customers). Also, security bug access is stringently limited to those working on a bug fix (and selected others, such as security analysts and the security point of contact (SPOC) for the development area).


One of the reasons Oracle is stringent about limiting access to security vulnerability information is that this information often does leak when “managed” by third parties, even third parties with presumed expertise in secret-keeping. In the past, MI5 circulated information about a non-public Oracle database vulnerability among UK defense and intelligence entities (it should be noted that nobody reported this to Oracle, despite the fact that only Oracle could issue a patch for the issue). Oracle was only notified about the bug by a US commercial company to whom the information had leaked. As the saying goes, two people can’t keep a secret.


There is another risk that has not generally been considered in third party static analysis, and that is the increased interest in cyber-offense. There is evidence that the market for so-called zero-day vulnerabilities is being fueled in part by governments seeking to develop cyber-offense tools. (StuxNet, for example, allegedly made use of at least four “zero-day” vulnerabilities: that is, vulnerabilities not previously reported to a vendor.) Coupled with the increased interest of military suppliers/system integrators in getting into the “cyber security business,” it is not a stretch to think that at least some third parties getting into the “code analysis” business can and would use it as an opportunity to “sell to both sides” – use legislative fiat or customer pressure to force vendors to consent to static analysis, and then surreptitiously sell the vulnerabilities they find to the highest bidder as zero-days. Who would know?


Governments in particular cannot reasonably simultaneously fuel the market in zero days, complain at how irresponsible their COTS vendors are for not building better code and/or insist on third party static analysis. This is like stoking the fire and then complaining that the room is too hot.


6. Equality of access to vulnerability information protects all customers.
Most vendors do not provide advance information on security vulnerabilities to some customers but not others, or more information about security vulnerabilities to some customers but not others. As noted above, one reason for this is the heightened risk that such information will leak, putting the customers “not in the know” at increased risk. Not to mention, all customers believe their secrets are as worthy of protection as any other customer’s: nobody wants to be on the “Last Notified” list.


Thus, third party static analysis is problematic because it may result in violating basic fairness and equality in terms of vulnerability disclosure in the rare instances where these scans actually find exploitable vulnerabilities. The business model for some vendors offering static analysis as a service is to convince the customers of the vendor that the vendor is an evil slug and cannot be trusted, and thus the customer should insist on the third party analyzing the vendors’ code base.


There is an implicit assumption that the vendor will fix vulnerabilities that the third party static analysis finds immediately, or at least, before the customer buys/installs the product. However, the reality is more subtle than that. For one thing, it is almost never the case that a vulnerability exists in one and only one version of product: it may also exist on older versions. Complicating the matter: some issues cannot be “fixed” via a patch to the software but require the vendor to rearchitect functionality. This typically can only be done in so-called major product releases, which may only occur every two to three years. Furthermore, such issues often cannot be fixed on older versions because the scope of change is so drastic it will break dependent applications. Thus, a customer (as well as the third party) has information about a “not-easily-fixed” vulnerability which puts other customers at a disadvantage and at risk to the extent that information may leak.


Therefore, allowing some customers access to the results of a third party code scan in advance of a product release would violate most vendors’ disclosure policies as well as actually increasing risk to many, many customers, and potentially that increased risk could exist for a long period of time.


7. Third party code analysis sets an unacceptable precedent that risks vendors’ core intellectual property (IP).
COTS vendors maintain very tight control over their proprietary source code because it is core, high-value IP. As such, most COTS vendors will not allow third parties to conduct static analysis against source code (and for purposes of this discussion, this includes static analysis against binaries, which typically violates standard license agreements).


Virtually all companies are aware of the tremendous cost of intellectual property theft: billions of dollars per year, according to published reports. Many nation states, including those that condone if not encourage wholesale intellectual property theft, are now asking for source code access as a condition of selling COTS products into their markets. Most COTS vendors have refused these requests. One can easily imagine that for some nation states, the primary reason to request source code access (or, alternatively, “third party analysis of code”) is for intellectual property theft or economic espionage. Once a government-sanctioned third party has access to source code, so may the government. (Why steal source code if you can get a vendor to gift wrap it and hand it to you under the rubric of “third party code analysis?”)


Another likely reason some governments may insist on source code access (or third party code analysis) is to analyze the code for weaknesses they then exploit for their own national security purposes (e.g., more intellectual property theft). All things being equal, it is easier to find defects in source code than in object code. Refusing to accede to these requests – in addition to, of course, a vendor doing its own code analysis and defect remediation – thus protects all customers. In short, agreeing to any third party code analysis involving source code – either static analysis or static analysis of binaries - would make it very difficult if not impossible for a vendor to refuse any other similar requests for source code access, which would put their core intellectual property at risk. Third party code analysis is a very bad idea because there is no way to “undo” a precedent once it is set.


Summary
Software should have a wide variety of tests performed before it is shipped, and additional security tests (such as penetration tests) should be used against “as-deployed” software. However, the level of testing should be commensurate with the risk, which is both basic risk management and appropriate (scarce) resource management. A typical firm has many software elements, most probably COTS, and to suggest that they all be tested with static analysis tools begs a sanity check. The scope of COTS alone argues against this requirement: COTS products run the gamut from operating systems to databases to middleware, business intelligence and other analytic tools, business applications (accounting, supply chain management, manufacturing) as well as specialized vertical market applications (e.g., clinical trial software), representing a number of programming languages and billions – no, hundreds of billions – of lines of code.


The use of static analysis tools in development to help find and remediate security vulnerabilities is a good assurance practice, albeit a difficult one because of the complexity of software and the difficulty of using these tools. These tools have utility only when used by the producer of the software in a cost-effective way geared towards sustained vulnerability reduction over time. The mandated use of third party static analysis to “validate” or “test” code is unsupportable, for reasons of cost (especially opportunity cost), precedent, increased risk to vendors’ IP and increased security risk to customers. The third party static code analysis market is little more than a subterfuge for enabling the zero-day vulnerability market: bad security, at a high cost, and very bad public policy.


Book of the Month
It’s been so long since I blogged, it’s hard to pick out just a few books to recommend. Here are three, and a "freebie":


Hawaiki Rising: Hōkūle’a, Nainoa Thompson and the Hawaiian Renaissance by Sam Low
Among the most amazing tidbits of history are the vast voyages that the Polynesians made to settle (and travel among) Tahiti, Hawai’i and Aotearoa (New Zealand) using navigational methods largely lost to history. (Magellan – meh – he had a compass and sextant.) This book describes the re-creation of Polynesian wayfinding in Hawai’i in the 1970s via the building of a double-hulled Polynesian voyaging canoe, the Hōkūle’a, and how one amazing Hawaiian (Nainoa Thompson) – under the tutelage of one of the last practitioners of wayfinding (Mau Piailug) – made an amazing voyage from Hawai’i to Tahiti using only his knowledge of the stars, the winds, and the currents. (Aside: one of my favorite songs is “Hōkūle’a Hula,” which describes this voyage, and is so nicely performed by Erik Lee.) Note: the Hōkūle’a is currently on a voyage around the world.


The Korean War by Max Hastings
Max Hastings is one of the few historians who I think is truly balanced: he looks at the moral issues of history, weighs them, and presents a fair analysis – not “shove-it-down-your-throat revisionism.” He also makes use of a lot of first-person accounts, which makes history come alive. The Korean War is in so many ways a forgotten war, especially the fact that it literally is a war that never ended. It’s a good lesson of history, as it makes clear that the US drew down its military so rapidly and drastically after World War II that we were largely (I am trying not to say “completely”) unprepared for Korea. (Moral: there is always another war.)


Code Talker by Chester Nez
Many people now know of the crucial role that members of the Navajo Nation played in the Pacific War: the code they created that provided a crucial advantage (and was never broken). This book is a first-person account of the experiences of one Navajo code talker, from his experiences growing up on the reservation to his training as a Marine, and his experiences in the Pacific Theater. Fascinating.


Securing Oracle Database 12c: A Technical Primer
If you are a DBA or security professional looking for more information on Oracle database security, then you will be interested in this book. Written by members of Oracle's engineering team and the President of the International Oracle User Group (IOUG), Michelle Malcher, the book provides a primer on capabilities such as data redaction, privilege analysis and conditional auditing. If you have Oracle databases in your environment, you will want to add this book to your collection of professional information. Register now for the complimentary eBook and learn from the experts.




Microservices - Money for old rope or re-badging SOA for the cool kids

Steve Jones - Tue, 2014-03-11 13:00
Hat tip to John Evedemon for the heads up on this one. Martin Fowler is peddling a new approach, 'Microservices', which... wait for it... is a way of developing applications as a suite of services, each of which has its own process thread and 'communicates via lightweight mechanisms' such as... over HTTP. But wait, there is more: you'll be stunned to know that these services can be built
Categories: Fusion Middleware

What are the types of Data Scientist?

Steve Jones - Tue, 2014-03-11 11:00
There are various views going around on what a Data Scientist is, what their value is to an organisation, and the salaries they command. To me, however, asking 'what is a Data Scientist?' is like asking 'What is a Physicist?' Sure, 'someone who studies Physics' might be a factually accurate but pointless definition. How does that separate someone who did Physics in High School from Albert
Categories: Fusion Middleware

The Impact of Change

Antony Reynolds - Sun, 2014-03-09 16:01
Measuring Impact of Change in SOA Suite
Mormon prophet Thomas S. Monson once said:

When performance is measured, performance improves. When performance is measured and reported, the rate of performance accelerates.

(LDS Conference Report, October 1970, p107)

Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured. With that in mind, I came up with a tool to allow the measurement of the impact of configuration changes on database usage in SOA Suite. This tool can be used to assess the impact of different configurations on both database growth and database performance, helping to decide which optimizations offer real benefit to the composite under test.

Basic Approach

The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite. The composite is then executed. After the composite has completed, another snapshot is taken of the SOA tables. This is illustrated in the diagram below:

An example of the data collected by the tool is shown below:

Test Name     Total Tables Changed   Total Rows Added   Notes
AsyncTest1    13                     15                 Async interaction with simple SOA composite, one retry to send response.
AsyncTest2    12                     13                 Async interaction with simple SOA composite, no retries on sending response.
AsyncTest3    12                     13                 Async interaction with simple SOA composite, no callback address provided.
OneWayTest1   12                     13                 One-Way interaction with simple SOA composite.
SyncTest1     7                      7                  Sync interaction with simple SOA composite.

Note that the first three columns are provided by the tool; the fourth column is just an aide-memoire to identify what the test actually did. The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:

Test Name    Table Name             Rows Added
AsyncTest1   AUDIT_COUNTER          1
AsyncTest1   AUDIT_DETAILS          1
AsyncTest1   AUDIT_TRAIL            2
AsyncTest1   COMPOSITE_INSTANCE     1
AsyncTest1   CUBE_INSTANCE          1
AsyncTest1   CUBE_SCOPE             1
AsyncTest1   DLV_MESSAGE            1
AsyncTest1   DOCUMENT_CI_REF        1
AsyncTest1   DOCUMENT_DLV_MSG_REF   1
AsyncTest1   HEADERS_PROPERTIES     1
AsyncTest1   INSTANCE_PAYLOAD       1
AsyncTest1   WORK_ITEM              1
AsyncTest1   XML_DOCUMENT           2

Here we have drilled into the test case with the retry of the callback to see what tables are actually being written to.

Finally we can compare two tests to see the difference in the number of rows written and the tables updated, as shown below:

Test Name    Base Test Name   Table Name    Row Difference
AsyncTest1   AsyncTest2       AUDIT_TRAIL   1

Here are the additional tables referenced by this test

Test Name    Base Test Name   Additional Table Name   Rows Added
AsyncTest1   AsyncTest2       WORK_ITEM               1

How it Works

I created a database stored procedure, soa_snapshot.take_soa_snapshot(test_name, phase), that queries all the SOA tables and records the number of rows in each table. By running the stored procedure before and after the execution of a composite we can capture the number of rows in the SOA database before and after a composite executes. I then created a view that shows the difference in the number of rows before and after composite execution. This view has a number of sub-views that allow us to query specific items. The schema is shown below:

The different tables and views are:

  • CHANGE_TABLE
    • Used to track the number of rows in the SOA schema; each test case has two or more phases. Usually phase 1 is before execution and phase 2 is after execution.
    • This is only used by the stored procedure and the views.
  • DELTA_VIEW
    • Used to track changes in the number of rows in the SOA database between phases of a test case. This is a view on CHANGE_TABLE; all other views are based off this view (a sketch of a possible definition follows this list).
  • SIMPLE_DELTA_VIEW
    • Provides the number of rows changed in each table.
  • SUMMARY_DELTA_VIEW
    • Provides a summary of total rows and tables changed.
  • DIFFERENT_ROWS_VIEW
    • Provides a summary of the differences in rows updated between test cases.
  • EXTRA_TABLES_VIEW
    • Provides a summary of the extra tables and rows used by a test case.
    • This view makes use of a session context, soa_ctx, which holds the test case name and the baseline test case name. This context is initialized by calling the stored procedure soa_ctx_pkg.set(testCase, baseTestCase).
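
As an illustration only – the real definitions ship in the createSchema.sql script linked under Code Download, and the CHANGE_TABLE column names used here (row_count, seg_size) are my assumptions – DELTA_VIEW could be derived from CHANGE_TABLE along these lines:

-- Hypothetical sketch: pair the phase 1 (before) and phase 2 (after)
-- snapshots of each table and compute the differences between them.
create or replace view delta_view as
select s2.test_name,
       s2.table_name,
       s2.row_count - s1.row_count as deltarows,  -- rows added during the test
       s2.seg_size  - s1.seg_size  as deltasize   -- growth in stored size
from   change_table s1,
       change_table s2
where  s1.test_name  = s2.test_name
and    s1.table_name = s2.table_name
and    s1.phase      = 1
and    s2.phase      = 2;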

I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.
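
To make the sequence concrete, here is a sketch of a test run driven directly from SQL*Plus rather than through the web service wrapper (the test name 'AsyncTest1' is just an example; the procedure signature is as described above):

-- Phase 1: snapshot before executing the composite
begin
  soa_snapshot.take_soa_snapshot('AsyncTest1', 1);
end;
/
-- ... invoke the composite under test here, e.g. from SoapUI ...
-- Phase 2: snapshot after the composite has completed
begin
  soa_snapshot.take_soa_snapshot('AsyncTest1', 2);
end;
/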

Sample Output

How many rows and tables did a particular test use?

Here we can see how many rows in how many tables changed as a result of running a test:

-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;

TEST_NAME            TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1                   15              0          13
AsyncTest1noCCIS             15              0          13
AsyncTest1off                 8              0           8
AsyncTest1prod               13              0          12
AsyncTest2                   13              0          12
AsyncTest2noCCIS             13              0          12
AsyncTest2off                 7              0           7
AsyncTest2prod               11              0          11
AsyncTest3                   13              0          12
AsyncTest3noCCIS             13          65536          12
AsyncTest3off                 7              0           7
AsyncTest3prod               11              0          11
OneWayTest1                  13              0          12
OneWayTest1noCCI             13          65536          12
OneWayTest1off                7              0           7
OneWayTest1prod              11              0          11
SyncTest1                     7              0           7
SyncTest1noCCIS               7              0           7
SyncTest1off                  2              0           2
SyncTest1prod                 5              0           5

20 rows selected

Which tables grew during a test?

Here for a given test we can see which tables had rows inserted.

-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
where test_name='AsyncTest1'
order by table_name;
TEST_NAME            TABLE_NAME                      DELTAROWS  DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1       AUDIT_COUNTER                           1          0
AsyncTest1       AUDIT_DETAILS                           1          0
AsyncTest1       AUDIT_TRAIL                             2          0
AsyncTest1       COMPOSITE_INSTANCE                      1          0
AsyncTest1       CUBE_INSTANCE                           1          0
AsyncTest1       CUBE_SCOPE                              1          0
AsyncTest1       DLV_MESSAGE                             1          0
AsyncTest1       DOCUMENT_CI_REF                         1          0
AsyncTest1       DOCUMENT_DLV_MSG_REF                    1          0
AsyncTest1       HEADERS_PROPERTIES                      1          0
AsyncTest1       INSTANCE_PAYLOAD                        1          0
AsyncTest1       WORK_ITEM                               1          0
AsyncTest1       XML_DOCUMENT                            2          0
13 rows selected

Which tables grew more in test1 than in test2?

Here we can see the differences in rows for two tests.

-- Return difference in rows updated (test1)
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';

TEST1                TEST2                TABLE_NAME                          DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1       AsyncTest2       AUDIT_TRAIL                             1

Which tables were used by test1 but not by test2?

Here we can see tables that were used by one test but not by the other test.

-- Register base test case for use in extra_tables_view
-- First parameter (test1) is test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
/
anonymous block completed
-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;
TEST1                TEST2                TABLE_NAME                      DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1       AsyncTest2       WORK_ITEM                               1

Results

I used the tool to find out the following.  All tests were run using SOA Suite 11.1.1.7.

The following is based on a very simple composite as shown below:

Each BPEL process is basically the same as the one shown below:

Impact of Fault Policy Retry Being Executed Once

Setting     Total Rows Written   Total Tables Updated
No Retry    13                   12
One Retry   15                   13

When a fault policy causes a retry then the following additional database rows are written:

Table Name    Number of Rows
AUDIT_TRAIL   1
WORK_ITEM     1

Impact of Setting Audit Level = Development Instead of Production

Setting       Total Rows Written   Total Tables Updated
Development   13                   12
Production    11                   11

When the audit level is set at development instead of production then the following additional database rows are written:

Table Name    Number of Rows
AUDIT_TRAIL   1
WORK_ITEM     1

Impact of Setting Audit Level = Production Instead of Off

Setting      Total Rows Written   Total Tables Updated
Production   11                   11
Off          7                    7

When the audit level is set at production rather than off then the following additional database rows are written:

Table Name           Number of Rows
AUDIT_COUNTER        1
AUDIT_DETAILS        1
AUDIT_TRAIL          1
COMPOSITE_INSTANCE   1

Impact of Setting Capture Composite Instance State

Setting   Total Rows Written   Total Tables Updated
On        13                   12
Off       13                   12

When capture composite instance state is on rather than off, no additional database rows are written. Note, however, that there are other activities that occur when composite instance state is captured.

Impact of Setting oneWayDeliveryPolicy = async.cache or sync

Setting         Total Rows Written   Total Tables Updated
async.persist   13                   12
async.cache     7                    7
sync            7                    7

When choosing async.persist (the default) instead of sync or async.cache then the following additional database rows are written:

Table Name             Number of Rows
AUDIT_DETAILS          1
DLV_MESSAGE            1
DOCUMENT_CI_REF        1
DOCUMENT_DLV_MSG_REF   1
HEADERS_PROPERTIES     1
XML_DOCUMENT           1

As you would expect the sync mode behaves just as a regular synchronous (request/reply) interaction and creates the same number of rows in the database.  The async.cache also creates the same number of rows as a sync interaction because it stores state in memory and provides no restart guarantee.

Caveats & Warnings

The results above are based on a trivial test case. The numbers will be different for bigger and more complex composites. However, by taking snapshots of different configurations, you can produce the numbers that apply to your composites.

The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.
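
If you do capture more than two phases for a test case, you can still diff any pair of snapshots by querying CHANGE_TABLE directly – again assuming the column names from the sketch under "How it Works":

-- Hypothetical: compare phase 2 against phase 3 of a multi-step test case
select s2.table_name,
       s2.row_count - s1.row_count as deltarows
from   change_table s1,
       change_table s2
where  s1.test_name  = s2.test_name
and    s1.table_name = s2.table_name
and    s1.test_name  = 'AsyncTest1'
and    s1.phase      = 2
and    s2.phase      = 3;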

Code Download

The sample project I used is available here.

The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.

The Web Service wrapper to the capture state stored procedure is available here.

The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.
