Feed aggregator

Fronting Oracle Maven Repository with Artifactory

Steve Button - Mon, 2015-02-09 22:44
The JFrog team announced this week the release of Artifactory 3.5.1, which is a minor update that now works with the Oracle Maven Repository.

http://www.jfrog.com/confluence/display/RTF/Artifactory+3.5.1

I spent a little while yesterday having a look at it, working through the configuration of a remote repository and testing it with a maven project to see how it worked.

Once I'd downloaded it and started it up -- much love for simple and obvious bin/*.sh scripts -- it was a very simple process:

1. Since we live behind a firewall, first add a proxy configuration pointing at our proxy server.



2. Add a new remote repository and point it at the Oracle Maven Repository, specifying its URL and using my OTN credentials as the username and password.


The Artifactory 3.5.1 documentation states that the Advanced Settings > Lenient host authentication and Enable cookie management options must be checked when accessing the Oracle Maven Repository.


The Test button is handy to verify the server settings have been entered correctly.

3. Use the Home tab > Client Settings > Maven Settings link to generate and save a settings.xml file that uses the Artifactory server.
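For reference, the generated file is just a standard Maven settings.xml whose mirror section routes requests through the local Artifactory instance. A minimal sketch of what it might contain is shown below; the repository path (remote-repos) is an assumption on my part, so prefer the file the Maven Settings page generates over hand-editing one.

<settings>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <name>Artifactory</name>
      <!-- the repository path below is an assumption; use the path Artifactory generates for you -->
      <url>http://localhost:8081/artifactory/remote-repos</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>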



With the repository running, configured and the settings.xml saved, it's then possible to try it out with an existing maven project such as https://github.com/buttso/weblogic-with-arquillian.

I also nuked my local repository to force/verify that the dependencies were fetched through the specified Artifactory server.

$ rm -fr ~/.m2/repository/com/oracle
$ mvn -s artifactory-settings.xml test

Viewing the output of the mvn process and the running Artifactory server, you can see that maven is downloading dependencies from http://localhost:8081/artifactory and, correspondingly, Artifactory is downloading the requested artifacts from https://maven.oracle.com.


Once the maven process has completed and all the requested artifacts have been downloaded, Artifactory will have cached them locally for future use.
 
Using the Search functionality of the Artifactory Web UI you can search for weblogic artifacts.


Using the Repository Browser functionality of the Artifactory Web UI you can view and navigate around the contents of the remote Oracle Maven Repository.

Nice JFrog > Artifactory team - thanks for the quick support of our repository.

One further thing I'd look at doing is enabling the Configure Passwords Encryption option in the Security settings to encrypt your OTN password, so that it's not stored in cleartext in the etc/artifactory.config.latest.xml file.


JavaScript Stored Procedures and Node.js Applications with Oracle Database 12c

Kuassi Mensah - Mon, 2015-02-09 20:44
                                      Kuassi Mensah
                                    db360.blogspot.com | @kmensah | https://www.linkedin.com/in/kmensah

Introduction                                                            
Node.js and server-side JavaScript are hot and trendy; per the latest "RedMonk Programming Languages Rankings"[1], JavaScript and Java are the top two programming languages. For most developers building modern Web, mobile, and cloud based applications, the ability to use the same language across all tiers (client, middle, and database) feels like Nirvana, but the IT landscape is not a green field; enterprises have invested a lot in Java (or other platforms, for that matter); therefore, the integration of JavaScript with it becomes imperative. WebSockets and RESTful services enable loose integration; however, the advent of JavaScript engines on the JVM (Rhino, Nashorn, DynJS) and Node.js APIs on the JVM (Avatar.js, Nodyn, Trireme) makes it possible, and very tempting, to co-locate Java and Node applications on the same JVM.
This paper describes the steps for running JavaScript stored procedures[2] directly on the embedded JVM in Oracle Database 12c, and the steps for running Node.js applications on the JVM against Oracle Database 12c using Avatar.js, JDBC and UCP.
          
JavaScript and the Evolution of Web Applications Architecture                                   
Once upon a time, JavaScript was a browser-only thing, while business logic, back-end services and even presentation were handled/produced in middle tiers using Java or other platforms and frameworks. Then JavaScript engines (Google's V8, Rhino) left the browser and gave birth to server-side JavaScript frameworks and Node.js.
Node Programming Model
Node.js and similar frameworks bring ease of development, rapid prototyping, and an event-driven, non-blocking programming model[3] to JavaScript. This model is praised for its scalability and good-enough performance; however, unlike Java, Node lacks standardization in many areas, such as database access (i.e., a JDBC equivalent), and may lead, without discipline, to the so-called "callback hell"[4].
Nonetheless, Node is popular and has a vibrant community and a large set of frameworks[5].
Node Impact on Web Applications Architecture
With the advent of Node, REST and Web Sockets, the architecture of Web applications has evolved into 
(i) plain JavaScript on browsers (mobiles, tablets, and desktops); 
(ii) server-side JavaScript modules (i.e., Node.js, ORM frameworks) interacting with Java business logic and databases.
The new proposal for Web applications architecture is the integration of Node.js and Java on the JVM.  Let’s discuss the enabling technologies: JavaScript engine on the JVM and Node API on the JVM and describe typical use cases with Oracle database 12c.  
JavaScript on the JVM
Why implement a JavaScript engine and run JavaScript on the JVM? For starters, I highly recommend Mark Swartz's http://moduscreate.com/javascript-and-the-jvm/ and Steve Yegge's http://steve-yegge.blogspot.com/2008/06/rhinos-and-tigers.html blog posts.
In summary, the JVM brings (i) portability; (ii) manageability; (iii) Java tools; (iv) Java libraries/technologies such as JDBC, Hadoop; and (v) the preservation of investments in Java. 
There are several implementations/projects of Java-based JavaScript engines, including Rhino, DynJS and Nashorn.
Rhino
Rhino was the first JavaScript engine entirely written in Java; it started at Netscape in 1997 and then became an open-source Mozilla project[6]. It was for quite some time the default JavaScript engine in Java SE, and has now been replaced by Nashorn in Java SE 8.
DynJS
DynJS is another open-source JavaScript engine for the JVM. Here is the project homepage http://dynjs.org/. 
Nashorn
Introduced in Java 7 but "production" in Java 8[7], the goal of project Nashorn (JEP 174) is to enhance the performance and security of the Rhino JavaScript engine on the JVM. It integrates with the javax.script API (JSR 223) and allows seamless interaction between Java and JavaScript (i.e., invoking Nashorn from Java and invoking Java from Nashorn).

To illustrate the reach of Nashorn on the JVM and the interaction between Java and JavaScript, let’s run some JavaScript directly on the database-embedded JVM in Oracle database 12c. 
JavaScript Stored Procedures with Oracle database 12c Using Nashorn
Why would anyone run JavaScript in the database? For the same reasons you'd run Java in the Oracle database. Then you might ask: why run Java in the database in the first place? As discussed in my book[8], the primary motivations are: 
(i) reuse skills and code, i.e., use the programming languages your new hires already know or are willing to learn; 
(ii) avoid data shipping[9] i.e., in-place processing of billions of data/documents; 
(iii) combine SQL with foreign libraries to achieve new database capabilities, thereby extending SQL and the reach of the RDBMS, e.g., Web Services callouts or an in-database container for Hadoop[10].
Some developers/architects prefer a tight separation between the RDBMS and applications, and therefore no programming language in the database[11], but there are many pragmatic developers/architects who run code near the data whenever it is more efficient than shipping data to external infrastructure.

Co-locating functions with data on the same compute engine is an approach shared by many programming models, such as Hadoop. With the surge and prevalence of Cloud computing, RESTful service based architecture is the new norm. Data-bound services can be secured and protected by the REST infrastructure, running outside the RDBMS. Typical use case: a JavaScript stored procedure service would process millions/billions of JSON documents in the Oracle database and would return the result sets to the service invoker.

To conclude, running Java, JRuby, Python, JavaScript, Scala, or another programming language on the JVM in the database is a sound architectural choice. The best practices consist of: (i) partitioning applications into data-bound and compute-bound modules or services; (ii) running data-bound services in the database; (iii) understanding Oracle database DEFINER and INVOKER rights[12] and granting only the necessary privileges and/or permissions.

The Steps
The following steps implement a JavaScript stored procedure running in the Oracle database; they are an enhancement over the ones presented at JavaOne and OOW 2014, which consisted in reading the JavaScript from the file system. That approach required granting extra privileges to the database schema for reading from the RDBMS file system, something not recommended from a security perspective. Here is a safer approach:

1. Nashorn is part of Java 8, but early editions can be built for Java 7; the embedded Java VM in Oracle Database 12c supports Java 6 (the default) or Java 7. For this proof of concept, install Oracle Database 12c with Java SE 7[13].
2. (i) Build a standard Nashorn.jar[14]; (ii) modify the Shell code to interpret the given script name as an OJVM resource, which consists mainly in invoking getResourceAsStream() on the current thread's context class loader; (iii) rebuild Nashorn.jar with the modified Shell.
3. Load the modified Nashorn jar into an Oracle database schema, e.g., HR:
 loadjava -v -r -u hr/ nashorn.jar
4. Create a new dbms_javascript package for invoking Nashorn's Shell with a script name as a parameter:
create or replace package dbms_javascript as
  procedure run(script varchar2);
end;
/
create or replace package body dbms_javascript as
  procedure run(script varchar2) as
  language java name 'com.oracle.nashorn.tools.Shell.main(java.lang.String[])';
end;
/

Then call dbms_javascript.run('myscript.js') from SQL, which will invoke the Nashorn Shell to execute the previously loaded myscript.js.
5. Create a custom role, which we will name NASHORN, as follows, connected as SYSTEM:
SQL> create role nashorn;
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'createClassLoader', '' );
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'getClassLoader', '' );
SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.util.logging.LoggingPermission', 'control', '' );
Best practice: insert those statements in a nash-role.sql file and run the script as SYSTEM.
6. Grant the NASHORN role created above to the HR schema as follows (connected as SYSTEM):

SQL> grant NASHORN to HR;

7. Insert the following JavaScript code in a file, e.g., database.js, stored on your client machine (i.e., a machine from which you will invoke loadjava as explained in the next step). This script illustrates mixing JavaScript and Java: it uses the server-side JDBC driver to execute a PreparedStatement that retrieves the first and last names from the EMPLOYEES table.

var Driver = Packages.oracle.jdbc.OracleDriver;
var oracleDriver = new Driver();
var url = "jdbc:default:connection:";   // server-side JDBC driver
var query ="SELECT first_name, last_name from employees";
// Establish a JDBC connection
var connection = oracleDriver.defaultConnection();
// Prepare statement
var preparedStatement = connection.prepareStatement(query);
// execute Query
var resultSet = preparedStatement.executeQuery();
// display results
while (resultSet.next()) {
  print(resultSet.getString(1) + "== " + resultSet.getString(2) + " ");
}
// cleanup
resultSet.close();
preparedStatement.close();
connection.close();


8. Load database.js into the database as a Java resource (not a vanilla class):
loadjava -v -r -u hr/ database.js

9. To run the loaded script:

sqlplus hr/
SQL> set serveroutput on
SQL> call dbms_java.set_output(80000);
SQL> call dbms_javascript.run('database.js');

The Nashorn Shell reads the 'database.js' script stored as a Java resource from an internal table; the JavaScript in turn invokes JDBC to execute a PreparedStatement, and the result set is displayed on the console. The message "ORA-29515: exit called from Java code with status 0" is due to the invocation of java.lang.Runtime.exitInternal; status 0 means a normal exit (i.e., no error). The fix is to remove that call from Nashorn. 
Node.js on the JVM
As discussed earlier, Node.js is becoming the man-in-the-middle between Web application front ends and back-end legacy components, and since companies have invested a lot in Java, it is highly desirable to co-locate Node.js and Java components on the same JVM for better integration, thereby eliminating communication overhead. There are several projects re-implementing the Node.js APIs on the JVM, including Avatar.js, Nodyn, and Trireme. This paper will only discuss Oracle's Avatar.js.
Project Avatar.js[15]
The goal of project Avatar.js is to furnish "Node.js on the JVM"; in other words, an implementation of the Node.js APIs which runs on top of Nashorn and enables the co-location of Node.js programs and Java components. It has been open-sourced by Oracle under the GPL license[16]. Many Node frameworks and/or applications have been certified to run unchanged, or slightly patched, on Avatar.js.

There are binary distributions for Oracle Enterprise Linux, Windows and MacOS (64-bit). These builds can be downloaded from https://maven.java.net/index.html#welcome. Search for avatar-js.jar and the platform-specific libavatar-js libraries (.dll, .so, .dylib). Get the latest and rename the jar and the platform-specific native library accordingly. For example: on Linux, rename the library to avatar-js.so; on Windows, rename the dll to avatar-js.dll and add its location to your PATH (or use -Djava.library.path=).

RDBMSes in general, and the Oracle database in particular, remain the most popular persistence engines, and there are RDBMS-specific Node drivers[17] as well as ORM frameworks. However, as we will demonstrate in the following section, with Avatar.js we can simply reuse existing Java APIs, including JDBC and UCP, for database access.

Node Programming with Oracle Database using Avatar.js, JDBC and UCP 
The goal of this proof of concept is to illustrate the co-location of a Node.js application, the Avatar.js library, the Oracle JDBC driver and the Oracle Universal Connection Pool (UCP) on the same Java 8 VM.
The sample application consists of a Node.js application which performs the following actions:
(i) Request a JDBC-Thin connection from the Java pool (UCP)
(ii) Create a PreparedStatement object for "SELECT FIRST_NAME, LAST_NAME FROM EMPLOYEES"
(iii) Execute the statement and return the ResultSet in a callback
(iv) Retrieve the rows and display them in a browser on port 4000
(v) Perform all steps above in a non-blocking fashion -- this is Node.js's raison d'être. The demo also uses the Apache ab load generator to simulate concurrent users running the same application in the same/single JVM instance. For the Node application to scale in the absence of asynchronous JDBC APIs, we need to turn synchronous calls into non-blocking ones and retrieve the result set via a callback.
Turning Synchronous JDBC Calls into Non-Blocking Calls
We will use the following wrapper functions to turn any JDBC call into a non-blocking call, i.e., put the JDBC call into a thread pool and free up the Node event loop thread.
var makeExecutecallback = function(userCallback) {
  return function(name, args) {
      ...
      userCallback(undefined, args[1]);
  }
}

// Run the blocking JDBC task off the event loop thread, then post the
// result back to the Node event loop as an event.
function submit(task, callback, msg) {
  var handle = evtloop.acquire();
  var r = function() {
    try {
      var ret = task();
      evtloop.post(new EventType(msg, callback, null, ret));
    } catch (e) { /* error handling elided */ }
  };
  evtloop.submit(r);
}

Let's apply these wrapper functions to the executeQuery JDBC call to illustrate the concept:
exports.connect = function(userCallback) {..} // JDBC and UCP settings
Statement.prototype.executeQuery = function(query, userCallback) {
  var statement = this._statement;
  var task = function() {
    return statement.executeQuery(query);
  }
  submit(task, makeExecutecallback(userCallback), "jdbc.executeQuery");
}
Similarly, the same technique is applied to the other JDBC statement APIs:
Connection.prototype.getConnection = function() {…}
Connection.prototype.createStatement = function() {..}
Connection.prototype.prepareCall = function(storedprocedure) {..}
Statement.prototype.executeUpdate = function(query, userCallback) {..}
Returning Query ResultSet through a Callback
The application code fragment hereafter shows how, for every HTTP request, (i) a connection is requested, (ii) the PreparedStatement is executed, and (iii) the result set is printed on port 4000.
...
var ConnProvider = require('./connprovider').ConnProvider;
var connProvider = new ConnProvider(function(err, connection){ .. });

var server = http.createServer(function(request, response) {
  connProvider.getConn(function(name, data){ .. });
  connProvider.prepStat(function(resultset) {
    while (resultset.next()) {
      response.write(resultset.getString(1) + " --" + resultset.getString(2));
      response.write('<br>');  // line break between rows
    }
    response.write('<br>');
    response.end();
  });
});
server.listen(4000, '127.0.0.1');

Using Apache ab, we were able to scale to hundreds of simultaneous invocations of the Node application. Each invocation grabs a Java connection from the Universal Connection Pool (UCP), executes the SQL statement through JDBC, then returns the result set via a callback on port 4000.
Conclusions
As server-side JavaScript (typified by Node.js) gains in popularity, it will have to integrate with existing components (COBOL is still alive!). Developers and architects will have to look into co-locating JavaScript with Java across the middle and database tiers.





[1] http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/
[2] I’ll discuss the rationale for running programming languages in the database, later in this paper.
[3] Requests for I/O and resource-intensive components run in a separate process, then invoke a callback in the main/single Node thread when done.
[4] http://callbackhell.com/
[5] Search the web for "Node.js frameworks".
[6] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
[7] Performance being one of the most important aspects.
[8] http://www.amazon.com/exec/obidos/ASIN/1555583296
[9] Rule of thumb: when processing more than ~20-25% of target data, do it in-place, where data resides (i.e., function shipping).
[10] In-database Container for Hadoop is not available, as of this writing.
[11] Other than database’s specific procedural language, e.g.,  Oracle’s PL/SQL
[12] I discuss this in chapter 2 of my book; see also Oracle database docs.
[13] See Multiple JDK Support in http://docs.oracle.com/database/121/JJDEV/E50793-03.pdf
[14] Oracle does not furnish a public download of Nashorn.jar for Java 7; search “Nashorn.jar for Java 7”.
[15]  https://avatar-js.java.net/
[16] https://avatar-js.java.net/license.html
[17] The upcoming Oracle Node.js driver was presented at OOW 2014. 

Oracle Maven Repository - Viewing Contents in Eclipse

Steve Button - Mon, 2015-02-09 16:18
With the Oracle Maven Repository now accessible, one way to explore its contents is to use the Maven Repositories viewer feature available in most development tools. I've seen the repository contents displayed easily in NetBeans, so I decided to take a look at what it looks like in Eclipse as well.

I had to make a few minor setting changes to get it to work, so I decided to document them here. If you've gotten it to work with fewer setting changes, let me know!

As initial setup, I configured my local maven environment to support access to the Oracle Maven Repository. This is documented here: https://maven.oracle.com/doc.html. I also installed maven-3.2.5, which includes the updated Wagon module that supports authentication.
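For context, the configuration described in that document boils down to adding a server entry for maven.oracle.com to your settings.xml carrying your OTN credentials, plus the HTTP Wagon configuration detailed there. A hedged sketch of the credentials part only, with placeholder values:

<settings>
  <servers>
    <server>
      <id>maven.oracle.com</id>
      <!-- placeholder values; substitute your own OTN credentials -->
      <username>your.otn.username@example.com</username>
      <password>your-otn-password</password>
    </server>
  </servers>
</settings>

The full, authoritative configuration (including the Wagon settings) is in the doc.html page linked above.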

Next I downloaded and used the new network installer that the Oracle Eclipse team has published on OTN to install the latest version of Oracle Enterprise Pack for Eclipse.



This network installer lets developers select the version of Eclipse to install and the set of Oracle extensions -- WebLogic, GlassFish and other stuff -- to add to it.

Once Eclipse is installed, you can add the Maven Repository viewer by selecting Window > Show View > Other > Maven Repositories from the Eclipse toolbar.



I also added a Console > Maven viewer to see what was happening under the covers and arranged them so they were visible at the same time:


With the Maven views ready to go, expand the Global Repositories node. This will show Maven Central, any other repositories you may have configured, and the Oracle Maven Repository if you have configured it correctly in the settings.xml file.

The initial state of the Oracle Maven Repository doesn't show any contents, indicating that its index hasn't been downloaded yet.

Right mouse clicking on it and selecting the Rebuild Index option causes an error to be shown in the console output indicating that the index could not be accessed.


To get it to work, I made the following changes to my environment.  
Configure Eclipse to Use Maven 3.2.5

Using the Eclipse > Preferences > Maven > Installation dialog, configure Eclipse to use Maven 3.2.5. This is the preferred version of Maven to use to access the Oracle Maven Repository since it automatically includes the necessary version of the Wagon HTTP module that supports the required authentication configuration and request flow.


Configure Proxy Settings in Maven Settings File

** If you don't need a proxy to access the Internet then this step won't be needed **
 
If you sit behind a firewall and need to use a proxy server to access public repositories then you need to configure a proxy setting inside the maven settings file.

Interestingly, for command-line maven use and for NetBeans, a single proxy configuration in settings.xml was enough to allow the Oracle Maven Repository to be successfully accessed and its index and artifacts used.

However with Eclipse, this setting alone didn't allow the Oracle Maven Repository to be accessed. Looking at the repository URL for the Oracle Maven Repository you can see it's HTTPS based -- https://maven.oracle.com -- and it appears that Eclipse requires a specific HTTPS-based proxy setting in order to access HTTPS-based repositories.
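In practice that means declaring both an http and an https proxy entry in the settings.xml file. A sketch, using a placeholder proxy host and port:

<settings>
  <proxies>
    <proxy>
      <id>http-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <!-- placeholder proxy host/port; substitute your own -->
      <host>proxy.example.com</host>
      <port>80</port>
    </proxy>
    <proxy>
      <id>https-proxy</id>
      <active>true</active>
      <protocol>https</protocol>
      <host>proxy.example.com</host>
      <port>80</port>
    </proxy>
  </proxies>
</settings>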


Rebuild Index Success

With the settings in place, the Rebuild Index operation succeeds and the contents of the Oracle Maven Repository are displayed in the repository viewer.



Have your say ...

Tim Dexter - Mon, 2015-02-09 15:25

Another messaging exchange last week with Leslie ...

OK, so we practised it a bit after our first convo and things got a little cheesy but hopefully you get the message.

Hit this link and you too can give some constructive feedback on the Oracle doc for BI (not just BIP). I took the survey; it's only eight questions, or more if you want to share more of your input. Please take a couple of minutes to help us shape the documentation of the future.

Categories: BI & Warehousing


Little Things Doth Crabby Make – Part XVIII. Automatic Storage Management Won’t Let Me Use My Disk For My Files! Yes, It Will!

Kevin Closson - Fri, 2015-02-06 14:52

It’s been a long time since my last installment in the Little Things Doth Crabby Make series and to be completely honest this particular topic isn’t really all that fit for a LTDCM installment because it covers something that is possible but less than expedient.  That said, there are new readers of this blog and maybe it’s time they google “Little Things Doth Crabby Make” to see where this series has been. This post might rustle up that curiosity!

So what is this blog post about? It’s about stuffing any file system file into Automatic Storage Management space. OK, so maybe this is just morbid curiosity or trivial pursuit. Maybe it’s just a parlor trick. I would agree with any of those descriptions. Nonetheless maybe there are 42 or so people out there who didn’t know this. If so, this post is for them.

ASMCMD cp Command

The cp sub-command of ASMCMD lets you stuff certain database files into ASM. We all know this. However, just to make it all fresh in people's minds I'll show a screen shot of me trying to push a compressed tar archive of $ORACLE_HOME/bin/oracle up into ASM:

2014.02.04-pic-0

Well, that’s not surprising. But what happens if I take heed of the error message and attempt to placate? The block size is 8KB so the following screen shot shows me rounding up the size of the compressed tar archive to an 8192B blocking factor:

2014.02.04-pic-0.1

ASMCMD still won’t gobble up the file. That’s still not all that surprising because after ASMCMD checked the geometry of the file it then read the file looking for a header or any file magic it could understand.  As you can see ASMCMD doesn’t see a file type it understands. The following screen shot shows me pre-pending the tar archive with file magic I know ASMCMD must surely understand. I have a database with a tablespace called foo that I created in a non-Oracle Disk Manager naming convention (foo.dbf). The screen shot shows me:

  1. Extracting the foo.dbf file
  2. “Borrowing” 1MB from the head of the file
  3. Creating a compressed tar archive of the Oracle Database executable
  4. Rounding up the size of the compressed tar archive to an 8192B blocking factor

2014.02.04-pic1

 

So now I have a file that has the “shape” of a datafile and the necessary header information from a datafile. The next screen shot shows:

  1. ASMCMD cp command pushing my file into ASM
  2. Removal of all of my current working directory files
  3. ASMCMD cp command pulling the file from ASM and into my current working directory
  4. Extracting the contents of the “embedded” tar archive
  5. md5sum(1) proof the file contents survived the journey

2014.02.04-pic2

OK, so that’s either a) something nobody would ever do or b) something that can be done with some elegant execution of some internal database package in a much less convoluted way or c) a combination of both “a” and “b” or d) a complete waste of my time to post, or, finally, e) a complete waste of your time reading the post. I’m sorry for “a”,”b”,”c” and certainly “e” if the case should be so.

Now you must wonder why I put this in the Little Things Doth Crabby Make series. That’s simple. I don’t like any “file system” imposing restrictions on file types:)

 


Filed under: oracle

Integrigy Database Log and Audit Framework with the Oracle Audit Vault

Most clients do not fully take advantage of their database auditing and logging features. These features are sophisticated and are able to satisfy most organizations' compliance and security requirements. 

The Integrigy Framework for database logging and auditing is a direct result of Integrigy's consulting experience and will be equally useful both to those wanting to improve their capabilities and to those just starting to implement logging and auditing. Our goal is to provide a clear explanation of the native auditing and logging features available, present an approach and strategy for using these features, and give straightforward configuration steps to implement the approach.

Integrigy’s Framework is also specifically designed to help clients meet compliance and security standards such as Sarbanes-Oxley (SOX), Payment Card Industry (PCI), FISMA, and HIPAA.  The foundation of the Framework is PCI DSS requirement 10.2.

Integrigy's Log and Audit Framework can be easily implemented using the Oracle Audit Vault. The high-level summary is as follows:

Level 1

Enable database auditing as directed by the Integrigy Framework Level 1 requirements. 

Level 2
  1. Install the Oracle Audit Vault.  If already installed, it is highly recommended to perform a health check as described in Audit Vault Server Configuration Report and Health Check Script (Doc ID 1360138.1).
  2. Configure the Oracle database to use Syslog per Integrigy Framework Level 2 requirements. Set the database initialization parameter AUDIT_TRAIL to 'OS' and AUDIT_FILE_DEST to the desired directory. Last, set the initialization parameter AUDIT_SYSLOG_LEVEL to 'LOCAL1.WARNING' to generate Syslog-formatted log files.
  3. Install and activate the Oracle Audit Vault collector agent OSAUD for operating system files.  Collect Syslog formatted logs located by the AUDIT_FILE_DEST parameter.
Level 3

Protect application log and audit tables by creating standard database audit policies and adding these new policies to the Audit Vault Collectors. Create database alerts based on correlations between standard database events and application audit logs.

Oracle E-Business Suite Example

To use the Oracle Audit Vault with the Oracle E-Business Suite, no additional patches are required either for the E-Business Suite or the Oracle database. This is because the Oracle Audit Vault uses only standard Oracle database functionality. 

There are two steps for Level 3. The first is to protect the Oracle E-Business Suite audit tables by enabling standard auditing on them; the second is to build Audit Vault alerts and reports that correlate application and database log information.

Below is an example of event E12 - Protect Application Audit Data

The sign-on audit tables log user logon and navigation activity for the professional forms user interface.  This data needs to be protected.

Steps
  1. Enable Standard Auditing
  2. Create Audit Vault Alert
  3. Forward the Alert to Syslog (this feature is available as of Oracle AVDF version 12.1.2)

To enable standard auditing:

AUDIT UPDATE, DELETE ON APPLSYS.FND_LOGINS BY ACCESS;

AUDIT UPDATE, DELETE ON APPLSYS.FND_LOGIN_RESPONSIBILITIES BY ACCESS;

AUDIT UPDATE, DELETE ON APPLSYS.FND_LOGIN_RESP_FORMS BY ACCESS;

AUDIT UPDATE, DELETE ON APPLSYS.FND_UNSUCCESSFUL_LOGINS BY ACCESS;

 

To create an alert in Audit Vault:

Audit Vault -> Auditor -> Policy -> Alerts -> Create Alert

 

Name: E12 - Modify audit and logging

Condition:

 :TARGET_OWNER='APPLSYS' AND :EVENT_NAME in ('UPDATE','DELETE') AND :TARGET_OBJECT in ('FND_LOGINS','FND_LOGIN_RESPONSIBILITIES','FND_LOGIN_RESP_FORMS','FND_UNSUCCESSFUL_LOGINS')

Example:

 

                             

If you have questions, please contact us at info@integrigy.com

Reference
Auditing, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

PeopleTools 8.54 Features: Dynamic Alert Sliding Windows

Javier Delgado - Thu, 2015-02-05 23:54
One of my first memories in the PeopleSoft world was from my training bootcamp when I joined PeopleSoft. The instructor was showing us the Process Monitor functionality in PeopleSoft 7.5, where the button used to refresh the list of scheduled processes was represented by a fetching dog named Sparky, shown to the right of this paragraph.

It actually surprised me that an application button had a name, but that was the PeopleSoft style. Anyway, the poor dog did not last too long. In the year 2000, with the introduction of PeopleSoft 8, our beloved Sparky was replaced by a boring Refresh button.

PeopleTools 8.54 has pushed this functionality to the next generation, making it potentially redundant. One of the new features in this release is the ability to show the status of the process within the same page where it is scheduled. This is a major usability improvement, as users do not need to navigate to Process Monitor to check the status of the process instance. True, in previous PeopleTools versions there was also the possibility of running the process with output to Window, which, using a REN Server, would achieve a similar result. The main drawback of the REN Server approach is that it opened a new page/tab even before the process was finished, making the navigation more complicated.

The new functionality is called Dynamic Alert Sliding Windows, which is still more boring than Sparky, but what matters is the functionality, not the name. These notifications are enabled in the Process Scheduler System Settings page:


In this page, the administrator chooses which statuses are going to be displayed to the user when running a process. As you see, the functionality is quite easy to set up and a significant step forward in the usability of batch process scheduling.

Note (Feb 6th 2015): There is a great blog post by Srinivas Reddy demonstrating this functionality from a user perspective. Here is the link: http://srinivasreddy18.blogspot.it/2015/02/peopletools-854-dynamic-alert-sliding.html




Alliance 2015

Jim Marion - Thu, 2015-02-05 16:42

I am looking forward to seeing everyone at the Alliance conference in Nashville next month. I was just reviewing my schedule and see lots of interesting technical sessions (as always). If you have room in your schedule, I invite you to attend my session on Tuesday, Mar 17, 2015 from 11:15 AM to 12:15 PM in Presidential D. The session is titled PeopleSoft PeopleTools Developer: Tips and Techniques. If you can't make it to my session, then perhaps I'll see you shortly thereafter at Meet the Experts from 1:45 to 2:45 (table 11)? I'll be around the conference all week and will be working in the demo grounds when I'm not attending sessions. See you in a few weeks!

Steps to Blackout Agent of Cloud Control 12c

Pakistan's First Oracle Blog - Wed, 2015-02-04 18:09
1) Set the environment to the cloud control agent. You can get the agent name from the /etc/oratab file.

myserver: $ . oraenv
ORACLE_SID = [ORCL] ? agent12c

2) Check which targets are being monitored by the cloud control agent on this server:

myserver: $ emctl config agent listtargets
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
[MYSERVER, host]
[MYSERVER:3872, oracle_emd]
[ORCL, oracle_database]

3) Check if there is any existing blackout of agent on this server:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
No Blackout registered.

4) Start the blackout:

myserver: $ emctl start blackout orcl_down_20150204 ORCL:oracle_database
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout orcl_down_20150204 added successfully
EMD reload completed successfully

5) Again check the status of the blackout:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackoutname = orcl_down_20150204
Targets = (ORCL:oracle_database,)
Time = ({2015-02-04|16:51:37,|} )
Expired = False

6) Stop the blackout:

myserver: $ emctl stop blackout orcl_down_20150204
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
Blackout orcl_down_20150204 stopped successfully
EMD reload completed successfully

7) Again check the status of blackout:

myserver: $ emctl status blackout
Oracle Enterprise Manager Cloud Control 12c Release 4 
Copyright (c) 1996, 2014 Oracle Corporation.  All rights reserved.
No Blackout registered.
Categories: DBA Blogs

Developing On-Device Java Mobile Apps for iOS...and Android Too

Shay Shmeltzer - Wed, 2015-02-04 15:25

At the last JavaOne conference I presented a session titled "Developing On-Device Java Mobile Apps for iOS...and Android Too"

The recording of this session just became available, and I wanted to share it with you.

This session should be a good introduction to how Oracle enables Java developers to take their skills to the mobile world.

The first 28 minutes provide the overview, but if you are not into slides, fast forward to minute 29 and start watching the extensive demo of developing an iOS application with Java and Eclipse. 



Categories: Development

Introduction to MongoDB Security

Tugdual Grall - Wed, 2015-02-04 12:12
View it on my new blog. Last week at the Paris MUG, I had a quick chat about security and MongoDB, and I have decided to create this post that explains how to configure the out-of-the-box security available in MongoDB. You can find all information about MongoDB Security in the following documentation chapter: http://docs.mongodb.org/manual/security/ In this post, I won't go into the detail about ...

How To Approach Different Oracle Database Performance Problems

This page has been permanently moved. Please CLICK HERE to be redirected.

Thanks, Craig.

How To Approach Different Oracle Database Performance Problems

Jump Start Your Oracle Database Tuning Effort

Every Oracle Database Administrator will tell you no two performance problems are the same. But a seasoned Oracle DBA recognizes there are similarities...patterns. Fast problem pattern recognition allows us to minimize diagnosis time, so we can focus on developing amazing solutions.

I tend to group Oracle performance problems into four patterns. Quickly exploring these four patterns is what this article is all about.


You Can Not Possibly List Every Problem And Solution
When I teach, some Oracle Database Administrators want me to outline every conceivable problem along with the solution. Not only is the thought of this exhausting, it's not possible. Even my Stori product uses pattern matching. One of the keys to becoming a fantastic performance analyst is the ability to quickly look at a problem and then decide which diagnosis approach is best. For example, if you don't know the problem SQL (assuming there is one), tracing is not likely to be your best approach.

The Four Oracle Database Performance Patterns
Here are the four performance patterns I tend to group problems into.

The SQL Is Known
Many times there is a well known SQL statement that is responsible for the poor performance. While I will always do a quick Oracle Time Based Analysis (see below) and verify the accused SQL, I will directly attack this problem by tuning with SQL-specific diagnostic and tuning tools.

But... I will also ask a senior application user if the users are using the application correctly. Sometimes new application users try to use a new application like their old application. It's like trying to drive a car while moving your feet as if you were riding a bicycle... not going to work, and it's dangerous!

Business Process Specific

I find that when the business is seriously affected by application performance issues, that's when the "limited budget" is suddenly not so limited. When managers and their businesses are affected, they want action.

When I'm approached to help solve a problem, I always ask how the business is being affected. If I keep hearing about a specific business process or application module I know two things.

First, there are many SQL statements involved. Second, the problem is bounded by a business process or application. This is when I start the diagnostic process with an Oracle Time Based Analysis approach, which will result in multiple solutions to the same problem.

As I teach in my online seminar How To Tune Oracle With An AWR Report, users feel performance through time. So, if our analysis is time based, we can create a quantitative link between our analysis and their experience. If our analysis creates solutions that reduce time, then we can expect the user experience to improve. This combined with my "3 Circle" approach yields spot-on solutions very quickly.

While an Oracle Time Based Analysis is amazing, because Oracle does not instrument CPU consumption we can't answer the question, "What's Oracle doing with all that CPU?" If you want to drill into this topic check out my online seminar, Detailing Oracle CPU Consumption: The Missing Link.

It's Just Slow
How many times have I experienced this... It's Just Slow!


If what the user is attempting to explain is true, the performance issue is affecting a wide range of business processes. The problem is probably not a single issue (but could be) and clearly the key SQL is not known. Again, this is a perfect problem scenario to apply an Oracle Time Based Analysis.

The reason I say this is because an OTBA will look at the problem from multiple perspectives, categorize Oracle time and develop solutions to reduce those big categories of time. If you also do a Unit Of Work Time Based Analysis, you can even anticipate the impact of your solutions! Do an OraPub website search HERE or search my blog for UOWTBA.
Random Incident That Quickly Appears And Vanishes
This is the most difficult problem to fix. Mainly because the problem "randomly" appears and can't be duplicated. (Don't even bother calling Oracle Support to help in this situation.) Furthermore, it's too quick for an AWR report to show its activity, and you don't want to impact the production system by gathering tons of detailed performance statistics.

Even a solid Oracle Time Based Analysis will likely not help in this situation. Again, the problem is performance data collection and retention. The instrumented AWR or Statspack data does not provide enough detail. What we need is step-by-step activity... like a timeline.

Because this type of problem scares both DBAs and business managers, you will likely need to answer questions like this:

  • What is that blip all about?
  • Did this impact users?
  • Has it happened before?
  • Will it happen again?
  • What should we do about it?

The only way I know how to truly diagnose a problem like this is to do a session-level time-line analysis. Thankfully, this is possible using the Oracle Active Session History data. Both v$active_session_history and dba_hist_active_sess_history are absolutely key in solving problems like this.

ASH samples Oracle Database session activity once each second (by default). This is very different than measuring how long something takes, which is the data an AWR report is based upon. Because sampling is non-continuous, a lot of detail can be collected, stored and analyzed.

A time-line type of analysis is so important, I enhanced my ASH tools in my OraPub System Monitor (OSM) toolkit to provide this type of analysis. If you want to check them out, download the OSM toolkit HERE, install it and read the osm/interactive/ash-readme.txt file.

As an example, using these tools you can construct an incident time-line like this:

HH:MM:SS.FFF User/Process Notes
------------ ------------- -----------------
15:18:28.796 suspect (837) started the massive update (see SQL below)

15:28:00.389 user (57) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:28:30.486 user (74) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:29:30.??? - row locks becomes the top wait event (16 locked users)
15:29:50.749 user (83) application hung (row lock on TM_SHEET_LINE_EXPLOR)

15:30:20.871 user (837) suspect broke out of update (implied)
15:30:20.871 user (57) application returned
15:30:20.871 user (74) application returned
15:30:20.871 user (83) application returned

15:30:30.905 smon (721) first smon action since before 15:25:00 (os thread startup)
15:30:50.974 user (837) first wait for undo - suspect broke out of update
15:30:50.974 - 225 active session, now top event (wait for a undo record)

15:33:41.636 smon (721) last PQ event (PX Deq: Test for msg)
15:33:41.636 user (837) application returned to suspect. Undo completed
15:33:51.670 smon (721) last related event (DFS lock handle)

Without ASH seemingly random problems would be a virtually impossible nightmare scenario for an Oracle DBA.

Summary
It's true. You need the right tool for the job. And the same is true when diagnosing Oracle Database performance. What I've done above is group probably 90% of the problems we face as Oracle DBAs into four categories. And each of these categories needs a special kind of tool and/or diagnosis method.

Once we recognize the problem pattern and get the best tool/method involved to diagnose the problem, then we will know the time spent developing amazing solutions is time well spent.

Enjoy your work!

Craig.


Categories: DBA Blogs

Bare-Bones Example of Using WebLogic and Arquillian

Steve Button - Tue, 2015-02-03 16:18
The Arquillian project is proving to be very popular for testing code and applications.  It's particularly useful for Java EE projects since it allows for in-container testing to be performed, enabling unit tests to use dependency injection and all the common services  provided by the Java EE platform.

Arquillian uses the concept of container adapters to allow it to execute test code within a specific test environment. For the Java EE area, most of the Java EE implementations have an adapter that can be used to perform the deployment of the archive under test and to execute and report on the results of the unit tests.
A handy way to see all the WebLogic Server related content on the Arquillian blog is this URL: http://arquillian.org/blog/tags/wls/

For WebLogic Server the current set of adapters is listed here: http://arquillian.org/blog/2015/01/09/arquillian-container-wls-1-0-0-Alpha3/

There are multiple adapters available for use.  Some of them are historical and some are for use with older versions of WebLogic Server (10.3).
We are actively working with the Arquillian team on finalizing the name, version and status of a WebLogic Server adapter. The preferred adapter set from the WebLogic Server perspective is this:


These adapters utilize the WebLogic Server JMX API to perform their tasks and are the adapters used internally by the development teams when working with Arquillian.  They have been tested to work with WebLogic Server [12.1.1, 12.1.2, 12.1.3].  We also have been using them internally with the 12.2.1 version under development to run the CDI TCK and other tests.

To demonstrate WebLogic Server working with Arquillian a bare-bones example is available on GitHub here: https://github.com/buttso/weblogic-with-arquillian

This example has the most basic configuration you can use to employ Arquillian with a Maven project to deploy and execute tests using WebLogic Server 12.1.3.
 
The README.md file in the project contains more details and a longer description.  In summary though:

1. The first step is to add the Arquillian related dependencies in the Maven pom.xml:
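As a rough sketch (not copied from the example project), the test dependencies typically include the Arquillian BOM, the Arquillian JUnit container and a WebLogic Server adapter; the adapter coordinates below are placeholders, so take the real ones from the adapter post linked above or from the project's own pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.arquillian</groupId>
      <artifactId>arquillian-bom</artifactId>
      <version>1.1.7.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <scope>test</scope>
  </dependency>
  <!-- placeholder coordinates for the WebLogic Server remote adapter -->
  <dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-wls-remote-12.1.x</artifactId>
    <version>1.0.0.Alpha3</version>
    <scope>test</scope>
  </dependency>
</dependencies>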

2. The next step is to create an arquillian.xml file that the container adapter uses to connect to the remote server that is being used as the server to run the tests:
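A hedged sketch of such an arquillian.xml, assuming the commonly documented adapter properties (adminUrl, adminUserName, adminPassword, target) and placeholder credentials for a locally running WebLogic Server 12.1.3 instance; the example project's own arquillian.xml is the authoritative version:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="wls-remote" default="true">
    <configuration>
      <!-- placeholder connection details; adjust to your own server -->
      <property name="adminUrl">t3://localhost:7001</property>
      <property name="adminUserName">weblogic</property>
      <property name="adminPassword">welcome1</property>
      <property name="target">AdminServer</property>
    </configuration>
  </container>
</arquillian>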

3. The last step is to create a unit test which is run with Arquillian.  The unit test is responsible for implementing the @Deployment method which constructs an archive to deploy that contains the code to be tested.  The unit test then provides @Test methods in which the deployment is tested to verify its behaviour:


Executing the unit tests, with the associated archive creation and deployment to the server, is performed using the maven test goal:


The tests can be executed directly from IDEs such as NetBeans and Eclipse using the Run Test features:

Executing Tests using NetBeans

Social Coding Resolves JAX-RS and CDI Producer Problem

Steve Button - Tue, 2015-02-03 06:04
The inimitable Bruno Borges picked up a tweet earlier today commenting on a problem using @Produces with non-CDI libraries with WebLogic Server 12.1.3.

The tweeter put his example up on a GitHub repository to share - quite a nice example of using JAX-RS, CDI integration, and of using Arquillian to verify it works correctly. Ticked a couple of boxes for what I've been looking at lately.

Forking his project to have a look at it locally:

https://github.com/buttso/weblogic-producers

Turns out that the issue was quite a simple and common one - a missing reference to the jax-rs 2.0 shared library that is needed to use JAX-RS 2.0 on WebLogic Server 12.1.3. Needs a weblogic.xml to reference that library.
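A minimal weblogic.xml along these lines would do it; the library-ref element is WebLogic's standard way to reference a deployed shared library, and the version values here assume the jax-rs 2.0 library shipped with 12.1.3:

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <library-ref>
    <library-name>jax-rs</library-name>
    <specification-version>2.0</specification-version>
    <exact-match>false</exact-match>
  </library-ref>
</weblogic-web-app>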

I made the changes in a local branch and tested it again:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] producers .......................................... SUCCESS [  0.002 s]
[INFO] bean ............................................... SUCCESS [  0.686 s]
[INFO] web ................................................ SUCCESS [  7.795 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------


With the tests now passing, I pushed the branch to my fork and sent Kuba a pull request to have a look at the changes I made:

https://github.com/buttso/weblogic-producers/tree/steve_work

I now just hope it works in his environment too :-)

The GitHub model is pretty magical really.

Big changes ahead for India's IT majors

Abhinav Agarwal - Tue, 2015-02-03 02:22
My article on challenges confronting the Indian IT majors was published in DNA in January 2015.

Here is the complete text of the article - Big changes ahead for India's IT majors:

Hidden among the noise surrounding the big three of the Indian IT industry - TCS, Wipro, and Infosys - was a very interesting sliver of signal that points to possibly big changes on the horizon. Though Cognizant should be counted among these biggies - based on its size and revenues - let's focus on these three for the time being.

Statements made by the respective CEOs of Infosys and Wipro, and the actions of TCS, provide hints on how these companies plan on addressing the coming headwinds that the Indian IT industry faces. Make no mistake. These are strong headwinds that threaten to derail the mostly good fairy tale of the Indian IT industry. Whether it is the challenge of continuing to show growth on top of a large base - each of these companies is close to or has exceeded ten billion dollars in annual revenues; protecting margins when everyone seems to be in a race to the bottom; operating overseas in the face of unremitting resistance to outsourcing; or finding ways to do business in light of the multiple disruptions thrust by cloud computing, big data, and the Internet of Things, they cannot continue in a business-as-usual model any longer.



For nearly two decades the Indian IT industry has grown at a furious pace, but also grown fat in the process, on a staple diet of low-cost business that relied on the undeniable advantage of labour-cost arbitrage. Plainly speaking, people cost a lot overseas, but they cost a lot less in India. The favourable dollar exchange-rate ensured that four, five (or even ten engineers at one point in time) could be hired in India for the cost of one software engineer in the United States. There was no meaningful incentive to either optimize on staffing, or build value-added skills when people could be retained by offering fifteen per cent salary hikes, every year. Those days are fast fading, and while the Indian IT workforce's average age has continued to inch up, the sophistication of the work performed has not kept pace, resulting in companies paying their employees more and more every year for work that is much the same.

TCS, willy nilly, has brought to the front a stark truth facing much of the Indian IT industry - how to cut costs in the face of a downward pressure on most of the work it performs, which has for the most part remained routine and undifferentiated. Based on a remark made by its HR head on "layoffs" and "restructuring" that would take place over the course of 2015, the story snowballed into a raging controversy. It was alleged that TCS was planning on retrenching tens of thousands of employees - mostly senior employees who cost more than college graduates with only a few years of experience. Cursory and level-headed thinking would have revealed that prima-facie any such large layoffs could not be true. But such is the way with rumours - they have long legs. What however remains unchanged is the fact that without more value-based business, an "experienced" workforce is a drag on margins. It's a liability, not an asset. Ignore, for a minute, the absolute worst way in which TCS handled the public relations fiasco arising out of its layoff rumours - something even its CEO, N Chandraskaran, acknowledged. Whether one likes it or not, so-called senior resources at companies that cannot lay claim to skills that are in demand will find themselves under the dark cloud of layoffs. If you prefer, call them "involuntary attrition", "labour cost rationalization", or anything else. The immediate reward of a lowered loaded cost number will override any longer-term damage such a step may involve. If it is a driver for TCS, it will be a driver for Wipro and Infosys.

Infosys, predictably, and as I had written some six months back, is trying to use the innovation route to find its way to both sustained growth and higher margins. Its CEO, Vishal Sikka, certainly has the pedigree to make innovation succeed. His words have unambiguously underlined his intention to pursue, acquire, or fund innovation. Unsurprisingly, there are several challenges to this approach. First, outsourced innovation is open to market risks. If you invest early enough, you will get in at lower valuations, but you will also have to cast a wider net, which requires more time and focus. Invest later, and you pay through your nose by way of sky-high valuations. Second, external innovation breeds resentment internally. It sends the message that the company does not consider its own employees "good enough" to innovate. To counter this perception, Vishal has exhorted Infosys employees "to innovate proactively on every single thing they are working on." This is a smart strategy. It is low cost, low risk, and a big morale booster. However, it also distracts. Employees can easily get distracted by the "cool" factor of doing what they believe is innovative thinking. "20%" may well be a myth in any case. How does a company put a process in place that can evaluate, nurture, and manage innovative ideas coming out of tens of thousands of employees? Clearly, there are issues to be balanced. The key to success, like in most other things, will lie in execution - as Ram Charan has laid out in his excellent book, unsurprisingly titled "Execution".

Lastly, there is Wipro. In an interview, Wipro's CEO, TK Kurien, announced that Wipro would use "subcontracting to drive growth". This seems to have gone largely unnoticed in the industry. Wipro seems to have realized, on the basis of this statement at least, that it cannot continue to keep sliding down the slippery slope of low-cost undifferentiated work. If the BJP government's vision of developing a hundred cities in India into so-called "Smart Cities" materializes, one could well see small software consulting and services firms sprout up all over India, in Tier 2 and even Tier 3 cities. These firms will benefit from the e-infrastructure available as a result of the Smart Cities initiative on the one hand, and find a ready market for their services that requires a low cost model to begin with on the other. This will leave Wipro free to subcontract low-value, undifferentiated work to smaller companies in smaller cities. A truly virtuous circle. In theory at least. However, even here it would be useful for Wipro to remember the Dell and Asus story. Dell was at one point among the most innovative of computer manufacturers. It kept on giving away more and more of its computer manufacturing business - from motherboard designing, laptop assembly, and so on - to Asus, because it helped Dell keep its margins high while allowing it to focus on what it deemed its core competencies. Soon enough, Asus had learned everything about the computer business, and it launched its own computer brand. The road to commoditization hell is paved with the best intentions of cost-cutting.

While it may appear that these three IT behemoths are pursuing three mutually exclusive strategies, it would be naïve to judge them as an either-or play. Each will likely, and hopefully, pursue a mix of these strategies, focusing more on what it decides fits the company best, and resist the temptation to follow the others in a monkey-see-monkey-do race. Will one of the big three Indian IT majors pull ahead of its peers and compete with IBM, Accenture, and the other majors globally? Watch this space.

Oracle MAA 12c – A Data Plan for Meeting Every Organization’s Needs

VitalSoftTech - Mon, 2015-02-02 22:36
Data is one of the most valuable and crucial assets for any company; hence businesses spend huge amounts of money on their enterprise software to ensure its safety and availability. RAC and clustering are some of the approaches companies use to achieve high availability and performance for their enterprise applications and databases. However, there are still […]
Categories: DBA Blogs

My Oracle Support Release 15.1 is Live!

Joshua Solomin - Mon, 2015-02-02 20:12

My Oracle Support release 15.1 is now live. Improvements include:

All Customer User Administrators (CUAs) can manage and group their users and assets using the Support Identifier Groups (SIGs) feature.
Knowledge Search automatically provides unfiltered results when filters return no results. In addition, product and version details are displayed in bug search results.
The SR platform selector groups common products with the appropriate platform.
Some problem types for non-technical SRs now offer a guided resolution workflow.
In the Proactive Analysis Center: all clickable links are underlined, users see only the reports applicable to them, and columns can be sorted by their headers.



Learn more by viewing the What's new in My Oracle Support video.

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to the glibc "GHOST" vulnerability (CVE-2015-0235). A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

In order to check if your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable
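
For reference, what such detection scripts test boils down to the overflow check from the public GHOST advisory: call gethostbyname_r() with a long hostname made only of digits (which routes through __nss_hostname_digits_dots()) and see whether memory adjacent to the caller-supplied buffer gets overwritten. The following standalone C program is a minimal sketch of that approach, modelled on the widely circulated public test case; it is not the Oracle-supplied script.

/* ghost_check.c - minimal sketch of the GHOST (CVE-2015-0235) overflow test.
 * Modelled on the publicly circulated test case; compile with:
 *   gcc ghost_check.c -o ghost_check
 * Prints "vulnerable" if the canary adjacent to the buffer is overwritten. */
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

static struct {
  char buffer[1024];            /* buffer handed to gethostbyname_r() */
  char canary[sizeof(CANARY)];  /* sits right after the buffer; must stay intact */
} temp = { "buffer", CANARY };

int main(void) {
  struct hostent resbuf;
  struct hostent *result;
  int herrno;

  /* Build a hostname of digits sized so a vulnerable glibc overruns the
   * supplied buffer inside __nss_hostname_digits_dots() by a few bytes. */
  size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char) - 2 * sizeof(char *) - 1;
  char name[sizeof(temp.buffer)];
  memset(name, '0', len);
  name[len] = '\0';

  int retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer),
                               &result, &herrno);

  if (strcmp(temp.canary, CANARY) != 0) {
    puts("vulnerable");          /* canary clobbered: glibc is exploitable */
    exit(EXIT_SUCCESS);
  }
  if (retval == ERANGE) {
    puts("not vulnerable");      /* patched glibc rejects the oversized name */
    exit(EXIT_SUCCESS);
  }
  puts("should not happen");
  exit(EXIT_FAILURE);
}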

The solution and action plan for this vulnerability are available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Scrutinizing Exadata X5 Datasheet IOPS Claims…and Correcting Mistakes

Kevin Closson - Mon, 2015-02-02 19:37

I want to make these two points right out of the gate:

  1. I do not question Oracle’s IOPS claims in Exadata datasheets
  2. Everyone makes mistakes
Everyone Makes Mistakes

Like me. On January 21, 2015, Oracle announced the X5 generation of Exadata. I spent some time studying the datasheets from this product family and also compared the information to prior generations of Exadata, namely the X3 and X4. Yesterday I graphed some of the datasheet numbers from these Exadata products and tweeted the graphs. I’m sorry to report that two of the graphs were faulty–the result of a hasty cut and paste. This post will clear up the mistakes, but I owe an apology to Oracle for incorrectly graphing their datasheet information. Everyone makes mistakes. I fess up when I do. I am posting the fixed slides but will link to the deprecated slides at the end of this post.

We’re Only Human

Wouldn’t IT be a more enjoyable industry if certain IT vendors stepped up and admitted when they’ve made little, tiny mistakes like the one I’m blogging about here? In fact, wouldn’t it be wonderful if some of the exceedingly gruesome mistakes certain IT vendors make would result in a little soul-searching and confession? Yes. It would be really nice! But it’ll never happen–well, not for certain IT companies anyway. Enough of that. I’ll move on to the meat of this post. The rest of this article covers:

  • Three Generations of Exadata IOPS Capability
  • Exadata IOPS Per Host CPU
  • Exadata IOPS Per Flash SSD
  • IOPS Per Exadata Storage Server License Cost
Three Generations of Exadata IOPS Capability

The following chart shows how Oracle has evolved Exadata from the X3 to the X5 EF model with regard to IOPS capability. As per Oracle’s datasheets on the matter these are, of course, SQL-driven IOPS. Oracle would likely show you this chart and nothing else. Why? Because it shows favorable, generational progress in IOPS capability. A quick glance shows that read IOPS improved just shy of 3x and write IOPS capability improved over 4x from the X3 to X5 product releases. These are good numbers. I should point out that the X3 and X4 numbers are the datasheet citations for 100% cached data in Exadata Smart Flash Cache. These models had 4 Exadata Smart Flash Cache PCIe cards in each storage server (aka cell). The X5 numbers I’m focused on reflect the performance of the all-new Extreme Flash (EF) X5 model. It seems Oracle has started to investigate the value of all-flash technology and, indeed, the X5 EF is the top dog in the Exadata line-up. For this reason I chose to graph X5 EF data as opposed to the more pedestrian High Capacity model, which has 12 4TB SATA drives fronted with PCI Flash cards (4 per storage server).

[Chart: Exadata evolution - IOPS capability, X3 / X4 / X5 EF]

The tweets I hastily posted yesterday with the faulty data points aimed to normalize these performance numbers to important factors such as host CPU, SSD count and Exadata Storage Server Software licensing costs. The following set of charts comprises the error-free versions of the tweeted charts.

Exadata IOPS Per Host CPU

Oracle’s IOPS performance citations are based on SQL-driven workloads. This can be seen in every Exadata datasheet. All Exadata datasheets for generations prior to X4 clearly stated that Exadata IOPS are limited by host CPU. That is a very important fact to understand because SQL-driven IOPS is a host metric no matter what your storage is.

Indeed, anyone who studies Oracle Database with SLOB knows how all of that works. SQL-driven IOPS requires host CPU. Sadly, however, Oracle ceased stating the fact that IOPS are host-CPU bound in Exadata as of the advent of the X4 product family. I presume Oracle stopped correctly stating the factual correlation between host CPU and SQL-driven IOPS for only the most honorable of reasons with the best of customers’ intentions in mind.

In case anyone should doubt my assertion that Oracle historically associated Exadata IOPS limitations with host CPU I submit the following screen shot of the pertinent section of the X3 datasheet:

[Screenshot: X3 datasheet excerpt stating that IOPS are limited by database host CPU]

Now that the established relationship between SQL-driven IOPS and host CPU has been demystified, I’ll offer the following chart which normalizes IOPS to host CPU core count:

[Chart: Exadata evolution - IOPS per host CPU core]

I think the data speaks for itself but I’ll add some commentary. Where Exadata is concerned, Oracle gives no choice of host CPU to customers. If you adopt Exadata you will be forced to take the top-bin Xeon SKU with the most cores offered in the respective Intel CPU family. For example, the X3 product used 8-core Sandy Bridge Xeons. The X4 used 12-core Ivy Bridge Xeons and finally the X5 uses 18-core Haswell Xeons. In each of these CPU families there are other processors of varying core counts at the same TDP. For example, the Exadata X5 processor is the E5-2699v3 which is a 145w 18-core part. In the same line of Xeons there is also a 145w 14c part (E5-2697v3) but that is not an option to Exadata customers.

All of this is important since Oracle customers must license Oracle Database software by the host CPU core. The chart shows us that read IOPS per core from X3 to X4 improved 18% but from X4 to X5 we see only a 3.6% increase. The chart also shows that write IOPS/core peaked at X4 and has actually dropped some 9% in the X5 product. These important trends suggest Oracle’s balance between storage plumbing and I/O bandwidth in the Storage Servers is not keeping up with the rate at which Intel is packing cores into the Xeon EP family of CPUs. The nugget of truth that is missing here is whether the 145w 14-core E5-2697v3 might in fact be able to improve this IOPS/core ratio. While such information would be quite beneficial to Exadata-minded customers, the 22% drop in expensive Oracle Database software licensing in such an 18c versus 14c scenario is not beneficial to Oracle–especially not while Oracle is struggling to subsidize its languishing hardware business with gains from traditional software.
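
For what it's worth, here is the back-of-the-envelope arithmetic behind that 22% figure; this is my own sketch, not Oracle's, using only the SKU core counts cited above:

/* core_math.c - back-of-the-envelope sketch (mine, not from the datasheets)
 * of the core-count arithmetic behind the licensing point above. */
#include <stdio.h>

int main(void) {
  /* Top-bin Xeon SKUs Oracle ships per generation: X3 = 8c, X4 = 12c, X5 = 18c. */
  const double x5_cores  = 18.0;  /* E5-2699 v3, 145 W */
  const double alt_cores = 14.0;  /* E5-2697 v3, also 145 W, not offered on Exadata */

  /* Oracle Database is licensed per host core, so a 14c part would mean
   * (18 - 14) / 18 = roughly 22% fewer licensed cores at the same TDP. */
  printf("licensed-core delta, 18c vs 14c: %.1f%%\n",
         (x5_cores - alt_cores) / x5_cores * 100.0);

  /* The per-core chart above simply divides datasheet SQL-driven IOPS by the
   * total number of database host cores in the measured configuration. */
  return 0;
}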

Exadata IOPS Per Flash SSD

Oracle uses their own branded Flash cards in all of the X3 through X5 products. While it may seem like an implementation detail, some technicians consider it important to scrutinize how well Oracle leverages their own components in their Engineered Systems. In fact, some customers expect that adding significant amounts of important performance components, like Flash cards, should pay commensurate dividends. So, before you let your eyes drift to the following graph, please be reminded that the X3 and X4 products came with 4 Gen3 PCI Flash Cards per Exadata Storage Server whereas X5 is fitted with 8 NVMe flash cards. And now, feel free to take a gander at how well Exadata architecture leverages a 100% increase in Flash componentry:

[Chart: Exadata evolution - IOPS per SSD]

This chart helps us visualize the facts sort of hidden in the datasheet information. From Exadata X3 to Exadata X4 Oracle improved IOPS per Flash device by just shy of 100% for both read and write IOPS. On the other hand, Exadata X5 exhibits nearly flat (5%) write IOPS and a troubling drop in read IOPS per SSD device of 22%. Now, all I can do is share the facts. I cannot change people’s belief system–this I know. That said, I can’t imagine how anyone can spin a per-SSD drop of 22%–especially considering the NVMe SSD product is so significantly faster than the X4 PCIe Flash card. By significant I mean the NVMe SSD used in the X5 model is rated at 260,000 random 8KB IOPS whereas the X4 PCIe Flash card was only rated at 160,000 8KB read IOPS. So X5 has double the SSDs–each of which is rated at 63% more IOPS capacity–than the X4, yet IOPS per SSD dropped 22% from the X4 to the X5. That means an architectural imbalance–somewhere. However, since Exadata is a completely closed system you are on your own to find out why doubling resources doesn’t double your performance. All of that might sound like taking shots at implementation details. If that seems like the case then the next section of this article might be of interest.
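
A quick sanity check on those numbers (my own arithmetic, using only the device counts and rated IOPS quoted above) shows why the per-SSD drop matters:

/* per_ssd_math.c - sketch of the per-SSD arithmetic discussed above; the only
 * inputs are the device counts and rated IOPS quoted in the text. */
#include <stdio.h>

int main(void) {
  const double x4_cards_per_cell = 4.0, x4_rated_iops = 160000.0;  /* PCIe flash card */
  const double x5_ssds_per_cell  = 8.0, x5_rated_iops = 260000.0;  /* NVMe SSD */

  /* Each X5 device is rated roughly 63% faster than each X4 device. */
  printf("per-device rated uplift: %.1f%%\n",
         (x5_rated_iops / x4_rated_iops - 1.0) * 100.0);

  /* Raw flash capability per storage server therefore grew about 3.25x... */
  printf("rated per-cell uplift: %.2fx\n",
         (x5_ssds_per_cell * x5_rated_iops) / (x4_cards_per_cell * x4_rated_iops));

  /* ...yet delivered read IOPS per SSD fell 22%, so delivered per-cell read
   * IOPS grew only about 2 x 0.78 = 1.56x despite doubling the devices. */
  printf("delivered per-cell uplift implied by a 22%% per-SSD drop: %.2fx\n",
         (x5_ssds_per_cell / x4_cards_per_cell) * (1.0 - 0.22));
  return 0;
}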

IOPS Per Exadata Storage Server License Cost

As I wrote earlier in this article, both Exadata X3 and Exadata X4 used PCIe Flash cards for accelerating IOPS. Each X3 and X4 Exadata Storage Server came with 12 hard disk drives and 4 PCIe Flash cards. Oracle licenses Exadata Storage Server Software by the hard drive in X3/X4 and by the NVMe SSD in the X5 EF model. To that end the license “basis” is 12 units for X3/X4 and 8 for X5. Already readers are breathing a sigh of relief because a smaller license basis must surely mean a lower total license cost. Surely not! Exadata X3 and X4 list price for Exadata Storage Server Software was $10,000 per disk drive for an extended price of $120,000 per storage server. The X5 EF model, on the other hand, prices Exadata Storage Server Software at $20,000 per NVMe SSD for an extended price of $160,000 per Exadata Storage Server. With these values in mind, feel free to direct your attention to the following chart which graphs IOPS per Exadata Storage Server Software list price (IOPS/license$$):

[Chart: Exadata evolution - IOPS per license cost]

The trend in the X3 to X4 timeframe was a doubling of write IOPS/license$$ and just short of a 100% improvement in read IOPS/license$$. In stark contrast, however, the X5 EF product delivers only a 57% increase in write IOPS/license$$ and a troubling, tiny, 17% increase in read IOPS/license$$. Remember, X5 has 100% more SSD componentry when compared to the X3 and X4 products.
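
The arithmetic behind that comparison, again using only the list prices and license basis stated above, looks like this (my own sketch; the IOPS value is a placeholder to be taken from the datasheets in the References section):

/* license_math.c - sketch of the IOPS-per-license-dollar arithmetic above;
 * only the list prices and license basis quoted in the text are used, and the
 * IOPS figure is a placeholder. */
#include <stdio.h>

int main(void) {
  /* Exadata Storage Server Software list price per storage server. */
  const double x3_x4_license = 12 * 10000.0;  /* 12 disk drives x $10,000 = $120,000 */
  const double x5_ef_license =  8 * 20000.0;  /*  8 NVMe SSDs   x $20,000 = $160,000 */

  printf("X3/X4 license per cell: $%.0f\n", x3_x4_license);
  printf("X5 EF license per cell: $%.0f (%.0f%% higher despite the smaller basis)\n",
         x5_ef_license, (x5_ef_license / x3_x4_license - 1.0) * 100.0);

  /* Normalization used in the chart; substitute real datasheet IOPS. */
  const double DATASHEET_IOPS = 0.0;            /* PLACEHOLDER: SQL IOPS for the same scope */
  const double LICENSE_COST   = x5_ef_license;  /* must match the scope of the IOPS figure */
  if (DATASHEET_IOPS > 0)
    printf("IOPS per license dollar: %.2f\n", DATASHEET_IOPS / LICENSE_COST);
  return 0;
}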

Summary

No summary needed. At least I don’t think so.

About Those Faulty Tweeted Graphs

As promised, I’ve left links to the faulty graphs I tweeted here:

Faulty / Deleted Tweet Graph of Exadata IOPS/SSD: http://wp.me/a21zc-1ek
Faulty / Deleted Tweet Graph of Exadata IOPS/license$$: http://wp.me/a21zc-1ej

References

Exadata X3-2 datasheet: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x3-2-ds-1855384.pdf
Exadata X4-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata X5-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-x5-2-ds-2406241.pdf
X4 SSD info: http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80/overview/index.html
X5 SSD info: http://docs.oracle.com/cd/E54943_01/html/E54944/gokdw.html#scrolltoc
Engineered Systems Price List: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf , http://www.ogs.state.ny.us/purchase/prices/7600020944pl_oracle.pdf


Filed under: oracle
