
Feed aggregator

APEX 5.0 - Run Applications in New Tabs

Denes Kubicek - Wed, 2015-02-04 02:31
If you are using Firefox you will probably have an issue running pages from the APEX Builder 5.0 in new tabs. The links now work differently, and clicking the "Save and Run Page" button will normally open a new window. To get the page opened in a new tab instead, you will need the Tab Mix Plus plugin. Once you have it, change its settings as shown in the screenshots below.



Categories: Development

vSphere 6.0 improvements for Windows Failover Clusters

Yann Neuhaus - Wed, 2015-02-04 01:57

A couple of days ago, VMware announced vSphere 6.0. I guess many of our customers have been waiting for this new release, and I will probably see this new version rolled out on top of their virtual SQL Server machines in the next few months. If you missed this event, you can still register here.

I'm not a VMware expert, but vSphere 6.0 seems to be a storage-oriented release with a lot of improvements and new functionality (including Virtual SAN 6.0). Among the long list of improvements, one may interest many of our customers: using vMotion with pRDM disks will be possible in the future. As a reminder, pRDM disks are a prerequisite for Microsoft Failover Cluster virtual machines (in CAB scenarios).

There are also other interesting features that will probably benefit our SQL Server instances, such as storage IOPS reservation and application-level backup and restore of SQL Server (as well as Exchange and SharePoint) with the new vSphere Data Protection capabilities. I will surely blog about it in the coming months…

Happy virtualization!

How To Approach Different Oracle Database Performance Problems

Jump Start Your Oracle Database Tuning Effort
Every Oracle Database Administrator will tell you no two performance problems are the same. But a seasoned Oracle DBA recognizes there are similarities...patterns. Fast problem pattern recognition allows us to minimize diagnosis time, so we can focus on developing amazing solutions.

I tend to group Oracle performance problems into four patterns. Quickly exploring these four patterns is what this article is all about.


You Can Not Possibly List Every Problem And Solution
When I teach, some Oracle Database Administrators want me to outline every conceivable problem along with its solution. Not only is the thought of this exhausting, it's not possible. Even my Stori product uses pattern matching. One of the keys to becoming a fantastic performance analyst is the ability to quickly look at a problem and then decide which diagnostic approach is best. For example, if you don't know the problem SQL (assuming there is one), tracing is not likely to be your best approach.

The Four Oracle Database Performance Patterns
Here are the four performance patterns I tend to group problems into.

The SQL Is Known
Many times there is a well-known SQL statement that is responsible for the poor performance. While I will always do a quick Oracle Time Based Analysis (see below) and verify the accused SQL, I will directly attack this problem by tuning with SQL-specific diagnostic and tuning tools.

But... I will also ask a senior application user if the users are using the application correctly. Sometimes new application users try to use a new application like their old application. It's like trying to drive a car by moving your feet as if you were riding a bicycle... not going to work, and it's dangerous!

Business Process Specific
I find that when the business is seriously affected by application performance issues, that's when the "limited budget" is suddenly not so limited. When managers and their businesses are affected, they want action.

When I'm approached to help solve a problem, I always ask how the business is being affected. If I keep hearing about a specific business process or application module I know two things.

First, there are many SQL statements involved. Second, the problem is bounded by a business process or application. This is when I start the diagnostic process with an Oracle Time Based Analysis approach, which will result in multiple solutions to the same problem.

As I teach in my online seminar How To Tune Oracle With An AWR Report, users feel performance through time. So, if our analysis is time based, we can create a quantitative link between our analysis and their experience. If our analysis creates solutions that reduce time, then we can expect the user experience to improve. This, combined with my "3 Circle" approach, yields spot-on solutions very quickly.

While an Oracle Time Based Analysis is amazing, because Oracle does not instrument CPU consumption, we can't answer the question, "What's Oracle doing with all that CPU?" If you want to drill into this topic, check out my online seminar, Detailing Oracle CPU Consumption: The Missing Link.
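For readers who want to see where such time-based numbers come from, here is a minimal starting-point sketch using only standard dictionary views (an illustration only, not the full OTBA method):

-- Total database time vs. CPU time, system-wide, from the time model.
-- v$sys_time_model reports values in microseconds.
SELECT stat_name,
       ROUND(value / 1000000, 1) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');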

It's Just Slow
How many times have I experienced this... It's Just Slow!


If what the user is attempting to explain is true, the performance issue is affecting a wide range of business processes. The problem is probably not a single issue (but could be), and clearly the key SQL is not known. Again, this is a perfect problem scenario to apply an Oracle Time Based Analysis.

The reason I say this is because an OTBA will look at the problem from multiple perspectives, categorize Oracle time and develop solutions to reduce those big categories of time. If you also do a Unit Of Work Time Based Analysis, you can even anticipate the impact of your solutions! Do an OraPub website search HERE or search my blog for UOWTBA.
Random Incident That Quickly Appears And Vanishes
This is the most difficult type of problem to fix, mainly because the problem "randomly" appears and can't be duplicated. (Don't even bother calling Oracle Support to help in this situation.) Furthermore, it's too quick for an AWR report to show its activity, and you don't want to impact the production system by gathering tons of detailed performance statistics.

Even a solid Oracle Time Based Analysis will likely not help in this situation. Again, the problem is performance data collection and retention. The instrumented AWR or Statspack data does not provide enough detail. What we need is step-by-step activity... like a timeline.

Because this type of problem scares both DBAs and business managers, you will likely need to answer questions like this:

  • What is that blip all about?
  • Did this impact users?
  • Has it happened before?
  • Will it happen again?
  • What should we do about it?

The only way I know how to truly diagnose a problem like this is to do a session-level time-line analysis. Thankfully, this is possible using the Oracle Active Session History data. Both v$active_session_history and dba_hist_active_sess_history are absolutely key in solving problems like this.

ASH samples Oracle Database session activity once each second (by default). This is very different from measuring how long something takes, which is the data an AWR report is based upon. Because sampling is non-continuous, a lot of detail can be collected, stored and analyzed.
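As a minimal sketch of this kind of drill-down (the time window below is a hypothetical example, not the incident data that follows):

-- Sample-by-sample session activity for a suspect window.
-- v$active_session_history holds recent samples; use
-- dba_hist_active_sess_history for older history.
SELECT TO_CHAR(sample_time, 'HH24:MI:SS.FF3') AS sample_time,
       session_id,
       sql_id,
       event,
       blocking_session
FROM   v$active_session_history
WHERE  sample_time BETWEEN TIMESTAMP '2015-02-02 15:15:00'
                       AND TIMESTAMP '2015-02-02 15:35:00'
ORDER  BY sample_time, session_id;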

A time-line type of analysis is so important, I enhanced my ASH tools in my OraPub System Monitor (OSM) toolkit to provide this type of analysis. If you want to check them out, download the OSM toolkit HERE, install it and read the osm/interactive/ash-readme.txt file.

As an example, using these tools you can construct an incident time-line like this:

HH:MM:SS.FFF User/Process Notes
------------ ------------- -----------------
15:18:28.796 suspect (837) started the massive update (see SQL below)

15:28:00.389 user (57) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:28:30.486 user (74) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:29:30.??? - row locks becomes the top wait event (16 locked users)
15:29:50.749 user (83) application hung (row lock on TM_SHEET_LINE_EXPLOR)

15:30:20.871 user (837) suspect broke out of update (implied)
15:30:20.871 user (57) application returned
15:30:20.871 user (74) application returned
15:30:20.871 user (83) application returned

15:30:30.905 smon (721) first smon action since before 15:25:00 (os thread startup)
15:30:50.974 user (837) first wait for undo - suspect broke out of update
15:30:50.974 - 225 active session, now top event (wait for a undo record)

15:33:41.636 smon (721) last PQ event (PX Deq: Test for msg)
15:33:41.636 user (837) application returned to suspect. Undo completed
15:33:51.670 smon (721) last related event (DFS lock handle)

Without ASH, seemingly random problems would be a virtually impossible nightmare scenario for an Oracle DBA.
Summary
It's true. You need the right tool for the job. And the same is true when diagnosing Oracle Database performance. What I've done above is group probably 90% of the problems we face as Oracle DBAs into four categories. And each of these categories needs a special kind of tool and/or diagnosis method.

Once we recognize the problem pattern and get the best tool/method involved to diagnose the problem, then we will know the time spent developing amazing solutions is time well spent.

Enjoy your work!

Craig.


Categories: DBA Blogs

Bare-Bones Example of Using WebLogic and Arquillian

Steve Button - Tue, 2015-02-03 16:18
The Arquillian project is proving to be very popular for testing code and applications. It's particularly useful for Java EE projects since it allows in-container testing to be performed, enabling unit tests to use dependency injection and all the common services provided by the Java EE platform.

Arquillian uses the concept of container adapters to allow it to execute test code within a specific test environment. For the Java EE area, most of the Java EE implementations have an adapter that can be used to perform the deployment of the archive under test and to execute and report on the results of the unit tests.
A handy way to see all the WebLogic Server related content on the Arquillian blog is this URL: http://arquillian.org/blog/tags/wls/

For WebLogic Server, the current set of adapters is listed here: http://arquillian.org/blog/2015/01/09/arquillian-container-wls-1-0-0-Alpha3/

There are multiple adapters available for use. Some of them are historical and some are for use with older versions of WebLogic Server (10.3).

We are actively working with the Arquillian team on finalizing the name, version and status of a WebLogic Server adapter. The preferred adapters from the WebLogic Server perspective are these:


These adapters utilize the WebLogic Server JMX API to perform their tasks and are the adapters used internally by the development teams when working with Arquillian. They have been tested to work with WebLogic Server 12.1.1, 12.1.2 and 12.1.3. We have also been using them internally with the 12.2.1 version under development to run the CDI TCK and other tests.

To demonstrate WebLogic Server working with Arquillian a bare-bones example is available on GitHub here: https://github.com/buttso/weblogic-with-arquillian

This example has the most basic configuration you can use to employ Arquillian with a Maven project to deploy and execute tests using WebLogic Server 12.1.3.
 
The README.md file in the project contains more details and a longer description.  In summary though:

1. The first step is to add the Arquillian related dependencies in the Maven pom.xml:
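As an indication of what that looks like (the BOM version and the WLS adapter coordinates below are assumptions - check the adapter page linked above for the current ones):

<!-- pom.xml fragment (sketch): Arquillian BOM plus JUnit integration;
     the WLS remote adapter coordinates are a placeholder. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.arquillian</groupId>
      <artifactId>arquillian-bom</artifactId>
      <version>1.1.5.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <!-- placeholder for the preferred WLS remote adapter -->
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-wls-remote-12.1.x</artifactId>
    <version>1.0.0.Alpha3</version>
    <scope>test</scope>
  </dependency>
</dependencies>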

2. The next step is to create an arquillian.xml file that the container adapter uses to connect to the remote server that is being used as the server to run the tests:
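A minimal sketch of such a file, assuming a remote server on the default admin port (the property names follow the WLS adapter convention; the credentials and target are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="wls-remote" default="true">
    <configuration>
      <!-- connection details for the remote WebLogic Server under test -->
      <property name="adminUrl">t3://localhost:7001</property>
      <property name="adminUserName">weblogic</property>
      <property name="adminPassword">welcome1</property>
      <!-- the server (or cluster) the test archive is deployed to -->
      <property name="target">AdminServer</property>
    </configuration>
  </container>
</arquillian>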

3. The last step is to create a unit test which is run with Arquillian.  The unit test is responsible for implementing the @Deployment method which constructs an archive to deploy that contains the code to be tested.  The unit test then provides @Test methods in which the deployment is tested to verify its behaviour:
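A bare-bones sketch of such a test (the Greeter bean here is a hypothetical example class, not taken from the repository):

import static org.junit.Assert.assertEquals;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterTest {

    // Builds the archive that Arquillian deploys to the server under test.
    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "test.war")
                         .addClass(Greeter.class)
                         .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // CDI injection works because the test executes in-container.
    @Inject
    private Greeter greeter;

    @Test
    public void shouldGreetByName() {
        assertEquals("Hello, Arquillian", greeter.greet("Arquillian"));
    }
}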


Executing the unit tests, with the associated archive creation and deployment to the server, is performed using the Maven test goal:
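From the project root that is simply:

mvn test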


The tests can be executed directly from IDEs such as NetBeans and Eclipse using the Run Test features:

Executing Tests using NetBeans

Fun with an Android Wear Watch

Oracle AppsLab - Tue, 2015-02-03 15:46

A couple days ago, I was preparing to show some development work Luis (@lsgaleana) did for Android Wear using the Samsung Gear Live.

One of the interesting problems we’ve encountered lately is projecting our device work onto larger screens to show to an audience. I know, bit of a first world problem, which is why I said “interesting.”

At OpenWorld last year, I used an IPEVO camera to project two watches, the Gear Live and the Pebble, using a combination of jewelry felt displays. That worked OK, but the contrast differences between the watches made it a bit tough to see them equally well through the camera.

Plus, any slight movement of the table, and the image shook badly. Not ideal.

Lately, we haven’t been showing the Pebble much, which actually makes the whole process much easier because . . . it’s all Android. An Android Wear watch is just another Android device, so you can project its image to your screen using tools like Android Screen Monitor (ASM) or Android Projector.

Of course, as with any other Android device, you’ll have to put the watch into debugging mode first. If you’re developing for Android Wear, you already know all this, and for the rest of us, the Android Police have a comprehensive how-to hacking guide.

For my purposes, all I needed to do was get adb to recognize the watch. Here are the steps (h/t Android Police):

  • Tap on Wear’s watch face to get a menu of options. Be sure to hit the watch face instead of a notification card.
  • Scroll down the list of options and select Settings.
  • Open About, which is the last option in the list.
  • Find Build number and tap on it seven times, and you’ll get the “You are now a developer!” message.
  • Swipe right (to go back) to the Settings menu.
  • Open Developer options, which is now the last option in the list.
  • Find and set ADB debugging to Enabled.
  • Tap the checkmark button to confirm.
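With debugging enabled, the rest is a quick sanity check from the desktop (a sketch - it assumes the Android SDK platform-tools are on the PATH and that ASM is run from its usual standalone jar):

# Verify the watch shows up as a regular Android device over USB
adb devices

# Launch Android Screen Monitor and pick the watch from its device list
# (asm.jar is the usual distribution name - an assumption here)
java -jar asm.jar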

Now, when I need to show a tablet app driving the Wear watch, I can use adb and ASM to show both screens on my Mac, which I can then project. Like so.

[Screenshot: allTheScreens]

Bonus points, the iPod Touch in that screen is projected using a new feature for QuickTime in Mavericks that works with iOS 8 devices.


Oracle Mobile Suite - Web Service Performance Optimisation with Result Caching

Andrejus Baranovski - Tue, 2015-02-03 13:18
One of the main advantages of Oracle Mobile Suite is Service Bus and its SOAP/REST web service transformation (more here - Oracle Mobile Suite Service Bus REST and ADF BC SOAP). In addition, you get a very nice performance improvement: there is out-of-the-box caching of Web Service result sets with Coherence. I'm going to demonstrate how it works - all out of the box, really simple.

You can define caching for an external service (the ADF BC SOAP web service in my case) simply by editing the service definition. This is our business service running on the WebLogic backend, where the actual processing happens. Naturally we would like to eliminate duplicate calls and retrieve previous result sets from the cache stored in the Service Bus layer:


The wizard allows you to enable result caching with a cache token expression. In my case, nameVar is a variable from the ADF BC SOAP web service's findEmployees method. You can use a wizard to construct the expression. At runtime it will cache result sets for all requests according to the specified expression - basically it will cache all invocations of the findEmployees method and track the cached data by the nameVar parameter value. You can specify a cache expiration time; if data is updated more often, the expiration time should be shorter. The expiration time can even be dynamic, taken from a request variable:
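Purely as an illustration (the namespace prefix and element names are assumptions, not taken from the sample application), a token expression keyed on that parameter could look like this:

$body/emp:findEmployees/emp:nameVar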


That's it - this was really simple. All the Coherence complexity is hidden and you don't need to worry about it.

I will be running a MAF application to perform a test. Here I'm searching by the value th:


If we check the WebLogic log where the ADF BC SOAP web service is deployed, we can see that a SQL query was executed with nameVar=th (as per the search request on the MAF application screen):


Let's now run a different query and search for nameVar=ew:


Now repeat the previous search with nameVar=th. This time no query should be executed on the WebLogic server running the ADF BC SOAP web service; the result should be taken from the cache stored in Service Bus (as directed by the performance optimization tuning above):


Indeed, there was no SQL query executed for nameVar=th this time; the data was taken from the cache - a great performance optimisation:


Download sample application (it contains ADF BC SOAP web service, Service Bus and MAF implementations) - MobileServiceBusApp_v4.zip.

Internet browsers at the heart of enterprise hacks, says study

Chris Foot - Tue, 2015-02-03 09:47

Which browser are your employees using? Their choices may affect how secure your digital enterprise assets are. 

Microsoft's Internet Explorer is often characterized as the least secure of the major browsers - compared with Firefox, Chrome and Safari - but is this really the case? What features are indicative of an insecure Web browser? What sort of techniques are hackers using to access databases through Internet browsers?

The point of infiltration 
According to a study conducted by the Ponemon Institute, and sponsored by Spikes Security, insecure Web browsers caused 55 percent of malware infections over the course of 2014. Both organizations surveyed IT professionals for the report, the majority of whom maintained that their current security tools are incapable of detecting Web-borne malware. 

"The findings of this research reveal that current solutions are not stopping the growth of Web-borne malware," said Ponemon Institute Chairman and Founder Dr. Larry Ponemon, as quoted by Dark Reading. "Almost all IT practitioners in our study agree that their existing security tools are not capable of completely detecting Web-borne malware, and the insecure Web browser is a primary attack vendor. 

The Ponemon Institute and Spikes Security also made the following discoveries: 

  • 69 percent of survey participants maintained that browser-borne malware is more prevalent than it was a year ago. 
  • Nearly half of organizations reported that Web-based malware bypassed their layered firewall defense systems.
  • 38 percent of respondents maintained sandboxing and content analysis engines still allowed Web-borne malware to infect corporate machines. 

Which is the biggest target? 
Dark Reading acknowledged that the number of flaws discovered in Chrome, Firefox, Internet Explorer, Opera and Safari decreased 19 percent in 2014. Google attributed this success to its bug bounty program. Last year, the tech giant paid $1.5 million to researchers who found more than 500 bugs in its Web browser. 

However, Firefox was the most exploited browser at Pwn2Own 2014, a hacking challenge hosted by Hewlett-Packard, according to eWEEK. The open source Web browser possessed four zero-day flaws, all of which were taken advantage of. Since the March 2014 event, Firefox has patched these vulnerabilities.

Yet it's important to determine which browsers are the most popular among professionals and consumers alike, as this will dictate hackers' priorities. It makes more sense for a cybercriminal to target a heavily used browser than to attack one that is sparingly used. W3schools.com regards Chrome as the most frequently used browser, so it's likely that hackers are focusing their efforts on this particular one.

The post Internet browsers at the heart of enterprise hacks, says study appeared first on Remote DBA Experts.

A Primer on Oracle Documents Cloud Service Administration - Part 1

WebCenter Team - Tue, 2015-02-03 08:42

Author: Thyaga Vasudevan, Senior Director, Oracle WebCenter Product Management


At OpenWorld last year, Oracle announced Oracle Documents Cloud Service - an enterprise-grade, secure, cloud-based file sharing and sync solution. The November edition of the Oracle Fusion Middleware Newsletter ran a feature on it, giving a general overview of Oracle Documents Cloud Service (DOCS). In addition to strong security features and deeper integration with on-premise content management, one of the best features of DOCS is how quickly you can get it up and running. On this blog, time and again, we will dig deeper into DOCS use cases and functionality. And if there are topics you would like to see covered, please let us know by leaving a comment.

As an Oracle Documents Cloud Service administrator, you want to be confident your organization is getting the most from the service. In this three-part series, I will walk you through five simple tips to get started and get your users on-boarded to Oracle Documents Cloud Service quickly and easily.

My post today focuses on:

Tip 1: Adding Users to Oracle Documents Cloud Service


There are two ways to provision users to the Documents Cloud Service:

Option 1. Adding a Single User at a Time

  1. Sign in to the My Services application.
    a. Please note that you can navigate to My Services by signing in to Oracle Cloud from https://cloud.oracle.com/home.
    b. You can also access the dashboard directly by using the My Services URL, which is dependent on the data center for your service, for example: https://myservices.us2.oraclecloud.com/mycloud

  2. In the My Services application, click the Users tab in the upper right.
  3. Click Add, and then provide a first and last name and an email address for the new user, and assign the "Oracle Documents Cloud Service User" role for each user.


Option 2. Bulk Import Users

You can also add users to the service by importing a set of users from a file. Click the Import button. In the Import Users dialog, select a file from your local system that contains attributes as detailed below.


The user file contains a header line, followed by a line for each user to be created. Each user line contains first name, last name, and email address:


First Name,Last Name,Email

John,Smith,john.smith@acme.com

Anne,Taylor,anne.taylor@acme.com


Next, you have to assign the Oracle Documents Cloud Service User role to the imported users using the "Batch Assign Role" option.


Clicking this option will prompt you to upload a CSV file.


From the Role drop down, select “Oracle Documents Cloud Service User” and click Assign.

The CSV file contains a header line, followed by a line for each user to be created. Each user line contains only the email address.


Email

john.smith@acme.com

anne.taylor@acme.com


And there you have it - it's that simple.

In my next post, we will look at assigning user quota and resetting a user's password.

Webinar Followup

Randolf Geist - Tue, 2015-02-03 06:47
Thanks everyone who attended my recent webinar at AllThingsOracle.com.

The link to the webinar recording can be found here.

The presentation PDF can be downloaded here. Note that this site uses a non-default HTTP port, so if you're behind a firewall this might be blocked.

Thanks again to AllThingsOracle.com and Amy Burrows for hosting the event.

Social Coding Resolves JAX-RS and CDI Producer Problem

Steve Button - Tue, 2015-02-03 06:04
The inimitable Bruno Borges picked up a tweet earlier today commenting on a problem using @Produces with non-CDI libraries on WebLogic Server 12.1.3.

The tweeter put his example up in a GitHub repository to share - quite a nice example of using JAX-RS and CDI integration, and of using Arquillian to verify it works correctly. It ticked a couple of boxes for what I've been looking at lately.

Forking his project to have a look at it locally:

https://github.com/buttso/weblogic-producers

It turns out that the issue was quite a simple and common one - a missing reference to the jax-rs:2.0 shared library that is needed to use JAX-RS 2.0 on WebLogic Server 12.1.3. The application needs a weblogic.xml that references that library.
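For reference, a minimal weblogic.xml along these lines - a sketch of the standard library-ref mechanism (the exact-match value is a typical choice, not necessarily what the pull request used):

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- reference the JAX-RS 2.0 shared library registered with WLS 12.1.3 -->
  <library-ref>
    <library-name>jax-rs</library-name>
    <specification-version>2.0</specification-version>
    <exact-match>false</exact-match>
  </library-ref>
</weblogic-web-app>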

I made the changes in a local branch and tested it again:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] producers .......................................... SUCCESS [  0.002 s]
[INFO] bean ............................................... SUCCESS [  0.686 s]
[INFO] web ................................................ SUCCESS [  7.795 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

With the tests now passing, I pushed the branch to my fork and sent Kuba a pull request to have a look at the changes I made:

https://github.com/buttso/weblogic-producers/tree/steve_work

I now just hope it works in his environment too :-)

The GitHub model is pretty magical really.

Big changes ahead for India's IT majors

Abhinav Agarwal - Tue, 2015-02-03 02:22
My article on challenges confronting the Indian IT majors was published in DNA in January 2015.

Here is the complete text of the article - Big changes ahead for India's IT majors:

Hidden among the noise surrounding the big three of the Indian IT industry - TCS, Wipro, and Infosys - was a very interesting sliver of signal that points to possibly big changes on the horizon. Though Cognizant should be counted among these biggies - based on its size and revenues - let's focus on these three for the time being.

Statements made by the respective CEOs of Infosys and Wipro, and the actions of TCS, provide hints on how these companies plan on addressing the coming headwinds that the Indian IT industry faces. Make no mistake. These are strong headwinds that threaten to derail the mostly good fairy tale of the Indian IT industry. Whether it is the challenge of continuing to show growth on top of a large base - each of these companies is close to or has exceeded ten billion dollars in annual revenues; protecting margins when everyone seems to be in a race to the bottom; operating overseas in the face of unremitting resistance to outsourcing; or finding ways to do business in light of the multiple disruptions thrust by cloud computing, big data, and the Internet of Things, they cannot continue in a business-as-usual model any longer.



For nearly two decades the Indian IT industry has grown at a furious pace, but also grown fat in the process, on a staple diet of low-cost business that relied on the undeniable advantage of labour-cost arbitrage. Plainly speaking, people cost a lot overseas, but they cost a lot less in India. The favourable dollar exchange-rate ensured that four, five (or even ten engineers at one point in time) could be hired in India for the cost of one software engineer in the United States. There was no meaningful incentive to either optimize on staffing, or build value-added skills when people could be retained by offering fifteen per cent salary hikes, every year. Those days are fast fading, and while the Indian IT workforce's average age has continued to inch up, the sophistication of the work performed has not kept pace, resulting in companies paying their employees more and more every year for work that is much the same.

TCS, willy-nilly, has brought to the front a stark truth facing much of the Indian IT industry - how to cut costs in the face of a downward pressure on most of the work it performs, which has for the most part remained routine and undifferentiated. Based on a remark made by its HR head on "layoffs" and "restructuring" that would take place over the course of 2015, the story snowballed into a raging controversy. It was alleged that TCS was planning on retrenching tens of thousands of employees - mostly senior employees who cost more than college graduates with only a few years of experience. Cursory and level-headed thinking would have revealed that prima-facie any such large layoffs could not be true. But such is the way with rumours - they have long legs. What, however, remains unchanged is the fact that without more value-based business, an "experienced" workforce is a drag on margins. It's a liability, not an asset. Ignore, for a minute, the absolute worst way in which TCS handled the public relations fiasco arising out of its layoff rumours - something even its CEO, N. Chandrasekaran, acknowledged. Whether one likes it or not, so-called senior resources at companies that cannot lay claim to skills that are in demand will find themselves under the dark cloud of layoffs. If you prefer, call them "involuntary attrition", "labour cost rationalization", or anything else. The immediate reward of a lowered loaded cost number will override any longer-term damage such a step may involve. If it is a driver for TCS, it will be a driver for Wipro and Infosys.

Infosys, predictably, and as I had written some six months back, is trying to use the innovation route to find its way to both sustained growth and higher margins. Its CEO, Vishal Sikka, certainly has the pedigree to make innovation succeed. His words have unambiguously underlined his intention to pursue, acquire, or fund innovation. Unsurprisingly, there are several challenges to this approach. First, outsourced innovation is open to market risks. If you invest early enough, you will get in at lower valuations, but you will also have to cast a wider net, which requires more time and focus. Invest later, and you pay through your nose by way of sky-high valuations. Second, external innovation breeds resentment internally. It sends the message that the company does not consider its own employees "good enough" to innovate. To counter this perception, Vishal has exhorted Infosys employees "to innovate proactively on every single thing they are working on." This is a smart strategy. It is low cost, low risk, and a big morale booster. However, it also distracts. Employees can easily get distracted by the "cool" factor of doing what they believe is innovative thinking. "20%" may well be a myth in any case. How does a company put a process in place that can evaluate, nurture, and manage innovative ideas coming out of tens of thousands of employees? Clearly, there are issues to be balanced. The key to success, like in most other things, will lie in execution - as Ram Charan has laid out in his excellent book, unsurprisingly titled "Execution".

Lastly, there is Wipro. In an interview, Wipro's CEO, TK Kurien, announced that Wipro would use "subcontracting to drive growth". This seems to have gone largely unnoticed in the industry. Wipro seems to have realized, on the basis of this statement at least, that it cannot keep sliding down the slippery slope of low-cost undifferentiated work. If the BJP government's vision of developing a hundred cities in India into so-called "Smart Cities" materializes, one could well see small software consulting and services firms sprout up all over India, in Tier 2 and even Tier 3 cities. These firms will benefit from the e-infrastructure available as a result of the Smart Cities initiative on the one hand, and find a ready market for their services that requires a low-cost model to begin with on the other. This will leave Wipro free to subcontract low-value, undifferentiated work to smaller companies in smaller cities. A truly virtuous circle. In theory at least. However, even here it would be useful for Wipro to remember the Dell and Asus story. Dell was at one point among the most innovative of computer manufacturers. It kept giving away more and more of its computer manufacturing business - from motherboard design to laptop assembly, and so on - to Asus, because it helped Dell keep its margins high while allowing it to focus on what it deemed its core competencies. Soon enough, Asus had learned everything about the computer business, and it launched its own computer brand. The road to commoditization hell is paved with the best intentions of cost-cutting.

While it may appear that these three IT behemoths are pursuing three mutually exclusive strategies, it would be naïve to judge these three strategies as an either-or play. Each will likely, and hopefully, pursue a mix of these strategies, focusing more on what they decide fits their company best, and resist the temptation to follow each other in a monkey-see-monkey-do race. Will one of the big three Indian IT majors pull ahead of its peers and compete with IBM, Accenture, and the other majors globally? Watch this space.

IOUG Collaborate #C15LV

Yann Neuhaus - Tue, 2015-02-03 01:48

The IOUG - Independent Oracle User Group - holds a great event each year: COLLABORATE. This year it takes place April 12-16, 2015 at The Mandalay Bay Resort & Casino in Las Vegas.

I'll be a speaker and a RAC Attack Ninja as well.

IOUG COLLABORATE provides all the real-world technical training you need - not sales pitches. The IOUG Forum presents hundreds of educational sessions on Oracle technology, led by the most informed and accomplished Oracle users and experts in the world, bringing more than 5,500 Oracle technology and applications professionals to one venue for Oracle education, customer exchange and networking.

Installing the Oracle Application Management Pack for Oracle Utilities

Anthony Shorten - Mon, 2015-02-02 22:15

The Application Management Pack for Oracle Utilities is a plugin to the Oracle Enterprise Manager product to allow management, patching and monitoring of Oracle Utilities applications.

To install the pack, use the following steps:

  • If you are a customer who has installed a previous version of the pack, all targets from that pack must be removed and the pack deinstalled prior to using the new version (12.1.0.1.0). This is because the new pack is a completely different set of software, and it is recommended to remove old versions. This is only necessary for this release; future releases will upgrade automatically.
  • Navigate to Setup --> Extensibility --> Plug-ins and search for the "Oracle Utilities Application" plugin. Do not use the "Oracle Utilities" plugin, as that is the previous release.
  • Press "Download" to download the plugin to your repository.
  • Press "Apply" to apply the pack to your OMS console instance. This will install the server components of the pack.
  • From the Plugin Manager (you will be directed there), you can deploy the pack to your OMS using Deploy On Management Servers. This will start the deployment process.
  • After deployment to the server, you can also deploy the plug-in on any licensed Oracle Utilities servers using Deploy On Management Agents. Select the servers from the list.
  • You have now installed the pack.
  • Next, discover and promote the Oracle WebLogic targets for the domain, clusters (if used), servers and application deployments for the Oracle Utilities products.
  • Finally, run discovery against the Oracle Utilities servers to discover the pack-specific targets.

At this point you can create groups on the targets or even dashboards.
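If you prefer scripting, the same deployment can also be driven from the EM command line. A rough sketch using the standard emcli plugin verbs (the plugin identifier and agent name below are placeholders, not the pack's actual internal values):

# Log in to the OMS (assumes EM CLI is already configured against it)
emcli login -username=sysman

# Deploy the plugin to the management server, then to a licensed agent
emcli deploy_plugin_on_server -plugin="<plugin_id>"
emcli deploy_plugin_on_agent -agent_names="host1.example.com:3872" -plugin="<plugin_id>"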

My Oracle Support Release 15.1 is Live!

Joshua Solomin - Mon, 2015-02-02 20:12

My Oracle Support release 15.1 is now live. Improvements include:

  • All Customer User Administrators (CUAs) can manage and group their users and assets using the Support Identifier Groups (SIGs) feature.
  • Knowledge Search automatically provides unfiltered results when filters return no results. In addition, product and version detail displays in bug search results.
  • The SR platform selector groups common products with the appropriate platform.
  • Some problem types for non-technical SRs have guided resolution workflows.
  • In the Proactive Analysis Center: all clickable links are underlined, users only see applicable reports, and column headers can be sorted.



Learn more by viewing the What's new in My Oracle Support video.

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to the glibc GHOST vulnerability (CVE-2015-0235). A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

In order to check if your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable
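For background, this check follows the test idea published in the Qualys advisory for CVE-2015-0235: call gethostbyname_r() with a long, all-digits hostname sized to overflow the buffer used by __nss_hostname_digits_dots(), then see whether a canary value placed after the buffer gets clobbered. A condensed sketch of that published test:

/* Condensed from the public Qualys GHOST advisory test program. */
#define _GNU_SOURCE
#include <errno.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY "in_the_coal_mine"

static struct {
  char buffer[1024];             /* buffer handed to gethostbyname_r()    */
  char canary[sizeof(CANARY)];   /* clobbered only if glibc is vulnerable */
} temp = { "buffer", CANARY };

int main(void) {
  struct hostent resbuf;
  struct hostent *result;
  int herrno;
  char name[sizeof(temp.buffer)];

  /* all-digits hostname sized so a vulnerable glibc overflows the buffer */
  size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char *)
               - 2 * sizeof(char *) - 1;
  memset(name, '0', len);
  name[len] = '\0';

  int retval = gethostbyname_r(name, &resbuf, temp.buffer,
                               sizeof(temp.buffer), &result, &herrno);

  if (strcmp(temp.canary, CANARY) != 0) {
    puts("vulnerable");
    return EXIT_SUCCESS;
  }
  if (retval == ERANGE) {
    puts("not vulnerable");
    return EXIT_SUCCESS;
  }
  puts("should not happen");
  return EXIT_FAILURE;
}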

The solution and action plan for this vulnerability are available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Scrutinizing Exadata X5 Datasheet IOPS Claims…and Correcting Mistakes

Kevin Closson - Mon, 2015-02-02 19:37

I want to make these two points right out of the gate:

  1. I do not question Oracle’s IOPS claims in Exadata datasheets
  2. Everyone makes mistakes
Everyone Makes Mistakes

Like me. On January 21, 2015, Oracle announced the X5 generation of Exadata. I spent some time studying the datasheets from this product family and also compared the information to prior generations of Exadata namely the X3 and X4. Yesterday I graphed some of the datasheet numbers from these Exadata products and tweeted the graphs. I’m sorry  to report that two of the graphs were faulty–the result of hasty cut and paste. This post will clear up the mistakes but I owe an apology to Oracle for incorrectly graphing their datasheet information. Everyone makes mistakes. I fess up when I do. I am posting the fixed slides but will link to the deprecated slides at the end of this post.

We’re Only Human

Wouldn’t IT be a more enjoyable industry if certain IT vendors stepped up and admitted when they’ve made little, tiny mistakes like the one I’m blogging about here? In fact, wouldn’t it be wonderful if some of the exceedingly gruesome mistakes certain IT vendors make would result in a little soul-searching and confession? Yes. It would be really nice! But it’ll never happen–well, not for certain IT companies anyway. Enough of that. I’ll move on to the meat of this post. The rest of this article covers:

  • Three Generations of Exadata IOPS Capability
  • Exadata IOPS Per Host CPU
  • Exadata IOPS Per Flash SSD
  • IOPS Per Exadata Storage Server License Cost
Three Generations of Exadata IOPS Capability

The following chart shows how Oracle has evolved Exadata from the X3 to the X5 EF model with regard to IOPS capability. As per Oracle's datasheets on the matter these are, of course, SQL-driven IOPS. Oracle would likely show you this chart and nothing else. Why? Because it shows favorable, generational progress in IOPS capability. A quick glance shows that read IOPS improved just shy of 3x and write IOPS capability improved over 4x from the X3 to X5 product releases. These are good numbers. I should point out that the X3 and X4 numbers are the datasheet citations for 100% cached data in Exadata Smart Flash Cache. These models had 4 Exadata Smart Flash Cache PCIe cards in each storage server (aka, cell). The X5 numbers I'm focused on reflect the performance of the all-new Extreme Flash (EF) X5 model. It seems Oracle has started to investigate the value of all-flash technology and, indeed, the X5 EF is the top-dog in the Exadata line-up. For this reason I choose to graph X5 EF data as opposed to the more pedestrian High Capacity model which has 12 4TB SATA drives fronted with PCI Flash cards (4 per storage server).

[Chart: exadata-evolution-iops-gold-1]

The tweets I hastily posted yesterday with the faulty data points aimed to normalize these performance numbers to important factors such as host CPU, SSD count and Exadata Storage Server Software licensing costs. The following set of charts are the error-free versions of the tweeted charts.

Exadata IOPS Per Host CPU

Oracle's IOPS performance citations are based on SQL-driven workloads. This can be seen in every Exadata datasheet. All Exadata datasheets for generations prior to X4 clearly stated that Exadata IOPS are limited by host CPU. Indeed, anyone who studies Oracle Database with SLOB knows how all of that works. SQL-driven IOPS requires host CPU. Sadly, however, Oracle ceased stating the fact that IOPS are host-CPU bound in Exadata as of the advent of the X4 product family. I presume Oracle stopped correctly stating the factual correlation between host CPU and SQL-driven IOPS for only the most honorable of reasons with the best of customers' intentions in mind. In case anyone should doubt my assertion that Oracle historically associated Exadata IOPS limitations with host CPU, I submit the following screen shot of the pertinent section of the X3 datasheet:

[Screenshot: X3-datasheet-truth]

Now that the established relationship between SQL-driven IOPS and host CPU has been demystified, I'll offer the following chart which normalizes IOPS to host CPU core count:

[Chart: exadata-evolution-iops-per-core-gold]

I think the data speaks for itself but I'll add some commentary. Where Exadata is concerned, Oracle gives no choice of host CPU to customers. If you adopt Exadata you will be forced to take the top-bin Xeon SKU with the most cores offered in the respective Intel CPU family. For example, the X3 product used 8-core Sandy Bridge Xeons. The X4 used 12-core Ivy Bridge Xeons and finally the X5 uses 18-core Haswell Xeons. In each of these CPU families there are other processors of varying core counts at the same TDP. For example, the Exadata X5 processor is the E5-2699v3, which is a 145w 18-core part. In the same line of Xeons there is also a 145w 14c part (E5-2697v3) but that is not an option to Exadata customers.

All of this is important since Oracle customers must license Oracle Database software by the host CPU core. The chart shows us that read IOPS per core from X3 to X4 improved 18% but from X4 to X5 we see only a 3.6% increase. The chart also shows that write IOPS/core peaked at X4 and has actually dropped some 9% in the X5 product. These important trends suggest Oracle's balance between storage plumbing and I/O bandwidth in the Storage Servers is not keeping up with the rate at which Intel is packing cores into the Xeon EP family of CPUs. The nugget of truth that is missing here is whether the 145w 14-core E5-2697v3 might in fact be able to improve this IOPS/core ratio. While such information would be quite beneficial to Exadata-minded customers, the 22% drop in expensive Oracle Database software licensing in such an 18c versus 14c scenario is not beneficial to Oracle–especially not while Oracle is struggling to subsidize its languishing hardware business with gains from traditional software.

Exadata IOPS Per Flash SSD

Oracle uses their own branded Flash cards in all of the X3 through X5 products. While it may seem like an implementation detail, some technicians consider it important to scrutinize how well Oracle leverages their own components in their Engineered Systems. In fact, some customers expect that adding significant amounts of important performance components, like Flash cards, should pay commensurate dividends. So, before you let your eyes drift to the following graph, please be reminded that the X3 and X4 products came with 4 Gen3 PCI Flash cards per Exadata Storage Server whereas X5 is fit with 8 NVMe flash cards. And now, feel free to take a gander at how well Exadata architecture leverages a 100% increase in Flash componentry:

[Chart: exadata-evolution-iops-per-SSD-gold]

This chart helps us visualize the facts sort of hidden in the datasheet information. From Exadata X3 to Exadata X4, Oracle improved IOPS per Flash device by just shy of 100% for both read and write IOPS. On the other hand, Exadata X5 exhibits nearly flat (5%) write IOPS and a troubling drop in read IOPS per SSD device of 22%. Now, all I can do is share the facts. I cannot change people's belief system–this I know. That said, I can't imagine how anyone can spin a per-SSD drop of 22%–especially considering the NVMe SSD product is so significantly faster than the X4 PCIe Flash card. By significant I mean the NVMe SSD used in the X5 model is rated at 260,000 random 8KB IOPS whereas the X4 PCIe Flash card was only rated at 160,000 8KB read IOPS. So X5 has double the SSDs–each of which is rated at 63% more IOPS capacity–than the X4, yet IOPS per SSD dropped 22% from the X4 to the X5. That means an architectural imbalance–somewhere. However, since Exadata is a completely closed system, you are on your own to find out why doubling resources doesn't double your performance. All of that might sound like taking shots at implementation details. If that seems like the case then the next section of this article might be of interest.

IOPS Per Exadata Storage Server License Cost

As I wrote earlier in this article, both Exadata X3 and Exadata X4 used PCIe Flash cards for accelerating IOPS. Each X3 and X4 Exadata Storage Server came with 12 hard disk drives and 4 PCIe Flash cards. Oracle licenses Exadata Storage Server Software by the hard drive in X3/X4 and by the NVMe SSD in the X5 EF model. To that end the license "basis" is 12 units for X3/X4 and 8 for X5. Already readers are breathing a sigh of relief because less license basis must surely mean less total license cost. Surely not! Exadata X3 and X4 list price for Exadata Storage Server Software was $10,000 per disk drive for an extended price of $120,000 per storage server. The X5 EF model, on the other hand, prices Exadata Storage Server Software at $20,000 per NVMe SSD for an extended price of $160,000 per Exadata Storage Server. With these values in mind, feel free to direct your attention to the following chart which graphs the IOPS per Exadata Storage Server Software list price (IOPS/license$$):

[Chart: exadata-evolution-iops-per-license-cost-gold]

The trend in the X3 to X4 timeframe was a doubling of write IOPS/license$$ and just short of a 100% improvement in read IOPS/license$$. In stark contrast, however, the X5 EF product delivers only a 57% increase in write IOPS/license$$ and a troubling, tiny, 17% increase in read IOPS/license$$. Remember, X5 has 100% more SSD componentry when compared to the X3 and X4 products.

Summary

No summary needed. At least I don’t think so.

About Those Faulty Tweeted Graphs

As promised, I've left links to the faulty graphs I tweeted here:

Faulty / Deleted Tweet Graph of Exadata IOPS/SSD: http://wp.me/a21zc-1ek
Faulty / Deleted Tweet Graph of Exadata IOPS/license$$: http://wp.me/a21zc-1ej

References

Exadata X3-2 datasheet: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x3-2-ds-1855384.pdf
Exadata X4-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata X5-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-x5-2-ds-2406241.pdf
X4 SSD info: http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80/overview/index.html
X5 SSD info: http://docs.oracle.com/cd/E54943_01/html/E54944/gokdw.html#scrolltoc
Engineered Systems Price List: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf , http://www.ogs.state.ny.us/purchase/prices/7600020944pl_oracle.pdf


Filed under: oracle

If You Want It, Here It Is

Floyd Teter - Mon, 2015-02-02 18:22
If you want it
Here it is, come and get it
Mmmm, make your mind up fast
If you want it
Anytime, I can give it
But you better hurry
Cause it may not last
    - From "Come And Get It", written by Sir Paul McCartney and originally recorded by Badfinger

I'm watching changes in the SaaS world... some people are keeping up with the changes, and some people are not. The approach to selling SaaS subscriptions is one area that stands out in my mind where the market players have not all quite wrapped their brains around a new reality.

In the old days of selling on-premise applications (also lovingly referred to now as "fat apps"), the initial sale was the key battleground between applications vendors in their quest for customers.  That's because switching on-premise apps was hard.  Ask anyone switching from Oracle to SAP for enterprise apps...a very tough, very expensive, and very long process.

In the SaaS world, switching is quicker, easier, and much less expensive.  No technology footprint to switch out.  Get my data from the current SaaS vendor, map and convert to the new SaaS applications, train my workforce, cut off the old SaaS vendor, start paying the new SaaS vendor.  While it's still not a small undertaking, it's a comparative drop in the bucket.

Oh, what about hybrid platforms? It's still easier to switch out the SaaS portion of your system. And as far as integrations go: the commonly used integrations are fast becoming commodities. That's what Cloud Integration platforms from providers like Oracle, Sierra-Cedar (yeah, that was a plug - pretty slick the way I slipped it in there, huh?), Boomi, Workday, etc. are for: providing highly-reused application integrations as a managed cloud service.

So what does this mean? It means that as SaaS becomes more prevalent in the enterprise applications world, it won't be about making the deal as much as it will be about keeping the customer, while concurrently enticing other players' customers to switch and hunting for customers just entering the enterprise applications space. In other words, we'll soon see huge churning of accounts from Brand X to Brand Y. And we'll also see vendors attempting to protect their own patch of accounts. And, at the same time, we'll see more offerings geared toward the SMB space... because that's where the net new growth opportunities will exist.

We're entering a great time for buyers... vendor lock-in in the enterprise apps market will become a less predominant factor. And, frankly, vendors will treat each customer like the "belle of the ball".

Watch for SaaS vendors to begin singing Sir Paul's tune:  "If you want it, here it is..." - on very customer-favorable terms.

Last year's big four cybersecurity vulnerabilities [VIDEO]

Chris Foot - Mon, 2015-02-02 09:04

Transcript 

Hi, welcome to RDX! 2014 was a rough year for cybersecurity. Between April and November of last year, four critical vulnerabilities were uncovered. Here's a recap.

The Heartbleed bug is a flaw in the OpenSSL cryptographic software library that allows people to steal data protected by the SSL/TLS encryption method.

Shellshock is a collection of security bugs in the Unix Bash shell, which could potentially allow a hacker to issue unsanctioned commands through a Linux distribution.

Winshock enables those exploiting the flaw to issue denial-of-service attacks and perform unauthenticated remote code execution.

Lastly, the Kerberos Checksum flaw could allow Active Directory to regard incorrect passwords as legitimate, exposing corporate networks.

As the former three vulnerabilities are applicable to both Windows and Linux server operating systems, consulting with personnel capable of assessing and patching these bugs is critical.

Thanks for watching! Visit us next time for news regarding operating system vulnerabilities.

The post Last year's big four cybersecurity vulnerabilities [VIDEO] appeared first on Remote DBA Experts.

Why won't my APEX submit buttons submit?

Tony Andrews - Mon, 2015-02-02 07:46
I hit a weird jQuery issue today that took a ridiculous amount of time to solve.  It is easy to demonstrate: Create a simple APEX page with an HTML region Create 2 buttons that submit the page with a request e.g. SUBMIT and CANCEL Run the page So far, it works - if you press either button you can see that the page is being submitted.   Now edit the buttons and assign them static IDs of "Tony Andrewshttp://www.blogger.com/profile/16750945985361011515noreply@blogger.com0http://tonyandrews.blogspot.com/2015/02/why-wont-my-apex-submit-buttons-submit.html