Feed aggregator

How To Approach Different Oracle Database Performance Problems


Jump Start Your Oracle Database Tuning Effort

Every Oracle Database Administrator will tell you that no two performance problems are the same. But a seasoned Oracle DBA recognizes that there are similarities... patterns. Fast problem pattern recognition allows us to minimize diagnosis time, so we can focus on developing amazing solutions.

I tend to group Oracle performance problems into four patterns. Quickly exploring these four patterns is what this article is all about.


You Can Not Possibly List Every Problem And Solution
When I teach, some Oracle Database Administrators want me to outline every conceivable problem along with its solution. Not only is the thought of this exhausting, it's not possible. Even my Stori product uses pattern matching. One of the keys to becoming a fantastic performance analyst is the ability to quickly look at a problem and then decide which diagnostic approach is best. For example, if you don't know the problem SQL (assuming there is one), tracing is not likely to be your best approach.

The Four Oracle Database Performance Patterns
Here are the four performance patterns I tend to group problems into.

The SQL Is Known
Many times there is a well-known SQL statement that is responsible for the poor performance. While I will always do a quick Oracle Time Based Analysis (see below) and verify the accused SQL, I will directly attack this problem by tuning with SQL-specific diagnostic and tuning tools.

But... I will also ask a senior application user whether the users are using the application correctly. Sometimes new application users try to use a new application like their old application. It's like trying to drive a car by moving your feet as if you were riding a bicycle... it's not going to work and it's dangerous!

Business Process Specific

I find that when the business is seriously affected by application performance issues, that's when the "limited budget" is suddenly not so limited. When managers and their businesses are affected, they want action.

When I'm approached to help solve a problem, I always ask how the business is being affected. If I keep hearing about a specific business process or application module, I know two things.

First, there are many SQL statements involved. Second, the problem is bounded by a business process or application. This is when I start the diagnostic process with an Oracle Time Based Analysis approach, which will result in multiple solutions to the same problem.

As I teach in my online seminar How To Tune Oracle With An AWR Report, users feel performance through time. So, if our analysis is time based, we can create a quantitative link between our analysis and their experience. If our analysis creates solutions that reduce time, then we can expect the user experience to improve. This, combined with my "3 Circle" approach, yields spot-on solutions very quickly.

While an Oracle Time Based Analysis is amazing, because Oracle does not instrument CPU consumption we can't answer the question, "What's Oracle doing with all that CPU?" If you want to drill into this topic check out my online seminar, Detailing Oracle CPU Consumption: The Missing Link.

It's Just Slow
How many times have I experienced this... It's Just Slow!


If what the user is attempting to explain is true, the performance issue is affecting a wide range of business processes. The problem is probably not a single issue (though it could be), and clearly the key SQL is not known. Again, this is a perfect problem scenario in which to apply an Oracle Time Based Analysis.

The reason I say this is because an OTBA will look at the problem from multiple perspectives, categorize Oracle time and develop solutions to reduce those big categories of time. If you also do a Unit Of Work Time Based Analysis, you can even anticipate the impact of your solutions! Do an OraPub website search HERE or search my blog for UOWTBA.
Random Incident That Quickly Appears And Vanishes
This is the most difficult type of problem to fix, mainly because the problem "randomly" appears and can't be duplicated. (Don't even bother calling Oracle Support for help in this situation.) Furthermore, it's too quick for an AWR report to show its activity, and you don't want to impact the production system by gathering tons of detailed performance statistics.

Even a solid Oracle Time Based Analysis will likely not help in this situation. Again, the problem is performance data collection and retention. The instrumented AWR or Statspack data does not provide enough detail. What we need is step-by-step activity... like a timeline.

Because this type of problem scares both DBAs and business managers, you will likely need to answer questions like this:

  • What is that blip all about?
  • Did this impact users?
  • Has it happened before?
  • Will it happen again?
  • What should we do about it?

The only way I know to truly diagnose a problem like this is to do a session-level time-line analysis. Thankfully, this is possible using the Oracle Active Session History data. Both v$active_session_history and dba_hist_active_sess_history are absolutely key to solving problems like this.
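As a minimal sketch of the idea (the column names are from the documented V$ACTIVE_SESSION_HISTORY view; the time window and formatting below are illustrative, not taken from my OSM scripts):

```sql
-- List sampled session activity, second by second, for a suspect window.
-- EVENT is null when the session was on CPU at sample time.
select to_char(sample_time, 'HH24:MI:SS') as sample_second,
       session_id,
       sql_id,
       nvl(event, 'ON CPU') as activity
from   v$active_session_history
where  sample_time between to_date('2015-02-03 15:18', 'YYYY-MM-DD HH24:MI')
                       and to_date('2015-02-03 15:34', 'YYYY-MM-DD HH24:MI')
order  by sample_time, session_id;
```

Ordering by sample time and session is what lets you read the result as a timeline rather than a summary.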

ASH samples Oracle Database session activity once each second (by default). This is very different from measuring how long something takes, which is the data an AWR report is based upon. Because sampling is non-continuous and lightweight, a lot of detail can be collected, stored and analyzed.
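Because each ASH sample represents roughly one second of active session time, simple sample counts approximate DB time. A hedged sketch using standard ASH columns (the five-minute window is arbitrary):

```sql
-- Approximate DB time per SQL statement over the last five minutes.
-- Each row in V$ACTIVE_SESSION_HISTORY is ~1 second of active time.
select sql_id,
       count(*) as approx_db_time_secs
from   v$active_session_history
where  sample_time > sysdate - 5/1440
group  by sql_id
order  by approx_db_time_secs desc;
```

This is an approximation, not a measurement - but for spotting where the time went, it is usually plenty.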

A time-line type of analysis is so important that I enhanced the ASH tools in my OraPub System Monitor (OSM) toolkit to provide this type of analysis. If you want to check them out, download the OSM toolkit HERE, install it and read the osm/interactive/ash-readme.txt file.

As an example, using these tools you can construct an incident time-line like this:

HH:MM:SS.FFF User/Process Notes
------------ ------------- -----------------
15:18:28.796 suspect (837) started the massive update (see SQL below)

15:28:00.389 user (57) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:28:30.486 user (74) application hung (row lock on TM_SHEET_LINE_EXPLOR)
15:29:30.??? - row locks become the top wait event (16 locked users)
15:29:50.749 user (83) application hung (row lock on TM_SHEET_LINE_EXPLOR)

15:30:20.871 user (837) suspect broke out of update (implied)
15:30:20.871 user (57) application returned
15:30:20.871 user (74) application returned
15:30:20.871 user (83) application returned

15:30:30.905 smon (721) first smon action since before 15:25:00 (os thread startup)
15:30:50.974 user (837) first wait for undo - suspect broke out of update
15:30:50.974 - 225 active session, now top event (wait for a undo record)

15:33:41.636 smon (721) last PQ event (PX Deq: Test for msg)
15:33:41.636 user (837) application returned to suspect. Undo completed
15:33:51.670 smon (721) last related event (DFS lock handle)

Without ASH, seemingly random problems would be a virtually impossible nightmare scenario for an Oracle DBA.

Summary
It's true: you need the right tool for the job. And the same is true when diagnosing Oracle Database performance. What I've done above is group probably 90% of the problems we face as Oracle DBAs into four categories. And each of these categories needs a particular kind of tool and/or diagnosis method.

Once we recognize the problem pattern and get the best tool/method involved to diagnose the problem, then we will know the time spent developing amazing solutions is time well spent.

Enjoy your work!

Craig.


Categories: DBA Blogs

Bare-Bones Example of Using WebLogic and Arquillian

Steve Button - Tue, 2015-02-03 16:18
The Arquillian project is proving to be very popular for testing code and applications.  It's particularly useful for Java EE projects since it allows for in-container testing to be performed, enabling unit tests to use dependency injection and all the common services  provided by the Java EE platform.

Arquillian uses the concept of container adapters to allow it to execute test code within a specific test environment. For the Java EE area, most of the Java EE implementations have an adapter that can be used to perform the deployment of the archive under test and to execute and report on the results of the unit tests.
A handy way to see all the WebLogic Server related content on the Arquillian blog is this URL: http://arquillian.org/blog/tags/wls/

For WebLogic Server, the current set of adapters is listed here: http://arquillian.org/blog/2015/01/09/arquillian-container-wls-1-0-0-Alpha3/

There are multiple adapters available for use.  Some of them are historical and some are for use with older versions of WebLogic Server (10.3).
We are actively working with the Arquillian team on finalizing the name, version and status of a WebLogic Server adapter.

The preferred adapters, from the WebLogic Server perspective, are these:


These adapters utilize the WebLogic Server JMX API to perform their tasks and are the adapters used internally by the development teams when working with Arquillian.  They have been tested to work with WebLogic Server [12.1.1, 12.1.2, 12.1.3].  We also have been using them internally with the 12.2.1 version under development to run the CDI TCK and other tests.

To demonstrate WebLogic Server working with Arquillian a bare-bones example is available on GitHub here: https://github.com/buttso/weblogic-with-arquillian

This example has the most basic configuration you can use to employ Arquillian with a Maven project to deploy and execute tests using WebLogic Server 12.1.3.
 
The README.md file in the project contains more details and a longer description.  In summary though:

1. The first step is to add the Arquillian related dependencies in the Maven pom.xml:
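As a sketch of what that looks like (the artifact versions, and the exact WLS adapter artifact name, are assumptions from memory - check the project's pom.xml and the adapter release announcement linked above for the authoritative coordinates):

```xml
<!-- pom.xml fragment: test-scoped Arquillian dependencies.            -->
<!-- Versions and the adapter artifact name are illustrative only.     -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <version>1.1.5.Final</version>
    <scope>test</scope>
  </dependency>
  <!-- Remote WebLogic Server 12.1.x container adapter -->
  <dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-wls-remote-12.1.2</artifactId>
    <version>1.0.0.Alpha3</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```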

2. The next step is to create an arquillian.xml file that the container adapter uses to connect to the remote server that is being used as the server to run the tests:
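A minimal arquillian.xml of this kind might look like the following. The property names are my recollection of the WLS remote adapter's settings - confirm them against the adapter documentation - and the URL, credentials and path are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <!-- Points the adapter at an already-running WebLogic Server. -->
  <container qualifier="wls-remote" default="true">
    <configuration>
      <property name="adminUrl">t3://localhost:7001</property>
      <property name="adminUserName">weblogic</property>
      <property name="adminPassword">welcome1</property>
      <property name="target">AdminServer</property>
      <property name="wlHome">/u01/oracle/wlserver</property>
    </configuration>
  </container>
</arquillian>
```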

3. The last step is to create a unit test which is run with Arquillian.  The unit test is responsible for implementing the @Deployment method which constructs an archive to deploy that contains the code to be tested.  The unit test then provides @Test methods in which the deployment is tested to verify its behaviour:
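A sketch of such a test is below. The Greeter bean, its greet() method, and the expected string are all hypothetical stand-ins for whatever code your project actually deploys:

```java
package com.example;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterTest {

    @Deployment
    public static WebArchive createDeployment() {
        // Package only the classes under test plus a beans.xml to enable CDI.
        return ShrinkWrap.create(WebArchive.class, "greeter-test.war")
                .addClass(Greeter.class)
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private Greeter greeter; // injected by the container the test runs in

    @Test
    public void greeterSaysHello() {
        Assert.assertEquals("Hello, WebLogic!", greeter.greet("WebLogic"));
    }
}
```

Note that the @Test methods execute inside the deployed archive on the server, which is what makes injection of the bean under test possible.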


Executing the unit tests, with the associated archive creation and deployment to the server, is performed using the Maven test goal, i.e. by running mvn test from the project directory.


The tests can be executed directly from IDEs such as NetBeans and Eclipse using the Run Test features:

Executing Tests using NetBeans

Social Coding Resolves JAX-RS and CDI Producer Problem

Steve Button - Tue, 2015-02-03 06:04
The inimitable Bruno Borges picked up a tweet earlier today commenting on a problem using @Produces with non-CDI libraries on WebLogic Server 12.1.3.

The tweeter put his example up in a GitHub repository to share - quite a nice example of using JAX-RS, CDI integration, and Arquillian to verify it works correctly. It ticked a couple of boxes for what I've been looking at lately.

Forking his project to have a look at it locally:

https://github.com/buttso/weblogic-producers

It turns out that the issue was quite a simple and common one - a missing reference to the jax-rs:2.0 shared library that is needed to use JAX-RS 2.0 on WebLogic Server 12.1.3. The application needs a weblogic.xml that references that library.
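For reference, this is the sort of weblogic.xml that does it (the library-ref form follows Oracle's 12.1.3 documentation for registering the jax-rs shared library; deployment of that library to the server is assumed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- Reference the shared library that provides JAX-RS 2.0 on 12.1.3 -->
  <library-ref>
    <library-name>jax-rs</library-name>
    <specification-version>2.0</specification-version>
    <exact-match>false</exact-match>
  </library-ref>
</weblogic-web-app>
```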

I made the changes in a local branch and tested it again:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] producers .......................................... SUCCESS [  0.002 s]
[INFO] bean ............................................... SUCCESS [  0.686 s]
[INFO] web ................................................ SUCCESS [  7.795 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------


With the tests now passing, I pushed the branch to my fork and sent Kuba a pull request to have a look at the changes I made:

https://github.com/buttso/weblogic-producers/tree/steve_work

I now just hope it works in his environment too :-)

The GitHub model is pretty magical really.

Big changes ahead for India's IT majors

Abhinav Agarwal - Tue, 2015-02-03 02:22
My article on challenges confronting the Indian IT majors was published in DNA in January 2015.

Here is the complete text of the article - Big changes ahead for India's IT majors:

Hidden among the noise surrounding the big three of the Indian IT industry - TCS, Wipro, and Infosys - was a very interesting sliver of signal that points to possibly big changes on the horizon. Though Cognizant should be counted among these biggies - based on its size and revenues - let's focus on these three for the time being.

Statements made by the respective CEOs of Infosys and Wipro, and the actions of TCS, provide hints on how these companies plan on addressing the coming headwinds that the Indian IT industry faces. Make no mistake. These are strong headwinds that threaten to derail the mostly good fairy tale of the Indian IT industry. Whether it is the challenge of continuing to show growth on top of a large base - each of these companies is close to or has exceeded ten billion dollars in annual revenues; protecting margins when everyone seems to be in a race to the bottom; operating overseas in the face of unremitting resistance to outsourcing; or finding ways to do business in light of the multiple disruptions thrust by cloud computing, big data, and the Internet of Things, they cannot continue in a business-as-usual model any longer.



For nearly two decades the Indian IT industry has grown at a furious pace, but also grown fat in the process, on a staple diet of low-cost business that relied on the undeniable advantage of labour-cost arbitrage. Plainly speaking, people cost a lot overseas, but they cost a lot less in India. The favourable dollar exchange-rate ensured that four, five (or even ten engineers at one point in time) could be hired in India for the cost of one software engineer in the United States. There was no meaningful incentive to either optimize on staffing, or build value-added skills when people could be retained by offering fifteen per cent salary hikes, every year. Those days are fast fading, and while the Indian IT workforce's average age has continued to inch up, the sophistication of the work performed has not kept pace, resulting in companies paying their employees more and more every year for work that is much the same.

TCS, willy-nilly, has brought to the front a stark truth facing much of the Indian IT industry - how to cut costs in the face of a downward pressure on most of the work it performs, which has for the most part remained routine and undifferentiated. Based on a remark made by its HR head on "layoffs" and "restructuring" that would take place over the course of 2015, the story snowballed into a raging controversy. It was alleged that TCS was planning on retrenching tens of thousands of employees - mostly senior employees who cost more than college graduates with only a few years of experience. Cursory and level-headed thinking would have revealed that prima facie any such large layoffs could not be true. But such is the way with rumours - they have long legs. What however remains unchanged is the fact that without more value-based business, an "experienced" workforce is a drag on margins. It's a liability, not an asset. Ignore, for a minute, the absolute worst way in which TCS handled the public relations fiasco arising out of its layoff rumours - something even its CEO, N Chandrasekaran, acknowledged. Whether one likes it or not, so-called senior resources at companies that cannot lay claim to skills that are in demand will find themselves under the dark cloud of layoffs. If you prefer, call them "involuntary attrition", "labour cost rationalization", or anything else. The immediate reward of a lowered loaded cost number will override any longer-term damage such a step may involve. If it is a driver for TCS, it will be a driver for Wipro and Infosys.

Infosys, predictably, and as I had written some six months back, is trying to use the innovation route to find its way to both sustained growth and higher margins. Its CEO, Vishal Sikka, certainly has the pedigree to make innovation succeed. His words have unambiguously underlined his intention to pursue, acquire, or fund innovation. Unsurprisingly, there are several challenges to this approach. First, outsourced innovation is open to market risks. If you invest early enough, you will get in at lower valuations, but you will also have to cast a wider net, which requires more time and focus. Invest later, and you pay through your nose by way of sky-high valuations. Second, external innovation breeds resentment internally. It sends the message that the company does not consider its own employees "good enough" to innovate. To counter this perception, Vishal has exhorted Infosys employees "to innovate proactively on every single thing they are working on." This is a smart strategy. It is low cost, low risk, and a big morale booster. However, it also distracts. Employees can easily get distracted by the "cool" factor of doing what they believe is innovative thinking. "20%" may well be a myth in any case. How does a company put a process in place that can evaluate, nurture, and manage innovative ideas coming out of tens of thousands of employees? Clearly, there are issues to be balanced. The key to success, like in most other things, will lie in execution - as Ram Charan has laid out in his excellent book, unsurprisingly titled "Execution".

Lastly, there is Wipro. In an interview, Wipro's CEO, TK Kurien, announced that Wipro would use "subcontracting to drive growth". This seems to have gone largely unnoticed in the industry. Wipro seems to have realized, on the basis of this statement at least, that it cannot continue to keep sliding down the slippery slope of low-cost undifferentiated work. If the BJP government's vision of developing a hundred cities in India into so-called "Smart Cities" materializes, one could well see small software consulting and services firms sprout up all over India, in Tier 2 and even Tier 3 cities. These firms will benefit from the e-infrastructure available as a result of the Smart Cities initiative on the one hand, and find a ready market for their services that requires a low cost model to begin with on the other. This will leave Wipro free to subcontract low-value, undifferentiated work to smaller companies in smaller cities. A truly virtuous circle. In theory at least. However, even here it would be useful for Wipro to remember the Dell and Asus story. Dell was at one point among the most innovative of computer manufacturers. It kept on giving away more and more of its computer manufacturing business - from motherboard design to laptop assembly, and so on - to Asus, because it helped Dell keep its margins high while allowing it to focus on what it deemed its core competencies. Soon enough, Asus had learned everything about the computer business, and it launched its own computer brand. The road to commoditization hell is paved with the best intentions of cost-cutting.

While it may appear that these three IT behemoths are pursuing three mutually exclusive strategies, it would be naïve to judge these three strategies as an either-or play. Each will likely, and hopefully, pursue a mix of these strategies, focusing more on what they decide fits their company best, and resist the temptation to follow each other in a monkey-see-monkey-do race. Will one of the big three Indian IT majors pull ahead of its peers and compete with IBM, Accenture, and other majors globally? Watch this space.

Oracle MAA 12c – A Data Plan for Meeting Every Organization’s Needs

VitalSoftTech - Mon, 2015-02-02 22:36
Data is one of the most valuable and crucial assets for any company; hence businesses spend huge amounts of money on their enterprise software to ensure its safety and availability. RAC and clustering are some examples of the practices companies adopt for high availability and performance of their enterprise applications and databases. However, there are still […]
Categories: DBA Blogs

My Oracle Support Release 15.1 is Live!

Joshua Solomin - Mon, 2015-02-02 20:12

My Oracle Support release 15.1 is now live. Improvements include:

  • All Customer User Administrators (CUAs) can manage and group their users and assets using the Support Identifier Groups (SIGs) feature.
  • Knowledge Search automatically provides unfiltered results when filters return no results. In addition, product and version detail displays in bug search results.
  • The SR platform selector groups common products with the appropriate platform.
  • Some problem types for non-technical SRs have guided resolution workflows.
  • In the Proactive Analysis Center: all clickable links are underlined, users only see applicable reports, and column headers can be sorted.



Learn more by viewing the What's new in My Oracle Support video.

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to the glibc "GHOST" vulnerability. A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

To check whether your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable

The solution and action plan for this vulnerability are available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Scrutinizing Exadata X5 Datasheet IOPS Claims…and Correcting Mistakes

Kevin Closson - Mon, 2015-02-02 19:37

I want to make these two points right out of the gate:

  1. I do not question Oracle’s IOPS claims in Exadata datasheets
  2. Everyone makes mistakes
Everyone Makes Mistakes

Like me. On January 21, 2015, Oracle announced the X5 generation of Exadata. I spent some time studying the datasheets from this product family and also compared the information to prior generations of Exadata namely the X3 and X4. Yesterday I graphed some of the datasheet numbers from these Exadata products and tweeted the graphs. I’m sorry  to report that two of the graphs were faulty–the result of hasty cut and paste. This post will clear up the mistakes but I owe an apology to Oracle for incorrectly graphing their datasheet information. Everyone makes mistakes. I fess up when I do. I am posting the fixed slides but will link to the deprecated slides at the end of this post.

We’re Only Human

Wouldn’t IT be a more enjoyable industry if certain IT vendors stepped up and admitted when they’ve made little, tiny mistakes like the one I’m blogging about here? In fact, wouldn’t it be wonderful if some of the exceedingly gruesome mistakes certain IT vendors make would result in a little soul-searching and confession? Yes. It would be really nice! But it’ll never happen–well, not for certain IT companies anyway. Enough of that. I’ll move on to the meat of this post. The rest of this article covers:

  • Three Generations of Exadata IOPS Capability
  • Exadata IOPS Per Host CPU
  • Exadata IOPS Per Flash SSD
  • IOPS Per Exadata Storage Server License Cost
Three Generations of Exadata IOPS Capability

The following chart shows how Oracle has evolved Exadata from the X3 to the X5 EF model with regard to IOPS capability. As per Oracle's datasheets on the matter these are, of course, SQL-driven IOPS. Oracle would likely show you this chart and nothing else. Why? Because it shows favorable generational progress in IOPS capability. A quick glance shows that read IOPS improved just shy of 3x and write IOPS capability improved over 4x from the X3 to X5 product releases. These are good numbers. I should point out that the X3 and X4 numbers are the datasheet citations for 100% cached data in Exadata Smart Flash Cache. These models had 4 Exadata Smart Flash Cache PCIe cards in each storage server (aka, cell). The X5 numbers I'm focused on reflect the performance of the all-new Extreme Flash (EF) X5 model. It seems Oracle has started to investigate the value of all-flash technology and, indeed, the X5 EF is the top dog in the Exadata line-up. For this reason I chose to graph X5 EF data as opposed to the more pedestrian High Capacity model, which has 12 4TB SATA drives fronted with PCIe Flash cards (4 per storage server).

[Chart: three generations of Exadata IOPS capability]

The tweets I hastily posted yesterday with the faulty data points aimed to normalize these performance numbers to important factors such as host CPU, SSD count and Exadata Storage Server Software licensing costs. The following set of charts are the error-free versions of the tweeted charts.

Exadata IOPS Per Host CPU

Oracle’s IOPS performance citations are based on SQL-driven workloads. This can be seen in every Exadata datasheet. All Exadata datasheets for generations prior to X4 clearly stated that Exadata IOPS are limited by host CPU. That is a very important fact to understand because SQL-driven IOPS is a host metric no matter what your storage is.

Indeed, anyone who studies Oracle Database with SLOB knows how all of that works. SQL-driven IOPS requires host CPU. Sadly, however, Oracle ceased stating the fact that IOPS are host-CPU bound in Exadata as of the advent of the X4 product family. I presume Oracle stopped correctly stating the factual correlation between host CPU and SQL-driven IOPS for only the most honorable of reasons with the best of customers’ intentions in mind.

In case anyone should doubt my assertion that Oracle historically associated Exadata IOPS limitations with host CPU, I submit the following screenshot of the pertinent section of the X3 datasheet:

[Screenshot: X3 datasheet section tying IOPS to host CPU]

Now that the established relationship between SQL-driven IOPS and host CPU has been demystified, I'll offer the following chart, which normalizes IOPS to host CPU core count:

[Chart: Exadata IOPS per host CPU core]

I think the data speaks for itself, but I'll add some commentary. Where Exadata is concerned, Oracle gives customers no choice of host CPU. If you adopt Exadata you will be forced to take the top-bin Xeon SKU with the most cores offered in the respective Intel CPU family. For example, the X3 product used 8-core Sandy Bridge Xeons, the X4 used 12-core Ivy Bridge Xeons, and the X5 uses 18-core Haswell Xeons. In each of these CPU families there are other processors of varying core counts at the same TDP. For example, the Exadata X5 processor is the E5-2699v3, which is a 145w 18-core part. In the same line of Xeons there is also a 145w 14-core part (E5-2697v3), but that is not an option for Exadata customers.

All of this is important since Oracle customers must license Oracle Database software by the host CPU core. The chart shows us that read IOPS per core improved 18% from X3 to X4, but from X4 to X5 we see only a 3.6% increase. The chart also shows that write IOPS/core peaked at X4 and has actually dropped some 9% in the X5 product. These important trends suggest Oracle's balance between storage plumbing and I/O bandwidth in the Storage Servers is not keeping up with the rate at which Intel is packing cores into the Xeon EP family of CPUs. The nugget of truth that is missing here is whether the 145w 14-core E5-2697v3 might in fact be able to improve this IOPS/core ratio. While such information would be quite beneficial to Exadata-minded customers, the 22% drop in expensive Oracle Database software licensing in such an 18-core versus 14-core scenario is not beneficial to Oracle–especially not while Oracle is struggling to subsidize its languishing hardware business with gains from traditional software.

Exadata IOPS Per Flash SSD

Oracle uses their own branded Flash cards in all of the X3 through X5 products. While it may seem like an implementation detail, some technicians consider it important to scrutinize how well Oracle leverages their own components in their Engineered Systems. In fact, some customers expect that adding significant amounts of important performance components, like Flash cards, should pay commensurate dividends. So, before you let your eyes drift to the following graph, please be reminded that the X3 and X4 products came with 4 Gen3 PCIe Flash cards per Exadata Storage Server whereas the X5 is fit with 8 NVMe flash cards. And now, feel free to take a gander at how well Exadata architecture leverages a 100% increase in Flash componentry:

[Chart: Exadata IOPS per flash SSD]

This chart helps us visualize the facts sort of hidden in the datasheet information. From Exadata X3 to Exadata X4, Oracle improved IOPS per Flash device by just shy of 100% for both read and write IOPS. On the other hand, Exadata X5 exhibits nearly flat (5%) write IOPS and a troubling drop in read IOPS per SSD device of 22%. Now, all I can do is share the facts. I cannot change people's belief system–this I know. That said, I can't imagine how anyone can spin a per-SSD drop of 22%–especially considering the NVMe SSD product is so significantly faster than the X4 PCIe Flash card. By significant I mean the NVMe SSD used in the X5 model is rated at 260,000 random 8KB IOPS whereas the X4 PCIe Flash card was only rated at 160,000 8KB read IOPS. So X5 has double the SSDs–each of which is rated at 63% more IOPS capacity–than the X4, yet IOPS per SSD dropped 22% from the X4 to the X5. That means an architectural imbalance–somewhere. However, since Exadata is a completely closed system, you are on your own to find out why doubling resources doesn't double your performance. All of that might sound like taking shots at implementation details.
If that seems like the case then the next section of this article might be of interest.

IOPS Per Exadata Storage Server License Cost

As I wrote earlier in this article, both Exadata X3 and Exadata X4 used PCIe Flash cards for accelerating IOPS. Each X3 and X4 Exadata Storage Server came with 12 hard disk drives and 4 PCIe Flash cards. Oracle licenses Exadata Storage Server Software by the hard drive in X3/X4 and by the NVMe SSD in the X5 EF model. To that end, the license "basis" is 12 units for X3/X4 and 8 for X5. Already readers are breathing a sigh of relief because a smaller license basis must surely mean a lower total license cost. Surely not! Exadata X3 and X4 list price for Exadata Storage Server Software was $10,000 per disk drive for an extended price of $120,000 per storage server. The X5 EF model, on the other hand, prices Exadata Storage Server Software at $20,000 per NVMe SSD for an extended price of $160,000 per Exadata Storage Server. With these values in mind, feel free to direct your attention to the following chart, which graphs the IOPS per Exadata Storage Server Software list price (IOPS/license$$):

[Chart: Exadata IOPS per Storage Server Software license cost]

The trend in the X3 to X4 timeframe was a doubling of write IOPS/license$$ and just short of a 100% improvement in read IOPS/license$$. In stark contrast, however, the X5 EF product delivers only a 57% increase in write IOPS/license$$ and a troubling, tiny, 17% increase in read IOPS/license$$. Remember, X5 has 100% more SSD componentry when compared to the X3 and X4 products.

Summary

No summary needed. At least I don’t think so.

About Those Faulty Tweeted Graphs

As promised, I've left links to the faulty graphs I tweeted here:

  • Faulty / deleted tweet graph of Exadata IOPS/SSD: http://wp.me/a21zc-1ek
  • Faulty / deleted tweet graph of Exadata IOPS/license$$: http://wp.me/a21zc-1ej

References

Exadata X3-2 datasheet: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-dbmachine-x3-2-ds-1855384.pdf
Exadata X4-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-dbmachine-x4-2-ds-2076448.pdf
Exadata X5-2 datasheet: http://www.oracle.com/technetwork/database/exadata/exadata-x5-2-ds-2406241.pdf
X4 SSD info: http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80/overview/index.html
X5 SSD info: http://docs.oracle.com/cd/E54943_01/html/E54944/gokdw.html#scrolltoc
Engineered Systems Price List: http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf , http://www.ogs.state.ny.us/purchase/prices/7600020944pl_oracle.pdf


Filed under: oracle

Why won't my APEX submit buttons submit?

Tony Andrews - Mon, 2015-02-02 07:46
I hit a weird jQuery issue today that took a ridiculous amount of time to solve. It is easy to demonstrate:
  • Create a simple APEX page with an HTML region
  • Create 2 buttons that submit the page with a request e.g. SUBMIT and CANCEL
  • Run the page
So far, it works - if you press either button you can see that the page is being submitted. Now edit the buttons and assign them static IDs of …

Using the WebLogic Embedded EJB Container

Steve Button - Sun, 2015-02-01 21:39
The WebLogic Server 12.1.3 EJB Developers Guide was recently updated to note that the embedded EJB container can be used by adding a reference to weblogic.jar to the CLASSPATH when the EJB client is being executed.

https://docs.oracle.com/middleware/1213/wls/EJBAD/embedejb.htm#EJBAD1403

This is very convenient since it enables the WebLogic Server embedded EJB container to be used by simply adding weblogic.jar to the classpath when running the client:
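The snippet that originally accompanied the post isn’t shown here; as a sketch (the MW_HOME path and the client class name are assumptions, not values from the post), the invocation looks something like:

```shell
# Sketch only: adjust MW_HOME to your WebLogic 12.1.3 installation.
MW_HOME="${MW_HOME:-/u01/oracle/middleware}"
CP="$MW_HOME/wlserver/server/lib/weblogic.jar:build/classes"

# The client then bootstraps the embedded container itself via
# javax.ejb.embeddable.EJBContainer.createEJBContainer():
#   java -cp "$CP" com.example.EmbeddedEjbClient
echo "$CP"
```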

Or for example if you are developing unit tests using JUnit and running them from a maven project, you can configure the maven-surefire-plugin to use WebLogic Server to run the EJB test code in its embedded EJB container:
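The original embedded configuration isn’t shown here; as a sketch (the plugin version and the weblogic.jar path are assumptions), a surefire configuration that puts weblogic.jar on the test classpath might look like:

```xml
<!-- Sketch only: version and MW_HOME path are assumptions, not from the post. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.17</version>
  <configuration>
    <!-- Add weblogic.jar so the embedded EJB container is available to tests -->
    <additionalClasspathElements>
      <additionalClasspathElement>${env.MW_HOME}/wlserver/server/lib/weblogic.jar</additionalClasspathElement>
    </additionalClasspathElements>
  </configuration>
</plugin>
```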

A fully working example of using this is available in this GitHub repository:

https://github.com/buttso/weblogic-embedded-ejb

For more information have a look at the repository and check out the example.

Moving My Beers From Couchbase to MongoDB

Tugdual Grall - Sun, 2015-02-01 09:01
See it on my new blog: here. A few days ago I posted a joke on Twitter: “Moving my Java from Couchbase to MongoDB” pic.twitter.com/Wnn3pXfMGi — Tugdual Grall (@tgrall) January 26, 2015. So I decided to move it from a simple picture to a real project. Let’s look at the two phases of this so-called project: moving the data from Couchbase to MongoDB, and updating the application code to use …

Getting started with iOS development using Eclipse and Java

Shay Shmeltzer - Fri, 2015-01-30 16:05

Want to use Eclipse to build an on-device mobile application that runs on iOS devices (iPhones and iPads)?

No problem - here is a step by step demo on how to do this:

Oh, and by the way, the same app will also function on Android without any changes to the code :-)

This is an extract from an online seminar that I recorded for one of Oracle's Virtual Technology Summits - and I figured people who didn't sign up for that event might still benefit from having access to the demo part of the video.

In the demo I show how to build an on-device app that accesses local data as well as remote data through web services, and how easy it is to integrate device features too.

If you want to try this on your own, get a copy of the Oracle Enterprise Pack for Eclipse, and follow the setup steps in the tutorial here.

And then just follow the video steps.

The location of the web service I accessed is at: http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL

And the Java classes I use to simulate local data are  here.



Categories: Development

Oracle Audit Vault - Oracle Client Identifier and Last Login

Several standard features of the Oracle database should be kept in mind when considering what alerts and correlations are possible when combining Oracle database and application log and audit data.

Client Identifier

Default Oracle database auditing stores the database username but not the application username.  In order to pull the application username into the audit logs, the CLIENT_IDENTIFIER attribute needs to be set for the application session which is connecting to the database.  The CLIENT_IDENTIFIER is a predefined attribute of the built-in application context namespace, USERENV, and can be used to capture the application user name for use with global application context, or it can be used independently.

CLIENT_IDENTIFIER is set using the DBMS_SESSION.SET_IDENTIFIER procedure to store the application username.  The CLIENT_IDENTIFIER attribute is one and the same as V$SESSION.CLIENT_IDENTIFIER.  Once set, you can query V$SESSION or select sys_context('userenv','client_identifier') from dual.
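As a minimal sketch ('jsmith' is just an example value, not from the post), setting and reading the identifier looks like this:

```sql
-- Example only: 'jsmith' stands in for the application username.
BEGIN
   DBMS_SESSION.SET_IDENTIFIER('jsmith');
END;
/

-- Read it back from the session context...
SELECT sys_context('userenv','client_identifier') FROM dual;

-- ...or from V$SESSION for the current session.
SELECT client_identifier FROM v$session
 WHERE sid = sys_context('userenv','sid');
```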

The table below offers several examples of how CLIENT_IDENTIFIER is used.  For each example, for Level 3 alerts, consider how the value of CLIENT_IDENTIFIER could be used along with network usernames and enterprise application usernames, as well as security and electronic door system activity logs.

Oracle CLIENT_IDENTIFIER

Application: E-Business Suite
How used: As of Release 12, the Oracle E-Business Suite automatically sets and updates CLIENT_IDENTIFIER to the FND_USER.USERNAME of the user logged on.  Prior to Release 12, follow Support Note “How to add DBMS_SESSION.SET_IDENTIFIER(FND_GLOBAL.USER_NAME) to the FND_GLOBAL.APPS_INITIALIZE procedure” (Doc ID 1130254.1).

Application: PeopleSoft
How used: Starting with PeopleTools 8.50, the PSOPRID is now additionally set in the Oracle database CLIENT_IDENTIFIER attribute.

Application: SAP
How used: With SAP version 7.10 and above, the SAP user name is stored in the CLIENT_IDENTIFIER.

Application: Oracle Business Intelligence Enterprise Edition (OBIEE)
How used: When querying an Oracle database using OBIEE, the connection pool username is passed to the database.  To also pass the middle-tier username, set the user identifier on the session.  To do this in OBIEE, open the RPD, edit the connection pool settings and create a new connection script to run at connect time.  Add the following line to the connect script:

 

CALL DBMS_SESSION.SET_IDENTIFIER('VALUEOF(NQ_SESSION.USER)')

 

Last Login

Tracking when database users last logged in is a common compliance requirement.  This is required in order to reconcile users and cull stale accounts.  New with Oracle 12c, Oracle provides this information for database users: the data dictionary view SYS.DBA_USERS has a column, LAST_LOGIN.

Example:

select username, account_status, common, last_login

from sys.dba_users

order by last_login asc;

USERNAME             ACCOUNT_STATUS    COMMON  LAST_LOGIN
C##INTEGRIGY         OPEN              YES     05-AUG-14 12.46.52.000000000 PM AMERICA/NEW_YORK
C##INTEGRIGY_TEST_2  OPEN              YES     02-SEP-14 12.29.04.000000000 PM AMERICA/NEW_YORK
XS$NULL              EXPIRED & LOCKED  YES     02-SEP-14 12.35.56.000000000 PM AMERICA/NEW_YORK
SYSTEM               OPEN              YES     04-SEP-14 05.03.53.000000000 PM AMERICA/NEW_YORK

 

If you have questions, please contact us at info@integrigy.com.

Reference
Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

WebLogic Maven Plugin - Simplest Example

Steve Button - Fri, 2015-01-30 00:34
I've seen a question or two in recent days on how to configure the weblogic maven plugin.

The official documentation is extensive ... but could be considered TL;DR for a quick bootstrapping on how to use it.

As a late Friday afternoon exercise, I just pushed out an example of a very simple project that uses the weblogic-maven-plugin to deploy a web module.  It's almost the simplest configuration of the plugin that can be used to perform deployment-related operations on a project/module:

https://github.com/buttso/weblogic-maven-plugin

This relies on the presence of either a local/corporate repository that contains the set of weblogic artefacts and plugins - OR - you configure and use the Oracle Maven Repository instead.  

Example pom.xml
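The pom.xml that originally accompanied the post isn’t reproduced here; as a sketch (the version string, admin URL, credentials, and target name are all assumptions to adjust for your environment), the plugin section of such a pom might look like:

```xml
<!-- Sketch only: version, adminurl, credentials and targets are assumptions. -->
<plugin>
  <groupId>com.oracle.weblogic</groupId>
  <artifactId>weblogic-maven-plugin</artifactId>
  <version>12.1.3-0-0</version>
  <configuration>
    <adminurl>t3://localhost:7001</adminurl>
    <user>weblogic</user>
    <password>welcome1</password>
    <source>${project.build.directory}/${project.build.finalName}.war</source>
    <targets>AdminServer</targets>
  </configuration>
  <executions>
    <execution>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```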

Updated WebLogic Server 12.1.3 Developer Zip Distribution

Steve Button - Thu, 2015-01-29 18:50
We've just pushed out an update to the WebLogic Server 12.1.3 Developer Zip distribution containing the bug fixes from a recent PSU (patch set update).

This is great for developers since it maintains the high quality of the developer zip distribution and the convenience it provides - it avoids reverting to the generic installer just to enable the application of patch set updates.  For development use only.

Download it from OTN:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

Check out the readme for the list of bug fixes:

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/README_WIN_UP1.txt

Annonce : Devenez expert Cloud Oracle !

Jean-Philippe Pinte - Thu, 2015-01-29 03:37
Vous souhaitez évoluer dans votre carrière?
Rejoignez l'un de nos partenaires pour devenir un expert des solutions Cloud Oracle !

Evènement: Oracle Virtual Cloud Summit

Jean-Philippe Pinte - Wed, 2015-01-28 16:08
4 séminaires en ligne :
  • Back up your Database securely to the Cloud
  • Move your Test & Development to the Cloud
  • Secure Document File-sync & share in the Cloud
  • Accelerate Application development in the Cloud
Enregistrez-vous à l'évènement Oracle Virtual Cloud Summit : http://cloud.oraclevirtualsummit.com

ERROR - CLONE-20372 Server port validation failed

Vikram Das - Wed, 2015-01-28 15:19
Alok and Shoaib pinged me about this error, which is reported in the logs when adcfgclone.pl is run for an R12.2.4 appsTier where the source and target instances are on the same physical server.

SEVERE : Jan 27, 2015 3:40:09 PM - ERROR - CLONE-20372   Server port validation failed.
SEVERE : Jan 27, 2015 3:40:09 PM - CAUSE - CLONE-20372   Ports of following servers - oacore_server2(7256),forms_server2(7456),oafm_server2(7656),forms-c4ws_server2(7856),oaea_server1(6856) - are not available.
SEVERE : Jan 27, 2015 3:40:09 PM - ACTION - CLONE-20372   Provide valid free ports.
oracle.as.t2p.exceptions.FMWT2PPasteConfigException: PasteConfig failed. Make sure that the move plan and the values specified in moveplan are correct

The ports reported are those in the source instance.  Searching on support.oracle.com bug database I found three articles:

EBS 12.2.2.4 RAPID CLONE FAILS WITH ERROR - CLONE-20372 SERVER PORT VALIDATION(Bug ID 20147454)

12.2: N->1 CLONING TO SAME APPS TIER FAILING DUE TO PORT CONFLICT(Bug ID 20389864)

FS_CLONE IS NOT ABLE TO COMPLETE FOR MULTI-NODE SETUP(Bug ID 18460148)

The situation described in the first two bugs is the same.  The articles reference each other but don't provide any solution.

Thinking logically, adcfgclone.pl is picking this up from the source configuration in the $COMMON_TOP/clone directory.  So we ran grep on the subdirectories of $COMMON_TOP/clone:

cd $COMMON_TOP/clone
find . -type f -print | xargs grep 7256

7256 is one of the ports that failed validation.

It is present in

CTXORIG.xml
FMW/ohs/moveplan.xml
FMW/wls/moveplan.xml

We tried changing the port numbers in CTXORIG.xml and re-ran adcfgclone.pl, but it failed again.

So we changed the port numbers of the ports that failed validation in

$COMMON_TOP/clone/FMW/ohs/moveplan.xml and
$COMMON_TOP/clone/FMW/wls/moveplan.xml

cd $FMW_HOME
find . -name detachHome.sh |grep -v Template

The above command returns the detachHome.sh scripts for all the ORACLE_HOMEs inside FMW_HOME.  We executed each of these to detach all of the homes.

Removed the FMW_HOME directory

Re-executed
adcfgclone.pl appsTier

It succeeded this time.  Until we get a patch for this bug, we will continue to use this workaround to complete clones.


Categories: APPS Blogs

Innovating with Middleware Platform

Anshu Sharma - Wed, 2015-01-28 13:01

I was recently discussing with a partner executive how Oracle can help ISVs innovate. I decided to pen my thoughts here too -

1) WebLogic Innovation - WebLogic is our market-leading App Server. The area which I would like to highlight is Exalogic. We are seeing more and more cases where Telco, Financial Services, and Government solution providers are seeing business benefits of running their business-critical applications on Exalogic. With the upcoming launch of Exalogic Cloud Software 12c and the already available X5-2 hardware, WebLogic performance on Exalogic will continue to get better. But more importantly, partners will be able to get a simplified experience, similar to Oracle Public Cloud, on Exalogic, as explained in this blog post.

2) Middleware Platform for Industry Solutions - Oracle SOA Suite solves core integration challenges for Healthcare entities, Retailers/Manufacturers, Airlines, etc. Oracle BPM allows you to design complex processes for Financial Services, Telcos, the Public Sector, etc. Oracle Event Processing allows you to analyze and act on data from a variety of devices (IoT) in Fast Data solutions being deployed in Telcos (Mobile Data Offloading, QoS Management), Transportation (Vehicle Monitoring), Retail (Real-Time Coupons), Utilities (Smart Grids), etc. Partners providing process management and integration solutions for vertical industries can roll out innovations while keeping the lights running by deploying on the Oracle Middleware Platform (SOA, BPM, OEP, WLS, Exalogic, Enterprise Manager).

3) Mobile Platform - The adoption of mobility in enterprises offers tremendous opportunities to ISVs. We asked one partner, RapidValue, to share their experience. In this writeup, RapidValue explains how they were able to use the power of the Oracle Mobile Platform to quickly bring to market a suite of mobile applications for Field Service, HRMS, Approvals, Order Management, Inventory Management, and Expense Management.

4) Public Cloud – In recent years the world of application development has adopted new methodologies, like Agile, that improve the quality and speed with which applications are delivered. Tools such as automatic build utilities combined with continuous integration platforms simplify the adoption of these new methodologies. These tools are available in Oracle Developer Cloud Service for every licensee of Java Cloud Service.

Making DevOps Business Driven - a service view

Steve Jones - Wed, 2015-01-28 08:59
I've been doing a bit recently around DevOps, and what I've been seeing is that companies that have been scaling DevOps tend to run into a problem: exactly what is a good boundary for a DevOps team? Now I've talked before about how Microservices are just SOA with a new logo; well, there is an interesting piece about DevOps as well. It's not actually a brand new thing.  It's an evolution and …
Categories: Fusion Middleware
