Feed aggregator

Learn How To Save 48% On Your Access Management Deployment

Mark Wilcox - Fri, 2011-09-02 06:30
We're hosting an upcoming webinar with the Aberdeen Group presenting research on how using an Identity Management platform can save you significant money versus a point-solution-based deployment. Click here to register.

The two best ways to meet Larry Ellison in person.

Andrews Consulting - Thu, 2011-09-01 13:06
Oracle CEO Larry Ellison is famously elusive. It is harder to get a private meeting with him than with the Pope. Outsiders, especially reporters and analysts, have almost no chance of ever meeting him. Only the inner circle within Oracle itself ever has a real two-way dialog with Ellison. There are, however, two proven ways […]
Categories: APPS Blogs

OOW 2011: What is Going on for JDBC, UCP, Java in the Database, Net Services, OCI, PHP, Ruby, Python, and Perl

Kuassi Mensah - Wed, 2011-08-31 17:37

Oracle OpenWorld San Francisco 2011


Sessions for DBAs and application developers (Java, C, C++, PHP, Ruby, Python, Perl) looking for features and best practices to optimize networking and build high-performing, scalable, and highly available applications with Oracle Database 11g.


Session: 14702

Java in the Database—The Sky's the Limit: Lessons from the Largest Deployment

Where: Marriott Marquis - Salon 8 -- When: 10/3/11, 15:30

---


BoF Session: 26220

Meet the Oracle JDBC and Oracle Universal Connection Pool Development Team

Where: Marriott Marquis - Salon 8 -- When: 10/3/11, 18:30

---


BoF Session: 26240

Meet the Oracle Database Clients Developers: C, C++, PHP, Python, Ruby, and Perl

Where: Marriott Marquis - Salon 8 -- When: 10/3/11, 19:30

---

Hands-on Lab - Session: 30082

Develop and Deploy High-Performance Web 2.0 PHP, Ruby, or Python Applications

Where: Marriott Marquis - Salon 10/11 -- When: 10/3/11, 17:00

---

Session: 13360

Optimize Java Persistence/Scale Database Access with JDBC and Oracle Universal Connection Pool (UCP)

Where: Marriott Marquis - Salon 8 -- When: 10/4/11, 17:00

---

Hands-on Lab - Session: 30021

Efficient Java Persistence with JDBC, Java Universal Connection Pool, and Java in the Database

Where: Marriott Marquis - Salon 10/11 -- When: 10/4/11, 13:15

---

Session: 14345

Net Services: Best Practices for Performance, Scalability, and High Availability

Where: Moscone South - 303 -- When: 10/5/11, 13:15

---

Session: 14703

Build, Deploy, and Troubleshoot Highly Performant, Highly Available Apps with Oracle Database

Where: Moscone South - 303 -- When: 10/5/11, 17:00

---

Session: 14704

PHP, Ruby, Python, and Perl: Develop and Deploy Mission-Critical Apps with Oracle Database 11g

Where: Marriott Marquis - Salon 8 -- When: 10/5/11, 11:45



My OOW Presentations

Chris Muir - Sat, 2011-08-27 02:07
If you're heading to OOW this year, it'd be great to have you at either one of my following presentations:

Session ID: 02240
Session Title: Angels in the Architecture: An Oracle Application Development Framework Architectural Blueprint
Venue / Room: Marriott Marquis - Golden Gate B
Date and Time: Wednesday 10/5/11, 01:00 PM

Oracle Application Development Framework (Oracle ADF) in Oracle JDeveloper 11g presents an interesting service-oriented solution with the adoption of bounded task flows and Oracle ADF libraries. Yet unlike its Release 10.1.3 counterpart, in which the tendency was to build one large application from the start, the 11g solution comes with its own challenges in terms of bringing the multiple moving parts into a composite master Oracle ADF application. This session presents a blueprint for Oracle ADF application architecture that can be applied to small and large projects alike to assist beginners in pulling 11g Oracle ADF applications together.

Session ID: 2241
Session Title: A Change Is as Good as a REST: Oracle JDeveloper 11g's REST Web Services
Venue / Room: Marriott Marquis - Golden Gate A
Date and Time: Thursday 10/6/11, 03:00 PM

It can seem like a hard slog to develop SOAP-based Web services, with their contract-first design requirements and over-engineered specification that includes everything but the kitchen sink. REST-based Web services provide light relief for quick Web service development, whereas the constraints of SOAP Web services feel extreme for some short, sharp development. Luckily, Oracle JDeveloper provides support for both styles, and this presentation includes a short and sweet demonstration of getting REST Web services up and running by using Oracle JDeveloper's support for the Java API for RESTful Web Services (JAX-RS).

In turn, don't forget the day of ADF presentations for the ADF EMG on User Group Sunday at OpenWorld.

Unicode: Code Points in SQL or PL/SQL (New SQL Snippets Topic)

Joe Fuda - Wed, 2011-08-24 19:00
A new topic has been added to the Unicode section on SQL Snippets. "Code Points in SQL or PL/SQL" examines various techniques for determining a character's Unicode code point. One technique involves creating a custom PL/SQL function called UNICODEPOINT(). When compared to other techniques UNICODEPOINT() proves to be faster, leaner, and more scalable. It also works on more Oracle versions than the other techniques.
...
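For illustration, here is a minimal sketch of one such technique, not the UNICODEPOINT() implementation from the article; the function name is hypothetical, and it assumes a single-character input whose code point lies in the Basic Multilingual Plane. It simply parses the UTF-16 escape that ASCIISTR() returns for non-ASCII input.

-- Hypothetical sketch, not the article's UNICODEPOINT() function.
-- Characters outside the Basic Multilingual Plane come back from
-- ASCIISTR() as surrogate pairs and would need extra handling.
CREATE OR REPLACE FUNCTION my_code_point (p_char IN VARCHAR2)
  RETURN NUMBER
IS
  v_escaped VARCHAR2(32) := ASCIISTR(p_char);   -- e.g. 'é' -> '\00E9'
BEGIN
  IF SUBSTR(v_escaped, 1, 1) = '\' THEN
    RETURN TO_NUMBER(SUBSTR(v_escaped, 2, 4), 'XXXX');   -- hex escape -> number
  END IF;
  RETURN ASCII(v_escaped);                        -- plain ASCII character
END my_code_point;
/

-- Usage: SELECT my_code_point('é') FROM dual;   -- returns 233 (U+00E9)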

Those Who Can’t Do, Audit

Mary Ann Davidson - Wed, 2011-08-24 11:53

I am often asked what the biggest change is that I’ve seen in my years working in information security. (Most of those who ask are too polite to ask how many years I’ve worked in information security. I used to say I joined Oracle when I was 8. Now, I tell everyone I started at Oracle when I was 3. Pretty soon, I’ll be a prenatal employee.)

There are too many changes to pick just one, but on a personal level, I seem to spend a lot more time on government affairs (relative to the rest of my job) than I used to: working with others in industry to discuss cybersecurity-related public policy, meeting with staffers on the Hill, commenting on draft legislation, commenting on others’ comments on draft legislation, and so on. On the one hand, it’s a natural result of the amount of dependence that everyone has on cybersecurity and the number of highly publicized breaches, hacks, and so on we continue to see. You can hardly open the paper – or open an email with a link to the daily electronic paper – without reading about another data breach, “outing” of sensitive information, a new thing you didn’t even know was IP-accessible being broken into (e.g., starting cars remotely), and so on.

Legislators often want to Do Something when they see a problem – that’s why we elected them, presumably. I’ve opined in previous blogs on the importance of defining what problem you want to solve, specifying what “it” is that you want to legislate, understanding costs – especially those pertaining to unintended consequences - and so on. I spend more time on govie affairs now because more legislators are proposing legislation that they hope will improve cybersecurity.

Most people who help frame public policy discussions about cybersecurity are well intended and they want to minimize or resolve the problem. However, there are also a whole lotta entities that want to use those discussions to get handed a nice, big, fat “public policy printing press”: ideally, legislative mandates where their businesses receive a direct and lucrative benefit, ka-ching. Their idea of public policy begins and ends at the cash register.

Some businesses would love to be in a position where they could write/draft/influence legislation about how no commercial company can be trusted with security and therefore needs someone (presumably people like themselves) to bless their systems and all their business practices – the more, the merrier. This includes legislative mandates on suppliers – who, as we all know, are busy throwing crappy code over the wall with malice aforethought. Those evil suppliers simply cannot be trusted, and thus the entirety of supplier business practices in security must be validated by Outside Experts* if not the entirety of their code scanned. (Nice work if you can get it.) (Aside: boy, with all the energy we suppliers expend on deliberately producing rotten code and putting our customers’ systems at risk – not to mention our own businesses since we run on our own stuff - it’s really hard to find time for other malicious acts like starving baby rabbits and kitties.**)

At the same time, these businesses are less than forthcoming when asked how they will be vetted: what do they find, how well do they find it, and at what cost, since everybody who has worked with code scanning tools knows they need to be tuned. (Having to plow through 1000 alleged vulnerabilities to find the three that are legitimate is way too expensive for any company to contemplate.) Now is a perfect time for a disclosure: I am on an advisory board to a company in this sector.

One company in particular is beginning to get under my skin on a lot of levels pertaining to “creating a market for themselves.” Let’s call them SASO (Static Analysis Service Offering): they – ta da! - do static analysis as a service offering. More specifically, they analyze the binaries to do static analysis – an important point. When SASO got started, I thought they had a useful value proposition in that a lot of small software companies don’t have security expertise in-house – one reason why Oracle has strong “business alignment” processes when we acquire a company that include alignment with our assurance practices, which the acquired entities are by and large really happy to do. The small company wants to increase their ability to sell into core markets and to do so they may need to convince prospective customers that their code is appropriately secure. They hire SASO to analyze their code, and SASO produces a summary report that tells the customer whether there are significant potentially unfixed vulnerabilities and also gives the small company the details of those vulnerabilities so they can validate actual vulnerabilities and then fix them. Fair enough, and a good service offering, to a point, more on which later. The real benefit to the small company is to get actionable intelligence on problems in their code earlier because for one, it’s almost always easier and cheaper to fix these before products ship, and/or remediate them in some severity order. Also, it’s not always easy to use static analysis tools yourself: the tools require knowledgeable people to run them and analyze the results.

It’s not, in other words, the “gold star” from SASO they get that has value; it’s the SASO expertise they get to fix issues earlier until they get that expertise in-house and up-to-speed. And they really do need that expertise in-house because security is a core element of software development and you cannot outsource it -- or you will have no security. (Among other things Oracle does in terms of secure development, we have our ethical hacking team conduct product assessments. They are as good as anybody in industry – or better – and they find problems that code analysis tools do not find and never will find. Their expertise is often codified in our own tools that we use in addition to commercially available static and dynamic analysis tools.)

Of course, the market for “helping small inexperienced companies improve the security-worthiness of their code” isn’t big enough, so SASO has been trying to expand the size of their market by means of two understandably - yet obnoxiously - self-promoting activities. One of them is convincing entire market sectors that they cannot trust their vendors, and that their suppliers’ code needs to be “tested” by SASO. You do want to ask, why would anybody trust them? I mean, more and more of SASO’s potential market size is based on FUD. The suppliers’ market, by contrast, is based on their customers’ ability to run their entire businesses on the software or hardware. That is, customers bet the digital farm on their suppliers, and the suppliers understand this. If you lose your customers’ confidence for whatever reason – security among them – your customers will switch suppliers faster than you can say “don’t bother with the code scan.” And thus, suppliers are out of business if they screw it up, because their competitors will be ruthless. Competitors are ruthless.

Who do you think is more trustworthy? Who has a greater incentive to do the job right – someone who builds something, or someone who builds FUD around what others build? Did I mention that most large hardware and software companies run their own businesses on their own products, so if there’s a problem, they – or rather, we – are the first ones to suffer? Can SASO say that? I thought not.

Being approached by more and more companies who want our code SASOed has led me to the “enough, already” phase of dealing with them. For the record, we do not and will not allow our code to be SASOed; in fact, we do not allow any third parties access to source code for purposes of security analysis (with one exception noted below). I’ve also produced a canned response that explains to customers why SASO will never darken our code base. It covers the following points, though admittedly when I express them to customers they are less glib and more professional:

1) We have source code and we do our own static analysis. Ergo, I don’t need SASO or anybody else to analyze our binaries (to get at the source code). Moreover, we have not only one but two static analysis tools we use in-house, one of which we built (Parfait, which is built by Sun Labs, a fine outfit we got via the Sun acquisition). Also, as I’ve opined at length in past blog entries, these tools are not “plug and play” – if they were, everybody would run down to Radio Shack, or wait for late night television to see advertisements for Code Scannerama (“finds all security vulnerabilities and malware in 28 seconds -- with No False Positives! But wait, there’s more: order now, and get a free turnip twaddler!”). For the amount of time we’d expend to work with SASO and get useful, accurate and actionable output – we could just get on with it and use the tools we’ve already licensed or already have. SASOing our code hurts us and doesn’t help us.

2) Security != testing. I am all for the use of automated tools to find vulnerabilities in software – it’s a code hygiene thing – but it is not the totality of assurance. Oracle has an extensive assurance program for our products that includes the use of static analysis tools. (In fact, Oracle requires the use of multiple tools, some of which we have developed in-house because no single tool “does it all.”) We also track compliance with our assurance program and report it to our senior management, all the way up to the CEO. In short, we do a far more comprehensive job in security than SASO can do for us. (I also note that – hype to the contrary – these tools will not find much if any malware in code. I am a blond and a bad programmer and it took me about ten minutes to think of ways to put bad stuff in code in a way that would not be detectable by automated code analysis.) “Code analysis” may be a necessary element for security-worthy code but it sure ain’t sufficient.

3) Precedent. It’s a really, really bad precedent to hand your source code to a third party for purposes of “security analysis” because gee, lots of governments have asked for the same privilege. And some governments who ask (I won’t name them, but we all know who they are) engage in state-sponsored industrial espionage so you might as well just kiss your intellectual property goodbye as hand your code to those guys (“Wanna start a new software/hardware company? Here’s our source code, O Official Government of Foobaria IP Theft Department!”). Some governments also want to easily do security analysis they then exploit for national security purposes – e.g., scan source code to find problems that they hand to their intelligence agencies to exploit. Companies can’t really say, “we will allow SASO to scan our code if customers nudge us enough” and then “just say no” to governments who want exactly the same privilege. Also, does anybody think it is a good idea for any third party to amass a database of unfixed vulnerabilities in a bunch of products? How are they going to protect that? Does anybody think that would be a really nice, fat, juicy hacker target? You betcha.

(As a matter of fact, Oracle does think that “bug repositories” are potentially nice, fat, juicy hacker targets. Security bug information is highly sensitive information, which is why security bugs are not published to customers and our bug database enforces “need to know” using our own row level access control technology. The security analysts who work for me have security bug access so they can analyze, triage, ensure complete fixes, and so on, but I don’t, because we enforce “need to know,” and I don’t.) I don’t trust any third party to secure detailed bug information and it is bad public policy to allow a third party to amass it in the first place.

4) Fixability. Only Oracle can fix vulnerabilities: SASO cannot. We have the code: they don’t.

5) Equality as public policy. Favoring a single customer or group of customers over others in terms of security vulnerability information is bad public policy. Frankly, all customers' data is precious to them and who would not want to be on the "insider" list? When we do have vulnerabilities in products, all customers get the same information at the same time and the same level of information. It’s basic equity and fairness. It’s also a lot better policy than playing favorites.

6) Global practices for global markets. The industry standard for "security evaluations" is an ISO standard called the Common Criteria (ISO 15408) that looks at the totality of security functionality needed to address specific threats as well as "assurance." A product with no exploitable security defects - that has no authentication, auditing or access control – and isn’t designed to protect against any threats - is absolutely, utterly useless. SASOing your code does not provide any information about the presence or absence of security functionality that meets particular threats. It’s like having someone do a termite inspection on the house you are about to buy but not looking at structural soundness: you know you don’t have bugs – pun intended – but you have not a clue about whether the house will fall down. Common Criteria evaluations do include vulnerability analysis but do not require static analysis. However, many vendors who Common Criteria-evaluate their product also do static analysis, whether or not they get any “credit” for doing it, because it is good practice to do so. Our Common Criteria labs do in some cases get source code access because of the type of analysis they have to do at some assurance levels. But the labs are not “governments,” they are licensed (so there is some quality control around their skill set), access to our source code is incredibly restricted, and we have highly enforceable liability provisions regarding protection of that source code.

7) All tools are not created equal. The only real utility in static analysis is in using the tools in development and fixing what you find in some priority order. In particular, requiring static analysis – e.g., by third parties - will not lead to significant improvement except possibly in the very short term and in the medium to long term will increase consumer costs for no benefit. Specifically:

- The state of the art in vulnerability discovery tools is changing very rapidly. Mandating use of a particular tool - or service - will almost guarantee that the mandated tool/service will be obsolete quickly, resulting in added cost for incremental benefit at best. Obsolescence is especially guaranteed if a particular tool or service is mandated because the provider, having been granted a monopoly, has no incentive to improve their service.

- The best types of tools used to discover vulnerabilities differ depending on the type of product or service that is being analyzed. Mandating a particular tool is thus suboptimal at best, and an expensive mistake at worst.

I said at the outset that SASO has a useful model for smaller companies that perhaps do not have security expertise in house - yet. But having said that, I offer a cautionary story. A company Oracle acquired some time ago had agreed to have their code SASOed at the request of a customer - before we acquired them. The first problem that created was that the team, by agreeing to a customer request – and continuing to agree to “outside code auditing” until we stopped it – set the market expectation that “gosh, we don’t know what we are doing in security and you shouldn’t trust us,” which is bad enough. What is far, far, worse: I believe this led to a mentality that “those (SASO or other) testing people will find security issues, so I don’t need to worry about security.” I told the product group that they absolutely, positively, needed in-house security expertise, that “outsourcing testing” would create an “outsourcing security” mentality that is unacceptable. Oracle cannot – does not – outsource security.

By way of contrast, consider another company that does static analysis as a service. Let’s call them HuiMaika‘i (Hawaiian for “good group”). HuiMaika‘i provides a service where customers can send in a particular type of code (i.e., based on a particular development framework) and get a report back from HuiMaika‘i that details suspected security vulnerabilities in it. Recently, a HuiMaika‘i customer sent them code which, when analyzed, was found to contain multiple vulnerabilities. HuiMaika‘i somehow determined that this code was actually Oracle code (that is, from an Oracle site) and, because it is their policy not to provide vulnerability reports to customers who submit code not owned by the customer, they returned the vulnerability report to Oracle and not the paying customer. (How cool is that?)

I think their policy is a good one, for multiple reasons. The first is that running such vulnerability-detecting software against code not owned by the customer may violate terms-of-use licenses (possibly resulting in liability for both the customer and the vendor of such vulnerability-detecting software). The second reason is that it would be best all around if the vendor of the software being scanned were informed about possible vulnerabilities before public disclosure so that the vendor has a chance to provide fixes in the software. Having said that, you can bet that Oracle had some takeaways from this exercise. First, we pulled the code down wikiwiki (Hawaiian for “really fast”) until we could fix it. And we fixed it wikiwiki. Second, my team is reminding development groups - in words of one syllable - that our coding standards require that any code someone gets from Oracle – even if “just an example” - be held to the same development standards as products are. Code is code. Besides, customers have a habit of taking demo code that does Useful Stuff and using it in production. Ergo, you shouldn’t write crappy code and post it as “an example for customers.” Duh.

Our third takeaway is that we are looking at using HuiMaika‘i onsite in that particular development area. It wasn’t, by the way, just because of what they found - it’s the way they conducted themselves: ethically, professionally, and they didn’t play “vendor gotcha.” Thanks very much, Hui. Small company – big integrity.

Returning to SASO, the other way – besides marketing “you can’t trust suppliers but you can trust us” - in which they are trying to expand their market is a well-loved and, unfortunately, often-attempted technique – get the government to do your market expansion work for you. I recently heard that SASO has hired a lobbyist. (I did fact check with them and they stated that, while they had hired a lobbyist, they weren’t “seriously pursuing that angle” – yet.) I have to wonder, what are they going to lobby for? Is it a coincidence that they hired a lobbyist just at the time we have so much draft cybersecurity legislation, e.g., that would have the Department of Homeland Security (DHS) design a supply chain risk management strategy – including third party testing of code - and then use that as part of Federal Acquisition Regulations (FAR) to regulate the supply chain of what the government buys? (Joy.)

I suspect that SASO would love the government to mandate third party code scans – it’s like winning the legislative lottery! As to assurance, many of my arguments above about why we “just say No, to S-A-S-O!” are still equally – if not more – relevant in a legislative context. Secondly, and related to “supply chain” concerns, you really cannot find malware in code by means of static analysis. You can find some coding errors that lead to certain types of security vulnerabilities - exploitable or not. You might be able to find known patterns of “commercially available” malware (e.g., someone downloaded a library and incorporated it into their product, and did not bother to scan it first before doing so. Quelle surprise! The library you got from CluelessRandomWebSite.com is infected with a worm). That is not the same thing as a deliberately introduced back door, and code scans will not find those.

In my opinion, neither SASO - nor any other requirement for third party security testing - has any place in a supply chain discussion. If the concern is assurance, the answer is to work within existing international assurance standards, not create a new one. Particularly not a new, US-only requirement to “hand a big fat market win by regulatory fiat to any vendor who lobbied for a provision that expanded their markets.” Ka-ching.

The bigger concern I have is the increasing number of attempts by many companies, not to innovate, but to convince legislators that people who actually do stuff and build things can’t be trusted (how about asking themselves the same question?). I recently attended a meeting in which one of the leading presenters was a tenured professor of law whose area of focus is government regulation, and who admitted having no expertise in software. Said individual stated that we should license software developers because “we need accountability.” (I note that as a tenured professor, that person has little accountability since she is “unfireable” regardless of how poor a job she does, or whether there is any market demand for what she wants to teach.)

I felt compelled to interject: “Accountable to whom? Because industry hires many or most of these grads. We can enforce accountability – we fire people who don’t perform. If your concern is our accountability to our customers – we suppliers absolutely are accountable. Our customers can fire us – move to a competitor – if we don’t do our jobs well.”

In some cases, there are valid public policy arguments for regulation. The to-and-fro that many of us have in discussions with well-meaning legislators is over what the problem really is, whether there is a market failure, and - frankly - whether regulation will work, and at what cost. I can agree to disagree with others over where a regulatory line is drawn. But having said that, what I find most distasteful in all the above is the example of a company with a useful service - “hiring guns to defend yourself, until you get your own guns” - deciding that instead of innovating their way to a larger market, they are FUDing their way to a larger market and potentially “regulating” their way to a larger market.

America has been, in the past, a proud nation of innovators, several of whom I have been privileged to know and call “friends.” I fear that we are becoming a nation of auditors, where nobody will be able to create, breathe, live, without asking someone else, who has never built anything but has been handed power they did not earn, “Mother, May I?”

Live free, or die.  

* Aside from everything else, I draw the line at someone who knows less than I do about my job telling me how to do it. Unless, of course, there is some reciprocity. I’d like to head the Nuclear Regulatory Commission. Of course, I don’t know a darn thing about nuclear energy, but I’m smart and I mean well. Of course, many people given authority to do things nobody asked them to do - and wouldn’t ask them to do - create havoc because, despite being smart and meaning well, they don’t have a clue. Like the line goes from The Music Man, “you gotta know the territory.”

** Of course we don’t do that – it was a joke. I have a pair of bunnies who live at the end of my street and I’ve become quite attached to them. I always slow down at the end of the street to make sure I don’t hit them. I also hope no predator gets them because they are quite charming. Of course, I might not find them so charming if they were noshing on my arugula, but that’s why you schlep on over to Moss Garden Center and buy anti-deer and anti-bunny spray for your plants. I like kitties, too, for the record.

Book of the Month

Bats at the Library by Brian Lies 
This is a children’s book but it is utterly charming, to the point where I bought a copy just to have one. I highly recommend it for the future little reader in your life. Bats see that a library window has been left open, and they enter the library to enjoy all its delights, including all the books (and of course, playing with the copy machine and the water fountain). The illustrations are wonderful, especially all the riffs on classic children’s stories (e.g., Winnie the Pooh as a bat, Peter Rabbit with bat wings, the headless horseman with bat wings, etc.). There are two other books in the series: Bats at the Beach and Bats at the Ballgame. I haven’t read them, but I bet they are good, too.

Shattered Sword: The Untold Story of the Battle of Midway by Jonathan Parshall and Anthony Tully
I can’t believe I bought and read, much less am recommending, yet another book on the Battle of Midway, but this is a must-read. The authors went back to primary Japanese sources and discuss the battle from the Japanese point of view. It’s particularly interesting since so many Western discussions of the battle use as a primary source the account of the battle by Mitsuo Fuchida (who led the air strike on Pearl Harbor and who was at Midway on ADM Nagumo’s flagship), which was apparently largely – oh dear – fabricated to help various participants “save face.” For example, you find out how important little details are, such as that the US forces regularly drained aviation fuel lines (to avoid catastrophic fires during battles – which the Japanese did not do) and that the Japanese – unlike the US - had dedicated damage control teams (if the damage control team was killed, a Japanese ship could not do even basic damage control). It’s an amazing and fascinating book, especially since the authors are not professional historians but regular folks who have had a lifelong interest in the battle. Terrific read.

Neptune’s Inferno: The US Navy at Guadalcanal by James Hornfischer 
I might as well just recommend every book James Hornfischer has ever written since he is a wonderful naval historian, a compelling storyteller, and incorporates oral history so broadly and vividly into his work. You see, hear, and feel -- to the extent you can without having been there -- what individuals who were in the battle saw, heard, and felt. Despite having read multiple works on Guadalcanal (this one focuses on the sea battles, not the land battle), I came away with a much better understanding of the “who” and “why.” I was so convinced it was going to be wonderful and I’d want to read it again that I bought it in hardcover (instead of waiting for paperback and/or getting it at the library). Terrif.

Cutting for Stone by Abraham Verghese 
I don’t generally read much contemporary fiction since so much of it is postmodern dreary drivel. However, a friend bought this for me - and what a lovely gift! This is a compelling story (twin brothers born in Ethiopia to a surgeon father and a nun mother) about what constitutes family - and forgiveness. It has been a best seller (not that that always means much) and highly acclaimed (which in this case does mean much). The story draws you in and the characters are richly drawn. I am not all that interested in surgery but the author makes it interesting, particularly in describing the life and work of surgeons in destitute areas.

Those Who Can’t Do, Audit

Mary Ann Davidson - Wed, 2011-08-24 11:53

I am often asked what the biggest change is that I’ve seen in my years working in information security. (Most of those who ask are too polite to ask how many years I’ve worked in information security. I used to say I joined Oracle when I was 8. Now, I tell everyone I started at Oracle when I was 3. Pretty soon, I’ll be a prenatal employee.)


There are too many changes to pick just one, but on a personal level, I seem to spend a lot more time on government affairs (relative to the rest of my job) that I used to: working with others in industry to discuss cybersecurity-related public policy, meeting with staffers on the Hill, commenting on draft legislation, commenting on others’ comments on draft legislation, and so on. On the one hand, it’s a natural result of the amount of dependence that everyone has on cybersecurity and the amount of highly publicized breaches, hacks, and so on we continue to see. You can hardly open the paper – or open an email with a link to the daily electronic paper – without reading about another data breach, “outing” of sensitive information, new thing you didn’t even know was IP-accessible being broken into (e.g., starting cars remotely), and so on.


Legislators often want to Do Something when they see a problem – that’s why we elected them, presumably. I’ve opined in previous blogs on the importance of defining what problem you want to solve, specifying what “it” is that you want to legislate, understanding costs – especially those pertaining to unintended consequences - and so on. I spend more time on govie affairs now because more legislators are proposing legislation that they hope will improve cybersecurity.


Most people who help frame public policy discussions about cybersecurity are well intended and they want to minimize or resolve the problem. However, there are also a whole lotta entities that want to use those discussions to get handed a nice, big, fat “public policy printing press”: ideally, legislative mandates where their businesses receive a direct and lucrative benefit, ka-ching. Their idea of public policy begins and ends at the cash register.


Some businesses would love to be in a position where they could write/draft/influence legislation about how no commercial company can be trusted with security and therefore needs someone (presumably people like themselves) to bless their systems and all their business practices – the more, the merrier. This includes legislative mandates on suppliers – who, as we all know – are busy throwing crappy code over the wall with malice aforethought. Those evil suppliers simply cannot be trusted, and thus the entirety of supplier business practices in security must be validated by Outside Experts* if not the entirety of their code scanned. (Nice work if you can get it.) (Aside: boy, with all the energy we suppliers expend on deliberately producing rotten code and putting our customers’ systems at risk – not to mention our own businesses since we run on our own stuff - it’s really hard to find time for other malicious acts like starving baby rabbits and kitties.**)


At the same time, these businesses are less than forthcoming when asked how they will be vetted: what do they find, how well do they find it and at what cost, since everybody who has worked with code scanning tools know they need to be tuned. (Having to plow through 1000 alleged vulnerabilities to find the three that are legitimate is way too expensive for any company to contemplate doing it.) Now is a perfect time for a disclosure: I am on an advisory board to a company in this sector.


One company in particular is beginning to get under my skin on a lot of levels pertaining to “creating a market for themselves.” Let’s call them SASO (Static Analysis Service Offering) that – ta da! - do static analysis as a service offering. More specifically, they analyze the binaries to do static analysis – an important point. When SASO got started, I thought they had a useful value proposition in that a lot of small software companies don’t have security expertise in-house – one reason why Oracle has strong “business alignment” processes when we acquire a company that include alignment with our assurance practices, which the acquired entities are by-and-large really happy to do. The small company wants to increase their ability to sell into core markets and to do so they may need to convince prospective customers that their code is appropriately secure. They hire SASO to analyze their code, and SASO produces a summary report that tells the customer whether there are significant potentially unfixed vulnerabilities and also gives the small company the details of those vulnerabilities so they can validate actual vulnerabilities and then fix them. Fair enough, and a good service offering, to a point, more on which later. The real benefit to the small company is to get actionable intelligence on problems in their code earlier because for one, it’s almost always easier and cheaper to fix these before products ship, and/or remediate them in some severity order. Also, it’s not always easy to use static analysis tools, yourself: the tools require knowledgeable people to run them and analyze the results.


It’s not, in other words, the “gold star” from SASO they get that has value; it’s the SASO expertise they get to fix issues earlier until they get that expertise in-house and up-to-speed. And they really do need that expertise in-house because security is a core element of software development and you cannot outsource it -- or you will have no security. (Among other things Oracle does in terms of secure development, we have our ethical hacking team conduct product assessments. They are as good as anybody in industry – or better – and they find problems that code analysis tools do not find and never will find. Their expertise is often codified in our own tools that we use in addition to commercially available static and dynamic analysis tools.)


Of course, the market for “helping small inexperienced companies improve the security-worthiness of their code” isn’t big enough, so SASO has been trying to expand the size of their market by means of two understandably - yet obnoxiously - self-promoting activities. One of them is by convincing entire market sectors that they cannot trust their vendors, and their suppliers’ code needs to be “tested” by SASO. You do want to ask, why would anybody trust them? I mean, more and more of SASO’s potential market size is based on FUD. Whereas, the suppliers’ market is based on their customers’ ability to run their entire businesses on the software or hardware. That is, customers bet the digital farm on their suppliers, and the suppliers understand this. If you lose your customer’s confidence for whatever reason – security among them – your customers will switch suppliers faster than you can say “don’t bother with the code scan.” And thus, suppliers are out of business if they screw it up, because their competitors will be ruthless. Competitors are ruthless.


Whom do you think is more trustworthy? Who has a greater incentive to do the job right – someone who builds something, or someone who builds FUD around what others build? Did I mention that most large hardware and software companies run their own businesses on their own products so if there’s a problem, they – or rather, we – are the first ones to suffer? Can SASO say that? I thought not.


Being approached by more and more companies who want our code SASOed has led me to the “enough, already” phase of dealing with them. For the record, we do not and will not allow our code to be SASOed; in fact, we do not allow any third parties access to source code for purposes of security analysis (with one exception noted below). I’ve also produced a canned response that explains to customers why SASO will never darken our code base. It covers the following points, though admittedly when I express them to customers they are less glib and more professional:



1) We have source code and we do our own static analysis. Ergo, I don’t need SASO or anybody else to analyze our binaries (to get at the source code). Moreover, we have not only one but two static analysis tools we use in-house, one of which we built (Parfait, which is built by Sun Labs, a fine outfit we got via the Sun acquisition). Also, as I’ve opined at length in past blog entries, these tools are not “plug and play” – if they were, everybody would run down to Radio Shack, or wait for late night television to see advertisements for Code Scannerama (“finds all security vulnerabilities and malware in 28 seconds -- with No False Positives! But wait, there’s more: order now, and get a free turnip twaddler!”). For the amount of time we’d expend to work with SASO and get useful, accurate and actionable output – we could just get on with it and use the tools we’ve already licensed or already have. SASOing our code hurts us and doesn’t help us.


2) Security != testing. I am all for the use of automated tools to find vulnerabilities in software – it’s a code hygiene thing – but it is not the totality of assurance. Oracle has an extensive assurance program for our products that includes the use of static analysis tools. (In fact, Oracle requires the use of multiple tools, some of which we have developed in-house because no single tool “does it all.”) We also track compliance with our assurance program and report it to our senior management, all the way up to the CEO. In short, we do a far more comprehensive job in security than SASO can do for us. (I also note that – hype to the contrary – these tools will not find much if any malware in code. I am a blond and a bad programmer and it took me about ten minutes to think of ways to put bad stuff in code in a way that would not be detectable by automated code analysis.) “Code analysis” may be a necessary element for security-worthy code but it sure ain’t sufficient.


3) Precedent. It’s a really, really bad precedent to hand your source code to a third party for purposes of “security analysis” because gee, lots of governments have asked for the same privilege. And some governments who ask (I won’t name them, but we all know who they are) engage in state-sponsored industrial espionage so you might as well just kiss your intellectual property good bye as hand your code to those guys (“Wanna start a new software/hardware company? Here’s our source code, O Official Government of Foobaria IP Theft Department!”). Some governments also want to easily do security analysis they then exploit for national security purposes – e.g., scan source code to find problems that they hand to their intelligence agencies to . Companies can’t really say, “we will allow SASO to scan our code if customers nudge us enough” and then “just say no” to governments who want exactly the same privilege. Also, does anybody think it is a good idea for any third party to amass a database of unfixed vulnerabilities in a bunch of products? How are they going to protect that? Does anybody think that would be a really nice, fat, juicy hacker target? You betcha.


(As a matter of fact, Oracle does think that “bug repositories” are potentially nice, fat, juicy hacker targets. Security bug information is highly sensitive information, which is why security bugs are not published to customers and our bug database enforces “need to know” using our own row level access control technology. The security analysts who work for me have security bug access so they can analyze, triage, ensure complete fixes, and so on, but I don’t, because we enforce “need to know,” and I don’t.) I don’t trust any third party to secure detailed bug information and it is bad public policy to allow a third party to amass it in the first place.


4) Fixability. Only Oracle can fix vulnerabilities: SASO cannot. We have the code: they don’t.


5) Equality as public policy. Favoring a single customer or group of customers over others in terms of security vulnerability information is bad public policy. Frankly, all customers' data is precious to them and who would not want to be on the "insider" list? When we do have vulnerabilities in products, all customers get the same information at the same time and the same level of information. It’s basic equity and fairness. It’s also a lot better policy than playing favorites.


6) Global practices for global markets. The industry standard for "security evaluations" is an ISO standard called the Common Criteria (ISO 15408) that looks at the totality of security functionality needed to address specific threats as well as "assurance." A product with no exploitable security defects - that has no authentication, auditing or access control – and isn’t designed to protect against any threats - is absolutely, utterly useless. SASOing your code does not provide any information about the presence or absence of security functionality that meets particular threats. It’s like having someone do a termite inspection on the house you are about to buy but not looking at structural soundness: you know you don’t have bugs – pun intended – but you have not a clue about whether the house will fall down. Common Criteria evaluations do include vulnerability analysis but do not require static analysis. However, many vendors who Common Criteria-evaluate their product also do static analysis, whether or not they get any “credit” for doing it, because it is good practice to do so. Our Common Criteria labs do in some cases get source code access because of the type of analysis they have to do at some assurance levels. But the labs are not “governments,” they are licensed (so there is some quality control around their skill set), access to our source code is incredibly restricted, and we have highly enforceable liability provisions regarding protection of that source code.


7) All tools are not created equal. The only real utility in static analysis is in using the tools in development and fixing what you find in some priority order. In particular, requiring static analysis – e.g., by third parties - will not lead to significant improvement except possibly in the very short term and in the medium to long term will increase consumer costs for no benefit. Specifically: · The state of the art in vulnerability discovery tools is changing very rapidly. Mandating use of a particular tool - or service - will almost guarantee that the mandated tool/service will be obsolete quickly, resulting in added cost for incremental benefit at best. Obsolescence is especially guaranteed if a particular tool or service is mandated because the provider, having been granted a monopoly, has no incentive to improve their service. · The best types of tools used to discover vulnerabilities differ depending on the type of product or service that is being analyzed. Mandating a particular tool is thus suboptimal at best, and an expensive mistake at worst.



I said at the outset that SASO has a useful model for smaller companies that perhaps do not have security expertise in house - yet. But having said that, I offer a cautionary story. A company Oracle acquired some time ago had agreed to have their code SASOed at the request of a customer - before we acquired them. The first problem that created was that the team, by agreeing to a customer request – and continuing to agree to “outside code auditing” until we stopped it – set the market expectation that “gosh, we don’t know what we are doing in security and you shouldn’t trust us,” which is bad enough. What is far, far, worse: I believe this led to a mentality that “those (SASO or other) testing people will find security issues, so I don’t need to worry about security.” I told the product group that they absolutely, positively, needed in-house security expertise, that “outsourcing testing” would create an “outsourcing security” mentality that is unacceptable. Oracle cannot – does not – outsource security.


By way of contrast, consider another company that does static analysis as a service. Let’s call them HuiMaika‘i (Hawaiian for “good group”). HuiMaika‘i provides a service where customers can send in a particular type of code (i.e., based on a particular development framework) and get a report back from HuiMaika‘i that details suspected security vulnerabilities in it. Recently, a HuiMaika‘i customer sent them code which, when analyzed, was found to contain multiple vulnerabilities. HuiMaika‘i somehow determined that this code was actually Oracle code (that is, from an Oracle site) and, because it is their policy not to provide vulnerability reports to customers who submit code not owned by the customer, they returned the vulnerability report to Oracle and not the paying customer. (How cool is that?)


I think their policy is a good one, for multiple reasons. The first is that running such vulnerability-detecting software against code not owned by the customer may violate terms-of-use licenses (possibly resulting in liability for both the customer and the vendor of such vulnerability detecting software). The second reason is that it would be best all around if the vendor of the software being scanned be informed about possible vulnerabilities before public disclosure so that the vendor has a chance to provide fixes in the software. Having said that, you can bet that Oracle had some takeaways from this exercise. First, we pulled the code down wikiwiki (Hawaiian for “really fast”) until we could fix it. And we fixed it wikiwiki. Second, my team is reminding development groups - in words of one syllable - that our coding standards require that any code someone gets from Oracle – even if “just an example” - be held to the same development standards as products are. Code is code. Besides, customers have a habit of taking demo code that does Useful Stuff and using it in production. Ergo, you shouldn’t write crappy code and post it as “an example for customers.” Duh.


Our third takeaway is that we are looking at using HuiMaika‘i onsite in that particular development area. It wasn’t, by the way, just because of what they found - it’s the way they conducted themselves: ethically, professionally, and they didn’t play “vendor gotcha.” Thanks very much, Hui. Small company – big integrity.


Returning to SASO, the other way – besides marketing “you can’t trust suppliers but you can trust us” - in which they are trying to expand their market is a well-loved and unfortunately, often-attempted technique – get the government to do your market expansion work for you. I recently heard that SASO has hired a lobbyist. (I did fact check with them and they stated that, while they had hired a lobbyist, they weren’t “seriously pursuing that angle” – yet.) I have to wonder, what are they going to lobby for? Is it a coincidence that they hired a lobbyist just at the time we have so much draft cybersecurity legislation, e.g., that would have the Department of Homeland Security (DHS) design a supply chain risk management strategy – including third party testing of code - and then use that as part of Federal Acquisition Regulations (FAR) to regulate the supply chain of what the government buys? (Joy.)


I suspect that SASO would love the government to mandate third party code scans – it’s like winning the legislative lottery! As to assurance, many of my arguments above about why we “just say No, to S-A-S-O!” are still equally – if not more – relevant in a legislative context. Secondly, and related to “supply chain” concerns, you really cannot find malware in code by means of static analysis. You can find some coding errors that lead to certain types of security vulnerabilities - exploitable or not. You might be able to find known patterns of “commercially available” malware (e.g., someone downloaded a library and incorporated it into their product, and did not bother to scan it first before doing so. Quelle surprise! The library you got from CluelessRandomWebSite.com is infected with a worm). That is not the same thing as a deliberately introduced back door, and code scans will not find those.


In my opinion, neither SASO - nor any other requirement for third party security testing - has any place in a supply chain discussion. If the concern is assurance, the answer is to work within existing international assurance standards, not create a new one. Particularly not a new, US-only requirement to “hand a big fat market win by regulatory fiat to any vendor who lobbied for a provision that expanded their markets.” Ka-ching.


The bigger concern I have is the increasing amount of attempts by many companies, not to innovate, but to convince legislators that people who actually do stuff and build things can’t be trusted (how about asking themselves the same question?). I recently attending a meeting in which one of the leading presenters was a tenured professor of law whose area of focus is government regulation, and who admitted having no expertise in software. Said individual stated that we should license software developers because “we need accountability.” (I note that as a tenured professor, that person has little accountability since she is “unfireable” regardless of how poor a job she does, or whether there is any market demand for what she wants to teach.)


I felt compelled to interject: “Accountable to whom? Because industry hires many or most of these grads. We can enforce accountability – we fire people who don’t perform. If your concern is our accountability to our customers – we suppliers absolutely are accountable. Our customers can fire us – move to a competitor – if we don’t do our jobs well.”


In some cases, there are valid public policy arguments for regulation. The to- and fro- that many of us have in discussions with well-meaning legislators is what the problem really is, whether there is a market failure and - frankly -whether regulation will work, and at what cost. I can agree to disagree with others over where a regulatory line is drawn. But having said that, what I find most distasteful in all the above, is the example of a company with a useful service - “hiring guns to defend yourself, until you get your own guns” - deciding that instead of innovating their way to a larger market, they are FUDing their way to a larger market and potentially “regulating” their way to a larger market.


America has been in the past, a proud nation of innovators, several of whom I have been privileged to know and call “friends.” I fear that we are becoming nation of auditors, where nobody will be able to create, breathe, live, without asking someone else, who has never built anything but has been handed power they did not earn, “Mother, May I?”


Live free, or die.  


* Aside from everything else, I draw the line at someone who knows less than I do about my job tell me how to do it. Unless, of course, there is some reciprocity. I’d like to head the Nuclear Regulatory Commission. Of course, I don’t know a darn thing about nuclear energy, but I’m smart and I mean well. Of course, many people given authority to do things nobody asked them to do - and wouldn’t ask them to do - create havoc because, despite being smart and meaning well, they don’t have a clue. Like the line goes from The Music Man, “you gotta know the territory.”


** Of course we don’t do that – it was a joke. I have a pair of bunnies who live at the end of my street and I’ve become quite attached to them. I always slow down at the end of the street to make sure I don’t hit them. I also hope no predator gets them because they are quite charming. Of course, I might not find them so charming if they were noshing on my arugula, but that’s why you schlep on over to Moss Garden Center and buy anti-deer and anti-bunny spray for your plants. I like kitties, too, for the record.


Book of the Month


Bats at the Library by Brian Lies 
This is a children’s book but it is utterly charming, to the point where I bought a copy just to have one. I highly recommend it for the future little reader in your life. Bats see that a library window has been left open, and they enter the library to enjoy all its delights, including all the books (and of course, playing with the copy machine and the water fountain). The illustrations are wonderful, especially all the riffs on classic children’s stories (e.g.,Winnie the Pooh as a bat, Peter Rabbit with bat wings, the headless horseman with bat wings, etc.). There are two other books in the series: Bats at the Beach and Bats at the Ballgame. I haven’t read them, but I bet they are good, too.


Shattered Sword: The Untold Story of the Battle of Midway by Jonathan Parshall and Anthony Tully
I can’t believe I bought and read, much less am recommending, yet another book on the Battle of Midway, but this is a must read. The authors went back to primary Japanese sources and discuss the battle from the Japanese point of view. It’s particularly interesting since so many Western discussions of the battle use as a primary source the account of the battle by Mitsuo Fuchida (who led the air strike on Pearl Harbor and who was at Midway on ADM Nagumo’s flagship), which was apparently largely – oh dear – fabricated to help various participants “save face.” For example, you find out how important little details are, such as that US forces regularly drained aviation fuel lines (to avoid catastrophic fires during battles – which the Japanese did not do) and that the Japanese – unlike the US - had dedicated damage control teams (if the damage control team was killed, a Japanese ship could not do even basic damage control). It’s an amazing and fascinating book, especially since the authors are not professional historians but regular folks who have had a lifelong interest in the battle. Terrific read.


Neptune’s Inferno: The US Navy at Guadalcanal by James Hornfischer 
I might as well just recommend every book James Hornfischer has ever written since he is a wonderful naval historian, a compelling storyteller, and incorporates oral history so broadly and vividly into his work. You see, hear, and feel -- to the extent you can without having been there -- what individuals who were in the battle saw, heard, and felt. Despite having read multiple works on Guadalcanal (this one focuses on the sea battles, not the land battle), I came away with a much better understanding of the “who” and “why.” I was so convinced it was going to be wonderful and I’d want to read it again that I bought it in hardcover (instead of waiting for paperback and/or getting it at the library). Terrif.


Cutting for Stone by Abraham Verghese 
I don’t generally read much contemporary fiction since so much of it is postmodern dreary drivel. However, a friend bought this for me - and what a lovely gift! This is a compelling story (twin brothers born in Ethiopia to a surgeon father and a nun mother) about what constitutes family - and forgiveness. It has been a best seller (not that that always means much) and highly acclaimed (which in this case does mean much). The story draws you in and the characters are richly drawn. I am not all that interested in surgery but the author makes it interesting, particularly in describing the life and work of surgeons in destitute areas.

Book Review: Middleware and Cloud Computing: Oracle on Amazon Web Services and Rackspace Cloud

Chris Muir - Wed, 2011-08-24 08:07
With the explosion of internet content, especially content for the IT industry, an interesting question hangs over the worth (if any) of IT textbooks. When you can find an answer on just about anything online, what’s the point of shelling out money, especially for IT texts that have been overpriced for some time?

Frank Munz’s Middleware and Cloud Computing: Oracle on Amazon Web Services and Rackspace Cloud is a good reminder of one key fact about textbooks in the context of an internet society: they can save you a lot of research and time on the internet looking for the nitty-gritty details.

The book is clearly aimed at system administrators & architects who are looking for details about moving Oracle Fusion Middleware (FMW) products to the cloud. A healthy dose of system admin knowledge is required of readers, with discussions assuming familiarity with operating systems (particularly Linux) and use of the command line; a knowledge of networking concepts would greatly assist too. FMW knowledge isn’t assumed, with an introductory chapter included, but knowledge of Oracle’s WebLogic Server (WLS) would be highly beneficial to readers, and familiarity with Java EE technologies too.

Munz’s book is broken into two logical halves. The first is a general introduction to “as a Service” cloud computing concepts. For readers who have heard the terminology but haven’t kept up with all the ins and outs of what a cloud service is, this provides an opportunity to learn the lingo and also learn how to critique the cloud offerings, which are (let’s just say) overhyped by IT marketing.

The first part of the book also takes care to look in depth at Amazon Web Services (AWS), including images, instances, storage and even pricing. In this area the book departs from a typical theoretical text by encouraging readers to create their own AWS accounts and giving details on how to configure and run your own instance. The text, however, doesn’t just focus on AWS; it also looks at Rackspace’s equivalent cloud services.

The second half is where Munz’s book shines. Moving on from cloud basics, readers are led through considerations on design and architecture within the cloud, management, availability and scalability, all in the context of FMW and specifically of WLS and its supported Java EE technologies. In each area the reader is brought back to specific considerations and limitations of Amazon’s & Rackspace’s platforms. On completing the book it becomes obvious this is a well-thought-out inclusion, as, like enterprises’ home-baked operating systems and network infrastructure, cloud vendors’ platforms are not born equal, nor do they include every feature required. The implication is that certain FMW features and designs simply won’t work on specific cloud platforms.

The book isn’t without fault. Munz does take a narrative approach that may not be everybody’s cup of tea, and there’s a section that takes an unfortunate cop-out in not tackling Oracle’s (let’s just say) less-than-favourable licensing. Yet overall the outcome for FMW professionals, in particular administrators and architects, is a positive one, and a recommended read. It’s the careful research into actually testing which FMW features will really work on each cloud vendor’s platform, all collated into one book rather than sprayed across the internet, that will save readers significant time: forewarned is forearmed.

Remember Your Password Or You Won't Get Your Donut

Mark Wilcox - Wed, 2011-08-24 04:26
People have trouble remembering complex passwords. Click here to see one organization's ingenious way to get their employees to remember them. Click it or no donut for you.

Tuning SOA 11g in a nutshell

Marc Kelderman - Sun, 2011-08-21 09:19

Tuning SOA 11g can be done in a few quick steps; this will tackle the most common performance issues at the infrastructure level. On the other hand, most performance issues are in the application (composite) itself. If you design a bad application, the application will not perform well.

I did my tuning on the following subjects:
  • JVM Memory
  • JVM Garbage Collection
  • Datasources
  • Threads
  • EJB
  • Database

JVM Memory
Make sure you have set the minimum and maximum heap sizes to the same value. The size should be at least 1024MB.
Make sure the MaxPermSize parameter is set to at least 512MB. In most cases requests in the SOA world are relatively small, which means you can shrink the stack size for each Java thread.

Example:

-server -Xms1280m -Xmx1280m -XX:MaxPermSize=512m -Xss128k


JVM Garbage Collection
Running your application server with one CPU, you would not expect enhancements from setting garbage collection to parallel. On VMware we saw some significant improvement when we set this option, even though the virtual machine has one CPU. I think it has to do with the hypervisor and the ESX server; somehow the virtual machine benefits from the ESX environment.

Example:

-XX:-UseParallelGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing  

JMX
Setting some JMX options will not improve the performance, but it gives you insight into the Java VM. If you are not using JRockit, you could set the following options and use VisualVM to view the Java engine.

-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=7004 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false 

http://orasoa.blogspot.com/2010/04/monitoring-your-soa-jvm.html



Otherwise, use JRockit Mission Control to get more information out of your Java VM.

Data-sources
Using the correct settings for all the connections to your database is essential. The SOA Suite uses a lot of data-sources, while the OSB uses one data-source for its reports. Measure your connections to the database while running a load test on your environment. Based on these measurements you can predict the correct configuration.

The following table shows an example of the settings you should tune. Note that the values are based on my measurements on a particular system. These settings should be applied via the WebLogic Console or through WebLogic scripting (WLST); a sketch follows the table.

Per-datasource pool sizing (all datasources point to Host db-host, Port 1521, SID orcl):

Datasource             Type     Initial Capacity   Maximum Capacity
EDNDataSource          XA       1                  10
EDNLocalTxDataSource   non-XA   1                  10
mds-owsm               non-XA   1                  10
mds-soa                non-XA   1                  10
OraSDPMDataSource      non-XA   1                  10
SOADataSource          XA       2                  20
SOALocalTxDataSource   non-XA   2                  20

Common settings for all datasources:
Capacity Increment: 2
Statement Cache Size: 10
Test Connections On Reserve: TRUE
Test Frequency: 180
Seconds to Trust an Idle Pool Connection: 30
Shrink Frequency: 300
Connection Creation Retry Frequency: 30
Inactive Connection Timeout: 30
Maximum Waiting for Connection: 2147483647
Connection Reserve Timeout: 30
Statement Timeout: -1
Environment - oracle.net.CONNECT_TIMEOUT: 100000

Additional settings for the XA datasources (EDNDataSource and SOADataSource):
Set XA Transaction Timeout: TRUE
XA Transaction Timeout: 0
XA Retry Duration: 300
XA Retry Interval: 60
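
For those who prefer scripting over clicking through the console, the same pool settings can be applied with WLST. The following is a minimal sketch for one datasource; the admin URL, credentials and datasource name are placeholders, and the attribute names mirror the console labels in the table above:

# wlst_tune_datasource.py - run with: wlst.sh wlst_tune_datasource.py
# Admin URL, credentials and the datasource name are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

dsName = 'SOADataSource'
edit()
startEdit()

# Navigate to the connection pool parameters of the datasource.
cd('/JDBCSystemResources/%s/JDBCResource/%s/JDBCConnectionPoolParams/%s' % (dsName, dsName, dsName))
set('InitialCapacity', 2)
set('MaxCapacity', 20)
set('CapacityIncrement', 2)
set('StatementCacheSize', 10)
set('TestConnectionsOnReserve', true)
set('TestFrequencySeconds', 180)
set('SecondsToTrustAnIdlePoolConnection', 30)
set('ShrinkFrequencySeconds', 300)
set('ConnectionCreationRetryFrequencySeconds', 30)
set('InactiveConnectionTimeoutSeconds', 30)
set('ConnectionReserveTimeoutSeconds', 30)
set('StatementTimeout', -1)

save()
activate()
disconnect()

Repeat the block for each datasource, adjusting the initial and maximum capacities per the table.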

Threads
Do not touch the default threading model of WebLogic unless you know what you are doing. WebLogic uses an automated mechanism for tuning its own workload. WebLogic Server prioritizes work and allocates threads based on an execution model that takes into account the managed server's parameters and actual run-time statistics (performance and throughput).

Within SOA Suite 11g, you can still tune the threads of the BPEL engine itself. Keep in mind that these threads will use a database connection from the pool; that is not to say the total number of BPEL threads results in the same number of connections to the database. You can set these values via Oracle Enterprise Manager (a scripted alternative follows the list below):

dspInvokeThread    10
dspEngineThreads   15
dspSystemThreads   2
synMaxWaitTime     150
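
If you want to script this instead of clicking through Enterprise Manager, the BPEL engine properties are exposed through the SOA BPELConfig MBean. The sketch below is an assumption-heavy example: the MBean object name and the attribute names (DispatcherInvokeThreads, DispatcherEngineThreads, DispatcherSystemThreads, SyncMaxWaitTime) should be verified in the EM System MBean Browser for your exact SOA 11g version, and the host, port and credentials are placeholders:

# wlst_bpel_threads.py - run with: wlst.sh wlst_bpel_threads.py
# All names below are assumptions; verify the MBean and attribute names
# in the EM System MBean Browser before using this on a real system.
connect('weblogic', 'welcome1', 't3://soahost:8001')   # SOA managed server
custom()                                               # switch to the custom MBean tree
cd('oracle.as.soainfra.config')
ls()                                                   # find the exact BPELConfig object name
cd('oracle.as.soainfra.config:Location=soa_server1,name=bpel,type=BPELConfig,Application=soa-infra')
set('DispatcherInvokeThreads', 10)
set('DispatcherEngineThreads', 15)
set('DispatcherSystemThreads', 2)
set('SyncMaxWaitTime', 150)
disconnect()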

EJB
The soa-infra application contains all the EJB objects that are responsible for SOA 11g (BPEL, Workflow, BPM, Sensors, Mediators, etc.). Each of these EJB objects has its own time-out; by default it is 300 seconds. When you have long-running processes, also known as not optimally designed and implemented SOA composites ;-) , you could run into time-out errors that could slow down the whole SOA environment. Therefore you should design and implement efficient and optimal composites.

To change the time-outs of the EJBs, execute the following steps.
  1. Stop all the managed servers
  2. Apply the EJB time-out changes
  3. Start the managed servers
or
  1. Stop all the managed servers
  2. Undeploy the soa-infra application
  3. Activate the changes
  4. Deploy the soa-infra application from $ORACLE_HOME/soa/applications/soa-infra-wls.ear, as 'soa-infra', and copy the application to all servers in the cluster
  5. Activate the changes
  6. Redeploy the soa-infra application with a configuration file (Plan.xml) containing all the new EJB time-out settings (a WLST sketch for this step follows the plan below)
  7. Activate the changes
  8. Start the soa-infra application to serve all requests
Here is the configuration file (Plan.xml) for the soa-infra application:

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/deployment-plan http://xmlns.oracle.com/weblogic/deployment-plan/1.0/deployment-plan.xsd">
<application-name>applications</application-name>
<variable-definition>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELEngineBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELDeliveryBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELActivityManagerBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELServerManagerBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELProcessManagerBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELInstanceManagerBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELFinderBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELDispatcherBean</name>
<value>1800</value>
</variable>
<variable>
<name>TransactionDescriptor_transTimeoutSeconds_BPELSensorValuesBean</name>
<value>1800</value>
</variable>
</variable-definition>
<module-override>
<module-name>soa-infra-wls.ear</module-name>
<module-type>ear</module-type>
<module-descriptor external="false">
<root-element>weblogic-application</root-element>
<uri>META-INF/weblogic-application.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>application</root-element>
<uri>META-INF/application.xml</uri>
</module-descriptor>
<module-descriptor external="true">
<root-element>wldf-resource</root-element>
<uri>META-INF/weblogic-diagnostics.xml</uri>
</module-descriptor>
<module-descriptor external="true">
<root-element>persistence-configuration</root-element>
<uri>bpel_persistence_weblogic.jar!/META-INF/persistence-configuration.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>persistence</root-element>
<uri>bpel_persistence_weblogic.jar!/META-INF/persistence.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>ejb_ob_engine_wls.jar</module-name>
<module-type>ejb</module-type>
<module-descriptor external="false">
<root-element>weblogic-ejb-jar</root-element>
<uri>META-INF/weblogic-ejb-jar.xml</uri>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELEngineBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELEngineBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELDeliveryBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELDeliveryBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELActivityManagerBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELActivityManagerBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELServerManagerBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELServerManagerBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELProcessManagerBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELProcessManagerBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELInstanceManagerBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELInstanceManagerBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELFinderBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELFinderBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELDispatcherBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELDispatcherBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
<variable-assignment>
<name>TransactionDescriptor_transTimeoutSeconds_BPELSensorValuesBean</name>
<xpath>/weblogic-ejb-jar/weblogic-enterprise-bean/[ejb-name="BPELSensorValuesBean"]/transaction-descriptor/trans-timeout-seconds</xpath>
</variable-assignment>
</module-descriptor>
<module-descriptor external="false">
<root-element>ejb-jar</root-element>
<uri>META-INF/ejb-jar.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>fabric-ejb.jar</module-name>
<module-type>ejb</module-type>
<module-descriptor external="false">
<root-element>weblogic-ejb-jar</root-element>
<uri>META-INF/weblogic-ejb-jar.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>ejb-jar</root-element>
<uri>META-INF/ejb-jar.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>hw_services_wls_ejb.jar</module-name>
<module-type>ejb</module-type>
<module-descriptor external="false">
<root-element>weblogic-ejb-jar</root-element>
<uri>META-INF/weblogic-ejb-jar.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>ejb-jar</root-element>
<uri>META-INF/ejb-jar.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>sdpmessagingclient-ejb.jar</module-name>
<module-type>ejb</module-type>
<module-descriptor external="false">
<root-element>weblogic-ejb-jar</root-element>
<uri>META-INF/weblogic-ejb-jar.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>ejb-jar</root-element>
<uri>META-INF/ejb-jar.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>b2b_engine_wls.jar</module-name>
<module-type>ejb</module-type>
<module-descriptor external="false">
<root-element>weblogic-ejb-jar</root-element>
<uri>META-INF/weblogic-ejb-jar.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>ejb-jar</root-element>
<uri>META-INF/ejb-jar.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>fabric.war</module-name>
<module-type>war</module-type>
<module-descriptor external="false">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>hw_services.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170529</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>taskservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170530</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>taskmetadataservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170531</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>taskqueryservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170532</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>taskreportservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170533</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>IdentityService.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170533</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>usermetadataservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170528</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>runtimeconfigservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170527</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>taskevidenceservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170526</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>compositemetadataservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170525</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>b2bws.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170524</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>weblogic-webservices</root-element>
<uri>WEB-INF/weblogic-webservices.xml</uri>
</module-descriptor>
<module-descriptor external="false">
<root-element>webservices</root-element>
<uri>WEB-INF/webservices.xml</uri>
</module-descriptor>
<module-descriptor external="true">
<root-element>webservice-policy-ref</root-element>
<uri>WEB-INF/weblogic-webservices-policy.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>b2b_starter_wls.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170509</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>agmetadataservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170508</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>agqueryservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170499</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<module-override>
<module-name>agadminservice.war</module-name>
<module-type>war</module-type>
<module-descriptor external="true">
<root-element>weblogic-web-app</root-element>
<uri>WEB-INF/weblogic.xml</uri>
<hash-code>1278926170497</hash-code>
</module-descriptor>
<module-descriptor external="false">
<root-element>web-app</root-element>
<uri>WEB-INF/web.xml</uri>
</module-descriptor>
</module-override>
<config-root>/opt/weblogic/Oracle/Middleware/KIM_SOA/soa/applications/plan</config-root>
</deployment-plan>
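
With the plan saved on the server, step 6 of the alternative procedure above can also be scripted. A hedged WLST sketch, assuming the admin URL, credentials and plan path shown are placeholders for your own environment:

# wlst_redeploy_soainfra.py - run with: wlst.sh wlst_redeploy_soainfra.py
# Admin URL, credentials and the plan path are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

planPath = '/opt/weblogic/Oracle/Middleware/KIM_SOA/soa/applications/plan/Plan.xml'

# Redeploy soa-infra to its current targets, picking up the new EJB time-outs from the plan.
redeploy('soa-infra', planPath)

disconnect()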

Set the following time-outs on the EJBs:

BPEL/EJB property       Timeout
BPELActivityManagerBean 1800
BPELDeliveryBean 1800
BPELDispatcherBean 1800
BPELEngineBean 1800
BPELFinderBean 1800
BPELInstanceManagerBean 1800
BPELProcessManagerBean 1800
BPELSensorValuesBean 1800
BPELServerManagerBean 1800

Verify that the global transaction time-out (JTA) on all the managed servers is larger than the time-out of the EJBs. Set the global JTA transaction time-out to 3600.
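
Setting the global JTA time-out can also be done with WLST. A minimal sketch, with the admin URL and credentials as placeholders:

# wlst_jta_timeout.py - run with: wlst.sh wlst_jta_timeout.py
# Admin URL and credentials are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
domainName = cmo.getName()     # cmo is the DomainMBean at the root of the edit tree
cd('/JTA/' + domainName)       # the JTA configuration lives under /JTA/<domain_name>
set('TimeoutSeconds', 3600)    # keep this larger than the EJB time-outs above
save()
activate()
disconnect()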

Database 
To tune the database, make sure that you follow the requirements of SOA Suite 11g and that the database allows enough processes and sessions. The database must be tuned so that it has enough memory for caching; the PGA and SGA memory should be adequate.

Make sure you have the purging scripts in place, and verify that you have added indexes to the SOA dehydration tables.


Quick Ref: Linux Mint 11 #1

Vattekkat Babu - Sat, 2011-08-20 23:50

To force a filesystem check on root filesystem on next reboot, do:

sudo touch /forcefsck

To restore the gnome panel to defaults:

gconftool-2 --shutdown
rm -rf ~/.gconf/apps/panel
pkill gnome-panel

If clicking on any file location (say, from the Google Chrome download list or other links) gives an error that the requested location is not a folder:

sudo apt-get remove exo-utils

OSB using BLOB data with DBAdapter results in ora-2049

Marc Kelderman - Thu, 2011-08-18 06:57
When you want to poll data in the OSB, you will use the DBAdapter. In a 4-node production cluster we saw some strange errors periodically. These errors came up in the log file of any of the application servers:

Caused By: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/EmailService/BusinessService/WijzigEmailStatus/wijzigStatus [ wijzigStatus_ptt::wijzigStatus(InputParameters,OutputParameters) ] - WSIF JCA Execute of operation 'wijzigStatus' failed due to: Stored procedure invocation error.

Error while trying to prepare and execute the "schema/package/procedure" API.
An error occurred while preparing and executing the "schema/package/procedure" API.
Cause: java.sql.SQLSyntaxErrorException: ORA-02049: timeout: distributed transaction waiting for lock
ORA-06512: at "schema/package", line 113
ORA-06512: at line 1

; nested exception is:
        BINDING.JCA-11811
Stored procedure invocation error.
Error while trying to prepare and execute the "schema/package/procedure" API.
An error occurred while preparing and executing the "schema/package/procedure" API. 
Cause: java.sql.SQLSyntaxErrorException: ORA-02049: timeout: distributed transaction waiting for lock
ORA-06512: at "schema/package", line 113
ORA-06512: at line 1 

The DBAdapter was polling data from the database; this was a TopLink definition based on four tables. The proxy service was started and, in the request pipeline, the message was routed to the business service, while in the response pipeline one of these four tables was updated.

The issue is not related to high traffic on the system. It is all about using the DBAdapter/TopLink poller mechanism. The adapter is polling data and one of the columns is a BLOB column.

If the data in this column is large, the DbAdapter will take some time to pass this information to the service. While this set of records is locked, other services want to update the data, which results in the Oracle error.

To resolve this issue, large (blob) data should be processed as a stream in OSB. This can easily be done by enabling the 'Content Streaming' option for this (polling) proxy service.



Remarks from development on this article:
The exception in question is just a transaction timeout. That can be avoided by
  • shrinking MaxTransactionSize on inbound so that the transaction started by DbAdapter polling is shorter
  • shrinking MaxRaiseSize on inbound so that any transactions started by OSB will be shorter
  • increasing transaction timeout
If the problem is that inbound polling is locking a row for a long time that another process is trying to lock, that could be because inbound has configured distributed polling, where the records are locked on select. The default behaviour is to only lock records on update, which should be independent of the time it takes to process the BLOB.

My remarks:
Shrinking the MaxTransactionSize to 1 could solve the issue, but if you need to handle ten thousand or more messages with large binary data, the whole polling process will be slow. I am using a MaxTransactionSize of 10.




Best Practice For Oracle Virtual Directory (OVD) Backup and Disaster Recovery.

Mark Wilcox - Thu, 2011-08-18 05:03
I'm writing this in response to a question on one of our mailing lists because of the current nature of the Oracle docset (something the doc team is working on) - it's kind of hard to figure out in a concise form. Here are the things to do:
  • Make sure to have 2 or more OVD instances deployed in production. OVD provides tools to keep the configurations in sync between systems.
  • If you have an external DR site, then synchronize the OVD configuration to this external site. Note this assumes that hostnames will be the same in the DR site as in the primary; if not, manual tweaking of the names will be required.
  • OVD keeps all of its configuration in files in the $ORACLE_INSTANCE directory. Back this directory up (a small backup sketch follows this list). If you need to recover, this directory can be restored. Most likely you would need to re-register the instance with OPMN and EM, which is covered in the OVD documentation.
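
The backup itself can be as simple as archiving the instance directory on a schedule. Below is a minimal Python sketch; the instance and backup paths are placeholder assumptions for your own environment:

# backup_ovd_instance.py - archive the OVD $ORACLE_INSTANCE directory
# The instance and backup paths below are placeholder assumptions.
import os
import tarfile
from datetime import datetime

instance_dir = os.environ.get("ORACLE_INSTANCE", "/opt/oracle/middleware/asinst_1")
backup_dir = "/backup/ovd"

os.makedirs(backup_dir, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
archive = os.path.join(backup_dir, "ovd_instance_%s.tar.gz" % stamp)

# Archive the whole instance directory, preserving its top-level folder name.
with tarfile.open(archive, "w:gz") as tar:
    tar.add(instance_dir, arcname=os.path.basename(instance_dir))

print("OVD configuration backed up to %s" % archive)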

Quick Ref: SVN command line

Vattekkat Babu - Tue, 2011-08-16 20:57
#make a working dir
mkdir sandbox
cd sandbox
#checkout
svn co svn://svn-repo-machine/repo/path myproject --username myusernameatsvnserver
#go to work on items
cd myproject
#pull latest from server
svn up 
#queue a file for addition
svn add newfile.c
#queue for deletion
svn delete anoldfilewedontneed.c 
#get the info on what is different in your local vs server
svn status 
#list commit comments on a file
svn log AFILESOMEONECHANGED.c 
#commit one file. * will commit all that changed.
svn ci -m "Edited this file for ..." filethatgotchanged.c 

Now, configure vim as the svn diff tool.

Git with Dropbox

Vattekkat Babu - Sun, 2011-08-14 13:38

First, check out Git for personal use. Extending those principles to Dropbox is fairly easy.

 
cd ~/Dropbox
mkdir -p gitroot/research.git && cd !$
git --bare init
 
cd ~/Projects
git init
git remote add dropbox file://$HOME/Dropbox/gitroot/research.git
git add testfile.txt
git commit -m "initial"
git push dropbox master
 
ssh anothercomputer
cd ~/Projects
git clone -o dropbox file://$HOME/Dropbox/gitroot/research.git
cd research
ls >> testfile.txt
git commit -am "just testing"
git push dropbox master
git pull dropbox master

For the first computer, once you fill the bare repo with some stuff, you can delete the folder and do a clone like how you did with the second computer.

Book Review: Oracle ADF Enterprise Application Development Made Simple

Chris Muir - Sun, 2011-08-14 01:15
There are very few pieces of software where a casual approach can be taken to the process of software development. Software development is intrinsically a difficult process: gathering requirements, design, development and testing all take a large effort. In reaction, enterprises have set up and adopted strict development processes to build software (or at least the successful enterprises have ;-).

ADF and JDeveloper are sold as productivity boosters for developers. Yet in the end they are just a technology platform. By themselves they don't solve the complexities of software development process as a whole (though they do make parts of it easier).

This leaves IT departments struggling: they might like the potential of ADF as a platform, but they intrinsically know there's more to software development; there's the process of development to consider. How will the tasks of requirement gathering, design, development & testing be applied to ADF? And more importantly, how can the process of establishing these for ADF be shortcut within the enterprise? There's a need to bring the two concepts together, the technology & the process, to help the adoption of ADF in the enterprise.

As ADF matures we're slowly seeing more books on Oracle's strategic JSF based web framework. Up to now the books have focused on the holistic understanding of the technical framework, the tricky & expert level implementation, or more recently the speed of development. Sten Vesterli's Oracle ADF Enterprise Application Development - Made Simple is the first to apply the processes and methodologies of design and development to ADF.

Of particular delight, Sten's book starts out by suggesting building a proof of concept to skill up teams in ADF, learn the complexities, & gather empirical evidence of the effort required. From here a focus on estimation and considerations of team structure is raised, all valuable stuff for the enterprise trying to control the tricky process of software development.

In order to make use of reuse, the next few chapters focus on the infrastructure and ADF constructs that need to be setup before a real project commences, including page templates, ADF BC framework classes and more. This is the stuff you wish you'd been told to create first if you forged ahead in building with ADF without any guidance.

Then chapter by chapter the tasks of building, adding security, internationalization and other development efforts are covered. There's even a chapter on using JMeter to stress test your ADF app (with a link back to my blog! Thanks Sten).

As can be seen from the topics, there's relatively little consideration of actually implementing ADF Business Components or ADF Faces RC. As such it must be said reading this book won't make you an expert in the technical side of ADF, but rather will address how you can take a sensible approach to the overall development process.

All in all Sten Vesterli's Oracle ADF Enterprise Application Development - Made Simple is another valuable book in my ADF bookshelf and a recommended read for enterprise specialists looking at adopting ADF into their organisation.

Disclaimer: I know Sten personally and in addition Packt Publishing has provided me Sten's book for free to review.

RAC investigations part I

Freek D’Hooge - Wed, 2011-08-10 17:06

Environment description

2 node rac with Oracle 11.2.0.2.2
Oracle Linux 5.6 with the Unbreakable Enterprise Kernel (2.6.32-100.36.1.el5uek)

Conducted tests

test_srv is a service which has both the instances running on node1 and node2 as preferred instances.
On node1 the service was manually stopped.

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
 NAME=ora.mydb.test_srv.svc
 TYPE=ora.service.type
 CARDINALITY_ID=1
 DEGREE_ID=1
 TARGET=OFFLINE
 STATE=OFFLINE
 CARDINALITY_ID=2
 DEGREE_ID=1
 TARGET=ONLINE
 STATE=ONLINE on node2

Issue a “shutdown abort” on the instance running on node2

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

start the instance again

[grid@node1 ~]$ srvctl start instance -d mydb -i mydb2

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

The service is now running on both instances, although before the crash the service was set offline on node1.

Same test, but this time the service is stopped on all instances

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

[grid@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

This time both services stay offline.
But what happens if we start the instance again:

[grid@node1 ~]$ srvctl start instance -d mydb -i mydb2

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

Now the service has started again on the restarted instance.
The explanation for this is that the service was configured to come up automatically with the instance, which is why the service is started on the restarted node.
As for the failover, this seems to me to be expected behaviour, as it is the same as what would happen with a preferred / available configuration.

For the third test, we will reconfigure the service to have a preferred and an available node

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv
[grid@node1 ~]$ srvctl modify service -d mydb -s test_srv -n -i mydb2 -a mydb1

[grid@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: mydb2
Available instances: mydb1

[grid@node1 ~]$ srvctl start service -d mydb -s test_srv -i mydb2
[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

The service is running on its preferred instance, which we will now crash

[grid@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=OFFLINE

eumm, I actually expected a relocation here…
As I have other services which have a preferred / available configuration, I know this service should failover.

[grid@node1 ~]$ srvctl status service -d mydb -s test_srv
Service test_srv is not running.

[grid@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: mydb2
Available instances: mydb1

[grid@node1 ~]$ srvctl status database -d mydb
Instance mydb1 is running on node node1
Instance mydb2 is not running on node node2

I could find no clues in the different cluster log files as to why the relocation did not occur.
More testing will be necessary.
Also note that the output of the crsctl status resource does not contain information about on which node or instance the service is expected to be online.
But by using the -v flag we can see the last_server attribute:

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -v
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
LAST_SERVER=node2
STATE=OFFLINE
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=137
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.mydb.test_srv.svc 1 1
INCARNATION=5
LAST_RESTART=08/10/2011 16:32:53
LAST_STATE_CHANGE=08/10/2011 16:34:03
STATE_DETAILS=
INTERNAL_STATE=STABLE

After starting the instance again, the service was back available

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

A second run of this test gave the same result.
Manually relocating the service did work though:

[grid@node1 ~]$ srvctl relocate service -d mydb -s test_srv -i mydb1 -t mydb2
[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

What if I removed the service and recreated it directly as preferred / available:

[grid@node1 ~]$ srvctl stop service -d mydb -s test_srv

[grid@node1 ~]$ srvctl remove service -d mydb -s test_srv

[grid@node1 ~]$ srvctl add service -d mydb -s test_srv -r mydb2 -a mydb1 -y AUTOMATIC -P BASIC -e SELECT
PRCD-1026 : Failed to create service test_srv for database mydb
PRKH-1014 : Current user grid is not the same as oracle owner orauser of oracle home /opt/oracle/orauser/product/11.2.0.2/dbhome_1.

Would creating the service as the oracle software owner instead of the grid user make a difference?
Let us test it:

[grid@node1 ~]$ su - orauser
Password:

[orauser@node1 ~]$ srvctl add service -d mydb -s test_srv -r mydb1,mydb2 -y AUTOMATIC -P BASIC -e SELECT

[orauser@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: mydb1,mydb2
Available instances:

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

CARDINALITY_ID=2
DEGREE_ID=1
TARGET=OFFLINE
STATE=OFFLINE

now modify it:

[orauser@node1 ~]$ srvctl modify service -d mydb -s test_srv -n -i mydb2 -a mydb1

[orauser@node1 ~]$ srvctl config service -d mydb -s test_srv
Service name: test_srv
Service is enabled
Server pool: mydb_test_srv
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: mydb2
Available instances: mydb1

[orauser@node1 ~]$ srvctl start service -d mydb -s test_srv -i mydb2

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

[orauser@node1 ~]$ srvctl stop instance -d mydb -i mydb2 -o abort

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=OFFLINE

Nope, the user modifying the service has nothing to do with it.
I also tested the scenario where I directly created a preferred / available service, but in this case the failover also did not work.
But after some more testing I found the reason.
During the first test I had shut down the instance via sqlplus, not via srvctl. And the other services I talked about had failed over during this test (I never did a failback).
After doing the shutdown abort again via sqlplus, the failover worked again.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node2

[orauser@node2 ~]$ export ORACLE_SID=mydb2
[orauser@node2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Aug 10 18:28:29 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> shutdown abort
ORACLE instance shut down.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

SQL> startup
ORACLE instance started.

Total System Global Area 3140026368 bytes
Fixed Size                  2230600 bytes
Variable Size            1526728376 bytes
Database Buffers         1593835520 bytes
Redo Buffers               17231872 bytes
Database mounted.
Database opened.

[orauser@node1 ~]$ /opt/grid/11.2.0.2/bin/crsctl status resource ora.mydb.test_srv.svc -l
NAME=ora.mydb.test_srv.svc
TYPE=ora.service.type
CARDINALITY_ID=1
DEGREE_ID=1
TARGET=ONLINE
STATE=ONLINE on node1

As expected, starting the instance again did not trigger a failback of the service.

The question now is whether the failover not happening when issuing the shutdown via srvctl is expected behaviour or not.
For this, one probably would have to open a service case, answer a couple of questions not important for this issue, escalate and still have to wait for several months.
Do I sound bitter now?

Conclusion:

  • When restarting an instance, an offline service that has this instance listed as a preferred node will be started (management policy = automatic).
  • When an instance on which a service was running fails, the service is started on at least one other preferred instance.
  • The service will remain running on this instance, even when the original instance is started again (in which case the service will run on both instances).
  • When a service has a preferred / available configuration, the service will failover to the available instance, but not failback afterwards.
  • Failover in a preferred / available configuration does not happen when the instance was stopped via “srvctl stop instance -d <db_unique_name> -i <instance_name> -o abort”

Questions remaining:

  • What if there were more than 2 nodes, with a service that has all three or more nodes listed as preferred, but is currently only running on one node?
    If the instance on which that service is running fails, would the service then be started on all preferred nodes or on only 1 of them?
  • What if, in the above case, the service was running on 2 nodes?
    Would it still be started on the other nodes?
  • And what if one of the nodes was configured as available and not as preferred? Would the service on the preferred node still be started or the one on the available instance or both?
  • And last but not least, is the srcvtl shutdown behaviour a bug or not?

It would be neat if someone has access to a 3 or more node rac on which they can run the above tests and send me the results  :-)

Update 13/08/2011:
Amar Lettat, one of my colleagues at Uptime, has pointed me to MOS note 1324574.1 – “11gR2 RAC Service Not Failing Over To Other Node When Instance Is Shut Down”.
This note clearly points out that the service not failing over when shutting down with srvctl is expected behaviour in 11.2.
It also points to the Oracle documentation, where this behaviour is also documented.
So not a bug, only a well documented change in behaviour.


Categories: DBA Blogs

Oracle enhanced adapter 1.4.0 and Readme Driven Development

Raimonds Simanovskis - Mon, 2011-08-08 16:00

I just released Oracle enhanced adapter version 1.4.0 and here is the summary of main changes.

Rails 3.1 support

The Oracle enhanced adapter GitHub version was already working with Rails 3.1 betas and release candidate versions, but it was not explicitly stated anywhere that you should use the git version with Rails 3.1. Therefore I am releasing new version 1.4.0, which passes all tests with the latest Rails 3.1 release candidate. As I wrote before, the main changes in ActiveRecord 3.1 are that it uses a prepared statement cache and bind variables in many statements (currently in select by primary key, insert and delete statements), which results in better performance and better database resource usage.

To follow up how Oracle enhanced adapter is working with different Rails versions and different Ruby implementations I have set up Continuous Integration server to run tests on different version combinations. At the moment of writing everything was green :)

Other bug fixes and improvements

Main fixes and improvements in this version are the following:

  • On JRuby I switched from using the old ojdbc14.jar JDBC driver to the latest ojdbc6.jar (on Java 6) or ojdbc5.jar (on Java 5). The JDBC driver can also be located in the Rails application ./lib directory.

  • RAW data type is now supported (which is quite often used in legacy databases instead of nowadays recommended CLOB and BLOB types).

  • rake db:create and rake db:drop can be used to create development or test database schemas.

  • Support for virtual columns is improved (only working on an Oracle 11g database).

  • Default table, index, CLOB and BLOB tablespaces can be specified (if your DBA is insisting on putting everything in separate tablespaces :)).

  • Several improvements for context index additional options and definition dump.

See list of all enhancements and bug fixes

If you want to have a new feature in Oracle enhanced adapter then the best way is to implement it by yourself and write some tests for that feature and send me pull request. In this release I have included commits from five new contributors and two existing contributors - so it is not so hard to start contributing to open source!

Readme Driven Development

One of the worst parts of Oracle enhanced adapter so far was that for new users it was quite hard to understand how to start to use it and what are all additional features that Oracle enhanced adapter provides. There were many blog posts in this blog, there were wiki pages, there were posts in discussion forums. But all this information was in different places and some posts were already outdated and therefore for new users it was hard to understand how to start.

After reading about Readme Driven Development and watching presentation about Readme Driven Development I knew that README of Oracle enhanced adapter was quite bad and should be improved (in my other projects I am already trying to be better but this was my first one :)).

Therefore I have written new README of Oracle enhanced adapter which includes main installation, configuration, usage and troubleshooting tasks which previously was scattered across different other posts. If you find that some important information is missing or outdated then please submit patches to README as well so that it stays up to date and with relevant information.

If you have any questions please use discussion group or report issues at GitHub or post comments here.

Categories: Development
