Feed aggregator

RAC Attack!

Charles Schultz - Wed, 2009-09-09 14:01
Jeremy Schneider graced us with RAC Attack last week - it was quite awesome! Jeremy brings such a wealth of knowledge and passion for the technology that oftentimes I found myself hard-pressed to keep the workshop moving along. As the "organizer," I felt some responsibility in that direction.

It also opened my eyes on several fronts. This was the first time I had helped to facilitate such a workshop, and there were a number of interesting obstacles, both logistical and technological. Jeremy handled it all with his usual easy manner and we got everything worked out quite well. For instance, the hard drives of the individual computers were just a tad too small to accommodate all the jumpstart VM images that Jeremy likes to deploy; as a result, we ended up hosting files on various computers and mapping network drives. Not the quickest thing in the world, but hey, it worked. Also, again from the perspective of a facilitator, I found it challenging to address the numerous questions that folks had from time to time, which gave me a greater respect for those who do this kind of thing on a regular basis. Not only did Jeremy answer questions, but he also took advantage of several opportunities to delve into the deeper details of "how things work".

In retrospect, we are faced with the ubiquitous puzzle of how to address different styles of learning. For those, like me, who crave the hands-on aspect, this workshop is excellent! For those who need more lecture, this lab was a bit of a wake-up call. *grin* Actually, if only we had more time, we could certainly have entertained more dialogue; RAC is rich with controversy. =)

Jeremy was also able to spill the beans a little on Oracle 11gR2, since someone decided to release the Linux version the Tuesday before the workshop began. So we were treated to a few sneak peeks and tidbits. Good stuff.

Personally, I was challenged to discover new ways to do these kinds of labs/workshops. I heard a lot of positive feedback about the wide variety of skill sets and job roles in the class, but as a result, the various backgrounds required different levels of "background information". Going forward, I would try to break the labs into more modular components (as opposed to a totally open lab time) and precede each lab with some solid instruction. What Jeremy did was great for DBAs, but we had some folks who needed a bit more hand-holding. That is just the nature of the beast. The good news is that Jeremy equipped us to do exactly that - we can now hold our own lab and choose any pace we want. I am hoping to pursue this a little and get others involved, especially in terms of discussing how we as an organization want to tackle overlapping job roles in regards to cluster and storage management.

The virtualization aspect was also very nice. I think it gave us a glimpse into what we can do with virtualized resources, something we can definitely utilize more fully for future labs and group sessions.

Thanks, Jeremy!

Oracle Enterprise Linux 5 Update 4 available on ULN

Sergio's Blog - Wed, 2009-09-09 07:33

Oracle Enterprise Linux 5 Update 4 has been added to Unbreakable Linux Network (ULN). Customers with Linux support from Oracle may download and install OEL 5.4 packages for the i386 and x86_64 architectures. Itanium packages are coming soon. Also coming soon: OEL 5.4 on public-yum.oracle.com and installation media on edelivery.oracle.com/linux.
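
For machines already registered with ULN, picking up the Update 4 packages is typically a one-liner; this is a sketch assuming the up2date client that ULN used on OEL 5 at the time (channel subscriptions vary per system):

# apply all available updates from the subscribed ULN channels
sudo up2date -u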

As for the rumors that OEL is based on CentOS? Only in a universe where time flows backward. Hint: CentOS 5.3 was announced on April 3rd, 2009, and OEL 5.3 was announced on January 28th, 2009, more than two months earlier. We have no relationship with CentOS and do not rely on them.

Update: Installation media now available via edelivery.oracle.com/linux

Categories: DBA Blogs

Summer R & R

Mary Ann Davidson - Tue, 2009-09-08 07:35


Many of us take summer vacations to indulge in some R&R. Usually, we mean "rest and relaxation" by the abbreviation. R&R can also mean "reading and reruns" for those of us of the couch potato persuasion. I've done a lot of reading this summer (more on that below) and on those evenings when I can't concentrate on a demanding book, I sack out in front of the couch and watch reruns (e.g., NCIS and Law and Order. I find I am much better at figuring out whodunnit if I already know who did it. Less mental effort, too.).

There are other summer reruns materializing in Washington, in particular a revamped version of S. 773, the Cybersecurity Act of 2009 (aka the Snowe-Rockefeller Bill, after Senators Olympia Snowe (R-Maine) and Jay Rockefeller (D-WV)). First, the disclaimers: I've written a column for Oracle Magazine on this topic so I am stealing material from myself (otherwise known as "repurposing content"). Second, I always assume that members of Congress and their staff have the best of intentions when they draft a piece of legislation. So, no evil motives are assigned to them by me nor should be imputed. This disclaimer will be especially important when I explain why the Snowe-Rockefeller rerun is, despite good intentions, not an improvement from its original version.

I've reviewed a number of bills in my years working in cybersecurity and I have seen plenty that have become laws that best fit into the "what were they thinking?" category. I therefore offer a modest proposal: members of Congress should observe just four ironclad rules when drafting cybersecurity legislation, rules that would result in better, clearer and less ambiguous legislation, which is less subject to random interpretation and/or legal challenges (e.g., on Constitutional grounds). Here they are:

1) Set limits; don't overreach. Before writing a law, determine the problem(s) the bill is trying to solve, whether legislation will actually solve the problem(s), at what cost and with what "unintended consequences." Also, determine whether there is another remedy equally or more effective at less cost and/or reach.
2) Do no harm. The legislative remedy shouldn't kill the problem by maiming the patient.
3) Use precise language. Vague language will be misinterpreted or - worse - lead to people spending a lot of money without knowing if they are "there." In the case of cybersecurity, vague language means lawyers are more likely to be making the security decisions for companies. Worst of all are the "no auditor left behind" security bills for the amount of work they create and expenditure they require without materially improving security.
4) Uphold our current laws and values (e.g., the Constitution).

With that in mind, here are my thoughts on the Snowe-Rockefeller rerun.

First, the draft bill calls for certification of cybersecurity professionals; however, the term "cybersecurity professionals" is not defined. What, precisely does that term cover?

Someone who is a CISO? A CSO?
Someone who is a security architect?
Someone who applies patches, some of which are security patches?
Someone who configures any product (after all, some settings are security settings)?
Someone who installs AV software on mom and pop's home computer (gee, that could include their 9-year-old son Chad, the computer whiz)?
Someone who administers firewalls?
Someone who does forensic analysis?
What about software developers - after all, if their code is flawed, it may lead to security vulnerabilities that bypass security settings?
Does it mean security researchers? What about actual hackers? (It would be an interesting consequence of this bill if, in the future, someone isn't convicted for hacking (computer trespass) but is fined because (s)he does not have a CISHP (Certified Information Security Hacking Professional) certification.)

If you cannot tell based on the information in a bill to whom it applies and what "compliance" means, the likely beneficiaries are auditors, who were already given an industry boost courtesy of the Sarbanes-Oxley Act, the gold standard of the "No Auditor Left Behind" bills I mentioned and the slayer of the US IPO market. More to the point, for all the money organizations could spend getting cybersecurity professional certifications for the people who don't do anything more in security than send out the "don't forget to change your password!" notices every 90 days, they could do more that actually improves security with the same funds. Getting certifications for people who don't need them crowds out more useful activity and thus could do actual harm. The lack of a clear definition in the draft bill alone runs afoul of my ironclad rules 1, 2 and 3 (and 4, as I will show later).

There is another problem with this provision: the potential for windfall profits by some (on top of not necessarily making the problem space better and possibly making it worse). Aside from product certifications (e.g., "so-and-so is a certified professional in administering product FOO"), which vendors administer, I believe that many "cyber-certification" bodies that exist now are for profit (meaning, such a bill is a mandate to spend money). The problem is made worse if the entities are effectively granted monopoly power over certifications.

To wit, a small aside here to bash ISC(2), or more correctly, a single individual within ISC(2). I and most of my team have received the new Certified Secure Software Lifecycle Professional (CSSLP) certification. I have to say, I didn't think it was that hard to get nor do you really have to demonstrate much actual expertise in development practice. The hard part of "secure software lifecycle" is doing it, not writing about it, taking exams about it, or the like. The next thing I know, I am getting a cold call from someone who I can only construe to be a sales rep for ISC(2) telling me why everybody in Oracle should take their CSSLP training classes and get the certification.

My response was what I outlined above: I did not see the value for the money. The hard part is doing secure development, not getting a CSSLP certification and anyway, for the amount of money we'd spend to do massive CSSLP training (and by the way, we actually do secure development so I don't see the need for ISC(2) training on top of what we already do in practice or the training we provide to developers), we could do more valuable things towards, oh, actually improving Oracle product security. I'd rather improve product security than line ISC(2)'s pockets. Customers would prefer I do that, too.

In response, I received what I can only construe as a "policy threat," which was Slimy Sales Guy saying that the Defense Department was going to start requiring CSSLPs as a condition of procurement so I needed to talk to him. (Gee, I bet ISC(2)'s lobbyists were busy.) My response was "hey, good to know, because that sounds like you've been handed a monopoly by DoD, which is inherently anticompetitive - who in the IT industry made you the arbiters of what constitutes 'secure development skill?'" I also said that I would work to oppose that provision - if it exists - on public policy grounds. ISC(2)'s certification wasn't broadly enough arrived at (full disclosure: I was asked about the utility of such a certification before ISC(2) developed it and I said I did not see the need for it). More to the point, you could get a CSSLP and still work for an organization that does not (technical, secure development terminology follows) give a rat's behind about actually building secure software so who the bleep cares?

I shouldn't single ISC(2) out in the sense that a lot of entities want to get legislation passed that allows them to get government-mandated money by, say, requiring someone to get their certification, or buy their product, or use their services.* If Slimy Sales Guy does not speak for ISC(2), my apologies to them, but I did not appreciate Oracle being "shaken down" as thanks for my team being an early adopter of CSSLP.

Back to the Snowe-Rockefeller rerun: it's bad enough that one out of every five people in the US has a licensing or certification requirement for his job** but if we are going to add one more requirement and license cybersecurity professionals, then at least figure out who "cybersecurity professionals" are, why we need to do that, how we will do it and constrain the problem.

The bill compounds the vague definition of "cybersecurity professional" by requiring that "3 years after the date of enactment of this Act, it shall be unlawful for an individual who is not certified under the program to represent himself or herself as a cybersecurity professional." Why does the federal government want to directly regulate cybersecurity professionals to a degree that arguably exceeds medical licensing, professional engineers' licensing, architects' licensing and so forth? Even in professions that have licensing requirements, there are state-by-state requirements that differ (e.g., California has more stringent licensing for structural engineers because there is a requirement for seismic design in CA that other, less earthquake-prone states do not have). Also, such a hands-on role for the federal government raises real constitutional concerns. Where in the Constitution is the Federal government authority as the licensing and regulatory body for all cybersecurity? (See ironclad rule number 4.)

The draft bill also would allow the president to exert control over "critical infrastructure information systems and networks" in the event of a "national emergency" - including private networks - without defining what either of those things are, which would leave the discretion to the executive branch. I read this to mean the President would be able (in an "emergency") to exert authority over private networks based on whatever criteria he/she wants to use to declare them "critical." *** If "critical infrastructure information systems and networks" are so critical, why can't we define what they are before legislating them? Are those networks pertaining to:

Utilities?
Financial services?
Manufacturing? (What kind of manufacturing - someone's toy making control systems or are we talking about heavy industry?)
Health care?
Agriculture?
Other?

I have concerns - because I am a student of history - about giving anyone too much power in what we think is a good cause and watching that power turned against us. Vague terms combined with explicit presidential authority over these ill-defined terms can be a dangerous legislative formula.

There is also a provision that requires "...real time cybersecurity status and vulnerability information of all Federal Government information systems and networks managed by the Department of Commerce, including an inventory of such, vulnerabilities of such systems and networks, and corrective action plans for those vulnerabilities..." Of course, it makes sense for any owner of a network to know what's on their network and its state of "mission readiness," which in this context could include the state of its security configuration and whether security patches have been applied. However - and I made the same comment on the first draft bill - "vulnerabilities" is not defined and there is almost no such thing as "real time vulnerability information" if "vulnerability" includes defects in software that are not publicly known and for which no workaround or patch exists. Most vendors do not provide real time vulnerability information because there is nothing that increases the risk to customers like telling them of a vulnerability with no fix (or other threat mitigation) available.

"Everybody knows what we mean" is not good enough if cybersecurity is truly a national security problem, which it clearly is. At a minimum, for purposes of this bill, "vulnerability" should be explicitly defined as either a configuration weakness or a defect in software that has been publicly disclosed and for which a patch or other remediation exists. Otherwise, someone will construe this draft bill to require vendors to notify customers about security problems with no solutions as soon as they find the problems - real time, no less. Uh, no, not going to happen.

We do not need legislation or regulation for the sake of regulation, especially when it is not clear what and who is being "regulated," what "compliance" means, and at what cost. And I need to be convinced that the cost of regulation - the all-in cost - is worth a clear benefit, and that the benefit could not be derived in a better, more economical or less draconian way. Most importantly, I want this bill - or any bill - to uphold our values, specifically the values enumerated in the Constitution. Good motives are not enough to create good public policy. I truly hope the next remake of Snowe-Rockefeller is worthy of its intentions, and advances our nation's cybersecurity posture.

* Here's mine: I would like a bill passed called the Hawaiian Language Preservation Act. As part of that act, I'd like to require musicians to (in addition to paying authors of works their royalties if the work is performed in public) obtain a certification that they pronounce the lyrics of the song correctly. You won't be able to perform in public (or at least, sing Hawaiian music) unless you have a Correct Hawaiian Lyrics Pronunciation (CHLP) certification. This is a bigger problem than you would think, according to my 'ukulele teacher, Saichi (who insists we pronounce the language correctly as we sing and "good on him"). Because I am a straight up gal, I won't even be greedy - I'll just require CHLP certification for anyone publicly performing any of the Rev. Dennis Kamakahi's songs (he's written about 400 or so songs, as far as I can tell he has never written a bad song, they are very popular and often played). Now, everybody will have to come to me to get a piece of paper that asserts they can pronounce "hāwanawana" correctly (it shows up in the second verse of Koke'e). See how easy that was? I figure I can use the proceeds of my CHLP certification program to buy a house in Honolulu (and improve everyone's Hawaiian pronunciation, too).

** Source: The Dirty Dozen, more about which below.

*** A colleague who reviewed this blog entry for me raised some even scarier concerns I thought were spot-on. Consider that some elements of our country have been at "heightened alert status" since 9/11/01 (e.g., air transportation). Some networks (e.g., DoD) are being probed daily so it's conceivable that a similar "heightened alert status" for cyber could be put in place in some sectors and left "on." Would the government be able to search any records, at any time, in a sector once a (semi-permanent) cyberalert exists? It's sometimes happened that a company that works with a law enforcement entity after a cyberincident is asked for "everything": logs, machines, access to people. Perhaps an experienced person knows how to ask for the minimum information needed to investigate an incident, but the law can't require that an "experienced, reasonable person with judgment" would be the enforcement mechanism. No company wants to face having to hand over all their data, their servers and their people because of an "alert." What would the government really accomplish if every company in that sector flooded them with records? Also, would companies receive some immunity or could data obtained under an "alert" be used for another purpose by the government?

Books of the Month

I have not blogged in awhile so I am overloading the following section. I have been doing a lot of summer reading and it is hard to recommend just one book:

Huckleberry Finn
by Mark Twain

Ernest Hemingway declared that "All modern American literature comes from one book by Mark Twain called Huckleberry Finn." It is a classic, and that is all the more reason to read it if you haven't already and reread it if you haven't read it in awhile. It's ineffably sad and short-sighted that a lot of schools either don't have a copy or don't teach this book anymore due to the prevalence of the "n word" in the text. That is political correctness run amok, especially since Twain was an expert satirist and the most heroic character in the book is the runaway slave, Jim. If you think Twain condones slavery, you didn't read the book closely enough: no, not at all.

On Wings of Trust: The Quest of Pilot Carole Leigh
by Maynard D. Poland

http://www.amazon.com/Wings-Trust-Quest-Pilot-Carole/dp/1419637800

I am particularly partial to this book because it is about a friend of mine. No, she's more than that, she is a great friend of long standing (we were Navy buddies) and she was a pioneer - a P3 pilot in the Navy and then a commercial airline pilot. Carole is one of the highest integrity people I know and that shines throughout the book, never more so than in her dealing with scary emergencies in-flight - and in her not turning a blind eye when something Is Not Right. The highest compliment I could pay someone is that I would trust her with my life, and I would trust Carole with mine. It's a great (true) story about a great person.

A Moveable Feast: The Restored Edition by Ernest Hemingway

http://www.amazon.com/Moveable-Feast-Restored-Ernest-Hemingway/dp/1416591311/ref=sr_1_1?ie=UTF8&s=books&qid=1252113968&sr=1-1

A Moveable Feast has been in print for some time (and is one of my favorite books by Hemingway), but this is a new version: since the book was published posthumously and there was no "definitive manuscript," it is hard in some sections to know what Hemingway intended to write. The expanded version in some cases gives an entirely different flavor: Hemingway comes across as much less - literary criticism term - "snotty" towards F. Scott Fitzgerald in this version. The book gives a real flavor both of Paris and the Lost Generation's place in it in the 1920s.

Baking Cakes in Kigali by Gaile Parkin

http://www.amazon.com/dp/0385343434/?tag=googhydr-20&hvadid=4024611209&ref=pd_sl_45rnacbtln_e

People who like the gentle humor of the No. 1 Ladies' Detective Agency will like this. People in Kigali come to Angel, an expert cake baker, to order cakes and as they do, they tell their stories. The book does not spare the real challenges faced in Rwanda - the devastation wrought by AIDS, for example, and yet it's a lovely, redemptive story.

The Blue Notebook by James Levine

http://www.amazon.com/Blue-Notebook-James-Levine-M-D/dp/038552871X/ref=sr_1_1?ie=UTF8&s=books&qid=1251937532&sr=1-1

This is the story of a young Indian girl sold into child prostitution, despite which her spirit prevails. It is a disturbing and tragic book - and yet extremely moving, all the more so when you realize that the author is donating the US proceeds of the book to the Center for Missing and Exploited Children. A wonderful read.

The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom by Robert A. Levy and William Mellor


This book analyzes the twelve worst decisions by the US Supreme Court and how they have affected our freedoms. You will need Maalox or a stiff gin and tonic after reading it. The concept of limited government envisioned by our founding fathers is not what we have now, and this book explains why. The erosion of freedom and expansion of government began for the most part under Franklin Roosevelt, but some recent cases are highlighted, such as Kelo v. City of New London, which upheld government abuse of eminent domain. At the time the book went to print, DC v. Heller (an important Second Amendment case) had not been decided, but it is mentioned in the book. I finished the book four days ago and I am still aghast at what I learned.

The Art of Racing in the Rain - by Garth Stein

http://www.amazon.com/Art-Racing-Rain/dp/B0017SWPXY

I picked this up because someone recommended it to me and I was going to spend the day on planes and in the airport. After I opened it, I could not put it down, and when I finished it, I felt I had read something wondrous. The book is about the travails in a family, told from the dog's point of view. It sounds too strange to work, but it does work, and the character Enzo (the dog) is unforgettable. He puke kapu (a sacred book).

Shell Tricks

Jared Still - Sun, 2009-09-06 18:27
DBAs from time to time must write shell scripts. If your environment is strictly Windows based, this article may hold little interest for you.

Many DBAs however rely on shell scripting to manage databases. Even if you use OEM for many tasks, you likely use shell scripts to manage some aspects of DBA work.

Lately I have been writing a number of scripts to manage database statistics - gathering, deleting, and importing/exporting statistics both to and from statistics tables and exp files.

Years ago I started using the shell builtin getopts to gather arguments from the command line. A typical use might look like the following:

while getopts d:u:s:T:t:n: arg
do
   case $arg in
      d) DATABASE=$OPTARG
         echo DATABASE: $DATABASE;;
      u) USERNAME=$OPTARG
         echo USERNAME: $USERNAME;;
      s) SCHEMA=$OPTARG
         echo SCHEMA: $SCHEMA;;
      T) TYPE=$OPTARG
         echo TYPE: $TYPE;;
      t) TABLE_NAME=$OPTARG;;
         #echo TABLE_NAME: $TABLE_NAME
      n) OWNER=$OPTARG
         echo OWNER: $OWNER;;
      *) echo "invalid argument specified"; usage; exit 1;;
   esac
done


In this example, the valid arguments are -d, -u, -s, -T, -t and -n. All of these arguments require a value.

The command line arguments might look like this:
somescript.sh -d orcl -u system -s scott

If an invalid argument such as -z is passed, the script will exit with the exit code set to 1.
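
The usage function called here is not shown in the original script; a minimal sketch, with option descriptions that are my own assumptions, might look like this:

function usage {
   echo "usage: $(basename $0) -d database -u username [-T type] [-s schema] [-t table -n owner]"
}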

For the script to work correctly, some checking of the arguments passed to the script must be done.

For this script, the rules are as follows:
  • -d and -u must always be set
  • -s must be set if -T is 'SCHEMA'
  • -t and -n must both have a value or be blank
  • -s must be used with -T
In this example, values for -T other than 'SCHEMA' are not being checked.

The usual method (at least for me) to test the validity of command line arguments has always been to use the test, or [] operator with combinations of arguments.

For the command line arguments just discussed, the tests might look like the following:

[ -z "$DATABASE" -o -z "$USERNAME" ] && {
   echo Database or Username is blank
   exit 2
}

# include schema name if necessary
[ "$TYPE" == 'SCHEMA' -a -z "$SCHEMA" ] && {
   echo Please include schema name
   exit 3
}

# both owner and tablename must have a value, or both be blank
[ \( -z "$TABLE_NAME" -a -n "$OWNER" \) -o \( -n "$TABLE_NAME" -a -z "$OWNER" \) ] && {
   echo Please specify both owner and tablename
   echo or leave both blank
   exit 4
}

# if -s is set, so must -T
[ -n "$SCHEMA" -a -z "$TYPE" ] && {
   echo Please include a type with -T
   exit 5
}


As you can see, there are a fair number of tests involved to determine the validity of the command line arguments. You may have guessed why I skipped one for this demo - I just did not want to write any more tests.

Validating command line arguments really gets difficult with a larger number of possible arguments. Worse yet, any later modifications to the script that require a new command line argument become dreaded tasks that are put off as long as possible due to the complexity of testing the validity of command line arguments.

While writing a script that had 11 possible arguments, I was dreading writing the command line argument validation section, and I thought there must be a better way.

It seemed that there must be a simple method of using regular expressions to validate combinations of command line arguments. I had never seen this done, and after spending a fair bit of time googling the topic it became apparent that there was no code available for a cut-and-paste solution, so it seemed a nice opportunity to be innovative.

After experimenting a bit, I found what I think is a better way.

The method I use is to concatenate all possible command line arguments into a ':' delimited string, and then use a set of pre-prepared regexes to determine whether or not the command line arguments are valid.

One immediately obvious drawback to this method is that arguments containing the ':' character cannot be used. However the delimiting character can easily be changed if needed.

Using the same example as before, the command line arguments are all concatenated into a string and converted to upper case:

ALLARGS=":$USERNAME:$DATABASE:$OWNER:$TABLE_NAME:$SCHEMA:$TYPE:"
# upper case args
ALLARGS=$(echo $ALLARGS | tr "[a-z]" "[A-Z]")
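
For example, given a hypothetical invocation of somescript.sh -d orcl -u system -s scott -T schema, the concatenated and upper-cased string would be the following (note how the empty owner and table fields collapse into adjacent delimiters):

ALLARGS=":SYSTEM:ORCL:::SCOTT:SCHEMA:"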


Next, a series of regular expressions is created. The first two are generic, and may or may not be used as building blocks for other regular expressions. The others all correspond to a specific command line argument.


# alphanumeric only, at least 1 character
export ALNUM1="[[:alnum:]]+"
# alphanumeric only, at least 3 characters
export ALNUM3="[[:alnum:]]{3,}"
# username - alphanumeric only at least 3 characters
export USER_RE=$ALNUM3
# database - alphanumeric only at least 3 characters
export DATABASE_RE=$ALNUM3
# owner - alphanumeric and _ and $ characters
export OWNER_RE='[[:alnum:]_$]+'
# table_name - alphanumeric and _, # and $ characters
export TABLE_RE='[[:alnum:]_#$]+'
# schema - alphanumeric and _ and $ characters
export SCHEMA_RE='[[:alnum:]_$]+'


These regular expressions could use further refinement (such as requiring that the username start with an alphabetic character) but are sufficient for this demonstration.
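
Such a refinement is a one-line change; this stricter username pattern is a sketch of my own, not part of the original script:

# username - must start with a letter, followed by at least 2 more alphanumerics
export USER_RE='[[:alpha:]][[:alnum:]]{2,}'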

Next, the regular expressions are concatenated together into ':' delimited strings, with each possible command line argument represented either by its corresponding regex, or by null.

The regexes are stuffed into a bash array. For our example, it looks like this:
#   :   user        :  db           :  owner        :  table     : schema        : type
VALID_ARGS=(
":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \
":$USER_RE:$DATABASE_RE::::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \
":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE:$SCHEMA_RE:(SCHEMA):" \
":$USER_RE:$DATABASE_RE:::$SCHEMA_RE:SCHEMA:")

Notice that there are four different combinations of command line arguments represented.

In all cases the USERNAME and DATABASE are required and must correspond to the regex provided.

In the first combination of arguments, the owner and table must also be specified, and type (-T) must be either one of DICTIONARY_STATS, SYSTEM_STATS or FIXED_OBJECTS_STATS.

In the second possible combination, the only argument allowed in addition to DATABASE and USERNAME is the type (-T) argument.

The third combination requires the OWNER, TABLE_NAME and SCHEMA argument to have a valid value, and the TYPE argument must be set to SCHEMA.

The final combination of arguments requires just the SCHEMA argument and the TYPE argument must be set to SCHEMA, in addition to the USERNAME and DATABASE arguments.

By now you likely want to know just how these regular expressions are tested. The following function is used to test the command line arguments against each regular expression:
function validate_args {
typeset arglist
arglist=$1

# the first argument is the ':' delimited argument string;
# the remaining arguments are the candidate regexes
while shift
do
   # stop when the regex list is exhausted
   [ -z "$1" ] && break
   # succeed as soon as the argument string matches a valid combination
   if [ $(echo $arglist | grep -E $1 ) ]; then
      return 0
   fi
done
return 1

}


Here's how it is used in the script:

# VALID_ARGS must NOT be quoted or it will appear as a single arg in the function
validate_args $ALLARGS ${VALID_ARGS[*]}
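
The return code can then drive the script's error handling; a minimal sketch (the usage call and exit code here are assumptions):

validate_args $ALLARGS ${VALID_ARGS[*]} || {
   echo "invalid combination of command line arguments"
   usage
   exit 1
}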


While this method may appear somewhat confusing at first, it becomes less so after using it a few times. It greatly simplifies the use of many command line arguments that may appear in differing combinations.

As far as I know, this method only works properly with the bash shell. I have done testing on only two shells, bash and ksh. It does not work properly on ksh.

Here's a demonstration of the ksh problem. The following script is run from both ksh and bash:

function va {

echo ARG1: $1
}


R1="[[:alnum]]+"
R2="[[:alnum]]{3,}"

va $R1
va $R2

And here are the results:
18:9-jkstill-18 > ksh t3
ARG1: [[:alnum]]+
ARG1: [[:alnum]]3
[ /home/jkstill/bin ]

jkstill-18 > bash t3
ARG1: [[:alnum]]+
ARG1: [[:alnum]]{3,}
[ /home/jkstill/bin ]



Notice that when the script is run with ksh, the '{', '}' and ',' are removed from the regular expression. I could find no combination of quoting and escape characters that could prevent that from happening. This method of command line argument validation could be made to work using ksh if those characters are not used in the regexes. That would be rather limiting though.
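
If ksh support were required, one workaround would be to express bounded repetition without the brace characters that ksh strips; this equivalent of the {3,} quantifier is a sketch:

# at least 3 alphanumeric characters, written without { } , so ksh leaves it intact
export ALNUM3='[[:alnum:]][[:alnum:]][[:alnum:]]+'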

One other drawback you may have noticed with this method of validating command line arguments is that when an error condition is encountered, the exit code is always 1. With the [] method it was easy to exit with different codes to indicate the nature of the error. Something similar could likely be done by embedding a code into each set of regexes, but I will leave that as an exercise for the reader.

The complete prototype script, as well as a test script, can be downloaded:


The next article will include a set of functions used along with the validate_args() function to make shell scripts a bit more robust.

Categories: DBA Blogs

How to setup Ruby and Oracle Instant Client on Snow Leopard

Raimonds Simanovskis - Sat, 2009-09-05 16:00
Introduction

Mac OS X Snow Leopard is out and many Rubyists are rushing to upgrade to it. The main difference for Ruby after upgrading to Snow Leopard is that the Ruby installation has changed from a 32-bit to a 64-bit program, and the version has changed from 1.8.6 to 1.8.7. This means that all Ruby gems with C extensions must be reinstalled and recompiled against 64-bit external libraries.

After upgrading to Snow Leopard, the first thing to do is to follow the instructions on the official Ruby on Rails blog. After that, follow the instructions below.

Installing 64-bit Oracle Instant Client for Intel Mac

Download the 64-bit version of Oracle Instant Client: “Instant Client Package – Basic”, “Instant Client Package – SDK” and “Instant Client Package – SQL*Plus”.

Unzip the downloaded archives and move them to wherever you would like to keep them – I keep mine in /usr/local/oracle/instantclient_10_2 (if you have a previous 32-bit Oracle Instant Client in this directory then delete it beforehand). Then go to this directory and make symbolic links for the dynamic libraries:

sudo ln -s libclntsh.dylib.10.1 libclntsh.dylib
sudo ln -s libocci.dylib.10.1 libocci.dylib

Then I recommend creating your tnsnames.ora file, where you will keep your database connection definitions, and placing it somewhere convenient – I place this file in the directory /usr/local/oracle/network/admin.
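
A minimal entry in that file might look like the following; the host name is a placeholder for your own environment, and the ORCL alias simply matches the connection example at the end of this post:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )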

Then finally you need to set up necessary environment variables – I place the following definitions in my .bash_profile script:

export DYLD_LIBRARY_PATH="/usr/local/oracle/instantclient_10_2"
export SQLPATH="/usr/local/oracle/instantclient_10_2"
export TNS_ADMIN="/usr/local/oracle/network/admin"
export NLS_LANG="AMERICAN_AMERICA.UTF8"
export PATH=$PATH:$DYLD_LIBRARY_PATH

Use your own path to Oracle Instant Client if it differs from /usr/local/oracle/instantclient_10_2. As you can see, I also define the NLS_LANG environment variable – this is necessary if your database is not in the UTF8 encoding but you want to get UTF-8 encoded strings from the database in Ruby. By specifying the NLS_LANG environment variable you force Oracle Instant Client to do the character set translation.

After these steps, relaunch the Terminal application (so that the new environment variables are set), specify the database connection in the tnsnames.ora file, and verify that you can access your database with sqlplus from the command line.
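
A quick connectivity check, reusing the hypothetical ORCL alias and the scott/tiger credentials from the example later in this post:

sqlplus scott/tiger@ORCL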

Install ruby-oci8 gem

The latest versions of ruby-oci8 are available as Ruby gems, and therefore I recommend installing it as a gem rather than compiling and installing it as a library (as I have recommended previously on my blog).

If you previously installed ruby-oci8 as a library, then I recommend deleting it from the Ruby installation. Go to the /usr/lib/ruby/site_ruby/1.8 directory and remove the oci8.rb file, as well as the compiled oci8lib.bundle library from either the universal-darwin9.0 or universal-darwin10.0 subdirectory.
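
In shell terms, the cleanup amounts to something like this (only needed if these files are left over from an earlier library install):

cd /usr/lib/ruby/site_ruby/1.8
sudo rm -f oci8.rb
sudo rm -f universal-darwin9.0/oci8lib.bundle universal-darwin10.0/oci8lib.bundle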

Now install ruby-oci8 with the following command:

sudo env DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH ARCHFLAGS="-arch x86_64" gem install ruby-oci8

It is important to pass the DYLD_LIBRARY_PATH environment variable to sudo (as otherwise the ruby-oci8 gem installation will not find Oracle Instant Client), as well as to specify ARCHFLAGS to compile the C extension just for the 64-bit platform, as otherwise it will try to compile for both the 32-bit and 64-bit platforms.

Now try

ruby -rubygems -e "require 'oci8'; OCI8.new('scott','tiger','orcl').exec('select * from dual') do |r| puts r.join(','); end"

or similar (replacing the username, password and database alias) to verify that you can access an Oracle database from Ruby.

That’s it! Please write in comments if something is not working according to these instructions.

Categories: Development

ODP.NET: The provider is not compatible with the version of Oracle client

Mark A. Williams - Sat, 2009-09-05 08:10

One potentially perplexing error that may be raised when using Oracle Data Provider for .NET (ODP.NET) is "The provider is not compatible with the version of Oracle client". The reason I say "potentially perplexing" is that the error can be raised in a situation that doesn't necessarily seem to agree with the wording of the message. More on that later.

ODP.NET consists of both managed and unmanaged components. The managed component is the Oracle.DataAccess.dll and one of the key unmanaged components is the OraOpsXX.dll which I refer to as the bridge dll. The exact name of OraOpsXX.dll depends on the ODP.NET version as well as the .NET Framework version. In this post I am using ODAC 11.1.0.6.21 which includes ODP.NET versions targeted to the .NET Framework 1.x and 2.x versions. Beginning with the 10.2 versions of ODP.NET the .NET Framework major version is pre-pended to the ODP.NET version to differentiate between 1.x and 2.x of the .NET Framework. Therefore, the Oracle.DataAccess.dll I am using will report 2.111.6.20 as its version number. The corresponding OraOpsXX.dll will be named OraOps11w.dll and is found in the %ORACLE_HOME%\bin directory if using a full install or (typically) in the root folder of an Instant Client install.

I'll show what I think are the three most common reasons for this error. In order to do so, I use a (very) simple C# console application:

using System;
using System.Data;
using Oracle.DataAccess.Types;
using Oracle.DataAccess.Client;

namespace NotCompatibleTest
{
  class Program
  {
    static void Main(string[] args)
    {
      /*
       * connection string using EZCONNECT format
       * be sure to change for your environment
       */
      string constr = "user id=hr;" +
                      "password=hr;" +
                      "data source=liverpool:1521/V112;" +
                      "enlist=false;" +
                      "pooling=false";

      /*
       * create and open connection
       */
      OracleConnection con = new OracleConnection(constr);
      con.Open();

      /*
       * write server version to console
       */
      Console.WriteLine("Connected to Oracle version {0}",
                        con.ServerVersion);

      /*
       * explicit clean-up
       */
      con.Dispose();
    }
  }
}

As you can see, this simply connects to a database, writes the server version to the console window, and exits. It is, however, sufficient to exercise ODP.NET for the purposes here. I simply execute this sample in debug mode from within Visual Studio for each of the "tests".

Cause 1: OraOpsXX.dll is Wrong Version (and Hence the Client is Too)

This is a typical case, and the message text makes the most sense here: the correct version cannot be found. How might this occur? One easy way is when you develop your application using 11.1.0.6 of the Oracle Client and ODP.NET and then deploy to a machine that has a lower version of the Oracle Client and ODP.NET installed. This is what the error looks like in debug mode, with an unhandled TypeInitializationException when instantiating the OracleConnection object in the sample code:

[Screenshot: the unhandled TypeInitializationException in Visual Studio]

Cause 2: OraOpsXX.dll is Missing

In order to simulate OraOpsXX.dll missing, I rename my OraOps11w.dll in %ORACLE_HOME%\bin to OraOps11wXX.dll and execute the sample. Sure enough, I get the same error as above. Here the message may not make as much sense. Instead of "The provider is not compatible with the version of Oracle client" it might be better if the message indicated the real issue is that OraOpsXX.dll can't be located.

Cause 3: The tricky one

This cause is certainly less intuitive than either Cause 1 or Cause 2. As mentioned earlier, OraOpsXX.dll is unmanaged code. It also has a dependency on the Microsoft C runtime, in particular version 7 of the C runtime which lives in msvcr71.dll and which many systems have on the system path. However, if that file is not on the system path or in the same directory as OraOpsXX.dll you will receive the "The provider is not compatible with the version of Oracle client" message!

If you are receiving "The provider is not compatible with the version of Oracle client" messages in your environment perhaps it is due to one of the three causes here.

Oracle Database 11g Release 2 New Features : Edition-Based Redefinition

Virag Sharma - Fri, 2009-09-04 08:33

Every release has some major changes, which we usually call new features. Some of these features dominate the version: for example, 11g R1 had SPA, DB Replay, Active Data Guard standby, etc. Likewise, this Oracle release (Oracle Database 11g Release 2) has a feature for which it will be known in the future: Edition-Based Redefinition.


Most likely this feature was designed to give a big boost to APPS upgrades (i.e. Oracle E-Business Suite upgrades). Upgrading an APPS database needs a lot of downtime; with this new feature, one hopes, APPS upgrades will take less time in the future.


This feature allows application upgrades (as a DBA, I would rather say online upgrades of database objects) with minimum, or perhaps even zero, downtime. I consider it one step toward ZERO downtime for application upgrades.


In 10g, statistics collected on a table were published immediately, which often caused performance problems. Oracle 11g R1 added a feature for collecting statistics on tables and publishing them only when needed, to avoid performance issues caused by freshly gathered statistics.

Taking a similar idea to the next step, 11g R2 lets you redefine objects without publishing the new definitions immediately; a database can even hold multiple editions of an object's definition. Of course, there are some limitations.
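
For reference, the 11g R1 pending-statistics behaviour mentioned above works roughly like this (a sketch; the HR.EMPLOYEES table is just an example):

BEGIN
  -- keep newly gathered stats on this table unpublished ("pending")
  DBMS_STATS.SET_TABLE_PREFS('HR', 'EMPLOYEES', 'PUBLISH', 'FALSE');
  DBMS_STATS.GATHER_TABLE_STATS('HR', 'EMPLOYEES');
  -- after testing (ALTER SESSION SET optimizer_use_pending_statistics = TRUE),
  -- make the pending stats visible to everyone
  DBMS_STATS.PUBLISH_PENDING_STATS('HR', 'EMPLOYEES');
END;
/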



Check Default Edition

SQL>
1 SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
2* Where PROPERTY_NAME = 'DEFAULT_EDITION'
SQL> /

PROPERTY_VALUE
--------------------------------------------------------------------------------
ORA$BASE



Changing Edition at session or Database level

SQL> ALTER SESSION SET EDITION=ora$base;

Session altered.

SQL> ALTER DATABASE DEFAULT EDITION =ora$base;

Database altered.





Grant create or drop edition to user

SQL> GRANT CREATE ANY EDITION, DROP ANY EDITION to virag;

Grant succeeded.


Enable Edition on schema / User

SQL> ALTER USER virag ENABLE EDITIONS force;
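
Once a user is editions-enabled, editionable objects (PL/SQL units, views, synonyms) owned by that user can have different definitions in different editions. A minimal sketch of how an upgrade might then proceed – the edition name, procedure and output below are illustrative, not from a real system:

SQL> CREATE EDITION release_2 AS CHILD OF ora$base;

Edition created.

SQL> ALTER SESSION SET EDITION = release_2;

Session altered.

SQL> -- redefine a procedure in the new edition; sessions still using
SQL> -- ora$base keep running the old version until they switch editions
SQL> CREATE OR REPLACE PROCEDURE virag.calc_bonus AS
  2  BEGIN
  3    NULL;  -- new implementation goes here
  4  END;
  5  /

Procedure created.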


Of course, there are some limitations – tables themselves are not editionable; only views, synonyms and PL/SQL objects can differ between editions.

I will add more soon on the how-to.


Reference

19 Edition-Based Redefinition

More Posts on Oracle RDBMS Database 11g R2 (Release 2)

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition-Based Redefinition

Categories: DBA Blogs

11gr2: it looks like someone is listening, after all...

Nuno Souto - Fri, 2009-09-04 00:31
Some of you folks might recall my 2008 wishlist for Oracle. The number one pet peeve was the need to create the initial segment of any data object even when it is empty. A big no-no for products such as Peoplesoft, where in a typical installation one gets 25000 tables and 35000 indexes, of which only around 1000 are ever filled with any data. Well, it appears someone at Oracle is reading this blog,

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name

Virag Sharma - Wed, 2009-09-02 19:32
Previously, once you decided to add or remove a node from a RAC database/cluster, you needed to change the client TNS entries. Oracle 11g R2 introduces a new concept called Single Client Access Name (SCAN), which eliminates the need to change the tnsnames.ora entry when nodes are added to or removed from the cluster.

RAC instances register with the SCAN listeners as remote listeners. The SCAN is a fully qualified name.
Oracle recommends assigning 3 addresses to the SCAN, which creates three SCAN listeners.
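
You can see this registration from the database side: in 11.2 the REMOTE_LISTENER parameter of each instance is typically set to scan-name:port. A quick check (the value shown is illustrative):

SQL> show parameter remote_listener

NAME              TYPE    VALUE
----------------- ------- ------------------------------
remote_listener   string  apps-scan.us.oracle.com:1521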



$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node apps001
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node apps002
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node apps002




Running the following command on node 2 (apps002):


$ ps -aef |grep -i SCAN

oracle 9380 1 0 Aug13 ? 00:01:09 /d01/apps/oracle_crs/11.2/bin/tnslsnr LISTENER_SCAN2 -inherit

oracle 9380 1 0 Aug13 ? 00:01:09 /d01/apps/oracle_crs/11.2/bin/tnslsnr LISTENER_SCAN3 -inherit

oracle 9993 7114 0 09:57 pts/3 00:00:00 grep -i SCAN





From the above output, it is clear that the SCAN listeners run from the CRS home.

$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521


$ srvctl config scan
SCAN name: apps-scan, Network: 1/192.168.182.0/255.255.255.0/
SCAN VIP name: scan1, IP: /apps-scan.us.oracle.com/192.168.182.109
SCAN VIP name: scan2, IP: /apps-scan.us.oracle.com/192.168.182.110
SCAN VIP name: scan3, IP: /apps-scan.us.oracle.com/192.168.182.108




A TNS entry can now use a single address (the SCAN name) instead of listing an address for every node.


TNS entries configured to use the per-node VIP addresses will keep working without any issue; using the SCAN is not mandatory (perhaps to preserve backward compatibility).



#
# TNS ENTRY with SCAN
#

test.world =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=apps-scan.world)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=R1211.world))
)

#
# TNS Entry without SCAN ( Old way)
#

test.world =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=apps001-vip.world)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=apps002-vip.world)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=R1211.world))
)




Clients can still connect to a particular instance of the database through the SCAN. The entry looks like:


test.world =
(description=
(address=(protocol=tcp)(host=apps-scan.world)(port=1521))
(connect_data=
(service_name=R1211.world)
(instance_name=apps1cl1)))





More Posts on Oracle RDBMS Database 11g R2 (Release 2)

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition-Based Redefinition

Categories: DBA Blogs

11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT

Virag Sharma - Wed, 2009-09-02 11:08
In earlier versions there was no standard way to change the tablespace of the audit tables.
In Oracle 11g R2 (also included in 11.1.0.7 and 10.2.0.5 – need to check), you can change the tablespace of the audit tables (SYS.AUD$ and SYS.FGA_LOG$) using DBMS_AUDIT_MGMT.


Not only can you change the audit tables' tablespace, you can now also periodically delete audit trail records using the CLEAN_AUDIT_TRAIL procedure (new in 11.2).

So now it is official: you can change the AUD$ table's tablespace and purge it :-).

In the example below, I am changing the tablespace for AUD$.

Checking current tablespace from AUD$

SQL> select TABLESPACE_NAME from dba_segments where SEGMENT_NAME='AUD$';

TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSTEM



Changing Tablespace from SYSTEM to SYSAUX for AUD$

SQL>

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    AUDIT_TRAIL_TYPE           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    AUDIT_TRAIL_LOCATION_VALUE => 'SYSAUX');
END;
/

PL/SQL procedure successfully completed.
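
The fine-grained audit table can be moved the same way; a sketch, using the FGA trail type:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    AUDIT_TRAIL_TYPE           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
    AUDIT_TRAIL_LOCATION_VALUE => 'SYSAUX');
END;
/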


Checking changed Tablespace


SQL> select TABLESPACE_NAME from dba_segments where SEGMENT_NAME='AUD$';

TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSAUX




AUDIT_TRAIL_TYPE: Refers to the database audit trail type. Enter one of the following values:

  1. DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD: Standard audit trail table, AUD$.
  2. DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD: Fine-grained audit trail table, FGA_LOG$.
  3. DBMS_AUDIT_MGMT.AUDIT_TRAIL_DB_STD: Both standard and fine-grained audit trail tables.

AUDIT_TRAIL_LOCATION_VALUE: Specifies the NEW destination tablespace.

As well as relocating the audit tables, you can now periodically delete audit trail records, XML audit files and .aud files using the CLEAN_AUDIT_TRAIL procedure.

STEPS for Purging AUDIT TRAIL

# Check initialization

BEGIN
  IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
  THEN
    dbms_output.put_line('CLEANUP NOT INITIALIZED');
  ELSE
    dbms_output.put_line('CLEANUP INITIALIZED');
  END IF;
END;
/


# Set initialization

BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    AUDIT_TRAIL_TYPE         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    DEFAULT_CLEANUP_INTERVAL => 6 );
END;
/

# Check whether a last archive timestamp is already set

SQL> desc DBA_AUDIT_MGMT_LAST_ARCH_TS
Name Null? Type
----------------------------------------- -------- ----------------------------
AUDIT_TRAIL VARCHAR2(20)
RAC_INSTANCE NOT NULL NUMBER
LAST_ARCHIVE_TS TIMESTAMP(6) WITH TIME ZONE

SQL> select * from DBA_AUDIT_MGMT_LAST_ARCH_TS;

# Set the last archive timestamp



BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    AUDIT_TRAIL_TYPE    => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    LAST_ARCHIVE_TIME   => SYSDATE - 30,  -- delete audit records older than 30 days
    RAC_INSTANCE_NUMBER => 1 );
END;
/

# For non-RAC systems, do not use "RAC_INSTANCE_NUMBER =>"
# If the RAC system has 4 nodes, run the above command 4 times,
# with RAC_INSTANCE_NUMBER 1, 2, 3, 4

# Manual Purge

BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    AUDIT_TRAIL_TYPE        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    USE_LAST_ARCH_TIMESTAMP => TRUE);
END;
/
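
Rather than purging manually, you can let the database do it on a schedule with the CREATE_PURGE_JOB procedure; a sketch (the job name and the 24-hour interval are made up):

BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    AUDIT_TRAIL_TYPE           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    AUDIT_TRAIL_PURGE_INTERVAL => 24,  -- hours between purge runs
    AUDIT_TRAIL_PURGE_NAME     => 'DAILY_AUDIT_PURGE',
    USE_LAST_ARCH_TIMESTAMP    => TRUE);
END;
/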


#
# If USE_LAST_ARCH_TIMESTAMP is FALSE, it purges the entire audit trail.
#
# Here we used DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL, which covers the standard
# database audit trail (SYS.AUD$ and SYS.FGA_LOG$ tables), the operating
# system (OS) audit trail, and the XML audit trail. The full list of values:
#
# AUDIT_TRAIL_ALL     => All audit trail types: the standard database audit trail (SYS.AUD$ and SYS.FGA_LOG$ tables), the OS audit trail, and the XML audit trail
# AUDIT_TRAIL_AUD_STD => Standard database audit records in the SYS.AUD$ table
# AUDIT_TRAIL_DB_STD  => Both standard audit (SYS.AUD$) and FGA audit (SYS.FGA_LOG$) records
# AUDIT_TRAIL_FGA_STD => Standard database fine-grained auditing (FGA) records in the SYS.FGA_LOG$ table
# AUDIT_TRAIL_FILES   => Both operating system (OS) and XML audit trails
# AUDIT_TRAIL_OS      => Operating system audit trail (audit records stored in operating system files)
# AUDIT_TRAIL_XML     => XML audit trail (audit records stored in XML files)

Reference

DBMS_AUDIT_MGMT ( Oracle Documentation 11.2 )

More Posts on Oracle RDBMS Database 11g R2 (Release 2)

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition-Based Redefinition


Categories: DBA Blogs

If at first you don't succeed...

Oracle WTF - Wed, 2009-09-02 05:57

...then try again. Then try again 125 more times. Then quit.

PROCEDURE get_id
    ( p_id_out         OUT NUMBER
    , p_name_in        IN VARCHAR2
    , p_create_user_in IN VARCHAR2 )
IS
    v_new_id      NUMBER := 0;
    v_max_tries   PLS_INTEGER := 127;
    v_default_id  NUMBER := 0;
BEGIN
    v_new_id := lookup_id(p_name_in); -- will be 0 if not found

    WHILE v_new_id = 0 AND v_max_tries > 0
    LOOP
        BEGIN
            INSERT INTO entry
            ( entry_id
            , entry_name
            , create_date
            , create_user
            , create_app
            , mod_date
            , mod_user
            , mod_app)
            VALUES
            ( entry_seq.NEXTVAL
            , p_name_in
            , SYSDATE
            , p_create_user_in
            , 'get_id'
            , SYSDATE
            , p_create_user_in
            , 'get_id' )
            RETURNING entry_id INTO v_new_id;

        EXCEPTION
            WHEN OTHERS THEN NULL;
        END;
    
        v_max_tries := v_max_tries - 1;
    END LOOP;

    p_id_out := v_new_id;
END get_id;

Thanks BB for sending this.

The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Chapter 1

Geert De Paep - Tue, 2009-09-01 14:44
Scenario 1: loss of ocrmirror, both nodes up

(This is the follow-up to the article “Introduction”.)

Facts
  • CRS is running on all nodes
  • The storage box containing the OCRmirror is made unavailable to both hosts (simulating a crash of one storage box).

What happens?

The crs alertfile ($ORA_CRS_HOME/log/hostname/alert.log) of node 1 shows:

2008-07-18 15:30:23.176
[crsd(6563)]CRS-1006:The OCR location /dev/oracle/ocrmirror is inaccessible. Details in /app/oracle/crs/log/nodea01/crsd/crsd.log.

And the CRS logfile of node 1 shows:

2008-07-18 15:30:23.176: [ OCROSD][14]utwrite:3: problem writing the buffer 1c03000 buflen 4096 retval -1 phy_offset 102400 retry 0
2008-07-18 15:30:23.176: [ OCROSD][14]utwrite:4: problem writing the buffer errno 5 errstring I/O error
2008-07-18 15:30:23.177: [ OCRRAW][768]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]

There is nothing in the crs alert file or crsd logfile of node 2 (although node 2 cannot see the LUN either).
On both nodes we have:

(/app/oracle) $ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     295452
         Used space (kbytes)      :       5112
         Available space (kbytes) :     290340
         ID                       : 1930338735
         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File unavailable
         Cluster registry integrity check succeeded

CRS continues to work normally on both nodes

Discussion

This test indicates that the loss of the ocrmirror leaves the cluster running normally. In other words, a crash of a storage box would allow us to continue our production normally. Great!

However, I’m not easily satisfied, and hence I still have a lot of questions: how do we recover from this, what happens internally, can we still change/update the OCR, …? Let’s investigate.

The most interesting items in the output above deserve attention. The fact that the ocrmirror device file is unavailable makes sense. Remember, however, the other message: the vote count was adjusted from 1 to 2.
Let’s see what happens if we now stop and start CRS on node 1 (while ocrmirror is still unavailable).
Stopping CRS on node 1 happens as usual, with no error messages. However, at the time of stopping CRS on node 1, we see a very interesting message in the crsd logfile of node 2:

2008-07-18 15:34:38.504: [ OCRMAS][23]th_master:13: I AM THE NEW OCR MASTER at incar 2. Node Number 2
2008-07-18 15:34:38.511: [ OCRRAW][23]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (1), 1st set (138575874 6,1866209186), 2nd set (0,0) my votes (2), total votes (2)
2008-07-18 15:34:38.514: [ OCROSD][23]utread:3: problem reading buffer 162e000 buflen 4096 retval -1 phy_offset 106496 retry 0
2008-07-18 15:34:38.514: [ OCROSD][23]utread:4: problem reading the buffer errno 5 errstring I/O error
2008-07-18 15:34:38.559: [ OCRMAS][23]th_master: Deleted ver keys from cache (master)

I am the new master??? So it looks as if node 1 was the master until we stopped CRS there. This ties in with the fact that, when the LUN became unavailable, only node 1 wrote messages to its logfiles. At that time, nothing was written into the logfile of node 2, because node 2 was not the master! A very interesting concept: in a RAC cluster, one node is the CRS master and is responsible for updating the vote count in the OCR. I never read that in the doc…. Also note that the new master identifies that the ocr now has 2 votes: “my votes (2)”.

Also, at the time of stopping CRS on node 1, the crs alert file of node 2 showed:

2008-07-18 15:34:38.446
[evmd(18282)]CRS-1006:The OCR location /dev/oracle/ocrmirror is inaccessible. Details in /app/oracle/crs/log/nodeb01/evmd/evmd.log.
2008-07-18 15:34:38.514
[crsd(18594)]CRS-1006:The OCR location /dev/oracle/ocrmirror is inaccessible. Details in /app/oracle/crs/log/nodeb01/crsd/crsd.log.
2008-07-18 15:34:38.558
[crsd(18594)]CRS-1005:The OCR upgrade was completed. Version has changed from 169870336 to 169870336. Details in /app/oracle/crs/log/nodeb01/crsd/crsd.log.
2008-07-18 15:34:55.153

So it looks as if node 2 checks the availability of the ocrmirror again and sees that it is not available.

Now let’s start CRS on node 1 again – maybe it becomes the master again?… Not really. The only thing we see in the crsd logfile is:

2008-07-18 15:39:19.603: [ CLSVER][1] Active Version from OCR:10.2.0.4.0
2008-07-18 15:39:19.603: [ CLSVER][1] Active Version and Software Version are same
2008-07-18 15:39:19.603: [ CRSMAIN][1] Initializing OCR
2008-07-18 15:39:19.619: [ OCRRAW][1]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (1), 1st set (1385758746,1866209186), 2nd set (0,0) my votes (2), total votes (2)

Recovery

Now how do we get things back to normal? Let’s first make the LUN visible again on the SAN switch. At that point nothing happens in any logfile, so CRS does not seem to poll to see whether the ocrmirror is back. However, when we now execute an ocrcheck, we get:

Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     295452
         Used space (kbytes)      :       5112
         Available space (kbytes) :     290340
         ID                       : 1930338735
         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File needs to be synchronized with the other device

         Cluster registry integrity check succeeded

Again, this makes sense. While the ocrmirror was unavailable, you may have added services, instances or whatever, so the contents of the (old) ocrmirror may differ from those of the current ocr. In our case, however, nothing was changed at cluster level, so theoretically the contents of ocr and ocrmirror should still be the same. Still, we get the message above. Anyway, the way to synchronize the ocrmirror is to issue, as root:

ocrconfig -replace ocrmirror /dev/oracle/ocrmirror

This will copy the contents of the ocr over the ocrmirror being located at /dev/oracle/ocrmirror. In other words, it will create a new ocrmirror in location /dev/oracle/ocrmirror as a copy of the existing ocr. Be careful with the syntax; do not use “-replace ocr” when the ocrmirror is corrupt.
At that time, we see in the crs logfile on both nodes:

2008-07-18 15:51:06.254: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)

2008-07-18 15:51:06.263: [ OCRRAW][30]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)

2008-07-18 15:51:06.263: [ OCRRAW][30]proprioo: for disk 1 (/dev/oracle/ocrmirror), id match (1), my id set (1385758746,1866209186) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)

2008-07-18 15:51:06.364: [ OCRMAS][25]th_master: Deleted ver keys from cache (non master)

and in the alert file:

2008-07-18 15:51:06.246
[crsd(13848)]CRS-1007:The OCR/OCR mirror location was replaced by /dev/oracle/ocrmirror.

Note again the messages above: each OCR device again has 1 vote. And all is OK again:

Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     295452
         Used space (kbytes)      :       5112
         Available space (kbytes) :     290340
         ID                       : 1930338735
         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
Conclusion of scenario 1

Losing the storage box containing the ocrmirror is no problem (the same is true for losing the ocr while the ocrmirror remains available). Moreover, it can be recovered without having to stop the cluster (the restart of CRS on node 1 above was for educational purposes only). This corresponds with what is said in the RAC FAQ in Metalink Note 220970.1: “If the corruption happens while the Oracle Clusterware stack is up and running, then the corruption will be tolerated and the Oracle Clusterware will continue to function without interruptions”. (I think, however, that the logfiles above give you much more insight into what really happens.)

However, another important concept is the story of the vote count. The test above shows that CRS is able to start if it finds 2 ocr devices each having one vote (the normal case), or if it finds 1 ocr having 2 votes (the case after losing the ocrmirror). Note that at the moment of the failure, the vote count of the ocr could be increased by Oracle from 1 to 2 because CRS was running.

In the next chapter, we will do this over again, but with both nodes down…


The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Introduction

Geert De Paep - Fri, 2009-08-28 13:57

Some time ago I wrote a blog about stretched clusters and the OCR. The final conclusion at that time was that there was no easy way to keep your OCR safe on both storage boxes, and hence I advised against clusters with 2 storage boxes. However, after some more investigation I may have to change my mind. I did extensive testing on the OCR, and in this blog I want to share my experiences.

This is the setup:

  • 2-node RAC cluster (10.2.0.4 on Solaris), located in 2 server rooms
  • 2 storage boxes, one in each server room
  • ASM mirroring of all data (diskgroups with normal redundancy)
  • One voting disk on one storage box, 2nd voting disk on the other box, 3rd voting disk on nfs on a server in a 3rd location (outside the 2 server rooms)

For the components above, this setup is safe against server room failure:

  • The data is mirrored in ASM and will remain available on the other box.
  • The cluster can continue because it still sees 2 voting disks (one in the surviving server room and one on nfs).
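
As a quick sanity check of this layout, you can list the configured voting disks with crsctl – the paths below are illustrative, not from the actual cluster:

$ crsctl query css votedisk
 0.     0    /dev/oracle/vote1
 1.     0    /dev/oracle/vote2
 2.     0    /nfs/vote/vote3

located 3 votedisk(s).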

But what about the OCR?

We did what looks logical: OCR on storage box 1 and OCRmirror on storage box 2, resulting in:

         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/oracle/ocrmirror
                                    Device/File integrity check succeeded

Now we can start playing. For the uninitiated reader, “playing” means closing ports on the fibre switches in such a way that a storage box becomes totally unavailable to the servers. This simulates a storage box failure.

The result is a story of 5 chapters and a conclusion. Please stand by for the upcoming blog posts.


Out Now!! Application Express 3.2.1

Anthony Rayner - Wed, 2009-08-26 04:28
The Oracle Application Express 3.2.1 patch set is now available for download and provides not only bug fixes, but also some additional functionality and considerations, as summarised by Joel and detailed in the patch set notes.

You can get hold of it by either:
  • Downloading the full version from OTN.

  • Downloading patch set 8548651 from METALINK.

If you're upgrading from any APEX version pre-3.2, then you'll need to use the full OTN release. Otherwise if you're upgrading from 3.2, then you only need the patch set.

Also in this patch set, we have included an additional documentation chapter, entitled Accessibility in Oracle Application Express. This aims to provide information for users who are accessing Oracle Application Express utilizing only a keyboard or Freedom Scientific's screen reader JAWS. It details the current accessibility issues in APEX and shows workarounds where they are possible. (We hope to address a number of these issues in APEX 4.0.)

I would be very interested to hear from anyone who uses APEX with keyboard only, screen reader or other assistive technology to get feedback on how we can hopefully get better at being accessible to our users with disabilities. Also if you use APEX to build applications that have strict accessibility requirements and have feedback on your experiences then I would love to hear from you also.

Please drop me an email at the email address in my profile if you would like to talk about this.

Anthony.

Categories: Development

Generate Days in Month (PIPELINED Functions)

Duncan Mein - Tue, 2009-08-25 08:57
This cool example is not one I can take the credit for, but since it is used pretty heavily in our organisation, I thought I would share it, as it's not only pretty cool but also demonstrates how useful Oracle pipelined functions can be.

In essence, a pipelined table function (introduced in 9i) allows you to use a PL/SQL function as the source of a query rather than a physical table. This is really useful in our case for generating all the days in a calendar month via PL/SQL and querying them back within our application.

To see this in operation, simply create the following objects:

CREATE OR REPLACE TYPE TABLE_OF_DATES IS TABLE OF DATE;

CREATE OR REPLACE FUNCTION GET_DAYS_IN_MONTH
(
  pv_start_date_i IN DATE
)
RETURN TABLE_OF_DATES PIPELINED
IS
  lv_working_date  DATE;
  lv_days_in_month NUMBER;
BEGIN
  -- normalise the input to the first day of its month
  lv_working_date  := TO_DATE(TO_CHAR(pv_start_date_i, 'RRRRMM') || '01', 'RRRRMMDD');
  -- number of days in the month after the first
  lv_days_in_month := TRUNC(LAST_DAY(lv_working_date)) - TRUNC(lv_working_date);

  PIPE ROW (lv_working_date);

  FOR lv_cnt IN 1 .. lv_days_in_month
  LOOP
    lv_working_date := lv_working_date + 1;
    PIPE ROW (lv_working_date);
  END LOOP;

  RETURN;

END GET_DAYS_IN_MONTH;
/

Once your objects have compiled successfully, you can generate all the days in a month by executing the following query:

SELECT column_value the_date
, TO_CHAR(column_value, 'DAY') the_day
FROM TABLE (get_days_in_month(sysdate));

THE_DATE THE_DAY
------------------------
01-AUG-09 SATURDAY
02-AUG-09 SUNDAY
03-AUG-09 MONDAY
04-AUG-09 TUESDAY
05-AUG-09 WEDNESDAY
06-AUG-09 THURSDAY
07-AUG-09 FRIDAY
08-AUG-09 SATURDAY
09-AUG-09 SUNDAY
10-AUG-09 MONDAY
11-AUG-09 TUESDAY
12-AUG-09 WEDNESDAY
13-AUG-09 THURSDAY
14-AUG-09 FRIDAY
15-AUG-09 SATURDAY
16-AUG-09 SUNDAY
17-AUG-09 MONDAY
18-AUG-09 TUESDAY
19-AUG-09 WEDNESDAY
20-AUG-09 THURSDAY
21-AUG-09 FRIDAY
22-AUG-09 SATURDAY
23-AUG-09 SUNDAY
24-AUG-09 MONDAY
25-AUG-09 TUESDAY
26-AUG-09 WEDNESDAY
27-AUG-09 THURSDAY
28-AUG-09 FRIDAY
29-AUG-09 SATURDAY
30-AUG-09 SUNDAY
31-AUG-09 MONDAY
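
Since the function normalises its argument to the first day of the month, you can pass any date in the month you are interested in; for example, counting the days of February 2009 should give 28:

SELECT COUNT(*) days_in_feb
FROM TABLE (get_days_in_month(DATE '2009-02-01'));

DAYS_IN_FEB
-----------
         28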


I hope someone finds this example as useful as we do. The credit goes to Simon Hunt on this one, as it was "borrowed" from one of his apps. Since I offered to buy him a beer, he has promised not to make too big a deal out of it :)

As always, you can read up on this topic here

Missing AppsLogin.jsp...

Bas Klaassen - Tue, 2009-08-25 01:20
I am still facing the same problem with my R12 upgrade. When running the post-install checks using Rapidwiz, only the JSP and the Login page show errors. For JSP I see 'JSP not responding, waiting 15 seconds and retesting' and the Login page shows 'RW-50016: Error. -{0} was not created. File= {1}'. The strange thing is that all the other checks are OK, even the /OA_HTML/help check! So, the problem is
Categories: APPS Blogs

R12 upgrade

Bas Klaassen - Sun, 2009-08-23 03:46
I finally upgraded my 11.5.10.2 environment to R12. I followed the steps mentioned in the different upgrade guides. What do I have running right now?
- Oracle eBS 12.0.6
- Oracle database 10.2.0.4
- Oracle tech stack (old ora directory) 10.1.2.3.0
- Oracle tech stack (old iAS directory) 10.1.3.4.0
Having had no problems during the upgrade process, I finished by starting all services. When trying to login my
Categories: APPS Blogs
