Feed aggregator

switching to wintertime

Freek D’Hooge - Thu, 2009-10-22 11:17

In Belgium we are switching to wintertime this Sunday, which is a good opportunity for me to write this post.
I originally intended to write it when we switched to summer time, so everything will be from the point of view of changing from winter time to summer time (confused yet?).

The reason I wanted to write about it were some alerts we got back then from our monitoring concerning scheduler jobs that were no longer running on time.
It quickly became clear that these jobs did not follow the change to summer time, but instead ran an hour later.
The key is to look at the dba_scheduler_jobs view in the correct format. You see, the *_run_date columns are of the datatype “timestamp(6) with time zone”, so to get all the information you need to use the right format model. With the TZR and TZD format elements you can see the timezone and the daylight saving information respectively:

sys@WPS50> select job_name, to_char(last_start_date, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') last_start_date, to_char(next_run_date, 'DD/MM/YYYY HH24:MI:SS "TS:" TZR "DS:" TZD ') next_run_date from dba_scheduler_jobs;

JOB_NAME                       LAST_START_DATE                                    NEXT_RUN_DATE
------------------------------ -------------------------------------------------- --------------------------------------------------
AUTO_SPACE_ADVISOR_JOB         28/03/2009 06:00:04 TZ: +01:00 DS:
GATHER_STATS_JOB               02/02/2009 22:00:00 TZ: +01:00 DS:
PURGE_LOG                      29/03/2009 03:00:00 TZ: MET DS: MEST               30/03/2009 03:00:00 TS: MET DS: MEST
ANALYZETHIS_PURGEHISTORY       29/03/2009 17:00:00 TZ: +01:00 DS:                 30/03/2009 17:00:00 TS: +01:00 DS:
GATHER_WK_TEST_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_SESSIONUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_RELEASEUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_LMDBUSR_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_ICMADMIN_STATS          29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_COMMUNITYUSR_STATS      29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_CUSTOMIZATIONUSR_STATS  29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
MGMT_STATS_CONFIG_JOB          01/03/2009 01:01:01 TZ: +01:00 DS:                 01/04/2009 01:01:01 TS: +01:00 DS:
MGMT_CONFIG_JOB                28/03/2009 06:00:04 TZ: +01:00 DS:

14 rows selected.

(Note the additional space after the TZD format element: I needed to add it to actually show the information when using the "DS:" literal in front of it. This is probably a bug.)

As you can see, each job has its own timezone offset and some also have daylight saving information.
So, what happened with our jobs? Well, when a job gets created, Oracle stores the timezone information of the start_date parameter. If this timezone is specified as an absolute offset, then no daylight saving changes are applied.
When the server switches to summer time (GMT +2 in Belgium), the scheduler job stays in its own little world and remains in the timezone GMT +1.
So, while for the rest of the database the time is 07:00, the job thinks it is still 06:00 and does not start. As the monitoring check did not take the timezone of the job into account, it reported the job as being late.

To avoid this situation, you need to use a named timezone, in which case Oracle will automatically apply the correct daylight saving settings.
How do you do this? Well, either you use to_timestamp_tz to convert a text string to a timestamp with timezone information, or Oracle retrieves the timezone from your session.
The timezone information in your session can be set with alter session, or by setting the ORA_SDTZ variable in your client environment.
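As a sketch of the first option (the job name and action here are made up), building the start_date with to_timestamp_tz and a named region keeps the job aligned with daylight saving switches:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'GATHER_DEMO_STATS',   -- hypothetical job
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',
    -- named region (TZR) instead of an absolute offset such as +01:00
    start_date      => to_timestamp_tz('29-03-2009 18:00:00 Europe/Brussels',
                                       'DD-MM-YYYY HH24:MI:SS TZR'),
    repeat_interval => 'FREQ=DAILY',
    enabled         => true);
end;
/
```

With a start_date like this, the *_run_date columns in dba_scheduler_jobs should show TZ: EUROPE/BRUSSELS rather than a fixed +01:00.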
But there is a catch. In the following example I have set my timezone to Europe/Brussels, and then verified the timezone information in systimestamp:

sys@WPS50> select sessiontimezone from dual;


sys@WPS50> select to_char(systimestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

30/03/2009 01:57:15 TZ: +02:00 DS:

As you can see, the timezone part uses an absolute offset, not a named timezone.
Systimestamp will never use the named timezone notation, so whenever you use systimestamp as the value for the next_date parameter in dbms_scheduler, you will use an absolute offset and thus not follow daylight saving switches.
Current_timestamp, however, does use the correct notation:

sys@WPS50> select to_char(current_timestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

30/03/2009 01:57:18 TZ: EUROPE/BRUSSELS DS: CEST

So, when you want to specify the current date as the value for the next_date parameter, use current_timestamp and not systimestamp.

This timezone stuff is only applicable when you have an interval of at least 1 day. With smaller intervals, Oracle will make sure that the period between 2 runs remains the same.
If a job runs every 3 hours and last ran at midnight, and the clock is then moved forward from 02:00 to 03:00, then the next run date of the job becomes 04:00, so that the 3 hour period between two job runs is retained.

More information on this, including how Oracle behaves when no start_date parameter is given, can be found here:

The database version on which the tests were done is

Categories: DBA Blogs

Listing files with the external table preprocessor in 11g

Adrian Billington - Thu, 2009-10-22 03:00
Using the 11g external table preprocessor to get directory listings in SQL. October 2009

What's the bedrock to your Social CRM Strategy?

Peter O'Brien - Wed, 2009-10-21 16:24

Perhaps it is taken for granted, but Esteban Kolsky's recent article on The SCRM Roadmap, is missing the requirement for a Social Media Participation Policy for employees. No matter where you are on his SCRM pyramid such a policy is vital. The first thing it does is give an employee permission to use their own initiative and participate in customer conversations.

There are few better experts on your products or services than your employees. Apple Inc. makes a point of highlighting these Geniuses and puts them to work in customer facing roles. With the advent of Social Networking, your experts do not need to have such a specific role, but can still be recognised for their contribution in making their company great.

Permission to talk
Your company doesn't really have to have more of a strategy than permission to get started on becoming a social business. Moreover, a simple, publicly available guideline affords employees some protection from pressure to discuss or comment on topics that are not appropriate.

It can be made clear that while all company employees are welcome to participate in Social Media, it is expected that all who participate in online commentary understand and follow some simple but important guidelines. This really is irrespective of medium (video, blogs, twitter, etc) and could also be applied to more traditional communication media such as the press. However, I'll stick to the internet based communication channels for now. These rules might sound strict and contain a bit of legal-sounding jargon. Some of the rules may be intuitive and considered basic common sense. However, please keep in mind that the overall goal is simple: to participate online in a respectful, relevant way that protects the company's reputation.

A great list of sample policies from existing organisations, some of which are written with industry specific rules, is available at Social Media Governance. It is well worth a read. The following is a guide highlighting some of the dos and don’ts covered in most of these policies.

Do:
  • Protect confidential information.
  • Respect copyrights.
  • Be credible, transparent and identify yourself as an employee.
  • Be in a position to offer comment from a point of clear knowledge of the subject matter.
  • Avoid jargon by writing in simple terms.
  • Use caution. While not all company employees are official spokespeople, please use caution in your comments as reporters and analysts may report on anything included in your online postings. Also, once online, the comments are cached, distributed quickly and available effectively forever.
  • Secure approval. If you are an official spokesperson for the company, you must submit each posting for review / approval before posting. If you are not a spokesperson, you will need to check with the company to determine if your blog and postings need to be approved.
  • Make it clear that your views are your own and do not necessarily reflect the views of your employer.

Don't:
  • Comment on merger and acquisition activity.
  • Discuss future product offerings, including upgrades, or new releases.
  • Make growth predictions of any kind.
  • Break out revenue by specific product or country.
  • Use any inflammatory language or discredit others’ views.
  • Provide headcount numbers for any country, region, group or department.
  • Discuss customers that are not currently referenceable to the press.
  • Provide the number of customers for a specific product area.
  • Speak for the company.
  • Lie.
Even if you're not sure what a Social CRM strategy is all about, and your current strategy is that you have no strategy, a policy providing guidelines to employees on the use of Social Media is an excellent platform enabling employees and customers to engage in dialogue about your products and services. That's the beginning of a beautiful relationship.

NZOUG Conference 2010 Call for Papers - 15-16 March

Gareth Roberts - Tue, 2009-10-20 18:16

If you're interested in presenting, please take a moment to read this message from the New Zealand Oracle Users Group, and note the deadline for submissions is at the end of next week 31-Oct-09! For full information please see the NZOUG Website. Disclaimer: I'm on the NZOUG Committee.

The New Zealand Oracle Users Group is pleased to announce…

Call for Papers and Training for the 2010 Conference

15th and 16th March 2010

We only require your presentation or training topic and short abstract at this stage.

We invite all users and suppliers of Oracle technology, Oracle applications and related products and services to submit presentations or training sessions including:

  • Stories from Oracle Users – everyone wants to hear them! Tell us your story, case study or any useful tips
  • Oracle Partners playing a key role in the Oracle world
  • Third party suppliers of products and services that will assist Oracle technology and application users

Presentation topics may include but are not limited to Oracle Applications such as EBS, JDE and Peoplesoft, BI Applications, Oracle Development, Middleware and Database technologies. Everything is welcome for submission but delegates are especially interested in hearing case studies of real life scenarios and situations, lessons learnt and best practices.

Both Technical and Business streams will run throughout the conference programme with presentations covering the implementation, use and support of the whole range of Oracle and associated third party products and services.

If your paper is selected for presentation:

  • You’ll be admitted to the conference free of charge upon submission of your full written paper, and
  • Your paper will be published in the conference proceedings.


How do I submit an abstract? Visit our abstract submission website and complete the Call for Papers submission, by 31 October 2009. We encourage you to submit your topic or interest by this time as last year we had a great selection from the first call.

You are welcome to suggest a topic you’d like to hear at the conference and we’ll try to source a presenter, also any questions, suggestions or comments are welcome at anytime to papers@nzoug.org

Remember to visit our website for all up to date information on regional events, the conference, and other news.

We look forward to receiving your submissions by 31 October 2009


Simply click here to register for the conference at our online registration page.


There will be an early bird discount for all registrations before the 1st January 2010. After that standard rates apply so get registering now.

Amazing 2010 Offer!!!! 

Register 3 Full Registration Attendees and get an additional Full Registration, for another member of your team, absolutely free. So if you have more than 3 people who would like to attend this great exciting educational event then it is even cheaper than ever before.

Conditions Apply.


Early Bird / Standard Rate

  • Full Registration: NZOUG Member
  • Full Registration: Non Member (this fee includes the cost of NZOUG Individual Membership)
  • Day Registration (includes Monday dinner)
  • Additional Exhibitor/Sponsor Staff Registration


15th and 16th March 2010

Rotorua - Energy Events Centre – Same city, new location!


Contact us via our website or for any other questions please contact our conference management company, The Conference Company.

By phone on 09 360 1240 or by email at nzoug@tcc.co.nz

Multiple standby databases and supplemental logging

Freek D’Hooge - Tue, 2009-10-20 11:10

A quick warning:

When you setup a logical standby database, you need to activate supplemental logging on the primary database.
This is done automatically when you build the data dictionary (by running the dbms_logstdby.build procedure).
Activating supplemental logging is however (I know now) a control file change and is thus not replicated to the other physical standby databases.
As a result, the logical standby will become (logical) corrupt when you perform a role switch between your primary and another physical standby database.

I learned this the hard way  :(
Luckily it was during a proof of concept and not in a real production environment … .

Of course, AFTERWARDS, I found the following MAA document, which points out that you have to enable supplemental logging yourself on the other physical standby databases.
It still makes a good read though.
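A quick way to check is sketched below (run it on the primary and on each physical standby; which of these columns dbms_logstdby.build actually turned on for you may vary, so treat this as an illustration rather than the exact fix):

```sql
-- Shows whether minimal / primary key / unique index supplemental
-- logging is recorded in this database's control file
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
from   v$database;

-- On a physical standby that reports NO, enable it yourself,
-- matching the settings on the primary:
alter database add supplemental log data (primary key, unique index) columns;
```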

Categories: DBA Blogs

Notes from Oracle OpenWorld 2009

Raimonds Simanovskis - Mon, 2009-10-19 16:00

Last week I participated in the annual Oracle OpenWorld 2009 conference. There is quite wide coverage of the conference in various web sites and blogs, therefore I will write just some personal notes that I wanted to highlight.

For me the most value was in meeting different interesting people. First, thanks to Justin Kestelyn and the whole OTN team for their Oracle community support. The Oracle ACE dinner, bloggers meetup, OTN lounge and unconference were great places to meet and have discussions with interesting and active Oracle community members.

It was nice to meet Kuassi Mensah and Christopher Jones who are supporters of dynamic languages in Oracle and supporters of Ruby in particular. And also had interesting discussions with Rich Manalang – Ruby guru at Oracle, who is from the AppsLab team.

This year there were quite a few Sun people at the conference. Scott McNealy and James Gosling were doing keynotes. And I had interesting discussions with Arun Gupta and Tim Bray. BTW they have very good coverage of Oracle OpenWorld in their blogs (and also have a fresh look at it, as they were here for the first time).

This year I did two unconference sessions – Oracle adapters for Ruby ORMs and Server Installation and Configuration with Chef. There were not very many attendees but at least it seemed that those who attended were satisfied with the content :) This year the Oracle Develop track was located quite far from the unconference location and probably this also was a reason why there were not very many attendees (as my sessions were quite developer oriented).


Here is the list of Oracle products and technologies that I am interested in to spend some time investigating them:

  • Fusion applications. I expected to hear more about the next generation of Fusion applications, but there was just a short demo in the final keynote and a promise that they will be available sometime next year. The user interface of the new applications seems much better than for the current Oracle applications, and current beta-testers are saying that usability is really much better. So I am really looking forward to trying them out.
  • Application Development Framework (ADF). I am not a big fan of ADF drag-and-drop development style (that’s why I prefer Ruby on Rails :)) but as ADF is the main development platform for Fusion Applications then it will be necessary to use it if we would like to extend or customize Fusion applications. But what I would be really interested in is how to integrate JRuby with ADF – it would be nice to use ADF Faces UI components to get ADF look and feel, but to use JRuby for model & controller business logic development.
  • SQL Developer unit testing. It was nice to see that Oracle finally has PL/SQL unit testing support in the latest version of SQL Developer, which hopefully will increase awareness about unit testing among PL/SQL developers. Steven Feuerstein gave a very good “motivational” talk about unit testing during the conference. But I still can’t decide if SQL Developer’s repository-based unit tests are the best way to do them. E.g. as all unit tests are stored in a database repository, you cannot version control them with Subversion or Git (which is where we store the source of all PL/SQL procedures).
    Therefore I plan to make enhancements to my ruby-plsql gem to support more PL/SQL data types and then it would be possible to write PL/SQL unit tests with Ruby and RSpec which would provide more compact syntax compared to current utPLSQL framework. Need to write blog post about it :)
  • Oracle Coherence. Recently I have heard many references to Oracle Coherence in-memory data grid which is often used to achieve high-scalability of web applications. Therefore I am thinking about Ruby client for Coherence and potentially using Coherence as cache solution in Ruby on Rails applications.
  • Java in database. Recently I did some experiments with Java stored procedures in Oracle database – and the main reason is that it could provide integration of Oracle database with other systems that have Java based API. I already did experiments with creating Oracle client for RabbitMQ messaging system.
  • Oracle object types. Many Oracle products (like Spatial Data option) are using Oracle object types for storing data. Currently these object data types are not supported by Ruby ActiveRecord and DataMapper ORMs. Need to do investigation how they could be supported and how to use Ruby e.g. for accessing spatial data in Oracle database.
Oracle Magazine’s Developer of the Year

And finally during Oracle OpenWorld annual Oracle Magazine Editors’ Choice Awards 2009 were published. And it was pleasant surprise for me that in this year I got Oracle Magazine’s Developer of the Year award. Thanks to Oracle people who promoted me and thanks for congratulations that I received :) Here is my picture and profile from the latest Oracle Magazine:


Photo © Delmi Alvarez / Getty Images

Categories: Development

Hosted Security Breakfast Seminar: Dublin

Peter O'Brien - Mon, 2009-10-19 07:49
Staying on the topic of hosted applications and security I thought I'd bring this to your attention. MessageLabs, a Symantec Company, is organising a breakfast seminar on Hosted Security in Dublin, in November. Although I won't be attending, you may find the presentations interesting and maybe get a free croissant or two.


Oracle Closed World - now closed

Moans Nogood - Fri, 2009-10-16 08:13
It's Friday morning and I'm on my way away from San Francisco after a splendid week of OOW, good guys, a few beers, and a lot of tech talk.

We ran OCW four times from Monday to Thursday, and we had talked some really good presenters into showing up:

Monday: Jeff Needham on processors and how Oracle runs on them. Opteron good. Nehalem good. A reporter named Kate was present in order to write about OCW. Code: 41.

Tuesday: Jonathan Lewis showing why the crowd were not experts. Ouch. Code 43.

Wednesday: Jeremiah Wilton about the Cloud, and especially the Amazon Cloud. He seems to know a good deal about Amazon. Code 24.

Thursday: Uri Shaft on counting eg. NDV in the optimizer, and some compression theory - and then Dan Norris & Greg Rahn about the Database Machine. Code 42.

And Kate's funny article about OCW appeared in the daily conference newspaper on Thursday. She got all the technical and non-technical stuff right - very impressive! She also gave away the secret location (Thirsty Bear on 661 Howard, upstairs), but thankfully only on the very last day of OCW :).

I truly enjoyed it, and so did several others, so we'll probably do it again next year.

Apart from that, it also appears that the guys from Miracle who were here with me (Morten Tangaa, Jesper Haure, Kaj Christensen, Claus Sørensen) got good things out of the conference.

While I remember it: Thank you to Victoria Lira, Lillian Buziak, and Justin Kestelyn for allocating a reporter for OCW, for managing the whole ACE Director thing, and many other favors that make the conference work.

New address http://www.soastation.org/

Peter O'Brien - Fri, 2009-10-16 05:29
SOA Station has its own domain now: soastation.org. The blog is still hosted via Google's Blogger service and will still be accessible at soastation.blogspot.com.

Deconstructing "Everything is UNIX"

Tahiti Views - Thu, 2009-10-15 23:42
From Linux magazine, an article by Jeremy Zawodny: Everything is UNIX. For me, this is an example of the "Miller meme" from Repo Man. "Suppose you're thinkin' about a plate o' shrimp. Suddenly someone'll say, like, "plate," or "shrimp," or "plate o' shrimp" out of the blue, no explanation." You go through life thinking you'll find something better than UNIX. The man pages still have the same bad
John Russell

This is How We Do It: Social Media at Oracle

Ken Pulverman - Thu, 2009-10-15 19:40
Hey. Here's an interview I just did at Oracle OpenWorld 09 on our social media efforts at Oracle. Just advance the stream to about 31 minutes and 20 seconds.

Oracle Open World 2009 Report - Part Two

Jared Still - Thu, 2009-10-15 14:13

Tuesday October 13th

Unconference on Indexes
Richard Foote
10/13/2009 10:00 AM

I started off the day attending the indexing presentation of fellow Oak Table member Richard Foote. Foote has become quite well known for his expertise on index internals since the publication of Oracle B-Tree Index Internals: Rebuilding the Truth.

This was basically a Q&A session, and I will include just a couple of the questions.

Q: Have you ever seen an index Skip Scan used correctly?
A: The short answer was 'No'

Foote mentioned that he had only rarely seen an index skip scan used, and then inappropriately.  For more information on skip scan, see Foote's blog entry on Index Skip Scans

Q: When can you safely drop an index that doesn't seem to be used?
A: That is very difficult to determine

The explanation for this answer is that it is very difficult to determine if an index is never used. It requires some patience, as the code that uses the index may be run only rarely, making it difficult to determine if it is actually used.

Oracle Closed World

OCW actually started on Monday, though due to the wonders of technology I missed it on that day.  The event was invitation only, either by being present when it was mentioned, or by receiving an SMS text on your phone.

This is where technology comes in.  The SMS was rather garbled, and I received through a series of very short SMS messages what seemed to be an invitation to stroll into a dark alley somewhere in downtown San Francisco.  It was later cleared up and I attended on Tuesday.

Oracle Closed World is the brainchild of Mogens Norgaard, another Oak Table member and co-founder of the Oracle consulting company Miracle AS.

On Tuesday Jonathan Lewis reprised his "How to be an Expert" presentation, the difference being that this audience was comprised of folks with a wide breadth of Oracle knowledge.

Lewis took advantage of this by making the questions harder, and chiding the audience for not knowing the answers.  All was in good fun. Undoubtedly the presence of beer didn't make the questions any easier to answer.

Wednesday was a presentation by Jeremiah Wilton, Oak Table member and formerly a DBA at Amazon.com.

Wilton presented a live demo on using Amazon's Elastic Compute Cloud (EC2) to provision a Linux server, using Elastic Block Storage (EBS) to provide persistent storage, and preconfigured Amazon Machine Images (AMI) to provision the server with Oracle already installed.

The fact that Wilton was able to do this during a 1 hour live demo, complete with the inevitable mishaps that can occur during a live demo, and complete the task was quite impressive.

This appears to be a great method to setup test instances of Oracle for experimentation.  There are companies using this for production use as well.

 Amazon Web Services

Perl - A DBA's and Developers Best (forgotten) Friend
Arjen Visser - Avisit Solutions

Perl is a topic near and dear to my heart.

I have been using it since version 4 in the early 1990's, and have advocated its use ever since. It is a robust and powerful language with a huge collection of libraries developed by the user community and archived in the Comprehensive Perl Archive Network (CPAN, http://cpan.org/).

When I spotted the Perl session on the schedule I immediately signed up for it.

What I had not noticed was the subtitle indicating it was a session for beginners.

No matter, I had to go.

The session began with a concise but clear introduction to Perl basics.

So far, so good.

When the time came to discuss Perl connectivity to Oracle, it was a bit surprising to be confronted with a slide showing how to use Perl as a wrapper for sqlplus.

"Surely," I thought, "this is going to be a slide showing how not to do it."

If you have used Perl with Oracle, you are no doubt familiar with DBI and DBD::Oracle.

DBI is the Perl Database Interface module developed and maintained by Tim Bunce.

DBD::Oracle is the Oracle driver for DBI, also originally developed and maintained by Tim Bunce, and now maintained by The Pythian Group.

DBI and DBD::Oracle are very mature and robust Perl packages for using Perl with Oracle.

You would also likely know that using Perl as a wrapper for sqlplus is very cumbersome and inelegant. So as not to write a whole treatise on why you shouldn't do this, I will simply say that doing so is rarely necessary, and never an optimal method.

Which brings us to the next slide in the presentation, which had a diagram showing how DBI and DBD::Oracle fit into the Perl architecture.

The speaker then told the audience that these were hard to install and difficult to use, and didn't recommend using them.

After picking my jaw back up off the floor, I lost all interest in the rest of the presentation. I don't remember what the rest of the slides were. Maybe I blacked out from the shock. What I remember is walking away from the presentation rather incredulous.

Just last week, a friend who had not used Perl asked me how to install it on a Solaris server. With only a few lines of email that I typed from memory he was able to successfully install DBI and DBD::Oracle.

Hard to install indeed.

11 Things about 11gR2
Tom Kyte

Really it was Tom's top 10 list for 11gR2 - he liked his favorite feature so much he counted it twice.

And that is the one I will mention.

It is Edition Based Redefinition.

In a nutshell this feature allows you to create a new namespace for PL/SQL objects, creating new versions in a production database.

This will allow upgrading applications with little or no downtime, something that has always been one of the DBA holy grails.

Rather than try to explain it (OK, I don't yet know how it works) I will just tell you to take a look at Chapter 19 in the 11gR2 Advanced Application Developer's Guide.
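From what the documentation describes, the basic mechanics look roughly like this (the edition and procedure names are made up, and I haven't tried this myself yet):

```sql
-- Create a new edition as a child of the current one
create edition release_v2;

-- Compile the new version of the code in the new edition
alter session set edition = release_v2;

create or replace procedure calc_price as
begin
  null;  -- new implementation goes here
end;
/

-- Sessions still running in the old edition keep seeing
-- the old calc_price until they switch editions
```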

Wednesday Keynote
Larry Ellison

Ellison promised to discuss 4 topics, I will include 3 of them.

I left before the Fusion Middleware discussion.

Oracle enterprise linux update

One interesting fact presented was a survey performed by HP detailing Linux usage in corporate data centers.  The numbers are rather surprising.

* Oracle Enterprise Linux 65%
* Redhat 37%
* Suse 15%
* Other 2%

Next was the second generation of the Exadata Database Machine.

Essentially it is faster than gen 1.

It was used to set a new TPC-C benchmark record - I believe it was 1,000,000 transactions per second.

Ellison was proud of the record being 16 times faster than the record previously set by IBM, and rightfully so if those numbers are correct.

It seems IBM has challenged the results however, claiming the Exadata 2 is 'only 6 times faster'. As you might imagine, Ellison had some fun with that, even offering a $10 million prize to anyone who can show that a Sun Exadata machine cannot run the app at least twice as fast as any other system. IBM is invited to participate.

At this time Ellison welcomed a special guest to the stage: California Governor Arnold Schwarzenegger.

Commenting on being in a room with so many IT folks Schwarzenegger commented "As I came out on stage I felt my IQ shoot up 10 pts."

Schwarzenegger talked for a few minutes on the impact of technology on people's lives. "Technology's impact is flesh and blood", in reference to how tech is used to aid the response of public services such as firefighting.

Arnold called for a round of applause for Larry Ellison and Scott McNealy for being technology leaders.

The camera cut to Ellison, looking uncharacteristically humble as he mouthed 'Thank you'.

After Schwarzenegger left the stage, Ellison continued, this time discussing My Oracle Support.

My Oracle Support has been a hot topic lately, as the majority of DBAs are less than thrilled with the new Flash interface being imposed. It is my understanding that an HTML version of the interface will be maintained, so we won't have to deal with Flash if we don't want to.

Here's where it gets interesting - the unification of Oracle Enterprise Manager and My Oracle Support.

There is now a 'My Oracle Support' tab in OEM.

DBAs will be allowed to opt in to OCM, Oracle Configuration Manager, allowing Oracle to perform automated discovery of bugs and patches needed, either in Oracle software or in other vendors' software on the server (OS bugs).

Oracle will then have a global database to mine for proactive responses to possible problems.

When a configuration is found to have issues, all users with that configuration can be proactively notified.

The real news IMO though is the impact on patching.

Oracle recently started offering a new patch package - the PSU.

This is different from the CPU patch system, as it may require merge patches to resolve patch conflicts.

If OEM My Oracle Support determines that a merge patch is needed, it will automatically file an SR requesting the patch and notify you when it is available.

Even if you don't like OEM, this may be a good use of it.

Ok, that's enough for now, time for lunch.

Categories: DBA Blogs

The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Conclusion

Geert De Paep - Thu, 2009-10-15 09:00

This is a follow-up of chapter 5.
The most important thing in this story is the fact that it is perfectly possible to configure your Oracle RAC cluster with 2 storage boxes in a safe way. You just need an independent location for the 3rd voting disk, but if you have that, you can be sure that your cluster will remain running when one of those storage boxes fail. You will even be able to repair it without downtime after e.g. buying a new storage box (call Uptime for good prices…:)

So were all these tests really needed? Yes, I do think so, for the following reasons:

  • Seeing that it works gives much more confidence than just reading about it in the Oracle documentation or on Metalink.
  • The story of the vote count is really interesting. There is almost nothing to be found about this in the Oracle documentation or on Metalink. With the information in this blog, you will be able to better understand and interpret the error messages in the log files. You will also know better when to (not) update the vote count manually.
  • The concept of the OCR master is nice to know. Again, it gives you more insight into the messages in the log files.

But apart from these straightforward conclusions, there is one thing I find most interesting. The different scenarios have produced different output, and in one case (scenario 5) even real error messages, although they all did the same thing: removing the ocrmirror. With the different scenarios above, you know why the output can be different. If you ever have to handle a RAC case with Oracle support and you get the reply "we are unable to reproduce your case", you may now be able to give them more information about which parameters can make a difference (who is the OCR master, where crs is stopped, …). Otherwise it can be so frustrating that something fails in your situation but works in somebody else's.
But now I may be getting too philosophical (which I tend to have after another good “Trappist“)…

Good luck with it!

P.S. And yes, I do have them all in my cellar…

Oracle Open World Report for October 11th and 12th

Jared Still - Wed, 2009-10-14 12:03
As I am attending Open World 2009 on blogger credentials, it seems proper that I should actually blog about it.

So, here it is.  I won't be blogging about keynotes or other things that will appear in the news the following day, but rather on some of the sessions I attend.

As I got back to my room too late and too tired to do this properly on Monday, I am putting Sunday and Monday in the same post.

Here goes:

Open World - Sunday 10/11/2009

While attending Oracle Open 2009, I thought it a good idea to make some report of sessions attended, and any interesting developments at OOW.

Some of the sessions I attended may not be considered DBA topics. I thought it would be interesting to break out of the DBA mold for a bit and attend some sessions that might be a bit outside the DBA realm.

Sue Harper - Everyday Tasks with Oracle SQL Developer

Sue Harper is the product manager for SQL Developer, and was presenting some of the useful new features of the SQL Developer 2.1 Early Adopter release.

While I have used SQL Developer from the time it was first released as Raptor, until recently I used it simply as a database browsing tool.  After exploring some of the features that allow writing reports with master/detail sections, I converted some SQL*Plus scripts for use with SQL Developer.

SQL Developer is a very capable tool, so I attended this session to see what else I might be missing out on.

There was only one hour allocated for the session, and given the first 15 minutes were consumed convincing the audience why they should be using SQL Developer, there was just that much less time available to see the new features.

Taking a few minutes to market it is probably just in the product manager DNA.

Some of the features demonstrated were actually available in 1.5, but maybe not widely known.  As I have not used 2.1, I won't always differentiate between versions here. Some of these features may not be new to 2.1, maybe just improved.

Though not a new feature in 2.1, a few minutes were spent demonstrating the built-in version control integration. This is a very useful feature, and can be set up for seamless integration with CVS, Subversion, Perforce, and one other I can't recall now.  It's definitely worth a look.

Some features that are new to 2.1 looked very useful:

Persistent column organizing and hiding.  When viewing data in SQL Developer, the columns may be easily rearranged and selected or de-selected for viewing.  While previous versions allowed dragging columns around, 2.1 has a nice dialog that makes this much easier.

New to 2.1 is column filtering.  By right clicking on a cell in the data pane, a dialog can be brought up to filter the data based on values found.  This allows filtering the data without requerying the table.

Also new to 2.1 is an XML DB Repository Navigator. It was mentioned, but alas there was no time to demonstrate it.



Christopher Jones - Python/Django Framework

This was a hands-on developer session centered on using the Python scripting language with the Django web application framework.  This was a fun session.  The lab was already set up, running Oracle Linux VMs accessed via individual laptops in the training room.

The lab was a go-at-your-own-pace session, with instructions both printed and available via browser.  The browser-based instructions were the way to go, as the examples could be cut and pasted, saving a lot of typing.

I wasn't quite able to complete the lab, as I kept encountering an error when running the web app.  It was probably due to an error in one of the scripts I modified during the session, but enough was accomplished to see that the Django framework looks very interesting.  Perhaps even simple enough for a DBA to use.  Yes, I did search the schedule for a similar Perl session, perhaps using Mason or some such.

The training materials are going to be placed on OTN in the Oracle By Example section after Open World concludes.


Ray Smith - Linux Shell Scripting Craftsmanship

The last session I attended on Sunday was all about shell script craftsmanship. Ray Smith was sharing some common sense methods that can be used to greatly enhance your shell scripts.

If you have done any software development, the information presented would be similar to what you already know.

  • Use white space and format your code for readability.
  • Don't be overly clever - other people will have to read the shell script.
  • Format your scripts with a header explaining the purpose of the script, and separate sections for independent and dependent variables, and a section for the code.
  • Use getopts to control command line arguments.
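Putting a few of those guidelines together, a minimal sketch might look like this (the script purpose, option names, and defaults are invented for illustration, not from the talk):

```shell
#!/bin/sh
#------------------------------------------------------------------
# rotate_logs.sh - hypothetical example of the suggested layout:
#   a header block, a variables section, then the code,
#   with getopts handling the command line options.
#------------------------------------------------------------------

# --- independent variables (safe to edit) ---
KEEP_DAYS=7
VERBOSE=0

# Simulate a command line for this demo: ./rotate_logs.sh -k 30 -v
set -- -k 30 -v

# --- option handling via getopts ---
while getopts 'k:v' opt; do
    case "$opt" in
        k) KEEP_DAYS=$OPTARG ;;            # -k N : keep N days of logs
        v) VERBOSE=1 ;;                    # -v   : verbose output
        *) echo "usage: $0 [-k days] [-v]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))

# --- code ---
[ "$VERBOSE" -eq 1 ] && echo "keeping $KEEP_DAYS days of logs"
```

The `set --` line only exists to make the demo self-contained; in a real script getopts would parse the actual arguments.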

Smith strongly advocated that everyone in the audience obtain a copy of the book "The Art of Unix Programming" by Eric S. Raymond.  This is not a new book by any means, but Smith drew upon it for many of the principles he advocated in scripting.

A couple of tools new to me were mentioned:

Zenity and Dialog - both are graphical dialog boxes that may be called from shell scripts on Linux.


Dialog is installed with Linux, so just run man dialog to check it out.
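For scripts that must also run unattended, dialog can be wrapped with a plain-text fallback. A small sketch (the function name and prompt are mine, not from the session):

```shell
#!/bin/sh
# Hypothetical sketch: use dialog for a yes/no prompt when a terminal
# is available, and fall back to plain read(1) otherwise, so the same
# script still works in cron jobs and pipelines.
ask_yesno() {
    if [ -t 0 ] && command -v dialog >/dev/null 2>&1; then
        # dialog --yesno exits 0 for Yes, 1 for No
        dialog --yesno "$1" 7 50
    else
        printf '%s [y/N] ' "$1"
        read -r answer
        [ "$answer" = y ]
    fi
}

# Non-interactive demo: stdin is a pipe, so the fallback branch runs
echo y | ask_yesno "Purge old trace files?" && echo "confirmed"
```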

It was an interesting presentation.  Though a lot of it was not new to me, the two dialog tools mentioned were, showing that no matter how well you think you may know a subject, you can always learn something from someone else.

Open World - Monday 10/12/2009

Jonathan Lewis Unconference - How to be an Expert

Jonathan Lewis had an interesting unconference presentation.


In a nutshell, it comes down to this:

You must practice, and practice quite a lot.

To make the point, he used the joke about the American tourist asking the groundskeeper how the lawns of the Royal Estates are maintained to be so lush, to have such an even texture, and, in short, to be so perfect.

The groundskeeper explained while the tourist took notes.

First you must dig down 4 inches.

Then you must put down a layer of charcoal.

Then another 1-inch layer of fine, sharp sand.

Finally a layer of fine loam goes on top.

You then must seed the lawn, and water it very well for 6 weeks.

After 6 weeks, you must cut the grass, being very careful to remove only a small amount as you mow.  This must be done three times a week.

And then you continue doing this for 200 years.

Ok, everyone had a good laugh at that, but the point was made.

Reading some books and being able to run some scripts does not make you an expert.  Lots and lots of practice may make you an expert, if you apply yourself well.

During the presentation he asked the audience, made up mostly of DBAs, a number of questions. I will reprise a couple of them here.

Q1:  Assuming you have a simple heap table, with no indexes, you update a single column in 1 row of the table.  How many bytes of redo will that generate?

Q2: How many of you in the audience, when inserting data into a table, deliberately insert duplicate data into the database?

I will leave you to speculate on the answers a bit.

Of those 2 questions, only 1 was answered correctly by the audience.

Leng Tan and Tom Kyte DBA 2.0 - Battle of the DBA's

What is the difference between DBA 1.0 (the old days) and DBA 2.0 (the modern DBA)?

DBA 2.0 has modern tools: a self-managing database enabled by AWR and the Diagnostics and Tuning packs.

DBA 1.0 uses scripts and works from the command line.

On the stage, in addition to Kyte and Tan, were two DBAs, each with a laptop and an Oracle server to work on.

Two scenarios were presented as timed hands-on problems that each DBA had to work through.

First scenario - Security Audit

Each DBA is given 6 minutes to do a database audit and report on possible vulnerabilities.

DBA 1.0 ran scripts to check for open accounts, default passwords, publicly granted packages and umask settings.

After doing so he ran a script to remove privileges granted to PUBLIC, and locked a couple of accounts.

DBA 2.0

DBA 2.0 worked from the Oracle Enterprise Manager console, using the Secure Configuration for Oracle Database.

He was able to observe the database security score, navigate through several screens and correct the same security problems that DBA 1.0 did.  Following that he was able to see that the security score for the database had improved.

So the conclusion made by the presenter is that OEM is clearly superior because OEM will automatically generate the needed data every night.

By contrast DBA 1.0 can only do one db at a time.

I do not believe this demonstration to be a valid comparison - it's quite simple to run the scripts against any number of databases and report on anomalies.

At this point it should be mentioned that DBA 1.0 spent 4 minutes explaining what he was going to do, another minute explaining what the scripts were doing, with less than 1 minute spent actually running the scripts.

By comparison, DBA 2.0 was navigating through screens through nearly the entire 6 minutes.

The statement was made by the presenter that doing this with scripts at the command line was far too tedious a task, and that DBA 1.0 would never be able to accomplish it for 200 databases.

I won't belabor the point (well, not too much), but automating these kinds of tasks is relatively simple for command line tools.  Which is easier and more productive: automating a set of scripts to poll all of your databases, or navigating through OEM for 200 databases?
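For what it's worth, the scripted approach scales with a few lines of shell. A rough sketch (the list file, the audit.sql script, and the wallet-based login are my assumptions, not anything shown in the session):

```shell
#!/bin/sh
# Hypothetical sketch: run one audit script against every database in
# a list, instead of clicking through OEM one database at a time.

run_audits() {
    # $1 = file of TNS aliases, one per line; $2 = runner command
    while read -r db; do
        case "$db" in ''|'#'*) continue ;; esac   # skip blanks/comments
        # In real use the runner would be something like:
        #   sqlplus -S /@"$db" @audit.sql    (wallet-based login assumed)
        $2 "$db"
    done < "$1"
}

# Dry-run demo with a fake database list and echo as the runner
printf 'PROD1\nPROD2\nTEST1\n' > /tmp/dblist.txt
run_audits /tmp/dblist.txt 'echo auditing'
```

Swap the echo runner for the real sqlplus call and the same loop audits 2 databases or 200.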

The presenter referred to using OEM as "really convenient".  Sorry, but I have never found OEM to be at all convenient.  Whenever I run into problems with it, it requires an SR to fix.

There was a round 2 as well, regarding testing execution plans before and after setting optimizer_features_enable to a newer version.  OEM fared well here compared to the scripting method, as the scripts used 'explain plan' while OEM actually executed the queries to gather execution plan information.

That isn't to say, however, that the scripts could not be modified to do the same.  No, I am not completely against GUI environments.  I am just against making more work out of DBA tasks.

Enough for now; I will report on Tuesday's sessions later this week.
Categories: DBA Blogs

Extra! Extra! Oracle Closed World today.... on Cloud

Moans Nogood - Wed, 2009-10-14 09:31
We had planned not to have any OCW presentations today in order not to steal Larry's audience from his planned keynote, but we're doing it anyway.

It's at 1200 hours, NOT 1300 hours as usual.

More details via text message later, including today's codeword. If you want text messages from me for the OCW sessions, send me a text/SMS on +45 25277100.

Cloud computing is 'hot'. So is Larry when he talks about it on YouTube. Funny as Hell, actually.

There are at least these two videos. They partly overlap, but that doesn't matter - you'll want to see him do this standup routine a couple of times, trust me:


Which is why today, at the secret location, Oracle Closed World will present a couple of guys that know everything about 'the cloud'.


OOW 2009 1st report

Fadi Hasweh - Wed, 2009-10-14 00:03
It’s all about innovation - that is what Larry Ellison (Oracle) and Scott McNealy (Sun) said at the opening keynote. I arrived in San Francisco for OOW 2009 and I have to admit it’s a beautiful place. The organization of OOW has been more than perfect so far. I did my registration online at Moscone Center, and even though there were thousands registering at the same time, the process did not take more than 5 minutes - it was all online and automated. The Oracle team members and employees I met were very friendly and helpful.

Attending the unconference sessions was a very good experience. I attended Michael Taylor’s session (Hyperion on Linux Live Demo) and Fundamentals of Performance (Oracle ACE Director Cary Millsap). Attending Ahmed Al-Omari’s session was also very good - it was a Q&A session, which was really helpful. Tom Kyte’s session was great too. I cannot put everything in one post, but more to come soon; I will post some videos and pictures I took at the event. If anyone reading this blog is at the conference, it’s a great chance to meet - please drop me a line.

The Unconference sessions

the games area, where people can have some fun

more to come soon.

User friendly / supported monitoring of concurrent processes

Nigel Thomas - Tue, 2009-10-13 10:59
Yes, I know everyone else is having a great time at OOW, but some of us are back in the real world still.

I've asked a question on OTN (under EBS General Discussion) Best way to execute / monitor long running custom conc request with slave.

Can anyone help me with suggestions for an EBS-supported API (11.5.10 on Solaris 10 / Oracle 9iR2) that would enable the professional user who launched a (PL/SQL) concurrent process to monitor its progress over several hours from his/her application UI? To add to the fun, the process is going to spawn some slaves to make use of all the spare CPUs / cores / threads we have lying around.

As a developer, I would normally start with the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure (and I'll build that in anyhow) - but in this case I'm struggling to find any documentation or Metalink notes to point me at something that would actually appear in the apps UI.

Answers here - or better still, on the OTN thread. Thanks in advance!

It's wonderful, but will I survive?

Claudia Zeiler - Tue, 2009-10-13 01:58
It's Open World! Sunday was a full day of IOUG lectures. Today I heard Jonathan Lewis on "Performance Tuning - being an expert"; Greg Rahn on data warehousing and Exadata; Cary Millsap on performance; and Chen Shapira on the uses of charts. I had an introduction to desktop widgets from two experts - I was the only attendee. And I had a nice long introduction to APEX at the Demo Grounds.

It is all Wonderful. Just one little question. How am I going to survive 3 more days? I'm going to bed!

The ultimate story about OCR, OCRMIRROR and 2 storage boxes – Chapter 5

Geert De Paep - Mon, 2009-10-12 09:00
Scenario 5: Loss of ocrmirror from non-ocr-master – reloaded

This is a follow-up of chapter 4.
In this final scenario, we do the same thing as in scenario 4. I.e. while crs is running on both nodes, we hide the ocrmirror from the non-ocr-master node, which is node 2 now.
So node 1 is the master, we hide ocrmirror from node 2 and we verify on node 2:
(nodeb01 /app/oracle/crs/log/nodeb01) $ dd if=/dev/oracle/ocrmirror of=/dev/null bs=64k count=1
dd: /dev/oracle/ocrmirror: open: I/O error

What happens?

As we know from scenario 4, ocrcheck on node 2 now fails with:
(nodeb01 /app/oracle/crs/log/nodeb01) $ ocrcheck
PROT-602: Failed to retrieve data from the cluster registry

On node 1 all is ok. This is still the same as scenario 4, but in scenario 4 we then stopped crs on the OCR master, which can see both LUNs. In this scenario we will now stop crs on the non-master node (node 2), which can see only the ocr.

And now it gets interesting….
-bash-3.00# crsctl stop crs
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage

Did I say “really interesting”? We don’t seem to be able to stop crs anymore on the non-ocr-master node. Maybe it is worth referring to the RAC FAQ on Metalink, which says “If the corruption happens while the Oracle Clusterware stack is up and running, then the corruption will be tolerated and the Oracle Clusterware will continue to function without interruptions”. That’s true, but they don’t seem to speak about stopping crs. Anyway, the real “playing” continues:

Let’s try to tell Oracle CRS that the ocr is the correct version to continue with, and kindly ask it to increase its vote count to 2. We do this on node 2 and get:
ocrconfig -overwrite
PROT-19: Cannot proceed while clusterware is running. Shutdown clusterware first

Deadlock on node 2! We can’t stop crs, but in order to correct the problem, crs has to be down…

Moreover, at this time, it is not possible anymore to modify the OCR. Both nodes now give:
(nodea01 /app/oracle/crs/log/nodea01/client) $ srvctl remove service -d ARES -s aressrv
PRKR-1007 : getting of cluster database ARES configuration failed, PROC-5: User does not have permission to perform a cluster registry operation on this key. Authentication error [User does not have permission to perform this operation] [0]
PRKO-2005 : Application error: Failure in getting Cluster Database Configuration for: ARES

And doing the above command on each node always gives the following in the alert logfile of node 1 (who is the master):
[  OCRAPI][29]a_check_permission_int: Other doesn’t have permission

Note: “srvctl add service” doesn’t work either.

Now it seems like things are really messed up. We had never seen permission errors before. Please be aware that the steps below are the steps I took trying to get things right again. There may be other options, but I only did this scenario once, with these steps:

As the original root cause of the problem was making the ocrmirror unavailable, let’s try to tell the cluster to forget about this ocrmirror, and continue only with ocr, which is still visible on both nodes.

So in order to remove ocrmirror from the configuration, we do as root on node 2:
-bash-3.00# ocrconfig -replace ocrmirror “”

Note: specifying an empty string (“”) is used to remove the raw device from the configuration.

At that time in the crs logfile of node 1:
2008-07-23 11:11:18.136: [  OCRRAW][29]proprioo: for disk 0 (/dev/oracle/ocr), id match (0), my id set (1385758746,1028247821) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1866209186) my votes (1), total votes (2)
2008-07-23 11:11:18.136: [  OCRRAW][29]propriowv_bootbuf: Vote information on disk 0 [/dev/oracle/ocr] is adjusted from [1/2] to [2/2]
2008-07-23 11:11:18.195: [  OCRMAS][25]th_master: Deleted ver keys from cache (master)
2008-07-23 11:11:18.195: [  OCRMAS][25]th_master: Deleted ver keys from cache (master)

That looks ok. We will be left with one ocr device having 2 votes. This is intended behaviour.

In the alert file of node 1, we see:
2008-07-23 11:11:18.125
[crsd(26268)]CRS-1010:The OCR mirror location /dev/oracle/ocrmirror was removed.

and in the crs logfile of node 2:
2008-07-23 11:11:18.155: [  OCRRAW][34]proprioo: for disk 0 (/dev/oracle/ocr), id match (1), my id set (1385758746,1028247821) total id sets (2), 1st set (1385758746,1866209186), 2nd set (1385758746,1028247821) my votes (2), total votes (2)
2008-07-23 11:11:18.223: [  OCRMAS][25]th_master: Deleted ver keys from cache (non master)
2008-07-23 11:11:18.223: [  OCRMAS][25]th_master: Deleted ver keys from cache (non master)

(node 2 updates its local cache) and in the alert file of node 2:
2008-07-23 11:11:18.150
[crsd(10831)]CRS-1010:The OCR mirror location /dev/oracle/ocrmirror was removed.

Now we do an ocrcheck on node 2:

(nodeb01 /app/oracle/crs/log/nodeb01) $ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     295452
         Used space (kbytes)      :       5600
         Available space (kbytes) :     289852
         ID                       : 1930338735
         Device/File Name         : /dev/oracle/ocr
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded

Now the configuration looks ok again, but the error remains on node 2 (we do this as user oracle):
(nodeb01 /app/oracle/crs/log/nodeb01) $ srvctl remove service -d ARES -s aressrv
PRKR-1007 : getting of cluster database ARES configuration failed, PROC-5: User does not have permission to perform a cluster registry operation on this key. Authentication error [User does not have permission to perform this operation] [0]
PRKO-2005 : Application error: Failure in getting Cluster Database Configuration for: ARES

However doing the same command as root on node 2 succeeds:
-bash-3.00# srvctl remove service -d ARES -s aressrv
Service aressrv is disabled.
Remove service aressrv from the database ARES? (y/[n]) y

After this, managing the resources as user oracle succeeds again:
(nodeb01 /app/oracle/crs/log/nodeb01) $ srvctl add service -d ARES -s aressrv2 -r ARES1
(nodeb01 /app/oracle/crs/log/nodeb01) $ srvctl remove service -d ARES -s aressrv2
aressrv2 PREF: ARES1 AVAIL:
Remove service aressrv2 from the database ARES? (y/[n]) y

At this point, unfortunately, the internals end. At the moment of my testing I had no time to investigate this further, and since then I have had no time to build and test a similar setup (that’s why this blog posting took so long - I would have loved to do more research on this). However, I remember I have done some more testing at some customer site (but I have no transcript of that, so no details to write here) and I can still tell the following:

For some reason, the ownership of the ARES resource in the OCR seems to have been changed from oracle to root. A way to get out of this is to use the following command:
 crs_setperm <resource_name> -o oracle [ -g dba ]

This allows you to change the ownership back to oracle, and then all will be ok again.

I can’t say where it went wrong. Maybe I did something as root instead of oracle without knowing it (however, I double-checked my transcripts). I think it went wrong at the moment where I first tried to stop crs as root on node 2 and then did an “ocrconfig -overwrite” as root on node 2. I wonder if something was then sent to node 1 (who is the OCR master) as root, which may have changed some permissions in the ocr…? If anyone has the time and resources to investigate this further, please don’t hesitate to do so, and inform me about the results. That way, you may gain perpetual honour in my personal in-memory list of great Oracle guys.


Although crs is very robust and 2 storage boxes are ok, there may be situations where you get unexpected error messages. Hopefully this chapter will help you get out of them without problems, and strengthen your confidence in Oracle RAC.

Let’s make a final conclusion in the next chapter…

Oracle Closed World - an underground conference...

Moans Nogood - Sun, 2009-10-11 16:49
I'm here in San Francisco for the Oracle Open World conference along with four other guys from Miracle, the two crazy Miracle Finland guys and some other crazy people - we've rented a couple of big apartments as usual, and are doing work, beer and other essential stuff together.

Last year at Oracle Open World (OOW) my friend Iggy Fernandez, who edits the NOCOUG (Northern California Oracle User Group) magazine/journal, suggested an Oracle Closed World conference, where REAL, TECHNICAL presentations would take place underground in secret locations, using secret passwords, and what have you.

Well, it's here. Monday, Tuesday and Thursday, at a secret location, we'll do deep and very technical presentations about various topics. The secret location (which is indeed underground) has the capability to serve beer, by the way.

Let me know if you're interested in hearing more about OCW - email me on mno@MiracleAS.dk or text me on +45 2527 7100.


