DBA Blogs

Data Modeling

Jared Still - Thu, 2009-11-12 13:24

Most readers of this blog are probably DBAs, or do DBA work along with development or other duties.

Though my title is DBA, Data Modeling is something I really like to do.

When first learning Oracle, I cut my teeth on data modeling, using CASE 5.1 on Unix to model a database system. True, CASE 5.0 used an Oracle Forms 3.x based interface, and the GUI modeling was Unix only.

That was all right with me, as the Forms interface allowed manual changes to be made quite quickly.

And the graphic modeling tool was fairly decent, even on a PC running Hummingbird X Server.

When Designer 2000 came out, it was clearly a more capable tool: it did everything that CASE 5.1 could do, and more. I won't make any silly claim that I was ever able to fully exploit D2K, as it was an end-to-end tool that could do much more than model data and databases.

What it could do with just the databases, however, was quite good.  Data models could be created, and then a physical database could be generated from the model.

Changes in the physical database could be reverse engineered back into the model, and changes in the model could be forward engineered into the physical database. D2K could truly separate logical and physical models, and allowed changes to be migrated back and forth between the two.

There are other high-end tools, such as ERwin, which can no doubt accomplish the same thing, but I have not used them.

One important differentiator for me between D2K and other tools was that D2K worked with Barker notation, which is the notation I first learned, and the one I still prefer.

I guess I should not speak of Designer 2000 in the past tense, as it is still available from Oracle as part of the Oracle Development Suite, but is now called Oracle Designer.  It just hasn't received much attention in the past few years, as I think many people have come to regard data modeling as too much overhead.

I've tried several low-end tools in the past few years, and while some claim to separate logical and physical models, those that I have tried actually do a rather poor job of it.

All this leads to some new (at least, new to me) developments from, of all places, Microsoft.

Maybe you have heard of Oslo, Microsoft's data modeling toolset that has been in development for the past couple of years.

If you're just now hearing about it, you will likely be hearing much more. The bit I have read has made me think this will be a very impressive tool.

If you have done data modeling, you have likely used traditional tools that allow you to define entities, drop them on a graphical model, and define relationships.

The tool you used may even have allowed you to create domains that could be used to provide data consistency among the entities.

Oslo is different.  

Oslo incorporates a data definition language called M. M definitions can be translated to T-SQL, which in turn can be used to create the physical aspects of the model.  M also allows easy creation of strongly typed data types, which are carried over into the model.
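To make that concrete, here is a purely hypothetical sketch of the kind of T-SQL a tool like Oslo might generate from a model; the table, column, and constraint names are mine, not actual Oslo output:

-- hypothetical DDL of the sort a model compiler might emit
-- from a strongly typed "Customer" entity definition
CREATE TABLE [Sales].[Customers] (
    [Id]   INT           NOT NULL PRIMARY KEY,
    [Name] NVARCHAR(100) NOT NULL,
    [Age]  INT           NOT NULL,
    -- a strongly typed domain becomes a column plus a check constraint
    CONSTRAINT [Check_Customers_Age] CHECK ([Age] BETWEEN 0 AND 150)
);

The point is that the type constraints live in the textual model definition and flow into the physical schema, rather than being bolted on afterward.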

Whether Oslo will allow round-trip engineering à la D2K, I don't yet know.

I do however think this is a very innovative approach to modeling data. 

Here are a few Oslo-related links to peruse:


You may be thinking that I have given SQL Developer Data Modeler short shrift.

Along with a lot of other folks, I eagerly anticipated the arrival of SQL Developer Data Modeler.

And along with many others, I was disappointed to learn that this add-on to SQL Developer would set us back a cool $3,000 US per seat.  That seems a pretty steep price for a tool that is nowhere near as capable as Oracle Designer, which is included as part of the Oracle Internet Developer Suite. True, the Suite's price of $5,800 is nearly double that of SQL Developer Data Modeler, but you get quite a bit more than just Designer with the Suite.

As for the cost of Oslo, it's probably too early to tell.

Some reading suggests that it will be included as part of SQL Server 2008, but nothing seems certain yet.

Why all the talk about a SQL Server specific tool?

Because data modeling has been in a rut for quite some time, and Microsoft seems to have broken out of that rut.  It's time for Oracle to take notice and provide better tools for modeling, rather than upholding the status quo.







Categories: DBA Blogs

MetaLink, we barely knew ye

Jared Still - Mon, 2009-11-09 14:08
But, we wish we had more time to get better acquainted.

If you work with Oracle, you probably know that MetaLink went the way of the Dodo as part of an upgrade to My Oracle Support during the weekend of November 6th, 2009.

And so far it hasn't gone too well, as evidenced by these threads on Oracle-L:

Issues with My Oracle Support
Metalink Fiasco

Many people were lamenting the loss of MetaLink well before its demise, but I don't think any were quite expecting the issues that are currently appearing.

A few have reported that it is working fine for them, but personally, I have found  it unusable all morning.

At least one issue with MetaLink appears to have been cleared up in MOS, or so it seemed while I was still able to log in last week.

During a routine audit of who had access to our CSI numbers, I came across a group of consultants who were no longer working for our company, and froze their accounts.  The next day I received a frantic voice mail from a member of the consulting firm, informing me that they had no access to MetaLink because I had frozen their accounts.


I returned the call just a few minutes later, but they had already been able to resolve the issue, as one of their consultants with admin rights had been unaffected, and was able to unfreeze their accounts.


Removing them from the CSI is the better procedure, but in the past when I attempted to do so, I found that there were still open issues owned by the accounts, and so I could not remove them. The application owners had been very clear that this access should be removed, so I froze the accounts instead, and that is what I did on this occasion as well.


This all seemed quite bizarre to me.  There must be a very strange schema in the ML user database, and some strange logic to go along with it.  Because a user had been granted access to my CSI, MetaLink was giving me carte blanche to effectively remove them from MetaLink altogether.

How has My Oracle Support fixed this?  Try as I might, I could not find a 'freeze' button in user administration in MOS.  So the fix seems to have been "remove the button".

Categories: DBA Blogs

Good Article About btrfs

Sergio's Blog - Tue, 2009-11-03 06:56

I found this article about btrfs helpful.

Categories: DBA Blogs

Two Oracle RAC bugs on the wall, two Oracle bugs. Take one down …

Freek D’Hooge - Mon, 2009-11-02 18:50

OK, not as good as beer, and they can give you a nasty headache, so you have been warned   ;)

The reason for this post is 2 bugs I discovered with Oracle RAC, both resulting in a single point of failure.
The platform on which I'm working is Oracle 10gR2 (10.2.0.4) on OEL 4.7.

The first one occurs when you are using NFS volumes to host the ocr and ocrmirror volumes.
Normally, when the ocr volume gets corrupted or becomes unavailable, Oracle should fail over to the ocrmirror volume. The exact response is documented in the RAC FAQ on MetaLink (note 220970.1) and is currently discussed in a series of blog posts by Geert de Paep.
With NFS, however, you must use both the nointr and hard mount options (the OS is OEL 4.7), and as a result a process that tries to read or write an unavailable ocr volume will wait indefinitely for a response. This happens not only with commands such as crs_stat or srvctl, but also when an instance or service failover is initiated.
Oracle support, however, does not exactly see it this way, and has first blamed the OS, then the storage, and finally stated that there is no failover foreseen between the ocr and ocrmirror volumes…
It took some escalating and a change of support engineer to get some progress in that SR (mind you, after more than 4 months they still have not acknowledged it as a bug).

The second problem is that, when you have made the public interface redundant with OS bonding, the racgvip script does not detect when all interfaces in the bond are disconnected.
This is because the script, unlike older versions, uses mii-tool to check the availability of the public interface. Only when mii-tool states that the link is down is a ping test done against the public gateway. If that test fails as well, the vip fails over and the RAC instances on that node are placed in a blocked state.
The problem with mii-tool, however, is that it does not play well with bonds, and always reports the bond status as being up (in fact, regardless of the link state, mii-tool always reports a network bond as “bond0: 10 Mbit, half duplex, link ok”). So the racgvip script always thinks that the public interface is up.
As mii-tool is an OS utility, I first opened a case with Oracle Enterprise Linux support to check with them whether its behavior was normal (I had already confirmed it by googling, but Oracle support does not seem to accept results from Google :)   ). And after running multiple tests with different bond options, they finally stated that mii-tool was indeed obsolete and should not be used to verify a bond status (yes, I know, its own man page already states that mii-tool is obsolete).
So next I opened an SR against the clusterware, and Oracle development promptly stated that it was not a clusterware bug but an OS issue, pointing the finger at mii-tool and asking where it was written that mii-tool is obsolete… After being made aware of the statement by their OEL colleagues and of the mii-tool man page, they seem to have accepted it as a bug.
I have checked the 11gR2 version of the racgvip script, and it seems to suffer from the same problem.

ps) Note 365605.1 – “Oracle Bug Status Codes, Descriptions and Usage” is, although it seems incomplete, very useful for understanding the different status codes.


Categories: DBA Blogs

Oracle Open World 2009 Summary

Inside the Oracle Optimizer - Wed, 2009-10-28 13:09
We had a great time talking to our users at Open World 2009 both at our Demogrounds booth and at our two sessions. We received a lot of interesting questions during the Optimizer Roundtable discussion, but we did not get to answer all of them due to time constraints. We plan to address the questions we received (both answered and unanswered) in future blog posts... so stay tuned. If you didn't get to attend the discussion, but have a question about the Optimizer, submit it through the email link above.

For those of you who did not get a chance to stop by our Demogrounds booth, here's a recap of the new features that we talked about. Many of the topics have already been covered in earlier blog posts.
These topics are focused on well-known pain points from earlier versions of Oracle. But we also have plenty of new optimizations in Oracle 11gR1 and 11gR2. Stay tuned for details about some of our new optimizations.

Categories: DBA Blogs, Development

Wintertime (again)

Freek D’Hooge - Sun, 2009-10-25 08:28

During my prior post on the effect of daylight saving settings on the Oracle scheduler, I already pointed out that it is best to set your session timezone information to a named timezone and not to an absolute offset. In this post I would like to investigate how the session timezone settings affect the sysdate, current_date, systimestamp and current_timestamp variables during the switchover to or from daylight saving time. Current_date and current_timestamp use the date/time information of the server on which the database runs, and modify that time using the timezone settings of the session.
As with the last post, the tests were done in response to the switch from wintertime to summertime, and I'm too lazy to redo them.

In the first test, I do not explicitly set timezone information in my session.
Both the server time and the client time had been set to a couple of minutes before the switchover from wintertime to summertime:

sys@GUNNAR> alter session set nls_date_format = 'DD/MM/YYYY HH24:MI:SS';

Session altered.

sys@GUNNAR> alter session set nls_timestamp_tz_format='DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ';

Session altered.

sys@GUNNAR> column systimestamp format a35
sys@GUNNAR> column current_timestamp format a35
sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 01:58:32 29/03/2009 01:58:32 29/03/2009 01:58:32 TZ: +01:00 DS:  29/03/2009 01:58:32 TZ: +01:00 DS:

As you can see, the timezone information uses the absolute offset notation and is set to GMT +1 (which corresponds to wintertime in Belgium).
After some minutes (when summertime came into effect), I execute the same query again:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:00:18 29/03/2009 02:00:18 29/03/2009 03:00:18 TZ: +02:00 DS:  29/03/2009 02:00:18 TZ: +01:00 DS:

Both sysdate and systimestamp have jumped 1 hour into the future, and systimestamp now shows the timezone as “GMT +2” (summertime in Belgium).
Current_date and current_timestamp both show the time without summertime corrections, but with current_timestamp the timezone information places the time in the right context.

Next, I disconnect and reconnect the session:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:01:15 29/03/2009 03:01:15 29/03/2009 03:01:15 TZ: +02:00 DS:  29/03/2009 03:01:15 TZ: +02:00 DS:

This time, all 4 show the same time and timezone information (all using summertime).
The explanation for this is that the timezone information for a session is determined when the session is created, and Oracle only applies daylight saving settings when using a named timezone. So as long as the session was connected, it used the “old” timezone of GMT +1. With sysdate and systimestamp the timezone information comes from the server, not from the client.

In the second test, I have set the ORA_SDTZ variable in the client environment to “Europe/Brussels”:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------------------
29/03/2009 01:57:37 29/03/2009 01:57:38 29/03/2009 01:57:37 TZ: +01:00 DS:  29/03/2009 01:57:37 TZ: EUROPE/BRUSSELS DS: CET

### a couple of minutes later

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- ------------------------------------------------
29/03/2009 03:00:04 29/03/2009 03:00:04 29/03/2009 03:00:04 TZ: +02:00 DS:  29/03/2009 03:00:04 TZ: EUROPE/BRUSSELS DS: CEST

Both current_date and current_timestamp have now also jumped 1 hour into the “future”, and the daylight saving indicator in current_timestamp has changed from CET (Central European Time) to CEST (Central European Summer Time).
To me this shows that it is important to set the timezone of your clients correctly, even if the database is not used from different timezones.
A long running session is sufficient to pollute your data, certainly if you are using current_date, as it carries no timezone information.


Categories: DBA Blogs

switching to wintertime

Freek D’Hooge - Thu, 2009-10-22 11:17

In Belgium we are switching to wintertime this Sunday, which is a good opportunity for me to write this post.
I originally intended to write it when we switched to summertime, so everything will be from the point of view of changing from wintertime to summertime (confused yet?).

The reason that I wanted to write about it was some alerts we got back then from our monitoring, concerning scheduler jobs that were no longer running on time.
It quickly became clear that these jobs did not follow the change to summertime, but instead ran an hour later.
The key is to look at the dba_scheduler_jobs view in the correct format. You see, the *_run_date columns are of the datatype “timestamp(6) with time zone”, so to get all the information you need to use the right format model. Using the TZR and TZD format elements you can see the timezone and the daylight saving information respectively:

sys@WPS50> select job_name, to_char(last_start_date, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') last_start_date, to_char(next_run_date, 'DD/MM/YYYY HH24:MI:SS "TS:" TZR "DS:" TZD ') next_run_date from dba_scheduler_jobs;

JOB_NAME                       LAST_START_DATE                                    NEXT_RUN_DATE
------------------------------ -------------------------------------------------- --------------------------------------------------
AUTO_SPACE_ADVISOR_JOB         28/03/2009 06:00:04 TZ: +01:00 DS:
GATHER_STATS_JOB               02/02/2009 22:00:00 TZ: +01:00 DS:
FGR$AUTOPURGE_JOB
PURGE_LOG                      29/03/2009 03:00:00 TZ: MET DS: MEST               30/03/2009 03:00:00 TS: MET DS: MEST
ANALYZETHIS_PURGEHISTORY       29/03/2009 17:00:00 TZ: +01:00 DS:                 30/03/2009 17:00:00 TS: +01:00 DS:
GATHER_WK_TEST_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_SESSIONUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_RELEASEUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_LMDBUSR_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_ICMADMIN_STATS          29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_COMMUNITYUSR_STATS      29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_CUSTOMIZATIONUSR_STATS  29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
MGMT_STATS_CONFIG_JOB          01/03/2009 01:01:01 TZ: +01:00 DS:                 01/04/2009 01:01:01 TS: +01:00 DS:
MGMT_CONFIG_JOB                28/03/2009 06:00:04 TZ: +01:00 DS:

14 rows selected.

(Note the additional space after the TZD format element; I needed to add it to actually show the information when using the "DS:" literal in front. Probably this is a bug.)

As you can see, each job has its own timezone offset and some have also daylight saving information.
So, what happened with our jobs? Well, when a job gets created, Oracle stores the timezone information of the start_date parameter. If this timezone is specified as an absolute offset, then no daylight saving changes are applied.
When the server switches to summertime (GMT +2 in Belgium), the scheduler job stays in its own little world and remains in the timezone GMT +1.
So, when for the rest of the database the time is 07:00, the job thinks it is still 06:00 and does not start. As the monitoring check did not take the timezone of the job into account, it reported the job as being late.

To avoid this situation, you need to use a named timezone, in which case Oracle will automatically apply the correct daylight saving settings.
How do you do this? Well, either you use to_timestamp_tz to convert a text string to a timestamp with timezone information, or Oracle takes the timezone from your session.
The timezone information in your session can be set with alter session, or by setting the ORA_SDTZ variable in your client environment.
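As an illustration, here is a minimal sketch (the job name and schedule are hypothetical) of creating a job whose start_date carries a named timezone, so that it follows the daylight saving switches:

alter session set time_zone = 'Europe/Brussels';

begin
  dbms_scheduler.create_job(
    job_name        => 'TZ_TEST_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',
    -- a named timezone in start_date makes the job follow DST switches;
    -- current_timestamp would equally pick up the named session timezone
    start_date      => to_timestamp_tz('30/03/2009 08:00:00 Europe/Brussels',
                                       'DD/MM/YYYY HH24:MI:SS TZR'),
    repeat_interval => 'freq=daily',
    enabled         => true);
end;
/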
But there is a catch. In the following example I have set my timezone to Europe/Brussels, and then verified the timezone information in systimestamp:

sys@WPS50> select sessiontimezone from dual;

 SESSIONTIMEZONE
---------------------------------------------------------------------------
Europe/Brussels

sys@WPS50> select to_char(systimestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

TO_CHAR(SYSTIMESTAMP,'DD/MM/YYYYHH24:MI:SS"TZ:"TZR"DS:"TZD')
--------------------------------------------------------------------
30/03/2009 01:57:15 TZ: +02:00 DS:

As you can see, the timezone part uses an absolute offset notation, not a named timezone.
Systimestamp will never use the named timezone notation, so whenever you use systimestamp as the value for the start_date parameter in dbms_scheduler, you will use an absolute offset and thus not follow daylight saving switches.
Current_timestamp, however, will use the correct notation:

sys@WPS50> select to_char(current_timestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

TO_CHAR(CURRENT_TIMESTAMP,'DD/MM/YYYYHH24:MI:SS"TZ:"TZR"DS:"TZD')
--------------------------------------------------------------------
30/03/2009 01:57:18 TZ: EUROPE/BRUSSELS DS: CEST

So, when you want to specify the current date as the value for the start_date parameter, use current_timestamp and not systimestamp.

This timezone stuff is only applicable when you have an interval of at least 1 day. With smaller intervals, Oracle will make sure that the period between 2 runs remains the same.
If a job runs every 3 hours and last ran at midnight, and the clock is then moved forward from 02:00 to 03:00, the next run date of the job becomes 04:00, so that the 3 hour period between two job runs is retained.

More information on this, including how Oracle behaves when no start_date parameter is given, can be found here:

The database version on which the tests were done is 10.2.0.4.


Categories: DBA Blogs

Multiple standby databases and supplemental logging

Freek D’Hooge - Tue, 2009-10-20 11:10

A quick warning:

When you set up a logical standby database, you need to activate supplemental logging on the primary database.
This is done automatically when you build the data dictionary (by running the dbms_logstdby.build procedure).
Activating supplemental logging is however (I know now) a control file change, and is thus not replicated to the other physical standby databases.
As a result, the logical standby will become logically corrupt when you perform a role switch between your primary and another physical standby database.

I learned this the hard way  :(
Luckily it was during a proof of concept and not in a real production environment…

Of course, AFTERWARDS, I found the following MAA document, which points out that you have to enable supplemental logging yourself on the other physical standby databases.
It still makes a good read though.
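For what it is worth, a minimal sketch of the check and the fix, to be run on each physical standby that may become primary (the exact supplemental logging level should match what dbms_logstdby.build set up on the primary):

-- does this control file already carry supplemental logging?
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
  from v$database;

-- if not, enable it to match the primary
alter database add supplemental log data (primary key, unique index) columns;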


Categories: DBA Blogs

Oracle Open World 2009 Report - Part Two

Jared Still - Thu, 2009-10-15 14:13

Tuesday October 13th

Unconference on Indexes
Richard Foote
10/13/2009 10:00 AM

I started off the day attending the indexing presentation of fellow Oak Table member Richard Foote.  Foote has become quite well known for his expertise on index internals since the publication of Oracle B-Tree Index Internals: Rebuilding the Truth.

This was basically a Q&A session, and I will include just a couple of the questions.

Q: Have you ever seen an index Skip Scan used correctly?
A: The short answer was 'No'

Foote mentioned that he had only rarely seen an index skip scan used, and then inappropriately.  For more information on skip scan, see Foote's blog entry on Index Skip Scans

Q: When can you safely drop an index that doesn't seem to be used?
A: That is very difficult to determine

The explanation for this answer is that it is very difficult to determine whether an index is never used. Doing so requires some patience, as the code that uses the index may be run only rarely, making it difficult to determine if it is actually used.
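One technique that can help (not something discussed in the session, and with the caveat that it only records whether the index appeared in an execution plan, not whether it supports constraints or avoids locking issues) is the built-in index usage monitoring; a minimal sketch with a hypothetical index name:

alter index scott.my_idx monitoring usage;

-- after a representative period, check whether the index was ever used
select index_name, monitoring, used, start_monitoring, end_monitoring
  from v$object_usage
 where index_name = 'MY_IDX';

alter index scott.my_idx nomonitoring usage;

Note that v$object_usage only shows indexes owned by the querying schema, so run the query as the index owner.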

Oracle Closed World

OCW actually started on Monday, though due to the wonders of technology I missed it on that day.  The event was invitation only, either by being present when it was mentioned, or by receiving an SMS text on your phone.

This is where technology comes in.  The SMS was rather garbled, and through a series of very short SMS messages I received what seemed to be an invitation to stroll into a dark alley somewhere in downtown San Francisco.  It was later cleared up, and I attended on Tuesday.

Oracle Closed World is the brainchild of Mogens Norgaard, another Oak Table member and co-founder of Miracle A/S, an Oracle consulting firm.

On Tuesday Jonathan Lewis reprised his "How to be an Expert" presentation, the difference being that this audience was comprised of folks with a wide breadth of Oracle knowledge.

Lewis took advantage of this by making the questions harder, and chiding the audience for not knowing the answers.  All was in good fun. Undoubtedly the presence of beer didn't make the questions any easier to answer.

Wednesday was a presentation by Jeremiah Wilton, Oak Table member and formerly a DBA at Amazon.com.

Wilton presented a live demo on using Amazon's Elastic Compute Cloud (EC2) to provision a Linux server, using Elastic Block Storage (EBS) to provide persistent storage, and a preconfigured Amazon Machine Image (AMI) to provision the server with Oracle already installed.

The fact that Wilton was able to do this during a 1 hour live demo, complete with the inevitable mishaps that can occur during a live demo, and still finish the task, was quite impressive.

This appears to be a great method to setup test instances of Oracle for experimentation.  There are companies using this for production use as well.

 Amazon Web Services

Perl - A DBA's and Developers Best (forgotten) Friend
Arjen Visser - Avisit Solutions
10/13/2009 

Perl is a topic near and dear to my heart.

I have been using it since version 4 in the early 1990's, and have advocated its use ever since.  It is a robust and powerful language with a huge collection of libraries developed by the user community and archived in the Comprehensive Perl Archive Network (http://cpan.org/).

When I spotted the Perl session on the schedule I immediately signed up for it.

What I had not noticed was the subtitle indicating it was a session for beginners.

No matter, I had to go.

The session began with a concise but clear introduction to Perl basics.

So far, so good.

When the time came to discuss Perl connectivity to Oracle, it was a bit surprising to be confronted with a slide showing how to use Perl as a wrapper for sqlplus.

"Surely" I thought, "this is going to be a slide showing how not to do it"

If you have used Perl with Oracle, you are no doubt familiar with DBI and DBD::Oracle.

DBI is the Perl database interface module, developed and maintained by Tim Bunce.

DBD::Oracle is the Oracle driver for DBI, also originally developed and maintained by Tim Bunce, and now maintained by The Pythian Group.

DBI and DBD::Oracle are very mature and robust Perl packages for using Perl with Oracle.

You would also likely know that using Perl as a wrapper for sqlplus is cumbersome and inelegant. So as not to write a whole treatise on why you shouldn't do this, I will simply say that doing so is rarely necessary, and never an optimal method.

Which brings us to the next slide in the presentation, which had a diagram showing how DBI and DBD::Oracle fit into the Perl architecture.

The speaker then told the audience that these were hard to install and difficult to use, and didn't recommend using them.

After picking my jaw back up off the floor, I lost all interest in the rest of the presentation.  I don't remember what the rest of the slides were; maybe I blacked out from the shock. What I do remember is walking away from the presentation rather incredulous.

Just last week, a friend who had not used Perl asked me how to install it on a Solaris server.  With only a few lines of email that I typed from memory, he was able to successfully install DBI and DBD::Oracle.

Hard to install indeed.

11 Things about 11gR2
Tom Kyte

Really it was Tom's top 10 list for 11gR2; he liked his favorite feature so much he counted it twice.

And that is the one I will mention.

It is Edition-Based Redefinition.

In a nutshell, this feature allows you to create a new namespace for PL/SQL objects, creating new versions of them in a production database.

This will allow upgrading applications with little or no downtime, something that has always been one of the DBA holy grails.

Rather than try to explain it (OK, I don't yet know how it works), I will just tell you to take a look at Chapter 19 in the 11gR2 Advanced Application Developer's Guide.
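That said, from what the documentation shows, the basic moves look something like this; a minimal sketch with hypothetical names, not a worked upgrade:

-- editions-enable the application schema (a one-time, one-way change)
alter user scott enable editions;

-- create a new edition as a child of the default one
create edition app_v2 as child of ora$base;

-- sessions that switch to the new edition see (and compile) new
-- versions of editionable objects, invisible to ora$base sessions
alter session set edition = app_v2;
create or replace procedure scott.do_work as
begin
  null;  -- version 2 of the code would go here
end;
/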

Wednesday Keynote
Larry Ellison

Ellison promised to discuss 4 topics; I will include 3 of them.

I left before the Fusion Middleware discussion.

Oracle Enterprise Linux update

One interesting fact presented was a survey performed by HP detailing Linux usage in corporate data centers.  The numbers are rather surprising.

* Oracle Enterprise Linux 65%
* Redhat 37%
* Suse 15%
* Other 2%


Next was the second generation of the Exadata Database Machine.

Essentially, it is faster than gen 1.

It was used to set a new TPC-C benchmark record; I believe it was 1,000,000 transactions per second.

Ellison was proud of the record being 16 times faster than the record previously set by IBM, and rightfully so if those numbers are correct.

It seems IBM has challenged the results, however, claiming the Exadata 2 is 'only 6 times faster'.  As you might imagine, Ellison had some fun with that, even offering a $10 million prize to anyone who can show that a Sun Exadata machine cannot run their app at least twice as fast as any other system.  IBM is invited to participate.

At this time Ellison welcomed a special guest to the stage: California Governor Arnold Schwarzenegger.

Commenting on being in a room with so many IT folks, Schwarzenegger said, "As I came out on stage I felt my IQ shoot up 10 points."

Schwarzenegger talked for a few minutes on the impact of technology on people's lives: "Technology's impact is flesh and blood", in reference to how tech is used to aid the response of public services such as firefighting.

Arnold called for a round of applause for Larry Ellison and Scott McNealy for being technology leaders.

The camera cut to Ellison, looking uncharacteristically humble as he mouthed 'Thank you'.

After Schwarzenegger left the stage, Ellison continued, this time discussing My Oracle Support.

My Oracle Support has been a hot topic lately, as the majority of DBAs are less than thrilled with the new Flash interface being imposed.  It is my understanding that an HTML version of the interface will be maintained, so we won't have to deal with Flash if we don't want to.

Here's where it gets interesting - the unification of Oracle Enterprise Manager and My Oracle Support.

There is now a 'My Oracle Support' tab in OEM.

DBAs will be allowed to opt in to OCM, the Oracle Configuration Manager, allowing Oracle to perform automated discovery of bugs and needed patches, whether in Oracle software or in other vendors' software on the server (OS bugs).

Oracle will then have a global database to mine for proactive responses to possible problems.

When a configuration is found to have issues, all users with that configuration can be proactively notified.

The real news IMO though is the impact on patching.

Oracle recently started offering a new patch package: the PSU (Patch Set Update).

This is different from the CPU patch system, as it may require merge patches to resolve patch conflicts.

If My Oracle Support in OEM determines that a merge patch is needed, it will automatically file an SR requesting the patch and notify you when it is available.

Even if you don't like OEM, this may be a good use of it.

Ok, that's enough for now, time for lunch.



Categories: DBA Blogs

Oracle Open World Report for October 11th and 12th

Jared Still - Wed, 2009-10-14 12:03
As I am attending Open World 2009 on blogger credentials, it seems proper I should  actually blog about it.

So, here it is.  I won't be blogging about keynotes or other things that will appear in the news the following day, but rather on some of the sessions I attend.

As I got back to my room too late and too tired to do this properly on Monday, I am putting Sunday and Monday in the same post.

Here goes:


Open World - Sunday 10/11/2009

While attending Oracle Open World 2009, I thought it a good idea to make some report of sessions attended, and any interesting developments at OOW.

Some of the sessions I attended may not be considered DBA topics. I thought it would be interesting to break out of the DBA mold for a bit and attend some sessions that might be a bit outside the DBA realm.

Sue Harper - Everyday Tasks with Oracle SQL Developer

Sue Harper is the product manager for SQL Developer, and was presenting some of the useful new features of the SQL Developer 2.1 Early Adopter release.

While I have used SQL Developer from the time it was first released as Raptor, until recently I used it simply as a database browsing tool.  After exploring some of the features that allow writing reports with master/detail sections, I converted some SQL*Plus scripts for use with SQL Developer.

SQL Developer is a very capable tool, so I attended this session to see what else I might be missing out on.

There was only one hour allocated for the session, and given the first 15 minutes were consumed convincing the audience why they should be using SQL Developer, there was just that much less time available to see the new features.

Taking a few minutes to market it is probably just in the product manager's DNA.

Some of the features demonstrated were actually available in 1.5, but maybe not widely known.  As I have not used 2.1, I won't always differentiate between versions here. Some of these features may not be new to 2.1, maybe just improved.

Though not a new feature in 2.1, a few minutes were used to demonstrate the built-in version control integration. This is a very useful feature, and can be set up for seamless integration with CVS, Subversion, Perforce, and one other I can't recall now.  It's definitely worth a look.

Some features that are new to 2.1 looked very useful:

Persistent column organizing and hiding.  When viewing data in SQL Developer, the columns may be easily rearranged and selected or de-selected for viewing.  While previous versions allowed dragging columns around, 2.1 has a nice dialog that makes this much easier.

New to 2.1 is column filtering.  By right clicking on a cell in the data pane, a dialog can be brought up to filter the data based on values found.  This allows filtering the data without requerying the table.

Also new to 2.1 is an XML DB Repository Navigator. It was mentioned, but alas there was no time to demonstrate it.

http://www.oracle.com/technology/products/database/sql_developer/index.html

http://sqldeveloper.oracle.com/


Christopher Jones - Python/Django Framework

This was a hands-on developer session centered on using the Python scripting language with the Django web application framework.  This was a fun session.  The lab was already set up, running Oracle Linux VMs with access via individual laptops in the training room.

The lab was a go-at-your-own-pace session, with instructions both printed and available via browser.  Browser-based was the way to go for the instructions, as the examples could be cut and pasted, saving a lot of typing.

I wasn't able to quite complete the lab, as I kept encountering an error when running the web app.  It was probably due to an error in one of the scripts I modified during the session, but enough was accomplished to see that the Django framework looked very interesting.  Perhaps even simple enough for a DBA to use.  Yes, I did search the schedule for a similar Perl session, perhaps using Mason or somesuch.

The training materials are going to be placed on OTN in the Oracle By Example section after Open World concludes.

http://www.oracle.com/technology/obe/start/index.html

Ray Smith - Linux Shell Scripting Craftsmanship

The last session I attended on Sunday was all about shell script craftsmanship. Ray Smith shared some common sense methods that can be used to greatly enhance your shell scripts.

If you have done any software development, the information presented would be similar to what you already know.

  • Use white space and format your code for readability.
  • Don't be overly clever - other people will have to read the shell script.
  • Format your scripts with a header explaining the purpose of the script, and separate sections for independent and dependent variables, and a section for the code.
  • Use getopts to control command line arguments.


Smith strongly advocated that everyone in the audience obtain a copy of the book "The Art of Unix Programming" by Eric S. Raymond.  This is not a new book by any means, but Smith drew upon it for many of the principles he advocated in scripting.



A couple of tools new to me were mentioned:

Zenity and Dialog - both of these present dialog boxes that may be called from shell scripts on Linux.

http://freshmeat.net/projects/zenity

Dialog is installed with Linux, so just run 'man dialog' to check it out.

It was an interesting presentation.  Though a lot of it was not new to me, the two dialog tools mentioned were, showing that no matter how well you think you may know a subject, you can always learn something from someone else.


Open World - Monday 10/12/2009


Jonathan Lewis Unconference - How to be an Expert
http://www.jlcomp.demon.co.uk/
http://jonathanlewis.wordpress.com/

Jonathan Lewis had an interesting unconference presentation.

http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference

In a nutshell, it comes down to this:

You must practice, and practice quite a lot.

To make the point, he used the joke about the American tourist asking the groundskeeper how the lawns of the Royal Estates are maintained to be so lush, have such an even texture, and, in short, to be so perfect.

The groundskeeper explained while the tourist took notes.

First you must dig down 4 inches.

Then you must put down a layer of charcoal.

Then another 1 inch layer of fine sharp sand.

Finally a layer of fine loam goes on top.

You then must seed the lawn, and water it very well for 6 weeks.

After 6 weeks, you must cut the grass, being very careful to remove only a small amount as you mow.  This must be done three times a week.

And then you continue doing this for 200 years.

Ok, everyone had a good laugh at that, but the point was made.

Reading some books and being able to run some scripts does not make you an expert.  Lots and lots of practice may make you an expert, if you apply yourself well.

During the presentation he asked a number of questions of the audience, made up mostly of DBAs. I will reprise a couple of them here.

Q1:  Assuming you have a simple heap table, with no indexes, you update a single column in 1 row of the table.  How many bytes of redo will that generate?

Q2: Who of you, when you insert data into a table, deliberately inserts duplicate data into the database?

I will leave you to speculate on the answers a bit.

Of those 2 questions, only 1 was answered correctly by the audience.

Leng Tan and Tom Kyte: DBA 2.0 - Battle of the DBAs

What is the difference between a DBA 1.0 (the old days) and a DBA 2.0 (the modern DBA)?

DBA 2.0 has modern tools: a self-managing database enabled by AWR and the Diagnostics and Tuning packs.

DBA 1.0 uses scripts and works from the command line.

On the stage, in addition to Kyte and Tan, were two DBAs, each with a laptop and an Oracle server to work on.

Two scenarios were presented as timed, hands-on problems that each DBA had to work through.

First scenario - Security Audit

Each DBA is given 6 minutes to do a database audit and report on possible vulnerabilities

DBA 1.0 ran scripts to check for open accounts, default passwords, publicly granted packages and umask settings.

After doing so he ran a script to remove privileges granted to PUBLIC, and locked a couple of accounts.

DBA 2.0

DBA 2.0 worked from the Oracle Enterprise Manager console, using the Secure Configuration for Oracle Database.

He was able to observe the database security score, navigate through several screens and correct the same security problems that DBA 1.0 did.  Following that he was able to see that the security score for the database had improved.

So the conclusion made by the presenter was that OEM is clearly superior, because OEM will automatically generate the needed data every night.

By contrast, DBA 1.0 can only do one database at a time.

I do not believe this demonstration to be a valid comparison - it's quite simple to run the same checks against any number of databases from a script, and report on anomalies.

At this point it should be mentioned that DBA 1.0 spent 4 minutes explaining what he was going to do, another minute explaining what the scripts were doing, and less than 1 minute actually running the scripts.

By comparison, DBA 2.0 spent nearly the entire 6 minutes navigating through screens.

The statement was made by the presenter that doing this with scripts at the command line was far too tedious a task, and that DBA 1.0 would never be able to accomplish it for 200 databases.

I won't belabor the point (well, not too much), but automating these kinds of tasks is relatively simple with command line tools.  Which is easier and more productive: automating a set of scripts to poll all of your databases, or navigating through OEM for 200 databases?

The presenter referred to using OEM as "really convenient".  Sorry, but I have never found OEM to be at all convenient.  Whenever I run into problems with it, it requires an SR to fix.

There was a round 2 as well, regarding testing execution plans before and after setting optimizer_features_enable to a newer version.  OEM fared well here compared to the scripting method, as the scripts used 'explain plan' while OEM actually executed the queries to gather execution plan information.

That isn't to say, however, that the scripts could not be modified to do the same.  No, I am not completely against GUI environments; I am just against making more work of DBA tasks.

Enough for now; I will report on Tuesday's sessions later this week.
Categories: DBA Blogs

Open World Recap and New White papers

Inside the Oracle Optimizer - Fri, 2009-10-09 14:32
The Optimizer group has two sessions and a demo station in the Database campground at this year's Oracle Open World. We will give a technical presentation on What to Expect from the Oracle Optimizer When Upgrading to Oracle Database 11g and host an Oracle Optimizer Roundtable.

The technical session, which is on Tuesday Oct 13 at 2:30 pm, gives step by step instructions on how to use the new 11g features to ensure your upgrade goes smoothly and without any SQL plan regressions. This session is based on our latest white papers, Upgrading from Oracle Database 10g to 11g: What to expect from the Optimizer and SQL Plan Management in Oracle Database 11g.

The roundtable, which is on Thursday, Oct. 15th at 10:30 am, will give you a first-hand opportunity to pose your burning Optimizer and statistics questions directly to a panel of our leading Optimizer developers. In fact, if you plan to attend the roundtable and already know what questions you would like to ask, please send them to us via email and we will be sure to include them. Otherwise, you can hand in your questions at our demo station at any stage during the week, or as you enter the session. Just be sure to write your questions in clear block capitals!

We look forward to seeing you all at Open World.

Categories: DBA Blogs, Development

Module name for logon trigger in 11g R2 AWR report

Virag Sharma - Mon, 2009-09-28 00:53
Today, while working on production performance tuning, we came across one difference between the AWR reports of 11g R1 and 11g R2.

In an Oracle 11g R1 AWR report, the module name for a logon trigger usually comes through like
sqlplus / perl / Mid Tier@server_name_from_it_login

But in an 11g R2 AWR report, the module name for a logon trigger usually comes through like oraagent.bin@DB_SERVER_WHERE_IT_RUN

The new behavior looks more logical, because it is the DB server that runs the logon trigger code, not the user.




# Excerpt from 11g R1 AWR report
# In the example below, app678 is the server name from which
# the user logged in to the database using sqlplus

Module: sqlplus@app678utl (TNS V1-V3)
UPDATE scott.MY_AUDIT SET LAST_LOGIN = SYSDATE WHERE XYZ_USERNAME = :B1


# Excerpt from 11g R2 AWR report
# In the example below, apps001 is the database server

Module: oraagent.bin@apps001 (TNS V1-V3)
UPDATE scott.MY_AUDIT SET LAST_LOGIN = SYSDATE WHERE XYZ_USERNAME = :B1
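For context, here is a minimal sketch, reconstructed from the statement above with hypothetical object names, of the kind of logon trigger that produces this UPDATE (the PL/SQL variable is what shows up as the :B1 bind in the AWR report):

create or replace trigger scott.my_audit_logon
after logon on database
declare
  v_user varchar2(30) := sys_context('userenv', 'session_user');
begin
  update scott.my_audit
     set last_login = sysdate
   where xyz_username = v_user;  -- referenced as :B1 in the SQL
end;
/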



Categories: DBA Blogs

ASM Hands-On Training, Server Environment Setup And Aliases

Alejandro Vargas - Sat, 2009-09-12 05:37

The server where the labs of the ASM Hands-On Training run is configured with tcsh as the default shell for the oracle user.

A .cshrc file containing several aliases was set up to ease moving around and executing repetitive commands.

This document gives a general explanation of how the environment can be used.

Details on this file: Server Environment Setup And Aliases

Categories: DBA Blogs

ASM Hands-On Training, Lab 21, ASMCMD Use And Options

Alejandro Vargas - Sat, 2009-09-12 05:27

In this lab we will review some of the useful commands provided by the ASM command line utility.

Some of the asmcmd commands display information; this information is based on the v$asm views. Other commands, like mkdir or rm, actually make changes to the structure.

Details on file: ASMCMD Use And Options
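As a quick taste of the relationship with the v$asm views, the kind of disk group summary asmcmd shows with lsdg can also be pulled straight from the underlying view; a minimal sketch:

-- roughly the disk group summary that 'asmcmd lsdg' reports
select name, state, type, total_mb, free_mb
  from v$asm_diskgroup;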

Categories: DBA Blogs

ASM Hands-On Training, Lab 20, Storage Consolidation With ASM

Alejandro Vargas - Sat, 2009-09-12 05:22

In this lab we will share our ASM disks with a second server, and we will open the sati12 database on it.

To do that, we copied the VM to another location without including the ASM disks, which are located in a separate folder; this way the second VM points to the same ASM disks as the first one.

Details on file: Storage Consolidation With ASM

Categories: DBA Blogs

ASM Hands-On Training, Lab18, ASM Metadata and Other Checkups

Alejandro Vargas - Sat, 2009-09-12 05:01

In this lab we will review various utilities that provide further information for managing ASM.

Checksum Mismatch After Storage Crash; AMDU, an 11g tool that can be used with 10g as well; Blockdumps; Asmiostats; and asmdebug.

Details on this file: ASM Metadata and Other Checkups

Categories: DBA Blogs

ASM Hands-On Training, Lab 17, Measuring Throughput

Alejandro Vargas - Sat, 2009-09-12 03:55

In this lab we will review simple methods for getting information about the database throughput and response time.

We will produce the load using Swingbench, and we will gather AWR snapshots every 10 minutes. After some time we will be able to check throughput statistics based on the AWR snapshot information.

Details on this Document: Measuring Throughput

Another useful script for checking IO:

set pages 50000 lines 250
alter session set nls_date_format='dd-mm-yy hh24:mi';

spool chkio1-Physical-rw.log
-- average read/write throughput per AWR snapshot
select min(begin_time), max(end_time),
       sum(case metric_name when 'Physical Read Total Bytes Per Sec'  then average end) physical_read_total_bps,
       sum(case metric_name when 'Physical Write Total Bytes Per Sec' then average end) physical_write_total_bps,
       snap_id
  from dba_hist_sysmetric_summary
 group by snap_id
 order by snap_id;
spool off

-- peak read/write throughput per AWR snapshot, plus the combined total
select min(begin_time), max(end_time),
       sum(case metric_name when 'Physical Read Total Bytes Per Sec'  then maxval end) physical_read_total_bps,
       sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) physical_write_total_bps,
       sum(case metric_name when 'Physical Read Total Bytes Per Sec'  then maxval end) +
       sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) total_io,
       snap_id
  from dba_hist_sysmetric_summary
 group by snap_id
 order by snap_id
/

Categories: DBA Blogs

ASM Hands-On Training, Lab 16, Configuring And Running Swingbench And OSWatcher

Alejandro Vargas - Sat, 2009-09-12 01:02

Whenever implementing new environments, it is convenient to run stress tests and to gather OS statistics that will provide valuable diagnostic information.

Swingbench provides an easily configurable tool to run OLTP or DSS-like stress tests; that is very useful when the customer does not have any possibility to benchmark the new configuration.

OSWatcher will collect statistics from the OS in a cyclic 2-day window by default, which can be extended to whatever period is convenient for you.

Details on this file:
Lab16-Configure-SwingBench-and-OsWatcher.pdf

Categories: DBA Blogs

ASM Hands-On Training, Lab 15, Cloning A Disk Group

Alejandro Vargas - Sat, 2009-09-12 00:16

Sometimes it is necessary to have several clones of the same database running on the same server. That was simple to do when the database was file system based, and it is still simple to do with the help of RMAN.

But there are very large databases, usually cloned using storage tools like BCV, SRDF or snapshots, for which RMAN is not a viable possibility.

For these databases there is the possibility to implement the procedure we will test in this lab.

Note that the rename disk group script has not been made public.
The rename disk group functionality is available in 11gR2, which was made available to the general public in August 2009.

These are the steps required to complete the process:

1. Prepare a set of storage devices with enough space to hold a copy of the diskgroup to be copied
2. Shut down the source database
3. Make a bit-by-bit copy of the source database on the target devices
4. Add the kfed path to the path of user root
5. Execute the rename diskgroup script
6. Rename the ASM disks of the cloned devices
7. Rescan the ASM disks
8. Start the ASM instance and mount the new diskgroup
9. Recreate the controlfile and rename the database based on the cloned diskgroup
10. Open the clone with the resetlogs option

Details on this file:
Cloning Disk Group

Categories: DBA Blogs

ASM Hands-On Training, Lab 14, 11g Compatibility Parameters and Resilience Test

Alejandro Vargas - Fri, 2009-09-11 21:48

In this lab we will review the 11g compatibility parameters and run 3 resilience tests.

The compatibility parameters compatible.asm and compatible.rdbms define the minimum ASM and database versions that will be able to connect to a disk group. These parameters can only be advanced; once raised, they cannot be lowered again.
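A minimal sketch of how these attributes can be viewed and advanced (the disk group name is hypothetical):

-- current compatibility settings per disk group
select name, compatibility, database_compatibility
  from v$asm_diskgroup;

-- advance the attributes (a one-way operation)
alter diskgroup data set attribute 'compatible.asm' = '11.1';
alter diskgroup data set attribute 'compatible.rdbms' = '11.1';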

In the resilience tests we will do the following:


  • Add ASM disks and bring down the ASM instance in the middle of the operation
  • Remove ASM disks and bring down the ASM instance in the middle of the operation
  • Destroy the ASM metadata and restore the disk

Details on this file:
11g Compatibility Parameters and Resilience Test

Categories: DBA Blogs
