DBA Blogs

Who's using a database link?

Jared Still - Fri, 2010-03-05 12:41
Every once in a while it is useful to find out which sessions are using a database link in an Oracle database. It's one of those things that you may not need very often, but when you do need it, it is usually rather important.

Unfortunately for those of us charged with the care and feeding of the Oracle RDBMS, this information is not terribly easy to track down.

Some years ago, when I first needed to determine which sessions were at each end of a database link, I found a script supplied courtesy of Mark Bobak. When asked, Mark said he got the script from Tom Kyte. Yong Huang includes this script on his website, and notes that Mark further attributed authorship in Metalink Forum thread 524821.994. Yong has informed me that this note is no longer available. I have found the script in other locations as well, such as Dan Morgan's website. So now you know the script's provenance.

Here's the script, complete with comments.  Following the script is an example of usage.


-- who is querying via dblink?
-- Courtesy of Tom Kyte, via Mark Bobak
-- this script can be used at both ends of the database link
-- to match up which session on the remote database started
-- the local transaction
-- the GTXID will match for those sessions
-- just run the script on both databases

select /*+ ORDERED */
       substr(s.ksusemnm,1,10)||'-'||substr(s.ksusepid,1,10) "ORIGIN",
       substr(g.k2gtitid_ora,1,35) "GTXID",
       substr(s.indx,1,4)||'.'||substr(s.ksuseser,1,5) "LSESSION",
       s2.username,
       substr(
          decode(bitand(ksuseidl,11),
             1, 'ACTIVE',
             0, decode(bitand(ksuseflg,4096), 0, 'INACTIVE', 'CACHED'),
             2, 'SNIPED',
             3, 'SNIPED',
             'KILLED'
          ), 1, 1
       ) "S",
       substr(w.event,1,10) "WAITING"
from x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where g.k2gtdxcb = t.ktcxbxba
and g.k2gtdses = t.ktcxbses
and s.addr = g.k2gtdses
and w.sid = s.indx
and s2.sid = w.sid;


Now let's take a look at the results of the script.

Logging on to DB1 as system, create a database link to db2 using the SCOTT account:

create database link scott_link connect to scott identified by "tiger" using 'db2';

Make sure it works:

system@db1 SQL> select sysdate from dual@scott_link;

SYSDATE
-------------------
03/05/2010 10:13:00

1 row selected.


Now log on to DB1 as SYSDBA and run who_dblink.sql:

sys@db1 SQL> @who_dblink

ORIGIN                GTXID                               LSESSION   USERNAME   S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
oraserver.-21901      DB1.d6d6d69e.3.16.7190              500.15059  SYSTEM     I SQL*Net me

1 row selected.

Now do the same on DB2:

sys@db2 SQL> @who_dblink

ORIGIN                GTXID                               LSESSION   USERNAME   S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
ordevdb01.-21903      DB1.d6d6d69e.3.16.7190              138.28152  SCOTT      I SQL*Net me

1 row selected.

How do you identify the session on the database where the database link connection was initiated?

Notice that the output from DB1 shows the PID in the ORIGIN column.  In this case it is 21901.

Running the following SQL on DB1 we can identify the session from which the SYSTEM user initiated the database link connection:

select
   b.username,
   b.sid SID,
   b.serial# SERIAL#,
   b.osuser,
   b.process
from v$session b,
   v$process d,
   v$sess_io e,
   v$timer   -- v$timer is a single-row view; it does not change the result set
where
   b.sid = e.sid
   and b.paddr = d.addr
   and b.username is not null
   -- the next predicate is commented out so that all sessions are shown,
   -- whether they have done any IO or not
   --and (e.consistent_gets + e.block_gets) > 0
   -- uncomment to see only your own session
   --and userenv('SESSIONID') = b.audsid
order by
   b.username,
   b.osuser,
   b.serial#,
   d.spid,
   b.process,
   b.status,
   e.consistent_gets,
   e.block_gets,
   e.physical_reads;


USERNAME      SID  SERIAL# OSUSER               PROCESS
---------- ------ -------- -------------------- ------------------------
SYS           507    12708 oracle               22917
SYSTEM        500    15059 oracle               21901

2 rows selected.

The session created using the SCOTT account via the database link can also be seen using the same query on DB2 and looking for the session with a PID of 21903:

USERNAME    SID SERIAL# OSUSER  PROCESS
---------- ---- ------- ------- ------------
SCOTT       138   28152 oracle  21903
SYS         147   33860 oracle  22991
            146   40204 oracle  24096

3 rows selected.
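If all you need is to map an OS PID from the ORIGIN column back to its session, a simpler lookup will do. A minimal sketch, using the PID 21901 seen above (v$process.spid is a string, hence the quotes):

select s.sid, s.serial#, s.username, s.osuser, p.spid
from v$session s, v$process p
where s.paddr = p.addr
and p.spid = '21901';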
Categories: DBA Blogs

Treasure Trove of Oracle Security Documents

Jared Still - Tue, 2010-03-02 11:32
This morning I stumbled across a veritable treasure trove of Oracle Security documents at the web site of the Defense Information Systems Agency.

I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.

The index of documents can be found here: DISA IT Security Documents

As you can see for yourself, there are documents covering security concerns for a number of other systems.
Categories: DBA Blogs

Compression for tables with more than 250 columns

Alejandro Vargas - Sun, 2010-02-28 19:15

Compression for tables with more than 250 columns

Tables with more than 255 columns cannot be compressed; this restriction remains in place even in 11g R2.

In the 11g R2 SQL Language Reference, page 16-36, we can read:

Restrictions on Table Compression

* COMPRESS FOR OLTP and COMPRESS BASIC are not supported for tables with more than 255 columns.

This is a serious limitation, especially for telecoms, where CDR tables can have column counts well over 255.

The available workaround:


  • Split the table into two sub-tables (a concrete sketch follows this list).

  • create table a as select pk, col1 .. col150 from origtable

  • create table b as select pk, col151 .. col300 from origtable

  • Each sub-table will then have fewer than 256 columns and can be compressed.

  • The sub-tables are joined by the primary key.

  • The original table is accessed through a view that exposes all of its columns.

  • create view origtable as select a.pk, a.col1 .. a.col150, b.col151 .. b.col300 from a, b where a.pk = b.pk
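Here is a minimal runnable sketch of the workaround. The table and column names are illustrative, and only the first columns of each range are written out; a real CDR table would list all of them:

create table cdr_a compress as
   select call_id, col_001, col_002 /* .. col_150 */ from orig_cdr;

create table cdr_b compress as
   select call_id, col_151, col_152 /* .. col_300 */ from orig_cdr;

create or replace view orig_cdr_v as
   select a.call_id, a.col_001, a.col_002 /* .. */, b.col_151, b.col_152 /* .. */
   from cdr_a a, cdr_b b
   where a.call_id = b.call_id;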

Categories: DBA Blogs

Cool but unknown RMAN feature

Jared Still - Fri, 2010-02-19 11:53
Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection while making backups was a local one, using the control file alone as the RMAN repository:
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a SYNC command was issued.
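A minimal sketch of that approach (the connection strings and backup command are illustrative):

# back up using only the control file as the repository
rman target / nocatalog <<EOF
backup database plus archivelog;
EOF

# later, once the catalog is reachable, bring it up to date
rman target / catalog rman/password@rcat <<EOF
resync catalog;
EOF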

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:


rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects


Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version 5.0.0.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 16-FEB-10

RMAN> RMAN>

Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====


That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

BACKUP_MODE: lvl_0
BACKUP_TYPE: INCREMENTAL LEVEL=0
ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.


RMAN;
connected to target database: OIDDEV (DBID=3982952863)

RMAN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

RMAN;
Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog" ?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed. This apparently only works when running in a scripted environment: when I tried connecting interactively on the command line, RMAN simply exited when the connection to the catalog could not be made.

The RMAN scripts are being run on a linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF


This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either - one of the databases being backed up is 9.2.0.6. And of course there is now no need to update the backup scripts.
Categories: DBA Blogs

RAC, ASM and Linux Forum, December 15, 13:30 - 18:00 Beit HP Raanana

Alejandro Vargas - Sun, 2009-12-13 03:04

It's time for our 2nd, 2009 RAC, ASM and Linux Forum in Tel Aviv!

I would like to warmly invite you to our next RAC, ASM and Linux forum to be held at Beit HP in Raanana, on December 15.

You can register on the Israel Oracle User Group site.

At this forum we will have an 11g R2 Technology Update and two very interesting customer presentations, one about minimum-downtime VLDB migration to RAC on Linux and one about creating and managing RPMs for Oracle silent installs on Linux.

Panel on Grid Infrastructure, ASM, Clusterware and RAC 11g R2, Technology Update

Annie Flint, Principal Member of Technical Staff, Oracle Corporation
Ofir Manor, Senior Sales Consultant, Oracle Israel
Alejandro Vargas, Principal Support Consultant, Oracle Advanced Customer Services

In the few months since our last forum in June many things have happened: 11g Release 2 is already production, and it brought a revolution in terms of performance and deep changes to the world of ASM, Oracle Clusterware and RAC.

Exadata Release 2 was released, opening the way for OLTP databases based on the new Oracle-Sun Database Machine.

In this seminar we will review the new face of ASM, Oracle Clusterware and RAC on 11g Release 2 and we will comment on some of the incredible performance enhancements of the new version.

Migration of a VLDB to RAC 11g with Minimal Downtime

Dotan Mor,
Senior DBA
Pelephone DBA Team

Dotan will tell us the whole story of migrating an 8 TB data warehouse, with near-zero downtime, from Linux 3 with OCFS2 to Linux 5 with CRS, RAC and ASM 11g over InfiniBand; and how his customer still cannot believe the incredible performance improvements they got.

He will also tell us about the several problems faced on the way to this big success, and how all of them were overcome.

Packaging Application and Database Together On Linux for Super-Silent-Installs

Liron Amitzi,
VP Professional Services
Brillix

Liron will explain how to build a Linux RPM that contains the whole set of files required to quickly and easily deploy a complete application, from the database to the last application executable.

See you there!

Best Regards,

Alejandro

Categories: DBA Blogs

Check IO Scripts

Alejandro Vargas - Wed, 2009-12-09 19:32

These scripts are very useful for checking IO throughput.

The original version can be found on My Oracle Support Note 422414.1 by Luca Canali.

set lines 250 pages 50000

alter session set nls_date_format='dd-mm-yyyy hh24:mi';

col Phys_Read_Total_Bps for 999999999999
col Phys_Write_Total_Bps for 999999999999
col Redo_Bytes_per_sec for 999999999999
col Phys_Read_IOPS for 999999999999
col Phys_write_IOPS for 999999999999
col Phys_redo_IOPS for 999999999999
col OS_LOad for 999999999999
col DB_CPU_Usage_per_sec for 999999999999
col Host_CPU_util for 999999999999
col Network_bytes_per_sec for 999999999999
col Phys_IO_Tot_MBps for 999999999999
col Phys_IOPS_Tot for 999999999999

spool io_max_checkup.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end) Phys_Read_Tot_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) Phys_Write_Tot_Bps,
sum(case metric_name when 'Redo Generated Per Sec' then maxval end) Redo_Bytes_per_sec,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) Phys_Read_IOPS,
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) Phys_write_IOPS,
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_redo_IOPS,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off

spool io_maxtot_summary.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Redo Generated Per Sec' then maxval end)/1024/1024 Phys_IO_Tot_MBps,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_IOPS_Tot,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off
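The queries above return one row per AWR snapshot across the whole retention window. To focus on a recent period, you can first look up the snap_id range of interest and then add a matching snap_id predicate to the queries; a small helper along these lines works:

select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where begin_interval_time > sysdate - 1
order by snap_id;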

Categories: DBA Blogs

Using OCFS2 as a Generic Cluster File System?

Sergio's Blog - Wed, 2009-12-09 00:56
If you use OCFS2 for purposes other than running Oracle software, please leave a comment or drop me a line: sergio[dot]leunissen[at]oracle[dot]com. I'm curious how you use OCFS2, how big your filesystems are, how many nodes are in the cluster, what you like about OCFS2, etc.
Categories: DBA Blogs

RAC install root.sh fail on 2nd Node with error: "Timed out waiting for the CRS stack to start"

Alejandro Vargas - Mon, 2009-11-30 20:51

Yesterday we spent the day trying to solve this problem.

The actual solution, once the problem was found, took a couple of minutes to solve.

Usually this issue will arise when there is a network misconfiguration. We did check that carefully and everything was well configured:

- Same device names at both ends
- No dropped packets or transmission errors
- Both sides configured full duplex

Then we decided to move the install software to the second node and used scp over the interconnect:

scp -r <software_dir> <node2-priv>:<destination>

scp progressed for a while and then stalled.

Retrying using the public network worked without any problem.

Any other test using the interconnect got stalled after a while.

The network people investigated and found that the switch was configured half duplex.

They corrected the problem and everything worked fine.

Oracle Support Note 745215.1 describes the problem and other areas to check as well.

Categories: DBA Blogs

0x1A

Freek D’Hooge - Sun, 2009-11-29 16:23

0x1A, better known as the end-of-file character.
Now also known as the cause of me wasting several hours analyzing a 500 MB raw trace file, trying to figure out why the tkprof report did not seem to be correct.

Still wondering why the application I was tracing had an EOF character in the value of a varchar2 type bind variable.
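If you ever need to hunt for the same problem, a quick check for embedded EOF characters is straightforward. A minimal sketch, with a hypothetical table and column (chr(26) is 0x1A):

select id
from my_table
where instr(my_column, chr(26)) > 0;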


Categories: DBA Blogs

Data Modeling

Jared Still - Thu, 2009-11-12 13:24

Most readers of the blog are probably DBA's, or do DBA work along with development or other duties.

Though my title is DBA, Data Modeling is something I really like to do.

When first learning Oracle, I cut my teeth on data modeling, and used CASE 5.1 on unix to model a database system. True, CASE 5.0 used an Oracle Forms 3.x based interface, and the GUI modeling was unix only.

That was all right with me, as the Forms interface allowed manual changes to be made quite quickly.

And the graphic modeling tool was fairly decent, even on a PC running Hummingbird X Server.

When Designer 2000 came out, it was clearly a more capable tool. Not only did it do everything that CASE 5.1 could do, it could do more. I won't make any silly claim that I was ever able to fully exploit D2K, as it was an end-to-end tool that could do much more than model data and databases.

What it could do with just the databases however was quite good.  Data models could be created, and then a physical database could be generated from the model.

Changes in the physical database could be reverse engineered back into the model, and changes in the model could be forward engineered into the physical database. D2K could truly separate logical and physical models, and allow changes to be migrated back and forth between the two.

There are other high end tools such as Erwin which can no doubt accomplish the same thing, but I have not used them.  

One important differentiation for me between D2K and other tools was that D2K worked with Barker Notation, which is the notation I first learned, and the one I still prefer.  

I should not speak of Designer 2000 in past tense I guess, as it is still available from Oracle as part of the Oracle Development Suite, but is now called Oracle Designer.  It just hasn't received much attention in the past few years, as I think many people have come to think of data modeling as too much overhead.  

I've tried several low end tools in the past few years, and while some claim to separate logical and physical models, those that I have tried actually do a rather poor job of it.

All this leads to some new (at least, new to me) developments from, of all places, Microsoft.

Maybe you have heard of Oslo, Microsoft's Data Modeling toolset that has been in development for the past couple years.

If you're just now hearing about it, you will likely be hearing much more. The bit I have read has made me think this will be a very impressive tool.

If you have done data modeling, you have likely used traditional tools that allow you to define entities, drop them on a graphical model, and define relationships.

The tool you used may even have allowed you to create domains that could be used to provide data consistency among the entities.

Oslo is different.  

Oslo incorporates a data definition language called M. M definitions can be translated to T-SQL, which in turn can be used to create the physical aspects of the model. M also allows easy creation of strongly typed data types, which are carried over into the model.

Whether Oslo will allow round trip engineering ala D2K, I don't yet know.

I do however think this is a very innovative approach to modeling data. 


You may be thinking that I have given SQL Developer Data Modeler short shrift.

Along with a lot of other folks, I eagerly anticipated the arrival of SQL Developer Data Modeler.

And along with many others, I was disappointed to learn that this add-on to SQL Developer would set us back a cool US $3,000 per seat. That seems a pretty steep price for a tool that is nowhere near as capable as Oracle Designer, which is included as part of the Oracle Internet Developer Suite. True, at $5,800 the Suite costs nearly double what SQL Modeler does, but you get quite a bit more than just Designer with the Suite.

As for the cost of Oslo, it's probably too early to tell.

Some reading suggests that it will be included as part of SQL Server 2008, but that is not yet confirmed.

Why all the talk about a SQL Server specific tool?

Because data modeling has been in a rut for quite some time, and Microsoft seems to have broken out of that rut.  It's time for Oracle to take notice and provide better tools for modeling, rather than upholding the status quo.

Categories: DBA Blogs

MetaLink, we barely knew ye

Jared Still - Mon, 2009-11-09 14:08
But, we wish we had more time to get better acquainted.

If you work with Oracle, you probably know that MetaLink went the way of the Dodo as part of an upgrade to My Oracle Support during the weekend of November 6th, 2009.

And so far it hasn't gone too well, as evidenced by these threads on Oracle-L:

Issues with My Oracle Support
Metalink Fiasco

Many people were lamenting the loss of MetaLink well before its demise, but I don't think any were quite expecting the issues that are currently appearing.

A few have reported that it is working fine for them, but personally I have found it unusable all morning.

At least one issue with MetaLink appears to have been cleared up in MOS, judging by what I saw last week while I was still able to log in.

During a routine audit of who had access to our CSI numbers, I came across a group of consultants that were no longer working for our company, and froze their accounts. The next day I received a frantic voice mail from a member of the consulting firm, who informed me that they had no access to MetaLink because I had frozen their accounts.


I returned the call just a few minutes later, but they had already been able to resolve the issue, as one of their consultants with admin rights had been unaffected, and was able to unfreeze their accounts.


Removing them from the CSI would have been the better procedure, but in the past when I attempted that, I found there were still open issues owned by the accounts and I could not remove them. The application owners had been very clear that this access should be removed, so freezing the accounts is what I did on this occasion as well.


This all seemed quite bizarre to me. There must be a very strange schema in the ML user database, and some strange logic to go along with it. By granting a user access to a CSI, MetaLink was giving me carte blanche to effectively remove them from MetaLink.

How has My Oracle Support fixed this? Try as I might, I could not find a 'freeze' button in user administration in MOS. So the fix seems to have been "remove the button".

Categories: DBA Blogs

Good Article About btrfs

Sergio's Blog - Tue, 2009-11-03 06:56

I found this article about btrfs helpful.

Categories: DBA Blogs

Two Oracle RAC bugs on the wall, two Oracle bugs. Take one down …

Freek D’Hooge - Mon, 2009-11-02 18:50

OK, not as good as beer, and they can give you a nasty headache, so you have been warned ;)

The reason for this post is two bugs I discovered with Oracle RAC, both resulting in a single point of failure.
The platform on which I’m working is Oracle 10gR2 (10.2.0.4) on OEL 4.7.

The first one is when you are using NFS volumes to host the ocr and ocrmirror volumes.
Normally, when the ocr volume gets corrupted or becomes unavailable, Oracle should fail over to the ocrmirror volume. The exact response is documented in the RAC FAQ on Metalink (note 220970.1) and is currently discussed in a series of blog posts by Geert de Paep.
With NFS, however, you must use both the nointr and hard mount options (the OS is OEL 4.7), and as a result a process that is trying to read or write an unavailable ocr volume will wait indefinitely for a response. This happens not only when using commands such as crs_stat or srvctl, but also when an instance or service failover is initiated.
Oracle support, however, does not exactly see it this way: they first blamed the OS, then the storage, and finally stated that no failover is foreseen between the ocr and ocrmirror volumes…
It took some escalating and a change of support engineer to get some progress in that SR (mind you, after more than 4 months they still have not acknowledged it as a bug).

The second problem is that, when you have made the public interface redundant with OS bonding, the racgvip script does not detect when all interfaces in the bond are disconnected.
This is because the script, unlike older versions, uses mii-tool to check the availability of the public interface. Only when mii-tool states that the link is down is a ping test done against the public gateway. If that test fails as well, the VIP fails over and the RAC instances on that node are placed in a blocked state.
The problem with mii-tool, however, is that it does not play very well with bonds, and always reports the bond status as up (in fact, regardless of the link state, mii-tool always reports a network bond as “bond0: 10 Mbit, half duplex, link ok”). So the racgvip script always thinks that the public interface is up.
As mii-tool is an OS utility, I first opened a case with Oracle Enterprise Linux support to check whether its behavior was normal (I had already confirmed it by googling, but Oracle support does not seem to accept results from Google :) ). After running multiple tests with different bond options, they finally stated that mii-tool was indeed obsolete and should not be used to verify a bond's status (yes, I know: its own man page already states that mii-tool is obsolete).
So next I opened an SR against the clusterware, and Oracle development promptly stated that it was not a clusterware bug but an OS issue, pointing the finger at mii-tool and asking where it was written that mii-tool is obsolete… After making them aware of the statement made by their OEL colleagues and of the mii-tool man page, they seem to have accepted it as a bug.
I have checked the 11gR2 version of the racgvip script, and it seems to suffer from the same problem.
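For reference, a quick way to see the discrepancy for yourself (interface names are illustrative; the status file is provided by the Linux bonding driver):

# mii-tool reports the bond itself as up, regardless of the slave links
mii-tool bond0

# the bonding driver's own status file shows the real state of each slave
grep -A 1 "Slave Interface" /proc/net/bonding/bond0

# ethtool reports the true link status of an individual slave NIC
ethtool eth0 | grep "Link detected"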

PS: Note 365605.1 – “Oracle Bug Status Codes, Descriptions and Usage” is, although it seems incomplete, very useful for understanding the different status codes.


Categories: DBA Blogs

Oracle Open World 2009 Summary

Inside the Oracle Optimizer - Wed, 2009-10-28 13:09
We had a great time talking to our users at Open World 2009 both at our Demogrounds booth and at our two sessions. We received a lot of interesting questions during the Optimizer Roundtable discussion, but we did not get to answer all of them due to time constraints. We plan to address the questions we received (both answered and unanswered) in future blog posts... so stay tuned. If you didn't get to attend the discussion, but have a question about the Optimizer, submit it through the email link above.

For those of you who did not get a chance to stop by our Demogrounds booth, here's a recap of the new features that we talked about. Many of the topics have already been covered in earlier blog posts.
These topics are focused on well-known pain points from earlier versions of Oracle. But we also have plenty of new optimizations in Oracle 11gR1 and 11gR2. Stay tuned for details about some of our new optimizations.

Categories: DBA Blogs, Development

Wintertime (again)

Freek D’Hooge - Sun, 2009-10-25 08:28

In my prior post on the effect of daylight saving settings on the Oracle scheduler, I already pointed out that it is best to set your session timezone information to a named timezone and not to an absolute offset. In this post I would like to investigate how the session timezone settings affect the sysdate, current_date, systimestamp and current_timestamp variables during the switchover to or from daylight saving time. Current_date and current_timestamp use the date/time information of the server on which the database runs, and modify that time using the timezone settings of the session.
As with the last post, the tests were done in response to the switch from wintertime to summertime, and I'm too lazy to redo them.

In the first test, I do not explicitly set timezone information in my session.
Both the server time and the client time have been set to a couple of minutes before the switchover from wintertime to summertime:

sys@GUNNAR> alter session set nls_date_format = 'DD/MM/YYYY HH24:MI:SS';

Session altered.

sys@GUNNAR> alter session set nls_timestamp_tz_format='DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ';

Session altered.

sys@GUNNAR> column systimestamp format a35
sys@GUNNAR> column current_timestamp format a35
sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 01:58:32 29/03/2009 01:58:32 29/03/2009 01:58:32 TZ: +01:00 DS:  29/03/2009 01:58:32 TZ: +01:00 DS:

As you can see, the timezone information uses the absolute offset notation and is set to GMT +1 (which corresponds with wintertime in Belgium).
After some minutes (when summertime had come into effect), I executed the same query again:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:00:18 29/03/2009 02:00:18 29/03/2009 03:00:18 TZ: +02:00 DS:  29/03/2009 02:00:18 TZ: +01:00 DS:

Both sysdate and systimestamp have jumped 1 hour into the future, and systimestamp now shows the timezone as GMT +2 (summertime in Belgium).
Current_date and current_timestamp both show the time without summertime corrections, but with current_timestamp the timezone information places the time in the right context.

Next, I disconnect and reconnect the session:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:01:15 29/03/2009 03:01:15 29/03/2009 03:01:15 TZ: +02:00 DS:  29/03/2009 03:01:15 TZ: +02:00 DS:

This time, all 4 show the same time and timezone information (all using summertime).
The explanation for this is that the timezone information for a session is determined when the session is created, and Oracle only applies daylight saving settings when using a named timezone. So as long as the session is connected, it uses the “old” timezone of GMT +1. With sysdate and systimestamp the timezone information comes from the server, not from the client.

In the second test, I have set the ORA_SDTZ variable in the client environment to “Europe/Brussels”:
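For reference, a sketch of how that can be done before starting the client, assuming a bash shell:

export ORA_SDTZ='Europe/Brussels'
sqlplus / as sysdba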

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------------------
29/03/2009 01:57:37 29/03/2009 01:57:38 29/03/2009 01:57:37 TZ: +01:00 DS:  29/03/2009 01:57:37 TZ: EUROPE/BRUSSELS DS: CET

### a couple of minutes later

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- ------------------------------------------------
29/03/2009 03:00:04 29/03/2009 03:00:04 29/03/2009 03:00:04 TZ: +02:00 DS:  29/03/2009 03:00:04 TZ: EUROPE/BRUSSELS DS: CEST

Both current_date and current_timestamp have now also jumped 1 hour into the “future”, and the daylight saving setting in current_timestamp has changed from CET (Central European Time) to CEST (Central European Summer Time).
To me this shows that it is important to set the timezone of your clients correctly, even if the database is not used from different timezones.
A long-running session is sufficient to pollute your data, certainly if you are using current_date, as it carries no timezone information.


Categories: DBA Blogs

switching to wintertime

Freek D’Hooge - Thu, 2009-10-22 11:17

In Belgium we are switching to wintertime this Sunday, which is a good opportunity for me to write this post.
I originally intended to write it when we switched to summertime, so everything will be from the point of view of changing from wintertime to summertime (confused yet?).

The reason I wanted to write about it was some alerts we got back then from our monitoring, concerning scheduler jobs that were no longer running on time.
It quickly became clear that these jobs did not follow the change to summertime, but instead ran an hour later.
The key is to look at the dba_scheduler_jobs view in the correct format. You see, the *_run_date columns are of the datatype “timestamp(6) with time zone”, so to get all the information you need to use the right format model. Using the TZR and TZD format elements you can see the timezone and the daylight saving information, respectively:

sys@WPS50> select job_name, to_char(last_start_date, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') last_start_date, to_char(next_run_date, 'DD/MM/YYYY HH24:MI:SS "TS:" TZR "DS:" TZD ') next_run_date from dba_scheduler_jobs;

JOB_NAME                       LAST_START_DATE                                    NEXT_RUN_DATE
------------------------------ -------------------------------------------------- --------------------------------------------------
AUTO_SPACE_ADVISOR_JOB         28/03/2009 06:00:04 TZ: +01:00 DS:
GATHER_STATS_JOB               02/02/2009 22:00:00 TZ: +01:00 DS:
FGR$AUTOPURGE_JOB
PURGE_LOG                      29/03/2009 03:00:00 TZ: MET DS: MEST               30/03/2009 03:00:00 TS: MET DS: MEST
ANALYZETHIS_PURGEHISTORY       29/03/2009 17:00:00 TZ: +01:00 DS:                 30/03/2009 17:00:00 TS: +01:00 DS:
GATHER_WK_TEST_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_SESSIONUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_RELEASEUSR_STATS        29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_LMDBUSR_STATS           29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_ICMADMIN_STATS          29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_COMMUNITYUSR_STATS      29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
GATHER_CUSTOMIZATIONUSR_STATS  29/03/2009 18:00:00 TZ: +01:00 DS:                 30/03/2009 18:00:00 TS: +01:00 DS:
MGMT_STATS_CONFIG_JOB          01/03/2009 01:01:01 TZ: +01:00 DS:                 01/04/2009 01:01:01 TS: +01:00 DS:
MGMT_CONFIG_JOB                28/03/2009 06:00:04 TZ: +01:00 DS:

14 rows selected.

(Note the additional space after the TZD format element; I needed to add it to actually show the information when using the "DS:" literal in front. This is probably a bug.)

As you can see, each job has its own timezone offset, and some also have daylight saving information.
So, what happened with our jobs? Well, when a job gets created, Oracle stores the timezone information of the start_date parameter. If this timezone is specified as an absolute offset, then no daylight saving changes are applied.
When the server switches to summertime (GMT +2 in Belgium), the scheduler job stays in its own little world and remains in the timezone GMT +1.
So, when for the rest of the database the time is 07:00, the job thinks it is still 06:00 and does not start. As the monitoring check did not take the timezone of the job into account, it reported the job as being late.

To avoid this situation, you need to use a named timezone, in which case Oracle will automatically apply the correct daylight saving settings.
How do you do this? Well, either you use to_timestamp_tz to convert a text string to a timestamp with timezone information, or Oracle retrieves the timezone from your session.
The timezone information in your session can be set with alter session, or by using the ORA_SDTZ variable in your client environment.
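A minimal sketch of both methods (the timezone name and literal are illustrative):

alter session set time_zone = 'Europe/Brussels';

select to_timestamp_tz('29-03-2009 01:58 Europe/Brussels', 'DD-MM-YYYY HH24:MI TZR') from dual;

In the client environment (bash syntax), the equivalent would be:

export ORA_SDTZ='Europe/Brussels'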
But there is a catch. In the following example I have set my timezone to Europe/Brussels, and then verified the timezone information in systimestamp:

sys@WPS50> select sessiontimezone from dual;

 SESSIONTIMEZONE
---------------------------------------------------------------------------
Europe/Brussels

sys@WPS50> select to_char(systimestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

TO_CHAR(SYSTIMESTAMP,'DD/MM/YYYYHH24:MI:SS"TZ:"TZR"DS:"TZD')
--------------------------------------------------------------------
30/03/2009 01:57:15 TZ: +02:00 DS:

As you can see, the timezone part uses an absolute offset notation, not a named timezone.
Systimestamp will never use the named timezone notation, so whenever you use systimestamp as the value for the start_date parameter in dbms_scheduler, you will use an absolute offset and thus not follow daylight saving switches.
The current_timestamp variable will, however, use the correct notation:

sys@WPS50> select to_char(current_timestamp, 'DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ') from dual;

TO_CHAR(CURRENT_TIMESTAMP,'DD/MM/YYYYHH24:MI:SS"TZ:"TZR"DS:"TZD')
--------------------------------------------------------------------
30/03/2009 01:57:18 TZ: EUROPE/BRUSSELS DS: CEST

So, when you want to specify the current date as the value for the start_date parameter, use current_timestamp and not systimestamp.
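A minimal sketch of a job created this way (the job name, action and interval are illustrative):

begin
   dbms_scheduler.create_job(
      job_name        => 'TZ_SAFE_JOB',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'begin null; end;',
      start_date      => current_timestamp,  -- carries a named timezone when the session uses one
      repeat_interval => 'freq=daily; byhour=6',
      enabled         => true);
end;
/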

This timezone behaviour is only applicable when you have an interval of at least 1 day. With smaller intervals, Oracle will make sure that the period between 2 runs remains the same.
If a job runs every 3 hours and last ran at midnight, and the clock is then moved forward from 02:00 to 03:00, the next run date of the job becomes 04:00, so that the 3-hour period between two job runs is retained.

More information on this, including how Oracle behaves when no start_date parameter is given, can be found in the Oracle documentation.

The database version on which the tests were done is 10.2.0.4.


Categories: DBA Blogs

Multiple standby databases and supplemental logging

Freek D’Hooge - Tue, 2009-10-20 11:10

A quick warning:

When you set up a logical standby database, you need to activate supplemental logging on the primary database.
This is done automatically when you build the data dictionary (by running the dbms_logstdby.build procedure).
Activating supplemental logging is, however (I know now), a control file change, and is thus not replicated to the other physical standby databases.
As a result, the logical standby will become (logically) corrupt when you perform a role switch between your primary and another physical standby database.

I learned this the hard way  :(
Luckily it was during a proof of concept and not in a real production environment … .

Of course, AFTERWARDS, I found the following MAA document, which points out that you have to enable supplemental logging yourself on the other physical standby databases.
It still makes a good read, though.
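A quick way to verify where each database stands, and to fix a standby that lacks it, is sketched below; run the check on every database in the configuration (the alter matches what dbms_logstdby.build enables):

select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
from v$database;

alter database add supplemental log data (primary key, unique index) columns;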


Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs