DBA Blogs

The Oracle Enterprise Linux Software and Hardware Ecosystem

Sergio's Blog - Wed, 2010-05-19 00:43

It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux.

As Oracle Enterprise Linux is fully compatible, both source and binary, with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on it as well. Because of this compatibility, Oracle also certifies its software for use on RHEL, without any additional testing.

Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via oelhelp_ww@oracle.com

  • Chip Manufacturers
  • Server Vendors
  • Storage Systems, Volume Management and File Systems
  • Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand
  • SOA and Middleware
  • Backup, Recovery & Replication
  • Data Center Automation
  • Clustering & High Availability
  • Virtualization Platforms and Cloud Providers
  • Security Management
Categories: DBA Blogs

The Next RAC, ASM and Linux Forum. May 4, 2010 Beit HP Raanana

Alejandro Vargas - Tue, 2010-04-27 18:39

The next RAC, ASM and Linux forum will take place next week; there is still time to register: Israel Oracle Users Group RAC, ASM and Linux Forum

This time we will have a panel formed by Principal Oracle Advanced Customer Services Engineers and RAC experts Galit Elad and Nickita Chernovski, and Senior Oracle Advanced Customer Services Engineers and RAC experts Roy Burstein and Dorit Noga.

They will address the subject: five years of experience with RAC at Israeli customers, and the lessons learned. It is a wonderful opportunity to meet the people who have been present at most major implementations and who have helped solve all major issues over the last years.

In addition we will have two very interesting customer presentations:

Visa Cal DBA Team Leader Harel Safra will talk about their experience with scalability using standard Linux servers for their mission-critical data warehouse.

Bank Discount Infrastructure DBA Uril Levin, who is in charge of the bank's Backup and Recovery Project, will speak about their corporate backup solution using RMAN, which includes an end-to-end solution for VLDBs and mission-critical databases. It is one of the most interesting RMAN implementations in Israel.

This time I will not be able to attend myself, as I'm abroad on business; Galit Elad will greet you and lead the meeting.

I'm sure you will enjoy a very, very interesting meeting.

Beit HP is located at 9 Dafna Street, Raanana

Best Regards



Categories: DBA Blogs

Certain things in this world

Virag Sharma - Tue, 2010-04-13 09:03

"In this world nothing can be said to be certain but death and taxes."
--Benjamin Franklin

Categories: DBA Blogs

Oracle Enterprise Linux 5.5 Installation Media Now Available

Sergio's Blog - Tue, 2010-04-13 00:10
OEL 5.5 was released on ULN on April 5, 2010 and on public-yum.oracle.com shortly after. Now you can also download the installation DVD or CDs from edelivery.oracle.com/linux.
Categories: DBA Blogs

OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

Sergio's Blog - Wed, 2010-03-24 09:30

Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers.  OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community.  It was accepted into Linux kernel 2.6.16.

OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.

Categories: DBA Blogs

Cloning A Database On The Same Server Using Rman Duplicate From Active Database

Alejandro Vargas - Sat, 2010-03-20 01:16

To clone a database using RMAN we used to require an existing RMAN backup; in 11g we can clone databases using the "from active database" option.

In this case we do not require an existing backup; the active datafiles are used as the source for the clone.

In order to clone with the source database open, it must be in archivelog mode. Otherwise, we can perform the clone with the source database mounted, as shown in this example.
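For reference, checking and enabling archivelog mode on the source looks like this sketch (run as SYSDBA):

```sql
-- Check whether the source database is in archivelog mode
select log_mode from v$database;

-- If it reports NOARCHIVELOG and the clone must run against an open source,
-- enable archiving first (this requires a short outage):
shutdown immediate
startup mount
alter database archivelog;
alter database open;
```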

These are the steps required to complete the clone:

  1. Configure The Network

  2. Create A Password File For The New Database

  3. Create An Init.Ora For The New Database

  4. Create The Admin Directory For The New Database

  5. Shutdown And Startup Mount The Source Database

  6. Startup Nomount The New Database

  7. Connect To The Target (Source) And Auxiliary (New Clone) Databases Using Rman

  8. Execute The Duplicate Command

  9. Remove The Old Pfile

  10. Check The New Database

A step by step example is provided in this file: rman-duplicate-from-active-database.pdf
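The heart of step 8, the duplicate command itself, can be sketched as follows; the connect strings, database names and file paths are hypothetical placeholders, not taken from the PDF:

```sql
-- Step 7: connect to the source (target) and the clone (auxiliary)
-- in a single RMAN session:
-- rman target sys/password@source11g auxiliary sys/password@clone11g

-- Step 8: duplicate straight from the active datafiles, no backup needed
DUPLICATE TARGET DATABASE TO clone11g
  FROM ACTIVE DATABASE
  SPFILE
    SET db_file_name_convert  '/oradata/source11g/','/oradata/clone11g/'
    SET log_file_name_convert '/oradata/source11g/','/oradata/clone11g/';
```

The two SET clauses are what make a same-server clone possible, remapping the datafile and redo log paths so the clone does not collide with the source.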

Categories: DBA Blogs

Who's using a database link?

Jared Still - Fri, 2010-03-05 12:41
Every once in a while it is useful to find out which sessions are using a database link in an Oracle database. It's one of those things that you may not need very often, but when you do need it, it is usually rather important.

Unfortunately for those of us charged with the care and feeding of the Oracle RDBMS, this information is not terribly easy to track down.

Some years ago when I first had need to determine which sessions were at each end of a database link, I found a script that was supplied courtesy of Mark Bobak. When asked, Mark said he got the script from Tom Kyte. Yong Huang includes this script on his website, and notes that Mark further attributed authorship in Metalink Forum thread 524821.994. Yong has informed me that this note is no longer available. I have found the script in other locations as well, such as Dan Morgan's website. So now you know the script's provenance.

Here's the script, complete with comments.  Following the script is an example of usage.

-- who is querying via dblink?
-- Courtesy of Tom Kyte, via Mark Bobak
-- this script can be used at both ends of the database link
-- to match up which session on the remote database started
-- the local transaction
-- the GTXID will match for those sessions
-- just run the script on both databases

select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10) "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
   decode(bitand(ksuseidl,11),
      1, 'ACTIVE',
      0, decode(bitand(ksuseflg,4096), 0, 'INACTIVE', 'CACHED'),
      2, 'SNIPED',
      3, 'SNIPED',
      'KILLED'
   ), 1, 1
) "S",
substr(w.event,1,10) "WAITING"
from x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where g.K2GTDXCB = t.ktcxbxba
and g.K2GTDSES = t.ktcxbses
and s.addr = g.K2GTDSES
and w.sid = s.indx
and s2.sid = w.sid;

Now let's take a look at the results of the script.

Logging on to DB1 as system, create a database link to db2 using the SCOTT account:

create database link scott_link connect to scott identified by "tiger" using 'db2';

Make sure it works:

system@db1 SQL> select sysdate from dual@scott_link;

03/05/2010 10:13:00

1 row selected.

Now logon to DB1 as sysdba and run who_dblink.sql:

sys@db1 SQL> @who_dblink

--------------------- ----------------------------------- ---------- ---------- - ----------
oraserver.-21901 DB1.d6d6d69e.3.16.7190 500.15059 SYSTEM I SQL*Net me

1 row selected.

Now do the same on DB2:

sys@db2 SQL> @who_dblink

--------------------- ----------------------------------- ---------- ---------- - ----------
ordevdb01.-21903 DB1.d6d6d69e.3.16.7190 138.28152 SCOTT I SQL*Net me

1 row selected.

How do you identify the session on the database where the database link connection was initiated?

Notice that the output from DB1 shows the PID in the ORIGIN column.  In this case it is 21901.

Running the following SQL on DB1 we can identify the session from which the SYSTEM user initiated the database link connection:

select
b.username,
b.sid SID,
b.serial# SERIAL#,
b.osuser,
d.spid PID
from v$session b,
v$process d,
v$sess_io e
where b.sid = e.sid
and b.paddr = d.addr
and b.username is not null
-- uncomment to see only sessions that have done IO
--and (e.consistent_gets + e.block_gets) > 0
-- uncomment to see only your own session
--and userenv('SESSIONID') = b.audsid
order by b.username;

---------- ------ -------- -------------------- ------------------------
SYS 507 12708 oracle 22917

SYSTEM 500 15059 oracle 21901

2 rows selected.

The session created using the SCOTT account via the database link can also be seen using the same query on DB2 and looking for the session with a PID of 21903:

---------- ---- ------- ------- ------------
SCOTT 138 28152 oracle 21903
SYS 147 33860 oracle 22991
146 40204 oracle 24096

3 rows selected.
Categories: DBA Blogs

Treasure Trove of Oracle Security Documents

Jared Still - Tue, 2010-03-02 11:32
This morning I stumbled across a veritable treasure trove of Oracle Security documents at the web site of the Defense Information Systems Agency.

I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.

The index of documents can be found here: DISA IT Security Documents

As you can see for yourself, there are documents covering security concerns for a number of other systems.
Categories: DBA Blogs

Compression for tables with more than 250 columns

Alejandro Vargas - Sun, 2010-02-28 19:15

Tables with more than 255 columns cannot be compressed; this restriction remains in place even in 11g R2.

In the 11g R2 SQL Language Reference manual, page 16-36, we can read:

Restrictions on Table Compression

* COMPRESS FOR OLTP and COMPRESS BASIC are not supported for tables with more than 255 columns.

This is a serious limitation, especially for telecoms, where CDR tables can have a number of columns way over 255.

The available workaround:

  • Split the table into 2 sub-tables.

  • create table A as select pk,field 1 to 150 from origtable

  • create table B as select pk,field 151 to 300 from origtable

  • Each one will have fewer than 255 columns.

  • They will be joined by the primary key.

  • The table will be accessed using a view that has all the columns of the original table.

  • create view origtable as select a.pk,field a.1 to a.150, field b.151 to b.300 from a, b where a.pk=b.pk
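As a concrete sketch of the workaround, with hypothetical names and only a handful of columns shown for brevity:

```sql
-- Split the wide table into two halves, each under the 255-column limit
create table part_a as select pk, col1   /* ... through col150 */ from orig_table;
create table part_b as select pk, col151 /* ... through col300 */ from orig_table;

-- Each half can now be compressed (11g R2 syntax)
alter table part_a move compress for oltp;
alter table part_b move compress for oltp;

-- Expose the original shape through a view that joins on the primary key
create or replace view orig_table_v as
select a.pk, a.col1 /* ... */, b.col151 /* ... */
from part_a a, part_b b
where a.pk = b.pk;
```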

Categories: DBA Blogs

Cool but unknown RMAN feature

Jared Still - Fri, 2010-02-19 11:53
Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection while making backups was a local one using the control file only for the RMAN repository.
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a SYNC command was issued.
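That catalog sync step can be sketched like this; the connect strings are the same placeholder ones used elsewhere in this post:

```sql
-- After the controlfile-only backup succeeds, connect to the catalog:
-- rman target sys/manager catalog rman/password@rcat
-- and bring it up to date:
RESYNC CATALOG;
```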

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:

rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to setup TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects

Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 16-FEB-10


Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on Tue Feb 16 04:07:59 PST 2010 ====

That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

ORACLE_SID : oiddev
PWD_SID : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: OIDDEV (DBID=3982952863)

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog" ?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed.  This apparently only works when running in a scripted environment, as when I tried connecting on the command line RMAN would simply exit when the connection to the catalog could not be made.

The RMAN scripts are being run on a linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF
This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either; one of the databases being backed up is an older release. And of course there is now no need to update the backup scripts.
Categories: DBA Blogs

RAC, ASM and Linux Forum, December 15, 13:30 - 18:00 Beit HP Raanana

Alejandro Vargas - Sun, 2009-12-13 03:04

It's time for our 2nd, 2009 RAC, ASM and Linux Forum in Tel Aviv!

I would like to warmly invite you to our next RAC, ASM and Linux forum to be held at Beit HP in Raanana, on December 15.

You can register on the Israel Oracle User Group site.

On this forum we will have a 11g R2 Technology Update and 2 very interesting Customer Presentations about minimum downtime VLDB Migration to RAC on Linux and Creating and Managing RPM's for Oracle Silent Install on Linux.

Panel on Grid Infrastructure, ASM, Clusterware and RAC 11g R2, Technology Update

Annie Flint, Principal Member of Technical Staff, Oracle Corporation
Ofir Manor, Senior Sales Consultant, Oracle Israel
Alejandro Vargas, Principal Support Consultant, Oracle Advanced Customer Services

In the few months since the last forum in June many things have happened: 11g Release 2 is already in production, and it has brought a revolution in terms of performance along with deep changes to the world of ASM, Oracle Clusterware and RAC.

Exadata Release 2 was released opening the way for OLTP databases based on the new Oracle - Sun Database Machine.

In this seminar we will review the new face of ASM, Oracle Clusterware and RAC on 11g Release 2 and we will comment on some of the incredible performance enhancements of the new version.

Migration of a VLDB to RAC 11g with Minimal Downtime

Dotan Mor,
Senior DBA
Pelephone DBA Team

Dotan will tell us the whole story of migrating an 8 TB data warehouse, with near-zero downtime, from Linux 3 with OCFS2 to Linux 5 with CRS, RAC, ASM 11g and InfiniBand; and how his customer still cannot believe the incredible performance improvements they got.

He will also tell us about the several problems faced along the way to this big success, and how all of them were overcome.

Packaging Application and Database Together On Linux for Super-Silent-Installs

Liron Amitzi,
VP Professional Services

Liron will explain to us how to build a Linux RPM that contains the whole set of files required to deploy a complete application quickly and easily, from the database to the last application executable.

See you there!

Best Regards,


Categories: DBA Blogs

Check IO Scripts

Alejandro Vargas - Wed, 2009-12-09 19:32

These scripts are very useful to check throughput.

The original version can be found on My Oracle Support Note 422414.1 by Luca Canali.

set lines 250 pages 50000

alter session set nls_date_format='dd-mm-yyyy hh24:mi';

col Phys_Read_Total_Bps for 999999999999
col Phys_Write_Total_Bps for 999999999999
col Redo_Bytes_per_sec for 999999999999
col Phys_Read_IOPS for 999999999999
col Phys_write_IOPS for 999999999999
col Phys_redo_IOPS for 999999999999
col OS_LOad for 999999999999
col DB_CPU_Usage_per_sec for 999999999999
col Host_CPU_util for 999999999999
col Network_bytes_per_sec for 999999999999
col Phys_IO_Tot_MBps for 999999999999
col Phys_IOPS_Tot for 999999999999

spool io_max_checkup.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end) Phys_Read_Tot_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) Phys_Write_Tot_Bps,
sum(case metric_name when 'Redo Generated Per Sec' then maxval end) Redo_Bytes_per_sec,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) Phys_Read_IOPS,
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) Phys_write_IOPS,
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_redo_IOPS,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off

spool io_maxtot_summary.log

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end)/1024/1024 +
sum(case metric_name when 'Redo Generated Per Sec' then maxval end)/1024/1024 Phys_IO_Tot_MBps,
sum(case metric_name when 'Physical Read Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Physical Write Total IO Requests Per Sec' then maxval end) +
sum(case metric_name when 'Redo Writes Per Sec' then maxval end) Phys_IOPS_Tot,
sum(case metric_name when 'Current OS Load' then maxval end) OS_LOad,
sum(case metric_name when 'CPU Usage Per Sec' then maxval end) DB_CPU_Usage_per_sec,
sum(case metric_name when 'Host CPU Utilization (%)' then maxval end) Host_CPU_util, --NOTE 100% = 1 loaded RAC node
sum(case metric_name when 'Network Traffic Volume Per Sec' then maxval end) Network_bytes_per_sec
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;

spool off

Categories: DBA Blogs

Using OCFS2 as a Generic Cluster File System?

Sergio's Blog - Wed, 2009-12-09 00:56
If you use OCFS2 for purposes other than running Oracle software, please leave a comment or drop me a line: sergio[dot]leunissen[at]oracle[dot]com I'm curious how you use OCFS2, how big your filesystems are, how many nodes are in the cluster, what you like about OCFS2, etc.
Categories: DBA Blogs

RAC install root.sh fail on 2nd Node with error: "Timed out waiting for the CRS stack to start"

Alejandro Vargas - Mon, 2009-11-30 20:51

Yesterday we spent the day trying to solve this problem.

The actual solution, once the problem was found, took a couple of minutes to solve.

Usually this issue will arise when there is a network misconfiguration. We did check that carefully and everything was well configured:

- Same device names at both ends
- No dropped packets or transmission errors
- Both sides configured full duplex

Then we decided to move the install software to the second node and used scp over the interconnect:

scp -r <software_directory> node2-priv:<destination_directory>

scp progressed for a while and then became stalled.

Retrying using the public network worked without any problem.

Any other test using the interconnect got stalled after a while.

The network people checked what the problem could be and found that the switch was configured half duplex.

They corrected the problem and everything worked fine.

Oracle Support Note 745215.1 describes the problem and other areas to check as well.
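The kind of duplex check involved can be sketched like this; the device name and the command output are simulated here, since the real check has to run on the affected host and switch:

```shell
# On the host, the real check would be:  ethtool eth1
# (eth1 is a hypothetical interconnect device name).
# Both the NIC and the switch port must report Full duplex; a Half setting
# on either end produces exactly these stalled-transfer symptoms.
ethtool_output="Speed: 1000Mb/s
Duplex: Half"
echo "$ethtool_output" | grep -i '^duplex'
```

A mismatch like this still negotiates a working link, which is why basic connectivity tests pass while sustained transfers over the interconnect stall.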

Categories: DBA Blogs


Freek D’Hooge - Sun, 2009-11-29 16:23

0x1A, better known as the end-of-file character.
Now also known as the cause of me wasting several hours analyzing a 500 MB raw trace file, trying to figure out why the tkprof report did not seem to be correct.

Still wondering why the application I was tracing had an EOF character in the value of a varchar2 type bind variable.

Categories: DBA Blogs

Data Modeling

Jared Still - Thu, 2009-11-12 13:24

Most readers of this blog are probably DBAs, or do DBA work along with development or other duties.

Though my title is DBA, Data Modeling is something I really like to do.

When first learning Oracle, I cut my teeth on data modeling, and used CASE 5.1 on unix to model a database system. True, CASE 5.0 used an Oracle Forms 3.x based interface, and the GUI modeling was unix only.

That was alright with me, as the Form interface allowed manual changes to be made quite quickly.

And the graphic modeling tool was fairly decent, even on a PC running Hummingbird X Server.

When Designer 2000 came out, it was clearly a more capable tool. Not only did it do everything that CASE 5.1 could do, it could do more. I won't make any silly claim that I was ever able to fully exploit D2K, as it was an end-to-end tool that could do much more than model data and databases.

What it could do with just the databases however was quite good.  Data models could be created, and then a physical database could be generated from the model.

Changes in the database model could be reverse engineered back to the model, and changes in the model could be forward engineered in to the physical model. D2K could truly separate logical and physical models, and allow changes to be migrated back and forth between the two.

There are other high end tools such as Erwin which can no doubt accomplish the same thing, but I have not used them.  

One important differentiation for me between D2K and other tools was that D2K worked with Barker Notation, which is the notation I first learned, and the one I still prefer.  

I should not speak of Designer 2000 in past tense I guess, as it is still available from Oracle as part of the Oracle Development Suite, but is now called Oracle Designer.  It just hasn't received much attention in the past few years, as I think many people have come to think of data modeling as too much overhead.  

I've tried several low end tools in the past few years, and while some claim to separate logical and physical models, those that I have tried actually do a rather poor job of it.

All this leads to some new (at least, new to me) developments from, of all places, Microsoft.

Maybe you have heard of Oslo, Microsoft's Data Modeling toolset that has been in development for the past couple years.

If you're just now hearing about it, you will likely be hearing much more. The bit I have read has made me think this will be a very impressive tool.

If you have done data modeling, you have likely used traditional tools that allow you to define entities, drop them on a graphical model, and define relationships.

The tool you used may even have allowed you to create domains that could be used to provide data consistency among the entities.

Oslo is different.  

Oslo incorporates a data definition language M. The definitions can be translated to T-SQL, which in turn can be used to create the physical aspects of the model.  M also allows easy creation of strongly typed data types which are carried over into the model.

Whether Oslo will allow round trip engineering ala D2K, I don't yet know.

I do however think this is a very innovative approach to modeling data. 

Here are a few Oslo related links to peruse :

You may be thinking that I have given SQL Developer Data Modeler short shrift.

Along with a lot of other folks, I eagerly anticipated the arrival of SQL Developer Data Modeler.

And along with many others, I was disappointed to learn that this add-on to SQL Developer would set us back a cool $3,000 US per seat. That seems a pretty steep price for a tool that is nowhere near as capable as Oracle Designer, which is included as part of the Oracle Internet Developer Suite. True, at $5,800 the Suite costs nearly double SQL Modeler, but you get quite a bit more than just Designer with the Suite.

As for the cost of Oslo, it's probably too early to tell.

Some reading suggests that it will be included as part of SQL Server 2008, but it's probably too soon to tell.

Why all the talk about a SQL Server specific tool?

Because data modeling has been in a rut for quite some time, and Microsoft seems to have broken out of that rut.  It's time for Oracle to take notice and provide better tools for modeling, rather than upholding the status quo.

Categories: DBA Blogs

MetaLink, we barely knew ye

Jared Still - Mon, 2009-11-09 14:08
But, we wish we had more time to get better acquainted.

If you work with Oracle, you probably know that MetaLink went the way of the Dodo as part of an upgrade to My Oracle Support during the weekend of November 6th, 2009.

And so far it hasn't gone too well, as evidenced by these threads on Oracle-L:

Issues with My Oracle Support
Metalink Fiasco

Many people were lamenting the loss of MetaLink well before its demise, but I don't think any were quite expecting the issues that are currently appearing.

A few have reported that it is working fine for them, but personally, I have found it unusable all morning.

At least one issue with MetaLink appears to have been cleared up in MOS, judging from what I saw while I was still able to log in last week.

During a routine audit of who had access to our CSI numbers, I came across a group of consultants who were no longer working for our company, and froze their accounts. The next day I received a frantic voice mail from a member of the consulting firm, informing me that they had no access to MetaLink because I had frozen their accounts.

I returned the call just a few minutes later, but they had already been able to resolve the issue, as one of their consultants with admin rights had been unaffected, and was able to unfreeze their accounts.

Removing them from the CSI is the better procedure, but in the past when I have attempted to do so, I found that there were still open issues owned by the accounts and could not remove them. The application owners had been very clear that this access should be removed, so freezing the accounts is what I did on this occasion as well.

This all seemed quite bizarre to me. There must be a very strange schema in the ML user database, and some strange logic to go along with it. By granting a user access to a CSI, MetaLink was giving me carte blanche to effectively remove them from MetaLink.

How has My Oracle Support fixed this? Try as I might, I could not find a 'freeze' button in user administration in MOS. So the fix seems to have been to remove the button.

Categories: DBA Blogs

