Feed aggregator

Oracle at KDD 2008 and KDD 2007 Pictures

Marcos Campos - Fri, 2008-08-15 22:34
It is that time of the year again. In about a week I am going to be attending the KDD (Knowledge Discovery in Databases) 2008 conference (conference website) along with some other Oracle colleagues. KDD is one of the primary conferences on data mining. This year it will take place in Las Vegas, Nevada, from August 24 to 27. Oracle is a Gold sponsor for the event and will again have a large…
Categories: BI & Warehousing

vi for Apps DBAs (2)

Madhu Thatamsetty - Thu, 2008-08-14 04:23
Happy Independence Day!! - August 15th 2008. It's been a year since I started blogging. I published my first post, vi for Apps DBAs, on Aug 15th 2007. It has been a great experience being part of the online community. I admire, and am inspired by, the time and effort fellow bloggers around the world put in to share their experiences. No wonder blogging is a great platform to convey, express and…

DB Link to Oracle 11g

Pawel Barut - Wed, 2008-08-13 13:59
Written by Paweł Barut
As you know, in Oracle 11g passwords are case sensitive by default. This applies to connecting via SQL*Plus or other client tools, and it also applies to database links between databases. So when you link from Oracle 10g to Oracle 11g, create the database link like this:
CREATE DATABASE LINK my_link
CONNECT TO remote_user IDENTIFIED BY "CaSe_SeNsItIvE_PaSsWoRd"
USING 'TNS_ID_11G';

Do not forget to enclose the password in double quotes!
If you do not set the password this way, you will get:
ORA-01017: invalid username/password; logon denied.
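As a quick sketch using the names from the example above, you can verify the link with a simple query; on the 11g side, the SEC_CASE_SENSITIVE_LOGON initialization parameter also offers a (less secure) alternative:

```sql
-- Quick test of the link created above (run on the 10g database):
SELECT * FROM dual@my_link;

-- Alternative, on the 11g side: revert to case-insensitive passwords.
-- This weakens security, so the quoted password is preferable.
ALTER SYSTEM SET sec_case_sensitive_logon = FALSE;
```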
Hope this small tip will be useful for someone.

Paweł

More on Creating Database Links.

--
Related Articles on Paweł Barut blog:
Categories: Development

Database Diagramming Tips and Techniques - 2 Displaying Shapes

Susan Duncan - Tue, 2008-08-12 09:18
When I said occasional in the last blog - I didn't specify the time interval - so here is a second post in as many minutes!

There are a number of ways to tidy a cluttered diagram. Sometimes you don't want to see all the columns/constraints in the table shape, or perhaps you only want to see some of the columns - the keys or the most important columns:

View As -> Compact
Select shapes in a diagram and use this context menu to completely remove any columns or constraints from the shape

Hide Selected Shapes
If you want to show only some of the members of your shape (for instance only the keys and major columns in a table), select the members you want to hide and use this context menu. The members are cut from the diagram but are still available to you through the shape editor. To see all the members again, use Show All Hidden Members.

Tools -> Preferences -> Diagrams -> Database
Under this node you will find many different options for your database diagrams - you can change the font and shape colors for many different database shapes. For instance, select Table from the dropdown list and uncheck Show Constraints if you want to exclude them from the diagram. Note that changes made in this way only apply to new shapes added to a diagram, so it's good practice to check your preferences before you start work on a diagram.



EMD upload error: uploadXMLFiles skipped :: OMS version not checked yet

Madhu Thatamsetty - Tue, 2008-08-12 08:53
I ran into an issue where the agent was not uploading its XML files to the OMS server. emctl upload fails with the error "EMD upload error: uploadXMLFiles skipped :: OMS version not checked yet..." To resolve the issue, try the following. Check whether the host is able to communicate with the OMS by using telnet as mentioned below. If you punch in the "^]" escape sequence, you will receive the Apache version that the OMS is…
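The connectivity check described above might look like the following transcript (the host name is hypothetical and 4889 is assumed to be the OMS upload port; substitute your own OMS host and port):

```
$ telnet omshost.example.com 4889
Trying ...
Connected to omshost.example.com.
Escape character is '^]'.
^]
telnet> quit
```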

Database Diagramming Tips and Techniques - 1 Keyboard Accelerators

Susan Duncan - Tue, 2008-08-12 07:03
I'm working on a new database design and thought I'd pass on some of the tips that I use to aid my work. As always with my blog entries, this will be an occasional series.

Keyboard Accelerators - diagram cleaning aid

I often create new offline database tables using a database diagram. I can visualize and edit objects in-place easily and often. But having to resize the shapes and tidy up the relationship lines can be very frustrating! So I do the following:

First the setup -
Go into Tools -> Preferences -> Accelerators
Scroll down the Actions list and select Height and Width
Add a new accelerator (if none set) - I use Ctrl H
Scroll to Select All and check what accelerator is set - default is Ctrl A
Scroll to Straighten Lines and add an accelerator - say Ctrl L

Now, in my diagram whenever I can't see all the info in my table shapes I either select the specific tables and do Ctrl H or do Ctrl A + Ctrl H to resize all my shapes followed by Ctrl L to straighten the lines.

Easy Tip No1!

My agenda at OpenWorld 2008

Oracle Apex Notebook - Mon, 2008-08-11 07:48
Essentially, at OpenWorld I'll be looking for everything related to APEX. Unfortunately it's impossible to attend all the APEX-related sessions, because (fortunately) there is a huge number of interesting sessions and some of them share the same time slot. David Peake has built an application that puts together all the APEX sessions (good work btw), which helps with picking sessions. Beyond the…
Categories: Development

Oracle E-Business Tax: Regime to Rate Flow in Oracle R12

Krishanu Bose - Mon, 2008-08-11 07:46

In Oracle Release 12, there have been some very significant changes, one of them being how we define tax. E-Business Tax is now a single point where we define our taxes for all the sub-ledger modules. In this article I will try to do a sample tax setup using the Regime to Rate Flow of Oracle E-Business Tax in the Tax Manager responsibility.

Creating a Tax Regime

In Oracle E-Business Tax, a Tax Regime is the system of regulations for the determination and administration of one or more taxes.

  1. Navigate to the Create Tax Regime page:
    • Navigation: Tax Configuration → Tax Regimes
    • Click the button “Create”
  2. Enter a new tax regime based on the following information:
    • Tax Regime Code: TESTREGIME1
    • Name: Test Regime1
    • Regime Level: Country
    • Country Name: United States
    • Parent Regime Code: Blank
    • Effective From: 01-JAN-1950
    • Effective To: Leave blank
    • Used to group Regimes: Unchecked
  3. Expand the Controls and Defaults region and enter:
    • Allow Tax Recovery: Checked
    • Allow Override and Entry of Inclusive Tax Lines: Unchecked
    • Allow Tax Exemptions: Checked
    • Allow Tax Exceptions: Checked
    • Tax Currency: USD
    • Minimum Accountable Unit: 0.01
    • Rounding Rule: Nearest
    • Tax Precision: 2 (default)
  4. Click on button “Continue”
  5. Enter tax regime configuration options:
    • Party Name: Vision Operations (Note: select the Operating Unit owning Tax Content party type)
    • Configuration for Taxes and Rules: Common Configuration with Party Overrides
    • Configuration for Product Exceptions: Party-Specific Configuration
    • Effective From: 01-JAN-1950
    • Effective To: Leave blank
  6. Click on button “Finish”
  7. You will get the message “The tax regime was successfully created”

Creating a Tax

  1. Navigate: Tax Configuration → Tax Regimes
  2. Search for the tax regime you created above using the filter criteria of Country “United States” and Tax Regime Code “TESTREGIME1”
  3. Click on button “Go”
  4. Click on icon “Regime to Rate Flow”
  5. Click on button “Create Tax”
  6. Create a new tax as per the information mentioned below:
    • Tax Regime Code: TESTREGIME1 (defaults)
    • Configuration Owner: Global Configuration Owner
    • Tax Source: Create a new tax (defaults)
    • Tax: TESTTAX1
    • Tax Name: Test Tax1
    • Tax Type: SALES
    • Effective From: 01-JAN-1950 (defaults)
    • Effective To: Leave blank
    • Geography Type: STATE
    • Parent Geography Type: COUNTRY
    • Parent Geography Name: United States
    • Tax Currency: USD (defaults)
    • Minimum Accountable Unit: 0.01 (defaults)
    • Rounding Rule: Nearest (defaults)
    • Tax Precision: 2 (defaults)
    • Exchange Rate Type: Blank
  7. Click on “Show Controls and Defaults”
  8. Enter the details as per the information mentioned below:
    • Allow Override and Entry of Inclusive Tax Lines: Unchecked
    • Allow Tax Rounding Override: Unchecked
    • Allow Override for Calculated Tax Lines: Checked
    • Allow Entry of Manual Tax Lines: Checked
    • Use Legal Registration Number: Unchecked
    • Allow Duplicate Tax Registration Numbers: Unchecked
    • Allow Multiple Jurisdictions: Unchecked
    • Tax Accounts Creation Method: Create Tax Accounts (defaults)
    • Allow Tax Exceptions: Checked (defaults)
    • Allow Tax Exemptions: Checked (defaults)
    • Tax Exemptions Creation Method: Create Tax Exemptions (defaults)
    • Allow Tax Recovery: Unchecked
  9. Click on button “Apply”
  10. You will get the message “The Tax has been successfully created”
  11. Return to Tax Regimes. Click on “Expand All”
  12. Click on icon “Create Tax Status”
  13. Enter the following:
    • Tax Status Code: TESTSTATUS1
    • Name: Test Tax Status1
    • Check as Default Tax Status: Checked
    • Default Status Effective From: 01-Jan-1950
  14. Click on button “Apply”
  15. You will get the message “The tax status was successfully created.”

Create Tax Rates

  1. Go back to the Regime to Rate Flow.
  2. Click on “Expand All”.
  3. Click on icon “Create Tax Rate”. Enter the tax rate based on the information mentioned below:
    • Tax Regime Code: TESTREGIME1 (defaults)
    • Configuration Owner: Global Configuration Owner (defaults)
    • Tax: TESTTAX1 (defaults)
    • Tax Status Code: TESTSTATUS1 (defaults)
    • Tax Jurisdiction Code: Blank
    • Tax Rate Code: TESTRATE1
    • Rate Type: Percentage (defaults)
    • Percentage Rate: 10
    • Effective From: 01-JAN-1950 (defaults)
    • Effective To: Leave blank
  4. Click on icon “Rate Details”. Enter the tax rate details based on the information mentioned below:
    • Tax Rate Name: Test Sales Tax Rate1
    • Tax Rate Description: Test Sales Tax Rate1
    • Set as Default Rate: Checked
    • Default Effective From: 01-JAN-1950
    • Default Effective To: Leave blank
    • Allow Tax Exemptions: Checked (defaults)
    • Allow Tax Exceptions: Checked (defaults)
    • Internet Expenses Enabled: Checked
  5. Click on the button “Apply” to return to the Create Tax Rate page.
  6. Click on the button “Apply” to return to the Regime to Rate Flow page.
  7. You will get the message “The Tax Rate has been successfully created.”
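For illustration only, here is how the rate and rounding controls above combine on a line amount: a 10% rate with tax precision 2 and Nearest rounding.

```sql
-- 10% tax on a 105.678 USD line, rounded to 2 decimals (Nearest):
SELECT ROUND(105.678 * 0.10, 2) AS tax_amount FROM dual;
-- tax_amount = 10.57
```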

Creating Tax Jurisdiction

  1. Navigate: Tax Configuration → Tax Jurisdiction
  2. Click on button “Create”. Enter the tax jurisdiction details based on the information mentioned below:
    • Tax Jurisdiction Code: TESTJUR1
    • Tax Jurisdiction Name: Test Tax Jurisdiction1
    • Tax Regime Code: TESTREGIME1
    • Tax: TESTTAX1
    • Geography Type: STATE (pick from the list!)
    • Effective From: 01-JAN-1950
    • Effective To: Leave blank
    • Geography Name: CA
    • Precedence Level: 300 (defaults)
    • Collecting Tax Authority: Blank
    • Reporting Tax Authority: Blank
    • Set as default Tax Jurisdiction: Yes (defaults)
    • Default Effective From: 01-JAN-1950
    • Default Effective To: Leave blank
  3. Click on the button “Apply” to return to the Regime to Rate Flow page.
  4. You will get the message “The tax jurisdiction was successfully created.”

Creating Tax Accounts

  1. Navigate: Tax Configuration → Taxes
  2. Search for the tax that we just created using the following filter criteria:
    • Country Name: United States
    • Tax Regime Code: TESTREGIME1
    • Tax: TESTTAX1
  3. Click on the button “Go”. Then click on icon “Update”.
  4. Click on button “Tax Accounts”.
  5. Enter the Ledger as “Vision Operations (USA)”.
  6. Click on button “Create”.
  7. Enter the tax account details based on the following information:
    • Operating Unit: Vision Operations
    • Tax Expense: 01-210-7710-0000-000
    • Tax Recoverable/Liability: 01-000-2220-0000-000
  8. Click on button “Apply” to return to the Tax Accounts page.
  9. Click the button “Apply” to return to the Update Tax page.
  10. Click the button “Apply” to return to the Taxes page.
  11. You will get the message “The tax was successfully updated”.

Creating Tax Rules

  1. Navigate: Tax Configuration → Tax Rules
  2. Access the rule type by filtering on the following criteria:
    • Configuration Owner: Global Configuration Owner
    • Tax Regime Code: TESTREGIME1
    • Tax: TESTTAX1
  3. Click on button “Go”. Then click on “Expand All”.
  4. Set the following values corresponding to the Rule Types:
    • Determine Place of Supply: Ship To
    • Determine Tax Applicability: Applicable
    • Determine Tax Registration: Bill From Party
    • Determine Taxable Basis: STANDARD_TB
    • Calculate Tax Amounts: STANDARD_TC

Making Tax Available for Transactions

  1. Navigate: Tax Configuration → Taxes
  2. Select the tax created, based on the following filter criteria:
    • Country Name: United States
    • Tax Regime Code: TESTREGIME1
    • Tax: TESTTAX1
  3. Click on button “Go”. Then click on icon “Update” on the “TESTTAX1” row.
  4. Check the box “Make Tax Available for Transactions”.
  5. Click on button “Apply”.
  6. You will get the message “This tax does not have an exchange rate type. Are you sure you want to enable this tax?” with options “Yes” and “No”. Click on “Yes”.
  7. You will get the message “The tax was successfully updated.”

Test your tax setup by creating an invoice in Payables/Receivables or by creating a purchase order.

Me at Oracle OpenWorld 2008

Oracle Apex Notebook - Mon, 2008-08-11 03:56
It is my first time at Oracle's big event, OpenWorld. David Peake was kind enough to invite me to participate in one of the official sessions. So, if you want to meet me, you can find me in San Francisco next September :) Here is the session description, don't forget to register: Session Id: S300458 Title: Building Commercial Software-as-a-Service Applications with Oracle Application…
Categories: Development

Fast-Start Failover - It is reliable

Mihajlo Tekic - Sun, 2008-08-10 20:45
Last Friday there was a post on the OTN forums that caught my attention. The OP was wondering “Is Data Guard Buggy?”, with respect to the Fast-Start Failover (FSFO) feature, which provides the ability to fail over automatically to the standby database if the primary database is unavailable for a certain time.

He had some concerns that FSFO is unreliable, very difficult to implement, and doesn't actually work properly.

The OP got some immediate responses from other OTN users. I couldn't agree more with Joseph Meeks's comment on the topic.

In my opinion, FSFO is a very nice feature that plays a big role in Oracle's Maximum Availability Architecture. There might be some valid reasons not to implement it, but if automatic failover is a requirement, FSFO is the way to go. Should one have any problems implementing it, the best way to go is to get Oracle Support involved.

In this post, I'd like to show that implementing FSFO should not be a difficult task once you have the Data Guard environment set up.

I configured a Data Guard physical standby environment on my laptop. Due to hardware limitations, I'll have the observer running on the same machine as the standby database. Keep in mind this is just an example. In practice, the primary database, the standby database and the observer should run on different hosts. This example also answers one of the questions the OP asked: is it possible to set it all up on one machine? The answer is that it is possible, as shown in the example :-), but it is not the right way to go.

I use the DGMGRL utility in the example.

So, I configured a Data Guard environment where DB1.MYDOMAIN.COM is the primary database and STDB.MYDOMAIN.COM is the physical standby. Both databases are 10.2.x.
By default, the protection mode of the Data Guard configuration is set to MAXIMUM PERFORMANCE:

DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database

Current status for "DRTest":
SUCCESS


Enabling Fast-Start Failover requires the following prerequisites to be met:
The Flashback Database feature is enabled on both the primary and the standby database.
The protection mode of the configuration is set to MAXIMUM AVAILABILITY.
tnsnames.ora in the ORACLE_HOME where the observer runs must resolve both databases, the primary and the standby.
DGMGRL must be available on the observer host.

Enable Flashback Database
I will enable the Flashback Database feature on both databases. This assumes that the flash recovery area is configured.
The Flashback Database feature allows an easy reinstatement of the failed primary database as a new standby database.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> startup mount
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1261348 bytes
Variable Size 100663516 bytes
Database Buffers 176160768 bytes
Redo Buffers 7127040 bytes
Database mounted.
SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.
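You can confirm the feature is enabled by querying v$database on each database:

```sql
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES
```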


Set the protection mode to MAXIMUM AVAILABILITY
Fast-Start Failover can be enabled only if the protection mode is set to MAXIMUM AVAILABILITY.
LogXptMode has to be set to 'SYNC' on both databases:

DGMGRL> edit database 'db1'
> set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'stdb'
> set property LogXptMode='SYNC';
Property "logxptmode" updated

DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database

Current status for "DRTest":
SUCCESS

SQL> conn sys/***@db1 as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;

PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

SQL> conn sys/***@stdb as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;

PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

SQL>


I ensured that tnsnames.ora is set correctly and that DGMGRL is installed.

Now, since all the prerequisites are met, let's move forward and enable FSFO.
Before enabling it, make sure each of the databases in the configuration has a fast-start failover target set. This is achieved by setting the FastStartFailoverTarget property.


DGMGRL> edit database 'db1' set property faststartfailovertarget='stdb';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'stdb' set property faststartfailovertarget='db1';
Property "faststartfailovertarget" updated


Another important property that has to be set is FastStartFailoverThreshold. It specifies the amount of time (in seconds) the observer attempts to reconnect to the primary database before starting a fast-start failover to the standby database. The default value is 30 seconds. In this example I set it to 120 seconds.


DGMGRL> edit configuration set property FastStartFailoverThreshold=120;
Property "faststartfailoverthreshold" updated


Now let's enable fast-start failover:


DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.


If you check the configuration at this moment, you may find the following warning: ORA-16608: one or more databases have warnings.
If you check the status of either of the databases, you will see this warning: ORA-16819: Fast-Start Failover observer not started.


DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Current status for "DRTest":
Warning: ORA-16608: one or more databases have warnings


DGMGRL> show database verbose 'db1'

Database
Name: db1
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
db1

Properties:
InitialConnectIdentifier = 'db1.mydomain.com'
LogXptMode = 'SYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'auto'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'stdb, db1'
LogFileNameConvert = 'stdb, db1'
FastStartFailoverTarget = 'stdb'
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'miki-laptop'
SidName = 'db1'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=miki-laptop)(PORT=1521))'
StandbyArchiveLocation = 'dgsby_db1'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "db1":
Warning: ORA-16819: Fast-Start Failover observer not started


So let's start the observer. I will repeat: the observer should run on a different host; however, for the sake of this example it will run on the same machine as the databases.
To start the observer, start the DGMGRL utility and log in to the Data Guard configuration.
Once logged in, issue the START OBSERVER command.
Optionally, you can set a log file destination while invoking the DGMGRL utility and specify a name for the observer configuration file (for more information, check the Data Guard Command-Line Interface Reference).
Once the observer is started, control is not returned to the user until the observer is stopped.


DGMGRL for Linux: Version 10.2.0.3.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> start observer;
Observer started


So let's check the configuration now, from a second DGMGRL session:


$ dgmgrl
DGMGRL for Linux: Version 10.2.0.3.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration verbose;

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Fast-Start Failover
Threshold: 120 seconds
Observer: miki-laptop

Current status for "DRTest":
SUCCESS

DGMGRL>


As of this moment my configuration has FSFO enabled.

Now let's test whether FSFO really works.
One should be aware of the conditions that must be satisfied for the observer to attempt a fast-start failover.
The full list of conditions can be found in the Data Guard Broker user guide under 5.5.2.1 What Happens When Fast-Start Failover and the Observer Are Running?

I will issue SHUTDOWN ABORT on the primary database (SHUTDOWN NORMAL/IMMEDIATE/TRANSACTIONAL would not trigger the failover).
Once I crash the database, the status of the configuration will return Error: ORA-16625: cannot reach the database. Be aware that since the primary database is down, the only way to check the configuration is to connect using the standby database credentials.


SQL> conn sys/***@db1 as sysdba
Connected.
SQL> shutdown abort
ORACLE instance shut down.
SQL>

DGMGRL> connect sys/***@stdb
Connected.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Current status for "DRTest":
Error: ORA-16625: cannot reach the database


After waiting for two minutes (the FSFO threshold was set to 120 seconds), I checked the observer log file and found that it had started the failover.


[W000 08/09 17:58:49.75] Observer started.

18:05:38.31 Saturday, August 09, 2008
Initiating fast-start failover to database "stdb"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "stdb"
18:06:16.82 Saturday, August 09, 2008



DGMGRL> show configuration verbose

Configuration

Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Physical standby database (disabled)
- Fast-Start Failover target
stdb - Primary database

Fast-Start Failover

Threshold: 120 seconds
Observer: miki-laptop

Current status for "DRTest":

Warning: ORA-16608: one or more databases have warnings

DGMGRL>


So the observer started the fast-start failover and successfully converted the former standby to the new primary database.

Once the failover is done, the observer will attempt to automatically reinstate the former primary database as a new physical standby. Certain requirements have to be met for the automatic reinstatement to take place. If they are not, the database can be reinstated manually. Once the reinstatement succeeds, you can see that the former primary database became the new physical standby.
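For reference, a manual reinstatement is a single Broker command; this sketch assumes the former primary has been started in mount mode and that the flashback logs generated earlier are available:

```
DGMGRL> connect sys/***@stdb
Connected.
DGMGRL> reinstate database 'db1';
```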

DGMGRL> show configuration

Configuration

Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:

db1 - Physical standby database
- Fast-Start Failover target
stdb - Primary database

Current status for "DRTest":

SUCCESS

DGMGRL>


Finally, you can perform a switchover to change the roles back and return the configuration to its initial state.

The example above shows that enabling fast-start failover is pretty straightforward. There are many good documents with information on how to enable the FSFO feature. Some of them are listed below:

Oracle® Data Guard Broker 10g Release 2 (10.2) - 5.5 Fast-Start Failover
What Happens When Fast-Start Failover and the Observer Are Running?

7.6 Scenario 5: Enabling Fast-Start Failover and the Observer

Metalink Note#359555.1 IMPLEMENTING FAST-START FAILOVER IN 10GR2 DATAGUARD BROKER ENVIRONMENT

Oracle Maximum Availability Architecture – MAA

I hope I didn't miss anything.

Cheers!

APEX with eBusiness Suite survey reminder

Anthony Rayner - Thu, 2008-08-07 09:58
As readers of the APEX blogosphere will already know, we are currently conducting a survey in an attempt to better understand how people are using Application Express with Oracle eBusiness Suite / Oracle Applications. For example, have you implemented a killer reporting module with our 'Interactive Reports' to empower your users to get better information from their eBusiness Suite application data? Have you had other integration successes?

If so, this is just a quick reminder to anyone who hasn't yet completed David Peake's single-page survey: if you can, please take a moment to fill it in and tell us your thoughts; we would much appreciate it. Information provided is completely confidential, unless you state otherwise.

Thank you!
Anthony
Categories: Development

How to Configure OEM 10g Database Control after Clone

Aviad Elbaz - Thu, 2008-08-07 05:32

After cloning an EBS environment, I tried to start the OEM 10g Database Control (DB Console) and got this message:

[oratest@testsrv1 ~]$ emctl start dbconsole
TZ set to Israel
OC4J Configuration issue.
/oracle/test/db/testdb/10.2/oc4j/j2ee/OC4J_DBConsole_testsrv1.domain_TEST not found.

However, this message was expected, since DB Console hadn't been configured for this instance yet.

Using emca, I tried to recreate the DB Control configuration files as follows:

[oratest@testsrv1 ~]$ emca -config dbcontrol db

...
INFO: Database Control started successfully
...
INFO: >>>>>>>>>>> The Database Control URL is http://testsrv1.domain:5500/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully

Well, I thought it looked fine, but when I opened the DB Console I saw this message at the top of the page:

error:
java.lang.Exception: Exception in sending Request :: null

After some research, log digging and Metalink searches, I found that the Agent component has to be relinked manually.
But first, there are two issues in the env_sysman.mk file that should be fixed before executing the relink command.

I will show my actions step by step:

  1. Stop DB Console:
    emctl stop dbconsole
     
  2. cd $ORACLE_HOME/sysman/lib
     
  3. Edit env_sysman.mk and look for "JRE_LIB_DIR"
    In my instance JRE_LIB_DIR was populated with the old environment path.
    I changed it from:
    JRE_LIB_DIR=/oracle/prod/db/proddb/10.2/jdk/jre/lib/i386
    To:
    JRE_LIB_DIR=/oracle/test/db/testdb/10.2/jdk/jre/lib/i386
     
  4. One more issue for env_sysman.mk, look for "$(LIBNMEM)  $(LIBNMER)"
    And change from:
    $(LIBNMEM)  $(LIBNMER) \
    To:
    $(LIBNMEM)  $(LIBNMER) $(LIBNMO) \
     
  5. Now relink the Agent component by issuing this command:
    make -f ins_sysman.mk agent
     
  6. Recreate the DB Control configuration files (Again, even if you did it before)
    emca -config dbcontrol db

Following this procedure leaves DB Console configured and working for the newly cloned instance.
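As a quick sanity check afterwards (a sketch; the exact wording of the output varies by version), the usual status commands should confirm everything is up:

```
[oratest@testsrv1 ~]$ emctl status dbconsole
...
Oracle Enterprise Manager 10g is running.
[oratest@testsrv1 ~]$ emctl status agent
...
Agent is Running and Ready
```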

Categories: APPS Blogs

A really recommended ASM patch – failing lun

Geert De Paep - Wed, 2008-08-06 10:22

The following is a real-life experience with failing disks/LUNs and how ASM reacts to them. We used 10.2.0.4 on Solaris with an HDS storage box and MPxIO. We made an ASM diskgroup of 2 mirrored disks. Then we made the LUNs unavailable to the hosts (hide LUN). The result was not really what we expected. After hiding the LUN, the ASM alert file shows the following ‘normal’ messages:

15:26:57+: Errors in file /app/oracle/admin/+ASM/bdump/+asm1_gmon_10611.trc:
15:26:57+: ORA-27091: unable to queue I/O
15:26:57+: ORA-27072: File I/O error
15:26:57+: SVR4 Error: 5: I/O error
15:26:57+: Additional information: 4
15:26:57+: Additional information: 2048
15:26:57+: Additional information: -1

and some time later

15:32:23+: WARNING: offlining disk 0.4042332304 (ARESDATAA) with mask 0x3
15:32:23+: WARNING: offlining disk 0.4042332304 (ARESDATAA) with mask 0x3

and

15:32:33+: WARNING: kfk failed to open a disk[/dev/oracle/asm/aresdata-a]
15:32:33+: Errors in file /app/oracle/admin/+ASM/udump/+asm1_ora_18313.trc:
15:32:33+: ORA-15025: could not open disk ‘/dev/oracle/asm/aresdata-a’
15:32:33+: ORA-27041: unable to open file
15:32:33+: SVR4 Error: 5: I/O error
15:32:33+: Additional information: 3
15:32:33+: NOTE: PST update: grp = 1, dsk = 0, mode = 0x6
15:32:35+: NOTE: PST update: grp = 1, dsk = 0, mode = 0x4
15:32:35+: NOTE: group ARESDATA: relocated PST to: disk 0001 (PST copy 0)
15:32:35+: NOTE: PST update: grp = 1, dsk = 0, mode = 0x4
15:32:35+: NOTE: cache closing disk 0 of grp 1: ARESDATAA
15:32:35+: NOTE: cache closing disk 0 of grp 1: ARESDATAA
15:33:50+: WARNING: PST-initiated drop disk 1(780265568).0(4042332304) (ARESDATAA)
15:33:50+: NOTE: PST update: grp = 1
15:33:50+: NOTE: group ARESDATA: relocated PST to: disk 0001 (PST copy 0)
15:33:50+: NOTE: requesting all-instance membership refresh for group=1
15:33:50+: NOTE: membership refresh pending for group 1/0x2e81e860 (ARESDATA)
15:33:50+: SUCCESS: refreshed membership for 1/0x2e81e860 (ARESDATA)
15:33:53+: SUCCESS: PST-initiated disk drop completed
15:33:53+: SUCCESS: PST-initiated disk drop completed
15:33:56+: NOTE: starting rebalance of group 1/0x2e81e860 (ARESDATA) at power 1
15:33:56+: Starting background process ARB0
15:33:56+: ARB0 started with pid=21, OS id=19285
15:33:56+: NOTE: assigning ARB0 to group 1/0x2e81e860 (ARESDATA)
15:33:56+: NOTE: F1X0 copy 1 relocating from 0:2 to 1:2
15:33:56+: NOTE: F1X0 copy 2 relocating from 1:2 to 0:2
15:33:56+: NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 65534:4294967294
15:33:56+: NOTE: X->S down convert bast on F1B3 bastCount=2

15:34:14+: NOTE: group ARESDATA: relocated PST to: disk 0001 (PST copy 0)
15:34:14+: WARNING: offline disk number 0 has references (1394 AUs)
15:34:14+: NOTE: PST update: grp = 1
15:34:14+: NOTE: group ARESDATA: relocated PST to: disk 0001 (PST copy 0)

However, every query on the v$asm_disk view now hangs for 30 to 90 seconds, as does adding a tablespace in a database. It looks as if ASM, whenever it tries to access the failed disk, waits for OS timeouts. I assume that every operation that needs access to the ASM disks (e.g. a data file autoextend that needs to allocate space in the ASM disk, creation of an archivelog, …) suffers from these timeouts. Not really acceptable for a production environment. I do want to mention that dd, and even orion for asynchronous I/O, detect the error immediately without waiting for any timeout.

You can clearly see the ASM waits when you truss the server process of the ASM sqlplus session while you do a select on v$asm_disk. For each failed disk you get:

9377/1: 17.3106 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 18.3195 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 18.3198 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 19.3295 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 19.3298 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 20.3395 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 20.3398 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 21.3495 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 21.3497 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 22.3595 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 22.3598 open("/dev/oracle/asm/aresdata-a", O_RDWR|O_DSYNC) Err#5 EIO
9377/1: 22.3605 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 23.3695 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 23.3697 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 24.3795 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 24.3798 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 25.3895 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 25.3897 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 26.3995 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 26.3998 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 27.4095 nanosleep(0xFFFFFFFF7FFF8FE0, 0xFFFFFFFF7FFF8FD0) = 0
9377/1: 27.4097 open("/dev/oracle/asm/aresdata-a", O_RDONLY|O_DSYNC) Err#5 EIO
9377/1: 27.4105 write(5, " * * * 2 0 0 8 - 0 7 -".., 27) = 27
9377/1: 27.4106 write(5, "\n", 1) = 1
9377/1: 27.4109 write(5, " W A R N I N G : k f k".., 62) = 62
9377/1: 27.4109 write(5, "\n", 1) = 1
9377/1: 27.4111 close(6) = 0
9377/1: 27.4111 open("/app/oracle/admin/+ASM/bdump/alert_+ASM2.log", O_WRONLY|O_APPEND|O_CREAT|O_LARGEFILE, 0660) = 6
9377/1: 27.4118 time() = 1217417698
9377/1: 27.4119 writev(6, 0xFFFFFFFF7FFF8080, 3) = 88

So it tries to open each disk 6 times in read-write mode and then 6 more times in read-only mode: a loss of 12 valuable seconds per disk.
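A quick back-of-the-envelope check of that figure (a sketch in Python; the roughly one-second retry interval is read off the truss timestamps above):

```python
# Per-disk stall reconstructed from the truss output: 6 failed O_RDWR
# open() attempts followed by 6 failed O_RDONLY attempts, each retry
# preceded by a ~1-second nanosleep.
rw_attempts = 6
ro_attempts = 6
retry_interval_s = 1.0  # approximate gap between successive open() timestamps

stall_s = (rw_attempts + ro_attempts) * retry_interval_s
print(stall_s)  # -> 12.0 seconds per failed disk, for every v$asm_disk query
```

With more than one failed disk in a diskgroup, these stalls add up per query, which is consistent with the 30 to 90 second hangs described above.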

At the same time, the os messages file generates the following messages every second:

Jul 30 11:01:05 node1 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/ssd@g60060e800562f400000062f4000000d5 (ssd63):
Jul 30 11:01:05 node1 offline or reservation conflict

Jul 30 11:01:06 node1 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/ssd@g60060e800562f400000062f4000000d5 (ssd
63):
Jul 30 11:01:06 node1 offline or reservation conflict

We would expect ASM to be intelligent enough to detect that the disk has failed, but obviously it keeps trying to access it, waits and timeouts included.
FYI, the state of the disks after the failure has become:

DISKGROUP  PATH                            Total   Used  %Usd ST  Header    FAILGROUP   STATE      DiskName
---------- ------------------------------ ------ ------ ----- --- --------- ----------- ---------- ---------------
ARESDATA   /dev/oracle/asm/aresdata-b      46068   1396     3 ONL MEMBER    B           MOUNTED    ARESDATAB

ARESDATA                                   46068   1396     3 OFF UNKNOWN   A           MOUNTED    ARESDATAA

I opened an SR on Metalink and uploaded all possible traces I could generate (*). And guess what: for some reason (maybe (*)) I immediately got an excellent engineer, who immediately identified the problem as a known bug and asked development to provide a patch for 10.2.0.4 (which did not exist yet at that time). It took only 5 days for the patch to become available, and it solves the problem completely. After applying it, every select on v$asm_disk returns immediately.

This is it:

Patch          6278034
Description    WHEN SWITCHING OFF ONE ARRAY CONTAINING ONE FAILGROUP, THE PERFORMANCE TURNS BAD
Product        RDBMS Server
Release        10.2.0.3, 10.2.0.4
Platform       Sun Solaris SPARC (64-bit)
Last Updated   04-AUG-2008
Size           97K (99336 bytes)
Classification General

The patch exists for 10.2.0.3 as well. I would recommend installing it in the ORACLE_HOME where ASM runs. However, I have no idea whether the problem also applies to non-Solaris environments.

Note: to resync the disks after the LUN is available again, use the ALTER DISKGROUP ADD FAILGROUP x DISK '/dev/path/somedevice' NAME some_new_name [FORCE] command. Not so straightforward: it turns out that trying to offline or drop the disk will not work. I.e.:

==================== OVERVIEW OF ASM DISKS ======================================

DISKGROUP  PATH                            Total   Used  %Usd ST  Header    FAILGROUP   STATE      DiskName        MOUNT_S
---------- ------------------------------ ------ ------ ----- --- --------- ----------- ---------- --------------- -------
ARESARCHA  /dev/oracle/asm/aresarch-a      30708  11226    36 ONL MEMBER    AREAARCHA   NORMAL     AREAARCHA       CACHED
ARESARCHB  /dev/oracle/asm/aresarch-b      30708  11202    36 ONL MEMBER    ARESARCHB   NORMAL     ARESARCHB       CACHED
ARESDATA   /dev/oracle/asm/aresdata-a      46068   1412     3 ONL MEMBER    A           NORMAL     ARESDATAA       CACHED

ARESDATA                                       0      0       OFF UNKNOWN   B           HUNG       ARESDATAB       MISSING

Trying to add it with the same name as before:
SQL> alter diskgroup ARESDATA add failgroup B disk '/dev/oracle/asm/aresdata-b' name ARESDATAB force;
alter diskgroup ARESDATA add failgroup B disk '/dev/oracle/asm/aresdata-b' name ARESDATAB force
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15010: name is already used by an existing ASM disk

Adding it using a new name:

SQL> alter diskgroup ARESDATA add failgroup B disk '/dev/oracle/asm/aresdata-b' name ARESDATAB2 force;
alter diskgroup ARESDATA add failgroup B disk '/dev/oracle/asm/aresdata-b' name ARESDATAB2 force
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15034: disk '/dev/oracle/asm/aresdata-b' does not require the FORCE option

SQL> alter diskgroup ARESDATA add failgroup B disk '/dev/oracle/asm/aresdata-b' name ARESDATAB2;

Diskgroup altered.

I assume that whether you need the 'force' option depends on the kind of error you got.

Sometimes I see that the rebalance does not start automatically. Then you get the following status:
DISKGROUP  PATH                            Total   Used  %Usd ST  Header    FAILGROUP   STATE      DiskName        MOUNT_S
---------- ------------------------------ ------ ------ ----- --- --------- ----------- ---------- --------------- -------
ARESARCHA  /dev/oracle/asm/aresarch-a      30708  11322    36 ONL MEMBER    AREAARCHA   NORMAL     AREAARCHA       CACHED
ARESARCHB  /dev/oracle/asm/aresarch-b      30708  11298    36 ONL MEMBER    ARESARCHB   NORMAL     ARESARCHB       CACHED
ARESDATA   /dev/oracle/asm/aresdata-a      46068   1412     3 ONL MEMBER    A           NORMAL     ARESDATAA       CACHED
           /dev/oracle/asm/aresdata-b      46068      2     0 ONL MEMBER    B           NORMAL     ARESDATAB2      CACHED

ARESDATA                                       0      0       OFF UNKNOWN   B           FORCING    ARESDATAB       MISSING

In that case, start it manually using:

SQL> alter diskgroup ARESDATA rebalance;

Diskgroup altered.


How I got picked for special attention in Denver International Airport

Moans Nogood - Wed, 2008-08-06 09:06
So my wife Anette and I are on our way home from Tim's wedding.

We flew British Airways both ways. In Copenhagen I told a lady at the BA check-in counter that I might have discovered a way for terrorists to put bombs on planes without being on the plane themselves.

You see, the last couple of times where I have had to change terminals in Heathrow and there's been approximately two hours or less between the flights my bags haven't made it.

So if the bags are onboard a plane but the passenger doesn't show up, they'll pull the bags. But if the bags are delayed they'll let the passenger fly without his luggage.

Everyone knows there have been huge problems with luggage in Heathrow. At one point more than 42,000 pieces were stacked up. IBM had stopped a DW/BI project without having created even the indexes on the Oracle database tables, so every piece of luggage required a full table scan of a rather large table, and it took a while to get over that one.

So I told the lady at the check-in counter in Kastrup airport, Copenhagen, that there might be a security risk in Heathrow, and she said she would relay the information.

Well, apparently she did, because the check-in guy in Denver yesterday suddenly started behaving very strangely, went into the back office to "do a security check" and marked our boarding cards with the dreaded "SSSS" code highlighted in yellow, which means "pay very special attention", and which meant that both Anette and I had to go through the new machine that blows air on you so that it can smell traces of explosives, etc. etc.

We're currently in Heathrow, about to board for Copenhagen. I wonder if our bags will make it.

So much for trying to warn the folks about a security problem :-))).

Mogens

stop fixing the symptom!

Nuno Souto - Wed, 2008-08-06 06:02
Long time no post! Sorry folks: been busy with a lot at work and at home. It's Winter outdoor sports season here so all spare time is spent ferrying kids from venue to venue. Anyways: attended a recent Oracle seminar on high availability. Good stuff. And for once, we didn't get another "injection" of "Larry Ellison did or said this or that or the other": Hallelujah!!! Interesting also to see some

Index block split bug in 9i

Yasin Baskan - Wed, 2008-08-06 02:18
In his famous index internals presentation Richard Foote mentions a bug in 9i about index block splits when rows are inserted in the order of the index columns. Depending on when you commit your inserts the index size changes dramatically.

While I was trying to find out why a 3-column primary key index takes more space than its table, I recalled that bug, and it turned out to be the reason for the space issue. The related bug is 3196414 and it is fixed in 10G.

Here is the test case Richard presents in his paper.

SQL> create table t(id number,value varchar2(10));

Table created.

SQL> create index t_ind on t(id);

Index created.

SQL> @mystat split

NAME VALUE
------------------------------ ----------
leaf node splits 0
leaf node 90-10 splits 0
branch node splits 0

SQL> ed
Wrote file afiedt.buf

1 begin
2 for i in 1..10000 loop
3 insert into t values(i,'test');
4 commit;
5 end loop;
6* end;
SQL> r
1 begin
2 for i in 1..10000 loop
3 insert into t values(i,'test');
4 commit;
5 end loop;
6* end;

PL/SQL procedure successfully completed.

SQL> @mystat2 split

NAME VALUE DIFF
------------------------------ ---------- ----------
leaf node splits 35 35
leaf node 90-10 splits 0 0
branch node splits 0 0

SQL> analyze index t_ind validate structure;

Index analyzed.

SQL> select lf_blks, pct_used from index_stats;

LF_BLKS PCT_USED
---------- ----------
36 51

SQL> drop table t;

Table dropped.



I am inserting the rows in the order of the primary key column, so what I expect to see is a 90-10 split each time an index block fills, with the index growing accordingly. But as the statistics show, there were 35 leaf block splits and none of them were 90-10 splits, meaning all were 50-50 splits. I end up with 36 leaf blocks, each roughly half empty.

If we try the same inserts but commit after the loop the result changes.

SQL> create table t(id number,value varchar2(10));

Table created.

SQL> create index t_ind on t(id);

Index created.

SQL> @mystat split

NAME VALUE
------------------------------ ----------
leaf node splits 35
leaf node 90-10 splits 0
branch node splits 0

SQL> ed
Wrote file afiedt.buf

1 begin
2 for i in 1..10000 loop
3 insert into t values(i,'test');
4 end loop;
5 commit;
6* end;
SQL> r
1 begin
2 for i in 1..10000 loop
3 insert into t values(i,'test');
4 end loop;
5 commit;
6* end;

PL/SQL procedure successfully completed.

SQL> @mystat2 split

NAME VALUE DIFF
------------------------------ ---------- ----------
leaf node splits 53 53
leaf node 90-10 splits 18 18
branch node splits 0 0

SQL> analyze index t_ind validate structure;

Index analyzed.

SQL> select lf_blks, pct_used from index_stats;

LF_BLKS PCT_USED
---------- ----------
19 94


In this case we see that there have been 18 block splits, all of them 90-10 splits as expected. We have 19 leaf blocks and all are nearly full. So depending on where the commit is, we can get an index twice the size it needs to be. When I ran the same test in 10G it did not matter where the commit was; I got 19 leaf blocks in both cases.

I did not test whether this problem also occurs when several sessions each insert a single row and commit, as in an OLTP system, but I think it is likely, because we have indexes showing this behavior in our OLTP systems.
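The 36-versus-19 outcome can be reproduced with a toy simulation (a sketch of my own in Python; the leaf capacity of 550 entries per block is an assumed round number chosen to match the observed counts, not a value taken from index_stats):

```python
def leaf_blocks(n_keys, capacity, split_90_10):
    """Simulate inserting strictly increasing keys, tracking only the
    entry count of each leaf block. When the rightmost leaf overflows,
    either do a 90-10 split (old block stays full, new rightmost block
    starts empty) or a 50-50 split (half the entries move across)."""
    blocks = [0]  # entries per leaf block; ordered inserts always hit the last one
    for _ in range(n_keys):
        if blocks[-1] == capacity:
            if split_90_10:
                blocks.append(0)        # old block remains (almost) full
            else:
                half = capacity // 2
                blocks[-1] -= half      # old block is left half empty
                blocks.append(half)
        blocks[-1] += 1
    return len(blocks)

print(leaf_blocks(10000, 550, split_90_10=True))   # -> 19 leaf blocks, nearly full
print(leaf_blocks(10000, 550, split_90_10=False))  # -> 36 leaf blocks, ~half empty
```

With commit-per-row on 9i every split degenerates to 50-50, so the 10,000 ordered keys need 36 leaf blocks instead of 19: the same near-doubling seen in index_stats above.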

Lost Pictures - Recovered Pictures

Pawel Barut - Mon, 2008-08-04 16:09
Written by Paweł Barut
This weekend I spent a great time with my family, and we took a lot of pictures. But when I arrived home and plugged the camera into the PC, it looked like this:
All pictures seemed gone ...
For us computer geeks data loss is not an option, so I started to look for tools that could help me recover the data. In such a situation rule number one is: do not allow anything to be written to the storage. Any additional write could lead to total data loss. I googled for software that could help and found lots of tools that do "FAT Recovery", "Recover Digital Camera", "Data Recovery" - most of them priced at 49-200$. Well, not big money, but still quite a lot when you need to recover 50 pictures. And you never know whether the software is worth it and will solve your problem.
Finally I found really great software that I would like to recommend: PC INSPECTOR™ smart recovery. It ran very fast and recovered almost all pictures; comparing the camera's counter with the number of files, only one picture is missing. This software is freeware, but the authors want us to "Pay What You Want" - and I did, as this software did its job perfectly.
Hope this helps someone else recover their lost pictures.

Paweł

--
Related Articles on Paweł Barut blog:
    Categories: Development

    JavaFX Preview SDK

    Oracle EPM Smart Space - Mon, 2008-08-04 15:11

    In previous posts I have attempted to show how RIAs (Rich Internet Applications) can be used for delivering EPM and BI data to end users.  Until now I have focused mostly on Microsoft's Silverlight and have gotten some grief.  Well, now I have another option: JavaFX has launched its preview SDK, and it is time to see how it compares and what the user experience will be like.

    For this test I plan to see if I can create some Essbase 'gadgets' that can be used in the browser and then dragged to the desktop (a cool feature shown in a JavaFX video).  Comment if you have ideas and stay tuned for posts on my progress.

    Categories: Development

    Tim is getting married...

    Moans Nogood - Sat, 2008-08-02 10:33
    Anette and I are in Denver, Colorado these days, because Tim Gorman is getting married to Lori tonight (Saturday). It's a hot wedding: This is, I think, the 20th day in a row with over 90 degrees Fahrenheit, which makes it the hottest period since 1857 or something like that.

    Tim is very well known in the Oracle community: He spent many years inside Oracle where I had the pleasure of communicating a lot with him on the wonderful HELPKERN list there.

    He also wrote a couple of books and contributed to the Tales of The OakTable book. Here's his website: http://www.evdbt.com/

    Good luck to Lori & Tim! (said the guy on his third marriage...)

    An unusual cause of ORA-12154

    Jared Still - Fri, 2008-08-01 17:30
    The ORA-12154 (and its cousin ORA-12514) have been the bane of many a novice DBA.

    This error is essentially telling you that you have messed up your database naming configuration, whether it be tnsnames, oracle names or OID. The fix is normally quite simple - just correct the naming.

    This is usually quite easily done with netmgr, a tool that is fairly good at its job. The syntax for setting up a tnsname is a little convoluted, and I've fallen back on netmgr a number of times when I can't seem to get it just right by typing the entries in the file.

    There's at least one other way to cause ORA-12154 to pop up and consume more time than you may like to admit. I won't tell how long I played with this...

    The cause is actually due to security policy. While the characters !@#$%^&*()_-=+~` are not normally allowed in Oracle passwords, it is actually quite simple to include them. Simply enclose the password in double quotes.
    alter user scott identified by "th!sat^st";

    This was done on a number of accounts on our databases, all in the name of security.
    These types of passwords have been used without issue for some time now.

    Today was a notable exception.

    After spending some time fiddling with tnsnames.ora files, I realized what the problem actually was.

    Suppose you were to have a password with an '@' in it? Yep, that was the problem.
    First let's see what it looks like from within sqlplus:

    15:41:52 SQL> alter user scott identified by "what@mistake";

    User altered.

    15:42:03 SQL> connect scott/"what@mistake";
    Connected.

    scott SQL> select user from dual;

    USER
    ------------------------------
    SCOTT

    SQL> connect scott/what@mistake
    ERROR:
    ORA-12154: TNS:could not resolve the connect identifier specified



    As long as the connect string was in double quotes, no problem.

    Now let's try it from a linux command line:

    Linux> sqlplus scott/'what@mistake'

    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:42:20 2008

    ERROR:
    ORA-12154: TNS:could not resolve the connect identifier specified


    Linux> sqlplus scott/'what\@mistake'

    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:14 2008

    ERROR:
    ORA-01017: invalid username/password; logon denied


    Linux> sqlplus scott/"what\@mistake"

    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:21 2008

    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

    ERROR:
    ORA-01017: invalid username/password; logon denied


    Linux> sqlplus 'scott/what\@mistake'

    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:47:23 2008

    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

    ERROR:
    ORA-01017: invalid username/password; logon denied

    Linux > sqlplus scott/what\@mistake

    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:48:52 2008

    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

    ERROR:
    ORA-12154: TNS:could not resolve the connect identifier specified

    There doesn't seem to be any way to use a password with the @ character in it, at least not from sqlplus. The same password works fine when used on the command line with a perl script that logs on to the same database and account:

    Linux> DBI_template.pl -database dv11 -username scott -password 'what@mistake'
    X


    The 'X' is the correct output as this script simply selects * from dual.
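The failure mode is easy to picture with a toy model (my own sketch in Python, not Oracle's actual parser): everything after the first unquoted '@' in the connect string is taken as the connect identifier, so the tail of the password gets handed to TNS resolution:

```python
def parse_sqlplus_connect(arg):
    """Toy model of carving up a user/password@alias argument. This is
    NOT Oracle's real parser; it just illustrates why an unescaped '@'
    inside the password is mistaken for the start of the TNS alias."""
    user_pass, at, alias = arg.partition('@')   # first '@' starts the alias
    user, _, password = user_pass.partition('/')
    return user, password, (alias if at else None)

print(parse_sqlplus_connect('scott/what@mistake'))
# -> ('scott', 'what', 'mistake'): 'mistake' goes to TNS lookup -> ORA-12154
```

It also suggests why quoting on the OS command line doesn't help: the shell strips the quotes before sqlplus ever sees the argument, whereas at the SQL*Plus prompt the double quotes survive and the password is taken whole.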

    Lesson learned: do not ever, under any circumstances, use the @ character in the password for an Oracle database account.

    You might think that goes without saying, but it's actually pretty easy to end up with a password containing '@' when passwords are generated as random characters.

    FWIW, this problem manifested itself on Windows as well.
    Categories: DBA Blogs
