Jared Still

Here you will find posts that are mostly about my work as an Oracle DBA. There may occasionally be other topics posted, but by and large this blog will be about Oracle and other geeky IT topics. Perl will likely be mentioned from time to time.

An unusual cause of ORA-12154

Fri, 2008-08-01 17:30
ORA-12154 (and its cousin ORA-12514) has been the bane of many a novice DBA.

This error is essentially telling you that you have messed up your database naming configuration, whether it be tnsnames, Oracle Names, or OID. The fix is normally quite simple: just correct the naming.

This is usually quite easily done with netmgr, a tool that is fairly good at its job. The syntax for setting up a tnsname is a little convoluted, and I've fallen back on netmgr a number of times when I can't seem to get it just right by typing the entries in the file.
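For reference, a minimal tnsnames.ora entry looks something like this (the alias, host, port, and service name below are placeholders, not a real configuration):

# placeholders only - adjust the alias, host, port, and service name
DV11 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dv11.example.com)
    )
  )

A misplaced parenthesis is often enough to make the alias unresolvable, which is one reason netmgr can be the easier route.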

There's at least one other way to cause ORA-12154 to pop up and consume more time than you may like to admit. I won't tell how long I played with this...

The cause here was actually a security policy. While the characters !@#$%^&*()_-=+~` are not normally allowed in Oracle passwords, it is actually quite simple to include them: simply enclose the password in double quotes.
alter user scott identified by "th!sat^st";

This was done on a number of accounts on our databases, all in the name of security.
These types of passwords have been used without issue for some time now.

Today was a notable exception.

After spending some time fiddling with tnsnames.ora files, I realized what the problem actually was.

Suppose you were to have a password with an '@' in it? Yep, that was the problem.
First let's see what it looks like from within sqlplus:

15:41:52 SQL> alter user scott identified by "what@mistake";

User altered.

15:42:03 SQL> connect scott/"what@mistake";
Connected.

scott SQL> select user from dual;

USER
------------------------------
SCOTT

SQL> connect scott/what@mistake
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified



As long as the password was enclosed in double quotes, no problem.

Now let's try it from a linux command line:

Linux> sqlplus scott/'what@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:42:20 2008

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified


Linux> sqlplus scott/'what\@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:14 2008

ERROR:
ORA-01017: invalid username/password; logon denied


Linux> sqlplus scott/"what\@mistake"

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:21 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Linux> sqlplus 'scott/what\@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:47:23 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Linux> sqlplus scott/what\@mistake

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:48:52 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

There doesn't seem to be any way to use a password with the @ character in it, at least not from sqlplus. The same password works fine when used on the command line with a Perl script that logs on to the same database and account:

Linux> DBI_template.pl -database dv11 -username scott -password 'what@mistake'
X


The 'X' is the correct output as this script simply selects * from dual.

Lesson learned: do not ever, under any circumstances, use the @ character in the password for an Oracle database account.

You might think that goes without saying, but it's actually pretty easy to end up with a password containing '@' when passwords are generated from random characters.
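If your password generator runs inside the database, one defensive option (just a sketch using DBMS_RANDOM, not what we actually use) is to strip the troublesome characters as the password is generated:

-- sketch only: generate 20 printable characters, then remove '@' and '/'
-- (the result may be a little shorter than 20 after the translate)
select translate( dbms_random.string('p', 20), 'x@/', 'x' ) generated_password
from dual;

The result may still need double quotes in ALTER USER, but at least the @ problem is avoided.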

FWIW, this problem manifested on Windows as well.
Categories: DBA Blogs

Oracle Locator Express

Wed, 2008-07-23 16:44
If you do much work with the Oracle database on Windows, and you have more than one Oracle home installed, you've probably lamented the fact that the Oracle Home Switcher is no longer included with Oracle.

I can't recall exactly what the tool was called or which version Oracle was at when it ceased to be a part of the installation. I do know that it doesn't work with 10g+.

A little tool called Oracle Locator Express fills this niche nicely, and it does work with 10g. Sorry, I have not yet tried it with 11g.

"Oracle Locator Express"


I've used it for several months without issue, save one minor glitch.

Sometimes Oracle 10g homes are not displayed properly in the list of Oracle homes to choose from. Other than that, no complaints.

Categories: DBA Blogs

Oracle's CPU Patch Naming

Tue, 2008-05-13 19:30
Just today I discovered that Oracle changed the nomenclature used to refer to Critical Patch Updates.

Up through the patch bundle issued in October 2007, these were referred to by the date of release. The October 2007 release was "Oracle Critical Patch Update - October 2007", often shortened to "CPU Oct 2007".

When viewing the documentation for the October 2007 CPU you will see this:

Oracle® Database Critical Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)

The documentation for the January 2008 CPU however looks a bit different:

Oracle® Database Server Version 10.2.0.2 Patch 17
Bundle Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)


Note the difference. The Oct 2007 CPU is referred to by date, while the Jan 2008 CPU is referred to by patch number.

By digging down into the Oct 2007 CPU documentation you will find that it is referred to as "Patch 16".

Doing the same for the "Patch 17" documentation reveals that it is indeed the Jan 2008 CPU.

Why does it matter?

Imagine working with consultants for a major application system that uses Oracle as a database. Imagine also that these consultants know only 'Patch 17', while you know only 'Jan 2008 CPU'.

This can lead to a great deal of confusion, as it did for me (as well as the consultants) recently. Digging down into the docs revealed the change in nomenclature.

It led to (I am telling on myself here) me installing both the October 2007 CPU and the January 2008 CPU on a single database. That wouldn't have been so bad, had they not been applied in the wrong order.
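One thing that can help sort out what has actually been applied to a given database (assuming the post-patch catcpu/catbundle step was run so the registry gets updated) is the DBA_REGISTRY_HISTORY view:

-- shows the CPU / bundle patch history recorded in the database
select action_time, action, namespace, version, id, comments
from dba_registry_history
order by action_time;

It won't fix the naming confusion, but it at least shows which patch IDs a database has already seen.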

Maybe most other Oracle folks already know about this, and I just missed the memo explaining that Oracle would start referring to their CPUs by a different name.

In any case, not knowing this caused a lot of extra work.
Categories: DBA Blogs

ORA-6502 "Bulk Bind: Truncated Bind" error

Thu, 2008-03-13 14:33
ORA-6502 is an error that is apparently not well documented when it occurs in conjunction with the use of PL/SQL tables, and possibly bulk binds.

I ran into this problem recently when some custom code that had worked well for several years suddenly started failing.

As it turns out, and as you will see a little later here, the error is rather easy to fix. What makes it difficult is if you've never encountered an ORA-6502 under these circumstances. There is precious little about it on MetaLink or Google. Writing about it here may be of help to the next innocent victim of ORA-6502 with bulk binds.

My suspicion was that new data loaded into the system from another database had something to do with it; the problem was determining where it was occurring, and why.

The problem went unnoticed for some time due to a quirk in the error handling built into the package. (Tom Kyte would likely agree: "Why Do People Do This?")
Built into the package is detailed profiling information. This is a good thing. Also built into the package is an error handling routine. This might be a good thing to have if the package is only run from the command line as it provides a call stack and the error message.

There is an attribute of this type of error handling that does not lend itself well to use in a job that is run via DBMS_JOB. The error message is printed, but the error is not raised.

The consequence of this type of error handling is that any errors encountered during the execution of the job are hidden from DBMS_JOB. No error is ever raised. After a user reports that production data is not propagating to the system, the DBA (me) checks the DBA_JOBS view and finds that the job is executing successfully.
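In skeleton form, the handler looks something like this (a minimal sketch, not the actual package code; propagate_data is a hypothetical procedure standing in for the real work):

begin
   propagate_data;   -- hypothetical procedure doing the real work
exception
   when others then
      -- print the error and the call stack...
      dbms_output.put_line( dbms_utility.format_error_stack );
      dbms_output.put_line( dbms_utility.format_call_stack );
      -- ...but never re-raise, so DBMS_JOB sees a successful run.
      -- A trailing RAISE; here would let the job fail properly.
end;
/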

Wondering what is going on, the erstwhile DBA (me again) runs the job manually and discovers that it does not actually work at all, yet no error is raised.

The printed error message was "PL/SQL: numeric or value error: Bulk Bind: Truncated Bind".

The PL/SQL package that generated this is 3000 lines long and has some level of abstraction. Translation: hard to debug when you don't know what the code is doing.

Before showing a more relevant example, I will show a simple one. If you found this page via google, this is enough to help you understand what is going on.

The problem is simply a mismatch in the datatype lengths. Obvious once you see it. This is a 9i example (10g would take less code) because the database it was written on was 9.2 at the time.

If you are accustomed to working with the data dictionary the problem will likely be obvious to you.

declare
   type owner_type is table of varchar2(30) index by pls_integer;
   type object_name_type is table of varchar2(20) index by pls_integer;
   type object_type_type is table of varchar2(18) index by pls_integer;

   t_owner       owner_type;
   t_object_name object_name_type;
   t_object_type object_type_type;

   cursor c_obj
   is
   select owner, object_name, object_type
   from dba_objects;

begin
   open c_obj;
   while c_obj%isopen
   loop
      fetch c_obj
      bulk collect into t_owner, t_object_name, t_object_type
      limit 100;

      if c_obj%notfound then
         close c_obj;
      end if;

   end loop;
end;
/

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19

The fix is simple: change length of object_name_type from 20 to 30. The data dictionary (DBA_OBJECTS.OBJECT_NAME) actually specifies this as 128, but this particular database has no objects with names longer than the standard length of 30 characters.
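In the example above, that simply means declaring:

type object_name_type is table of varchar2(30) index by pls_integer;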

Where the mystery of what caused this error becomes difficult to unravel is when there are a few levels of indirection.

There is a staging table that is the temporary home for data from another system. The PL/SQL code gets its specifications for data types from the staging table, not the source table. You probably already know what the problem is.

This example is similar to what was happening.

First create the staging table.
create table tbind_test (
owner varchar2(30) not null,
object_name varchar2(20) not null,
object_type varchar2(18) not null
)
/

Now run a similar, but somewhat different PL/SQL block.

declare
   type owner_type is table of tbind_test.owner%TYPE index by pls_integer;
   type object_name_type is table of tbind_test.object_name%TYPE index by pls_integer;
   type object_type_type is table of tbind_test.object_type%TYPE index by pls_integer;

   t_owner       owner_type;
   t_object_name object_name_type;
   t_object_type object_type_type;

   cursor c_obj
   is
   select owner, object_name, object_type
   from dba_objects;

begin
   open c_obj;
   while c_obj%isopen
   loop
      fetch c_obj
      bulk collect into t_owner, t_object_name, t_object_type
      limit 100;
      if c_obj%notfound then
         close c_obj;
      end if;

   end loop;
end;
/

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19

Here's the offending bit of code:
19      fetch c_obj
20      bulk collect into t_owner, t_object_name, t_object_type
21      limit 100;


Hmmmm. The specification for the PL/SQL tables came straight from the table via %TYPE, so what could be the problem? The problem is that you need to look at the cursor, and compare the datatypes in the table referenced by the cursor with those of the TBIND_TEST table.
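One way to make that comparison (a sketch, assuming you have access to DBA_TAB_COLUMNS) is to look at the two column definitions side by side:

-- compare the OBJECT_NAME definitions of the staging table and the source view
select owner, table_name, column_name, data_type, data_length
from dba_tab_columns
where column_name = 'OBJECT_NAME'
  and ( table_name = 'TBIND_TEST'
        or ( owner = 'SYS' and table_name = 'DBA_OBJECTS' ) )
order by owner, table_name;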

Doing that, you will find that TBIND_TEST.OBJECT_NAME should really be VARCHAR2(128):
 1* alter table tbind_test modify (object_name varchar2(128) )
SQL> /

Table altered.

Now the bulk binds will run without error.
Categories: DBA Blogs

Detect numbers with TRANSLATE() - Take two

Thu, 2008-01-31 04:03
Last week I wrote about using TRANSLATE() to detect numbers in data: Using The TRANSLATE() function...

Andrew Clarke at Radio Free Tooting pointed out the shortcomings of using TRANSLATE() to detect numbers.

As I said earlier, all I needed to do was detect if the characters in a string were all digits or not, and I wanted it to be very fast.

But Andrew's remarks got me thinking - could TRANSLATE() be used to detect more complex numbers?

Here's the short list of requirements:

* Detect integers
* Detect numbers with decimal point ( 4.6, 0.2, .7)
* Detect negative and positive ( leading + or - )
* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )
* Reject anything with alpha text

And commas are considered text: 99,324.1 would be treated as alpha.

If you need to do this on 10g, no problem, as a regular expression can handle it.

First create some test data:

drop table number_test;

create table number_test( alphacol varchar2(20));

insert into number_test values('.5');
insert into number_test values('1');
insert into number_test values('2');
insert into number_test values(' 3');
insert into number_test values('4 ');
insert into number_test values('3.14159');
insert into number_test values('127.0.0.1');
insert into number_test values('+34.45');
insert into number_test values('-54.43');
insert into number_test values('this is a test');
insert into number_test values('th1s is 4 t3st');
insert into number_test values('.');
commit;

Now select only columns where the value is a number:

select alphacol
from number_test
where regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0
order by 1

SQL> /

ALPHACOL
--------------------
3
+34.45
-54.43
1
2
3.14159
4

7 rows selected.

That seems to work.

But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.

You can use the user defined function IS_NUMBER(), which works well, but is very slow if used on large amounts of data.
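A typical implementation of such a function (a sketch only; not necessarily the one used here) just attempts the conversion and traps the error, which is also why it gets slow when called once per row:

-- sketch of a typical IS_NUMBER: let TO_NUMBER() do the work and trap the error
create or replace function is_number( p_str in varchar2 )
return integer
is
   n number;
begin
   n := to_number( p_str );
   return 1;
exception
   when value_error or invalid_number then
      return 0;
end;
/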

Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:

select alphacol, alpha2
from (
   select alphacol,
      -- is there a sign (+ or -)? if so, remove it
      decode( substr(alpha2,1,1),
         '-', substr(alpha2,2),
         '+', substr(alpha2,2),
         alpha2
      ) alpha2
   from (
      select
         alphacol,
         -- remove a single '.' if one exists
         replace(substr(alphacol,1,instr(alphacol,'.')),'.') || substr(alphacol,instr(alphacol,'.')+1) alpha2
      from (
         select trim(alphacol) alphacol
         from number_test
      )
   )
)
where substr('||||||||||||||||||||||||||||||||',1,length(alpha2)) = translate(alpha2,'0123456789','||||||||||')
/



Output from this nasty bit of SQL is identical to that when using REGEXP_INSTR:

ALPHACOL             ALPHA2
-------------------- ----------------------------------------
.5                   5
1                    1
2                    2
3                    3
4                    4
3.14159              314159
+34.45               3445
-54.43               5443

8 rows selected.

To make the TRANSLATE() function do what is needed, a lot of data manipulation had to be done in the SQL. So much work is being done that it now takes nearly as long to run as the IS_NUMBER() function, so there isn't much point in using TRANSLATE().

Runstats results:

SQL> @th5
.047739 secs
.037447 secs
PL/SQL procedure successfully completed.

If nothing else, this was an interesting exercise.
Categories: DBA Blogs
