DBA Blogs

Undocumented Oracle Functions

Jared Still - Wed, 2008-10-01 18:58
Undocumented functions in Oracle are always fun, and you just may find something useful.

The caveat of course is that they are undocumented. They can change without notice between releases or patch levels, so building apps that depend on them may be unwise.

They are often quite useful from a DBA perspective when used in SQL scripts.

Here are a few that I've played with. These are all found in Oracle 10.2.0.3

These functions have one thing in common - they have a prefix of SYS_OP_

Some of these appear to be identical to documented functions.

I don't know of any official explanation for why Oracle has undocumented functions that seem to mimic documented ones. It could be that the source for the undocumented functions is kept separate from that of their documented counterparts, ensuring that functionality will not change for an undocumented function used by the Oracle kernel, even though its documented doppelganger may change in future releases.

In any case, undocumented functions are always interesting, and here are a few to play with.

Just keep in mind that these are undocumented, and as such may change or disappear entirely in future releases.

sys_op_vecand - Return the binary AND of two raw values. Results are in hex

SELECT sys_op_vecand(hextoraw('FF'),hextoraw('FE')) from dual;
FE

16:13:12 SQL>SELECT sys_op_vecand(hextoraw('C3'),hextoraw('7E')) from dual;
42

sys_op_vecor - Return the binary OR of two raw values. Results are in hex

16:14:39 SQL>SELECT sys_op_vecor(hextoraw('FF'),hextoraw('FE')) from dual;
FF

sys_op_vecxor - Return the binary XOR of two raw values. Results are in hex

SQL>SELECT sys_op_vecxor(hextoraw('FF'),hextoraw('FE')) from dual;
01

sys_op_vecbit - Return the value of the bit at position N in a raw value

The return value is 0 or 1

This is an interesting function, as it can be used to determine the value of individual bits in a number. If, for instance, some flags are stored in a bit vector and you need to know the value of bit 3, this is an easy way to do it.

I believe the upper limit on the number of bits is 127.

prompt
define decnum=10
prompt &&decnum dec = 1010 bin

16:16:27 SQL>select 'Bit 0 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),0) from dual;
Bit 0 is 0

16:16:27 SQL>select 'Bit 1 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),1) from dual;
Bit 1 is 1

16:16:27 SQL>select 'Bit 2 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),2) from dual;
Bit 2 is 0

16:16:27 SQL>select 'Bit 3 is ' || sys_op_vecbit(SYS_OP_NUMTORAW(&&decnum),3) from dual;
Bit 3 is 1
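
For instance, if a numeric column is used as a bit vector of flags, a query along these lines could test a single flag directly. This is only a sketch; the table and column names are hypothetical, but the conversion is the same one shown above.

select id
from flag_table
where sys_op_vecbit(SYS_OP_NUMTORAW(flags),3) = 1;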

sys_op_bitvec - This appears to be used to build a bit vector, but I haven't figured out how to use it. Please let me know if you do.

sys_op_map_nonnull - This has been thoroughly discussed on Eddie Awad's blog:
sys_op_map_nonnull discussion
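
As a quick illustration (a sketch based on that discussion, not part of the original post): the function maps NULL to a non-null raw value, so two NULLs compare as equal and the following query returns a row.

SELECT 'same' FROM dual
WHERE sys_op_map_nonnull(NULL) = sys_op_map_nonnull(NULL);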

sys_op_descend - Returns the value as it would be stored in a descending index. Rather than a simple reverse(), the output shows each byte inverted, with an 0xFF byte appended, displayed in hex.

16:32:41 SQL>select sys_op_descend('ABC') from dual;
BEBDBCFF

sys_op_undescend - The inverse of sys_op_descend. Well, almost

17:12:59 SQL>select sys_op_undescend(sys_op_descend('ABC')) from dual
17:12:59 2 /

414243

Notice the output is in the original order, but returned as a raw (the hex codes of the original characters) rather than the original string.
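
If you want the original string back, the raw can be cast with the standard UTL_RAW package (a quick sketch, which should return 'ABC'):

select utl_raw.cast_to_varchar2(sys_op_undescend(sys_op_descend('ABC'))) from dual;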


sys_op_dump - dump the data from an ADT/UDT (Abstract/User-Defined Type)

16:54:13 SQL>CREATE OR REPLACE TYPE my_adt AS OBJECT (
16:54:13 2 last_name varchar2(30),
16:54:13 3 first_name varchar2(30),
16:54:13 4 id number(6)
16:54:13 5 )
16:54:13 6 /
16:54:13 SQL>
16:54:13 SQL>
16:54:13 SQL>select sys_op_dump(my_adt('still','jared',234987)) from dual;

('still','jared',234987)

I don't use objects in the database, but this would likely be useful for someone that does.

sys_op_guid - this appears to be identical to sys_guid

17:00:50 SQL>select sys_guid(), sys_op_guid() from dual;

52BA7CF06BB488ECE040010A7C646200 52BA7CF06BB588ECE040010A7C646200
Categories: DBA Blogs

Point

Sergio's Blog - Tue, 2008-09-30 10:10
Categories: DBA Blogs

Search Through Region Source in Application Express

Sergio's Blog - Tue, 2008-09-30 09:46

As I was working to migrate an application and its theme over to Application Express 3.1.2 from 2.1.2 I needed to move some images around and update the references accordingly. I remembered that the Application Builder has a feature to search through region source, which came in handy.

Here's where you can find it:

I could have used this type of feature to search through templates as well. If anyone from the APEX team is reading this...

Categories: DBA Blogs

Data Modeling with SQL Developer

Jared Still - Fri, 2008-09-26 00:33
Unlike Open World 2007, there were many database-oriented sessions at Oracle Open World 2008. There were many good performance-oriented sessions; so many, in fact, that there were several conflicts in the schedule, and in several time slots I had to pick one from multiple choices.

One of the more interesting sessions (for me anyway) at OOW 2008 was a session not on database performance, but on data modeling.

The SQL Developer team has been hard at work creating a data modeling plugin for SQL Developer.

This appears to be a very full-featured tool, and looks like the answer to the question "What will replace Oracle Designer?"

While Designer is much more than a data modeling tool, that is one of the core features of the tool, and many folks have used it just for its data modeling capabilities.

The new ERD tool is no lightweight; it is quite full featured from a database modeling and design standpoint.

Some of the features included:
  • Domains generated from data
  • Real logical and physical modeling, not just one model with 2 different names.
  • The ability to reverse engineer several schemas at once and have them appear not only as a master model, but each individually as a sub model.
  • Sub model views may be created on sets of objects as well.
  • The tool can determine all tables related to a table through FKs and create a sub model based on that set.
  • Two forms of notation: Barker and IE
  • Many options for displaying sub/super types (D2k fans rejoice!)
  • Glossary - a predefined set of names. These can be used to enforce naming conventions for entities, tables and relations.
  • Schema comparison with DDL change generation
Also of note: in addition to Oracle, schemas can be imported from SQL Server, DB2, or any ODBC-connected database.

The repository can be either file based, or database based.
There are two versions of the tool, a plugin to SQL Developer, and a stand alone version. The stand alone version will use only the file based repository.

Now for the bad news.

The release date has not been established. The only release information given was 'sometime in the 2009 calendar year'. As the database repository has not yet been designed, the long time to release is understandable.

And finally, licensing has not been established. It might be free, it might not. If not, at least we can hope it will be reasonably priced. Personally, I think having a decent data modeling tool that comes free of charge with SQL Developer would contribute to higher quality databases, as more people would use a real database design tool rather than a drawing tool.

There was probably more that didn't make it into my notes.
Suffice it to say this is a great development for data modelers and database designers.

Following are a few screen shots taken during the presentation.





Categories: DBA Blogs

oracle-validated RPM also available on Oracle Enterprise Linux 5 Update 2 media

Sergio's Blog - Thu, 2008-09-11 00:23

In the comments on my previous post on the Oracle validated RPM, Frank points out that this RPM is also available on the Oracle Enterprise Linux 5 installation media, starting with Update 2. You can download Oracle Enterprise Linux from edelivery.

I learned something new today!

Categories: DBA Blogs

Where to Download Oracle VM Templates

Sergio's Blog - Mon, 2008-09-08 03:28

Wim recently blogged about Oracle VM Templates, announced a few weeks ago. If you're interested in them, you can download them from edelivery.oracle.com/oraclevm.

[Screenshot: ovm_edp.png]

Categories: DBA Blogs

cx_Oracle and DBMS_APPLICATION_INFO

Sergio's Blog - Fri, 2008-08-29 05:51

Here's another post from the old blog, harvested from archive.org

The other day I was playing with some Python and cx_Oracle, a Python module for connecting to Oracle databases. While cx_Oracle seems to work great, I found the documentation light on examples. Because I follow Tom Kyte's advice closely, I was trying to instrument my code with some calls to DBMS_APPLICATION_INFO. As it turns out, the cx_Oracle connection object has write-only attributes to set module, action, and clientinfo.

Here's a quick example of that in action (warning: newbie Python ahead)

-bash-3.00$ python
Python 2.3.4 (#1, Feb  2 2005, 11:44:49) 
[GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>> con=cx_Oracle.connect ("hr/hr")
>>> cur=con.cursor()
>>> con.module="python module"
>>> con.action="some big query"

>>> cur.execute ("select 1 from dual")
[<cx_Oracle.NUMBER with value None>]
>>> con.action=""
>>> con.module=""
>>> con.close()

Before closing the connection with con.close(), this is what was observed in the database:

SQL> select sid, module, action from v$session where module is not null 
  2  /
       SID MODULE                                           ACTION
---------- ------------------------------------------------ --------------------------------
        22 sqlplus@captain.us.oracle.com (TNS V1-V3)
        26 python module                                    some big query
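
The clientinfo attribute mentioned above works the same way; when set, it shows up in the CLIENT_INFO column of v$session. A query like this (a sketch, not from the original session) would show it alongside module and action:

SQL> select sid, module, action, client_info
  2  from v$session
  3  where module is not null;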

Categories: DBA Blogs

oracle-validated RPMs now available outside of ULN

Sergio's Blog - Thu, 2008-08-28 09:10

For those of you who'd like to use the oracle-validated RPM, but don't have access to ULN, here they are:

Categories: DBA Blogs

AWR Usage Poll

Jared Still - Mon, 2008-08-18 14:04
A number of recent threads in the Oracle-L list have made it pretty clear that Automated Workload Repository (AWR) is a tool that you are expected to use when troubleshooting a database problem.

Never mind the fact that AWR is still a product that is licensed separately from the database, and that a large segment of the Oracle DBA population doesn't seem to realize that. Or that Active Session History (ASH) is part of AWR, and falls under the same license restrictions.

So I conducted a poll regarding the use of AWR: AWR Usage Poll. If you haven't taken the poll, please do so.

While the web site does provide a chart of results, those results don't include the extra comments made by poll takers. You may or may not be able to download all the results; I'm not sure if that is restricted to the poll owner.

Nonetheless, I have compiled the results from 100 or so respondents into an Excel workbook, along with a few charts. You may find some of the additional comments of interest as well. AWR Usage Results

Draw your own conclusions regarding these results. I think it interesting that AWR appears to be quite widely used. Personally I fall into the category of not using it because of the expense. I may work on changing that for a couple of key servers, as AWR is not that expensive, but in a small shop, spending $20k on a feature that is not often needed is sometimes a hard sell.

One question I purposely left out was "Do you use AWR even though you have not licensed it"? While it might satisfy the curiosity of some (including me) I didn't want to give any Oracle sales people (or Oracle attorneys for that matter) any reasons to contact me regarding the poll.

In retrospect a good question would have been: "Did you realize AWR/ASH is a separately licensed product?". Too late to add that now, but bringing that up quite often leads to lively discussion.

Another interesting bit was that a few people have extended STATSPACK in some way, even using it on Oracle 10g+. One even mentioned the excellent repository of statspack scripts assembled by Tim Gorman. Tim Gorman's Statspack Scripts
Categories: DBA Blogs

An unusual cause of ORA-12154

Jared Still - Fri, 2008-08-01 17:30
The ORA-12154 (and its cousin ORA-12514) have been the bane of many a novice DBA.

This error is essentially telling you that you have messed up your database naming configuration, whether it be tnsnames, oracle names or OID. The fix is normally quite simple - just correct the naming.

This is usually quite easily done with netmgr, a tool that is fairly good at its job. The syntax for setting up a tnsname is a little convoluted, and I've fallen back on netmgr a number of times when I can't seem to get it just right by typing the entries in the file.

There's at least one other way to cause ORA-12154 to pop up and consume more time than you may like to admit. I won't tell how long I played with this...

The cause is actually due to security policy. While the characters !@#$%^&*()_-=+~` are not normally allowed in Oracle passwords, it is actually quite simple to include them. Simply enclose the password in double quotes.
alter user scott identified by "th!sat^st";

This was done on a number of accounts on our databases, all in the name of security.
These types of passwords have been used without issue for some time now.

Today was a notable exception.

After spending some time fiddling with tnsnames.ora files, I realized what the problem actually was.

Suppose you were to have a password with an '@' in it? Yep, that was the problem.
First let's see what it looks like from within sqlplus:

15:41:52 SQL> alter user scott identified by "what@mistake";

User altered.

15:42:03 SQL> connect scott/"what@mistake";
Connected.

scott SQL> select user from dual;

USER
------------------------------
SCOTT

SQL> connect scott/what@mistake
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified



As long as the connect string was in double quotes, no problem.

Now let's try it from a linux command line:

Linux> sqlplus scott/'what@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:42:20 2008

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified


Linux> sqlplus scott/'what\@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:14 2008

ERROR:
ORA-01017: invalid username/password; logon denied


Linux> sqlplus scott/"what\@mistake"

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:46:21 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Linux> sqlplus 'scott/what\@mistake'

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:47:23 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Linux > sqlplus scott/what\@mistake

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 1 15:48:52 2008

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

There doesn't seem to be any method to use a password with the @ character in it, at least not from sqlplus. The same password works fine when used on the command line with a Perl script that logs on to the same database and account:

Linux> DBI_template.pl -database dv11 -username scott -password 'what@mistake'
X


The 'X' is the correct output as this script simply selects * from dual.

Lesson learned, do not ever, under any circumstances, use the @ character in the password for an Oracle database account.

You might think that goes without saying, but it's actually pretty easy to get a password with '@' in it when the passwords are generated random characters.

FWIW, this problem manifested on Windows as well.
Categories: DBA Blogs

Will the Optimizer development team be at Oracle Open World?

Inside the Oracle Optimizer - Thu, 2008-07-24 17:50
The largest gathering of Oracle customers, partners, developers, and technology enthusiasts will happen in September when Oracle will host its annual user conference Open World in San Francisco and the Optimizer development group will be there! You will have two opportunities to meet the team -- attend the technical presentation "Inside the 11g Optimizer - Removing the mystery" on Tuesday morning at 9am or stop by the Oracle demo grounds (in Moscone West) to see all of the demos for the 11g new features and ask the development team any burning questions you may have!

Categories: DBA Blogs, Development

Oracle Locator Express

Jared Still - Wed, 2008-07-23 16:44
If you do much work with the Oracle database on Windows, and you have 1+N Oracle homes installed, you've probably lamented the fact that the Oracle Home Switcher is no longer included with Oracle.

I can't recall exactly what the tool was called or which version Oracle was at when it ceased to be a part of the installation. I do know that it doesn't work with 10g+.

A little tool called Oracle Locator Express fills this niche nicely, and it does work with 10g. Sorry, I have not yet tried it with 11g.

"Oracle Locator Express"


I've used it for several months without issue, save one minor glitch.

Sometimes Oracle 10g homes are not displayed properly in the list of Oracle homes to choose from. Other than that, no complaints.

Categories: DBA Blogs

How to get mail working with APEX packaged with 11g

Pankaj Chandiramani - Tue, 2008-07-22 22:44

Hi. Recently I installed an 11g database on one of my systems. As it comes with APEX, I decided to move my APEX apps (which were in 10g) to it. When I moved the apps I got everything working except mail, which gave the error: ORA-24247: network access denied by access control list (ACL).

The issue is with the ACL: by default, the ability to interact with network services is disabled in Oracle Database 11g. To enable it, I needed to run the procedure shown below.

DECLARE
  ACL_PATH VARCHAR2(4000);
  ACL_ID   RAW(16);
BEGIN
  -- Look for the ACL currently assigned to '*' and give FLOWS_030000
  -- the "connect" privilege if FLOWS_030000 does not have the privilege yet.

  SELECT ACL INTO ACL_PATH FROM DBA_NETWORK_ACLS
   WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL;

  -- Before checking the privilege, make sure that the ACL is valid
  -- (for example, does not contain stale references to dropped users).
  -- If it does, the following exception will be raised:
  --
  --   ORA-44416: Invalid ACL: Unresolved principal 'FLOWS_030000'
  --   ORA-06512: at "XDB.DBMS_XDBZ", line ...
  --
  SELECT SYS_OP_R2O(extractValue(P.RES, '/Resource/XMLRef')) INTO ACL_ID
    FROM XDB.XDB$ACL A, PATH_VIEW P
   WHERE extractValue(P.RES, '/Resource/XMLRef') = REF(A) AND
         EQUALS_PATH(P.RES, ACL_PATH) = 1;

  DBMS_XDBZ.ValidateACL(ACL_ID);

  IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH, 'FLOWS_030000',
                                            'connect') IS NULL THEN
    DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH,
                                         'FLOWS_030000', TRUE, 'connect');
  END IF;

EXCEPTION
  -- When no ACL has been assigned to '*'.
  WHEN NO_DATA_FOUND THEN
    DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('power_users.xml',
      'ACL that lets power users to connect to everywhere',
      'FLOWS_030000', TRUE, 'connect');
    DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('power_users.xml', '*');
END;
/
COMMIT;
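
To verify the result, the ACL assignment and the granted privilege can be checked afterwards; this is just a quick sketch using the standard 11g dictionary views:

SELECT HOST, LOWER_PORT, UPPER_PORT, ACL FROM DBA_NETWORK_ACLS;

SELECT ACL, PRINCIPAL, PRIVILEGE, IS_GRANT
  FROM DBA_NETWORK_ACL_PRIVILEGES
 WHERE PRINCIPAL = 'FLOWS_030000';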

Finally got it working :)

Categories: DBA Blogs

TechNight All India Oracle User Group (AIOUG)

Virag Sharma - Sat, 2008-07-19 11:48

AIOUG held its first-ever technical session in India at Hyderabad on Friday, July 18th 2008, between 5:00 PM and 8:00 PM. The session started with a quick introduction to AIOUG and its mission and goals by Murali Vallath. Later, Phani Arega gave a technical session on "Efficient SQL Programming - Some Tricks and Tips" and Vivek Sharma presented "Real Time Performance Tuning".

Both technical sessions were quite good, and the speakers shared lots of real-life examples in their presentations. Here are some photos from the AIOUG TechNight.

Murali Vallath

Phani Arega






Categories: DBA Blogs

Why are some of the tables in my query missing from the plan?

Inside the Oracle Optimizer - Thu, 2008-06-26 17:35
We apologize for our brief hiatus from blogging. We've been busy working on improvements to the optimizer.

In 10gR2, we introduced a new transformation, table elimination (alternately called "join elimination"), which removes redundant tables from a query. A table is redundant if its columns are only referenced in join predicates, and it is guaranteed that those joins neither filter nor expand the resulting rows. There are several cases where Oracle will eliminate a redundant table. We will discuss each case in turn.

Primary Key-Foreign Key Table Elimination

Starting in 10gR2, the optimizer eliminates tables that are redundant due to primary key-foreign key constraints. Consider the following example tables:

create table jobs
(
job_id NUMBER PRIMARY KEY,

job_title VARCHAR2(35) NOT NULL,
min_salary NUMBER,
max_salary NUMBER
);

create table departments
(
department_id NUMBER PRIMARY KEY,
department_name VARCHAR2(50)
);
create table employees
(
employee_id NUMBER PRIMARY KEY,
employee_name VARCHAR2(50),
department_id NUMBER REFERENCES departments(department_id),
job_id NUMBER REFERENCES jobs(job_id)
);

and the query:

select e.employee_name
from employees e, departments d
where e.department_id = d.department_id;

In this query, the join to departments is redundant. The only column referenced in the query appears in the join predicate, and the primary key-foreign key constraint guarantees that there is at most one match in departments for each row in employees. Hence, the query is equivalent to:

select e.employee_name
from employees e
where e.department_id is not null;


The optimizer will generate this plan for the query:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."DEPARTMENT_ID" IS NOT NULL)

Note that the IS NOT NULL predicate is not necessary if the column has a NOT NULL constraint on it.
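
For instance (a sketch, not from the original example scripts), declaring the column NOT NULL makes the filter unnecessary, and the plan becomes a plain full scan of EMPLOYEES with no filter predicate:

alter table employees modify department_id not null;

select e.employee_name
from employees e, departments d
where e.department_id = d.department_id;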

Starting in 11gR1, the optimizer will also eliminate tables that are semi-joined or anti-joined. Consider the following query:

select e.employee_id, e.employee_name
from employees e
where not exists (select 1
from jobs j
where j.job_id = e.job_id);


Since employees.job_id is a foreign key to jobs.job_id, any non-null value in employees.job_id must have a match in jobs. So only employees with null values for employees.job_id will appear in the result. Hence, this query is equivalent to:

select e.employee_id, e.employee_name
from employees e
where job_id is null;

and the optimizer can choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."JOB_ID" IS NULL)

Suppose employees.job_id has a NOT NULL constraint:

alter table employees modify job_id not null;

In this case, the query cannot return any rows, and the optimizer could choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 FILTER
2 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NULL IS NOT NULL)

The "NULL IS NOT NULL" filter is a false constant predicate, that will prevent the table scan from even taking place.

Also in 11gR1, the optimization became available for ANSI compliant joins. For this query:

select employee_name
from employees e inner join jobs j
on e.job_id = j.job_id;

the optimizer can eliminate JOBS and produce this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Outer Join Table Elimination

In 11gR1, a new form of table elimination was introduced for outer joins, which does not require PK-FK constraints. For the example, we require a new table and an addition to EMPLOYEES:

create table projects
(
project_id NUMBER UNIQUE,
deadline DATE,
priority NUMBER
);

alter table employees add project_id number;

Now consider a query that outer joins employees and projects:

select e.employee_name, e.project_id
from employees e, projects p
where e.project_id = p.project_id (+);

The outer join guarantees that every row in employees will appear at least once in the result. The unique constraint on projects.project_id guarantees that every row in employees will match at most one row in projects. Together, these two properties guarantee that every row in employees will appear in the result exactly once. Since no other columns from projects are referenced, projects can be eliminated, and the optimizer can choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Why Would I Ever Write Such a Query?

All of the example queries in this post are very simple, and one would be unlikely to write a query where the join is so obviously unnecessary. There are many real world scenarios where table elimination may be helpful, including machine-generated queries and elimination of tables in views. For example, a set of tables might be exposed as a view, which contains a join. The join may be necessary to retrieve all of the columns exposed by the view. But some users of the view may only access a subset of the columns, and in this case, the joined table can be eliminated.

For example, consider the view:

create view employee_directory_v as
select e.employee_name, d.department_name, j.job_title
from employees e, departments d, jobs j
where e.department_id = d.department_id
and e.job_id = j.job_id;

This view might be exposed to a simple employee directory application. To look up employee names by department, the application issues a query:

select employee_name
from employee_directory_v
where department_name = 'ACCOUNTING';

Since the job_title column is not referenced, jobs can be eliminated from the query, and the optimizer can choose this plan:

--------------------------------------------
Id Operation Name
--------------------------------------------
0 SELECT STATEMENT
* 1 HASH JOIN
2 TABLE ACCESS FULL EMPLOYEES
* 3 TABLE ACCESS FULL DEPARTMENTS
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
3 - filter("D"."DEPARTMENT_NAME"='ACCOUNTING')

Known Limitations

There are currently a few limitations of table elimination:
  • Multi-column primary key-foreign key constraints are not supported.
  • Referring to the join key elsewhere in the query will prevent table elimination. For an inner join, the join keys on each side of the join are equivalent, but if the query contains other references to the join key from the table that could otherwise be eliminated, this prevents elimination. A workaround is to rewrite the query to refer to the join key from the other table (we realize this is not always possible).
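
To illustrate that last limitation (a sketch, not from the original post): selecting the join key from the table that could be eliminated blocks the transformation, while selecting the equivalent key from the other table does not.

-- Referencing d.department_id prevents DEPARTMENTS from being eliminated:
select e.employee_name, d.department_id
from employees e, departments d
where e.department_id = d.department_id;

-- Rewriting to use the equivalent key from EMPLOYEES allows elimination:
select e.employee_name, e.department_id
from employees e, departments d
where e.department_id = d.department_id;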
Categories: DBA Blogs, Development

"Demo It To Oracle" (DITO) - CamStudio Help

Pankaj Chandiramani - Sun, 2008-05-25 22:03

Now you can record and share the issues you are facing with Oracle Support.

https://metalink.oracle.com/metalink/plsql/f?p=130:14:7679480494464650902::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,11.1,1,1,0,helvetica

It's a nice way to share the error, show the support folks how it occurred, and make reproducing it faster.

Categories: DBA Blogs

upcoming posts

Pankaj Chandiramani - Thu, 2008-05-22 20:37

It's been a long time since I posted anything; the reason being that I have started in new directions, working with OIM, OID and related products these days.

So I will be writing a post on installing and configuring OIM in the next couple of days, and will keep posting regularly on the topic.



 

Categories: DBA Blogs

Oracle's CPU Patch Naming

Jared Still - Tue, 2008-05-13 19:30
Just today I discovered that Oracle changed the nomenclature used to refer to Critical Patch Updates.

Up through the patch bundle issued in October 2007, these were referred to by the date of release. The October 2007 release was "Oracle Critical Patch Update - October 2007", often shortened to "CPU Oct 2007".
Oracle Critical Patch Update - October 2007

When viewing the documentation for the October 2007 CPU you will see this:

Oracle® Database Critical Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)

The documentation for the January 2008 CPU however looks a bit different:

Oracle® Database Server Version 10.2.0.2 Patch 17
Bundle Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)


Note the difference. The Oct 2007 CPU is referred to by date, while the Jan 2008 CPU is referred to by patch number.

By digging down into the Oct 2007 CPU documentation you will find that it is referred to as "Patch 16".

Doing the same for the "Patch 17" documentation reveals that it is indeed the Jan 2008 CPU.

Why does it matter?

Imagine working with consultants for a major application system that uses Oracle as a database. Imagine also that these consultants know only 'Patch 17', while you know only 'Jan 2008 CPU'.

This can lead to a great deal of confusion, as it did for me (as well as the consultants) recently. Digging down into the docs revealed the change in nomenclature.

It led to me (I am telling on myself here) installing both the October 2007 CPU and the January 2008 CPU on a single database. This wouldn't have been so bad had they not been applied in the wrong order.

Maybe most other Oracle folks already know about this, and I just missed the memo explaining that Oracle would start referring to their CPU's by a different name.

In any case, not knowing this caused a lot of extra work.
Categories: DBA Blogs

DBMS_STATS Enhancements in Oracle 11g Database

Virag Sharma - Wed, 2008-04-30 07:14

DBMS_STATS Enhancements in Oracle 11g Database
Virag Sharma virag123@gmail.com

We know that in 10g and 11g there is an automatic job that collects database statistics based on certain preference settings. Let's have a look at what these preferences are, what their default values are, and how DBMS_STATS in 11g differs from 10g.

Changing Preferences for Statistics

Preference name               Default value
---------------------------   -------------------------
CASCADE                       AUTO
DEGREE                        AUTO
ESTIMATE_PERCENT              AUTO
METHOD_OPT                    FOR ALL COLUMNS SIZE AUTO
NO_INVALIDATE                 AUTO
GRANULARITY                   AUTO
PUBLISH (new in 11g)          TRUE
INCREMENTAL (new in 11g)      FALSE
STALE_PERCENT (new in 11g)    10



#
# In Oracle 10g
#

$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Mar 6 19:04:57 2008

SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL>

SQL> SELECT dbms_stats.get_param('method_opt') FROM dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> SELECT dbms_stats.get_param('GRANULARITY') FROM dual;

DBMS_STATS.GET_PARAM('GRANULARITY')
--------------------------------------------------------------------------------
AUTO

SQL> SELECT dbms_stats.get_param('NO_INVALIDATE') FROM dual;

DBMS_STATS.GET_PARAM('NO_INVALIDATE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

SQL> SELECT dbms_stats.get_param('DEGREE') FROM dual;

DBMS_STATS.GET_PARAM('DEGREE')
--------------------------------------------------------------------------------
NULL

SQL> SELECT dbms_stats.get_param('CASCADE') FROM dual;

DBMS_STATS.GET_PARAM('CASCADE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_CASCADE



SQL> exec dbms_stats.set_param('ESTIMATE_PERCENT','100');

PL/SQL procedure successfully completed.


SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
---------------------------------------------------------------------
100

SQL> exec dbms_stats.RESET_PARAM_DEFAULTS();

PL/SQL procedure successfully completed.

SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
---------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE


GET_PARAM, SET_PARAM, and RESET_PARAM_DEFAULTS are obsolete in Oracle 11g.

In their place, use the following procedures: GET_PREFS, SET_GLOBAL_PREFS, and RESET_GLOBAL_PREF_DEFAULTS.

SQL> Select dbms_stats.GET_PREFS('CASCADE') from dual;

DBMS_STATS.GET_PREFS('CASCADE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_CASCADE

SQL> Select dbms_stats.GET_PREFS('DEGREE') from dual;

DBMS_STATS.GET_PREFS('DEGREE')
--------------------------------------------------------------------------------
NULL

SQL> Select dbms_stats.GET_PREFS('ESTIMATE_PERCENT') from dual;

DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL> Select dbms_stats.GET_PREFS('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> Select dbms_stats.GET_PREFS('NO_INVALIDATE') from dual;

DBMS_STATS.GET_PREFS('NO_INVALIDATE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

SQL> Select dbms_stats.GET_PREFS('GRANULARITY') from dual;

DBMS_STATS.GET_PREFS('GRANULARITY')
--------------------------------------------------------------------------------
AUTO

SQL> Select dbms_stats.GET_PREFS('PUBLISH') from dual;

DBMS_STATS.GET_PREFS('PUBLISH')
--------------------------------------------------------------------------------
TRUE

SQL> Select dbms_stats.GET_PREFS('INCREMENTAL') from dual;

DBMS_STATS.GET_PREFS('INCREMENTAL')
--------------------------------------------------------------------------------
FALSE

SQL> Select dbms_stats.GET_PREFS('STALE_PERCENT') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT')
--------------------------------------------------------------------------------
10

In Oracle 11g you can set preferences not only at the database level but also at the global, schema, and table level. This gives the DBA much more control; for example, you may want histograms always collected on one table, or want to set METHOD_OPT => 'FOR ALL COLUMNS SIZE 254' on certain tables.

The procedures for setting preferences at the global, database, schema, and table level are given below:

1. SET_GLOBAL_PREFS

2. SET_DATABASE_PREFS

3. SET_SCHEMA_PREFS

4. SET_TABLE_PREFS

  • Preferences set at the global level apply to new objects, or to objects for which no preference is available.
  • Database-level preferences are applied to all objects in the database, excluding the tables owned by Oracle. Those tables can be included by passing TRUE for the add_sys parameter of the procedure.
  • If you set a preference value to NULL, it is reset to the Oracle default value.

#
# Test case
#

SQL> create table test1(abc number);

Table created.

SQL> create table test2(abc number);

Table created.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
10

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
10


SQL> execute dbms_stats.set_table_prefs('SYS', 'TEST2', 'STALE_PERCENT', '35');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> execute dbms_stats.set_global_prefs('STALE_PERCENT', '20');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
20

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> exec dbms_stats.SET_DATABASE_PREFS('STALE_PERCENT', '15');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
20

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35


SQL> exec dbms_stats.set_database_prefs('STALE_PERCENT','30');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'BUG', 'BUG_USER')
from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','BUG','BUG_USER')
--------------------------------------------------------------------------------
30

SQL> exec dbms_stats.SET_DATABASE_PREFS('STALE_PERCENT', '15');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'BUG', 'BUG_USER')
from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','BUG','BUG_USER')
--------------------------------------------------------------------------------
15

Published and Pending Statistics

Statistics are published by default when the system, a user, or a job collects them. In Oracle 11g we have the option to put newly collected statistics in a pending state and publish them later, once we are satisfied with performance testing based on the new statistics.

In 10g we have faced performance issues when collecting stats on an OLTP (24x7) database, probably because the stats of one index or table were published before the others. So collect stats in the pending state, test them, and once satisfied with the testing, publish them.

SQL> col name form a40
SQL> col VALUE form a6
SQL> col ISSES_MODIFIABL form a6
SQL> col ISSYS_MODIFIABLE form a12
SQL> select name,value,isses_modifiable,issys_modifiable
from v$parameter
where name='optimizer_use_pending_statistics' ;

NAME VALUE ISSES_MODIFIABL ISSYS_MODIFI
---------------------------------------- ------ --------------- ------------
optimizer_use_pending_statistics FALSE TRUE IMMEDIATE

#
# Test Case
#

# By default Publish is set TRUE
#

SQL> select dbms_stats.get_prefs('PUBLISH') publish from dual
PUBLISH
---------
TRUE

SQL> select dbms_stats.get_prefs('PUBLISH', 'VIRSHARM', 'TEST1') publish
from dual;

PUBLISH
------------
TRUE

#
# Setting Publish false for TEST1 table
#

SQL> exec dbms_stats.set_table_prefs('VIRSHARM', 'TEST1', 'PUBLISH', 'false');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('PUBLISH', 'VIRSHARM', 'TEST1') publish
from dual;


PUBLISH
-------------
FALSE

SQL> execute dbms_stats.gather_table_stats('VIRSHARM', 'TEST1');

PL/SQL procedure successfully completed.

SQL> select table_name, last_analyzed from user_col_pending_stats
where table_name ='TEST1'

TABLE_NAME LAST_ANALYZED
---------- ---------------
TEST1 30-APR-08

SQL> select table_name, last_analyzed from user_ind_pending_stats
where table_name ='TEST1';

no rows selected

SQL> select table_name, last_analyzed from user_col_pending_stats
where table_name ='TEST1';

TABLE_NAME LAST_ANALYZED
---------- ---------------
TEST1 30-APR-08

#
# Following views will have no record for table TEST1
# because stats not published
#

SQL> select table_name, last_analyzed from user_tables
where table_name = 'TEST1';

TABLE_NAME LAST_ANALYZED
------------------------------ -------------------
TEST1

SQL> select index_name, last_analyzed from user_indexes
where table_name ='TEST1';

no rows selected

SQL> select column_name, last_analyzed from user_tab_columns
where table_name='TEST1' ;

COLUMN_NAME LAST_ANALYZED
------------ ---------------
ABC


#
# Testing
#

SQL> ALTER SESSION SET optimizer_use_pending_statistics =TRUE;

Session altered.

#
# OR export / import STATS
#

SQL> exec dbms_stats.create_stat_table('VIRSHARM','STATS');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.export_pending_stats(tabname=>'TEST1',stattab=>'STATS');

PL/SQL procedure successfully completed.

#
# Use expdp /impdp ot exp/imp to import stats in TEST/QA database
#

SQL> exec dbms_stats.import_table_stats(ownname=>'VIRSHARM', tabname=>'TEST1', stattab=>'STATS');

SQL> select column_name, last_analyzed from user_tab_columns
     where table_name='TEST1';

#
# Published stats
# Publish the Stats to Data Dictionary for Optimizer Usage
#

SQL> exec dbms_stats.publish_pending_stats(tabname=>'TEST1');

PL/SQL procedure successfully completed.

If the stats are published and performance becomes worse, you can restore an old version of the stats. Oracle manages the historical statistics repository, purging the statistics on a regular basis, by default every 31 days.
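
For example, an older version of a table's statistics can be brought back with DBMS_STATS.RESTORE_TABLE_STATS. This is just a sketch; the owner, table, and timestamp are placeholders.

SQL> exec dbms_stats.restore_table_stats('VIRSHARM', 'TEST1', systimestamp - interval '1' day);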

By default Oracle keeps statistics for 31 days, after which they are purged. The retention can be increased using the following:

SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION ( 60 );

PL/SQL procedure successfully completed.
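
You can also check the current retention and the oldest point in time to which statistics can be restored:

SQL> select dbms_stats.get_stats_history_retention from dual;

SQL> select dbms_stats.get_stats_history_availability from dual;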

Categories: DBA Blogs

11g RAC Database hung, or appears to be hung

Virag Sharma - Thu, 2008-04-24 06:07

11g RAC Database hung, or appears to be hung

Around lunch time, 1:00 PM, the database hung. When we tried to log in as sysdba, our sqlplus session also hung; we were not able to log in on either node of the RAC.
We decided to kill all the remote connections so that we would be able to log in:

ps -aef |grep LOCAL=NO |awk '{ print $2}' |xargs kill -9

When the database is hung and you are not able to log in, the following procedure can be used
to create a hang analysis dump:

sqlplus /nolog

SQL> set _prelim on
SQL> connect / as sysdba
SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit
Statement processed.
SQL> oradebug -g all hanganalyze 3
SQL> oradebug -g all hanganalyze 3
Cycle 1: (1/1965)--(1/1839)
Hang Analysis in /u04/oraout/abcap/11gdiag/diag/rdbms/abcap/abc2ap/trace/abc2ap_diag_9269.trc

We also generated an ASH report for the duration of the hang.
Here is what the hang analysis shows.

#
# The session ids below (1839 and 1965) are the ones of interest for further analysis
#

Chains most likely to have caused the hang:
[a] Chain 1 Signature: 'row cache lock'<='row cache lock' (cycle) Chain 1 Signature Hash: 0x75bdd0c

===============================================================================
Cycles:

-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (abcap.abc1ap)
os id: 27244
process id: 343, oracle@abc232
session id: 1839
session serial #: 12
}
is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x7
p2: 'mode'=0x0
p3: 'request'=0x5
time in wait: 0.526566 secs
heur. time in wait: 18.534318 secs
timeout after: 2.473434 secs
wait id: 49
blocking: 2 sessions
current sql:
short stack: <-ksedsts()+285<-ksdxfstk()+22<-ksdxcb()+1599<-sspuser()+102<-semtimedop()+36<-sskgpwwait()+211<-skgpwwait()+104<-ksliwat()+1256<-kslwaitctx()+135<-kqrigt()+1592<-kqrLockAndPinPo()+420<-kqrpre1()+915<-kqrpre()+57<-kziasfc()+235<-kpolnb()+5279<-kpoauth()+582<-opiodr()+991<-ttcpip()+1066<-opitsk()+1275<-opiino()+882<-opiodr()+991<-opidrv()+462<-sou2o()+91<-opimai_real()+109<-ssthrdmain()+142<-main()+116<-__libc_start_main()+211<-_start()+33 wait history: 1. event: 'row cache lock'
wait id: 48 p1: 'cache id'=0x7
time waited: 3.001908 secs p2: 'mode'=0x0
p3: 'request'=0x5
2. event: 'row cache lock'
wait id: 47 p1: 'cache id'=0x7
time waited: 3.001615 secs p2: 'mode'=0x0
wait id: 47 p1: 'cache id'=0x7
time waited: 3.001615 secs p2: 'mode'=0x0
p3: 'request'=0x5
3. event: 'row cache lock'
wait id: 46 p1: 'cache id'=0x7
time waited: 3.000437 secs p2: 'mode'=0x0
p3: 'request'=0x5
}
and is blocked by
=> Oracle session identified by:
{
instance: 1 (abcap.abc1ap)
os id: 27941
process id: 125, oracle@abc232
session id: 1965
session serial #: 476
}
which is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x7
p2: 'mode'=0x0
p3: 'request'=0x3
time in wait: 0.462402 secs
heur. time in wait: 18.536906 secs
timeout after: 2.537598 secs
wait id: 30
blocking: 2 sessions
current sql:
short stack: <-ksedsts()+285<-ksdxfstk()+22<-ksdxcb()+1599<-sspuser()+102<-semtimedop()+36<-sskgpwwait()+211<-skgpwwait()+104<-ksliwat()+1256<-kslwaitctx()+135<-kqrigt()+1592<-kqrLockAndPinPo()+420<-kqrpre1()+915<-kqrpre()+57<-kkdlgui()+186<-kziavdb.()+2023<-kziaia.()+220<-kpolnb()+580<-kpoauth()+582<-opiodr()+991<-ttcpip()+1066<-opitsk()+1275<-opiino()+882<-opiodr()+991<-opidrv()+462<-sou2o()+91<-opimai_real()+109<-ssthrdmain()+142<-main()+116<-__libc_start_main()+211<-_start()+33 wait history: 1. event: 'row cache lock' wait id: 29 p1: 'cache id'=0x7 time waited: 3.001776 secs p2: 'mode'=0x0 p3: 'request'=0x3 2. event: 'row cache lock' wait id: 28 p1: 'cache id'=0x7 time waited: 3.001578 secs p2: 'mode'=0x0 p3: 'request'=0x3 3. event: 'row cache lock' wait id: 27 p1: 'cache id'=0x7

When we checked the sessions above (session ids 1839 and 1965) in the ASH tables,
we found that both belong to the SYS user.

Let's have a look at the ASH report, the one we created while the database appeared to be hung.


Top User Events

Event                     Event Class    % Event   Avg Active Sessions
------------------------  -----------    -------   -------------------
resmgr:cpu quantum        Scheduler        80.66                 55.53
row cache lock            Concurrency       7.37                  5.08
db file sequential read   User I/O          3.55                  2.45
null event                Other             2.66                  1.83
CPU + Wait for CPU        CPU               1.90                  1.31

Top Event P1/P2/P3 Values

Event                     % Event   P1, P2, P3 Values   % Activity   Parameter 1   Parameter 2   Parameter 3
------------------------  -------   ------------------  ----------   -----------   -----------   -----------
resmgr:cpu quantum          80.75   "1","0","0"              60.85   location
                                    "2","0","0"              17.81
                                    "3","0","0"               2.09
row cache lock               7.38   "10","0","3"              7.31   cache id      mode          request
db file sequential read     3.58    "59","23006","1"          0.20   file#         block#        blocks

So the ASH report shows a different story: the database was not actually hung, it only appeared to be hung due to the event "resmgr:cpu quantum".

As per the Oracle® Database Reference, 11g Release 1 (11.1), Part Number B28320-01:

resmgr: cpu quantum

The session is waiting to be allocated a quantum of cpu. This event occurs when the resource manager is enabled and is throttling CPU consumption. To reduce the occurrence of this wait event, increase the CPU allocation for the session's current consumer group.

Wait Time: The time the session waited to acquire a CPU quantum


We had not enabled any resource plan, so how did it come into the picture? In Oracle 11g all predefined maintenance windows use the DEFAULT_MAINTENANCE_PLAN resource plan, and automated maintenance tasks run under its subplan ORA$AUTOTASK_SUB_PLAN, which has a 25% resource allocation.


Possible workaround

Disable the resource plan (a sketch follows below).
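
One way to do that (a sketch, not from the original post; the window name is only an example) is to clear the active plan and stop the maintenance window from switching it back on:

-- Clear the currently active resource manager plan
alter system set resource_manager_plan = '' scope=both;

-- Optionally, stop a maintenance window from re-enabling its plan
exec dbms_scheduler.set_attribute('SYS.MONDAY_WINDOW', 'RESOURCE_PLAN', '');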


Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator - DBA Blogs