DBA Blogs

How to get mail working with APEX packaged with 11g

Pankaj Chandiramani - Tue, 2008-07-22 22:44

Hi. Recently I installed an 11g database on one of my systems. Since it comes with APEX, I thought I would move my APEX apps (which were in 10g) to it. When I moved the apps, everything worked except mail, which gave the error: ORA-24247: network access denied by access control list (ACL).

The issue is with the ACL: by default, the ability to interact with network services is disabled in Oracle Database 11g. To enable it, I needed to run the procedure shown below.

DECLARE
ACL_PATH VARCHAR2(4000);
ACL_ID RAW(16);
BEGIN
-- Look for the ACL currently assigned to '*' and give FLOWS_030000
-- the "connect" privilege if FLOWS_030000 does not have the privilege yet.

SELECT ACL INTO ACL_PATH FROM DBA_NETWORK_ACLS
WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL;

-- Before checking the privilege, make sure that the ACL is valid
-- (for example, does not contain stale references to dropped users).
-- If it does, the following exception will be raised:
--
-- ORA-44416: Invalid ACL: Unresolved principal 'FLOWS_030000'
-- ORA-06512: at "XDB.DBMS_XDBZ", line ...
--
SELECT SYS_OP_R2O(extractValue(P.RES, '/Resource/XMLRef')) INTO ACL_ID
FROM XDB.XDB$ACL A, PATH_VIEW P
WHERE extractValue(P.RES, '/Resource/XMLRef') = REF(A) AND
EQUALS_PATH(P.RES, ACL_PATH) = 1;

DBMS_XDBZ.ValidateACL(ACL_ID);
IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH, 'FLOWS_030000',
'connect') IS NULL THEN
DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH,
'FLOWS_030000', TRUE, 'connect');
END IF;

EXCEPTION
-- When no ACL has been assigned to '*'.
WHEN NO_DATA_FOUND THEN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('power_users.xml',
'ACL that lets power users connect anywhere',
'FLOWS_030000', TRUE, 'connect');
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('power_users.xml','*');
END;
/
COMMIT;
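
Once the privilege is in place, you can confirm it with a quick check (run as a DBA; DBA_NETWORK_ACL_PRIVILEGES is the standard 11g dictionary view):

SELECT acl, principal, privilege, is_grant
FROM dba_network_acl_privileges
WHERE principal = 'FLOWS_030000';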

Finally got it working :)

Categories: DBA Blogs

TechNight All India Oracle User Group (AIOUG)

Virag Sharma - Sat, 2008-07-19 11:48

AIOUG held its first ever technical session in India, at Hyderabad, on Friday, July 18th 2008, between 5:00 PM and 8:00 PM. The session started with a quick introduction to AIOUG and its mission and goals by Murali Vallath. Later, Phani Arega gave a technical session on "Efficient SQL Programming - Some Tricks and Tips" and Vivek Sharma one on "Real Time Performance Tuning".

Both technical sessions were quite good, and the speakers shared lots of real-life examples in their presentations. Here are some photos from the AIOUG TechNight:

Murali Vallath

Phani Arega

Categories: DBA Blogs

Why are some of the tables in my query missing from the plan?

Inside the Oracle Optimizer - Thu, 2008-06-26 17:35
We apologize for our brief hiatus from blogging. We've been busy working on improvements to the optimizer.

In 10gR2, we introduced a new transformation, table elimination (alternately called "join elimination"), which removes redundant tables from a query. A table is redundant if its columns are referenced only in join predicates, and it is guaranteed that those joins neither filter nor expand the resulting rows. There are several cases where Oracle will eliminate a redundant table. We will discuss each case in turn.

Primary Key-Foreign Key Table Elimination

Starting in 10gR2, the optimizer eliminates tables that are redundant due to primary key-foreign key constraints. Consider the following example tables:

create table jobs
(
job_id NUMBER PRIMARY KEY,

job_title VARCHAR2(35) NOT NULL,
min_salary NUMBER,
max_salary NUMBER
);

create table departments
(
department_id NUMBER PRIMARY KEY,
department_name VARCHAR2(50)
);
create table employees
(
employee_id NUMBER PRIMARY KEY,
employee_name VARCHAR2(50),
department_id NUMBER REFERENCES departments(department_id),
job_id NUMBER REFERENCES jobs(job_id)
);

and the query:

select e.employee_name
from employees e, departments d
where e.department_id = d.department_id;

In this query, the join to departments is redundant. The only column referenced in the query appears in the join predicate, and the primary key-foreign key constraint guarantees that there is at most one match in departments for each row in employees. Hence, the query is equivalent to:

select e.employee_name
from employees e
where e.department_id is not null;


The optimizer will generate this plan for the query:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."DEPARTMENT_ID" IS NOT NULL)

Note that the IS NOT NULL predicate is not necessary if the column has a NOT NULL constraint on it.
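
For instance, if we add the constraint, the filter should disappear entirely (a sketch of the expected plan, in the same abridged format as above):

alter table employees modify department_id not null;

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------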

Starting in 11gR1, the optimizer will also eliminate tables that are semi-joined or anti-joined. Consider the following query:

select e.employee_id, e.employee_name
from employees e
where not exists (select 1
from jobs j
where j.job_id = e.job_id);


Since employees.job_id is a foreign key to jobs.job_id, any non-null value in employees.job_id must have a match in jobs. So only employees with null values for employees.job_id will appear in the result. Hence, this query is equivalent to:

select e.employee_id, e.employee_name
from employees e
where job_id is null;

and the optimizer can choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("E"."JOB_ID" IS NULL)

Suppose employees.job_id has a NOT NULL constraint:

alter table employees modify job_id not null;

In this case, the NOT EXISTS condition can never be satisfied, so the query cannot return any rows, and the optimizer could choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
* 1 FILTER
2 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NULL IS NOT NULL)

The "NULL IS NOT NULL" filter is a false constant predicate, that will prevent the table scan from even taking place.

Also in 11gR1, the optimization became available for ANSI compliant joins. For this query:

select employee_name
from employees e inner join jobs j
on e.job_id = j.job_id;

the optimizer can eliminate JOBS and produce this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Outer Join Table Elimination

In 11gR1, a new form of table elimination was introduced for outer joins, which does not require PK-FK constraints. For the example, we require a new table and an addition to EMPLOYEES:

create table projects
(
project_id NUMBER UNIQUE,
deadline DATE,
priority NUMBER
);

alter table employees add project_id number;

Now consider a query that outer joins employees and projects:

select e.employee_name, e.project_id
from employees e, projects p
where e.project_id = p.project_id (+);

The outer join guarantees that every row in employees will appear at least once in the result. The unique constraint on projects.project_id guarantees that every row in employees will match at most one row in projects. Together, these two properties guarantee that every row in employees will appear in the result exactly once. Since no other columns from projects are referenced, projects can be eliminated, and the optimizer can choose this plan:

-------------------------------------------
Id Operation Name
-------------------------------------------
0 SELECT STATEMENT
1 TABLE ACCESS FULL EMPLOYEES
-------------------------------------------

Why Would I Ever Write Such a Query?

All of the example queries in this post are very simple, and one would be unlikely to write a query where the join is so obviously unnecessary. There are many real world scenarios where table elimination may be helpful, including machine-generated queries and elimination of tables in views. For example, a set of tables might be exposed as a view, which contains a join. The join may be necessary to retrieve all of the columns exposed by the view. But some users of the view may only access a subset of the columns, and in this case, the joined table can be eliminated.

For example, consider the view:

create view employee_directory_v as
select e.employee_name, d.department_name, j.job_title
from employees e, departments d, jobs j
where e.department_id = d.department_id
and e.job_id = j.job_id;

This view might be exposed to a simple employee directory application. To look up employee names by department, the application issues a query:

select employee_name
from employee_directory_v
where department_name = 'ACCOUNTING';

Since the job_title column is not referenced, jobs can be eliminated from the query, and the optimizer can choose this plan:

--------------------------------------------
Id Operation Name
--------------------------------------------
0 SELECT STATEMENT
* 1 HASH JOIN
2 TABLE ACCESS FULL EMPLOYEES
* 3 TABLE ACCESS FULL DEPARTMENTS
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")
3 - filter("D"."DEPARTMENT_NAME"='ACCOUNTING')

Known Limitations

There are currently a few limitations of table elimination:
  • Multi-column primary key-foreign key constraints are not supported.
  • Referring to the join key elsewhere in the query will prevent table elimination. For an inner join, the join keys on each side of the join are equivalent, but if the query contains other references to the join key from the table that could otherwise be eliminated, this prevents elimination. A workaround is to rewrite the query to refer to the join key from the other table (we realize this is not always possible); see the sketch below.
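
As an illustration of the workaround (a sketch based on the tables above): in the first query, DEPARTMENTS cannot be eliminated because its join key is referenced in the select list; the equivalent rewrite below it can be reduced to a scan of EMPLOYEES alone.

select e.employee_name, d.department_id
from employees e, departments d
where e.department_id = d.department_id;

select e.employee_name, e.department_id
from employees e, departments d
where e.department_id = d.department_id;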
Categories: DBA Blogs, Development

"Demo It To Oracle" (DITO) - CamStudio Help

Pankaj Chandiramani - Sun, 2008-05-25 22:03

Now you can record and share the issues you are facing with Oracle Support.

https://metalink.oracle.com/metalink/plsql/f?p=130:14:7679480494464650902::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,11.1,1,1,0,helvetica

It's a nice way to share an error and show the support engineers how it occurred, which makes for faster reproducibility.

Categories: DBA Blogs

upcoming posts

Pankaj Chandiramani - Thu, 2008-05-22 20:37

It has been a long time since I posted something; the reason being that I have started in new directions, working with OIM, OID and related products these days.

So I will be writing a post on installing/configuring OIM in the next couple of days, and will keep posting regularly on the topic.

Categories: DBA Blogs

Oracle's CPU Patch Naming

Jared Still - Tue, 2008-05-13 19:30
Just today I discovered that Oracle changed the nomenclature used to refer to Critical Patch Updates.

Up through the patch bundle issued in October 2007, these were referred to by the date of release. The October 2007 release was "Oracle Critical Patch Update - October 2007", often shortened to "CPU Oct 2007".

When viewing the documentation for the October 2007 CPU you will see this:

Oracle® Database Critical Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)

The documentation for the January 2008 CPU however looks a bit different:

Oracle® Database Server Version 10.2.0.2 Patch 17
Bundle Patch Update Note
Release 10.2.0.2 for Microsoft Windows (x64)


Note the difference. The Oct 2007 CPU is referred to by date, while the Jan 2008 CPU is referred to by patch number.

By digging down into the Oct 2007 CPU documentation you will find that it is referred to as "Patch 16".

Doing the same for the "Patch 17" documentation reveals that it is indeed the Jan 2008 CPU.

Why does it matter?

Imagine working with consultants for a major application system that uses Oracle as a database. Imagine also that these consultants know only 'Patch 17', while you know only about the 'Jan 2008 CPU'.

This can lead to a great deal of confusion, as it did for me (as well as the consultants) recently. Digging down into the docs revealed the change in nomenclature.

It led to (I am telling on myself here) me installing both the October 2007 CPU and the January 2008 CPU on a single database. This wouldn't have been so bad had they not been applied in the wrong order.

Maybe most other Oracle folks already know about this, and I just missed the memo explaining that Oracle would start referring to their CPUs by a different name.

In any case, not knowing this caused a lot of extra work.
Categories: DBA Blogs

DBMS_STATS Enhancements in Oracle 11g Database

Virag Sharma - Wed, 2008-04-30 07:14

Virag Sharma virag123@gmail.com

We know that in 10g and 11g there is an automatic job that collects database statistics based on certain preference settings. Let's have a look at what these preferences are, what their default values are, and how DBMS_STATS in 11g differs from 10g.

Changing Preferences for Statistics

Preference Name              Default Value
---------------------------  -------------------------
CASCADE                      AUTO
DEGREE                       AUTO
ESTIMATE_PERCENT             AUTO
METHOD_OPT                   FOR ALL COLUMNS SIZE AUTO
NO_INVALIDATE                AUTO
GRANULARITY                  AUTO
PUBLISH (new in 11g)         TRUE
INCREMENTAL (new in 11g)     FALSE
STALE_PERCENT (new in 11g)   10



#
# In Oracle 10g
#

$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Mar 6 19:04:57 2008

SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL>

SQL> SELECT dbms_stats.get_param('method_opt') FROM dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> SELECT dbms_stats.get_param('GRANULARITY') FROM dual;

DBMS_STATS.GET_PARAM('GRANULARITY')
--------------------------------------------------------------------------------
AUTO

SQL> SELECT dbms_stats.get_param('NO_INVALIDATE') FROM dual;

DBMS_STATS.GET_PARAM('NO_INVALIDATE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

SQL> SELECT dbms_stats.get_param('DEGREE') FROM dual;

DBMS_STATS.GET_PARAM('DEGREE')
--------------------------------------------------------------------------------
NULL

SQL> SELECT dbms_stats.get_param('CASCADE') FROM dual;

DBMS_STATS.GET_PARAM('CASCADE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_CASCADE



SQL> exec dbms_stats.set_param('ESTIMATE_PERCENT','100');

PL/SQL procedure successfully completed.


SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
---------------------------------------------------------------------
100

SQL> exec dbms_stats.RESET_PARAM_DEFAULTS();

PL/SQL procedure successfully completed.

SQL> SELECT dbms_stats.get_param('ESTIMATE_PERCENT') FROM dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
---------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE


GET_PARAM, SET_PARAM and RESET_PARAM_DEFAULTS are obsolete in Oracle 11g.

In place of the above you need to use the following procedures: GET_PREFS, SET_GLOBAL_PREFS and RESET_GLOBAL_PREF_DEFAULTS.

SQL> Select dbms_stats.GET_PREFS('CASCADE') from dual;

DBMS_STATS.GET_PREFS('CASCADE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_CASCADE

SQL> Select dbms_stats.GET_PREFS('DEGREE') from dual;

DBMS_STATS.GET_PREFS('DEGREE')
--------------------------------------------------------------------------------
NULL

SQL> Select dbms_stats.GET_PREFS('ESTIMATE_PERCENT') from dual;

DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL> Select dbms_stats.GET_PREFS('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> Select dbms_stats.GET_PREFS('NO_INVALIDATE') from dual;

DBMS_STATS.GET_PREFS('NO_INVALIDATE')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

SQL> Select dbms_stats.GET_PREFS('GRANULARITY') from dual;

DBMS_STATS.GET_PREFS('GRANULARITY')
--------------------------------------------------------------------------------
AUTO

SQL> Select dbms_stats.GET_PREFS('PUBLISH') from dual;

DBMS_STATS.GET_PREFS('PUBLISH')
--------------------------------------------------------------------------------
TRUE

SQL> Select dbms_stats.GET_PREFS('INCREMENTAL') from dual;

DBMS_STATS.GET_PREFS('INCREMENTAL')
--------------------------------------------------------------------------------
FALSE

SQL> Select dbms_stats.GET_PREFS('STALE_PERCENT') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT')
--------------------------------------------------------------------------------
10

In Oracle 11g you can set preferences not only at the database level but at the global, schema and table levels as well. This really gives the DBA more control: for example, you may want histograms always collected on one table, or METHOD_OPT => 'FOR ALL COLUMNS SIZE 254' set on certain tables.

The procedures for setting preferences at the global, database, schema and table levels are given below:

1. SET_GLOBAL_PREFS

2. SET_DATABASE_PREFS

3. SET_SCHEMA_PREFS

4. SET_TABLE_PREFS

  • Preferences set at the global level apply to new objects, or to objects for which no preference is available.
  • Database-level preferences are applied to all objects in the database, excluding the tables owned by Oracle. Those tables can be included by passing TRUE for the add_sys parameter of the procedure.
  • If you set a preference value to NULL, it is reset to the Oracle default value.

#
# Test case
#

SQL> create table test1(abc number);

Table created.

SQL> create table test2(abc number);

Table created.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
10

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
10


SQL> execute dbms_stats.set_table_prefs('SYS', 'TEST2', 'STALE_PERCENT', '35');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> execute dbms_stats.set_global_prefs('STALE_PERCENT', '20');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
20

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> exec dbms_stats.SET_DATABASE_PREFS('STALE_PERCENT', '15');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST1') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST1')
--------------------------------------------------------------------------------
20

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35


SQL> exec dbms_stats.set_database_prefs('STALE_PERCENT','30');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'BUG', 'BUG_USER')
from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','BUG','BUG_USER')
--------------------------------------------------------------------------------
30

SQL> exec dbms_stats.SET_DATABASE_PREFS('STALE_PERCENT', '15');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'TEST2') from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','SYS','TEST2')
--------------------------------------------------------------------------------
35

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'BUG', 'BUG_USER')
from dual;

DBMS_STATS.GET_PREFS('STALE_PERCENT','BUG','BUG_USER')
--------------------------------------------------------------------------------
15

Published and Pending Statistics

Statistics are published by default when the system/user/job collects stats. In Oracle 11g we have the option to put newly collected stats into a pending state and publish them later, once we are satisfied by performance testing based on the new statistics.

In 10g we have faced performance issues when collecting stats on OLTP (24x7) databases, probably because the stats of one index/table were published before the others. So in 11g you can collect stats in a pending state, test them, and publish them once satisfied with the testing.

SQL> col name form a40
SQL> col VALUE form a6
SQL> col ISSES_MODIFIABL form a6
SQL> col ISSYS_MODIFIABLE form a12
SQL> select name,value,isses_modifiable,issys_modifiable
from v$parameter
where name='optimizer_use_pending_statistics' ;

NAME VALUE ISSES_MODIFIABL ISSYS_MODIFI
---------------------------------------- ------ --------------- ------------
optimizer_use_pending_statistics FALSE TRUE IMMEDIATE

#
# Test Case
#

# By default Publish is set TRUE
#

SQL> select dbms_stats.get_prefs('PUBLISH') publish from dual;

PUBLISH
---------
TRUE

SQL> select dbms_stats.get_prefs('PUBLISH', 'VIRSHARM', 'TEST1') publish
from dual;

PUBLISH
------------
TRUE

#
# Setting Publish false for TEST1 table
#

SQL> exec dbms_stats.set_table_prefs('VIRSHARM', 'TEST1', 'PUBLISH', 'false');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('PUBLISH', 'VIRSHARM', 'TEST1') publish
from dual;


PUBLISH
-------------
FALSE

SQL> execute dbms_stats.gather_table_stats('VIRSHARM', 'TEST1');

PL/SQL procedure successfully completed.

SQL> select table_name, last_analyzed from user_col_pending_stats
where table_name ='TEST1'

TABLE_NAME LAST_ANALYZED
---------- ---------------
TEST1 30-APR-08

SQL> select table_name, last_analyzed from user_ind_pending_stats
where table_name ='TEST1';

no rows selected

SQL> select table_name, last_analyzed from user_col_pending_stats
where table_name ='TEST1';

TABLE_NAME LAST_ANALYZED
---------- ---------------
TEST1 30-APR-08

#
# The following views will have no record for table TEST1
# because the stats are not published
#

SQL> select table_name, last_analyzed from user_tables
where table_name = 'TEST1';

TABLE_NAME LAST_ANALYZED
------------------------------ -------------------
TEST1

SQL> select index_name, last_analyzed from user_indexes
where table_name ='TEST1';

no rows selected

SQL> select column_name, last_analyzed from user_tab_columns
where table_name='TEST1' ;

COLUMN_NAME LAST_ANALYZED
------------ ---------------
ABC


#
# Testing
#

SQL> ALTER SESSION SET optimizer_use_pending_statistics =TRUE;

Session altered.

#
# OR export / import STATS
#

SQL> exec dbms_stats.create_stat_table('VIRSHARM','STATS');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.export_pending_stats(tabname=>'TEST1',stattab=>'STATS');

PL/SQL procedure successfully completed.

#
# Use expdp/impdp or exp/imp to move the STATS table to the TEST/QA database
#

SQL> exec dbms_stats.import_table_stats(ownname=>'VIRSHARM',tabname=>'TEST1',stattab=>'STATS');

SQL> select column_name, last_analyzed from user_tab_columns
where table_name='TEST1';

#
# Published stats
# Publish the Stats to Data Dictionary for Optimizer Usage
#

SQL> exec dbms_stats.publish_pending_stats(tabname=>'TEST1');

PL/SQL procedure successfully completed.

If the stats are published and performance becomes worse, you can restore an older version of the stats. Oracle manages the historical statistics repository, purging statistics on a regular basis, by default every 31 days.

By default Oracle keeps stats for 31 days, after which they are purged. The retention can be increased using the following:

SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION ( 60 );

PL/SQL procedure successfully completed.
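
Restoring statistics from the history might then look like this (a sketch; the owner/table reuse the names above, and the one-day-ago timestamp is hypothetical):

SQL> select dbms_stats.get_stats_history_availability from dual;

SQL> exec dbms_stats.restore_table_stats('VIRSHARM','TEST1', systimestamp - interval '1' day);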

Categories: DBA Blogs

11g RAC Database hanged or appears to hanged

Virag Sharma - Thu, 2008-04-24 06:07

11g RAC Database hanged or appears to hanged

Around lunch time, 1:00 PM, the database hung. When we tried to log in as sysdba, our SQL*Plus session also hung; we were not able to log in on either node of the RAC. We thought to kill all the remote connections so that we would be able to log in:

ps -aef |grep LOCAL=NO |awk '{ print $2}' |xargs kill -9

When your database is hung and you are not able to log in, you can use the following procedure to create a hang analysis dump:

sqlplus /nolog

SQL> set _prelim on
SQL> connect / as sysdba
SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit
Statement processed.
SQL> oradebug -g all hanganalyze 3
SQL> oradebug -g all hanganalyze 3
Cycle 1: (1/1965)--(1/1839)
Hang Analysis in /u04/oraout/abcap/11gdiag/diag/rdbms/abcap/abc2ap/trace/abc2ap_diag_9269.trc

We also generated an ASH report for the hang duration. Here is what the hang analysis showed:

#
# The session ids 1839 and 1965 below are the ones of interest for further analysis
#

Chains most likely to have caused the hang:
[a] Chain 1 Signature: 'row cache lock'<='row cache lock' (cycle) Chain 1 Signature Hash: 0x75bdd0c

===============================================================================
Cycles:

-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (abcap.abc1ap)
os id: 27244
process id: 343, oracle@abc232
session id: 1839
session serial #: 12
}
is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x7
p2: 'mode'=0x0
p3: 'request'=0x5
time in wait: 0.526566 secs
heur. time in wait: 18.534318 secs
timeout after: 2.473434 secs
wait id: 49
blocking: 2 sessions
current sql:
short stack: <-ksedsts()+285<-ksdxfstk()+22<-ksdxcb()+1599<-sspuser()+102<-semtimedop()+36<-sskgpwwait()+211<-skgpwwait()+104<-ksliwat()+1256<-kslwaitctx()+135<-kqrigt()+1592<-kqrLockAndPinPo()+420<-kqrpre1()+915<-kqrpre()+57<-kziasfc()+235<-kpolnb()+5279<-kpoauth()+582<-opiodr()+991<-ttcpip()+1066<-opitsk()+1275<-opiino()+882<-opiodr()+991<-opidrv()+462<-sou2o()+91<-opimai_real()+109<-ssthrdmain()+142<-main()+116<-__libc_start_main()+211<-_start()+33 wait history: 1. event: 'row cache lock'
wait id: 48 p1: 'cache id'=0x7
time waited: 3.001908 secs p2: 'mode'=0x0
p3: 'request'=0x5
2. event: 'row cache lock'
wait id: 47 p1: 'cache id'=0x7
time waited: 3.001615 secs p2: 'mode'=0x0
p3: 'request'=0x5
3. event: 'row cache lock'
wait id: 46 p1: 'cache id'=0x7
time waited: 3.000437 secs p2: 'mode'=0x0
p3: 'request'=0x5
}
and is blocked by
=> Oracle session identified by:
{
instance: 1 (abcap.abc1ap)
os id: 27941
process id: 125, oracle@abc232
session id: 1965
session serial #: 476
}
which is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x7
p2: 'mode'=0x0
p3: 'request'=0x3
time in wait: 0.462402 secs
heur. time in wait: 18.536906 secs
timeout after: 2.537598 secs
wait id: 30
blocking: 2 sessions
current sql:
short stack: <-ksedsts()+285<-ksdxfstk()+22<-ksdxcb()+1599<-sspuser()+102<-semtimedop()+36<-sskgpwwait()+211<-skgpwwait()+104<-ksliwat()+1256<-kslwaitctx()+135<-kqrigt()+1592<-kqrLockAndPinPo()+420<-kqrpre1()+915<-kqrpre()+57<-kkdlgui()+186<-kziavdb.()+2023<-kziaia.()+220<-kpolnb()+580<-kpoauth()+582<-opiodr()+991<-ttcpip()+1066<-opitsk()+1275<-opiino()+882<-opiodr()+991<-opidrv()+462<-sou2o()+91<-opimai_real()+109<-ssthrdmain()+142<-main()+116<-__libc_start_main()+211<-_start()+33 wait history: 1. event: 'row cache lock' wait id: 29 p1: 'cache id'=0x7 time waited: 3.001776 secs p2: 'mode'=0x0 p3: 'request'=0x3 2. event: 'row cache lock' wait id: 28 p1: 'cache id'=0x7 time waited: 3.001578 secs p2: 'mode'=0x0 p3: 'request'=0x3 3. event: 'row cache lock' wait id: 27 p1: 'cache id'=0x7

When we checked the above sessions in the ASH data, we found that both belonged to the SYS user.

Let's have a look at the ASH report, the one we created while the database was hung.


Top User Events

Event                     Event Class   % Event   Avg Active Sessions
------------------------  -----------   -------   -------------------
resmgr:cpu quantum        Scheduler       80.66                 55.53
row cache lock            Concurrency      7.37                  5.08
db file sequential read   User I/O         3.55                  2.45
null event                Other            2.66                  1.83
CPU + Wait for CPU        CPU              1.90                  1.31

Top Event P1/P2/P3 Values

Event                     % Event   P1 Value, P2 Value, P3 Value   % Activity   Parameter 1   Parameter 2   Parameter 3
------------------------  -------   ----------------------------   ----------   -----------   -----------   -----------
resmgr:cpu quantum          80.75   "1","0","0"                         60.85   location
                                    "2","0","0"                         17.81
                                    "3","0","0"                          2.09
row cache lock               7.38   "10","0","3"                         7.31   cache id      mode          request
db file sequential read      3.58   "59","23006","1"                     0.20   file#         block#        blocks

So the ASH report tells a different story: the database was not hung, it only appeared to be hung due to the event "resmgr:cpu quantum".

As per

Oracle® Database Reference
11g Release 1 (11.1)

Part Number B28320-01

resmgr: cpu quantum

The session is waiting to be allocated a quantum of cpu. This event occurs when the resource manager is enabled and is throttling CPU consumption. To reduce the occurrence of this wait event, increase the CPU allocation for the session's current consumer group.

Wait Time: The time the session waited to acquire a CPU quantum


We had not enabled any resource plan, so how did it come into the picture? In Oracle 11g all predefined maintenance windows use the DEFAULT_MAINTENANCE_PLAN resource plan, and automated maintenance tasks run under its subplan ORA$AUTOTASK_SUB_PLAN, which has a 25% resource allocation.


Possible workaround

Disable the resource plan.
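
A sketch of one way to do this (the window names are the 11g defaults; adapt before use):

SQL> alter system set resource_manager_plan = '' scope=both sid='*';

SQL> exec dbms_scheduler.set_attribute('SYS.MONDAY_WINDOW', 'RESOURCE_PLAN', '');

Repeat the set_attribute call for each maintenance window (TUESDAY_WINDOW through SUNDAY_WINDOW); otherwise the plan is re-enabled whenever a window opens.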


Categories: DBA Blogs

Oracle keeps closing my TAR because I cannot provide a testcase, can you help?

Inside the Oracle Optimizer - Fri, 2008-03-14 14:31
The answer to this question is yes, as Oracle Database 11g provides a new diagnostic tool called SQL Test Case Builder. In this article, we explain what SQL Test Case Builder is, and how to use it with examples.

Why SQL Test Case Builder?

For most SQL problems, the single most important factor for a speedy bug resolution is to obtain a reproducible test case. However, this is normally the longest and most painful step for customers. The goal of the SQL Test Case Builder (TCB) is to automatically gather as much information as possible related to a SQL incident (problem) and package it in a way that allows a developer or a support engineer to reproduce the problem on his or her own machine quickly.

At a very high level, SQL Test Case Builder can be seen as a way to export a SQL statement. Currently, Oracle export (expdp) takes a schema or a set of tables and exports all the dependent objects. SQL Test Case Builder provides the same service but takes a SQL statement as input.

What's Inside Test Case Builder?

The main input of SQL Test Case Builder is a SQL object. A SQL object is defined as the SQL text plus all the information required to compile it on a particular database instance (this contains the parsing user name, for example).

Logically, a SQL test case appears as a script containing all the necessary commands to recreate the objects, the user, the statistics, and the environment.

Within the Oracle Diagnosability infrastructure, TCB compiles the problem SQL in a special capture mode to obtain the set of objects to export. A test case captures two types of information:

  1. Permanent information

    • SQL text

    • PL/SQL functions, procedures, packages

    • Statistics

    • Bind variables

    • Compilation environment

    • User information (like privileges)

    • SQL profiles, stored outlines, or other SQL Management Objects

    • Meta data on all the objects involved

    • Optimizer statistics

    • The execution plan information

    • The table content (sample or full). This is optional.

  2. Transient information

    For most SQL test cases, the permanent information above is enough to reproduce a problem. There are however cases where this is not enough, and additional information about the context in which the SQL was compiled is required. Therefore, in addition to the permanent information, SQL Test Case Builder captures transient information, i.e. information that is only available as part of the compilation of the SQL statement. This includes dynamic sampling results, cached information, and some run time information, like the actual degree of parallelism used.

    As part of creating a SQL test case, the SQL object is reloaded and all the available diagnostic information is generated and gathered automatically. This information will be made available to Oracle support and developers.


How do I use the SQL Test Case Builder?

The task of creating a SQL test case can be performed in two ways:
  • From EM (Enterprise Manager), where TCB is invoked on user demand via IPS (Incident Packaging Service) after a SQL incident has occurred. The user can also manually create an incident for a problem query for test case building purposes.

  • From SQLPLUS, where you can directly invoke one of the PL/SQL API functions in the SQL Diagnostic package. We will give examples of using the APIs below.

All the new PL/SQL procedures supporting SQL Test Case Builder are part of a new PL/SQL package called dbms_sqldiag (see dbmsdiag.sql for details). The two main features related to TCB in this package are the export and import of test cases.

  • Procedure dbms_sqldiag.export_sql_testcase exports a SQL test case for a given SQL statement to a given directory.

  • Procedure dbms_sqldiag.import_sql_testcase imports a test case from a given directory.

To build (or export) a test case, the simplest form would be something like:

     dbms_sqldiag.export_sql_testcase(
directory => 'TCB_DIR_EXP',
sql_text => 'select count(*) from sales',
testcase => tco)

Here directory and sql_text are inputs which specify where the test case will be stored, and the problem query statement, respectively. Testcase specifies the test case metadata as output.

For security reasons, user data is not exported by default. You have the option to set exportData to TRUE to include the data. You can also set samplingPercent if you are exporting with data. To protect users' proprietary code, TCB does not export PL/SQL package bodies by default.

Once the test case has been built, you can copy all the files under the export directory to your test environment. Note there is a file called xxxxxxxxmain.xml, for example, oratcb1_03C600800001main.xml, which contains the metadata of the test case.

Now importing the test case can be as simple as:


     dbms_sqldiag.import_sql_testcase(
directory => 'TEST_DIR',
filename => 'oratcb1_03C600800001main.xml')

To verify that the test case is successfully rebuilt, you can just issue an explain command for the problem query. However, if you want to actually run the query, then you need to have the data available.

You can refer to dbmsdiag.sql for more information about other options available for these procedures.

Example - We now show the typical steps of using TCB with a sample query on a materialized view. In this example, we set the exportData option to TRUE, so we can re-run the same query after the TCB task is completed.

  1. Setup


    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> create or replace directory TCB_DIR_EXP as
    2 '/net/tiger/apps/tcb_exp';
    Directory created.
    SQL>
    SQL> grant dba to apps;
    Grant succeeded.
    SQL>
    SQL> connect apps/apps
    Connected.
    SQL>
    SQL> create materialized view scp_mvu
    2 parallel 2
    3 as
    4 select p.prod_name, c.cust_gender,
    5 max(s.amount_sold) max_amount_sold
    6 from sales s, products p, customers c
    7 where s.prod_id = p.prod_id
    8 and s.cust_id = c.cust_id
    9 group by p.prod_name, c.cust_gender;

    Materialized view created.

    SQL>
    SQL> desc scp_mvu;
    Name Null? Type
    ----------------------------------------- -------- ------------
    PROD_NAME NOT NULL VARCHAR2(50)
    CUST_GENDER CHAR(1)
    MAX_AMOUNT_SOLD NUMBER

    SQL>
    SQL> select * from scp_mvu where max_amount_sold > 7000 order by 3;

    PROD_NAME C MAX_AMOUNT_SOLD
    -------------------------------------------------- - ---------------
    Joseph Sportcoat F 7400.8
    Kenny Cool Leather Skirt M 7708
    Leather Boot-Cut Trousers M 8184

    3 rows selected.

  2. Export as user APPS


    SQL> connect apps/apps
    Connected.

    SQL>
    SQL> Rem define the problem SQL statement
    SQL> create or replace package define_vars is
    2 sql_stmt1 varchar2(2000) := q'# select * from scp_mvu
    3 where max_amount_sold > 7000
    4 order by 3
    5 #';
    6 end;
    7 /

    Package created.
    SQL>
    SQL> set serveroutput on
    SQL>
    SQL> declare
    2 tco clob;
    3 begin
    4 -- Export test case
    5 dbms_sqldiag.export_sql_testcase
    6 (
    7 directory => 'TCB_DIR_EXP',
    8 sql_text => define_vars.sql_stmt1,
    9 user_name => 'APPS',
    10 exportData => TRUE,
    11 testcase => tco
    12 );
    13
    14 end;
    15 /

    PL/SQL procedure successfully completed.
    SQL>
    SQL> Rem Drop MV before importing
    SQL> drop materialized view scp_mvu;

    Materialized view dropped.

    At this stage, the export procedure has successfully completed. The next commands prepare a directory for import purpose. The directory could be on a different machine.

    SQL> conn / as sysdba
    Connected.
    SQL> create or replace directory TCB_DIR_IMP
    2 as '/net/lion/test/tcb_imp';
    Directory created.
    SQL>
    SQL> grant dba to test;
    Grant succeeded.

    As the export has finished successfully, you can now transfer all the files under TCB_DIR_EXP to a directory in test environment, for example, TCB_DIR_IMP as created above. Again, look up and make note of the TCB metadata file xxxxxxxxmain.xml, which will be used below.

  3. Import as user TEST


    SQL> connect test/test
    Connected.
    SQL>
    SQL> set serveroutput on
    SQL>
    SQL> begin
    2 -- Import test case
    3 dbms_sqldiag.import_sql_testcase
    4 (
    5 directory => 'TCB_DIR_IMP',
    6 filename => 'oratcb3_05e803500001main.xml',
    7 importData => TRUE
    8 );
    9
    10 end;
    11 /

    PL/SQL procedure successfully completed.

  4. Verification. This is to check that all relevant objects were imported successfully.

SQL> desc scp_mvu;
Name Null? Type
----------------------------------------- -------- ------------
PROD_NAME NOT NULL VARCHAR2(50)
CUST_GENDER CHAR(1)
MAX_AMOUNT_SOLD NUMBER
SQL>
SQL> select * from scp_mvu where max_amount_sold > 7000 order by 3;

PROD_NAME C MAX_AMOUNT_SOLD
-------------------------------------------------- - ---------------
Joseph Sportcoat F 7400.8
Kenny Cool Leather Skirt M 7708
Leather Boot-Cut Trousers M 8184

3 rows selected.

Finally, we also have good news for 10g users: SQL Test Case Builder has been backported to 10.2.0.4!
Categories: DBA Blogs, Development

ORA-6502 "Bulk Bind: Truncated Bind" error

Jared Still - Thu, 2008-03-13 14:33
ORA-6502 is an error that is apparently not well documented when it occurs in conjunction with the use of PL/SQL tables, and possibly bulk binds.

I ran into this problem recently when some custom code that had worked well for several years suddenly started failing.

As it turns out, and as you will see a little later here, the error is rather easy to fix. What makes it difficult is if you've never encountered an ORA-6502 under these circumstances. There is precious little about it via MetaLink or Google. Writing about it here may be of help to the next innocent victim of ORA-6502 with bulk binds.

My suspicion was that new data loaded into the system from another database had something to do with the problem; the problem was determining where it was occurring and why.

The problem went unnoticed for some time due to a quirk in the error handling built into the package. (Tom Kyte would likely ask, "Why Do People Do This?")

Built into the package is detailed profiling information. This is a good thing. Also built into the package is an error handling routine. This might be a good thing to have if the package is only run from the command line, as it provides a call stack and the error message.

There is an attribute of this type of error handling that does not lend itself well to use in a job run via DBMS_JOB: the error message is printed, but the error is not raised.
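
The anti-pattern looks something like this (a minimal sketch, not the actual package code):

exception
when others then
-- print the error and call stack, but never re-raise,
-- so DBMS_JOB sees a successful run
dbms_output.put_line(sqlerrm);
dbms_output.put_line(dbms_utility.format_call_stack);
end;

The fix, of course, is to add a RAISE after the logging.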

The consequence of this type of error handling is that any errors encountered during the execution of the job are hidden from DBMS_JOB. No error is ever raised. After a user reports that production data is not propagating to the system, the DBA (me) checks the DBA_JOBS view and finds that the job is executing successfully.

Wondering what is going on, the erstwhile DBA (me again) runs the job manually and discovers that it does not actually work at all, but no error was raised.

The printed error message was "PL/SQL: numeric or value error" - bulk bind truncated.

The PL/SQL package that generated this is 3000 lines long and has some level of abstraction. Translation: hard to debug when you don't know what the code is doing.

Before showing a more relevant example, I will show a simple one. If you found this page via google, this is enough to help you understand what is going on.

The problem is simply a mismatch in the datatype lengths. Obvious once you see it. This is a 9i example (10g would take less code) because the database it was written on was 9.2 at the time.

If you are accustomed to working with the data dictionary the problem will likely be obvious to you.

declare
type owner_type is table of varchar2(30) index by pls_integer;
type object_name_type is table of varchar2(20) index by pls_integer;
type object_type_type is table of varchar2(18) index by pls_integer;

t_owner owner_type;
t_object_name object_name_type;
t_object_type object_type_type;

cursor c_obj
is
select owner, object_name, object_type
from dba_objects;

begin
open c_obj;
while c_obj%isopen
loop
fetch c_obj
bulk collect into t_owner, t_object_name, t_object_type
limit 100;

if c_obj%notfound then
close c_obj;
end if;

end loop;
end;
/

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19

The fix is simple: change the length of object_name_type from 20 to 30. The data dictionary (DBA_OBJECTS.OBJECT_NAME) actually specifies this as 128, but this particular database has no objects with names longer than the standard length of 30 characters.

Where the mystery of what caused this error becomes difficult to unravel is when there are a few levels of indirection.

There is a staging table that is the temporary home for data from another system. The PL/SQL code gets its specifications for data types from the staging table, not the source table. You probably already know what the problem is.

This example is similar to what was happening.

First create the staging table.
create table tbind_test (
owner varchar2(30) not null,
object_name varchar2(20) not null,
object_type varchar2(18) not null
)
/

Now run a similar, but somewhat different PL/SQL block.

declare
type owner_type is table of tbind_test.owner%TYPE index by pls_integer;
type object_name_type is table of tbind_test.object_name%TYPE index by pls_integer;
type object_type_type is table of tbind_test.object_type%TYPE index by pls_integer;

t_owner owner_type;
t_object_name object_name_type;
t_object_type object_type_type;

cursor c_obj
is
select owner, object_name, object_type
from dba_objects;

begin
open c_obj;
while c_obj%isopen
loop
fetch c_obj
bulk collect into t_owner, t_object_name, t_object_type
limit 100;
if c_obj%notfound then
close c_obj;
end if;

end loop;
end;
/

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 19

Here's the offending bit of code:
19    fetch c_obj
20 bulk collect into t_owner, t_object_name, t_object_type
21 limit 100;


Hmmmm. The specification for the PL/SQL tables came straight from the table via %TYPE; what could be the problem? The problem is that you need to look at the cursor, and compare the datatypes in the table referenced by the cursor with those of the TBIND_TEST table.

Doing that, you will find that TBIND_TEST.OBJECT_NAME should really be 128:
 1* alter table tbind_test modify (object_name varchar2(128) )
SQL> /

Table altered.

Now the bulk binds will run without error.
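
As an aside, anchoring a single collection of records to the cursor itself avoids this class of mismatch entirely, since the element types always track the cursor's select list. A sketch (bulk collect into a collection of records requires 9iR2 or later):

declare
cursor c_obj
is
select owner, object_name, object_type
from dba_objects;

type obj_tab_type is table of c_obj%rowtype index by pls_integer;
t_obj obj_tab_type;
begin
open c_obj;
loop
fetch c_obj bulk collect into t_obj limit 100;
exit when t_obj.count = 0;
-- process t_obj here
end loop;
close c_obj;
end;
/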
Categories: DBA Blogs

Interesting Corrupted Redo Log Case

Renaps' Blog - Sun, 2008-03-02 11:59

It has been a while since I wrote on Renaps' Blog, most probably because I hadn't run into any interesting case to talk about for a while!

Yesterday, at one of our main clients, the production (soon to be, on Monday) database hung. Was it because it was a Feb 29th of a bissextile year? Or simply because the weekend arriving in the next 4 hours was the go-live of the main data warehouse? It's hard to say; I believe that it was just a little wink from our old friend Murphy!

The environment is a 16-CPU Itanium Windows server running 10.2.0.3.0; ASM is used even though the database runs in single instance mode.

When the database hung, the alert.log showed the following error messages (of course, no one could connect or run any transaction on the database):

ARC0: All Archive destinations made inactive due to error 354
Committing creation of archivelog ‘+FLASH/inf1/archivelog/arc17972_0627748574.001’ (error 354)
ARCH: Archival stopped, error occurred. Will continue retrying
Fri Feb 29 11:48:05 2008
Errors in file c:\ora\admin\inf1\bdump\inf1_arc0_10236.trc:
ORA-16038: Message 16038 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [17972]
ORA-00354: Message 354 not found; No message file for product=RDBMS, facility=ORA
ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [+FLASH/inf1/onlinelog/group_1.257.627748575]

The first line of this message stack made me believe that the database had hung just because there was no more space for the archiver to do its job. But after further analysis, it appeared that the problem was much more serious.

The actual cause of the issue was redo log corruption. The cause of the corruption was a "write or rewrite" SAN-related error.

When redo log corruption occurs, Oracle first tries to use the second (or third) member of the group showing corruption, to avoid unavailability of the database. In our case this was impossible, since there was only one member per redo log group; I was surprised to see that this client did not multiplex the redo logs…

My first attempt to get the database back on its legs was to dump the contents of the redo log showing corruption (in this case, group 1).

SQL> ALTER SYSTEM DUMP LOGFILE ‘+FLASH/inf1/onlinelog/group_1.257.627748575’;
ALTER SYSTEM DUMP LOGFILE ‘+FLASH/inf1/onlinelog/group_1.257.627748575’
*
ERROR at line 1:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 53504 change 70389305 time 02/29/2008 11:48:02
ORA-00334: archived log: ‘+FLASH/inf1/onlinelog/group_1.257.627748575’

As you can see, this attempt failed. I then attempted to clear the unarchived redo log:
SQL> alter database clear unarchived logfile ‘+FLASH/inf1/onlinelog/group_1.257.627748575’;

At this point my command hung indefinitely, and I realized that pmon was holding a lock on the redo log header block…

My margin of maneuver had become much smaller; here is the sequence of commands that brought the database back on track:

  1. shutdown abort;
  2. startup mount;
  3. issue the same command that was previously hanging (this time, it worked right away) :
    alter database clear unarchived logfile ‘+FLASH/inf1/onlinelog/group_1.257.627748575’;
  4. alter database open;
  5. shutdown immediate;
  6. Immediately take a FULL backup at this point;
  7. startup;
  8. Don't forget to multiplex your redo logs to avoid running into this situation again (see the sketch below)!
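
Multiplexing just means adding a second member to each group, ideally on separate storage. A sketch (the +REDO2 disk group name and the group numbers are hypothetical):

SQL> alter database add logfile member '+REDO2' to group 1;
SQL> alter database add logfile member '+REDO2' to group 2;
SQL> alter database add logfile member '+REDO2' to group 3;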

Here you go: redo log corruption on the header block of a group when only one member is used does look bad at first sight, but I hope this article can help you if you run into a similar scenario.

References:

Metalink Notes:
145769.1
332672.1
438176.1
114734.1

Patrick Hamou
E-Mail: phamou@renaps.com
Web: www.renaps.com


Categories: DBA Blogs

Applying CPU Patch on RAC using EM -A White paper

Pankaj Chandiramani - Wed, 2008-02-27 23:59

I wrote this paper to describe the process and benefits of applying patches using EM.
Case: RAC - 3 node 10.2.0.3
Patch: CPU July 2007
Method: Rolling patch, zero downtime, using deployment procedures in EM.

You can access the paper here on OTN.
or by copying the URL below:
http://www.oracle.com/technology/products/oem/pdf/saving%20time%20and%20labor%20on%20oracle%20patching%20with%20em%20provisioning%20pack%20-%20a%20case%20study%20with%20oracle%20internal%20it.pdf

Other interesting white papers on EM are also listed here .

Categories: DBA Blogs

Upgrading from Oracle Database 9i to 10g: What to expect from the Optimizer

Inside the Oracle Optimizer - Tue, 2008-02-26 15:34
One of the most daunting activities a DBA can undertake is upgrading the database to a new version. Having to comprehend all of the new features and to deal with potential plan changes can be overwhelming. In order to help DBAs upgrade from Oracle Database 9i to 10g, a new whitepaper called "Upgrading from Oracle Database 9i to 10g: What to expect from the Optimizer" has recently been posted on Oracle Technology Network (OTN). This paper explains in detail what to expect from the CBO when you upgrade from Oracle Database 9i to 10g and describes what steps you should take before and after the upgrade to minimize any potential SQL regressions. This is a must read for any DBA planning on upgrading from 9i to 10g in the near future!
Categories: DBA Blogs, Development

Reactive performance management By Craig Shallahamer

Virag Sharma - Sun, 2008-02-17 00:00

Last week Oracle University arranged a seminar by performance management guru Craig Shallahamer in India. Craig brings his unique experiences to many as a keynote speaker, a sought-after teacher, a researcher and publisher on improving Oracle performance management, and also the architect of Horizone, OraPub's service level management solution.


His way of teaching is very good, and the content of the seminar is distinctive and very effective. He focused on accurate performance diagnosis of even the most complex Oracle systems, and on how Oracle interacts with the OS and applications. I have been reading Craig's papers and articles for a long time but never got the full picture. After attending the seminar, his papers and articles look easy to understand.

The focal point of the seminar was how to pinpoint the problem area and the different methods to resolve it. He also covered Mutexes, IMU (In-Memory Undo), performance forecasting, etc. The way he teaches latch internals, anybody can understand it, even small kids :). He creates an honest and understandable story explaining the problem, the solutions, and how to best implement your solutions in a complex and highly available Oracle environment.

On the second day of training we had a little tough time, as next to our conference hall in the Oberoi Hotel, New Delhi, Shahrukh Khan (one of India's most famous actors) was coming to launch the new serial "Kya Aap Paanchvi Paas Se Tez Hain" on the Star Plus channel. But Craig enjoyed the music and continued with the training.

Categories: DBA Blogs

The “log file sync” wait event is not always spent waiting for an I/O

Christian Bilien - Tue, 2008-02-12 10:41
I went through a rather common problem some time ago which shows how one can get easily fooled by the "log file sync" wait event. Here is a log file sync wait histogram which was interpreted as a poor storage array response time by the DBAs, often a good way to get off the hook […]
Categories: DBA Blogs

Displaying and reading the execution plans for a SQL statement

Inside the Oracle Optimizer - Thu, 2008-02-07 14:37
Generating and displaying the execution plan of a SQL statement is a common task for most DBAs, SQL developers, and performance experts, as it provides them information on the performance characteristics of a SQL statement. An execution plan shows the detailed steps necessary to execute a SQL statement. These steps are expressed as a set of database operators that consume and produce rows. The order of the operators and their implementation is decided by the query optimizer using a combination of query transformations and physical optimization techniques.

While the display is commonly shown in a tabular format, the plan is in fact tree-shaped. For example, consider the following query based on the SH schema (Sales History):



select prod_category, avg(amount_sold)
from sales s, products p
where p.prod_id = s.prod_id
group by prod_category;


The tabular representation of this query's plan is:



------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

While the tree-shaped representation of the plan is:

GROUP BY
|
JOIN
_____|_______
| |
ACCESS ACCESS
(PRODUCTS) (SALES)




When you read a plan tree you should start from the bottom up. In the above example, begin by looking at the access operators (the leaves of the tree). In this case the access operators are implemented using full table scans. The rows produced by these table scans will be consumed by the join operator. Here the join operator is a hash join (other alternatives include nested-loops or sort-merge join). Finally the group-by operator, implemented here using hash (the alternative would be sort), consumes the rows produced by the join operator.

The execution plan generated for a SQL statement is just one of the many alternative execution plans considered by the query optimizer. The query optimizer selects the execution plan with the lowest cost. Cost is a proxy for performance: the lower the cost, the better the expected performance. The cost model used by the query optimizer accounts for the IO, CPU, and network usage of the query.

There are two different methods you can use to look at the execution plan of a SQL statement:


  1. EXPLAIN PLAN command - This displays an execution plan for a SQL statement without actually executing the statement.

  2. V$SQL_PLAN - A dictionary view introduced in Oracle 9i that shows the execution plan for a SQL statement that has been compiled into a cursor in the cursor cache.


Under certain conditions the plan shown when using EXPLAIN PLAN can be different from the plan shown using V$SQL_PLAN. For example, when the SQL statement contains bind variables the plan shown from using EXPLAIN PLAN ignores the bind variable values while the plan shown in V$SQL_PLAN takes the bind variable values into account in the plan generation process.

Displaying an execution plan has been made easier after the introduction of the dbms_xplan package in Oracle 9i and by the enhancements made to it in subsequent releases. This package provides several PL/SQL procedures to display the plan from different sources:


  1. EXPLAIN PLAN command

  2. V$SQL_PLAN

  3. Automatic Workload Repository (AWR)

  4. SQL Tuning Set (STS)

  5. SQL Plan Baseline (SPM)


The following examples illustrate how to generate and display an execution plan for our original SQL statement using the different functions provided in the dbms_xplan package.

Example 1 - Using the EXPLAIN PLAN command and the dbms_xplan.display function.


SQL> EXPLAIN PLAN FOR
2 select prod_category, avg(amount_sold)
3 from sales s, products p
4 where p.prod_id = s.prod_id
5 group by prod_category;

Explained.



SQL> select plan_table_output
2 from table(dbms_xplan.display('plan_table',null,'basic'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------


The arguments for dbms_xplan.display are:


  • plan table name (default 'PLAN_TABLE'),

  • statement_id (default null),

  • format (default 'TYPICAL')


More details can be found in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.

Example 2 - Generating and displaying the execution plan for the last SQL statement executed in a session:



SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected


SQL> select plan_table_output
2 from table(dbms_xplan.display_cursor(null,null,'basic'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------


The arguments used by dbms_xplan.display_cursor are:


  • SQL ID (default null, null means the last SQL statement executed in this session),

  • child number (default 0),

  • format (default 'TYPICAL')


The details are in $ORACLE_HOME/rdbms/admin/dbmsxpln.sql.


Example 3 - Displaying the execution plan for any other statement requires the SQL ID to be provided, either directly or indirectly:

  1. Directly:

    SQL> select plan_table_output from
    2 table(dbms_xplan.display_cursor('fnrtqw9c233tt',null,'basic'));


  2. Indirectly:

    SQL> select plan_table_output
    2 from v$sql s,
    3 table(dbms_xplan.display_cursor(s.sql_id,
    4 s.child_number, 'basic')) t
    5 where s.sql_text like 'select PROD_CATEGORY%';


Example 4 - Displaying an execution plan corresponding to a SQL Plan Baseline. SQL Plan Baselines were introduced in Oracle 11g to support the SQL Plan Management (SPM) feature. In order to illustrate such a case we need to create a SQL Plan Baseline first.


SQL> alter session set optimizer_capture_sql_plan_baselines=true;

Session altered.

SQL> select prod_category, avg(amount_sold)
2 from sales s, products p
3 where p.prod_id = s.prod_id
4 group by prod_category;

no rows selected

If the above statement has been executed more than once, a SQL Plan Baseline will be created for it, and you can verify this using the following query:


SQL> select SQL_HANDLE, PLAN_NAME, ACCEPTED
2 from dba_sql_plan_baselines
3 where sql_text like 'select prod_category%';

SQL_HANDLE PLAN_NAME ACC
------------------------------ ------------------------------ ---
SYS_SQL_1899bb9331ed7772 SYS_SQL_PLAN_31ed7772f2c7a4c2 YES


The execution plan for the SQL Plan Baseline created above can be displayed either directly or indirectly:

  1. Directly
    select t.* from
    table(dbms_xplan.display_sql_plan_baseline('SYS_SQL_1899bb9331ed7772',
    format => 'basic')) t


  2. Indirectly
    select t.*
    from (select distinct sql_handle
    from dba_sql_plan_baselines
    where sql_text like 'select prod_category%') pb,
    table(dbms_xplan.display_sql_plan_baseline(pb.sql_handle,
    null,'basic')) t;



The output of either of these two statements is:



----------------------------------------------------------------------------
SQL handle: SYS_SQL_1899bb9331ed7772
SQL text: select prod_category, avg(amount_sold) from sales s, products p
where p.prod_id = s.prod_id group by prod_category
----------------------------------------------------------------------------

----------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_31ed7772f2c7a4c2
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
----------------------------------------------------------------------------

Plan hash value: 4073170114

---------------------------------------------------------
Id Operation Name
---------------------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 VIEW index$_join$_002
4 HASH JOIN
5 INDEX FAST FULL SCAN PRODUCTS_PK
6 INDEX FAST FULL SCAN PRODUCTS_PROD_CAT_IX
7 PARTITION RANGE ALL
8 TABLE ACCESS FULL SALES
---------------------------------------------------------
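
Example 5 - Displaying an execution plan from AWR or from a SQL Tuning Set. These are the two remaining sources in the list above. A minimal sketch, assuming the statement's plan has been captured in AWR and also loaded into a SQL Tuning Set named 'my_sts' (the SQL ID is the one from Example 3; both names are illustrative):

  1. From AWR

    SQL> select plan_table_output from
    2 table(dbms_xplan.display_awr('fnrtqw9c233tt',null,null,'basic'));

  2. From a SQL Tuning Set

    SQL> select plan_table_output from
    2 table(dbms_xplan.display_sqlset('my_sts','fnrtqw9c233tt',null,'basic'));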


Formatting


The format argument is highly customizable and allows you to see as little (high-level) or as much (low-level) detail as you need in the plan output. The high-level options are:

  1. Basic
    The plan includes the operation, options, and the object name (table, index, MV, etc.)
  2. Typical
    It includes the information shown in BASIC plus additional optimizer-related internal information such as cost, size, cardinality, etc. This information is shown for every operation in the plan and represents the optimizer's estimates of the operation cost, the number of rows produced, and so on. It also shows the predicates evaluated by each operation. There are two types of predicates: ACCESS and FILTER. The ACCESS predicates for an index are used to fetch the relevant blocks because they apply to the search columns. The FILTER predicates are evaluated after the blocks have been fetched.
  3. All
    It includes the information shown in TYPICAL plus the lists of expressions (columns) produced by every operation, and the object alias and query block name to which the operation belongs. The last two pieces of information can be used as arguments when adding hints to the statement.
The low-level options allow the inclusion or exclusion of finer details, such as predicates and cost.
For example,


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +predicate +cost'));

-------------------------------------------------------
Id Operation Name Cost (%CPU)
-------------------------------------------------------
0 SELECT STATEMENT 17 (18)
1 HASH GROUP BY 17 (18)
* 2 HASH JOIN 15 (7)
3 TABLE ACCESS FULL PRODUCTS 9 (0)
4 PARTITION RANGE ALL 5 (0)
5 TABLE ACCESS FULL SALES 5 (0)
-------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")



select plan_table_output from
table(dbms_xplan.display('plan_table',null,'typical -cost -bytes'));

----------------------------------------------------------------------------
Id Operation Name Rows Time Pstart Pstop
----------------------------------------------------------------------------
0 SELECT STATEMENT 4 00:00:01
1 HASH GROUP BY 4 00:00:01
* 2 HASH JOIN 960 00:00:01
3 TABLE ACCESS FULL PRODUCTS 766 00:00:01
4 PARTITION RANGE ALL 960 00:00:01 1 16
5 TABLE ACCESS FULL SALES 960 00:00:01 1 16
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("P"."PROD_ID"="S"."PROD_ID")

Note Section


In addition to the plan, the package displays notes in the NOTE section, such as that dynamic sampling was used during query optimization or that star transformation was applied to the query.
For example, if the table SALES did not have statistics, the optimizer would use dynamic sampling and the plan display would report it as follows (note the '+note' detail in the format argument of the query):


select plan_table_output
from table(dbms_xplan.display('plan_table',null,'basic +note'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

Note
-----
- dynamic sampling used for this statement
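
If you would rather the optimizer used real statistics than dynamic sampling, gather them with DBMS_STATS; a minimal sketch, using the schema and table from the example (after re-parsing the statement, the note should no longer appear):

exec dbms_stats.gather_table_stats(ownname => 'SH', tabname => 'SALES');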


Bind peeking



The query optimizer takes the values of bind variables into account when generating an execution plan; this is what is generally called bind peeking. See the first post in this blog about the concept of bind peeking and its impact on the plans and the performance of SQL statements.
As stated earlier, the plan shown in V$SQL_PLAN takes the values of bind variables into account, while the one shown by EXPLAIN PLAN does not. Starting with 10gR2, the dbms_xplan package allows the display of the bind variable values used to generate a particular cursor/plan. This is done by adding '+peeked_binds' to the format argument when using display_cursor().
This is illustrated with the following example:



variable pcat varchar2(50)
exec :pcat := 'Women'

select PROD_CATEGORY, avg(amount_sold)
from sales s, products p
where p.PROD_ID = s.PROD_ID
and prod_category != :pcat
group by PROD_CATEGORY;

select plan_table_output
from table(dbms_xplan.display_cursor(null,null,'basic +PEEKED_BINDS'));

------------------------------------------
Id Operation Name
------------------------------------------
0 SELECT STATEMENT
1 HASH GROUP BY
2 HASH JOIN
3 TABLE ACCESS FULL PRODUCTS
4 PARTITION RANGE ALL
5 TABLE ACCESS FULL SALES
------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

1 - :PCAT (VARCHAR2(30), CSID=2): 'Women'
Categories: DBA Blogs, Development

What CPU Jan 2008 Patch will not tell you

Virag Sharma - Tue, 2008-02-05 09:55

What CPU Jan 2008 Patch will not tell you

File system space: opatch is supposed to check file system space as part of "Running prerequisite checks..." and to start applying the patch only when enough space is available. Yet the apply can still fail midway for lack of space:

Patching component oracle.rdbms, 11.1.0.6.0...

Updating archive file "/u01/app/oracle/product/11.1.0.6/lib/libserver11.a" with "lib/libserver11.a/kzsr.o"
Updating archive file "/u01/app/oracle/product/11.1.0.6/lib/libserver11.a" with "lib/libserver11.a/kzu.o"

The following actions have failed:

Archive not applied /home/oracle/patch/CPU2008/6646866/6650135/files/lib/libserver11.a/kzsr.o to /u01/app/oracle/product/11.1.0.6/lib/libserver11.a... ''

Archive not applied /home/oracle/patch/CPU2008/6646866/6650135/files/lib/libserver11.a/kzu.o to /u01/app/oracle/product/11.1.0.6/lib/libserver11.a... ''

Do you want to proceed? [yn]

Well, I answered with the 'N' option. When we started opatch again, it gave an error, something like a space check failure.

So make sure you have enough space on the file system.

In the case of a RAC database, the CPU patch readme.txt will not tell you that you need to set cluster_database = false before starting the database in upgrade mode ("startup upgrade") in order to compile views and objects.

So make sure that, in a RAC environment, you have set cluster_database = false before starting the database in upgrade mode; a minimal sketch of the whole sequence follows.
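A minimal sketch of that sequence on a single instance (the spfile handling and post-install steps below are assumptions; the patch README remains the authoritative source for the exact scripts to run):

sqlplus / as sysdba

alter system set cluster_database=false scope=spfile;
shutdown immediate
startup upgrade

-- run the view/object recompilation steps from the patch README here,
-- then recompile any remaining invalid objects:
@?/rdbms/admin/utlrp.sql

shutdown immediate
alter system set cluster_database=true scope=spfile;
startup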


Categories: DBA Blogs

Secrets of Happiness

Virag Sharma - Sun, 2008-02-03 10:26
Secrets of Happiness

I was traveling from Agra (the city of the Taj Mahal, 30 miles away from my home town) to Hyderabad. My train, the AP Express, was late, so I thought I would pick up some books. As usual I picked some of my favorites, like Reader's Digest and Champak (a famous kids' book in India). While buying them, I saw a book titled "Secrets of Happiness" by Tanushree Podder. The title looked odd to me, because I feel, how can one define happiness? Well, I picked up the book and browsed some pages, and it looked good. It is not that different from books like "Mega Living" or "Who Will Cry When You Die" by Robin Sharma. The English is typical Indian English. Some of the stories we have already heard in our childhood from our grandmothers, moms, aunties, etc., but it is really nice to revisit them. The writer's presentation is good, and that makes the book more interesting. I started reading at Agra and kept reading till Gwalior; everybody in the train wanted to sleep, and since I was reading, the light was on and everybody's eyebrows were getting tight. I finally switched off the light, but finished the book before reaching Hyderabad. I feel the book is worth reading, and that's why I thought to write about it.

Check book excerpt [click Here]

Summary
  1. What you put into life, you get back.
  2. No situation is good / bad / ugly; it is our beliefs that color our perception of a situation, and we feel accordingly (good / bad / ugly) about it. So change your beliefs and thoughts and things will improve/change; otherwise, same thoughts/beliefs, same results.
  3. Keep It Simple and Straight (KISS) .....................................

Apart from this, there are two more books that are really worth reading:

The Monk Who Sold His Ferrari - Robin Sharma

Follow Your Heart - Andrew Matthews

I read the above books frequently, and feel that if I had gotten these books six years back ……. :-)



Categories: DBA Blogs

Detect numbers with TRANSLATE() - Take two

Jared Still - Thu, 2008-01-31 04:03
Last week I wrote about using TRANSLATE to detect numbers in data Using The TRANSLATE() function...

Andrew Clarke at Radio Free Tooting pointed out the shortcomings of using TRANSLATE() to detect numbers.

As I said earlier, all I needed to do was detect if the characters in a string were all digits or not, and I wanted it to be very fast.

But Andrew's remarks got me thinking - could TRANSLATE() be used to detect more complex numbers?

Here's the short list of requirements:

* Detect integers
* Detect numbers with decimal point ( 4.6, 0.2, .7)
* Detect negative and positive ( leading + or - )
* Reject text with more than 1 '.', such as an IP address ( 127.0.0.1 )
* Reject anything with alpha text

And commas are considered text: 99,324.1 would be alpha.

If you need to do this on 10g, no problem, as a regular expression can handle it.

First create some test data:

drop table number_test;

create table number_test( alphacol varchar2(20));

insert into number_test values('.5');
insert into number_test values('1');
insert into number_test values('2');
insert into number_test values(' 3');
insert into number_test values('4 ');
insert into number_test values('3.14159');
insert into number_test values('127.0.0.1');
insert into number_test values('+34.45');
insert into number_test values('-54.43');
insert into number_test values('this is a test');
insert into number_test values('th1s is 4 t3st');
insert into number_test values('.');
commit;

Now select only columns where the value is a number:

select alphacol
from number_test
where regexp_instr(trim(alphacol),'^[-+]?[0-9]*(\.?[0-9]+)?$') > 0
order by 1

SQL> /

ALPHACOL
--------------------
3
+34.45
-54.43
1
2
3.14159
4

7 rows selected.

That seems to work.

But what if you're stuck doing this on 9i? REGEXP_INSTR is not available.

You can use the user-defined function IS_NUMBER(), which works well but is very slow when used on large amounts of data. (A sketch of such a function appears below.)
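For reference, here is a minimal sketch of such a function (one common formulation based on TO_NUMBER; the actual IS_NUMBER() referred to above may differ):

create or replace function is_number( p_str in varchar2 )
return integer
is
   n number;
begin
   -- to_number raises an exception when p_str is not a valid number
   n := to_number(trim(p_str));
   return 1;
exception
   when value_error or invalid_number then
      return 0;
end;
/

The per-row PL/SQL call and the exception handling are what make this approach slow on large data sets.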

Might we be able to use and abuse the TRANSLATE() function to speed this up? Here's a bit of convoluted SQL that works well on the limited test data:

select alphacol, alpha2
from
(
  select alphacol,
         -- if there is a leading sign (+ or -), remove it
         decode(substr(alpha2,1,1),
                '-',substr(alpha2,2),
                '+',substr(alpha2,2),
                alpha2
         ) alpha2
  from (
    select alphacol,
           -- remove the first '.' if one exists; any further '.'
           -- remains and makes the digits-only test below fail
           replace(substr(alphacol,1,instr(alphacol,'.')),'.')
             || substr(alphacol,instr(alphacol,'.')+1) alpha2
    from (
      select trim(alphacol) alphacol
      from number_test
    )
  )
)
-- the string is all digits when translating every digit to '|'
-- yields a string of '|' the same length as the string itself
where substr('||||||||||||||||||||||||||||||||',1,length(alpha2)) =
      translate(alpha2,'0123456789','||||||||||')
/



Output from this nasty bit of SQL is identical to that when using REGEXP_INSTR:

ALPHACOL ALPHA2
-------------------- ----------------------------------------
.5 5
1 1
2 2
3 3
4 4
3.14159 314159
+34.45 3445
-54.43 5443

8 rows selected.

To make the TRANSLATE() function do what is needed, a lot of data manipulation had to be done in the SQL. So much work is being done now that it takes nearly as long to run as the IS_NUMBER() function, so there isn't much point in using TRANSLATE().

Runstats results:

SQL> @th5
.047739 secs
.037447 secs
PL/SQL procedure successfully completed.

If nothing else, this was an interesting exercise.
Categories: DBA Blogs

An upper bound of the transactions throughputs

Christian Bilien - Mon, 2008-01-28 15:46
Capacity planning fundamental laws are seldom used to identify benchmark flaws, although some of these laws are almost trivial. Worse, some performance assessments provide performance outputs that are individually commented on without even realizing that physical laws bind them together. Perhaps the simplest of them all is the Utilization Law, which states that the utilization of […]
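For reference (a standard operational-analysis identity, not quoted from the truncated post above), the Utilization Law is usually written as

U = X * S

where U is the utilization of a resource, X is the throughput, and S is the service demand per transaction at that resource. Since U cannot exceed 1, throughput is bounded by X <= 1/S, presumably the upper bound the title alludes to.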
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs