DBA Blogs

Open World Recap and New White Papers

Inside the Oracle Optimizer - Fri, 2009-10-09 14:32
The Optimizer group has two sessions and a demo station in the Database campground at this year's Oracle Open World. We will give a technical presentation on What to Expect from the Oracle Optimizer When Upgrading to Oracle Database 11g and host an Oracle Optimizer Roundtable.

The technical session, which is on Tuesday Oct. 13 at 2:30 pm, gives step-by-step instructions on how to use the new 11g features to ensure your upgrade goes smoothly and without any SQL plan regressions. This session is based on our latest white papers, Upgrading from Oracle Database 10g to 11g: What to expect from the Optimizer and SQL Plan Management in Oracle Database 11g.

The roundtable, which is on Thursday Oct. 15th at 10:30 am, will give you a first-hand opportunity to pose your burning Optimizer and statistics questions directly to a panel of our leading Optimizer developers. In fact, if you plan to attend the roundtable and already know what questions you would like to ask, please send them to us via email and we will be sure to include them. Otherwise, you can hand in your questions at our demo station at any stage during the week, or as you enter the actual session. Just be sure to write your questions in clear block capitals!

We look forward to seeing you all at Open World.

Categories: DBA Blogs, Development

Module name for logon trigger in 11g R2 AWR report

Virag Sharma - Mon, 2009-09-28 00:53
Today, while working on production performance tuning, we came across a difference between the AWR reports of 11g R1 and 11g R2.

In an Oracle 11g R1 AWR report, the logon trigger module name usually appears as
sqlplus / perl / Mid Tier@server_name_from_it_login

But in an 11g R2 AWR report, the logon trigger module name usually appears as oraagent.bin@DB_SERVER_WHERE_IT_RUN

The new behavior looks more logical, because it is the database server that runs the logon trigger code,
not the user.




# Excerpt from an 11g R1 AWR report
# In the example below, app678 is the server name from which
# the user logged in to the database using sqlplus

Module: sqlplus@app678utl (TNS V1-V3)
UPDATE scott.MY_AUDIT SET LAST_LOGIN = SYSDATE WHERE XYZ_USERNAME = :B1


# Excerpt from an 11g R2 AWR report
# In the example below, apps001 is the database server

Module: oraagent.bin@apps001 (TNS V1-V3)
UPDATE scott.MY_AUDIT SET LAST_LOGIN = SYSDATE WHERE XYZ_USERNAME = :B1
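The same module value can also be seen for live sessions outside AWR. A minimal sketch (standard V$SESSION columns only):

-- Show the module each connected session reports; AWR aggregates by this value
SELECT sid, username, module, program
FROM v$session
WHERE module IS NOT NULL
ORDER BY module;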



Categories: DBA Blogs

ASM Hands-On Training, Server Environment Setup And Aliases

Alejandro Vargas - Sat, 2009-09-12 05:37

The server where the labs of the ASM Hands-On Training run is configured with tcsh as the default shell for the oracle user.

A .cshrc file containing several aliases was set up to ease moving around and executing repetitive commands.
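For illustration, .cshrc aliases of this kind look like the following (a sketch; these alias names are hypothetical, not the ones from the course file):

# tcsh alias syntax: alias <name> '<command>'
alias sq    'sqlplus / as sysdba'
alias oh    'cd $ORACLE_HOME'
alias lsasm 'ls -l /dev/oracleasm/disks'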

This document gives a general explanation of how the environment can be used.

Details on this file: Server Environment Setup And Aliases

Categories: DBA Blogs

ASM Hands-On Training, Lab 21, ASMCMD Use And Options

Alejandro Vargas - Sat, 2009-09-12 05:27

In this lab we will review some of the useful commands provided by the asmcmd command-line utility.

Some asmcmd commands display information, which is based on the V$ASM views; other commands, such as mkdir or rm, actually make changes to the directory structure.
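A minimal sketch of an asmcmd session (the disk group and directory names are illustrative):

$ asmcmd
ASMCMD> lsdg                      # list disk groups with size and free space
ASMCMD> cd +DATA/SATI12
ASMCMD> ls -l                     # list files in the current directory
ASMCMD> mkdir +DATA/SATI12/dumps  # structural change: create a directory
ASMCMD> rm -r +DATA/SATI12/dumps  # structural change: remove it again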

Details on file: ASMCMD Use And Options

Categories: DBA Blogs

ASM Hands-On Training, Lab 20, Storage Consolidation With ASM

Alejandro Vargas - Sat, 2009-09-12 05:22

In this lab we will share our ASM disks with a second server and open the sati12 database on it.

To do that, we copied the VM to another location without including the ASM disks, which are located in a separate folder; this way the second VM points to the same ASM disks as the first one.

Details on file: Storage Consolidation With ASM

Categories: DBA Blogs

ASM Hands-On Training, Lab 18, ASM Metadata and Other Checkups

Alejandro Vargas - Sat, 2009-09-12 05:01

In this lab we will review various utilities that provide further information for managing ASM.

Topics include: checksum mismatch after a storage crash; AMDU, an 11g tool that can be used with 10g as well; block dumps; Asmiostats; and asmdebug.
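As a taste of AMDU (a hedged sketch; the disk string and file number are illustrative and should be adapted to your ASM setup):

# With no extract option, amdu writes a report about all ASM disks it can find
$ amdu -diskstring '/dev/oracleasm/disks/*'
# Extract file 256 from disk group DATA, even if the disk group cannot be mounted
$ amdu -diskstring '/dev/oracleasm/disks/*' -extract 'DATA.256'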

Details on this file: ASM Metadata and Other Checkups

Categories: DBA Blogs

ASM Hands-On Training, Lab 17, Measuring Throughput

Alejandro Vargas - Sat, 2009-09-12 03:55

In this lab we will review simple methods for getting information about database throughput and response time.

We will produce the load using Swingbench and gather AWR snapshots every 10 minutes. After some time we will be able to check throughput statistics based on the AWR snapshot information.
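A minimal sketch of setting the AWR snapshot interval to 10 minutes (retention is left unchanged):

BEGIN
  -- interval is expressed in minutes
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 10);
END;
/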

Details on this Document: Measuring Throughput

Another useful script for checking IO:

set pages 50000 lines 250
alter session set nls_date_format='dd-mm-yy hh24:mi';
spool chkio1-Physical-rw.log
select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then average end) Physical_Read_Total_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then average end) Physical_Write_Total_Bps,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;
spool off

select min(begin_time), max(end_time),
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end) Physical_Read_Total_Bps,
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) Physical_Write_Total_Bps,
sum(case metric_name when 'Physical Read Total Bytes Per Sec' then maxval end) +
sum(case metric_name when 'Physical Write Total Bytes Per Sec' then maxval end) Total_IO,
snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id
/

Categories: DBA Blogs

ASM Hands-On Training, Lab 16, Configuring And Running Swingbench And OSWatcher

Alejandro Vargas - Sat, 2009-09-12 01:02

When implementing new environments, it is convenient to run stress tests and to gather OS statistics that will provide valuable diagnostic information.

Swingbench provides an easily configurable tool to run OLTP- or DSS-like stress tests; that is very useful when the customer has no way to benchmark the new configuration.

OSWatcher collects statistics from the OS in a cyclic 2-day window by default, which can be extended to whatever period is convenient for you.
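Starting the collector is typically a one-liner (a sketch based on the classic OSWatcher distribution; verify the script names and arguments against the README of your version):

$ nohup ./startOSW.sh 60 48 &   # sample every 60 seconds, keep 48 hours of data
$ ./stopOSW.sh                  # stop the collection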

Details on this file:
Lab16-Configure-SwingBench-and-OsWatcher.pdf

Categories: DBA Blogs

ASM Hands-On Training, Lab 15, Cloning A Disk Group

Alejandro Vargas - Sat, 2009-09-12 00:16

Sometimes it is necessary to have several clones of the same database running on the same server. That was simple to do when the database was based on a file system, and it is still simple to do with the help of RMAN.

But there are very large databases, usually cloned using storage tools like BCV, SRDF, or snapshots, for which RMAN is not a viable possibility.

For these databases, there is the option of implementing the procedure we will test in this lab.

Note that the rename disk group script has not been made public.
The rename disk group functionality is available in 11g R2, which was made available to the general public in August 2009.

These are the steps required to complete the process (a sketch of steps 3 and 4 follows the list):

1. Prepare a set of storage devices with enough space to hold a copy of the disk group to be cloned
2. Shut down the source database
3. Make a bit-by-bit copy of the source database on the target devices
4. Add the kfed path to the PATH of user root
5. Execute the rename disk group script
6. Rename the ASM disks of the cloned devices
7. Rescan the ASM disks
8. Start the ASM instance and mount the new disk group
9. Recreate the controlfile and rename the database based on the cloned disk group
10. Open the clone with the resetlogs option
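For example, steps 3 and 4 might look like this (a hedged sketch; the device names are illustrative):

# step 3: bit-by-bit copy of each source device onto its target device
dd if=/dev/mapper/src_disk1 of=/dev/mapper/tgt_disk1 bs=1M
# step 4: make kfed, shipped under $ORACLE_HOME/bin, visible to root,
# so the rename script and header checks can run
export PATH=$ORACLE_HOME/bin:$PATH
kfed read /dev/mapper/tgt_disk1 | grep -i dskname   # inspect the cloned disk header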

Details on this file:
Cloning Disk Group

Categories: DBA Blogs

ASM Hands-On Training, Lab 14, 11g Compatibility Parameters and Resilience Test

Alejandro Vargas - Fri, 2009-09-11 21:48

In this lab we will review the 11g compatibility parameters and run 3 resilience tests.

The compatibility parameters compatible.asm and compatible.rdbms define the minimum ASM and database versions that will be able to access a disk group. These parameters can only be advanced; they cannot be reverted to a lower value.
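A minimal sketch of advancing them (the disk group name is illustrative):

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1.0.0.0';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1.0.0.0';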

In the resilience tests we will do the following:


  • Add ASM disks and bring down the ASM instance in the middle of the operation

  • Remove ASM disks and bring down the ASM instance in the middle of the operation

  • Destroy the ASM metadata and restore the disk

Details on this file:
11g Compatibility Parameters and Resilience Test

Categories: DBA Blogs

Oracle Enterprise Linux 5 Update 4 available on ULN

Sergio's Blog - Wed, 2009-09-09 07:33

Oracle Enterprise Linux 5 Update 4 has been added to the Unbreakable Linux Network (ULN). Customers with Linux support from Oracle may download and install OEL 5.4 packages for the i386 and x86_64 architectures. Itanium packages are coming soon. Also coming soon: OEL 5.4 on public-yum.oracle.com and installation media on edelivery.oracle.com/linux
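Once the packages appear on public-yum.oracle.com, updating should be the usual yum workflow (a sketch; the repo file name follows the public-yum conventions of the time and should be verified against the site):

# wget http://public-yum.oracle.com/public-yum-el5.repo -O /etc/yum.repos.d/public-yum-el5.repo
# yum list updates
# yum update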

As for the rumors that OEL is based on CentOS? Only in a universe where time flows backward. Hint: CentOS 5.3 was announced on April 3rd, 2009, and OEL 5.3 was announced on January 28th, 2009, more than two months earlier. We have no relationship with CentOS and do not rely on them.

Update: Installation media are now available via edelivery.oracle.com/linux

Categories: DBA Blogs

Shell Tricks

Jared Still - Sun, 2009-09-06 18:27
DBAs from time to time must write shell scripts. If your environment is strictly Windows based, this article may hold little interest for you.

Many DBAs however rely on shell scripting to manage databases. Even if you use OEM for many tasks, you likely use shell scripts to manage some aspects of DBA work.

Lately I have been writing a number of scripts to manage database statistics - gathering, deleting, and importing/exporting both to and from statistics tables and exp files.

Years ago I started using the shell builtin getopts to gather arguments from the command line. A typical use might look like the following:

while getopts d:u:s:T:t:n: arg
do
   case $arg in
      d) DATABASE=$OPTARG
         echo DATABASE: $DATABASE;;
      u) USERNAME=$OPTARG
         echo USERNAME: $USERNAME;;
      s) SCHEMA=$OPTARG
         echo SCHEMA: $SCHEMA;;
      T) TYPE=$OPTARG
         echo TYPE: $TYPE;;
      t) TABLE_NAME=$OPTARG;;
         #echo TABLE_NAME: $TABLE_NAME
      n) OWNER=$OPTARG
         echo OWNER: $OWNER;;
      *) echo "invalid argument specified"; usage; exit 1;;
   esac
done


In this example, the valid arguments are -d, -u, -s, -T, -t and -n. All of these arguments require a value.

The command line arguments might look like this:
somescript.sh -d orcl -u system -s scott

If an invalid argument such as -z is passed, the script will exit with the exit code set to 1.

For the script to work correctly, some checking of the arguments passed to the script must be done.

For this script, the rules are as follows:
  • -d and -u must always be set
  • -s must be set if -T is 'SCHEMA'
  • -t and -n must both have a value or be blank
  • -s must be used with -T
In this example, values for -T other than 'SCHEMA' are not being checked.

The usual method (at least for me) to test the validity of command line arguments has always been to use the test, or [] operator with combinations of arguments.

For the command line arguments just discussed, the tests might look like the following:

[ -z "$DATABASE" -o -z "$USERNAME" ] && {
   echo Database or Username is blank
   exit 2
}

# include schema name if necessary
[ "$TYPE" == 'SCHEMA' -a -z "$SCHEMA" ] && {
   echo Please include schema name
   exit 3
}

# both owner and tablename must have a value, or both be blank
[ \( -z "$TABLE_NAME" -a -n "$OWNER" \) -o \( -n "$TABLE_NAME" -a -z "$OWNER" \) ] && {
   echo Please specify both owner and tablename
   echo or leave both blank
   exit 4
}

# if -s is set, so must -T
[ -n "$SCHEMA" -a -z "$TYPE" ] && {
   echo Please include a type with -T
   exit 5
}


As you can see, there are a fair number of tests involved to determine the validity of the command line arguments. You may have guessed why I skipped one for this demo - I just did not want to write any more tests.

Validating command line arguments really gets difficult with a larger number of possible arguments. Worse yet, any later modifications to the script that require a new command line argument become dreaded tasks that are put off as long as possible, due to the complexity of testing the validity of command line arguments.

While writing a script that had 11 possible arguments, I was dreading writing the command line argument validation section, and I thought there must be a better way.

It seemed that there must be a simple method of using regular expressions to validate combinations of command line arguments. I had never seen this done, and after spending a fair bit of time googling the topic it became apparent that there was no code available for a cut-and-paste solution, so it seemed a nice opportunity to be innovative.

After experimenting a bit, I found what I think is a better way.

The method I use is to concatenate all possible command line arguments into a ':' delimited string, and then use a set of pre-prepared regexes to determine whether or not the command line arguments are valid.

One immediately obvious drawback to this method is that arguments containing the ':' character cannot be used. However, the delimiting character can easily be changed if needed.

Using the same example as before, the command line arguments are all concatenated into a string and converted to upper case:

ALLARGS=":$USERNAME:$DATABASE:$OWNER:$TABLE_NAME:$SCHEMA:$TYPE:"
# upper case args
ALLARGS=$(echo $ALLARGS | tr "[a-z]" "[A-Z]")
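For example, the command line somescript.sh -d orcl -u system -T SCHEMA -s scott would produce:

ALLARGS=":SYSTEM:ORCL:::SCOTT:SCHEMA:"

with the owner and table positions left empty.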


Next, a series of regular expressions is created. The first two are generic, and may or may not be used as building blocks for other regular expressions. The others each correspond to a specific command line argument.


# alphanumeric only, at least 1 character
export ALNUM1="[[:alnum:]]+"
# alphanumeric only, at least 3 characters
export ALNUM3="[[:alnum:]]{3,}"
# username - alphanumeric only at least 3 characters
export USER_RE=$ALNUM3
# database - alphanumeric only at least 3 characters
export DATABASE_RE=$ALNUM3
# owner - alphanumeric and _ and $ characters
export OWNER_RE='[[:alnum:]_$]+'
# table_name - alphanumeric and _, # and $ characters
export TABLE_RE='[[:alnum:]_#$]+'
# schema - alphanumeric and _ and $ characters
export SCHEMA_RE='[[:alnum:]_$]+'


These regular expressions could use further refinement (such as requiring that a username start with an alphabetic character), but they are sufficient for this demonstration.

Next, the regular expressions are concatenated into ':' delimited strings, with each possible command line argument represented either by its corresponding regex or by null.

The regexes are stuffed into a bash array. For our example, it looks like this:
#   :   user        :  db           :  owner        :  table     : schema        : type
VALID_ARGS=(
":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \
":$USER_RE:$DATABASE_RE::::(DICTIONARY_STATS|SYSTEM_STATS|FIXED_OBJECTS_STATS):" \
":$USER_RE:$DATABASE_RE:$OWNER_RE:$TABLE_RE:$SCHEMA_RE:(SCHEMA):" \
":$USER_RE:$DATABASE_RE:::$SCHEMA_RE:SCHEMA:")

Notice that there are four different combinations of command line arguments represented.

In all cases the USERNAME and DATABASE are required and must correspond to the regex provided.

In the first combination of arguments, the owner and table must also be specified, and type (-T) must be either one of DICTIONARY_STATS, SYSTEM_STATS or FIXED_OBJECTS_STATS.

In the second possible combination, the only argument allowed in addition to DATABASE and USERNAME is the type (-T) argument.

The third combination requires the OWNER, TABLE_NAME and SCHEMA argument to have a valid value, and the TYPE argument must be set to SCHEMA.

The final combination of arguments requires just the SCHEMA argument and the TYPE argument must be set to SCHEMA, in addition to the USERNAME and DATABASE arguments.

By now you likely want to know just how these regular expressions are tested. The following function is used to test the command line arguments against each regular expression:
function validate_args {
   typeset arglist
   arglist=$1

   while shift
   do
      [ -z "$1" ] && break
      if [ $(echo $arglist | grep -E $1) ]; then
         return 0
      fi
   done
   return 1
}


Here's how it is used in the script:

# VALID_ARGS must NOT be quoted or it will appear as a single arg in the function
validate_args $ALLARGS ${VALID_ARGS[*]}
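A typical call site then branches on the return code (a sketch; usage is the script's own help function):

validate_args $ALLARGS ${VALID_ARGS[*]}
if [ $? -ne 0 ]; then
   echo "invalid combination of command line arguments"
   usage
   exit 1
fi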


While this method may appear somewhat confusing at first, it becomes less so after using it a few times. It greatly simplifies the use of many command line arguments that may appear in differing combinations.

As far as I know, this method only works properly with the bash shell. I have done testing on only two shells, bash and ksh. It does not work properly on ksh.

Here's a demonstration of the ksh problem. The following script is run from both ksh and bash:

function va {

echo ARG1: $1
}


R1="[[:alnum]]+"
R2="[[:alnum]]{3,}"

va $R1
va $R2
And here are the results:
18:9-jkstill-18 > ksh t3
ARG1: [[:alnum]]+
ARG1: [[:alnum]]3
[ /home/jkstill/bin ]

jkstill-18 > bash t3
ARG1: [[:alnum]]+
ARG1: [[:alnum]]{3,}
[ /home/jkstill/bin ]



Notice that when the script is run with ksh, the '{', '}' and ',' are removed from the regular expression. I could find no combination of quoting and escape characters that would prevent that from happening. This method of command line argument validation could be made to work with ksh if those characters are not used in the regexes; that would be rather limiting, though.

One other drawback you may have noticed with this method of validating command line arguments is that when an error condition is encountered, the exit code is always 1. With the [] method it was easy to exit with different codes to indicate the nature of the error. Something similar could likely be done by embedding a code into each set of regexes, but I will leave that as an exercise for the reader.

The complete prototype script, as well as a test script, can be downloaded:


The next article will include a set of functions used along with the validate_args() function to make shell scripts a bit more robust.

Categories: DBA Blogs

Oracle Database 11g Release 2 New Features : Edition based redefinition

Virag Sharma - Fri, 2009-09-04 08:33

Every release has some major changes, which we usually call new features. Some of these features dominate the version; for example, 11g R1 had SPA, DB Replay, Active standby, etc. Likewise, this Oracle release (Oracle Database 11g Release 2) has a new feature for which the release will be known in the future: "Edition based redefinition".


Most likely this feature is designed to give big support to APPS upgrades (i.e. Oracle E-Business Suite upgrades). When you upgrade an APPS database it needs a lot of downtime; the hope is that, using this new feature, APPS upgrades will take less time in the future.


This feature will allow application upgrades (as a DBA, I would prefer to say online database object upgrades) with minimum, or maybe even zero, downtime. I consider this feature one step toward ZERO downtime for application upgrades.


In 10g, statistics collected on a table were published immediately, which often caused performance issues. Oracle 11g R1 added a feature for collecting stats on tables and publishing them as needed, to avoid performance issues due to stats collection.

Taking a similar idea to the next step, 11g R2's redefinition feature upgrades objects without publishing them immediately, and the database can hold multiple editions of an object's definition. Of course, there are some limitations.



Check Default Edition

SQL>
1 SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
2* Where PROPERTY_NAME = 'DEFAULT_EDITION'
SQL> /

PROPERTY_VALUE
--------------------------------------------------------------------------------
ORA$BASE



Changing Edition at session or Database level

SQL> ALTER SESSION SET EDITION=ora$base;

Session altered.

SQL> ALTER DATABASE DEFAULT EDITION =ora$base;

Database altered.





Grant create or drop edition to user

SQL> GRANT CREATE ANY EDITION, DROP ANY EDITION to virag;

Grant succeeded.


Enable Edition on schema / User

SQL> ALTER USER virag ENABLE EDITIONS force;
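A quick sketch of going one step further (the edition name e2 is hypothetical): create a child edition and switch the session to it; objects recompiled there are private to e2, while sessions on ora$base still see the old definitions.

SQL> CREATE EDITION e2 AS CHILD OF ora$base;
SQL> ALTER SESSION SET EDITION = e2;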


Of course, there are some limitations...

Will add more soon on the how-to...


Reference

19 Edition-Based Redefinition

More posts on Oracle Database 11g R2 (Release 2):

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition based redefinition

Categories: DBA Blogs

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name

Virag Sharma - Wed, 2009-09-02 19:32
Whenever you add or remove a node from a RAC database/cluster, you need to change the TNS entries. Oracle 11g R2 introduces a new concept called Single Client Access Name (SCAN), which eliminates the need to change TNS entries when nodes are added to or removed from the cluster.

RAC instances register with the SCAN listeners as remote listeners. The SCAN is a fully qualified name.
Oracle recommends assigning 3 addresses to the SCAN, which creates three SCAN listeners.



$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node apps001
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node apps002
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node apps002




Running the following command on node 2 (apps002):


$ ps -aef |grep -i SCAN

oracle 9380 1 0 Aug13 ? 00:01:09 /d01/apps/oracle_crs/11.2/bin/tnslsnr LISTENER_SCAN2 -inherit

oracle 9380 1 0 Aug13 ? 00:01:09 /d01/apps/oracle_crs/11.2/bin/tnslsnr LISTENER_SCAN3 -inherit

oracle 9993 7114 0 09:57 pts/3 00:00:00 grep -i crs





From the above output, it is clear that the SCAN listeners run from the CRS_HOME.

$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521


$ srvctl config scan
SCAN name: apps-scan, Network: 1/192.168.182.0/255.255.255.0/
SCAN VIP name: scan1, IP: /apps-scan.us.oracle.com/192.168.182.109
SCAN VIP name: scan2, IP: /apps-scan.us.oracle.com/192.168.182.110
SCAN VIP name: scan3, IP: /apps-scan.us.oracle.com/192.168.182.108




A TNS entry can use a single address (the SCAN name) instead of one entry per node.


TNS entries configured to use the database VIP addresses will keep working without any issue; using SCANs is not mandatory (maybe to support backward compatibility).



#
# TNS ENTRY with SCAN
#

test.world =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=apps-scan.world)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=R1211.world))
)

#
# TNS Entry without SCAN ( Old way)
#

test.world =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=apps001-vip.world)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=apps002-vip.world)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=R1211.world))
)




Clients can connect to a particular instance of the database using the SCAN. The entry will look like:


test.world =
(description=
(address=(protocol=tcp)(host=apps-scan.world)(port=1521))
(connect_data=
(service_name=R1211.world)
(instance_name=apps1cl1)))





More posts on Oracle Database 11g R2 (Release 2):

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition based redefinition

Categories: DBA Blogs

11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT

Virag Sharma - Wed, 2009-09-02 11:08
In earlier versions there was no standard way to change the audit tables' tablespace.
In Oracle 11g R2 (also included in 11.1.0.7 and 10.2.0.5 (need to check)), you can change the audit tables' (SYS.AUD$ and SYS.FGA_LOG$) tablespace using DBMS_AUDIT_MGMT.


Not only can you change the audit tables' tablespace, you can now also periodically delete the audit trail records using the CLEAN_AUDIT_TRAIL procedure (new in 11.2).

So now it is official: you can change the AUD$ table's tablespace and purge it :-).

In the example below, I am changing the tablespace for AUD$.

Checking current tablespace from AUD$

SQL> select TABLESPACE_NAME from dba_segments where SEGMENT_NAME='AUD$';

TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSTEM



Changing Tablespace from SYSTEM to SYSAUX for AUD$

SQL>

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
AUDIT_TRAIL_LOCATION_VALUE => 'SYSAUX');
END;
/

PL/SQL procedure successfully completed.


Checking changed Tablespace


SQL> select TABLESPACE_NAME from dba_segments where SEGMENT_NAME='AUD$';

TABLESPACE_NAME
--------------------------------------------------------------------------------
SYSAUX




AUDIT_TRAIL_TYPE: Refers to the database audit trail type. Enter one of the following values:

  1. DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD: Standard audit trail table, AUD$.
  2. DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD: Fine-grained audit trail table, FGA_LOG$.
  3. DBMS_AUDIT_MGMT.AUDIT_TRAIL_DB_STD: Both standard and fine-grained audit trail tables.

AUDIT_TRAIL_LOCATION_VALUE: Specifies the NEW destination tablespace.

As noted above, you can also periodically delete audit trail records, XML files, .aud files, etc., using the CLEAN_AUDIT_TRAIL procedure.

STEPS for Purging AUDIT TRAIL

# Check initialization

SET SERVEROUTPUT ON

BEGIN
IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
THEN
dbms_output.put_line('CLEANUP NOT INITIALIZED');
ELSE
dbms_output.put_line('CLEANUP INITIALIZED');
END IF;
END;
/


# Set initialization

BEGIN
DBMS_AUDIT_MGMT.INIT_CLEANUP(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
DEFAULT_CLEANUP_INTERVAL => 6 ); -- interval in hours
END;
/

# Check whether the last archive timestamp is set

SQL> desc DBA_AUDIT_MGMT_LAST_ARCH_TS
Name Null? Type
----------------------------------------- -------- ----------------------------
AUDIT_TRAIL VARCHAR2(20)
RAC_INSTANCE NOT NULL NUMBER
LAST_ARCHIVE_TS TIMESTAMP(6) WITH TIME ZONE

SQL> select * from DBA_AUDIT_MGMT_LAST_ARCH_TS;

# Set the last archive timestamp



BEGIN
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
LAST_ARCHIVE_TIME => SYSDATE - 30, -- we want to delete audit records older than 30 days
RAC_INSTANCE_NUMBER => 1 );
END;
/

# For non-RAC systems, don't use RAC_INSTANCE_NUMBER
# If the RAC system has 4 nodes, run the above command 4 times,
# with RAC_INSTANCE_NUMBER 1, 2, 3, 4

# Manual Purge

BEGIN
DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
USE_LAST_ARCH_TIMESTAMP => TRUE);
END;
/


#
# If USE_LAST_ARCH_TIMESTAMP is FALSE, it purges the entire audit trail.
#

# Here we used DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL =>
# All audit trail types. This includes the standard database audit trail
# (SYS.AUD$ and SYS.FGA_LOG$ tables), operating system (OS) audit trail,
# and XML audit trail. More details are given below
#AUDIT_TRAIL_ALL => All audit trail types. This includes the standard database audit trail (SYS.AUD$ and SYS.FGA_LOG$ tables), operating system (OS) audit trail, and XML audit trail.
#AUDIT_TRAIL_AUD_STD => Standard database audit records in the SYS.AUD$ table
#AUDIT_TRAIL_DB_STD => Both standard audit (SYS.AUD$) and FGA audit(SYS.FGA_LOG$) records
#AUDIT_TRAIL_FGA_STD => Standard database fine-grained auditing (FGA) records in the SYS.FGA_LOG$ table
#AUDIT_TRAIL_FILES => Both operating system (OS) and XML audit trails
#AUDIT_TRAIL_OS => Operating system audit trail. This refers to the audit records stored in operating system files.
#AUDIT_TRAIL_XML => XML audit trail. This refers to the audit records stored in XML files.
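Instead of purging manually, a recurring purge job can be scheduled (a hedged sketch; the job name is illustrative and the interval is in hours):

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
AUDIT_TRAIL_PURGE_INTERVAL => 24,
AUDIT_TRAIL_PURGE_NAME => 'Nightly_Audit_Purge',
USE_LAST_ARCH_TIMESTAMP => TRUE);
END;
/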

Reference

DBMS_AUDIT_MGMT ( Oracle Documentation 11.2 )

More posts on Oracle Database 11g R2 (Release 2):

Oracle 11g Release 2 (11.2 ) New Features : SCAN - Single Client Access Name
11G R2 New Feature : Purge audit trail records using DBMS_AUDIT_MGMT
Oracle Database 11g Release 2 New Features : Edition based redefinition


Categories: DBA Blogs

From Oracle to a Successful Vineyard

Virag Sharma - Sun, 2009-08-16 11:33

Last month I was in Nasik (a small town near Mumbai). We roamed around the city and visited many places in town. Our driver suggested visiting the Sula Vineyard. I am not very fond of wine, but I had a very good time with friends in Napa Valley, CA, US, so I thought I would experience an Indian vineyard. At the vineyard, we learned that the owner of Sula Vineyard, Mr. Rajeev, graduated in Economics and Industrial Engineering from Stanford and was working with Oracle in Silicon Valley. Later he left his job and started a vineyard in India; today Sula is one of the well-known wine brands in India. The story looked interesting, so I thought I would share it with all (links are given below).


From Oracle to a Successful Vineyard: Rajeev Samant of Sula Wines
http://www.delhiwineclub.com/Interview/interview_with_rajeev2.asp
http://www.pagalguy.com/plugins/p2_news/printarticle.php?p2_articleid=912
Categories: DBA Blogs

Understanding DBMS_STATS.SET_*_PREFS procedures

Inside the Oracle Optimizer - Tue, 2009-08-11 21:25
In previous database releases you had to use the DBMS_STATS.SET_PARAM procedure to change the default value for the parameters used by the DBMS_STATS.GATHER_*_STATS procedures. The scope of any changes that were made was all subsequent operations. In Oracle Database 11g, the DBMS_STATS.SET_PARAM procedure has been deprecated and replaced with a set of procedures that allow you to set a preference for each parameter at a table, schema, database, and global level. These new procedures are called DBMS_STATS.SET_*_PREFS and offer a much finer granularity of control.

However, there has been some confusion around which procedure you should use when, and what the hierarchy is among these procedures. In this post we hope to clear up the confusion. Let's start by looking at the list of parameters you can change using the DBMS_STATS.SET_*_PREFS procedures.

  • AUTOSTATS_TARGET (SET_GLOBAL_PREFS only)
  • CASCADE
  • DEGREE
  • ESTIMATE_PERCENT
  • METHOD_OPT
  • NO_INVALIDATE
  • GRANULARITY
  • PUBLISH
  • INCREMENTAL
  • STALE_PERCENT

As mentioned above there are four DBMS_STATS.SET_*_PREFS procedures.

  1. SET_TABLE_PREFS

  2. SET_SCHEMA_PREFS

  3. SET_DATABASE_PREFS

  4. SET_GLOBAL_PREFS


The DBMS_STATS.SET_TABLE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for the specified table only.
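For example, a minimal sketch (the schema and table names are illustrative): set a table-level preference and read it back with DBMS_STATS.GET_PREFS.

BEGIN
DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'STALE_PERCENT', '5');
END;
/

SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SCOTT', 'EMP') FROM dual;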

The DBMS_STATS.SET_SCHEMA_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the existing objects in the specified schema. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in the specified schema. Since it uses DBMS_STATS.SET_TABLE_PREFS calling this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL_PREF values for all parameters.

The DBMS_STATS.SET_DATABASE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the user defined schemas in the database. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in each of the user defined schemas. Since it uses DBMS_STATS.SET_TABLE_PREFS this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL_PREF values for all parameters. It is also possible to include the Oracle owned schemas (sys, system, etc) by setting the ADD_SYS parameter to TRUE.

The DBMS_STATS.SET_GLOBAL_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for any object in the database that does not have an existing table preference. All parameters default to the global setting unless there is a table preference set or the parameter is explicitly set in the DBMS_STATS.GATHER_*_STATS command. Changes made by this procedure will affect any new objects created after it has been run as new objects will pick up the GLOBAL_PREF values for all parameters.

With GLOBAL_PREFS it is also possible to set a default value for one additional parameter, called AUTOSTATS_TARGET. This additional parameter controls which objects the automatic statistics gathering job (that runs in the nightly maintenance window) will look after. The possible values for this parameter are ALL, ORACLE, and AUTO. ALL means the automatic statistics gathering job will gather statistics on all objects in the database. ORACLE means it will only gather statistics for the Oracle owned schemas (sys, system, etc.). Finally, AUTO (the default) means Oracle will decide which objects to gather statistics on. Currently AUTO and ALL behave the same.

In summary, DBMS_STATS obeys the following hierarchy for parameter values: parameter values set explicitly in the DBMS_STATS.GATHER_*_STATS command overrule everything else. If a parameter has not been set in the command, we check for a table-level preference. If there is no table preference set, we use the global preference.
Categories: DBA Blogs, Development

Detecting Corrupt Data Blocks

Jared Still - Thu, 2009-08-06 16:50
Or more accurately, how not to detect corrupt data blocks.

This thread on Oracle-L is regarding lost writes on a database.

One suggestion was made to use the exp utility to export the database, thereby determining if there are corrupt blocks in the database due to disk failure. I didn't give it much thought at first, but fellow Oak Table member Mark Farnham got me thinking about it.

Using exp to detect corrupt blocks, or rather the absence of corrupt blocks, may work, but then again, it may not. It is entirely possible to do a full table scan on a table successfully, as would happen during an export, even though the blocks on disk have been corrupted.

This can be demonstrated by building a table, ensuring the contents are cached, then destroying the data in the data file, followed by a successful export of the table.

Granted, there are a lot of mitigating factors that could be taken into consideration as to whether or not this would happen in a production database. That's not the point: the point is that it could happen, so exp is not a reliable indicator of the state of the data files on disk.
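As an aside, tools that actually read the blocks from disk are better suited to this kind of check (a hedged sketch; the datafile path is the one used in the test below):

$ dbv file=/u01/oradata/dv11/lost_write.dbf blocksize=8192
RMAN> backup validate check logical database;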

This test was performed on Oracle 10.2.0.4 EE on RH Linux ES 4. Both are 32 bit.

First create a test tablespace:

create tablespace lost_write datafile '/u01/oradata/dv11/lost_write.dbf' size 1m
extent management local
uniform size 64k
/



Next the table LOST_WRITE is created in the tablespace of the same name. This will be used to test the assertion that a successful export of the table can be done even though the data on disk is corrupt.

create table lost_write
cache
tablespace lost_write
as
select * from dba_objects
where rownum <= 1000
/

begin
dbms_stats.gather_table_stats(user,'LOST_WRITE');
end;
/

select tablespace_name, blocks, bytes
from user_segments
where segment_name = 'LOST_WRITE'
/


TABLESPACE_NAME BLOCKS BYTES
------------------------------ ---------- ----------
LOST_WRITE 16 131072

1 row selected.



Next, do a full table scan and verify that the blocks are cached:

select * from lost_write;

Verify in cache:
select file#,block#,class#, status
from v$bh where ts# = (select ts# from sys.ts$ where name = 'LOST_WRITE')
order by block#
/

FILE# BLOCK# CLASS# STATUS
---------- ---------- ---------- -------
40 2 13 xcur
40 3 12 xcur
40 9 8 xcur
40 10 9 xcur
40 11 4 xcur
40 12 1 xcur
40 13 1 xcur
40 14 1 xcur
40 15 1 xcur
40 16 1 xcur
40 17 1 xcur
40 18 1 xcur
40 19 1 xcur
40 20 1 xcur
40 21 1 xcur
40 22 1 xcur
40 23 1 xcur




Now swap the bytes in the file, skipping the first 2 Oracle blocks.
Caveat: I don't know if that was the correct number of blocks, and I didn't spend any time trying to find out.
Also, I belatedly saw that the count probably should have been 22 rather than 16, but the results still served the purpose of corrupting the datafile, as we shall see in a bit.

What this dd command is doing is using the same file for both input and output, and rewriting blocks 3-18, swapping each pair of bytes.

dd if=/u01/oradata/dv11/lost_write.dbf of=/u01/oradata/dv11/lost_write.dbf bs=8129 skip=2 count=16 conv=swab,notrunc



The effect is demonstrated by this simple test:

jkstill-19 > echo hello | dd
hello
0+1 records in
0+1 records out
[ /home/jkstill ]

jkstill-19 > echo hello | dd conv=swab
ehll
o0+1 records in
0+1 records out


Now we can attempt the export:

exp tables=\(jkstill.lost_write\) ...

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Oracle Label Security, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table LOST_WRITE 1000 rows exported
Export terminated successfully without warnings.


So, even though the data on disk has been corrupted, the export succeeded. That is due to the table being created with the CACHE option, and all the blocks being cached at the time of export. It may not be necessary to use the CACHE option, but I used it to ensure the test would succeed.

Now let's see what happens when trying to scan the table again. First the NOCACHE option will be set on the table, then a checkpoint.

10:42:45 dv11 SQL> alter table lost_write nocache;

10:43:02 dv11 SQL> alter system checkpoint;

Now try to scan the table again:

10:43:14 ordevdb01.radisys.com - js001292@dv11 SQL> /
select * from lost_write
*
ERROR at line 1:
ORA-00376: file 40 cannot be read at this time
ORA-01110: data file 40: '/u01/oradata/dv11/lost_write.dbf'



A corollary conclusion can be drawn from this example.

If you do discover bad data blocks, you just might be able to do an export of the tables that are in the affected region before doing any recovery. This might be a belt-and-suspenders approach, but DBAs are not generally known for taking unnecessary chances when possible data loss is on the line.
Categories: DBA Blogs

Will the Optimizer Team be at Oracle Open World 2009?

Inside the Oracle Optimizer - Thu, 2009-07-23 20:18
With only two and a half months to go until Oracle Open World in San Francisco, October 11-15th, we have gotten several requests asking if we plan to present any sessions at the conference.

We have two sessions and a demo station in the Database campground at this year's show: a technical presentation on What to Expect from the Oracle Optimizer When Upgrading to Oracle Database 11g and an Oracle Optimizer Roundtable.

The technical session, which is on Tuesday Oct. 13 at 2:30 pm, gives step-by-step instructions and detailed examples of how to use the new 11g features to ensure your upgrade goes smoothly and without any SQL plan regressions.

The roundtable, which is on Thursday Oct. 15th at 10:30 am, will give you a first-hand opportunity to pose your burning Optimizer and statistics questions directly to a panel of our leading Optimizer developers. In fact, if you plan to attend the roundtable and already know what questions you would like to ask, please send them to us via email and we will be sure to include them. Otherwise, you can hand in your questions at our demo station at any stage during the week, or as you enter the actual session. Just be sure to write your questions in clear block capitals!

We look forward to seeing you all at Oracle Open World.

Categories: DBA Blogs, Development
