DBA Blogs

Using Index Hints in Oracle

Learn DB Concepts with me... - Wed, 2016-06-08 09:59

Hints: Hints are used to pass specific information that we know about our data and application to Oracle, which can further improve performance. There can be instances where the default optimizer plan is not efficient for certain SQL statements; we can specify hints with those statements to improve their efficiency. Hints should only be used as a last resort, after statistics have been gathered and the query is still following a sub-optimal execution plan.

Example of the correct syntax for an index hint:

select /*+ index(TEST_IDX IDX_OS_USR) */ * from TEST_IDX;

If we alias the table (A in the case below), we must use the alias in the index hint:

select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

Note:

Oracle decides whether or not to use the hint: if Oracle finds that it has a faster execution plan without the hint, it simply ignores it. You might think that an index would be helpful and provide it as a hint, but Oracle may still ignore it. You can confirm whether the hint was honoured by checking the execution plan, as in the sketch below.
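A minimal sketch for checking whether a hint was honoured, reusing the table and index from the example above (the plan output will vary with your data and statistics):

EXPLAIN PLAN FOR
select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

select * from table(DBMS_XPLAN.DISPLAY);

-- If the plan shows TABLE ACCESS FULL on TEST_IDX instead of an access path
-- using IDX_OS_USR, the optimizer has ignored the hint.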

Categories: DBA Blogs

@OraclePartners FY17 Global Kickoff

REGISTER NOW: ORACLE FY17 GLOBAL PARTNER KICKOFF It’s the start of a new Oracle fiscal year and we are excited about the tremendous opportunity ahead of us. Join...

We share our skills to maximize your revenue!
Categories: DBA Blogs

CREATE AN INVISIBLE INDEX ON A TABLE

Learn DB Concepts with me... - Tue, 2016-06-07 12:50
INVISIBLE INDEX:

Oracle 11g gives us the ability to create indexes that can be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level.

CREATE AN INVISIBLE INDEX:
 
CREATE INDEX INV_IDX_OS_USR ON TEST_IDX (ID) INVISIBLE;

Let's check the newly created index:

SQL> select OWNER,INDEX_NAME,TABLE_OWNER,TABLE_NAME,VISIBILITY from all_indexes where index_name='INV_IDX_OS_USR';

OWNER      INDEX_NAME       TABLE_OWNER   TABLE_NAME   VISIBILITY
---------  ---------------  ------------  -----------  ----------
ATOORPU    INV_IDX_OS_USR   ATOORPU       TEST_IDX     INVISIBLE


A session cannot make use of an invisible index until invisible indexes are enabled for that session (or the index is made visible). Let's see if we can use the invisible index without enabling it in the optimizer:

select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;
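To confirm whether the invisible index was picked up, you can display the plan of the statement just executed in the same session. A minimal sketch (output will vary):

select * from table(DBMS_XPLAN.DISPLAY_CURSOR(null, null, 'BASIC'));

-- With OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE (the default), the plan should
-- show TABLE ACCESS FULL on TEST_IDX rather than a scan of INV_IDX_OS_USR.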

ENABLING THE USE OF INVISIBLE INDEXES IN THE CURRENT SESSION:

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;
select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;


DISABLING THE USE OF INVISIBLE INDEXES IN THE CURRENT SESSION:
 
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE;

select /*+ index(TEST_IDX INV_IDX_OS_USR) */ * from TEST_IDX where ID=284;

-->> You do not have to provide a hint for the index to be used. The hint is included here just to encourage Oracle to use it.

select  * from TEST_IDX where ID=284;        -->> Same as above

 

MAKING AN INDEX INVISIBLE OR VISIBLE:


Indexes can be created as invisible by using the INVISIBLE keyword at the end, and their visibility can be managed later using the ALTER INDEX command.

TO MAKE AN EXISTING INDEX INVISIBLE, USE THE SYNTAX BELOW:

ALTER INDEX index_name INVISIBLE;

TO MAKE AN EXISTING INDEX VISIBLE, USE THE SYNTAX BELOW:


ALTER INDEX index_name VISIBLE;
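As noted above, OPTIMIZER_USE_INVISIBLE_INDEXES can also be set at the instance level instead of per session. A minimal sketch (requires the ALTER SYSTEM privilege and affects all sessions, so use with care):

ALTER SYSTEM SET OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;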


Categories: DBA Blogs

Oracle Big Data Discovery Cloud Service for Visual Face of Big Data

Oracle Big Data Discovery Cloud Service introduced by Oracle offers all the capabilities of Oracle Big Data Discovery 1.1 as well as push-button provisioning and easy cloud life cycle management. It...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Compression -- 8 : DROPping a Column of a Compressed Table

Hemant K Chitale - Thu, 2016-06-02 09:56
Building on the series on Compression .....

What happens if we try to DROP a column in a Compressed Table ?  How can we execute the DROP ?

Starting with BASIC Compression.


SQL> connect hemant/hemant
Connected.
SQL> create table compress_basic as select * from source_data where 1=2;

Table created.

SQL> alter table compress_basic compress;

Table altered.

SQL> insert /*+ APPEND */ into compress_basic
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_BASIC'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED BASIC

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>
SQL> !oerr ora 39726
39726, 00000, "unsupported add/drop column operation on compressed tables"
// *Cause: An unsupported add/drop column operation for compressed table
// was attemped.
// *Action: When adding a column, do not specify a default value.
// DROP column is only supported in the form of SET UNUSED column
// (meta-data drop column).

SQL>


So, I would have to set the column to UNUSED !

SQL> alter table compress_basic set unused column object_name;

Table altered.

SQL> alter table compress_basic drop (object_name);
alter table compress_basic drop (object_name)
*
ERROR at line 1:
ORA-00904: "OBJECT_NAME": invalid identifier


SQL> alter table compress_basic drop unused columns;
alter table compress_basic drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


I can't drop a column from a table with Compression enabled.

Is there another way ?

SQL> alter table compress_basic move nocompress;

Table altered.

SQL> alter table compress_basic drop unused columns;

Table altered.

SQL>


To actually execute the DROP, I have to Uncompress the table !

So, remember: you have to be careful when designing a table that you intend to Compress.  You won't be able to DROP columns without rebuilding the table as NOCOMPRESS first!
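For completeness, a sketch of the full workaround on the BASIC-compressed table from this test case, including re-compressing once the column is gone (each MOVE rebuilds the table, so allow for the time and space required, and rebuild any indexes afterwards):

alter table compress_basic set unused column object_name;
alter table compress_basic move nocompress;
alter table compress_basic drop unused columns;
-- optionally compress the remaining data again
alter table compress_basic move compress;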


Repeating the test case with OLTP Compression :

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL>
SQL> insert into compress_oltp
2 select * from source_data
3 where rownum < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


Much easier, I could DROP the column.  But, wait.  Is there a catch ?

SQL> drop table compress_oltp purge;

Table dropped.

SQL> create table compress_oltp as select * from source_data where 1=2;

Table created.

SQL> alter table compress_oltp compress for oltp;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED

SQL> alter table compress_oltp nocompress;

Table altered.

SQL> select compression, compress_for
2 from user_tables
3 where table_name = 'COMPRESS_OLTP'
4 /

COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

SQL> alter table compress_oltp drop (object_name);
alter table compress_oltp drop (object_name)
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables


SQL>


If I ALTER the table to NOCOMPRESS (which can take effect only on *new* rows, not existing rows), I cannot DROP a column.  This is because Oracle is unsure if there is a mix of Compressed and Non-Compressed rows in the table now.

What I'd have to do is to rebuild it as a NOCOMPRESS table.

SQL> alter table compress_oltp move nocompress;

Table altered.

SQL> alter table compress_oltp drop (object_name);

Table altered.

SQL>


So, once a table is set to COMPRESS OLTP and then set to NOCOMPRESS, you can't simply DROP a column.


This test-case came out of an issue a friend of mine faced today.  He found that he couldn't drop a column from a table that was formerly set to COMPRESS FOR OLTP.

I pointed him to Support Document 1288918.1

.
.
.

Categories: DBA Blogs

Links for 2016-06-01 [del.icio.us]

Categories: DBA Blogs

Storing Date Values As Numbers (The Numbers)

Richard Foote - Tue, 2016-05-31 19:45
In my last couple of posts, I’ve been discussing how storing date data in a character based column is a really really bad idea. In a follow-up question, I was asked if storing dates in NUMBER format was a better option. The answer is that it’s probably an improvement from storing dates as strings but it’s […]
Categories: DBA Blogs

Comment on SOLUTION: A Strange Condition in the Where Clause by lotharflatz

Oracle Riddle Blog - Tue, 2016-05-31 13:18

Hi Martin,

thanks. Good Point. Just hard to find the right name for the column. I will use Mohamed’s “DDL optimized” column instead.

Lothar


Categories: DBA Blogs

Comment on SOLUTION: A Strange Condition in the Where Clause by Martin Preiss

Oracle Riddle Blog - Tue, 2016-05-31 11:14

Lothar,
a nice example and certainly a feature with interesting (and sometimes strange) effects. But I am not sure if I would use the term “virtual” in this context since – if my memory serves me well – the column is technically not a virtual column. Though I would certainly agree that storing the defaults only in the dictionary for rows that have been there before the addition of the column makes these values somehow virtual. Mohamed Houri has written an instructive OTN article containing some additional details: http://www.oracle.com/technetwork/articles/database/ddl-optimizaton-in-odb12c-2331068.html.

Regards
Martin


Categories: DBA Blogs

Restore and Recovery from Incremental Backups : Video

Hemant K Chitale - Mon, 2016-05-30 03:04
A Youtube Video on Restore and Recovery from Incremental Backups.


Categories: DBA Blogs

Storing Date Values As Characters Part II (A Better Future)

Richard Foote - Sun, 2016-05-29 18:35
In the previous post, I discussed how storing date values within a character data type is a really really bad idea and illustrated how the CBO can easily get its costings totally wrong as a result. A function-based date index helped the CBO get the correct costings and protect the integrity of the date data. During […]
Categories: DBA Blogs

Oracle Partners Cloud Connection: Are you connected?

OPN Cloud connection was launched back in Oracle Open World 2014, as a new community for partners to engage, explore and learn about the Oracle Cloud business opportunities. Since then it...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Encrypting sensitive data in puppet using hiera-eyaml

Pythian Group - Fri, 2016-05-27 13:36

Puppet manifests can hold a lot of sensitive information. Sensitive information like passwords or certificates is used in the configuration of many applications. Exposing it in a Puppet manifest is not ideal and may conflict with an organization’s compliance policies. That is why data separation is a very important aspect of secure Puppet code.

Hiera is a pluggable hierarchical database. It helps by keeping data out of Puppet manifests: Puppet classes can look up data in Hiera, and Hiera searches its hierarchy and returns the first value it finds.

Although Hiera is able to provide data separation, it cannot ensure security of sensitive information. Anyone with access to the Hiera data store will be able to see the data.

Enter Hiera-eyaml. Hiera-eyaml is a backend for Hiera that provides per-value encryption of sensitive data within yaml files to be used by Puppet.

The following Puppet module can be used to manage Hiera with eyaml support:

https://forge.puppetlabs.com/hunner/hiera

The module class can be used as shown below:

modules/profile/manifests/hieraconf.pp

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
  }
}

/etc/puppet/hiera.yaml would look like the following after the Puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata

Moving data to Hiera
In the following example, the Diamond collector for MongoDB contains data such as hosts, user and password. The collector is only enabled for the grafana.pythian.com host.

modules/profile/manifests/diamond_coll.pp
[..]
diamond::collector { 'MongoDBCollector':
  options => {
    enabled => $fqdn ? { /grafana.pythian.com/ => True, default => false },
    hosts   => 'abc.pythian.com,xyz.pythian.com',
    user    => 'grafana',
    passwd  => 'xxxx',
  }
}

To move the data to Hiera, the create_resources function can be used in the manifest.

modules/profile/manifests/diamond_coll.pp

class profile::diamond_coll {
  [..]
  $mycollectors = hiera('diamond::collectors', {})
  create_resources('diamond::collector', $mycollectors)
  [..]
}

Then a new YAML file can be created, and the diamond::collectors data for MongoDBCollector can be abstracted as below:

hieradata/grafana.pythian.com.yaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: xxxx

Moving data to Hiera-eyaml
The Hiera Puppet code can be changed to the following to enable eyaml:

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
    eyaml           => true,
    eyaml_datadir   => '/etc/puppet/hieradata',
    eyaml_extension => 'eyaml',
  }
}

This will add the eyaml backend to Hiera after a Puppet run on the Puppet server. The Puppet module does the following to achieve this:

1. The hiera-eyaml gem will be installed.
2. The following keys will be created for hiera-eyaml using 'eyaml createkeys':

/etc/puppet/keys/private_key.pkcs7.pem
/etc/puppet/keys/public_key.pkcs7.pem

3. Update /etc/puppet/hiera.yaml.

/etc/puppet/hiera.yaml would look like the following after the Puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - eyaml
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata
:eyaml:
  :datadir: /etc/puppet/hieradata
  :extension: eyaml
  :pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
  :pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem

The puppetmaster needs to be restarted after this, as changes to hiera.yaml only take effect after a restart.

Using eyaml command line

The eyaml command needs to be run from a directory containing the keys directory (in this example /etc/puppet). The following command can be used to encrypt a password. The command gives us two output options, string and block.

# eyaml encrypt -p
Enter password: ****
string: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ] OR
block: >
ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEw
DQYJKoZIhvcNAQEBBQAEggEAQMo0dyWRmBC30TVDVxEOoClgUsxtzDSXmrJL
pz3ydhvG0Ll96L6F2WbclmGtpaqksbpc98p08Ri8seNmSp4sIoZWaZs6B2Jk
BLOIJBZfSIcHH8tAdeV4gBS1067OD7I+ucpCnjUDTjYAp+PdupyeaBzkoWMA
X/TrmBXW39ndAATsgeymwwG/EcvaAMKm4b4qH0sqIKAWj2qeVJTlrBbkuqCj
qjOO/kc47gKlCarJkeJH/rtErpjJ0Et+6SJdbDxeSbJ2QhieXKGAj/ERCoqh
hXJiOofFuAloAAUfWUfPKnSZQEhHCPDkeyhgDHwc8akWjC4l0eeorZgLPcs1
1oQJqTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBC+JDHdj2M2++mFu+pv
ORXmgBA/Ng596hsGFg3jAmdlCLbQ]

To decrypt, the following command can be used:

# eyaml decrypt -s 'ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]'
test

The encrypted string or block can then be used in Hiera. Using our previous example, the Hiera file would look like the following. We also have to rename the file from .yaml to .eyaml.

hieradata/grafana.pythian.com.eyaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]

Encrypting certificates
The following is a standard file resource used to copy an SSL certificate:

environments/production/manifests/logstash.pythian.com.pp

file { '/etc/logstash/certs/logstash-forwarder.crt':
  ensure => present,
  mode   => '0644',
  owner  => 'root',
  group  => 'root',
  source => 'puppet:///modules/logstash/logstash-forwarder.crt',
}

The file resource can be moved to Hiera using hiera_hash:

environments/production/manifests/logstash.pythian.com.pp

$fileslog = hiera_hash('files')
create_resources(file, $fileslog)

The data can be added to a yaml file.

hieradata/common.yaml

---
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure: present
    mode: '0644'
    owner: 'root'
    group: 'root'
    source: 'puppet:///modules/logstash/logstash-forwarder.crt'

To encrypt the data, the following command can be used:

# eyaml encrypt -f modules/logstash/files/logstash-forwarder.crt

The returned string value can then be added using the content parameter of the file resource:

hieradata/common.eyaml
[..]
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure: present
    mode: '0644'
    owner: 'root'
    group: 'root'
    content: 'ENC[PKCS7,MIIB+wYJKoZI[..]C609Oc2QUvxARaw==]'

The above examples cover encrypting strings and files, which constitute most of the sensitive data used in Puppet code. Incorporating hiera-eyaml into your Puppet workflow will help ensure the compliance and security of sensitive data.

Categories: DBA Blogs

sketchnote of Ed Wilson’s talk on ‘Operations Management Suite’

Matt Penny - Fri, 2016-05-27 11:17

Wilson, Ed - Microsoft Operations Management Suite

The OMS blog is: https://blogs.technet.microsoft.com/msoms/
The Scripting Guy blog is: https://blogs.technet.microsoft.com/heyscriptingguy/
Ed Wilson is on twitter here: https://twitter.com/ScriptingGuys
The London Powershell Users Group tweets here: https://twitter.com/lonpsug
The UK Powershell Users Group is http://www.get-psuguk.org/

Keywords: OMS, Powershell


Categories: DBA Blogs

Links for 2016-05-26 [del.icio.us]

Categories: DBA Blogs

Understanding MySQL Fabric Faulty Server Detection

Pythian Group - Thu, 2016-05-26 13:39

A while ago I found myself analyzing a MySQL Fabric installation to understand why a group member was occasionally being marked as FAULTY even when the server was up and running and no failures were observed.

         
                         server_uuid     address  status       mode weight
------------------------------------ ----------- ------- ---------- ------
ab0b0653-6121-11c5-55a0-007543445454 mysql1:3306 PRIMARY READ_WRITE    1.0
f34dd331-2432-11f4-a2d3-006754678533 mysql2:3306 FAULTY  READ_ONLY     1.0

 

Upon reviewing mysqlfabric logs, I found the following warnings were being logged from time to time:

[WARNING] 1442221217.920115 - FailureDetector(xc_grp_1) - Server (f34dd331-2432-11f4-a2d3-006754678533) in group (xc_grp_1) is unreachable

 

Since I was not clear under which circumstances a server is marked as FAULTY, I decided to review MySQL Fabric code (Python) to better understand the process.

The module responsible for printing this message is failure_detection.py, more specifically the _run method of the FailureDetector class. This method loops through every server in a group and attempts a connection to the MySQL instance running on that node. The MySQLServer.Is_alive method (mysql/fabric/server.py) is called for this purpose.

Before reviewing the failure detection process, we first need to know that there are four MySQL fabric parameters that will affect when a server is considered unstable or faulty:

DETECTION_TIMEOUT
DETECTION_INTERVAL
DETECTIONS
FAILOVER_INTERVAL

 

Based on the above variables, the logic followed by FailureDetector._run() to mark a server as FAULTY is the following:

1) Every {DETECTION_INTERVAL/DETECTIONS} seconds, a connection against each server in the group is attempted with a timeout equal to DETECTION_TIMEOUT

2) If DETECTION_TIMEOUT is exceeded, the observed message is logged and a counter incremented

3) When this counter reaches DETECTIONS, the server is marked as “unstable” and if the last time the master changed was greater than FAILOVER_INTERVAL ago, the server is marked as FAULTY

With a better understanding of the logic followed by MySQL fabric to detect faulty nodes, I went to the configuration file to check the existing values for each of the parameters:

DETECTIONS=3
DETECTION_TIMEOUT=1
FAILOVER_INTERVAL=0
DETECTION_INTERVAL=6

From the values above we can notice that each group will be polled every 2 seconds (DETECTION_INTERVAL/DETECTIONS) and that the monitored server should respond within a second for the test to be considered successful.

On high concurrency nodes, or nodes under heavy load, a high polling frequency combined with tight timeouts could cause the servers to be marked as FAULTY just because the connection attempt would not be completed or processed (in the case of high connection rates or saturated network interfaces) before the timeout occurs.

Also, having FAILOVER_INTERVAL reduced to 0, will cause the server to be marked as FAULTY even if a failover had just occurred.

A less aggressive configuration would be more appropriate for heavily loaded environments:

DETECTIONS=3 -> It's Ok
DETECTION_TIMEOUT -> 5 TO 8
FAILOVER_INTERVAL -> 1200
DETECTION_INTERVAL -> 45 TO 60

Conclusion

As with any other database clustering solution that relies on a database connection to test node status, situations where the database server would take longer to respond should also be considered. The polling frequency should be adjusted so the detection window is within an acceptable range, but the rate of monitoring connections generated is also kept to the minimum. Check timeouts should also be adjusted to avoid false positives caused by the server not being able to respond in a timely manner.

 

Categories: DBA Blogs

Tracefile Automation – Simplify Your Troubleshooting Tasks

Pythian Group - Thu, 2016-05-26 10:06

Here’s a common Oracle troubleshooting scenario when a SQL statement needs tuning and/or troubleshooting:

  • log on to dev server
  • connect to database (on a different server)
  • run the SQL statement with 10046 tracing enabled
  • ssh to the database server
  • copy the trace file back to work environment
  • process the trace file.

 

All of this takes time. Granted, not a great deal of time, but tuning is an iterative process and so these steps will be performed multiple times. Not only are these steps a productivity killer, but they are repetitive and annoying. No one wants to keep running the same manual command over and over.

This task is ripe for some simple automation.

If both the client and database servers are some form of Unix, automating these tasks is straightforward.

Please note that these scripts require an 11g or later version of the Oracle database. These scripts are dependent on the v$diag_info view to retrieve the tracefile name. While these scripts could be made to work on 10g databases, that is left as an exercise for the reader.
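For reference, this is roughly the lookup the scripts rely on. A minimal sketch you can run in any 11g or later session (the values returned will differ on your system):

select name, value
from v$diag_info
where name in ('Diag Trace', 'Default Trace File');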

Step by Step

To simplify the process it can be broken down into steps.

 

1. Reconnect

The first step is to create a new connection each time the SQL is executed. Doing so ensures the database session gets a new tracefile, as we want each execution to be isolated.

-- reconnect.sql

connect jkstill/XXXX@oravm

 

2. Get the Tracefile hostname, owner and filename

Oracle provides all the information needed.

In addition the script will set the 10046 event, run the SQL of interest and then disable the 10046 event.

Following is a snippet from the tracefile_identifier_demo.sql script.

 


-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

 

In this case sql2trace.sql is a simple SELECT from a test table.  All of the scripts used here appear in Github as mentioned at the end of this article.

 

3. Process the Tracefile

Now that the tracefile has been created, it is time to retrieve it.

The following script scp.sql is called from tracefile_identifier_demo.sql.

 


col scp_src new_value scp_src noprint
col scp_target new_value scp_target noprint

set term off feed off verify off echo off

select '&&1' scp_src from dual;
select '&&2' scp_target from dual;

set feed on term on verify on

--disconnect

host scp &&scp_src &&scp_target

Following is an example putting it all together in tracefile_identifier_demo.sql.

 

SQL> @tracefile_identifier_demo
Connected.

1 row selected.


PRODUCT                        VERSION              STATUS
------------------------------ -------------------- --------------------
NLSRTL                         12.1.0.2.0           Production
Oracle Database 12c Enterprise 12.1.0.2.0           64bit Production
 Edition

PL/SQL                         12.1.0.2.0           Production
TNS for Linux:                 12.1.0.2.0           Production

Data Base
------------------------------
P1.JKS.COM

INSTANCE_NAME        HOST_NAME                      CURRDATE
-------------------- ------------------------------ ----------------------
js122a1              ora12c102rac01.jks.com         2016-05-23 16:38:11

STARTUP
--------------------
04/02/2016 11:22:12


Session altered.

Elapsed: 00:00:00.00

OWNER        OBJECT NAME                     OBJECT_ID OBJECT_TYPE             CREATED
------------ ------------------------------ ---------- ----------------------- -------------------
SYS          OLAP_EXPRESSION                     18200 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_BOOL                18206 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_DATE                18204 OPERATOR                2016-01-07 21:46:54
SYS          OLAP_EXPRESSION_TEXT                18202 OPERATOR                2016-01-07 21:46:54
SYS          XMLSEQUENCE                          6379 OPERATOR                2016-01-07 21:41:25
SYS          XQSEQUENCE                           6380 OPERATOR                2016-01-07 21:41:25
SYS          XQWINDOWSEQUENCE                     6393 OPERATOR                2016-01-07 21:41:25

7 rows selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

js122a1_ora_1725_MYTRACEFILE.trc                                                                                                                                                            100% 3014     2.9KB/s   00:00

SQL> host ls -l js122a1_ora_1725_MYTRACEFILE.trc
-rw-r----- 1 jkstill dba 3014 May 23 16:38 js122a1_ora_1725_MYTRACEFILE.trc

But Wait, There’s More!

This demo shows you how to automate the retrieval of the trace file. But why stop there?  The processing of the file can be modified as well.

Really, it isn’t even necessary to copy the script over, as the content can be retrieved and piped to your favorite command.  The script mrskew.sql for instance uses ssh to cat the tracefile, and then pipe the contents to the Method R utility, mrskew.  Note: mrskew is a commercial utility, not open source software.

 

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on

--disconnect

host ssh &&ssh_target 'cat &&scp_filename' | mrskew

 

Following is another execution of tracefile_identifier_demo.sql, this time piping the output to mrskew. Only the final part of the output is shown below.

 

...

Elapsed: 00:00:00.01

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
PARSE                        0.002000   33.1%      2  0.001000  0.000000  0.002000
db file sequential read      0.001211   20.0%      5  0.000242  0.000056  0.000342
FETCH                        0.001000   16.5%      1  0.001000  0.001000  0.001000
gc cr grant 2-way            0.000999   16.5%      1  0.000999  0.000999  0.000999
SQL*Net message from client  0.000817   13.5%      2  0.000409  0.000254  0.000563
Disk file operations I/O     0.000018    0.3%      2  0.000009  0.000002  0.000016
SQL*Net message to client    0.000002    0.0%      2  0.000001  0.000001  0.000001
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (9)                    0.006047  100.0%     19  0.000318  0.000000  0.002000

Now we can see where all the db time was consumed for this SQL statement, and there was no need to copy the trace file to the current working directory. The same can be done for tkprof and other operations.  Please see the plan.sql and tkprof.sql scripts in the Github repository.

 

Wrapping It Up

A little bit of automation goes a long way. Commands that are repetitive, boring and error prone can easily be automated.  Both your productivity and your mood will soar when little steps like these are used to improve your workflow.

All files for this demo can be found at https://github.com/jkstill/tracefile_identifier

Categories: DBA Blogs
