
DBA Blogs

Partner Webcast – Oracle Mobile Application Framework 2.1: Update Overview

Oracle Mobile Application Framework (Oracle MAF) is a hybrid mobile framework that enables developers to rapidly develop single-source applications and deploy to both Apple's iOS and Google's Android...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DDL_LOCK_TIMEOUT to sneak in change on active system

Bobby Durrett's DBA Blog - Wed, 2015-04-29 14:54

I need to change a view and an index on an active production system. I'm concerned that the change will fail with an "ORA-00054: resource busy" error because I'm changing things that are in use. I engaged in a twitter conversation with @FranckPachot and @DBoriented and they gave me the idea of using DDL_LOCK_TIMEOUT with a short timeout to sneak in my changes on our production system. Really, I'm more worried about backing out the changes, since I plan to make the change at night when things are quiet. If the changes cause a problem, it will be during the middle of the next day. Then I'll need to sneak in and make the index invisible or drop it, and put the original view text back.

I tested setting DDL_LOCK_TIMEOUT to one second at the session level.  This is the most conservative setting:

alter session set DDL_LOCK_TIMEOUT=1;

I created a test table with a bunch of rows in it and ran a long updating transaction against it like this:

update /*+ index(test testi) */ test set blocks=blocks+1;

Then I tried to alter the index invisible with the lock timeout:

alter index testi invisible
 *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Still the ORA-00054 error. The update of the entire table took a lot longer than 1 second, so the 1-second timeout expired.

Next I tried the same thing with a shorter running update:

update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
update /*+ index(test testi) */ test set blocks=blocks+1 where owner='SYS' and table_name='DUAL';
commit;
... lots more of these so script will run for a while...

With the default setting of DDL_LOCK_TIMEOUT=0 my alter index invisible statement usually exited with an ORA-00054 error, though eventually I could get it to work. With DDL_LOCK_TIMEOUT=1 my alter almost always worked. I guess in some cases my transaction exceeded the 1 second, but usually it did not.

Here is the alter with the timeout:

alter session set DDL_LOCK_TIMEOUT=1;

alter index testi invisible;

Once I made the index invisible the update started taking 4 seconds to run.  So, to make the index visible again I had to bump the timeout up to 5 seconds:

alter session set DDL_LOCK_TIMEOUT=5;

alter index testi visible;

So, if I have to back out these changes at a peak time, setting DDL_LOCK_TIMEOUT to a small value should enable me to sneak in the needed changes.
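Putting the pieces together, here is a minimal sketch of the backout sequence I have in mind. The view name and definition are hypothetical placeholders, not the real production objects:

alter session set DDL_LOCK_TIMEOUT=5;

-- hide the new index; retry if ORA-00054 still fires after the 5 second wait
alter index testi invisible;

-- restore the original view text (hypothetical view name and definition)
create or replace view test_view as
select owner, table_name, blocks from test;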

Here is a zip of my scripts if you want to recreate these tests: zip

You need Oracle 11g or later to use DDL_LOCK_TIMEOUT.

These tests were all run on Oracle 11.2.0.3.

Also, I verified that I studied DDL_LOCK_TIMEOUT for my 11g OCP test.  I knew it sounded familiar but I have not been using this feature.  Either I just forgot or I did not realize how helpful it could be for production changes.

– Bobby

Categories: DBA Blogs

Thanks Oracle for R12.AD.C.DELTA.6

Pythian Group - Wed, 2015-04-29 06:18

When reading through the release notes of the latest Oracle E-Business Suite R12.2 AD.C.Delta.6 patch in note 1983782.1, I wondered what they meant by “Simplification and enhancement of adop console messages”. I realized what I was missing after I applied the AD.C.Delta6 patch: the format of the console messages changed drastically. To be honest, the old console messages printed by the adop command reminded me of a program where somebody forgot to turn off the debug feature; they were simply not easy to read. AD.C.Delta6 brings a fresh layout to the console messages, which are now more readable and easier to follow. You can see for yourself by looking at the snippets below:

### AD.C.Delta.5 ###

$ adop phase=apply patches=19197270 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials...


RUN file system context file: /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml

PATCH file system context file: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Execute SYSTEM command : df /u01/install/VISION/fs1

************* Start of  session *************
 version: 12.2.0
 started at: Fri Apr 24 2015 13:47:58

APPL_TOP is set to /u01/install/VISION/fs2/EBSapps/appl
[START 2015/04/24 13:48:04] Check if services are down
  [STATEMENT]  Application services are down.
[END   2015/04/24 13:48:09] Check if services are down
[EVENT]     [START 2015/04/24 13:48:09] Checking the DB parameter value
[EVENT]     [END   2015/04/24 13:48:11] Checking the DB parameter value
  Using ADOP Session ID from currently incomplete patching cycle
  [START 2015/04/24 13:48:23] adzdoptl.pl run
    ADOP Session ID: 12
    Phase: apply
    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log
    [START 2015/04/24 13:48:30] apply phase
        Calling: adpatch  workers=4   options=hotpatch     console=no interactive=no  defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs_ne/EBSapps/patch/19197270 driver=u19197270.drv logfile=u19197270.log
        ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/19197270/log
        [EVENT]     [START 2015/04/24 13:59:45] Running finalize since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:00:10] Running finalize since in hotpatch mode
          Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
          ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/log
        [EVENT]     [START 2015/04/24 14:01:32] Running cutover since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:01:33] Running cutover since in hotpatch mode
      [END   2015/04/24 14:01:36] apply phase
      [START 2015/04/24 14:01:36] Generating Post Apply Reports
        [EVENT]     [START 2015/04/24 14:01:38] Generating AD_ZD_LOGS Report
          [EVENT]     Report: /u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql

          [EVENT]     Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/adzdshowlog.out

        [EVENT]     [END   2015/04/24 14:01:42] Generating AD_ZD_LOGS Report
      [END   2015/04/24 14:01:42] Generating Post Apply Reports
    [END   2015/04/24 14:01:46] adzdoptl.pl run
    adop phase=apply - Completed Successfully

    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log

adop exiting with status = 0 (Success)
### AD.C.Delta.6 ###

$ adop phase=apply patches=19330775 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...

Initializing...
    Run Edition context  : /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
    Patch edition context: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Reading driver file (up to 50000000 bytes).
    Patch file system freespace: 181.66 GB

Validating system setup...
    Node registry is valid.
    Application services are down.
    [WARNING]   ETCC: The following database fixes are not applied in node ebs
                  14046443
                  14255128
                  16299727
                  16359751
                  17250794
                  17401353
                  18260550
                  18282562
                  18331812
                  18331850
                  18440047
                  18689530
                  18730542
                  18828868
                  19393542
                  19472320
                  19487147
                  19791273
                  19896336
                  19949371
                  20294666
                Refer to My Oracle Support Knowledge Document 1594274.1 for instructions.

Checking for pending adop sessions...
    Continuing with the existing session [Session id: 12]...

===========================================================================
ADOP (C.Delta.6)
Session ID: 12
Node: ebs
Phase: apply
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_140643.log
===========================================================================

Applying patch 19330775 with adpatch utility...
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/19330775/log/u19330775.log

Running finalize actions for the patches applied...
    Log: @ADZDSHOWLOG.sql "2015/04/24 14:15:09"

Running cutover actions for the patches applied...
    Spawning adpatch parallel workers to process CUTOVER DDLs in parallel
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/log/cutover.log
    Performing database cutover in QUICK mode

Generating post apply reports...

Generating log report...
    Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/adzdshowlog.out

adop phase=apply - Completed Successfully


adop exiting with status = 0 (Success)

So what are you waiting for, fellow Apps DBAs? Go ahead and apply the new AD delta update to your R12.2 EBS instances. I am really eager to try out the other AD.C.Delta6 new features, especially “Online Patching support for single file system on development or test systems”.

Categories: DBA Blogs

How to pass the #Oracle Database 11g OCM Exam

The Oracle Instructor - Wed, 2015-04-29 04:05

The Oracle Certified Master Exam is among the highest rated exams in the IT industry for a good reason: It is extremely hard to pass!

Unlike most other IT exams that are done as multiple choice tests, the OCM exam means two days of  hands-on practical tasks. No chance you can pass it by just reading books or brain dumps and learning by heart without deep understanding. Without years of practical experience with Oracle database administration – don’t even think about it. Even as a seasoned DBA, you won’t find it easy to pass the OCM exam. Why is that so?

  • The tested topics have a very broad range and some of them are likely outside your comfort zone
  • Your usual tools (e.g. scripts and google) are not available
  • There is very limited time to complete the tasks
  • The exam is exhausting, so after a while oversights become a severe danger

To help you prepare for the exam, we offer a quite useful class: Oracle Database 11g: OCM Exam Preparation Workshop Ed 2

I delivered it many times and it is probably the best preparation you can get – but also expensive, I admit.

I used to give the following guidance to the attendees of the workshop – and the last two paragraphs may help you even if you don’t attend it:

During the OCM Preparation Workshop:
  • Go beyond your comfort zone and put additional focus on the topics you are not yet so familiar with
  • Note the pages in the documentation that you can copy from to resolve the tasks, and memorize them
  • Check if & how the Enterprise Manager can help doing things more efficiently than manual procedures
  • Make sure that you are ABLE to do things manually in the absence of GUIs, though

After the Workshop:
  • Create a sandbox environment (e.g. VirtualBox on your notebook)
  • Practice using only the Documentation!
  • Practice the things that you felt not so comfortable with during the workshop in the first place
  • Practice things from inside your comfort zone also, but with a (short!) time limit for the task

During the Exam:
  • Read the tasks carefully and make sure that you understand them exactly BEFORE you begin working
  • If the order of tasks is not relevant, do the things first that you feel most comfortable about
  • Don’t waste too much time on a problematic task if other things can be done instead
  • You don’t need 100% to pass – so keep up your confidence even if you couldn’t complete all tasks

It is of course possible to prepare without the workshop as well. See here for an impressive description of doing so. Good luck on your journey to become an OCM – I hope you find this little article helpful :-)


Tagged: OCM
Categories: DBA Blogs

Weblogic patch rollback issues in Oracle EBS R12.2

Pythian Group - Tue, 2015-04-28 06:36

When you try to roll back a weblogic patch in Oracle EBS R12.2, you might run into issues similar to the one below:

$ ./bsu.sh -remove -patchlist=YIJF -prod_dir=/u01/install/VISION/fs2/FMW_Home/wlserver_10.3 -verbose
Checking for conflicts..
No conflict(s) detected

Starting removal of Patch ID: YIJF
Restoring /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar
from /u01/install/VISION/fs2/FMW_Home/patch_wls1036/backup/backup.jar
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)

If you observe the errors, bsu is trying to restore the jar file to a location that does not exist on the current filesystem. This error was captured in a VISION instance created from an Oracle VM Template, so the file path appears to belong to the Oracle internal system where the VM template was created. This particular issue does little actual harm to the instance.

Now consider this hypothetical situation: you are trying to roll back a weblogic patch on the patch filesystem (fs2) that was previously applied to fs1 in a production Oracle E-Business Suite R12.2 system. The patch history on the patch filesystem (fs2) will still have the file paths of fs1, so when you roll back the patch it will restore the files to the RUN filesystem (fs1). This can cause havoc and a big outage for online users.

As of now, the bsu utility does not seem to handle this kind of situation. So, fellow Oracle EBS Apps DBAs, make sure to check the below file for correct paths before you try to roll back any weblogic patch:

$ vi $FMW_HOME/patch_wls1036/registry/patch-backup.xml

A permanent solution to this problem would be to update the bsu patch utility so that it does not restore or touch files outside the weblogic home. I hope Oracle Development notices this issue before it affects some customer's EBS production instance.

Categories: DBA Blogs

Log Buffer #420: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-04-27 06:05

This Log Buffer Edition brings a few very insightful blog posts from the arenas of Oracle, SQL Server and MySQL.

Oracle:

  • How to Increase Performance With Business Events in Fusion Applications
  • Advanced Oracle Troubleshooting Guide – Part 12: control file parallel reads causing enq: SQ – contention waits?
  • Changing REVERSE Transformations in Oracle Data Miner
  • Refresh Multiple Materialized Views in One Go: No Data Found
  • Error deploying Oracle Composite with wrong encoding wsdl

SQL Server:

  • How to Recover a SQL Server Login Password
  • Understanding Cross-Database Transactions in SQL Server
  • Adding Slicers to a Reporting Services Report
  • Continue a Foreach loop after an error in a SQL Server Integration Services package
  • Automating Image-Based Deployment of SQL Server on Azure IaaS VMs – Preparing OS Image

MySQL:

  • What Should I Monitor, and How Should I Do It?
  • How MySQL will act if there is no space left on hard disk? To answer this question, let’s test it.
  • Configuring PAM Authentication and User Mapping with MariaDB
  • MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions
  • MariaDB : Bug when add index on Partition table
Categories: DBA Blogs

Oracle Database Mobile Server 12c (12.1.0.0.0) Released

Brand new Oracle Database Mobile Server 12.1.0.0.0, now available for download from OTN, is adding several new features including the use of Oracle NoSQL Database as the back end store and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Parallel Execution -- 4 Parsing PX Queries

Hemant K Chitale - Fri, 2015-04-24 09:20
Unlike "regular" Serial Execution queries that undergo only 1 hard parse and multiple soft parses on repeated execution, Parallel Execution queries actually are hard parsed by each PX Server plus the co-ordinator at each execution.  [Correction, as noted by Yasin in his comment : Not hard parsed, but separately parsed by each PX Server]

UPDATE 26-Apr-15:  See Oracle Support  Note 751588.1 and Bug 6274465  to understand how the PX Servers (PQ Slaves) also parse the SQL statement.


Here's a quick demo.

First, I start with a Serial Execution query.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Fri Apr 24 22:53:55 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>set serveroutput off
HEMANT>alter table large_table noparallel;

Table altered.

HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5ys3vrapmbx6w, child number 0
-------------------------------------
select count(*) from large_table

Plan hash value: 3874713751

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 18894 (100)| |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 18894 (1)| 00:03:47 |
--------------------------------------------------------------------------


14 rows selected.

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
1 1 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
2 2 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
3 3 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
4 4 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
5 5 0 select count(*) from large_table

HEMANT>


5 executions, with only a single parse call per execution and no additional parse overheads.

Next, I run Parallel Execution.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Fri Apr 24 23:04:45 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>set serveroutput off
HEMANT>alter table large_table parallel 4;

Table altered.

HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 4wd97vn0ytfmc, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from large_table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1311 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
1 5 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
2 10 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
3 15 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
4 20 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
5 25 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>


Each of the 5 executions had parse overheads for each PX server.
Note : The 5 "PARSE_CALLS" per execution are a result of the 4 PX servers plus the coordinator.  You might see a different number in your tests.
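To see that ratio directly, here is a quick sanity check using the same v$sqlstats columns as above; with a DOP of 4 it should report 5 parses per execution for this query, and 1 per execution for the serial version:

select executions, parse_calls,
       round(parse_calls/executions) parses_per_exec
from v$sqlstats
where sql_id = '4wd97vn0ytfmc';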

.
.
.


Categories: DBA Blogs

Pillars of PowerShell: Profiling

Pythian Group - Fri, 2015-04-24 06:53
Introduction

This is the fourth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
Profiles

This is something I mentioned in the second post, and it can be a great way to keep up with those one-liners you use most often in your work. A profile in PowerShell is like using startup scripts in an Active Directory environment: you can “pre-run” things on a domain computer at startup or when a user logs into the machine. In a PowerShell profile you can “pre-load” information, modules, custom functions, or any command you want to execute in the PowerShell console. There is a separate profile for the console and another for PowerShell ISE. Your profile is basically a PowerShell script saved into a specific location under your Documents folder, and the path to this profile is kept within a system variable, aptly named $PROFILE.

(screenshot: output of the $PROFILE variable)

I am using a Windows Azure VM that I just built, so I have not created any profiles on this machine. The path is kept within this variable but that does not mean it actually exists. We will need to create this file and the easiest method to do this is to actually use a cmdlet, New-Item. You can use this cmdlet to create files or folders. You can execute this one-liner to generate the PowerShell script in the path shown above:

New-Item $PROFILE -ItemType File -Force
(screenshot: New-Item $PROFILE output)

Now, from here you can use another cmdlet, Invoke-Item, to open the file in whatever editor is set as the default for ".ps1" files on your machine. This might be Notepad, or you can set it to be the PowerShell ISE as well. Just execute the cmdlet followed by the $PROFILE variable (e.g. Invoke-Item $PROFILE).

One of the things I picked up on when I started using my profile more often was that you can actually format your console. More specifically, I like to shorten the “PS C:\Users\melton_admin” portion: if you start working in directories that are 3 or 4 folders deep, it can take up a good part of your prompt. I came across a function whose original poster I truthfully cannot find, so sorry for not attributing it.

function prompt
{
    if ($host.UI.RawUI.CursorPosition.Y -eq 0) {
        "<$pwd> `n`r" + "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
    else {
        "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
}

Any function you save in your profile that performs an action can be called anytime in the PowerShell console, once it is loaded. However, if I want that action to take effect when the profile loads, I simply need to call the function at the end of the profile script. I just add these two lines and ensure they are always the last two lines of my profile; anything added later goes between the function above and these two lines:

prompt;
clear;
(screenshot: the formatted prompt)

I use the clear command (just like using cls in the DOS prompt) to get rid of any output that a function or command in my profile may cause; it just starts me out on a fresh, clean slate.

If you want to test your profile script you can force it to load into your current session by running .$profile; that is, enter “period $profile” and hit Enter. Take note that since I use the clear command in my profile, any error output by a cmdlet or one-liner I add will not be visible. When I have issues like this I simply comment the offending line out of my profile. You can put comments into your script using the pound sign (#); putting it before a command makes that command be ignored and not run.

Set-ExecutionPolicy

PowerShell is secure by default, so in certain operating system environments, when you try to run your profile script above, you may get an error like this:

(screenshot: execution policy error)
This means pretty much what it says: execution of scripts is disabled. To enable script execution you need to use the Set-ExecutionPolicy cmdlet with a few parameters. You can find the documentation for this by looking at “about_Execution_Policies” in PowerShell or by following the link in the error. The documentation explains the various options and policies you can set. The command below will allow you to execute scripts in your console and let it load your profile scripts:

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
Summary

In this post I pointed out the following cmdlets and concepts:

  • New-Item
  • Invoke-Item
  • Commenting in your script
  • Set-ExecutionPolicy

These are fairly basic areas of PowerShell, and putting each one into your favorite search engine should lead you to a plentiful list of reading material. This post by no means encompasses everything you can do with profiles; it was simply meant to get you started, and hopefully excited about what can be done.

Categories: DBA Blogs

Singapore Maths Question Solution and Very Interesting Observation (The Trickster)

Richard Foote - Thu, 2015-04-23 02:22
OK, time to reveal the solution to the somewhat tricky Singapore maths exam question I blogged previously. Remember, there were 10 dates: May 13, May 15, May 19; June 13, June 14; July 16, July 18; August 14, August 15, August 16. Bowie only knew the month of my birthday, Ziggy only knew the day. Bowie […]
Categories: DBA Blogs

PLAN_HASH_VALUE calculation different HP-UX and Linux?

Bobby Durrett's DBA Blog - Wed, 2015-04-22 17:01

I’m trying to compare how a query runs on two different 11.2.0.3 systems.  One runs on HP-UX Itanium and one runs on 64 bit x86 Linux.  Same query, same plan, different hash value.

HP-UX:

SQL_ID 0kkhhb2w93cx0
--------------------
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,exts
ize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL,
:13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16,
spare1=DECODE(:17,0,NULL,:17),scanhint=:18, bitmapranges=:19 where
ts#=:1 and file#=:2 and block#=:3

Plan hash value: 1283625304

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |                |       |       |     2 (100)|          |
|   1 |  UPDATE               | SEG$           |       |       |            |          |
|   2 |   TABLE ACCESS CLUSTER| SEG$           |     1 |    65 |     2   (0)| 00:00:01 |
|   3 |    INDEX UNIQUE SCAN  | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Linux:

SQL_ID 0kkhhb2w93cx0
--------------------
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,exts
ize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL,
:13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16,
spare1=DECODE(:17,0,NULL,:17),scanhint=:18, bitmapranges=:19 where
ts#=:1 and file#=:2 and block#=:3

Plan hash value: 2170058777

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |                |       |       |     2 (100)|          |
|   1 |  UPDATE               | SEG$           |       |       |            |          |
|   2 |   TABLE ACCESS CLUSTER| SEG$           |     1 |    64 |     2   (0)| 00:00:01 |
|   3 |    INDEX UNIQUE SCAN  | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

I wonder if endianness plays into the plan hash value calculation, or if it is just a port-specific calculation?
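One way to convince yourself the two plans really have the same shape is to compare the plan rows themselves on each system rather than the hash; a minimal sketch (both outputs above are for child number 0):

select id, operation, options, object_name, cost
from v$sql_plan
where sql_id = '0kkhhb2w93cx0'
and child_number = 0
order by id;

In the plans above only the Bytes estimate differs (65 vs 64), and as far as I know row, byte and cost estimates do not feed the plan hash value, so the difference does look like it is in the hash calculation itself.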

Odd.

– Bobby

Categories: DBA Blogs

Keeping Cassandra Alive

Pythian Group - Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things that can be done to keep a cluster/server running when you are having problems in that cluster.

There were a lot of changes in Cassandra over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing troubleshooting problems that affect some specific versions. Also, this is the kind of troubleshooting you need when you can’t “add a node”.

Why can’t I just add a node? Well, if you aren’t on vnodes, and you didn’t pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal), plus rack capacity, power constraints, etc…

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I have navigated through such a storm, and it’s not an easy task, but it’s doable! First thing: you have to know what you have, that is critical! You also need to know where you can take more damage.

Let’s assume you have the following situation, and see what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let’s define “storm”: a storm is not when Cassandra fails, it’s an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of node failures or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let’s understand what can cause a Cassandra process to die, and probably the machine with it (if you OOM and you didn’t configure swap… I warned you!), in the scenario described above.

  1. More data to the commitlog = more I/O pressure (Discard if you have commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O: compaction can’t compact fast enough and the memtables aren’t flushing fast enough = memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! Node dies (if you are “lucky” kernel will kill Cassandra)

And I didn’t even go through the hints that would be stored as nodes become unresponsive and sent out once they come back online.

So now we know where our pain points are. Let’s understand them and see what we can do about them:

  • Commitlog – Let’s just assume you have this on a separate HDD, and don’t do anything about it (after all, it’s your safeguard).
  • Memtables – We can control how often they are flushed. It is a possible tweak point, although it requires a Cassandra restart for the changes to take effect.
  • Compaction – This we can control via nodetool; we can even disable it in the later versions.
  • JVM GC – We can change settings, but they are difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – Controls how much dirty data the OS keeps in memory before actually writing it to the HDD. Tuning this can put your data at risk, but it can also help.
  • Replication Factor – This can be changed on the fly and will help by putting less pressure on the nodes.

So, what to do? Where to start? It depends on the case. I would probably let my read performance suffer to keep the writes coming in. The easiest way to allow that is making the reads CL = ONE; that often looks like the fast and easy option, but if your writes are not using QUORUM and/or you have read_repair, you will spread even more writes (with RF > 1). I would pick compaction as my first target: you can always try to get it back up to pace later (re-enable it, increase compaction throughput). Another option would be to increase dirty_ratio and risk losing data (trusting the commitlogs plus RF > 1 helps avoid actual loss), which gives your HDDs more room until the cluster recovers.

But every case is different. I will talk about my own case, problems and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Pythian Group - Mon, 2015-04-20 12:09
Introduction

This is the third blog post continuing the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to go over up front instead of later. In this post I want to go over a few ways you can debug scripts, or just issues, in PowerShell. This is a topic that can get very advanced and make for a very long blog post. Instead of trying to put all of that in one post, I will share a few links at the end that point you to some of the deeper-dive material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures, you will generally see folks use print statements to either check where the processing is at in the script or output some “additional” information. PowerShell is no different, and offers cmdlets that let you write output to various destinations and even use it to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist of it is that Write-Host outputs plain text and will always output text unless you comment it out or remove it from your script. With Write-Verbose you can have that information output only when a parameter switch is used, rightly called “-verbose”. This switch is included in most built-in cmdlets for modules provided by Microsoft. If you want to include it in your script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that neither function outputs anything from the Write-Verbose cmdlet when called without the switch:

(screenshot: Test-NoVerbose and Test-WithVerbose called without the -Verbose switch)

The difference you will see is that Test-NoVerbose does not do anything when you include the verbose switch, whereas Test-WithVerbose will:

(screenshot: the same functions called with the -Verbose switch)

So in cases where other people may be using your scripts, this feature helps keep output clean unless it is needed for debugging. I tend to use it most often when I am working on long scripts where I initially want to know what is going on as they run. If I ever have to come back to a script for debugging I can just use the switch, versus normal execution which doesn’t need all that output.

Errors

They are going to happen; it is inevitable in your scripting journey that at some point you will have an error. You cannot prepare for every error, but you can collect as much information as possible about an error to help in debugging. Just as you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.

PowerShell offers a variable that is available in every session you open or run, called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those raised in your scripts. There are other errors or exceptions that can be thrown by .NET objects, which work a bit differently in how you capture them; I will refer you to Allen White’s post on Handling Errors in PowerShell to see a good example.

Summary

Debugging is one of those topics that could fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that is a must-read for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs

Oracle Digital Transformation EMEA Partner Community Launch Summit - 28-29th April 2015, Budapest, Hungary

BE A DIGITAL DISRUPTOR! Be part of the Digital Disruption wave that is prevalent in all our partner and customer discussions today. Join us in Budapest to understand what Oracle’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Parallel Execution -- 3b Limiting PX Servers with Resource Manager

Hemant K Chitale - Sun, 2015-04-19 10:57
As demonstrated earlier, in the absence of CALIBRATE_IO, the "automatic" degree computed by a PARALLEL Hint is CPU_COUNT x PARALLEL_THREADS_PER_CPU.

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>
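As a quick sanity check of where that automatic degree comes from, the two driving parameters can be read directly; for a simple count(*) that uses a single PX server set, their product should match the 16 PX_SERVERS_EXECUTIONS seen above:

select name, value
from v$parameter
where name in ('cpu_count', 'parallel_threads_per_cpu');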

Now, I shall explore using the Resource Manager to place a limit.

HEMANT>connect system/oracle
Connected.
SYSTEM>BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
END; 2 3
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP (
3 CONSUMER_GROUP=>'PX_QUERY_USER',
4 COMMENT=>'New Group for PX');
5 END;
6 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN(
3 PLAN=>'LIMIT_PX_TO_4',
4 COMMENT=>'Limit PQ/PX to 4 Server Processes');
5 END;
6 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
3 PLAN=>'LIMIT_PX_TO_4',
4 GROUP_OR_SUBPLAN=>'PX_QUERY_USER',
5 PARALLEL_DEGREE_LIMIT_P1=>4,
6 COMMENT=>'Directive to limit PQ/PX to 4 Server Processes');
7 END;
8 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /
BEGIN
*
ERROR at line 1:
ORA-29382: validation of pending area failed
ORA-29377: consumer group OTHER_GROUPS is not part of top-plan LIMIT_PX_TO_4
ORA-06512: at "SYS.DBMS_RMIN", line 444
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 809
ORA-06512: at line 2


SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
3 PLAN=>'LIMIT_PX_TO_4',
4 GROUP_OR_SUBPLAN=>'OTHER_GROUPS',
5 MGMT_P1=>10,
6 COMMENT=>'Directive for OTHER_GROUPS (mandatory)');
7 END;
8 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>alter system set resource_manager_plan='LIMIT_PX_TO_4';

System altered.

SYSTEM>alter system flush shared_pool;

System altered.

SYSTEM>

I have now created a single consumer group and a plan.
Note 1 : The COMMENT is mandatory for the Consumer Group, the Plan and the Directives.
Note 2 : It is mandatory to specify Directives for OTHER_GROUPS in the Plan (even if I don't explicitly define any other groups).
For the group 'PX_QUERY_USER', I've set a PX limit of 4 (and no CPU limit).  For 'OTHER_GROUPS', I've set a CPU allocation (MGMT_P1) of 10%.
I must now associate the Consumer Group with the User.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
3 ATTRIBUTE=>DBMS_RESOURCE_MANAGER.ORACLE_USER,
4 VALUE=>'HEMANT',
5 CONSUMER_GROUP=>'PX_QUERY_USER');
6 END;
7 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>alter system flush shared_pool;

System altered.

SYSTEM>
SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
3 GRANTEE_NAME=>'HEMANT',
4 CONSUMER_GROUP=>'PX_QUERY_USER',
5 GRANT_OPTION=>FALSE);
6 END;
7 /

PL/SQL procedure successfully completed.

SYSTEM>
SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>BEGIN
2 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
3 END;
4 /

PL/SQL procedure successfully completed.

SYSTEM>

Let me test the configuration now.

SYSTEM>connect hemant/hemant
Connected.
HEMANT>select username, resource_consumer_group
2 from v$session
3 where username='HEMANT'
4 /

USERNAME RESOURCE_CONSUMER_GROUP
------------------------------ --------------------------------
HEMANT PX_QUERY_USER

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 4 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Now, the same query executed with only 4 PX servers.
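To double-check what the plan enforces, the directives can also be read back from the dictionary; a minimal sketch:

select plan, group_or_subplan, parallel_degree_limit_p1, mgmt_p1
from dba_rsrc_plan_directives
where plan = 'LIMIT_PX_TO_4';

This should show the degree limit of 4 for PX_QUERY_USER and the MGMT_P1 allocation of 10 for OTHER_GROUPS.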

.
.
.


Categories: DBA Blogs

Book Review: Oracle Database 12c New Features by Robert Freeman

Oracle in Action - Sun, 2015-04-19 03:04


This book touches on a myriad of new features of Oracle Database 12c relevant to DBAs, developers and architects. It starts with the new features and step-by-step installation instructions, with relevant screenshots, followed by an introduction to EM Express. The next chapter covers new features related to upgrading to Oracle Database 12c and the various methods to perform the upgrade. All the subsequent chapters explore a whole lot of new features, of which, as a DBA, I was most interested in the multitenant architecture, Flex Clusters, Flex ASM, ACFS, RMAN-related new features, Oracle Data Guard new features, and auditing, statistics and optimizer-related new features. For every feature, the need for it is explained first, followed by a demonstration of its basic functionality with simple, easy-to-reproduce scripts. There is a lot left to be learnt and explored, for which you need to refer to the documentation and practice. In short, this book is a launching point to start your journey towards understanding Oracle Database 12c.

Thanks to the author Robert Freeman and contributors Scott Black, Tom Kyte and Eric Yen for putting together such a great book. A must-have for Oracle enthusiasts.

 

 




Categories: DBA Blogs

Log Buffer #419: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-04-17 10:51

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from around the world.

Oracle:

  • Why the Internet of Things should matter to you
  • Modifying Sales Behavior Using Oracle SPM – Written by Tyrice Johnson
  • SQLcl: Run a Query Over and Over, Refresh the Screen
  • Data Integration Tips: ODI 12.1.3 – Convert to Flow
  • JRE 1.8.0_45 Certified with Oracle E-Business Suite

SQL Server:

  • What’s this, a conditional WHERE clause that doesn’t use dynamic SQL?
  • The job of a DBA requires a fusion of skill and knowledge. To acquire this requires a craftsman mindset. Craftsmen find that the better they get at the work, the more enjoyable the work gets, and the more successful they become.
  • Using SQL to perform cluster analysis to gain insight into data with unknown groups
  • There are times when you don’t what to return a complete set of records. When you have this kind of requirement to only select the TOP X number of items Transact SQL (TSQL) has the TOP clause to meet your needs.
  • Spatial Data in SQL Server has special indexing because it has to perform specialised functions.

MySQL:

  • Profiling MySQL queries from Performance Schema
  • How MySQL will act if there is no space left on hard disk? To answer this question, let’s test it.
  • Configuring PAM Authentication and User Mapping with MariaDB
  • MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions
  • MariaDB : Bug when add index on Partition table

Categories: DBA Blogs

Good Singapore Maths Students Would Likely Make Good Oracle DBAs (Problems)

Richard Foote - Tue, 2015-04-14 19:46
An interesting mathematics-based question from a Singapore high school exam has been doing the internet rounds in the past few days. Considering it’s aimed at high school students, it’s a tricky one and obviously designed to filter out the better students, in a country with a very good reputation for churning out mathematically gifted […]
Categories: DBA Blogs