
DBA Blogs

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 04:21





Categories: DBA Blogs

last post on GLOC 2014 conference ... new technical stuff coming soon

Grumpy old DBA - Sun, 2014-05-18 18:25
One of the presentations I am going to work on next will be in the SQL Plan Management interaction/intersection area with AWR information.  At GLOC 2014 Craig Martin had a very nice one that is kind of kick starting my interest in coming up with something relevant and worthwhile.

The conference itself was an epic "best ever" kind of event that made one feel both valued and valuable despite all the hard work (and there was plenty of that) ... it was above and beyond what we were hoping for.  All of that was achieved through a combination of top speakers and great workshops.

Somehow the conference even did well when, at the last moment, our keynote speaker for the final day of the event scratched on us.  Through a run of bad luck and problems in Chicago, Steven Feuerstein was unable to fly into Cleveland.  After 15 years of doing events this was our first time when a keynote speaker did not make it.  Yeah I know, we shoulda had a contingency plan in place.

Ric Van Dyke from Hotsos stepped up to the plate and delivered an exceptional keynote in Steven's place.  Thank you Ric!

A special thanks to all of our NEOOUG officers and our conference planning committee which also included several people outside of NEOOUG ... you know who you are and we could not have done this without you!

Finally my appreciation for our conference chair Rumpi Gravenstein!  He is the visionary that started this whole journey for us.  His energy, enthusiasm and dedication are inspiring to see!
Categories: DBA Blogs

SQL Activity for the last hour

DBA Scripts and Articles - Sat, 2014-05-17 12:59

This script can be used to show the top 10 SQL activity for the last hour. It uses the v$active_session_history view to find the top SQL by resource consumption. The post shows the result you can obtain and the active session history graph for the same period.
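
As a rough illustration only (this is not the author's original script), a query of this kind against v$active_session_history might look like the following:

-- Top 10 SQL_IDs by number of ASH samples over the last hour
SELECT *
  FROM (SELECT sql_id,
               COUNT(*) AS ash_samples
          FROM v$active_session_history
         WHERE sample_time > SYSDATE - 1/24
           AND sql_id IS NOT NULL
         GROUP BY sql_id
         ORDER BY COUNT(*) DESC)
 WHERE ROWNUM <= 10;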

The post SQL Activity for the last hour appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

12c Online Table Redefinition Examples

Bobby Durrett's DBA Blog - Fri, 2014-05-16 16:22

I’ve been learning about online table redefinition in Oracle 12c.  Here is a zip of some test scripts I built for my own understanding: zip

I spent some time trying out DBMS_REDEFINITION on an 11.2 Exadata test system and then started messing with it on 12c in a generic Linux VM.  The 12c version of the DBMS_REDEFINITION includes a new procedure called REDEF_TABLE which lets you do in one step certain things you did with multiple calls to the package in 11.2.  This is an online table compress on 12c:

BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                      => user,
    tname                      => 'TEST',
    table_compression_type     => 'COMPRESS');
END;
/

Things that can’t be done in one step like this require calls to procedures such as CAN_REDEF_TABLE, START_REDEF_TABLE, REGISTER_DEPENDENT_OBJECT, COPY_TABLE_DEPENDENTS, and FINISH_REDEF_TABLE.  Example online12c5.sql uses all of these (a rough sketch of the multi-step flow appears after the list below).  Here is a summary of each included file’s test:

online12c1.sql – compress table in one step

online12c2.sql – compress table in multiple steps and show that it creates a hidden column without a primary key

online12c3.sql – same as previous test but with primary key

online12c4.sql – copy constraints and indexes

online12c5.sql – change the columns for the non-unique index

online12c6.sql – change order and type of columns
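
A rough sketch of that multi-step flow (the table and interim table names here are hypothetical, not taken from the zip):

DECLARE
  v_errors PLS_INTEGER;
BEGIN
  -- verify the table can be redefined online (by primary key here)
  DBMS_REDEFINITION.CAN_REDEF_TABLE(user, 'TEST', DBMS_REDEFINITION.CONS_USE_PK);

  -- start redefinition into a pre-created interim table TEST_INT
  DBMS_REDEFINITION.START_REDEF_TABLE(user, 'TEST', 'TEST_INT');

  -- copy indexes, constraints, triggers and grants to the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(user, 'TEST', 'TEST_INT',
    copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    num_errors   => v_errors);

  -- swap the original and interim tables
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(user, 'TEST', 'TEST_INT');
END;
/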

- Bobby

Categories: DBA Blogs

Log Buffer #371, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-16 07:56

Fueled by massive growth of data and propelled by mammoth future potential of its application, bloggers across the database technologies are finding new things to explore in the light of previous knowledge. This Log Buffer Edition covers that all.

Oracle:

To configure an instrumentation watch, you first need to know what instrumentation is, and how to instrument applications or servers.

Why Choose to Deploy Agile PLM in the Cloud?

One of the things that makes JSF different from other approaches to server-side Java web development is that it offers a much higher level of abstraction and transparently maintains UI state.

Step by step instructions for setting up a development environment for doing development with Hippo CMS.

Oracle Solaris 11.2 at the Atlanta OpenStack Summit

SQL Server:

RAID and Its Impact on your SQL Performance.

Microsoft Azure Diagnostics Part 1: Introduction

Using Encrypted Backups in SQL Server 2014

A new plug in for Management Studio from Red Gate is free. It will give you access to all the scripts at SQLServerCentral, including your own briefcase.

Validate File Properties with PowerShell Prior to Import into SQL Server

MySQL:

Benchmark: SimpleHTTPServer vs pyclustercheck (twisted implementation)

Archival and Analytics – Importing MySQL data into Hadoop Cluster using Sqoop

Cross your Fingers for Tech14, see you at OSCON

New Tungsten Replicator 2.2.1 now available

MySQL May Newsletter is Available!

Categories: DBA Blogs

Consider speaking at #ukoug_tech14

The Oracle Instructor - Thu, 2014-05-15 10:11

The call for papers is still open for UKOUG Tech 14 – a great event to speak at for sure!

The board explicitly encourages first-time speakers and women to submit an abstract.

Neither applies to me, but I have submitted abstracts anyway :-)

I can say only the best things about the past annual conferences of the UK Oracle User Group. Great speakers, very good conditions and an excellent opportunity to get in touch with other Oracle enthusiasts.

So if you – yes, YOU! – are an expert in Oracle Core Technology, but hesitated so far to speak at public events about your topics, this might be the best time to get over it :-)


Tagged: #ukoug_tech14
Categories: DBA Blogs

(Slightly Off Topic) Spurious Correlations

Hemant K Chitale - Wed, 2014-05-14 09:33
During the course of the job, we find, discover and analyze "data" and come up with "information".  Sometimes we find correlations and "discover" causes.  We say "Event 'A'  caused Result 'X'".   However, it can  so happen that some "discovered" correlations are not "causal correlations" --- i.e. "Event 'A' has NOT really caused Result 'X'".  The mathematical correlation ("coefficient of correlation") may be high but there really is no logical or physical association between the two.

Here are some examples of Spurious Correlations.

The next time you say "I find a high correlation between the two", stop and think.  For a study of common biases and fallacies, I recommend "The Art of Thinking Clearly" by Rolf Dobelli.




Categories: DBA Blogs

Oracle Passwords: How to Use Punctuation Symbols

Pythian Group - Wed, 2014-05-14 07:04

You can’t make a password strong enough, but at least you can try. With at least one upper case character, one lower case character, one number, one punctuation mark or symbol, and more than 8 characters, you have a password that can be considered reasonably safe, probably.

In Oracle, if you embed punctuation marks within your passwords, you have to use single quotes around the password if you are using the orapwd utility from the command line. If you are altering the password from the sqlplus utility, you need to use double quotes around the password.

Examples of both are given below, changing the sys password:

From orapwd:

$ORACLE_HOME/dbs $ orapwd file=orapwtest password="My1s!pass" entries=40 force=y
-bash: !pass": event not found

$ORACLE_HOME/dbs $ orapwd file=orapwtest password='My1s!pass' entries=40 force=y

Changed successfully.

From sqlplus:

SQL> alter user sys identified by 'My1s!pass';
alter user sys identified by 'My1s!pass
*
ERROR at line 1:
ORA-00988: missing or invalid password(s)

SQL> alter user sys identified by "My1s!pass";

User altered.

Enjoy!!!

Categories: DBA Blogs

Tackling the challenge of Provisioning Databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 01:03

One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 Database Challenges as per the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization or it being a long and error-prone process. This is where Enterprise Manager 12c can help by making this a standardized process using profiles and lock-downs, plus a role and access separation where the lead DBA can lock certain properties of a database (like character set, Oracle Home location or SGA, etc.) and junior DBAs can't change those during provisioning. The image below describes the solution:



In Short :



  • It's Fast

  • It's Easy

  • And you have complete control over the lifecycle of your dev and production resources.


I actually wanted to show step-by-step details on how to provision an 11.2.0.4 RAC using the Provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:


Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database


Other Resources : 


Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA


Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1


Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen



Categories: DBA Blogs

Create new cdb with two pdbs

Bobby Durrett's DBA Blog - Tue, 2014-05-13 17:38

Quick post.  I’m messing with 12c again.  I finished recreating my cdb but this time found the option to create multiple pdbs on the initial creation.  I’m using DBCA.
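
For reference, the SQL equivalent of adding another PDB from the seed looks roughly like this (names and file paths here are hypothetical, not from the PDF):

CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdb2admin IDENTIFIED BY pdb2admin
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/u01/app/oracle/oradata/cdb1/pdb2/');

ALTER PLUGGABLE DATABASE pdb2 OPEN;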

PDF with screenshots of me creating a cdb with two pdbs

- Bobby

Categories: DBA Blogs

The Hadoop and the Hare

Pythian Group - Mon, 2014-05-12 07:38

I speak to a lot of people who are terribly concerned with “real-time”. Can data get into the warehouse in real-time? Can we record everything the user does on the site in real-time? Real-time is a magic phrase, because it really means “so fast that nobody cares”. In a business sense, it usually means that nobody has really thought about the latency requirements, so they’re going to push for the lowest latency possible. This is a dangerous habit, and you need to fight against it at every turn. Some of the best, cheapest, most scalable solutions will only be available when you accept that you can wait for an hour (or even hours!) to see some results.

Case in point, a client asking for a NoSQL store to hold user event data. It would have to handle millions of concurrent users, all generating dozens of events within minutes. They were (rightly) concerned about write performance and scaling this NoSQL store horizontally to handle the volume of writes. When asked why they needed the events, they didn’t know: they wanted to do exploratory analytics and see if there were interesting metrics to be mined. I proposed dumping the events on a queue – Kafka, Kinesis – or just writing logs for Flume to pick up. Rather than asking a data store to make all of these messages available for querying immediately, do things asynchronously, run some batch jobs to process data in Hadoop, and see if interesting data comes out. The Hadoop cluster and the ingestion pipeline is an order of magnitude less expensive and risky than the cool, real-time solution which confers no immediate benefits.

Maybe this client will decide that batch analytics are good enough, and they’ll stick with using Hive or Impala for querying. Maybe they discover a useful metric, or a machine learning technique they want to feed back into their user-facing application. Many specific metrics can be stored as simple counters per user, in a store like Redis, once they’ve been identified as valuable. Machine learning techniques are interesting because (besides cold start situations) they should be stable: the output shouldn’t change dramatically for small amounts of new data, so new information only needs to be considered periodically. In both cases, the requirements should be narrowed down as far as possible before deciding to invest in “real-time”.

In another case, a client presented the classic data warehousing problem: store large volumes of events with fine grain, and slice and dice them in every imaginable way. Once again, the concept of reducing latency came up. How quickly can we get events into the warehouse so people can start querying them? The answer is that we can make anything happen, but it will be needlessly expensive and malperformant. The main goal of the project was to provide historical reporting: there was no indication that clients would want to do the same kind of pivots and filtering on data only from the past minute. The low latency application would be cool and interesting to develop, but it would be much simpler to find which reports  users want to be low-latency, then precompute them and store them in a more appropriate way.

A more appropriate way is Dynamo, Cassandra, or your preferred NoSQL key-value store. Precompute aggregates you know the user wants with very low latency, and store them keyed on time with a very fine grain: you have the benefit of high write throughput here, but at the cost of little query complexity. Once the data is no longer interesting – it’s visible in the warehouse with much more grain along the various dimensions – then drop it from NoSQL.

Starting with a relatively slow, batch platform gives very high flexibility at a low cost, and with little development effort. Once your users – internal or clients – have explored the data set and started complaining about latency, then is the time to build specialized pipelines for those specific use cases. Since Kafka, Kinesis and Flume all support streaming computations as well as batch, these will serve as good connection points for your “real-time” pipeline.

Categories: DBA Blogs

Very proud of my peeps ... GLOC 2014 is rocking

Grumpy old DBA - Fri, 2014-05-09 17:59
At this time ... Friday night ... 2 1/2 days before Great Lakes Oracle Conference 2014 kicks off ( GLOC 2014 ) things are rocking and rolling.

We have a new grand total of attendees and registrations ... sitting at 321 at the moment.  This is big for us ... top quality speakers / great workshops / most affordable conference ever.

So proud of everyone involved in bringing this conference together.  Lots of people to thank but number one person is Rumpi Gravenstein our conference chair and driving force behind this crazy idea that we should do this and could do this.  Takes a visionary to create a goal!

Still time to get here and take advantage of all that we have to offer.  Special thanks to all our workshop leaders and our great selection of speakers!



Categories: DBA Blogs

Configure #Oracle #GoldenGate #12c for testing between two or more #db12c PDBs.

DBASolved - Fri, 2014-05-09 13:11

Over the last few weeks I’ve been working and traveling a good bit.  Amongst all the travel I’ve been trying to get time to test a few things.  One test I wanted to get done was using Oracle GoldenGate 12c with Oracle Database 12c between Pluggable Databases (in the same instance).   As of today, I’ve finally gotten my Oracle Virtual Box environment configured and ready, so now I can travel and still test items I want to test.

With setting up this environment, I’ve identified a few things that may be of interest to the community.  Before I go into what I’ve had to configure to make this environment work, I have to say that the Oracle documentation for Oracle GoldenGate 12c is decent but still missing a few things.  The organization of the documentation could improve a good bit as well (personal opinion).

Note: As far as the documentation goes, it seems like downloading the PDF is the better option; it is a bit more detailed in its steps compared to the web version.

In my VM environment, I’m using the following software:

  • Oracle Enterprise Linux 6.5
  • Oracle Database 12c Enterprise Edition (12.1.0.1)
  • Oracle GoldenGate 12c (12.1.2)
Linux:

I’ve just set up a simple Oracle Enterprise Linux 6.5 box.  Not very big, but for a VM it could be considered pretty large.  The VM consists of the following (screenshots provided for each part).

Memory:

3072MB
(could go smaller, and may, as I shrink the SGA/PGA requirements for testing)

[screenshot]

CPU:

2 Processors with 100% execution cap

[screenshot]

Storage:

Using 7 VMDK disks (file system and ASM disks).
The reason for so many disks is that I’m using Oracle ASM and Oracle Restart to keep the database files and database instance consistent upon restarting the VM.

[screenshot]

Network:

NAT with port forwarding
For more information on port forwarding, a good blog post for this was written by my friend Maaz Anjum on his blog (VirtualBox, NAT, and Port Forwarding).

[screenshot]

[screenshot]

Database:

The database is the currently available version of Oracle Database 12c (12.1.0.1) and Grid Infrastructure that you can download from Oracle Technology Network (OTN).  Once the software is downloaded, it can be installed within the VM.  There are many different steps that need to be addressed with Oracle Grid Infrastructure 12c to get ASM working right.  In my configuration I’m using ASMLib; however, the Oracle RACSIG has provided some exceptional documentation on how to set up Oracle RAC 12c using Virtual Box VMs with ASM without using ASMLib.  It is a good starting point if needed.

Once the Oracle Database 12c (12.1.0.1) software is installed, a Container Database (CDB) with two Pluggable Databases (PDBs) needs to be created.  The DBCA will allow you to create a CDB with one PDB, and additional PDBs with more runs of DBCA.  The way I created my second PDB was to create the first PDB (PDB1) and configure it with what I needed in it for Oracle GoldenGate.  Then I cloned PDB1 to create PDB2 with all the settings in place.

I don’t want to spend too much time on the cloning process of PDBs in this post; however, Oracle provides a lot of different ways for cloning PDBs.  I have listed some common ones below (a rough SQL sketch follows the note):

  • SQL
  • EM Express
  • SQL Developer (easiest way)

Note: To make cloning of PDBs easy, I recommend staying with Oracle Managed Files (OMF).
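
As a rough illustration of the SQL route only (names are hypothetical; in 12.1 the source PDB has to be opened read-only for a local clone):

-- Put the source PDB into read-only mode
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

-- With OMF in place no file name conversion clause is needed
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;
ALTER PLUGGABLE DATABASE pdb2 OPEN;

-- Return the source PDB to read-write
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;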

GoldenGate:

I will say configuring Oracle GoldenGate 12c (12.1.2) with Oracle Database 12c (12.1.0.1) using PDBs is the same yet different than configuring it with traditional databases (11.2.0.4 and earlier).  I say this because things that we typically use for setting up Oracle GoldenGate change just ever so slightly.

GoldenGate User:

One example of what I’m talking about is the Oracle GoldenGate user.  In previous versions, I could create the user, grant it DBA rights and run.  In Oracle Database 12c, it is not quite that simple.  In order to use Oracle GoldenGate with PDBs, a “common user” needs to be created.

Details: I discuss “common users” in detail in this post.

I created my common user with the name “C##GGATE” in the Container Database (CDB). This gives the GoldenGate user access to all the PDBs that are contained within the CDB.
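
A minimal sketch of creating such a user (the password matches the one used in the parameter files later in this post):

-- Common users are prefixed with C## and are created in the root container;
-- CONTAINER=ALL is the default there and is shown only for clarity.
CREATE USER c##ggate IDENTIFIED BY ggate CONTAINER=ALL;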

GoldenGate User: Privileges:

With the common user created, I needed to grant permissions to the user so it can do whatever it needs to.  I granted DBA and CDB_DBA along with CONNECT to my common user.  It turns out there is an issue with the CDB_DBA role (I did not investigate fully), but granting DBA and CONNECT worked (to a degree).  So I resorted to the Oracle GoldenGate documentation for what permissions were needed.

Reminder: As I mentioned earlier, the documentation is a bit scattered and trying to find this information took some time.

Although I found the documentation and granted the documented privileges for capture and apply, I still had issues.  Below, I have provided the permissions that I’ve had to explicitly grant to my common user.

Privileges Granted:

The SQL used to find this information is the same in the CDB and all PDBs.  The results returned are for the common user C##GGATE (roles and system privileges).


SELECT distinct DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME) as "USER", SUBSTR(U2.NAME,1,20) as "ROLE", 
   /*SUBSTR(SPM.NAME,1,27) as "PRIV",*/ UAM.STATUS 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM,SYS.USER_ASTATUS_MAP UAM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
  and U1.ASTATUS = UAM.STATUS# and U1.TYPE# =1
  and U1.name = upper('&user_name')
  ORDER BY 1, 2, 3;

Container Database:

[screenshot]

Pluggable Database 1:

[screenshot]

Pluggable Database 2:

[screenshot]

You will notice that the roles CONNECT, DBA and RESOURCE are granted at the CDB level and only CONNECT is needed at the PDB level.  This is needed to allow the common user to switch between PDB containers.

Although these roles are great for the common user, the documentation also states that the common user has to be granted access to the PDBs through a new procedure.


begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE');
end;
/

or

begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE', container=>'all');
end;
/

What the DBMS_GOLDENGATE_AUTH procedure actually grants is a set of system privileges that Oracle states are needed for the common user to interact with the PDBs.  In the end, I think the system privileges that are set through the DBMS_GOLDENGATE_AUTH package are not complete.  Below is a list of privileges that I have granted to my common user (the code block shows the view I use to find these privileges, created under SYS and run on the CDB and all PDBs).


CREATE OR REPLACE VIEW DBA_USER_PRIVS (USERNAME, ROLENAME, PRIVILEGE) AS
SELECT DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME), SUBSTR(U2.NAME,1,20), 
   SUBSTR(SPM.NAME,1,27) 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
UNION
SELECT U.NAME, NULL, SUBSTR(SPM.NAME,1,27)
FROM SYS.SYSTEM_PRIVILEGE_MAP SPM, SYS.SYSAUTH$ SA, SYS.USER$ U
WHERE SA.GRANTEE#=U.USER# 
  AND SA.PRIVILEGE#=SPM.PRIVILEGE
/  

select * from dba_user_privs where username = 'C##GGATE' and rolename is null;

Container Database:

[screenshot]

Pluggable Database 1 (Capture):

[screenshot]

Pluggable Database 2 (Apply):

[screenshot]

Lastly, I figured out that I needed to set tablespace quotas both within the CDB and the PDB for the common user.  To make it easy, I just assigned UNLIMITED quotas to the USER tablespace in the CDB and PDB for the common user.

Note: The reason the quota on the USER tablespace is needed is that the default tablespace for C##GGATE is USER in the CDB. The user needs the same access at the PDB level.
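
A minimal sketch of the quota statements (adjust the tablespace name to your environment; run once in the root and once in each PDB):

-- In the root container
ALTER USER c##ggate QUOTA UNLIMITED ON users;

-- In each PDB
ALTER SESSION SET CONTAINER = pdb1;
ALTER USER c##ggate QUOTA UNLIMITED ON users;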

Now that the user has been configured, let’s get the GoldenGate processes configured and ready to run.

Environment Setup:

Just like in any other GoldenGate environment, you will have an Extract, a Replicat and possibly a Data Pump (optional, though I always use one) plus the associated trail files.  The important thing to remember when setting up each process is that there are a few changes in the TABLE and MAP statements.  I have provided my working, uni-directional parameter files below.

Tip: Extracts in Oracle Database 12c against PDBs MUST be set up as integrated extracts.  Classic capture extracts can be configured but will not start against a PDB (and no error message will be logged either).

Extract:


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT ext

--Oracle Login
USERID c##ggate, PASSWORD ggate

--Use ASM API
--Not needed for integrated capture as of 12.1.2
--TRANLOGOPTIONS DBLOGREADER

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Warns for a long running transaction
WARNLONGTRANS 5m, CHECKINTERVAL 3m

--Specifies the location of the local trail file on the source machine
EXTTRAIL ./dirdat/lt

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Data Pump (Extract):


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
-- CHECKPARAMS

--Specifies the name of the extract pump process
EXTRACT pmp

--No logic applied just pass everything extract writes to the trail file to the target trail file
PASSTHRU

--Specifies the target system to connect to
RMTHOST localhost, MGRPORT 15000, COMPRESS

--Specifies the location of the remote trail file on target machine
RMTTRAIL ./dirdat/rt

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Replicat:


--Specifies the name of the replicat load process.
REPLICAT rep

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Oracle login.
USERID c##ggate@PDB2, PASSWORD ggate

--DBOPTIONS INTEGRATEDPARAMS(parallel 4)

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/rep.dsc, append, megabytes 5

--Resolves the TARGETs to be replicated during process start-up
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
map PDB1.BOBBY.*, target BOBBY.*;

Notice in the Extract and Replicat parameter files that the TABLE and MAP statements are slightly different.  In the two files, I use two different ways of associating tables with the PDBs.  In the Extract parameter file, the PDB is being assigned with the SOURCECATALOG option.  This tells GoldenGate to switch containers and to use PDB1 for the tables.  In the Replicat parameter file, the MAP statement is not using SOURCECATALOG; instead it references the source tables in the inline format CATALOG.SCHEMA.TABLE (PDB1.BOBBY.*).  The target side does not need the catalog because the Replicat is logging into PDB2 directly.

Well, I hope I gave a good overview of how to set up Oracle GoldenGate 12c (12.1.2) with Oracle Database 12c (12.1.0.1) using PDBs.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Golden Gate, Replication
Categories: DBA Blogs

How to get started with OpenStack on Solaris

The expectations for an enterprise-class operating system are greater than ever. The new Solaris 11.2 is a game changer - the world's first Cloud Operating System now comes with out-of-the-box...

Categories: DBA Blogs

Log Buffer #370, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-09 07:51

With promising clouds hovering over the skies of database technology, this Log Buffer Edition gathers some selective droplets for Oracle, SQL Server and MySQL.

Oracle:

ACM: organizing the chaos and flexing rigid process structures through adaptive case management by Lucas Jellema.

How to Build GlassFish 4 from Source.

Make Your Oracle Database Run Faster on Oracle Linux.

How to: Run Repository Creation Utility (RCU) on Solaris.

A Customer Centric Approach To Supply Chain: Supply Chain Journey Mapping (SCJM).

SQL Server:

Stairway to PowerPivot and DAX – Level 1: Getting Started with PowerPivot and DAX.

Free eBook: SQL Server Backup and Restore.

12 SQL Server XML Questions You Were Too Shy To Ask.

Context Is Everything – Check Your Settings.

How column COLLATION can affect SQL Server query performance.

MySQL:

Many Sphinx users utilize MySQL as a data source. Divendo is one of them.

Another reason for Xtrabackup error “log block numbers mismatch”.

MySQL Audit Plugin now available in Percona Server 5.5 and 5.6.

New Release Webinar on May 13th: Introducing ClusterControl 1.2.6 – Live Demo.

The Perfect Server – Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3).

Categories: DBA Blogs

Concurrent Processing Issues with Distributed Transactions Glitch

Pythian Group - Thu, 2014-05-08 10:58

This is my first post on the Pythian blog, and I wanted to share an interesting use-case that we’ve dealt with recently. I believe someone may see similar cases in the future, so here are the troubleshooting steps to get to the source and apply a workaround to fix it. The root problem was that concurrent processing doesn’t start up, and the Internal Manager log file reports an error (the same error is also thrown in Oracle Forms when trying to access the Concurrent -> Administer form):

CP-GENERIC ORACLE ERROR (ERROR=ORA-01591: lock held by in-dou) (ROUTINE=get_lk_handle)


The ORA-01591 error clearly identifies a lock held by an in-doubt distributed transaction; however, the DBA views DBA_2PC_PENDING and DBA_2PC_NEIGHBORS return no rows (sys.pending_trans$ and sys.pending_sessions$ have no rows either). If any of these views returned any rows, then most probably those transactions could be committed, rolled back or purged and the problem would be resolved. However, this wasn’t the case – so this must be some kind of glitch.
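
For reference, the checks in question are simple queries against those views – a minimal sketch:

-- Normally an in-doubt distributed transaction shows up here
SELECT local_tran_id, global_tran_id, state, fail_time
  FROM dba_2pc_pending;

SELECT local_tran_id, in_out, database, interface
  FROM dba_2pc_neighbors;

Both returned no rows in this case.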

The first document to review is Master Note for Troubleshooting Oracle Managed Distributed Transactions (Doc ID 100664.1), but this gives no valuable information, and neither does the much more detailed Manually Resolving In-Doubt Transactions: Different Scenarios (Doc ID 126069.1). The next step is to find out where ORA-01591 is coming from. Here, a database trace is a very useful tool. I was lucky this time, as the error is thrown in Forms, so getting a trace is two clicks away. If it’s not so easy in your case, the SERV_MOD_ACT_TRACE_ENABLE procedure can be used to get a trace.
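
As a rough sketch only (the service and module names below are placeholders, not taken from this system), enabling such a trace with DBMS_MONITOR looks like this:

-- Enable SQL trace, including waits and binds, for a given service/module
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'MY_SERVICE',
    module_name  => 'MY_MODULE',
    action_name  => DBMS_MONITOR.ALL_ACTIONS,
    waits        => TRUE,
    binds        => TRUE);
END;
/

-- Disable it again when done
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(
    service_name => 'MY_SERVICE',
    module_name  => 'MY_MODULE');
END;
/

In my case, the relevant part of the trace was: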


=====================
PARSING IN CURSOR #139830878745608 len=66 dep=1 uid=0 oct=3 lid=0 tim=1398844275853403 hv=1493897133 ad='1c121bb90' sqlid='19x1189chq3xd'
SELECT LOCKID FROM DBMS_LOCK_ALLOCATED WHERE NAME = :B1 FOR UPDATE
END OF STMT
PARSE #139830878745608:c=0,e=137,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=1398844275853402
BINDS #139830878745608:
Bind#0
oacdty=01 mxl=128(128) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1206001 frm=01 csi=31 siz=128 off=0
kxsbbbfp=7f2ce9db2d80 bln=128 avl=11 flg=05
value="FNDCPLK_ICM"
...
ERROR #139830878745608:err=1591 tim=1398844275855101
...
=====================

Next, trying to execute the same SQL manually in sqlplus I got the following error:

SQL> select * from DBMS_LOCK_ALLOCATED;
select * from DBMS_LOCK_ALLOCATED
*
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 35.15.13775

So now I have some details on the ORA-01591 error. I know which transaction is holding the lock – but it’s not displayed in any of the DBA_2PC_* views. Also, trying to purge, commit or rollback this transaction will end in an error, as there is no full data about this transaction:

SQL> execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY ('35.15.13775');
BEGIN DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY ('35.15.13775'); END;
*
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_TRANSACTION", line 96
ORA-06512: at line 1
SQL> COMMIT FORCE '35.15.13775';
COMMIT FORCE '35.15.13775'
*
ERROR at line 1:
ORA-02058: no prepared transaction found with ID 35.15.13775
SQL> rollback force '35.15.13775';
rollback force '35.15.13775'
*
ERROR at line 1:
ORA-02058: no prepared transaction found with ID 35.15.13775
SQL>

Trying to dig deeper, MoS note DBMS_LOGSTDBY.BUILD Seems to Hang And Does Not Return To SQL prompt. (Doc ID 747495.1) got my attention. A standby database is not used in this configuration, but the x$ktuxe internal view contains information about transactions, and a query against it soon gave some results:

SELECT KTUXEUSN, KTUXESLT, KTUXESQN, KTUXESTA Status, KTUXECFL Flags FROM x$ktuxe WHERE ktuxesta!='INACTIVE';
KTUXEUSN KTUXESLT KTUXESQN STATUS FLAGS
---------- ---------- ---------- ---------------- ------------------------
35 15 13775 PREPARED SCO|COL|REV|DEAD

This is the exact transaction we identified earlier with the trace (ORA-01591: lock held by in-doubt distributed transaction 35.15.13775). Once the transaction is identified there is a way to get rid of it, and there are already a few options:

Using the same technique, I inserted dummy data into the pending_trans$ and pending_sessions$ tables. Then purge_lost_db_entry solved the case, and the dummy entry was removed, along with the references in the x$ktuxe table (see examples in the sources I mentioned above).  According to the oracle-L comments, this is an Oracle Support-provided workaround to deal with a particular SR case – so most probably it is not a supported way to deal with such cases.

In my case, we didn’t know what created the half-purged transaction, and this test system was needed ASAP for training, so this method really saved the day.

Categories: DBA Blogs

Oracle Development Tools for Windows & .NET

Interoperability between Oracle and Microsoft technologies around Oracle database support on Windows and .NET application development has been substantial for years. The latest...

Categories: DBA Blogs

Online Move of Datafiles for Pluggable Databases

The Oracle Instructor - Wed, 2014-05-07 09:22

From my current Oracle Database 12c New Features course in Zürich: We have introduced the handy new functionality that lets you move datafiles online in 12c. At first glance that appears to have an issue for pluggable databases:

 

SQL> @whoami
USER: SYS
SESSION ID: 253
CURRENT_SCHEMA: SYS
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: CDB$ROOT
DATABASE ROLE: PRIMARY
OS USER: oracle
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
/u01/app/oracle/oradata/cdb1/users01.dbf
/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/system01.dbf
/u01/app/oracle/oradata/pdb1_1/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/pdb1_1/example01.dbf

10 rows selected.

SQL> alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf';
alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf'
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "11"

The error message is quite useless in this case. It works, but you need to be in the container of the PDB that the datafile belongs to:

SQL> alter session set container=pdb1_1;

Session altered.

SQL> alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf';

Database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/pdb1_1/system01.dbf
/u01/app/oracle/oradata/pdb1_1/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/SAMPLE_SCHEMA_users01.dbf
/home/oracle/example01.dbf

Don’t believe it (even error messages may lie!), test it :-)


Tagged: 12c New Features
Categories: DBA Blogs

12c Multi-tenant Environment and DBRM for Container Database

Pythian Group - Wed, 2014-05-07 09:15

In a multi-tenant environment, Database Resource Manager (DBRM) at the Container Database (CDB) level enables us to manage resources like CPU usage and the number of parallel execution servers among the pluggable databases (PDBs). Then within each PDB, DBRM enables us to manage resources like CPU and parallelism, and to manage the runaway queries which exceed certain thresholds of CPU, physical IO, logical IO or elapsed time.

In the 12c version of the Oracle database, a CDB plan is implemented at the CDB level. This CDB plan allocates resources to the PDBs. A CDB plan is made up of directives, with each directive aimed at a single PDB. The directives control the allocation of CPU and parallel execution servers. The default CDB plan is DEFAULT_CDB_PLAN. A CDB plan deals in share values and utilization limits.

Shares are like counters: the more shares a PDB has, the more resources it enjoys. The utilization limit for a PDB caps its resource allocation. By default, the share value for each PDB is 1, and the utilization limit is 100. The utilization limit restrains the system resource usage of a specific PDB, while parallel_server_limit and PARALLEL_SERVERS_TARGET are used to limit parallel servers.

Example of a CDB Plan:

The following example enforces a CDB plan ‘cdb_plan’ for two databases, PROD and DEV. PROD has 2 shares and a 100% utilization limit, whereas DEV has half as many shares (1) and a 50% utilization limit. The pending area is just a staging area to create, edit, and validate the plans.

exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
plan    => 'cdb_plan',
comment => 'CDB plan for CDB');
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
plan                  => 'cdb_plan',
pluggable_database    => 'Prod',
shares                => 2,
utilization_limit     => 100,
parallel_server_limit => 100);
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
plan                  => 'cdb_plan',
pluggable_database    => 'Dev',
shares                => 1,
utilization_limit     => 50,
parallel_server_limit => 50);
END;
/

exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

You may use the DBA_CDB_RSRC_PLANS view to display all of the CDB resource plans defined in the CDB.
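
As a quick follow-up sketch (run from the root container, names as in the example above), you can check the directives that were created and then switch the CDB over to the new plan:

-- Review the directives of the new plan
SELECT plan, pluggable_database, shares,
       utilization_limit, parallel_server_limit
  FROM dba_cdb_rsrc_plan_directives
 WHERE UPPER(plan) = 'CDB_PLAN';

-- Activate the plan at the CDB level
ALTER SYSTEM SET resource_manager_plan = 'cdb_plan';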

Categories: DBA Blogs

Simple tuning example but uncertain test results

Bobby Durrett's DBA Blog - Tue, 2014-05-06 16:58

Last week I came across what seemed like a simple query tuning problem.  A PeopleSoft batch job ran for many hours and when I did an AWR report I found that the top query was doing a full scan when an index should help.

Here is the query and its bad plan:

SQL_ID 1jx5w9ybgb51g
--------------------
UPDATE PS_JGEN_ACCT_ENTRY 
SET 
JOURNAL_ID = :1, 
JOURNAL_DATE = TO_DATE(:2,'YYYY-MM-DD'), 
FISCAL_YEAR = :3,
ACCOUNTING_PERIOD = :4,
GL_DISTRIB_STATUS = 'D', 
JOURNAL_LINE = :5 
WHERE 
PROCESS_INSTANCE = 6692638 AND
GL_DISTRIB_STATUS = 'J'  AND 
ACCOUNT=:6 AND 
DEPTID=:7 AND 
CURRENCY_CD=:8 AND
FOREIGN_CURRENCY=:9

Plan hash value: 1919611120

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                    |       |       | 21649 (100)|          |
|   1 |  UPDATE            | PS_JGEN_ACCT_ENTRY |       |       |            |          |
|   2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |    58 | 21649   (5)| 00:01:27 |
-----------------------------------------------------------------------------------------

The problematic batch job ran variations of this query with different literal values for PROCESS_INSTANCE corresponding to each flat file being loaded.  Three updates of this type were in the AWR report for the 16-hour period that covered the run of the batch job:

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
    16,899      5,836        3,811        4.4     3.5 4h54qqmbkynaj

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692549 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,170      2,190        1,480        4.2     1.3 a5rd6vx6sm8p3

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692572 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,141      1,983        1,288        4.8     1.3 1jx5w9ybgb51g

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692638 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

The batch job ran about 15 and a half hours so these three plus others like them surely combined to make up the bulk of the run time.

It made sense to me to just add an index on all the columns in the where clause – PROCESS_INSTANCE,GL_DISTRIB_STATUS,ACCOUNT,DEPTID,CURRENCY_CD,FOREIGN_CURRENCY
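
In other words, something along these lines (the index name here is made up):

CREATE INDEX sysadm.ps_jgen_acct_entry_fix ON sysadm.ps_jgen_acct_entry
  (process_instance, gl_distrib_status, account, deptid,
   currency_cd, foreign_currency);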

Just to check how selective this combination of columns might be I did a count on each grouping of these columns and came up with about 50 rows per combination:

>select max(cnt),avg(cnt),min(cnt)
  2  from
  3  (select
  4  PROCESS_INSTANCE,
  5  GL_DISTRIB_STATUS,
  6  ACCOUNT,
  7  DEPTID,
  8  CURRENCY_CD,
  9  FOREIGN_CURRENCY,
 10  count(*) cnt
 11  from sysadm.PS_JGEN_ACCT_ENTRY
 12  group by
 13  PROCESS_INSTANCE,
 14  GL_DISTRIB_STATUS,
 15  ACCOUNT,
 16  DEPTID,
 17  CURRENCY_CD,
 18  FOREIGN_CURRENCY);

  MAX(CNT)   AVG(CNT)   MIN(CNT)
---------- ---------- ----------
      9404  50.167041          1

The table itself has 3 million rows so this is pretty selective:

OWNER                TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN SAMPLE_SIZE LAST_ANALYZED       DEGREE     INSTANCES
-------------------- ------------------------------ ---------- ---------- ----------- ----------- ------------------- ---------- ----------
SYSADM               PS_JGEN_ACCT_ENTRY                3145253      82857         204     3145253 2014-04-21 21:07:02          1          1

But, the strange thing was that when we added the index on our test system we didn’t see any performance improvement!  We ran the largest production file on test and it ran in ten minutes with or without the index.  Yack!

So, I tried my own test in sqlplus with the select equivalent of the update and hardcoded values instead of bind variables – quick and dirty.  I thought I had extracted some valid values although I later realized they weren’t.  Here is what I ran and notice the full scan ran just as fast as with the index:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.30

Execution Plan
----------------------------------------------------------
Plan hash value: 1762298626

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER                      |                    |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   3 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
       1761  recursive calls
          0  db block gets
        557  consistent gets
         14  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 3728573827

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER            |                    |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   2 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

It looks like I passed in a NULL for a column with a NOT NULL constraint – and an equality comparison with NULL can never be true anyway – and that is what made the full scan version just as fast as the indexed one.  The always-false FILTER condition (NULL IS NOT NULL) meant that with both plans the database realized immediately that there were no rows matching this bogus collection of constants.  So, then I replaced the NULL with a zero and finally we had proof of the performance improvement of the index:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 2047014499

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |     1 |   203 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:37.11

Execution Plan
----------------------------------------------------------
Plan hash value: 1758291200

----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |   203 | 12185   (2)| 00:02:27 |
|*  1 |  TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      56110  consistent gets
      55409  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

So, I can’t tell you what happened in test, but I suspect that we were passing null into one of the bind variables and got a similarly efficient filter due to some data that was missing when running a production file on an out-of-date test system.  But once I forced the equivalent of the production full scan by supplying non-null values for all the constants, the value of the index became clear.  It went into production last week and this weekend’s run ran in a few minutes instead of 15 hours.

- Bobby

 

 

Categories: DBA Blogs