
Feed aggregator

[Free Webinar] Learn Weblogic from Oracle ACE Atul Kumar & Oracle Expert Pavan

Online Apps DBA - Mon, 2016-02-01 11:14
This entry is part 4 of 5 in the series WebLogic Server


Nowadays, enterprises use WebLogic Server because it provides all the essential features to build and support Java EE applications.

For that reason, more people are drawn to learning WebLogic, but the main questions remain: where to go, and where to start?

And if you are one of those people who are still not sure what WebLogic really is or why you should learn it, then we have good news for you.

On Saturday, February 6th, at 10:00 PM IST (4:30 PM GMT, 8:30 AM PST), Oracle ACE Atul Kumar & Oracle Expert Pavan will be discussing WebLogic, and this is where you can clear all of your doubts about it. Grab this opportunity by clicking the button below to register for the webinar.

Click Here to Register For Free Webinar

We have a limited number of seats for a limited time, so grab yours before they are gone!

The session will also include a live Question & Answer segment in which you can ask questions to your heart’s content. So it’s a 100% bonus session, with guaranteed benefit.

And just for a quick start: WebLogic Server was first developed by BEA Systems, which Oracle acquired in 2008. WebLogic is middle-tier server software, an Online Transaction Processing (OLTP) platform, and it is mandatory in EBS 12.2.

For those interested in learning WebLogic from scratch, we also provide WebLogic training, where you get a dedicated machine to practice and hone your skills, 24x7 technical support, and a 100% money-back guarantee if you are not satisfied.

Don’t forget to share this post if you think this could be useful to others and also Subscribe to this blog for more such FREE Webinars and useful content related to Oracle.

The post [Free Webinar] Learn Weblogic from Oracle ACE Atul Kumar & Oracle Expert Pavan appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Log Buffer #458: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-01 10:38

This Log Buffer Edition covers various useful tips and tricks from blogs for Oracle, SQL Server and MySQL.

Oracle:

  • pstack (or thread stack) for Windows to diagnose Firefox high CPU usage
  • With the ever-changing browser landscape, we needed to make some tough decisions as to which browsers and versions are going to be deemed “supported” for Oracle Application Express.  There isn’t enough time and money to support all browsers and all versions, each with different bugs and varying levels of support of standards.
  • Are you effectively using Java SE 8 streams for data processing? Introduced in Java 8, streams allow you to process data in a declarative way and leverage multi-core architectures without writing multithreaded code.
  • If you are upgrading but would like to increase or decrease the number of data sources, you must do so when installing the latest version.
  • When talking about BPM security, you need to know about a certain set of information and where that information will come from.

SQL Server:

  • Having fun with PARSENAME (SQL Spackle)
  • Time and Space: How to Monitor Drive Space in SQL Server
  • Application Security with Azure Key Vault
  • Declarative SQL: Using CHECK() & DEFAULT
  • Microsoft SQL Server 2016 Public Preview Boosts Database Security

MySQL:

  • Have you heard the news? As of MySQL 5.7.8, MySQL includes a new JavaScript Object Notation (JSON) data type that enables more efficient access to JSON-encoded data.
  • Oracle MySQL 5.7 Database Nears General Availability
  • Most of you know, that it is possible to synchronize MySQL and MariaDB servers using replication. But with the latest releases, it is also possible to use more than just two servers as a multi-master setup.
  • Transport Layer Security (TLS, also often referred to as SSL) is an important component of a secure MySQL deployment, but the complexities of properly generating the necessary key material and configuring the server dissuaded many users from completing this task.
  • Managing MySQL Replication for High Availability

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

February 24, 2016: Solairus Aviation―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Mon, 2016-02-01 10:34
Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, February 24, 2016.

You will hear Mark Dennen, CFO from Solairus Aviation, talk about the company’s need for an integrated set of applications that was both easy to use and powerful enough to tie HCM and ERP business functions together.

Linda Fishman, Senior Director, Oracle HCM Cloud, will host this call and discuss with Mr. Dennen why the company chose Oracle HCM Cloud, the details of its selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the benefits of its new modern HR system.

Solairus Aviation is a US-based, full-service private aircraft management and charter company.

Register now to attend the live Forum on Wednesday, February 24, 2016, at 9:00 a.m. Pacific Time and learn more about Solairus Aviation’s experience with Oracle HCM Cloud.

LearningStudio and OpenClass End-Of-Life: Pearson is getting out of LMS market

Michael Feldstein - Mon, 2016-02-01 08:20

By Phil Hill

Pearson has notified customers that LearningStudio will be shut down as a standalone LMS over the next 2-3 years. Created from the Pearson acquisition of both eCollege and Fronter, LearningStudio has been targeted primarily at fully-online programs and associated hybrid programs – not for simple augmentation of face-to-face classes. The customer base has mostly included for-profit institutions as well as not-for-profit programs that are often packaged with an online service provider model (e.g. Embanet customers). As of this year, LearningStudio has approximately 110 customers with 1.2 million unique student enrollments.

This decision is not one isolated to LearningStudio, as the end-of-life notification caps a series of moves by Pearson to get out of the LMS market in general.

Less than a year ago I wrote a post about Texas Christian University claiming that Pearson was “getting out of the LMS market”, although during research for that story the administrator requested a change in the campus newspaper.

“Pearson is out of the learning management system game,” Hughes said. “We need something to evolve with the Academy of Tomorrow and where we’re moving to at TCU.”

Hughes said Pearson withdrew from the LMS search process for TCU but remains an LMS provider.

From 2007 through 2012, Pearson aggressively moved into the LMS market. In 2007 the company acquired eCollege for $477 million, taking it private. In 2008 Pearson acquired the European LMS provider Fronter. In 2009 Pearson announced LearningStudio as the rebranded combination of eCollege and Fronter, predominantly from eCollege. Then the big PR move came in 2011 with the splashy announcement of OpenClass, a “completely free” and “amazing” LMS that dominated the discussion at EDUCAUSE that year, partially due to “misleading headlines” implying a partnership with Google.

In the past year, however, Pearson has reversed all of these strategic moves. Announced last September, OpenClass will no longer be available as of January 2018. In November Pearson sold Fronter to itsLearning. And now LearningStudio (and in effect eCollege) is being retired. To be more precise, LearningStudio is being retired as a standalone LMS. What is not publicized is that LearningStudio internally provides the infrastructure and platform support for Pearson’s MyLabs & Mastering courseware. That internal platform will remain, but the external product will go away.

For this story Michael and I interviewed Curtiss Barnes, Managing Director of Technology Products for Pearson Global Higher Education.[1] Barnes confirmed the story and said that all LearningStudio customers have been notified, and that there are no plans for a public announcement or press release. Barnes said the decision to get out of the LMS category was based on Pearson’s continuing efforts to reorganize and streamline the diversified company, and being competitive in the LMS market just doesn’t help meet corporate goals.

So what platforms and technology products do meet corporate goals? Barnes said that Pearson does courseware really well, with over 12 million students on these platforms overall and approximately 2 million per day. He sees large distinctions between content-agnostic LMS solutions and courseware. Courseware might require certain features that overlap LMS features, but the fundamentals of what’s being delivered go well beyond content storage and management, calendaring, and other LMS basics to include instrumentation of content and science-based learning design. Barnes said that learning design is the key element they’re looking for as a company.

The front page for OpenClass now describes Pearson’s view on LMS and courseware markets.

On January 1, 2018, OpenClass will no longer be available to faculty, students, or administrators, and as of today, no new accounts will be created. You will be able to sign in and access OpenClass and we will maintain SLAs until January 1, 2018. We will also continue to provide Community Forum support and OpenClass Knowledge Base until this date.

At Pearson, we are relentlessly committed to driving learner outcomes and we see a bigger opportunity to provide value to our customers via programs such as MyLab & Mastering and REVEL, and through our professional services, such as curriculum design and online program management.

While the LMS will endure as an important piece of academic infrastructure, we believe our learning applications and services are truly “where the learning happens.” In short, withdrawing from the crowded LMS market allows us to concentrate on areas where we can make the biggest measurable impact on student learning outcomes.

Pearson has told customers that they still have engineers and operations teams to fully support continuing operations and mitigate bugs or issues affecting LearningStudio, but they are not developing new features. LearningStudio will remain available for customers through their existing contracts, but the earliest loss of support for any customer will be December 31, 2017 to allow customers whose contracts expire before then more time to select a different LMS and migrate their courses.

Michael and I pressed during the interview to see if Pearson is favoring one solution over another in their discussions with customers, but Barnes said that Pearson has decided to remain neutral. Customers are not being given recommendations on alternate solutions.

This move out of the LMS market by Pearson has a parallel with last year’s sale of PowerSchool, a Student Information System for the K-12 market. Pearson acquired PowerSchool from Apple in 2006, but it no longer made sense to try to be competitive in the SIS market.

Like the forced migration caused by WebCT and ANGEL end-of-life notices, there will now be more than 100 LMS changes triggered by this announcement. While the for-profit sector has taken big hits in enrollments over the past 3-4 years, there are still some very large online programs that now have to select a new LMS.

This has been an eventful year for the LMS market already, and it’s only one month old. Expect to see more movement and changes.

  1. Disclosure: Pearson is a client of MindWires Consulting on a separate project.

The post LearningStudio and OpenClass End-Of-Life: Pearson is getting out of LMS market appeared first on e-Literate.

Corporate Social Responsibility (Where Can We Serve?)

Rittman Mead Consulting - Mon, 2016-02-01 03:00

At Rittman Mead, we believe that people are more important than profit.
This manifests itself in two ways. First, we want to impact the world beyond data and analytics, and secondly, we want our employees to be able to contribute to organizations they believe are doing impactful work.

This year, we’ve put a Community Service requirement in place for all of our full-time employees.

We’ll each spend 40 hours this year serving with various nonprofits. Most of our team are already involved with some amazing organizations, and this “requirement” allows us to not only be involved after hours and on the weekends, but even during normal business hours.

We want to highlight a few team members and show how they’ve been using their Community Service hours for good.

Beth deSousa
Beth is our Finance Manager and she has been serving with Sawnee Women’s Club. Most of her work has been around getting sponsorship and donations for their annual silent auction. She’s also helped with upgrading a garden at the local high school, collecting toys and gift wrap for their Holiday House, and collecting prom dresses and accessories for girls in need.

Charles Elliott
Charles is the Managing Director of North America. He recently ran in the Dopey Challenge down at Disney World which means he ran a 5k, 10k, half marathon, and full marathon in 4 days. He did the run to raise funds for Autism Speaks. Charles was recognized as the third largest fundraiser for Autism Speaks at the Dopey Challenge!

David Huey
David is our U.S. Business Development rep. He recently served with the nonprofit Hungry For A Day for their Thanksgiving Outreach. He flew up to Detroit the week of Thanksgiving and helped serve over 8,000 Thanksgiving dinners to the homeless and needy in inner city Detroit.

Andy Rocha

Andy is our Consulting Manager. Andy is a regular volunteer and instructor with Vine City Code Crew. VC3 works with inner city youth in Atlanta to teach them about electronics and coding.

Pete Tamisin

Pete is a Principal Consultant. He is also involved as a volunteer and instructor with the aforementioned Code Crew. Pete has taught a course using Makey Makey electronic kits for VC3.

This is just a sample of what our team has done, but engaging in our local communities is something that Rittman Mead is striving to make an integral piece of our corporate DNA.
We can’t wait to show you how we’ve left our communities better in 2016!

The post Corporate Social Responsibility (Where Can We Serve?) appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Trace Files -- 11 : Tracing the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-01-31 07:53
So far, the previous examples have been on tracing the Execution of SQL statements and/or the Execution Plan used.

But what if you want to trace the Optimization of an SQL statement, that is, identify how the Optimizer determined an "optimal" execution plan?

Note : Pre-11g methods involved Event 10053.   But as with Event 10046, I prefer to use methods where I don't have to use an Event Number but a Name.  So, here I am not demonstrating the Event 10053 method itself.
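For reference, the traditional Event 10053 method uses syntax along these lines (shown only for comparison with the named-event method demonstrated below) :

SQL> alter session set events '10053 trace name context forever, level 1';
SQL> -- run (hard parse) the SQL to be analysed, then :
SQL> alter session set events '10053 trace name context off';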

Let's assume that there is a particular SQL identified as SQL_ID='b086mzzp82x7w' for which we need to know not just the Execution Plan but also how Oracle arrived at the Execution Plan.

Here's one way :

SQL> alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';

Session altered.

SQL> select 'abc' from dual;

'AB
---
abc

SQL> select count(*) from small_insert;

COUNT(*)
----------
4

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> select count(*) from all_objects_many_list;

COUNT(*)
----------
7254201

SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3102.trc

SQL>

I executed multiple SQLs in the session but am interested in the Optimization of only one SQL.  Note how the specific SQL_ID is specified in the ALTER SESSION SET EVENTS command.

The resultant trace file is very long, with a listing of all the instance/session parameters (hidden and public), all the Bug Fixes, and the costing done for the SQL.  The trace file captures only the SQL of interest; all the other SQLs in the same session are *not* in the trace file.

Here is an extract from the trace file :

Registered qb: SEL$1 0x2173aea0 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................


***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=12691376846034531580
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=58632), execution(in-use=2504, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57184, alloc=58632), execution(in-use=2504, alloc=4032)

kkoqbc-subheap (create addr=0x7f5f216ff9d0)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105
kkoqbc-subheap (delete addr=0x7f5f216ff9d0, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59704, alloc=62776), execution(in-use=2504, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60616, alloc=62776), execution(in-use=2504, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=b086mzzp82x7w plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
sql_text_length=71
sql=select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................

Query Block Registry:
SEL$1 0x2173aea0 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90576, alloc=152120), execution(in-use=6440, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================

The trace file captured only the SQL of interest.  It also shows all the instance/session parameters and Bug Fixes that are relevant (these are very long lists, so I have not reproduced them in their entirety).
Note : The listing of parameters and Bug Fixes is very important: if you see different execution plans in two different databases, you must verify the parameters and bug fixes and ensure that any differences between them are not relevant.
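A rough way to compare the two listings is to extract them from each trace and diff the results (a sketch; the trace file names here are hypothetical, and the exact layout of the parameter lines can vary by version) :

$ grep "optimizer_" db1_ora_1234.trc | sort > db1_params.txt
$ grep "optimizer_" db2_ora_5678.trc | sort > db2_params.txt
$ diff db1_params.txt db2_params.txt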

From the trace file, we can determine that this is the Execution Plan chosen :
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

The computation of Cost appears here :
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

Note how different Access Paths (Table Scan, Index FFS, IndexOnly, IndexRange) are all listed.  The Best is shown as an IndexRange on the ALL_OBJ_M_L_CRTD_NDX with a Cost of 3599.20.  More details appear here :
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105

This is a very detailed listing for an SQL query on a single Table (no joins) and a single candidate index.  Try running this with an SQL that joins two or more tables, with more than one candidate Index on each, and see how complicated the Cost calculation becomes.


Note : To disable tracing in the session, I would run :

ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';


This sort of tracing can also be done with ALTER SYSTEM if you are not sure which session will be running the SQL_ID of interest and cannot interactively invoke the SQL from a private session.  
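A sketch of the system-wide form (same event syntax as above; remember to turn it off once the trace has been captured) :

ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';
-- ... wait for the SQL of interest to be hard parsed, then :
ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';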



Categories: DBA Blogs

Multisessioning with Python

Gary Myers - Sun, 2016-01-31 00:27
I'll admit that I pretty constantly have at least one window either open into SQL*Plus or at the command line ready to run a deployment script through it. But there are times when it is worth taking a step beyond.

One problem with the architecture of most SQL clients is that they connect to a database, send off a SQL statement, and do nothing until the database responds with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.

Recently I needed to run a task in a number of schemas. We're all nicely packaged up and all I needed to do was execute a procedure in each of the schemas and we can do that from a master schema with appropriate grants. However the tasks would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours and we also didn't want to set them all off at once through the job scheduler due to the workload. Ideally we wanted a few running concurrently, and when one finished another would start. I haven't found an easy way to do that in the database scheduler.

Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]

proc connects to the database, executes the procedure (in this demo just setting the client info with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. It then maps the strings to the pool, so A, B and C will start; as they finish, D, E, F and G will be processed as threads become available.

In my case the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;' (see the sketch after the listing).

#!/usr/bin/python

import cx_Oracle, time
from multiprocessing.dummy import Pool as ThreadPool

"""
Global variables
"""

db    = 'host:port/service'
user  = 'scott'
pwd   = 'tiger'

def proc(arg):
   con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
   cur = con.cursor()
   # Tag the session (visible in v$session.client_info) and pause,
   # so the concurrent sessions can be observed from the database
   cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',{'info':arg})
   time.sleep(10)
   cur.close()
   con.close()
   return

strs = [
  'A',  'B',  'C',  'D',  'E',  'F',  'G'
  ]

# Make the Pool of workers
pool = ThreadPool(3)
# Pass the elements of the array to the procedure using the pool
#  In this case no values are returned so the results is a dummy
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
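A sketch of that schema-task variant (hypothetical: it assumes each schema exposes a procedure named task that the connecting user has been granted execute on) :

def run_task(arg):
   con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
   cur = con.cursor()
   # arg comes from our own controlled list of schema names
   cur.execute('begin ' + arg + '.task; end;')
   cur.close()
   con.close()
   return

schemas = ['SCHEMA_A', 'SCHEMA_B', 'SCHEMA_C', 'SCHEMA_D']
pool = ThreadPool(3)          # three tasks running at any one time
pool.map(run_task, schemas)
pool.close()
pool.join()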

PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.

If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.



Upgrading Oracle Apps (EBS) to 12.2: OPatch stopped with error “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found”

Online Apps DBA - Sat, 2016-01-30 13:13
This entry is part 5 of 6 in the series Oracle EBS 12.2 Upgrade

This post is from our Oracle EBS Upgrade R12.2 training where we cover Architecture, Overview of R12.2 & Major features in Upgrading to R12.2, Different upgrade paths available to R12.2, Best practices for R12.2 Upgrade, How to Minimize down time for R12.2 Upgrade, Difficulties/Issues while upgrading to R12.2.

One of the upgrade trainees from our previous batch hit the issue “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found” while applying the latest AD and TXK patch ‘20642039’ in Oracle E-Business 12.2.

Issue:

1. We applied the latest AD and TXK patch ‘20642039‘ in Oracle E-Business 12.2 as follows:

export ORACLE_HOME=/u01/oracle/PROD122/fs1/FMW_Home/oracle_common 

Note: Here /u01/oracle/PROD122 is the ORACLE_BASE where Oracle EBS 12.2 is installed, and as the patch is for the Oracle Common home in Fusion Middleware, we set ORACLE_HOME accordingly.

export PATH=/u01/oracle/PROD122/fs1/FMW_Home/oracle_common/OPatch:$PATH

cd $PATCH_TOP/20642039

opatch apply

And opatch stopped with the messages below:

Applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’ 
Verifying environment and performing prerequisite checks… 
OPatch system modification phase did not start: 
Patch “20642039” is not needed since it has no fixes for this Oracle Home. Please see log file for details. 
Log file location: /u01/oracle/PRD12238/fs1/FMW_Home/oracle_common/cfgtoollogs/opatch/20642039_Mar_10_2010_17_22_28/apply2010-03-10_17-22-27PM_1.log

OPatch stopped on request.

2. We then looked into the log file at /u01/oracle/PRD12238/fs1/FMW_Home/oracle_common/cfgtoollogs/opatch/20642039_Mar_10_2010_17_22_28/apply2010-03-10_17-22-27PM_1.log, which showed the error message below:

[Mar 13, 2010 1:09:00 AM]    ——————— Oracle Home discovery ———————
[Mar 13, 2010 1:09:00 AM]    OUI-67086:ApplySession applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’
[Mar 13, 2010 1:09:00 AM]    Applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’
[Mar 13, 2010 1:09:00 AM]    Starting to apply patch to local system at Sat Mar 13 01:09:00 GMT 2010
[Mar 13, 2010 1:09:00 AM]    Verifying environment and performing prerequisite checks…
[Mar 13, 2010 1:09:02 AM]    Start the Apply initScript at Sat Mar 13 01:09:02 GMT 2010
[Mar 13, 2010 1:09:02 AM]    Finish the Apply initScript at Sat Mar 13 01:09:02 GMT 2010
[Mar 13, 2010 1:09:02 AM]    ——————— Prerequisite for apply ———————
[Mar 13, 2010 1:09:02 AM]    Running prerequisite checks…
[Mar 13, 2010 1:09:02 AM]    Patch “20642039” is ignored as it is not a “Fusion Applications patch”.
[Mar 13, 2010 1:09:02 AM]    Check if patch “20642039” is a no-op patch.
[Mar 13, 2010 1:09:02 AM]    Found a higher component in OH inventory: oracle.as.common.clone, 11.1.1.6.0
[Mar 13, 2010 1:09:02 AM]    [ oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found. ]

Fix: Since the version is already 11.1.1.7.0, this patch is not applicable. In this case there is another patch, mentioned in DOC ID 1903052.1: the latest AD-TXK codelevel has a dependency on Oracle Fusion Middleware.

So we applied patch 20756887, and it completed successfully.

If you are applying Delta 6 to a system on a pre-Delta 5 AD-TXK codelevel, you must apply the Oracle Fusion Middleware patch before proceeding with the AD and TXK patches. If you do not apply this patch, application of the TXK patch will fail.

If you want to learn more about upgrading Oracle EBS to R12.2, then click the button below and register for our Oracle Upgrade 12.2 training (next batch starts on 20th February, 2016).

Note: We are so confident in our workshops that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after the first session, just drop us a mail before the second session and we’ll refund the full amount (or ask any of our hundreds of happy trainees in our private Facebook Group).

Stay Tuned for more Information on Oracle Apps 12.2 Upgrade!!

Oracle E-Business Suite Upgrade to R12.2 Training

Live Instructor led Online sessions with Hands-on Lab Exercises, Dedicated Machines to Practice and Recorded sessions of the Training

Click here to learn more with limited time discounts

The post Upgrading Oracle Apps (EBS) to 12.2: OPatch stopped with error “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found” appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Video: Database as a Service (DBaaS) on Oracle Cloud

Tim Hall - Sat, 2016-01-30 09:29

The latest video on my YouTube Channel is a run through of using the Database as a Service (DBaaS) offering on Oracle Cloud.

There have been a few minor changes in the interface since I last ran through capturing images, so the related article has been brought up to date.

I used my dad for the cameo in this video. Hopefully this will help him get a little more recognition, as he’s pretty much a nobody on the Oracle scene at the moment. With your help this could change!

Cheers

Tim…

Update: Almost as soon as I released this blog post the footage was out of date as Oracle released some minor changes to the interface. I rerecorded the video and re-uploaded it, so it is up to date as of now. All links from my website and this blog post point to the new video. If you have read this post via an RSS reader, you may still be seeing the old version of the post, and as a result see the link to the video as broken. But in that case, you won’t be able to read this either. :)


Oracle Database 12c Features Now Available on apex.oracle.com

Joel Kallman - Sat, 2016-01-30 06:42
As a lot of people know, apex.oracle.com is the customer evaluation instance of Oracle Application Express (APEX).  It's a place where anyone on the planet can sign up for a workspace and "kick the tires" of APEX.  After a brief signup process, in a matter of minutes you have access to a slice of an Oracle Database, Oracle REST Data Services, and Oracle Application Express, all easily accessed through your Web browser.

apex.oracle.com has been running Oracle Database 12c for a while now.  But a lot of the 12c-specific developer features weren't available, simply because the database initialization parameter COMPATIBLE wasn't set to 12.0.0.0.0 or higher.  If you've ever tried to use one of these features in SQL on apex.oracle.com, you may have run into the dreaded ORA-00406.  But as of today (January 30, 2016), that's changed.  You can now make full use of the 12c specific features on apex.oracle.com.  Even if you don't care about APEX, you can still sign up on apex.oracle.com and kick the tires of Oracle Database 12c.

What are some things you can do now on apex.oracle.com? You can use IDENTITY columns.  You can generate a default value from a sequence.  You can specify a default value for explicit NULL columns.  And much more.
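For instance, quick sketches of each (the table, sequence, and column names here are purely illustrative):

  -- An IDENTITY column
  CREATE TABLE t1 (
    id   NUMBER GENERATED ALWAYS AS IDENTITY,
    name VARCHAR2(50));

  -- A default value generated from a sequence
  CREATE SEQUENCE t2_seq;
  CREATE TABLE t2 (
    id   NUMBER DEFAULT t2_seq.NEXTVAL,
    name VARCHAR2(50));

  -- A default value applied even when NULL is inserted explicitly
  CREATE TABLE t3 (
    status VARCHAR2(10) DEFAULT ON NULL 'NEW');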

You might wonder what's taken so long, and let's just say that sometimes it takes a while to move a change like this through the machinery that is Oracle.

P.S.  I've made the request to update MAX_STRING_SIZE to EXTENDED, so you can define column datatypes up to VARCHAR2(32767).  Until this is implemented, you're limited to VARCHAR2(4000).
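To check which mode an instance is currently running, a quick query works:

  SELECT value FROM v$parameter WHERE name = 'max_string_size';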

node-oracledb 1.6.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Sat, 2016-01-30 06:07
Node-oracledb 1.6.0, the Node.js add-on for Oracle Database, is on NPM.

In this release a comprehensive pull request by Dieter Oberkofler adds support for binding PL/SQL Collection Associative Array (Index-by) types. Strings and numbers can now be bound and passed to and from PL/SQL blocks. Dieter tells us that nowadays he only gets to code for a hobby - keep it up Dieter!

Using PL/SQL Associative Arrays can be a very efficient way of transferring data between an application and the database because it can reduce the number of 'round trips' between the two.

As an example, consider this table and PL/SQL package:

  CREATE TABLE mytab (numcol NUMBER);

  CREATE OR REPLACE PACKAGE mypkg IS
    TYPE numtype IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    PROCEDURE myinproc(p IN numtype);
  END;
  /

  CREATE OR REPLACE PACKAGE BODY mypkg IS
    PROCEDURE myinproc(p IN numtype) IS
    BEGIN
      FORALL i IN INDICES OF p
	INSERT INTO mytab (numcol) VALUES (p(i));
    END;
  END;
  /

With this schema, the following JavaScript will result in mytab containing five rows:

  connection.execute(
    "BEGIN mypkg.myinproc(:bv); END;",
    {
      bv: { type : oracledb.NUMBER,
	    dir: oracledb.BIND_IN,
	    val: [1, 2, 23, 4, 10]
	  }
    },
    function (err) { . . . });

There is a fuller example in examples/plsqlarray.sql and check out the documentation.
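Binds work the other way too. A rough sketch of an OUT bind (hypothetical: it assumes a companion procedure mypkg.myoutproc(p OUT numtype) exists; note that an OUT array bind needs maxArraySize set):

  connection.execute(
    "BEGIN mypkg.myoutproc(:bv); END;",
    {
      bv: { type : oracledb.NUMBER,
	    dir: oracledb.BIND_OUT,
	    maxArraySize: 10 }
    },
    function (err, result) {
      if (err) { console.error(err); return; }
      console.log(result.outBinds.bv);  // an array of numbers
    });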

Other changes in node-oracledb 1.6 are

  • @KevinSheedy sent a GitHub Pull Request for the README to help the first time reader have the right pre-requisites and avoid the resulting pitfalls.

  • Fixed a LOB problem causing an uncaught error to be generated.

  • Removed the 'close' event that was being generated for LOB Writables Streams. The Node.js Streams doc specifies it only for Readable Streams.
  • Updated the LOB examples to show connection release.

  • Extended the OS X install section with a way to install on El Capitan that doesn't need root access for Instant Client 11.2. Thanks to @raymondfeng for pointing this out.

  • Added RPATH to the link line when building on OS X in preparation for future client.

TypeScript users will be happy to hear Richard Natal recently had a node-oracledb TypeScript type definition file added to the DefinitelyTyped project. This is not part of node-oracledb itself, but Richard later mentioned he found a way it could be incorporated. Hopefully he will submit a pull request and it will make it directly into the project so it can be kept in sync.

Thanks to everyone who has worked on this release and kept the momentum going.

What's coming up for the next release? There is discussion about adding a JavaScript layer. This was kicked off by a pull request from Sagie Gur-Ari, which has led to some work by Oracle's Dan McGhan. See the discussion and let us know what you think. Having this layer could make it quicker and easier for JavaScript coders to contribute to node-oracledb and do things like reduce API inconsistency, make it easier to add a promise API in future, and of course provide a place to directly add Sagie's Streaming query result suggestion that started the whole thing.

I know a few contributors have recently submitted the Oracle Contributor Agreement ready to do big and small things - every bit counts. I look forward to being able to incorporate your work.

I've heard a couple of reports that Node LTS 4.2.6 on Windows is having some issues building native add-ons. 0.10, 0.12, 5.x and 4.2.5 don't have issues. Drop me a line if you encounter a problem.

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Not Another Standby Monitoring Script

Michael Dinh - Fri, 2016-01-29 21:55

I know what you are thinking. Not another standby monitoring script, right?

There’s plenty of them out there already and what makes this one any different?

You are probably correct and there’s probably nothing different about this.

Having to work with inconsistent environments (manual standby, Data Guard without broker, Data Guard with broker), I have been obsessed with creating a standby SQL script that can accommodate any environment and can be run from either the primary or the standby.
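The full stby.sql script is not reproduced here, but the first section of its output comes from a query along these lines (a sketch; the real script adds formatting and column aliases):

select db_unique_name, open_mode, database_role, remote_archive,
       switchover_status, dataguard_broker, primary_db_unique_name
from v$database;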

From PRIMARY: Recovery is IDLE
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:05 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  IDLE                    NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       2        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10153          0

ARROW:(SYS@hawklas):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From STANDBY: Recovery is IDLE
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawksan
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:11 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawksan        READ ONLY            PHYSICAL STANDBY ENABLED  NOT ALLOWED          ENABLED   hawklas

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   ARCH              0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   RFS               0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN_READ-ONLY  IDLE                                    NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       1        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 882 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                 +00 01:14:48       day(2) to second(0) interval

*** gv$managed_standby ***

no rows selected

ARROW:(SYS@hawksan):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From DG Broker: SET STATE=’APPLY-ON’

Note: Apply Lag is NOT really 1 hour.

Look at the time difference between SQL*Plus sessions.

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:11 2016
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:28 2016
oracle@arrow:hawksan:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-OFF
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       1 hour(s) 17 minutes 16 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> EDIT DATABASE hawksan SET STATE='APPLY-ON';
Succeeded.
DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 2 seconds ago)
  Apply Lag:       1 hour(s) 17 minutes 47 seconds (computed 2 seconds ago)
  Apply Rate:      465.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> exit
From PRIMARY: Recovery is STILL IDLE
oracle@arrow:hawksan:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawksan] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:20 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  IDLE                    NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       2        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10402          0

ARROW:(SYS@hawklas):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From STANDBY: Recovery is MANAGED REAL TIME APPLY
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawksan
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:28 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawksan        READ ONLY WITH APPLY PHYSICAL STANDBY ENABLED  NOT ALLOWED          ENABLED   hawklas

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   ARCH              0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   RFS               0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN_READ-ONLY  MANAGED REAL TIME APPLY                 NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 IN-MEMORY      886 29-JAN-2016 17:50:04         1 .0333333   --- STANDBY LAG
       1        1 YES            885 29-JAN-2016 17:50:02

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                 +00 00:00:00       day(2) to second(0) interval

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    4463        1        1 MRP0      N/A      APPLYING_LOG       887    10409          0

ARROW:(SYS@hawksan):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:hawksan:/media/sf_working/dataguard
$
Let’s check again. From PRIMARY: Recovery is MANAGED REAL TIME APPLY
oracle@arrow:hawksan:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawksan] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:09:14 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  MANAGED REAL TIME APPLY NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         1 .0333333   --- STANDBY LAG
       2        1 YES            885 29-JAN-2016 17:50:02

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10512          0

ARROW:(SYS@hawklas):PRIMARY>

The SQL can be downloaded from my public Google Drive.

Look all the way to the right under Menu, or search the page for PublicGoogleDrive.
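
For reference, the gv$archived_log report shown throughout this post can be reproduced with a query along these lines. This is a minimal sketch rather than the exact script from the drive: the DELTA_SEQ and DETA_MIN columns in the output are simply the differences in sequence and completion time between the APPLIED='NO' and APPLIED='YES' rows.

select dest_id, thread#, applied,
       max(sequence#)       max_seq,
       max(completion_time) max_time
from   gv$archived_log
group by dest_id, thread#, applied
order by dest_id, thread#, applied desc;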


Table Scans

Jonathan Lewis - Fri, 2016-01-29 06:49

It’s amazing how easy it is to interpret a number incorrectly until the point comes where you have to look at it closely – and then you realise that there was a lot more to the number than your initial casual assumption, and you would have realised it all along if you’d ever needed to think about it before.

Here’s a little case in point. I have a simple (i.e. non-partitioned) heap table t1 which is basically a clone of the view dba_segments, and I’ve just connected to Oracle through an SQL*Plus session and then run a couple of SQL statements. The following is a continuous log of my activity:


SQL> select table_name, partitioned, blocks from user_tables;

TABLE_NAME           PAR     BLOCKS
-------------------- --- ----------
T1                   NO         958

1 row selected.

SQL> select name, value from v$mystat ms, v$statname sn where ms.statistic# = sn.statistic# and name like '%table scan%';

NAME                                  VALUE
-------------------------------- ----------
table scans (short tables)                9
table scans (long tables)                 0
table scans (rowid ranges)                0
table scans (cache partitions)            0
table scans (direct read)                 0
table scan rows gotten                 4188
table scan blocks gotten                 14

7 rows selected.

SQL> select count(extents) from t1;

COUNT(EXTENTS)
--------------
          5143

1 row selected.

SQL> select name, value from v$mystat ms, v$statname sn where ms.statistic# = sn.statistic# and name like '%table scan%';

A few details about the setup:

  • My cache size is quite small, so t1 doesn’t count as a “short” table.
  • I’ve collected stats on the table (and there are no indexes), so the optimizer doesn’t need to do any dynamic sampling to generate an execution plan.
  • This is 11g, so there are no SQL Plan Directives in place to force dynamic sampling.

So here’s the question: how many “table scans (long tables)” will Oracle record against my session when I re-run that query against v$mystat ?

Warning – this IS a trick question.

Update number 1

I said it was a trick question and, as you will have seen if you’ve had time to read the comments, the answer is going to depend on various configuration options. Here’s what I got in my test – and I’ve reported not just the session stats, but the segment statistics (v$segment_statistics):


NAME                                  VALUE
-------------------------------- ----------
table scans (short tables)                9
table scans (long tables)                26
table scans (rowid ranges)               26
table scans (cache partitions)            0
table scans (direct read)                 0
table scan rows gotten                 9331
table scan blocks gotten                954

7 rows selected.

SQL> select owner, object_name, object_type, value  from v$segment_statistics where owner = 'TEST_USER' and statistic_name = 'segment scans' and value != 0;

OWNER           OBJECT_NAME          OBJECT_TYPE             VALUE
--------------- -------------------- ------------------ ----------
TEST_USER       T1                   TABLE                      26

1 row selected.

I recorded 26 long table scans – and that 26 was echoed in the segment statistics. (Note: I don’t normally use v$segment_statistics, which is a join between a couple of in-memory structures and three real tables; I usually query v$segstat.) It’s the segment statistics that made me pause in a recent problem review; up to that moment I had been using the “Segments by Table Scans” section of the AWR Report as a useful (but badly named, since it also counts (most) index fast full scans) indicator of a potential threat, then I suddenly realised that there was yet another important detail I had to check before I could determine what the numbers were telling me.
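
For reference, an equivalent check against v$segstat avoids the dictionary join; here is a minimal sketch, where the second query (with the hypothetical &obj substitution) is only needed when you want to translate an interesting obj# into a name:

select obj#, statistic_name, value
from   v$segstat
where  statistic_name = 'segment scans'
and    value != 0;

select owner, object_name, object_type
from   dba_objects
where  object_id = &obj;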

So the next question is WHY has Oracle reported 26 tablescans ?

Update number 2

Martin Preiss (see comments) has been working hard to investigate this, and managed to produce a couple more variations in statistics for “the same” problem. In my case I had declared the table as “parallel 2”. For a table of degree N Oracle is very likely to break the table into 13*N chunks – there’s a historic reason for 13 – and that’s why I see 26 “rowid range” scans which, unfortunately, are also recorded as “long” table scans even though each is individually well short of the short table limit. This is really annoying when you start to look at the “Segments by Table Scan” report because you now don’t know how many times a table really was scanned unless you know something about the run-time degree of parallelism involved.

If you check the hidden parameters you will find several parameters relating to this chunking, in particular:


_px_min_granules_per_slave        minimum number of rowid range granules to generate per slave (default  13)
_px_max_granules_per_slave        maximum number of rowid range granules to generate per slave (default 100)
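
If you want to inspect these for yourself, a query against the underlying x$ structures will do it; a minimal sketch (it requires a SYS connection, or select access granted on the x$ views):

select x.ksppinm  name,
       y.ksppstvl value,
       x.ksppdesc description
from   x$ksppi x, x$ksppcv y
where  x.indx = y.indx
and    x.ksppinm like '%granules_per_slave%';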

Technically, therefore, Oracle MIGHT get to 100 granules per parallel slave, and a tablescan at degree 2 could be recorded as 200 tablescans!

Martin asked why my 26 tablescans didn’t show up as “table scans (direct read)” – but that’s 11g for you: it allows a serial tablescan to use direct path reads, and it allows parallel tablescans to read into the cache, and the combination of cache size and table size meant that my example just happened to read into the cache.

And that introduces ANOTHER interpretation problem – what annoying things might I discover if I declare the table with the CACHE option (as Ivica suggested in his first comment below)?

Final Update (probably)

Ivica has been busy checking the CACHE operation and has shown that there are a number of cases to consider (more, in fact, than I was planning to mention – and the variations in the combination of NOPARALLEL and CACHE should give you some pause for thought). The only point I wanted to make was the effect of enabling PARALLEL and CACHE; I don’t think that this is a combination that is very likely to appear in a production system, but boundary conditions (and accidents) do occur. With my little sample – even after I grew the table to be MUCH larger – the tablescan DIDN’T get reported: here are some session stats and segment stats (taken using my snapshot code) of a single tablescan running parallel 2 when the table was cached:


Name                                                                     Value
----                                                                     -----
table scans (rowid ranges)                                                  26
table scans (cache partitions)                                              26
table scan rows gotten                                                 164,672
table scan blocks gotten                                                29,611

  Statistic                                    Value
  ---------                             ------------

T1
  logical reads                               30,272
  physical reads                              29,614
  physical read requests                         261

I know I’ve done a tablescan in 26 pieces (parallel 2) – and scanned 29,000+ blocks doing it; but according to the segment stats AND session stats I haven’t done a tablescan. Fortunately, of course, I can see the parallel tablescan in the session stats, and in this isolated case I can see from the “cache partitions” statistics that the tablescan was on a cached table. But if I’ve been depending on the segment stats to tell me which tablescans happen most frequently and do most work, I’ve found another reason why I can’t trust the stats and have to do more work cross-checking different parts of the AWR for self-consistent results.

I don’t expect to add any more comments about this mixture of tablescans and parallelism, with the varying effects on the session and segment statistics – but there’s always the possibility that one day I’ll start to worry about how the KEEP and RECYCLE caches (db_keep_cache_size and db_recycle_cache_size) could confuse things further.

Footnote:

It’s probably worth pointing out that the segment statistics have never recorded SHORT tablescans, they’ve only ever captured details of LONG tablescans. There is a slight inconsistency here, though, since they capture all the “index fast full scans (full)” reported in the session stats whether they are short or long – not that the session stats record the difference; unfortunately, another flaw creeps in: parallel index fast full scans “disappear” from the stats, although they show up as “index fast full scans (rowid ranges)” and “index fast full scans (direct)” with the same multiplier of 13 that we see for parallel table scans.


What PeopleSoft content was popular in 2015?

Duncan Davies - Thu, 2016-01-28 17:48

The ‘Year in Blogging’ reports have come through so I can see what posts and newsletter items garnered the most views.

PeopleSoft Tipster Blog

So, according to the summary, this blog was visited 130,000 times during the year, an average of ~350/day with the busiest day being just over double that at 749 visitors. About 50% of the traffic is from the US, 15% from India, and 5% from the UK and Canada.

Amazingly, the most viewed post was one written prior to 2015, about PeopleSoft Entity Relationship Diagrams. The most popular post that was actually authored last year was The Future of PeopleSoft video with Marc Weintraub, followed by PeopleSoft and Taleo integration, the Faster Download of PeopleSoft Images and the profile of Graham Smith and how he works.

The PeopleSoft Weekly Newsletter

The PSW newsletter seems to go from strength to strength. During 2015 the subscriber base rose from 919 to 1,104, an increase of roughly 20%. The ‘open rate’ sits around 40% for any one issue (against an industry average of 17%), with the US accounting for 55% of readers, the UK 15% and India 10%.

The top articles in terms of clicks were:

  1. Gartner’s Report on Oracle’s Commitment to PeopleSoft (263 clicks)
  2. Randy ‘Remote PS Admin’ on Forcing Cache Clears (198)
  3. PeopleSoft Planned Features and Enhancements (180)
  4. 5 Life Lessons I Learned at PeopleSoft (167)
  5. Dan Sticka on stopping writing Record Field PeopleCode (166)
  6. Greg Kelly’s Security Checklist from Alliance (155)
  7. Virginia Ebbeck’s list of PeopleSoft Links (145)
  8. Greg Wendt of Grey Heller on the PS Token Vulnerability (142)
  9. Dennis Howlett on the Oracle vs Rimini St court battle (142)
  10. Wade Coombs on PeopleSoft File Attachments (140)
  11. I’m Graham Smith and this is How I Work (139)
  12. Graham’s PeopleSoft Ping Survey (135)
  13. How to write an efficient PeopleCode (134)
  14. Mohit Jain on Tracing in PeopleSoft (131)
  15. The 4 types of PeopleSoft Testing (130)
  16. PS Admin.io on Cobol (127)
  17. Matthew Haavisto on the Cost of PeopleSoft vs SaaS (124)
  18. The PeopleSoft Spotlight Series (119)
  19. Prashant Tyagi on PeopleSoft Single Signon (118)
  20. Adding Watermarks to PeopleSoft Fields (116)



Bad to crosscheck archivelog all

Michael Dinh - Thu, 2016-01-28 16:44

Typically, it’s not a good idea to have “crosscheck archivelog all;” in backup scripts, and
it’s even worse to have “delete expired archivelog all;” since any evidence will be eradicated.

When you don’t crosscheck archivelogs and an archivelog is missing, the archivelog backup will fail.
When you do crosscheck and an archivelog is missing, the backup does not fail, since RMAN marks the archivelog as expired and ignores it.
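
In script form, the risky pattern looks like this; a sketch of a typical nightly sequence, shown only to illustrate the problem:

crosscheck archivelog all;
delete expired archivelog all;
backup archivelog all;

The demo below walks through what happens to a missing archivelog under each step of this pattern.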

oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:17:35 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     A 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc

RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit
oracle@arrow:hawklas:/tmp
$ mv -v /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc /tmp/
`/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc' -> `/tmp/o1_mf_1_873_cbo4v850_.arc'
oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:18:30 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:18:40
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 01/28/2016 14:18:46
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
RMAN> crosscheck archivelog all;
released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
Crosschecked 1 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Crosschecked 2 objects

RMAN> backup archivelog all;

Note: sequence=873 is missing, yet no errors are raised.

Starting backup at 2016-JAN-28 14:19:09
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:19:12
piece handle=/oradata/backup/HAWK_3130551611_20160128_jdqsgqev_1_1_1645_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_3130551611_20160128_jeqsgqev_1_1_1646_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 2016-JAN-28 14:19:14

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-09 comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:19:21
RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit


Recovery Manager complete.
oracle@arrow:hawklas:/tmp
$

RMAN> delete expired archivelog all;
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     X 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc


Do you really want to delete the above objects (enter YES or NO)? yes
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Deleted 2 EXPIRED objects
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:36:20
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:36:21
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=876 RECID=5207 STAMP=902327780
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:36:22
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jgqsgrf5_1_1_1648_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jhqsgrf5_1_1_1649_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-28 14:36:23

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-0a comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:36:30

RMAN>
Updated:

Does this mean we should never perform crosscheck archivelog all? No.

The point is to let the archivelog backup fail, so that the failure can be investigated.

If the investigation shows the archivelog is indeed missing, it may be better to perform a Level 0 or Level 1 backup.
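
For example, a minimal sketch using standard RMAN syntax (channel and format clauses omitted):

RMAN> backup incremental level 0 database plus archivelog;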

RMAN> crosscheck archivelog all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=10 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_875_cbo4yy69_.arc RECID=5204 STAMP=902326750
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_876_cbo5z4lz_.arc RECID=5207 STAMP=902327780
Crosschecked 3 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc RECID=5209 STAMP=902336024
Crosschecked 1 objects
RMAN> list expired archivelog all;
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5209    1    877     X 2016-JAN-28 14:36:20
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc

RMAN> list backup of archivelog sequence 877;
specification does not match any backup in the repository

RMAN>

Is the DBA Career Dead?

Pythian Group - Thu, 2016-01-28 16:00


With the increased popularity of cloud services, one of the questions that I often receive is: “Is the DBA career dying? What will you do for a living in the future?” In this article I will give my personal opinion about the future of our beloved profession, and try to calm down those who have already started to look for another career.

The first thing that I want to point out is that when we started to work in IT, we knew that it was a career different from most others out there. It is dynamic and exciting by nature, reinventing itself all the time, with new technologies showing up every single year and changing the entire landscape. We have chosen a field that pushes us to keep studying, learning and evolving, and this is the kind of mindset I want you to have while reading this article.

The Database Administrator role is not going anywhere. We are not an endangered species and won’t become one in the foreseeable future. Cloud is not our enemy. The data market is just evolving, and the cloud is bringing a lot of new things that will give us more power and more options.

In today’s market we have two very common problems:

  1. Companies can’t find enough people to fill all positions.

We all know this one. I’m sure we all know several companies that have had an open position for months, have interviewed dozens of people, and just can’t find anyone who suits the position.

  2. Companies want to keep their costs as low as possible.

Companies want to make money, and we had a big worldwide crisis just a few years ago that we are still recovering from. This means companies are trying to find ways to improve their productivity, while keeping their costs as low as possible.


In a scenario like this, cloud offerings come as an aid both to improve our productivity as DBAs and to help the company save money. Let’s think for a while about how many tasks we perform daily that don’t bring real value to the business. No doubt when we’re planning the new high availability solution, or doing performance tuning on that slow query, we can see the value that it will bring to the company. In the first case, this will guarantee that all applications are up and running at full speed when the company needs it. The latter will make sure that the server is handling the workload, running more sessions at the same time, and making both internal and external customers happy.

But how about the time you spent trying to find more disk space for all your databases? How about trying to find disk space for all your backups because the database has grown too large and we didn’t plan ahead? Then there’s all the time that you spend installing SQL and Windows patches. I know, in some big companies, we have a dedicated SAN admin and infrastructure administrators who will worry about those tasks, but that’s not everyone’s reality. The vast majority of small and medium companies have a small team that is responsible for multiple areas. Why? Scroll up and read problems 1 and 2 on my list above one more time.

I’ll wait for you.

Now, let’s imagine another reality. Let’s imagine a world where I receive a disk space alert for my backups. The company has acquired a new company, the database growth was much bigger than expected, and we ran out of disk space. I go to a web portal and a few mouse clicks later I have 1TB of disk available to me. All I have to do is open SQL Server Management Studio and change my backup jobs to use the new storage area. Problem solved in less than 15 minutes.

Let’s envision a world where I can take all those small databases that are not too important for the business (yeah, we all have a lot of those, don’t lie to yourself) and move them to the cloud so they don’t use our precious server resources. I don’t need to worry about patching and managing those databases. Wouldn’t that be great? And how about getting rid of the QA and testing servers and replacing them with virtual machines that we can just turn off when they are not in use, saving money? And those huge tables with hundreds of millions of rows that cause us problems every single day. Wouldn’t it be great if I could replace that complicated sliding window partition solution that we developed to manage historic data, and instead make SQL Server automatically move old and unused data to the cloud, while keeping the data available for end users in a transparent way?

The cloud is indeed shifting the dynamics of the DBA career, but it will not kill the database administrator role and destroy families. Instead, it will make us more efficient, providing tools and options that let us focus on the tasks that bring value to the company. It lets us use existing hardware more efficiently and makes our lives easier. Embrace the changes just like we embraced all the new technologies that came before, and use each one as a tool to be successful in your role.


Discover more about our expertise in the Cloud.

Categories: DBA Blogs

My Virtual Internship with Pythian

Pythian Group - Thu, 2016-01-28 14:38

My internship with Pythian started in October and has been an incredible opportunity to build upon my project management skills in a virtual setting. I never imagined working for a tech company. As business students, we are often characterized as lacking the hard skills needed for industries as competitive as those in STEM fields. After my internship with Pythian, I know that my perspective can bring value to a team, especially within the tech industry. My work with Pythian has inspired me to apply to computer science programs in Taiwan after I graduate in June.

During my time at Pythian I worked on the Pythia Program, which is Pythian’s commitment to improving gender diversity by encouraging, promoting and retaining women and young girls in STEM fields. I was able to work with managers across many different departments and learn how to be part of a virtual team, while building a plan for the Pythia Program.


Making an impact for women in STEM fields

The Pythia program is setting an incredible precedent for other tech companies in North America and I am very excited that I was a part of that process. Researching and compiling data on women in STEM fields, particularly the tech industries, was an eye-opening experience. Identifying the pain points for underrepresented groups in tech, particularly women, is key in developing solutions that encourage women to seek positions in this field.

I am looking forward to seeing Pythian’s impact in the local community with Technovation and the impact on the young women who will be walking the halls and learning from such great mentors. Pythian is not only making great strides for the young women in their local community, but for all women in tech by setting an example with their own diversity initiatives.


Working with the women of Pythian

While assigned to the Pythia Program, I was lucky to be working with women who were heads of their departments and brought a diverse range of skills to the table. Whether building communication plans with the marketing department, or measuring progress with the organizational development team, I was always challenged to look at the issue from different perspectives.

As a project manager it can be very easy to stay on the periphery and remain an outsider, and admittedly this was a concern of mine as an intern and a newcomer to Pythian. The level of trust that the OD department put in me, and their guiding hand in helping me navigate through the onboarding period, was instrumental to our success.

Working with women from many different technical backgrounds, I was able to learn much more than if I had stayed within one specific department. I cannot overstate how important it is as a young woman to work with other women on the Pythia Program. It was inspiring to work with accomplished women with so much expertise that they were willing to share.


Working virtually is a whole different ballgame

It has been said that virtual work can be trying and difficult for those new to a team; however, my time with Pythian was the complete opposite. I am very thankful to have been able to navigate a virtual internship with such incredible support from the OD team. The lines of communication have always been open, and this has been instrumental to our success on the Pythia Program.

Pythian’s managers made themselves accessible and available during my time on the Pythia program, and their guidance was excellent, as was learning from their experiences.

All in all, I could not have asked for a greater internship than my time at Pythian. I was able to work on a project that was important to me as a woman, while working with other women at Pythian. Together we made an impact within the organization and in the local tech community in Ottawa. In the months to come we will see the Pythia Program’s reach extend to others. For myself, the experience has made a strong impression as I apply to computer science programs abroad and see my own future in tech. I look forward to following the continued success of Pythian and the Pythia Program.


Find out more about the Pythia Program.

Categories: DBA Blogs

M2M, the Other IoT

Oracle AppsLab - Thu, 2016-01-28 11:50

Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it better describes what IoT is meant to do: machines talking to other machines.

This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M on our minds, we created a platform that allows machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.


Virtual Gated Community

Gated communities are a great commodity for those wanting privacy and security. The problem is that these communities usually come with a high price tag. So we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c board with WiFi and Bluetooth capabilities. We used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. Then we used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without a Bluetooth beacon drove by, the system would log and report it.

We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.

Devices reacting to events

We used AT&T Flow Designer and the M2X platform to create event workflows with notifications. A user or a device could subscribe to receive only the events they care about, such as a lost dog or cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to turn on automatically when someone nearby reported suspicious activity. If everyone on the street did the same, reporting such an event could effectively turn on all the porch lights in the street at once, which would be a pretty good crime deterrent.

We called our project “Neighborhood”, and we are still amazed at how much we were able to accomplish in merely 20+ hours.


Stinkin' Badges

Scott Spendolini - Thu, 2016-01-28 07:55
Ever since APEX 5, the poor Navigation Bar has taken a back seat to the Navigation Menu. And for good reason, as the Navigation Menu offers a much more intuitive and flexible way to provide site-wide navigation that looks great, is responsive and just plain works. However, the Navigation Bar can and does still serve a purpose. Most applications still use it to display the Logout link and perhaps the name of the currently signed-on user. Some applications also use it to provide a link to a user’s profile or something similar.

Another use for the Navigation Bar is to present simple metrics via badges. You’ve seen them before: the little red numbered icons that hover in the upper-right corner of an iPhone or Mac application, indicating that there’s something that needs attention. Whether you consider them annoying or helpful, truth be told, they are a simple, minimalistic way to convey that something needs attention.

Fortunately, adding a badge to a Navigation Bar entry in the Universal Theme in APEX 5 is tremendously simple. In fact, it's almost too simple! Here's what you need to do:
First, navigate to the Shared Components of your application and select Navigation Bar List. From there, click Desktop Navigation Bar. There will likely only be one entry there: Log Out.

[screenshot: Desktop Navigation Bar list entries]

Click Create List Entry to get started. Give the new entry a List Entry Label and make sure that its sequence number is lower than that of the Log Out link. This will ensure that your badged item displays to the left of the Log Out link. Optionally add a Target page. Ideally, this will be a modal page that will pop open from any page and show the detail behind whatever the badge is conveying. Next, scroll down to the User Defined Attributes section. Enter the value that you want the badge to display in the first (1.) field. Ideally, you should use an Application or Page Item here with the substitution notation &ITEM_NAME. (note the trailing period). But for simplicity’s sake, it’s OK to enter a value outright.
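
For example, the badge value could come from an application computation that runs on every page load; a minimal sketch, assuming a hypothetical NOTIFICATIONS table and a hypothetical application item named G_BADGE_COUNT, which you would then reference in the attribute as &G_BADGE_COUNT. (again, note the trailing period):

select count(*)
from   notifications
where  user_name = :APP_USER
and    is_read = 'N'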
Run your application, and have a look:

[screenshot: navigation bar showing the new badge]

Not bad for almost no work. But we can make it a little better. You can control the color of the badge with a single line of CSS, which can easily be dropped into the Custom CSS section of Theme Roller. Since most badges are red, let’s make ours red as well. Run your application, open Theme Roller and scroll to the bottom of the options. Expand the Custom CSS region and enter the following text:

.t-Button--navBar .t-Button-badge { background-color: red;}

Save your customizations, and note that the badge should now be red:

[screenshot: navigation bar with the badge now shown in red]

Repeat for each metric that you want to display in your Navigation Bar.

Bitmap Efficiency

Jonathan Lewis - Thu, 2016-01-28 07:02

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

The original question was about a very large table with several bitmap indexes, and an anomaly that appeared as a query changed its execution plan.  Here are the critical sections from the plans (extracted from memory with rowsource execution statistics enabled):

--------------------------------------------------------------------------------------------------------
|  Id |Operation                        | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
--------------------------------------------------------------------------------------------------------
|   6 |    TABLE ACCESS BY INDEX ROWID  |       FACT |      1 |      1 |     24 |00:00:00.01 |      31 |
|   7 |     BITMAP CONVERSION TO ROWIDS |            |      1 |        |     24 |00:00:00.01 |       7 |
|   8 |      BITMAP AND                 |            |      1 |        |      1 |00:00:00.01 |       7 |
|*  9 |       BITMAP INDEX SINGLE VALUE |     FACT_0 |      1 |        |      1 |00:00:00.01 |       3 |
|* 10 |       BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      1 |        |      4 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C0"=243001)
    10 - access("FACT"."C1"="DIMENSION1"."ID")


-------------------------------------------------------------------------------------------------------
|  Id | Operation                      | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
|   7 |    BITMAP CONVERSION TO ROWIDS |            |      5 |        |      8 |00:00:00.01 |     119 |
|   8 |     BITMAP AND                 |            |      5 |        |      1 |00:00:00.01 |     119 |
|*  9 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      5 |        |     20 |00:00:00.01 |      28 |
|* 10 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM2 |      5 |        |    140 |00:00:00.01 |      78 |
|* 11 |      BITMAP INDEX SINGLE VALUE |     FACT_0 |      5 |        |      5 |00:00:00.01 |      13 |
|  12 |   TABLE ACCESS BY INDEX ROWID  |       FACT |      8 |      1 |      8 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C1"="DIMENSION1"."ID")
    10 - access("FACT"."C2"="DIMENSION2"."ID")
    11 - access("FACT"."C0"=243001)

The first plan shows the steps leading to a single access (Starts = 1) to the FACT table after combining two bitmap indexes; the second shows the second child of a nested loop join where Oracle has combined three bitmaps indexes to access the FACT table – operation 7 (and its descendants) execute 5 times in this case. I’ve included the related parts of the predicate section so that you can see that the predicates at operations 9 and 10 of the first plan are the same as the predicates at operations 9 and 11 of the second plan.

So here’s the question – if one access to fact_dim1 requires 4 buffer visits, why does it take 28 buffer visits to do the same thing 5 times (and it is with the same value every time); conversely if one access to fact_0 requires 3 buffer visits, why do 5 visits to do the same thing take only 13 buffer visits. (Note: the arithmetic is made a little more obscure by the way in which index branch blocks may be pinned during nested loop joins.)

Then there’s a further question – not visible in the plan – the A-Rows in the “BITMAP INDEX SINGLE VALUE” operation is the number of bitmap sections in the rowsource, and we can see that the key values for index fact_dim2 have a significant number of bitmap chunks for a single key (5 executions returned 140 bitmap chunks). This scale, though, is true of all three indexes – in fact a follow-up email pointed out that a typical key value in EVERY ONE of the three indexes consisted of about 100 bitmap chunks, so why can’t we see those hundreds in the execution plan ?

So this is where we’re at: we have an execution plan where we haven’t visited all the bitmap chunks for a bitmap key, and the order in which the bitmap indexes are used in the plan seems to have some effect on the choice of leaf-blocks you visit when accessing the chunks. So (a) could a change in the order of indexes make a significant difference to the number of bitmap chunks you visit and the resulting performance, and (b) is there a way to control the order in which you visit the indexes. That’s where the note starts to get a bit technical – if you don’t want to read any more the answers are: (a) yes but probably not significantly and (b) yes.

Demo

To investigate what goes on inside a “BITMAP AND” I created a table with two bitmap indexes and used a very large setting for pctfree for the indexes so that they had to be stored with a large number of bitmap chunks per key. Here’s the code that I used, with some results from an instance of 12.1.0.2:


create table people
nologging
as
with generator as (
        select  --+ materialize 
                rownum id 
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum-1, 1e2)      id_town_home,
        trunc((rownum-1)/1e4)   id_town_work,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PEOPLE',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index pe_home on people(id_town_home) nologging pctfree 95;
create bitmap index pe_work on people(id_town_work) nologging pctfree 95;

select
        index_name, distinct_keys, num_rows, leaf_blocks, avg_leaf_blocks_per_key
from
        user_indexes
where
        table_name = 'PEOPLE'
order by
        index_name
;


INDEX_NAME           DISTINCT_KEYS   NUM_ROWS LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY
-------------------- ------------- ---------- ----------- -----------------------
PE_HOME                        100      30399       15200                     152
PE_WORK                        100       1800         907                       9

As you can see I’ve generated two columns (id_town_home, id_town_work) with 100 distinct values and 10,000 rows each, but with very different data distributions – the rows for any given value for id_town_home are uniformly spread across the entire table, every hundredth row; while the rows for any given value of id_town_work are very tightly clustered as a group of 10,000 consecutive rows. As a consequence the index entry (bitmap string) for a typical key value for id_town_home is enormous and has to be broken into 304 chunks spread across 152 leaf blocks (2 index entries per leaf block), while the index entry for a typical key value for id_town_work is much shorter, but still requires 18 chunks spread across 9 leaf blocks.
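
If you want to cross-check the chunk counts for yourself, validating an index populates the index_stats view; on the assumption that each leaf row of a bitmap index is one bitmap chunk, lf_rows gives you the total number of chunks. A minimal sketch (index_stats holds only the most recent validation, so query it immediately):

analyze index pe_home validate structure;

select name, lf_rows, lf_blks, round(lf_rows/lf_blks,1) rows_per_leaf
from   index_stats;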

So what will I see if I run the following query, and force it to use a BITMAP AND of the two indexes, in the two different orders:

select
        /*+ index_combine(pe) */
        max(small_vc)
from
        people pe
where
        id_town_home = 50
and     id_town_work = 50
;

Based on a very simple interpretation of the typical execution plan and using the index stats shown above we might expect to see roughly A-Rows = 18 with 9 buffer gets (plus a few more for segment headers and branch blocks) on the id_town_work index and A-Rows = 304 with 152 buffer gets on the id_town_home index to allow Oracle to generate and compare the two bit strings – but here are the two plans with their execution stats, generated in 12.1.0.2, and each run after flushing the buffer cache:

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     118 |    117 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     118 |    117 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     118 |    117 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      18 |     17 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      18 |     17 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      4 |00:00:00.01 |       4 |      4 |
-------------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     122 |    120 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     122 |    120 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     122 |    120 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      22 |     20 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      22 |     20 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      5 |00:00:00.01 |       8 |      7 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
-------------------------------------------------------------------------------------------------------------------

We have NOT touched anything like the entire bit-string for the id_town_home index – a bit-string that spans 152 leaf blocks! Clearly Oracle is doing something clever to minimise the work, and it’s so clever that switching the order of these two extremely different indexes in the plan has made virtually no difference to the work done. Obviously I can’t tell you exactly what the code is doing, but I think I can produce a reasonable guess about what’s going on.

The pe_work index has the smaller number of leaf blocks per key, which makes it the better starting choice for the AND in this case, so the optimizer’s default starting action was to pick the first couple of chunks of that index key value; and Oracle immediately sees that the first rowid that it could possibly need in its result set is roughly in the middle of the table – remember that the “key” columns of a bitmap index are (real_key, first_rowid_of_chunk, last_rowid_of_chunk, compressed_bitstring).

Since it now knows the lowest possible rowid that it could need, Oracle can probe the pe_home index by (id_town_home=50, {target_rowid}) – which will let it go to a bitmap index chunk that’s roughly in the middle of the full range of 152. Then Oracle can expand the bitstrings from the chunks it has, reading new chunks as needed from each of the indexes until the 18 chunks / 9 leaf blocks from the pe_work index have been used up (and that range would have aligned with just two or three chunks from the pe_home index), at which point Oracle can see there are no more rows in the table that could match both predicates and it doesn’t need to read the next 75 chunks of the pe_home index.

Conversely, when I forced Oracle to use the (inappropriate) pe_home index first, it read the first couple of chunks, then read the first couple of chunks of the pe_work index, at which point it discovered that it didn’t need any of the pe_home index prior to (roughly) chunk 75, so it jumped straight to the right chunk to align with pe_work and carried on from there. That’s why the forced, less efficient, plan that visited pe_home first visited just a couple more leaf blocks than the plan the optimizer selected for itself.

Bottom line on performance (tl;dr) – Oracle is sufficiently smart about checking the start and end ranges on bitmap indexes (rather than arbitrarily expanding the entire bitmap for each key) that even for very large bitmap index entries it will probably only access a couple of “redundant” leaf blocks per index even if it picks the worst possible order for using the indexes. You’re far more likely to notice Oracle picking the wrong indexes (because you know the data better) than you are to spot it using the right indexes in the wrong order – and given that bitmap indexes tend to be relatively small and well buffered (compared to the tables), and given the relatively large number of rows we pick by random I/O from fact tables, a little extra work in the bitmap indexes is unlikely to make a significant difference to the performance of most queries.

Closing fact: in the unlikely circumstances that you do spot the special case where it will make a difference (and it will probably be a difference in CPU usage) then you can dictate the order of the indexes with the undocumented bitmap_tree() hint.  I may get round to writing up the variations one day but, for this simple case, the index_combine() hint that I used to force the BITMAP AND turned into the following bitmap_tree() hint in the outline:

bitmap_tree(@sel$1 pe@sel$1 and((people.id_town_work) (people.id_town_home)))

bitmap_tree( @query_block     table_name@query_block     and( ({first index definition}) ({second index definition}) ) )

Obviously not suitable to throw into production code casually – check with Oracle support if you think it’s really necessary – but if you wanted to reverse the order of index usage in this case you could just swap the order of the index definitions. If you thought there was a third index that should be used you could include its definition (note that it’s table_name.column_name – the index definition – in the brackets).
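
For example, with a hypothetical third bitmap index on people(id_town_other), the hint might become (a sketch only, not tested):

bitmap_tree(@sel$1 pe@sel$1 and((people.id_town_work) (people.id_town_home) (people.id_town_other)))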

My reference: bitmap_control_02.sql