
Feed aggregator

SQL Developer: Viewing Trace Files

Dominic Brooks - Fri, 2015-02-27 06:18

Just a quick plug for looking at raw SQL trace files via SQL Developer.

There is a nice Tree View:
sqldev_trace

Which can be expanded:
sqldev_trace_expand

There's also a summary view of statistics, filterable:
sqldev_trace_stats

And a list view, filterable and orderable:

sqldev_trace_list

Some sort of right-click summary for binds/waits might be a nice addition.
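If you need a raw trace file to try this on, one common way to generate one for your own session is extended SQL trace (the trace file identifier below is just an example):

alter session set tracefile_identifier = 'sqldev_demo';
alter session set events '10046 trace name context forever, level 12';
-- run the statements you want traced, then switch tracing off
alter session set events '10046 trace name context off';
-- find the file to open in SQL Developer
select value from v$diag_info where name = 'Default Trace File';

Level 12 captures binds and waits, which is what the views above break down.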


Exadata Documentation Available

Dan Norris - Fri, 2015-02-27 06:12

Please join me in welcoming the Exadata product documentation to the internet. It’s been a long time coming, but I’m glad it’s finally made an appearance!

Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

Rittman Mead Consulting - Thu, 2015-02-26 17:08

In the first two posts in this series, we looked at what Oracle Big Data Discovery is and how you can use it to sample, cleanse and then catalog data in your Hadoop-based data reservoir. At the end of that second post we’d loaded some webserver log data into BDD, and then uploaded some additional reference data that we then joined to the log file dataset to provide descriptive attributes to add to the base log activity. Once you’ve loaded the datasets into BDD you can do some basic searching and graphing of your data directly from the “Explore” part of the interface, selecting and locating attribute values from the search bar and displaying individual attributes in the “Scratchpad” area.


With Big Data Discovery though you can go one step further and build complete applications to search and analyse your data, using the “Discover” part of the application. Using this feature you can add one or more charts to a dashboard page that go much further than the simple data visualisations you get on the Explore part of the application, based on the chart types and UI interactions that you first saw in Oracle Endeca Information Discovery Studio.


Components you can add include thematic maps, summary bars (like OBIEE’s performance tiles, but for multiple measures), and various bar, line and bubble charts, all of which can then be filtered via faceted search using an OEID-like search component.


Each visualisation component is tied to a particular “view” that points to one or more underlying BDD datasets – samples of the full dataset held in the Hadoop cluster, stored in the Endeca Server-based DGraph engine. For example, the thematic map above was created against the post comments dataset, with the theme colours defined using the number of comments metric and each country defined by a country name attribute derived from the calling host IP address.


Views are auto-generated by BDD when you import a dataset, or when you join two or more datasets together. You can also use the Endeca EQL language to define your own views using a SQL-type language, and then define which columns represent attributes, which ones are metrics (measures) and how those metrics are aggregated.


Like OEID before it, Big Data Discovery isn’t a substitute for a regular BI tool like OBIEE – beyond simple charts and visualisations it’s tricky to create more complex data selections, drill-paths in hierarchies, subtotals and so forth, and users will need to understand the concept of multiple views and datatypes, when to drop into EQL and so on – but for non-technical users working in an organization’s big data team it’s a great way to put a visual front-end onto the data in the data reservoir without having to understand tools like R Studio.

So that’s it for this three-part overview of Oracle Big Data Discovery and how it works with the Hadoop-based data reservoir. Keep an eye on the blog over the next few weeks as we get to grips with this new tool, and we’ll be covering it as part of the optional masterclass at the Brighton and Atlanta Rittman Mead BI Forum 2015 events this May.

Categories: BI & Warehousing

Oracle Priority Support Infogram for 26-FEB-2015

Oracle Infogram - Thu, 2015-02-26 15:57

Oracle Support
How To Be Notified When MOS Notes Are Updated, from the Oracle E-Business Suite Technology blog.
RDBMS
Oracle In-Memory Advisor with Oracle Multitenant? Issues?, from Update your Database – NOW!
Exadata
Examining the new Columnar Cache with v$cell_state, from SmartScan Deep Dive.
And from Exadata Database Machine: 10 reasons to run Database In-Memory on Exadata
OEM
From Oracle Enterprise Manager: Editing EM12c Jobs in Bulk
Data Warehouse
From The Data Warehouse Insider: New Way to Enable Parallel DML
WebLogic
Weblogic LDAPAuthenticator configuration; the GUID Attribute, from WebLogic Partner Community EMEA.
Java and Friends
Unsynchronized Persistence Contexts in JPA 2.1/Java EE 7, from The Aquarium.
YouTube: Format Multiple Files in NetBeans IDE, from Geertjan's Blog.
4 New OTN Tech articles on JDeveloper, BPM, ADF, from ArchBeat.
Linux
Two from Oracle’s Linux Blog:
Introduction to Using Oracle's Unbreakable Linux Network
Technology Preview available for the Oracle Linux software collection library
MySQL
From Paulie’s world in a blog: Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance
Hardware Support
From My Oracle Support: Power Cord Replacement Notice (updated February 2015).
SOA and BPM
Dynamic ADF Form Solution for Oracle BPM Process, from the SOA & BPM Partner Community Blog.
WebCenter
From the Oracle WebCenter Blog: Webcast: Next Generation AP Invoice Automation
Ops Center
From the Oracle Ops Center blog: Database License.
Hyperion
Patch Set Update: Hyperion Strategic Finance 11.1.2.1.106, from the Business Analytics - Proactive Support blog.
Demantra
From the Oracle Demantra blog: Setting Worksheet Related Parameters and Hardware Requirement Example
EBS
From the Oracle E-Business Suite Support Blog:
Announcing Oracle Global Trade Management (GTM) Community
Webcast: Oracle Time and Labor (OTL) Timecard Layout Configuration
Internal Requisition, Internal Sales Order Change Management
From the Oracle E-Business Suite Technology blog:
Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

EBS 12.x certified with Apple Mac OS X 10.10 (Yosemite)

LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE

Michael Feldstein - Thu, 2015-02-26 13:44

By Phil Hill

As competency-based education (CBE) becomes more and more important to US higher education, it is worth exploring the learning platforms in use. While there are cases of institutions using their traditional LMS to support a CBE program, there is a new market developing around learning platforms designed specifically for self-paced, fully online, competency-framework-based approaches.

Recently I saw a demo of the new CBE platform from LoudCloud Systems, a company whose traditional LMS I covered a few years ago. The company is somewhat confusing to me – I had expected a far larger market impact from them based on their product design than what has happened in reality. LoudCloud has recently entered the CBE market, not by adding features to their core LMS but by creating a new product called FASTRAK. Like Instructure with their creation of a new LMS for a different market (corporate learning), LoudCloud determined that CBE called for a new design and that the company could handle two platforms for two mostly distinct markets. In the case of Bridge and FASTRAK, I believe the creation of a new learning platform took approximately one year (thanks a lot, Amazon). LoudCloud did leverage several of the traditional LMS tools such as rubrics, discussion forums and their LoudBook interactive eReader.

As was the case for the description of the Helix CBE-based learning platform and the description of FlatWorld’s learning platform, my interest here is not merely to review one company’s products, but rather to illustrate aspects of the growing CBE movement using the demo.

LoudCloud’s premier CBE partner is the University of Florida’s Lastinger Center, a part of the College of Education that provides professional development for Florida’s 55,000 early learning teachers. They have or expect to have more than a dozen pilot programs for CBE in place during the first half of 2015.

Competency Framework

Part of the reason for developing a new platform is that FASTRAK appears to be designed around a fairly comprehensive competency framework embodied in LoudTrack – an authoring tool and competency repository. This framework allows the school to combine their own set of competencies with externally defined, job-based competencies such as those from O*NET Online.

Competency Structure

The idea is to (roughly in order):

  • Develop competencies;
  • Align to occupational competencies;
  • Define learning objectives;
  • Develop assessments; and
  • Then design academic programs.

LoudTrack Editing

One question within CBE design is what the criteria for mastery within a specific competency should be – passing some, most, or all of the sub-competencies? FASTRAK allows this decision to be set by program configuration.

Master Scale

Many traditional academic programs have learning outcomes, but a key differentiator for a CBE program is having some form of this competency framework and up-front design.

A unique feature (at least, unique among what I’ve seen so far) is FASTRAK’s ability to let faculty set competencies at an individual course level, provided in a safe area that stays outside of the overall competency repository unless reviewed and approved.

The program or school can also group together specific competencies to define sub-degree certificates.

Course Design Beyond Walled Garden

At the recent Instructional Technology Council (ITC) eLearning 2015 conference, I presented a view of the general ed tech market moving beyond the walled garden approach. As part of this move, however, I described that the walled garden will likely live on within top-down designs of specific academic programs such as many (if not most) of the CBE pilots underway.

Now it's clear what's the role @PhilOnEdTech gives to #LMS when he talks about a new "walled garden" age. #LTI +1 pic.twitter.com/DXgdjctHto

— Toni Soto (@ToniSoto_Vigo) February 22, 2015

What FASTRAK shows, however, is that CBE does not require a walled garden approach. Keep in mind the overall approach of starting with the competency framework through assessments and then academic program design. In this last area FASTRAK allows several approaches to bringing in pre-existing content and separate applications.

Add Resource Type

The system, along with the current version of LoudBooks, is LTI- as well as SCORM-compliant and uses this interoperability to give choices to faculty. Remember that FlatWorld prides itself on deeply integrating content, mostly its own, into the platform. While they can bring in outside content like OER, it is the FlatWorld designers who have to do this work. LoudCloud, by contrast, puts this choice in the hands of faculty. Two very different approaches.

LTI Apps

FASTRAK does provide a fairly impressive set of reports to see how students are doing against the competencies, which should help faculty and program designers to see where students are having problems or where the course designs need improving.

Competency Reporting

CBE-Light

An interesting note from the demo and conversation is that LoudCloud claims that half of their pilots are CBE-light, where schools want to try out competencies at the course level but not at the program level. This approach allows them to avoid the need for regulatory approval.

While I have already called out the basics of what CBE entails in this primer, I have also seen a lot of watering down or alteration of the CBE terminology. Steven Mintz from the University of Texas recently published an article at Inside Higher Ed that calls out what he terms CBE 2.0, where they are trying approaches that are not fully online or even self-paced. This will be a topic for a future post on what really qualifies as CBE and where people are just co-opting the terminology.

The post LoudCloud Systems and FASTRAK: A non walled-garden approach to CBE appeared first on e-Literate.

Delivering Enterprise Mobile Applications Webcast

Anshu Sharma - Thu, 2015-02-26 12:36

Please join us for a discussion of the opportunities and challenges around delivering enterprise mobile applications. We will have product management and an Oracle partner who has been successful in creating a profitable mobile business provide their perspectives.

 http://www.oracle.com/partners/secure/campaign/eblasts/enterprise-mobile-applications-2419241.html

3 things we learned from the Linux Collaboration Summit

Chris Foot - Thu, 2015-02-26 12:32

On February 13th, developers participating in the Linux project assembled in Santa Rosa, California to assess the kernel's development over the past year. It's been nearly 20 years since Linux 1.0 was released, and the solution has evolved considerably since then. 

InformationWeek contributor Charles Babcock acknowledged statements made by Dan Frye, IBM's vice president of open systems, who maintained that Linux is one of a kind, given the fact that the project has grown so much since its inception. During the conference, a number of Linux facts were brought up that some may not be aware of. Let's take a look at some of these findings.

1. Developer profile expands
According to Ars Technica, an estimated 2,000 developers were added to the Linux project over the past 15 months, making up nearly 50 percent of all the engineers currently writing code for the OS kernel. The news source cited a report released by the Linux Foundation around the time of the Collaboration Summit.

"Linux kernel 3.15 was the busiest development cycle in the kernel's history," noted the Linux Foundation, in an announcement, as quoted by the news source. "The average number of changes accepted into the kernel per hour is 7.7, which translates to 185 changes every day and nearly 1,300 per week." 
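For what it's worth, the arithmetic in that quote checks out: 7.7 changes an hour works out to roughly 7.7 x 24 ≈ 185 a day, and 185 x 7 ≈ 1,295 – close to the quoted "nearly 1,300 per week."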

2. Linux is reaching 19 million lines of code
Babcock noted that when Linus Torvalds first disclosed the open source kernel, Linux had 10,250 lines of code. In December of last year, exactly 18,997,848 lines of code were featured in the 3.18 kernel. Six releases were unveiled throughout 2014, resulting in 1 million lines being added. 

This speaks to just how much the kernel has grown over the past two decades. It acknowledges just how much you can do with this program, and why it's the OS of choice among many system administrators. 

3. Its development rate is accelerating 
With more developers on board and a sizeable code portfolio, one could imagine kernel releases are hampered by the pains of exchanging ideas among so many contributors. However, this isn't the case. The Linux project has fully adopted the ideals behind open source development – that software should be continuously revised whenever changes are made. 

Now, a new Linux kernel is released about every 70 days or so. The 3.18 version was developed and released after 63 days of collaboration, according to Babcock. It's a testament to Linus Torvalds' mission of creating a best-in-class OS kernel. 

The post 3 things we learned from the Linux Collaboration Summit appeared first on Remote DBA Experts.

Parallel Execution -- 1 The PARALLEL Hint and AutoDoP

Hemant K Chitale - Thu, 2015-02-26 09:29
The behaviour of the PARALLEL Hint has changed subtly but significantly in 11.2.  From the documentation :
Beginning with Oracle Database 11g Release 2 (11.2.0.1), the PARALLEL and NO_PARALLEL hints are statement-level hints and supersede the earlier object-level hints .......... If you omit integer, then the database computes the degree of parallelism.

Further down, the documentation states :
This hint overrides the value of the PARALLEL_DEGREE_POLICY initialization parameter. 
and 
PARALLEL: The statement always is run parallel, and the database computes the degree of parallelism, which can be 2 or greater.

It is important to note that the Statement Level PARALLEL Hint *overrides* the PARALLEL_DEGREE_POLICY.
So, even if PARALLEL_DEGREE_POLICY is set to MANUAL, implying that automatic degree of parallelism is disabled, the PARALLEL Hint, itself, allows Oracle to auto-compute a DoP.

What further complicates understanding of the behaviour is a reading of Oracle Support Note 1269321.1 that implies that if I/O calibration statistics are missing, Oracle does not use the automatic degree of parallelism feature.
So, one would assume that if I/O Calibration is not done, with the "automatic degree of parallelism" feature not being used, the PARALLEL Hint would not compute any Auto DoP !

Let's run a simple test case :

HEMANT>select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

HEMANT>
HEMANT>show parameter parallel;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 16
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
HEMANT>
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 10488 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
2

HEMANT>
HEMANT>select * from dba_rsrc_io_calibrate;

no rows selected

HEMANT>

Apparently, I/O Calibration statistics are not present ("are missing"). And yet, Oracle chose to use 2 PX Servers (not 4, not 1) for the query.  Isn't this confusing ?  Is AutoDoP used or is it not used ?

Let's make a change somewhere (else ?)

HEMANT>show parameter cpu_count

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1

HEMANT>

Let's try on a server with more CPUs.  I reconfigure the same VM to run with 4 "CPUs" (cores) and restart the VM and database instance.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 26 23:18:47 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>

We can now see that CPU_COUNT, PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET have all gone up.

SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>
HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha !  The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value) and Oracle  chose to use 8  (not 1, not 4, not 2) PX servers for the query !

So, there IS some way that AutoDoP is being used even when I/O calibration statistics are missing. Is this AutoDoP simply a function of CPU_COUNT x PARALLEL_THREADS_PER_CPU ?
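The numbers above are at least consistent with that formula: with CPU_COUNT=1 the hinted query used 1 x 2 = 2 PX servers, and with CPU_COUNT=4 it used 4 x 2 = 8. A quick way to check the two inputs on your own instance (just a sanity-check query, not part of the original test case):

select name, value
from   v$parameter
where  name in ('cpu_count', 'parallel_threads_per_cpu');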

UPDATE :  Herlindo QV also confirmed to me on Twitter : "it seems that way and in RAC the formula looks like (CPU_COUNT x PARALLEL_THREADS_PER_CPU) x RAC nodes"  and, later, when I ask him to check with PARALLEL_FORCE_LOCAL=TRUE : "right, restricts parallel to just one instance. The tricky part is how to choose the best DOP for each unique scenario"
.
.
.

Categories: DBA Blogs

Webcast: Next Generation AP Invoice Automation

WebCenter Team - Thu, 2015-02-26 09:20

Inspyrus is a Silicon Valley software startup that provides large enterprises and small-to-medium businesses with cloud and on-premise technology solutions. Join this webcast to learn how Next-Gen A/P Process Automation can deliver significant results in terms of cost savings, processing time and resource efficiency to your existing Oracle and non-Oracle ERP applications.

Register Now!
Tuesday, March 3, 2015 9:00 am | Central Standard Time (Chicago, GMT-06:00)

Power Cord Replacement Notice (updated February 2015)

Joshua Solomin - Thu, 2015-02-26 09:16
Oracle published a Power Cord Replacement Notice to inform its customers that a small number of power cords from one specific manufacturer may develop a leakage current condition over an extended period of time (typically >4 years). If excessive enough, this leakage current could lead to a trip condition of the rack PDU or external circuit breaker. In a more severe case, there is the possibility of damage due to a thermal event. These cords shipped with certain systems from January 2004 to July 2011. Oracle is advising its customers to replace suspected power cords as soon as possible. For more information on this issue, including instructions on how to identify suspected power cords and obtaining replacement power cords, see the Power Cord Replacement Notice.

Database security monitoring identifies employee wrongdoing [VIDEO]

Chris Foot - Thu, 2015-02-26 09:01

Transcript

Hi, welcome to RDX! When it comes to security, many think about external threats, but rarely regard internal forces. Yes, we’re talking about the disgruntled employees.

Don’t think that’s much of a concern? A survey conducted by Harris Interactive found that 19 percent of workers aged 18 to 34 said they would steal company information if they knew they were going to be fired. In addition, 16 percent of respondents noted they were able to access systems through old user authentications.

Not only do database monitoring programs assess systems for malware and spyware, they also record user activity. This means when unauthorized persons gain entry to a database, a team of DBAs is notified of the situation. In some cases, this technology can be adjusted to block those using outdated credentials from accessing the databases entirely.

Thanks for watching! Check back next time to learn how database monitoring services can help protect your information.

The post Database security monitoring identifies employee wrongdoing [VIDEO] appeared first on Remote DBA Experts.

Real time sql monitoring – bulk bound statement

Dominic Brooks - Thu, 2015-02-26 09:00

Moving through some observations of an ongoing investigation, tackling the easiest ones first.

Previously I observed the slow execution of a recursive delete of statistics history, coming not from MMON but from a number of application processes which should only have been calculating stats for their own partitions.

Statement was sql id 9v9n97qj8z1dg:

	
delete /*+ dynamic_sampling(4) */
from sys.wri$_optstat_histhead_history
where savtime < :1
and rownum <= NVL(:2, rownum);

From the real time sql monitoring output of a single execution (the DBMS_SQLTUNE.REPORT_SQL_MONITOR SQL_EXEC_ID parameter), I observed that EXECS was greater than 1 and so the A-Rows was much larger than I expected for a statement which was deliberately restricting the number of rows affected by a ROWNUM predicate.

Reminder:

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

Initially I was stumped by the fact that A-Rows in steps 2-3 was greater than the ROWNUM predicate restriction (10000), but that was before I noticed the number of EXECS: 1423 executions x 10000 rows each is roughly 14M, so was this a report of a bulk/batched statement?

So, this is just an illustration that such a bulk bound statement will produce an output as per above.

Create a table with a few rows to slow things down:

drop table t3;

create table t3
as
with x as
(select rownum col1
 from   dual
 connect by level <= 1000)
select x1.col1
,      rpad('X',100,'X') col2
from   x x1, x x2;

Then run an update with FORALL to bulk bind:

declare
 v1 sys.odcinumberlist := sys.odcinumberlist();
begin
 for i in 1 .. 1000
 loop
     v1.extend();
     v1(i) := i;
 end loop;
 forall i in 1 .. v1.count
   update /*+ monitor domtest */ t3
   set    col2 = rpad('Y',100,'Y')
   where  col1 = v1(i);
 commit;
end;
/

Hunt down my SQL execution and get the RTSM output of the statement.

V$SQL reports this as one execution updating 1M rows.

select executions, rows_processed, elapsed_time/1000/1000, sql_text from v$sql where sql_id = '3tkqtzjyaa02g';

EXECUTIONS ROWS_PROCESSED ELAPSED_TIME/1000/1000 SQL_TEXT
---------- -------------- ---------------------- ------------------------------------------------------------------------------
         1        1000000             109.596429 UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

RTSM reports it as 1044 (!?) executions and doesn’t seem to deal with the number of rows updated.
We only get the value for the initial bind – not unexpected.

select dbms_sqltune.report_sql_monitor('3tkqtzjyaa02g',sql_exec_id => '16777216') from dual;
SQL Monitoring Report

SQL Text
------------------------------
UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

Global Information
------------------------------
 Status              :  DONE                 
 Instance ID         :  1                    
 SQL ID              :  3tkqtzjyaa02g        
 SQL Execution ID    :  16777216             
 Execution Started   :  02/26/2015 13:25:29  
 First Refresh Time  :  02/26/2015 13:25:29  
 Last Refresh Time   :  02/26/2015 13:27:19  
 Duration            :  110s                 
 Module/Action       :  SQL Developer/-      
 Service             :  SYS$USERS            
 Program             :  SQL Developer        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :B1  |        1 | NUMBER | 1                                                                                         |
========================================================================================================================

Global Stats
=========================================
| Elapsed |   Cpu   |  Other   | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|     110 |     109 |     0.16 |    16M |
=========================================

SQL Plan Monitoring Details (Plan Hash Value=669554690)
==========================================================================================================================
| Id |      Operation       | Name |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                      |      | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
==========================================================================================================================
|  0 | UPDATE STATEMENT     |      |         |      |           |        |  1044 |          |          |                 |
|  1 |   UPDATE             | T3   |         |      |       109 |     +2 |  1044 |        0 |     0.92 | Cpu (1)         |
|  2 |    TABLE ACCESS FULL | T3   |     171 | 3409 |       110 |     +1 |  1044 |       1M |    99.08 | Cpu (108)       |
==========================================================================================================================

By contrast, a non-bulk bound FOR LOOP statement would register as 1000 executions in V$SQL, each with a separate SQL_EXEC_ID.
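For comparison, the row-by-row version I have in mind would look something like this (a sketch following the same pattern as the FORALL block above, not something run as part of this post; the hint label is just illustrative):

begin
 for i in 1 .. 1000
 loop
   -- each iteration is a separate execution of the same UPDATE cursor
   update /*+ monitor domtest_rbr */ t3
   set    col2 = rpad('Z',100,'Y')
   where  col1 = i;
 end loop;
 commit;
end;
/

Each of those 1000 executions would then get its own SQL_EXEC_ID and be reported by RTSM individually, rather than being rolled up as in the output above.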

However this does nothing to address the concern about what the heck is going on such that application processes executing gather_table_stats for a particular partition would each end up running this bulk indiscriminate cleanup script… heading that way next.


The German ADF-Community-Book was released today

Gerd Volberg - Thu, 2015-02-26 06:00
The German ADF- (and Forms-) Community released their first book.

The "ADF book" is a compilation of German lectures, articles and workshop tutorials in Oracle ADF (some of the contributions are in English!). The authors are members of the German ADF community. The time frame covers the years 2010 to 2014. The project "ADF book" was implemented by a team of staff from partner companies and Oracle.

40 authors wrote 70 contributions on 1400 pages in this book.

It's a 110 MB PDF and can be downloaded for free from the ADF-Community-Page.

Here is the direct link
My part in the book has the title "Modernizing Oracle Forms", in which I demonstrate how easy it is to enhance Oracle Forms with modern frameworks like LAF (the Look and Feel framework from Francois Degrelle) and how to integrate ADF into Oracle Forms via OraFormsFaces (a framework from Wilfred van der Deijl).

Have fun with the book
Gerd

SQL*Plus Terminator Torture

The Anti-Kyte - Thu, 2015-02-26 05:42

“Leave that jar of Nutella alone, it’s got my name on it !”
The context in which Deb issued this injunction to me probably requires some explanation.
It was Friday evening.
Wales had just…well…come second in the latest installment of their eternal battle with the English through the medium of Rugby.
There was no alcohol left in the house.
And only one source of chocolate.
From the safety of the Cupboard under the stairs, to which I had retreated at kick-off – the Welsh do take their Rugby quite seriously – I wondered about my better half’s change of name.
Shorn of it’s chocolate hazelnut spread connotations, you might think that Nutella was quite an nice name for a girl.
It certainly seems appropriate if the “Girl” in question is slightly unhinged by a combination of wine and wounded national pride.

I was going to write something here about how Rugby players all look like the Terminator and use this as a way of introducing the topic at hand. However, I realise that this would simply be too contrived…even for me.
Instead, I’ll jump straight in…

The Nature of SQL*Plus

SQL*Plus is the command line interface for the Oracle RDBMS. As such, it supports three categories of statement :

  • SQL*Plus commands – which require no terminator
  • SQL commands – terminated by a “/” on a new line
  • PL/SQL blocks – containing statements that are terminated by “;”

…but that’s not the whole story as we are about to discover….

select 'Hasta La Vista' from dual
/

set serveroutput on size unlimited

begin
    dbms_output.put_line(q'[I'll be back...from PL/SQL]');
end;
/

select 'Just you wait until the World Cup!' 
from dual;

Here we can see an example of all three statement types – SQL, SQL*Plus and PL/SQL.
However the final SQL statement is a bit different. The terminator is not the newline “/” as with the first statement, but a “;”.

It is this small quirk that can cause some interesting things to happen and leave you scratching your head for a good while until you figure out exactly what’s happened.

Spot the Terminator

Consider the following, run as an unattended batch job :

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;
declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/
    
create table tenuous_links( arnie_quote varchar2(100));
/

Simple enough – drop the table (ignoring the ORA-00942 error if it doesn’t exist in the first place), then create it again…


TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:12:20


PL/SQL procedure successfully completed.


Table created.

create table tenuous_links( arnie_quote varchar2(100))
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object


SQL> 

At first glance, it would seem that our PL/SQL block to drop the table didn’t work for some reason.
However, if we dig a bit deeper…

select to_char(created, 'DD-MON-YYYY HH24:MI:SS')
from user_objects
where object_name = 'TENUOUS_LINKS'
and object_type = 'TABLE'
/

TO_CHAR(CREATED,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:12:20

So, the table was created at the time we ran our script. We know it still exists (and is not in the recycle bin) because it’s still in USER_OBJECTS.

If the table was dropped then re-created when we ran our script then why did we get the error ?

Let’s have a look at that CREATE TABLE statement again :

create table tenuous_links( arnie_quote varchar2(100));
/

Notice that “;” at the end of the first line ?
SQL*Plus takes this to mean “execute the last statement in the buffer”.
Then on the next line we have the same directive – expressed using a different syntax – i.e. “/”.

If we correct the script by removing the extraneous “;” all now works as expected :


select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;
declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/
    
create table tenuous_links( arnie_quote varchar2(100))
/

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:30:48


PL/SQL procedure successfully completed.


Table created.

Whilst debugging a small script like this is relatively straightforward, you’re likely to be confronted with a much longer script in the real world and spend considerably more time tracking down the problem.

Culling the Terminators

At this point we have a number of questions :

  1. How and why does SQL*Plus allow two terminator characters ?
  2. Can we turn one of them off ?
  3. Can we standardise on one of them and if so, which ?

It’s probably worth taking a closer look at the two terminator characters as they behave slightly differently in
certain circumstances. For example…

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back!]');
end;
/

Function created.

However, we get a bit less success if we try :

create or replace function arnie
    return varchar2
as
    begin
        return(q'[I'll be back!]');
end;;
SQL> 
SQL> 
SQL> 

SQL*Plus doesn’t recognise the second “;” as a terminator. Putting it on a separate line fares no better.
From this then, we can infer that the “/” is mandatory when generating DDL for PL/SQL stored program units.

The next question is how we end up with two terminators, and whether we can switch one of them off.
Well, let’s take a look at the SQL*Plus settings, or one setting in particular :

SQL> show sqlterminator
sqlterminator ";" (hex 3b)
SQL> 

We can use this parameter to set the terminator to another single character…


SQL> set sqlterminator !

SQL> select 'Another Terminator reference'
  2  from dual
  3  !

'ANOTHERTERMINATORREFERENCE'
----------------------------
Another Terminator reference

SQL> 

SQL> select 'Another Terminator reference' from dual !

'ANOTHERTERMINATORREFERENCE'
----------------------------
Another Terminator reference

SQL> 

SQL> select 'Another Terminator reference!' from dual !

'ANOTHERTERMINATORREFERENCE!'
-----------------------------
Another Terminator reference!

SQL> 

…but not multiple characters…

SQL> set sqlterminator runbuffer
string "runbuffer" is too long. maximum size is 1 character.
SQL> 

Interestingly, we can also disable it altogether and then re-enable it…

SQL> set sqlterminator off
SQL> show sqlterminator
sqlterminator OFF
SQL> select sysdate
  2  from dual;
  3  
SQL> 
SQL> select sysdate
  2  from dual
  3  /

SYSDATE
------------------
20-FEB-15

SQL> set sqlterminator ;
SQL> select sysdate
  2  from dual;

SYSDATE
------------------
20-FEB-15

SQL> 

Whilst this can be quite useful, especially if you’re running scripts that contain only DDL for stored program units, it does feel a bit odd having to put the terminator on a new line.
Additionally, you may consider that standardising this would require some serious regression testing of any SQL scripts to make sure that they’re not using the “;” terminator, not to mention any scripts that get generated dynamically.

Missing Terminators

Just as vexing as having too many terminators is not having enough.

Consider :

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;

create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

Warning: Package created with compilation errors.

SQL> sho err
Errors for PACKAGE THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/1	 PLS-00103: Encountered the symbol "CREATE"

SQL> select object_type          
  2  from user_objects
  3  where object_name = 'THE_TERMINATOR'
  4  /

OBJECT_TYPE
-------------------
PACKAGE

SQL> 

Now consider this in a longer script (which I’ve instrumented with prompt statements) :

prompt Recording Start Time

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;

prompt Dropping TENUOUS_LINKS

declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/

prompt Creating table TENUOUS_LINKS
    
create table tenuous_links( arnie_quote varchar2(100))
/


prompt Creating Function ARNIE

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back !]');
end;

prompt Creating Package THE_TERMINATOR

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;
/

create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

This time, we’ve missed the terminator at the end of the function.
However, this causes the Package Body to error as well :

Recording Start Time

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 14:11:39

Dropping TENUOUS_LINKS

PL/SQL procedure successfully completed.

Creating table TENUOUS_LINKS

Table created.

Creating Function ARNIE

Warning: Function created with compilation errors.


Warning: Package Body created with compilation errors.

SQL> sho err
Errors for PACKAGE BODY THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0	 PL/SQL: Compilation unit analysis terminated
1/14	 PLS-00304: cannot compile body of 'THE_TERMINATOR' without its
	 specification

1/14	 PLS-00905: object MIKE.THE_TERMINATOR is invalid
SQL> select object_name, object_type
  2  from user_objects
  3  where object_name in ('ARNIE', 'THE_TERMINATOR')
  4  /

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -------------------
ARNIE			       FUNCTION
THE_TERMINATOR		       PACKAGE
THE_TERMINATOR		       PACKAGE BODY

SQL> 

There are a couple of things to note here. The first is that it may well be worth enhancing the instrumentation in the script by including SHOW ERRORS after each stored program unit creation statement. The second is that there is no “smoking gun” error for either too many terminators, or not enough.
With SHOW ERRORS in place, it becomes a bit easier to spot what’s going wrong :

prompt Recording Start Time

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;

prompt Dropping TENUOUS_LINKS

declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/

prompt Creating table TENUOUS_LINKS
    
create table tenuous_links( arnie_quote varchar2(100))
/


prompt Creating Function ARNIE

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back !]');
end;

show error

prompt Creating Package THE_TERMINATOR

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;
/

show error


create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

show error

When you run this script, the problem is a bit easier to spot :

Recording Start Time

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 14:17:03

Dropping TENUOUS_LINKS

PL/SQL procedure successfully completed.

Creating table TENUOUS_LINKS

Table created.

Creating Function ARNIE

Warning: Function created with compilation errors.

Errors for FUNCTION ARNIE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
8/1	 PLS-00103: Encountered the symbol "SHOW"

Warning: Package Body created with compilation errors.

Errors for PACKAGE BODY THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0	 PL/SQL: Compilation unit analysis terminated
1/14	 PLS-00304: cannot compile body of 'THE_TERMINATOR' without its
	 specification

1/14	 PLS-00905: object MIKE.THE_TERMINATOR is invalid
SQL> 

There is another SQL*Plus setting that you might consider to at least reduce some of the pain caused by this tangle of terminators.
Remember, the terminator is a directive for SQL*Plus to run what’s in its buffer. By default this is the last SQL or PL/SQL statement. SQL*Plus commands are not held in the buffer.
The CLEAR BUFFER command will, well, clear the contents of the buffer…

SQL> select 'I really must watch the movies again'
  2  from dual
  3  /

'IREALLYMUSTWATCHTHEMOVIESAGAIN'
------------------------------------
I really must watch the movies again

SQL> list
  1  select 'I really must watch the movies again'
  2* from dual
SQL> clear buffer
buffer cleared
SQL> list
SP2-0223: No lines in SQL buffer.
SQL> 

So, run a statement and we can then see it’s in the buffer by issuing the list command (usually abbreviated to “l”).
After the CLEAR BUFFER, there’s nothing to see.

Conclusion

SQL*Plus facilitates the use of two terminator characters which behave in the same way…most of the time.
Whilst it’s possible to disable, or even change the “;”, it may not be practical to go forward using just the “/” as a standard.

Deb has just “asked” me to add a disclaimer to this post.
A certain amount of artistic licence was used in the first couple of paragraphs.
After all, there’s so much junk in the cupboard under the stairs that I’d never fit in.
The Shed, Deb assures me, is an entirely different matter.


Filed under: Oracle, SQL Tagged: clear buffer, prompt, set sqlterminator, show errors, SQL*Plus, statement terminators

Why and How to use Oracle Metadata Management 12c. Part 2: Importing and Relating Metadata

Rittman Mead Consulting - Thu, 2015-02-26 05:27

In the first post of this series we saw how to install and configure OEMM and start working with it. In this new post we are going to see how to import metadata into OEMM from different sources like the Oracle Database, OBIEE, ODI and OWB, and then relate all of that metadata inside OEMM.

oemm_main_page

After we have installed and configured OEMM, we need to start adding all the metadata from the different sources and applications that we use. In this example the sources will be some Oracle schemas and our applications will be ODI, OWB and OBIEE. To import the metadata for all of them we need to create one model in OEMM for each. A model in OEMM holds all the connection details for a specific source or metadata provider (e.g. a database schema, an ODI repository, etc.), and is also the container for the metadata of that specific source after the import process. So one model connects to one specific source or application.

First, and for organisational purposes, we will create a folder to contain the future models. You can also create your models first, then create the folder(s) that you want and just move the models under the corresponding folders. In addition, you can create folders within other folders.

To create a folder, right-click on the Repository entry under the Repository panel in the OEMM main page. Select New > Folder in the pop-up menu, enter a name and press the Create button.

 

oemm_folder_comb

The next step is creating the models and importing the metadata from the different sources. The import or reverse engineering process is called harvesting in OEMM. We will start with the model for the Oracle Database. In this particular example I used Oracle 12c.

To create a model, right-click on the folder or the repository entry and select New > Model. In the Create Model window that appears, enter a name for the new model and select the type of source that you want to import – or, to be more precise, the Import Bridge that this model will use.

The Import Bridge is part of the Meta Integration® Model Bridge (MIMB) software and is the way that OEMM connects to the sources and applications to reverse engineer the metadata. You will find import bridges for a wide range of technologies: different databases, Business Intelligence and Data Integration products from different vendors, Big Data stores, etc.

oemm_model_comb

For this first example we will select the Oracle Database (via JDBC) import bridge, and in the Import Setup tab we will add all the usual connection details: host, port, service, and the user and password to connect to the database. This user should have at least the CONNECT privilege and the SELECT_CATALOG_ROLE role. We can also restrict this model to specific schemas, either using the magnifying glass to choose the schemas shown or just writing the schemas (in uppercase) separated by “;”. We can also decide whether stored procedures should be included in the imported metadata.

oemm_model_conn_db
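For reference, creating a dedicated harvesting user with just those privileges could look like the following (the user name and password are only examples):

-- run as a DBA user
create user oemm_harvest identified by oemm_pwd;
grant connect to oemm_harvest;
grant select_catalog_role to oemm_harvest;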

After all the connection details have been set, we test the connection, wait until we receive the Connection Successful message, and finally press the Create button. A message window will appear asking if we want to “Import a new version now?”. Press yes to start the harvesting process. A log window will show the progress of the import process, which can take several minutes. After the process is finished a new message window asks if we want to open the model.

oemm_imp_succ_open_model

Choose yes to see all the objects that are imported for this model, as shown in the figure below.

ORCL_training_model

We need to repeat the process explained above to create the models for the rest of the sources and applications that we are going to use in this example. The process is the same for all of them, but of course there are some differences in the connection details required once we choose the specific Import Bridge for each one.

In the next screenshot you will find the connection details for the ODI model after you choose the Oracle Data Integrator (ODI) Import Bridge. In the Import Setup tab, you need to select the appropriate driver to connect to the database where the ODI Repository is (which could be Oracle, SQL Server, DB2, etc.), the ODI home folder, the URL to connect to the database, the schema and password for the Master Repository, the user and password for the ODI user (SUPERVISOR, for example), the name of the Work Repository from which we want to select the ODI objects, and the Context.

oemm_odi_import_setup

We need to select the Scope for this model from two options: Projects, which includes packages and mappings (or interfaces for versions before 12c), or Load Plans and Scenarios, which includes the Load Plans and Scenarios.

After we choose the Scope, we can also filter its content by pressing the magnifying glass icon and selecting the specific objects that we want for this model.

After you press the Create button to start the harvesting process, open the created model; it will look similar to this if you chose Projects as the Scope.

oemm_odi_projects

For the connection details to create the OWB model, you need to take a couple of things into account. The first is the version of OWB from which you want to import the metadata: if it is 11.2.0.3 or later, you will need to do these two steps beforehand:

  1. Copy the following .JAR files from MetaIntegrationInstallationDir\java\ to %OWB_HOME%\owb\lib\ext\:
  • jsr173_1.0_api.jar
  • MIR.jar
  • MIRModelBridge.jar
  • MIROracleWarehouseBuilderOmb.jar
  • MIRUtil.jar
  • stax-1.1.1-dev.jar
  2. Copy the MetaIntegrationInstallationDir\bin\mimbexec.bat file into the same OWB directory.

As the version I have is 11.2.0.4, I copied the files detailed above, set the connection parameters as shown in the following image and tested the connection.

owb model config

When I started the import process, the following error message appeared in the log window:

error_owb2

After trying many things unsuccessfully, I asked David Allan for help and he sent me another mimbexec.bat, because apparently between 11.2.0.3 and 11.2.0.4 there were directory name changes. This is a temporary fix and a proper one is being worked on.

I substituted the .bat file and received another error message, as shown in the next screenshot.

error_importing_owb_externaltable

After a while, I realised that the issue OEMM reported was because I was using an external table in one of the mappings. I replaced it with a regular table and the import process worked well. This has been reported as a bug and a solution is being worked on. I really want to thank David for all his invaluable help on this.

This is how the OWB model looks after importing the metadata.

owb_model

The last model we need to create is the one based on OBIEE. There are different import bridges depending on the metadata that we need to import from OBIEE: Oracle Business Intelligence (OBI) Server, Oracle Business Intelligence (OBI) Enterprise Edition and Oracle Business Intelligence (OBI) Answers.

The OBI Server import bridge needs the OBI repository in XML format as a parameter, and the resulting model will contain all the objects defined in the three layers of the repository (Presentation, Business Model, Physical) as well as the repository connections, the variables and the initialisation blocks defined in the OBI repository.

oemm_biserver_comb

To use the OBI Enterprise Edition import bridge we need to set the login user and password to connect to OBIEE (usually weblogic or a user with admin privileges) and the repository file in XML format, and we can also filter the number of reports retrieved from the OBI Presentation Server.

There are a couple of interesting optional settings. One optimises the import of large models: if set to true, it skips some objects such as joins, relationships, logical foreign keys, etc., to consume less memory at run time. The other controls whether we do an incremental import, bringing in only the changes in the source, or import everything each time.

oemm_obiee_comb

The last import bridge to use with OBI is OBI Answers, which imports the content of a particular analysis or KPI report. This bridge needs the specific analysis in XML format.

oemm_bi_analysis_comb

There are a couple of additional things to note about models. First, if you want to see the configuration details, right-click the model and choose the Settings option from the pop-up menu. If you want to open the model to see the objects it contains, double-click on it.

Also, for every parameter in a model you will find very detailed help on the right of the Import Setup tab, and if you click on the name of the Import Bridge in the same tab you get the documentation for that particular bridge, which I find very useful.

There are two more tabs in the folder and model definition that we won't use in this example but will cover in future posts: one for security and another for executing scripts when an event happens to the object. Models also have an additional Import Schedule tab, used to create a plan for the harvesting process.

Relate the models

Once we have defined our models we need to relate them and validate their relationships. The automated process of relating these models through validation is called stitching. In order to do that we must first create a Configuration. A configuration in OEMM is a collection of models and other objects, such as mappings, glossaries, etc., that are related in some way.

According to the online documentation we need to consider a configuration as any of these options:

  • Repository workspace: a collection of Repository Objects to be analyzed together (search, browse, reports, etc.) as a technical scope, or business area under the same access permission scope.
  • Enterprise architecture – a collection of data store Models (ODS, data staging areas, data warehouses, data marts, etc.) and data process Models (ETL/DI, and BI) connected together through data flow stitching.
  • Design workflow – a collection of conceptual, logical and physical Models connected (semantically stitched) together through semantic mappings modeling the design process.

To create a Configuration, just right-click on a selected folder or the repository entry and choose New > Configuration. Enter a name for the configuration and press the Create button.

oemm_new_config

The configuration opens and you need to drag the models that you want to be stitched into the configuration, as shown in the following screenshot.

oemm_models_config

As you drag and drop your models, you will see that some of them show a warning icon after being included in the configuration; this is because we need to connect those models to the appropriate source of data.

To do that, select the model in the configuration and press Edit Connection. Choose the corresponding store for each connection and press OK.

oemm_edit_conn_config

oemm_conn_edit1

After you finish with all the models, press the Validate button to start stitching (relating) them.

oemm_validate_config

In most cases, OEMM can assign the corresponding default schema for each of the connections in the model. Where it cannot, as with OWB, you need to do it manually.

oemm_conn_config_owb

In the following image you can see all the models validated. For this example, I've created four database models: one containing the source (transactional system), one for the staging schema, and another two containing different data warehouses; plus an ODI model, an OWB model and an OBIEE model.

oemm_models_validated

You can also see the relationships between the models that belong to a configuration in a graphical view, in the Architecture Diagram tab of the Configuration. If the diagram looks a little messy, you can press the Edit button and then Layout to tidy up the way the components are shown.

oemm_conf_arch_diag

In summary, we created and harvested (reverse-engineered) the models and then related (stitched) them so that we can analyse them together. In the next post, we will look at some of the interesting things we can do with configurations and models, such as tracing data lineage and data impact.

Categories: BI & Warehousing

COUNT STOPKEY – NVL

Dominic Brooks - Thu, 2015-02-26 04:50

Yesterday I mentioned issues with a recursive delete operation on statistics history.

This is a quick illustration of the last points I made on that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.

COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.

For example:

create table t1 as select * from dba_objects;

alter session set statistics_level = all;

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= :rn
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 3836375644

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |     10 |00:00:00.01 |       4 | 
|*  1 |  COUNT STOPKEY     |      |      1 |        |     10 |00:00:00.01 |       4 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |  26148 |     10 |00:00:00.01 |       4 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=:RN)
   2 - filter(MOD("OBJECT_ID",5)=0) 

COUNT STOPKEY kicks in after we have fetched the relevant rows and stops any unnecessary further execution – note in particular A-Rows & Buffers for STEP 2

However, if we use NVL around our ROWNUM limitation, then this optimisation is prevented.

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= nvl(:rn,rownum)
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 624922415

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |     10 |00:00:00.09 |    2310 |
|   1 |  COUNT              |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  2 |   FILTER            |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |  26148 |  29630 |00:00:00.08 |    2310 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(ROWNUM<=NVL(:RN,ROWNUM))
   3 - filter(MOD("OBJECT_ID",5)=0)
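
A rewrite that is sometimes used for this pattern (not part of the original examples, so treat it as a sketch) is to split the NVL into two mutually exclusive branches with UNION ALL, so that the branch with the plain ROWNUM predicate is still eligible for COUNT STOPKEY:

select *
from   t1
where  :rn is not null
and    rownum <= :rn
and    mod(object_id,5) = 0
union all
select *
from   t1
where  :rn is null
and    mod(object_id,5) = 0;

At execution time the optimizer typically places a FILTER on the ":rn is not null" / ":rn is null" conditions above each branch, so only the branch that applies to the supplied bind does any work.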


Meet the new face of bank robbery

Chris Foot - Thu, 2015-02-26 02:43

Bandits equipped with revolvers and bandoleers aren't that big of a threat compared to a small army of cybercriminals armed with laptops. Why go through the trouble of physically robbing a bank when you can hack into its databases?

Once a hacker successfully infiltrates a financial institution's database, a wealth of information is at his or her disposal. They're after customer data, enabling them to siphon money from checking and savings accounts, and use other data that could prove a sustainable source of income.

The work of groups
Much like how it was in the old west, the smart perpetrators aren't working alone. The more people you have on your side, the better chance you have of winning. At the Kaspersky Security Analyst Summit in Cancun, Mexico, which took place earlier this month, attendees unmasked Carbanak, a cybercrime gang comprised of members located across Russia, Ukraine, China and other European countries.

Carbanak has been responsible for pilfering approximately $1 billion over two years from 100 banks from Australia to the United Kingdom. The organization employs spear-phishing attacks that zero in on bank employees, and also favors using remote Trojan backdoors that spy on infected users, steal data and provide access to infiltrated machines.

Kaspersky Lab's Sergey Golovanov, principal security researcher at the firm's global research and analysis team, noted that one bank reported being hit multiple times by the gang. Its systems were ultimately infected by exploiting unpatched versions of Microsoft Office.

Invested in their work
What shocked InformationWeek's Susan Nunziata so much was that spear-phishing campaigns were so successful. The phishing emails were so well written that employees believed such messages were sent from fellow colleagues. Based on Kaspersky's insights, the Carbanak hackers dedicated an incredible amount of time and resources refining their tactics.

The security research firm's report showed that attackers infected video surveillance equipment to monitor bank clerk behaviors. In addition, Carbanak members used monitoring tools that enabled them to further mimic employee actions. Whenever the gang would successfully infiltrate a bank, it would assess its processes and networks for anywhere between two and four months, making away with an estimated $10 million per hack.

This situation underlines the need for database monitoring tools that can spot peculiar activity. It's only a matter of time before such malicious operations impact financial institutions in the U.S., and it's time they brace themselves.

About RDX
Since its inception in 1994, RDX has helped hundreds of organizations lower database administration support costs while increasing performance and availability. RDX provides 100 percent US-based, 24×7 support for Oracle, Oracle EBS, SQL Server, PostgreSQL, MySQL and DB2 databases as well as operating system support for all major Windows, Unix and Linux offerings.

RDX's highest priority is to safeguard its customers' sensitive data stores, and its expert staff of highly-trained professionals is backed by a monitoring and support infrastructure that has been continuously improved and enhanced throughout its 20-year history.

The post Meet the new face of bank robbery appeared first on Remote DBA Experts.

Supplement: A-Team Chronicles

Jean-Philippe Pinte - Thu, 2015-02-26 02:31
The A-Team Chronicles site aggregates the content produced by members of the A-Team: best practices, tips, advice, etc.

e-Literate TV Preview: Essex County College and changing role of faculty

Michael Feldstein - Wed, 2015-02-25 17:58

By Phil HillMore Posts (291)

As we get closer to the release of the new e-Literate TV series on personalized learning, Michael and I will be posting previews highlighting some of the more interesting segments from the series. When we first talked about the series with its sponsors, the Bill & Melinda Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

In this video preview (about 4:18 in duration), we hear from two faculty members who have first-hand experience in using a personalized learning approach as well as a traditional approach to remedial math. We also hear from students on what they are learning about learning. In our case studies so far, the real faculty issue is not that software is being designed to replace faculty, but rather that successful implementation of personalized learning necessarily changes the role of faculty. One of our goals with e-Literate TV is to allow faculty, staff and students to describe direct experiences in their own words. Take a look.

Stay tuned for the full episodes to be released on the In The Telling platform[1]. You can follow me (@PhilOnEdTech), Michael (@mfeldstein67), or e-Literate TV (@eLiterateTV) to stay up to date. You can also follow the e-Literate TV YouTube channel. We will also announce the release here on e-Literate.

  1. ITT is our partner in developing this series, providing video production as well as the platform.

The post e-Literate TV Preview: Essex County College and changing role of faculty appeared first on e-Literate.

Understanding vs Resolution – Statistics History Cleanup

Dominic Brooks - Wed, 2015-02-25 13:26

Today I helped resolve a problem quickly but to the detriment of my understanding of exactly what was going on and why.

And as part of the quicker resolution, I had to destroy the evidence which would have helped me understand it better.

So… now I need to go back and figure it out, if I have time to model it properly, etc.

Here’s what little I know so far.

What happened was that there were a number of application sessions experiencing slowness and contention when executing a recursive stats history cleanup statement.

I verified via ASH that this recursive delete was somehow being called by application ETL code (via the TOP_LEVEL_SQL_ID, PLSQL_ENTRY_OBJECT_ID & USER_ID columns): four sessions were each running the slow statement and, because of the nature of the statement below, were being blocked by mode 6 TX locks from the leading execution.

Version is 11.2.0.3

Statement was sql id 9v9n97qj8z1dg:

delete /*+ dynamic_sampling(4) */ 
from sys.wri$_optstat_histhead_history 
where savtime < :1 
and rownum <= NVL(:2, rownum);

First up, the resolution was quick and easy thanks to, by a quirk of coincidence, my last post:

https://orastory.wordpress.com/2015/02/16/dbms_stats-purge_stats/

We just got rid of all the statistics history using the magic PURGE_ALL truncate flag.
The history is of limited usefulness day-to-day anyway.
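
For reference, the call described in that post is a one-liner; this is just a sketch of it, so check the retention implications before running it in your own environment:

exec dbms_stats.purge_stats(dbms_stats.purge_all);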

The slowness was holding up ETL jobs which were just trying to calculate stats for their own partition.

I was brought in towards the end of the situation, but here's an illustration of the slowness:

SQL Text
------------------------------
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history where savtime < :1 and rownum <= NVL(:2, rownum)

Global Information
------------------------------
 Status              :  EXECUTING               
 Instance ID         :  1                       
 SQL ID              :  9v9n97qj8z1dg           
 SQL Execution ID    :  16777249                
 Execution Started   :  02/24/2015 19:11:25     
 First Refresh Time  :  02/24/2015 19:13:06     
 Last Refresh Time   :  02/25/2015 09:10:35     
 Duration            :  50351s                  
 Module/Action       :  JDBC Thin Client/-      
 Program             :  JDBC Thin Client        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :2   |        2 | NUMBER | 10000                                                                                     |
========================================================================================================================

Global Stats
===============================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) |  Gets  | Reqs | Bytes |
===============================================================================================
|   50359 |   35199 |     0.16 |       14669 |         254 |      237 |     1G |   63 | 504KB |
===============================================================================================

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

So, some first thoughts:

1. I thought that MMON was responsible for purging old stats… clearly not in this case as there were multiple concurrent application connections purging old data as part of their ETL process.

2. The DELETE is deleting any old history older than a parameterised date, the first 10000 rows thereof. There is no connection to the object whose statistics the application ETL jobs were working on (a quick way to check the retention window behind that date is shown after this list).

3. I would expect a COUNT STOPKEY operation but then the NVL predicate avoids this optimisation.

4. UPDATE: The real time sql monitoring output is also reporting the metrics for 1423 executions of this statement. The clue is in the 14M rows reported in the COUNT + FILTER operations. 1423 * 10000 rows = 14M. But I requested the output for what I thought was a single execution id – 16777249 – strange… bulk/batched delete ?
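
As an aside (not part of the original investigation), the retention window that normally drives that purge date, and the oldest history still available, can be checked with:

select dbms_stats.get_stats_history_retention from dual;
select dbms_stats.get_stats_history_availability from dual;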

More questions than answers…