Feed aggregator

Data Visualization Desktop 12.2.2.0: Data Flow Component

Rittman Mead Consulting - Thu, 2016-10-20 07:54

My previous post contained a brief description of the new features in Data Visualization Desktop (DVD) 12.2.2.0 in terms of sources, visualisations and components. In this post we're going to simulate a typical analyst use case and understand how DVD can support the process.

Data Visualisation Desktop is a tool aimed at departmental analysis, with data coming from different sources and results that need to be delivered quickly. Given its ad-hoc nature, traditional long-term IT-driven Business Intelligence processes often won't suffice. In this example we'll take a deep look at DVD's Data Flow component and how it can be used to create an ETL flow in order to analyse data coming from a multitude of sources. Data Flow is new functionality introduced in DVD 12.2.2.0.

Preamble: being Italian I can't avoid talking about football, so the example provided in this post will analyse some Serie A data together with some Fantasy Football information in order to understand which players I should choose for my team.

Data Sources

In order to analyse Serie A players I based my research on the following data points:

  • Players cost: Excel file containing Team, Role and Fantasy Football Cost for each Serie A player. This file can change match by match since the cost of a single player can vary reflecting his performances.
  • Players statistics: CSV files containing player statistics like goals scored, yellow and red cards, assists and the fantasy football mark for every match of the current and past season.

For the purpose of the example I'm assuming the Players cost file is an XLSX received manually by the analyst (think of budget data) and the Players statistics data is stored in a Hive table.

Creating Data Sources in DVD

Data Visualization Desktop has a native connector to Hive, so we just need to click on "Data Sources", then Create -> Connection and select "Apache Hive". The setup is pretty simple: we need to specify the host, port, username and password of the Hive Server.

Hive Connection

The next step is creating a new Data Source and selecting the newly created "TestHive" connection as the source. The list of Hive databases is visible and, after selecting FantasyFootball, so is the list of tables.

Hive Data Source

After clicking on the ff_statistics table we can select and import the columns. There is also an option to check or directly enter the SQL if needed. After clicking OK (and checking that no errors arise) we are ready to use the Hive table.

Hive Columns

The "Players Cost" Excel file, received manually by the analyst, can be directly updated using the Data Source -> Create -> Data Source -> File option.

Upload a File

DVD automatically detects the column types and provides a preview of the content.

Excel file content

Once the data source is saved we are ready to start manipulating the data.

Data Flow

Our initial goal is to exclude any data quality issues from the statistics table. These could be down to invalid CSVs, as well as players not existing in the "Players Cost" file (if they were sold to teams outside Serie A or stopped their careers). To do so we can use the Data Flow option included in DVD and accessible from the Data Source page.

Path to data flow

The first step is to select ff_statistics from the list of sources, right click and select "Add Step". From the list of options presented we can select Filter and remove all the invalid data by simply including only rows where the "Code" is not null.

Data Flow Step 1

The Data Flow chart now includes the Filter component. The following step is to bring the "Players cost" file into the flow by selecting the Add Data option. Then it's time to join the two sources: we can do that by selecting both of them and choosing the Join option.

Join two Dataflows

We can specify the columns which will be used in the joining condition and the join type (inner or outer) by selecting the desired option in the Keep Rows section (Matching Rows or All Rows). For the purpose of our analysis we'll keep only the matching rows of the two datasets (inner join), since we are interested in players listed in Players Cost that also have a valid set of statistics in Players Statistics.
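
For readers who think in SQL, the Keep Rows choice simply maps onto the join type. A minimal sketch of the equivalent query, assuming both sources carry the player Code (table and column names are illustrative, not generated by DVD):

-- "Matching Rows" corresponds to an inner join
select s.*, c.team, c.role, c.cost
from   players_statistics s
join   players_cost c on c.code = s.code;

-- "All Rows" would correspond to an outer join instead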

Now we can enrich the data set further, by adding derived metrics and attributes:

  • Count of Matches: the number of valid matches (those with a non-null grade) played so far by each player. This will be used later to filter out all players with fewer than 10 valid games, since those are less likely to play most of the games.
  • Role Translation: roles are specified in Italian; a simple CASE WHEN can translate them into English (see the sketch below).

The enrichment can be achieved by creating an additional Add Columns Step and filling in the formulas appropriately.

New Columns Formula
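
As a sketch of the translation formula, a simple CASE expression along these lines works; the Italian role names are the standard Serie A ones, while the column name ROLE is an assumption:

CASE
  WHEN ROLE = 'Portiere'       THEN 'Goalkeeper'
  WHEN ROLE = 'Difensore'      THEN 'Defender'
  WHEN ROLE = 'Centrocampista' THEN 'Midfielder'
  WHEN ROLE = 'Attaccante'     THEN 'Forward'
END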

After filtering out all players with fewer than 10 valid marks, an Aggregate step can be added to set the aggregation level and methods. An Aggregate step should be included in every Data Flow since it's the only place where Attribute/Measure and aggregation definitions can be made. A Data Flow without an Aggregate step will provide a default column definition that may result in an unusable output data source.
Finally we can store the resulting dataset locally in order to proceed with the analysis.

Global Flow

We can now execute the data flow, and FantasyFootball is automatically added to the list of DVD's Data Sources. The Data Flow can also be stored in DVD in order to be re-executed when necessary.
Keep in mind that Data Flow works locally on the workstation where DVD is installed, so data extraction and manipulation will generate a load on that system proportional to the data volume and the complexity of the steps.

Project

Before creating a project we can review the resulting FantasyFootball dataset settings and change the Attribute/Measure definition of the columns as well as the type of aggregation.

Change Columns Attributes

As written before, it's better to define Attributes/Measures with an Aggregate step in the Data Flow, since any setting changed directly in the dataset will be overwritten when the Data Flow is re-executed.

With the data preparation work completed, now it's time to start creating a project using the FantasyFootball dataset. As written in my previous post, a number of new visualisations are available with DVD 12.2.2.0; some are used in the example below, like the Chord, Parallel and Sankey diagrams.

Global Flow

Unfortunately I won't share the details of my findings since those could be used against me in the competition, but hey... that Higuain looks like a good player!

In this post we saw a typical analyst use case, with data coming from multiple sources needing to be joined together and cleansed. Operations that used to be done manually in Excel can now be automated, saved and re-executed with DVD's Data Flow.

Categories: BI & Warehousing

Conjunctive Normal Form

Jonathan Lewis - Thu, 2016-10-20 07:00

I recently tweeted about a comment I’d picked up at the Trivadis performance days regarding tablescans and performance.

“If you can write your SQL in conjunctive normal form it can help the optimizer to offload more predicates”

Inevitably someone asked me if I had an example to demonstrate this – I didn’t, and still don’t really, but here’s an interesting demo based on an example from the Oracle In-Memory blog showing how the optimizer will rearrange your filter predicates before passing them to the tablescan code for evaluation against an inmemory table.
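
For reference, conjunctive normal form is just an AND of ORs, and the predicate I use below transforms like this (pure CNF shown here; the inmemory rewrite you'll see keeps the original disjunction as its second conjunct):

-- original predicate: A or (B and C)
(qty > 495 or (qty < 3 and part_no = 50))

-- conjunctive normal form: (A or B) and (A or C)
(qty > 495 or qty < 3) and (qty > 495 or part_no = 50)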


rem
rem     Script:         in_memory_conjunctive.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2016
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.2
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        trunc(dbms_random.value(1,501)) qty,
        mod(rownum,200) + 1             part_no,
        lpad(rownum,10,'0')             v1,
        lpad('x',50,'x')                padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;
prompt  ==========
prompt  Base query
prompt  ==========

select
        count(v1)
from
        t1
where
        (qty > 495 or (qty < 3 and part_no = 50))
;
prompt  ===============
prompt  predicate added
prompt  ===============

select
        count(v1)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;
prompt  =================
prompt  Ordered predicate
prompt  =================

select  /*+ ordered_predicates */
        count(v1)
from
        t1
where
        (qty > 495 or qty < 3) and (qty > 495 or (qty < 3 and part_no = 50))
;

The 2nd and 3rd queries add a predicate to the first query – which, unfortunately, changes the estimated cardinality even though it has no effect on the result. This predicate is one that would be added by the inmemory code path if the table were declared to be inmemory. I’ve got two versions of the query, one with the (deprecated) ordered_predicates hint because in my initial tests the optimizer swapped the order of the predicates and I wanted to see if the ordering was at all critical.

Here’s the plan for the base query – first before declaring the table inmemory, then after declaring the table inmemory:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       | 14739 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   100K|  1862K| 14739   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))
------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |       |       |  1974 (100)|          |
|   1 |  SORT AGGREGATE             |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| T1   |   100K|  1862K|  1974  (44)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))) filter(("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)))

And here, after putting the table back to noinmemory, are the plans for the second and third queries; note particularly the different order of the predicates in the predicate section: the predicate order matches the inmemory predicate order only if I use the ordered_predicates hint:

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((("QTY">495 OR ("QTY"<3 AND "PART_NO"=50)) AND ("QTY">495
              OR "QTY"<3)))
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       | 14741 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    19 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1404 | 26676 | 14741   (6)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((("QTY">495 OR "QTY"<3) AND ("QTY">495 OR ("QTY"<3 AND
              "PART_NO"=50))))

Finally the run times – after running the queries a few times each to check for consistency:

  • Base query: 0.82 seconds
  • Query with extra predicate: 0.86 seconds
  • Query with extra predicate and forced order of predicate evaluation: 0.71 seconds

The query with the predicate arrangement matching the inmemory rewrite actually ran 13% faster than the original. Unfortunately the rewrite without the ordered_predicates hint ran slower – which is a bit of a shame but understandable: the first predicate is the more complex one, and then the code has to run a completely redundant second predicate. I was a little surprised at how much slower it was, but the table is 10M rows and we're only looking at sub-second times anyway.

My table was fully cached and just under 112,000 blocks, so not very large, and this was a serial query on a basic Oracle instance. Nevertheless there is a difference in execution time that is more than just “random noise” – if a little unsightly tweaking of the SQL can make this much difference on a small data set, you can imagine that there might be a worthwhile benefit to considering ways of rearranging your predicates on an Exadata machine, where the extra predicates may end up being pushed down to storage.

Footnote:

Another “not quite” example I happen to have written about a few months ago is a case where rewriting “not exists() OR not exists() OR not exists()” as “not (exists() AND exists() AND exists())” allowed Oracle to rewrite three subqueries as a single subquery with a three-table join.
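
Schematically the transformation is De Morgan's law applied to the whole disjunction; table names here are purely illustrative:

-- original: three separately evaluated subqueries
where not exists (select null from a where a.id = t.id)
   or not exists (select null from b where b.id = t.id)
   or not exists (select null from c where c.id = t.id)

-- equivalent form that can be unnested into a single
-- subquery joining a, b and c
where not (    exists (select null from a where a.id = t.id)
           and exists (select null from b where b.id = t.id)
           and exists (select null from c where c.id = t.id)
          )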


Oracle EMPTY_CLOB Function with Examples

Complete IT Professional - Thu, 2016-10-20 06:00
In this article, I'll explain what the EMPTY_CLOB function does and show you an example of how to use it. Purpose of the Oracle EMPTY_CLOB Function: The EMPTY_CLOB function is used to initialise a CLOB column to EMPTY. It can be used in several places: in an INSERT statement, in an UPDATE statement, initialising a […]
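
As a minimal sketch of the two usages mentioned above (table and column names are assumed, not taken from the article):

insert into documents (id, body) values (1, EMPTY_CLOB());

update documents set body = EMPTY_CLOB() where id = 1;
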
Categories: Development

EBS 12.2.6 OA Extensions for Jdeveloper 10g Now Available

Steven Chan - Thu, 2016-10-20 02:06
When you create extensions to Oracle E-Business Suite OA Framework pages, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product team.

The version of Oracle JDeveloper is specific to the Oracle E-Business Suite Applications Technology patch level, so there is a new version of Oracle JDeveloper with each new release of the Oracle E-Business Suite Applications Technology patchset.

The Oracle Applications (OA) Extensions for JDeveloper 10g are now available for E-Business Suite Release 12.2.6.  For details, see:

The same Note also lists the latest OA Extension updates for EBS 11i, 12.0, 12.1, and 12.2.

Related Articles

Categories: APPS Blogs

Documentum story – Replicate an Embedded LDAP manually in WebLogic

Yann Neuhaus - Thu, 2016-10-20 02:00

In this blog, I will talk about the WebLogic Embedded LDAP. This LDAP is created by default on all AdminServers and Managed Servers of any WebLogic installation. The AdminServer always contains the Primary Embedded LDAP, and all other servers are synchronized with this one. This Embedded LDAP is the default security provider database for the WebLogic Authentication, Authorization, Credential Mapping and Role Mapping providers: it usually contains the WebLogic users, groups, and some other things like the SAML2 setup, and so on – basically a lot of what is configured under the "security realms" section of the WebLogic Administration Console. This LDAP is based on files that are stored under "$DOMAIN_HOME/servers/<SERVER_NAME>/data/ldap/".

Normally the Embedded LDAP is automatically replicated from the AdminServer to the Managed Servers during startup but this can fail for a few reasons:

  • AdminServer not running
  • Problems in the communications between the AdminServer and Managed Servers
  • and so on…

Oracle usually recommends using an external RDBMS Security Store instead of the Embedded LDAP, but not all information is stored in the RDBMS, and therefore the Embedded LDAP is always used, at least for a few things. More information on this page: Oracle WebLogic Server Documentation.

So now, in case the automatic replication isn't working properly for any reason, or if a manual replication is needed, how can it be done? Well, that's pretty simple and I will explain it below. I will also use a home-made script in order to quickly and efficiently start/stop one, several or all WebLogic components. If you don't have such a script available, then please adapt the steps below to manually stop and start all WebLogic components.

So first you need to stop all components:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stopAll
  ** Managed Server msD2-01 stopped
  ** Managed Server msD2Conf-01 stopped
  ** Managed Server msDA-01 stopped
  ** Administration Server AdminServer stopped
  ** Node Managed NodeManager stopped
[weblogic@weblogic_server_01 ~]$ ps -ef | grep weblogic
[weblogic@weblogic_server_01 ~]$


Once this is done, you need to retrieve the list of all Managed Servers installed/configured in this WebLogic Domain for which a manual replication is needed. For me it is pretty simple: they are printed above by the start/stop command. Otherwise you can find them like this:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/servers
[weblogic@weblogic_server_01 servers]$ ls | grep -v "domain_bak"
AdminServer
msD2-01
msD2Conf-01
msDA-01


Now that you have the list, you can proceed with the manual replication for each and every Managed Server. First back up the Embedded LDAP and then replicate it from the Primary (in the AdminServer, as explained above):

[weblogic@weblogic_server_01 servers]$ current_date=$(date "+%Y%m%d")
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ mv msD2-01/data/ldap msD2-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msD2Conf-01/data/ldap msD2Conf-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msDA-01/data/ldap msDA-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2Conf-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msDA-01/data/


When this is done, just start all WebLogic components again:

[weblogic@weblogic_server_01 servers]$ $DOMAIN_HOME/bin/startstop startAll
  ** Node Manager NodeManager started
  ** Administration Server AdminServer started
  ** Managed Server msDA-01 started
  ** Managed Server msD2Conf-01 started
  ** Managed Server msD2-01 started


And if you followed these steps properly, the Managed Servers will now be able to start normally with a replicated Embedded LDAP containing all recent changes coming from the Primary Embedded LDAP.


The post Documentum story – Replicate an Embedded LDAP manually in WebLogic appeared first on the dbi services blog.

Links for 2016-10-19 [del.icio.us]

Categories: DBA Blogs

Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc

Online Apps DBA - Thu, 2016-10-20 01:37

Critical Patch Updates (CPU) are security fixes that Oracle releases on a quarterly basis (Jan, April, July, and Oct). 1. Oracle released the Oct 2016 patches on 18th Oct 2016. 2. These CPUs cover Oracle Database, Fusion Middleware, Oracle E-Business Suite, Oracle Enterprise Manager, Oracle Siebel CRM, Oracle PeopleSoft, Oracle JD Edwards, Linux etc. 3. For the list of […]

The post Critical Patch Updates (CPU) for Oct 2016 are now available : E-Business Suite, FMW, SOA, Identity Management etc appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

View with a CTE?

Tom Kyte - Wed, 2016-10-19 21:46
Can I create a view that uses a CTE?
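
The snippet above is cut short, but the answer is yes: a CTE (subquery factoring clause) is legal inside a view's defining query. A minimal sketch with illustrative object names:

create or replace view recent_emps_v as
with recent as (
        select * from emp where hiredate > date '2016-01-01'
)
select empno, ename from recent;
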
Categories: DBA Blogs

Format for TRUNC(TIMESTAMP,<seconds>)

Tom Kyte - Wed, 2016-10-19 21:46
Is there a format specifier for TRUNCATE(timestamp,?) to seconds? The following all work: TRUNC(SYSTIMESTAMP,'MI') TRUNC(SYSTIMESTAMP,'HH') TRUNC(SYSTIMESTAMP,'MM') TRUNC(SYSTIMESTAMP,'YY') But TRUNC(SYSTIMESTAMP,'SS') and any variation I'v...
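
The snippet above is also cut short; as an aside (not the original answer), the date format elements accepted by TRUNC stop at 'MI', and a common way to truncate a timestamp to whole seconds is to cast it to DATE, since DATE carries second precision:

select cast(SYSTIMESTAMP as date) from dual;
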
Categories: DBA Blogs

How to tackle 'ORA-14024: number of partitions of LOCAL index must equal that of the underlying table' error?

Tom Kyte - Wed, 2016-10-19 21:46
Hi, I have created a plsql program which does the following. 1) Create backup tables using script from the dbms_metadata.get_ddl utility. 2) Insert records from main tables into backup tables. 3) Rename indexes on the main table. 4) Create indexes (W...
Categories: DBA Blogs

how to get regular updates on database space left and database used space in my table

Tom Kyte - Wed, 2016-10-19 21:46
How to get regular updates on database space left and database used space in my table. I have created a table with 3 columns: dbspacetotal, dbspaceused, dbspaceremaining. How to get data into these columns when I insert, update or delete in my...
Categories: DBA Blogs

running application with an app_user instead of schema owner user

Tom Kyte - Wed, 2016-10-19 21:46
Hi I have a java application that requires access to oracle database with an app_user instead of schema owner user and this is for security purposes. - The schema owner user is the user that owns oracle objects that need to be accessed from j...
Categories: DBA Blogs

Database

Tom Kyte - Wed, 2016-10-19 21:46
i learn all sql and pl/sql concept very well. but still not working on any project. so my question is please give me project or idea to develop project in oracle(SQL and PL/SQL) which helps to improve my knowledge very well... please give me the prj...
Categories: DBA Blogs

Need help on dbms_scheduler

Tom Kyte - Wed, 2016-10-19 21:46
Hi Tom, I have a scheduler which is linked to my package. The package was running for long and hence I cancelled the task. Now when I try to run the package back, the scheduler is not running. I checked in "USER_SCHEDULER_JOB_LOG" AND THE...
Categories: DBA Blogs

Quickly built new Python graph SQL execution by plan

Bobby Durrett's DBA Blog - Wed, 2016-10-19 17:51

Graph: sql_id c6m8w0rxsa92v on mydb database with plans

I created a new graph in my PythonDBAGraphs to show how a plan change affected execution time. The legend in the upper left lists the plan hash values. Normally I run the equivalent as a sqlplus script and just look for plans with higher execution times. I used it today for the SQL statement with SQL_ID c6m8w0rxsa92v. It has been running slow since 10/11/2016.

Since I just split up my Python graphs into multiple smaller scripts I decided to build this new Python script to see how easy it would be to show the execution time of the SQL statement for different plans graphically. It was not hard to build this. Here is the script (sqlstatwithplans.py):

import myplot
import util

def sqlstatwithplans(sql_id):
    q_string = """
select 
to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') DATE_TIME,
plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '""" 
    q_string += sql_id
    q_string += """'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id,plan_hash_value"""
    return q_string

database,dbconnection = util.script_startup('Graph execution time by plan')

# Get user input

sql_id=util.input_with_default('SQL_ID','acrg0q0qtx3gr')

mainquery = sqlstatwithplans(sql_id)

mainresults = dbconnection.run_return_flipped_results(mainquery)

util.exit_no_results(mainresults)

date_times = mainresults[0]
plan_hash_values = mainresults[1]
elapsed_times = mainresults[2]
num_rows = len(date_times)

# build list of distinct plan hash values

distinct_plans = []
for phv in plan_hash_values:
    string_phv = str(phv)
    if string_phv not in distinct_plans:
        distinct_plans.append(string_phv)
        
# build a list of elapsed times by plan

# create list with num plans empty lists     
                        
elapsed_by_plan = []
for p in distinct_plans:
    elapsed_by_plan.append([])
    
# for each row, append the elapsed time to the list for the row's plan
# and append None to the lists of all the other plans

for i in range(num_rows):
    plan_num = distinct_plans.index(str(plan_hash_values[i]))
    for p in range(len(distinct_plans)):
        if p == plan_num:
            elapsed_by_plan[p].append(elapsed_times[i])
        else:
            elapsed_by_plan[p].append(None)
            
# plot query
    
myplot.xlabels = date_times
myplot.ylists = elapsed_by_plan

myplot.title = "Sql_id "+sql_id+" on "+database+" database with plans"
myplot.ylabel1 = "Averaged Elapsed Seconds"
    
myplot.ylistlabels=distinct_plans

myplot.line()

Having all of the Python code for this one graph in a single file made it much faster to put together a new graph. Pretty neat.

Bobby

Categories: DBA Blogs

The hot new cloud product for true customer service

Linda Fishman Hoyle - Wed, 2016-10-19 17:15

A Guest Post by Bill Miller, Oracle product management director

It’s such a pleasure doing business with a company that has a 360-degree view of me as a customer. All my information—from different touchpoints that I’ve used to contact the company to purchase products and receive service and support—is consolidated in a master record. When a company manages my data efficiently, I tend to engage more, spend more, renew my loyalty, and tell my friends.

Managing customer data is at the crux of delivering a positive customer experience. Despite their importance, most master data management (MDM) solutions are a bit onerous. They are typically expensive on-premises deployments, which are time consuming to set up and to obtain results.

Oracle Customer Data Management Cloud (CDM) changes the game

CDM Cloud is an affordable solution for a company’s master data management challenges. It’s a single, easy-to-use application that consolidates, cleans, completes, and coordinates customer data from different systems across the enterprise—delivering a current, complete 360-degree view.

The technology has actually been around since the introduction of Fusion, strategically embedded in the Fusion CRM / OSC platform. CDM as a cloud service is new (as part of Oracle CX Cloud) and is the first truly SaaS-based, next-generation MDM platform. It leverages Oracle’s decades of experience in the MDM industry.

Oracle CDM Cloud creates a trusted master customer profile

Based on a recent survey from Experian, on average, customer data resides in at least nine different systems, making it difficult to know what is really there and what is really happening. This is mainly a result of the expanding applications ecosystem used to run any business.

Oracle CDM Cloud uses a cross-reference registry to tie data together from multiple sources to create a “best version” record. Ken Readus from eVerge Consulting says, “We see Oracle CDM as the perfect, easy-to-use solution for our clients to create a common, sharable “customer master” from all the cloud and on-premise applications that have sprung up over the years.”

Besides centralizing data from multiple systems, Oracle CDM Cloud also resolves the issue of bad or duplicate data. Most business applications, such as Salesforce, Microsoft, and SAP, don't have embedded data quality checking.

Why are so many Oracle CX, ERP, and Salesforce customers signing up for Oracle CDM?

Most companies know how a single, complete 360-degree view of the customer positively affects their customer service. But there are so many benefits that extend beyond the increased customer loyalty, retention, and reference sentiment, which I mentioned at the beginning of this post.

For instance, Oracle CDM Cloud increases customer insight. That, in turn, reduces churn. There are fewer risk/compliance issues as a result of CDM's data governance capabilities.

Also, CDM Cloud helps companies better manage their sales territories. Businesses can segment their marketing campaigns more completely. Reporting is more accurate and timely.

For More Information

To learn about the exceptional capabilities and benefits of Oracle CDM Cloud, go to this link for an overview, features, pricing, and a data sheet.

Datawarehouse ODS load is fast and easy in Enterprise Edition

Yann Neuhaus - Wed, 2016-10-19 14:56

In a previous post, a tribute to transportable tablespaces (TTS), I said that TTS is also used to move data quickly from an operational database to a datawarehouse ODS. For sure, you don't transport directly from the production database because TTS requires that the tablespace is read only. But you can transport from a snapshot standby. Both features (transportable tablespaces and Data Guard snapshot standby) are free in Enterprise Edition without any option. Here is an example to show that it's not difficult to automate.

I have a configuration with the primary database “db1a”

DGMGRL> show configuration
 
Configuration - db1
 
Protection Mode: MaxPerformance
Members:
db1a - Primary database
db1b - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)
 
DGMGRL> show database db1b
 
Database - db1b
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
db1
 
Database Status:
SUCCESS

I’ve a few tables in the tablespace USERS and this is what I want to transport to ODS database:

SQL> select segment_name,segment_type,tablespace_name from user_segments;
 
SEGMENT_NAME SEGMENT_TY TABLESPACE
------------ ---------- ----------
DEPT TABLE USERS
EMP TABLE USERS
PK_DEPT INDEX USERS
PK_EMP INDEX USERS
SALGRADE TABLE USERS

Snapshot standby

With Data Guard it is easy to open the standby database temporarily. Just convert it to a snapshot standby with a simple command:


DGMGRL> connect system/oracle@//db1b
DGMGRL> convert database db1b to snapshot standby;
Converting database "db1b" to a Snapshot Standby database, please wait...
Database "db1b" converted successfully

Export

Here you can start to do some Extraction/Load, but it's better to reduce this window where the standby is not in sync. The only thing we will do is export the tablespace in the fastest way: TTS.

First, we put the USERS tablespace in read only:

SQL> connect system/oracle@//db1b
Connected.
 
SQL> alter tablespace users read only;
Tablespace altered.

and create a directory to export metadata:

SQL> create directory TMP_DIR as '/tmp';
Directory created.

Then the export is easy:

SQL> host expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********@db1b transport_tablespaces=USERS directory=TMP_DIR
 
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u02/oradata/db1/users01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Wed Oct 19 21:03:36 2016 elapsed 0 00:00:52

I’ve the metadata in /tmp/expdat.dmp and the data in /u02/oradata/db1/users01.dbf. I copy this datafile directly to its destination for the ODS database:

[oracle@VM118 ~]$ cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

This is a physical copy, which is the fastest data movement we can do.

I’m ready to import it into my ODS database, but I can already re-sync the standby database because I extracted everything I wanted.

Re-sync the physical standby

DGMGRL> convert database db1b to physical standby;
Converting database "db1b" to a Physical Standby database, please wait...
Operation requires shut down of instance "db1" on database "db1b"
Shutting down instance "db1"...
Connected to "db1B"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "db1" on database "db1b"
Starting instance "db1"...
ORACLE instance started.
Database mounted.
Connected to "db1B"
Continuing to convert database "db1b" ...
Database "db1b" converted successfully
DGMGRL>

The duration depends on the time to flashback the changes (and we made no changes here, as we only exported) and the time to apply the redo stream generated since the convert to snapshot standby (whose duration has been kept to a minimum).

This whole process can be automated. We did that at several customers and it works well. No need to change anything unless you have new tablespaces.

Import

Here is the import into the ODS database, where I rename the USERS tablespace to ODS_USERS:

SQL> host impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at Wed Oct 19 21:06:18 2016 elapsed 0 00:00:10

Everything is there. You have all your data in ODS_USERS. You can have other data/code in this database; only the ODS_USERS tablespace has to be dropped to be re-imported. You can have your staging tables here and even permanent tables.

12c pluggable databases

In 12.1 it is even easier because the multitenant architecture gives the possibility to transport pluggable databases in one command, through file copy or database links. It is even faster because the metadata is transported physically with the PDB SYSTEM tablespace. I said multitenant architecture here and didn't mention any option: the multitenant option is needed only if you want multiple PDBs managed by the same instance. But if you want the ODS database to be an exact copy of the operational database, then you don't need any option to unplug/plug.

In 12.1 you need to put the source in read only, so you still need a snapshot standby. And from my tests, there's no problem converting it back to a physical standby after a PDB has been unplugged. In the next release we may not need a standby at all, because it has been announced that PDBs can be cloned online.
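
As a rough sketch of the 12.1 unplug/plug flow (the PDB name and paths are illustrative):

alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';

-- on the ODS side, plugging in with a file copy:
create pluggable database pdb1 using '/tmp/pdb1.xml' copy
  file_name_convert = ('/u02/oradata/db1/','/u02/oradata/ODS/');
alter pluggable database pdb1 open;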

I’ll explain the multitenant features available without any option (in the current and next 12c releases) at the Oracle Geneva office on the 23rd of November.
Do not hesitate to register by e-mail.


The post Datawarehouse ODS load is fast and easy in Enterprise Edition appeared first on the dbi services blog.

Thought Leader Webcast - Modernize Employee Engagement: Making Culture Actionable

WebCenter Team - Wed, 2016-10-19 13:38
Don't Let Your Company Culture Just Happen


Right now 7 out of 10 people in your organization are not actively engaged at work. Disengaged workforces are a global problem, and the costs are high. In the U.S. alone, companies are reporting $450 billion to $550 billion in lost productivity each year.

Join XPLANE founder and industry thought leader Dave Gray as he discusses how culture — the formal and informal values, behaviors, and beliefs practiced in an organization — can help you:
  • Understand how IT empowers Line of Business users to better engage employees
  • Create change and motivate business agility
  • Drive business results
Register Now to join us for this webcast.

Live Webcast: October 27, 2016, 10:00 AM PT / 1:00 PM ET

Featured Speakers
Dave Gray
Entrepreneur, Author, Consultant and Founder
XPLANE

Kellsey Ruppel
Principal Product Marketing Director
Oracle

Oracle Discoverer Security Alert - High impact to SOX Compliance and Financial Reporting

For those clients using Oracle Discoverer, especially those using Discoverer with the Oracle E-Business Suite for financial reporting, the October 2016 Oracle Critical Patch Update (CPU) includes a high-risk vulnerability reported by Integrigy Corporation. CVE-2016-5495 is a vulnerability in the Discoverer EUL Code and Schema and has a base score of 7.5. Integrigy believes this vulnerability affects all versions of Discoverer used with the Oracle E-Business Suite, and that the confidentiality, integrity, and availability of reports are at risk.

Oracle's recommendation is that clients migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Cloud Service, or Oracle Business Intelligence Applications. If you are still using Discoverer, Oracle recommends upgrading to Fusion Middleware 11g patch set 6 (11.1.1.7.0) and applying the October 2016 Critical Patch Update Discoverer patch (24716502). Be sure to also apply the CPU patches to WebLogic (10.3.6 and higher) and to the database supporting the WebLogic repository.

If you have any questions, please contact us at info@integrigy.com

For more information

October 2016 CPU Announcement: http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html

Patch Set Update and Critical Patch Update October 2016 Availability Document (Doc ID 2171485.1)

ALERT: Premier Support Ends Dec 31 2011 for Oracle Fusion Middleware 10g 10.1.2 & 10.1.4 (Doc Id: 1290974.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 12 (Doc Id: 1074326.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 11i (Doc Id: 1073963.1)

Vulnerability, Sarbanes-Oxley (SOX), Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

JRE 1.8.0_111/112 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Wed, 2016-10-19 11:01


Java Runtime Environment 1.8.0_111 (a.k.a. JRE 8u111-b14) and its corresponding Patch Set Update (PSU) JRE 1.8.0_112 and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

What's new in this release?

Oracle now releases a Critical Patch Update (CPU) at the same time as the corresponding Patch Set Update (PSU) release for Java SE 8.

  • CPU Release:  JRE 1.8.0_111
  • PSU Release:  JRE 1.8.0_112
Oracle recommends that Oracle E-Business Suite customers use the CPU release (JRE 1.8.0_111) and only upgrade to the PSU release (1.8.0_112) if they require a specific bug fix.  For further information and bug fix details see Java CPU and PSU Releases Explained.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in July 2016
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that the impact of enabling JRE Auto-Update differs depending on the JRE release currently installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January Critical Patch Updates, the Java Auto-Update mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), 10.10 (Yosemite), and 10.11 (El Capitan) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer causes a delay of around 20 seconds before the applet starts to load (the Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are recommended to take up this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs
