Feed aggregator

Need help on dbms_scheduler

Tom Kyte - Wed, 2016-10-19 21:46
Hi Tom, I have a scheduler which is linked to my package. The package was running for long and hence I cancelled the task. Now when I try to run the package back, the scheduler is not running. I checked in "USER_SCHEDULER_JOB_LOG" AND THE...
Categories: DBA Blogs

Quickly built new Python graph SQL execution by plan

Bobby Durrett's DBA Blog - Wed, 2016-10-19 17:51

[Graph: execution time by plan for SQL_ID c6m8w0rxsa92v on the mydb database]

I created a new graph in my PythonDBAGraphs to show how a plan change affected execution time. The legend in the upper left is plan hash value numbers. Normally I run the equivalent as a sqlplus script and just look for plans with higher execution times. I used it today for the SQL statement with SQL_ID c6m8w0rxsa92v. It has been running slow since 10/11/2016.

Since I just split up my Python graphs into multiple smaller scripts I decided to build this new Python script to see how easy it would be to show the execution time of the SQL statement for different plans graphically. It was not hard to build this. Here is the script (sqlstatwithplans.py):

import myplot
import util

def sqlstatwithplans(sql_id):
    q_string = """
select 
to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') DATE_TIME,
plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '""" 
    q_string += sql_id
    q_string += """'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id,plan_hash_value"""
    return q_string

database,dbconnection = util.script_startup('Graph execution time by plan')

# Get user input

sql_id=util.input_with_default('SQL_ID','acrg0q0qtx3gr')

mainquery = sqlstatwithplans(sql_id)

mainresults = dbconnection.run_return_flipped_results(mainquery)

util.exit_no_results(mainresults)

date_times = mainresults[0]
plan_hash_values = mainresults[1]
elapsed_times = mainresults[2]
num_rows = len(date_times)

# build list of distinct plan hash values

distinct_plans = []
for phv in plan_hash_values:
    string_phv = str(phv)
    if string_phv not in distinct_plans:
        distinct_plans.append(string_phv)
        
# build a list of elapsed times by plan

# create list with num plans empty lists     
                        
elapsed_by_plan = []
for p in distinct_plans:
    elapsed_by_plan.append([])
    
# for each row, append the elapsed time to the list for its plan
# and None to the lists for all other plans

for i in range(num_rows):
    plan_num = distinct_plans.index(str(plan_hash_values[i]))
    for p in range(len(distinct_plans)):
        if p == plan_num:
            elapsed_by_plan[p].append(elapsed_times[i])
        else:
            elapsed_by_plan[p].append(None)
            
# plot query
    
myplot.xlabels = date_times
myplot.ylists = elapsed_by_plan

myplot.title = "Sql_id "+sql_id+" on "+database+" database with plans"
myplot.ylabel1 = "Averaged Elapsed Seconds"
    
myplot.ylistlabels=distinct_plans

myplot.line()

Having all of the Python code for this one graph in a single file made it much faster to put together a new graph. Pretty neat.
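
For completeness, a run of the script looks roughly like this; the exact prompt text comes from the util module in PythonDBAGraphs, so treat it as an assumption:

$ python sqlstatwithplans.py
Enter SQL_ID or take default [acrg0q0qtx3gr]: c6m8w0rxsa92v

After the usual connection prompts from script_startup and the query returning results, myplot.line() opens the chart window with one line per plan hash value, which is the graph shown at the top of this post.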

Bobby

Categories: DBA Blogs

The hot new cloud product for true customer service

Linda Fishman Hoyle - Wed, 2016-10-19 17:15

A Guest Post by Bill Miller, Oracle product management director (pictured left)

It’s such a pleasure doing business with a company that has a 360-degree view of me as a customer. All my information—from different touchpoints that I’ve used to contact the company to purchase products and receive service and support—is consolidated in a master record. When a company manages my data efficiently, I tend to engage more, spend more, renew my loyalty, and tell my friends.

Managing customer data is at the crux of delivering a positive customer experience. Despite its importance, most master data management (MDM) solutions are onerous: they are typically expensive on-premises deployments that are time consuming to set up and slow to deliver results.

Oracle Customer Data Management Cloud (CDM) changes the game

CDM Cloud is an affordable solution for a company’s master data management challenges. It’s a single, easy-to-use application that consolidates, cleans, completes, and coordinates customer data from different systems across the enterprise—delivering a current, complete 360-degree view.

The technology has actually been around since the introduction of Fusion, strategically embedded in the Fusion CRM / OSC platform. CDM as a cloud service is new (as part of Oracle CX Cloud) and is the first truly SaaS-based, next-generation MDM platform. It leverages Oracle’s decades of experience in the MDM industry.

Oracle CDM Cloud creates a trusted master customer profile

According to a recent survey from Experian, customer data resides, on average, in at least nine different systems, making it difficult to know what is really there and what is really happening. This is mainly a result of the expanding ecosystem of applications used to run any business.

Oracle CDM Cloud uses a cross-reference registry to tie data together from multiple sources to create a “best version” record. Ken Readus from eVerge Consulting says, “We see Oracle CDM as the perfect, easy-to-use solution for our clients to create a common, sharable “customer master” from all the cloud and on-premise applications that have sprung up over the years.”

Besides centralizing data from multiple systems, Oracle CDM Cloud also resolves the issue of bad or duplicate data. Most business applications, such as Salesforce, Microsoft, and SAP, don’t have the embedded function to check data quality.

Why are so many Oracle CX, ERP, and Salesforce customers signing up for Oracle CDM?

Most companies know how a single, complete 360-degree view of the customer positively affects their customer service. But there are so many benefits that extend beyond the increased customer loyalty, retention, and reference sentiment, which I mentioned at the beginning of this post.

For instance, Oracle CDM Cloud increases customer insight. That in turn, reduces churn. There are fewer risk / compliance issues as a result of CDM’s data governance capabilities.

Also, CDM Cloud helps companies better manage their sales territories. Businesses can segment their marketing campaigns more completely. Reporting is more accurate and timely.

For More Information

To learn about the exceptional capabilities and benefits of Oracle CDM Cloud, go to this link for an overview, features, pricing, and a data sheet.

Datawarehouse ODS load is fast and easy in Enterprise Edition

Yann Neuhaus - Wed, 2016-10-19 14:56

In a previous post, a tribute to transportable tablespaces (TTS), I said that TTS is also used to move data quickly from an operational database to a data warehouse ODS. Of course, you don't transport directly from the production database, because TTS requires the tablespace to be read only. But you can transport from a snapshot standby. Both features (transportable tablespaces and Data Guard snapshot standby) are free in Enterprise Edition without any option. Here is an example showing that it's not difficult to automate.

I have a configuration with the primary database “db1a”

DGMGRL> show configuration
 
Configuration - db1
 
Protection Mode: MaxPerformance
Members:
db1a - Primary database
db1b - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)
 
DGMGRL> show database db1b
 
Database - db1b
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
db1
 
Database Status:
SUCCESS

I have a few tables in the USERS tablespace, and this is what I want to transport to the ODS database:

SQL> select segment_name,segment_type,tablespace_name from user_segments;
 
SEGMENT_NAME SEGMENT_TY TABLESPACE
------------ ---------- ----------
DEPT TABLE USERS
EMP TABLE USERS
PK_DEPT INDEX USERS
PK_EMP INDEX USERS
SALGRADE TABLE USERS

Snapshot standby

With Data Guard it is easy to open temporarily the standby database. Just convert it to a snapshot standby with a simple command:


DGMGRL> connect system/oracle@//db1b
DGMGRL> convert database db1b to snapshot standby;
Converting database "db1b" to a Snapshot Standby database, please wait...
Database "db1b" converted successfully

Export

Here you could start to do some extraction/load work, but it is better to keep the window where the standby is not in sync as short as possible. The only thing we will do is export the tablespace in the fastest way: TTS.

First, we put the USERS tablespace in read only:

SQL> connect system/oracle@//db1b
Connected.
 
SQL> alter tablespace users read only;
Tablespace altered.

and create a directory to export metadata:

SQL> create directory TMP_DIR as '/tmp';
Directory created.

Then export is easy

SQL> host expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********@db1b transport_tablespaces=USERS directory=TMP_DIR
 
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u02/oradata/db1/users01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Wed Oct 19 21:03:36 2016 elapsed 0 00:00:52

I have the metadata in /tmp/expdat.dmp and the data in /u02/oradata/db1/users01.dbf. I copy this datafile directly to its destination for the ODS database:

[oracle@VM118 ~]$ cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

This is a physical copy, which is the fastest data movement we can do.

I'm ready to import it into my ODS database, but I can already re-sync the standby database because I extracted everything I wanted.

Re-sync the physical standby

DGMGRL> convert database db1b to physical standby;
Converting database "db1b" to a Physical Standby database, please wait...
Operation requires shut down of instance "db1" on database "db1b"
Shutting down instance "db1"...
Connected to "db1B"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "db1" on database "db1b"
Starting instance "db1"...
ORACLE instance started.
Database mounted.
Connected to "db1B"
Continuing to convert database "db1b" ...
Database "db1b" converted successfully
DGMGRL>

The duration depends on the time needed to flash back the changes (and we made no changes here, as we only exported) and the time to apply the redo stream generated since the conversion to snapshot standby (a window we kept as short as possible).

This whole process can be automated. We have done that at several customer sites and it works well. No need to change anything unless you add new tablespaces.
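
To illustrate, here is a minimal shell sketch of such an automation, reusing the names from this example (db1b, USERS, TMP_DIR and the /u02/oradata paths). Credentials and privileges are simplified to match the commands shown above, and error handling, dump file cleanup and the drop of a previously imported ODS_USERS tablespace are deliberately left out:

#!/bin/bash
set -e

# 1. Open the standby temporarily as a snapshot standby
dgmgrl -silent system/oracle@//db1b "convert database db1b to snapshot standby;"

# 2. Put the tablespace in read only and export its metadata (TTS)
echo "alter tablespace users read only;" | sqlplus -s system/oracle@//db1b
expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR

# 3. Copy the datafile to the ODS destination
cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

# 4. Re-sync the standby as soon as the export and the copy are done
dgmgrl -silent system/oracle@//db1b "convert database db1b to physical standby;"

# 5. Plug the tablespace into the ODS database, renamed to ODS_USERS
impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS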

Import

Here is the import to the ODS database and I rename the USERS tablespace to ODS_USERS:

SQL> host impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at Wed Oct 19 21:06:18 2016 elapsed 0 00:00:10

Everything is there. You have all your data in ODS_USERS. You can have other data/code in this database. Only the ODS_USERS tablespace has to be dropped before it can be re-imported. You can have your staging tables here and even permanent tables.

12c pluggable databases

In 12.1 it is even easier, because the multitenant architecture makes it possible to transport a pluggable database in one command, through file copy or database links. It is even faster because the metadata is transported physically with the PDB SYSTEM tablespace. I said multitenant architecture here and didn't mention any option: the multitenant option is needed only if you want multiple PDBs managed by the same instance. But if you want the ODS database to be an exact copy of the operational database, then you don't need any option to unplug/plug.

In 12.1 you need to put the source in read only mode, so you still need a snapshot standby. And from my tests, there is no problem converting it back to a physical standby after a PDB has been unplugged. In the next release, we may not need a standby at all, because it has been announced that PDBs can be cloned online.
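
For reference, here is a minimal unplug/plug sketch in 12.1; the PDB names (PDB1, ODS_PDB1), the XML manifest path and the file_name_convert paths are hypothetical, and the statements are run as SYSDBA on the source and target CDBs:

-- On the source CDB (for example the snapshot standby), unplug the PDB:
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/tmp/pdb1.xml';

-- On the ODS CDB, plug it in, copying the datafiles to the ODS location:
create pluggable database ODS_PDB1 using '/tmp/pdb1.xml'
  copy file_name_convert = ('/u02/oradata/db1/', '/u02/oradata/ODS/');
alter pluggable database ODS_PDB1 open;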

I'll explain the multitenant features available without any option (in the current and next 12c releases) at the Oracle Geneva office on the 23rd of November:

[Image: announcement of the 23 November breakfast event]
Do not hesitate to register by e-mail.

 

Cet article Datawarehouse ODS load is fast and easy in Enterprise Edition est apparu en premier sur Blog dbi services.

Thought Leader Webcast - Modernize Employee Engagement: Making Culture Actionable

WebCenter Team - Wed, 2016-10-19 13:38
Oracle Webcast - Making Culture Actionable: Don't Let Your Company Culture Just Happen


Right now 7 out of 10 people in your organization are not actively engaged at work. Disengaged workforces are a global problem, and the costs are high. In the U.S. alone, companies are reporting $450 billion to $550 billion in lost productivity each year.

Join XPLANE founder and industry thought leader Dave Gray as he discusses how culture — the formal and informal values, behaviors, and beliefs practiced in an organization — can help you:
  • Understand how IT empowers Line of Business users to better engage employees
  • Create change and motivate business agility
  • Drive business results
Register Now to join us for this live webcast.

October 27, 2016
10:00 AM PT / 1:00 PM ET

Featured Speakers
Dave Gray
Entrepreneur, Author, Consultant and Founder
XPLANE

Kellsey Ruppel
Principal Product Marketing Director
Oracle

Oracle Discoverer Security Alert - High impact to SOX Compliance and Financial Reporting

For those clients using Oracle Discoverer, especially those using Discoverer with the Oracle E-Business Suite for financial reporting, the October 2016 Oracle Critical Patch Update (CPU) includes a high-risk vulnerability reported by Integrigy Corporation. CVE-2016-5495 is a vulnerability in the Discoverer EUL Code and Schema and has a CVSS base score of 7.5. Integrigy believes this vulnerability affects all versions of Discoverer used with the Oracle E-Business Suite and that the confidentiality, integrity, and availability of reports are at risk.

Oracle's recommendation is that clients migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Cloud Service, or Oracle Business Intelligence Applications. If you are still using Discoverer, Oracle recommends upgrading to Fusion Middleware 11g patch set 6 (11.1.1.7.0) and to apply the October 2016 Critical Patch Update Discoverer patch (24716502). Be sure to also apply the CPU patches to WebLogic (10.3.6 and higher) and the database supporting the WebLogic repository.

If you have any questions, please contact us at info@integrigy.com

For more information

October 2016 CPU Announcement: http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html

Patch Set Update and Critical Patch Update October 2016 Availability Document (Doc ID 2171485.1)

ALERT: Premier Support Ends Dec 31 2011 for Oracle Fusion Middleware 10g 10.1.2 & 10.1.4 (Doc Id: 1290974.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 12 (Doc Id: 1074326.1)

Using Discoverer 11.1.1 with Oracle E-Business Suite Release 11i (Doc Id: 1073963.1)

Vulnerability, Sarbanes-Oxley (SOX), Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

JRE 1.8.0_111/112 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Wed, 2016-10-19 11:01

Java logo

Java Runtime Environment 1.8.0_111 (a.k.a. JRE 8u111-b14) and its corresponding Patch Set Update (PSU) JRE 1.8.0_112 and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

What's new in this release?

Oracle now releases a Critical Patch update (CPU) at the same time as the corresponding Patch Set Update (PSU) release for Java SE 8.

  • CPU Release:  JRE 1.8.0_111
  • PSU Release:  JRE 1.8.0_112
Oracle recommends that Oracle E-Business Suite customers use the CPU release (JRE 1.8.0_111) and only upgrade to the PSU release (1.8.0_112) if they require a specific bug fix.  For further information and bug fix details see Java CPU and PSU Releases Explained.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in July 2016
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that there are different impacts of enabling JRE Auto-Update depending on your current JRE release installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), 10.10 (Yosemite), and 10.11 (El Capitan) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer will have a delay of around 20 seconds before the applet starts to load (Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74. Internet Explorer users are recommended to upgrade to this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8 and can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

References

Related Articles
Categories: APPS Blogs

Feedback on my session at Oracle Open World 2016

Yann Neuhaus - Wed, 2016-10-19 11:00

I was a speaker at Oracle Open World and received the feedback and demographic data. The session took place on Sunday, the User Group Forum day, and was about Multitenant: defining what the multitenant architecture is and which features it brings to us even when we don't have the multitenant option. Unfortunately, I cannot upload the slides before the next 12c release is available. If you missed the session or want to hear it in my native language, I'll give it in Geneva on the 23rd of November at the Oracle Switzerland office.

Here is the room while we were setting up the demo on my laptop; according to the demographic statistics below, 84 people attended (or planned to attend) my session.

[Photo: the session room during demo setup]

Feedback survey

Depending on the conference, the percentage of people who fill in the feedback ranges from low to very low. Here 6 people gave feedback, which is 7% of the attendees:

Number of Respondents: 6
Q1: How would you rate the content of the session? (select a rating of 1 to 3, 3 being the best): 2.67
Q2: How would you rate the speaker(s) of the session? (select a rating of 1 to 3, 3 being the best): 2.83
Q3: Overall, based on content and speakers I would rate this session as (select a rating of 1 to 3, 3 being the best): 2.67

Thanks for this. But quality matters more than quantity. I received only one comment and it is very important for me because it can help me to improve my slides:

Heavy accent was difficult to understand at times where I lost interest/concentration. Ran through slides too quick (understanding time constraints). Did not allow image capturing (respected). Did provide examples which was nice. Advised slides will be downloadable…to be seen.

The accent is not a surprise. It's an international event, and many speakers coming from all around the world have accents that can be difficult to understand. I would love to speak English more clearly, but I know that my French accent is there, and my lack of vocabulary as well. That will be hard to change. But the remark about the slides is very pertinent. I usually put a lot of material in my presentations: lots of slides, lots of text, lots of demos. My idea is that you don't need to read everything that is written; it is there to read later when you download the slides (I expected 12cR2 to be fully available by OOW when I prepared the slides). It's also there in case my live demos fail, so that I have the info on the slides, but I usually skip those slides quickly when everything was already seen in the demo.

But thanks to this comment, I understand that reading the slides is important when you don’t get what I say, and having too much text makes it difficult to follow. For future presentations, I’ll remove text from slides and put it as powerpoint presenter notes, made available in the pdf.

So thanks to the person who wrote this comment. I'll improve that. And don't hesitate to ping me to know when the slides can be downloaded, and maybe I can already share a few of them.

Demographic data

Open World provides some demographic data about attendees. As you don't have to scan your badge for the Sunday sessions, I suppose it covers people who registered and may not actually have been there. But intention counts ;)

About countries: we were in the US, so that's the main country represented here. Next come 6 people from Switzerland, the country where I live and work:

[Chart: attendees by country]

When we register for OOW we fill in the industry we are working in. The most represented in the room were Financial, Healthcare and High Tech:

[Chart: attendees by industry]

And the job title, which is a free-text field, has many different values, which makes it difficult to aggregate:

[Chart: attendees by job title]

It's no surprise that there were a lot of DBAs. I'm happy to see some managers/supervisors interested in technical sessions.
My goal for future events is to get more attention from developers because a database is not a black box storage for data, but a full software where data is processed.

I don't think that 84 people were actually in that room; there were several good sessions at the same time, as well as the bridge run.

[Photo: one of the session slides]

This is the kind of slide where there's a lot of text but I go fast. I initially had 3 slides about this point (feature usage detection, multitenant and CON_IDs). I removed some and kept one with too much text. When I remove slides, I usually post a blog about what I won't have time to detail.

Here are those posts:

http://blog.dbi-services.com/unplugged-pluggable-databases/
http://blog.dbi-services.com/how-to-check-multitenant-option-feature-usage/

Thanks

My session was part of the stream selected by the EMEA Oracle Usergroup Community. Thanks a lot to EOUC. They have good articles in their newly created magazine, www.oraworld.org. Similar name but nothing to do with the team of worldwide OCMs and ACEs that has been publishing for years as the OraWorld Team.

 

Cet article Feedback on my session at Oracle Open World 2016 est apparu en premier sur Blog dbi services.

JRE 1.7.0_111 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Wed, 2016-10-19 10:54

Java logo

Java Runtime Environment 1.7.0_111 (a.k.a. JRE 7u111) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

EBS + Discoverer 11g Users

This JRE release is certified for Discoverer 11g in E-Business Suite environments with the following minimum requirements:

JRE 7 End of Public Updates

The JRE 7u79 release was the last JRE 7 update available to the general public.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 7 updates until the end of Java SE 7 Premier Support in July 2016.

How can EBS customers obtain Java 7 updates after the public end-of-life?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

References

Related Articles
Categories: APPS Blogs

JRE 1.6.0_131 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Wed, 2016-10-19 10:50
Java logo

The latest Java Runtime Environment 1.6.0_131 (a.k.a. JRE 6u131-b14) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

New EBS installation scripts

This JRE release is the first with a 3-digit Java version. Installing this in your EBS 11i and 12.x environments will require new installation scripts.  See the documentation listed in the 'References' section for more detail.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Deploying JRE documentation for your EBS release for details.

Implications of Java 6 End of Public Updates for EBS Users

The Support Roadmap for Oracle Java is published here:

The latest updates to that page (as of Sept. 19, 2012) state:

Java SE 6 End of Public Updates Notice

After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support .

What does this mean for Oracle E-Business Suite users?

EBS users fall under the category of "enterprise users" above.  Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017.

In other words, nothing changes for EBS users after February 2013. 

EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017. 

How can EBS customers obtain Java 6 updates after the public end-of-life?

Java 6 is now available only via My Oracle Support for E-Business Suite users.  You can find links to this release, including Release Notes, documentation, and the actual Java downloads, there. Both JDK and JRE packages are contained in a single combined download after 6u45.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

Mac users running Mac OS X 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 is covered by Extended Support until June 2017.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

References

Related Articles

Categories: APPS Blogs

Tim Gorman at AZORA meeting tomorrow in Scottsdale

Bobby Durrett's DBA Blog - Wed, 2016-10-19 10:34
Arizona Oracle User Group – October 20, 2016

Thursday, Oct 20, 2016, 12:30 PM

Republic Services – 3rd Floor Conference Room
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ

16 AZORAS Attending

Change In Plans - Tim Gorman comes to Phoenix! Stephen Andert had a sudden business commitment making it impossible for him to speak at Thursday's meeting. Fortunately, Tim Gorman of Delphix will be coming from Denver to speak instead. Tim is an internationally-renowned speaker, performance specialist, member of the Oak Table, Oracle Ace Director, …

Check out this Meetup →

Phoenix area readers – I just found out that Oracle performance specialist and Delphix employee Tim Gorman will be speaking at the Arizona User Group meeting tomorrow in Scottsdale.  I am looking forward to it.

Bobby

Categories: DBA Blogs

Considering Cloud DBMS Systems? Choose Your Architecture Wisely!

Chris Foot - Wed, 2016-10-19 09:08

Databases in the Cloud
Technology leaders are being inundated with a flood of new cloud architectures, strategies and products, all guaranteed by vendors and various industry pundits to solve all of our database challenges.  The seemingly endless array of public cloud-based DBMS offerings can quickly become bewildering.  This article is intended to peel back the veil on cloud-based DBMS offerings by sharing our experiences with cloud database architectures.

Documentum xPlore – ftintegrity tool not working

Yann Neuhaus - Wed, 2016-10-19 08:38

I've been patching some xPlore servers for a while and recently I ran into an issue with the ftintegrity tool. Maybe you noticed it as well: with xPlore 1.5 Patch 15, the ftintegrity tool available in $DSEARCH_HOME/setup/indexagent/tools was broken by the patch.
I think the libraries were changed for some reason and the tool wasn't able to load anymore. I asked EMC and they said it was a known bug which would be fixed in the next release.

So I patched again when Patch 17 came out (our customer's processes don't allow us to patch every month, so I skipped Patch 16). After patching, I directly started the ftintegrity tool to check that everything was fixed, and... no.

In fact it is fixed, but you have something to do before it works. The errors look like 'Could not load because config is NULL' or 'dfc.properties not found'. I found these errors rather strange, so I wondered whether the ftintegrity script had been patched as well. The answer is no: the script is still the same, but the jar libraries have been changed, which means that the script points to the wrong libraries and cannot load properly.

Thus the solution is simple: I uninstalled the Index Agent and installed it again right after. The ftintegrity script was re-created with the correct pointers to the new libraries. Little tip: if you have several Index Agents and don't want to re-install them all, you may want to copy the content of the updated ftintegrity script and paste it into the other instances (do not forget to adapt it, because it may point to different docbases).

To summarize, if you have issues executing the ftintegrity tool, check the library calls in the script and try re-installing the Index Agent in order to get the latest version of the script.

 

Cet article Documentum xPlore – ftintegrity tool not working est apparu en premier sur Blog dbi services.

Easing into the Cloud for Oracle WebCenter Investments

WebCenter Team - Wed, 2016-10-19 08:17

Author: Marcus Diaz, Senior Principal Product Manager, Oracle

For customers who have invested in traditional on-premise deployments of IT applications to support their business objectives, the growing trend and push to the cloud can often appear to be a daunting challenge. But the good news is that it is not an “all or nothing” proposition when you consider taking a hybrid approach as you step towards cloud adoption. As Figure 1 below shows, 71% of customers are taking a hybrid cloud approach, which includes the use of a conventional on-premise IT application in conjunction with some form of cloud technology.

Figure 1: Hybrid Cloud Adoption

Lift and Shift is a term that is being used quite a bit these days in talking about the push to the cloud by various cloud vendors. While exact definitions vary, we at Oracle use this term to mean copying (or cloning) an on-premises implementation of Oracle WebCenter Content, Imaging, Portal or Sites environment into Oracle’s public cloud offerings. The targets for these environments could be either Oracle’s Infrastructure as a Service (IaaS) Compute cloud or Oracle’s Java Cloud Service (JCS).

All of the current 12c releases of the WebCenter product family (Content, Imaging, Capture, Portal & Sites) are certified to run in these Oracle cloud environments.

Oracle’s Infrastructure as a Service (IaaS) Compute cloud environment is a cloud based virtualization service for deploying your applications. You get a virtualized operating system environment with a number of pre-allocated CPU cores (Oracle calls them OCPU’s) and storage running on Oracle cloud infrastructure. As an administrator, you would do all the same things that you would have done to install the Oracle WebCenter Stack into your own hardware but instead you do it remotely using a secure connection to your Oracle cloud environment.

Oracle's Java Cloud Service (JCS) cloud environments are similar to the IaaS/Compute cloud with one big difference: JCS comes with a pre-provisioned Oracle WebLogic Application Server. You still get a virtualized operating system environment with a number of pre-allocated CPU cores, but instead of starting the application installation process at the operating system level, you start with an already running WLS server and only need to install the middleware applications. The additional benefit of the JCS environment is that WLS application server functions like monitoring, backup, update and scale-out are integrated into the JCS cloud administration console.

When you lift and shift your WebCenter on-premise applications to the cloud, these cloud instances can be used for testing, development, or production environments. For production environments, the standard high availability reference architectures that the WebCenter products document and support for clustering and load balancing are supported as well. Figure 2 below shows the Oracle WebCenter Content & Imaging reference high availability architecture.

Figure 2: Oracle WebCenter Content & Imaging Reference HA architecture

From a licensing perspective, it's a case of Bring Your Own License (BYOL), meaning you can re-allocate the existing on-premise perpetual processor licenses that you already own and re-deploy them on Oracle's cloud infrastructure instead of your own hardware. What you pay for is a subscription cost for the virtual cloud CPUs and storage that you use in the Oracle cloud. You can pay as you go (metered subscription) for the resources you use, or you can prepay (non-metered) for a fixed set of cloud resources.

In summary, with the availability of “lift-n-shift” support in Oracle’s compute cloud environments you’ve now got the best of both worlds – the flexibility of the Oracle cloud for scaling and off-loading your infrastructure costs while continuing to be able to use your existing WebCenter product family investments and the applications you’ve built around them.

To get started, connect with us via your Oracle WebCenter account representative or your Oracle point of contact. We will drive strategy sessions to work on a roadmap that works best for your short- and long-term needs. And, get the best of both worlds - leveraging your existing investments and benefits of the Cloud.

We know you have questions so please do reach out to us via the comments section or through your point of contact. For more information on our Content and Experience cloud portfolio, visit us at oracle.com/dx.

Data Visualization Desktop 12.2.2.0

Rittman Mead Consulting - Wed, 2016-10-19 06:17

Yesterday Data Visualization Desktop (DVD) version 12.2.2.0 was released. Since its first release, DVD has aimed to extend Oracle's Data Visualization portfolio by adding a desktop tool that brings data visualization capabilities directly to end users without the intervention of the IT department, in line with Gartner's bimodal IT.

The new version adds several capabilities to the existing product, such as new visualization types, data sources and a wrangling option. This post shares the details of the new release's additional features.

Installation

After downloading DVD, the installation is pretty simple: just double-click the Oracle_Data_Visualization_Desktop_V2_12_2_2_0_0.msi file, choose the installation folder and click "Install".

Installation end

Once the installation is finished, be aware of the message on the last screen: it says that the correct version of R and a set of packages need to be installed in order to use DVD's Advanced Analytics features. Those can be installed via the "Install Advanced Analytics" shortcut placed in Start Menu -> Programs -> Oracle.

This setup lets you choose the R installation location, installs R and then downloads the relevant packages from cran.us.r-project.org.

R Setup

New Visualisations

The first set of improvements in the new release concerns the out-of-the-box visualisations; a new set of graphs is now available:

  • List: Shows a list of the dimension's values together with a gradient colouring based on the measure selected

List View

  • Parallel Coordinates: Shows multiple dimensions on the same chart enhancing the ability to quickly get an insight about possible connections between them

Parallel View

  • Timeline: It's an effective way of showing time related facts, each fact is shown along a timeline, with one or more distinguishing attributes, the example shows the quantity shipped by day and city.

Timeline View

  • Network Diagrams: Chord, Circular, Network and Sankey diagrams are used to show inter-relationships between elements

Network Views

Other visual enhancements include a multi-canvas layout that can be exported with a single click and a hierarchical or gradient colouring for the charts.

Data Sources

A lot of new data sources have been added to DVD, some of them still in beta. Several new databases are now supported, such as Netezza, Amazon Aurora and PostgreSQL.

An interesting enhancement is the connection to Dropbox and Google Drive, allowing DVD to source files stored in the cloud. Finally, DVD's exposure to the Big Data world has been enhanced by adding connectivity to tools such as Apache Drill, Presto and Cassandra.

DVD Data Sources

Excel Editing

Excel sheets used as a data source can now be edited and the DVD project refreshed without manually reloading the spreadsheet.

Data Flows

There is a new component in DVD called Data Flow that allows the end user to apply some basic transformations to the data, such as joining two datasets (even if they come from different sources), filtering, aggregating, adding columns based on custom formulas, and storing the result on the local file system.

DVD Data Flows Options

In the example below two files coming from Hive (but the source can also be different) are joined and a subset of columns is selected and stored locally.

DVD Data Flows Options

Data Flows can be stored in DVD and re-executed upon request. The list of Data Flows is available under Data Sources -> Data Flows. In the next blog post I'll show a typical analyst use case in which Data Flow can help automate a series of data loading, cleansing and enrichment steps.

Data Insights

Data Insights provides a way of quickly understanding the available dataset. By default it shows a series of graphs, one for every attribute, with the cardinality of each attribute's values. A drop-down menu allows you to show the same graphs based on any measure defined in the dataset.

DVD Data Insights

BI Ask

The new DVD version also contains BI Ask, providing the ability to create queries in natural language, which are automatically interpreted and presented in suggested visualisations.

BI Ask

As you can read in this post, the new version of Data Visualization Desktop adds a series of really interesting features enabling not only data visualisation but also data exploration and wrangling. In the next blog post we'll see a typical DVD use case and how the new Data Flow option can be used to combine data coming from various sources.

Categories: BI & Warehousing

Oracle Critical Patch Update for October 2016

Syed Jaffar - Wed, 2016-10-19 05:08
The Critical Patch Update for October 2016 was released on October 18th, 2016. Oracle strongly recommends applying the patches as soon as possible. 

Visit the URL below for more updates

http://www.oracle.com/us/dm/sev100575522-na-ww-ot-de1-ev-3253663.html?elq_mid=59973&sh=0802222319060808261813292520100701&cmid=SPPT160711P00036

Documentum story – Jobs in a high availability installation

Yann Neuhaus - Wed, 2016-10-19 04:55

When you have an installation with one Content Server (CS), you do not need to care where a job runs: it is always on your single CS.
But how should you configure the jobs when you have several Content Servers? Which jobs have to be executed and which ones not? Let's look at that in this post.

When you have to run your jobs in a high availability installation you have to configure some files and objects.

Update the method_verb of the dm_agent_exec method:

API> retrieve,c,dm_method where object_name = 'agent_exec_method'
API> get,c,l,method_verb
API> set,c,l,method_verb
SET> ./dm_agent_exec -enable_ha_setup 1
API> get,c,l,method_verb
API> save,c,l
API> reinit,c

 

The java methods have been updated to be restartable:

update dm_method object set is_restartable=1 where method_type='java';

 

On our installation we use jms_max_wait_time_on_failures = 300 instead of the default value (3000).
In server.ini (primary Content Server) and server_HOSTNAME2_REPO01.ini (remote Content Server), we have:

incremental_jms_wait_time_on_failure=30
jms_max_wait_time_on_failures=300

 

Based on some issues we faced, for instance with the dce_clean job that ran twice when we had both JMS projected to each CS, EMC advised us to project each JMS to its local CS only. With this configuration, in case the JMS is down on the primary CS, the job (using a java method) is started on the remote JMS via the remote CS.

Regarding which jobs have to be executed, I am describing only the ones used for housekeeping.
So the question to answer is which job does what and what is "touched": metadata, content, or both.

To verify that, check how many CS are used and where they are installed:

select object_name, r_host_name from dm_server_config

object_name         r_host_name
REPO1               HOSTNAME1.DOMAIN
HOSTNAME2_REPO1     HOSTNAME2.DOMAIN

 

Verify on which CS the jobs will run and “classify” them.
Check the job settings:

select object_name, target_server, is_inactive from dm_job
Metadata

The following jobs work only on metadata; they can run anywhere, so the target_server has to be empty:

object_name              target_server    is_inactive
dm_ConsistencyChecker                     False
dm_DBWarning                              False
dm_FileReport                             False
dm_QueueMgt                               False
dm_StateOfDocbase                         False

 

Content

The following jobs work only on content.

object_name                        target_server
dm_ContentWarning                  REPO1.REPO1@HOSTNAME1.DOMAIN
dm_ContentWarningHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN
dm_DMClean                         REPO1.REPO1@HOSTNAME1.DOMAIN
dm_DMCleanHOSTNAME2_REPO1          REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN

As we are using a NAS for the Data directory, which is shared between both servers, only one of the two jobs has to run. By default the target_server is defined, so for the job that has to run, target_server has to be emptied.

object_name                        target_server                            is_inactive
dm_ContentWarning                                                           False
dm_ContentWarningHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_DMClean                                                                  False
dm_DMCleanHOSTNAME2_REPO1          REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True

Metadata and Content

The following jobs work on both metadata and content.

object_name                    target_server
dm_DMFilescan                  REPO1.REPO1@HOSTNAME1.DOMAIN
dm_DMFilescanHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN
dm_LogPurge                    REPO1.REPO1@HOSTNAME1.DOMAIN
dm_LogPurgeHOSTNAME2_REPO1     REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN

Filescan scans the NAS content storage. As said above, it is shared, and therefore the job only needs to be executed once: the target_server has to be empty so it can run on any server.

LogPurge also cleans files under $DOCUMENTUM/dba/log and its subfolders, which are obviously not shared; therefore both dm_LogPurge jobs have to run. Just use different start times to avoid an overlap when objects are removed from the repository.

object_name                    target_server                            is_inactive
dm_DMFilescan                                                           False
dm_DMFilescanHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_LogPurge                    REPO1.REPO1@HOSTNAME1.DOMAIN             False
dm_LogPurgeHOSTNAME2_REPO1     REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   False

With this configuration, your housekeeping jobs should be correctly set up.

One point to be careful about is when you use DA to configure your jobs. Once you open the job properties, the "Designated Server" is set to one of your servers and not to "Any Running Server" (which means target_server = ' '). If you click the OK button, you will set the target server, and in case this CS is down, the job will fail because it cannot use the second CS. So after saving a job from DA, it is worth re-running the dm_job query above to make sure target_server is still empty for the jobs that should run anywhere.

 

The post Documentum story – Jobs in a high availability installation appeared first on Blog dbi services.

Get the hostname of the executing server in BPEL

Darwin IT - Wed, 2016-10-19 04:48
This week I got involved in a question on the Oracle Forums about getting the hostname of the server executing the BPEL process. In itself this is not possible in BPEL. Also, if you have a long-running async process, the process gets dehydrated at several points (at a receive, wait, etc.). After an incoming signal, another server could process it further. You can't be sure that one server will process it to the end.

However, using Java, you can get the hostname of an executing server, quite easily. @AnatoliAtanasov suggested this question on stackOverflow. I thought that it would be fun to try this out.

Although you can opt for creating an embedded java activity, I used my earlier article on SOA and Spring Contexts to have it in a separate bean. By the way, in contrast to my suggestions in the article, you don't have to create a separate spring context for every bean you use.

My java bean looks like:
package nl.darwinit.soasuite;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServerHostBeanImpl implements IServerHostBean {

    public ServerHostBeanImpl() {
        super();
    }

    public String getHostName(String hostNameDefault) {
        String hostName;
        try {
            InetAddress addr = InetAddress.getLocalHost();
            hostName = addr.getHostName();
        } catch (UnknownHostException ex) {
            System.out.println("Hostname can not be resolved");
            hostName = hostNameDefault;
        }
        return hostName;
    }
}

The interface class I generated is:
package nl.darwinit.soasuite;

public interface IServerHostBean {
    String getHostName(String hostNameDefault);
}

Then I defined a Spring Context, getHostNameContext, with the following content
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tool http://www.springframework.org/schema/tool/spring-tool.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://xmlns.oracle.com/weblogic/weblogic-sca META-INF/weblogic-sca.xsd">
<!--Spring Bean definitions go here-->
<sca:service name="GetHostService" target="ServerHostBeanImpl" type="nl.darwinit.soasuite.IServerHostBean"/>
<bean id="ServerHostBeanImpl" class="nl.darwinit.soasuite.ServerHostBeanImpl"/>
</beans>

After wiring the context to my BPEL the composite looks like:


Then, deploying and running it, gives the following output:


Nice, isn't it?

Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer

Yann Neuhaus - Wed, 2016-10-19 04:32

Load Balancers do not provide the client IP address by default, so the WebLogic HTTP log file (access_log) records the Load Balancer IP address instead of the client one.
This is sometimes a problem when diagnosing issues, and the Single Sign On configuration does not provide the user name in the HTTP log either.

In most cases, the Load Balancer can provide an additional header named "X-Forwarded-For", but it needs to be configured by the Load Balancer administrators.
If the "X-Forwarded-For" header is provided, it can be fetched using the WebLogic Server HTTP extended logging.

To enable the WebLogic Server HTTP logging to fetch the "X-Forwarded-For" header, follow the steps below for each WebLogic Server in the WebLogic Domain (a scripted alternative is shown after the list):

  1. Browse to the WebLogic Domain administration console and sign in as an administrator user
  2. Open the servers list and select the first managed server
  3. Select the logging TAB and the HTTP sub-tab
  4. Open the advanced folder and change the format to “extended” and the Extended Logging Format Fields to:
    "cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes"
  5. Save
  6. Browse back to the servers list and repeat the steps for each WebLogic Server from the domain placed behind the load balancer.
  7. Activate the changes.
  8. Stop and restart the complete WebLogic domain.
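
If you have many managed servers, the same change can also be scripted. Below is a minimal WLST sketch (WLST scripts are written in Jython/Python) that applies the extended logging settings to every server of the domain. The admin URL, credentials and the MBean attribute names (LogFileFormat, ELFFields) are assumptions to be verified against your WebLogic version; as in the manual steps, restart the domain afterwards.

# Minimal WLST sketch; admin URL, credentials and attribute names are assumptions
connect('weblogic', 'welcome1', 't3://adminhost:7001')

edit()
startEdit()

servers = cmo.getServers()
for server in servers:
    name = server.getName()
    # Navigate to the HTTP log configuration of this server
    cd('/Servers/' + name + '/WebServer/' + name + '/WebServerLog/' + name)
    cmo.setLogFileFormat('extended')
    cmo.setELFFields('cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes')
    cd('/')

save()
activate()
disconnect()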

After this, the WebLogic Servers HTTP Logging (access_log) should display the client IP address and not the Load Balancer one.

When using the WebLogic Server extended HTTP logging, the username field is not available any more.
This feature is described in the following Oracle MOS article:
Missing Username In Extended Http Logs (Doc ID 1240135.1)

To get the authenticated username displayed, an additional custom field provided by a custom Java class needs to be used.

Here is an example of such a Java class:

import weblogic.servlet.logging.CustomELFLogger;
import weblogic.servlet.logging.FormatStringBuffer;
import weblogic.servlet.logging.HttpAccountingInfo;

/* This example outputs the authenticated user name into a
   custom field called MyCustomUserNameField
*/

public class MyCustomUserNameField implements CustomELFLogger {

  public void logField(HttpAccountingInfo metrics,
      FormatStringBuffer buff) {
    buff.appendQuotedValueOrDash(metrics.getRemoteUser());
  }
}

The next step is to compile the class and create a jar library.

Set the environment by running the WebLogic setWLSEnv.sh script.

javac MyCustomUserNameField.java

jar cvf MyCustomUserNameField.jar MyCustomUserNameField.class

Once done, copy the jar library file under the WebLogic Domain lib directory. This way, it will be made available in the class path of each WebLogic Server of this WebLogic Domain.

The WebLogic Server HTTP Extended log format can now be modified to include a custom field named “x-MyCustomUserNameField”.
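
For example, assuming the jar is picked up from the domain lib directory and that the custom field identifier is simply "x-" followed by the class name (verify this convention against your WebLogic version), the Extended Logging Format Fields could then be set to:

cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes x-MyCustomUserNameField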

 

The post Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer appeared first on Blog dbi services.

OBIEE, Big Data Discovery, and ODI security updates - October 2016

Rittman Mead Consulting - Wed, 2016-10-19 04:14

Oracle release their "Critical Patch Update" (CPU) notices every quarter, bundling together details of vulnerabilities and associated patches across their entire product line. October's was released yesterday, with a few entries of note in the analytics & DI space.

Each vulnerability is given a unique identifier (CVE-xxxx-xxxx) and a score out of ten. The scoring uses a common industry-standard scale on the basis of how easy it is to exploit, and what is compromised (availability, data, etc). Ten is the worst, and I would crudely paraphrase it as generally meaning that someone can wander in, steal your data, change your data, and take your system offline. Lower than that and it might be that it requires extensive skills to exploit, or the impact be much lower.

A final point to note is that the security patches that are released are not available for old versions of the software. For example, if you're on OBIEE 11.1.1.6 or earlier, and it is affected by the vulnerability listed below (which I would assume it is), there is no security patch. So even if you don't want to update your version for the latest functionality, staying within support is an important thing to do and plan for. You can see the dates for OBIEE versions and when they go out of "Error Correction Support" here.

If you want more information on how Rittman Mead can help you plan, test, and carry out patching or upgrades, please do get in touch!

The vulnerabilities listed below are not a comprehensive view of an Oracle-based analytics/DI estate - things like the database itself, along with WebLogic Server, should also be checked. See the CPU itself for full details.

Big Data Discovery (BDD)
  • CVE-2015-3253
    • Affected versions: 1.1.1, 1.1.3, 1.2.0
    • Base score: 9.8
    • Action: upgrade to the latest version, 1.3.2. Note that the upgrade packages are on Oracle Software Delivery Cloud (née eDelivery)
OBIEE
  • CVE-2016-2107
    • Affected versions: 11.1.1.7.0, 11.1.1.9.0, 12.1.1.0.0, 12.2.1.1.0
    • Base score: 5.9
    • Action: apply bundle patch 161018 for your particular version (see MoS doc 2171485.1 for details)
BI Publisher ODI
  • CVE-2016-5602

    • Affected versions: 11.1.1.7.0, 11.1.1.9.0, 12.1.3.0.0, 12.2.1.0.0, 12.2.1.1.0
    • Base score: 5.7
    • The getInfo() ODI API could be used to expose passwords for data server connections.
    • More details in MoS doc 2188855.1
  • CVE-2016-5618

    • Affected versions: 11.1.1.7.0, 11.1.1.9.0, 12.1.2.0.0, 12.1.3.0.0, 12.2.1.0.0, 12.2.1.1.0
    • Base score: 3.1
    • This vulnerability documents the potential that a developer could take the master repository schema credentials and use them to grant themselves SUPERVISOR access. Even using the secure wallet, the credentials are deobfuscated on the local machine and therefore a malicious developer could still access the credentials in theory.
    • More details in MoS doc 2188871.1
Categories: BI & Warehousing
