Feed aggregator

Oracle Technology Patch Versioning Changed

Steven Chan - Thu, 2016-12-15 02:06

Version numbering for Oracle Database, Enterprise Manager, and Fusion Middleware products began to change after November 2015. The new version format makes it easier to see which bundle patches are from which time-frame, and in particular which patches are from the same Critical Patch Update release. 

This change is documented here:

Some bundles may continue to use a numeric 5th digit in the short term, but will transition to the new format over time.

What changed?

The new format replaces the numeric fifth digit of the bundle version with a release date in the form  "YYMMDD" where:

  • YY is the last 2 digits of the year
  • MM is the numeric month (2 digits)
  • DD is the numeric day of the month (2 digits)

The "release date" is the release date of the main bundle / PSU / SPU. In some rare cases, for example where the same bundle is released on multiple platforms, the patch for a specific platform may not be available until some days after the "release date".
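
For example, a bundle version ending in 161018 corresponds to a release date of 18 October 2016. One quick way to decode the suffix from a shell (the version string below is purely illustrative; GNU date assumed):

# Illustrative only: decode the YYMMDD suffix of a bundle patch version (GNU date assumed)
version="12.1.0.2.161018"
suffix="${version##*.}"            # -> 161018
date -d "20${suffix}" '+%d %B %Y'  # -> 18 October 2016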

Categories: APPS Blogs

Whitepaper List as at December 2016

Anthony Shorten - Wed, 2016-12-14 17:25

The following Oracle Utilities Application Framework technical whitepapers are available from My Oracle Support at the Doc Id's mentioned below. Some have been updated in the last few months to reflect new advice and new features.

Refer to Whitepaper Strategy Now and In the Future for direction of the documentation.

Note: If a link on this page does not work, this means the whitepaper may have been retired. In that case refer to the online documentation provided with your product for more information.

Unless otherwise marked the technical whitepapers in the table below are applicable for the following products (with versions):

The whitepapers are listed below by Doc Id, Document Title and Contents.

ConfigLab Design Guidelines
This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility.
This whitepaper currently only applies to the following products:
Technical Best Practices for Oracle Utilities Application Framework Based Products
Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

Performance Troubleshooting Guideline Series
A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
  • Concepts - General Concepts and Performance Troubleshooting processes
  • Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
  • Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
  • Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
  • Server Troubleshooting - General troubleshooting of the Operating system with common issues and resolutions.
  • Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
  • Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.
Software Configuration Management Series
A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. Topics include Revision Control, SDK Migration/Utilities, Bundling and Configuration Migration Assistant. The individual whitepapers are as follows:
  • Concepts - General concepts and introduction.
  • Environment Management - Principles and techniques for creating and managing environments.
  • Version Management - Integration of Version control and version management of configuration items.
  • Release Management - Packaging configuration items into a release.
  • Distribution - Distribution and installation of releases across environments.
  • Change Management - Generic change management processes for product implementations.
  • Status Accounting - Status reporting techniques using product facilities.
  • Defect Management - Generic defect management processes for product implementations.
  • Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
  • Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
  • Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.
Oracle Utilities Application Framework Security Overview
A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported.

LDAP Integration for Oracle Utilities Application Framework based products
A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

Oracle Utilities Application Framework Integration Overview
A whitepaper summarizing all the various common integration techniques used with the product (with case studies).

Single Sign On Integration for Oracle Utilities Application Framework based products
A whitepaper outlining a generic process for integrating an SSO product with the framework.

Oracle Utilities Application Framework Architecture Guidelines
This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.

Batch Best Practices
This whitepaper outlines the common and best practices implemented by sites all over the world.

Technical Best Practices V1 Addendum
Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.

XAI Best Practices
This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

Oracle Identity Manager Integration Overview
This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager used to provision user and user group security information. FW4.x customers should use whitepaper 1375600.1 instead.

Production Environment Configuration Guidelines
A whitepaper outlining common production level settings for the products based upon benchmarks and customer feedback.

1177265.1
What's New In Oracle Utilities Application Framework V4?
Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.

1290700.1
Database Vault Integration
Whitepaper outlining the Database Vault Integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.

1299732.1
BI Publisher Guidelines for Oracle Utilities Application Framework
Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.

1308161.1
Oracle SOA Suite Integration with Oracle Utilities Application Framework based products
This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.

1308165.1
MPL Best Practices
This is a guidelines whitepaper for products shipping with the Multi-Purpose Listener.
This whitepaper currently only applies to the following products:
1308181.1
Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework
This whitepaper covers the native integration between Oracle WebLogic JMS and Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters.

1334558.1
Oracle WebLogic Clustering for Oracle Utilities Application Framework
This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.

1359369.1
IBM WebSphere Clustering for Oracle Utilities Application Framework
This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.

1375600.1
Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework
This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.

1375615.1
Advanced Security for the Oracle Utilities Application Framework
This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application and/or facilities available in Oracle Identity Management Suite.

1486886.1
Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing
This whitepaper covers some advice when implementing Oracle Exadata for Oracle Utilities Customer Care And Billing.

878212.1
Oracle Utilities Application FW Available Service Packs
This entry outlines ALL the service packs available for the Oracle Utilities Application Framework.

1454143.1
Certification Matrix for Oracle Utilities Products
This entry outlines the software certifications for all the Oracle Utilities products.

1474435.1
Oracle Application Management Pack for Oracle Utilities Overview
This whitepaper covers the Oracle Application Management Pack for Oracle Utilities. This is a pack for Oracle Enterprise Manager.

1506855.1
Integration Reference Solutions
This whitepaper covers the various Oracle technologies you can use with the Oracle Utilities Application Framework.

1544969.1
Native Installation Oracle Utilities Application Framework
This whitepaper describes the process of installing Oracle Utilities Application Framework based products natively within Oracle WebLogic.

1558279.1
Oracle Service Bus Integration
This whitepaper describes direct integration with Oracle Service Bus, including the new Oracle Service Bus protocol adapters available. Customers using the MPL should read this whitepaper, as Oracle Service Bus replaces the MPL in the future and this whitepaper outlines how to manually migrate your MPL configuration into Oracle Service Bus.

Note: In Oracle Utilities Application Framework V4.2.0.1.0 and above, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available.

1561930.1
Using Oracle Text for Fuzzy Searching
This whitepaper describes how to use the Name Matching and fuzzy operator facilities in Oracle Text to implement fuzzy searching using the @fuzzy helper function available in Oracle Utilities Application Framework V4.2.0.0.0.

1606764.1
Audit Vault Integration
This whitepaper describes the integration with Oracle Audit Vault to centralize and separate audit information from OUAF products. Audit Vault integration is available in OUAF 4.2.0.1.0 and above only.
1644914.1
Migrating XAI to IWS
Migration from XML Application Integration to the new native Inbound Web Services in Oracle Utilities Application Framework 4.2.0.2.0 and above.
1643845.1
Private Cloud Planning Guide
Planning Guide for implementing Oracle Utilities products on Private Clouds using Oracle's Cloud Foundation set of products.
1682436.1
ILM Planning Guide
Planning Guide for Oracle Utilities' new ILM based data management and archiving solution.
1682442.1
ILM Implementation Guide for Oracle Utilities Customer Care and Billing
Implementation Guide for the ILM based solution for Oracle Utilities Customer Care and Billing.
207303.1
Client / Server Interoperability Support Matrix
Certification Matrix.
1965395.1
Cache Nodes Configuration using BatchEdit utility
Using the new Batch Edit Wizard to configure batch quickly and easily
1628358.1
Overview and Guidelines for Managing Business Exceptions and Errors
Best Practices for To Do Management
2014163.1
Oracle Functional/Load Testing Advanced Pack for Oracle Utilities Overview
Overview of the new Oracle Utilities testing solution. Updated for 5.0.0.1.0.
1929040.1
ConfigTools Best Practices
Best Practices for using the configuration tools facility
2014161.1
Oracle Utilities Application Framework - Keystore Configuration
Managing the keystore
2014163.1
Oracle Functional/Load Testing Advanced Pack for Oracle Utilities Overview
Outlines the Oracle Application Testing Suite based testing solution for Functional and Load Testing available for Oracle Utilities Application Framework based products
2132081.1
Migrating From On Premise To Oracle Platform As A Service
Outlines the process of moving an Oracle Utilities product from on-premise to Oracle Cloud Platform As A Service (PaaS)
2196486.1
Batch Scheduler Integration
Outlines the Oracle Utilities Application Framework based integration with Oracle's DBMS_SCHEDULER to build, manage and execute complex batch schedules
2211363.1
Enterprise Manager for Oracle Utilities: Service Pack Compliance
Outlines the process of converting service packs to allow the Application Management Pack for Oracle Utilities to install service packs using the patch management capabilities
2214375.1
Web Services Best Practices
Outlines the best practices of the web services capabilities available for integration

Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection

Yann Neuhaus - Wed, 2016-12-14 17:04

With Oracle 12.1, the requirement to configure Fast-Start Failover is to ensure the broker configuration is operating in either Maximum Availability mode or Maximum Performance mode.
With 12.2, Fast-Start Failover can now also be configured with Maximum Protection mode.
Below is our broker configuration:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 48 seconds ago)

Let’s configure the maximum protection mode.
We first have to update some database properties.

DGMGRL> edit database 'ORCL_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated

Next, we have to enable Maximum Availability before we can enable Maximum Protection:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.
DGMGRL>

And now let’s enable Fast-Start Failover:

DGMGRL> enable fast_start failover;
Error: ORA-16693: requirements not met for enabling fast-start failover
Failed.
DGMGRL>

Oh, what happened?
Remember that before enabling Fast-Start Failover we have to enable flashback on the databases and also set the FastStartFailoverTarget database property.
Let’s enable flashback on the databases.
For the primary:

SQL> alter database flashback on;
Database altered.

For the standby databases:

DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-OFF';
Succeeded.
SQL> alter database flashback on;
Database altered.
SQL>
DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-ON';
Succeeded.
DGMGRL>


DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-OFF';
Succeeded.
DGMGRL>
SQL> alter database flashback on;
Database altered.
DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-ON';
Succeeded.
DGMGRL>

Let’s set the FastStartFailoverTarget property for the primary database:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE2';
Property "faststartfailovertarget" updated

And now we can enable Fast-Start Failover with Maximum Protection:

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL>

Let’s check our configuration. Note that the observer must be started, otherwise you will get a warning about it:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE2 - (*) Physical standby database
ORCL_SITE1 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
DGMGRL>
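
If the observer has not been started yet, it can be launched from a separate host. A minimal sketch using DGMGRL (the credentials, TNS alias and use of nohup below are placeholders, not part of the original post):

# Start the Fast-Start Failover observer in the background (placeholder credentials and alias)
nohup dgmgrl -silent sys/manager@ORCL_SITE "start observer" > observer.log 2>&1 &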

Just note that at least two standby databases must be available; otherwise the protection mode will be downgraded to MaxPerformance after a failover.

 

The article Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection first appeared on Blog dbi services.

Web Services Best Practices Whitepaper

Anthony Shorten - Wed, 2016-12-14 16:13

Over the last few releases, new web services capabilities have been added to the Oracle Utilities Application Framework. A wide range of new and updated facilities is now available for inbound and outbound integration.

A new whitepaper has been released outlining the best practices for using the new and updated web services capabilities including:

  • Inbound Web Services - Container SOAP based web services
  • Message Driven Bean (MDB) - Container based JMS resource processing
  • REST Support - JSON/XML REST
  • Real Time Adapters - Real time integration for transports
  • Outbound Messages - Service based communications
  • SOA/Oracle Service Bus Integration - SOA middleware based interface
  • Web Service Integration - Importing and execution of an external web service

For each capability the whitepaper indicates whether it applies to inbound and/or outbound communications.

The whitepaper is available from My Oracle Support at Web Services Best Practices for Oracle Utilities Application Framework (Doc Id: 2214375.1)

Customers using XAI can refer to the XAI Best Practices (Doc Id: 942074.1) available from My Oracle Support. The Web Services Best Practices whitepaper replaces the XAI whitepaper for newer releases.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Yann Neuhaus - Wed, 2016-12-14 16:00

Here it is. The multitenant book is out for sale…

One year ago, at DOAG 2015, Vit Spinka came to me with this idea: with Anton Els he planned to write a book on multitenant, and he proposed that I be a co-author. I was already quite busy at that time and my short-term plan was to prepare and pass the OCM 12c exam. But this book idea was something great and it had to be started quickly. At that time, we expected 12cR2 to be out in June 2016, and the book was then to be ready for Oracle Open World. So no time to waste: propose the idea to Oracle Press, find a reviewer and start as soon as possible.

For reviewers, I was very happy that Deiby Gomez accepted to do the technical review. And Mike Donovan volunteered to do the English review. I think he didn't imagine how hard it can be to take non-native English speakers' writing, with very limited vocabulary, and turn it into something that makes sense to read. It's an amazing chance to have the language review done by someone with deep technical knowledge. This ensures that the improved style does not change the meaning. Having that language review is also a good way to unify the style of what is written by three different authors. I bet you cannot guess who has written what. In addition to that, Oracle Press asked Arup Nanda to do an additional review, which was great because Arup has experience with book writing.

So we worked on the 12.2 beta, tested everything (there are a lot of code listings in the book), filed bugs, and clarified everything. We had good interaction with support engineers and product managers. The result is a book on multitenant which covers all administration tasks you can do on a 12c database.

It was an amazing adventure from the get-go. You know people for their skills, blogs, presentations and discussions at events. And then you start to work with them on a common thing – the book – and remotely – we're all in different timezones. How can you be sure that you can work together? Actually, it was easy and went smoothly. We listed the chapters and each of us marked which chapters he preferred. And that was it: in one or two e-mail exchanges the distribution of tasks was done, with everybody happy. We had a very short schedule: we needed to deliver one chapter every 2 or 3 weeks. I was happy with what I wrote and was equally happy with what I read from Vit and Anton. Reviews from Deiby, Mike and Arup all added precision and clarity. Incredible team work without the need for long discussions. Besides the hard work and the delightful result, working with this team was an amazing human adventure.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Master the Powerful Multitenant Features of Oracle Database 12c
• Build high-performance multitenant Oracle databases
• Create single-tenant, multitenant, and application containers
• Establish network connections and manage services
• Handle security using authentication, authorization, and encryption
• Back up and restore your mission-critical data
• Work with point-in-time recovery and Oracle Flashback
• Move data and replicate and clone databases
• Work with Oracle’s Resource Manager and Data Guard

 

The article Oracle Database 12c Release 2 Multitenant (Oracle Press) first appeared on Blog dbi services.

Table lock adding foreign key

Tom Kyte - Wed, 2016-12-14 14:46
When we add a foreign key constraint on a child table while insert statements are firing against the referenced (parent) table, the inserts are held up. We want to run the insert statements and add the foreign key constraint in parallel. Is it possible or not?
Categories: DBA Blogs

Count number of occurrences by date and by code.

Tom Kyte - Wed, 2016-12-14 14:46
Hi All - I need to create a report that shows the number of occurrences of a student being suspended (by 2 different suspension codes) for a range of dates (usually the first day of the month and the last day of the month) by school. Here is the starting query, th...
Categories: DBA Blogs

Grouping with DateRange

Tom Kyte - Wed, 2016-12-14 14:46
Hi Tom I have the following Case: CREATE TABLE TEST_DATERANGE ( TITLE VARCHAR2(10) ,DATEFROM DATE ,DATEUPTO DATE ); INSERT INTO TEST_DATERANGE VALUES ('Test A',TO_DATE( '01.12.2016' , 'DD.MM.YYYY' ),TO_DATE( '05.12.2016' , 'DD.MM.YYYY' ))...
Categories: DBA Blogs

The year zero

Tom Kyte - Wed, 2016-12-14 14:46
When I try to create a date in the year 0 I get an error: <code> SQL> select to_date('1-Jan-0000AD', 'dd-Mon-yyyyAD') 2 from dual 3 ; select to_date('1-Jan-0000AD', 'dd-Mon-yyyyAD') * ERROR at line 1: ORA-01841: (full...
Categories: DBA Blogs

Search for values in all tables

Tom Kyte - Wed, 2016-12-14 14:46
Hi, please, I want your help. In a 10g database I want to search for a value (number or char) when I don't know which table it is stored in. For example, searching for 'king' the result would be table_name=employees, column_name=emp_id. Thanks for your efforts
Categories: DBA Blogs

FORCE result cache for queries that are not cacheable?

Tom Kyte - Wed, 2016-12-14 14:46
Is there a way to force the result cache for ANY query and let the application handle invalidation? In an extreme (not very wise) case, do this: select /*+ force_result_cache */ * from dba_tables where table = 'XXX'; Basically, we want to c...
Categories: DBA Blogs

VASSIT Drives Business Transformation with Oracle Cloud Platform

WebCenter Team - Wed, 2016-12-14 12:32

VASSIT is a software integration company specializing in solving complex enterprise challenges associated with customer experience, enterprise content management, systems integration and big data. It offers a range of services including customer journey mapping, technology evaluations, technology deployments, custom application development, bespoke integrations, and application hosting and support. Clients typically come to VASSIT to enable digital transformation and improve customer experience and efficiency.

Watch this video to learn how VASSIT enables business transformation for its clients by leveraging Oracle's Cloud Platform including: Oracle Mobile Cloud Service, Oracle Documents Cloud Service, Oracle Process Cloud Service, and Oracle Sites Cloud Service and learn how Oracle’s enterprise-scalable technology platform provides secure solutions wherever and whenever needed.

Source Control and Automated Code Deployment Options for OBIEE

Rittman Mead Consulting - Wed, 2016-12-14 03:00

It's Monday morning. I've arrived at a customer site to help them - ironically enough - with automating their OBIEE code management. But, on arrival, I'm told that the OBIEE team can't meet with me because someone did a release the previous Friday and has now gone on holiday - and the wrong code was released, but they don't know which version. All hands on deck, panic stations!

This actually happened to me, and in recent months too. In this kind of situation hindsight gives us 20:20 vision, and of course there shouldn't be a single point of failure, of course code should be under version control, of course it should be automated to reduce the risk of problems during deployments. But in practice, these things often don't get done - and it's understandable why. In the very early days of a project, it will be a manual process because that's what is necessary as people get used to the tools and technology. As time goes by, project deadlines come up, and tasks like this are seen as "zero sum" - sure we can automate it, but we can also continue doing it manually and things will still get done, code will still get released. After a while, it's just accepted as how things are done. In effect, it is technical debt - and this is your reminder that debt has to be paid, sooner or later :)

I'll not pretend that managing OBIEE code in source control, and automating code deployments, is straightforward. But, it is necessary, so in this post I'll walk through why you should be doing it, and then importantly how.

Why Source Control?

Do we really need source control for OBIEE? After all, what's wrong with the tried-and-tested method of sticking it all in a folder like this?

[Image: sdlc01.png]

What's wrong with this? What's right with this? Oh lack of source control, let me count the number of ways that I doth hate thee:

  1. No audit trail of who changed something
  2. No audit of what was changed, and when
  3. No enforceable naming standards for versions
  4. No secure way of identifying deployment candidates
  5. No distributed method for sharing code (don't tell me that a network share counts!)
  6. No way of reliably identifying the latest version of code

These range from the immediately practical through to the slightly more abstract but necessary in a mature deployment.

Of immediate impact is the simple ability to identify the latest version of code on which to make new changes. Download the copy from the live server? Really? No. If you're tracking your versions accurately and reliably then you simply pull the latest version of code from there, in the knowledge that it is the version that is live. No monkeying around trying to figure out if it really is (just because it's called "PROD-091216.rpd", how do you know that's actually what got released to Production? And was that on 12th December or 9th September? Who knows!).

Longer term, having a secure and auditable code line simply makes it easier and less risky to manage. It also gives you the ability to work with it in a much more flexible manner, such as genuine concurrent development by multiple developers against the RPD. You can read more about this in my presentation here.

Which Source Control?

I don't care. Not really. So long as you are using source control, I am happy.

For preference, I always advocate using git. It is a modern platform, with strong support from many desktop clients (SourceTree is my favourite, along with the commandline too, natch). Git is decentralised, meaning that you can commit and branch code locally on your own machine without having to be connected to a server. It supports a powerful fork and pull process too, which is part of the reason it has almost universal usage within the open source world. The most well known of git platforms is github, which in effect provides git as a Platform-as-a-service (PaaS), in a similar fashion to Bitbucket too. You can also run git on its own locally, or more pragmatically, with gitlab.

But if you're using Subversion (SVN), Perforce, or whatever - that's fine. The key thing is that you understand how to use it, and that it is supported within your organisation. For simple source control, pretty much all the common platforms work just fine. If you get onto more advanced use, such as feature-branches and concurrent development, you may find it worth ensuring that your chosen platform supports the workflow that you adopt. Even then, whilst I'd choose git for preference, at Rittman Mead we've helped clients develop very powerful concurrent development processes with Subversion providing the underlying source control.
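
As a minimal illustration of getting started (the repository name and file paths below are just placeholders, not anything prescribed by OBIEE):

# Illustrative only: put a development RPD (and any exported .catalog archives) under git
git init obiee-code
cd obiee-code
mkdir -p rpd catalog
cp ~/DEV.rpd rpd/                 # the development RPD binary
git add .
git commit -m "Baseline: development RPD"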

What Goes into Source Control? Part 1

So you've drunk the Source Control koolaid, and accepted that really there is no excuse not to use it. So what do you put into it? The RPD? The OBIEE 12c BAR file? What if you're still on OBIEE 11g? The answer here depends partially on how you are planning to manage code deployment in your environment. For a fully automated solution, you may opt to store code in a more granular fashion than if you are simply migrating full BAR files each time. So, read on to understand about code deployment, and then we'll revisit this question again after that.

How Do You Deploy Code Changes in OBIEE?

The core code artefacts are the same between OBIEE 11g and OBIEE 12c, so I'll cover both in this article, pointing out as we go any differences.

The biggest difference with OBIEE 12c is the concept of the "Service Instance", in which the pieces for the "analytical application" are clearly defined and made portable. These components are:

  • Metadata model (RPD)
  • Presentation Catalog ("WebCat"), holding all analysis and dashboard definitions
  • Security - Application Roles and Policy grants, as well as OBIEE front-end privilege grants

Part of this is laying the foundations for what has been termed "Pluggable BI", in which 'applications' can be deployed with customisations layered on top of them. In the current (December 2016) version of OBIEE 12c we have just the Single Service Instance (ssi). Service Instances can be exported and imported to BI Archive files, known as BAR files.

The documentation for OBIEE environment migrations (known as "T2P" - Test to Production) in 12c is here. Hopefully I won't be thought too rude for saying that there is scope for expanding on it, clarifying a few points - and perhaps making more of the somewhat innocuous remark partway down the page:

PROD Service Instance metadata will be replaced with TEST metadata.

Hands up who reads the manual fully before using a product? Hands up who is going to get a shock when they destroy their Production presentation catalog after importing a service instance?...

Let's take a walk through the three main code artefacts, and how to manage each one, starting with the RPD.

The RPD

The complication of deployments of the RPD is that the RPD differs between environments because of different connection pool details, and occasionally repository variable values too.

If you are not changing connection pool passwords between environments, or if you are changing anything else in your RPD (e.g. making actual model changes) between environments, then you probably shouldn't be. It's a security risk to not have different passwords, and it's bad software development practice to make code changes other than in your development environment. Perhaps you've valid reasons for doing it... perhaps not. But bear in mind that many test processes and validations are based on the premise that code will not change after being moved out of dev.

With OBIEE 12c, there are two options for managing deployment of the RPD:

  1. BAR file deploy and then connection pool update
  2. Offline RPD patch with connection pool updates, and then deploy
    • This approach is valid for OBIEE 11g too

RPD Deployment in OBIEE 12c - Option 1

This is based on the service instance / BAR concept. It is therefore only valid for OBIEE 12c.

  1. One-off setup : Using listconnectionpool to create a JSON connection pool configuration file per target environment. Store each of these files in source control.
  2. Once code is ready for promotion from Development, run exportServiceInstance to create a BAR file. Commit this BAR file to source control

    /app/oracle/biee/oracle_common/common/bin/wlst.sh <<EOF
    exportServiceInstance('/app/oracle/biee/user_projects/domains/bi/','ssi','/home/oracle','/home/oracle')
    EOF
    

  3. To deploy the updated code to the target environment:

    1. Checkout the BAR from source control
    2. Deploy it with importServiceInstance, ensuring that the importRPD flag is set.

      /app/oracle/biee/oracle_common/common/bin/wlst.sh <<EOF
      importServiceInstance('/app/oracle/biee/user_projects/domains/bi','ssi','/home/oracle/ssi.bar',true,false,false)
      EOF
      
    3. Run updateConnectionPool using the configuration file from source control for the target environment to set the connection pool credentials

      /app/oracle/biee/user_projects/domains/bi/bitools/bin/datamodel.sh updateconnectionpool -C ~/prod_cp.json -U weblogic -P Admin123 -SI ssi
      

      Note that your OBIEE system will not be able to connect to source databases to retrieve data until you update the connection pools.

    4. The BI Server should pick up the new RPD after a few minutes. You can force this by restarting the BI Server, or using "Reload Metadata" from OBIEE front end.

Whilst you can also create the BAR file with includeCredentials, you wouldn't use this for migration of code between environments - because you don't have the same connection pool database passwords in each environment. If you do have the same passwords then change it now - this is a big security risk.

The above BAR approach works fine, but be aware that if the deployed RPD is activated on the BI Server before you have updated the connection pools (step 3 above) then the BI Server will not be able to connect to the data sources and your end users will see an error. This approach is also based on storing the BAR file as a whole in source control, whereas for preference we'd store the RPD as a standalone binary if we want to be able to do concurrent development with it.
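
One way to reduce that window is to script the import and the connection pool update back-to-back. This is a sketch only, re-using the commands and example credentials shown in the steps above:

#!/bin/bash
# Sketch: deploy the BAR and immediately update the connection pools (paths and credentials as per the steps above)
/app/oracle/biee/oracle_common/common/bin/wlst.sh <<EOF
importServiceInstance('/app/oracle/biee/user_projects/domains/bi','ssi','/home/oracle/ssi.bar',true,false,false)
EOF
/app/oracle/biee/user_projects/domains/bi/bitools/bin/datamodel.sh updateconnectionpool -C ~/prod_cp.json -U weblogic -P Admin123 -SI ssi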

RPD Deployment in OBIEE 12c - Option 2 (also valid for OBIEE 11g)

This approach takes the RPD on its own, and takes advantage of OBIEE's patching capabilities to prepare RPDs for the target environment prior to deployment.

  1. One-off setup: create a XUDML patch file for each target environment.

    Do this by:

    1. Take your development RPD (e.g. "DEV.rpd"), and clone it (e.g. "PROD.rpd")
    2. Open the cloned RPD (e.g. "PROD.rpd") offline in the Administration Tool. Update it only for the target environment - nothing else. This should be all connection pool passwords, and could also include connection pool DSNs and/or users, depending on how your data sources are configured. Save the RPD.
    3. Using comparerpd, create a XUDML patch file for your target environment:

      /app/oracle/biee/user_projects/domains/bi/bitools/bin/comparerpd.sh \
      -P Admin123 \
      -W Admin123 \
      -G ~/DEV.rpd \
      -C ~/PROD.rpd \
      -D ~/prod_cp.xudml
      
    4. Repeat the above process for each target environment

  2. Once code is ready for promotion from Development:

    1. Extract the RPD

      • In OBIEE 12c use downloadrpd to obtain the RPD file

        /app/oracle/biee/user_projects/domains/bi/bitools/bin/datamodel.sh \
        downloadrpd \
        -O /home/oracle/obiee.rpd \
        -W Admin123 \
        -U weblogic \
        -P Admin123 \
        -SI ssi
        
      • In OBIEE 11g copy the file from the server filesystem

    2. Commit the RPD to source control

  3. To deploy the updated code to the target environment:

    1. Checkout the RPD from source control
    2. Prepare it for the target environment by applying the patch created above

      1. Check out the XUDML patch file for the appropriate environment from source control
      2. Apply the patch file using biserverxmlexec:

        /app/oracle/biee/user_projects/domains/bi/bitools/bin/biserverxmlexec.sh \
        -P Admin123 \
        -S Admin123 \
        -I prod_cp.xudml \
        -B obiee.rpd \
        -O /tmp/prod.rpd
        
    3. Deploy the patched RPD file

      • In OBIEE 12c use uploadrpd

        /app/oracle/biee/user_projects/domains/bi/bitools/bin/datamodel.sh \
        uploadrpd \
        -I /tmp/prod.rpd \
        -W Admin123 \
        -U weblogic \
        -P Admin123 \
        -SI ssi \
        -D
        

        The RPD is available straightaway. No BI Server restart is needed.

      • In OBIEE 11g use WLST's uploadRepository to programmatically do this, or manually from EM.

        After deploying the RPD in OBIEE 11g, you need to restart the BI Server.

This approach is the best (only) option for OBIEE 11g. For OBIEE 12c I also prefer it as it is 'lighter' than a full BAR, more solid in terms of connection pools (since they're set prior to deployment, not after), and it enables greater flexibility in terms of RPD changes during migration since any RPD change can be encompassed in the patch file.
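
Putting the option 2 deployment steps together, the target-environment deploy can be scripted along these lines. This is a sketch only, re-using the commands and example credentials from the steps above:

#!/bin/bash
# Sketch: patch the checked-out RPD for the target environment and upload it (OBIEE 12c)
BITOOLS=/app/oracle/biee/user_projects/domains/bi/bitools/bin

# Apply the environment-specific XUDML patch to the RPD taken from source control
$BITOOLS/biserverxmlexec.sh -P Admin123 -S Admin123 -I prod_cp.xudml -B obiee.rpd -O /tmp/prod.rpd

# Upload the patched RPD to the service instance - no BI Server restart needed
$BITOOLS/datamodel.sh uploadrpd -I /tmp/prod.rpd -W Admin123 -U weblogic -P Admin123 -SI ssi -D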

Note that the OBIEE 12c product manual states that uploadrpd/downloadrpd are for:

"...repository diagnostic and development purposes such as testing, only ... all other repository development and maintenance situations, you should use BAR to utilize BAR's repository upgrade and patching capabilities and benefits.".

Maybe in the future the BAR capabilities will extend beyond what they currently do - but as of now, I've yet to see a definitive reason to use them and not uploadrpd/downloadrpd.

The Presentation Catalog ("WebCat")

The Presentation Catalog stores the definition of all analyses and dashboards in OBIEE, along with supporting objects including Filters, Conditions, and Agents. It differs significantly from the RPD when it comes to environment migrations. The RPD can be seen in more traditional software development lifecycle terms, since it is built and developed in Development, and when deployed to a subsequent environment it overwrites in its entirety what is currently there. However, the Presentation Catalog is not so simple.

Commonly, content in the Presentation Catalog is created by developers as part of 'pre-canned' reporting and dashboard packs, to be released along with the RPD to end-users. Where things get difficult is that the Presentation Catalog is also written to in Production. This can include:

  • User-developed content saved in one (or both) of:
    • My Folders
    • Shared, e.g. special subfolders per department for sharing common reports outside of "gold standard" ones
  • User's profile data, including timezone and language settings, saved dashboard customisations, preferred delivery devices, and more
  • System configuration data, such as default formatting for specific columns, bookmarks, etc

In your environment you may not permit some of these (for example, disabling access to My Folders is not uncommon). But almost certainly, you'll want your users to be able to persist their environment settings between sessions.

The impact of this is that the Presentation Catalog becomes complex to manage. We can't just overwrite the whole catalog when we come to deployment in Production, because if we do so all of the above listed content will get deleted. And that won't make us popular with users, at all.

So how do we bring any kind of mature software development practice to the Presentation Catalog, assuming that we have report development being done in non-Production environments?

We have two possible approaches:

  1. Deploy the full catalog into Production each time, but backup first existing content that we don't want to lose, and restore it after the deploy
    • Fiddly, but it means that we don't have to worry about which bits of the catalog go in source control - all of it does. This has consequences if we want to do branch-based development with source control: we can't. This is because the catalog will exist as a single binary (whether BAR or 7ZIP), so no merging with the source control tool will be possible.
    • Risky, if we forget to backup the user content first, or something goes wrong in the process
    • A 'heavier' operation involving the whole catalog and therefore almost certainly requiring the catalog to be in maintenance-mode (read only).
  2. Deploy the whole catalog once, and then do all subsequent deploys as deltas (i.e. only what has changed in the source environment)
    • Less risky, since not overwriting whole target environment catalog
    • More flexible, and more granular so easier to track in source control (and optionally do branch-based development).
    • Requires more complex automated deployment process.

Both methods can be used with OBIEE 11g and 12c.

Presentation Catalog Migration in OBIEE - Option 1

In this option, the entire Catalog is deployed, but content that we want to retain backed up first, and then re-instated after the full catalog deploy.

First we take the entire catalog from the source environment and store it in source control. With OBIEE 12c this is done using the exportServiceInstance WLST command (see the example with the RPD above) to create a BAR file. With OBIEE 11g, you would create an archive of the catalog at its root using 7-zip/tar/gzip (but not winzip).
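
For OBIEE 11g, with Presentation Services stopped or the catalog in maintenance mode, the root-level archive can be as simple as a tar of the catalog directory. The path below is purely an example - use whatever catalog location your instance is actually configured with:

# Illustrative only: archive the entire 11g catalog from its root (catalog path is an assumption)
CATALOG=/app/oracle/biee/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog/SampleApp
tar -czf ~/catalog_baseline_$(date +%Y%m%d).tar.gz -C "$(dirname "$CATALOG")" "$(basename "$CATALOG")"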

When ready to deploy to the target environment, we first backup the folders that we want to preserve. Which folders might we want to preserve?

  1. /users - this holds both objects that users have created and saved in My Folders, as well as user profile information (including timezone preferences, delivery profiles, dashboard customisations, and more)
  2. /system - this holds system internal settings, which include things such as authorisations for the OBIEE front end (/system/privs), as well as column formatting defaults (/system/metadata), global variables (/system/globalvariables), and bookmarks (/system/bookmarks).
    • See note below regarding the /system/privs folder
  3. /shared/<…>/<…> - if users are permitted to create content directly in the Shared area of the catalog you will want to preserve this. A valid use of this is for teams to share content developed internally, instead of (or prior to) it being released to the wider user community through a more formal process (the latter being often called 'gold standard' reports).

Regardless of whether we are using OBIEE 11g or 12c we create a backup of the folders identified by using the Archive functionality of OBIEE. This is NOT just creating a .zip file of the file system folders - which is completely unsupported and a bad idea for catalog management, except at the very root level. Instead, the Archive functionality creates a .catalog file which can be stored in source control, and unarchived back into OBIEE to restore content.

You can create OBIEE catalog archives in one of four ways, which are also valid for importing the content back into OBIEE too:

  1. Manually, through OBIEE front-end
  2. Manually, through Catalog Manager GUI
  3. Automatically, through Catalog Manager CLI (runcat.sh)

    • Archive:

      runcat.sh \
      -cmd archive  \
      -online http://demo.us.oracle.com:7780/analytics/saw.dll \
      -credentials /tmp/creds.txt \
      -folder "/shared/HR" \
      -outputFile /home/oracle/hr.catalog
      
    • Unarchive:

      runcat.sh \
      -cmd unarchive \
      -inputFile hr.catalog \
      -folder /shared \
      -online http://demo.us.oracle.com:7780/analytics/saw.dll  \
      -credentials /tmp/creds.txt \
      -overwrite all
      
  4. Automatically, using the WebCatalogService API (copyItem2 / pasteItem2).

Having taken a copy of the necessary folders, we then deploy the entire catalog (containing the changes from development) from source control. Deployment is done in OBIEE 12c using importServiceInstance. In OBIEE 11g it is done by taking the server offline and replacing the catalog with the 7zip filesystem archive of the entire catalog.
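
For the 11g route, the offline replacement might look roughly like the sketch below. The opmnctl component name and catalog location are assumptions based on a default single-instance install; verify both for your system before scripting anything like this.

    # Take Presentation Services offline (default 11g component name assumed)
    /app/oracle/instances/instance1/bin/opmnctl stopproc ias-component=coreapplication_obips1

    # Move the current catalog aside and unpack the catalog taken from source control in its place
    CAT_ROOT=/app/oracle/instances/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1
    mv $CAT_ROOT/catalog $CAT_ROOT/catalog.$(date +%Y%m%d)
    tar -xzf catalog_baseline.tar.gz -C $CAT_ROOT

    # Bring Presentation Services back online
    /app/oracle/instances/instance1/bin/opmnctl startproc ias-component=coreapplication_obips1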

Finally, we restore the folders previously saved, using the Unarchive function to import the .catalog files.

Presentation Catalog Migration in OBIEE - Option 2

In this option we take a more granular approach to catalog migration. The entire catalog from development is only deployed once, and after that only .catalog files from development are put into source control and then deployed to the target environment.

As before, the entire catalog is initially taken from the development environment, and stored in source control. With OBIEE 12c this is done using the exportServiceInstance WLST command (see the example with the RPD above) to create a BAR file. With OBIEE 11g, you would create an archive of the catalog at its root using 7zip.

Note that this is only done once, as the initial 'baseline'.

The first time an environment is commissioned, the baseline is used to populate the catalog, using the same process as in option 1 above (in 12c, importServiceInstance/ in 11g unzip of full catalog filesystem copy).

After this, any work done in the catalog in the development environment is migrated through by using OBIEE's archive function against just the necessary /shared subfolder, producing a .catalog file that is stored in source control.

This is then imported into the target environment using the unarchive capability. See above in option 1 for details of using archive/unarchive - just remember that this is archiving with OBIEE, not using 7zip!
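
Automating the delta deploy can be as simple as looping over the .catalog files held in source control and unarchiving each one. The sketch below reuses the runcat.sh unarchive syntax shown in option 1; the URL, credentials file and target folder are assumptions to adapt for your environment.

    for f in catalog_deltas/*.catalog; do
      runcat.sh \
        -cmd unarchive \
        -inputFile "$f" \
        -folder /shared \
        -online http://target-server:9502/analytics/saw.dll \
        -credentials /tmp/creds.txt \
        -overwrite all
    done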

You will need to determine at what level of the folder hierarchy you take these archives:

  • If you archive the whole of /shared each time you'll never be able to do branch-based development with the catalog in which you want to merge branches (because the .catalog file is binary).
  • If you instead work at, say, department level (/shared/HR, /shared/sales, etc) then the highest grain for concurrent catalog development would be the department. The lower down the tree you go the greater the scope for independent concurrent development, but the greater the complexity to manage. This is because you want to be automating the unarchival of these .catalog files to the target environment, so having to deal with multiple levels of folder hierarchy becomes hard work.

It's a trade-off between the number of developers, the breadth of development scope, and how simple you want to make the release process.

The benefit of this approach is that content created in Production remains completely untouched. Users can continue to create their content, save their profile settings, and so on.

Presentation Catalog Migration - OBIEE Privilege Grants

Permissions set in the OBIEE front end are stored in the Presentation Catalog's /system/privs folder.

Therefore, how this folder is treated during migration dictates where you must apply your security grants (or conversely, where you set your security grants dictates how you should treat the folder in migrations). For me the "correct" approach would be to define the full set of privileges in the development environment and then migrate these through to Production along with the pre-built objects in /shared. If you have a less formal approach to environments, or for whatever reason permissions are granted directly in Production, you will need to ensure that the /system/privs folder isn't overwritten during catalog deployments.

When you create a BAR file in OBIEE 12c, it does include /system/privs (and /system/metadata). Therefore, if you are happy for these to be overwritten from the source environment, you would not need to backup/restore these folders. If you set includeCatalogRuntimeInfo in the OBIEE 12c export to BAR, it will also include the complete /system folder as well as /users.
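
If you do want to preserve these folders yourself, a backup step before the full deploy might look like the following sketch, reusing the runcat.sh archive syntax shown earlier (URL and credentials file are again assumptions). After the full catalog deploy, the same .catalog files would simply be unarchived back in.

    for folder in /system/privs /system/metadata; do
      runcat.sh \
        -cmd archive \
        -online http://target-server:9502/analytics/saw.dll \
        -credentials /tmp/creds.txt \
        -folder "$folder" \
        -outputFile "/tmp/$(basename $folder).catalog"
    done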

Agents

Regardless of how you move Catalog content between environments, if you have Agents you need to look after them too. When you move Agents between environments, they are not automatically registered with the BI Scheduler in the target environment. You either have to do this manually, or with the web service API: WebCatalogService.readObjects to get the XML for the agent, and then iBotService.writeIBot to submit it, which registers it with the BI Scheduler.

Security
  • In terms of the Policy store (Application Roles and Policy grants), these are managed by the Security element of the BAR, and migration through the environments is simple. You can deploy the policy store alone in OBIEE 12c using the importJazn flag of importServiceInstance. In OBIEE 11g it's not so simple - you have to use the migrateSecurityStore WLST command (a hedged sketch follows this list).
  • Data/Object security defined in the RPD gets migrated automatically through the RPD, by definition
  • See above for a discussion of OBIEE front-end privilege grants.
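
For the 11g case, the migrateSecurityStore call takes roughly the shape below. Treat this as a hedged sketch only: the jps-config.xml contexts (sourceFileStore / targetFileStore) and the application stripe name ("obi") are assumptions that need to match your own security configuration.

    # run inside wlst.sh, against a jps-config.xml that defines the source and target file-based stores
    migrateSecurityStore(type="appPolicies",
                         configFile="/tmp/jps-config.xml",
                         src="sourceFileStore",
                         dst="targetFileStore",
                         srcApp="obi",
                         overWrite="true")
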
What Goes into Source Control? Part 2

So, suddenly this question looks a bit less simple than when originally posed at the beginning of this article. In essence, you need to store the following (an example repository layout follows the list):

  1. RPD
    1. BAR + JSON configuration for each environment's connection pools -- 12c only, simpler, but less flexible and won't support concurrent development easily
    2. RPD (.rpd) + XUDML patch file for each environment's connection pools -- works in 11g too, supports concurrent development
  2. Presentation Catalog
    1. Entire catalog (BAR in 12c / 7zip in 11g) -- simpler, but impossible to manage branch-based concurrent development
    2. Catalog baseline (BAR in 12c / 7zip in 11g) plus delta .catalog files -- More complex, but more flexible, and supports concurrent development
  3. Security
    1. BAR file (OBIEE 12c)
    2. system-jazn-data.xml (OBIEE 11g)
  4. Any other files that are changed for your deployment.

    It's important that when you provision a new environment you can set it up the same as the others. It is also invaluable to have previous versions of these files so as to be able to rollback changes if needed, and to track what settings have changed over time.

    This could include:

    • Configuration files (nqsconfig.ini, instanceconfig.xml, tnsnames.ora, etc)
    • Custom skins & styles
    • writeback templates
    • etc
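
    As an illustration only, a source control layout covering the items above might look something like this (every path and file name here is hypothetical):

    obiee-repo/
        rpd/
            liverpd.bar              # 12c BAR (or myproject.rpd for 11g)
            connection-pools/        # per-environment JSON / XUDML patch files
        catalog/
            baseline/                # full catalog (BAR in 12c, 7zip in 11g)
            deltas/                  # .catalog archives per folder or feature
        security/
            system-jazn-data.xml     # 11g only; in 12c this travels in the BAR
        config/
            nqsconfig.ini
            instanceconfig.xml
            tnsnames.ora
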
Summary

I never said it was simple ;-)

OBIEE is an extremely powerful product, and just as you have to take care to build your data models correctly, you also need to take care to understand why and how to manage your code correctly. What I've tried to do here is pull together the different options available, and lay them out with their respective pros and cons. Let me know in the comments below what you think and how you manage OBIEE code at your site.

One of the key messages that it's important to get across is this: there are varying degrees of complexity with which you can embrace source control. All are valid, and in fact an incremental adoption of them rather than big-bang can sometimes be a better idea:

  • At one end of the scale, you simply use source control to hold copies of all your code, and continue to deploy manually
  • Getting a bit smarter, automating code deployments from source control. Code development is still done serially though.
  • At the other end of the scale, you use source control with branch-based feature-driven concurrent development. Completed features are merged automatically with RPD conflicts managed by the OBIEE tooling from the command line. Testing and deployment are both automated.

If you'd like assistance with your OBIEE development and deployment practices, including fully automated source-control driven concurrent development management, please get in touch with us here at Rittman Mead. We would be delighted to use our extensive experience in this field to produce a flexible and customised process for your particular environment and requirements.

You can find the companion slide deck to this article, with further discussion on concurrent development, here.

Categories: BI & Warehousing

What is the Impact of SHA-1 Desupport on EBS?

Steven Chan - Wed, 2016-12-14 02:05

Given that Certificate Authorities have already stopped issuing SHA-1 certificates, most likely, you have already addressed the use of SHA-1 certificates for inbound connections by migrating to SHA-2 certificates. In case you have not, this is a reminder for you to do so.

What's Happening with Browsers?

Many browsers (e.g., Google Chrome, MS IE) have now published their desupport plans for SHA-1. The timeline for desupport of SHA-1 is dependent on your certificate authority and your browser.

You need to ensure that your inbound connections are using SHA-2 signed PKI certificates. If you do not move to a SHA-2 certificate and the end-user browser requires one, your Oracle E-Business Suite users will receive a warning or error.
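
If you are unsure what your current certificate uses, a quick way to check from any client with OpenSSL installed is shown below (the hostname is an example). A SHA-2 certificate reports something like sha256WithRSAEncryption, whereas sha1WithRSAEncryption means you still need to migrate.

openssl s_client -connect ebs.example.com:443 -servername ebs.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep "Signature Algorithm"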

What Do You Need to Do?

Follow the instructions in our documentation to meet the minimum requirements for using SHA-2 signed PKI certificates with Oracle E-Business Suite.

For TLS 1.2, refer to the following:

For TLS 1.0 and SSLv3, refer to the following:

Note: TLS 1.2 is the latest version certified with Oracle E-Business Suite Release 12.2 and 12.1. As a reminder, we recommend that you migrate to TLS 1.2 for Oracle E-Business Suite Release 12.2 and 12.1.

References

Related Articles

Categories: APPS Blogs

Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?

Yann Neuhaus - Wed, 2016-12-14 01:33

For security reasons, you may want your DataGuard operations to be done with a different UNIX user and a different Oracle user that is not as highly privileged as SYSDBA. This is exactly where the SYSDG Administrative Privilege for Oracle Data Guard Operations comes into play.

The SYSDG privilege is quite powerful and allows you to work with the Broker (DGMGRL) command line interface and besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views)
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs to do the DataGuard operational tasks. So … I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot database, create restore points, and more. But what happens when an error pops up? You need to take a look at the most important log files, which in a DataGuard environment are the alert log and the broker log file.
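
For illustration, scott would connect with the SYSDG privilege roughly as follows (a sketch; the credentials and connect identifier follow the examples in this post):

# SQL*Plus, using the SYSDG administrative privilege
sqlplus scott/tiger@DBIT122_SITE1 as sysdg

# DGMGRL with the same credentials; the SYSDG grant in the password file is what allows the broker commands
dgmgrl scott/tiger@DBIT122_SITE1 "show configuration"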

If you do a “show database verbose”, you will find at the end of the output the locations of the log files, which is quite useful from my point of view. This is new with Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
Broker shows you the Log file location:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can’t read those files, because there are no read permissions for others and scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven’t gained much in terms of security
2. We can set the parameter "_trace_files_public"=true, but when this is enabled, all Oracle trace files become world readable, not just the alert and broker log
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of DataGuard operational tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability
Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file without adding him to a group or making all files world readable. But be careful that you configure the same ACLs on all other standby nodes as well (a sketch of copying them across follows the example below), and make sure that you use a backup solution which supports ACLs.

For example, using ‘cp’ or ‘cp -p’ makes a huge difference. In one case you lose the ACLs in the copy, in the other case you preserve them. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b
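
To keep the other nodes in sync, the ACLs can be copied across with getfacl/setfacl, for example along the following lines (the host name and the site2 trace path are assumptions based on the naming used in this post):

# on dbidg01: capture the ACLs of the two log files
cd /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace
getfacl alert_DBIT122.log drcDBIT122.log > /tmp/trace_acls.txt

# copy them over and re-apply them in the corresponding trace directory on the other node
scp /tmp/trace_acls.txt oracle@dbidg02:/tmp/
ssh oracle@dbidg02 "cd /u01/app/oracle/diag/rdbms/dbit122_site2/DBIT122/trace && setfacl --restore=/tmp/trace_acls.txt"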
 

The article Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations? first appeared on Blog dbi services.

Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER

Yann Neuhaus - Wed, 2016-12-14 01:18

In a DataGuard environment, by default, the password of the SYS user is used to authenticate redo transport sessions when a password file is used. But for security reasons you might not want to use such a highly privileged user just for redo transmission. To overcome this issue, Oracle has implemented the REDO_TRANSPORT_USER initialization parameter.

The REDO_TRANSPORT_USER specifies the name of the user whose password verifier is used when a remote login password file is used for redo transport authentication.

But take care, the password must be the same at both databases to create a redo transport session, and the value of this parameter is case sensitive and must exactly match the value of the USERNAME column in the V$PWFILE_USERS view.

Besides that, this user must have the SYSDBA or SYSOPER privilege. However, we don’t want to grant the SYSDBA privilege. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination.

Ok. Let’s give it a try. I am creating a user called ‘DBIDG’ which will be used for redo transmission between my primary and standby.

SQL> create user DBIDG identified by manager;

User created.

SQL> grant connect to DBIDG;

Grant succeeded.

SQL> grant sysoper to DBIDG;

Grant succeeded.

Once done, I check the v$pwfile_users to see if my new user ‘DBIDG’ exist.

-- On Primary

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE


-- On Standby
SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

no rows selected

Ok. Like in previous versions of Oracle, I have to copy the password file myself to the destination host to make it work.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] scp -p orapwDBIT122 oracle@dbidg02:$PWD

SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE

 

When connecting as the ‘DBIDG’ user, you can do almost nothing - not even select from the dba_tablespaces view, for example. From a security perspective, this user is much less of a concern.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] sqlplus dbidg/Manager1@DBIT122_SITE1 as sysoper

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 13 11:08:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> desc dba_tablespaces
ERROR:
ORA-04043: object "SYS"."DBA_TABLESPACES" does not exist

Nevertheless, the ‘DBIDG’ user is completely sufficient for my use case. Now that I have my ‘DBIDG’ redo transport user in both password files (primary and standby), I can activate the redo_transport_user feature on both sides and check that everything works by doing a switchover and switching back.

-- On Primary and Standby

SQL> alter system set redo_transport_user='DBIDG';

System altered.


DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE1"
Switchover succeeded, new primary is "DBIT122_SITE2"

DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE2 - Primary database
    DBIT122_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 71 seconds ago)


DGMGRL> SWITCHOVER TO 'DBIT122_SITE1' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE1"
Connecting ...
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE2"
Switchover succeeded, new primary is "DBIT122_SITE1"

Looks very good so far. But what happens if I have to change the password of the ‘DBIDG’ user?

-- On Primary

SQL> alter user dbidg identified by Manager1;

User altered.

-- On Primary
oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122


-- On Standby
oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122

That’s cool. The password files on both sites have been updated successfully. They have the same time stamps and even the MD5 checksums are exactly the same. This is because of the new "Automatic Password Propagation to Standby" feature of 12cR2.

Conclusion

REDO_TRANSPORT_USER and "Automatic Password Propagation to Standby" are nice little features from Oracle. The REDO_TRANSPORT_USER parameter has existed for quite a while now, at least since 11gR2; however, "Automatic Password Propagation to Standby" is new with 12cR2.

 

The article Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER first appeared on Blog dbi services.

Upgrade Merge Patch shows the Error loading seed data for FND_ATTACHMENT_FUNCTIONS

Online Apps DBA - Wed, 2016-12-14 00:50

ERROR/ISSUE: During an upgrade from 12.1.3 to 12.2.0, the Upgrade Merge Patch throws the error loading seed data for FND_ATTACHMENT_FUNCTIONS: FUNCTION_NAME = CSF_CSFDCMAI, FUNCTION_TYPE = F, APPLICATION_SHORT_NAME = CSF, ORA-01422: exact fetch returns more than requested number of rows. Resolution: […]

The post Upgrade Merge Patch shows the Error loading seed data for FND_ATTACHMENT_FUNCTIONS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
