Feed aggregator

Oracle E-Business Suite APPLSYS, APPS and APPS_NE

The Oracle E-Business Suite has gone through many significant changes since its inception in the late 1980s. For example, I can personally remember upgrading clients to Release 10.5 of the E-Business Suite in the late 1990s, with the big change being the introduction of the APPS schema.

The introduction of the APPS schema greatly simplified the technical interdependencies of the then 40+ applications of Release 10.5 of the E-Business Suite. The most recent version of the Oracle E-Business Suite, Release 12.2, with 200+ modules, introduces on-line patching to reduce downtime requirements. This new technical functionality is based on Edition-based Redefinition provided by the Oracle 11gR2 database. For the E-Business Suite to make use of Editioning, Oracle has added a new schema to the ‘APPS’ family – the APPS_NE schema.

The APPS_NE schema is the owner of those objects previously owned by APPS that cannot be editioned; in other words, APPS_NE is the APPS schema for the non-editioned APPS foundation database objects. APPS_NE has similar elevated system privileges to APPS (e.g. SELECT ANY TABLE), but the two are not identical. The same password must be shared among APPLSYS, APPS, and APPS_NE; the default password for APPS_NE is 'APPS'.

-- This SQL gives a high-level summary of the difference between APPS and APPS_NE
SELECT OWNER, OBJECT_TYPE, COUNT(*)
FROM DBA_OBJECTS
WHERE OWNER = 'APPS_NE'
GROUP BY OWNER, OBJECT_TYPE
UNION
SELECT OWNER, OBJECT_TYPE, COUNT(*)
FROM DBA_OBJECTS
WHERE OWNER = 'APPS'
GROUP BY OWNER, OBJECT_TYPE
ORDER BY 1, 3 DESC;
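
To take that a step further and compare the elevated system privileges themselves, a minimal sketch along these lines (it assumes nothing beyond access to DBA_SYS_PRIVS) lists the system privileges held by one of the two schemas but not the other:

-- System privileges held by APPS but not by APPS_NE
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'APPS'
MINUS
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'APPS_NE';

-- System privileges held by APPS_NE but not by APPS
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'APPS_NE'
MINUS
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'APPS';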
 
The table below is a high-level summary of the APPS schemas.
 

Oracle E-Business Suite ‘APPS’ Schemas

APPS
Introduced with Release 10.5 of the E-Business Suite, APPS owns all of the applications code in the database and has access to all data in the Oracle E-Business Suite. All end-user connections also connect as APPS after being authenticated using the APPLSYSPUB schema. The APPS schema must have the same password as the APPLSYS and APPS_NE schemas.

APPLSYS
Owns the foundation objects (AD_* and FND_* tables) of the E-Business Suite, used to define users, menus, etc. The APPLSYS schema must have the same password as APPS and APPS_NE.

APPS_NE
New with 12.2, the APPS_NE schema is the non-editioned runtime ‘APPS’ user for the E-Business Suite. The APPS_NE schema must have the same password as the APPLSYS and APPS schemas.

APPS_MRC
APPS_MRC was created to support multiple reporting currencies (MRC) functionality. This schema has been obsolete since 11.5.10 and is no longer used. Its default name was APPS_MRC, but country code suffixes were added (e.g. APPS_UK, APPS_JP). APPS_MRC is dropped by the upgrade to 11.5.10 and should not exist in R12 instances.
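
A quick way to confirm which members of the ‘APPS’ family actually exist in a given instance (and that APPS_MRC is indeed gone) is a simple query such as the following sketch:

-- List the APPS-family schemas present in this instance
SELECT USERNAME, ACCOUNT_STATUS
FROM DBA_USERS
WHERE USERNAME LIKE 'APP%'
ORDER BY USERNAME;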

 

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Oracle Database 12.2.0.1 – Application PDB unable to sync – An update

Oracle in Action - Tue, 2017-05-02 04:57


In my last post, I had demonstrated that without OMF, if an application creates new datafiles, application PDBs always fail to sync with the application. When I mentioned it to Tim, he raised an SR with Oracle, who have clarified that it is Bug 21933632. Although it is not yet documented, OMF is mandatory when using the Application Container feature. If you attempt to sync application PDBs with an application that creates non-OMF datafile(s), the sync will run into problems while trying to replay a CREATE TABLESPACE or similar statement with a hard-coded file name.
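
In practical terms, the workaround is to run the application root (and therefore the statements that get replayed during the sync) with OMF enabled, so that no hard-coded file name is captured. A minimal sketch, with a hypothetical application name, version numbers and destination path:

-- Enable OMF so that datafile names are system-generated
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata' SCOPE=BOTH;

-- In the application root: tablespaces created during an application upgrade
-- then need no explicit datafile clause
ALTER PLUGGABLE DATABASE APPLICATION my_app BEGIN UPGRADE '1.0' TO '1.1';
CREATE TABLESPACE app_data;   -- OMF picks the file name
ALTER PLUGGABLE DATABASE APPLICATION my_app END UPGRADE;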

References:

Multitenant: Application Containers in Oracle Database 12c Release 2

————————————————————————————————————————-

Related Links:

Home

Oracle 12c Index

Oracle Database 12.2.0.1 – Application PDB unable to sync




Copyright © ORACLE IN ACTION [Oracle Database 12.2.0.1 – Application PDB unable to sync - An update], All Right Reserved. 2017.

The post Oracle Database 12.2.0.1 – Application PDB unable to sync – An update appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Aliases

Jonathan Lewis - Tue, 2017-05-02 03:23

Here’s a performance problem that came up on OTN recently. The following query (reformatted) takes “ages” to run – how do you address the problem:

SELECT
	COUNT(*) 
FROM
	smp_dbuser2.workflow_step_report
WHERE
	report_ID IN (
		SELECT	report_id
		FROM	smp_dbuser2.workflow_report
		WHERE	trunc(start_time) = '28-Apr-2017'
		AND	user_id = 'nbi_ssc'
	)
;


Various pieces of relevant information were supplied (the workflow_report table holds 1.4M rows, the workflow_step_report table holds 740M rows, and some indexes were described), but most significantly we were given the execution plan:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                      |     1 |     6 |    10T  (1)|999:59:59 |       |       |
|   1 |  SORT AGGREGATE       |                      |     1 |     6 |            |          |       |       |
|*  2 |   FILTER              |                      |       |       |            |          |       |       |
|   3 |    PARTITION HASH ALL |                      |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|   4 |     TABLE ACCESS FULL | WORKFLOW_STEP_REPORT |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|*  5 |    FILTER             |                      |       |       |            |          |       |       |
|   6 |     PARTITION HASH ALL|                      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
|*  7 |      TABLE ACCESS FULL| WORKFLOW_REPORT      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( EXISTS (SELECT 0 FROM "SMP_DBUSER2"."WORKFLOW_REPORT" "WORKFLOW_REPORT" WHERE :B1=:B2
              AND "USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))
   5 - filter(:B1=:B2)
   7 - filter("USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

You’ll notice that the optimizer has transformed the IN subquery into an EXISTS subquery – operation 2 is a FILTER operation, and you can see that the filter predicate at operation 2 shows the existence subquery that would be executed.
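
For reference, the shape of the transformed query is roughly the following (the :B1=:B2 filter hides the correlation, which corresponds to the report_id join):

SELECT
	COUNT(*)
FROM
	smp_dbuser2.workflow_step_report wsr
WHERE
	EXISTS (
		SELECT	null
		FROM	smp_dbuser2.workflow_report wr
		WHERE	wr.report_id = wsr.report_id
		AND	trunc(wr.start_time) = '28-Apr-2017'
		AND	wr.user_id = 'nbi_ssc'
	)
;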

If you look carefully at the execution plan (all of it), what can you deduce from it ? What, then, should be your next step in dealing with this performance problem ?

Answers at the end of the (UK) day. Resist the temptation to examine my comments in the OTN thread.


New OA Framework 12.2.5 Update 11 Now Available

Steven Chan - Tue, 2017-05-02 02:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • The view generated by rich table interactions affects performance by executing a blind query.
  • The text font is inconsistent on the Printable Page.
  • The German translation of a button label is out of sync.
  • Table data does not load when scrolling down in an inline-update table.

Related Articles

Categories: APPS Blogs

April 2017 Updates to AD and TXK for EBS 12.2

Steven Chan - Tue, 2017-05-02 02:00

We have been fine-tuning the administration tools for E-Business Suite 12.2 via a series of regular updates to the Applications DBA (AD) and EBS Technology Stack (TXK) components:

We have now made available an eleventh set of critical updates to AD and TXK. We strongly recommend that you apply these new AD and TXK updates at your earliest convenience:

They must be individually downloaded from My Oracle Support, as shown by this example for AD:

Refer to the following My Oracle Support knowledge document for full installation instructions and associated tasks:

What's New in this Patchset?

This patchset includes a large number of critical fixes for stability issues that will affect all customers.  It also includes the following new features:

Related Articles

Categories: APPS Blogs

Binding a Spring Cloud Task to a Pivotal Cloud Foundry Database Service

Pas Apicella - Mon, 2017-05-01 22:54
I previously blogged about how to create and deploy a Spring Cloud Task to Pivotal Cloud Foundry (PCF) as shown below.

http://theblasfrompas.blogspot.com.au/2017/03/run-spring-cloud-task-from-pivotal.html

Taking that same example, I have used Spring Cloud Connectors to persist the log output to a database table, avoiding the need to look through log files to view the output. A few things have to change to make this happen, as detailed below.

1. We need to change the manifest.yml to include a MySQL service instance as shown below

applications:
- name: springcloudtask-date
  memory: 750M
  instances: 1
  no-route: true
  health-check-type: none
  path: ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar
  services:
    - pmysql-test
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom

2. Alter the project dependencies to include the Spring Data JPA libraries needed to persist the log output to a table. Spring Cloud Connectors will automatically pick up the bound MySQL instance and connect for us when we push the application to PCF

https://github.com/papicella/SpringCloudTaskTodaysDate
  
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-task</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-entitymanager</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>

3. An Entity class, a Spring Data JPA repository interface, and a JPA task Configurer have been created for persisting the log output, as shown in the code below.

TaskRunOutput.java
  
package pas.au.pivotal.pa.sct.demo;

import javax.persistence.*;

@Entity
@Table(name = "TASKRUNOUTPUT")
public class TaskRunOutput
{
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String output;

    public TaskRunOutput()
    {
    }

    public TaskRunOutput(String output) {
        this.output = output;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getOutput() {
        return output;
    }

    public void setOutput(String output) {
        this.output = output;
    }

    @Override
    public String toString() {
        return "TaskRunOutput{" +
                "id=" + id +
                ", output='" + output + '\'' +
                '}';
    }
}

TaskRunRepository.java
  
package pas.au.pivotal.pa.sct.demo;

import org.springframework.data.jpa.repository.JpaRepository;

// Repository used by JpaTaskConfigurer to persist TaskRunOutput rows
public interface TaskRunRepository extends JpaRepository<TaskRunOutput, Long>
{
}

JpaTaskConfigurer.java
  
package pas.au.pivotal.pa.sct.demo.configuration;

import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import pas.au.pivotal.pa.sct.demo.TaskRunOutput;
import pas.au.pivotal.pa.sct.demo.TaskRunRepository;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.task.configuration.DefaultTaskConfigurer;
import org.springframework.cloud.task.listener.annotation.BeforeTask;
import org.springframework.cloud.task.repository.TaskExecution;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.stereotype.Component;
import org.springframework.transaction.PlatformTransactionManager;

@Component
public class JpaTaskConfigurer extends DefaultTaskConfigurer {

    private static final Log logger = LogFactory.getLog(JpaTaskConfigurer.class);

    @Autowired
    private PlatformTransactionManager transactionManager;

    @Autowired
    private TaskRunRepository taskRunRepository;

    @Override
    public PlatformTransactionManager getTransactionManager() {
        if (this.transactionManager == null) {
            this.transactionManager = new JpaTransactionManager();
        }

        return this.transactionManager;
    }

    @BeforeTask
    public void init(TaskExecution taskExecution)
    {
        String execDate = new SimpleDateFormat().format(new Date());
        taskRunRepository.save(new TaskRunOutput("Executed at " + execDate));
        logger.info("Executed at : " + execDate);
    }
}

4. Now, as per the previous blog, execute the task and verify that it completes without error. The screenshot below shows this on the "Tasks" tab

Note: You would need to push the application to Pivotal Cloud Foundry before you can execute it, as shown in the original blog entry


5. Now, if you follow the blog entry below, you can deploy a web-based interface for the Pivotal MySQL instance to view the table and its output

http://theblasfrompas.blogspot.com.au/2017/04/accessing-pivotal-mysql-service.html

With Pivotal MySQL*Web installed the output can be viewed as shown below.



Categories: Fusion Middleware

Oracle 12cR2 Security - Listener Port

Pete Finnigan - Mon, 2017-05-01 19:26
I downloaded Oracle 12cR2 from Oracle when it became available in March and installed a legacy SE2 database and also a single PDB multitenant database and started some investigations to discover and look at the new security features added in....[Read More]

Posted by Pete On 01/05/17 At 01:03 PM

Categories: Security Blogs

Creating an ACFS filesystem

DBA Scripts and Articles - Mon, 2017-05-01 15:12

About the ACFS filesystem: ACFS means Automatic Storage Management Cluster File System. This filesystem volume resides inside ASM and can be used to store database files, but also any type of files with or without a direct relation to the database. Oracle ACFS does not support Oracle Grid Infrastructure or Oracle Cluster Registry voting files. … Continue reading Creating an ACFS filesystem

The post Creating an ACFS filesystem appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Top 5 Quotes from Oracle’s 2017 Modern Finance Experience

Look Smarter Than You Are - Mon, 2017-05-01 12:40
Three days of Oracle’s Modern Finance Experience set my personal new record for “Most Consecutive Days Wearing a Suit.” Surrounded by finance professionals (mostly CFOs, VPs of FP&A, and people who make money from Finance execs), I came prepared to learn nothing… yet found myself quoting the content for days to come.

The event featured top notch speakers on cutting edge concepts: the opening keynote with Mark Hurd, a panel on the changing world of finance with Matt Bradley & Rondy Ng, Hari Sankar on Hybrid in the world of Oracle EPM, and even one of my competitors (more on that in a second).

For those of you who couldn’t be there (or didn’t want to pay a lot of money to dress up for three days), I thought I’d share my top five quotes as best as I could transcribe them.

“IT currently spends 80% of its budget on maintenance. Boards are demanding increased security, compliance, and regulatory investment. All these new investments come from the innovation budget, not maintenance.”
- Mark Hurd, Oracle, Co-Chief Executive Officer

Mark Hurd was pulling double duty: he gave the opening keynote at Oracle HCM World (held at a nearby hotel) and then bolted over to Oracle Modern Finance Experience to deliver our keynote. He primarily talked Oracle strategy for the next few years which – to badly paraphrase The Graduate – can be summed up in one word: Cloud.

He gave a compelling argument for why the Cloud is right for Oracle and businesses (though server vendors and hosting providers should be terrified). Now let me be clear: much of this conference was focused around the Cloud, so many of these quotes will be too, but what I liked about Mark’s presentation was it gave clear, concise, and practically irrefutable arguments of the benefits of the Cloud.

The reason I liked the quote above is it answers the concerns from all those IT departments: what happens to my job if I don’t spend 80% of our resources on maintaining existing systems? You’ll get to spend your time on actually improving systems. Increased innovation, greater security, better compliance … the things you’ve been wanting to get to but never have time or budget to address.

“The focus is not on adding lots of new features to on-premises applications. Our priority is less on adding to the functional richness and more on simplifying the process of doing an upgrade.”

- Hari Sankar, Oracle, GVP of Product Management

I went to a session on the hybrid world of Oracle EPM. I knew Hari would be introducing a customer who had both on-premises Hyperion applications and Cloud applications. What I didn’t know is that he would be addressing the future of Oracle EPM on-premises. As most of you know, the current version for the on-premises Oracle EPM products is 11.1.2.4.x. What many of you do not know is that Oracle has taken future major versions (11.1.2.5 and 12c) of those products off the roadmap.

Hari spoke surprisingly directly to the audience about why Oracle is not abandoning EPM on-prem, but why they will not be pushing the Cloud versions and all their cool new functionality back down to the historical user base. To sum up his eight+ minute monologue, the user base is not requesting new functionality. They want simplicity and an easy path to transition to the Cloud eventually, and that’s why Oracle will be focusing on PSUs (Patch Set Updates) for the EPM products and not on “functional richness.”

Or to put it another way: Hyperion Planning and other Hyperion product users who want impressive new features? Go to the Cloud because they’re probably never coming to on-premises. To quote Hari once more, “create a 1-3 year roadmap for moving to a Cloud environment” or find your applications increasingly obsolete.

 “Hackers are in your network: they’re just waiting to pull the trigger.”

- Rondy Ng, Oracle, SVP of Applications Development

There was an entertaining Oracle panel led by Jeff Jacoby (Master Principal Sales Consultant and a really nice guy no matter what his family says) that included Rondy Ng (he’s over ERP development), Matt Bradley (he’s over EPM development), and Michael Gobbo (also a lofty Master Principal Sales Consultant). While I expected to be entertained (and Gobbo’s integrated ERP/HCM/EPM demo was one for the ages), I didn’t expect them to tackle the key question on everyone’s mind: what about security in the Cloud?

Mark Hurd did address this in his keynote and he gave a fun fact: if someone finds a security flaw in Oracle's software on a Tuesday, Oracle will patch it by Wednesday, and it will take an average of 18 months until that security patch gets installed in the majority of their client base. Rondy addressed it even more directly: if you think hackers haven't infiltrated your network, you're sticking your head in the sand.

Without going into all of Rondy’s points, his basic argument was that Oracle is better at running a data center than any of their customers out there. He pointed out that Oracle now has 90 data centers around the world and that security overrides everything else they do. He also said, “security is in our DNA” which is almost the exact opposite of “Danger is my middle name,” but while Rondy’s line won’t be getting him any dates, it should make the customer base feel a lot safer about letting Oracle host their Cloud applications.

 “Cloud is when not if.”

- David Axson, Accenture, Managing Director

I have to admit, I have developed a man crush on one of my competitors. I wrote down more quotes from him than from every other speaker at the event put together. His take on the future of Finance and Planning so closely paralleled my thoughts that I almost felt like he had read the State of Business Analytics white paper we wrote. For instance, in that white paper, we wrote about Analysis Inversion: that the responsibility for analyzing the report should be in the hands of the provider of the report, not the receiver of the report. David Axson put it this way: “The reporting and analysis is only as good as the business decisions made from it. In finance, your job starts when you deliver the report and analysis. Most people think that's when it ends.”

The reason I picked the quote above is because it really sums up the whole theme of the conference: the Cloud is not doing battle with on-premises. The Cloud did that battle, won with a single sucker punch while on-prem was thinking it had it made, and the Cloud is currently dancing on the still-unconscious body of on-prem, which right now is having a bad nightmare about losing its Blackberry while walking from Blockbuster to RadioShack.

David is right: the Cloud is coming to every company and the only question is when you’ll start that journey.

“Change and Certainty are the new normal. Combat with agility.”

- Rod Johnson, Oracle, SVP North America ERP, EPM, SCM Enterprise Business

So, what can we do about all these changes coming to Finance? And for that matter, all the changes coming to every facet of every industry in every country on Earth? Rod Johnson (which he assures me is not his "stage" name) said it best: don't fight the change but rather embrace it and make sure you can change faster than everyone else.

"Change comes to those who wait, but it’s the ones bringing the change who are in control."

- Edward Roske, interRel, CEO


To read more about some of those disruptive changes coming to the world of Finance, download the white paper I mentioned above.
Categories: BI & Warehousing

Webcast Tomorrow: Introducing Oracle Content and Experience Cloud

WebCenter Team - Mon, 2017-05-01 09:24
Introducing Oracle Content and Experience Cloud: Drive Engagement by Unleashing the Power of Content
 
93% of marketers have siloed teams and technologies by channel. Join this webcast to learn about Oracle Content and Experience Cloud -- a Digital Experience platform that drives omni-channel content management and delivers engaging experiences to your customers, partners, and employees. Hear how Oracle Content and Experience Cloud can help you:
  • Easily find, access, use, reuse and collaborate on content anytime, anywhere and on any device
  • Drive consistent, compliant and contextual experiences across multiple channels
  • Centralize content management across digital channels and enterprise applications
Empower your marketing teams to unleash the power of content. 

Register today.


Sincerely,
Content and Experience Cloud Team
Webcast: Introducing Oracle Content and Experience Cloud
May 2, 2017, 10:00 AM PDT / 01:00 PM EDT
Featured Speaker: David Le Strat
Senior Director, Product Management
Oracle
David Le Strat is Senior Director of Product Management for the Digital Experience and Content and Experience Cloud product portfolios at Oracle. In his role, David is responsible for product strategy, definition and delivery and go-to-market.
 

RTFM

Jonathan Lewis - Mon, 2017-05-01 06:55

Imagine you’re fairly new to Oracle and don’t have a lot of background information at your fingertips; then one day someone tells you to read the manual pages for the view dba_free_space. Look carefully at this sentence:

Note that if a data file (or entire tablespace) is offline in a locally managed tablespace, you will not see any extent information.

Can you spot the error ? Did you spot the error when you first read the sentence – or did you fill in the gap without noticing what you were doing ?

Let’s demonstrate the accuracy of the statement (simple cut-n-paste from an SQL*Plus session on 12.1.0.2 running in archivelog mode, and with a locally managed tablespace consisting of 4 (oracle managed) files on a filesystem):


SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                7        128   51380224       6272            7
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

4 rows selected.

SQL> select file#, ts#, name from v$datafile;

     FILE#        TS# NAME
---------- ---------- ----------------------------------------------------------------------
         1          0 /u02/app/oracle/oradata/OR32/datafile/o1_mf_system_cbcysq2o_.dbf
         2          9 /u02/app/oracle/oradata/OR32/datafile/o1_mf_undotbs_d84db0s2_.dbf
         3          1 /u02/app/oracle/oradata/OR32/datafile/o1_mf_sysaux_cbcyrmyd_.dbf
         4         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchn57_.dbf
         5          6 /u02/app/oracle/oradata/OR32/datafile/o1_mf_test_8k__cbd120yc_.dbf
         6          4 /u02/app/oracle/oradata/OR32/datafile/o1_mf_users_cbcyv47y_.dbf
         7         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf
         8         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpcho47_.dbf
         9         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchok1_.dbf

9 rows selected.

SQL> alter database datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf' offline;

Database altered.

SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

3 rows selected.

SQL> recover datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf';
Media recovery complete.
SQL> alter database datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf' online;

Database altered.

SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                7        128   51380224       6272            7
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

4 rows selected.

SQL> spool off

See the bit in the middle where I have “3 rows selected” for the lob_test tablespace: the manual says I “will not see any extent information” – but the only change in the output is the absence of information about the one data file that I’ve put offline.

You may want to argue that “obviously” the statement was only about the data file that was offline – but is that a couple of years’ experience allowing you to interpret the text? Some might assume (with a little prior experience and if they hadn’t done the experiment and given the parenthetical reference to “entire tablespace”) that the statement was about the effect on a single tablespace – and maybe others would criticise them for making unwarranted assumptions.

But maybe you’re a novice and believed what the manual actually said.

It’s a fairly silly example, of course, but the point of this note is that when you tell someone to RTFM remember that they might actually do exactly that and not have the benefit of being able to know (unthinkingly) that the manual is wrong. If you go one step further and tell them to “stop making assumptions and RTFM” then just remember that you probably make a lot of assumptions without realising it when you read the manuals, and maybe it’s your assumptions that lead you to the correct interpretation of the manual.

Footnote:

If you’re feeling in the mood to split hairs, don’t forget that dba_free_space doesn’t usually give you any information about extents when it’s reporting locally managed tablespaces, it tells you about the space in which extents can be created; the one exception (that I know of) is when you have an object in the recyclebin and each extent of that object is listed as free space (see this article and the footnote here).  It’s only for dictionary managed tablespaces that dba_free_space reports extent information – the rows stored in the fet$ table.

 


Reminder: Sign E-Business Suite JAR Files

Steven Chan - Mon, 2017-05-01 02:00

Oracle disabled MD5 signed JARs in the April 2017 Critical Patch Update.  JAR files signed with MD5 algorithms will be treated as unsigned JARs.

MD5 JAR file signing screenshot

Does this affect EBS environments?

Yes. This applies to Java 6, 7, and 8 used in EBS 12.1 and 12.2.  Oracle E-Business Suite uses Java, notably for running Forms-based content via the Java Runtime Environment (JRE) browser plug-in.  Java-based content is delivered in JAR files.  Customers must sign E-Business Suite JAR files with a code signing certificate from a trusted Certificate Authority (CA). 

A code signing certificate from a Trusted CA is required to sign your Java content securely. It allows you to deliver signed code (e.g. JAR files) from your server to users' desktops, verifying you as the publisher and trusted provider of that code, and also verifying that the code has not been altered. A single code signing certificate allows you to verify any amount of code across multiple EBS environments. This is a different type of certificate from the commonly used SSL certificate, which is used to authorize a server on a per-environment basis. You cannot use an SSL certificate for the purpose of signing JAR files. 

Instructions on how to sign EBS JARs are published here:

Where can I get more information?

Oracle's plans for changes to the security algorithms and associated policies/settings in the Oracle Java Runtime Environment (JRE) and Java SE Development Kit (JDK) are published here:

More information about Java security is available here:

Getting help

If you have questions about Java Security, please log a Service Request with Java Support.

If you need assistance with the steps for signing EBS JAR files, please log a Service Request against the "Oracle Applications Technology Stack (TXK)" > "Java."

Related Articles

Categories: APPS Blogs

12cR2 partial PDB backup

Yann Neuhaus - Mon, 2017-05-01 01:21

I had a recent question about a mention of Partial PDB backups in the 12cR2 Multitenant book (the excerpt appears as a screenshot in the original post).
Here is an example in 12.2 with local undo to illustrate the answer, which may help to understand what a partial PDB backup is.

Of course, since 12cR1 you can back up PDBs individually, without the CDB$ROOT, in the same way you can back up only a subset of the tablespaces of a CDB. It can be part of your backup strategy, but it is not to be considered a backup that you can restore elsewhere later. A PDB is not self-consistent without the CDB$ROOT, except if it has been closed and unplugged. In 12.1 you cannot restore a partial PDB backup if you don't have the CDB$ROOT at the same point in time, because the recovery phase will need to roll back the ongoing transactions, and this requires the UNDO tablespace to be recovered at the same point in time.

However, in 12.2 with LOCAL UNDO, the partial PDB backup contains the local UNDO tablespace, and it can then be sufficient to do a PDB Point In Time Recovery within the same CDB. In this case, and in this case only, it is not required to have a backup of the root.
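
Before relying on that, you can check whether the CDB actually runs in local undo mode with a quick query (12.2 onwards):

select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';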

Let’s test it. I explicitly delete all backups


Recovery Manager: Release 12.2.0.1.0 - Production on Sun Apr 30 22:11:38 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
RMAN>
echo set on
 
RMAN> connect target /
connected to target database: CDB1 (DBID=914521258)
 
RMAN> delete noprompt backup;
 
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
specification does not match any backup in the repository
 
 
RMAN> list backup;
specification does not match any backup in the repository

No backup

I have only one PDB here:


RMAN> report schema;
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 820 SYSTEM YES /u01/oradata/CDB1A/system01.dbf
3 630 SYSAUX NO /u01/oradata/CDB1A/sysaux01.dbf
4 70 UNDOTBS1 YES /u01/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB1A/pdbseed/system01.dbf
6 330 PDB$SEED:SYSAUX NO /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u01/oradata/CDB1A/users01.dbf
8 100 PDB$SEED:UNDOTBS1 NO /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
103 250 PDB1:SYSTEM YES /u01/oradata/CDB1A/PDB1/system01.dbf
104 350 PDB1:SYSAUX NO /u01/oradata/CDB1A/PDB1/sysaux01.dbf
105 100 PDB1:UNDOTBS1 YES /u01/oradata/CDB1A/PDB1/undotbs01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 131 TEMP 32767 /u01/oradata/CDB1A/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u01/oradata/CDB1A/pdbseed/temp012017-04-08_22-24-09-441-PM.dbf
4 64 PDB1:TEMP 32767 /u01/oradata/CDB1A/PDB1/temp012017-04-08_22-24-09-441-PM.dbf

all datafiles need backup:


RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/CDB1A/system01.dbf
3 0 /u01/oradata/CDB1A/sysaux01.dbf
4 0 /u01/oradata/CDB1A/undotbs01.dbf
5 0 /u01/oradata/CDB1A/pdbseed/system01.dbf
6 0 /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 0 /u01/oradata/CDB1A/users01.dbf
8 0 /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
103 0 /u01/oradata/CDB1A/PDB1/system01.dbf
104 0 /u01/oradata/CDB1A/PDB1/sysaux01.dbf
105 0 /u01/oradata/CDB1A/PDB1/undotbs01.dbf

Partial backup not including the root

I backup only the pluggable database PDB1


RMAN> backup pluggable database PDB1;
Starting backup at 30-APR-17
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00104 name=/u01/oradata/CDB1A/PDB1/sysaux01.dbf
channel ORA_DISK_1: starting piece 1 at 30-APR-17
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00103 name=/u01/oradata/CDB1A/PDB1/system01.dbf
channel ORA_DISK_2: starting piece 1 at 30-APR-17
channel ORA_DISK_3: starting full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00105 name=/u01/oradata/CDB1A/PDB1/undotbs01.dbf
channel ORA_DISK_3: starting piece 1 at 30-APR-17
channel ORA_DISK_1: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 30-APR-17
 
Starting Control File and SPFILE Autobackup at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/autobackup/2017_04_30/o1_mf_s_942703909_djdk85m1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-APR-17

Here is the proof that only PDB1 has a backup, the CDB$ROOT has no backup:


RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/CDB1A/system01.dbf
3 0 /u01/oradata/CDB1A/sysaux01.dbf
4 0 /u01/oradata/CDB1A/undotbs01.dbf
5 0 /u01/oradata/CDB1A/pdbseed/system01.dbf
6 0 /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 0 /u01/oradata/CDB1A/users01.dbf
8 0 /u01/oradata/CDB1A/pdbseed/undotbs01.dbf

Restore the PDB

I will do PDB Point In Time Recovery, using a restore point


RMAN> create restore point RP;
Statement processed
 
RMAN> alter pluggable database PDB1 close;
Statement processed
 

Here is the restore


RMAN> restore pluggable database PDB1 until restore point RP;
Starting restore at 30-APR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=149 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=268 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=398 device type=DISK
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00104 to /u01/oradata/CDB1A/PDB1/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00105 to /u01/oradata/CDB1A/PDB1/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp
channel ORA_DISK_3: starting datafile backup set restore
channel ORA_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_DISK_3: restoring datafile 00103 to /u01/oradata/CDB1A/PDB1/system01.dbf
channel ORA_DISK_3: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp
channel ORA_DISK_2: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp tag=TAG20170430T221146
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:03
channel ORA_DISK_3: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp tag=TAG20170430T221146
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp tag=TAG20170430T221146
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 30-APR-17

and the recover


RMAN> recover pluggable database PDB1 until restore point RP;
Starting recover at 30-APR-17
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
 
starting media recovery
media recovery complete, elapsed time: 00:00:00
 
Finished recover at 30-APR-17

Finally, I open resetlogs


RMAN> alter pluggable database PDB1 open resetlogs;
Statement processed

Thanks to LOCAL UNDO there is no need to restore the CDB$ROOT into an auxiliary instance, as was the case for PDBPITR in 12.1, and so we can do PDBPITR without a backup of the root.

So what?

In theory, and as demonstrated above, including CDB$ROOT in a partial PDB backup is not mandatory in 12cR2 in local undo mode. However, keep in mind that this is for academic purposes only, not for real-life production. For short-term point-in-time recovery, you will not use backups but flashback. For a long-term restore, you may have different reasons to restore the PDB elsewhere with its CDB$ROOT at the same point in time: some common objects (users, roles, directories, etc.) may have changed. And anyway, your backup strategy should be at CDB level.

 

The post 12cR2 partial PDB backup appeared first on Blog dbi services.

Real-time materialized view not working as expected

Tom Kyte - Mon, 2017-05-01 01:06
Hello, I have a problem with Real-time Materialized View 'ON QUERY COMPUTATION' functionality. My Real-time MV is enabled for both QUERY REWRITE and ON QUERY COMPUTATION. As I understand, when the MV is fresh, we get a MAT_VIEW R...
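
For context, a real-time materialized view of the kind described is created along these lines (a minimal sketch only; the SALES table and its columns are hypothetical):

-- Materialized view log needed for fast refresh
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID (prod_id, amount) INCLUDING NEW VALUES;

-- MV enabled for both query rewrite and on-query computation (12.2)
CREATE MATERIALIZED VIEW sales_rtmv
  REFRESH FAST ON DEMAND
  ENABLE QUERY REWRITE
  ENABLE ON QUERY COMPUTATION
AS
SELECT prod_id, COUNT(*) cnt, COUNT(amount) cnt_amount, SUM(amount) total_amount
FROM   sales
GROUP  BY prod_id;
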
Categories: DBA Blogs

Trigger

Tom Kyte - Mon, 2017-05-01 01:06
Hi Team, I have to create one trigger. Whenever some insert happens I want that trigger to be fired. Please see this query: select dept_id, acct_airing_id, lag(dept_airing_id) over (partition by acct_airing_id order by dept_id) as old_dept_a...
Categories: DBA Blogs

Online partitioning and purging old data

Tom Kyte - Mon, 2017-05-01 01:06
Hi Tom, I am writing to you for the first time and I hope you can help me. I should consider resizing some tables. I am talking about log tables (user event log, app event log, transaction log, etc.). They are mostly standalone tables with no childr...
Categories: DBA Blogs

create a procedure to insert data from various tables

Tom Kyte - Mon, 2017-05-01 01:06
HI, I have this parameter table tb_parameter. <code> +----+----------+-----+-----------+--------------------------------------------------------------------------------------- | ID_SRC| TABLE_NAME | VIEW_NAME | COLUMN_VIEW | ...
Categories: DBA Blogs

ORA-02292: integrity constraint with FK reference with same table PK

Tom Kyte - Mon, 2017-05-01 01:06
Hello Tom, Need your help. DB: 12.1.0.2.0. While executing a script we are facing the below issue: -- The table has a PK and an FK (the FK refers to the same table's PK). While we execute the script to delete some data, somehow we get the error of ORA-022...
Categories: DBA Blogs

Installation error

Tom Kyte - Mon, 2017-05-01 01:06
Hi, I'm trying to install Oracle 12c Database on Ubuntu and got this error which I can't figure out: ORA-27104 (http://imgur.com/24gauEv). When I click on ignore, I get an ORA-01034 error and then [INS-20802] Oracle Database Configuration Assistant...
Categories: DBA Blogs

Recover database using redo log

Tom Kyte - Mon, 2017-05-01 01:06
I read " professional oracle programming 2005 " book. http://arsamandish.com/dl/ebook/oracle/Professional%20Oracle%20Programming%202005.pdf On page number 4 , I snippet a text as here. Redo Log Files :- --------------- One of key features...
Categories: DBA Blogs
