Feed aggregator

Exception in declaration section

Tom Kyte - Sat, 2017-11-04 02:46
create or replace procedure proc_delme is
  n number := 1/0;
begin
  dbms_output.put_line(n);
exception
  when others then
    dbms_output.put_line(sqlerrm);
end proc_delme;
/

If I do the following, then the raised error is not ...
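The point of the question is that an exception raised while initializing a variable in the declaration section is never caught by that block's own handler. A common fix (a sketch, not necessarily the answer given in the thread) is to do the initialization in the executable section instead:

```sql
create or replace procedure proc_delme is
  n number;
begin
  n := 1/0;   -- now raised in the executable section, so the handler below fires
  dbms_output.put_line(n);
exception
  when others then
    dbms_output.put_line(sqlerrm);  -- prints ORA-01476: divisor is equal to zero
end proc_delme;
/
```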
Categories: DBA Blogs

Goldengate XAG HAS

Michael Dinh - Fri, 2017-11-03 19:53

If you install GI for SI DB, then you might as well install XAG for OGG.

Imagine if there were a Vagrant setup to put all of this together?

[oracle@db-asm-1 xag]$ mkdir -p /u01/app/oracle/xag

[oracle@db-asm-1 xag]$ ./xagsetup.sh --install --directory /u01/app/oracle/xag
Installing Oracle Grid Infrastructure Agents on: db-asm-1

[oracle@db-asm-1 ~]$ cd /u01/app/oracle/xag/bin/

[oracle@db-asm-1 bin]$ ./agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 8.1.0

[oracle@db-asm-1 bin]$ ./agctl query deployment
The Oracle Grid Infrastructure Agents deployment is standalone

[oracle@db-asm-1 bin]$ ./agctl add goldengate --help
Adds Goldengate instance to Oracle Clusterware.

[oracle@db-asm-1 bin]$ ./agctl add goldengate ogg_amer \
> --instance_type dual --databases ora.amer.db \
> --gg_home /u01/app/oracle/amer/ogg/12.3.0_ora12c \
>  --oracle_home /u01/app/oracle/

[oracle@db-asm-1 bin]$ ./agctl status goldengate ogg_amer
Goldengate  instance 'ogg_amer' is not running

[oracle@db-asm-1 bin]$ ./agctl start goldengate ogg_amer

[oracle@db-asm-1 bin]$ ./agctl status goldengate ogg_amer
Goldengate  instance 'ogg_amer' is running on db-asm-1

[oracle@db-asm-1 bin]$ ./agctl config goldengate
XAG-212: Instance '' is not yet registered.

[oracle@db-asm-1 bin]$ ./agctl config goldengate ogg_amer
GoldenGate location is: /u01/app/oracle/amer/ogg/12.3.0_ora12c
GoldenGate instance type is: dual
ORACLE_HOME location is: /u01/app/oracle/
Databases needed: ora.amer.db
EXTRACT groups to monitor:
REPLICAT groups to monitor:
Critical EXTRACT groups:
Critical REPLICAT groups:
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no
[oracle@db-asm-1 bin]$

$ ./crs_stat.sh
The Oracle base remains unchanged with value /u01/app/oracle
NAME                                          TARGET     STATE           SERVER       STATE_DETAILS
-------------------------                     ---------- ----------      ------------ ------------------
ora.CRS.dg                                    ONLINE     ONLINE          db-asm-1     STABLE
ora.DATA.dg                                   ONLINE     ONLINE          db-asm-1     STABLE
ora.FRA.dg                                    ONLINE     ONLINE          db-asm-1     STABLE
ora.LISTENER.lsnr                             ONLINE     ONLINE          db-asm-1     STABLE
ora.asm                                       ONLINE     ONLINE          db-asm-1     Started,STABLE
ora.ons                                       OFFLINE    OFFLINE         db-asm-1     STABLE
ora.amer.db                                   ONLINE     ONLINE          db-asm-1     Open,HOME=/u01/app/o
ora.cssd                                      ONLINE     ONLINE          db-asm-1     STABLE
ora.diskmon                                   OFFLINE    OFFLINE         STABLE
ora.euro.db                                   ONLINE     ONLINE          db-asm-1     Open,HOME=/u01/app/o
ora.evmd                                      ONLINE     ONLINE          db-asm-1     STABLE
xag.ogg_amer.goldengate                       ONLINE     ONLINE          db-asm-1     STABLE

$ crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db-asm-1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'db-asm-1'
CRS-2673: Attempting to stop 'ora.euro.db' on 'db-asm-1'
CRS-2673: Attempting to stop 'xag.ogg_amer.goldengate' on 'db-asm-1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'db-asm-1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'db-asm-1' succeeded
CRS-2677: Stop of 'ora.euro.db' on 'db-asm-1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'db-asm-1' succeeded
CRS-2677: Stop of 'xag.ogg_amer.goldengate' on 'db-asm-1' succeeded
CRS-2673: Attempting to stop 'ora.amer.db' on 'db-asm-1'
CRS-2677: Stop of 'ora.amer.db' on 'db-asm-1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'db-asm-1'
CRS-2677: Stop of 'ora.DATA.dg' on 'db-asm-1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'db-asm-1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'db-asm-1'
CRS-2677: Stop of 'ora.FRA.dg' on 'db-asm-1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'db-asm-1'
CRS-2677: Stop of 'ora.evmd' on 'db-asm-1' succeeded
CRS-2677: Stop of 'ora.asm' on 'db-asm-1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'db-asm-1'
CRS-2677: Stop of 'ora.cssd' on 'db-asm-1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db-asm-1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
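Starting everything back up is symmetric; a quick sketch using the same tools shown above (crsctl check has is the standard companion command to confirm HAS is back online):

```
$ crsctl start has      # start Oracle High Availability Services and its resources
$ crsctl check has      # confirm HAS is online
$ /u01/app/oracle/xag/bin/agctl status goldengate ogg_amer
```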

GoldenGate Naming Convention P01

Michael Dinh - Fri, 2017-11-03 17:33

I had a nice discussion with teammates about standards.

It’s only wrong if there are no standards or naming conventions at all; otherwise, let your imagination run wild.

Hence, before you embark, think about it as it will make life much easier.

For the prompt: I like to know what the ORACLE_SID is for the environment.


For GoldenGate: I did it this way because there are 2 DBs / 2 GGs on the same host.

Why ora12c? There are 2 options when installing GoldenGate: ORA11g|ORA12c


There was discussion about ogg/gg/ggs – it doesn’t really matter.

Where should it reside? I had planned for /u02 and /u03 but Vagrant was not being nice to me.

Why different mount? There are GG directories and trails which will fill up.

I like to KISS and avoid soft links.

One thing that does annoy me is having to type out $GG_HOME.


Make life simple, use aliases.

$ cat .bash_profile

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
export PATH=$PATH:$HOME/.local/bin:$HOME/bin
. ~/.alias

$ cat .alias

alias amer='source ~/.amer'
alias euro='source ~/.euro'
alias ggs='cd $GG_HOME'

$ cat .amer

export LD_LIBRARY_PATH=/lib:/usr/lib
export ORACLE_SID=amer
. oraenv
export GG_HOME=/u01/app/oracle/amer/ogg/12.3.0_ora12c
export PS1="\u@\h:\${ORACLE_SID}:\${PWD}\n$ "

$ amer

The Oracle base remains unchanged with value /u01/app/oracle
$ env|egrep 'ORACLE|HOME'

$ cat .euro

export LD_LIBRARY_PATH=/lib:/usr/lib
export ORACLE_SID=euro
. oraenv
export GG_HOME=/u01/app/oracle/euro/ogg/12.3.0_ora12c
export PS1="\u@\h:\${ORACLE_SID}:\${PWD}\n$ "

$ euro

The Oracle base remains unchanged with value /u01/app/oracle
$ env|egrep 'ORACLE|HOME'

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version OGGCORE_12.
Linux, x64, 64bit (optimized), Oracle 12c on Jul 21 2017 23:31:13
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.

GGSCI (db-asm-1) 1> exit

$ grep ORA oggcore.rsp

# Specify ORA12c for installing Oracle GoldenGate for Oracle Database 12c and
#         ORA11g for installing Oracle GoldenGate for Oracle Database 11g
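For reference, that choice ends up as a single parameter in the silent-install response file; a sketch using the standard oggcore.rsp parameter names and the path from this post:

```
INSTALL_OPTION=ORA12c
SOFTWARE_LOCATION=/u01/app/oracle/amer/ogg/12.3.0_ora12c
START_MANAGER=false
```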

Oracle SOA Suite 12c: Installation - Preparing the database

Dietrich Schroff - Fri, 2017-11-03 15:24
After a successful installation of the Oracle 12c database, the next step is to create a pluggable database (PDB).
Therefore you have to run dbca (the Database Configuration Assistant):

 The first check fails with:
[oracle@localhost ~]$ export ORACLE_SID=soasuite12c
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release Production on Sat Oct 7 17:00:21 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter user-name: bpeladmin
Enter password:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3701
Additional information: -1824536353
Process ID: 0
Session ID: 0 Serial number: 0

This is because the tnsnames.ora is not correct:
[oracle@localhost admin]$ cat /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /home/oracle/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
You have to add this entry:
SOASUITE12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = soasuite12c)
    )
  )
And here we go:
[oracle@localhost admin]$ sqlplus bpeladmin@soasuite12c

SQL*Plus: Release Production on Sat Oct 7 17:22:14 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production

SQL> show con_name;


ORA-15040 ORA-15042 with EXTERNAL redundancy Diskgroup

Amardeep Sidhu - Fri, 2017-11-03 12:57

A colleague was working on an ASM issue (a standalone one, Version on AIX) at one of the customer sites. Later on, I also joined him. The issue was that the customer had added a few new disks to an existing diskgroup. Everything went well and the rebalance kicked in. After some time, something happened and all of a sudden the diskgroup was dismounted. Trying to mount the diskgroup again gave:

ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "27" is missing from group number "2"

Here is the relevant text from the ASM alert log

ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 19: No such device
Additional information: -1
Additional information: 1048576
WARNING: Write Failed. group:2 disk:27 AU:1005 offset:0 size:1048576
Fri Nov 03 10:55:27 2017
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_dbw0_58983380.trc:
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 19: No such device
Additional information: -1
Additional information: 4096
WARNING: Write Failed. group:2 disk:27 AU:0 offset:16384 size:4096
NOTE: cache initiating offline of disk 27 group DATADG
NOTE: process _dbw0_+asm1 (58983380) initiating offline of disk 27.3928481273 (DISK_01) with mask 0x7e in group 2
Fri Nov 03 10:55:27 2017
WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f is now being offlined
WARNING: Disk 27 (DISK_01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 2, dsk = 27/0xea27ddf9, mask = 0x6a, op = clear
ERROR: failed to copy file +DATADG.263, extent 1952
GMON updating disk modes for group 2 at 36 for pid 9, osid 58983380
ERROR: Disk 27 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 2)
ERROR: ORA-15080 thrown in ARB0 for group number 2
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_57672234.trc:
ORA-15080: synchronous I/O operation to a disk failed
Fri Nov 03 10:55:27 2017
NOTE: stopping process ARB0
WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f offline is being aborted
WARNING: Offline of disk 27 (DISK_01) in group 2 and mode 0x7f failed on ASM inst 1
NOTE: halting all I/Os to diskgroup 2 (DATADG)
Fri Nov 03 10:55:28 2017
NOTE: cache dismounting (not clean) group 2/0xDEB72D47 (DATADG)
NOTE: messaging CKPT to quiesce pins Unix process pid: 62128816, image: oracle@tiiproddb1.murugappa.co.in (B000)
NOTE: dbwr not being msg'd to dismount
Fri Nov 03 10:55:28 2017
NOTE: LGWR doing non-clean dismount of group 2 (DATADG)
NOTE: LGWR sync ABA=124.7138 last written ABA 124.7138
NOTE: cache dismounted group 2/0xDEB72D47 (DATADG)
SQL> alter diskgroup DATADG dismount force /* ASM SERVER */ 

At this stage disk 27 was not readable even with dd, which means something was wrong with the disk itself. Since it is an external redundancy diskgroup, not much can be done until the disk becomes available again.

Speaking to the storage team cleared the air. One, the disk had gone offline at the storage level, which is why even dd was not able to read it. Two, all these disks were thin provisioned (over-provisioning of the storage space to improve utilization; similar to over-provisioning of CPU cores in the virtualization world) from the storage. This particular disk 27 was meant for some other purpose but got wrongly allocated to this diskgroup. The actual space available in the pool (of this disk) was less than what was needed. The moment the disks were added to the diskgroup, the rebalance kicked in and ASM started writing data to the disk. Within a few minutes the space filled up and the storage software took the disk offline. Since ASM couldn’t write to the disk, the diskgroup was dismounted.

Fortunately, in the same pool, there was another disk that was still unused. So the storage guy dropped that disk, which freed up some space in the pool, and then brought disk 27 back online. The diskgroup mounted and the rebalance kicked in again. Finally, we dropped this disk and the rebalance started once more. Once the rebalance completed, the disk was free to be taken offline.
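The drop-and-monitor steps can be sketched with standard ASM SQL (diskgroup and disk names taken from the alert log above; the rebalance power is arbitrary):

```sql
-- drop the wrongly provisioned disk; ASM moves its data to the remaining disks
alter diskgroup DATADG drop disk DISK_01 rebalance power 4;

-- watch the rebalance; the disk is safe to take offline at storage level
-- once this query returns no rows
select group_number, operation, state, est_minutes from v$asm_operation;
```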


Categories: BI & Warehousing

New OA Framework 12.2.5 Update 17 Now Available

Steven Chan - Fri, 2017-11-03 11:27

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes 51 fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • A horizontal scroll bar appears on the page when the Title/Description of an attachment is too long.
  • There is a script error when clicking the GO button in the WebADI LOV window while selecting a WebADI template.

Related Articles

Categories: APPS Blogs

Oracle ADF on Docker Container

Andrejus Baranovski - Fri, 2017-11-03 10:56
Want to run Oracle ADF on Docker? This is possible, and I will explain how. If you are new to Docker, it may take a significant amount of time to get started with all the different bits and pieces. I will try to explain all the essential steps, so that you can get up to speed quickly.

First of all you need to have DB accessible, check my previous post explaining how to run Oracle DB on Docker - Oracle Database Docker Image in Docker Cloud (Digital Ocean). DB is required to install RCU schema for WebLogic installation with JRF files.

I have built my own Oracle Fusion Middleware Docker image using Oracle Docker images - Oracle Fusion Middleware Infrastructure on Docker.

The first step is to build the Oracle JDK (Server JRE) image; this is a prerequisite for building the Oracle Fusion Middleware Docker image. Read through the instructions documented on the Oracle Fusion Middleware Infrastructure on Docker GitHub page. Navigate to the Oracle Java folder (download the Oracle Docker files from the GitHub link mentioned above) and copy the JDK installation file there:

Run command to create JDK Docker image:


Command output:

Double check to verify if image was created successfully by running docker images command:

Let's move on to Oracle FMW image creation. Navigate to Oracle FMW folder and copy FMW infrastructure installation file (I'm installing

Move one folder up and run this command to build the Oracle FMW image (the -s flag skips checksum verification for the installation file):

./buildDockerImage.sh -s -v

You should run the command from this folder:

You will see long output in the log for this command:

It installs WLS into Docker image:

Run docker images command to verify if image was created successfully:

In the next step, we will create the FMW domain and extend it with ADF support. But before that we need to make sure the DB details are set correctly, to be able to install the RCU schema. Oracle provides an infraDomain file with DB and WLS properties; make sure to set the correct DB details. If the properties are not correct, RCU creation will fail:

Execute the docker run command to start up the WLS Docker container. During the first startup it will create and extend the WLS domain with ADF support:

docker run -d -p 7001:7001 --name RedSamuraiWLS --env-file ./infraDomain.env.list oracle/fmw-infrastructure:

The -d flag means the container will run in detached mode, so we will be able to return to the command prompt. The port mapping and container name are specified along with the environment properties file. Make sure to reference the FMW image created in the step above. Once control returns to the prompt, run a docker command to check the status of the container (the -a flag shows all containers):

docker ps -a

The container should be in the running state. The first startup takes longer, because it needs to set up and extend the WLS domain:

Once domain is extended, you will see WebLogic starting:

Finally WebLogic should be in Running state:

Run again docker ps -a command to verify container state, it should be up and running:

Once the WLS machine is up, you can navigate to Enterprise Manager through its URL from outside the Docker container, for example from your host. Log in to EM and you will see that the Admin server is up, but the Managed Server is down. There is a way to start up the Managed Server too, but if you want to run ADF apps in a DEV environment, realistically speaking the Admin server is more than enough for deployment as well:

Simply delete the Managed Server and cluster (this can be done from EM) and keep only the Admin Server:

I have deployed sample ADF application:

This application is based on ADF BC, data source is defined too:

ADF application runs from WebLogic on Docker:

Now let's see how to push the newly created container to the Docker registry.

First we need to create a new Docker image from the Docker container. This can be done with the docker commit command (pointing to the container ID and specifying the Docker repository name and tag):

docker commit da03e52b42a2 abaranovskis/redsamurai-wls:v1

Run the docker images command to verify the new image was created successfully. Next, run docker login to authenticate with the Docker repository. Then run docker push to write the image to the Docker repository:

docker push abaranovskis/redsamurai-wls:v1

Commands execution sequence:
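In text form, the whole push sequence looks like this (repository name and container ID as used in this post):

```
docker commit da03e52b42a2 abaranovskis/redsamurai-wls:v1   # container -> new image
docker images                                               # verify the new image exists
docker login                                                # authenticate with the registry
docker push abaranovskis/redsamurai-wls:v1                  # upload the image
```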

Pushed image should appear in docker repository:

Once the image is in the Docker online repository, we can start up an online Docker container, so that WLS will be accessible online. This can be done through the command line or using the Docker Cloud UI. You can create a new container by referencing the image from the Docker repository:

Our WLS docker container with ADF support runs on Digital Ocean:

Logs are accessible from Docker Cloud UI and you can see server status:

Alter table add column on a FDA enabled table - how to avoid the row chain effect without using the 'move' option?

Tom Kyte - Fri, 2017-11-03 08:26
Hi, We'd like to use FDA on our Oracle db for its bi-temporality feature. So far when we add a column to a table, we also perform the 'alter table T move;' + rebuild indexes, to avoid performance issues and to re-organize the row IDS. But the ...
Categories: DBA Blogs

Why differ inmemory_size in v$im_segments from used_bytes in v$inmemory_area?

Tom Kyte - Fri, 2017-11-03 08:26
Hi I'm testing In-Memory and my question is why the figures in v$im_segments differ from the used_bytes in v$inmemory_area? I have read a lot of great posts (for example https://blogs.oracle.com/in-memory/what-is-an-in-memory-compression-unit-i...
Categories: DBA Blogs

Advise for Analytics-related Workflow Automation

Tom Kyte - Fri, 2017-11-03 08:26
Hello, I work in the Analytics department where I support a team of many Data Scientists. We use Oracle Database Enterprise v11.2.0.4 as our back-end database and I have developed several automation using PL/SQL procedures, functions, etc. I...
Categories: DBA Blogs

How Result cache is managed in 12c Pluggable Database (PDB)

Tom Kyte - Fri, 2017-11-03 08:26
Hi Team, I ma having one scenario, where I am setting up my application in 3 pluggable db instances under single CDB. As per my app requirement, I have to create synonym for dbms_result_cache in all 3 PDBs. As the public synonym for dbms_result_ca...
Categories: DBA Blogs

fetch output (success/failure) status from web service

Tom Kyte - Fri, 2017-11-03 08:26
Hi, Could you please share any example to fetch web service output (i.e. success/failure) status into oracle PL/SQL procedure? The scenario is as below, we have created a stored procedure which will pass 2 input parameters from those input p...
Categories: DBA Blogs

Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication?

Yann Neuhaus - Fri, 2017-11-03 04:03

When you start using PostgreSQL 10 logical replication you might think it is a good idea to set up bi-directional replication, so you end up with two or more masters that are all writable. I will not go into the details of multi-master replication here (conflict resolution, …) but will show what happens when you try to do that. Let's go …

My two instances run on the same host, one on port 6000 the other one on 6001. To start I’ll create the same table in both instances:

postgres=# create table t1 ( a int primary key, b varchar(50) );
postgres=# alter table t1 replica identity using INDEX t1_pkey;
postgres=# \d+ t1
                                            Table "public.t1"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           | not null |         | plain    |              | 
 b      | character varying(50) |           |          |         | extended |              | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a) REPLICA IDENTITY

Create the same publication on both sides:

postgres=# create publication my_pub for table t1;
postgres=# select * from pg_publication;
 pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
 my_pub  |       10 | f            | t         | t         | t
(1 row)
postgres=# select * from pg_publication_tables;
 pubname | schemaname | tablename 
 my_pub  | public     | t1
(1 row)

Create the same subscription on both sides (except for the port, of course):

postgres=# show port;
(1 row)
postgres=# create subscription my_sub connection 'host=localhost port=6001 dbname=postgres user=postgres' publication my_pub;
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
   13212 | my_sub  |       10 | t          | host=localhost port=6001 dbname=postgres user=postgres | my_sub      | 
(1 row)

### second instance

postgres=# show port;
(1 row)

postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
   13212 | my_sub  |       10 | t          | host=localhost port=6000 dbname=postgres user=postgres | my_sub      | 
(1 row)

So far, so good, everything has worked until now. Now let's insert a row into the first instance:

postgres=# insert into t1 (a,b) values (1,'a');
postgres=# select * from t1;
 a | b 
 1 | a
(1 row)

That seemed to work as well, as the row is there on the second instance too:

postgres=# show port;
(1 row)

postgres=# select * from t1;
 a | b 
 1 | a
(1 row)

But: When you take a look at the log file of the first instance you’ll see something like this (which is repeated over and over again):

2017-11-03 09:56:29.176 CET - 2 - 10687 -  - @ ERROR:  duplicate key value violates unique constraint "t1_pkey"
2017-11-03 09:56:29.176 CET - 3 - 10687 -  - @ DETAIL:  Key (a)=(1) already exists.
2017-11-03 09:56:29.178 CET - 29 - 10027 -  - @ LOG:  worker process: logical replication worker for subscription 16437 (PID 10687) exited with exit code 1
2017-11-03 09:56:34.198 CET - 1 - 10693 -  - @ LOG:  logical replication apply worker for subscription "my_sub" has started

Now the second instance is constantly trying to insert the same row back into the first instance, and that obviously cannot work as the row is already there. So the answer to the original question: do not try to do that, it will not work anyway.
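To break the loop you have to remove the subscription on one side; a sketch using standard PostgreSQL 10 commands (run on whichever instance should stop applying):

```sql
-- stop the apply worker, then drop the subscription
-- (this also drops the replication slot on the remote side)
alter subscription my_sub disable;
drop subscription my_sub;
```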


The article Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication? appeared first on Blog dbi services.

Reminder: Upgrade JDK 6 on EBS Servers Before December 2018

Steven Chan - Thu, 2017-11-02 17:26

E-Business Suite 12.1 and 12.2 both included Java SE 6 as part of their server-based technology stacks.  Both EBS 12.1 and 12.2 are certified with Java SE 7:

Upgrade EBS servers to JDK 7 before December 2018

Extended Support for Java SE 6 ends on December 31, 2018. E-Business Suite customers must upgrade their servers to Java SE 7 before that date.

Upgrade EBS end-user desktops to Java 7 or 8

Extended Support for Java SE 6 Deployment technology ended on June 30, 2017.  EBS end-user desktops running JRE 6 should be upgraded to any of the following certified options:

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Can EBS servers be upgraded to JDK 8?

No. The server-side technology stack Fusion Middleware components (e.g. Forms 10g) included in these two EBS releases are not compatible with Java SE 8.  There are currently no plans to update those FMW components to be JDK 8 compatible.  

JRE 8 can be used on desktop clients accessing EBS 12.1 and 12.2.

It is expected that a future release of EBS 12.x will incorporate new FMW technology stack components that will be compatible with JDK 8 or higher.  We’re working on that now.

When will that new EBS 12.x be released?

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

Related Articles

Categories: APPS Blogs

Exporting and Importing Data from Visual Builder Cloud Service - with REST Calls

Shay Shmeltzer - Thu, 2017-11-02 16:37

Visual Builder Cloud Service (VBCS) makes it very easy to create custom objects to store your data. A frequent request we get is for a way to load and export data from these business objects. As John blogged, we added a feature to support doing this through the command line - John's blog shows you the basic options for the command line.

I recently needed to do this for a customer, and thought I'd share some tips that helped me get the functionality working properly - in case others need help skipping bumps in the road.

Here is a demo showing both import and export and how to get them to work.

Exporting Data

Export is quite simple - you use a GET operation on a REST service, the command line for calling this using curl will look like this:

curl -u user:password https://yourserver/design/ExpImp/1.0/resources/datamgr/export > exp.zip

The result is streamed as a zip file, so I just added > exp.zip to the end of the command. The zip file will contain a CSV file for each object in your application.

Don't forget to substitute your own values for the username and password, your VBCS server name, and the name of the app you are using (ExpImp in my case).

Importing Data

Having the exported CSV file makes it easy to build a CSV file for upload - in the demo I just replaced and added values in that file. Next you'll use a similar curl command to call a POST method. It will look like this:

curl -X POST -u user:password https://yourserver/design/ExpImp/1.0/resources/datamgr/import/Employee?filename=Employee.csv -H "Origin:https://yourserver" -H "Content-Type:text/csv" -T Employee.csv -v

A few things to note.

You need to specify which object you want to import into (Employee after the /import/ in the command above), and you also need to provide a filename parameter that tells VBCS which file to import.

In the current release you need to work around a CORS security limitation - this is why we add a header (with the -H option) indicating that we are sending this from the same server as the one we are running on. In an upcoming version this won't be needed.

We use the -T option to attach the csv file to our call.

Note that you should enable the "Enable basic authentication for business object REST APIs" security option for the application (Under Application Settings->Security). 

Using Import in Production Apps

In the samples above we imported and exported into an application that is still being developed - this is why we used the /design/ in our REST path.

If you want to execute things on an application that you published then replace the /design/ with /deployment/ 

One special note about live applications, before you import data into them you'll need to lock them. You can do this from the home page of VBCS and the drop down menu on the application.
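Putting both halves together, a full round trip looks like this (same placeholder server, app, and object names as above):

```
# export all business objects, then pull out the one CSV we care about
curl -u user:password https://yourserver/design/ExpImp/1.0/resources/datamgr/export > exp.zip
unzip exp.zip Employee.csv

# edit Employee.csv, then import it back into the Employee object
curl -X POST -u user:password \
  "https://yourserver/design/ExpImp/1.0/resources/datamgr/import/Employee?filename=Employee.csv" \
  -H "Origin:https://yourserver" -H "Content-Type:text/csv" -T Employee.csv
```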


Categories: Development

Merge - unfold records based on conditional join

Tom Kyte - Thu, 2017-11-02 14:06
Hi Team, Need your help or suggestion on altering a merge statement. I have a below staging table A_TRANSACTION_STAGING which gets merged to main table A_TRANSACTION : A_TRANSACTION_STAGING : <code> TRANSACTION_ID NUMBER REGION_CD ...
Categories: DBA Blogs

SQL Query related to String

Tom Kyte - Thu, 2017-11-02 14:06
Hi Tom, There is a string 'ascjhsdndfdaja'. I want to print only the 'a' characters from this string; there are 3 occurrences of 'a', so I want to print 'aaa'. Can you please help me with this? Your help will be much appreciated. Thanks
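One standard way to do this (a sketch, not necessarily the answer given in the thread) is regexp_replace, deleting every character that is not an 'a':

```sql
select regexp_replace('ascjhsdndfdaja', '[^a]') as only_a from dual;
-- returns 'aaa'
```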
Categories: DBA Blogs

trim in sql*plus

Tom Kyte - Thu, 2017-11-02 14:06
Hi Tom, I have a varchar2(30) field which, when displayed in sqlplus, doesn't seem to be trimming the trailing spaces when I use rtrim or trim in the select stmt: set head off set colsep "," set trim on set wrap off set linesize 800 select part_id...
Categories: DBA Blogs

Two Talks Accepted for RMOUG Training Days

Bobby Durrett's DBA Blog - Thu, 2017-11-02 14:01

I got two talks accepted for RMOUG Training Days in February. I mentioned these two titles in an earlier post:

  • Python for the Oracle DBA
  • Toastmasters for the Oracle DBA

These two talks are about topics that interest me so I am glad that RMOUG thinks that they are valuable to the conference attendees.

I plan to do the two talks for my DBA coworkers and shorter versions at Toastmasters so I should get some constructive feedback and practice before the conference.

Should be fun. Hope to see you in Denver next February.

My Python posts: url

My Toastmasters posts: url


Categories: DBA Blogs

Quick history on database growth

Yann Neuhaus - Thu, 2017-11-02 12:13

AWR collects segment statistics, and this can be used to quickly understand an abnormal database growth. Here is a script I use to get, from the AWR history, the segments that have grown by more than 1% of the database size, in one hour.

First I must mention that this uses only the part of AWR which is not subject to any additional licensing option. It even works in Standard Edition:

SQL> show parameter control_management_pack_access

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access       string      NONE

So here is the query, easy to modify with different thresholds:

set echo on pagesize 1000
set sqlformat ansiconsole
select * from (
select round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024) "GB"
,trunc(max(end_interval_time),'hh24') snap_time
,round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024*24*(cast(max(end_interval_time) as date)-cast(min(begin_interval_time) as date))) "GB/hour"
,owner,object_name,subobject_name,object_type
from DBA_HIST_SEG_STAT join DBA_HIST_SEG_STAT_OBJ using (dbid,ts#,obj#,dataobj#) join dba_hist_snapshot using(dbid,snap_id)
group by trunc(end_interval_time,'hh24'),owner,object_name,subobject_name,object_type
) where "GB/hour" > (select sum(bytes)/1024/1024/1024/1e2 "one percent of database size" from dba_data_files)
order by snap_time

and the sample output, showing only the snapshots and segments where more than 1% of the database size has been allocated within one hour:

  GB SNAP_TIME            GB/hour OWNER   OBJECT_NAME               SUBOBJECT_NAME OBJECT_TYPE
---- -------------------- ------- ------- ------------------------- -------------- -----------
4 25-OCT-2017 19:00:00 4 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 20:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 21:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
3 25-OCT-2017 22:00:00 3 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 00:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
6 26-OCT-2017 01:00:00 6 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 02:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 03:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 04:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 05:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047719C00008$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047710C00006$$ LOB

With this, it is easier to ask the application owners whether this growth is normal or not.


The article Quick history on database growth appeared first on Blog dbi services.


Subscribe to Oracle FAQ aggregator