Feed aggregator

Oracle ADF on Docker Container

Andrejus Baranovski - Fri, 2017-11-03 10:56
Want to run Oracle ADF on Docker? This is possible, and I will explain how. If you are new to Docker, it may take a significant amount of time to get started with all the different bits and pieces. I will try to explain all the essential steps, so that you can get up to speed quickly.

First of all, you need an accessible database; check my previous post explaining how to run Oracle DB on Docker - Oracle Database Docker Image in Docker Cloud (Digital Ocean). The DB is required to install the RCU schema for a WebLogic installation with JRF files.

I have built my own Oracle Fusion Middleware Docker image using Oracle Docker images - Oracle Fusion Middleware Infrastructure on Docker.

The first step is to build the Oracle JDK (Server JRE) image; this is a prerequisite for building the Oracle Fusion Middleware Docker image. Read through the instructions documented on the Oracle Fusion Middleware Infrastructure on Docker GitHub page. Navigate to the Oracle Java folder (download the Oracle Docker files from the GitHub link mentioned above) and copy the JDK installation file there:

Run the command to create the JDK Docker image:
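For reference, a sketch of the build command, assuming the Server JRE 8 files from Oracle's docker-images repository (the folder name and image tag here are assumptions; adjust them for the JDK version you downloaded):

```shell
# Assumption: Server JRE 8 build files from Oracle's docker-images repository;
# the folder name and image tag depend on the JDK version you downloaded.
cd OracleJava/java-8
docker build -t oracle/serverjre:8 .
```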


Command output:

Double-check that the image was created successfully by running the docker images command:
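Filtering the listing makes the check easier (the repository name here assumes the image was tagged oracle/serverjre):

```shell
# List local images, filtered to the JDK image
docker images oracle/serverjre
```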

Let's move on to creating the Oracle FMW image. Navigate to the Oracle FMW folder and copy the FMW infrastructure installation file there (I'm installing

Move one folder up and run the following command to build the Oracle FMW image:

./buildDockerImage.sh -s -v

I use the -s flag to skip checksum verification of the installation file. You should run the command from this folder:
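For orientation, a sketch of where this runs from; the path and version flag below are assumptions based on the layout of Oracle's docker-images repository, so adjust them to your checkout and FMW release:

```shell
# Assumed path and version - adjust to your checkout and FMW installer
cd docker-images/OracleFMWInfrastructure/dockerfiles
./buildDockerImage.sh -s -v 12.2.1.3
```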

You will see long output in the log for this command:

It installs WLS into Docker image:

Run the docker images command to verify that the image was created successfully:

In the next step, we will create the FMW domain and extend it with ADF support. Before that, we need to make sure the DB details are set correctly, so that the RCU schema can be installed. Oracle provides an infraDomain file with DB and WLS properties; make sure to set the correct DB details. If the properties are not correct, RCU creation will fail:
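As an illustration, an infraDomain.env.list might look like the sketch below. The variable names and values are assumptions; check the file shipped with the Oracle Docker files for the exact keys your image version expects:

```shell
# infraDomain.env.list (illustrative keys and values only)
DRIVER_TYPE=thin
CONNECTION_STRING=db.example.com:1521/ORCLPDB1
RCUPREFIX=DEV
DB_PASSWORD=MyDbPassword1
DB_SCHEMA_PASSWORD=MyRcuPassword1
ADMIN_PASSWORD=MyWlsPassword1
```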

Execute the docker run command to start the WLS Docker container. During the first startup it will create and extend the WLS domain with ADF support:

docker run -d -p 7001:7001 --name RedSamuraiWLS --env-file ./infraDomain.env.list oracle/fmw-infrastructure:

The -d flag means the container runs in detached mode, so we get the command prompt back. The port and container name are specified along with the environment properties file. Make sure to reference the FMW image created in the step above. Once control returns to the prompt, run the docker command to check the status of the Docker container (the -a flag shows all containers):

docker ps -a

The container should be in the running state. The first startup takes longer, because it has to set up and extend the WLS domain:
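You can watch the domain creation progress without attaching to the container:

```shell
# Stream the container logs; Ctrl-C stops following without stopping WLS
docker logs -f RedSamuraiWLS
```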

Once domain is extended, you will see WebLogic starting:

Finally, WebLogic should be in the Running state:

Run the docker ps -a command again to verify the container state; it should be up and running:

Once the WLS machine is up, you can navigate to Enterprise Manager through its URL from outside the Docker container, for example from your host. Log in to EM and you will see that the Admin Server is up, but the Managed Server is down. There is a way to start the Managed Server too, but if you want to run ADF apps in a DEV environment, realistically the Admin Server is more than enough for deployment:

Simply delete the Managed Server and cluster (this can be done from EM), keeping only the Admin Server:

I have deployed sample ADF application:

This application is based on ADF BC; a data source is defined too:

ADF application runs from WebLogic on Docker:

Now let's see how to push the newly created container to the Docker registry.

First we need to create a new Docker image from the Docker container. This can be done with the docker commit command (pointing to the container ID and specifying the Docker repository name and tag):

docker commit da03e52b42a2 abaranovskis/redsamurai-wls:v1

Run the docker images command to verify the new image was created successfully. Next, run docker login to authenticate with the Docker repository, then docker push to write the image to the repository:

docker push abaranovskis/redsamurai-wls:v1

The command execution sequence:
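Collected from the steps above (the container ID and repository name are the ones from this example):

```shell
# Snapshot the running container as an image, then publish it
docker commit da03e52b42a2 abaranovskis/redsamurai-wls:v1
docker images                                 # verify the new image exists
docker login                                  # authenticate with the registry
docker push abaranovskis/redsamurai-wls:v1
```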

The pushed image should appear in the Docker repository:

Once the image is in the online Docker repository, we can start an online Docker container, so that WLS will be accessible online. This can be done through the command line or the Docker Cloud UI. You can create a new container by referencing the image from the Docker repository:
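As a command-line alternative to the Docker Cloud UI, the pushed image can be run directly on the target host:

```shell
# Pull and start the published image (same port mapping as before)
docker run -d -p 7001:7001 --name RedSamuraiWLS abaranovskis/redsamurai-wls:v1
```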

Our WLS Docker container with ADF support runs on Digital Ocean:

Logs are accessible from Docker Cloud UI and you can see server status:

Alter table add column on a FDA enabled table - how to avoid the row chain effect without using the 'move' option?

Tom Kyte - Fri, 2017-11-03 08:26
Hi, We'd like to use FDA on our Oracle db for its bi-temporality feature. So far when we add a column to a table, we also perform the 'alter table T move;' + rebuild indexes, to avoid performance issues and to re-organize the row IDS. But the ...
Categories: DBA Blogs

Why differ inmemory_size in v$im_segments from used_bytes in v$inmemory_area?

Tom Kyte - Fri, 2017-11-03 08:26
Hi I'm testing In-Memory and my question is why the figures in v$im_segments differ from the used_bytes in v$inmemory_area? I have read a lot of great posts (for example https://blogs.oracle.com/in-memory/what-is-an-in-memory-compression-unit-i...
Categories: DBA Blogs

Advise for Analytics-related Workflow Automation

Tom Kyte - Fri, 2017-11-03 08:26
Hello, I work in the Analytics department where I support a team of many Data Scientists. We use Oracle Database Enterprise v11.2.0.4 as our back-end database and I have developed several automation using PL/SQL procedures, functions, etc. I...
Categories: DBA Blogs

How Result cache is managed in 12c Pluggable Database (PDB)

Tom Kyte - Fri, 2017-11-03 08:26
Hi Team, I ma having one scenario, where I am setting up my application in 3 pluggable db instances under single CDB. As per my app requirement, I have to create synonym for dbms_result_cache in all 3 PDBs. As the public synonym for dbms_result_ca...
Categories: DBA Blogs

fetch output (success/failure) status from web service

Tom Kyte - Fri, 2017-11-03 08:26
Hi, Could you please share any example to fetch web service output (i.e. success/failure) status into oracle PL/SQL procedure? The scenario is as below, we have created a stored procedure which will pass 2 input parameters from those input p...
Categories: DBA Blogs

Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication?

Yann Neuhaus - Fri, 2017-11-03 04:03

When you start using PostgreSQL 10 logical replication you might think it is a good idea to set up bi-directional replication, so you end up with two or more masters that are all writable. I will not go into the details of multi-master replication here (conflict resolution, …) but will show what happens when you try to do that. Let's go …

My two instances run on the same host, one on port 6000 the other one on 6001. To start I’ll create the same table in both instances:

postgres=# create table t1 ( a int primary key, b varchar(50) );
postgres=# alter table t1 replica identity using INDEX t1_pkey;
postgres=# \d+ t1
                                            Table "public.t1"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           | not null |         | plain    |              | 
 b      | character varying(50) |           |          |         | extended |              | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a) REPLICA IDENTITY

Create the same publication on both sides:

postgres=# create publication my_pub for table t1;
postgres=# select * from pg_publication;
 pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete 
---------+----------+--------------+-----------+-----------+-----------
 my_pub  |       10 | f            | t         | t         | t
(1 row)
postgres=# select * from pg_publication_tables;
 pubname | schemaname | tablename 
---------+------------+-----------
 my_pub  | public     | t1

Create the same subscription on both sides (except for the port, of course):

postgres=# show port;
 port 
------
 6000
(1 row)

postgres=# create subscription my_sub connection 'host=localhost port=6001 dbname=postgres user=postgres' publication my_pub;
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6001 dbname=postgres user=postgres | my_sub      | 

### second instance

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
postgres=# select * from pg_subscription;
 subdbid | subname | subowner | subenabled |                      subconninfo                       | subslotname | 
---------+---------+----------+------------+--------------------------------------------------------+-------------+-
   13212 | my_sub  |       10 | t          | host=localhost port=6000 dbname=postgres user=postgres | my_sub      | 
(1 row)

So far, so good; everything has worked until now. Now let's insert a row into the first instance:

postgres=# insert into t1 (a,b) values (1,'a');
postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

That seemed to have worked as well, as the row is there on the second instance too:

postgres=# show port;
 port 
------
 6001
(1 row)

postgres=# select * from t1;
 a | b 
---+---
 1 | a
(1 row)

But: when you take a look at the log file of the first instance, you'll see something like this (repeated over and over again):

2017-11-03 09:56:29.176 CET - 2 - 10687 -  - @ ERROR:  duplicate key value violates unique constraint "t1_pkey"
2017-11-03 09:56:29.176 CET - 3 - 10687 -  - @ DETAIL:  Key (a)=(1) already exists.
2017-11-03 09:56:29.178 CET - 29 - 10027 -  - @ LOG:  worker process: logical replication worker for subscription 16437 (PID 10687) exited with exit code 1
2017-11-03 09:56:34.198 CET - 1 - 10693 -  - @ LOG:  logical replication apply worker for subscription "my_sub" has started

Now the second instance is constantly trying to insert the same row back into the first instance, and that obviously cannot work as the row is already there. So the answer to the original question: do not try to do that, it will not work anyway.


The post Can I have the same table published and subscribed (bi-directional) in PostgreSQL 10 logical replication? appeared first on the dbi services blog.

Reminder: Upgrade JDK 6 on EBS Servers Before December 2018

Steven Chan - Thu, 2017-11-02 17:26

E-Business Suite 12.1 and 12.2 both included Java SE 6 as part of their server-based technology stacks.  Both EBS 12.1 and 12.2 are certified with Java SE 7:

Upgrade EBS servers to JDK 7 before December 2018

Extended Support for Java SE 6 ends on December 31, 2018. E-Business Suite customers must upgrade their servers to Java SE 7 before that date.

Upgrade EBS end-user desktops to Java 7 or 8

Extended Support for Java SE 6 Deployment technology ended on June 30, 2017.  EBS end-user desktops running JRE 6 should be upgraded to any of the following certified options:

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Can EBS servers be upgraded to JDK 8?

No. The server-side technology stack Fusion Middleware components (e.g. Forms 10g) included in these two EBS releases are not compatible with Java SE 8.  There are currently no plans to update those FMW components to be JDK 8 compatible.  

JRE 8 can be used on desktop clients accessing EBS 12.1 and 12.2.

It is expected that a future release of EBS 12.x will incorporate new FMW technology stack components that will be compatible with JDK 8 or higher.  We’re working on that now.

When will that new EBS 12.x be released?

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

Related Articles

Categories: APPS Blogs

Exporting and Importing Data from Visual Builder Cloud Service - with REST Calls

Shay Shmeltzer - Thu, 2017-11-02 16:37

Visual Builder Cloud Service (VBCS) makes it very easy to create custom objects to store your data. A frequent request we get is for a way to load and export data from these business objects. As John blogged, we added a feature to support doing this through the command line - John's blog shows you the basic options for the command line.

I recently needed to do this for a customer, and thought I'd share some tips that helped me get the functionality working properly - in case others need some help skipping bumps in the road.

Here is a demo showing both import and export and how to get them to work.

Exporting Data

Export is quite simple - you use a GET operation on a REST service; the command line for calling this using curl looks like this:

curl -u user:password https://yourserver/design/ExpImp/1.0/resources/datamgr/export > exp.zip

The result is streamed as a zip file, so I just redirected it with > exp.zip at the end of the command. The zip file contains a CSV file for each object in your application.

Don't forget to replace the placeholders with your values for username and password, your VBCS server name, and the name of the app you are using (ExpImp in my case).

Importing Data

Having the exported CSV file makes it easy to build a CSV file for upload - in the demo I just replaced and added values in that file. Next you'll use a similar curl command to call a POST method. It will look like this:

curl -X POST -u user:password https://yourserver/design/ExpImp/1.0/resources/datamgr/import/Employee?filename=Employee.csv -H "Origin:https://yourserver" -H "Content-Type:text/csv" -T Employee.csv -v

A few things to note.

You need to specify which object you want to import into (Employee after the /import/ in the command above), and you also need to provide a filename parameter that tells VBCS which file to import.

In the current release you need to work around a CORS security limitation - this is why we add a header (with the -H option) indicating that we are sending this from the same server we are running on. In an upcoming version this won't be needed.

We use the -T option to attach the csv file to our call.

Note that you should enable the "Enable basic authentication for business object REST APIs" security option for the application (Under Application Settings->Security). 

Using Import in Production Apps

In the samples above we imported and exported data for an application that is still being developed - this is why we used /design/ in our REST path.

If you want to execute these operations on a published application, replace /design/ with /deployment/.
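For example, the export call against a published application differs only in that path segment (user, password and server name remain placeholders):

```shell
# Same export as before, but against the deployed (published) application
curl -u user:password https://yourserver/deployment/ExpImp/1.0/resources/datamgr/export > exp.zip
```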

One special note about live applications: before you import data into them, you'll need to lock them. You can do this from the VBCS home page, via the drop-down menu on the application.


Categories: Development

Merge - unfold records based on conditional join

Tom Kyte - Thu, 2017-11-02 14:06
Hi Team, Need your help or suggestion on altering a merge statement. I have a below staging table A_TRANSACTION_STAGING which gets merged to main table A_TRANSACTION : A_TRANSACTION_STAGING : <code> TRANSACTION_ID NUMBER REGION_CD ...
Categories: DBA Blogs

SQL Query related to String

Tom Kyte - Thu, 2017-11-02 14:06
Hi Tom, There is a string 'ascjhsdndfdaja' i want to print only 'a' alphabet from this string there are 3-occurrence of 'a' so i want to print 'aaa' can you please help me this. Your help will be much appriciated. Thanks
Categories: DBA Blogs

trim in sql*plus

Tom Kyte - Thu, 2017-11-02 14:06
Hi Tom, I have a varchar2(30) field which when displayed on sqlplus, doesn't seem to be trimming the trailing spaces when I use rtim or trim in select stmt: set head off set colsep "," set trim on set wrap off set linesize 800 select part_id...
Categories: DBA Blogs

Two Talks Accepted for RMOUG Training Days

Bobby Durrett's DBA Blog - Thu, 2017-11-02 14:01

I got two talks accepted for RMOUG Training Days in February. I mentioned these two titles in an earlier post:

  • Python for the Oracle DBA
  • Toastmasters for the Oracle DBA

These two talks are about topics that interest me, so I am glad that RMOUG thinks they are valuable to the conference attendees.

I plan to do the two talks for my DBA coworkers and shorter versions at Toastmasters so I should get some constructive feedback and practice before the conference.

Should be fun. Hope to see you in Denver next February.

My Python posts: url

My Toastmasters posts: url


Categories: DBA Blogs

Quick history on database growth

Yann Neuhaus - Thu, 2017-11-02 12:13

AWR collects segment statistics, and this can be used to quickly understand abnormal database growth. Here is a script I use to get, from the AWR history, the segments that have grown by more than 1% of the database size in one hour.

First, I must mention that this uses only the part of AWR that is not subject to any additional licensing option. It even works in Standard Edition:

SQL> show parameter control_management_pack_access

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access       string      NONE

So here is the query, easy to modify with a different threshold:
set echo on pagesize 1000
set sqlformat ansiconsole
select * from (
 select owner,object_name,subobject_name,object_type
 ,trunc(max(end_interval_time),'hh24') snap_time
,round(sum(SPACE_ALLOCATED_DELTA)/1024/1024/1024*24*(cast(max(end_interval_time) as date)-cast(min(begin_interval_time) as date))) "GB/hour"
from DBA_HIST_SEG_STAT join DBA_HIST_SEG_STAT_OBJ using (dbid,ts#,obj#,dataobj#) join dba_hist_snapshot using(dbid,snap_id)
group by trunc(end_interval_time,'hh24'),owner,object_name,subobject_name,object_type
) where "GB/hour" > (select sum(bytes)/1024/1024/1024/1e2 "one percent of database size" from dba_data_files)
order by snap_time

and the sample output, showing only the snapshots and segments where more than 1% of the database size has been allocated within one hour:

--------------- --------- ------- ----- ----------- -------------- -----------
4 25-OCT-2017 19:00:00 4 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 20:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
9 25-OCT-2017 21:00:00 9 BIGDATA SYS_LOB0000047762C00006$$ LOB
3 25-OCT-2017 22:00:00 3 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 00:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
6 26-OCT-2017 01:00:00 6 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 02:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 03:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
7 26-OCT-2017 04:00:00 7 BIGDATA SYS_LOB0000047762C00006$$ LOB
5 26-OCT-2017 05:00:00 5 BIGDATA SYS_LOB0000047762C00006$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047719C00008$$ LOB
2 26-OCT-2017 06:00:00 2 BIGDATA SYS_LOB0000047710C00006$$ LOB

With this, it is easier to ask the application owners whether this growth is normal or not.


The post Quick history on database growth appeared first on the dbi services blog.

National Restaurant Association Educational Foundation and Oracle Hospitality Empower Students to Get Creative about Future of Hospitality

Oracle Press Releases - Thu, 2017-11-02 08:00
Press Release
National Restaurant Association Educational Foundation and Oracle Hospitality Empower Students to Get Creative about Future of Hospitality New report highlights innovative culinary and restaurant management concepts developed by students participating in the national ProStart Invitational

Redwood Shores, Calif.—Nov 2, 2017

The National Restaurant Association Educational Foundation (NRAEF) and Oracle Hospitality present “A Look Inside the National ProStart Invitational.” The report showcases the innovative restaurant and culinary concepts developed by high school students who are enrolled in ProStart®, a two-year career and technical education program in secondary schools nationwide that teaches culinary arts and restaurant management skills and fundamentals.

“Oracle Hospitality is extremely proud to be supporting the 2017 National ProStart Invitational—it gives the students a life-changing opportunity to find a career in this exciting industry of ours, while simultaneously helping our customers, restaurant operators, in finding new talent for them to recruit,” said Brett R. Smith, Senior Director, Food and Beverage Solutions Management, Oracle Hospitality. “This report offers us a glimpse into the creative innovative minds of the next generation foodservice industry.”

Nearly 400 students who participated in this year’s National ProStart Invitational presented concepts to a panel of industry judges. Based on qualitative data collected from the students’ submitted materials, the NRAEF tracked methods, ingredients used, restaurant concepts, and the students’ inclusion of technology and innovation to determine the core themes of the competition and ultimately the report findings.

This year’s report unveils themes of sustainability, philanthropy and efficiency. It also showcases innovation, not only with food ingredients and various culinary concepts, but students’ desire to use technology to enhance the food service experience overall.

“Once again, we’re proud to partner with Oracle to highlight insights from students at our National ProStart Invitational,” said Rob Gifford, Executive Vice President, NRAEF. “The report showcases how much students learn from ProStart, and the students’ dedication to coming up with new and creative concepts and possible trends in the restaurant and foodservice industry.”

Report highlights include:

Students incorporated innovative technology into their restaurant concepts:

  • Students used restaurant floor planning software to design a layout of their concepts, designed websites as a marketing technique, created social media campaigns, ideated smartphone applications, and allowed ordering through tablets
  • Teams employed different point-of-sale systems, allowing customers to place orders through tablets, kiosks, or mobile applications
  • The use of technology was not simply limited to the customer and employee experiences—teams leveraged solar panels, biogas generator, and geothermal energy to sustain restaurant operations

Students showcased their culinary skills through new techniques:

  • Seafood was the overwhelming favorite ingredient used for appetizers
  • Standout ingredients included wakame seaweed, quail eggs and game meats
  • Teams exercised pickling techniques across starters, entrees and desserts—from chicken wings to lemon curd
  • Required to showcase certain knife skills, students chose chiffonade, julienne, small dice, brunoise and rondelle

Students promoted sustainability, philanthropy and efficiency across concepts:

  • Concepts centered on locally-sourced, health-conscious and fusion cuisine; more than half of the menus featured gluten free, vegan and vegetarian options
  • Concepts presented were classified as either casual dining, quick casual/fast casual or quick service, which illustrated a critical trend in the industry—a shift away from traditional, sit-down, in-restaurant dining
  • Nearly 50 percent of management teams’ concepts promoted community engagement and contained a philanthropic component, such as a culinary internship for at-risk youth, a book donation drop, a pay-it-forward giving model, and a weekly fundraising night for local non-profits
  • Concepts also incorporated unique food delivery services at popular on-the-go venues, such as the airport or gym, to effectively reach target audiences

“A Look Inside the National ProStart Invitational” and an accompanying infographic can be found here: https://go.oracle.com/LP=60748?elqcampaignid=104070.

As a sponsor of the ProStart program, Oracle Hospitality is actively taking a role in empowering the next generation of hospitality. Oracle Hospitality provides comprehensive solutions that elevate guest experiences while streamlining operations with point-of-sale, kitchen and back-office management, and mobile technologies. Oracle Hospitality tools ensure that food and beverage operations, whether they run thousands of workstations or a single property, have the same ability to deliver brand and customer loyalty.

Reaching nearly 140,000 students annually, ProStart is offered in more than 1,800 high schools and career and technical education centers in all states, Guam and Department of Defense Education Activity schools in Europe and the Pacific. For more information on the ProStart program, visit ChooseRestaurants.org/ProStart or find us on Twitter or Facebook.

Contact Info
Matt Torres
Oracle PR
Patty Oien
Blanc & Otus
Jasmine Jones
About the National Restaurant Association Educational Foundation

As the philanthropic foundation of the National Restaurant Association, the National Restaurant Association Educational Foundation’s mission of service to the public is dedicated to enhancing the industry’s training and education, career development and community engagement efforts. The NRAEF and its programs work to Attract, Empower and Advance today’s and tomorrow’s restaurant and foodservice workforce. NRAEF programs include: ProStart®—a high-school career and technical education program; Restaurant Ready—partnering with community based organizations to provide “opportunity youth” with skills training and job opportunities; Military—helping military servicemen and women transition their skills to restaurant and foodservice careers; Scholarships—financial assistance for students pursuing restaurant, foodservice and hospitality degrees; and, the Hospitality Sector Registered Apprenticeship Project—a partnership with the American Hotel & Lodging Association providing a hospitality apprenticeship program for the industry. For more information on the NRAEF, visit ChooseRestaurants.org.

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility. For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Patty Oien

  • +1.510.303.7289

Jasmine Jones

  • 202-315-4101

nVision Performance Tuning: 5. Additional Instrumentation of nVision

David Kurtz - Thu, 2017-11-02 07:28
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

One of the challenges of tuning and monitoring nVision is being able to identify each report being run. Calls to the Oracle instrumentation package dbms_application_info were added to the component processor in PeopleTools 8.50, and to Application Engine in PeopleTools 8.52. However, COBOL, nVision, and SQR were never instrumented.
Therefore, there is still a place for the psftapi package and trigger. When a PeopleSoft batch process starts, it sets the status on its request record on the Process Scheduler request table, psprcsrqst, to 7, thus indicating that it is processing. A trigger on that table fires on that update and calls the psftapi package. The package sets module and action to the process name and process instance, and also stores the process instance number in a package global variable that can be read with another procedure in the package. Every scheduled process will have module and action set to something meaningful. Any PeopleSoft instrumentation will simply overwrite these values. A session's module and action are picked up by Oracle monitoring tools; in particular, they are also stored in the Active Session History (ASH).
However, nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query. Knowing the process instance is useful because then we can look up the operator and run control ID.
We also need to know the report ID being run. When each individual nVision report starts, it queries the runtime parameters from the PS_NVS_REPORT PeopleTools table. There is no update, so it is not possible to capture this with a DML trigger. Instead, you can create a fine-grained audit policy on the query with a PL/SQL handler. The handler package is then invoked by the audit policy.
BEGIN
  DBMS_FGA.add_policy(
    object_schema   => 'SYSADM',
    object_name     => 'PS_NVS_REPORT',
    policy_name     => 'PS_NVS_REPORT_SEL',
    handler_module  => 'AEG_FGA_NVISION_HANDLER',
    enable          => TRUE,
    statement_types => 'SELECT',
    audit_trail     => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
The handler package runs in the session that triggered the audit.  It can access the audit record and extract the string of colon-separated bind variables thus obtaining the report ID and business unit.  It updates the session action attribute in the same way as psftapi.sql.
CREATE OR REPLACE PROCEDURE sysadm.aeg_fga_nvision_handler
(object_schema VARCHAR2
,object_name VARCHAR2
,policy_name VARCHAR2)
AS
  l_sqlbind VARCHAR2(4000);
  l_parm1 VARCHAR2(30);
  l_parm2 VARCHAR2(30);
  l_parm3 VARCHAR2(30);
  l_parm4 VARCHAR2(30);
BEGIN
  BEGIN
    --extract up to 4 bind variables from the audited statement's bind string
    SELECT x.lsqlbind
    ,      SUBSTR(x.lsqlbind,x.start1,LEAST(30,NVL(x.end1,x.lensqlbind+1)-x.start1)) parm1
    ,      SUBSTR(x.lsqlbind,x.start2,LEAST(30,NVL(x.end2,x.lensqlbind+1)-x.start2)) parm2
    ,      SUBSTR(x.lsqlbind,x.start3,LEAST(30,NVL(x.end3,x.lensqlbind+1)-x.start3)) parm3
    ,      SUBSTR(x.lsqlbind,x.start4,LEAST(30,NVL(x.end4,x.lensqlbind+1)-x.start4)) parm4
    INTO l_sqlbind, l_parm1, l_parm2, l_parm3, l_parm4
    FROM (
      SELECT l.lsqlbind, l.sessionid, l.entryid, l.obj$name
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,1,1,'i'),0) start1
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,0,'i'),0) end1
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,1,'i'),0) start2
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,0,'i'),0) end2
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,1,'i'),0) start3
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,0,'i'),0) end3
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,1,'i'),0) start4
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,5,1,'i'),0) end4
      ,      LENGTH(lsqlbind) lensqlbind
      FROM sys.fga_log$ l
    ) x
    WHERE x.sessionid = USERENV('SESSIONID')
    AND x.entryid = USERENV('ENTRYID')
    AND x.obj$name = 'PS_NVS_REPORT';
  EXCEPTION
    WHEN no_data_found THEN NULL;
  END;

  --shift the parameters up so that the report ID and business unit end up
  --in l_parm4 and l_parm3, whatever the number of bind variables captured
  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
    l_parm3 := l_parm2;
    l_parm2 := l_parm1;
  END IF;
  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
    l_parm3 := l_parm2;
  END IF;
  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
  END IF;

  --set the session action to process instance:report ID:business unit
  dbms_application_info.set_action(
    SUBSTR('PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3,1,32));
  --EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER=''PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3||'''';
END;
/
The action attribute is then picked up in the ASH data. It is easy to extract the report ID and business unit from the action string with regular expressions, as in this example query.
set lines 160 trimspool on
column module format a12
column action format a32
column client_id format a12
column prcsinstance format a9 heading 'Process|Instance'
column business_unit format a8 heading 'Business|Unit'
column report_id format a10
select DISTINCT module, action, client_id
, REGEXP_SUBSTR(h.action,'[[:digit:]]+') prcsinstance
, substr(regexp_substr(h.action,':([[:alnum:]])+',1,2),2) business_unit
, substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
from v$active_session_history h
where program like 'PSNVS%'
/
Here you can see how process instance, report ID and business unit are held in action and how they can be extracted.  Now, it is possible to profile ASH data for nVision processes, find the long-running SQL and determine which layout it came from.
                                                             Process  Business
MODULE       ACTION                           CLIENT_ID    Instance  Unit     REPORT_ID
------------ -------------------------------- ------------ --------- -------- ----------
RPTBOOK      PI=1780508:GBGL224S:UK001        GBNVISION    1780508   UK001    GBGL113S
RPTBOOK      PI=1780509:GBGL010E:UK002        GBNVISION    1780509   UK002    GBGL010E
RPTBOOK      PI=1780509:Processing            GBNVISION    1780509            Processing
RPTBOOK      PI=1780500:GBGL113S:UK003        GBNVISION    1780500   UK003    GBGL113S
RPTBOOK      PI=1780509:GBGL010E:UK000        GBNVISION    1780508   UK000    GBGL010E
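For illustration, the same extraction from the action string can be sketched in Python. This is a hypothetical helper mirroring the REGEXP_SUBSTR calls in the query above, not part of the original scripts:

```python
import re

def parse_action(action):
    """Split an action string like 'PI=1780508:GBGL224S:UK001' into
    (process instance, report ID, business unit), as the ASH query does."""
    # first run of digits = process instance
    prcsinstance = re.search(r'\d+', action)
    # colon-delimited tokens: first is report ID, second is business unit
    parts = re.findall(r':([A-Za-z0-9_-]+)', action)
    return (prcsinstance.group() if prcsinstance else None,
            parts[0] if len(parts) > 0 else None,
            parts[1] if len(parts) > 1 else None)

print(parse_action('PI=1780508:GBGL224S:UK001'))
# ('1780508', 'GBGL224S', 'UK001')
```

Note that while a layout is being processed, the action may carry only 'PI=nnnnnnn:Processing', in which case there is no business unit to extract.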
The code in this blog is available on GitHub.
Other recommendations
  • Create an index on SYS.FGA_LOG$ to support the query in the FGA handler package.
CREATE INDEX sys.fga_log$_obj$name
ON sys.fga_log$ (obj$name, sessionid, entryid);
  • Put a regular purge of the FGA_LOG$ table in place, to purge rows after, say, 31 days.  Otherwise it will grow indefinitely; one row is added for every nVision report run.
DELETE FROM fga_log$
WHERE obj$name = 'PS_NVS_REPORT'
AND ntimestamp# < SYSDATE - 31;
  • Move SYS.AUD$ and SYS.FGA_LOG$ from the SYSTEM tablespace to another ASSM tablespace using the instructions in Oracle support note 1328239.1.

Live Demo: Create Database on Cloud & FREE Trainings This Week

Online Apps DBA - Thu, 2017-11-02 04:59

[K21Academy Weekly Newsletter] 171102 Subject: Live Demo: Create Database on Cloud & FREE Trainings This Week Interesting This Week: 1. Create Oracle Database on Cloud 2. FREE Information/Guides This Week 2.1 Role of DBA in Cloud 2.2 7 Docs every Apps DBA must read before Integrating OAM 2.3 EBS (R12)-OAM Integration: Troubleshooting 3. Useful FREE […]

The post Live Demo: Create Database on Cloud & FREE Trainings This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Are large objects supported in PostgreSQL 10 logical replication

Yann Neuhaus - Thu, 2017-11-02 01:32

Another interesting topic that popped up last week during pgconfeu: Are large objects supported with logical replication in PostgreSQL 10? The only truth is a test, isn’t it? Let’s go…

Obviously we need a table containing some large objects to start with:

postgres=# create table t1 ( a int, b oid);

Before inserting some data, let’s create a publication for that table right now:

postgres=# create publication my_pub for table t1;

Ok, that works. Now we need a subscription for that, so on a second instance:

postgres=# create table t1 ( a int, b oid);
postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;

So far, so good. Let’s insert some data on the publishing instance and see what happens:

postgres=# \! which cp
postgres=# insert into t1 (a,b) values (1, lo_import('/usr/bin/cp'));

That worked. What do we see on the subscription side?

postgres=# select * from t1;
 a |   b   
---+-------
 1 | 16418
(1 row)

postgres=# select * from pg_size_pretty ( pg_relation_size ( 't1' ) );
 pg_size_pretty 
----------------
 8192 bytes
(1 row)

So, at least “something” is there. Let’s prove it:

postgres=# select lo_export(b,'/tmp/cp') from t1;
ERROR:  large object 16418 does not exist

Hm, this is not what was expected, right? Doing the same on the publishing side works:

postgres=# select lo_export(b,'/tmp/cp') from t1;
 lo_export 
-----------
         1
(1 row)

postgres=# \! chmod +x /tmp/cp
postgres=# \! /tmp/cp --help | head -1
Usage: /tmp/cp [OPTION]... [-T] SOURCE DEST

This means the OID is replicated, but not the large object itself. So the answer is: No, large objects cannot be used with PostgreSQL 10 logical replication.


Cet article Are large objects supported in PostgreSQL 10 logical replication est apparu en premier sur Blog dbi services.

Pass Summit 2017

Yann Neuhaus - Wed, 2017-11-01 23:52

Today starts the Pass Summit 2017 taking place in Seattle.
After a long flight over the ocean yesterday (more than 10 hours), and jet lag that stopped me sleeping past 4 AM this morning, I arrived at the Convention Center in Seattle, where the PASS Summit takes place.


I started this first day with the session of Itzik Ben-Gan: T-SQL Tips and Tricks.
As part of the session, Itzik spoke about batch execution mode (introduced in SQL Server 2012), which speeds up T-SQL execution compared to row execution mode.
The problem is that batch mode is only available with columnstore indexes, so if you don’t have a columnstore index on your table you cannot benefit from this feature.
To work around this limitation, Itzik showed us how to create a filtered columnstore index (filtered columnstore indexes arrived in SQL Server 2016) that returns no rows but still enables batch processing.
Well done!

After a quick lunch, I continued this first day with the session of Drew Furgiuele:


After having explained why to use PowerShell (automation, a bridge between tools…) and how to install the SQLSERVER module (Install-Module SQLSERVER or Save-Module SQLServer), Drew showed how to use this module.
The first interesting point is how to browse SQL Server once the module has been installed.
For that just execute the PS script:


After connecting to your SQL Server instance with cd sql\<servername>\default for a SQL Server default instance, or \<instancename> for a named instance, it is possible to browse the complete instance, just as you can via SQL Server Management Studio, with commands like:

$dbs = Get-Item
$dbs = Get-Item | Where-Object {$_.Name -eq "AdventureWorks2014"}

Easy for a first step with PowerShell.
Of course, Drew showed us much more, with PowerShell scripts that copy tables from one instance to another, manage backups consistently across a whole environment, or execute a point-in-time restore.
Well done Drew.

The last session of the day had two parts and was delivered by Glenn Berry, about migration to SQL Server 2017.
Glenn explained that there are plenty of reasons to upgrade to SQL Server 2017: great new features, and features now available in Standard Edition (starting with 2016 SP1)…
But he also pointed out that there are big performance differences between Standard and Enterprise Edition, with examples using columnstore indexes or running DBCC CHECKDB.
So Enterprise Edition is not just about extra features; it can also provide great performance gains, which is often forgotten.
There are also limits on memory, sockets and physical core usage with Standard Edition, so don’t build a virtual machine for Standard Edition with too much memory or too many sockets/cores, because it will not be able to use them ;-) You can learn more on Glenn Berry’s blog.

This first day was great, with lots of interesting sessions.
Time now to visit Seattle a little, and to look forward to tomorrow’s second day with more great sessions and speakers!



Cet article Pass Summit 2017 est apparu en premier sur Blog dbi services.

Monitoring Standby – SQLPlus or DGMGRL

Michael Dinh - Wed, 2017-11-01 22:00

Here is an example using DGMGRL:

DGMGRL> show database roverdb SendQEntries
        STANDBY_NAME       STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs) 
           roverstby     ARCHIVED        936921167                1                13019  10/31/2017 10:47:04  10/31/2017 10:48:31     746413367424     746413483999          1819004 
           roverstby     ARCHIVED        936921167                1                13023  10/31/2017 10:51:40  10/31/2017 10:52:19     746413767648     746413883688          1809094 
           roverstby     ARCHIVED        936921167                1                13031  10/31/2017 10:57:02  10/31/2017 10:57:44     746414728981     746414851377          1924909 
           roverstby     ARCHIVED        936921167                1                13032  10/31/2017 10:57:44  10/31/2017 10:58:23     746414851377     746414967877          1815042 
           roverstby     ARCHIVED        936921167                1                13033  10/31/2017 10:58:23  10/31/2017 10:59:02     746414967877     746415089206          1798857 
           roverstby     ARCHIVED        936921167                1                13034  10/31/2017 10:59:02  10/31/2017 10:59:41     746415089206     746415217514          1818919 
                          CURRENT        936921167                1                13036  10/31/2017 11:29:41                          746415239037                               628 
                          CURRENT        936921167                2                12359  10/31/2017 11:29:05                          746415238854                               864 

DGMGRL> show database roverstby RecvQEntries
              STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs) 
         NOT_APPLIED        936921167                1                13020  10/31/2017 10:48:31  10/31/2017 10:48:49     746413483999     746413509640           385949 
         NOT_APPLIED        936921167                1                13021  10/31/2017 10:48:49  10/31/2017 10:50:19     746413509640     746413636246          1885417 
         NOT_APPLIED        936921167                1                13022  10/31/2017 10:50:19  10/31/2017 10:51:40     746413636246     746413767648          1944637 
         NOT_APPLIED        936921167                1                13024  10/31/2017 10:52:19  10/31/2017 10:52:58     746413883688     746413999759          1819116 
         NOT_APPLIED        936921167                1                13025  10/31/2017 10:52:58  10/31/2017 10:53:40     746413999759     746414124264          1868420 
         NOT_APPLIED        936921167                1                13026  10/31/2017 10:53:40  10/31/2017 10:54:22     746414124264     746414244619          1890478 
         NOT_APPLIED        936921167                1                13027  10/31/2017 10:54:22  10/31/2017 10:55:02     746414244619     746414363387          1843514 
         NOT_APPLIED        936921167                1                13028  10/31/2017 10:55:02  10/31/2017 10:55:41     746414363387     746414484244          1818826 
         NOT_APPLIED        936921167                1                13029  10/31/2017 10:55:41  10/31/2017 10:56:20     746414484244     746414605367          1813344 
         NOT_APPLIED        936921167                1                13030  10/31/2017 10:56:20  10/31/2017 10:57:02     746414605367     746414728981          1904385 
         NOT_APPLIED        936921167                1                13035  10/31/2017 10:59:41  10/31/2017 11:29:41     746415217514     746415239037            79395 
   PARTIALLY_APPLIED        936921167                2                12352  10/31/2017 10:40:04  10/31/2017 10:47:07     746413130730     746413371576             1980 
         NOT_APPLIED        936921167                2                12353  10/31/2017 10:47:07  10/31/2017 10:50:22     746413371576     746413640990             1658 
         NOT_APPLIED        936921167                2                12354  10/31/2017 10:50:22  10/31/2017 10:53:01     746413640990     746414010894             1774 
         NOT_APPLIED        936921167                2                12355  10/31/2017 10:53:01  10/31/2017 10:55:04     746414010894     746414371654             1541 
         NOT_APPLIED        936921167                2                12356  10/31/2017 10:55:04  10/31/2017 10:57:04     746414371654     746414736501             1532 
         NOT_APPLIED        936921167                2                12357  10/31/2017 10:57:04  10/31/2017 10:59:04     746414736501     746415097318             1485 
         NOT_APPLIED        936921167                2                12358  10/31/2017 10:59:04  10/31/2017 11:29:05     746415097318     746415238854             6101 
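To keep an eye on the apply backlog, the RecvQEntries output lends itself to simple post-processing. The following is a hypothetical Python helper (not part of the original post), which totals the SIZE (KBs) column of the NOT_APPLIED rows; the sample shows the row format with some columns abbreviated:

```python
def backlog_kb(recvq_output):
    """Sum the SIZE (KBs) column (last field) of all NOT_APPLIED rows
    in the output of 'show database ... RecvQEntries'."""
    total = 0
    for line in recvq_output.splitlines():
        fields = line.split()
        if fields and fields[0] == 'NOT_APPLIED':
            total += int(fields[-1])
    return total

# Sample rows in the format shown above (middle columns omitted)
sample = """\
         STATUS     RESETLOGS_ID  THREAD  LOG_SEQ       SIZE (KBs)
    NOT_APPLIED        936921167       1    13020           385949
    NOT_APPLIED        936921167       1    13021          1885417
PARTIALLY_APPLIED      936921167       2    12352             1980"""
print(backlog_kb(sample))  # 385949 + 1885417 = 2271366
```

A scheduled job feeding DGMGRL output through a check like this can alert when the unapplied redo exceeds a threshold.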

