Feed aggregator

Bitmap Join Indexes

Jonathan Lewis - Fri, 2018-05-18 08:29

I’ve been prompted by a recent question on the ODC database forum to revisit a note I wrote nearly five years ago about bitmap join indexes and their failure to help with join cardinalities. At the time I made a couple of unsupported claims and suggestions without supplying any justification or proof. Today’s article finally fills that gap.

The problem is this – I have a column which exhibits an extreme skew in its data distribution, but it’s in a “fact” table where most columns are meaningless ids and I have to join to a dimension table on its primary key to translate an id into a name. While there is a histogram on the column in the fact table the information in the histogram ceases to help if I do the join to the dimension and query by name, and the presence of a bitmap join index doesn’t make any difference. Let’s see this in action – some of the code follows a different pattern and format from my usual style because I started by copying and editing the example supplied in the database forum:

rem     Script:         bitmap_join_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Last tested 
rem     Notes:
rem     Bitmap join indexes generate virtual columns on the fact table
rem     but you can't get stats on those columns - which means if the
rem     data is skewed you can have a histogram on the raw column but
rem     you don't have a histogram on the bitmap virtual column.

drop table t1;
drop table dim_table;

create table dim_table (type_code number, object_type varchar2(10));

insert into dim_table values (1,'TABLE');
insert into dim_table values (2,'INDEX');
insert into dim_table values (3,'VIEW');
insert into dim_table values (4,'SYNONYM');
insert into dim_table values (5,'OTHER');

alter table dim_table add constraint dim_table_pk primary key (type_code) using index;

exec dbms_stats.gather_table_stats(user,'dim_table',cascade=>true);

create table t1
as
select
        object_id, object_name, 
        decode(object_type, 'TABLE',1,'INDEX',2,'VIEW',3,'SYNONYM',4,5) type_code 
from
        all_objects
where
        rownum <= 50000 -- > comment to bypass wordpress format issue
;

insert into t1 select * from t1;
insert into t1 select * from t1;
insert into t1 select * from t1;

create  bitmap index t1_b1 on t1(dt.object_type)
from    t1, dim_table dt
where   t1.type_code = dt.type_code
;
exec dbms_stats.gather_table_stats(null, 't1', cascade=>true, method_opt=>'for all columns size 254');

select
        dt.object_type, count(*)
from
        t1, dim_table  dt
where
        t1.type_code   = dt.type_code
group by
        dt.object_type
order by
        dt.object_type
;

I’ve started with a dimension table that lists 5 type codes and has a primary key on that type code; then I’ve used all_objects to generate a table of 400,000 rows using those type codes, and I’ve created a bitmap join index on the fact (t1) table based on the dimension (dim_table) table column. By choice the distribution of the five codes is massively skewed, so after gathering stats (including histograms on all columns) for the table I’ve produced a simple aggregate report of the data showing how many rows there are of each type – by name. Here are the results, with an execution plan showing the benefit of the “group by placement” transformation:

OBJECT_TYP   COUNT(*)
---------- ----------
INDEX           12960
OTHER          150376
SYNONYM        177368
TABLE           12592
VIEW            46704

5 rows selected.

| Id  | Operation             | Name      | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT      |           |       |       |   735 |
|   1 |  SORT GROUP BY        |           |     5 |   125 |   735 |
|*  2 |   HASH JOIN           |           |     5 |   125 |   720 |
|   3 |    VIEW               | VW_GBF_7  |     5 |    80 |   717 |
|   4 |     HASH GROUP BY     |           |     5 |    15 |   717 |
|   5 |      TABLE ACCESS FULL| T1        |   400K|  1171K|   315 |
|   6 |    TABLE ACCESS FULL  | DIM_TABLE |     5 |    45 |     2 |

Predicate Information (identified by operation id):
   2 - access("ITEM_1"="DT"."TYPE_CODE")

Having established the basic result we can now examine some execution plans to see how well the optimizer is estimating cardinality for queries relating to that skewed distribution. I’m going to generate the execution plans for a simple select of all the rows of type ‘TABLE’ – first by code, then by name, showing the execution plan of each query:

explain plan for
select  t1.object_id
from    t1
where   t1.type_code = 1
;

select * from table(dbms_xplan.display(null,null,'outline'));

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 12592 |    98K|   281   (8)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 12592 |    98K|   281   (8)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("T1"."TYPE_CODE"=1)

Thanks to the histogram I generated on the type_code column the optimizer’s estimate of the number of rows is very accurate. So how well does the optimizer handle the same predicate when it has to go through the join:
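The arithmetic behind that accurate estimate is worth making explicit: a frequency histogram stores one bucket per distinct value, so for an equality predicate with a literal the optimizer can read the row count straight out of the relevant bucket. A minimal sketch in Python, using the bucket counts implied by the aggregate report above (illustrative only, not Oracle's code):

```python
# Frequency histogram on t1.type_code: one bucket per distinct value,
# each bucket holding the number of rows with that value.
histogram = {1: 12592, 2: 12960, 3: 46704, 4: 177368, 5: 150376}

def estimate_by_code(type_code):
    # For "type_code = :literal" the optimizer returns the bucket count directly.
    return histogram[type_code]

print(estimate_by_code(1))  # 12592, matching the plan's Rows column
```

The buckets sum to the table's 400,000 rows, which is why the single-table estimate can be exact.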

prompt  =============
prompt  Unhinted join
prompt  =============

explain plan for
select  t1.object_id
from
        t1, dim_table  dt
where
        t1.type_code   = dt.type_code 
and     dt.object_type = 'TABLE'
;

select * from table(dbms_xplan.display(null,null,'outline'));

| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |           | 80000 |  1328K|   287  (10)| 00:00:01 |
|*  1 |  HASH JOIN         |           | 80000 |  1328K|   287  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DIM_TABLE |     1 |     9 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1        |   400K|  3125K|   277   (7)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - access("T1"."TYPE_CODE"="DT"."TYPE_CODE")
   2 - filter("DT"."OBJECT_TYPE"='TABLE')

Taking the default execution path the optimizer’s estimate of rows identified by type name is 80,000 – which is one fifth of the total number of rows. Oracle knows that the type_code is skewed in t1, but at compile time doesn’t have any idea which type_code corresponds to type ‘TABLE’, so it’s basically using the number of distinct values to dictate the estimate.
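The 80,000 in the plan is pure arithmetic: with no histogram usable across the join, the optimizer falls back on the number of distinct values in the join column. A sketch of that calculation (a deliberate simplification of the optimizer's join-cardinality formula, shown only to reproduce the figure):

```python
t1_rows = 400_000         # rows in the fact table
t1_join_ndv = 5           # distinct type_code values in t1
dim_rows_selected = 1     # dt.object_type = 'TABLE' matches one dimension row

# Join cardinality ~ rows(t1) * rows(dim after filter) / join-column NDV;
# with no usable histogram, every type_code is assumed equally likely.
estimate = t1_rows * dim_rows_selected / max(t1_join_ndv, 1)
print(int(estimate))  # 80000, matching the plan
```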

We could try hinting the query to make sure it uses the bitmap join index – just in case this somehow helps the optimizer (and we’ll see in a moment why we might have this hope, and why it is forlorn):

prompt  ===================
prompt  Hinted index access
prompt  ===================

explain plan for
select
        /*+ index(t1 t1_b1) */
        t1.object_id
from
        t1, dim_table dt
where
        t1.type_code   = dt.type_code 
and     dt.object_type = 'TABLE'
;

select * from table(dbms_xplan.display(null,null,'outline'));

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |       | 80000 |   625K|   687   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    | 80000 |   625K|   687   (1)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE        | T1_B1 |       |       |            |          |

Predicate Information (identified by operation id):
   3 - access("T1"."SYS_NC00004$"='TABLE')

The plan tells us that the optimizer now realises that it doesn’t need to reference the dimension table at all – all the information it needs is in the t1 table and its bitmap join index – but it still comes up with an estimate of 80,000 for the number of rows. The predicate section tells us what to do next – it identifies a system-generated column, which is the virtual column underlying the bitmap join index: let’s see what the stats on that column look like:

select
        column_name, histogram, num_buckets, num_distinct, num_nulls, sample_size
from
        user_tab_cols
where
        table_name = 'T1'
order by
        column_id
;

COLUMN_NAME          HISTOGRAM       NUM_BUCKETS NUM_DISTINCT  NUM_NULLS SAMPLE_SIZE
-------------------- --------------- ----------- ------------ ---------- -----------
OBJECT_ID            HYBRID                  254        50388          0        5559
OBJECT_NAME          HYBRID                  254        29224          0        5560
TYPE_CODE            FREQUENCY                 5            5          0      400000
SYS_NC00004$         NONE

4 rows selected.

There are no stats on the virtual column – Oracle won’t try to collect any, and even if you write some in (using dbms_stats.set_column_stats) the optimizer won’t use them for the query. The optimizer seems to be coded to use the number of distinct keys from the index in this case.


It’s very disappointing that there seems to be no official way to work around this problem – but Oracle has its own (undocumented) solution to the problem that comes into play with OLAP: the hint /*+ precompute_subquery() */. It’s possible to tell the optimizer to execute certain types of subquery as the first stage of optimising a query, then change the query to take advantage of the resulting data:

explain plan for
select  t1.object_id
from    t1
where
        t1.type_code in (
                select  /*+ precompute_subquery */
                        dt.type_code
                from    dim_table dt
                where   dt.object_type = 'TABLE'
        )
;

select * from table(dbms_xplan.display(null,null,'outline'));

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      | 12592 |    98K|   281   (8)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 12592 |    98K|   281   (8)| 00:00:01 |

Predicate Information (identified by operation id):
   1 - filter("T1"."TYPE_CODE"=1)

Oracle hasn’t optimized the query I wrote, instead it has executed the subquery, derived a (very short, in this case) list of values, then optimized and executed the query I first wrote using the constant(s) returned by the subquery. And you can’t see the original subquery in the execution plan. Of course, with the literal values in place, the cardinality estimate is now correct.

It’s a great pity that this hint is undocumented – and, being undocumented, it’s one you shouldn’t use in production.


Oracle Blockchain Cloud Service and Financial Services Enable Next-Gen Blockchain Innovators

Oracle Press Releases - Fri, 2018-05-18 07:00
Press Release
Oracle Blockchain Cloud Service and Financial Services Enable Next-Gen Blockchain Innovators Students Tackle Real Problems and Succeed in Blockchain Challenge

Redwood Shores, Calif.—May 18, 2018

In an effort to accelerate blockchain innovation in Financial Services and other industries, Oracle recently joined academia and banking industry leaders as part of the Carolina Fintech Hub (CFH) Generation Blockchain Challenge to encourage students to develop commercially viable and cross-industry blockchain applications.

Thirty students from North Carolina State, the University of North Carolina at Chapel Hill, the University of North Carolina at Charlotte and the University of South Carolina worked together with leaders from Bank of America, Wells Fargo, Ernst & Young, BB&T, Ally Bank, SIA Partners and Oracle Blockchain Cloud Service implementation partner Aurablocks to solve real-world industry challenges covering banking, higher education, energy, healthcare, and real estate. The goal of the challenge was to bring a single source of truth and trusted transactions to complex distributed business processes with the usage of blockchain. For seven weeks, 10 finalist student teams worked with industry mentors to define, test and implement proofs of concept using a beta version of Oracle Blockchain Cloud Service and Oracle Digital Innovation Platform for Open Banking.

CFH is an organization focused on accelerating the merging worlds of financial services and digital technology. The CFH Challenge culminated at an event hosted in North Carolina on April 20 by AvidXchange Inc., where student teams were acknowledged by a set of esteemed judges for their innovations and entrepreneurialism. The winning team, Versity, created a decentralized student records credentials platform that allows individuals to validate, store, and share their student records with employers and educational institutions from anywhere, anytime.

“This challenge offered students an incredible opportunity to learn more about blockchain technology and to help them develop their business and technical skills with the help of their industry and academic mentors—and access to Oracle’s advanced enterprise-grade blockchain cloud platform,” said Tariq Bokhari, Executive Director of Carolina Fintech Hub. “The students not only identified problems and created real, viable solutions but through this process have learned important skills they can apply to their employment future.”

Throughout this challenge, Oracle and AuraBlocks led the training of more than 200 students in the Carolinas on blockchain with additional partners including North Carolina State University, University of North Carolina at Charlotte and Sia Blocks. Nearly all of the student teams intend to further hone their projects on Oracle’s platform and develop future companies based on their innovations. The winning student teams who participated in this program have the option to join the Oracle Startup for Higher Education program giving students access to mentorship and other resources required to further develop their entrepreneurial ideas into real enterprises.

“Oracle’s blockchain platform is built to enable rapid experimentation and provide enterprise-grade capabilities to simplify moving POCs and pilots into production,” said Sanjay Mathew, senior director, Global Financial Services Industry, Oracle. “The scope of what the teams designed and built in seven weeks with this unique collaboration is impressive, and we look forward to having the teams evolve these projects into full solutions deployed on Oracle Blockchain Cloud Service. This could not have happened without the outstanding participation from motivated students, industry leaders, partners and academia combined.”

Contact Info
Pasha Maher
Carolina Fintech Hub
Judi Palmer
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Pasha Maher

  • 804.263.7653

Judi Palmer

  • 650.506.0266

Confusion surrounding "buffer busy waits" and "read by other session" wait events

Tom Kyte - Fri, 2018-05-18 05:06
Tom - I'm investigating a relatively minor but attention-getting issue that occurs from time to time around my workplace. I observe a few dozen identical queries (select only - no DML) generated by an Oracle Financials concurrent program being ex...
Categories: DBA Blogs

Get the length of CLOB columns in bytes

Tom Kyte - Fri, 2018-05-18 05:06
Hello Tom, I have a table with a CLOB column: <code> create table plch_clob (i int primary key, x clob); begin for indx in 1 .. 1000 loop insert into plch_clob( i, x) values (indx, 'CLOB Row: ' || indx); end loop; ...
Categories: DBA Blogs

Data Masking

Tom Kyte - Fri, 2018-05-18 05:06
Does Oracle provide a package or function for data masking ? For example, in a development environment, for data protection purposes, the information of the table customer needs to be masked. create table customer (last_name varchar2(25), first_n...
Categories: DBA Blogs

Read only partitions in 12.2

Tom Kyte - Fri, 2018-05-18 05:06
Team, Started reading about Read Only partitions in 12.2 http://docs.oracle.com/database/122/VLDBG/partition-create-tables-indexes.htm#VLDBG-GUID-9D7149B6-A2FF-47CA-8F00-47CBFD33F82B <quote> A higher level setting of the read-only cl...
Categories: DBA Blogs

Scaling the EDB containers in MiniShift/OpenShift

Yann Neuhaus - Fri, 2018-05-18 03:21

When you followed the last post you should have two pgpool instances and one EDB Postgres instance running in OpenShift. pgpool is responsible for the load balancing, meaning: Send write requests to the master instance and spread read requests over all instances. In the current setup this does not make much sense as we only have one instance so in this post we will scale the setup to finally consist of one master and one read only replica container.
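The load-balancing rule described above is simple to state: writes always go to the master, reads are spread over all instances. A toy Python model of that routing decision (illustrative only; pgpool's real implementation parses the SQL properly and honours backend weights):

```python
import itertools

class PoolRouter:
    """Toy model of pgpool routing: writes go to the master,
    reads are round-robined across all backends."""

    def __init__(self, master, replicas):
        self.master = master
        self.readers = itertools.cycle([master] + replicas)

    def route(self, sql):
        # Anything that modifies data must go to the master.
        if not sql.lstrip().lower().startswith("select"):
            return self.master
        # Read-only statements are spread over master and replicas.
        return next(self.readers)

router = PoolRouter("edb-master", ["edb-replica-1"])
print(router.route("insert into t values (1)"))  # edb-master
print(router.route("select * from t"))           # alternates across backends
```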

Lets check what pods are currently running:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-vldkj   1/1       Running   2          5d
edb-pgpool-1-699vh   1/1       Running   2          5d
edb-pgpool-1-nsgrm   1/1       Running   2          5d

As expected, two pgpool containers and one database container are up and running. When we log in to one of the pgpool containers we should see the pgpool processes:

dwe@dwe:~$ oc rsh edb-pgpool-1-699vh
sh-4.2$ ps -ef | grep pool
edbuser     63     0  0 15:27 ?        00:00:00 pgpool -D -d -f /etc/sysconfig/edb/pgpool3.5/pgpool.conf -a /etc/sysconfig/edb/pgpool3.5/pool_hba.conf
edbuser     64    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     65    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     66    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     67    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     68    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     69    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     70    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     71    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     72    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     73    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     74    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     75    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     76    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     77    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     78    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     79    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     80    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     81    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     82    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     83    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     84    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     85    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     86    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     87    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     88    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     89    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     90    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     91    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     92    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     93    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     94    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     95    63  0 15:27 ?        00:00:00 pgpool: wait for connection request
edbuser     96    63  0 15:27 ?        00:00:00 pgpool: PCP: wait for connection request
edbuser     97    63  0 15:27 ?        00:00:00 pgpool: worker process
edbuser    365   342  0 15:30 ?        00:00:00 grep pool

What I am interested in is the backend configuration. As we have only one database container, there should be only one backend configured:

sh-4.2$ cat /etc/sysconfig/edb/pgpool3.5/pgpool.conf | grep backend | egrep -v "^ |^#"
backend_hostname0 = ''
backend_port0 = 5444
backend_weight0 = 1
backend_data_directory0 = '/var/lib/edb/as9.6/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
								   # but also for ordinary conection to backend.
fail_over_on_backend_error = off

Fine, this is what we expect, and the same is of course true for the other container. The question is: we have two pgpool instances running against one database instance, so how do we connect? What happened in addition when we deployed the setup is that a service was created:

dwe@dwe:~$ oc get svc -o wide
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                     PORT(S)          AGE       SELECTOR
edb-service   LoadBalancer    5444:30726/TCP   5d        lb=edb-pgpool

This service proxies the request to one of the pgpool instances, which then routes the connection to the database instance. Let's try to connect from outside:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U postgres -W
Password for user postgres: 
psql: FATAL:  md5 authentication failed
DETAIL:  pool_passwd file does not contain an entry for "postgres"

Hm. Looking at the available databases and users in the database containers:

dwe@dwe:~$ oc rsh edb-as10-0-1-vldkj
sh-4.2$ psql edb
psql.bin (10.3.8)
Type "help" for help.

edb=# \l
                                 List of databases
   Name    |    Owner     | Encoding  | Collate | Ctype | ICU |  Access privileges  
 edb       | enterprisedb | SQL_ASCII | C       | C     |     | 
 postgres  | edbuser      | SQL_ASCII | C       | C     |     | 
 template0 | edbuser      | SQL_ASCII | C       | C     |     | =c/edbuser         +
           |              |           |         |       |     | edbuser=CTc/edbuser
 template1 | edbuser      | SQL_ASCII | C       | C     |     | =c/edbuser         +
           |              |           |         |       |     | edbuser=CTc/edbuser
(4 rows)

edb=# \du
                                         List of roles
       Role name       |                         Attributes                         | Member of 
 aq_administrator_role | No inheritance, Cannot login                              +| {}
                       | Profile default                                            | 
 edbuser               | Superuser, Create role, Create DB, Replication, Bypass RLS+| {}
                       | Profile default                                            | 
 enterprisedb          | Superuser                                                 +| {}
                       | Profile default                                            | 
 repl                  | Replication                                               +| {}
                       | Profile default                                            | 

… it becomes clear: There is no user postgres. Using the edbuser it works fine:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.


How can we verify that the service actually is routing the connection to both pgpool instances? PostgreSQL comes with a set of administration functions and one of those can be used to verify that.

Do a first connection and ask for the client address (the client is one of the pgpool instances in this case):

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# select inet_client_addr();
(1 row)

Do a second one:

dwe@dwe:~$ psql -h $(minishift ip) -p 30726 -U edbuser -W postgres
Password for user edbuser: 
psql (9.5.12, server 10.3.8)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# select inet_client_addr();
(1 row)


These two IP addresses are my two pgpool instances, which confirms that the service behaves as expected. As having a single database instance in that setup is rather pointless, let's scale the setup by adding another database container. We'll do that with the console for now:


After a few seconds another pod is running:

What I expect is that the pgpool instances were re-configured to include the new backend. Looking at the pods:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-frc99   1/1       Running   0          15m
edb-as10-0-1-gkpgq   1/1       Running   0          31m
edb-pgpool-1-lvwsq   1/1       Running   0          31m
edb-pgpool-1-nh4qb   1/1       Running   0          31m

Let's check the configuration of the first pgpool instance:

dwe@dwe:~$ oc rsh edb-pgpool-1-lvwsq
sh-4.2$ ps -ef | grep conf
edbuser   1918     0  0 07:43 ?        00:00:00 pgpool -D -d -f /etc/sysconfig/edb/pgpool3.5/pgpool.conf -a /etc/sysconfig/edb/pgpool3.5/pool_hba.conf
edbuser   2508  2483  0 07:48 ?        00:00:00 grep conf
sh-4.2$ cat /etc/sysconfig/edb/pgpool3.5/pgpool.conf | grep backend | egrep -v "^ |^#"
backend_hostname0 = ''
backend_port0 = 5444
backend_weight0 = 1
backend_data_directory0 = '/var/lib/edb/as9.6/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
								   # but also for ordinary conection to backend.
fail_over_on_backend_error = off
backend_hostname1 = ''
backend_port1 = 5444
backend_weight1 = 1
backend_data_directory1 = '/data1'
backend_flag1 = 'DISALLOW_TO_FAILOVER'

Here we go, this works as expected. What is a bit strange is that $PGDATA is /data1 on the second backend while it is /var/lib/edb/as9.6/data on the first. From now on, read-only queries should land on either the replica or the master node, and statements that modify data should land on the master node. Let's check which is the master and which is the replica:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "show pool_nodes" postgres
Password for user edbuser: 
 node_id |  hostname  | port | status | lb_weight |  role   | select_cnt 
 0       | | 5444 | 2      | 0.500000  | primary | 1
 1       | | 5444 | 2      | 0.500000  | standby | 0
(2 rows)

When I run two read-only statements I should be round-robined:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select inet_server_addr()" postgres
Password for user edbuser: 
(1 row)

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select inet_server_addr()" postgres
Password for user edbuser: 
(1 row)

Disabling load balancing should bring me to the master:

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select /*NO LOAD BALANCE*/ inet_server_addr()" postgres
Password for user edbuser: 
(1 row)

dwe@dwe:~$ psql -h $(minishift ip) -p 31014 -U edbuser -c "select /*NO LOAD BALANCE*/ inet_server_addr()" postgres
Password for user edbuser: 
(1 row)

Works quite fine. In the next post we'll add ConfigMaps to customize our deployment. Usually you want to pre-configure the deployment so that it fits your requirements, and that is what ConfigMaps are for.


Cet article Scaling the EDB containers in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

Using GoldenGate LogDump To Find Bad Data

Michael Dinh - Thu, 2018-05-17 23:38

The GoldenGate Primary Extract from the source database captured data without any issues; however, the target was not able to consume the data since the GoldenGate process would ABEND.

Unfortunately, I cannot provide all the details, so this stays at a high level.

Logdump 2433 >pos 0
+++ Starting with GoldenGate 12.2 TDR – Table Definition Record is in trail
+++ This will provide metadata for the table

Reading forward from RBA 0 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :    86  (x0056)   IO Time    : 2018/03/25 18:24:23.307.797   
IOType     :   170  (xaa)     OrigNode   :     1  (x01) 
TransInd   :     .  (x01)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
DDR/TDR Idx:   (001, 000)     AuditPos   : 3277290592 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/03/25 18:24:23.307.797 Metadata             Len 86 RBA 1689 
DDR Version: 1
Database type: ORACLE
Character set ID: CESU-8
National character set ID: UTF-16
Locale: neutral
Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
TimeZone: GMT-05:00
Global name: DBNAME
Logdump 2435 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   561  (x0231)   IO Time    : 2018/03/25 18:28:16.317.879   
IOType     :   170  (xaa)     OrigNode   :     2  (x02) 
TransInd   :     .  (x01)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
DDR/TDR Idx:   (001, 001)     AuditPos   : 3352410292 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/03/25 18:28:16.317.879 Metadata             Len 561 RBA 1826 
 1)Name          2)Data Type        3)External Length  4)Fetch Offset      5)Scale         6)Level
 7)Null          8)Bump if Odd      9)Internal Length 10)Binary Length    11)Table Length 12)Most Sig DT
13)Least Sig DT 14)High Precision  15)Low Precision   16)Elementary Item  17)Occurs       18)Key Column
19)Sub DataType 20)Native DataType 21)Character Set   22)Character Length 23)LOB Type     24)Partial Type
TDR version: 1
Definition for table SCHEMA.TABLE
Record Length: 4298
Columns: 7
ID           64     50        0  0  0 1 0     50     50     50 0 0 0 0 1    0 1   2    2       -1      0 0 0
A            B      C         D  E  F G H     I      J      K  L M N O P    Q R

Position 18 (Key Column) identifies whether a column belongs to the primary key; hence ID is a primary key column. I labeled each field with a letter of the alphabet for reference, and R is the 18th letter of the alphabet.

Note that even though ID is the first column of the table, GoldenGate offsets start at 0:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
0 ID                                        NOT NULL NUMBER
5 TEXT                                               VARCHAR2(4000)

SQL> r
  1  select b.column_name,a.constraint_type
  2  from dba_constraints a, dba_cons_columns b
  3  where a.table_name = b.table_name
  4  and a.constraint_name=b.constraint_name
  5  and a.constraint_type = 'P'
  6  and a.table_name='TABLE'
  7  and a.owner='SCHEMA'

COLUMN_NAME                    C
------------------------------ -
ID                             P

Logdump 2500 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :   885  (x0375)   IO Time    : 2018/05/10 14:16:43.000.514   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x02)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :      36261       AuditPos   : 2145515112 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/05/10 14:16:43.000.514 FieldComp            Len   885 RBA 88598006 
Name: SCHEMA.TABLE  (TDR Index: 1) 
After  Image:                                        ....................
 0000 000c 0000 0008 3239 3533 3432 3936 0001 0009 | ........29534296....  
 0000 0005 3937 3932 3600 0200 0500 0000 0131 0003 | ....................
 0005 0000 0001 3600 0400 0b00 0000 0731 3738 3337 | ....................
 3939 0005 0325 0000 0321 456e 6a6f 7920 796f 7572 | ....................  
 2073 7461 7920 696e 206f 7572 2068 6f74 656c 7320 | ....................
 7769 7468 2074 6865 206d 6f73 7420 6469 7363 6f75 | ....................  
 6e74 6564 2072 6174                               | ....................
Column     0 (x0000), Len    12 (x000c)  
 0000 0008 3239 3533 3432 3936                     | ....29534296 --- PRIMARY KEY VALUE
Column     1 (x0001), Len     9 (x0009)  
 0000 0005 3937 3932 36                            | ....97926  
Column     2 (x0002), Len     5 (x0005)  
 0000 0001 31                                      | ....1  
Column     3 (x0003), Len     5 (x0005)  
 0000 0001 36                                      | ....6  

SQL> select id, substr(TEXT,1,30) txt, vsize(TEXT), length(TEXT) from SCHEMA.TABLE where ID in (29534296);

        ID TXT                             VSIZE(TEXT) LENGTH(TEXT)
---------- ------------------------------ ----------- ------------
  29534296 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx         801          800


For the database, NLS_LENGTH_SEMANTICS=BYTE; hence, 800 characters should equal 800 bytes.
However, VSIZE reports 801 bytes for a LENGTH of 800 characters: one character is most likely
a multi-byte character, which the target cannot consume as it is not able to handle
multi-byte data.
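The 801-bytes-for-800-characters mismatch is easy to reproduce outside the database. A quick Node.js sketch (the string content here is made up) showing how a single multi-byte character makes the byte count exceed the character count:

```javascript
// A 10-character string where one character ('é') needs two bytes in UTF-8.
const s = 'abcdefghi' + '\u00e9';

console.log(s.length);                     // character count: 10
console.log(Buffer.byteLength(s, 'utf8')); // byte count: 11 - one char is multi-byte
```

Scanning VSIZE against LENGTH like this is a cheap way to find which rows carry multi-byte data.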

Microservice Approach for Web Development - Micro Frontends

Andrejus Baranovski - Thu, 2018-05-17 12:16
This post is based on my Oracle Code 2018 Warsaw talk. View presentation on slides share:

Wondering what the term micro frontends means? Check the micro frontends description here. Simply speaking, a micro frontend must implement business logic from top to bottom (database, middleware and UI) in an isolated environment, and it should be reusable and pluggable into the main application UI shell. There must be no shared variables between micro frontends. The advantage: distributed teams can work on separate micro frontends, which improves development of large and modular systems. There is a runtime advantage too - if one of the frontends stops working, the main application should continue to work.

I have implemented a micro frontends architecture with Oracle JET. Source code is available in a GitHub repository. There are three applications: two micro frontends and one master UI shell. Both micro frontends are implemented as JET Composite Components. The first is hosted on WebLogic and calls an ADF BC REST service in the backend. The second is hosted on Node.JS and returns static data. The first micro frontend implements a listener, which allows it to handle actions from the outside.

When a JET application is accessed in your browser, a bunch of HTML, JS and CSS files is downloaded from the server. The core idea of micro frontends: instead of loading the HTML, JS and CSS for a micro frontend from the same host as the master app, load it from a different host. A JET Composite Component rendered inside the master application will be downloaded from a different host. Not only downloaded - all backend calls should go to that host too, not to the master host. JET Composite Component integration into the master application architecture:

This is how it works in practice. Each of these charts is a separate JET Composite Component, loaded as a micro frontend from a different host into the master application. We can see that in the network monitor: the loader.js scripts for both micro frontends are downloaded from different hosts:

The runtime advantage - if one or multiple micro frontends are down, the application continues to run:

A JET Composite Component runs on the client, even though it is hosted in its own micro frontend. This makes it possible to subscribe, in the master app, to events happening in the component and route such an event to another micro frontend. In this example, once an item is selected in the job chart, the employees chart (another micro frontend) is filtered:

Technical implementation

The main application must be configured to support remote module loading for the JET Composite Component. Read more about it in Duncan Mills' blog post - JET Custom Components XII - Revisiting the Loader Script. In short, you should add an Xhr config in the JET application's main.js:
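The screenshot of that configuration is not reproduced here. As a rough sketch (based on Duncan Mills' post, so verify against it), the object merged into the requirejs.config() call in main.js tells the RequireJS text plugin to use XHR even for cross-domain URLs:

```javascript
// Sketch only: allow the RequireJS text plugin to fetch composite
// resources (HTML templates etc.) from another host via XHR.
const xhrConfig = {
  config: {
    text: {
      useXhr: function (url, protocol, hostname, port) {
        return true; // always use XHR; the remote host must send CORS headers
      }
    }
  }
};
// In main.js this object would be part of the existing requirejs.config({...}) call.
```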

The server where the micro frontend is hosted must set the Access-Control-Allow-Origin header.

The main module where both micro frontends are integrated uses the JET module component. Each micro frontend in the master UI shell is wrapped into a JET module. This allows the main application to keep functioning even when a micro frontend in one of the modules stops:

The JET module is initialized from a variable, which returns the module name:

The Jobs module contains the Jobs micro frontend - a JET Composite Component. It is hosted on WebLogic and calls ADF BC REST in the backend. The component is assigned a listener:

The most important part is in the JS script. Here, instead of referencing the JET Composite Component locally, we load it from a remote host. This allows the micro frontend JET Composite Component to be developed and hosted on its own:

The listener refers to the c2 element and calls its method. Element c2 in the main app relates to the second micro frontend:
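The listener code itself is shown only as a screenshot. A hypothetical sketch of the pattern (the element id 'c2' is from the post; the method name and event shape here are invented, and the element lookup is injected so the pattern is visible without a DOM):

```javascript
// Sketch: when a job is selected in the first micro frontend, look up the
// second composite (id "c2" in the master shell) and call a method it exposes.
function makeJobSelectionListener(getElementById) {
  return function (event) {
    var employeesChart = getElementById('c2');
    employeesChart.filterByJob(event.detail.selectedJob); // hypothetical composite method
  };
}

// In the browser:
// var listener = makeJobSelectionListener(document.getElementById.bind(document));
```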

This component is loaded from another host, from Node.JS:

An important hint - for a JET Composite Component to load from a remote host, make sure to add .js to the JET Composite Component script, as highlighted (refer to the source code):

SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts

Amis Blog - Thu, 2018-05-17 10:58

For developers, installing a full-blown local SOA Suite environment has never been a favorite activity (except for a select few). It is time consuming and requires you to download and run various installers one after the other. If you want to start clean (and you haven't taken precautions), you may have to start all over again.

There is a new and easy way to get a SOA Suite environment up and running without downloading any installers in only a couple of commands without depending on scripts provided by any party other than Oracle. The resulting environment is an Oracle Enterprise Edition database, an Admin Server and a Managed Server. All of them running in separate Docker containers with ports exposed to the host. The 3 containers can run together within an 8Gb RAM VM.

The documentation Oracle provides in its Container Registry for the SOA Suite images should be used as a base, but since you will encounter some errors if you follow it, you can use this blog post to help solve them quickly.

A short history: QuickStart and different installers

In the 11g days, a developer who wanted to run a local environment needed to install a database (usually XE), WebLogic Server and the SOA Infrastructure, run the Repository Creation Utility (RCU), and install one or more of SOA, BPM and OSB. In 12c, the SOA Suite QuickStart was introduced. The QuickStart uses an Apache Derby database instead of the Oracle database and lacks features like ESS, a split Admin Server / Managed Server, a NodeManager and several other features, making this environment not really comparable to customer environments. If you wanted to install a standalone version, you still needed to go through all the manual steps or automate them yourself (with response files for the installers and WLST files for domain creation). As an alternative, during this time Oracle has been so kind as to provide VirtualBox images (like this one or this one) with everything pre-installed. For more complex set-ups, Edwin Biemond and Lucas Jellema have provided Vagrant files and blog posts to quickly create a 12c environment.


One of the benefits of running SOA Suite in Docker containers is that the software is isolated in the container. You can quickly remove and recreate domains. Also, in general, Docker is more resource efficient than, for example, VMware, VirtualBox or Oracle VM, and the containers are easily shippable to other environments/machines.


Docker has become very popular and there have been several efforts to run SOA Suite in Docker containers. At first these efforts were by people who created their own Dockerfiles and used the installers and response files to create images. Later Oracle provided their own Dockerfiles, but you still needed the installers from edelivery.oracle.com and had to build the images yourself first. The official Oracle-provided Docker files can be found on GitHub here.

Container Registry

Oracle introduced its Container Registry recently (at the start of 2017). The Container Registry is a Docker registry which contains prebuilt images, not just Dockerfiles. Oracle Database appeared first, then WebLogic and the SOA Infrastructure, and now (May 2018) the complete SOA Suite.

How do you use this? You link your OTN account to the Container Registry. This needs to be done only once. Next you can accept the license agreement for the images you would like to use. The Container Registry contains a useful description with every image on how to use it and what can be configured. Keep in mind that since the Container Registry has recently been restructured, names of images have changed and not all manuals have been updated yet. That is also why you want to tag images so you can access them locally in a consistent way.

Download and run!

For SOA Suite, you need to accept the agreement for the Enterprise Edition database and SOA Suite. You don’t need the SOA Infrastructure; it is part of the SOA Suite image.

docker login -u OTNusername -p OTNpassword container-registry.oracle.com
Pull, tag, create env files

Pulling the images can take a while… (can be hours on Wifi). The commands for pulling differ slightly from the examples given in the image documentation in the Container Registry because image names have recently changed. For consistent access, tag them.

docker pull container-registry.oracle.com/database/enterprise:
docker tag container-registry.oracle.com/database/enterprise: oracle/database:

The database requires a configuration file. However, the settings in this file are not correctly applied by the installation which is executed when a container is created from the image. I've updated the configuration file to reflect what is actually created:

SOA Suite
docker pull container-registry.oracle.com/middleware/soasuite:
docker tag container-registry.oracle.com/middleware/soasuite: oracle/soa:

The Admin Server also requires a configuration file:


As you can see, you can use the same database for multiple SOA schemas since the RCU prefix is configurable.

The Managed Server also requires a configuration file:


Make sure the Managed Server mentioned in the Admin Server configuration file matches the Managed Server in the Managed Server configuration file. The Admin Server installation creates a boot.properties for the Managed Server. If the server name does not match, the Managed Server will not boot.

Create local folders and network

Since you might not want to lose your domain or database files when you remove your container and start it again, you can create a location on your host machine where the domain will be created and the database can store its files. Make sure the user running the containers has userid/groupid 1000 for the below commands to allow the user access to the directories. Run the below commands as root. They differ slightly from the manual since errors will occur if SOAVolume/SOA does not exist.

mkdir -p /scratch/DockerVolume/SOAVolume/SOA
chown 1000:1000 /scratch/DockerVolume/SOAVolume/
chmod -R 700 /scratch/DockerVolume/SOAVolume/

Create a network for the database and SOA servers:

docker network create -d bridge SOANet
Start the database

You’ll first need the database. You can run it by:

#Start the database
docker run --name soadb --network=SOANet -p 1521:1521 -p 5500:5500 -v /scratch/DockerVolume/SOAVolume/DB:/opt/oracle/oradata --env-file /software/db.env.list oracle/database:

This installs and starts the database. The file db.env.list, described above, should be in /software in this case.

SOA Suite

In the examples documented, it is indicated that you can run the Admin Server and the Managed Server in separate containers. You can, and they will start up. However, the Admin Server cannot manage the Managed Server, and the WebLogic Console / EM does not show the Managed Server status. The configuration in the Docker container uses a single machine with a single host name and indicates that both the Managed Server and the Admin Server run there. In order to fix this, I'll suggest two easy workarounds.

Port forwarding. Admin Server and Managed Server in separate containers

You can create a port-forward from the Admin Server to the Managed Server. This allows the WebLogic Console / EM and Admin Server to access the Managed Server at ‘localhost’ within the Docker container on port 8001.

#This command starts an interactive shell which runs the Admin Server. Wait until it is up before continuing!
docker run -i -t --name soaas --network=SOANet -p 7001:7001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list oracle/soa:

#This command starts an interactive shell which runs the Managed Server.
docker run -i -t --name soams --network=SOANet -p 8001:8001 --volumes-from soaas --env-file /software/soaserver.env.list oracle/soa: "/u01/oracle/dockertools/startMS.sh"

#The below commands install and run socat to do the port mapping from Admin Server port 8001 to Managed Server port 8001
docker exec -u root soaas yum -y install socat
docker exec -d -u root soaas /usr/bin/socat TCP4-LISTEN:8001,fork TCP4:soams:8001

The container is very limited. It does not contain executables for ping, netstat, wget, ifconfig, iptables and several other common tools. socat seemed an easier solution than iptables or SSH tunnels for the port forwarding, and it worked nicely.

Admin Server and Managed Server in a single container

An alternative is to run both the Managed Server and the Admin Server in the same container. Here you start the Admin Server with both configuration files so all environment variables are available. Once the Admin Server is started, the Managed Server can be started in a separate shell with docker exec.

#Start Admin Server
docker run -i -t --name soaas --network=SOANet -p 7001:7001 -p 8001:8001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list --env-file /software/soaserver.env.list oracle/soa:
#Start Managed Server
docker exec -it soaas "/u01/oracle/dockertools/startMS.sh"
Start the NodeManager

If you like (but you don't have to), you can start the NodeManager in both set-ups like:

docker exec -d soaas "/u01/oracle/user_projects/domains/InfraDomain/bin/startNodeManager.sh"

The NodeManager runs on port 5658.

How does it look?

A normal SOA Suite environment.

The post SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts appeared first on AMIS Oracle and Java Blog.

Checking if the current user is logged into Application Builder

Andrew Tulley - Thu, 2018-05-17 10:01

If you want conditional logic in your PL/SQL that depends on whether the current user is also logged into the APEX Application Builder, this can be achieved by referencing:




This will have a value if the user is logged into APEX Application Builder, otherwise it will be null.

Related documentation: https://docs.oracle.com/database/apex-18.1/HTMDB/understanding-substitution-strings.htm#GUID-62FE6E65-265A-4BE4-B04B-F90BDA317328

A node-oracledb Web Service in Docker

Christopher Jones - Thu, 2018-05-17 02:28

This post shows how to run a node-oracledb application in a Docker Container. For bonus points, the application connects to an Oracle Database running in a second container.

The steps are the 'show notes' from a recent talk at Oracle Code.

The demo app is a simple Express web service that accepts REST calls.



Oracle Docker images are available from https://store.docker.com/ and also mirrored on https://container-registry.oracle.com

If you're not familiar with Docker, it helps to know basic terminology:

  • Images: Collection of software to be run as a container. Images are immutable. Changes to an image require a new image build.

  • Registry: Place to store and download images.

  • Container: A lightweight standalone, executable piece of software that includes everything required to run it on a host. Containers are spun up from images. Containers are non-persistent. Once a container is deleted, all files inside that container are gone.

  • Docker engine: The software engine running containers.

  • Volumes: Place to persist data outside the container.


For my host, I used Oracle Linux 7, which has the ol7_latest and ol7_uekr4  channels already enabled.

  • Install the Docker engine as the root user by running 'sudo su -', or prefix each command with 'sudo':

    # yum-config-manager --enable ol7_addons
    # yum install docker-engine
    # systemctl enable docker
    # systemctl start docker
  • Sign in to the container registry https://container-registry.oracle.com/ with your (free) Oracle "single sign-on" (SSO) credentials.

  • Accept the license on the container registry.

  • On your OL7 Docker host, log in to the registry. Remember to run Docker commands as 'root':

    # docker login container-registry.oracle.com

    This prompts for your Oracle SSO credentials.

  • Get the Oracle Database and Oracle Instant Client images:

    # docker pull container-registry.oracle.com/database/enterprise:
    # docker pull container-registry.oracle.com/database/instantclient:

    This can take a while. For testing, you may want to pull the smaller, 'slim' version of the database.

  • View the installed images with:

    # docker images
    REPOSITORY                                             TAG   IMAGE ID       CREATED        SIZE
    container-registry.oracle.com/database/enterprise            12a359cd0528   7 months ago   3.44GB
    container-registry.oracle.com/database/instantclient         fda46de41de3   7 months ago   407MB
  • Start the database container:

    # docker run -d --name demodb -P container-registry.oracle.com/database/enterprise:

    The '-P' option maps the ports used, allowing access to the database from outside the container.

  • Check for its health and wait until it shows 'healthy'

    # docker ps
    CONTAINER ID   IMAGE                  COMMAND                  STATUS                 PORTS                          NAMES
    9596bc2345d3   [...]/database/[...]   "/bin/sh -c '/bin/..."   Up 3 hours (healthy)   ...->1521/tcp, ...->5500/tcp   demodb
  • Find the database container's IP address:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" demodb

    You will use this IP in database connect strings in your applications.

  • You can stop and start the container as desired:

    # docker stop demodb
    # docker start demodb

    The data is persistent as long as the container exists. Use 'docker ps --all' to show all containers, running or not.

  • Create a SQL file called createschema.sql:

    SET ECHO ON
    ALTER SESSION SET CONTAINER=orclpdb1;
    DROP USER scott CASCADE;
    CREATE USER scott IDENTIFIED BY tiger;
    GRANT CONNECT, RESOURCE TO scott;
    ALTER USER scott QUOTA UNLIMITED ON USERS;
    DROP TABLE scott.bananas;
    CREATE TABLE scott.bananas (shipment VARCHAR2(4000) CHECK (shipment IS JSON));
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Gita", "ripeness": "All Green", "kilograms": 100 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Ravi", "ripeness": "Full Yellow", "kilograms": 90 }');
    INSERT INTO scott.bananas VALUES ('{ "farmer": "Mindy", "ripeness": "More Yellow than Green", "kilograms": 92 }');
    COMMIT;
    EXIT

    For this demo, you can see I used the Oracle Database JSON data type.

  • Execute createschema.sql in your favorite tool, such as SQL*Plus.

    In my case I actually ran SQL*Plus on my Docker host machine. Cheating a bit on giving details here: I had downloaded the Instant Client Basic and SQL*Plus packages and unzipped them as shown in the Instant Client instructions. I then set my shell to use the SQL*Plus binary:

    # export LD_LIBRARY_PATH=/home/cjones/instantclient_12_2
    # export PATH=/home/cjones/instantclient_12_2:$PATH

    Using the database IP address as shown earlier you can now run the script in SQL*Plus against the container database. In my environment the database IP was

    # sqlplus -l sys/Oradoc_db1@ as sysdba @createschema.sql

    The database password and service name shown are the defaults in the image.


Let's add Node.js to the Instant Client image.

  • Create a sub-directory nodejs-scripts

    # mkdir nodejs-scripts
  • Create a new file 'nodejs-scripts/Dockerfile'. This is the 'recipe' for building a Docker image. Here Node.js is added to the Instant Client image to create a new image usable by any Node.js application. The Node.js 8 package for Oracle Linux is handy.

    The Dockerfile should contain:

    FROM container-registry.oracle.com/database/instantclient:
    ADD ol7_developer_nodejs8.repo /etc/yum.repos.d/ol7_developer_nodejs8.repo
    RUN echo proxy=http://my-proxy.example.com:80 >> /etc/yum.conf
    RUN yum -y update && \
        rm -rf /var/cache/yum && \
        yum -y install nodejs

    The FROM line shows that we base our new image on the Instant Client image.

    If you are not behind a proxy, you can omit the proxy line. Or change the line to use your proxy.

    For quick testing, you may want to omit the 'yum -y update' command.

  • The Dockerfile ADD command copies 'ol7_developer_nodejs8.repo' from the host file system into the image's file system. Create 'nodejs-scripts/ol7_developer_nodejs8.repo' containing:

    [ol7_developer_nodejs8]
    name=Oracle Linux $releasever Node.js 8 Packages for Development and test ($basearch)
    baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer_nodejs8/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
  • Now the new image with Oracle Instant Client and Node.js 8 can be built using this Dockerfile:

    docker build -t cjones/nodejs-image nodejs-scripts

    The 'cjones/nodejs-image' is the image name, not a directory path.

  • You can see the new image has been created:

    # docker images
    REPOSITORY                                             TAG      IMAGE ID       CREATED          SIZE
    cjones/nodejs-image                                    latest   e048b739bb63   29 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise               12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient            fda46de41de3   7 months ago     407MB

The new Node.js image is refined by installing our demo application. This creates another new image that we can later run whenever we want to use the application.

  • Create a sub-directory 'ws-demo-scripts':

    # mkdir ws-demo-scripts
  • Create a new file 'ws-demo-scripts/Dockerfile' containing:

    FROM cjones/nodejs-image
    ENV https_proxy=http://my-proxy.example.com:80
    WORKDIR workdir
    COPY package.json package.json
    COPY server.js server.js
    RUN npm install
    CMD ["npm", "start"]

    The first line shows the new image should be based on the Node.js image 'cjones/nodejs-image' created in the section above.

    Again, adjust the proxy line as needed by your network.

    You can see the Dockerfile copies two files from our host file system into the image. These files are shown below.

    When the image is created, the RUN command will install the Node.js dependencies from package.json.

    When a container starts, the CMD action is taken, which runs 'npm start', in turn invoking the 'main' target in package.json. Looking below to the package.json content, you can see this means 'node server.js' is run.

  • Create a file 'ws-demo-scripts/package.json' containing:

    {
      "name": "banana-farmer",
      "version": "1.0.0",
      "description": "RESTful API using Node.js Express Oracle DB",
      "main": "server.js",
      "author": "Oracle",
      "license": "Apache",
      "dependencies": {
        "body-parser": "^1.18.2",
        "express": "^4.16.0",
        "oracledb": "^2.2.0"
      }
    }

    As you can see, this application installs the body-parser module, the node-oracledb module, and also Express. This demo is an Express web service application. And yes, it is a Banana Farmer web service.

    The default run target of package.json is the application file 'server.js'.

  • Create the application file 'ws-demo-scripts/server.js' containing the contents from here.

    The demo application is just this one file.

  • Build the demo image:

    # docker build -t cjones/ws-demo ws-demo-scripts

    We now have our fourth image which contains our runnable application:

    # docker images
    REPOSITORY                                             TAG      IMAGE ID       CREATED          SIZE
    cjones/ws-demo                                         latest   31cbe6d2ea4e   21 seconds ago   1.51GB
    cjones/nodejs-image                                    latest   e048b739bb63   29 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise               12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient            fda46de41de3   7 months ago     407MB

The Banana Farmer scenario is that shipments of bananas from farmers are recorded. They can have a farmer name, ripeness, and weight. Shipments can be inserted, queried, updated or deleted.

Let's look at a couple of snippets from ws-demo-scripts/server.js.

A connection helper creates a pool of database connections:

oracledb.createPool({
  user          : process.env.NODE_ORACLEDB_USER,
  password      : process.env.NODE_ORACLEDB_PASSWORD,
  connectString : process.env.NODE_ORACLEDB_CONNECTIONSTRING
}, . . .

The credentials are taken from environment variables. When we run the app container we will pass value for those environment variables into the container.
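The route handlers below also use a doGetConnection() helper from server.js; the full file is linked above, but a rough sketch of what such a helper does (parameterized on the pool here so it reads standalone; the real code uses the driver's default pool):

```javascript
// Sketch of a connection helper: take a connection from the pool and turn
// failures into an HTTP 500 response so each route doesn't repeat this logic.
function makeDoGetConnection(pool) {
  return function doGetConnection(res, cb) {
    pool.getConnection(function (err, connection) {
      if (err) {
        res.set('Content-Type', 'application/json');
        res.status(500).send(JSON.stringify({
          status: 500,
          message: "Error getting a DB connection",
          detailed_message: err.message
        }));
        return cb(err);
      }
      cb(null, connection);
    });
  };
}
```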

The application has Express routes for REST GET, POST, PUT and DELETE calls. The code to handle a GET request looks like:

// HTTP method: GET
// URI        : /bananas/FARMER
// Get the banana shipment for FARMER
app.get('/bananas/:FARMER', function (req, res) {
  doGetConnection(res, function (err, connection) {
    if (err)
      return;
    connection.execute(
      "SELECT b.shipment FROM bananas b WHERE b.shipment.farmer = :f",
      { f: req.params.FARMER },
      function (err, result) {
        if (err) {
          res.set('Content-Type', 'application/json');
          res.status(500).send(JSON.stringify({
            status: 500,
            message: "Error getting the farmer's profile",
            detailed_message: err.message
          }));
        } else if (result.rows.length < 1) {
          res.set('Content-Type', 'application/json');
          res.status(404).send(JSON.stringify({
            status: 404,
            message: "Farmer doesn't exist",
            detailed_message: ""
          }));
        } else {
          res.contentType('application/json');
          res.status(200).send(JSON.stringify(result.rows));
        }
        doRelease(connection, "GET /bananas/" + req.params.FARMER);
      });
  });
});

Express makes it easy. It handles the routing to this code when a GET request with the URL '/bananas/<name>', e.g. '/bananas/Gita', is made. This simply binds the URL route parameter containing the farmer's name into the SELECT statement. Binding is important for security and scalability, as you know. The SQL syntax used is the JSON 'dot' notation of Oracle Database 12.2, but it could be rewritten to work with

The bulk of the code is error handling, looking after the cases where there was a processing error or no rows returned. It sends back HTTP status codes 500 or 404, respectively.

The success code path sends back the query output 'result.rows' as a JSON string, with the HTTP success status code 200.


Let's run the application.

  • Create a file 'ws-demo-scripts/envfile.list' with the credentials for the application. Use the IP address of your database container found with the 'docker inspect' command shown previously. In my environment, the database IP address was ''
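    The file contents are not shown above; as a sketch, using the variable names from server.js and the schema created by createschema.sql, envfile.list might contain something like the following (the connect string, in particular the service name, is an assumption - verify it against your database container):

    ```
    NODE_ORACLEDB_USER=scott
    NODE_ORACLEDB_PASSWORD=tiger
    NODE_ORACLEDB_CONNECTIONSTRING=<database container IP>:1521/<service name>
    ```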

  • Start the Node.js web service container

    # docker run -d --name nodejs -P --env-file ws-demo-scripts/envfile.list cjones/ws-demo
  • To recap what's happened, the Docker images are:

    # docker images
    REPOSITORY                                             TAG      IMAGE ID       CREATED          SIZE
    cjones/ws-demo                                         latest   25caede29b17   12 minutes ago   1.51GB
    cjones/nodejs-image                                    latest   138f2b76ffe7   13 minutes ago   1.51GB
    container-registry.oracle.com/database/enterprise               12a359cd0528   7 months ago     3.44GB
    container-registry.oracle.com/database/instantclient            fda46de41de3   7 months ago     407MB

    Two base images were downloaded. An image with Node.js was created from the Instant Client image. Finally, a fourth image 'cjones/ws-demo' with Node.js, Instant Client and the application code was created.

  • We have started database ('demodb') and application containers ('nodejs'):

    # docker ps
    CONTAINER ID   IMAGE                  COMMAND                  STATUS                 PORTS                          NAMES
    2924e1225290   cjones/ws-demo         "npm start"              Up 3 hours                                            nodejs
    9596bc2345d3   [...]/database/[...]   "/bin/sh -c '/bin/..."   Up 3 hours (healthy)   ...->1521/tcp, ...->5500/tcp   demodb

    We found the IP address of the database container, and knew (by reading the container registry documentation) the default credentials of the SYS user.

    We created a schema SCOTT on the database, with a table containing some JSON data.

    An application container was started, with the database application credentials and connection string specified in an environment file outside the container.


Now we can call our application, and it will access the database.

  • Install the browser extension HttpRequester (in Firefox) or Postman (in Chrome).

  • Find the IP of the demo web service container:

    # docker inspect -f "{{ .NetworkSettings.IPAddress }}" nodejs

    In my environment, it was ''. Use this with the port (3000) and various endpoints (e.g. '/bananas/<farmer>') defined in server.js for REST requests.

  • In the HttpRequester or Postman extensions you can make various REST calls.

    Get all shipments:


    Get one farmer's shipment(s):


    New data:

    POST { "farmer" : "CJ", "ripeness" : "Light Green", "kilograms" : 50 }

    Update data:

    PUT { "farmer" : "CJ", "ripeness" : "50% Green, 50% Yellow", "kilograms" : 45 }

    Here's a screenshot of HttpRequester in action doing a GET request to get all banana shipments. On the left, the red boxes show the URL for the '/bananas' endpoint was executed as a GET request. On the right, the response shows the success HTTP status code of 200 and the returned data from the request:

    Screenshot of HttpRequester
  • When you are finished with the containers you can stop them:

    # docker stop demodb
    # docker stop nodejs

    If you haven't tried Docker yet, now is the perfect time! They make deployment and development easy. Oracle's Docker images let you get started with Oracle products very quickly.

Why the XAI Staging is not in the OSB Adapters?

Anthony Shorten - Wed, 2018-05-16 19:52

With the replacement of the Multi-Purpose Listener (MPL) by the Oracle Service Bus (OSB) and the additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support.

One of the most common questions I have received is why there is an absence of an OSB Adapter for the XAI Staging table. Let me explain the logic.

  • One Pass versus Two Passes. The MPL processed its integrations by placing the payload from the integration into the XAI Staging table; it would then process the payload in a second pass. The staging record would be marked as complete or in error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct the data coming in for the ones in error. The OSB Adapters, on the other hand, treat the product as a "black box" (i.e. like a product) and directly call the relevant service (for inbound) or poll the relevant Outbound or NDS table for outbound processing records. This is a single-pass process rather than the multiple passes the MPL used. OSB is far more efficient and scalable than the MPL because of this.
  • Error Hospital. The idea behind the XAI Staging table is that error records remain there for possible correction and reprocessing. This was a feature of the MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL, except that you can configure the hospital to ignore any successful executions, which reduces storage. In fact, OSB has features that let you detect errors anywhere in the process and determine which part of the integration was at fault in a more user-friendly manner. OSB effectively already includes the staging functionality, so adding this to the adapters would just duplicate processing. The only difference is that error correction, if necessary, is done within the OSB rather than the product.
  • More flexible integration model. One of the major reasons to move from the MPL to the OSB is the role the product plays in integration. In the MPL model, any data passed to the product from an external source automatically became the responsibility of the product (that is how most partners implemented it). This meant the source system had no responsibility for the cleanliness of its data, since you had the means of correcting the data as it entered the system. The source system could send bad data over and over, and since you dealt with it in the staging area, that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model: you can continue to use the Error Hospital to keep correcting the data if you wish, or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice; the MPL did not really give you one.

With these considerations in mind, it was not efficient to add an XAI Staging Adapter to OSB: it would duplicate effort and decrease efficiency, which in turn hurts scalability.

What is the Status of the PeopleSoft Interaction Hub?

PeopleSoft Technology Blog - Wed, 2018-05-16 19:07

As many of you may have heard, PeopleSoft has moved several functional capabilities from the PeopleSoft Interaction Hub (formerly the PeopleSoft Portal) to PeopleTools.  Why did we do this?  We wanted to make clustering available to many more customers.  Clustering enables two or more PeopleSoft applications to behave like a seamless system without noticeable boundaries between applications.  We did a webinar on this subject recently.  Clustering formerly required the Interaction Hub, but now when you move to PeopleTools 8.56, you can achieve clustering without the Hub.  There are many customers that could benefit from clustering their PeopleSoft applications, but don’t want the overhead, system complexity, or additional expense of deploying the Hub.  Some simply don’t want to deploy a web site.

That’s great, but what does that mean for customers who have the Hub?  First of all, nothing is taken away.  If you have deployed the Hub and are successful and your users are happy, you can continue to use it.  We will continue to support it.   Customers that still want to deploy a PeopleSoft–centric web site experience for their users may want to continue to use the Hub.  You can also use the Hub with PeopleSoft Fluid applications.  In fact, if you are using the Hub you should be deploying Fluid applications with it.

Our focus on Fluid as the current and future PeopleSoft user interface means, however, that we don’t have plans to enhance the Interaction Hub further.  Enhancements to clustering and common capabilities used across applications will be made directly in PeopleTools so all customers can benefit from them.

Filtering a Table, List or Other Collections in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2018-05-16 18:36

A common use case when working with data is filtering it.

For example, if you have a set of records shown in a table in the UI, the user might want to filter those to show specific rows.

In the video below I show you the basic way to achieve this using the filterCriterion of ServiceDataProvider variables - the type of variable that populates tables and lists.

Basically, each SDP has an attribute called filterCriterion that accepts a structure containing arrays of conditions. You can then map a criterion that leverages, for example, a page-level variable connected to a field.

FilterCriterion setting

In the criteria you'll specify

  • an attribute - this will be the column id (not title) of your business object
  • a value - this is the value you are filtering based on - usually a pointer to a variable in your page
  • An operator (op) - the filterCriterion uses operators such as $eq or $ne - these are based on the Oracle JET AttributeFilterOperator - a full list of the operators is here.

In the video I end up with a filterCriterion that is:

{
  "criteria": [
    {
      "value": "{{ $page.variables.filterVar }}",
      "op": "{{ \"$eq\" }}",
      "attribute": "{{ \"traveler\" }}"
    }
  ],
  "op": "{{ \"$or\" }}"
}

Since you can have multiple criteria, you can also specify an operator to combine them - either an or ($or) or an and ($and) - these are CompoundOperators from Oracle JET.

By the way, this will work automatically for your Business Objects; however, if you want to apply this to data from an arbitrary REST service, that service will need to have a filtering transformation defined for it.
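Extending the same pattern, here is a sketch of a filterCriterion that combines two criteria with $and (the status attribute and the statusVar page variable are illustrative additions, not from the video):

```json
{
  "op": "{{ \"$and\" }}",
  "criteria": [
    {
      "attribute": "{{ \"traveler\" }}",
      "op": "{{ \"$eq\" }}",
      "value": "{{ $page.variables.filterVar }}"
    },
    {
      "attribute": "{{ \"status\" }}",
      "op": "{{ \"$ne\" }}",
      "value": "{{ $page.variables.statusVar }}"
    }
  ]
}
```

With $and, a row is returned only when both conditions match; with $or, matching either condition is enough.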


Categories: Development

Reflecting Changes in Business Objects in UI Tables with Visual Builder

Shay Shmeltzer - Wed, 2018-05-16 11:52

While the quick start wizards in Visual Builder Cloud Service (VBCS) make it very easy to create tables and other UI components and bind them to business objects, it is good to understand what is going on behind the scenes, and what the wizards actually do. Knowing this will help you achieve things that we still don't have wizards for.

For example - let's suppose you created a business object and then created a UI table that shows the fields from that business object in your page. You probably used the "Add Data" quick start wizard to do that. But then you remembered that you need one more column in your business object. However, after you add that column to the BO, you'll notice it is not automatically shown in the UI. That makes sense, since we don't want to automatically show all the fields of a BO in the UI.

But how do you add this new column to the UI?

The table's Add Data wizard will be disabled at this point - so is your only option to drop and recreate the UI table? Of course not!


If you look into the table properties you'll see it is based on a page-level ServiceDataProvider (SDP for short) variable. This is a special type of object that the wizards create to represent collections. If you look at the variable, you'll see that it returns data using a specific type. Note that the type is defined at the flow level - if you look at the type definition you'll see where the fields that make up the object are defined.

Type Definition

It is very easy to add a new field here - just modify the type to include the new column you added to the BO. Make sure you use the column's id - and not its title - when you define the new field in the items array.

Now back in the UI you can easily modify the code of the table to add one more column that will be hooked up to this new field in the SDP that is based on the type.
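As a rough sketch of what such a flow-level type might look like (the getall_Employee key and the field names are illustrative, and the surrounding metadata is simplified - only the items array follows the pattern described above), with the new column added as one more field:

```json
{
  "getall_Employee": {
    "type": {
      "items": [
        {
          "id": "number",
          "name": "string",
          "newColumn": "string"
        }
      ]
    }
  }
}
```

Once the field exists in the type, the table column you add in the UI can bind to it by the same id.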

Sounds complex? It really isn't - here is a 3 minute video showing the whole thing end to end:

As you see, a little understanding of the way VBCS works makes it easy to go beyond the wizards and achieve anything.

Categories: Development

Pizza, Beer, and Dev Expertise at Your Local Meet-up

OTN TechBlog - Wed, 2018-05-16 06:30

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work.

Meet-ups offer a fantastic alternative. They’re easily accessible local events, generally lasting a couple of hours. Meet-ups offer a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities.

One big meet-up advantage is that by virtue of their smaller scale they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups “where we can meet people who work on the same things.” As of this podcast two meet-ups have already taken place, with a third scheduled for the end of May.

Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. “There's so many events going on and there's so many places where developers can go,” Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more “cool.”

Phil’s goal in the effort was to organize smaller events that were “a little less formal, and a bit more convenient.” Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. “It can take quite a bit of effort to organize your day if you’re going to be out for a whole day to attend a user group special interest group event,” Phil says. But local events scheduled in the evening require much less planning in order to attend. “It's great! You can get out and attend these things and you get to talk to people just as much as you would during a day-time event.”

For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies. And those that were available were limited to database and SaaS. “So for me this was a way to get moving and create events for developers,” Ruben says.

What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You’ll learn all that and more in this podcast. Listen!


The Panelists

Jon-Petter Hjulstad
Department Manager, SYSCO AS

Ruben Rodriguez Santiago
Java, ADF, and Cloud Solution Specialist, Avanttic

Luis Weir
CTO, Oracle DU, Capgemini

Phil Wilkins
Senior Consultant, Capgemini

Additional Resources Coming Soon
  • What Developers Need to Know About API Monetization
  • Best Practices for API Development

Never miss an episode! The Oracle Developer Community Podcast is available via:


Oracle Buys DataScience.com

Oracle Press Releases - Wed, 2018-05-16 06:00
Press Release
Oracle Buys DataScience.com
Adds Leading Data Science Platform to the Oracle Cloud, Enabling Customers to Fully Utilize Machine Learning

Redwood Shores, Calif.—May 16, 2018

Oracle today announced that it has signed an agreement to acquire DataScience.com, whose platform centralizes data science tools, projects and infrastructure in a fully-governed workspace.

Data science teams use the platform to organize work, easily access data and computing resources, and execute end-to-end model development workflows. Leading organizations like Amgen, Rio Tinto, and Sonos are using the DataScience.com platform to improve productivity, reduce operational costs and deploy machine learning solutions faster to power their digital transformations.

DataScience.com empowers data scientists to deliver the business-changing insights executives expect in less time with self-service access to open source tools, data and computing resources, while also improving the ability of IT teams to support that work. Oracle embeds Artificial Intelligence (AI) and machine learning capabilities across its software as a service (SaaS) and platform as a service (PaaS) solutions, including big data, analytics and security operations, to enable digital transformations. Together, Oracle and DataScience.com will provide customers with a single data science platform that leverages Oracle Cloud Infrastructure and the breadth of Oracle's integrated SaaS and PaaS offerings to help them realize the full potential of machine learning.

“Every organization is now exploring data science and machine learning as a key way to proactively develop competitive advantage, but the lack of comprehensive tooling and integrated machine learning capabilities can cause these projects to fall short,” said Amit Zavery, Executive Vice President of Oracle Cloud Platform, Oracle. “With the combination of Oracle and DataScience.com, customers will be able to harness a single data science platform to more effectively leverage machine learning and big data for predictive analysis and improved business results.”

“Data science requires a comprehensive platform to simplify operations and deliver value at scale,” said Ian Swanson, CEO of DataScience.com. “With DataScience.com, customers leverage a robust, easy-to-use platform that removes barriers to deploying valuable machine learning models in production. We are extremely enthusiastic about joining forces with Oracle’s leading cloud platform so customers can realize the benefits of their investments in data science.”

More information about this announcement is available at www.oracle.com/datascience.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
Ken Bond
Oracle Investor Relations
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle is currently reviewing the existing DataScience.com product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of DataScience.com’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by DataScience.com or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.

Cautionary Statement Regarding Forward-Looking Statements
This document contains certain forward-looking statements about Oracle and DataScience.com, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of DataScience.com, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or DataScience.com, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or DataScience.com may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or DataScience.com. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor DataScience.com is under any duty to update any of the information in this document.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

Autonomous Oracle Visual Builder Cloud Service

Visual Application Development and Hosting

The new Autonomous Oracle Visual Builder Cloud Service (AVBCS) introduces a new architecture that turns VBCS into an ideal development and hosting...

Categories: DBA Blogs

12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users

Yann Neuhaus - Tue, 2018-05-15 16:24

Mike Dietrich has blogged recently about upuserxt.lst and upobjxt.lst and how to query them with an external table. The first time I’ve seen those ‘.lst’ files, the default extension for sqlplus spool files, I wondered whether they were provided in ?/rdbms/admin on purpose, or if they were just some leftovers from some tests Oracle did before packaging the Oracle Home. Finally, I realized that they were there on purpose and that those ‘.lst’ files are important when upgrading to 12c.

I’ll look at an 18c Oracle Home (/rdbms/admin) in the Oracle Cloud, but that applies to all 12c (and 18c is a 12cR2 patchset). One of the most important little features of 12c is the tagging of Oracle Supplied objects and users. Before 12c it was a nightmare to distinguish system users from application ones. I detailed that in a previous post.

At database creation: _oracle_script

In a newly created 12c database, all the objects and users belonging to the system are flagged with ORACLE_MAINTAINED=Y

Here is an example listing system users and roles:

SQL> select listagg(username,',' on overflow truncate) within group (order by username) from dba_users where oracle_maintained='Y';

LISTAGG(USERNAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYUSERNAME)
-------------------------------------------------------------------
SQL> select listagg(role,',' on overflow truncate) within group (order by role) from dba_roles where oracle_maintained='Y';

And here is an example listing the owners of system objects flagged with ORACLE_MAINTAINED=Y

SQL> select listagg(num||' '||owner,',' on overflow truncate) within group (order by num) from (select owner,count(*) num from dba_objects where oracle_maintained='Y' group by owner);

LISTAGG(NUM||''||OWNER,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYNUM)
--------------------------------------------------------------------

How is this done? That’s easy. All system objects are created by Oracle scripts, such as those called by catalog.sql and catproc.sql during database creation. Those scripts set “_oracle_script”=true before running the DDL, and all objects created while “_oracle_script”=true are flagged as Oracle Maintained.

If, in a lab (not in prod), you create your own object in the same way, they will also be flagged as Oracle Maintained:

SQL> connect / as sysdba
SQL> alter session set "_oracle_script"=true;
Session altered.
SQL> create user FRANCK identified by myself;
User FRANCK created.
SQL> alter user FRANCK quota unlimited on users;
User FRANCK altered.
SQL> create table FRANCK.DEMO(n primary key) as select 1 from dual;
Table FRANCK.DEMO created.
SQL> alter session set "_oracle_script"=false;
Session altered.
SQL> select username,oracle_maintained from dba_users where username='FRANCK';

USERNAME ORACLE_MAINTAINED
-------- -----------------
FRANCK   Y

SQL> select owner,object_name,object_type,oracle_maintained from dba_objects where owner='FRANCK';

OWNER  OBJECT_NAME OBJECT_TYPE ORACLE_MAINTAINED
------ ----------- ----------- -----------------
FRANCK DEMO        TABLE       Y

So, this one is easy. Database creation runs with “_oracle_script”=true and objects and users created when this parameter is set to true are flagged as Oracle Supplied objects.

And during upgrades?

When you upgrade from 11g to 12c you don’t have this Oracle Maintained information. The catupgrd script runs with “_oracle_script”=true, but it does not create all objects. However, the upgraded database ends up with all system objects flagged as Oracle Maintained. This is where upuserxt.lst and upobjxt.lst are used.

When Oracle developers build a new Oracle Home to be shipped, they create a database (including all options, I suppose) and then run the utlupox.sql script. This script lists all Oracle Maintained users and objects, relying only on the flag set during creation, and spools them to the upuserxt.lst and upobjxt.lst files. Those files are then shipped in the Oracle Home (all that under /rdbms/admin).

These .lst files are used when upgrading from pre-12c in order to set the flags on Oracle Maintained objects. The external tables SYS.USERXT (on upuserxt.lst) and SYS.OBJXT (on upobjxt.lst) are created by catupcox.sql and, finally, those tables are read by catuposb.sql to set the Oracle Maintained flag in USER$ and OBJ$. The catuposb.sql script is a bit more complex than that, because some objects can have different names depending on how the database was created.

Note that this information about Oracle Maintained objects, in addition to being very useful for us, is crucial when you further convert the non-CDB to a PDB, because those objects will become metadata links.
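As a practical payoff, once the flags are in place (set at creation time, or by catuposb.sql during an upgrade), separating application users from system ones becomes a simple filter on the same ORACLE_MAINTAINED column queried earlier - a minimal sketch:

```sql
-- Application (non-system) users are simply the ones NOT flagged
-- as Oracle Maintained.
select username
from   dba_users
where  oracle_maintained = 'N'
order  by username;
```

The same predicate on DBA_OBJECTS lists application objects, which is exactly the distinction that was so painful to make before 12c.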


Cet article 12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users est apparu en premier sur Blog dbi services.


Subscribe to Oracle FAQ aggregator