Feed aggregator

impdp logtime=all metrics=y and 12cR2 parallel metadata

Yann Neuhaus - Tue, 2017-11-28 14:35

A quick post to show why you should always use LOGTIME=ALL METRICS=Y when using Data Pump. Just look at an example showing the timestamp in front of each line and a message about each task completed by the worker:

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 9 89 TABLE objects in 5 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 10 74 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 11 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 12 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 13 34 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 14 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 15 108 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 16 90 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 17 82 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 18 40 TABLE objects in 3 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 19 97 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 20 53 TABLE objects in 3 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Here, I was running an import with PARALLEL=20 and I can see exactly how many tables were processed by each worker. You can see it is ‘TABLE’ and not ‘TABLE_DATA’, which is the proof that 12cR2 can import metadata in parallel.

I see no reason not to always use LOGTIME=ALL METRICS=Y, and you will be happy to have this level of detail if something goes wrong.
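For reference, a minimal sketch of an import command using these options (connect string, directory, dump file and log file names are placeholders to adapt to your environment):

impdp system@mydb full=y directory=DP_DIR dumpfile=full_%U.dmp \
      parallel=20 logtime=all metrics=yes logfile=imp_full.log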

 

The article impdp logtime=all metrics=y and 12cR2 parallel metadata appeared first on Blog dbi services.

nVision Performance Tuning: 10. Maintaining Statistics on Non-Partitioned Tree Selector Tables

David Kurtz - Tue, 2017-11-28 10:21
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you do not partition your selector tables, and if you predominantly use static selectors then you can maintain the statistics on the selector table as the new static selectors are created or updated.  PeopleTools table PSTREESELCTL is used to track static selectors.  It is keyed by selector number.  A row is inserted when a new selector is created, and the version number is updated when an existing selector is re-extracted.  Therefore, a trigger on this table can be used to submit a job to refresh the statistics on the tree selector.
CREATE OR REPLACE TRIGGER sysadm.pstreeselector_stats
BEFORE INSERT OR UPDATE ON sysadm.pstreeselctl
FOR EACH ROW
DECLARE
l_jobno NUMBER;
l_cmd VARCHAR2(1000);
l_table_name VARCHAR2(18);
l_suffix VARCHAR2(2);
BEGIN
l_table_name := 'PSTREESELECT'||LTRIM(TO_CHAR(:new.length,'00'));
l_suffix := SUBSTR(l_table_name,-2);
l_cmd := 'dbms_stats.gather_table_stats(ownname=>user,tabname=>'''||l_table_name||''',force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_FROM_'||l_suffix||''',density=>1,force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_TO_'||l_suffix||''',density=>1,force=>TRUE);';
dbms_output.put_line(l_cmd);
dbms_job.submit(l_jobno,l_cmd);
EXCEPTION WHEN OTHERS THEN NULL;
END;
/
When all the selectors exist in a single segment, the optimizer will not correctly calculate the cardinality of the expressions in the nVision queries on the tree selector tables.  Therefore, I have found it necessary to manually set the density on the range columns to 1 to get effective execution plans at least some of the time.
N.B. This trigger should not be used if the selector tables are partitioned.
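Once the submitted job has run, the effect on the range columns can be checked from the dictionary; a quick sketch (PSTREESELECT10 is only an example, substitute the selector length you are interested in):
select column_name, num_distinct, density, last_analyzed
from   user_tab_col_statistics
where  table_name = 'PSTREESELECT10'
and    column_name like 'RANGE%';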


Tech 2017 Agenda

Jonathan Lewis - Tue, 2017-11-28 08:29

As usual it’s hard to pick a personal agenda from the wealth of content available for the UKOUG annual conference, but this is my starting list:

Sunday

13:40 – 14:25 Roger MacNicol: “My query plan says Table Access Full: what happens next ?”

14:40 – 15:30 Gert Poel: “Smart Database Development with PL/SQL and Oracle REST Data Services”

16:10 – 17:00 Kellyn Pot’Vin: “Oracle vs. SQL Server – the War of the Indexes”

17:10 – 18:00 Luiza Koziel: “How to improve your presentations AKA the Tool is Just a Tool… Learn how to use it for a Good Cause”

Monday

9:00 – 9:50  ME! I launch (one stream of) the conference with “Index Statistics and Column Groups”

11:35 – 13:25 Ivica Arsov: “Parallel Execution with Oracle 12c” … there may be some overlap with my 2nd presentation

14:25 – 15:15 ME again! At the CBO round table, where we may answer a few of the more specific questions that came in for the CBO panel.

15:25 – 16:25 Community Keynote – how could one possibly miss Maria Colgan, Connor McDonald and Chris Saxon … (maybe with Tim-Tams ?)

16:55 – 17:45 Michael Salt – “An in-memory paradox – increased I/O”.  I think I know the answer, but this may reveal another way to do it wrong. (Pity to miss Richard Foote on AWRs)

17:55 – 18:45 Franck Pachot – “From Transportable Tablespace to Pluggable Databases”

 

Tuesday

9:00 – 9:50 ME yet again, starting the day with Maria Colgan, Nigel Bayliss, Chris Antognini and Richard Foote on the CBO Panel, with Martin Widlake and Neil Chandler doing the MC bit and making sarky comments.

10:00 – 10:50 Marcin Przepierowski: “Rman – from Beginner to Advanced”, because you’ve always got to review what you thought you knew about recovery (and backup).  It means I have to miss Bryn Llewellyn and Kamil Stawiarski arguing about how to address performance in PL/SQL

11:25 – 12:15 Lucas Jellema: “Intro to Docker Containers & the Oracle Platform – Database, Weblogic and Cloud”.

12:25 – 13:15 Mark Rittman: “How Analytics is changing the World (again)”.

14:20 – 15:05 Stew Ashton: “Meet your Match: Advanced Row Pattern Matching”

15:40 – 16:25 Martin Berger: “Escape from Exadata”.

16:40 – 17:30 Roger MacNicol: “Using Oracle Columnar Technologies across the Information Lifecycle”

Wednesday

9:00 – 9:45 Simon Pane: “Modernizing your DBA Scripts and Backups with the Oracle Scheduler”

10:00 – 10:45 Allan McAleavy: “Moving to an All Flash Array – Dude Where’s my Bottleneck”

11:25 – 12:15 Jason Arneil: “An introduction to Sharding”

12:25 – 13:15 My final session: “Parallel Execution” – if you come to this it’s probably a good idea to see Ivica Arsov on Monday morning as well

14:15 – 15:05 Jaromir D.B. Nemec: “Anomaly Detection in Database Workload”.

 

Of course I may change my mind between now and the start of the event so if you’re feeling deprived (or relieved) that I’m not going to be in your audience – or if you’re feeling pressured that I am – never mind, my agenda isn’t cast in stone and I’ll probably end up at the wrong sessions anyway because I’ve been busy talking to someone without keeping an eye on the time.

 

 


Dataguard: QUIZZ on Snapshot Standby

Yann Neuhaus - Tue, 2017-11-28 07:51

In the Oracle documentation we can find this about the snapshot standby: A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection benefits that are similar to those of a physical standby database.

The concept of the snapshot standby is that we can temporarily convert a physical standby to an updatable database for different purposes and then convert it back to a physical standby. While the database is a snapshot standby, it can be used as a normal read-write database. Afterwards, flashback technology combined with the archived log files is used to convert the snapshot back to a physical standby.
In this blog I have tested some common tasks on a snapshot standby and I describe the results below.

Here is our configuration; Oracle 12.2 is used.
ORCL_SITE: Primary
ORCL_SITE1: Physical Standby
ORCL_SITE2: Physical Standby
ORCL_SITE3: Logical Standby


DGMGRL> show configuration;
.
Configuration - ORCL_DR
.
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
ORCL_SITE3 - Logical standby database
.
Fast-Start Failover: DISABLED
.
Configuration Status:
SUCCESS (status updated 42 seconds ago)
.
DGMGRL>

The first question we can ask is which type of standby can be converted to a snapshot standby.

1- Can we convert a logical standby to a snapshot standby
Let’s convert our logical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE3' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE3" to a Snapshot Standby database, please wait...
Error: ORA-16831: operation not allowed on this member
.
Failed.
Failed to convert database "ORCL_SITE3"
DGMGRL>

Answer: NO we cannot convert a logical standby to a snapshot standby

2- Can we convert a physical standby to a snapshot standby
Let’s convert our physical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE2" to a Snapshot Standby database, please wait...
Database "ORCL_SITE2" converted successfully
DGMGRL>

And we can verify the new status of the database 'ORCL_SITE2':

DGMGRL> show database 'ORCL_SITE2';
.
Database - ORCL_SITE2
.
Role: SNAPSHOT STANDBY
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 1 minute 33 seconds (computed 1 second ago)
Instance(s):
ORCL
.
Database Status:
SUCCESS
.
DGMGRL>
.

Answer: Yes we can convert a physical standby to a snapshot standby.

Now that the physical standby is converted to a snapshot standby, let's continue our quiz.

3- Can we open a snapshot standby in read only mode
Let's shut down our snapshot standby and open it in read only mode

SQL> startup open read only;
ORACLE instance started.
.
Total System Global Area 943718400 bytes
Fixed Size 8627440 bytes
Variable Size 348130064 bytes
Database Buffers 583008256 bytes
Redo Buffers 3952640 bytes
Database mounted.
Database opened.

The status is now open read only

SQL> select db_unique_name,database_role,open_mode from v$database;
.
DB_UNIQUE_NAME DATABASE_ROLE OPEN_MODE
------------------------------ ---------------- --------------------
ORCL_SITE2 SNAPSHOT STANDBY READ ONLY

Answer: Yes, a snapshot standby can be opened in READ ONLY mode

4- Can we create a tablespace on a snapshot standby
Connected to the snapshot standby, let's create a tablespace

SQL> create tablespace mytab_snap datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' size 2M autoextend on maxsize 10M;
.
Tablespace created.

We can verify this in dba_tablespaces:

SQL> select tablespace_name from dba_tablespaces;
.
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
MYTAB_SNAP
TAB_TEST

Answer: Yes we can see that the tablespace MYTAB_SNAP was created.

5- Can we drop a tablespace on a snapshot standby
Let’s drop a tablespace
SQL> drop tablespace TAB_TEST including contents and datafiles;
drop tablespace TAB_TEST including contents and datafiles
*
ERROR at line 1:
ORA-38881: Cannot drop tablespace TAB_TEST on primary database due to
guaranteed restore points.

Answer: No, due to the guaranteed restore point we cannot drop a tablespace on a snapshot standby.

6- Can we extend a datafile on a snapshot standby
We are going to consider two types of datafiles.
• One from tablespace MYTAB_SNAP created on the snapshot standby
• Another from tablespace TAB_TEST created on the primary

FILE_NAME TABLESPACE_NAME
-------------------------------------------------- ------------------------------
/u01/app/oracle/oradata/ORCL/users01.dbf USERS
/u01/app/oracle/oradata/ORCL/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/ORCL/system01.dbf SYSTEM
/u01/app/oracle/oradata/ORCL/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/ORCL/stab_test.dbf TAB_TEST
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf MYTAB_SNAP

Let’s extend first the datafile created on the snapshot standby

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 20M;
.
Database altered.

And then let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 20M;
.
Database altered.

We can verify the new sizes

SQL> select FILE_NAME,sum(BYTES) from dba_data_files group by FILE_NAME;
.
FILE_NAME SUM(BYTES)
-------------------------------------------------- ----------
/u01/app/oracle/oradata/ORCL/users01.dbf 5242880
/u01/app/oracle/oradata/ORCL/undotbs01.dbf 73400320
/u01/app/oracle/oradata/ORCL/system01.dbf 870318080
/u01/app/oracle/oradata/ORCL/sysaux01.dbf 587202560
/u01/app/oracle/oradata/ORCL/stab_test.dbf 20971520
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf 20971520
.
6 rows selected.

Answer: Yes we can extend datafiles on a snapshot standby.

Just note that when the snapshot standby is converted back to a physical standby, the datafile is shrunk back to its previous size.
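For completeness, converting back is a single Broker command (a sketch using the same member name as above); the Broker then uses the guaranteed restore point and the archived logs to turn the database back into a physical standby:

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO PHYSICAL STANDBY;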

7- Can we reduce a datafile on a snapshot standby
Let's now reduce the size of the datafile created on the snapshot standby

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 5M;
.
Database altered.

And let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M;
alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M
*
ERROR at line 1:
ORA-38883: Cannot shrink data file /u01/app/oracle/oradata/ORCL/stab_test.dbf
on primary database due to guaranteed restore points.

Answer: Yes, but we can only reduce the size of datafiles created on the snapshot standby, not of those created on the primary.

8- Can we do a switchover to a snapshot standby
As a snapshot standby is a physical standby which was converted, one may ask if a switchover to a snapshot standby is possible.

DGMGRL> switchover to 'ORCL_SITE2';
Performing switchover NOW, please wait...
Error: ORA-16831: operation not allowed on this member
.
Failed.
Unable to switchover, primary database is still "ORCL_SITE"
DGMGRL>

Answer: No we cannot do a switchover to a snapshot standby.

9- Can we do a failover to a snapshot standby
The same question can be asked about failover.

DGMGRL> connect sys/root@ORCL_SITE2
Connected to "ORCL_SITE2"
Connected as SYSDBA.
DGMGRL> failover to 'ORCL_SITE2';
Converting database "ORCL_SITE2" to a Physical Standby database, please wait...
Operation requires shut down of instance "ORCL" on database "ORCL_SITE2"
Shutting down instance "ORCL"...
Connected to "ORCL_SITE2"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "ORCL" on database "ORCL_SITE2"
Starting instance "ORCL"...
ORACLE instance started.
Database mounted.
Connected to "ORCL_SITE2"
Connected to "ORCL_SITE2"
Continuing to convert database "ORCL_SITE2" ...
Database "ORCL_SITE2" converted successfully
Performing failover NOW, please wait...
Failover succeeded, new primary is "ORCL_SITE2"
DGMGRL>

Answer: Yes, we can do a failover to a snapshot standby, but the failover takes longer than a failover to a physical standby. Indeed, Oracle has to:

• Convert the snapshot standby back to a physical standby
• Apply the archived logs to the physical standby
• And then do the failover

Conclusion: In this blog, we tried to explain some behaviors of the snapshot standby. We hope this article helps.

 

The article Dataguard: QUIZZ on Snapshot Standby appeared first on Blog dbi services.

Are statistics immediately available after creating a table or an index in PostgreSQL?

Yann Neuhaus - Tue, 2017-11-28 06:32

While giving the last PostgreSQL DBA Essentials workshop this question came up: when we create a table or an index, are the statistics available automatically? To be more precise: when we create and load a table in one step and create an index on that table afterwards, do we have the statistics available by default, or do we need to wait for autovacuum to kick in or to analyze manually? Let's see …

First of all let's disable autovacuum so it does not kick off analyze in the background:

postgres=# \! ps -ef | grep autov | grep -v grep
postgres  1641  1635  0 07:08 ?        00:00:00 postgres: MY_CLUSTER: autovacuum launcher process   
postgres=# alter system set autovacuum=off;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# \! ps -ef | grep autov | grep -v grep

Create and populate the table:

postgres=# \! cat a.sql
drop table if exists t;
create table t
as select a.*, md5(a::varchar) from generate_series(1,5000000) a;
postgres=# \i a.sql
psql:a.sql:1: NOTICE:  table "t" does not exist, skipping
DROP TABLE
SELECT 5000000

Create an index:

postgres=# create index i1 on t(a);
CREATE INDEX
postgres=# \d+ t
                                     Table "public.t"
 Column |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+---------+-----------+----------+---------+----------+--------------+-------------
 a      | integer |           |          |         | plain    |              | 
 md5    | text    |           |          |         | extended |              | 
Indexes:
    "i1" btree (a)

Do we have statistics already? Let's check:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No, at least not for the table. What about the index?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No. Let's analyze:

postgres=# analyze t;
ANALYZE
postgres=# analyze i1;
WARNING:  skipping "i1" --- cannot analyze non-tables or special system tables
ANALYZE

Apparently we cannot analyze an index. What do we see now?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

We do see statistics for the table but not for the index. The reason is that “analyze” works on the tables, but not on the indexes. For regular indexes there will be nothing in pg_statistic because that information would be redundant with the underlying table columns. But there will be statistics for function-based indexes:

postgres=# create index i2 on t(lower(a::text));
CREATE INDEX
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

postgres=# analyze t;
ANALYZE
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |       10 |          -1
(1 row)

So, when autovacuum is off we do not get statistics unless we kick off a manual analyze (which is not a surprise). What happens when autovacuum is on?

postgres=# alter system set autovacuum=on;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
postgres=# \i a.sql
DROP TABLE
SELECT 5000000
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

Nope, same picture here. But a few seconds later:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

… statistics are there. Conclusion: when you require current statistics directly after loading a table, you'd better kick off a manual analyze right after. Otherwise autovacuum will take care of that, but not immediately.
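If you want to check when (and by which mechanism) statistics were last gathered, pg_stat_user_tables can be queried; a small sketch for the table t used above:

select relname, last_analyze, last_autoanalyze, analyze_count, autoanalyze_count
from pg_stat_user_tables
where relname = 't';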

 

The article Are statistics immediately available after creating a table or an index in PostgreSQL? appeared first on Blog dbi services.

Oracle Live SQL

Tom Kyte - Mon, 2017-11-27 21:06
Hello, Is there a way I could run explain plan on Oracle Live SQL? Apparently, it gives an error that PLAN_TABLE doesn't exist. I ran utlxplan.sql in my session but still could not execute statement SELECT * FROM TABLE(DBMS_XPLAN.display) W...
Categories: DBA Blogs

Data Compression and Maintenance after archival

Tom Kyte - Mon, 2017-11-27 21:06
Hi , We have implemented archival solution for a client and now exploring further actions. Initial data size was around 25 TB's with major tables partitioned monthly. After archival, the size of active DB is around 13 TB's and archival DB is aroun...
Categories: DBA Blogs

Dynamic Select with Cursor type

Tom Kyte - Mon, 2017-11-27 21:06
How can i get value from sql using dbms_sql type of CURSOR ....COL_TYPE=102 <code>DECLARE RUN_S CLOB; IGNORE NUMBER; SOURCE_CURSOR NUMBER; PWFIELD_COUNT NUMBER DEFAULT 0; L_DESCTBL DBMS_SQL.DESC_TAB2; Z...
Categories: DBA Blogs

JET 4.1.0 Composite - List Item Action and Defferred Loading

Andrejus Baranovski - Mon, 2017-11-27 14:57
I was reviewing JET 4.1.0 features and decided to build a simple composite component. I would like to share some of the things I learned.

The composite component comes with a collapsible UI and an action link. It implements a list item, which can be rendered in any kind of parent UI container:


When an item is expanded, more info is displayed:


Once the user clicks the Open link, a JS call is made and the key of the current item is printed in the background:


Let's take a look at the component metadata. I'm using several properties and one event. Through the event we can call a JS method outside of the composite component, which can be very useful:
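A minimal sketch of what the composite metadata (component.json) could look like; itemKey and itemTitle are assumed property names, while openDetails is the event name used further below:

{
  "name": "list-itemrs",
  "version": "1.0.0",
  "jetVersion": ">=4.1.0",
  "properties": {
    "itemKey":   { "type": "string", "description": "Key of the list item" },
    "itemTitle": { "type": "string", "description": "Text shown in the collapsible header" }
  },
  "events": {
    "openDetails": { "description": "Raised when the Open link is clicked", "bubbles": true }
  }
}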


How is this event used? Take a look at the composite HTML implementation: the click is handled by a JS method, and inside that function the event is initialized:


One more interesting thing: I'm using the JET Defer functionality to render the expandable HTML content only when the item is expanded. This minimizes the client-side load through lazy loading, rendering content only when it is displayed:
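A sketch of the composite view under the same assumptions (an oj-collapsible per item, oj-defer around the expandable content, and a knockout click binding on the Open link):

<!-- list-itemrs-view.html (sketch) -->
<oj-collapsible>
  <span slot="header">
    <oj-bind-text value="[[$properties.itemTitle]]"></oj-bind-text>
  </span>
  <oj-defer>
    <!-- this part is only stamped into the DOM when the item is expanded -->
    <div class="oj-panel">
      ... detail content ...
      <a href="#" data-bind="click: openClick">Open</a>
    </div>
  </oj-defer>
</oj-collapsible>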


The JS method which handles the click creates the event. This allows the event implementation method to be executed outside the composite component. The syntax looks pretty similar to an ADF server listener call:
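A sketch of the corresponding composite view model method; it dispatches a CustomEvent whose name matches the event declared in the metadata (the detail payload is an assumption):

define(['knockout', 'ojs/ojcore', 'ojs/ojcollapsible', 'ojs/ojdefer'],
  function (ko, oj) {
    function ListItemModel(context) {
      var self = this;
      // the composite custom element and its resolved properties
      self.composite = context.element;
      self.properties = context.properties;

      // called by the Open link, raises the composite event
      self.openClick = function (data, event) {
        self.composite.dispatchEvent(new CustomEvent('openDetails', {
          detail: { key: self.properties.itemKey },
          bubbles: true
        }));
      };
    }
    return ListItemModel;
  });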


Naming of the event is important. If the event name is openDetails, then make sure to use on-open-details as the attribute name for the event handler on the composite component interface. Here I define the external method to be invoked when the event is fired:


Here is the code for the event listener JS method in the main view module:
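A sketch of such a listener, assuming the payload shown in the view model sketch above (event.detail carrying the item key):

self.openDetailsListener = function (event) {
  // event.detail carries the payload sent by the composite
  console.log('Open clicked for item key: ' + event.detail.key);
};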


Let's see how JET Defer works at runtime. This is the source from the running JET app: when the list item is rendered initially, there is no content for the expanded block - it only contains the JET Defer block:


After we expand it:


The content is now rendered in the source. This is called lazy loading, and it improves performance, especially for large lists (or tabs, etc.) where a lot of content is rendered:


Now a few hints on how to create a JET Composite with the OJET tooling. Run a command similar to:

sudo ojet create component list-itemrs

This will create a JET Composite with the given name. To be able to run the JET Composite from the index page, add applyBindings to the main.js require block:
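Roughly, the require block in main.js would end up looking like this (a sketch; the bound element id and the application module name depend on the template the project was generated from):

require(['ojs/ojcore', 'knockout', 'jquery', 'appController', 'ojs/ojknockout'],
  function (oj, ko, $, app) {
    $(function () {
      // activate knockout so the composite tag in index.html gets processed
      ko.applyBindings(app, document.getElementById('globalBody'));
    });
  });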


Don't forget to add the JET composite module and the actual composite loader:
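That means two more entries in the same require list; the loader path below follows the layout produced by ojet create component and should be treated as an assumption:

require(['ojs/ojcore', 'knockout', 'jquery', 'appController',
         'ojs/ojknockout',
         'ojs/ojcomposite',                    // composite component support
         'jet-composites/list-itemrs/loader'], // registers the <list-itemrs> element
  function (oj, ko, $, app) {
    // ... applyBindings as shown above ...
  });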


Finally, add your own JET Composite tag to index.html:
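And the tag itself, wired to the listener shown earlier (the attribute names follow the metadata sketch above, so treat them as assumptions):

<list-itemrs item-key="item-001"
             item-title="First list item"
             on-open-details="[[openDetailsListener]]">
</list-itemrs>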


Sample code can be accessed in GitHub repository (run ojet restore) - list-itemrs.

Check and Deploy Secure Configuration for Oracle EBS 12.2 and 12.1

Steven Chan - Mon, 2017-11-27 12:43

The Secure Configuration Console provides a single dashboard for you to review if your Oracle E-Business Suite environment is in compliance with critical secure configuration guidelines. You may already be familiar with the set of secure configuration scripts that can be executed in your environment to check secure configuration. The Secure Configuration Console expands on these checks providing additional features and a new user interface. You can access the Secure Configuration Console from the Configuration Manager tab via the Functional Administrator responsibility.

Using the Secure Configuration Console dashboard, you can:

  • Check the high priority secure configuration items in your environment
  • Review and implement secure configuration recommendations where applicable
  • Click to automatically configure some checks in the console
  • Suppress checks that are not relevant to your system

Which Oracle E-Business Suite Releases include the Secure Configuration Console?

The Secure Configuration Console is available as of Oracle E-Business Suite 12.2.6. We recently enhanced this feature to include a total of 24 checks with the release of Oracle E-Business Suite 12.2.7. This feature has also been back-ported to Oracle E-Business Suite 12.1.3 after applying patch 26090737.

Where can I learn more?

Related Articles

References

Categories: APPS Blogs

DBVISIT from Oracle to Postgres

Yann Neuhaus - Mon, 2017-11-27 09:51

As I regularly work on Oracle and PostgreSQL, I decided to test the replication from Oracle to PostgreSQL using the Dbvisit Replicate tool.

Dbvisit Replicate does not use Oracle LogMiner or triggers but its own mining processes to get the changes when they are written to the redo logs. When a change appears in the redo log, an external file called a PLOG is generated and transferred to the target.


The architecture is quite easy to understand: you have a MINE process on the source server, looking at the redo logs for changed data, and an APPLY process which applies SQL on the target database.

The configuration is easy to implement but must not be underestimated :-)

My configuration is the following:

  • Oracle server named cloud13c, with PSI database version 12.2.0.1
  • Postgres server named pg_essentials_p1 with Postgres version 9.6

At first, we create a user in the Postgres database:

postgres@pg_essentials_p1:/home/postgres/ [PG1] createuser -d -e -E -l -P -r -s dbvrep_admin
Enter password for new role: 
Enter it again: 
CREATE ROLE dbvrep_admin ENCRYPTED 
PASSWORD 'md5e3c4e8f1b4f8e388eef4fe890d6bdb36' SUPERUSER CREATEDB 
CREATEROLE INHERIT LOGIN;

We edit the configuration file /u02/pgdata/postgresql.conf in order to allow non-localhost connections:

postgres@pg1:/u02/pgdata/PG1/ [PG1] cp postgresql.conf postgresql.conf.save
postgres@pg1:/u02/pgdata/PG1/ [PG1] sed -i "s/^#\(listen_addresses = '\)localhost'/\1*'\t/" postgresql.conf

We also enable connections on non-localhost address:

postgres@pg1:/u02/pgdata/PG1/ [PG1] cp pg_hba.conf pg_hba.conf.save
postgres@p1:/u02/pgdata/PG1/ [PG1] echo -e "host\tall\t\tall\t\t0.0.0.0/0\t\tmd5" >> pg_hba.conf

cat pg_hba.conf:

 # TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             192.168.22.201/24       md5
 # IPv6 local connections:
host    all             all             ::1/128                 md5
 host    all             barman          192.168.1.101/24       md5
host    replication     barman_streaming 192.168.1.101/24       md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     postgres                                md5
host    replication     postgres        127.0.0.1/32            md5
host    replication     postgres        ::1/128                 md5
host all       all       0.0.0.0/0      md5

We have to restart the postgres server:

postgres@pg1:/u02/pgdata/PG1/ [PG1] pgrestart
waiting for server to shut down.... done
server stopped
server starting
postgres@pg1:/u02/pgdata/PG1/ [PG1] 2017-07-17 13:52:52.350 CEST
 - 1 - 3106 - 
 - @ LOG:  redirecting log output to logging collector process
2017-07-17 13:52:52.350 CEST - 2 - 3106 - 
 - @ HINT:  Future log output will appear in directory 
"/u01/app/postgres/admin/PG1/pg_log".
 
postgres@pg_essentials_p1:/u02/pgdata/PG1/ [PG1] 
postgres@pg_essentials_p1:/u02/pgdata/PG1/ [PG1] alias | grep pgrestart
alias pgrestart='pg_ctl -D ${PGDATA} restart -m fast'

Then we install dbvisit replicate:

We download the dbvisit_replicate-2.9.00-el5.x86_64.rpm and we install it:

[root@localhost software]# rpm -ivh dbvisit_replicate-2.9.00-el5.x86_64.rpm 
Preparing...                       ################################# [100%]
Updating / installing...
   1:dbvisit_replicate-2.9.00-el5  ################################# [100%]

To make it work properly, I had to modify the sqlnet.ora file as follows in order to avoid the following error message:

ERR-11: Could not connect as dbvrep to database PSI, 
error is ORA-24327: need explicit attach
before authenticating a user (DBD ERROR: OCISessionBegin)
sqlnet.ora:
SQLNET.SQLNET_ALLOWED_LOGON_VERSION=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT =11
SQLNET.ALLOWED_LOGON_VERSION_SERVER =11

Before running dbvrep, be sure you can connect with psql from the Oracle server to the PostgreSQL server! I needed to install a postgres client on the Oracle host and to define the PATH properly.
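A quick way to test the connectivity from the Oracle host is something like the following (a sketch, using the host and the user created above):

psql -h pg1 -p 5432 -U dbvrep_admin -d postgres -c "select version();"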

Finally, by running dbvrep on the Oracle server, you run the setup wizard and enter your configuration settings; this menu is quite easy to understand. The setup wizard consists of 4 steps:

– Step 1: describe databases

– Step 2: Replicate Pairs

– Step 3: Replicated tables

– Step 4: Process Configuration

oracle@localhost:/home/oracle/ora2pg/ [PSI] dbvrep
Initializing......done
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
No DDC file loaded.
Run "setup wizard" to start the configuration wizard or try "help" 
to see all commands available.
dbvrep> setup wizard                                                                              
This wizard configures Dbvisit Replicate.
 
The setup wizard creates configuration scripts, which need to be run 
after the wizard ends. Nochanges to the databases are made before that.
 
The progress is saved every time a list of databases, replications, etc. 
is shown. It will bere-read if wizard is restarted and the same DDC 
name and script path is selected.
Run the wizard now? [Yes]                                                   
Accept end-user license agreement? (View/Yes/No) [No] yes                  
Before starting the actual configuration, some basic information is needed. The DDC name and
script path determines where all files created by the wizard go 
(and where to reread them ifwizard is rerun) and the license key 
determines which options are available for this
configuration.
(DDC_NAME) - Please enter a name for this replication: [] ora2pg                                  
(LICENSE_KEY) - Please enter your license key: [(trial)]                                          
Which Replicate edition do you want to trial (LTD/XTD/MAX): [MAX]                                 
(SETUP_SCRIPT_PATH) - Please enter a directory for location of 
configuration scripts on this     
machine: [/home/oracle/Documents/ora2pg] /home/oracle/ora2pg                                 
 
Network configuration files were detected on this system in these locations:
/u00/app/oracle/network/admin
/u00/app/oracle/product/12.2.0.1/dbhome_1/network/admin
(TNS_ADMIN) - Please enter TNS configuration directory for this machine:                          [/u00/app/oracle/network/admin]                                          
Read 2 described databases from previous wizard run.
 
Step 1 - Describe databases
========================================
The first step is to describe databases used in the replication. 
There are usually two of them
(source and target).
 
Following databases are now configured:
1: Oracle PSI, SYS/***, SYSTEM/***, dbvrep/***, USERS/TEMP, dbvrep/, 
ASM:No, TZ: +02:00
2: Postgres postgres, dbvrep_admin/***, dbvrep_admin/***, dbvrep/***, 
/, dbvrep/, ASM:n/a, TZ: 
Enter the number of the database to modify it, or "add", or "done": [done]                        
Read 1 replication pairs from previous wizard run.
 
Step 2 - Replication pairs
========================================
The second step is to set source and targets for each replication pair.
 
Enter number of replication pair to modify it, or "add", or "done": [done] 1                  
Do you want to "edit" the replication pair or "delete" it? [edit] edit                        
Let's configure the replication pair, selecting source and target.
Following databases are described:
1: PSI#DBVREP (Oracle)
2: DBNAME=POSTGRES;HOST=PG1#DBVREP (Postgres) 
(cannot be source: not an Oracle database)
Select source database: [1]                                                                   
Select target database: [2]                                                                   
Will limited DDL replication be enabled? (Yes/No) [Yes]                                       
Use fetcher to offload the mining to a different server? (Yes/No) [No]                        
Should where clauses (and Event Streaming) include all columns, 
not just changed and PK?      (Yes/No) [No]                                                                            
Would you like to encrypt the data across the network? (Yes/No) [No]                          
Would you like to compress the data across the network? (Yes/No) [No]                         
How long do you want to set the network timeouts. 
Recommended range between 60-300 seconds    [60]                                                                                       
Lock and copy the data initially one-by-one or at a single SCN?
one-by-one : Lock tables one by one and capture SCN
single-scn : One SCN for all tables
ddl-only   : Only DDL script for target objects
resetlogs  : Use SCN from last resetlogs operation
(standby activation, rman incomplete
recovery)
no-lock    : Do not lock tables. Captures previous SCN of oldest active 
transaction. Requires pre-requisite running of pre-all.sh script            (one-by-one/single-scn/ddl-only/resetlogs/no-lock) [single-scn] 
 
What data instantiation script to create?
ddl_file       : DDL file created (APPLY.sql)
ddl_run        : DDL is automatically executed on target
load           : All replicated data is created and loaded automatically
none                                                                                          (ddl_file/ddl_run/load/none) [ddl_run] ddl-file
 
 
Following replication pairs are now configured:
1: PSI (Oracle) ==> postgres (Postgres), DDL: Yes, fetcher: No, 
process suffix: (no suffix),
compression: No, encryption: No, network timeout: 60, prepare type: 
single-scn,:
ddl-run
Enter number of replication pair to modify it, or "add", or "done": [done]                        
Read 1 replication pairs from previous wizard run.
 
Step 3 - Replicated tables
========================================
The third step is to choose the schemas and tables to be replicated. 
If the databases arereachable, the tables are checked for existence, 
datatype support, etc., schemas are queried for tables. 
Note that all messages are merely hints/warnings and may be ignored 
if issues are rectified before the scripts are actually executed.
 
Following tables are defined for replication pairs:
1: PSI (Oracle) ==> postgres (Postgres), DDL: Yes, suffix: (no suffix), 
prepare: single-scn
  PSI(tables)
Enter number of replication pair to modify it, or "done": [done]                                  
Read 2 replication pairs from previous wizard run.
 
Step 4 - Process configuration
========================================
The fourth step is to configure the replication processes for each 
replication.
 
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done] 1                                         
Fully qualified name of the server for the process (usually co-located 
with the database, unless  mine is offloaded using fetcher): [cloud13c]                                                     
Server type (Windows/Linux/Unix): [Linux]                                                         
Enable email notifications about problems? (Yes/No) [No]                                          
Enable SNMP traps/notifications about problems? (Yes/No) [No]                                     
Directory with DDC file and default where to create log files etc. 
(recommended: same as global   setting, if possible)? [/home/oracle/ora2pg]                                                    
Following settings were pre-filled with defaults or your reloaded settings:
----------------------------------------
[MINE_REMOTE_INTERFACE]: Network remote interface: cloud13c:7901 
[MINE_DATABASE]: Database TNS: PSI 
[TNS_ADMIN]: tnsnames.ora path: /u00/app/oracle/network/admin 
[MINE_PLOG]: Filemask for generated plogs: /home/oracle/ora2pg/mine/%S.%E 
(%S is sequence, %T thread, %F original filename (stripped extension), 
%P process type, %N process name, %E default extension)
[LOG_FILE]: General log file: /home/oracle/ora2pg/log/dbvrep_%N_%D.%E 
[LOG_FILE_TRACE]: Error traces: 
/home/oracle/ora2pg/log/trace/dbvrep_%N_%D_%I_%U.%E 
 
Checking that these settings are valid...
Do you want to change any of the settings? [No]                                                   
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done] 2                                         
Fully qualified name of the server for the process (usually co-located 
with the database, unless  mine is offloaded using fetcher): [pg1]                                                          
Server type (Windows/Linux/Unix): [Linux]                                                         
Enable email notifications about problems? (Yes/No) [No]                                          
Enable SNMP traps/notifications about problems? (Yes/No) [No]                                     
Directory with DDC file and default where to create log files etc. 
(recommended: same as global   setting, if possible)? [/home/oracle/ora2pg]                                                    
Following settings were pre-filled with defaults or your reloaded settings:
----------------------------------------
[APPLY_REMOTE_INTERFACE]: Network remote interface: pg1:7902 
[APPLY_DATABASE]: Database Postgres connection string: dbname=postgres;
host=pg1 
[TNS_ADMIN]: tnsnames.ora path: /u00/app/oracle/network/admin 
[APPLY_SCHEMA]: Dbvisit Replicate database (schema): dbvrep 
[APPLY_STAGING_DIR]: Directory for received plogs: /home/oracle/ora2pg/apply 
[LOG_FILE]: General log file: /home/oracle/ora2pg/log/dbvrep_%N_%D.%E 
[LOG_FILE_TRACE]: Error traces: 
/home/oracle/ora2pg/log/trace/dbvrep_%N_%D_%I_%U.%E 
 
Checking that these settings are valid...
Do you want to change any of the settings? [No]                                                   
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done]                                           
Created file /home/oracle/ora2pg/ora2pg-APPLY.ddc.
Created file /home/oracle/ora2pg/ora2pg-MINE.ddc.
Created file /home/oracle/ora2pg/config/ora2pg-setup.dbvrep.
Created file /home/oracle/ora2pg/config/ora2pg-dbsetup_PSI_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-dbsetup_DBNAME_POSTGRES_HOST_PG1_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-grants_PSI_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-grants_DBNAME_POSTGRES_HOST_PG1_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-onetime.ddc.
Created file /home/oracle/ora2pg/start-console.sh.
Created file /home/oracle/ora2pg/ora2pg-run-cloud13c.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-start-MINE.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-stop-MINE.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-dbvrep-MINE.sh.
Created file /home/oracle/ora2pg/scripts/systemd-dbvrep-MINE_ora2pg.service.
Created file /home/oracle/ora2pg/scripts/upstart-dbvrep-MINE_ora2pg.conf.
Created file /home/oracle/ora2pg/ora2pg-run-pg1.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-start-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-stop-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-dbvrep-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/systemd-dbvrep-APPLY_ora2pg.service.
Created file /home/oracle/ora2pg/scripts/upstart-dbvrep-APPLY_ora2pg.conf.
Created file /home/oracle/ora2pg/Nextsteps.txt.
Created file /home/oracle/ora2pg/ora2pg-all.sh.
============================================================================
Dbvisit Replicate wizard completed
Script /home/oracle/ora2pg/ora2pg-all.sh created. 
This runs all the above created scripts. Please exit out of dbvrep, 
review and run script as current user to setup and start Dbvisit Replicate.
============================================================================
Optionally, the script can be invoked now by this wizard.
Run this script now? (Yes/No) [No]                                          dbvrep> exit

As suggested at the end of the setup wizard, we run the ora2pg-all.sh script:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . ora2pg-all.sh 
Setting up Dbvisit Replicate configuration
Configure database PSI...
This check fails if the DBID is not the expected one...
Ok, check passed.
Configure database dbname=postgres
Object grants for database PSI...
Object grants for database dbname=postgres
Setting up the configuration
Initializing......done
DDC loaded from database (0 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/config/ora2pg-onetime.ddc loaded.
MINE: Cannot determine Dbvisit Replicate dictionary version. (no
dictionary exists)
APPLY: Cannot determine Dbvisit Replicate dictionary version. (no
dictionary exists)
dbvrep> #clear the no-DDC-DB-available warning
dbvrep> process clear previous warnings
dbvrep> set ON_WARNING SKIP
Variable ON_WARNING set to SKIP for process *.
dbvrep> set ON_ERROR EXIT
Variable ON_ERROR set to EXIT for process *.
dbvrep> 
dbvrep> # Configuring default processes
dbvrep> choose process MINE
Process type MINE set to: MINE.
dbvrep> choose process APPLY
Process type APPLY set to: APPLY.
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> PROCESS SETUP MINE DROP DICTIONARY
0 dictionary objects dropped.
dbvrep> PROCESS SETUP MINE CREATE DICTIONARY
dbvrep> PROCESS SETUP MINE LOAD DICTIONARY
Oldest active transaction SCN: 2054212 (no active transaction)
Supplemental logging on database set.
dbvrep> PROCESS SETUP APPLY DROP DICTIONARY
0 dictionary objects dropped.
dbvrep> PROCESS SETUP APPLY CREATE DICTIONARY
dbvrep> PROCESS SETUP APPLY LOAD DICTIONARY
dbvrep> PROCESS SETUP PAIR MINE AND APPLY
Applier SCN set (start=2054228, current=2054228).
dbvrep> SET APPLY.INSTANTIATE_SCN NOW
Variable INSTANTIATE_SCN set to NOW for process APPLY.
dbvrep> SET MINE._PREPARE_SUPLOG_TYPE PK
Variable _PREPARE_SUPLOG_TYPE set to PK for process MINE.
dbvrep> EXCLUDE CREATE TABLE %.DBMS_TABCOMP_TEMP_UNCMP #Ignore tables
created by Compression Advisor
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.DBMS_TABCOMP_TEMP_CMP #Ignore tables
created by Compression Advisor
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.SCHEDULER$_% #Ignore tables created by
Oracle scheduler (also used by schema/full expdp/impdp)
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP1$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP2$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP3$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP4$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> memory_set IGNORE_APPLY_DDL_DIFFERENCES Yes
Variable IGNORE_APPLY_DDL_DIFFERENCES set to YES for process *.
dbvrep> SET PREPARE_SCHEMA_EXCEPTIONS none
Variable PREPARE_SCHEMA_EXCEPTIONS set to none for process *.
dbvrep> PROCESS SUPPLEMENTAL LOGGING SCHEMA "PSI" ENABLE PRIMARY KEY
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> PROCESS WAIT_SCN_FLIP
Waited 1 seconds until scn_to_timestamp changed.
dbvrep> #single-scn instantiation: lock all tables and schemas
dbvrep> PROCESS LOCK SCHEMAS "PSI"
Locking all schemas.
...locked 2 of 2 tables from PSI schema.
Lock done.
dbvrep> #single-scn instantiation: unlock all tables and schemas, but
keep the SCN
dbvrep> PROCESS LOCK RELEASE LOCKS
Engine locks released.
dbvrep> 
dbvrep> #prepare the tables (we use OFFLINE as neither MINE nor APPLY
is running; with OFFLINE we won't wait on network timeout)
dbvrep> PREPARE OFFLINE SCHEMA "PSI"
Table PSI.EMPLOYE instantiated at SCN 2056800
Table PSI.OFFICE instantiated at SCN 2056800
dbvrep> 
dbvrep> #single-scn instantiation: unlock all tables and schemas,
forget the SCN (so it does not affect any further PREPARE statements)
dbvrep> PROCESS LOCK CLEAR SCN
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> #prepare script for instantiation
dbvrep> PROCESS PREPARE_DP WRITE DDL_FILE FILE
/home/oracle/ora2pg/APPLY.sql USERID SYSTEM/manager@PSI
File /home/oracle/ora2pg/APPLY.sql has been written successfully.
Created DDL script /home/oracle/ora2pg/APPLY.sql.
dbvrep> create ddcdb from ddcfile
DDC loaded into database (430 variables).
dbvrep> load ddcdb
DDC loaded from database (430 variables).
dbvrep> set ON_WARNING SKIP
Variable ON_WARNING set to SKIP for process *.
dbvrep> set ON_ERROR SKIP
Variable ON_ERROR set to SKIP for process *.
OK-0: Completed successfully.
WARN-1850: No DDC DB available, dictionary table does not exist.
These steps are required after the ora2pg-all.sh script runs:
 
1) Create the necessary directory(ies) on the servers:
cloud13c: /home/oracle/ora2pg
pg1: /home/oracle/ora2pg
 
2) Copy the DDC files to the server(s) where the processes will run:
pg1: /home/oracle/ora2pg/ora2pg-APPLY.ddc
cloud13c: /home/oracle/ora2pg/ora2pg-MINE.ddc
 
Ensure that the parameter TNS_ADMIN (in the ddc file) is pointing to the correct TNS_ADMIN path on each of the servers.
 
3) Review that path to dbvrep executable is correct in the run scripts:
/home/oracle/ora2pg/ora2pg-run-cloud13c.sh
/home/oracle/ora2pg/ora2pg-run-pg1.sh
 
4) Copy the run script to the server(s) where the processes will run:
cloud13c: /home/oracle/ora2pg/ora2pg-run-cloud13c.sh
pg1: /home/oracle/ora2pg/ora2pg-run-pg1.sh
 
5) Ensure firewall is open for listen interfaces 0.0.0.0:7902, 0.0.0.0:7901 used by the processes.
 
6) Make sure the data on apply are in sync as of time when setup was run.
Scripts for Data Pump/export/DDL were created as requested:
 
Create referenced database links (if any) before running the scripts.
/home/oracle/ora2pg/APPLY.sql
 
7) Start the replication processes on all servers:
cloud13c: /home/oracle/ora2pg/ora2pg-run-cloud13c.sh
pg1: /home/oracle/ora2pg/ora2pg-run-pg1.sh
 
8) Start the console to monitor the progress:
/home/oracle/ora2pg/start-console.sh

As explained, you have to copy two files to the postgres server: /home/oracle/ora2pg/ora2pg-APPLY.ddc and /home/oracle/ora2pg/ora2pg-run-pg1.sh

As I chose the ddl_file option, we first have to create the tables on the postgres server. In order to do this we can use the APPLY.sql file generated on the Oracle server.

The next step consists of running the MINE process on the Oracle server:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . ora2pg-run-cloud13c.sh 
Initializing......done
DDC loaded from database (430 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/ora2pg-MINE.ddc loaded.
Starting process MINE...started

And we launch the APPLY process on the postgres server:

postgres@pg_essentials_p1:/home/oracle/ora2pg/ [PG1] . ora2pg-run-pg1.sh 
Initializing......done
DDC loaded from database (431 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/ora2pg-APPLY.ddc loaded.
Starting process APPLY...Created directory /home/oracle/ora2pg/ddc_backup
Created directory /home/oracle/ora2pg/log/
Created directory /home/oracle/ora2pg/log/trace/
Created directory /home/oracle/ora2pg/apply
started

Initially I had two tables in my PSI oracle database belonging to the psi schema: EMPLOYE and OFFICE. I used the APPLY.sql script to create the tables in the postgres environment.

To visualize the activity we run start-console.sh on the Oracle server:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . start-console.sh 
Initializing......done
DDC loaded from database (431 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
 
| Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 120 and SCN 2060066 (11/07/2017 15:27:57).
APPLY is running. Currently at plog 120 and SCN 2060021 (11/07/2017 15:27:45).
Progress of replication ora2pg:MINE->APPLY: total/this execution
-------------------------------------------------------------------------------------------------
PSI.EMPLOYE/psi.employe:      100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:07/11/2017 15:20:06/OK
PSI.OFFICE/psi.office:        100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:07/11/2017 15:21:36/OK
-------------------------------------------------------------------------------------------------
2 tables listed.

And we can validate that each insert in the employe or office table is replicated on the postgres server:

From the postgres database:

(postgres@[local]:5432) [postgres] > select * from psi.employe;
 name  | salary 
-------+--------
 Larry |  10000
 Bill  |   2000
(2 rows)

From the Oracle server:

SQL> insert into employe values ('John', 50000);
 
1 row created.
 
SQL> commit;
 
Commit complete.

The console gives us correct information:

/ Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 120 and SCN 2075526 (11/07/2017 16:44:17).
APPLY is running. Currently at plog 120 and SCN 2075494 (11/07/2017 16:44:08).
Progress of replication ora2pg:MINE->APPLY: total/this execution
-------------------------------------------------------------------------------------------------
PSI.EMPLOYE/psi.employe:      100%  Mine:3/3             Unrecov:0/0         Applied:3/3         Conflicts:0/0       Last:07/11/2017 16:18:41/OK
PSI.OFFICE/psi.office:        100%  Mine:3/3             Unrecov:0/0         Applied:3/3         Conflicts:0/0       Last:07/11/2017 15:37:02/OK
-------------------------------------------------------------------------------------------------
2 tables listed.

And the result is applied on the postgres database:
(postgres@[local]:5432) [postgres] > select * from psi.employe;
 name  | salary 
-------+--------
 Larry |  10000
 Bill  |   2000
 John  |  50000
(3 rows)
 
As we previously chose the single-scn option and a DDL-file based instantiation, we had to run the APPLY.sql script from the Oracle server in order to create the tables on the postgres side. You can also choose, in Step 2 of the configuration wizard, the load option (all replicated data is created and loaded automatically):
Lock and copy the data initially one-by-one or at a single SCN?
one-by-one : Lock tables one by one and capture SCN
single-scn : One SCN for all tables
ddl-only   : Only DDL script for target objects
resetlogs  : Use SCN from last resetlogs operation (standby activation, rman incomplete
recovery)
no-lock    : Do not lock tables. Captures previous SCN of oldest active transaction. Requires
pre-requisite running of pre-all.sh script                                                    (one-by-one/single-scn/ddl-only/resetlogs/no-lock) [single-scn] 
 
What data instantiation script to create?
ddl_file       : DDL file created (APPLY.sql)
ddl_run        : DDL is automatically executed on target
load           : All replicated data is created and loaded automatically
none                                                                                          (ddl_file/ddl_run/load/none) [ddl_run] load
Do you want to (re-)create the tables on target or keep them (they are already created)?      (create/keep) [keep] create

In this case you can see that each Oracle table is replicated to the Postgres server.

From the oracle server:

SQL> create table salary (name varchar2(10)); 
 
Table created.
 
SQL> insert into salary values ('Larry');
 
1 row created.
 
SQL> commit;
 
Commit complete.

The dbvisit console displays correct information:

\ Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 135 and SCN 2246259 (11/27/2017 14:44:24).
APPLY is running. Currently at plog 135 and SCN 2246237 (11/27/2017 14:44:18).
Progress of replication replic:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------
REP.SALARY:                   100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:27/11/2017 14:01:25/OK
---------------------------------------------------------------------------------------------
1 tables listed.

From the postgres server:

(postgres@[local]:5432) [postgres] > select * from rep.salary;
 name  
-------
 Larry
(1 row)

The plog files generated on the postgres server contain the strings we need:

The plog files are generated on the postgres server in the directory /home/oracle/replic/apply

-bash-4.2$ ls
122.plog.gz  124.plog.gz  126.plog  128.plog.gz  130.plog.gz  132.plog.gz  134.plog
123.plog.gz  125.plog.gz  127.plog  129.plog.gz  131.plog.gz  133.plog.gz  135.plog
-bash-4.2$ strings 135.plog | grep -l larry
-bash-4.2$ strings 135.plog | grep -i larry
Larry
-bash-4.2$ strings 135.plog | grep -i salary
SALARY
create table salary (name varchar2(10))
SALARY
SALARY

Despite some problems at the beginning of my tests, the replication from Oracle to PostgreSQL is working fine and fast. There are many possibilities with Dbvisit Replicate that I will try to test in the following weeks.

 

 

 

 

 

 

The article DBVISIT from Oracle to Postgres appeared first on Blog dbi services.

sys password change and orapwd file

Tom Kyte - Mon, 2017-11-27 02:46
Hi Tom, What is the relation between SYS user password and orapwd file ? What are the various methods to change the SYS user password. If I change the SYS user password using from sqlplus use password command or use alter user ...
Categories: DBA Blogs

SYS, SYSDBA, SYSOPER, SYSTEM

Tom Kyte - Mon, 2017-11-27 02:46
Hi I am very new to Oracle. I have installed Oracle 10g compatible with Windows Vista. I am confused about what exactly SYS, SYSDBA, SYSOPER and SYSTEM are. How do they differ and what is the specific purpose of these automatically created accounts w...
Categories: DBA Blogs

Batch History Portal - Driving the Portal with a flexible search

Anthony Shorten - Sun, 2017-11-26 22:01

In a previous post, I illustrated a zone that displays batch history. It is part of a prototype I am working on to illustrate some advanced UI features of the ConfigTools scripting language and zones. The zone looked like this:

Example Batch History Portal

Many people asked me for the zone definition, so here are the steps I took to create the zone:

  • I created a service script, called CMLOSM, that returns the full message from the Batch Level Of Service:

10: edit data
     if ("parm/batchControlId = $BLANK")
           terminate;
     end-if;
     move "parm/batchControlId" to "BatchLevelOfService/input/batchControlId";
     //
     // Get Level Of Service
     //
     invokeBS 'F1-BatchLevelOfService' using "BatchLevelOfService";
     move "BatchLevelOfService/output/levelOfService" to "parm/levelOfService";
     //
     // Get Level Of Service Description
     //
     move 'F1_BATCH_LEVEL_OF_SERVICE_FLG' to "LookupDescription/fieldName";
     move "BatchLevelOfService/output/levelOfService" to "LookupDescription/fieldValue";
     invokeBS 'F1-GetLookupDescription' using "LookupDescription";
     //
     // Get Message
     //
     move "BatchLevelOfService/output/messageCategory" to "ReturnMessage/input/messageCategory";
     move "BatchLevelOfService/output/messageNumber" to "ReturnMessage/input/messageNumber";
     move '0' to "ReturnMessage/input/messageParmCollCount";
     move "$LANGUAGE" to "ReturnMessage/input/language";
     //
     // Set Substitution Parms.. I have only coded 4 for now
     //
     if ("string(BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue" to "ReturnMessage/input/messageParms/messageParm1";
        move '1' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue" to "ReturnMessage/input/messageParms/messageParm2";
         move '2' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue" to "ReturnMessage/input/messageParms/messageParm3";
        move '3' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue" to "ReturnMessage/input/messageParms/messageParm4";
        move '4' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     //
     // Compile the Message
     //
     invokeBS 'F1-ReturnMessage' using "ReturnMessage";
     move "ReturnMessage/output/expandedMessage" to "parm/fullMessage";
end-edit;

Schema:

<schema>
    <batchControlId dataType="string"/>  
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>  
    <levelOfServiceDesc mdField="LOS_DESC"/>  
    <fullMessage dataType="string"/>
</schema>

Data Areas:

Schema Type         Object                     Data Area Name
Business Service    F1-BatchLevelOfService     BatchLevelOfService
Business Service    F1-GetLookupDescription    LookupDescription
Business Service    F1-ReturnMessage           ReturnMessage
  • I created a script, called CMCOLOR, that sets the color for the level of service:

10: move 'black' to "parm/foreColor";
11: move 'white' to "parm/bgColor";
20: edit data
     if ("parm/levelOfService = 'DISA'")
         move 'white' to "parm/foreColor";
         move '#808080' to "parm/bgColor";
     end-if;
end-edit;
30: edit data
     if ("parm/levelOfService = 'ERRO'")
         move 'white' to "parm/foreColor";
         move 'red' to "parm/bgColor";
     end-if;
end-edit;
40: edit data
     if ("parm/levelOfService = 'NORM'")
         move 'white' to "parm/foreColor";
         move 'green' to "parm/bgColor";
     end-if;
end-edit;
50: edit data
     if ("parm/levelOfService = 'WARN'")
         move 'black' to "parm/foreColor";
         move 'yellow' to "parm/bgColor";
     end-if;
end-edit;

Schema:

<schema>
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG" dataType="lookup" lookup="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>  
    <foreColor dataType="string" mdField="COLOR"/>  
    <bgColor dataType="string" mdField="BG_COLOR"/>
</schema>

  • I created a script, called CMLOSRF, to post-process the records for advanced filtering:

     if ("string(parm/levelOfServiceFilter) = $BLANK")
       if ("parm/hideDisabled = $BLANK")
           move 'true' to "parm/result";
           terminate;
       end-if;
       move 'true' to "parm/result";
       if ("string(parm/hideDisabled) = 'Y'")
         if ("string(parm/levelOfService) = 'DISA'")
            move 'false' to "parm/result";
            terminate;
         end-if;
       end-if;
       terminate;
     end-if;
     move 'false' to "parm/result";
     if ("parm/levelOfServiceFilter = parm/levelOfService")
           move 'true' to "parm/result";
     end-if;

Schema:

<schema>
    <levelOfService dataType="string"/>  
    <levelOfServiceFilter dataType="string"/>  
    <hideDisabled/>  
    <result dataType="string"/>
</schema>

  • I then built a zone called CMBH01 which has the following attributes:
Parameter                     Value
Description                   Batch History Query
Zone Type                     F1-DE
Application Service           (Choose an appropriate one)
Width                         Full
Height Of Report              60
Display Row Number Column     false
User Filter 1                 label=BATCH_CD likeable=S divide=below
User Filter 2                 label=LEVEL_OF_SERVICE type=LOOKUP lookup=F1_BATCH_LEVEL_OF_SERVICE_FLG
User Filter 3                 label='Hide Disabled' type=LOOKUP lookup=F1_YESNO_FLG divide=below
User Filter 4                 label=F1_BATCH_CTGY_FLG type=LOOKUP lookup=F1_BATCH_CTGY_FLG
User Filter 5                 label=F1_BATCH_CTRL_TYPE_FLG type=LOOKUP lookup=F1_BATCH_CTRL_TYPE_FLG
No SQL Execute                nosql=IGNORE
Initial Display Columns       C1 C2 C8 C5 C12 C10 C13 C14
SQL 1 Broadcast Columns       BATCH_CD=C1
SQL Statement 1               SELECT UNIQUE I.BATCH_CD, B.F1_BATCH_CTRL_TYPE_FLG, B.F1_BATCH_CTGY_FLG, B.LAST_UPDATE_DTTM, B.NEXT_BATCH_NBR
                              FROM CI_BATCH_INST I, CI_BATCH_CTRL B
                              WHERE I.BATCH_CD = B.BATCH_CD
                              [ (F1) AND I.BATCH_CD LIKE :F1]
                              [ (F4) AND B.F1_BATCH_CTGY_FLG = :F4]
                              [ (F5) AND B.F1_BATCH_CTRL_TYPE_FLG = :F5]
Column 1 for SQL 1            source=SQLCOL sqlcol=BATCH_CD label=BATCH_CD
Column 2 for SQL 1            source=FKREF fkref='F1-BTCCT' input=[BATCH_CD=BATCH_CD] label=BATCH_CD_DESCR
Column 3 for SQL 1            source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1] output=output/levelOfService suppress=true suppressExport=true
Column 4 for SQL 1            source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_LEVEL_OF_SERVICE_FLG' fieldValue=C3] label=LEVEL_OF_SERVICE color=C6 bgColor=C7 output=description suppress=true suppressExport=true
Column 5 for SQL 1            source=SS ss='CMLOSM' input=[batchControlId=C1] label=LEVEL_OF_SERVICE_REASON output=fullMessage
Column 6 for SQL 1            source=SS ss='CMCOLOR' input=[levelOfService=C3] label=COLOR output=foreColor suppress=true suppressExport=true
Column 7 for SQL 1            source=SS ss='CMCOLOR' input=[levelOfService=C3] label=BG_COLOR output=bgColor suppress=true suppressExport=true
Column 8 for SQL 1            source=SPECIFIED spec=['<div style=" font-weight:bold; background-clip: content-box; border-radius: 10px; padding: 2px 8px; text-align: center; background-color:' C7 '; color:' C6 ';">' C4 '</div>'] label=LEVEL_OF_SERVICE
Column 9 for SQL 1            source=SQLCOL sqlcol=2 label=F1_BATCH_CTRL_TYPE_FLG suppress=true suppressExport=true
Column 10 for SQL 1           source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_CTRL_TYPE_FLG' fieldValue=C9] label=F1_BATCH_CTRL_TYPE_FLG output=description
Column 11 for SQL 1           source=SQLCOL sqlcol=3 label=F1_BATCH_CTGY_FLG suppress=true suppressExport=true
Column 12 for SQL 1           source=BS bs='F1-GetLookupDescription' input=[fieldName='F1_BATCH_CTGY_FLG' fieldValue=C11] label=F1_BATCH_CTGY_FLG output=description
Column 13 for SQL 1           source=SQLCOL sqlcol=4 label='Last Executed' type=DATE/TIME
Column 14 for SQL 1           source=SQLCOL sqlcol=5 label=NEXT_BATCH_NBR type=NUMBER dec=0
Allow Row Service Script 1    ss=CMLOSRF input=[levelOfServiceFilter=F2 levelOfService=C3 hideDisabled=F3] output=result

Saving the zone and adding it to a menu will make it available so it can be invoked from that menu. Make sure the Application Service you use is connected to the users via a user group so that users can access the zone.

Understanding the solution

I want to also make you understand a few of the decisions I made in building this zone up:

  • The zone type was just a personal choice (F1-DE). In a typical use case you would display the batch controls you favor using the filters. By using F1-DE, the SQL is run without asking for filters first as I would assume you would start with a full list and then using filters to refine what you wanted to see. If you got to a smaller subset you can use the Save View functionality to set those as your preferred filters. In other zone types you can filter first and then display the records, it is up to your personal preferences and business requirements.
  • The solution was built up over time. I started with some basic SQL and then started looking at scripting to reformat and provide advanced functionality in the zone. This is a good example of development of zones. You start simple and build more and more into it until you are happy with the result.
  • The SQL Statement will return the list of batch controls that have been executed at least once. This is intentional to filter out jobs that are never run from the product for an implementation. We supply lots of jobs in a product to cover all situations in the field but I have not encountered a customer that runs them all. As I am looking for at least one execution, I added the UNIQUE clause to ignore multiple executions.
  • I added Batch Category and Batch Control Type to allow filtering on things that are important at different stages of an implementation as well as targeting products that have a mix of different job types.
  • The Last Executed Date and Time is for information purposes mainly, but it can also be used as a sortable column so you can quickly find jobs that were executed recently.
  • The Next Run Number might seem a strange field to include but it gives you an idea of batch controls that have been executed more frequently. In the screen above I can see that F1-MGDIM has been executed far more than the other batch controls.
  • There are a lot of suppressed columns in the list above. This is intentional, as the values of these columns can be used by other columns. For example, Columns 6 and 7 calculate the colors for the Level Of Service. They are never displayed in the list or in exports; they are intermediate columns used for the formatting in Column 8.
  • The Allow Row Service Script is really handy as it allows for complex processing outside the SQL. For example, as I do not know the Level Of Service value in the SQL (it is calculated) and I want to include it as a filter, I can use the Allow Row Service Script to decide whether a row returned by the SQL is actually to be included in the final result set.
  • You might have noticed that I actually hardcoded some labels. This is typically not recommended: I would normally have created custom fields to hold the labels so that I can translate or change the text without changing this zone. If you ever create your own zones, I strongly suggest avoiding hardcoding. I only used it here to make publishing this article easier.
  • The code in the service scripts is really just an example. It is probably not optimal, and I am sure you are looking at the code and working out better ways of doing it, which is fine. The code is just to give you some ideas.
  • The script, CMLOSM, which builds the full Level Of Service message, is not really optimal and I am sure there are much easier methods to achieve the same result but it is functional for illustrative purposes.
  • You will notice that Column 8 is actually some dynamic HTML coding enclosed in div tags. That is correct. It is possible to implement some HTML in a column for formatting. Just so you know, the HTML in ANY column has to conform to the HTML whitelist that is enabled across the product. You cannot put just any code in there. You are limited to formatting and some basic post-processing. My development team helped me with some of the possibilities as I wanted a specific look without resorting to graphics. It is both visual and functional (for sorting).
  • You might also notice a Broadcast column (BATCH_CD). Well that is so that this zone can be part of a larger solution which I will expand upon in future blog entries to show off some very nice new functionality (actually most of it is available already).

RMAN Backup To FRA Repercussions

Michael Dinh - Sun, 2017-11-26 09:50

Common advice is to back up to the FRA.
Before following that advice, evaluate whether it fits your environment and understand the repercussions.
Doesn't it potentially create a SPOF and possibly require an unnecessary restore from tape?

HINT:

Make sure the following commands are part of your backup script when backing up to the FRA.

CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
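As a sketch (assumptions: the environment file path and the exact BACKUP/DELETE commands, adapt them to your retention policy), a backup script that resets these formats before backing up to the FRA could look like this:

#!/bin/bash
# Hypothetical nightly backup script: clear any explicit channel/autobackup formats first
# so that the backup pieces and the controlfile autobackup both land in the FRA.
. /home/oracle/.bash_profile
rman target / <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EOF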
DEMO:
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Nov 26 16:02:17 2017

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160327_0.274.961085007 tag=TAG20171126T160327 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to FRA
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085014.275.961085015 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
--- CONTROLFILE AUTOBACKUP FORMAT is same but ***NOT*** DEFAULT
RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160655_0.276.961085215 tag=TAG20171126T160655 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to ***DISK***
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/c-3219666184-20171126-01 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
RMAN configuration parameters are successfully reset to default value

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default <-- 

RMAN> backup datafile 1 FORMAT '%d_%I_%T_%U';

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/HAWK_3219666184_20171126_0oski093_1_1 tag=TAG20171126T161531 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085738.277.961085739 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN>
REFERENCE:
How to KEEP a backup created in the Flash Recovery Area (FRA)? (Doc ID 401163.1)	
A backup that needs to be KEPT must be created outside the flash recovery area.

Why are backups going to $ORACLE_HOME/dbs rather than Flash recovery area via Rman or EM Grid control /FRA not considering Archivelog part of it (Doc ID 404854.1)
 1. Do not use a FORMAT clause on backup commands.
 
RMAN Uses Flash Recovery Area for Autobackup When Using Format '%F' (Doc ID 338483.1)	 

Docker-CE: Creating a small Linux as base OS for running a Docker cluster

Dietrich Schroff - Sat, 2017-11-25 15:13
After some first steps with Docker I wanted to run more than one Docker node. The problem is that I have very limited resources on my laptop.
  • CoreOS (see here)
    consumes about 600MB of disk space
  • UbuntuCore (see here)
    consumes about 1.9GB of disk space
To set up CoreOS you have to learn Ignition - otherwise every change is lost after a reboot. For UbuntuCore you have to use an Ubuntu One account to get it running.

I heard that Alpine Linux is an OS with a very small footprint, so I downloaded the "virtual" image and mounted it in my VirtualBox:


After a short startup I got the following:

This is completely different from all other installation procedures I know. There is no configuration dialog, just a

localhost login:

You have to enter "root" and, without a password, you get a banner with one important piece of information:

You can setup the system with the command: setup-alpine

This will guide you through an 11-step setup (read the manual). To get an installation with persistent homes, etc. you have to choose the "sys" mode.

And then, after a reboot, I have a Linux VM of about 170MB.
For my next steps (installing Docker, cloning some nodes) this should be the right choice.

[Installing Docker is quite easy: remove the # before "edge/community" in /etc/apk/repositories and then run "apk update" and "apk add docker".]
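A minimal sketch of those steps (assuming a default Alpine "sys" installation; the exact repository line to uncomment depends on your Alpine version):

# enable the edge/community repository, refresh the package index and install docker
sed -i '/edge\/community/s/^#//' /etc/apk/repositories
apk update
apk add docker
# start the docker daemon now and register it for the default runlevel (OpenRC)
rc-update add docker default
rc-service docker start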

WebLogic – SSO/Atn/Atz – How to enable debug logs

Yann Neuhaus - Sat, 2017-11-25 13:04

Let's say that you have an application deployed on a WebLogic Server and that the SSO/Atn/Atz (SSO/Authentication/Authorization) isn't working or is throwing a 4xx error code like '401 – Unauthorized' or '403 – Forbidden'. How would you be able to know what is happening and how to solve it? Well the answer is always the same: enable the debug logs. It is just a matter of choosing the right scope and choosing a way to do it because yes, it is possible to enable/disable them in several ways…

In this blog, I will be using a WebLogic Server configured with a SAML2 Single Sign-On (SSO) and hosting a Documentum D2 application. I will therefore enable and disable the debug logs for a Managed Server that is named ‘msD2-01′ and this MS is hosting D2, obviously (this is just for the background, it does not matter which application is running there!).

In the case of the SAML2 SSO, it's pretty easy to recognize certain patterns of errors: you just have to access the SSO URL and then watch closely the redirection from the WLS to the IdP Partner and then back to the WLS. Depending on when the issue appears, you can find out where the issue is. But in case you absolutely don't know anything about all that, or if you are completely new to this, you can and should always do the same thing: enable the debug logs.

 

I. Administration Console

WebLogic provides several ways to enable debug logs but the one that makes it really easy, even for beginners, is the WebLogic Administration Console. In this blog, I will only talk about SAML2 SSO, Authentication and Authorization. So enabling the debug logs for these three elements can be done using these steps:

  1. Login to the Admin Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Environment > Servers > ServerName (msD2-01 for me) > Debug
  3. Click on the ‘Lock & Edit’ button
  4. Expand the scope ‘weblogic’ and then ‘security’
  5. Check the correct checkboxes, depending on which element you want to enable:
    • atn
    • atz
    • ldap
    • saml
    • saml2
  6. Go back to the top and click on the ‘Enable’ button
  7. Click on the ‘Activate Changes’ button -> All changes have been activated. No restarts are necessary.

The correct page on the WebLogic Admin Console (step 2) is this one:

Debug

From this moment, you can check the Managed Server log file (its name is defined in the ‘Logging’ tab) which is $DOMAIN_HOME/servers/ServerName/logs/ServerName.log by default ($DOMAIN_HOME/servers/msD2-01/logs/msD2-01.log in my case) and it will start to log the messages for the elements that you enabled.
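For example, to follow only the newly enabled debug messages in real time (a small sketch; the grep pattern is an assumption about which debug categories you enabled):

[weblogic@weblogic_server_01 ~]$ tail -f $DOMAIN_HOME/servers/msD2-01/logs/msD2-01.log | grep -Ei "SecuritySAML2|SecurityAtn|SecurityAtz"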

Do NOT enable too many elements! I usually only enable the SAML2 when I know it is linked to the SSO or atn+atz when it is more linked to how WebLogic manages the Authentication/Authorization. These are the three main scopes that you should work with when debugging an issue related to SSO/Authentication/Authorization.

Also, do not forget to disable the debug logs when you are done with your analysis. It is pretty obvious but it can store a lot of information in the log files so…

Please also note that if you expand the ‘atn’ scope for example, you will see that inside it, there is actually the ‘DebugSecurityAtn’, ‘DebugSecuritySAML2Atn’ and ‘DebugSecuritySAMLAtn’ -> these are attributes (not expandable). Therefore enabling the whole ‘atn’ will already activate some of the SAML2 debug logs. So there is a notion of group (=scope) on the Admin Console to simplify the selection that you can also find on other solutions below.

 

II. config.xml

If you have already worked with WebLogic, you probably know the purpose of the config.xml file and you probably also know that it can be a little bit dangerous to mess with this file (not really if you know what you are doing). However it is still possible to do some configuration there, so at your own risk :).

By default, this file will NOT contain any information regarding the debug logging, unless you already enabled them at least once in the past using the Admin Console for example. So since there is, by default, no information regarding the debug logging, you need to add them manually if you want to configure the logging using the config.xml file. Be aware that the location where you put the configuration matters!

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/config
[weblogic@weblogic_server_01 config]$
[weblogic@weblogic_server_01 config]$ grep -C1 -E "debug|server>" config.xml
  </log>
  <server>
    <name>AdminServer</name>
--
    <custom-trust-key-store-pass-phrase-encrypted>{AES}QARbQAV1ul1u3tJcsGAhdATQrNJe1YlVnQmY9d1jWQFx4aM=</custom-trust-key-store-pass-phrase-encrypted>
  </server>
  <server>
    <name>msD2-01</name>
--
    </single-sign-on-services>
  </server>
  <production-mode-enabled>true</production-mode-enabled>
[weblogic@weblogic_server_01 config]$

 

With the example above, there is no debug information in the config.xml file so let's add the SAML2, atn and atz logging. You can put these entries between the '<listen-port-enabled>' and '<listen-address>' parameters for example. That's what it looks like afterwards:

[weblogic@weblogic_server_01 config]$ grep -C1 -E "debug|server>" config.xml
  </log>
  <server>
    <name>AdminServer</name>
--
    <custom-trust-key-store-pass-phrase-encrypted>{AES}QARbQAV1ul1u3tJcsGAhdATQrNJe1YlVnQmY9d1jWQFx4aM=</custom-trust-key-store-pass-phrase-encrypted>
  </server>
  <server>
    <name>msD2-01</name>
--
    <listen-port-enabled>false</listen-port-enabled>
    <server-debug>
      <debug-scope>
        <name>weblogic.security.atn</name>
        <enabled>true</enabled>
      </debug-scope>
      <debug-scope>
        <name>weblogic.security.atz</name>
        <enabled>true</enabled>
      </debug-scope>
      <debug-scope>
        <name>weblogic.security.saml2</name>
        <enabled>true</enabled>
      </debug-scope>
    </server-debug>
    <listen-address>10.10.10.10</listen-address>
--
    </single-sign-on-services>
  </server>
  <production-mode-enabled>true</production-mode-enabled>
[weblogic@weblogic_server_01 config]$

 

As you can see above, the debug-scope name that needs to be added in the config.xml file is simply the concatenation of the different elements you expanded in the WebLogic Administration Console separated by dots (E.g.: weblogic.security.saml2).

Above, I only talked about scopes again. If you want to set the attributes directly, it is also possible by adding this for example ‘<debug-security-saml2-service>true</debug-security-saml2-service>’ or ‘<debug-security-atz>true</debug-security-atz>’ but let’s try to keep it simple… I’m only showing the minimum to be able to debug issues but that is obviously not everything you can do.

The problem with configuring something in the config.xml is that you need to restart the Managed Server for the change to be applied… Indeed, all changes are written to this file as soon as they are activated but the file is only loaded at boot, so a restart is needed. This makes this solution a little bit less effective when we are talking about debug logging because we usually want to enable/disable it on the fly!

 

III. WLST

You all know what the WLST is so let's just get to it. This solution is probably the best one if you want to script/automate it because that is the whole purpose of the WebLogic Scripting Tool, yeah yeah I know it is unbelievable ;)! Below I will not use the config and key files to connect to the AdminServer but keep in mind that these files contain the weblogic username and password encrypted, so if you want to automate something you will most probably need them to avoid clear text passwords (a small sketch of such an automated connection follows the WLST sessions below)! So first the connection to the WLST:

[weblogic@weblogic_server_01 config]$ source $MW_HOME/wlserver/server/bin/setWLSEnv.sh
CLASSPATH=$JAVA_HOME/lib/tools.jar:$MW_HOME/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=$MW_HOME/wlserver/server/bin:$MW_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin:$MW_HOME/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[weblogic@weblogic_server_01 config]$
[weblogic@weblogic_server_01 config]$ java -Djava.security.egd=file:///dev/./urandom -Dwlst.offline.log=disable weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect('weblogic','P4sSw0rd','t3s://weblogic_server_01:8443')
Connecting to t3s://weblogic_server_01:8443 with userid weblogic ...
<Nov 11, 2017 2:50:12 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
Successfully connected to Admin Server "AdminServer" that belongs to domain "DOMAIN".

wls:/DOMAIN/serverConfig/> edit()
Location changed to edit tree.
This is a writable tree with DomainMBean as the root.
To make changes you will need to start an edit session via startEdit().
For more help, use help('edit').

wls:/DOMAIN/edit/> cd('Servers/msD2-01/ServerDebug/msD2-01')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01> startEdit()
Starting an edit session ...
Started edit session, be sure to save and activate your changes once you are done.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecurityAtn','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecurityAtz','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAML2Atn','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAML2Service','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAMLCredMap','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> save()
Saving all your changes ...
Saved all your changes successfully.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> activate()
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
Activation completed
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01>

 

The commands above will enable the attributes only. In the first section above (related to the Admin Console), I talked about groups (=scope) and I actually only enabled the groups themselves in this first solution (scopes = 'atn', 'atz' and 'saml2'). So if you enabled the scope 'atn' on the Admin Console for example, then you will probably see the attribute 'DebugSecurityAtn' (in the WLST session) set to false (unless you already changed it using the above WLST commands). If you want to enable the scopes directly, they are not at the same location. Here is an example for the SAML2 scope:

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01> cd('DebugScopes/weblogic.security.saml2')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> ls()

-r--   DynamicallyCreated                           false
-rw-   Enabled                                      false
-r--   Id                                           0
-rw-   Name                                         weblogic.security.saml2
-rw-   Notes                                        null
-rw-   Tags                                         null
-r--   Type                                         DebugScope

-r-x   addTag                                       Boolean : String(tag)
-r-x   freezeCurrentValue                           Void : String(attributeName)
-r-x   getInheritedProperties                       String[] : String[](propertyNames)
-r-x   isInherited                                  Boolean : String(propertyName)
-r-x   isSet                                        Boolean : String(propertyName)
-r-x   removeTag                                    Boolean : String(tag)
-r-x   restoreDefaultValue                          Void : String(attributeName)
-r-x   unSet                                        Void : String(propertyName)

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> startEdit()
Starting an edit session ...
Started edit session, be sure to save and activate your changes once you are done.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> set('Enabled','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> ls()

-r--   DynamicallyCreated                           false
-rw-   Enabled                                      true
-r--   Id                                           0
-rw-   Name                                         weblogic.security.saml2
-rw-   Notes                                        null
-rw-   Tags                                         null
-r--   Type                                         DebugScope

-r-x   addTag                                       Boolean : String(tag)
-r-x   freezeCurrentValue                           Void : String(attributeName)
-r-x   getInheritedProperties                       String[] : String[](propertyNames)
-r-x   isInherited                                  Boolean : String(propertyName)
-r-x   isSet                                        Boolean : String(propertyName)
-r-x   removeTag                                    Boolean : String(tag)
-r-x   restoreDefaultValue                          Void : String(attributeName)
-r-x   unSet                                        Void : String(propertyName)

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> save()
Saving all your changes ...
Saved all your changes successfully.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> activate()
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
Activation completed
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> disconnect()
Disconnected from weblogic server: AdminServer
wls:/offline> exit()

Exiting WebLogic Scripting Tool.

[weblogic@weblogic_server_01 config]$
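If you want to automate this kind of change without putting the weblogic password in clear text inside your scripts, a possible sketch is to reuse credential files previously created with storeUserConfig() (the file paths below are assumptions):

[weblogic@weblogic_server_01 config]$ java -Dwlst.offline.log=disable weblogic.WLST <<'EOF'
connect(userConfigFile='/home/weblogic/.wlst/wls.config', userKeyFile='/home/weblogic/.wlst/wls.key', url='t3s://weblogic_server_01:8443')
edit()
startEdit()
# disable the SAML2 debug scope once the analysis is over
cd('Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2')
set('Enabled','false')
save()
activate()
disconnect()
exit()
EOF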

 

Pretty cool, right?

 

IV. Command line

The last way to do this is via the command line but, just like with the config.xml, this will require a restart of the concerned Managed Server. Here are some examples of command line arguments that can be added to the JAVA_OPTIONS to enable the debug logs at the Attribute level (a small sketch showing how to add them to the start script follows the list):

  • -Dweblogic.debug.DebugSecurityAtn=true
  • -Dweblogic.debug.DebugSecurityAtz=true
  • -Dweblogic.debug.DebugSecuritySAML2Atn=true
  • -Dweblogic.debug.DebugSecuritySAML2Service=true
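A possible way to wire these into the Managed Server start (a sketch; using $DOMAIN_HOME/bin/setUserOverrides.sh is an assumption, any script that feeds the JAVA_OPTIONS before the server start works as well):

# e.g. in $DOMAIN_HOME/bin/setUserOverrides.sh
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecuritySAML2Atn=true -Dweblogic.debug.DebugSecuritySAML2Service=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecurityAtn=true -Dweblogic.debug.DebugSecurityAtz=true"
export JAVA_OPTIONS

Then restart the Managed Server for the options to be taken into account.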

With all that, you should be well armed to face any issue!

Cet article WebLogic – SSO/Atn/Atz – How to enable debug logs est apparu en premier sur Blog dbi services.

Documentum – DFC traces setup & investigation

Yann Neuhaus - Sat, 2017-11-25 12:11

When working with Documentum, you will most probably have to enable the DFC traces one day or another and then work with these traces to analyze them. The purpose of this blog is simply to show how the DFC traces can be enabled, which tools can be used to quickly process them and what their limitations are.

Enabling the DFC traces can be done very easily by updating the dfc.properties file of the client. This client can be a DA, D2, JMS, Index Agent, and so on… The change is applied directly (if enable=true) and the tracing is disabled by default (if the property is commented out or enable=false). If you have a dfc.properties that is inside a war file (for DA/D2 for example) and you deployed your application as a war file (not exploded), then disabling the tracing might need a restart of your application. To avoid that, you can have a dfc.properties inside the war file that just points to another one outside of the war file; enabling/disabling the traces from this second file will then work properly. There are a lot of options to customize how the traces should be generated. A first example with only a few properties that you can use and reuse every time you need traces:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=0
dfc.tracing.mode=compact
dfc.tracing.dir=/tmp/dfc_tracing

 

Another example with more properties to really specify what you want to see:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=4
dfc.tracing.include_rpcs=true
dfc.tracing.mode=standard
dfc.tracing.include_session_id=true
dfc.tracing.user_name_filter[0]=dmadmin
dfc.tracing.user_name_filter[1]=myuser
dfc.tracing.thread_name_filter[0]=Thread-3
dfc.tracing.thread_name_filter[1]=Thread-25
dfc.tracing.timing_style=milliseconds_from_start
dfc.tracing.dir=/tmp/dfc_tracing
dfc.tracing.file_prefix=mydfc
dfc.tracing.max_backup_index=10
dfc.tracing.max_file_size=104857600
...

 

All these properties are quite easy to understand even without explanation but you can probably find more information and all the possible options in the official Documentum documentation. It’s not the main purpose of this blog so I’m just mentioning a few properties to get started. By default, the name of the generated files will be something like “dfctrace.timestamp.log”, you can change that by setting the “dfc.tracing.file_prefix” for example. Adding and customizing the properties will change the display format and style inside the files so if you want to have a way to analyze these DFC traces, it is better to use more or less always the same set of options. For the example below, OTX asked me to use these properties only:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=4
dfc.tracing.include_rpcs=true
dfc.tracing.mode=compact
dfc.tracing.include_session_id=true
dfc.tracing.dir=/tmp/dfc_tracing

 

When you have your DFC traces, you need a way to analyze them. They are pretty much readable but it will be complicated to get something out of them without spending a certain amount of time – unless you already know what you are looking for – simply because there is a lot of information inside… For that purpose, Ed Bueche developed more than 10 years ago some AWK scripts to parse the DFC trace files: traceD6.awk and trace_rpc_histD6.awk. You can find these scripts at the following locations (all EMC links… so they might not be working at some point in the future):

As you can see above, these scripts are not really maintained and the same scripts, or a mix of several versions, can be found at several locations, so it can be a little bit confusing. All the old links are about the AWK scripts but since 2013 there is now a Python script too (also developed by Ed Bueche).

In this blog, I wanted to talk mainly about the AWK scripts. Earlier this month, I was working with OTX on some performance tuning tasks and for that, I gathered the DFC traces for several scenarios, in different log files, well separated, and so on… Then, I provided them to OTX for the analysis. OTX came back to me a few minutes later saying that most of the traces were corrupted and asking me to regenerate them. I wasn't quite OK with that, simply because it takes time and because there was some testing in progress on this environment, so gathering clean DFC traces for several scenarios would have forced the tests to be stopped, and so on… (Ok ok you got me, I'm just lazy ;))

The content of the DFC traces looked correct to me and after a quick verification, I saw that OTX was using the AWK scripts (traceD6.awk and trace_rpc_histD6.awk) to analyze the logs but they were apparently getting an error. The files didn't look corrupted to me so I mentioned to OTX that the issue might very well be with the AWK scripts they were using. They didn't really listen to what I said and stayed focused on getting a new set of DFC traces. I had already used these scripts but never really looked inside, so it was the perfect reason to take some time for that:

[dmadmin@content_server_01 ~]$ cd /tmp/dfc_tracing/
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ ls -l trace* dfctrace.*.log
-rw-r-----. 1 dmadmin dmadmin 92661060 Nov 3 09:24 dfctrace.1510220481672.log
-rw-r-----. 1 dmadmin dmadmin 3240 Nov 4 14:10 traceD6.awk
-rw-r-----. 1 dmadmin dmadmin 7379 Nov 4 14:10 traceD6.py
-rw-r-----. 1 dmadmin dmadmin 5191 Nov 4 14:10 trace_rpc_histD6.awk
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f traceD6.awk < dfctrace.1510220481672.log > output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_awk_1.log
2 output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f trace_rpc_histD6.awk < dfctrace.1510220481672.log > output_awk_2.log
awk: trace_rpc_histD6.awk:203: (FILENAME=- FNR=428309) fatal: division by zero attempted
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_awk_2.log
4 output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$

 

As you can see above, the first script generated a log file that contains only 2 lines, so this is already suspicious even if there are no errors. The second script generated an error and its log file contains only 4 lines… The input DFC trace file has a size of 90Mb so it’s clear that there is something wrong and that’s why OTX said that the DFC traces were corrupted. The error message shows the line (203) as the origin of the issue as well as a “division by zero attempted” message. This obviously means that somewhere on this line, there is a division and that the divisor is equal to 0 or at least not set at all. Since I love all kind of UNIX scripting, I would rather fix the bug in the script than having to generate a new set of DFC traces (and the new set would still be impacted by the issue anyway…)! So checking inside the trace_rpc_histD6.awk file, the line 203 is the following one:

[dmadmin@content_server_01 dfc_tracing]$ grep -n -C1 "TIME SPENT" trace_rpc_histD6.awk
202-    printf ("DURATION (secs):\t%17.3f\n", ((curr_tms - st_tms)) );
203:    printf ("TIME SPENT EXECUTING RPCs (secs):%8.3f (which is %3.2f percent of total time)\n", total_rpc_time, 100*total_rpc_time/(curr_tms - st_tms));
204-    printf ("Threads :\t%25d\n", thread_cnt);
[dmadmin@content_server_01 dfc_tracing]$

 

The only division on this line is the total time taken to execute the RPCs divided by the duration of the log file (timestamp of last message – first message). So the value of “curr_tms – st_tms” is 0. Potentially, it could be that both variables have the exact same value but since the first and last messages on the DFC traces don’t have the same timestamp, this isn’t possible and therefore both variables are actually 0 or not set. To check where these variables are defined, how and in which function:

[dmadmin@content_server_01 dfc_tracing]$ grep -n -C15 -E "curr_tms|st_tms" trace_rpc_histD6.awk | grep -E "curr_tms|st_tms|^[0-9]*[:-][^[:space:]]"
144-/ .RPC:/ {
159:                    st_tms = $1;
162:            curr_tms = $1;
175-}
177-/obtained from pool/ {
--
187-}
188-/.INFO: Session/ {
193-}
197-END {
202:    printf ("DURATION (secs):\t%17.3f\n", ((curr_tms - st_tms)) );
203:    printf ("TIME SPENT EXECUTING RPCs (secs):%8.3f (which is %3.2f percent of total time)\n", total_rpc_time, 100*total_rpc_time/(curr_tms - st_tms));
[dmadmin@content_server_01 dfc_tracing]$

 

This shows that the only location where these two variables are set is inside the matching pattern "/ .RPC:/" (st_tms is set to $1 only on the first execution). So it means that this portion of code is never executed, in other words: this pattern is never found in the DFC trace file. Why is that? Well that's pretty simple: the DFC trace file contains a lot of RPC calls but these lines never contain " .RPC:", there are always at least two dots (so something like " ..RPC:" or " ...RPC:" or " ....RPC:"). The reason why there are several dots is simply that the RPCs are indented according to the stack depth at which they are called… In this case, OTX asked us to use "dfc.tracing.max_stack_depth=4" so this is what I did and it is the reason why the AWK scripts cannot work by default: they need "dfc.tracing.max_stack_depth=0", as written at the beginning of the scripts in the comment sections.

So a simple way to fix the AWK scripts is to remove the space at the beginning of the pattern for both the traceD6.awk and trace_rpc_histD6.awk scripts and after doing that, it will work for all max_stack_depth values:

[dmadmin@content_server_01 dfc_tracing]$ grep -n ".RPC:/" *.awk
traceD6.awk:145:/ .RPC:/ {
trace_rpc_histD6.awk:144:/ .RPC:/ {
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ sed -i 's,/ .RPC:/,/.RPC:/,' *.awk
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ grep -n ".RPC:/" *.awk
traceD6.awk:145:/.RPC:/ {
trace_rpc_histD6.awk:144:/.RPC:/ {
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f traceD6.awk < dfctrace.1510220481672.log > output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -lc output_awk_1.log
 1961 163788 output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f trace_rpc_histD6.awk < dfctrace.1510220481672.log > output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_awk_2.log
 367 49050 output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$

 

That looks much, much better… Basically, the first script lists all RPCs with their thread, name and times while the second script creates a sorted list of the queries that took the most time to execute, as well as a list of calls and occurrences per type/name.
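Since the output of the first script is '&'-separated with the elapsed time in the second field, a quick sort already gives the slowest individual calls (a small sketch based on the output format shown below):

[dmadmin@content_server_01 dfc_tracing]$ sort -t '&' -k2,2 -nr output_awk_1.log | head -10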

The AWK and Python scripts, even if they are globally working, might have some issues with commas, parentheses and things like that (again it depends which dfc.tracing options you selected). This is why I mentioned above that there is actually both an AWK and a Python version of these scripts. Sometimes the AWK scripts will contain the right information, sometimes it is the Python version that will, but in all cases the latter will run much faster. So if you want to work with these scripts, you will have to juggle a little bit:

[dmadmin@content_server_01 dfc_tracing]$ python traceD6.py dfctrace.1510220481672.log > output_py_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_py_1.log
 1959 194011 output_py_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ python traceD6.py dfctrace.1510220481672.log -profile > output_py_2.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_py_2.log
 342 65917 output_py_2.log
[dmadmin@content_server_01 dfc_tracing]$

 

As you can see, there are fewer lines in the Python output files but that's because some unnecessary headers have been removed in the Python version, so it's actually normal. However there are many more characters, which shows that, in this case, the extracted DQL queries contain more characters. It does not mean that these characters are actually part of the DQL queries: you will see below that there are references to ", FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)" => this is NOT part of the DQL but it is present in the output of the Python script while it is not in the AWK one:

[dmadmin@content_server_01 dfc_tracing]$ head -15 output_awk_1.log
analysis program version 2 based on DFC build 6.0.0.76
68354.130 & 0.005 & [http--0.0.0.0-9082-3] & EXEC_QUERY  select r_object_id from dm_sysobject where folder ('/Home') and object_name = 'Morgan Patou'
68354.135 & 0.000 & [http--0.0.0.0-9082-3] & multiNext
68354.136 & 0.005 & [http--0.0.0.0-9082-3] & SysObjFullFetch  0b0f12345004f0de
68354.165 & 0.002 & [http--0.0.0.0-9082-4] & EXEC_QUERY  select r_object_id from dm_user where user_name = 'Morgan Patou'
68354.167 & 0.000 & [http--0.0.0.0-9082-4] & multiNext
68354.167 & 0.002 & [http--0.0.0.0-9082-4] & IsCurrent
68354.170 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY  SELECT COUNT(*) AS items FROM dm_group WHERE group_name = 'report_user' AND ANY i_all_users_names = 'Morgan Patou'
68354.173 & 0.001 & [http--0.0.0.0-9082-4] & multiNext
68354.175 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY  select r_object_id from dm_sysobject where folder ('/myInsight') and object_name = 'myInsight.license'
68354.178 & 0.001 & [http--0.0.0.0-9082-4] & multiNext
68354.179 & 0.001 & [http--0.0.0.0-9082-4] & IsCurrent
68354.165 & 0.010 & [http--0.0.0.0-9082-3] & SysObjGetPermit
68354.175 & 0.006 & [http--0.0.0.0-9082-3] & SysObjGetXPermit
68354.181 & 0.006 & [http--0.0.0.0-9082-4] & MAKE_PULLER
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ head -15 output_py_1.log
68354.130 & 0.005 & [http--0.0.0.0-9082-3] & EXEC_QUERY & select r_object_id from dm_sysobject where folder ('/Home') and object_name = 'Morgan Patou', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.135 & 0.000 & [http--0.0.0.0-9082-3] & multiNext &
68354.136 & 0.005 & [http--0.0.0.0-9082-3] & SysObjFullFetch & 0b0f12345004f0de
68354.165 & 0.002 & [http--0.0.0.0-9082-4] & EXEC_QUERY & select r_object_id from dm_user where user_name = 'Morgan Patou', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.167 & 0.000 & [http--0.0.0.0-9082-4] & multiNext &
68354.167 & 0.002 & [http--0.0.0.0-9082-4] & IsCurrent & 110f123450001d07
68354.170 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY & SELECT COUNT(*) AS items FROM dm_group WHERE group_name = 'report_user' AND ANY i_all_users_names = 'Morgan Patou', FOR_UPDATE=T, BATCH_HINT=50, BOF_DQL=T, FLUSH_BATCH=-1]],50,true,true)
68354.173 & 0.001 & [http--0.0.0.0-9082-4] & multiNext &
68354.175 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY & select r_object_id from dm_sysobject where folder ('/myInsight') and object_name = 'myInsight.license', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.178 & 0.001 & [http--0.0.0.0-9082-4] & multiNext &
68354.179 & 0.001 & [http--0.0.0.0-9082-4] & IsCurrent & 090f123450023f63
68354.165 & 0.010 & [http--0.0.0.0-9082-3] & SysObjGetPermit & 0b0f12345004f0de
68354.175 & 0.006 & [http--0.0.0.0-9082-3] & SysObjGetXPermit & 0b0f12345004f0de
68354.181 & 0.006 & [http--0.0.0.0-9082-4] & MAKE_PULLER & null
68354.187 & 0.000 & [http--0.0.0.0-9082-4] & getBlock &
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ head -35 output_py_2.log

****** PROFILE OF rpc CALLS *****
     3.273           0.080              41      AUTHENTICATE_USER
     0.032           0.002              17      BEGIN_TRANS
     0.001           0.000              14      END_PUSH_V2
     0.202           0.012              17      END_TRANS
    21.898           0.071             310      EXEC_QUERY
     0.028           0.005               6      FETCH_CONTENT
     0.011           0.000              55      GET_ERRORS
     0.117           0.004              27      GET_LOGIN
     0.290           0.002             163      IsCurrent
     0.013           0.000              82      KILL_PULLER
     0.003           0.000              14      KILL_PUSHER
     0.991           0.012              82      MAKE_PULLER
     0.005           0.000              14      MAKE_PUSHER
     0.002           0.000               5      NEXT_ID_LIST
     0.083           0.002              38      NORPC
     0.015           0.005               3      RelationCopy
     0.446           0.032              14      SAVE
     0.274           0.014              20      SAVE_CONT_ATTRS
     0.140           0.010              14      START_PUSH
     0.134           0.045               3      SysObjCheckin
     0.048           0.016               3      SysObjCheckout
     2.199           0.009             240      SysObjFullFetch
     0.913           0.006             141      SysObjGetPermit
     0.764           0.005             141      SysObjGetXPermit
     0.642           0.046              14      SysObjSave
     0.033           0.000              82      getBlock
     1.454           0.004             399      multiNext

**** QUERY RESPONSE SORTED IN DESCENDING ORDER ****

10.317  select distinct wf.object_name as workflow_name, pr.object_name as process_name, i.name as Performer_Name, i.task_name as Task_Name, i.date_sent as Date_Task_Sent, i.actual_start_date as Date_Task_Acquired, wf.r_creator_name as Workflow_Initiator, cd.primary_group as "group", cd.subgroup as subgroup, cd.artifact_name as Artifact_Name, cd.object_name as document_name, cd.r_version_label as version_label, cd.title as Document_Title, cd.r_object_id as object_id from cd_common_ref_model(all) cd, dmi_package p, dmi_queue_item i, dm_workflow wf, dm_process pr
0.607   select r_object_id from dm_sysobject where folder ('/myInsight/Presentations/Standard Presentations/Graphical Reports') and object_name = 'FusionInterface.xsl', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
0.505   select r_object_id from dm_sysobject where folder ('/myInsight/Presentations/Life Sciences') and object_name = 'Unique Templates.xsl', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
[dmadmin@content_server_01 dfc_tracing]$

 

To conclude this blog on a more philosophical note: always question what other people ask you to do and think twice before doing the same thing over and over again. ;)

Cet article Documentum – DFC traces setup & investigation est apparu en premier sur Blog dbi services.

email in PL/SQL

Tom Kyte - Fri, 2017-11-24 19:46
Tom: 1. If you want to write something in the PL/SQL application server that sends an email automatically to an administrator requesting approval after a user creates a record, which Oracle package would you be using? Would you also use an after insert tri...
Categories: DBA Blogs

How to define a composite primary key

Tom Kyte - Fri, 2017-11-24 19:46
I have a table with two columns that need to be defined as primary keys, order_number and item_number. A table can have only one primary key, but I can define composite primary keys in the table_constraint syntax. How do I define two columns ...
Categories: DBA Blogs
