Yann Neuhaus

dbi services technical blog

SQL Saturday Toulouse 2017: SQL Server on Linux – Slide deck

Mon, 2017-09-18 03:39

#SQLSatToulouse was definitely a great event. Firstly, a big thanks to the organization team (@Guss and @Fredg_31) as well as all the sponsors and attendees (without whom this kind of event would not be possible).


As promised, here is the slide deck of my session “Introduction to SQL Server on Linux for DBAs”. It was a great moment of sharing for me.

Thanks to all!

 

 


DOAG IMC Day 2017

Mon, 2017-09-18 03:04


This year, before taking part in the DOAG conference in Nuremberg as a speaker – yes, I will present at the DOAG this year – I had the opportunity to attend the DOAG IMC Day in Berlin. It was the SIG IMW, dedicated to infrastructure and operating systems.

It was my first time in Berlin :) Very nice city.

The event was well organized at the DOAG “Dienstleistung” office, which provided all the facilities we needed.

So let's get back to the event: there were lots of interesting sessions. At the beginning, Jan-Peter Timmermann presented the DOAG community and how it is split up. The event I took part in belonged to the Middleware community.

After that, Franck Burkhardt started with a session about Forms and security. He shared feedback from the implementation of Oracle Access Management for an existing CRM environment that was already using an SSO mechanism with Active Directory. The challenge was to use the Windows Native Authentication provided by OAM with WebGate and Forms applications. The Forms applications were configured to use SSO, and the architectural challenges were presented.

Then Jan-Peter Timmermann gave a useful session about troubleshooting Oracle FMW 12c Forms/Reports. He presented some issues he had faced and provided information on how to start the investigation and solve them.

Then there were two interesting sessions around the Cloud: one given by Jan Brosowski and one by Danilo Schmiedel.

Then came the most exciting session of the day for me, around Ansible and WebLogic. The presenter, Grzegorz Lysko, defined what Ansible is and what we can do with it. He gave some samples showing how Fusion Middleware can easily be deployed on multiple hosts, including clustering features and Fusion Middleware components. It was the most interesting for me because I have also developed Ansible scripts to deploy production environments running Oracle Fusion Middleware components, so I was able to compare his approach with mine. The scripts I developed allow deploying an OFM infrastructure including Reports and Forms in less than 25 minutes, all based on our defined best practices. They also check whether the required Linux packages are present and install them if they are missing. At the same time they deploy the JDK and our DMK for WebLogic, a kind of management kit which makes it easy to manage a WebLogic domain and its system components, if any.

The last session covered JVM internals and some known issues we can have with the JVM, mostly out-of-memory issues with the perm space, heap size, swap space and so on.

It was really an interesting day in Berlin with the DOAG. I hope to see them again next year, maybe to present something myself.

 


Active Data Guard services in Multitenant

Fri, 2017-09-15 17:36

A database (or the CDB in multitenant) registers its name as the default service. When a standby database is on the same server, or the same cluster, you have no problem because this default service name is the db_unique_name, which is different between the primary and the standby(s).

In multitenant, in addition to that, each PDB registers its name as a service. But the PDB name is the same in the primary and the standby database. This means that we have the same service name registered for the PDB in primary and standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

We cannot change that, so it is strongly recommended to create different services for the PDB on the primary and on the standby.

The PDB default service name

Here is what we want to avoid.
I’ve a container database (db_name=CDB2) with its primary (db_unique_name=CDB2A) and standby (db_unique_name=CDB2B) on the same server, registered to the same listener:

Service "59408d6bed2c1c8ee0536a4ea8c0cfa9" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2A" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2AXDB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGB" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Service "CDB2A_DGMGRL" has 1 instance(s).
Instance "CDB2A", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2B" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2BXDB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGB" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "CDB2B_DGMGRL" has 1 instance(s).
Instance "CDB2B", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2_CFG" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

The PDB1 service is registered by both instances, so when I use it in my connection string I am connected at random to the primary or the standby:

22:27:46 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:27:51 SQL> select * from v$instance;
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:00 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:06 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:07 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:10 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2B VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE
 
22:28:11 SQL> connect sys/oracle@//localhost:1522/pdb1 as sysdba
Connected.
22:28:13 SQL> /
 
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
1 CDB2A VM106 12.2.0.1.0 15-SEP-17 OPEN NO 1 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE SINGLE

I don't want to use a service that connects at random, so I need to create different services.

Read-Only service for the Active Data Guard standby

I’m in Oracle Restart and I create the service with srvctl (but you can also create it with dbms_service when not running with Grid Infrastructure):


srvctl add service -db cdb2b -service pdb1_ro -pdb pdb1 -role physical_standby

This creates the service for the standby database (CDB2B) to be started when in physical standby role, and the service connects to the pluggable database PDB1.
But I cannot start it:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1
 
 
PRCD-1084 : Failed to start service pdb1_ro
PRCR-1079 : Failed to start resource ora.cdb2b.pdb1_ro.svc
CRS-5017: The resource action "ora.cdb2b.pdb1_ro.svc start" encountered the following error:
ORA-16000: database or pluggable database open for read-only access
ORA-06512: at "SYS.DBMS_SERVICE", line 5
ORA-06512: at "SYS.DBMS_SERVICE", line 288
ORA-06512: at line 1
. For details refer to "(:CLSN00107:)" in "/u01/app/12.2/diag/crs/vm106/crs/trace/ohasd_oraagent_oracle.trc".
 
CRS-2674: Start of 'ora.cdb2b.pdb1_ro.svc' on 'vm106' failed

The reason is that the service information must be stored in the dictionary (the SYS.SERVICE$ table), and you cannot do that on a read-only database.

This has been explained a long time ago by Ivica Arsov on his blog: https://iarsov.com/oracle/data-guard/active-services-on-physical-standby-database/ and nothing has changed. You need to create the service on the primary so that the update of SYS.SERVICE$ is propagated to the standby database through log shipping:


srvctl add service -db cdb2a -service pdb1_ro -pdb pdb1 -role physical_standby

This is not sufficient, because the insert into SYS.SERVICE$ has not occurred yet:

SQL> alter session set container=PDB1;
 
Session altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- --------- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1

As explained by Ivica in his blog post, we need to start the service once to have the row inserted in SERVICE$:

srvctl start service -db cdb2a -service pdb1_ro -pdb pdb1
srvctl stop service -db cdb2a -service pdb1_ro
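If you are not running with Grid Infrastructure or Oracle Restart, a rough equivalent with DBMS_SERVICE, run from the primary PDB, might look like this (a sketch only, reusing the same service name as above):

SQL> alter session set container=PDB1;
SQL> exec dbms_service.create_service(service_name=>'pdb1_ro',network_name=>'pdb1_ro');
SQL> exec dbms_service.start_service('pdb1_ro');
SQL> exec dbms_service.stop_service('pdb1_ro');

The create call inserts the row into SERVICE$ on the primary, and once the redo is applied you can start the service on the standby PDB with dbms_service.start_service.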

Now the service information is persistent in the dictionary:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
3 PDB1 READ WRITE NO
 
SQL> select * from service$;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH DELETION_DATE FAILOVER_METHOD FAILOVER_TYPE FAILOVER_RETRIES FAILOVER_DELAY MIN_CARDINALITY MAX_CARDINALITY GOAL FLAGS EDITION PDB RETENTION_TIMEOUT REPLAY_INITIATION_TIMEOUT SESSION_STATE_CONSISTENCY SQL_TRANSLATION_PROFILE MAX_LAG_TIME GSM_FLAGS PQ_SVC STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT
---------- ---- --------- ------------ ------------- ------------------ ------------- --------------- ------------- ---------------- -------------- --------------- --------------- ---- ----- ------- --- ----------------- ------------------------- ------------------------- ----------------------- ------------ --------- ------ ----------- ---------------- -------------
14 pdb1 1888881990 pdb1 15-SEP-17 1332716667 136 PDB1
1 pdb1_ro 1562179816 pdb1_ro 15-SEP-17 1301388390 0 0 0 8 PDB1 86400 300 DYNAMIC ANY 0 0 0 0

This is from the primary, but after the redo has been transported and applied, I have the same on the standby. Now I can start the service I’ve created for the standby:

srvctl start service -db cdb2b -service pdb1_ro -pdb pdb1

Here is the new service registered on the listener, which I can use to connect to the read-only PDB1 on the Active Data Guard standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).

Read-Write service for the primary

You can see above that in order to select from SERVICE$ I connected to CDB$ROOT and switched to PDB1 with ‘set container’. There’s no other choice because using the service name directs me at random to any instance. Then, I need a service to connect to the primary only, and I’ll call it PDB1_RW as it is opened in Read Write there.

srvctl add service -db cdb2a -service pdb1_rw -pdb pdb1 -role primary
srvctl start service -db cdb2a -service pdb1_rw

Finally, here are the services registered from the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_ro" has 1 instance(s).
Instance "CDB2B", status READY, has 1 handler(s) for this service...
Service "pdb1_rw" has 1 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...

I'll probably never use the 'PDB1' service because I want to know where I connect to.
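For example, the application connection strings could then look like this (hypothetical tnsnames.ora aliases, reusing the listener port 1522 from the connection tests above):

PDB1_RW=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vm106)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=pdb1_rw)))
PDB1_RO=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vm106)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=pdb1_ro)))

or, with EZConnect, something like connect user/password@//vm106:1522/pdb1_rw.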

In case of switchover, I also create the Read Write service for the standby:

srvctl add service -db cdb2b -service pdb1_rw -pdb pdb1 -role primary

Here are the resources when CDB2A is the primary:

$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2a.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2b.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------------------

I test a switchover to CDB2B:

$ dgmgrl sys/oracle
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Sep 15 23:41:26 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
Connected to "CDB2B"
Connected as SYSDG.
DGMGRL> switchover to cdb2b;
Performing switchover NOW, please wait...
New primary database "cdb2b" is opening...
Oracle Clusterware is restarting database "cdb2a" ...
Switchover succeeded, new primary is "cdb2b"

Here are the services:

[oracle@VM106 blogs]$ crsctl stat resource -t -w "TYPE = ora.service.type"
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdb2a.pdb1_ro.svc
1 ONLINE ONLINE vm106 STABLE
ora.cdb2a.pdb1_rw.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_ro.svc
1 OFFLINE OFFLINE STABLE
ora.cdb2b.pdb1_rw.svc
1 ONLINE ONLINE vm106 STABLE
--------------------------------------------------------------------------------

So what?

The recommendations are not new here:

  • Always do the same on the primary and the standby: create the services on both sites, then have them started depending on the role
  • Always use one or several application services rather than the default one, in order to have better control and flexibility over where you connect

In multitenant, because services are mandatory to connect to a container with a local user, all the recommendations about services are even more important than before. If you follow them, you will see that multitenant is not difficult at all.

This case may seem improbable, because you probably don’t put the standby on the same server or cluster as the primary. But you may have several standby databases on the same server. About the service registered from the PDB name, just don’t use it. I’m more concerned by the GUID service name (here 59408d6bed2c1c8ee0536a4ea8c0cfa9) which is also declared by both databases. If you plan to use online PDB relocate in a Data Guard configuration then be careful with that. I’ve not tested it, but it is probably better to keep the standby PDB closed, or at least do not register it on the same listener.

 


12c Access Control Lists

Sun, 2017-09-10 14:39

There is already enough information about the new simplified 12c way to define Access Control Lists, such as in oracle-base.
I'm just posting my example here to show how easy it is.

If, as a non-SYS user, you try to connect to a host over TCP, you get an error:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
SQL>
SQL>
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
Error starting at line : 27 File @ /media/sf_share/122/blogs/12cacl.sql
In command -
declare
c utl_tcp.connection;
n number:=0;
begin
c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
end;
Error report -
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 19
ORA-06512: at "SYS.UTL_TCP", line 284
ORA-06512: at line 5
24247. 00000 - "network access denied by access control list (ACL)"
*Cause: No access control list (ACL) has been assigned to the target
host or the privilege necessary to access the target host has not
been granted to the user in the access control list.
*Action: Ensure that an access control list (ACL) has been assigned to
the target host and the privilege necessary to access the target
host has been granted to the user.
SQL>

Here are the ACLs defined by default:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE

So, I add an ACL to access towel.blinkenlights.nl on the telnet port (23) for my user DEMO1:

SQL> exec dbms_network_acl_admin.append_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list =>xs$name_list('connect'),principal_name=>'DEMO1',principal_type =>xs_acl.ptype_db));
 
PL/SQL procedure successfully completed.
 
SQL> select * from dba_host_acls;
 
HOST LOWER_PORT UPPER_PORT ACL ACLID ACL_OWNER
---- ---------- ---------- --- ----- ---------
towel.blinkenlights.nl 23 23 NETWORK_ACL_5876ADC67B6635CEE053684EA8C0F378 000000008000281F SYS
* NETWORK_ACL_4700D2108291557EE05387E5E50A8899 0000000080002724 SYS
 
SQL> select * from dba_host_aces;
 
HOST LOWER_PORT UPPER_PORT ACE_ORDER START_DATE END_DATE GRANT_TYPE INVERTED_PRINCIPAL PRINCIPAL PRINCIPAL_TYPE PRIVILEGE
---- ---------- ---------- --------- ---------- -------- ---------- ------------------ --------- -------------- ---------
* 1 GRANT NO GSMADMIN_INTERNAL DATABASE RESOLVE
* 2 GRANT NO GGSYS DATABASE RESOLVE
towel.blinkenlights.nl 23 23 1 GRANT NO DEMO1 DATABASE CONNECT

Now I can connect as my user:

SQL> connect DEMO1/demo@//localhost/PDB1
Connected.
 
SQL> declare
2 c utl_tcp.connection;
3 n number:=0;
4 begin
5 c:=utl_tcp.open_connection(remote_host=>'towel.blinkenlights.nl',remote_port=>23);
6 end;
7 /
 
PL/SQL procedure successfully completed.

If you don’t know why I used towel.blinkenlights.nl, then just try to telnet to it and have fun…
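If you need to revoke that access later, the corresponding remove call should do it (a sketch, not executed here):

SQL> exec dbms_network_acl_admin.remove_host_ace(host=>'towel.blinkenlights.nl',lower_port=>23,upper_port=>23,ace=>xs$ace_type(privilege_list=>xs$name_list('connect'),principal_name=>'DEMO1',principal_type=>xs_acl.ptype_db));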

 


Create constraints in your datawarehouse – why and how

Fri, 2017-09-08 14:13

We still see some developers not declaring referential integrity constraints in datawarehouse databases because they think they don't need them (the integrity of the data has been validated by the ETL). Here is a small demo I did to show why you need to declare them, and how to do it without adding any overhead on the ETL.

Test case

I create 3 dimension tables and 1 fact table:

21:01:18 SQL> create table DIM1 (DIM1_ID number, DIM1_ATT1 varchar2(20));
Table DIM1 created.
 
21:01:19 SQL> create table DIM2 (DIM2_ID number, DIM2_ATT1 varchar2(20));
Table DIM2 created.
 
21:01:20 SQL> create table DIM3 (DIM3_ID number, DIM3_ATT1 varchar2(20));
Table DIM3 created.
 
21:01:21 SQL> create table FACT (DIM1_ID number, DIM2_ID number, DIM3_ID number,MEAS1 number);
Table FACT created.

I insert 10 million rows into the fact table:

21:01:22 SQL> insert into FACT select mod(rownum,3),mod(rownum,5),mod(rownum,10),rownum from xmltable('1 to 10000000');
10,000,000 rows inserted.
 
Elapsed: 00:00:18.983

and fill the dimension tables from it:

21:01:42 SQL> insert into DIM1 select distinct DIM1_ID,'...'||DIM1_ID from FACT;
3 rows inserted.
 
Elapsed: 00:00:01.540
 
21:01:52 SQL> insert into DIM2 select distinct DIM2_ID,'...'||DIM2_ID from FACT;
5 rows inserted.
 
Elapsed: 00:00:01.635
 
21:01:57 SQL> insert into DIM3 select distinct DIM3_ID,'...'||DIM3_ID from FACT;
10 rows inserted.
 
Elapsed: 00:00:01.579
 
21:01:58 SQL> commit;
Commit complete.

Query joining fact with one dimension

I’ll run the following query:

21:01:58 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:01.015

Here is the execution plan:

21:02:12 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 1826335751
 
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7514 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7514 (2)| 00:00:01 |
| 3 | TABLE ACCESS FULL| DIM1 | 3 | 39 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7482 (1)| 00:00:01 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

Actually, we don’t need that join. A dimension table has two goals:

  • filter facts on the dimension attributes. Example: filter on customer last name
  • add dimension attributes to the result. Example: add customer first name

Here, there is no WHERE clause on DIM1 columns, and no columns from DIM1 selected. We don’t need to join to DIM1. However, we often see those useless joins for two reasons:

  • We query a view that joins the fact with all dimensions
  • The query is generated by a reporting tool which always joins to the dimensions

Join elimination

The Oracle optimizer is able to remove this kind of unnecessary join. But one piece of information is missing here for the optimizer. We know that all rows in the fact table have a matching row in each dimension, but Oracle doesn't know that. And if a fact row had no matching row, then the inner join should not return it. For this reason, the join must be done.

Let’s give this information to the optimizer: declare the foreign key from FACT to DIM1 so that Oracle knows that there is a many-to-one relationship:

21:02:17 SQL> alter table DIM1 add constraint DIM1_PK primary key(DIM1_ID);
Table DIM1 altered.
 
Elapsed: 00:00:00.051
 
21:02:20 SQL> alter table FACT add constraint DIM1_FK foreign key(DIM1_ID) references DIM1;
Table FACT altered.
 
Elapsed: 00:00:03.210

I've spent 3 seconds here creating this foreign key (it would have been much longer with a real fact table with lots of columns and rows), but now the optimizer is able to eliminate the join:

21:02:24 SQL> select count(*) from FACT join DIM1 using(DIM1_ID);
 
COUNT(*)
--------
10000000
 
21:02:25 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4pqjrjkc7sn17, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7488 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM1_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

No join is needed here, and the query is faster. This is exactly the point of this blog post: to show you that declaring constraints improves query performance. It adds information for the optimizer, like statistics do. Statistics give estimated cardinalities; foreign keys give an exact cardinality (many-to-one).

No validate

When loading a datawarehouse, you usually don’t need to validate the constraints because data was bulk loaded from a staging area where all data validation has been done. You don’t want to spend time validating constraints (the 3 seconds in my small example above) and this is why some datawarehouse developers do not declare constraints.

However, we can declare constraints without validating them. Let’s do that for the second dimension table:

21:02:34 SQL> alter table DIM2 add constraint DIM2_PK primary key(DIM2_ID) novalidate;
Table DIM2 altered.
 
Elapsed: 00:00:00.018
21:02:35 SQL> alter table FACT add constraint DIM2_FK foreign key(DIM2_ID) references DIM2 novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.009

That was much faster than the 3 seconds we had for the 'validate' constraint, which is the default. Creating a constraint with NOVALIDATE is immediate and does not depend on the size of the table.

However this is not sufficient to get the join elimination:

21:02:39 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
21:02:40 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3858910383
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7518 (100)| |
| 1 | SORT AGGREGATE | | 1 | 26 | | |
|* 2 | HASH JOIN | | 10M| 253M| 7518 (2)| 00:00:01 |
| 3 | INDEX FULL SCAN | DIM2_PK | 5 | 65 | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| FACT | 10M| 126M| 7488 (1)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)

The constraint ensures that no rows will be inserted without a matching row in the dimension. However, because Oracle has not validated the existing rows itself, it does not apply the join elimination, just in case a previously existing row has no matching dimension row.

Rely novalidate

If you want the optimizer to do join elimination on a 'novalidate' constraint, then it has to trust you and rely on the constraint that you have validated yourself.

RELY is an attribute of the constraint that you can set:

21:02:44 SQL> alter table DIM2 modify constraint DIM2_PK rely;
Table DIM2 altered.
 
Elapsed: 00:00:00.016
 
21:02:45 SQL> alter table FACT modify constraint DIM2_FK rely;
Table FACT altered.
 
Elapsed: 00:00:00.010

But this is not sufficient. You told Oracle to rely on your constraints, but you must also allow Oracle to trust them.

Trusted

The join elimination is a rewrite of the query and, by default, rewrite is enabled but only when integrity is enforced by Oracle:

21:02:50 SQL> show parameter query_rewrite
NAME TYPE VALUE
----------------------- ------ --------
query_rewrite_enabled string TRUE
query_rewrite_integrity string ENFORCED

Let's allow our session to use rewrite transformations that trust our RELY constraints:

21:02:52 SQL> alter session set query_rewrite_integrity=trusted;
Session altered.

Now, joining to DIM2 without using DIM2 columns outside of the join allows join elimination:

21:02:57 SQL> select count(*) from FACT join DIM2 using(DIM2_ID);
 
COUNT(*)
--------
10000000
 
Elapsed: 00:00:00.185
21:02:58 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 4t9g2n6duw0jf, child number 0
-------------------------------------
select count(*) from FACT join DIM2 using(DIM2_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7494 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 10M| 126M| 7494 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM2_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

In 12.2 the execution plan has a note to show that the plan depends on a RELY constraint.

From this example, you can see that you can, and should, create RELY NOVALIDATE constraints on tables where you know the existing data is valid. They are created immediately, without any overhead on the load process, and they help to improve the queries generated on your dimensional model.
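For reference, on a freshly loaded star schema the pattern recommended here boils down to the following (a sketch with hypothetical table names; ENABLE is the default):

SQL> alter table DIM add constraint DIM_PK primary key(DIM_ID) rely novalidate;
SQL> alter table FACT add constraint DIM_FK foreign key(DIM_ID) references DIM rely novalidate;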

Rely Disable

I said that a NOVALIDATE constraint has no overhead when created, but you may have further inserts or updates in your datawarehouse. Then those constraints will have to be verified, which may add a little overhead. In this case, you can go further and disable the constraint:

21:03:04 SQL> alter table DIM3 add constraint DIM3_PK primary key(DIM3_ID) rely;
Table DIM3 altered.
 
Elapsed: 00:00:00.059
 
21:03:05 SQL> alter table FACT add constraint DIM3_FK foreign key(DIM3_ID) references DIM3 rely disable novalidate;
Table FACT altered.
 
Elapsed: 00:00:00.014

Note that I had to set the referenced constraint DIM3_PK to RELY here, even though it is enabled and validated, or I would get: ORA-25158: Cannot specify RELY for foreign key if the associated primary key is NORELY.

My session still trusts RELY constraints for query rewrite:

21:03:07 SQL> show parameter query_rewrite
 
NAME TYPE VALUE
----------------------- ------ -------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED

Now, the join elimination occurs:

21:03:08 SQL> select count(*) from FACT join DIM3 using(DIM3_ID);
 
COUNT(*)
--------
10000000
 
21:03:09 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 3bhs523zyudf0, child number 0
-------------------------------------
select count(*) from FACT join DIM3 using(DIM3_ID)
 
Plan hash value: 3735838348
 
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 7505 (100)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| FACT | 11M| 138M| 7505 (1)| 00:00:01 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("FACT"."DIM3_ID" IS NOT NULL)
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- rely constraint used for this statement

So, we can still benefit from the query optimization even with the RELY DISABLE NOVALIDATE.

But I would not recommend this. Be careful. Here are my foreign key constraints:

21:03:15 SQL> select table_name,constraint_type,constraint_name,status,validated,rely from all_constraints where owner='DEMO' and table_name='FACT' order by 4 desc,5 desc,6 nulls last;
 
TABLE_NAME CONSTRAINT_TYPE CONSTRAINT_NAME STATUS VALIDATED RELY
---------- --------------- --------------- ------ --------- ----
FACT R DIM1_FK ENABLED VALIDATED
FACT R DIM2_FK ENABLED NOT VALIDATED RELY
FACT R DIM3_FK DISABLED NOT VALIDATED RELY

For DIM1_FK and DIM2_FK, the constraints protect us from inconsistencies:

21:03:17 SQL> insert into FACT(DIM1_ID)values(666);
 
Error starting at line : 1 in command -
insert into FACT(DIM1_ID)values(666)
Error report -
ORA-02291: integrity constraint (DEMO.DIM1_FK) violated - parent key not found

But the disabled one will allow inconsistencies:

21:03:19 SQL> insert into FACT(DIM3_ID)values(666);
1 row inserted.

That's bad. I roll this back immediately:

21:03:20 SQL> rollback;
Rollback complete.

Star transformation

Join elimination is not the only transformation that needs to know about the many-to-one relationship between the fact table and the dimensions. You usually create a bitmap index on each foreign key to the dimensions, to get higher selectivity when fetching the fact rows for a combination of criteria on the dimension attributes.

21:03:24 SQL> create bitmap index FACT_DIM1 on FACT(DIM1_ID);
Index FACT_DIM1 created.
 
21:03:29 SQL> create bitmap index FACT_DIM2 on FACT(DIM2_ID);
Index FACT_DIM2 created.
 
21:03:33 SQL> create bitmap index FACT_DIM3 on FACT(DIM3_ID);
Index FACT_DIM3 created.

Here is the kind of query with predicates on the attributes of each dimension:

21:03:35 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

By default, the optimizer applies those predicates on the dimensions and does a cartesian join to get all accepted combinations of dimension IDs. Then the rows can be fetched from the fact table:

21:03:37 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 0
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1924236134
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 5657 (100)| |
| 1 | SORT AGGREGATE | | 1 | 114 | | |
| 2 | NESTED LOOPS | | 55826 | 6215K| 5657 (1)| 00:00:01 |
| 3 | MERGE JOIN CARTESIAN | | 1 | 75 | 9 (0)| 00:00:01 |
| 4 | MERGE JOIN CARTESIAN | | 1 | 50 | 6 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
| 6 | BUFFER SORT | | 1 | 25 | 3 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 8 | BUFFER SORT | | 1 | 25 | 6 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
| 10 | BITMAP CONVERSION COUNT | | 55826 | 2126K| 5657 (1)| 00:00:01 |
| 11 | BITMAP AND | | | | | |
|* 12 | BITMAP INDEX SINGLE VALUE| FACT_DIM3 | | | | |
|* 13 | BITMAP INDEX SINGLE VALUE| FACT_DIM2 | | | | |
|* 14 | BITMAP INDEX SINGLE VALUE| FACT_DIM1 | | | | |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
5 - filter("DIM1"."DIM1_ATT1"='...0')
7 - filter("DIM2"."DIM2_ATT1"='...0')
9 - filter("DIM3"."DIM3_ATT1"='...0')
12 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
13 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
14 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")

Here rows are fetched from the fact table through a nested loop from the cartesian join on the dimensions, using bitmap index access for each loop. If there are lots of rows to fetch, then the optimizer will choose a hash join and will have to full scan the fact table, which is expensive. To lower that cost, the optimizer can add an 'IN (SELECT DIM_ID FROM DIM WHERE DIM_ATT ...)' subquery for very selective dimensions. This is the star transformation, and it relies on the foreign key constraints.
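Conceptually, the transformed query looks something like this (a simplified sketch of what the optimizer generates internally, not the literal rewritten text):

select count(*) from FACT
where DIM1_ID in (select DIM1_ID from DIM1 where DIM1_ATT1='...0')
and DIM2_ID in (select DIM2_ID from DIM2 where DIM2_ATT1='...0')
and DIM3_ID in (select DIM3_ID from DIM3 where DIM3_ATT1='...0');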

It is not enabled by default:

21:03:43 SQL> show parameter star
NAME TYPE VALUE
---------------------------- ------- -----
star_transformation_enabled string FALSE

We can enable it and then it is a cost based transformation:

21:03:45 SQL> alter session set star_transformation_enabled=true;
Session altered.

Here is my example:

21:03:47 SQL> select count(*) from FACT
2 join DIM1 using(DIM1_ID) join DIM2 using(DIM2_ID) join DIM3 using(DIM3_ID)
3 where dim1_att1='...0' and dim2_att1='...0' and dim3_att1='...0';
 
COUNT(*)
--------
333333

The star transformation, changing a join to an 'IN ()', is possible only when we know that there is a many-to-one relationship. We have all the constraints for that, disabled or not, validated or not, but all with RELY. Then the star transformation can occur:

21:03:51 SQL> select * from dbms_xplan.display_cursor();
 
PLAN_TABLE_OUTPUT
-----------------
SQL_ID 01jmjv0sz1dpq, child number 1
-------------------------------------
select count(*) from FACT join DIM1 using(DIM1_ID) join DIM2
using(DIM2_ID) join DIM3 using(DIM3_ID) where dim1_att1='...0' and
dim2_att1='...0' and dim3_att1='...0'
 
Plan hash value: 1831539117
 
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 68 (100)| |
| 1 | SORT AGGREGATE | | 1 | 38 | | |
|* 2 | HASH JOIN | | 2 | 76 | 68 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
| 4 | VIEW | VW_ST_62BA0C91 | 8 | 104 | 65 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 8 | 608 | 56 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION TO ROWIDS| | 8 | 427 | 22 (5)| 00:00:01 |
| 7 | BITMAP AND | | | | | |
| 8 | BITMAP MERGE | | | | | |
| 9 | BITMAP KEY ITERATION | | | | | |
|* 10 | TABLE ACCESS FULL | DIM1 | 1 | 25 | 3 (0)| 00:00:01 |
|* 11 | BITMAP INDEX RANGE SCAN| FACT_DIM1 | | | | |
| 12 | BITMAP MERGE | | | | | |
| 13 | BITMAP KEY ITERATION | | | | | |
|* 14 | TABLE ACCESS FULL | DIM2 | 1 | 25 | 3 (0)| 00:00:01 |
|* 15 | BITMAP INDEX RANGE SCAN| FACT_DIM2 | | | | |
| 16 | BITMAP MERGE | | | | | |
| 17 | BITMAP KEY ITERATION | | | | | |
|* 18 | TABLE ACCESS FULL | DIM3 | 1 | 25 | 3 (0)| 00:00:01 |
|* 19 | BITMAP INDEX RANGE SCAN| FACT_DIM3 | | | | |
| 20 | TABLE ACCESS BY USER ROWID | FACT | 1 | 25 | 43 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("ITEM_1"="DIM2"."DIM2_ID")
3 - filter("DIM2"."DIM2_ATT1"='...0')
10 - filter("DIM1"."DIM1_ATT1"='...0')
11 - access("FACT"."DIM1_ID"="DIM1"."DIM1_ID")
14 - filter("DIM2"."DIM2_ATT1"='...0')
15 - access("FACT"."DIM2_ID"="DIM2"."DIM2_ID")
18 - filter("DIM3"."DIM3_ATT1"='...0')
19 - access("FACT"."DIM3_ID"="DIM3"."DIM3_ID")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- star transformation used for this statement
- this is an adaptive plan

Here, each dimension drives a range scan on the bitmap index: the predicate on the dimension table returns the dimension IDs for the index lookup on the fact table. The big advantage of bitmap indexes here is that when this access path is used for several dimensions, the bitmap results can be combined before going to the table. This transformation avoids the join, so you must be sure that there is a many-to-one relationship.

In summary

As you should rely on the integrity of the data in your datawarehouse, these are the parameters to check when querying fact-dimension schemas:

NAME TYPE VALUE
---------------------------- ------- ------
query_rewrite_enabled string TRUE
query_rewrite_integrity string TRUSTED
star_transformation_enabled string FALSE

And you should define all the constraints. When you are sure about the integrity of the data, those constraints can be created RELY ENABLE NOVALIDATE. If some processing must be optimized by not enforcing constraint verification, then you may create those constraints as RELY DISABLE NOVALIDATE, but the gain will probably be minimal and the risk high. And remember that data is not updated only by well-controlled processes: one day you may have to do a manual update to fix something, and an enabled constraint can prevent terrible errors.

I have not covered all the optimizer transformations that rely on constraints. When using materialized views, the query rewrite capability also relies on constraints. Relationship cardinality is one of the most important pieces of information in database design, and it must be known by the optimizer.

 


impdp content=metadata_only locks the stats

Wed, 2017-09-06 15:20

With Oracle you can learn something every day. Today, preparing a migration to 12.2, I found that all tables had locked statistics. I learned that this is the expected behavior since 10.2 when importing metadata_only including statistics, to prevent the automatic statistics gathering job from coming along and replacing the stats with '0 rows'.

It is documented in DataPump Import Without Data Locks Table Statistics (Doc ID 415081.1), but as I was really surprised about that (and also frustrated to learn a 10.2 thing when migrating to 12.2), I wanted to test it myself.

I create a table DEMO with statistics:

SQL> connect sys/oracle@//localhost/pdb1 as sysdba
Connected.
SQL> create table demo.demo as select * from dual;
Table created.
 
SQL> create index demo.demo on demo.demo(dummy);
Index created.
 
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.
 
SQL> create or replace directory TMP as '/tmp';
Directory created.
 
SQL> select count(*) from DEMO.DEMO;
 
COUNT(*)
----------
1
 
SQL> select object_type from dba_objects where owner='DEMO' and object_name='DEMO';
 
OBJECT_TYPE
-----------------------
TABLE
INDEX
 
SQL> select owner,table_name,last_analyzed,stattype_locked,num_rows from dba_tab_statistics where owner='DEMO' and table_name='DEMO';
 
OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 1

I export it:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "DEMO"."DEMO" 5.054 KB 1 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/tmp/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Sep 6 19:14:44 2017 elapsed 0 00:00:09

And drop it:

SQL> connect sys/oracle@//localhost/pdb1 as sysdba
Connected.
SQL> drop table demo.demo;
Table dropped.

Now import metadata only (for example because I want to change NLS semantics before importing the data)

Import: Release 12.2.0.1.0 - Production on Wed Sep 6 19:21:28 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP content=metadata_only exclude=index
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_TABLE_01" successfully completed at Wed Sep 6 19:21:39 2017 elapsed 0 00:00:11

If I check the statistics:

SQL> select owner,table_name,last_analyzed,stattype_locked,num_rows from dba_tab_statistics where owner='DEMO' and table_name='DEMO';
 
OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 ALL 1

Stats are locked. I suppose the idea is that you get the tables with the same statistics as production, for example, so that you can load them with a subset of data but still expect the same execution plans as in production. But this is not what I want for a migration.

One possibility is to unlock the stats once you have imported the data.
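For this DEMO table it would be something like the following (a sketch; dbms_stats.unlock_schema_stats exists as well for a whole schema):

SQL> exec dbms_stats.unlock_table_stats('DEMO','DEMO');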

The other possibility is to import metadata without the statistics:

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP content=metadata_only exclude=index exclude=table_statistics
Processing object type TABLE_EXPORT/TABLE/TABLE
Job "SYS"."SYS_IMPORT_TABLE_01" successfully completed at Wed Sep 6 21:11:03 2017 elapsed 0 00:00:03

Then the table statistics are not locked:

OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO

Once you have changed what you want on the tables, you import the data (table_exists_action=truncate) and then you import the remaining: indexes, ref_constraints, triggers.
This is where you can also add include=table_statistics:

Starting "SYS"."SYS_IMPORT_TABLE_01": "sys/********@//localhost/pdb1 AS SYSDBA" tables=DEMO.DEMO directory=TMP table_exists_action=truncate include=index include=table_statistics
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

So that you have the statistics from the source, unlocked.

OWNER TABLE_NAME LAST_ANALYZED STATTYPE_LOCKED NUM_ROWS
---------- -------------------- -------------------- -------------------- ----------
DEMO DEMO 06-SEP-17 1

 


12c dbms_stats.gather_table_stats on GTT do not commit

Mon, 2017-09-04 14:21

In my UKOUG OracleScene article on 12c online statistics and GTT I mentioned the following:

A final note about those 12c changes in statistics gathering on GTT. In 11g the dbms_stats did a commit at the start. So if you did gather stats after the load, you had to set the GTT as ON COMMIT PRESERVE ROWS. Or you just vacuum what you’ve loaded. That has changed in 12c. If you now choose to do a conventional insert followed by dbms_stats (having set private stats of course) then you don’t need to set on commit preserve rows anymore.

Today, I realized that I've never explained exactly when dbms_stats.gather_table_stats commits the transaction and when it does not. Because, of course, it depends. In summary: it does not commit for a 12c GTT owned by a non-SYS user, with private statistics.

Here is an example. I connect as non-SYS user:

SQL> connect demo/demo@//localhost/pdb1
Connected.
SQL> show user
USER is "DEMO"

I create a permanent table and a global temporary table:

SQL> create table DEMO(text varchar2(20));
Table created.
 
SQL> create global temporary table DEMOGTT(text varchar2(20));
Table created.

In the permanent table, I insert my row. The goal is to be sure that this insert is not committed and can be rolled back at the end:

SQL> insert into DEMO values('Forget me, please!');
1 row created.

In the global temporary table I insert one row. The goal is to be sure that the row remains until the end of my transaction (on commit delete rows):

SQL> insert into DEMOGTT values('Preserve me, please!');
1 row created.

Here it is:

SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

Then, I gather statistics on the GTT:

SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.

I check that my row in the GTT is still there, which is proof that no commit happened:

SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!

And I check that, as no commit happened, I can roll back my previous insert on the permanent table:

SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

This is the new behavior in 12c. The same in 11g would have committed my transaction before and after the call to dbms_stats.

GTT only

Here is the same example when gathering the stats on the permanent table:
SQL> show user
USER is "DEMO"
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
&nbsp:
SQL> select * from DEMOGTT;
no rows selected
&nbsp:
SQL> rollback;
Rollback complete.
&nbsp:
SQL> select * from DEMO;
&nbsp:
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: no rows in the GTT (on commit delete rows), and the insert into the permanent table was committed before my rollback.

Not for SYS

When connected as SYS:
SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
no rows selected
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
 
TEXT
--------------------
Forget me, please!

The transaction was committed by dbms_stats here: when the table is owned by SYS, dbms_stats commits.

I mean, not for SYS owner

If I am connected as SYS but gather stats on a non-SYS table, dbms_stats does not commit:

SQL> show user
USER is "SYS"
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMOGTT');
PL/SQL procedure successfully completed.
 
SQL> select * from DEMOGTT;
 
TEXT
--------------------
Preserve me, please!
 
SQL> rollback;
Rollback complete.
 
SQL> select * from DEMO;
no rows selected

The behaviour is not related to the user who runs dbms_stats, but to the owner of the GTT.

Private statistics only

The default in 12c for GTTs is private statistics, visible to the session only. Let's try the same with shared statistics (as in 11g):
SQL> show user
USER is "DEMO"
 
SQL> select dbms_stats.get_prefs(ownname=>user,tabname=>'DEMO_GTT',pname=>'GLOBAL_TEMP_TABLE_STATS') from dual;
 
DBMS_STATS.GET_PREFS(OWNNAME=>USER,TABNAME=>'DEMO_GTT',PNAME=>'GLOBAL_TEMP_TABLE
--------------------------------------------------------------------------------
SESSION
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMO_GTT','GLOBAL_TEMP_TABLE_STATS','SHARED' );
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMOGTT');
PL/SQL procedure successfully completed.
&nbsp:
SQL> select * from DEMOGTT;
no rows selected
&nbsp:
SQL> rollback;
Rollback complete.
&nbsp:
SQL> select * from DEMO;
&nbsp:
TEXT
--------------------
Forget me, please!
 
SQL> exec dbms_stats.set_table_prefs(user,'DEMO_GTT', 'GLOBAL_TEMP_TABLE_STATS',null);
PL/SQL procedure successfully completed.

The dbms_stats did commit my transaction here.

So what?

Private session statistics for GTTs are a great feature. Use them: gather statistics after filling the GTT. And don't worry about ON COMMIT DELETE ROWS GTTs (the default), because this statistics gathering does not commit the transaction.
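A typical pattern then looks like this (a sketch reusing the DEMOGTT table from above; the staging table is hypothetical):

insert into DEMOGTT select text from STAGING_TABLE;
exec dbms_stats.gather_table_stats(user,'DEMOGTT');
-- run the queries that use the GTT with accurate statistics, then:
commit;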

 


When PDB name conflicts with CDB name

Sun, 2017-09-03 09:24

Going to multitenant architecture is not a big change. The administration things (DBA, monitoring, backups) connect to the CDB and the application things connect to the PDB. Without the multitenant option, it is still recommended to go to the CDB architecture. The non-CDB is deprecated and the multitenant architecture brings interesting features. People often ask how to name the CDB and the PDB, especially when they have naming rules or policies in the company. My recommendation is to name the PDB as you are used to naming the databases: the name often gives an idea of the data that is inside, the application, and the environment. The CDB is the container, and in my opinion, you should apply the same naming rules as for servers. Don’t forget that pluggable databases are made to be moved across CDB, so the CDB name should not depend on the content.

But with single tenant, you have a one-to-one relationship between the CDB and the PDB, and then may come the idea of giving the same name to the CDB and the PDB… I'm not sure whether it is supported or not, and please, don't do that.

Service Name

There's one rule: the service name must be unique on a server, especially when registered to the same listener. The PDB name will be the default service name registered by the PDB, and the DB_UNIQUE_NAME of the CDB will be the default service name registered by the CDB. So the PDB name must be different from the DB_UNIQUE_NAME.

With this rule, it should be possible to have the same name for the CDB (the DB_NAME) and the PDB, given that we have set a different DB_UNIQUE_NAME.

Here is an example. The name of my Container Database is CDB1. But as it is part of a Data Guard configuration I changed the unique name to CDB1A (and standby will be CDB1B).

Here are the services from my CDB:

SQL> select * from v$services;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH GOAL DTP AQ_HA_NOTIFICATION CLB_GOAL COMMIT_OUTESSION_STATE_CONSISTENCY GLOBAL PDB SQL_TRANSLATION_PROFILE MAX_LAG_TIME STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT CON_ID
---------- ---- --------- ------------ ------------- ------------------ ---- --- ------------------ -------- ---------------------------------- ------ --- ----------------------- ------------ ----------- ---------------- ------------- ------
7 CDB1A 3104886812 CDB1A 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1
1 SYS$BACKGROUND 165959219 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
2 SYS$USERS 3427055676 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
0 pdb1 1888881990 pdb1 0 NONE N NO SHORT NO NO PDB1 NONE NONE 0 4
6 CDB1XDB 1202503288 CDB1XDB 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1

All are default services: CDB1A is the DB_UNIQUE_NAME, SYS$BACKGROUND is for background processes, SYS$USERS is used when connecting without a service name, and CDB1XDB is used to connect to the XDB dispatchers. PDB1 is the default service of my pluggable database PDB1.

I can also look at the services registered in the listener:


SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-AUG-2017 20:34:36
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-AUG-2017 20:41:33
Uptime 0 days 23 hr. 53 min. 3 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "57c2283990d42152e053684ea8c05ea0" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1A" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

There is just one additional service here: the one named with the GUID of my PDB (see https://blog.dbi-services.com/service-696c6f76656d756c746974656e616e74-has-1-instances/)

ORA-65149

Do you see any service named 'CDB1' here? No. Then I should be able to create a PDB with this name.

SQL> create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1');
 
Error starting at line : 1 in command -
create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1')
Error report -
ORA-65149: PDB name conflicts with existing service name in the CDB or the PDB
65149. 00000 - "PDB name conflicts with existing service name in the CDB or the PDB"
*Cause: An attempt was made to create a pluggable database (PDB) whose
name conflicts with the existing service name in the container
database (CDB) or the PDB.
*Action: Choose a different name for the PDB.

Ok. This is impossible. However, the error message is not correct: my PDB name does not conflict with any existing service name. It may conflict with the instance name or the DB_NAME, but not with any service.

NID

As I'm not satisfied with this, I try to find another way to have the same name for the CDB and the PDB. I have a pluggable database named 'PDB1' and I'll try to change the CDB name to match it:


[oracle@VM104 ~]$ nid dbname=PDB1 target=sys/oracle
 
DBNEWID: Release 12.2.0.1.0 - Production on Mon Aug 28 20:40:08 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to database CDB1 (DBID=926862412)
 
Connected to server version 12.2.0
 
Control Files in database:
/u01/oradata/CDB1A/control01.ctl
/u01/fast_recovery_area/CDB1A/control02.ctl
 
The following datafiles are read-only:
/u01/oradata/CDB1A/PDB1/USERS2.db (17)
These files must be writable by this utility.
 
Change database ID and database name CDB1 to PDB1? (Y/[N]) => Y
 
Proceeding with operation
Changing database ID from 926862412 to 3460932968
Changing database name from CDB1 to PDB1
Control File /u01/oradata/CDB1A/control01.ctl - modified
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - modified
Datafile /u01/oradata/CDB1A/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/users01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS2.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/temp01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Control File /u01/oradata/CDB1A/control01.ctl - dbid changed, wrote new name
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - dbid changed, wrote new name
Instance shut down
 
Database name changed to PDB1.
Modify parameter file and generate a new password file before restarting.
Database ID for database PDB1 changed to 3460932968.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
 
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
ORA-01103: database name 'PDB1' in control file is not 'CDB1'
 
SQL> alter system set db_name=PDB1 scope=spfile;
 
System altered.
 
SQL> shutdown immediate
ORA-01507: database not mounted
 
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
 
SQL> alter database open resetlogs;
 
Database altered.

That’s done.
My CDB is named PDB1:
SQL> select * from v$database;
 
DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TIME LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE# CONTROLFILE_TYPE CONTROLFILE_CREATED CONTROLFILE_SEQUENCE# CONTROLFILE_CHANGE# CONTROLFILE_TIME OPEN_RESETLOGS VERSION_TIME OPEN_MODE PROTECTION_MODE PROTECTION_LEVEL REMOTE_ARCHIVE ACTIVATION# SWITCHOVER# DATABASE_ROLE ARCHIVELOG_CHANGE# ARCHIVELOG_COMPRESSION SWITCHOVER_STATUS DATAGUARD_BROKER GUARD_STATUS SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_UI FORCE_LOGGING PLATFORM_ID PLATFORM_NAME RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN FLASHBACK_ON SUPPLEMENTAL_LOG_DATA_FK SUPPLEMENTAL_LOG_DATA_ALL DB_UNIQUE_NAME STANDBY_BECAME_PRIMARY_SCN FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET FS_FAILOVER_THRESHOLD FS_FAILOVER_OBSERVER_PRESENT FS_FAILOVER_OBSERVER_HOST CONTROLFILE_CONVERTED PRIMARY_DB_UNIQUE_NAME SUPPLEMENTAL_LOG_DATA_PL MIN_REQUIRED_CAPTURE_CHANGE# CDB CON_ID PENDING_ROLE_CHANGE_TASKS CON_DBID FORCE_FULL_DB_CACHING
---- ---- ------- ----------------- -------------- ----------------------- -------------------- -------- ------------------ --------------- ---------------- ------------------- --------------------- ------------------- ---------------- -------------- ------------ --------- --------------- ---------------- -------------- ----------- ----------- ------------- ------------------ ---------------------- ----------------- ---------------- ------------ ------------------------- ------------------------ ------------------------ ------------- ----------- ------------- ---------------------------- ---------------------- ----------- ------------ ------------------------ ------------------------- -------------- -------------------------- ------------------ -------------------------- --------------------- ---------------------------- ------------------------- --------------------- ---------------------- ------------------------ ---------------------------- --- ------ ------------------------- -------- ---------------------
3460932968 PDB1 27-AUG-17 1495032 28-AUG-17 1408558 27-AUG-17 ARCHIVELOG 1495035 0 CURRENT 27-AUG-17 2574 1496538 28-AUG-17 NOT ALLOWED 27-AUG-17 READ WRITE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE ENABLED 3460947145 3460947145 PRIMARY 0 DISABLED NOT ALLOWED DISABLED NONE NO NO NO NO 13 Linux x86 64-bit 3 3 1497050 NO NO NO CDB1A 0 DISABLED 0 NO NO YES 0 NOT APPLICABLE 3460932968 NO

And I have a PDB with the same name:

SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
 
SQL> alter pluggable database PDB1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO

What was forbidden with a wrong error message was made possible with this other way.

So what?

Please, do not take this as a solution. There is clearly a problem here. Maybe the documentation and the error message are wrong. Maybe NID has a bug, allowing something that should be blocked. Or CREATE PLUGGABLE DATABASE has a bug, blocking something that should be possible. Until this is fixed (an SR is opened) I would recommend that the PDB name is always different from the CDB name, independently of service names. Well, I would recommend it anyway because using the same name brings a lot of confusion: when you mention a database name, people will not know whether you are referring to the CDB or the PDB.

 

Cet article When PDB name conflicts with CDB name est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths XI – Sample Scan

Sat, 2017-08-26 09:33

I was going to end this series with the previous post because the last access path available in Postgres is a bit special: a Seq Scan that returns only a sample of the rows, at random. However, it is the occasion to come back to the difference between random and sequential reads.

I’m still working on the same table as in the previous posts, with 10000 rows in 1429 pages. 5% of rows is 500 rows and 5% of blocks is about 72 pages.

Rows

Sometimes, you can answer your business question on a sample of rows, when you need an approximate result, trend or pattern. Let’s say that you want to sum() on only 5 percent of rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample bernoulli(5) ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1435.25..1435.26 rows=1 width=8) (actual time=1.940..1.940 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=1429
-> Sample Scan on public.demo1 (cost=0.00..1434.00 rows=500 width=4) (actual time=0.007..1.890 rows=509 loops=1)
Output: n, a, x
Sampling: bernoulli ('5'::real)
Buffers: shared hit=1429
Planning time: 0.373 ms
Execution time: 1.956 ms

This row sampling reads all rows and picks a sample of them at random. Unfortunately, it reads all blocks because you cannot get a good sample if you don’t know how many rows you have in each block. Working on a sample can make sense if you want to apply complex operations on the result. Here the cost in the database is similar to a Seq Scan: 1429 blocks read at seq_page_cost=1, but the sum() applied on 500 rows (cpu_operator_cost=0.0025) and 500 tuples from the scan and 1 tuple for the result, with cpu_tuple_cost=0.01
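To check my reading of this cost (my own decomposition, it is not displayed in the plan): 1429 pages x seq_page_cost(1) + 500 rows x cpu_tuple_cost(0.01) = 1434.00 for the Sample Scan, then 500 x cpu_operator_cost(0.0025) = 1.25 for the sum() and 1 x cpu_tuple_cost(0.01) for the result row, which gives the 1435.25..1435.26 of the Aggregate.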

From the execution statistics, you can see that the result is close to what we asked: 509 rows were sampled for a requested 5% of the 10000 rows (bernoulli sampling picks each row independently with a 5% probability, so the actual count varies around 500).

Oracle has a different syntax and different algorithm:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1tsadjdd9ddam, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 1 |00:00:00.01 | 581 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 581 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 397 (0)| 478 |00:00:00.01 | 581 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22] 2 - (rowset=256) "A"[NUMBER,22]

Here we have not read all the blocks. Only 40% of them. This is faster than the Postgres approach, but the drawback is that the result is not exact: 478 rows were returned here.

Blocks

When we can afford an approximate sampling, we can sample on blocks rather than on rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample system(5) ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Aggregate (cost=290.25..290.26 rows=1 width=8) (actual time=0.479..0.480 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=73
-> Sample Scan on public.demo1 (cost=0.00..289.00 rows=500 width=4) (actual time=0.016..0.377 rows=511 loops=1)
Output: n, a, x
Sampling: system ('5'::real)
Buffers: shared hit=73
Planning time: 0.698 ms
Execution time: 0.509 ms

The number of rows is still good here, but the result may depend on the blocks sampled. Only 73 blocks were read, which is about 5%, and of course the rows may be distributed differently within the blocks. However, the advantage is that it is faster as it reads fewer blocks. But those blocks being picked at random, they are by definition random reads: 71 pages estimated at random_page_cost=4 and, as in the previous case, 501 cpu_tuple_cost and 500 cpu_operator_cost
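Again, my own decomposition of this estimation: 71 pages x random_page_cost(4) + 500 rows x cpu_tuple_cost(0.01) = 289.00 for the Sample Scan, then 500 x cpu_operator_cost(0.0025) = 1.25 for the sum() and 1 x cpu_tuple_cost(0.01) for the result row, which gives the 290.25..290.26 of the Aggregate.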

With block sampling, Oracle reads a smaller number of blocks than with row sampling, but still more than 5%, and the number of rows is not exact: 798 rows here:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fqgbwqfavgdrn, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample block(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 22 (100)| 1 |00:00:00.01 | 134 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 134 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 22 (0)| 798 |00:00:00.01 | 134 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22] 2 - (rowset=256) "A"[NUMBER,22]

Again, as for the previous access paths: same concepts but different implementation between Postgres and Oracle. Everything looks similar and easily portable from a distant overview, but going into the details you see all those little differences which make it not so easy to be database agnostic or easily portable.

Summary

This is the end of this series comparing Postgres access path with Oracle ones. The goal is not to tell you that one is better than the other. They have a different approach, different targets, different price, different history and probably future. But understanding how they work and how they estimate the cost is a good way to learn. I’m myself learning a lot about Postgres while writing those posts, matching things I discover on Postgres with those I know for a while in Oracle.

Here is the list of posts on Access Path:

  1. Postgres vs. Oracle access paths – intro
  2. Postgres vs. Oracle access paths I – Seq Scan
  3. Postgres vs. Oracle access paths II – Index Only Scan
  4. Postgres vs. Oracle access paths III – Partial Index
  5. Postgres vs. Oracle access paths IV – Order By and Index
  6. Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX
  7. Postgres vs. Oracle access paths VI – Index Scan
  8. Postgres vs. Oracle access paths VII – Bitmap Index Scan
  9. Postgres vs. Oracle access paths VIII – Index Scan and Filter
  10. Postgres vs. Oracle access paths IX – Tid Scan
  11. Postgres vs. Oracle access paths X – Update
  12. Postgres vs. Oracle access paths XI – Sample Scan

I think my next series will be on Join methods.

 

Cet article Postgres vs. Oracle access paths XI – Sample Scan est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths X – Update

Thu, 2017-08-24 15:03

In the previous post we have seen the cheapest way to get one row, reading only one block from its physical location. But that’s the optimal case where the row has not moved. I’ll (nearly) conclude this series about access paths with an update.

ROWID in Oracle

Here is the ROWID of one row in Oracle:

select rowid from demo1 where n=1000;
ROWID
------------------
AAAR4WAAMAAAAEaAAF

There’s enough information here to get directly to the block with file_name and offset:
select file_name,dbms_rowid.rowid_block_number('AAAR4WAAMAAAAEaAAF')*block_size offset
from dba_data_files join dba_tablespaces using(tablespace_name)
where file_id=dbms_rowid.rowid_to_absolute_fno('AAAR4WAAMAAAAEaAAF','DEMO','DEMO1');
 
FILE_NAME OFFSET
---------------------------------------- ----------
/u01/oradata/CDB1A/PDB/users01.dbf 2310144

The ROWID also contains the index of the row within the block’s row directory:

select dbms_rowid.rowid_row_number('AAAR4WAAMAAAAEaAAF') from dual;
 
DBMS_ROWID.ROWID_ROW_NUMBER('AAAR4WAAMAAAAEAAAF')
-------------------------------------------------
5

TID in Postgres

And the TID of similar row in Postgres:

select ctid from demo1 where n=1000;
ctid
---------
(142,6)

The file is known from the table, as there is only one file per table:

show data_directory;
data_directory
----------------------------
/usr/share/postgresql/data
 
select pg_relation_filepath('demo1');
pg_relation_filepath
----------------------
base/16437/125852

The blocksize is common for the whole database:

show block_size;
block_size
------------
8192

Then the block is at offset 142*8192=1163264 within this file.
Within the block, the row is at index 6.
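As a quick check (my own sketch, not part of the original demo, reusing the data_directory and relation file path shown above), the block can be extracted from the shell with dd at that offset:

cd /usr/share/postgresql/data
# block 142 of 8192 bytes: byte offset 142*8192=1163264
dd if=base/16437/125852 bs=8192 skip=142 count=1 of=/tmp/demo1_block142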

SELECT

We have seen in the previous post that we can select using the ROWID/TID and Oracle and Postgres behave the same: only one block to read, cost estimation based on one random read:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8mj3ms08x0sfh, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAAR4WAAMAAAAEaAAF'
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

Different units but same signification: cost=1 for Oracle is for random reads, cost=1 for Postgres is for sequential reads and random reads are estimated to cost=4:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.358 ms
Execution time: 0.016 ms

Oracle UPDATE

Now I’m updating this row, changing the column X which contains 1000 ‘x’ characters to 1000 ‘y’ characters:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gpqv4k6m1q20y, child number 0
-------------------------------------
update /*+ */ demo1 set x=lpad('y',1000,'y') where rowid='AAAR4WAAMAAAAEaAAF'
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | | 1 (100)| 0 |00:00:00.01 | 4 |
| 1 | UPDATE | DEMO1 | 1 | | | 0 |00:00:00.01 | 4 |
| 2 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
2 - (upd=2) ROWID[ROWID,10], "X"[VARCHAR2,1000]

In addition to the access to the block (1 buffer) the update had to read 3 additional buffers. There are no indexes on this updated column and then Oracle has no additional maintenance to do. One buffer is the table block to update (the TABLE ACCESS BY USER ROWID was a consistent get, the update needs the current version of the block).

Additional buffers are from the UNDO tablespace for MVCC (Multi Version Concurrency Control). It is the first modification in my transaction and then has to update the transaction table and undo segment, which is why we see 2 additional buffers. Another update within the same transaction reads only two buffers in total:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gpqv4k6m1q20y, child number 0
-------------------------------------
update /*+ */ demo1 set x=lpad('z',1000,'z') where rowid='AAAR4WAAMAAAAEaAAF'
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | | 1 (100)| 0 |00:00:00.01 | 2 |
| 1 | UPDATE | DEMO1 | 1 | | | 0 |00:00:00.01 | 2 |
| 2 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
2 - (upd=2) ROWID[ROWID,10], "X"[VARCHAR2,1000]

Only the table blocks are read: one consistent read as of the beginning of the query (or of the transaction if in serializable isolation level) and one for the current block. Oracle has an optimization called In-Memory UNDO to avoid frequent access to the undo blocks.

There are no further re-visits needed. Oracle may choose to come back at commit if it can be done quickly (few blocks still in buffer cache) but that’s not required. The block can stay like this for years without the need to read it again for cleanup. If another session has to read it, then cleanup may be done by this session.

Postgres UPDATE

Here is the same update in Postgres:

explain (analyze,verbose,costs,buffers) update demo1 set x=lpad('y',1000,'y') where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Update on public.demo1 (cost=0.00..4.01 rows=1 width=46) (actual time=0.214..0.214 rows=0 loops=1)
Buffers: shared hit=6 dirtied=3
-> Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=46) (actual time=0.009..0.009 rows=1 loops=1)
Output: n, a, 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'::text, ctid
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.405 ms
Execution time: 0.232 ms

The Tid Scan is the same as for the select. Then the update has read 5 blocks and modified 3 of them. The update in Postgres is processed as a delete+insert. Here is my guess about those numbers. The new version is inserted, in a new block if there is no free space in the same block. The old version is updated. And the index must be maintained. Those are 3 blocks to modify. Here, the row was directly accessed through its TID. But we must find the index entry. The row contains the index value, and then an index scan is possible: two block reads for this small index having one branch only.

SELECT again

I said that with Oracle the row is updated in-place and doesn’t need further cleanup. If I run the same SELECT as the one I did before the UPDATE, I still have only one block to read:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8mj3ms08x0sfh, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAAR4WAAMAAAAEaAAF'
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

In Postgres, because the update was processed as insert+delete, running the same also reads only one block, but it returns no rows:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.442 ms
Execution time: 0.028 ms

The new version is in another block, then the TID to find it is different:

select ctid from demo1 where n=1000;
ctid
----------
(1428,5)
(1 row)

There was not enough space for another version of the whole row within the same block. Free space was found in the last block (1428). Of course, this is why the index was updated even if the indexed column did not change: it had to address a different block.

Let’s query with the new TID:

explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(1428,5)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.008..0.008 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(1428,5)'::tid)
Buffers: shared hit=1
Planning time: 0.449 ms
Execution time: 0.023 ms

Only one buffer read. However, as we have seen with the Index Only Scan, there is a need for cleanup to avoid Heap Fetches. There are also the old tuples that must be removed later, or the updated tables and indexes would grow forever.
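To verify this delete+insert behaviour, the pageinspect extension (if installed — this is my own check, not part of the original demo) can show the line pointers of both blocks: the old tuple in block 142 and the new version in block 1428.

create extension if not exists pageinspect;
-- old block: line pointer 6 holds the old version, its t_ctid points to the new location
select lp, t_xmin, t_xmax, t_ctid from heap_page_items(get_raw_page('demo1',142)) where lp=6;
-- new block: line pointer 5 holds the new version of the row
select lp, t_xmin, t_xmax, t_ctid from heap_page_items(get_raw_page('demo1',1428)) where lp=5;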

There’s only one Postgres access path remaining. That’s for the last post of this series, which will include the table of contents.

 

Cet article Postgres vs. Oracle access paths X – Update est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths IX – Tid Scan

Wed, 2017-08-23 15:29

In the previous post we have seen how Postgres and Oracle find the table row from the index entry, using the TID / ROWID. I’ll focus on this access path here and, with it, I will have covered all Postgres access paths to table data.

Oracle ACCESS BY ROWID

I start with Oracle because we have already seen the TABLE ACCESS BY ROWID. I’ll decompose an index access to the table. The first step is getting the ROWID from the index entry:

SQL> select /*+ */ rowid from demo1 where n=1000;
 
ROWID
------------------
AAASPkAAMAAABIaAAF

The ROWID contains the data object ID (to be able to identify the segment and then the tablespace), the relative file number within the tablespace, the block number within this file and the row number within the block. This can be stored in 10 bytes. When in an index entry, except if this is a global index on a partitioned table, we don’t need the object ID (because there’s a one-to-one relationship between the table and the index objects) and then only 6 bytes are stored in the index entry.
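As a side note (my own sketch, not in the original demo), dbms_rowid can decompose this extended ROWID into its components:

select dbms_rowid.rowid_object('AAASPkAAMAAABIaAAF') data_object_id,
       dbms_rowid.rowid_relative_fno('AAASPkAAMAAABIaAAF') relative_fno,
       dbms_rowid.rowid_block_number('AAASPkAAMAAABIaAAF') block_number,
       dbms_rowid.rowid_row_number('AAASPkAAMAAABIaAAF') row_number
from dual;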

This is a simple index access and the output (projection) is the ROWID:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 32tsqy19ctmd4, child number 0
-------------------------------------
select /*+ */ rowid from demo1 where n=1000
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 2 |
|* 1 | INDEX UNIQUE SCAN| DEMO1_N | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 2 |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - ROWID[ROWID,10]

Now with the ROWID, I query a column from the table:
SQL> select /*+ */ a from demo1 where rowid='AAASPkAAMAAABIaAAF';
 
A
----------
1

And the plan is exactly the ‘TABLE ACCESS’ part we have seen in previous posts on index scans:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID c46nq5t0sru8q, child number 0
-------------------------------------
select /*+ */ a from demo1 where rowid='AAASPkAAMAAABIaAAF'
Plan hash value: 3196731035
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 (100)| 1 |00:00:00.01 | 1 |
| 1 | TABLE ACCESS BY USER ROWID| DEMO1 | 1 | 1 | 1 (0)| 1 |00:00:00.01 | 1 |
-----------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22]

There’s no Predicate section visible here, but the access is done on the ROWID which contains the file number, block number, and row number. This is the fastest way to get one row: reading only one buffer.

Postgres Tid Scan

Same idea in Postgres where we can query the TID (Tuple ID):

select ctid from demo1 where n=1000 ;
ctid
---------
(142,6)
(1 row)

Because my table is stored in a file (no tablespace with multiple data files here) the TID contains only the block number and the row number within the block.
explain (analyze,verbose,costs,buffers) select ctid from demo1 where n=1000 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..8.30 rows=1 width=6) (actual time=0.009..0.009 rows=1 loops=1)
Output: ctid
Index Cond: (demo1.n = 1000)
Buffers: shared hit=3
Planning time: 0.429 ms
Execution time: 0.023 ms

We already have seen the cost of this operation: 116 startup operations, 2 index pages read at random_page_cost=4 and 1 result row at cpu_tuple_cost=0.01 (note that the query planner does not count the cpu_index_tuple_cost here).

Then here is the query using this TID:
explain (analyze,verbose,costs,buffers) select a from demo1 where ctid='(142,6)' ;
QUERY PLAN
------------------------------------------------------------------------------------------------------
Tid Scan on public.demo1 (cost=0.00..4.01 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1)
Output: a
TID Cond: (demo1.ctid = '(142,6)'::tid)
Buffers: shared hit=1
Planning time: 0.351 ms
Execution time: 0.017 ms

The cost estimation is very simple here: 1 seek()+read() at random_page_cost=4 and 1 result row at cpu_tuple_cost=0.01

Since the post on Index Only Scan, I’m working on a vacuumed table with no modifications. Now that I have the simplest access path, I’ll show it with an update, in the next post.

 

Cet article Postgres vs. Oracle access paths IX – Tid Scan est apparu en premier sur Blog dbi services.

Improving Statspack Experience

Wed, 2017-08-23 14:41

I published an article a few months ago in the UKOUG OracleScene magazine on Improving Statspack Experience: quick setup script, changing settings, filling Idle Events, etc. In the article, I used dbms_job to schedule the snapshots, because I have had this script for years and never took the time to rewrite it with dbms_scheduler. Thanks to Nicolas Jardot, here is the equivalent script using dbms_scheduler.

The idea is to have a script to run on each instance (when in RAC), so that each instance has its own jobs calling statspack.snap every hour and statspack.purge every week.

DECLARE
instno NUMBER;
snapjob VARCHAR2(30);
purgejob VARCHAR2(30);
BEGIN
select instance_number into instno from v$instance;
snapjob := 'PERFSTAT.STATSPACK_SNAP_' || instno;
purgejob := 'PERFSTAT.STATSPACK_PURGE_' || instno;
 
DBMS_SCHEDULER.CREATE_JOB (
job_name => snapjob,
job_type => 'PLSQL_BLOCK',
job_action => 'statspack.snap;',
number_of_arguments => 0,
start_date => systimestamp,
repeat_interval => 'FREQ=HOURLY;BYTIME=0000;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN',
end_date => NULL,
enabled => TRUE,
auto_drop => FALSE,
comments => 'Take hourly Statspack snapshot');
 
DBMS_SCHEDULER.CREATE_JOB (
job_name => purgejob,
job_type => 'PLSQL_BLOCK',
job_action => 'statspack.purge(i_num_days=>31,i_extended_purge=>true);',
number_of_arguments => 0,
start_date => systimestamp,
repeat_interval => 'FREQ=WEEKLY;BYTIME=120000;BYDAY=SUN',
end_date => NULL,
enabled => TRUE,
auto_drop => FALSE,
comments => 'Weekly purge Statspack snapshot');
 
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF );
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'INSTANCE_ID', value=>instno);
DBMS_SCHEDULER.SET_ATTRIBUTE( name => snapjob, attribute => 'INSTANCE_STICKINESS', value=>TRUE);
 
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF );
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'INSTANCE_ID', value=>instno);
DBMS_SCHEDULER.SET_ATTRIBUTE( name => purgejob, attribute => 'INSTANCE_STICKINESS', value=>TRUE);
END;
/

I used the Oracle Cloud Service to quickly provision a two-node RAC database for validation, and here I check the scheduling:

[oracle@rac-dg01-1 admin]$ alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'
[oracle@rac-dg01-1 admin]$ TWO_TASK=//10.31.143.86/pdb1.a511644.oraclecloud.internal sql sys/"Ach1z0#d" as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Wed Aug 23 18:57:12 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
 
SQL> set sqlformat ansiconsole
SQL> select job_name, state, enabled, next_run_date, instance_stickiness, instance_id from dba_scheduler_jobs where owner='PERFSTAT';
JOB_NAME STATE ENABLED NEXT_RUN_DATE INSTANCE_STICKINESS INSTANCE_ID
-------- ----- ------- ------------- ------------------- -----------
STATSPACK_SNAP_1 SCHEDULED TRUE 23-AUG-17 07.00.00.981193000 PM +00:00 TRUE 1
STATSPACK_PURGE_1 SCHEDULED TRUE 27-AUG-17 12.00.00.074939000 PM +00:00 TRUE 1
STATSPACK_SNAP_2 SCHEDULED TRUE 23-AUG-17 07.00.00.644681000 PM +00:00 TRUE 2
STATSPACK_PURGE_2 SCHEDULED TRUE 27-AUG-17 12.00.00.755685000 PM +00:00 TRUE 2

One hour later, the job has run on each instance:

JOB_NAME STATE ENABLED NEXT_RUN_DATE INSTANCE_STICKINESS INSTANCE_ID
-------- ----- ------- ------------- ------------------- -----------
STATSPACK_SNAP_1 SCHEDULED TRUE 23-AUG-17 08.00.00.325755000 PM +00:00 TRUE 1
STATSPACK_PURGE_1 SCHEDULED TRUE 27-AUG-17 12.00.00.074939000 PM +00:00 TRUE 1
STATSPACK_SNAP_2 SCHEDULED TRUE 23-AUG-17 08.00.00.644681000 PM +00:00 TRUE 2
STATSPACK_PURGE_2 SCHEDULED TRUE 27-AUG-17 12.00.00.755685000 PM +00:00 TRUE 2

Now running a spreport to see the instances having snapshots:

[oracle@rac-dg01-1 admin]$ TWO_TASK=//10.31.143.86/pdb1.a511644.oraclecloud.internal/cdb12 sqlplus sys/"Ach1z0#d" as sysdba @ spreport
 
Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
924704606 2 CDB1 cdb12 rac-dg01-2
924704606 1 CDB1 cdb11 rac-dg01-1
 
Using 924704606 for database Id
Using 2 for instance number

Here it is. dbms_job is deprecated. Let’s use dbms_scheduler.
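If you migrate from the dbms_job version of the script, you may also want to remove the old jobs — something like this (my own sketch, the job number 42 is just an example to take from the query):

SQL> select job, instance, what from dba_jobs where schema_user='PERFSTAT';
SQL> exec dbms_job.remove(42);
SQL> commit;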

 

Cet article Improving Statspack Experience est apparu en premier sur Blog dbi services.

Bequeath connect to PDB: set container in logon trigger?

Wed, 2017-08-23 00:54

There are a few changes when you go to the multitenant architecture and one of them is that you must connect with a service name. You cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by the ORACLE_SID. And I want to migrate to CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.

As we can connect only to the CDB$ROOT with a bequeath connection, we have to create a common user. Because the idea is not to change anything on the client configuration, and there is very little chance that the existing user name starts with C##, I’ll start by removing the mandatory prefix for common users.


SQL> show parameter common_user_prefix
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix string
 
SQL> alter system set common_user_prefix='' scope=spfile;
System altered.
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
...

Then I create my common user:

SQL> create user MYOLDUSER identified by covfefe container=all;
User created.

This user must be able to connect to the CDB:

SQL> grant create session to MYOLDUSER container=current;
Grant succeeded.

And then I want it to switch immediately to PDB1 using a logon trigger:

SQL> create or replace trigger SET_CONTAINER_AT_LOGON after logon on database
2 when (user in ('MYOLDUSER'))
3 begin
4 execute immediate 'alter session set container=PDB1';
5 end;
6 /
Trigger created.

Once on PDB1 this user will have some privileges, and for the example I will grant him a default role:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create role MYROLE;
Role created.
 
SQL> grant MYROLE to MYOLDUSER container=current;
Grant succeeded.

The documentation says that When you grant a role to a user, the role is granted as a default role for that user and is therefore enabled immediately upon logon so I don’t need to:

SQL> alter user MYOLDUSER default role MYROLE;
User altered.

But the doc says ‘logon’ and technically I do not log on to PDB1. I just set the container. However, if you test it you will see that default roles are also set on ‘set container’. And anyway, we cannot set a role in a procedure, neither with ‘set role’ nor with dbms_session.set_role:

ORA-06565: cannot execute SET ROLE from within stored procedure

Then, I can now connect locally to the CDB$ROOT with this user:

SQL> connect MYOLDUSER/covfefe
Connected.

And I’m automatically switched to the PDB1:

SQL> show con_name
 
CON_NAME
------------------------------
PDB1

Issue #1: default roles

However the default roles are not set:

SQL> select * from session_roles;
 
no rows selected

I have to set the role once connected:

SQL> set role all;
Role set.
 
SQL> select * from session_roles;
 
ROLE
--------------------------------------------------------------------------------
MYROLE

This is probably not what we want when we cannot change anything on the application side. This is considered as a bug (Bug 25081564 : ALTER SESSION SET CONTAINER IN “ON LOGON TRIGGER” IS NOT WORKING) fixed in 18.1 (expected in Q1 2018) and there’s a patch for 12.1 and 12.2 https://updates.oracle.com/download/25081564.html

Issue #2: core dump

There’s another issue. If you run the same with SQLcl you have a core dump in the client library libclntsh.so on kpuSetContainerNfy

SQLcl: Release 17.2.0 Production on Tue Aug 22 22:00:52 2017
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
SQL> connect MYOLDUSER/covfefe
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcaa172faf6, pid=31242, tid=140510230116096
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libclntsh.so.12.1+0x11d8af6] kpuSetContainerNfy+0x66
#
# Core dump written. Default location: /media/sf_share/122/blogs/core or core.31242

There’s an SR opened for that. This is not a no-go because, in this context where nothing changes on the client side, sqlplus will probably be used anyway. However, that’s another point which shows that ‘set container’ in a logon trigger may have some implementation problems.

Issue #3: security

In my opinion, there is a bigger problem here. With sqlplus (or with sqlcl not using a local connection) I can connect to the CDB$ROOT and switch to PDB1. But look at all the commands above… where did I grant the ‘set container’ privilege for MYOLDUSER on the PDB1 container? Nowhere. MYOLDUSER has no create session and no set container privileges in PDB1, but is able to connect to it thanks to the logon trigger. Of course, the logon trigger is defined by a DBA who knows what he does. But in my opinion, it is not a good idea to bypass the privilege checking.
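For comparison, here is what I would expect to grant explicitly instead of bypassing the privilege checks with the trigger (my own sketch, not tested in this post):

SQL> grant set container to MYOLDUSER container=all;
SQL> grant create session to MYOLDUSER container=all;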

So what?

With default roles not being set, and connections made without the right privileges, the security model is bypassed here. And disabling the common user prefix will raise other issues one day with plugging operations. Then, in my opinion, this is not a solution to work around the need to connect with a service. Especially in the context where we run a legacy application with no possibility to change the way it connects: you just postpone the problems to bigger ones later.

The real solution is to connect to a service (and that’s not difficult even when you can’t change the code, with TWO_TASK environment variable).
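For example (a sketch with a hypothetical host and port — adapt to your listener), the legacy scripts can stay unchanged and just inherit the environment:

export TWO_TASK=//localhost:1521/PDB1
sqlplus myolduser/covfefe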

 

Cet article Bequeath connect to PDB: set container in logon trigger? est apparu en premier sur Blog dbi services.

Get trace file from server to client

Sun, 2017-08-20 15:20

The old way to get a user dump trace file, for sql_trace (10046), Optimizer compilation trace (10053), lock trace (10704), Optimizer execution trace (10507),… is to go to the server trace directory. But if you don’t have access to the server (as in the ☁) the modern (12cR2) way is to select from V$DIAG_TRACE_FILE_CONTENTS. Before everybody is on 12.2, I’m sharing here a sqlplus script that I have used for a long time to get the trace file to the client.

This is a script to run with sqlplus, passing as a parameter the name of the trace file you want to write to (because the original name on the server is not very meaningful). I close all cursors to get all info such as STAT lines. I get the trace file name from v$diag_info. If a directory object exists for the trace directory I use it. If not, I create one, named UDUMP, which I’ll remove at the end. Then the file is read by utl_file and spooled on the client through dbms_output. At the end, I remove the trace file from the server.

Here is the script – comments welcome.

set serveroutput on feedback off verify off termout off linesize 1024 trimspool on echo off
spool &1..trc
declare
fd utl_file.file_type;
line varchar2(1024);
l_tracename varchar2(512);
l_directory_path all_directories.directory_path%TYPE;
l_directory_name all_directories.directory_name%TYPE;
l_directory_created boolean;
procedure t (msg in varchar2) is begin dbms_output.put_line(msg); end;
begin
/* use old parameter _cached_open_cursors to close all open cursors */
for r in (select 1 from v$session_cursor_cache where count>0) loop
dbms_session.set_close_cached_open_cursors(true);
rollback;
commit;
dbms_session.set_close_cached_open_cursors(false);
end loop;
/* get trace directory and trace file name */
select value into l_directory_path from v$diag_info where name='Diag Trace';
select substr(replace(value,l_directory_path,''),2) into l_tracename from v$diag_info where name='Default Trace File';
/* get directory name for it, or try to create it */
l_directory_created:=false;
begin
select directory_name into l_directory_name from all_directories where directory_path = l_directory_path and rownum=1;
exception
when no_data_found then
begin
l_directory_name:='UDUMP';
execute immediate 'create directory '||l_directory_name||' as '''||l_directory_path||'''';
l_directory_created:=true;
exception when others then
raise_application_error(-20000,'You must CREATE DIRECTORY '||l_directory_name||' as ''' ||l_directory_path||'''; or be granted CREATE DIRECTORY.');
end;
end;
/* opens the trace file */
begin
fd:=utl_file.fopen(l_directory_name,l_tracename,'R');
exception when others then
raise_application_error(-20001,'Impossible to open file: '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
/* read the trace file and prints it */
begin
loop
begin
utl_file.get_line(fd,line);
dbms_output.put_line(line);
exception
when no_data_found then exit;
when others then
dbms_output.put_line('!!! error while reading file '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
end loop;
/* close and remove the file from the server */
utl_file.fclose(fd);
utl_file.fremove(l_directory_name,l_tracename);
exception
when others then
raise_application_error(-20002,'Impossible to remove: '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
begin
/* drop directory if created */
if l_directory_created then execute immediate 'drop directory '||l_directory_name; end if;
exception
when others then
raise_application_error(-20002,'Impossible to remove directory: '||l_directory_name||' ( '||l_directory_path||' )'||sqlerrm);
end;
end;
/
spool off
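For example, assuming the script above is saved as get_trace.sql (the local file name ‘mytrace’ is just the name I choose for the spool):

SQL> alter session set sql_trace=true;
SQL> select count(*) from dual;
SQL> alter session set sql_trace=false;
SQL> @get_trace mytrace

This leaves mytrace.trc on the client and removes the trace file from the server.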

 

Cet article Get trace file from server to client est apparu en premier sur Blog dbi services.

Postgres vs. Oracle access paths VIII – Index Scan and Filter

Sun, 2017-08-20 07:40

In the previous post we have seen a nice optimization to lower the consequences of bad correlation between the index and the table physical order: a bitmap, which may include false positives and then requires a ‘recheck’ of the condition, but with the goal to read each page only once. Now we are back to the well-clustered table where we have seen two possible access paths: IndexOnlyScan when all columns we need are in the index, and IndexScan when we select additional columns. Here is a case in the middle: the index does not have all the columns required by the select, but can eliminate all rows.

The table created is:

create table demo1 as select generate_series n , 1 a , lpad('x',1000,'x') x from generate_series(1,10000);
SELECT 10000
create unique index demo1_n on demo1(n);
CREATE INDEX
vacuum demo1;
VACUUM

Index Only Scan and Filter

I use only one column (N), which is indexed, in the SELECT clause and the WHERE clause. And this WHERE clause is silly: in addition to the n<=1000 I've used in previous post to focus on 10% of rows, I add a condition which is always false: mod(n,100)=1000

explain (analyze,verbose,costs,buffers) select n from demo1 where n<=1000 and mod(n,100)=1000 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..38.78 rows=5 width=4) (actual time=0.276..0.276 rows=0 loops=1)
Output: n
Index Cond: (demo1.n <= 1000)
Filter: (mod(demo1.n, 100) = 1000)
Rows Removed by Filter: 1000
Heap Fetches: 0
Buffers: shared hit=5
Planning time: 0.454 ms
Execution time: 0.291 ms

Index Only Scan is used here because no other columns are used. The n<=1000 is the access condition (Index Cond.) doing a range scan on the index structure. The mod(n,100)=1000 is a filter predicate which is applied to the result of the index access (Filter) and we have the additional information that the 1000 rows selected by the access predicate have all been filtered out (Rows Removed by Filter). During the execution, 5 index buffers have been read for the range scan (branches + leaves). Because I vacuumed and made no further changes, the visibility map knows that all rows are visible and there are no blocks to read from the table (Heap Fetches: 0).

Now I’ll select another column in order to see an Index Scan. We have seen in the previous post that the huge cost of index access is the access to the table. Filtering most of the rows from the index entries is the most common recommendation to optimize a query. And my example here is running the extreme case: a predicate on the indexed column removes all rows.

Index Scan and Filter

I’ve just changed the ‘select n’ to ‘select a':


explain (analyze,verbose,costs,buffers) select a from demo1 where n<=1000 and mod(n,100)=1000 ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..184.78 rows=5 width=4) (actual time=0.427..0.427 rows=0 loops=1)
Output: a
Index Cond: (demo1.n <= 1000)
Filter: (mod(demo1.n, 100) = 1000)
Rows Removed by Filter: 1000
Buffers: shared hit=147
Planning time: 0.434 ms
Execution time: 0.440 ms

I can understand that the cost is higher. The optimizer may not know that mod(n,100) will never be equal to 1000. Estimating 5 rows, as in the previous case, is ok for me. We see different Output (different SELECT clause) but same information about Index Cond, Filter, and Rows Removed (same WHERE clause). The estimation part looks good.

However, there’s something that I can’t understand. At execution, we know that all rows can be removed before going to the table. We go to the table to get the value from A but all rows were filtered out from the index. At least it was the case with the Index Only Scan, and we know that the filter condition has all values from the index.

However, 147 blocks were read here. We have seen that the index scan reads 5 index pages, and then we can guess that 142 table pages have been read, exactly 10% of the pages from my correlated table. It seems that all rows have been read from the table before being filtered out. The Index Scan being one operation, the filter occurs at the end only. This is only my guess and I hope to get comments about that.

Oracle

With Oracle, the first query, selecting only indexed columns is an INDEX RANGE SCAN similar to the Postgres one.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fj36y2vph9u8f, child number 0
-------------------------------------
select /*+ */ n from demo1 where n<=1000 and mod(n,100)=1000
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4 (100)| 0 |00:00:00.01 | 3 |
|* 1 | INDEX RANGE SCAN| DEMO1_N | 1 | 10 | 4 (0)| 0 |00:00:00.01 | 3 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"<=1000)
filter(MOD("N",100)=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Oracle does not know either that the filter predicate mod(n,100)=1000 eliminates all rows and estimates this kind of predicate to 10% of rows (a generic value) after the access predicate returning 10% (this one is calculated from statistics). 3 blocks were read: index branch + leaves.

Reading an additional column from the table does not change this INDEX RANGE SCAN operation; it just adds one step to go to the table:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1rpmvq3jj8hgq, child number 0
-------------------------------------
select /*+ */ a from demo1 where n<=1000 and mod(n,100)=1000
----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 (100)| 0 |00:00:00.01 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| DEMO1 | 1 | 10 | 6 (0)| 0 |00:00:00.01 | 3 |
|* 2 | INDEX RANGE SCAN | DEMO1_N | 1 | 10 | 4 (0)| 0 |00:00:00.01 | 3 |
----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("N"<=1000)
filter(MOD("N",100)=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22] 2 - "DEMO1".ROWID[ROWID,10]

Having two operations, the filter removes the rows from the output of the index range scan on line 2, and the table is visited only for the rows that remain: there are no additional buffer reads on step 1 when no rows survive the filter. With Oracle, we build indexes to optimize the access predicates and we add columns to optimize the filter predicates. We can go further by adding all projected columns and completely avoid the access to the table, but that is not always needed. If we can apply all WHERE clause filters on the indexed columns, then the access to the table remains proportional to the result, and end-users usually accept a longer response time for a long result. Index access response time is proportional to the result as well.
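As an illustration of adding columns to optimize the filter predicate and the projection, here is a minimal sketch. It is not part of the original test case: the index name demo1_n_a and the demo/demo connection are assumptions, to be adapted to your own schema.

sqlplus demo/demo <<'SQL'
create index demo1_n_a on demo1(n, a);
select /*+ */ a from demo1 where n<=1000 and mod(n,100)=1000;
SQL

With both N and A present in the index, the access predicate, the filter and the projection can all be resolved from the index, so the TABLE ACCESS BY INDEX ROWID step should disappear from the plan for this query.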

The decomposition into two operations is also convenient to see which column projection is done on the index result and which one on the table result. Here, the only output of the index range scan at line 2 is the ROWID, and the output of the table access at line 1 is the column we select. So we have two operations here. We have seen that the INDEX RANGE SCAN can run alone. And we will see in the next post that the TABLE ACCESS BY INDEX ROWID can also run alone.

So what?

I hope that Postgres experts will comment about the need to read the table pages even when we can filter all rows from the index scan. We can do something similar by re-writing the query where we can see that the access to the table is never executed:

explain (analyze,verbose,costs,buffers) select a from demo1 where n in (select n from demo1 where n<=1000 and mod(n,100)=1000 ) ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..76.35 rows=5 width=4) (actual time=0.285..0.285 rows=0 loops=1)
Output: demo1.a
Buffers: shared hit=5
-> Index Only Scan using demo1_n on public.demo1 demo1_1 (cost=0.29..34.78 rows=5 width=4) (actual time=0.284..0.284 rows=0 loops=1)
Output: demo1_1.n
Index Cond: (demo1_1.n <= 1000)
Filter: (mod(demo1_1.n, 100) = 1000)
Rows Removed by Filter: 1000
Heap Fetches: 0
Buffers: shared hit=5
-> Index Scan using demo1_n on public.demo1 (cost=0.29..8.30 rows=1 width=8) (never executed)
Output: demo1.n, demo1.a, demo1.x
Index Cond: (demo1.n = demo1_1.n)

But this involves a join, and join methods will deserve another series of blog posts. The next one on access paths will show the TABLE ACCESS BY INDEX ROWID equivalent, Tid Scan. Then I’ll have covered all access paths.

 

Cet article Postgres vs. Oracle access paths VIII – Index Scan and Filter est apparu en premier sur Blog dbi services.

Alfresco – Unable to see the content of folders with ‘+’ in the name

Wed, 2017-08-16 15:23

As you might know if you are following our blogs (see this one for example), we have been using Alfresco Community Edition internally for some years now and we also have a few customers using it. Today, I will present a small issue I faced with Alfresco, which wasn’t able to display the content of a few – very specific – folders… As you will see below, it was actually not related to Alfresco itself, but it is a good example.

 

For some background on this issue, an end-user contacted me, saying that he wasn’t able to see the content of three folders in which he was sure there were at least some PDFs. So I checked and yes, even as an Alfresco Admin, I wasn’t able to see the content of these folders using Share. The only common point I could see between these three folders is that they all contained a “+” in their name. I recently upgraded and migrated this Alfresco environment, so I was kind of sure this was somehow linked (since the folders existed and were working before the upgrade+migration). For this blog, I will use a test folder named “Test+Folder” and I’m using an Apache HTTPD as a front end. I put this folder under the Shared folder:

URL: https://alfresco_server_01/share/page/context/shared/sharedfiles

 

Clicking on this folder’s name from the central screen or from the browser tree on the left side will result in the following screen:

URL: https://alfresco_server_01/share/page/context/shared/sharedfiles#filter=path|%2FTest%2BFolder|&page=1

 

As you can see above, the URL accessed is the correct one: it contains the “/Test+Folder” path (%2F is “/” and %2B is “+”). However, the screen is empty, just as if this path did not exist, and the breadcrumb shows “Shared Files” while it should show “Shared Files > Test+Folder”. So in summary, the Share UI isn’t able to display the content of this folder. For testing purposes, I accessed this folder using WebDAV and AOS and these clients were able to see the content of the folder, but Share wasn’t. Like I mentioned above, this Alfresco environment is using Apache HTTPD as a front-end and mod_jk for the communication with Tomcat. You can take a look at this blog or the official documentation for more information on this setup.

 

Since other clients were working properly, I tried to access Alfresco Share without going through the front-end (so accessing Tomcat directly) in order to ensure that the issue isn’t with Alfresco itself. By doing so, I was able to see the PDFs. If the issue isn’t with Alfresco, then it should be with the front-end, and in particular with mod_rewrite and mod_jk in this case. The idea here is to check what these two mods are doing and then compare the initial request with what is being transferred to Tomcat, using the access logs.
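In practice, this comparison can be done by tailing both access logs while reproducing the request from Share. A minimal sketch, assuming default-ish locations (the log paths below are assumptions and depend on your Apache HTTPD and Tomcat/Alfresco installation):

# in a first terminal: the request as received by the Apache HTTPD front-end
tail -f /var/log/httpd/ssl_access_log
# in a second terminal: the request as received by Tomcat after mod_jk
tail -f /opt/alfresco/tomcat/logs/localhost_access_log.$(date +%Y-%m-%d).txt

Comparing the two URIs for the same click immediately shows which characters are still encoded on each side.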

 

While doing this analysis, I found out that mod_jk was most probably the root cause of this issue. When mod_jk is doing its job, it decodes the URL’s encoded characters: %2F for “/”, %2B for “+”, %20 for a space, and so on. Then, once the rules have been processed, it will by default re-encode these special characters before transmitting the request to Tomcat. However, in the Tomcat access logs, it appeared that the other special characters were indeed present in their encoded format, but it wasn’t the case for the “+” sign, which was transmitted as a literal “+” (so no %2B anymore).

This is an example (these are not the real requests exchanged in the background, but the URL as seen in a web browser, so it gives a good illustration):

  • URL accessed:                    https://alfresco_server_01/share/page/context/shared/sharedfiles#filter=path|%2FTest%2BFolder|&page=1
  • URL decoded by mod_jk:  https://alfresco_server_01/share/page/context/shared/sharedfiles#filter=path|/Test+Folder|&page=1
  • URL sent to Tomcat:          https://alfresco_server_01/share/page/context/shared/sharedfiles#filter=path|%2FTest+Folder|&page=1

 

For some reason (it looks like a bug), mod_jk does not re-encode the “+” sign, and this prevents the Tomcat in the back-end from completing the request properly. The behavior of mod_jk related to the URIs is managed using the JkOptions directive, which can take the following values regarding the forwarded URI:

  • JkOptions     +ForwardURIProxy
    • Default value in version > mod_jk 1.2.23
    • The forwarded URI will be partially re-encoded after processing inside Apache and before forwarding to Tomcat
  • JkOptions     +ForwardURICompatUnparsed
    • Default value in version = mod_jk 1.2.23
    • The forwarded URI will be unparsed so no decoding nor re-encoding
  • JkOptions     +ForwardURICompat
    • Default value in version < mod_jk 1.2.23
    • The forwarded URI will be decoded by Apache
  • JkOptions     +ForwardURIEscaped
    • The forwarded URI will be the encoded form of the URI used by ForwardURICompat

 

I’m using a fairly recent version of mod_jk on this Alfresco environment, so the default value is “ForwardURIProxy”. Therefore, on paper, it should work properly since URIs will be decoded and re-encoded… However, we saw above that this works for everything except the “+” sign, which is not re-encoded.

 

To work around this issue, I just updated the JkOptions to “JkOptions     +ForwardURICompatUnparsed” in my Apache HTTPD configuration and, after a reload of the configuration, I was able to access the content of the folder:

URL: https://alfresco_server_01/share/page/context/shared/sharedfiles#filter=path|%2FTest%2BFolder|&page=1

 

Please note that “ForwardURICompatUnparsed” will always forward the original request URI, so rewriting URIs with mod_rewrite might not work properly; it all depends on how you configured the rewrite rules and what you need to do with them. Honestly, I don’t think this issue is linked to all mod_jk versions above 1.2.23, since I was using version 1.2.40 on RedHat before and never noticed such an issue, so this might be linked to a specific mod_jk version on a specific OS (Ubuntu?). So basically, if you don’t have this issue, I would suggest you keep the default JkOptions.
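For reference, the change itself is a one-line directive followed by a configuration reload. A minimal sketch, assuming the mod_jk directives are kept in /etc/httpd/conf.d/mod_jk.conf and that Apache HTTPD is managed with apachectl (both the path and the commands are assumptions, adapt them to your setup):

# check the current value, then switch to the unparsed mode (edit the existing line instead if one is already present)
grep JkOptions /etc/httpd/conf.d/mod_jk.conf
echo 'JkOptions     +ForwardURICompatUnparsed' >> /etc/httpd/conf.d/mod_jk.conf
# validate the syntax and reload gracefully
apachectl -t && apachectl graceful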

 

 

Cet article Alfresco – Unable to see the content of folders with ‘+’ in the name est apparu en premier sur Blog dbi services.

Replicating specific tables in PostgreSQL 10 Beta with mimeo

Tue, 2017-08-15 11:31

In this blog I am going to test the extension mimeo with PostgreSQL 10 beta. Mimeo is a replication extension for copying specific tables in one of several specialized ways from any number of source databases to a destination database where mimeo is installed.
In our configuration we are going to replicate data on the same server, but between 2 clusters running on different ports. The procedure is the same for different servers; in that case the pg_hba.conf of the source cluster must be configured to allow the remote connection (a minimal sketch is shown after the connection details below).
Source
Hostname: pgservertools.localdomain (192.168.56.30)
Database: prima (port 5432)
Target
Hostname: pgservertools.localdomain (192.168.56.30)
Database: repl (port 5433)
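As mentioned above, the source cluster must accept the connection coming from the destination. A minimal sketch of such a pg_hba.conf entry, assuming md5 authentication, that $PGDATA points to the source cluster’s data directory and that the replication user is the mimeo user created later in this post (adapt the CIDR, database and authentication method to your environment):

echo "host    prima    mimeo    192.168.56.30/32    md5" >> $PGDATA/pg_hba.conf
pg_ctl reload -D $PGDATA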
The first thing is to install the extension on the destination server. For this we will use git to clone the extension repository onto the server.
[root@pgservertools ~]# yum install perl-Git.noarch
[root@pgservertools ~]# git clone git://github.com/omniti-labs/mimeo.git
Cloning into 'mimeo'...
remote: Counting objects: 1720, done.
remote: Total 1720 (delta 0), reused 0 (delta 0), pack-reused 1720
Receiving objects: 100% (1720/1720), 1.24 MiB | 429.00 KiB/s, done.
Resolving deltas: 100% (1094/1094), done.
[root@pgservertools ~]#

Then in the mimeo directory let’s run following commands

[root@pgservertools mimeo]# make
[root@pgservertools mimeo]# make install

If there is no error, we can create our two databases. The source database will be named prima and the target will be named repl.

[postgres@pgservertools postgres]$ psql
psql (10beta2)
Type "help" for help.
.
postgres=# show port;
port
------
5432
(1 row)
.
postgres=# create database prima;
CREATE DATABASE
postgres=#


postgres=# show port;
port
------
5433
(1 row)
.
postgres=# create database repl;
CREATE DATABASE
postgres=#

Now we have to install the extension mimeo in the destination database repl.
The extension mimeo requires the extension dblink. If this extension is not present, an error will be raised

repl=# create schema mimeo;
CREATE SCHEMA
.
repl=# create extension mimeo schema mimeo;
ERROR: required extension "dblink" is not installed
HINT: Use CREATE EXTENSION ... CASCADE to install required extensions too.
repl=#

The extension dblink should already be present with the standard installation. This can be verified by listing the files in the extension directory.

[root@pgservertools extension]# pwd
/usr/pgsql-10/share/extension
.
[root@pgservertools extension]# ls -l dblink*
-rw-r--r--. 1 root root 419 Jul 13 12:15 dblink--1.0--1.1.sql
-rw-r--r--. 1 root root 2832 Jul 13 12:15 dblink--1.1--1.2.sql
-rw-r--r--. 1 root root 6645 Jul 13 12:15 dblink--1.2.sql
-rw-r--r--. 1 root root 170 Jul 13 12:15 dblink.control
-rw-r--r--. 1 root root 2863 Jul 13 12:15 dblink--unpackaged--1.0.sql
[root@pgservertools extension]#

So re-executing the instruction with the cascade option will install the extension dblink as well.

repl=# create extension mimeo schema mimeo cascade;
NOTICE: installing required extension "dblink"
CREATE EXTENSION
repl=#

On the target database, let’s create a user mimeo that we will use for the replication and give it all required privileges. Superuser is not needed but would also work.

repl=# create user mimeo password 'root';
CREATE ROLE
repl=# GRANT USAGE ON SCHEMA mimeo TO mimeo;
GRANT
repl=# GRANT USAGE ON SCHEMA public TO mimeo;
GRANT
repl=# GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA mimeo TO mimeo;
GRANT
repl=#

On the source database, let’s create the same user and give it the required privileges

prima=# create user mimeo password 'root';
CREATE ROLE
prima=# CREATE SCHEMA mimeo;
CREATE SCHEMA
prima=# ALTER SCHEMA mimeo OWNER TO mimeo;
ALTER SCHEMA
prima=#

Every source database needs to have its connection information stored in mimeo’s dblink_mapping_mimeo table on the destination database. You can have as many source databases as you need, which makes creating a central replication destination for many master databases easy. All data is pulled by the destination database, never pushed by the source.

repl=# INSERT INTO mimeo.dblink_mapping_mimeo (data_source, username, pwd)
VALUES ('host=192.168.56.30 port=5432 dbname=prima', 'mimeo', 'root');
INSERT 0 1
repl=#

On the source let’s create table to be replicated and insert some data

prima=# create table article(idart int primary key, name varchar(20));
CREATE TABLE


prima=# table article;
idart | name
-------+-------
1 | Paper
2 | Food
(2 rows)
prima=#

Grant required privilege to mimeo

prima=# grant select on article to mimeo;
GRANT
prima=# grant trigger on article to mimeo;
GRANT
prima=#

Now we are ready to start the replication. We have three methods of replication:
-Snapshot replication
-Incremental replication
-DML replication
We will discuss only the snapshot and DML methods. Indeed, the incremental method can only replicate inserted and updated data; it will not replicate any deleted data. See the documentation here

Snapshot Replication
This method is the only one that replicates both data and structure changes (adding a column, …)
To initialize the table we use the function snapshot_maker (as we are using snapshot replication) and we pass as arguments the table to be replicated and the id of the dblink we want to use.

repl=# select * from mimeo.dblink_mapping_mimeo ;
data_source_id | data_source | username | pwd
----------------+-------------------------------------------+----------+------
1 | host=192.168.56.30 port=5432 dbname=prima | mimeo | root

So the following command is used to initialize the table

repl=# SELECT mimeo.snapshot_maker('public.article', 1);
NOTICE: attempting first snapshot
NOTICE: attempting second snapshot
NOTICE: Done
snapshot_maker
----------------
.
(1 row)
repl=#

And we can easily verify that the two tables are synchronized.

repl=# table article;
idart | name
-------+-------
1 | Paper
2 | Food
(2 rows)
repl=#

Now let’s insert new data in the source table and let’s see how to refresh the target table.

prima=# table article;
idart | name
-------+-------
1 | Paper
2 | Food
3 | Oil
4 | Books

On the target database we just have to use the refresh_snap function

repl=# SELECT mimeo.refresh_snap('public.article');
refresh_snap
--------------
.
(1 row)

And we see that the target table was updated.

repl=# table article;
idart | name
-------+-------
1 | Paper
2 | Food
3 | Oil
4 | Books
(4 rows)
repl=#

A refresh can be scheduled using crontab, for example every two minutes in my case

[postgres@pgservertools ~]$ crontab -l
*/2 * * * * psql -p 5433 -d repl -c "SELECT mimeo.refresh_snap('public.article')";
[postgres@pgservertools ~]$

DML Replication
The snapshot method is easier to set up, but it is not recommended for large tables, as a table set up with this method will have its entire contents refreshed every time it is run. So for large tables, DML replication is recommended.
Let’s create a table customers on the source

prima=# create table customers(idcust int primary key, name varchar(30));
CREATE TABLE
.
prima=# insert into customers values(1,'Dbi');
INSERT 0 1
prima=# insert into customers values(2,'XZZ');
INSERT 0 1
.
prima=# table customers
prima-# ;
idcust | name
--------+------
1 | Dbi
2 | XZZ
(2 rows)
prima=#

And let’s grant required privileges to mimeo on customers

prima=# grant select on customers to mimeo;
GRANT
prima=# grant trigger on customers to mimeo;
GRANT
prima=#

On the target we use the function dml_maker to replicate data. We can see that we can even change the name of the destination table.

repl=# SELECT mimeo.dml_maker('public.customers', 1, p_dest_table := 'public.customers_repl');
NOTICE: Creating objects on source database (function, trigger & queue table)...
NOTICE: Pulling data from source...
dml_maker
-----------
.
(1 row)

We can verify that the table customers_repl is created

repl=# \d
List of relations
Schema | Name | Type | Owner
--------+----------------+-------+----------
public | article | view | postgres
public | article_snap1 | table | postgres
public | article_snap2 | table | postgres
public | customers_repl | table | postgres
(4 rows)

And that the data is replicated

repl=# select * from customers_repl ;
idcust | name
--------+------
1 | Dbi
2 | XZZ
(2 rows)

Now let’s insert again new data in the source table and let’s see how to refresh the target

prima=# insert into customers values(3,'Linux');
INSERT 0 1
prima=# insert into customers values(4,'Unix');
INSERT 0 1
.
prima=# table customers
;
idcust | name
--------+-------
1 | Dbi
2 | XZZ
3 | Linux
4 | Unix
(4 rows)

On the target database we have to run the refresh_dml function

repl=# SELECT mimeo.refresh_dml('public.customers_repl');
refresh_dml
-------------
.
(1 row)


repl=# table customers_repl;
idcust | name
--------+-------
1 | Dbi
2 | XZZ
3 | Linux
4 | Unix
(4 rows)
repl=#

Like with the snapshot method, a cron job can be scheduled.
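For instance, a sketch similar to the snapshot schedule above, assuming a two-minute interval is acceptable for your data:

*/2 * * * * psql -p 5433 -d repl -c "SELECT mimeo.refresh_dml('public.customers_repl')";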
Conclusion
In a previous blog, we saw that logical replication is now supported on PostgreSQL 10. But the extension mimeo can still be used.

 

Cet article Replicating specific tables in PostgreSQL 10 Beta with mimeo est apparu en premier sur Blog dbi services.

ODA X6 Small Medium Large and High Availability

Mon, 2017-08-14 10:46

There are 4 models of Oracle Database Appliance with the new ODA X6, which is for the moment the latest ODA hardware version: one is similar to the previous X5-2, and the 3 smaller ones are known as ODA Lite. They are already 1 year old; here is a small recap of the differences and links to more detail.

System

The ODA X6 are composed with Oracle Server X6-2:

  • ODA X6-2S has one server
  • ODA X6-2M has one server
  • ODA X6-2L has one server
  • ODA X6-2HA is a cluster of two servers with one shared storage shelf (DE3-24C). It can be expanded with one or two additional storage shelf.

Those servers have 4 USB 2.0 ports accessible (2 in front, 2 in back) and 2 internal USB ports.
They have one serial console port (SER MGT/RJ-45 connector) and one VGA DB-15 connector, easier to plug in before you put the cable rails on. Resolution: 1,600 x 1,200 x 16 MB @ 60 Hz (note that the resolution is 1,024 x 768 when viewed remotely via Oracle ILOM).
Each server has 2 redundant power supplies (hot swappable).

Rack Unit
  • ODA X6-2S is 2U
  • ODA X6-2M is 2U
  • ODA X6-2L is 4U
  • ODA X6-2HA is 6U (up to 10U with storage expansions)
CPU

The processor of X6 servers is an Intel 2.2GHz 10-Core Xeon E5-2630 v4, 85W
Cache:
Level 1: 32 KB instruction and 32 KB data L1 cache per core
Level 2: 256 KB shared data and instruction L2 cache per core
Level 3: 25 MB shared inclusive L3 cache per processor

  • ODA X6-2S has 1 processor: 10 cores
  • ODA X6-2M has 2 processors: 20 cores
  • ODA X6-2L has 2 processors: 20 cores
  • ODA X6-2 HA has two servers with 2 processors each: 40 cores

The core factor of the processor is 0.5 and you can license the full capacity for Enterprise Edition: 5 licenses for one ODA X6-2S, 10 licenses for one ODA X6-2M or 2L, 20 licenses for ODA X6-2 HA. You can also run NUP licenses with a minimum of 125 NUP for one ODA X6-2S, 250 NUP for one ODA X6-2M or 2L, 500 NUP for ODA X6-2 HA.
Of course, you can do Capacity on Demand on all models, so the minimum processor license is 1 license for ODA Lite (when only 2 cores are enabled) or 2 licenses for ODA HA because the nodes must be symmetric (except if you run only one node). In NUP, it means 25 NUP minimum for ODA Lite or 50 NUP for ODA HA.

On ODA Lite (2S, 2M, 2L) you can choose to run Standard Edition. Then you count the number of processors: 1 license for ODA X6-2S, 2 licenses for one ODA X6-2M or 2L. When in Standard Edition 2 you can license in NUP with the minimum of 10 NUP per server.

RAM

The X6 servers have 14 DIMM slots and are partially populated with 32GB DIMMs (DDR4-2400).

  • ODA X6-2S has 4 slots populated: 128GB RAM. It can be expanded with additional 8 DIMMs for 384GB
  • ODA X6-2M has 8 slots populated: 256GB RAM. It can be expanded with additional 16 DIMMs for 512GB or 768GB which populates all the slots.
  • ODA X6-2L has 8 slots populated: 256GB RAM. It can be expanded with additional 16 DIMMs for 512GB or 768GB which populates all the slots.
  • ODA X6-2 HA has 8 slots populated per server: 256GB RAM. It can be expanded with additional 16 DIMMs for 512GB or 768GB which populates all the slots.
Network

ODA Lite:
2x 10GbE SFP+ (fiber) and 2x 10GBase-T (copper) ports for ODA X6-2S (4x 10GBase-T for 2M and 2L)
ODA HA:
InfiniBand interconnect, 4x 10GBase-T (copper) ports (bonded to provide two public interfaces) or optional 2x 10GbE SFP+ (fiber) per server and one pair of 10GBase-T will be used for interconnect.

Storage

The servers have two 2.5″ 480GB SAS-3 SSDs in front, mirrored for the Operating System and the Oracle Database software

ODA X6-2S and 2M have two additional 2.5″ 3.2TB NVMe PCIe 3.0 SSD disks for DATA and RECO disk groups (6.4TB raw capacity, 2.4TB double-mirrored, 1.6TB triple-mirrored)
They are expandable to 12.8 TB with two additional disks.

ODA X6-2L has six 3.5-inch disk bays populated with 3.2TB SSDs (19.2TB raw capacity, 9.6TB double-mirrored, 6.4TB triple-mirrored)
They are expandable to 28.8 TB with two additional disks.

ODA X6-HA has a storage shelf, as the previous X5 model (similar but not compatible), but with SSDs and lower capacity for DATA: 10x 1.6TB SAS flash SSDs formatted to 1.2TB (over-provisioned for write performance because of garbage collection), which means 12TB raw capacity (6TB double-mirrored, 4TB triple-mirrored), expandable to 24TB in the same chassis. You can add another storage shelf for an additional 24TB.

Note that the latest ODA X5 had 16x 8TB disks (128TB raw), so the X6-HA has higher performance but lower capacity.

Virtualization, High Availability

Virtualization with OVM is only for ODA X6-HA. Only one VM per node has the resources to run Oracle Databases. You can add additional ‘Guest VMs’ for your applications (not for databases as the I/O is optimal only in the ODA Base VM). You can define VLANs.

The ODA Lite (2S/2M/2L) cannot run applications on guest VMs as they are bare metal only. If you want to run applications on ODA Lite, you can now use KVM, but remember that you still need to license all activated cores for each product because KVM is only ‘soft partitioning’ for Oracle LMS.

High Availability (with RAC) is only for ODA X6-HA.
A standby database to another ODA is possible for all models: Data Guard when in Enterprise Edition, or manual standby (we recommend Dbvisit standby – see one of our customer case study http://www.dbvisit.com/customers/case-study-das-protection-juridique/) when in Standard Edition.

Price

According to the Price List (http://www.oracle.com/us/corporate/pricing/exadata-pricelist-070598.pdf)

  • ODA X6-2S costs USD 18000
  • ODA X6-2M costs USD 24000
  • ODA X6-2L costs USD 40000
  • ODA X6-2 HA costs USD 72000
Full specifications

The full specification is available in the Oracle System Handbook:

A simple presentation is in the 3D view:

 

Cet article ODA X6 Small Medium Large and High Availability est apparu en premier sur Blog dbi services.

WebLogic – Cannot register for disconnect events on local server

Sun, 2017-08-13 03:15

When working with WebLogic, there will probably be a moment when you will ask yourself: damn, what is wrong? Nowadays, software is so complex that it is kind of easy to introduce bugs/unwanted behaviors… In this blog, I will present a small thing that just blew my mind when I faced it: I had a fully working WLS on which I executed a WLST script in order to configure the SSL Ciphers for the Admin Server as well as the Managed Servers. After this, the WLS wasn’t able to start anymore, but this had nothing to do with the WLST script itself since I was sure it didn’t contain any error and it did what I expected… Because we manage more than one hundred WebLogic Servers at this customer, and because the script had to stay generic enough to update the Ciphers of all Managed Servers/Domains of a single host, an automated deployment was really necessary.

Among all these WLS, a few of them contain two domains on the same server (with different IPs/DNS aliases of course). To handle this case, we had to use a variable that points to the Admin Server in order to connect to it and execute the WLST script against it. The name of the variable we defined is, of course, “ADMIN_URL”. This is the name used by WebLogic in the configuration files to point to the Admin Server. So what better name than this? Well actually, anything would have been better since this is what caused the issue…

 

So let’s demonstrate this. First, I’m just trying to start the Admin Server without the variable defined:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/bin
[weblogic@weblogic_server_01 bin]$ 
[weblogic@weblogic_server_01 bin]$ ./startWebLogic.sh
.
.
JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m
.
CLASSPATH=$ORACLE_HOME/wlserver/server/lib/jcmFIPS.jar:$ORACLE_HOME/wlserver/server/lib/sslj.jar:$ORACLE_HOME/wlserver/server/lib/cryptoj.jar::$JAVA_HOME/lib/tools.jar:$ORACLE_HOME/wlserver/server/lib/weblogic_sp.jar:$ORACLE_HOME/wlserver/server/lib/weblogic.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar:$ORACLE_HOME/wlserver/modules/features/oracle.wls.common.nodemanager_2.0.0.0.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/com.oracle.cie.config-wls-online_8.1.0.0.jar:$ORACLE_HOME/wlserver/common/derby/lib/derbyclient.jar:$ORACLE_HOME/wlserver/common/derby/lib/derby.jar:$ORACLE_HOME/wlserver/server/lib/xqrl.jar:$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar:
.
PATH=$ORACLE_HOME/wlserver/server/bin:$ORACLE_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$DOMAIN_HOME/D2/lockbox:$DOMAIN_HOME/D2/lockbox/lib/native/linux_gcc34_x64:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin
.
***************************************************
* To start WebLogic Server, use a username and    *
* password assigned to an admin-level user. For   *
* server administration, use the WebLogic Server  *
* console at http://hostname:port/console         *
***************************************************
starting weblogic with Java version:
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
Starting WLS with line:
$JAVA_HOME/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=$ORACLE_HOME/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Ddomain.home=$DOMAIN_HOME -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true -Djava.endorsed.dirs=$JAVA_HOME/jre/lib/endorsed:$ORACLE_HOME/wlserver/../oracle_common/modules/endorsed -da -Dwls.home=$ORACLE_HOME/wlserver/server -Dweblogic.home=$ORACLE_HOME/wlserver/server -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
<Aug 5, 2017 1:37:30 PM UTC> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Aug 5, 2017 1:37:30 PM UTC> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar.>
<Aug 5, 2017 1:37:30 PM UTC> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.102-b14 from Oracle Corporation.>
<Aug 5, 2017 1:37:31 PM UTC> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.1.3.0.0 Wed May 21 18:53:34 PDT 2014 1604337 >
<Aug 5, 2017 1:37:32 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
<Aug 5, 2017 1:37:32 PM UTC> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.>
<Aug 5, 2017 1:37:32 PM UTC> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.>
<Aug 5, 2017 1:37:33 PM UTC> <Notice> <Log Management> <BEA-170019> <The server log file $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.>
<Aug 5, 2017 1:37:34 PM UTC> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Aug 5, 2017 1:37:36 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY.>
<Aug 5, 2017 1:37:36 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
Aug 5, 2017 1:37:36 PM weblogic.wsee.WseeCoreMessages logWseeServiceStarting
INFO: The Wsee Service is starting
<Aug 5, 2017 1:37:37 PM UTC> <Warning> <Munger> <BEA-2156227> <The "META-INF/application.xml" deployment descriptor for the "consoleapp" module is in DTD format. Overrides from the deployment plan will NOT be applied since only deployment descriptors in XML Schema format are supported.>
<Aug 5, 2017 1:37:37 PM UTC> <Warning> <Munger> <BEA-2156227> <The "WEB-INF/weblogic.xml" deployment descriptor for the "consolehelp" module is in DTD format. Overrides from the deployment plan will NOT be applied since only deployment descriptors in XML Schema format are supported.>
<Aug 5, 2017 1:37:38 PM UTC> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias mycert from the JKS keystore file $DOMAIN_HOME/certs/identity.jks.>
<Aug 5, 2017 1:37:38 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the JKS keystore file $DOMAIN_HOME/certs/trust.jks.>
<Aug 5, 2017 1:37:39 PM UTC> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on weblogic_server_01:8443 for protocols iiops, t3s, ldaps, https.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000329> <Started the WebLogic Server Administration Server "AdminServer" for domain "DOMAIN" running in production mode.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>
<Aug 5, 2017 1:37:40 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.>

^C
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000388> <JVM called the WebLogic Server shutdown hook. The server will force shutdown now.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SUSPENDING.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>
<Aug 5, 2017 1:37:55 PM UTC> <Notice> <Server> <BEA-002607> <Channel "DefaultSecure", listening on weblogic_server_01:8443, was shut down.>
[weblogic@weblogic_server_01 bin]$

 

As you can see above, the Admin Server is starting properly. Once in RUNNING state, I just stopped it (CTRL+C) to continue the demo.

The next step is to find the current value of “ADMIN_URL”. This variable is used in the files stopWebLogic.sh, stopManagedWebLogic.sh and startManagedWebLogic.sh: to be able to stop or start these components, the address of the Admin Server is needed. The funny thing is that two of these three files use the “t3s” protocol while the third one uses “https” (or t3/http if not SSL-enabled). Once you have this value, you define your environment variable using this exact same value (manually or in a more automated way):

[weblogic@weblogic_server_01 bin]$ echo $ADMIN_URL

[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ grep -E 'ADMIN_URL="[th]' *.sh
startManagedWebLogic.sh:	ADMIN_URL="https://weblogic_server_01:8443"
stopManagedWebLogic.sh:		ADMIN_URL="t3s://weblogic_server_01:8443"
stopWebLogic.sh:			ADMIN_URL="t3s://weblogic_server_01:8443"
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ export ADMIN_URL="https://weblogic_server_01:8443"
[weblogic@weblogic_server_01 bin]$ # or
[weblogic@weblogic_server_01 bin]$ export ADMIN_URL=`grep -E 'ADMIN_URL="[th]' stopWebLogic.sh | sed 's,ADMIN_URL="\([^"]*\)",\1,'`
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ echo $ADMIN_URL
t3s://weblogic_server_01:8443
[weblogic@weblogic_server_01 bin]$

 

At this point, I defined the ADMIN_URL variable using the T3S protocol. One could think that this wouldn’t affect the start of our domain: we are just defining a variable that already exists in the WebLogic shell scripts, with the exact same value… But note that this variable isn’t used in the file “startWebLogic.sh”… Once this is done, we can reproduce the issue. For that, simply try to start the Admin Server again:

[weblogic@weblogic_server_01 bin]$ ./startWebLogic.sh
.
.
JAVA Memory arguments: -Xms256m -Xmx512m -XX:MaxPermSize=256m
.
CLASSPATH=$ORACLE_HOME/wlserver/server/lib/jcmFIPS.jar:$ORACLE_HOME/wlserver/server/lib/sslj.jar:$ORACLE_HOME/wlserver/server/lib/cryptoj.jar::$JAVA_HOME/lib/tools.jar:$ORACLE_HOME/wlserver/server/lib/weblogic_sp.jar:$ORACLE_HOME/wlserver/server/lib/weblogic.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar:$ORACLE_HOME/wlserver/modules/features/oracle.wls.common.nodemanager_2.0.0.0.jar:$ORACLE_HOME/wlserver/../oracle_common/modules/com.oracle.cie.config-wls-online_8.1.0.0.jar:$ORACLE_HOME/wlserver/common/derby/lib/derbyclient.jar:$ORACLE_HOME/wlserver/common/derby/lib/derby.jar:$ORACLE_HOME/wlserver/server/lib/xqrl.jar:$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar:
.
PATH=$ORACLE_HOME/wlserver/server/bin:$ORACLE_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$DOMAIN_HOME/D2/lockbox:$DOMAIN_HOME/D2/lockbox/lib/native/linux_gcc34_x64:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin
.
***************************************************
* To start WebLogic Server, use a username and *
* password assigned to an admin-level user. For *
* server administration, use the WebLogic Server *
* console at http://hostname:port/console *
***************************************************
starting weblogic with Java version:
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
Starting WLS with line:
$JAVA_HOME/bin/java -server -Xms256m -Xmx512m -XX:MaxPermSize=256m -Dweblogic.Name=AdminServer -Djava.security.policy=$ORACLE_HOME/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Ddomain.home=$DOMAIN_HOME -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dweblogic.security.disableNullCipher=true -Djava.security.egd=file:///dev/./urandom -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.nodemanager.ServiceEnabled=true -Djava.endorsed.dirs=$JAVA_HOME/jre/lib/endorsed:$ORACLE_HOME/wlserver/../oracle_common/modules/endorsed -da -Dwls.home=$ORACLE_HOME/wlserver/server -Dweblogic.home=$ORACLE_HOME/wlserver/server -Dweblogic.management.server=t3s://weblogic_server_01:8443 -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
<Aug 5, 2017 1:40:04 PM UTC> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Aug 5, 2017 1:40:04 PM UTC> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
$DOMAIN_HOME/lib/LB.jar:$DOMAIN_HOME/lib/LBJNI.jar.>
<Aug 5, 2017 1:40:04 PM UTC> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.102-b14 from Oracle Corporation.>
<Aug 5, 2017 1:40:05 PM UTC> <Warning> <Security> <BEA-090924> <JSSE has been selected by default, since the SSLMBean is not available.>
<Aug 5, 2017 1:40:05 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Aug 5, 2017 1:40:05 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $ORACLE_HOME/wlserver/server/lib/DemoTrust.jks.>
<Aug 5, 2017 1:40:06 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $JAVA_HOME/jre/lib/security/cacerts.>
<Aug 5, 2017 1:40:06 PM UTC> <Info> <Management> <BEA-141298> <Could not register with the Administration Server: java.rmi.RemoteException: [Deployer:149147]Could not reach the Administration Server through any of its URLs: "https://weblogic_server_01:8443".>
<Aug 5, 2017 1:40:06 PM UTC> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.1.3.0.0 Wed May 21 18:53:34 PDT 2014 1604337 >
<Aug 5, 2017 1:40:07 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $ORACLE_HOME/wlserver/server/lib/DemoTrust.jks.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file $JAVA_HOME/jre/lib/security/cacerts.>
<Aug 5, 2017 1:40:07 PM UTC> <Alert> <Management> <BEA-141151> <The Administration Server could not be reached at https://weblogic_server_01:8443.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <Configuration Management> <BEA-150018> <This server is being started in Managed Server independence mode in the absence of the Administration Server.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.>
<Aug 5, 2017 1:40:07 PM UTC> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.>
<Aug 5, 2017 1:40:07 PM UTC> <Notice> <Log Management> <BEA-170019> <The server log file $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.>
<Aug 5, 2017 1:40:09 PM UTC> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Aug 5, 2017 1:40:10 PM UTC> <Error> <Configuration Management> <BEA-150000> <An error occurred while establishing a connection back to the Adminstration Server t3s://weblogic_server_01:8443 during startup. Since bootstrap succeeded, check that t3s://weblogic_server_01:8443 uniquely identifies the Administration Server.
javax.naming.ConfigurationException: Cannot register for disconnect events on local server
    at weblogic.server.channels.RemoteChannelServiceImpl.registerInternal(RemoteChannelServiceImpl.java:234)
    at weblogic.server.channels.RemoteChannelServiceImpl.registerForever(RemoteChannelServiceImpl.java:190)
    at weblogic.protocol.ConnectMonitorFactory.registerForever(ConnectMonitorFactory.java:54)
    at weblogic.management.provider.MSIService.registerForReconnectToAdminServer(MSIService.java:188)
    at weblogic.management.mbeanservers.compatibility.internal.CompatibilityMBeanServerService.start(CompatibilityMBeanServerService.java:183)
    Truncated. see log file for complete stacktrace
>
<Aug 5, 2017 1:40:10 PM UTC> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias myacert from the JKS keystore file $DOMAIN_HOME/certs/identity.jks.>
<Aug 5, 2017 1:40:10 PM UTC> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the JKS keystore file $DOMAIN_HOME/certs/trust.jks.>
<Aug 5, 2017 1:40:12 PM UTC> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:

    There are 1 nested errors:

javax.naming.ConfigurationException: Cannot register for disconnect events on local server
    at weblogic.server.channels.RemoteChannelServiceImpl.registerInternal(RemoteChannelServiceImpl.java:234)
    at weblogic.server.channels.RemoteChannelServiceImpl.registerForever(RemoteChannelServiceImpl.java:190)
    at weblogic.protocol.ConnectMonitorFactory.registerForever(ConnectMonitorFactory.java:54)
    at weblogic.management.provider.MSIService.registerForReconnectToAdminServer(MSIService.java:188)
    at weblogic.management.mbeanservers.compatibility.internal.CompatibilityMBeanServerService.start(CompatibilityMBeanServerService.java:183)
    at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:78)
    at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1017)
    at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:388)
    at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:430)
    at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:456)
    at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:225)
    at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:82)
    at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2488)
    at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:98)
    at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:87)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1162)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1147)
    at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$UpOneLevel.run(CurrentTaskFuture.java:753)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:553)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)

>
<Aug 5, 2017 1:40:12 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED.>
<Aug 5, 2017 1:40:12 PM UTC> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down.>
<Aug 5, 2017 1:40:12 PM UTC> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>
[weblogic@weblogic_server_01 bin]$

 

So what is happening exactly? How can this simple variable definition prevent you from starting your Admin Server? If you take a look at the script “startWebLogic.sh”, you will not see any use of this variable, so you should be good, right? Well, not really, because this script actually loads its environment using the well-known “setDomainEnv.sh” (in the same folder). The interesting part is in this second file… If you check the usage of “ADMIN_URL” in setDomainEnv.sh, you will see an “if-then-else” section:

[weblogic@weblogic_server_01 bin]$ grep -C2 "ADMIN_URL" setDomainEnv.sh
# Clustering support (edit for your cluster!)

if [ "${ADMIN_URL}" = "" ] ; then
        # The then part of this block is telling us we are either starting an admin server OR we are non-clustered
        CLUSTER_PROPERTIES=""
        export CLUSTER_PROPERTIES
else
        CLUSTER_PROPERTIES="-Dweblogic.management.server=${ADMIN_URL}"
        export CLUSTER_PROPERTIES
fi
[weblogic@weblogic_server_01 bin]$

 

With this small portion of code, you can actually understand the issue:

  • if the variable “ADMIN_URL” isn’t defined when loading the file setDomainEnv.sh, then WebLogic will suppose that you are starting an Admin Server or that you are in a non-clustered environment.
  • if the variable “ADMIN_URL” is defined when loading the file setDomainEnv.sh, then WebLogic will suppose that you are starting a clustered environment

 

Therefore when defining the ADMIN_URL in the environment, we are actually – unintentionally – saying to WebLogic that this is a cluster and that the Management Server can be found at $ADMIN_URL… But this URL is the local Admin Server which we are trying to start. Thus the failure to start…

If you carefully read the logs above, you could actually see this small difference… When reproducing the error, if you take a look at the end of the “Starting WLS with line:” command, you will see that the JVM parameter “-Dweblogic.management.server=t3s://weblogic_server_01:8443″ is present, and that’s the reason for this issue.
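As a practical consequence, if you still need an ADMIN_URL-like variable for your own tooling, make sure it is not exported in the shell that starts the Admin Server. A minimal sketch, assuming an interactive session and the usual DOMAIN_HOME layout:

# make sure the variable is not inherited by setDomainEnv.sh when starting the Admin Server
unset ADMIN_URL
$DOMAIN_HOME/bin/startWebLogic.sh

Another option is simply to pick a different variable name in your own scripts, since ADMIN_URL has a special meaning for setDomainEnv.sh.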

 

 

Cet article WebLogic – Cannot register for disconnect events on local server est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – How to recover a lost entry in a replicated OUD environment

Thu, 2017-08-10 04:57

By accident, it could happen that someone drops an OUD entry in your replicated environment and, of course, the deletion is propagated to all replicas as well. And besides that, you have no logical ldif export of your OUD. Not a good situation, but if you have done regular backups, there is a way out without resetting the whole OUD environment to an old timestamp.

The idea is to create a second, empty OUD instance, restore only the Context which is needed, export the entry as ldif and import it again into your current replicated OUD.

Let’s suppose we have lost the following OUD entry:

Distinguished Name: cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com
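Before going further, it may be worth confirming that the entry is really gone from the remaining replicated instance. A minimal sketch with the OUD ldapsearch tool, assuming the first instance (asinst_1) listens on the default LDAP port 1389 (the port, instance path and password file are assumptions, adapt them to your setup):

/u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/ldapsearch --hostname dbidg01 --port 1389 --bindDN "cn=Directory Manager" --bindPasswordFile ~/.oudpwd --baseDN "cn=OracleContext,dc=william,dc=dbi,dc=com" "(cn=dbit122_ldap)" dn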

To recover it, follow these steps:

  1. Create new OUD instance
  2. Create suffix
  3. Stop OUD instance asinst_2
  4. Restore the correct dc
  5. Start OUD instance asinst_2
  6. Export entry from asinst_2
  7. Import entry into asinst_1
  8. Cleanup asinst_2
1. Create new OUD instance
[dbafmw@dbidg01 ~]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
[dbafmw@dbidg01 Oracle_OUD1]$
[dbafmw@dbidg01 Oracle_OUD1]$ ./oud-setup --cli --baseDN dc=dbi,dc=com --addBaseEntry --adminConnectorPort 5444 --ldapPort 2389 \
> --rootUserDN cn=Directory\ Manager --rootUserPasswordFile ~/.oudpwd \
> --ldapsPort 2636 --generateSelfSignedCertificate \
> --hostname dbidg01 --integration generic \
> --serverTuning -Xms2048m\ -Xmx2048m\ -d64\ -XX:+UseCompressedOops\ -server\ -XX:MaxTenuringThreshold=1\ -XX:+UseConcMarkSweepGC\ -XX:CMSInitiatingOccupancyFraction=55 \
> --offlineToolsTuning -Xms2048m\ -Xmx2048m\ -d64\ -XX:+UseCompressedOops\ -server\ -XX:+UseParallelGC\ -XX:+UseNUMA \
> --no-prompt --noPropertiesFile
OUD Instance location successfully created - /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/../asinst_2"

Oracle Unified Directory 11.1.2.3.170718
Please wait while the setup program initializes...

See /u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/logs/oud-setup
for a detailed log of this operation.

Configuring Directory Server ..... Done.
Configuring Certificates ..... Done.
Creating Base Entry dc=dbi,dc=com ..... Done.
Preparing the server for Oracle integration ....... Done.
Starting Directory Server ....... Done.
Creating Net Services suffixes ..... Done.

To see basic server configuration status and configuration you can launch
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin/status


The new OUD instance was started automatically.


[dbafmw@dbidg01 bin]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin/status


>>>> Specify Oracle Unified Directory LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         1

          --- Server Details ---
Host Name:                dbidg01
Administrative Users:     cn=Directory Manager
Installation Path:
/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Instance Path:
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD
Version:                  Oracle Unified Directory 11.1.2.3.170718
Java Version:             1.7.0_151
Administration Connector: Port 5444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol : State
-------------:----------:---------
--           : LDIF     : Disabled
0.0.0.0:161  : SNMP     : Disabled
0.0.0.0:1689 : JMX      : Disabled
0.0.0.0:2389 : LDAP     : Enabled
0.0.0.0:2636 : LDAPS    : Enabled

          --- Data Sources ---
Base DN:     cn=OracleContext
Backend ID:  OIDCompatibility
Entries:     26
Replication: Disabled

Base DN:     cn=OracleContext,dc=dbi,dc=com
Backend ID:  OracleContext0
Entries:     17
Replication: Disabled

Base DN:     cn=OracleSchemaVersion
Backend ID:  OIDCompatibility
Entries:     3
Replication: Disabled

Base DN:     cn=virtual acis
Backend ID:  virtualAcis
Entries:     0
Replication: Disabled

Base DN:     dc=dbi,dc=com
Backend ID:  userRoot
Entries:     1
Replication: Disabled

[dbafmw@dbidg01 bin]$
2. Create suffix
[dbafmw@dbidg01 bin]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin

[dbafmw@dbidg01 bin]$ ./manage-suffix create --baseDN "dc=william,dc=dbi,dc=com" \
> --entries base-entry --integration Generic \
> --hostname localhost --port 5444 \
> --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd \
> --trustAll --no-prompt
Reading Configuration ..... Done.

Creating suffixes ..... Done.

Adding Data ..... Done.

Updating Oracle Integration ..... Done.
[dbafmw@dbidg01 bin]$
3. Stop OUD instance asinst_2
[dbafmw@dbidg01 bin]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin
[dbafmw@dbidg01 bin]$ ./stop-ds
Stopping Server...

[10/Aug/2017:11:10:34 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=OIDCompatibility,cn=Workflow Elements,cn=config is now taken offline
[10/Aug/2017:11:10:34 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=OracleContext for dc=william\,dc=dbi\,dc=com,cn=Workflow elements,cn=config is now taken offline
[10/Aug/2017:11:10:35 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=OracleContext0,cn=Workflow elements,cn=config is now taken offline
[10/Aug/2017:11:10:35 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=userRoot,cn=Workflow Elements,cn=config is now taken offline
[10/Aug/2017:11:10:35 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=userRoot-0,cn=Workflow elements,cn=config is now taken offline
[10/Aug/2017:11:10:35 +0200] category=BACKEND severity=NOTICE msgID=9896306 msg=The backend cn=virtualAcis,cn=Workflow Elements,cn=config is now taken offline
[10/Aug/2017:11:10:35 +0200] category=CORE severity=NOTICE msgID=458955 msg=The Directory Server is now stopped
4. Restore the correct dc
[dbafmw@dbidg01 bin]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin
[dbafmw@dbidg01 bin]$ ./restore --listBackups --backupDirectory="/u99/backup/OUD/20170810/OracleContext for dc=william,dc=dbi,dc=com"
Backup ID:          20170810085118Z
Backup Date:        10/Aug/2017:10:51:18 +0200
Is Incremental:     false
Is Compressed:      true
Is Encrypted:       false
Has Unsigned Hash:  false
Has Signed Hash:    false
Dependent Upon:     none

[dbafmw@dbidg01 bin]$ ./restore --dry-run --backupDirectory="/u99/backup/OUD/20170810/OracleContext for dc=william,dc=dbi,dc=com"
[10/Aug/2017:11:19:47 +0200] category=JEB severity=NOTICE msgID=8847444 msg=Verifying: 00000000.jdb

[dbafmw@dbidg01 bin]$ ./restore  --backupDirectory="/u99/backup/OUD/20170810/OracleContext for dc=william,dc=dbi,dc=com"
[10/Aug/2017:11:20:11 +0200] category=JEB severity=NOTICE msgID=8847445 msg=Restored: 00000000.jdb (size 114414)
5. Start OUD instance asinst_2
[dbafmw@dbidg01 bin]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD/bin
[dbafmw@dbidg01 bin]$ ./start-ds
[10/Aug/2017:11:20:30 +0200] category=CORE severity=INFORMATION msgID=132 msg=The Directory Server is beginning the configuration bootstrapping process
[10/Aug/2017:11:20:31 +0200] category=CORE severity=NOTICE msgID=458886 msg=Oracle Unified Directory 11.1.2.3.170718 (build 20170621135318Z, R1706210545) starting up
[10/Aug/2017:11:20:34 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381717 msg=Installation Directory:  /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
[10/Aug/2017:11:20:34 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381719 msg=Instance Directory:      /u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD
[10/Aug/2017:11:20:34 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381713 msg=JVM Information: 1.7.0_151-b15 by Oracle Corporation, 64-bit architecture, 2130051072 bytes heap size
[10/Aug/2017:11:20:34 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381714 msg=JVM Host: dbidg01, running Linux 4.1.12-94.5.7.el7uek.x86_64 amd64, 5986422784 bytes physical memory size, number of processors available 2
[10/Aug/2017:11:20:34 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381715 msg=JVM Arguments: "-Xms2048m", "-Xmx2048m", "-XX:+UseCompressedOops", "-XX:MaxTenuringThreshold=1", "-XX:+UseConcMarkSweepGC", "-XX:CMSInitiatingOccupancyFraction=55", "-Dorg.opends.server.scriptName=start-ds"
[10/Aug/2017:11:20:34 +0200] category=ACCESS_CONTROL severity=INFORMATION msgID=12582978 msg=Added 11 Global Access Control Instruction (ACI) attribute types to the access control evaluation engine
[10/Aug/2017:11:20:34 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend OracleContext0 does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:34 +0200] category=BACKEND severity=INFORMATION msgID=9437594 msg=Local DB backend OracleContext0 does not specify the number of cleaner threads: defaulting to 24 threads
[10/Aug/2017:11:20:34 +0200] category=BACKEND severity=INFORMATION msgID=9437615 msg=Local DB backend OracleContext0 does not specify the percentage of the heap space to allocate to the database cache: defaulting to 35 percent
[10/Aug/2017:11:20:34 +0200] category=BACKEND severity=INFORMATION msgID=9437613 msg=Local DB backend OracleContext0 does not specify the size of the file handle cache: sizing automatically to use 100 file descriptors
[10/Aug/2017:11:20:35 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=OracleContext0,cn=Workflow elements,cn=config containing 17 entries has started
[10/Aug/2017:11:20:35 +0200] category=ACCESS_CONTROL severity=INFORMATION msgID=12582962 msg=Added 5 Access Control Instruction (ACI) attribute types found in context "cn=OracleContext,dc=dbi,dc=com" to the access control evaluation engine
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend virtualAcis does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:35 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=virtualAcis,cn=Workflow Elements,cn=config containing 0 entries has started
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend userRoot-0 does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437594 msg=Local DB backend userRoot-0 does not specify the number of cleaner threads: defaulting to 24 threads
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437615 msg=Local DB backend userRoot-0 does not specify the percentage of the heap space to allocate to the database cache: defaulting to 35 percent
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437613 msg=Local DB backend userRoot-0 does not specify the size of the file handle cache: sizing automatically to use 100 file descriptors
[10/Aug/2017:11:20:35 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=userRoot-0,cn=Workflow elements,cn=config containing 1 entries has started
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend OracleContext for dc=william,dc=dbi,dc=com does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437594 msg=Local DB backend OracleContext for dc=william,dc=dbi,dc=com does not specify the number of cleaner threads: defaulting to 24 threads
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437615 msg=Local DB backend OracleContext for dc=william,dc=dbi,dc=com does not specify the percentage of the heap space to allocate to the database cache: defaulting to 35 percent
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437613 msg=Local DB backend OracleContext for dc=william,dc=dbi,dc=com does not specify the size of the file handle cache: sizing automatically to use 100 file descriptors
[10/Aug/2017:11:20:35 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=OracleContext for dc=william\,dc=dbi\,dc=com,cn=Workflow elements,cn=config containing 18 entries has started
[10/Aug/2017:11:20:35 +0200] category=ACCESS_CONTROL severity=INFORMATION msgID=12582962 msg=Added 5 Access Control Instruction (ACI) attribute types found in context "cn=OracleContext,dc=william,dc=dbi,dc=com" to the access control evaluation engine
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend userRoot does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437594 msg=Local DB backend userRoot does not specify the number of cleaner threads: defaulting to 24 threads
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437615 msg=Local DB backend userRoot does not specify the percentage of the heap space to allocate to the database cache: defaulting to 35 percent
[10/Aug/2017:11:20:35 +0200] category=BACKEND severity=INFORMATION msgID=9437613 msg=Local DB backend userRoot does not specify the size of the file handle cache: sizing automatically to use 100 file descriptors
[10/Aug/2017:11:20:35 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=userRoot,cn=Workflow Elements,cn=config containing 1 entries has started
[10/Aug/2017:11:20:36 +0200] category=BACKEND severity=INFORMATION msgID=9437595 msg=Local DB backend OIDCompatibility does not specify the number of lock tables: defaulting to 97
[10/Aug/2017:11:20:36 +0200] category=BACKEND severity=INFORMATION msgID=9437594 msg=Local DB backend OIDCompatibility does not specify the number of cleaner threads: defaulting to 24 threads
[10/Aug/2017:11:20:36 +0200] category=BACKEND severity=INFORMATION msgID=9437615 msg=Local DB backend OIDCompatibility does not specify the percentage of the heap space to allocate to the database cache: defaulting to 35 percent
[10/Aug/2017:11:20:36 +0200] category=BACKEND severity=INFORMATION msgID=9437613 msg=Local DB backend OIDCompatibility does not specify the size of the file handle cache: sizing automatically to use 100 file descriptors
[10/Aug/2017:11:20:36 +0200] category=JEB severity=NOTICE msgID=8847402 msg=The database backend cn=OIDCompatibility,cn=Workflow Elements,cn=config containing 29 entries has started
[10/Aug/2017:11:20:36 +0200] category=ACCESS_CONTROL severity=INFORMATION msgID=12582962 msg=Added 10 Access Control Instruction (ACI) attribute types found in context "cn=OracleContext" to the access control evaluation engine
[10/Aug/2017:11:20:36 +0200] category=ACCESS_CONTROL severity=INFORMATION msgID=12582962 msg=Added 1 Access Control Instruction (ACI) attribute types found in context "cn=OracleSchemaVersion" to the access control evaluation engine
[10/Aug/2017:11:20:36 +0200] category=EXTENSIONS severity=INFORMATION msgID=1048797 msg=DIGEST-MD5 SASL mechanism using a server fully qualified domain name of: dbidg01
[10/Aug/2017:11:20:36 +0200] category=CORE severity=INFORMATION msgID=731 msg=LDAP Connection Handler 0.0.0.0 port 2389 does not specify the number of request handler threads: sizing automatically to use 8 threads
[10/Aug/2017:11:20:36 +0200] category=CORE severity=INFORMATION msgID=731 msg=LDAP Connection Handler 0.0.0.0 port 2636 does not specify the number of request handler threads: sizing automatically to use 8 threads
[10/Aug/2017:11:20:36 +0200] category=CORE severity=INFORMATION msgID=720 msg=No worker queue thread pool size specified: sizing automatically to use 24 threads
[10/Aug/2017:11:20:36 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on Administration Connector 0.0.0.0 port 5444
[10/Aug/2017:11:20:36 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 0.0.0.0 port 2389
[10/Aug/2017:11:20:36 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 0.0.0.0 port 2636
[10/Aug/2017:11:20:36 +0200] category=CORE severity=NOTICE msgID=458887 msg=The Directory Server has started successfully
[10/Aug/2017:11:20:36 +0200] category=CORE severity=NOTICE msgID=458891 msg=The Directory Server has sent an alert notification generated by class org.opends.server.core.DirectoryServer (alert type org.opends.server.DirectoryServerStarted, alert ID 458887):  The Directory Server has started successfully
6. Export entry from asinst_2

Now let’s check if we got our entry “cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com” back.

[dbafmw@dbidg01 bin]$ ./ldapsearch --hostname localhost --port 2389 --bindDN "cn=Directory Manager" \
> --bindPasswordFile ~/.oudpwd --baseDN "cn=OracleContext,dc=william,dc=dbi,dc=com" "(cn=dbit122_ldap)" objectclass orclNetDescString orclNetDescName orclVersion
dn: cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com
orclNetDescName: 000:cn=DESCRIPTION_0
orclNetDescString: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg
 01)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521)))(CONNECT_DATA=
 (SERVICE_NAME=DBIT122_PRI)))
objectclass: orclNetService
objectclass: top

Looks good, so we can create the LDIF file now.

[dbafmw@dbidg01 bin]$ ./ldapsearch --hostname localhost --port 2389 --bindDN "cn=Directory Manager" --bindPasswordFile \
> ~/.oudpwd --baseDN "cn=OracleContext,dc=william,dc=dbi,dc=com" "(cn=dbit122_ldap)" objectclass \
> orclNetDescString orclNetDescName orclVersion > /tmp/dbit122.ldif

[dbafmw@dbidg01 bin]$ cat /tmp/dbit122.ldif
dn: cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com
orclNetDescName: 000:cn=DESCRIPTION_0
orclNetDescString: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg
 01)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521)))(CONNECT_DATA=
 (SERVICE_NAME=DBIT122_PRI)))
objectclass: orclNetService
objectclass: top
7. Import the entry into asinst_1 (this is the replicated OUD environment)
[dbafmw@dbidg01 bin]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin

[dbafmw@dbidg01 bin]$ ./ldapmodify --defaultAdd --filename /tmp/dbit122.ldif --hostname dbidg01 --port 1389 \
> --bindDN "cn=Directory Manager" --bindPasswordFile ~/.oudpwd
Processing ADD request for cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com
ADD operation successful for DN cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com
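
Before cleaning up, it is worth a quick check that the entry is really back in the replicated instance asinst_1 and that replication distributes it. A minimal sketch of such a check, where the second host dbidg02 and its LDAP port are assumptions to adapt to your own replication topology:

./ldapsearch --hostname dbidg01 --port 1389 --bindDN "cn=Directory Manager" \
  --bindPasswordFile ~/.oudpwd --baseDN "cn=OracleContext,dc=william,dc=dbi,dc=com" \
  "(cn=dbit122_ldap)" orclNetDescString

# assumption: a second replica runs on dbidg02 with the same LDAP port
./ldapsearch --hostname dbidg02 --port 1389 --bindDN "cn=Directory Manager" \
  --bindPasswordFile ~/.oudpwd --baseDN "cn=OracleContext,dc=william,dc=dbi,dc=com" \
  "(cn=dbit122_ldap)" orclNetDescString

Both searches should return the dn cn=dbit122_ldap,cn=OracleContext,dc=william,dc=dbi,dc=com together with its orclNetDescString.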
8. Clean up asinst_2
[dbafmw@dbidg01 OUD]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_2/OUD
[dbafmw@dbidg01 OUD]$ ./uninstall --cli --remove-all

The server is currently running and must be stopped before uninstallation can
continue.
Stop the Server and permanently delete the files? (yes / no) [yes]: yes

Stopping Directory Server ..... Done.
Deleting Files under the Installation Path ..... Done.

Uninstall Completed Successfully.
See /tmp/oud-uninstall-4049143346007549356.log for a detailed log of this operation.
[dbafmw@dbidg01 OUD]$

That's it: we got our entry back and even cleaned up the leftovers from the temporary OUD instance asinst_2.
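
As a final sanity check (not part of the run above), and assuming the database clients resolve net service names through LDAP (NAMES.DIRECTORY_PATH containing LDAP in sqlnet.ora plus an ldap.ora pointing to the OUD servers), the alias should resolve again from a client:

tnsping dbit122_ldap

# any valid database user will do, system is just a placeholder here
sqlplus system@dbit122_ldap

If tnsping returns the DESCRIPTION string stored in orclNetDescString, the recovery is also complete from the client point of view.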

Conclusion

Losing entries from your replicated OUD environment is not the end of the world. As long as you have good backups, you can spin up a new, temporary OUD instance at any time, restore only the Oracle Context backend you need, and re-import the missing entry into the replicated environment.
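
Of course this kind of recovery only works if the backups exist in the first place. A minimal sketch of a daily backup of all backends into a dated directory, similar to the one restored in step 4, could look like this; the options --backUpAll and --compress are taken from the standard OUD backup utility, so verify them with ./backup --help on your version (and decide whether you prefer a task based online backup) before scheduling it, for example via cron:

cd /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin
./backup --backUpAll --compress --backupDirectory "/u99/backup/OUD/$(date +%Y%m%d)"

This matches the layout seen above, with one subdirectory per backend such as "OracleContext for dc=william,dc=dbi,dc=com" below the dated backup directory.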

 

Cet article OUD 11.1.2.3 – How to recover a lost entry in a replicated OUD environment est apparu en premier sur Blog dbi services.
