Yann Neuhaus

dbi services technical blog

Oracle Licensing (R)evolution

Thu, 2017-12-21 08:49

In 2015 I wrote a blog named “All you need to know about Oracle Database licensing with VMware”. This blog generated lots of comments and hopefully helped some DBAs and IT managers understand the potential issues they can face in case of an Oracle LMS audit.

Over the last 2 years I have gained new experience related to Oracle LMS audits, and I’d like to share it with you to provide up-to-date information. What is written in this blog is nothing more than a summary of facts encountered with Swiss Oracle customers. The situation described in this blog relates to one of our customers, the SITel (Service de l’Informatique et des Télécommunication) of Fribourg in Switzerland, and has also been encountered by several other customers of ours. I cannot guarantee that Oracle applies the same strategy worldwide. Please note that this article has been written with the approval of all customers named in this blog.

What didn’t change?

Oracle on VMware not certified: Oracle still does not certify any of its products on VMware virtualized environments, as explained in My Oracle Support Note ID 249212.1. In summary, it’s supported but not certified, meaning that Oracle does not guarantee the proper working of these products on a VMware architecture, but will support you in case of incidents not related to the VMware infrastructure. In cases where VMware could be involved, Oracle Support may ask you to reproduce the incident on a certified environment. VMware, on its side, has its own Oracle support policy, available here.

Oracle Audit reputation: As mentioned on clearlicensing.org, Oracle has not improved its audit reputation. Indeed, according to this website, Oracle was voted the worst vendor during the audit process (cf. graph below).

Clearlicensing survey graph: “Who is the least helpful vendor in terms of audits?” (aggressive behavior, focused on short-term revenue)

Soft and hard partitioning policy: VMware is still a soft partitioning technology according to Oracle. The distinction between hard and soft partitioning is explained in this Oracle pdf file. In addition, VMware still explains to its customers that “DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software” (cf. UNDERSTANDING ORACLE CERTIFICATION, SUPPORT AND LICENSING FOR VMWARE ENVIRONMENTS). That is certainly true, but Oracle does not recognize this feature as making VMware a hard partitioning solution, and it will not be taken into consideration during an LMS audit.

What changed?

Oracle Edition: As you know, Standard Edition One and Standard Edition have been discontinued, and if you want to upgrade to 12.1.0.2 (12c patchset 1) you have to move to Standard Edition Two. You can find a good blog from Franck Pachot related to the installation of Standard Edition Two here, as well as the tests related to the 16-thread limitation here. Many questions related to Oracle Standard Edition 2 and the differences between Standard Edition One and Standard Edition Two are answered in this Oracle pdf.

Oracle LMS Auditors: In Switzerland we now have a company (entry of 15.02.2017), named Seven Eighths Schweiz GmbH, doing Oracle audits. The JPE program has been definitively stopped.

Oracle commercial approach for VMware customers: This is perhaps the most important point, and the one that inspired the title of this blog. Up to the middle of last year, Oracle customers using VMware to host their Oracle products had to be licensed for all processors where Oracle products are installed and/or running, but Oracle basically took into consideration all processors where Oracle products could be running. This has been well illustrated by the following famous image realized by House of Brick.

House Of Brick Oracle Parking

This means that since VMware vSphere 5.1, all the physical hosts managed by a vCenter Server instance have to be licensed, whatever the Oracle footprint on the virtual servers. This decision has probably been taken by Oracle because of the new features coming with VMware vSphere 5.1, particularly the one named vSphere vMotion (zero-downtime migration) without the need for shared storage configurations. Of course, with VMware 6.x, according to this same principle, all the vSphere environments of the company have to be licensed. Since last year, some customers have come to me requesting my advice on a proposal made by Oracle. Basically, Oracle proposed to validate their Oracle/VMware infrastructure and license only the processors that are really used to execute Oracle products, under these two conditions:

  1. Totally isolating Oracle products in their VMware infrastructure
  2. Oracle requires a “certain volume of business” (can be in terms of Oracle Cloud Credit) to validate the customer infrastructure schema

It is important to take into consideration that Oracle validated these infrastructures only for specific versions of VMware, usually the customer’s current VMware version and the next one.

What does “totally isolating Oracle products” mean?

Customers sent me the infrastructure examples that were provided to them by Oracle. You can find them below. (Note that the customers gave me approval to publish these pictures.)

Oracle VMware VLAN and storage configuration

vCenter, VLAN and storage configuration

What you can see on these infrastructure schemas is:

  1. A dedicated vCenter Server Instance with dedicated physical hosts
  2. A dedicated VLAN
  3. Storage isolation through LUN masking, zoning or any approved restriction

What does a “certain volume of business” mean?

In the cases we encountered, Oracle asked for a “certain volume of business” to validate the Oracle infrastructure on VMware. This volume of business has in some cases been realized by acquiring Oracle Cloud Credits. I cannot provide an exact budget for this compensation since it seems to vary according to the customer infrastructure. In the cases we encountered with our customers, we are speaking of amounts above USD 40,000 (CHF 40’000).

Conclusion

It seems that Oracle is no longer as restrictive regarding the licensing of Oracle on VMware infrastructures. VMware is still not certified for any Oracle product, but at least Oracle customers can speak with Oracle salespeople in order to avoid having to license all the processors of their vSphere environments. In some circumstances, and according to our experience, this discussion can lead to the acquisition of some other Oracle products or Cloud Credits to get the infrastructure validated by Oracle for specific VMware versions.

The other alternative to VMware is Oracle VM Server. Although support is chargeable, Oracle VM itself is free. Unfortunately, even if Oracle has made some great improvements over the last years, Oracle VM does not have the same functionalities as VMware, but it can perfectly fit depending on your needs. Since Oracle VM Server has to be configured specifically in order to be considered a hard partitioning technology, you can find the documentation to configure it as hard partitioning, with CPU pinning, here.
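
As an illustration, here is a minimal sketch of what this CPU pinning could look like with the ovm_vmcontrol utility from the Oracle VM Utilities (the manager host, credentials and core list below are hypothetical placeholders; refer to the hard partitioning documentation for the exact syntax of your release):

# pin the VM "rac001" to physical cores 0-3 only, so that only these
# cores have to be licensed (hard partitioning)
./ovm_vmcontrol -u admin -p <password> -h ovm-manager -v rac001 -c vcpuset -s 0,1,2,3

# display the current pinning to verify it
./ovm_vmcontrol -u admin -p <password> -h ovm-manager -v rac001 -c vcpuget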

I do hope that you find this blog helpful and would like to thank the SITel for its precious collaboration in reviewing this blog.

 


Online datafile move in a 12c dataguard environment

Wed, 2017-12-20 12:13

Oracle 12c introduced the online datafile move feature. One question we might ask is: what about moving a datafile online in a Data Guard environment? In this blog we will do some tests.
Below is our configuration; we are using Oracle 12.2.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 8 seconds ago)
DGMGRL>

The StandbyFileManagement property is set to auto for both the primary and the standby database.

DGMGRL> show database 'MYCONT_SITE' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL> show database 'MYCONT_SITE1' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL>
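
This broker property maps to the STANDBY_FILE_MANAGEMENT initialization parameter, so the same setting can also be checked in SQL*Plus on each instance; a quick sketch:

SQL> -- should return AUTO on both the primary and the standby
SQL> show parameter standby_file_management
SQL> select value from v$parameter where name = 'standby_file_management';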

Below are the datafiles of the pluggable database PDB1, on both the primary and the standby

SQL> show con_name;
CON_NAME
------------------------------
PDB1
SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

Now let’s move, for example, /u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf to a new location in the primary PDB1

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

We can verify the new location on the primary

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

As StandbyFileManagement is set to auto for both databases, we might think that the datafile is also moved on the standby, so let’s check

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

The answer is no.
OK, but is my Data Guard configuration still working? Let’s query the broker

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 0 seconds ago)

Yes, the configuration is fine.
Now, can I move the datafile online to the new location on the standby server? Let’s try

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

And we can verify that the datafile was moved.

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

And we can also verify that the Data Guard configuration is still fine

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 46 seconds ago)
DGMGRL>

Conclusion
We can see that:
1- The StandbyFileManagement property does not apply to online datafile move
2- Moving a datafile online on the primary does not move the corresponding datafile on the standby
3- An online datafile move can be done directly on the standby database
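
As a quick reference, here is a minimal sketch of the complete operation (the paths are the ones used in this test; adapt them to your environment):

-- on the primary, inside the PDB: move the datafile online
alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';

-- on the standby, inside the PDB: the move is not propagated by redo apply,
-- so repeat the same statement there
alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';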

 


Oracle 12.2 Dataguard : PDB Flashback on the Primary

Wed, 2017-12-20 12:12

The other day I was discussing with a colleague about database flashback for a pluggable database in a Data Guard environment. I did some tests and present the results in this blog.
Below is our broker configuration. Oracle 12.2 is used.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 35 seconds ago)
DGMGRL>

The primary database has FLASHBACK_ON set to YES.

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE READ WRITE YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

Same for the standby database

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE1 READ ONLY WITH APPLY YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ ONLY
PDB2 READ ONLY

For the tests we are going to flashback the primary PDB1.
Let’s connect to PDB1

10:15:59 SQL> alter session set container=pdb1;
Session altered.
.
10:16:15 SQL> show con_name;
CON_NAME
------------------------------
PDB1
10:16:22 SQL>

And let’s create a table named article with some data for reference

10:16:22 SQL> create table article (idart number);
Table created.
.
10:18:12 SQL> insert into article values (1);
1 row created.
10:18:31 SQL> insert into article values (2);
1 row created.
.
10:18:34 SQL> select * from article;
IDART
----------
1
2
.
10:18:46 SQL> commit;

Now let’s flashback the primary PDB1 to a point in time before the creation of the table article.

10:28:12 SQL> show con_name
CON_NAME
------------------------------
PDB1
.
10:28:16 SQL> shut immediate;
Pluggable Database closed.
.
10:28:28 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 MOUNTED
10:28:54 SQL>
.
10:28:54 SQL> FLASHBACK PLUGGABLE DATABASE PDB1 TO TIMESTAMP TO_TIMESTAMP('2017-12-20 10:16:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
10:30:14 SQL>

Now let’s open PDB1 with the resetlogs option

10:31:08 SQL> alter pluggable database PDB1 open resetlogs;
Pluggable database altered.
10:32:15 SQL>

And let’s query the table article. As expected, the table is no longer present

10:32:15 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 READ WRITE
.
10:32:59 SQL> select * from article;
select * from article
*
ERROR at line 1:
ORA-00942: table or view does not exist
10:33:06 SQL>

Now, if we check the status of our Data Guard configuration in the broker, we have errors

12/20/2017 10:23:07 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 48 seconds ago)
12/20/2017 10:34:40 DGMGRL>

The status of the primary database is fine

12/20/2017 10:34:40 DGMGRL> show database 'MYCONT_SITE';
Database - MYCONT_SITE
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYCONT
Database Status:
SUCCESS

But the standby status is returning some errors

12/20/2017 10:35:11 DGMGRL> show database 'MYCONT_SITE1';
Database - MYCONT_SITE1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 3 minutes 10 seconds (computed 1 second ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: OFF
Instance(s):
MYCONT
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
12/20/2017 10:35:15 DGMGRL>

And if we check the alert log of the standby database, we can find the following errors

(3):Recovery of pluggable database PDB1 aborted due to pluggable database open resetlog marker.
(3):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 2518041, or timestamp before 12/20/2017 10:16:01, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2017-12-20T10:32:05.565085+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 2520607
2017-12-20T10:32:05.612394+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
2017-12-20T10:32:05.612511+01:00
MRP0: Background Media Recovery process shutdown (MYCONT)

On the primary PDB, we can query the current INCARNATION_SCN in the v$pdb_incarnation view. And we can see that the current incarnation SCN is the same as the one specified in the standby alert log: 2518041.

11:08:11 SQL> show con_name
CON_NAME
------------------------------
PDB1
11:08:56 SQL> select status,INCARNATION_SCN from v$pdb_incarnation;
STATUS INCARNATION_SCN
------- ---------------
CURRENT 2518041
PARENT 2201909
PARENT 1396169
11:08:59 SQL>

As specified in the alert log, we have to flashback the standby PDB to an SCN lower than 2518041.
First, let’s stop the redo apply on the standby

12/20/2017 11:13:14 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-OFF';
Succeeded.
12/20/2017 11:13:59 DGMGRL>

Then let’s flashback to SCN 2518039 (i.e. 2518041 - 2), for example.
Let’s shut down the standby container MYCONT and start it up in mount state

11:18:42 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
.
11:19:10 SQL> startup mount
ORACLE instance started.
Total System Global Area 956301312 bytes
Fixed Size 8799656 bytes
Variable Size 348129880 bytes
Database Buffers 595591168 bytes
Redo Buffers 3780608 bytes
Database mounted.
11:19:50 SQL>
.
11:19:50 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED MOUNTED
PDB1 MOUNTED
PDB2 MOUNTED

Now let’s flashback PDB1 on the standby

11:20:19 SQL> flashback pluggable database PDB1 to SCN 2518039;
Flashback complete.
11:20:40 SQL>

The last step is to re-enable the redo apply for the standby container

12/20/2017 11:13:59 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-ON';
Succeeded.
12/20/2017 11:23:08 DGMGRL>

And then we can verify that the configuration is now fine

12/20/2017 11:25:05 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
12/20/2017 11:25:07 DGMGRL>

Conclusion
In this article we saw that flashback in a Data Guard environment works the same way for a container database as for a non-container database. The only difference is the SCN we must consider when flashing back the pluggable database: this SCN should be queried from the v$pdb_incarnation view, and not from v$database as we usually do for a non-container database.
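
As a quick reference, a minimal sketch of the recovery steps performed above on the standby side (the SCN and database names are the ones of this test):

1. On the primary PDB, get the new incarnation SCN:
   select status, incarnation_scn from v$pdb_incarnation;
2. Stop the redo apply (DGMGRL):
   edit database 'MYCONT_SITE1' set state='APPLY-OFF';
3. On the standby CDB, in mount state, flashback the PDB to an SCN lower than the incarnation SCN found in step 1:
   flashback pluggable database PDB1 to scn 2518039;
4. Re-enable the redo apply (DGMGRL):
   edit database 'MYCONT_SITE1' set state='APPLY-ON';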

 


Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02)

Wed, 2017-12-20 07:58

In this post we are going to deploy a RAC system ready to run a production load, with near-zero knowledge of RAC, Oracle Clusterware or Oracle Database.

We are going to use the “Deploy Cluster Tool”, which is provided by Oracle to perform the deployment of many kinds of database architectures you may need, like Oracle single instance, Oracle Restart or Oracle RAC. This tool lets you choose whether you want an Enterprise Edition or a Standard Edition, and whether you want an Oracle release 11g or 12c.

For this demonstration we are going to deploy a RAC 12cR2 in Standard Edition.

What you need at this stage

  • An OVM infrastructure as described in this post. In this infrastructure we have
    • 2 virtual machines called rac001 and rac002, with the required network cabling and disk configuration to run RAC
    • The 2 VMs are created from the Oracle template, which includes everything needed to deploy whichever configuration you need
  • A copy of the latest release of the “Deploy Cluster Tool”, available here

The most important part here is to edit the 2 configuration files that describe the configuration we want (see the params.ini excerpt below):

  • deployRacProd_SE_RAC_netconfig.ini: network parameters needed for the deployment
  • deployRacProd_SE_RAC_params.ini: parameters related to database memory, name, ASM disk groups, user UID and so on
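
For illustration, here is a short excerpt of the kind of settings found in params.ini, limited to the keys that actually appear in the deployment log further below (the full file contains many more options):

# do not create a database during the build
BUILD_RAC_DATABASE=no
BUILD_SI_DATABASE=no
# skip the memory prerequisite checks on these small test VMs
CLONE_SKIP_MEMORYCHECKS=yes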

This is the content of the network configuration used for this infrastructure:

-bash-4.1# egrep -v "^$|^#" deployRacProd_SE_RAC_netconfig.ini
NODE1=rac001
NODE1IP=192.168.179.210
NODE1PRIV=rac001-priv
NODE1PRIVIP=192.168.3.210
NODE1VIP=rac001-vip
NODE1VIPIP=192.168.179.211
NODE2=rac002
NODE2IP=192.168.179.212
NODE2PRIV=rac002-priv
NODE2PRIVIP=192.168.3.212
NODE2VIP=rac002-vip
NODE2VIPIP=192.168.179.213
PUBADAP=eth1
PUBMASK=255.255.255.0
PUBGW=192.168.179.1
PRIVADAP=eth2
PRIVMASK=255.255.255.0
RACCLUSTERNAME=cluprod01
DOMAINNAME=
DNSIP=""
NETCONFIG_DEV=/dev/xvdc
SCANNAME=cluprod01-scan
SCANIP=192.168.179.205
FLEX_CLUSTER=yes
FLEX_ASM=yes
ASMADAP=eth3
ASMMASK=255.255.255.0
NODE1ASMIP=192.168.58.210
NODE2ASMIP=192.168.58.212

Let’s start from the OVM Manager server, going into the “Deploy Cluster Tool” directory and initiating the first stage of the deployment:

-bash-4.1# cd /root/deploycluster3
-bash-4.1# ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
Oracle DB/RAC OneCommand (v3.0.5) for Oracle VM - deploy cluster - (c) 2011-2017 Oracle Corporation
 (com: 29100:v3.0.4, lib: 231275:v3.0.5, var: 1800:v3.0.5) - v2.6.5 - ovmm (x86_64)
Invoked as root at Mon Dec 18 14:19:48 2017  (size: 43900, mtime: Tue Feb 28 01:03:00 2017)
Using: ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:

INFO: Attempting to connect to Oracle VM Manager...

Oracle VM Manager Core WS-API Shell 3.4.2.1384 (20160914_1384)

Copyright (C) 2007, 2016 Oracle. All rights reserved.
See the LICENSE file for redistribution information.


Connecting to https://localhost:7002/...

INFO: Oracle VM Client CONNECTED to Oracle VM Manager (3.4.4.1709) UUID (0004fb00000100001f20e914973507f6)

INFO: Inspecting /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini for number of nodes defined....
INFO: Detected 2 nodes in: /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini

INFO: Located a total of (2) VMs;
      2 VMs with a simple name of: ['rac001', 'rac002']

INFO: Detected a RAC deployment...

INFO: Starting all (2) VMs...

INFO: VM with a simple name of "rac001" is in a Stopped state, attempting to start it.................................OK.

INFO: VM with a simple name of "rac002" is in a Stopped state, attempting to start it.................................OK.

INFO: Verifying that all (2) VMs are in Running state and pass prerequisite checks.....

INFO: Detected that all (2) VMs specified on command line have (9) common shared disks between them (ASM_MIN_DISKS=5)

INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
      params.ini (Overall build options): /root/deploycluster3/deployRacProd_SE_RAC_params.ini
      buildcluster: yes

INFO: Starting to send configuration details to all (2) VM(s).................................................................
INFO: Sending to VM with a simple name of "rac001"...........................................................................................................................................................................................................................................................
INFO: Sending to VM with a simple name of "rac002"..............................................................................................................................................................

INFO: Configuration details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (rac001)...

INFO: deploycluster.py completed successfully at 14:21:28 in 100.4 seconds (0h:01m:40s)
Logfile at: /root/deploycluster3/deploycluster23.log

 

At this stage we have 2 nodes with the required network configuration, such as host names and IP addresses. The deployment script has also pushed the configuration files mentioned previously into the VMs.

So we connect to the first VM, rac001:

-bash-4.1# ssh root@192.168.179.210
Warning: Permanently added '192.168.179.210' (RSA) to the list of known hosts.
root@192.168.179.210's password:
Last login: Mon Dec 11 10:31:03 2017
[root@rac001 ~]#

Then we go to the deployment directory, which is part of the template, and we can execute the deployment:

[root@rac001 racovm]# ./buildcluster.sh -s
Invoking on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:43 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
   Step(s): buildcluster

INFO (node:rac001): Skipping confirmation, flag (-s) supplied on command line
2017-12-18 09:06:43:[buildcluster:Start:rac001] Building 12cR2 RAC Cluster

INFO (node:rac001): No database created due to (BUILD_RAC_DATABASE=no) & (BUILD_SI_DATABASE=no) setting in params.ini
2017-12-18 09:06:45:[setsshroot:Start:rac001] SSH Setup for the root user...
..
INFO (node:rac001): Passwordless SSH for the root user already configured, skipping...
2017-12-18 09:06:46:[setsshroot:Done :rac001] SSH Setup for the root user completed successfully
2017-12-18 09:06:46:[setsshroot:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:46:[copykit:Start:rac001] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh  netconfig GetSystemTimeZone.class kitversion.txt mcast

INFO (node:rac001): Copied kit to remote node rac002 as root user
2017-12-18 09:06:48:[copykit:Done :rac001] Copy kit files to (1) remote nodes
2017-12-18 09:06:48:[copykit:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:06:48:[usrsgrps:Start:rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
....
INFO (node:rac001): Passwordless SSH for the Oracle user (oracle) already configured to all nodes; not re-setting users passwords
2017-12-18 09:06:55:[usrsgrps:Done :rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
2017-12-18 09:06:55:[usrsgrps:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Parameters loaded from params.ini...
  Users & Groups:
   Role Separation: no  Running as: root
   OInstall    : oinstall       GID: 54321
   RAC Owner   : oracle         UID: 54321
    DB OSDBA   : dba            GID: 54322
    DB OSOPER  :                GID:
    DB OSBACKUP: dba            GID:
    DB OSDGDBA : dba            GID:
    DB OSKMDBA : dba            GID:
    DB OSRAC   : dba            GID:
   Grid Owner  : oracle         UID: 54321
    GI OSDBA   : dba            GID: 54322
    GI OSOPER  :                GID:
    GI OSASM   : dba            GID: 54322
  Software Locations:
   Operating Mode: RAC                   Database Edition: STD
   Flex Cluster: yes      Flex ASM: yes
   Central Inventory: /u01/app/oraInventory
   Grid Home: /u01/app/12.2.0/grid  (Detected: 12cR2, Enterprise Edition)
   Grid Name: OraGrid12c
   RAC Home : /u01/app/oracle/product/12.2.0/dbhome_1  (Detected: 12cR2, Enterprise Edition)
   RAC Name : OraRAC12c
   RAC Base : /u01/app/oracle
   DB/RAC OVM kit : /u01/racovm
   Attach RAC Home: yes   GI Home: yes  Relink Homes: no   On OS Change: yes
   Addnode Copy: no
  Database & Storage:
   Database : no         DBName: ORCL  SIDName: ORCL  DG: DGDATA   Listener Port: 1521
   Policy Managed: no
   DBExpress: no         DBExpress port: 5500
   Grid Management DB: no   GIMR diskgroup name:
   Separate GIMR diskgroup: no
   Cluster Storage: ASM
   ASM Discovery String: /dev/xvd[k-s]1
   ASM diskgroup: dgocrvoting      Redundancy: EXTERNAL   Allocation Unit (au_size): 4
      Disks     : /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1 /dev/xvdo1
   Recovery DG  : DGFRA            Redundancy: EXTERNAL
      Disks     : /dev/xvdr1 /dev/xvds1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Extra DG #1  : DGDATA           Redundancy: EXTERNAL
      Disks     : /dev/xvdp1 /dev/xvdq1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Persistent disknames: yes  Stamp: yes  Partition: yes  Align: yes  GPT: no Permissions: 660
   ACFS Filesystem: no

Network information loaded from netconfig.ini...
  Default Gateway: 192.168.179.1  Domain:
  DNS:
  Public NIC : eth1  Mask: 255.255.255.0
  Private NIC: eth2  Mask: 255.255.255.0
  ASM NIC    : eth3  Mask: 255.255.0.0
  SCAN Name: cluprod01-scan  SCAN IP: 192.168.179.205  Scan Port: 1521
  Cluster Name: cluprod01
  Nodes & IP Addresses (2 of 2 nodes)
  Node  1: PubIP : 192.168.179.210 PubName : rac001
     (Hub) VIPIP : 192.168.179.211 VIPName : rac001-vip
           PrivIP: 192.168.3.210   PrivName: rac001-priv
           ASMIP : 192.168.58.210
  Node  2: PubIP : 192.168.179.212 PubName : rac002
     (Hub) VIPIP : 192.168.179.213 VIPName : rac002-vip
           PrivIP: 192.168.3.212   PrivName: rac002-priv
           ASMIP : 192.168.58.212
Running on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:55 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
2017-12-18 09:06:56:[printparams:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:56:[setsshora:Start:rac001] SSH Setup for the Oracle user(s)...
..
INFO (node:rac001): Passwordless SSH for the oracle user already configured, skipping...
2017-12-18 09:06:57:[setsshora:Done :rac001] SSH Setup for the oracle user completed successfully
2017-12-18 09:06:57:[setsshora:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:57:[diskconfig:Start:rac001] Storage Setup
2017-12-18 09:06:58:[diskconfig:Start:rac001] Running in configuration mode (local & remote nodes)
.
2017-12-18 09:06:58:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq............................OK
2017-12-18 09:07:02:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1.
2017-12-18 09:07:02:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo......../dev/xvdr./dev/xvds...../dev/xvdp./dev/xvdq........OK
2017-12-18 09:07:23:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002....................OK
2017-12-18 09:07:52:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:07:54:[diskconfig:Disks:rac001] Setting disk permissions for next startup (all nodes)...
.....OK
2017-12-18 09:07:56:[diskconfig:ClearPartTables:rac001] Clearing partition tables...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:03:[diskconfig:CreatePartitions:rac001] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:13:[diskconfig:CleanPartitions:rac001] Cleaning new partitions...
./dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1...OK
2017-12-18 09:08:13:[diskconfig:Done :rac001] Done configuring and checking disks on all nodes
2017-12-18 09:08:13:[diskconfig:Done :rac001] Storage Setup
2017-12-18 09:08:13:[diskconfig:Time :rac001] Completed successfully in 76 seconds (0h:01m:16s)
2017-12-18 09:08:15:[clearremotelogs:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:08:15:[check:Start:rac001] Pre-install checks on all nodes
..

INFO (node:rac001): Check found that all (2) nodes have the following (25586399 26609817 26609966) patches applied to the Grid Infrastructure Home (/u01/app/12.2.0/grid), the following (25811364 26609817 26609966) patches applied to the RAC Home (/u01/app/oracle/product/12.2.0/dbhome_1)
.2017-12-18 09:08:20:[checklocal:Start:rac001] Pre-install checks
2017-12-18 09:08:21:[checklocal:Start:rac002] Pre-install checks
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (check only mode)..
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac001): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac001): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)
2017-12-18 09:08:23:[diskconfig:Start:rac001] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)

INFO (node:rac002): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac002): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac002): Running network checks...
......
2017-12-18 09:08:24:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.............................OK
2017-12-18 09:08:29:[diskconfig:Disks:rac001] Checking existence of automatically renamed disks (localhost)...
/dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1.
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking permissions of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1..
2017-12-18 09:08:31:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac002........OK
2017-12-18 09:08:37:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002........
INFO (node:rac001): Waiting for all checklocal operations to complete on all nodes (At 09:08:50, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 13222 13365)...
...............
INFO (node:rac002): Check completed successfully
2017-12-18 09:09:07:[checklocal:Done :rac002] Pre-install checks
2017-12-18 09:09:07:[checklocal:Time :rac002] Completed successfully in 46 seconds (0h:00m:46s)
.......OK
2017-12-18 09:09:11:[diskconfig:Remote:rac001] Checking existence of automatically renamed disks (remote nodes)...
rac002...
2017-12-18 09:09:17:[diskconfig:Remote:rac001] Checking permissions of disks (remote nodes)...
rac002....
2017-12-18 09:09:21:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:09:26:[diskconfig:Done :rac001] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac001): Running multicast check on 230.0.1.0 port 42050 for 2 nodes...

INFO (node:rac001): All nodes can multicast to all other nodes on interface eth2 multicast address 230.0.1.0 port 42050...

INFO (node:rac001): Running network checks...
....................
INFO (node:rac001): Check completed successfully
2017-12-18 09:10:11:[checklocal:Done :rac001] Pre-install checks
2017-12-18 09:10:11:[checklocal:Time :rac001] Completed successfully in 111 seconds (0h:01m:51s)

INFO (node:rac001): All checklocal operations completed on all (2) node(s) at: 09:10:12
2017-12-18 09:10:12:[check:Done :rac001] Pre-install checks on all nodes
2017-12-18 09:10:13:[check:Time :rac001] Completed successfully in 117 seconds (0h:01m:57s)
2017-12-18 09:10:13:[creategrid:Start:rac001] Creating 12cR2 Grid Infrastructure
..
2017-12-18 09:10:16:[preparelocal:Start:rac001] Preparing node for Oracle installation

INFO (node:rac001): Resetting permissions on Oracle Homes... May take a while...
2017-12-18 09:10:17:[preparelocal:Start:rac002] Preparing node for Oracle installation

INFO (node:rac002): Resetting permissions on Oracle Homes... May take a while...

INFO (node:rac001): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:27:[preparelocal:Done :rac001] Preparing node for Oracle installation
2017-12-18 09:10:27:[preparelocal:Time :rac001] Completed successfully in 11 seconds (0h:00m:11s)

INFO (node:rac002): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:31:[preparelocal:Done :rac002] Preparing node for Oracle installation
2017-12-18 09:10:31:[preparelocal:Time :rac002] Completed successfully in 14 seconds (0h:00m:14s)
2017-12-18 09:10:32:[prepare:Time :rac001] Completed successfully in 19 seconds (0h:00m:19s)
....
2017-12-18 09:10:40:[giclonelocal:Start:rac001] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:41:[giattachlocal:Start:rac001] Attaching Grid Infratructure Home on node rac001

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
2017-12-18 09:10:41:[giclonelocal:Start:rac002] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:42:[giattachlocal:Start:rac002] Attaching Grid Infratructure Home on node rac002

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac001): Waiting for all giclonelocal operations to complete on all nodes (At 09:11:06, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 18135 18141)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac001): Running on: rac001 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:08:[giattachlocal:Done :rac001] Attaching Grid Infratructure Home on node rac001
2017-12-18 09:11:08:[giattachlocal:Time :rac001] Completed successfully in 27 seconds (0h:00m:27s)
2017-12-18 09:11:09:[girootlocal:Start:rac001] Running root.sh on Grid Infrastructure home

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac001_2017-12-18_09-11-09-287116939.log for the output of root script
2017-12-18 09:11:09:[girootlocal:Done :rac001] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:09:[girootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac002): Running on: rac002 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:10:[giattachlocal:Done :rac002] Attaching Grid Infratructure Home on node rac002
2017-12-18 09:11:10:[giattachlocal:Time :rac002] Completed successfully in 28 seconds (0h:00m:28s)
2017-12-18 09:11:10:[girootlocal:Start:rac002] Running root.sh on Grid Infrastructure home

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac002_2017-12-18_09-11-10-934545273.log for the output of root script
2017-12-18 09:11:11:[girootlocal:Done :rac002] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:11:[girootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
2017-12-18 09:11:11:[giclonelocal:Done :rac001] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:11:[giclonelocal:Time :rac001] Completed successfully in 33 seconds (0h:00m:33s)
2017-12-18 09:11:13:[giclonelocal:Done :rac002] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:13:[giclonelocal:Time :rac002] Completed successfully in 34 seconds (0h:00m:34s)

INFO (node:rac001): All giclonelocal operations completed on all (2) node(s) at: 09:11:14
2017-12-18 09:11:14:[giclone:Time :rac001] Completed successfully in 42 seconds (0h:00m:42s)
....
2017-12-18 09:11:18:[girootcrslocal:Start:rac001] Running rootcrs.pl

INFO (node:rac001): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_<timestamp>.log

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_2017-12-18_09-11-19AM.log
2017/12/18 09:11:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:11:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/12/18 09:12:19 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:12:25 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:12:28 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:12:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:14:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/12/18 09:14:27 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:14:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/12/18 09:15:48 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/12/18 09:16:56 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:18:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:18:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2676: Start of 'ora.driver.afd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded

Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.


2017/12/18 09:24:06 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e0321712cd544fa6bf438b5849f11155.
Successfully replaced voting disk group with +dgocrvoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e0321712cd544fa6bf438b5849f11155 (AFD:DGOCRVOTING1) [DGOCRVOTING]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac001'
CRS-2677: Stop of 'ora.crsd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac001'
CRS-2673: Attempting to stop 'ora.crf' on 'rac001'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac001'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac001'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac001'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac001' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac001'
CRS-2677: Stop of 'ora.crf' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac001'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac001'
CRS-2677: Stop of 'ora.evmd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac001'
CRS-2677: Stop of 'ora.cssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac001'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac001'
CRS-2677: Stop of 'ora.driver.afd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac001' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:26:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac001'
CRS-2676: Start of 'ora.ctssd' on 'rac001' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac001
CRS-6016: Resource auto-start has completed for server rac001
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:30:15 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:30:15 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.DGOCRVOTING.dg' on 'rac001'
CRS-2676: Start of 'ora.DGOCRVOTING.dg' on 'rac001' succeeded
2017/12/18 09:32:38 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/12/18 09:34:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:34:26:[girootcrslocal:Done :rac001] Running rootcrs.pl
2017-12-18 09:34:27:[girootcrslocal:Time :rac001] Completed successfully in 1388 seconds (0h:23m:08s)
2017-12-18 09:34:49:[girootcrslocal:Start:rac002] Running rootcrs.pl

INFO (node:rac002): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_<timestamp>.log

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_2017-12-18_09-34-50AM.log
2017/12/18 09:35:03 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:35:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:35:18, elapsed: 0h:00m:31s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:35:44 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:35:44 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
.2017/12/18 09:35:51 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:36:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
..2017/12/18 09:36:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:37:01 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:37:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:37:16 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:37:17 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
.2017/12/18 09:37:23 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:37:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:38:21, elapsed: 0h:03m:34s, 1) node remaining, all background pid(s): 7263)...
.2017/12/18 09:39:00 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
...2017/12/18 09:40:24 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:41:25, elapsed: 0h:06m:38s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:41:40 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:41:42 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
.CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac002'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac002' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:42:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
.....
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:44:27, elapsed: 0h:09m:40s, 1) node remaining, all background pid(s): 7263)...
.CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac002'
CRS-2672: Attempting to start 'ora.evmd' on 'rac002'
CRS-2676: Start of 'ora.mdnsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac002'
CRS-2676: Start of 'ora.gpnpd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac002'
CRS-2676: Start of 'ora.gipcd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac002'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac002'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac002'
CRS-2676: Start of 'ora.diskmon' on 'rac002' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac002'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac002'
CRS-2676: Start of 'ora.ctssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac002'
CRS-2676: Start of 'ora.crf' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac002'
CRS-2676: Start of 'ora.crsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6017: Processing resource auto-start for servers: rac002
CRS-2672: Attempting to start 'ora.net1.network' on 'rac002'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002'
CRS-2676: Start of 'ora.net1.network' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac002'
CRS-2676: Start of 'ora.ons' on 'rac002' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6016: Resource auto-start has completed for server rac002
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:45:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:45:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
.2017/12/18 09:45:49 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
..2017/12/18 09:46:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:46:40:[girootcrslocal:Done :rac002] Running rootcrs.pl
2017-12-18 09:46:40:[girootcrslocal:Time :rac002] Completed successfully in 711 seconds (0h:11m:51s)

INFO (node:rac001): All girootcrslocal operations completed on all (2) node(s) at: 09:46:42
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:46:42:[giassist:Start:rac001] Running RAC Home assistants (netca, asmca)

INFO (node:rac001): Creating the node Listener using NETCA... (09:46:44)

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/netca /orahome /u01/app/12.2.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.2.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c

Parsing command line arguments:
    Parameter "orahome" = /u01/app/12.2.0/grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/12.2.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/asmca -silent -postConfigureASM

Post configuration completed successfully


INFO (node:rac001): Setting initial diskgroup name dgocrvoting's attributes as defined in RACASMGROUP_ATTRIBUTES ('compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0')...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:38: alter diskgroup dgocrvoting set attribute 'compatible.asm'='12.2.0.1.0';

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:40: alter diskgroup dgocrvoting set attribute  'compatible.rdbms'='12.2.0.1.0';

Diskgroup altered.
2017-12-18 09:48:44:[creatediskgroups:Start:rac001] Creating additional diskgroups

INFO (node:rac001): Creating Recovery diskgroup (DGFRA) at: 09:50:31...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGFRA" EXTERNAL redundancy disk 'AFD:RECO1','AFD:RECO2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:09.34

INFO (node:rac001): Creating Extra diskgroup (DGDATA) at: 09:52:22...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGDATA" EXTERNAL redundancy disk 'AFD:DGDATA1','AFD:DGDATA2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:10.48

INFO (node:rac001): Successfully created the following ASM diskgroups (DGFRA DGDATA), setting them for automount on startup and attempting to mount on all nodes...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:52:35: alter system set asm_diskgroups='DGDATA', 'DGFRA';

System altered.

INFO (node:rac001): Successfully set the ASM diskgroups (DGDATA DGFRA) to automount on startup

INFO (node:rac001): Attempting to mount diskgroups on nodes running ASM: rac001 rac002

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:38: alter diskgroup "DGFRA" mount;

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:39: alter diskgroup "DGDATA" mount;

Diskgroup altered.

INFO (node:rac001): Successfully mounted the created (DGFRA DGDATA) ASM diskgroups on all nodes running an ASM instance (rac001 rac002)
2017-12-18 09:52:41:[creatediskgroups:Done :rac001] Creating additional diskgroups
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)

WARNING (node:rac001): Management Database not created due to CLONE_GRID_MANAGEMENT_DB=no. Note that starting with release 12.1.0.2 and higher, the Management Database (GIMR) is required for a fully supported environment
2017-12-18 09:52:41:[giassist:Done :rac001] Running RAC Home assistants (netca, asmca)
2017-12-18 09:52:41:[giassist:Time :rac001] Completed successfully in 359 seconds (0h:05m:59s)
2017-12-18 09:52:41:[creategrid:Done :rac001] Creating 12cR2 Grid Infrastructure
2017-12-18 09:52:41:[creategrid:Time :rac001] Completed successfully in 2548 seconds (0h:42m:28s)

INFO (node:rac001): Skipping CVU post crsinst checks, due to CLONE_SKIP_CVU_POSTCRS=yes
2017-12-18 09:52:41:[cvupostcrs:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 09:52:41:[racclone:Start:rac001] Cloning 12cR2 RAC Home on all nodes
..

INFO (node:rac001): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:04:[racclonelocal:Start:rac001] Cloning 12cR2 RAC Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac001"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6740 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4059 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-12AM. Please wait ...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:53:14, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 12271 12277)...

INFO (node:rac002): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:15:[racclonelocal:Start:rac002] Cloning 12cR2 RAC Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac002"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6757 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4082 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-42AM. Please wait ....You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.
.
Copy files successful.

Link binaries in progress.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log
...................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:56:19, elapsed: 0h:03m:35s, 2) nodes remaining, all background pid(s): 12271 12277)...
..
Link binaries successful.

Setup files in progress.
.
Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac001]


..................................................   100% Done.

INFO (node:rac001): Relinking the oracle binary to disable Database Enterprise Edition options (09:58:45)...
..
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:59:25, elapsed: 0h:06m:42s, 2) nodes remaining, all background pid(s): 12271 12277)...
..2017-12-18 10:00:28:[racrootlocal:Start:rac001] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac001_2017-12-18_10-00-28-680953042.log for the output of root script
2017-12-18 10:00:29:[racrootlocal:Done :rac001] Running root.sh on RAC Home
2017-12-18 10:00:29:[racrootlocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:00:29:[racclonelocal:Done :rac001] Cloning 12cR2 RAC Home
2017-12-18 10:00:29:[racclonelocal:Time :rac001] Completed successfully in 464 seconds (0h:07m:44s)

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac002]


..................................................   100% Done.

INFO (node:rac002): Relinking the oracle binary to disable Database Enterprise Edition options (10:01:17)...
..2017-12-18 10:02:21:[racrootlocal:Start:rac002] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac002_2017-12-18_10-02-21-660060386.log for the output of root script
2017-12-18 10:02:22:[racrootlocal:Done :rac002] Running root.sh on RAC Home
2017-12-18 10:02:22:[racrootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:02:22:[racclonelocal:Done :rac002] Cloning 12cR2 RAC Home
2017-12-18 10:02:22:[racclonelocal:Time :rac002] Completed successfully in 576 seconds (0h:09m:36s)

INFO (node:rac001): All racclonelocal operations completed on all (2) node(s) at: 10:02:24
2017-12-18 10:02:24:[racclone:Done :rac001] Cloning 12cR2 RAC Home on all nodes
2017-12-18 10:02:24:[racclone:Time :rac001] Completed successfully in 583 seconds (0h:09m:43s)

INFO (node:rac002): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:28:[rmsshrootlocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:31:[rmsshrootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 10:02:31:[rmsshroot:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Current cluster state (10:02:31)...

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/olsnodes -n -s -t
rac001  1       Active  Hub     Unpinned
rac002  2       Active  Hub     Unpinned
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
Oracle Clusterware version on node [rac001] is [12.2.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
CRS-41008: Cluster class is 'Standalone Cluster'
ASM Flex mode enabled: ASM instance count: 3
ASM is running on rac001,rac002

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac001                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac001                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

INFO (node:rac001): For an explanation on resources in OFFLINE state, see Note:1068835.1
2017-12-18 10:02:40:[clusterstate:Time :rac001] Completed successfully in 9 seconds (0h:00m:09s)
2017-12-18 10:02:40:[buildcluster:Done :rac001] Building 12cR2 RAC Cluster
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)

INFO (node:rac001): This entire build was logged in logfile: /u01/racovm/buildcluster3.log

 

We are now going to multiplex both networks, the cluster heartbeat and ASM, because the current configuration is not highly available. We also only need 2 Flex ASM instances for our 2-node RAC:

[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl modify asm -count 2

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is disabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

/u01/app/12.2.0/grid/bin/srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.58.0
/u01/app/12.2.0/grid/bin/srvctl start listener -l ASMNET2LSNR_ASM


[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: ASMNET2LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.58.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:



[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect
eth3  192.168.58.0   global  asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth2/192.168.3.0:cluster_interconnect,asm
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth3/192.168.58.0:cluster_interconnect,asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect,asm
eth3  192.168.58.0   global  cluster_interconnect,asm
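
As a quick sanity check (a sketch only, no output shown), we can list the network interfaces the clusterware sees and verify the stack health on all nodes:

[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg iflist -p -n
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/crsctl check cluster -all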

 

Let's check the overall cluster state:

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac002                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac002                   STABLE
--------------------------------------------------------------------------------

The cluster is now up, functional and resilient to network failure. We could have chosen to create a database as part of the deployment process, because the “Deploy Cluster Tool” allows us to do that. Nevertheless, in this demonstration, I chose to run the database creation manually on top of this deployment.
So we add a new database to the cluster:

 

[oracle@rac001 ~]$ dbca -silent -ignorePreReqs \
> -createDatabase \
> -gdbName app01 \
> -nodelist rac001,rac002 \
> -templateName General_Purpose.dbc \
> -characterSet AL32UTF8 \
> -createAsContainerDatabase false \
> -databaseConfigType RAC \
> -databaseType MULTIPURPOSE \
> -dvConfiguration false \
> -emConfiguration NONE \
> -enableArchive true \
> -memoryMgmtType AUTO_SGA \
> -memoryPercentage 75 \
> -nationalCharacterSet AL16UTF16 \
> -adminManaged \
> -storageType ASM \
> -diskGroupName DGDATA \
> -recoveryGroupName DGFRA \
> -sysPassword 0rAcle-Sys \
> -systemPassword 0rAcle-System \
> -useOMF true

Copying database files
1% complete
2% complete
15% complete
27% complete
Creating and starting Oracle instance
29% complete
32% complete
36% complete
40% complete
41% complete
43% complete
45% complete
Creating cluster database views
47% complete
63% complete
Completing Database Creation
64% complete
65% complete
68% complete
71% complete
72% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/app01/app01.log" for further details.

[oracle@rac001 ~]$ export ORACLE_SID=app011
[oracle@rac001 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 20 08:54:03 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from gv$instance ;

HOST_NAME
----------------------------------------------------------------
rac001
rac002

 

We now have a RAC 12.2 Standard Edition cluster ready to run our critical applications, with the latest patch level for both the OS and Oracle, and including all best practices and requirements.

So with this post we have a demonstration of how a good underlying OVM infrastructure makes your life simpler when deploying various kinds of Oracle database infrastructure. This automation can also easily be done for other technologies such as PostgreSQL or “Big Data” stacks like Hortonworks, or any application.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

 

The article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02) appeared first on Blog dbi services.

Oracle docker image from docker store

Fri, 2017-12-15 00:54

Did you notice that each time you want to download the Oracle binaries from the OTN website (for non-production usage) you have to click on “Accept License Agreement”? This is because you have to agree with the OTN license agreement, and that makes it hard to distribute an automated way to build an environment.
The only exception I have seen was the sealed envelopes provided for the RACattack:

Software is provided for @ludodba #racattack thanks to @OracleRACpm pic.twitter.com/1GQTVVn2BS

— Franck Pachot (@FranckPachot) September 17, 2015

In both cases, there must be a physical action involved to agree legally with the license terms.

Docker

There is now a new possibility where you click on ‘Agree’ only once, in the Docker Store, and then can download (aka ‘pull’) a container containing the binary distribution. You just go to: https://store.docker.com/images/oracle-database-enterprise-edition, login (you can create one in two minutes with a username, e-mail address, and password) and accept the license agreement:

Once this is done, you will be able to pull the Oracle containers from the command line, after a ‘docker login’.

It may not be easy to use Docker on your laptop, especially if you are on Windows and don’t want to enable Hyper-V. Here is how I run it on a VirtualBox VM running Oracle Enterprise Linux. You may wonder what’s the point of running containers within a VM, but I think that you don’t have the choice here. The docker processes run within the host, which means that you need an OS that is supported (and Oracle Enterprise Linux is the best fit to run Oracle Database). It also means that you need to adapt the kernel parameters for it, shm, limits, have the oracle user, etc. Better to do that in a VM dedicated to Oracle Database.
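
For reference, here is a minimal sketch of that host preparation. The values below are the usual Oracle installation prerequisites and are shown for illustration only — check the documentation for your release, and note that shmmax/shmall depend on your RAM:

# /etc/sysctl.d/99-oracle.conf -- illustrative values, not tuned recommendations
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# /etc/security/limits.d/99-oracle.conf -- illustrative as well
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384

[root@localhost ~]# sysctl --system    # reload all sysctl configuration files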

Then you may wonder what’s the point of running in a container, given that you have to do all those installation prerequisites anyway, and installing Oracle is just two more commands (runInstaller and dbca). Well, it seems that the main reason is that it’s cool. In my opinion, any database user (from developer to administrator) should have installed an Oracle Database at least once with the DBA GUI, because it is a good way to understand what a database is, a listener, a datafile, the character set,… But let’s be cool and pull it instead of installing it.

Docker evolves quickly, so I remove old releases just in case:

[root@localhost oracle]# yum -y remove docker-ce docker docker-common docker-selinux docker-engine
Loaded plugins: ulninfo
No Match for argument: docker-ce
No Match for argument: docker
No Match for argument: docker-common
No Match for argument: docker-selinux
No Match for argument: docker-engine
No Packages marked for removal

I’m on the latest OEL7:

[oracle@localhost ~]$ cat /etc/oracle-release
Oracle Linux Server release 7.4
 
[root@localhost oracle]# yum upgrade
Loaded plugins: ulninfo
No packages marked for update
 
[oracle@localhost ~]$ uname -a
Linux localhost.localdomain 4.1.12-103.10.1.el7uek.x86_64 #2 SMP Tue Dec 5 15:42:37 PST 2017 x86_64 x86_64 x86_64 GNU/Linux

I don’t want docker to fill my / filesystem, and those images with an Oracle Database are big, so I create a link from /var/lib/docker to /u01:

[root@localhost oracle]# mkdir /u01/docker
[root@localhost oracle]# ln -s /u01/docker /var/lib/docker
[root@localhost oracle]# ls -ld /var/lib/docker
lrwxrwxrwx. 1 root root 11 Dec 10 15:48 /var/lib/docker -> /u01/docker
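
An alternative to the symlink, once Docker is installed (see below), is to point Docker to another data directory in its configuration file — a sketch, assuming a Docker CE release recent enough to understand the data-root key (older releases used graph instead):

[root@localhost oracle]# mkdir -p /etc/docker
[root@localhost oracle]# cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/u01/docker"
}
EOF
[root@localhost oracle]# systemctl restart docker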

Installing Docker (Community Edition):

[root@localhost oracle]# yum -y install docker-ce
Loaded plugins: ulninfo
docker-ce-stable | 2.9 kB 00:00:00
ol7_UEKR4 | 1.2 kB 00:00:00
ol7_addons | 1.2 kB 00:00:00
ol7_latest | 1.4 kB 00:00:00
docker-ce-stable/x86_64/primary_db | 10 kB 00:00:00
(1/7): ol7_UEKR4/x86_64/updateinfo | 135 kB 00:00:00
(2/7): ol7_addons/x86_64/updateinfo | 40 kB 00:00:00
(3/7): ol7_addons/x86_64/primary | 78 kB 00:00:00
(4/7): ol7_latest/x86_64/group | 681 kB 00:00:00
(5/7): ol7_latest/x86_64/updateinfo | 1.6 MB 00:00:02
(6/7): ol7_UEKR4/x86_64/primary | 25 MB 00:00:25
(7/7): ol7_latest/x86_64/primary | 31 MB 00:00:26
ol7_UEKR4 506/506
ol7_addons 251/251
ol7_latest 23517/23517
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:17.09.1.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-17.09.1.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.21-1.el7 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
docker-ce x86_64 17.09.1.ce-1.el7.centos docker-ce-stable 21 M
Installing for dependencies:
container-selinux noarch 2:2.21-1.el7 ol7_addons 28 k
 
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
 
Total download size: 21 M
Installed size: 76 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): container-selinux-2.21-1.el7.noarch.rpm | 28 kB 00:00:01
(2/2): docker-ce-17.09.1.ce-1.el7.centos.x86_64.rpm | 21 MB 00:00:07
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 2.7 MB/s | 21 MB 00:00:07
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.21-1.el7.noarch 1/2
Installing : docker-ce-17.09.1.ce-1.el7.centos.x86_64 2/2
Verifying : docker-ce-17.09.1.ce-1.el7.centos.x86_64 1/2
Verifying : 2:container-selinux-2.21-1.el7.noarch 2/2
 
Installed:
docker-ce.x86_64 0:17.09.1.ce-1.el7.centos
 
Dependency Installed:
container-selinux.noarch 2:2.21-1.el7
 
Complete!

Starting Docker:

[root@localhost oracle]# systemctl start docker

I have to login with my credentials. This is the way to connect with the agreement I accepted online:

[root@localhost oracle]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: franck.pachot
Password:
Login Succeeded

Then I pull the docker container provided by Oracle. Oracle software is quite large when including all features, so I choose the ‘slim’ one:

[root@localhost oracle]# docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1-slim: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for store/oracle/database-enterprise:12.2.0.1-slim
[root@localhost oracle]#

Here is the image:

[root@localhost oracle]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 3 months ago 2.08GB

To run a database, you just have to run the container. In order to connect to it, you need to forward port 1521:

[root@localhost oracle]# docker run -p 0.0.0.0:9001:1521 store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Sun Dec 10 19:09:14 UTC 2017
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log

Ok. This takes some time. The ORACLE_HOME is unzipped, then the database is created…

I’ll not describe further. Just go to the Setup Instructions in https://store.docker.com/images/oracle-database-enterprise-edition where everything is clearly explained.
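
As an illustration of the port forwarding above, a client connection from the host would look like this — placeholders only, the actual SYS password and service name are documented in those Setup Instructions:

sqlplus sys/<password>@//localhost:9001/<service_name> as sysdba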

In my opinion, it is good to try it and think about it. Docker was created to containerize an application with its process(es) and memory. A database is a different beast. The database is persistent, so you should store it in an external volume, because you don’t want to create a new empty database each time you start it. There are also the configuration files, which should be persistent: should they belong to the container or be external? And the logs under ORACLE_BASE/diag: do you want to keep them? Purge them? Or just let the image grow, which can happen very quickly if you fill the Recovery Area. Finally, do you want to run a container inside a Virtual Machine, this container running Oracle Database 12c, which is a Container Database (CDB), itself containing containers (PDBs)? Personally, I’m very skeptical about the usage of docker for an Oracle Database. But I also gave it an early try some years ago (read here). And you may see things differently in your context. The very good point is that Oracle now provides an easy way to test whether it helps you to run a database quickly or not, with a way to get the binaries without an ‘accept’ click for each download.
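
To illustrate the external volume point, here is a sketch of how I would start the container with the database files kept on the host. The /ORCL mount point is an assumption taken from the image's Setup Instructions at the time of writing — verify it there:

# hypothetical example: persist the database outside of the container
docker run -d --name oracle-ee -p 9001:1521 \
  -v /u01/oradata:/ORCL \
  store/oracle/database-enterprise:12.2.0.1-slim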

 

The article Oracle docker image from docker store appeared first on Blog dbi services.

Does pg_upgrade in check mode raise a failure when the old cluster is running?

Tue, 2017-12-12 14:31

Today I had the pleasure of having Bruce Momjian in my session about PostgreSQL Upgrade Best Practices at the IT Tage 2017 in Frankfurt. While browsing through the various options you have for upgrading, there was one slide where I claimed that the old cluster needs to be down before you run pg_upgrade in check mode, as you will hit a (non-critical) failure message otherwise. Let's see if that really is the case or if I did something wrong…

To start with, let's initialize a new 9.6.2 cluster:

postgres@pgbox:/home/postgres/ [PG962] initdb --version
initdb (PostgreSQL) 9.6.2 dbi services build
postgres@pgbox:/home/postgres/ [PG962] initdb -D /tmp/aaa
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/aaa ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/aaa -l logfile start

Start that:

postgres@pgbox:/home/postgres/ [PG962] pg_ctl -D /tmp/aaa -l logfile start
postgres@pgbox:/home/postgres/ [PG962] psql -c "select version()" postgres
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 0.861 ms

To be able to upgrade we'll need a new cluster, so:

postgres@pgbox:/home/postgres/ [PG10] initdb --version
initdb (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] initdb -D /tmp/bbb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/bbb ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/bbb -l logfile start

We’ll not start that one but will just run pg_upgrade in check mode from the new binaries:

postgres@pgbox:/home/postgres/ [PG10] pg_upgrade --version
pg_upgrade (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] export PGDATAOLD=/tmp/aaa
postgres@pgbox:/home/postgres/ [PG10] export PGDATANEW=/tmp/bbb
postgres@pgbox:/home/postgres/ [PG10] export PGBINOLD=/u01/app/postgres/product/96/db_2/bin/
postgres@pgbox:/home/postgres/ [PG10] export PGBINNEW=/u01/app/postgres/product/10/db_0/bin/
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
...

… and here we go. From the log:

postgres@pgbox:/home/postgres/ [PG10] cat pg_upgrade_server.log

-----------------------------------------------------------------
  pg_upgrade run on Tue Dec 12 21:23:43 2017
-----------------------------------------------------------------

command: "/u01/app/postgres/product/96/db_2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/tmp/aaa" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700" start >> "pg_upgrade_server.log" 2>&1
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 2194) running in data directory "/tmp/aaa"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.
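
For completeness, stopping the old cluster before running the check is what makes it succeed — a quick sketch (with the environment variables still set as above):

postgres@pgbox:/home/postgres/ [PG10] /u01/app/postgres/product/96/db_2/bin/pg_ctl -D /tmp/aaa stop
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

This time the check should complete with “*Clusters are compatible*”.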

So, @Bruce: Something to improve :)
Again: It was a pleasure to have you there and I hope we’ll meet again at one of the conferences in 2018.

 

The article Does pg_upgrade in check mode raise a failure when the old cluster is running? appeared first on Blog dbi services.

#UKOUG_TECH17

Fri, 2017-12-08 15:40
Award

I received an award for an article I wrote last year: the Most Read Oracle Scene Article in 2016 – Technology. I like writing for Oracle Scene magazine. It is rare today to have a magazine available both in a paper version and publicly on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them and thanks for the award. Seeing that what I write helps is the motivation to keep doing it.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with the optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another one about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

Two interesting features in 12c: 12c Online Statistics Gathering & Session Private Statistics. A big change that happened in 12.2.0.2 concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when the ODA Lite came with NVMe SSDs: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.

Presentations

The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may make a joke on this, but I’m talking about the room] where I look like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came 1 year ago when preparing a 5-minute talk for the Oracle Open World 2016 ACED talks (blog post about this), and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

My second presentation was more developer focused, exposing all the join methods that can magically construct your query result in record time or, when a bad join method is chosen, leave your 2-second query still running after one hour. I explained the join methods in 3 ways: theory with .ppt animations, execution plans with dbms_xplan, and live, when running queries with SQL Monitor, showing the different operations and A-Rows increasing in real time.

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my little experience with DBaaS. Everybody talks about the Cloud, and it is good to talk about the problems encountered and how to deal with them.

Round tables

Despite the tough competition from good sessions, I also attended a few round tables. Those conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think that they care only about their biggest customers, but that is not accurate. There are several product managers who really listen to customers. A great discussion was about something that has slowly been changing for a few years: the documentation, which was really precise in the past – explaining the When, Why, and How – is now more vague.

Community

Geeking at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…


 

The article #UKOUG_TECH17 appeared first on Blog dbi services.

SQL Server Tips: an orphan user owns a database role

Fri, 2017-12-08 02:15

A few days ago, I conducted an audit to detect all orphan Windows accounts in a database and I was surprised to get an error during the DROP USER query.

 

The first step is to find all orphan Windows accounts in the database:

USE [dbi_database]
GO

/*Step 1: Search for orphan users */
SELECT * FROM sys.database_principals a
LEFT OUTER JOIN sys.server_principals b ON a.sid = b.sid
WHERE b.sid IS NULL
AND   a.type IN ('U', 'G')
AND   a.principal_id > 4

 

I found the user called “dbi\orphan_user” and ran the query to drop it:

/*Drop the orphan user*/
DROP USER [dbi\orphan_user]
GO


But when running it, I received the error message:

Msg 15421, Level 16, State 1, Line 4
The database principal owns a database role and cannot be dropped.

 

This user is the owner of database roles…

Be careful, it is not this error message:

Msg 15138, Level 16, State 1, Line 4
The database principal owns a schema in the database, and cannot be dropped.

In this case, the user owns a schema.

Do not confuse these two error messages:

  • Msg 15421 is for a database role
  • Msg 15138 is for a schema (a check for this case is sketched below)
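
A sketch for checking that schema case the same way, joining sys.schemas on its principal_id:

/*Sketch: list the schemas owned by the orphan user (the Msg 15138 case)*/
SELECT s.name AS schema_name
FROM sys.schemas AS s
JOIN sys.database_principals AS dp ON s.principal_id = dp.principal_id
WHERE dp.name = 'dbi\orphan_user';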

 

The goal is to find all database roles owned by the user dbi\orphan_user:

/*Search database roles owned by this orphan user*/
SELECT dp2.name, dp1.name FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

As you can see in my SELECT, I use the view sys.database_principals twice, to cross-check the owning_principal_id against the principal_id.


After that, I changed the owner of this role to the proper one (by default dbo).

/*Change the owner of the database role*/
ALTER AUTHORIZATION ON ROLE::<database role> TO dbo;


And I dropped the orphan user without problems…

/*Drop the orphan user*/
DROP USER [dbi\orphan_user]
GO


To finish, I give you a Santa Claus gift: I also rewrote the query to generate the ALTER AUTHORIZATION statement directly in the SELECT. You just have to copy/paste and execute it:

SELECT dp2.name, dp1.name, 'ALTER AUTHORIZATION ON ROLE::' + dp1.name + ' TO dbo;' AS query
FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

 

Et voila! 8-)

 

The article SQL Server Tips: an orphan user owns a database role appeared first on Blog dbi services.

How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?

Wed, 2017-12-06 10:17

DISCLAIMER: I know other solutions exist to do this

Pre-requisites:
– a virtual machine (or not) with CentOS7 installed
– a free disk or partition

I use a VBox machine to which I added a new hard disk (I will create a 5 GiB partition on it)

We list the disks and partitions to check that our new hard disk has been added.

[root@deploy ~]$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   20G  0 disk
├─sda1                       8:1    0    1G  0 part /boot
└─sda2                       8:2    0   19G  0 part
  ├─cl-root                253:0    0   21G  0 lvm  /
  └─cl-swap                253:1    0    2G  0 lvm  [SWAP]
sdb                          8:16   0   10G  0 disk

Good, we can continue.

Let’s partition the disk using fdisk

[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now, we need to inform the kernel that the partition table has changed. To do that, either we reboot the server or we run partprobe

[root@deploy ~]$ partprobe /dev/sdb1
[root@deploy ~]$

We create a physical volume

[root@deploy ~]$ pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g       0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m

We create a volume group

[root@deploy ~]$ vgcreate  vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created

We check that the volume group was created properly

[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl

Here, I set the size of the logical volume in physical extents (PE). One PE represents 4.00 MiB

We create a logical volume on our volume group

[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.

We have a look at what our logical volume “lv_deploy” looks like:

[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
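
Note that the reported LV Size is simply the requested extent count: 1000 PE × 4.00 MiB = 4000 MiB, and 4000/1024 ≈ 3.91 GiB.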

Let’s create our file system on the new logical volume

[root@deploy ~]$ mkfs.xfs  /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We now create a new directory, “mysqldata” for example:

[root@deploy ~]$ mkdir /mysqldata

We add the new entry in /etc/fstab for our new logical volume:

[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy     /mysqldata      xfs     defaults      0 0" >> /etc/fstab

We mount it

[root@deploy ~]$ mount -a

We check that the filesystem is mounted properly:

[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata

We add some files to the /mysqldata directory (a for loop will help us)

[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero  of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

NOW the interesting part is coming, because we are going to reduce our /mysqldata filesystem.
But first, let's make a backup of our current /mysqldata FS

[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

Bad news! We cannot reduce an xfs partition directly, so we need:
– to back up our filesystem
– to umount the filesystem and delete the logical volume
– to re-create the logical volume with a smaller size and rebuild the xfs FS
– to restore our data

Backup the file system

[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS
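As a side note, the interactive label dialogs above can be avoided by passing the session and media labels directly on the command line (same paths as above, see xfsdump(8)):

# -L sets the session label, -M the media label
xfsdump -L test -M test -f /tmp/mysqldata.dump /mysqldata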

Then, we unmount the filesystem and delete the logical volume

[root@deploy ~]$ umount /mysqldata/

[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso

[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed

We recreate the logical volume with a smaller size (from 1000 PE down to 800 PE)

[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.

We build the XFS filesystem

[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We remount the filesystem

[root@deploy ~]$ mount -a
[root@deploy ~]$
[root@deploy ~]$
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata

We list the content of the /mysqldata directory

[root@deploy ~]$ ls -l /mysqldata
total 0

Let’s restore our data

[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS

Our data are back

[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

Hope this helps :-)

 

The article How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7? appeared first on Blog dbi services.

Naming of archivelog files with non existing top level archivelog directory

Wed, 2017-12-06 06:33

In Oracle 12.2, an archive log destination is accepted even if its top-level directory does not exist:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
 total 2267920
 drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

Now, the database accepts this non-existing archivelog destination:

SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this:

SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
 alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
 *
 ERROR at line 1:
 ORA-02097: parameter cannot be modified because specified value is invalid
 ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
 ORA-07286: sksagdi: cannot obtain device information.
 Linux-x86_64 Error: 2: No such file or directory

The log file format is set as follows:

SQL> show parameter log_archive_format;
NAME                                 TYPE        VALUE
 ------------------------------------ ----------- ------------------------------
 log_archive_format                   string      %t_%s_%r.dbf
 SQL>
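For reference, the place-holders in this format resolve as follows (a quick reminder of the documented behaviour):

# %t -> redo thread number
# %s -> log sequence number
# %r -> resetlogs ID
# e.g. 1_5_960106002.dbf = thread 1, sequence 5, resetlogs ID 960106002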

 

Now let's see what the archive log files look like in log_archive_dest_3:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
 -rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle just prepends the name of the non-existing top-level directory (“arch2”) to the archivelog filename.
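If a real sub-directory was intended, creating it before setting the destination avoids this surprising naming (a minimal sketch with the paths from the example above):

# create the missing top-level directory first, so the destination
# resolves to a real directory instead of a filename prefix
mkdir -p /u02/oradata/DMK/arch/arch2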

 

The article Naming of archivelog files with non existing top level archivelog directory appeared first on Blog dbi services.

No journal messages available before the last reboot of your CentOS/RHEL system?

Tue, 2017-12-05 02:20

As you have probably noticed, RedHat as well as CentOS switched to systemd with version 7 of their operating system release. This also means that instead of looking at /var/log/messages you are supposed to use journalctl to browse the messages of the operating system. One issue with that is that messages from before the last reboot of your system will not be available, which is probably not what you want.

Let's say I started my RedHat Linux system just now:

Last login: Tue Dec  5 09:12:34 2017 from 192.168.22.1
[root@rhel7 ~]$ uptime
 09:14:14 up 1 min,  1 user,  load average: 0.33, 0.15, 0.05
[root@rhel7 ~]$ date
Die Dez  5 09:14:15 CET 2017

Asking for any journal logs before that will not show anything:

[root@rhel7 ~]$ journalctl --help  | grep "\-\-since"
  -S --since=DATE          Show entries not older than the specified date
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"
-- Logs begin at Die 2017-12-05 09:13:07 CET, end at Die 2017-12-05 09:14:38 CET. --
Dez 05 09:13:07 rhel7.localdomain systemd-journal[86]: Runtime journal is using 6.2M (max allowed 49.6M, trying to 
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuset
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpu
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuacct

Nothing for yesterday, which is bad. The issue here is the default configuration:

[root@rhel7 ~]$ cat /etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

“Storage=auto” means that the journal will only be persistent if this directory exists (it does not in the default setup):

[root@rhel7 ~]$ ls /var/log/journal
ls: cannot access /var/log/journal: No such file or directory

As soon as this directory is created and the service is restarted, the journal becomes persistent and will survive a reboot:

[root@rhel7 ~]$ mkdir /var/log/journal
[root@rhel7 ~]$ systemctl restart systemd-journald.service
[root@rhel7 ~]$ ls -al /var/log/journal/
total 4
drwxr-xr-x.  3 root root   46  5. Dez 09:15 .
drwxr-xr-x. 10 root root 4096  5. Dez 09:15 ..
drwxr-xr-x.  2 root root   28  5. Dez 09:15 a473db3bada14e478442d99da55345e0
[root@rhel7 ~]$ ls -al /var/log/journal/a473db3bada14e478442d99da55345e0/
total 8192
drwxr-xr-x. 2 root root      28  5. Dez 09:15 .
drwxr-xr-x. 3 root root      46  5. Dez 09:15 ..
-rw-r-----. 1 root root 8388608  5. Dez 09:15 system.journal

Of course, you should also have a look at the other parameters that control the size and rotation of the journal. An alternative way of making the journal persistent is sketched below.
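Instead of relying on the existence of /var/log/journal, persistence can also be made explicit in journald.conf (Storage= is documented in journald.conf(5); journald then creates the directory itself):

# switch Storage from 'auto' to 'persistent' and restart the service
sed -i 's/^#Storage=auto/Storage=persistent/' /etc/systemd/journald.conf
systemctl restart systemd-journald.service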

 

The article No journal messages available before the last reboot of your CentOS/RHEL system? appeared first on Blog dbi services.

DOAG 2017: Automation in progress

Sun, 2017-12-03 05:28

DOAG2017_dbi

A week ago, I had the chance to be a speaker at the DOAG Konferenz 2017 in Nürnberg. It's sometimes hard to find time to attend conferences because the end of the year is quite busy at customers. But it's also important, because it's time for sharing: I can share what I'm working on about automation and patching, and I can also see how other people are doing it.

And it was great for me this year: I started to work with Ansible to automate some repetitive tasks, and I saw a lot of interesting presentations, either about Ansible itself or where Ansible was used in the demo.

The session “Getting Started with Ansible and Oracle” by Ron Ekins from Pure Storage showed a very interesting use case demonstrating the strength of Ansible: a live demo where he cloned one production database to 6 different demo environments for the developers. Doing it this way, with a playbook, we are sure that the 6 environments are built without human errors, because Ansible plays the same tasks across all nodes.

The previous day, I attended the session “Practical database administration automation with Ansible” by Mikael Sandström and Ilmar Kerm from Kindred Group. They presented some modules they wrote to interact with the database using Ansible. The modules can be used to validate some parameters, create users, etc. I had found the code while I was working on my project but did not dive into the details. The code is available on GitHub and I will definitely have a closer look.

One could think that Ansible is not designed to manage databases, but using modules you can extend Ansible to do a lot of things.

Next week, I will also have the chance to be at Tech17, organised by the UKOUG. Let's hope I can continue to learn and share!
Speaker_UKOUG2017

 

The article DOAG 2017: Automation in progress appeared first on Blog dbi services.

Alfresco – Unable to move/rename a file/folder using AOS

Sat, 2017-12-02 05:00

When playing with the AOS implementation, a colleague of mine faced an interesting issue where he just wasn't able to move or rename any files or folders. Creation and deletion were working properly, but he was unable to move or rename anything when using a Network Location or a Network Drive. This environment was freshly installed with a front-end (Apache HTTPD) set up in SSL, so we worked together to find out what the issue was. All workstations were impacted, no matter which OS was used (Windows 7, 8, 10, and so on).

The Network Location or Drive were mounted using the following:

  • URL Style => https://alfresco_server_01.domain.com/alfresco/aos/
  • WebDAV Style => \\alfresco_server_01.domain.com@SSL\DavWWWRoot\alfresco\aos\

In all cases, the workstations were able to connect and create nodes in the Alfresco Server (through AOS), which meant that the parameters/configuration on the Alfresco and Apache HTTPD sides were pretty much OK. Nevertheless, we still checked them to be sure:

alfresco@alfresco_server_01:~$ grep -E "^alfresco\.|^share\." $CATALINA_HOME/shared/classes/alfresco-global.properties
alfresco.context=alfresco
alfresco.host=alfresco_server_01.domain.com
alfresco.port=443
alfresco.protocol=https
share.context=share
share.host=alfresco_server_01.domain.com
share.port=443
share.protocol=https
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$ grep aos $CATALINA_HOME/shared/classes/alfresco-global.properties
aos.baseUrlOverwrite=https://alfresco_server_01.domain.com:443/alfresco/aos
#aos.sitePathOverwrite=/alfresco/aos
alfresco@alfresco_server_01:~$

 

For me, the parameters above seemed correct at first sight. The Apache HTTPD being in SSL on port 443 and redirecting everything to the Tomcat using mod_jk, it is normal for the alfresco and share parameters to use https and port 443 (even if the Tomcat itself is actually not in SSL) because these values should reflect the front-end. The “aos.baseUrlOverwrite” parameter is normally only needed in case you have a proxy server in front of your Alfresco and this proxy isn't an Apache HTTPD. Since my colleague was using Apache, this parameter wasn't really needed, but having it set to the correct value shouldn't hurt either. The correct value for this parameter is also the front-end URL, and that looked like the current value… or so it seemed.

With the above parameters, we were able to create any kind of file or folder in our Network Locations/Drives. I took some screenshots for this blog and used a simple folder to demonstrate the issue and the solution. Creating a folder with the default name works properly:

RenameFolder1

At this point, I had a new folder in my Alfresco Server which I could clearly see and manage via the Share client. So renaming it from Share wasn’t a problem but doing the same thing through AOS (Network Location or Drive) resulted in this:

RenameFolder2

At the same time, the following logs were generated on Alfresco side:

2017-11-14 08:34:50,342  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-7] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,442  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-8] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,544  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-9] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,647  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-1] doMove: BAD REQUEST: Destination malformed

 

With the default log level, that's not particularly helpful… From the Apache logs, it is pretty easy to see when the folder “New folder” was created (MKCOL @ 08:34:21) as well as when I tried to rename it (MOVE @ 08:34:50):

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "MKCOL /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPPATCH /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 971 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder/desktop.ini HTTP/1.1" 404 588 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5473 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1803 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1606 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1909 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

As you can see above, the renaming failed and Apache responded with a 400 error code. The error “doMove: BAD REQUEST” on the Alfresco side reminded me of this JIRA, but the outcome of that ticket was that the parameter “aos.baseUrlOverwrite” was wrongly set… In our case, its value was “https://alfresco_server_01.domain.com:443/alfresco/aos” (as shown above) and this seemed to be the correct URL… But in fact it wasn't.

Just to avoid any uncertainty, we tried to change the value to “https://alfresco_server_01.domain.com/alfresco/aos” (so just removing the port :443, which technically should be optional) and then restarted Alfresco… After doing that, the rename was actually working:

RenameFolder3

So magically the issue was gone… The associated Apache HTTPD logs showed this time a 201 return code:

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5347 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1799 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1780 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites HTTP/1.1" 207 1736 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco HTTP/1.1" 302 224 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/ HTTP/1.1" 207 1858 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND / HTTP/1.1" 302 572 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /share/page/repository HTTP/1.1" 501 1490 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:06 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:07 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/Renamed%202 HTTP/1.1" 207 1797 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

Conclusion for this blog? Take care when you want to set the “aos.baseUrlOverwrite”: do not add the port if it is not really necessary! Another solution to this issue would be to just comment out the “aos.baseUrlOverwrite” parameter since it is not needed when using Apache HTTPD. I personally never use this parameter (keeping it commented) because I'm always using Apache :). Both options are sketched below.
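For reference, a minimal sketch of both options, assuming the same file layout as shown earlier in this post:

# option 1: keep the override but drop the explicit :443
sed -i 's|aos.baseUrlOverwrite=https://alfresco_server_01.domain.com:443/alfresco/aos|aos.baseUrlOverwrite=https://alfresco_server_01.domain.com/alfresco/aos|' \
    $CATALINA_HOME/shared/classes/alfresco-global.properties

# option 2: comment the parameter out entirely (not needed behind an Apache HTTPD front-end)
sed -i 's|^aos.baseUrlOverwrite|#aos.baseUrlOverwrite|' \
    $CATALINA_HOME/shared/classes/alfresco-global.properties

# in both cases, restart Alfresco afterwards for the change to take effect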

 

 

The article Alfresco – Unable to move/rename a file/folder using AOS appeared first on Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue

Sat, 2017-12-02 03:30

Earlier today, I posted another blog with almost the same title, but the similarities stop here. The thing is that when you access your application, there aren't many possible error codes, so if there is an issue, it's often the same generic message that is displayed in the browser.

To give some background on the issue that I will present below: we usually set up the WebLogic Server, enable the SSO on the WLS level, and so on. Then different Application Teams prepare their specific application war files and deploy them into our WLS. Since all teams are using standard procedures and IQs to deploy all that, the applications work properly in SSO 99% of the time, but human errors can happen, especially in the dev environments where there is less verification…

So after deploying their customized war files, an Application Team tried to access the application using the SSO URL but then got a ‘403 – Forbidden’ error message, crap. As we are responsible for the whole platform, they usually come directly to us so that we can check what is wrong. So as always: enable the debug logs, find out what the issue is, where a mistake was made and how to solve it. In this case (and contrary to the previous blog), the SAML2 response was correct and accepted by WebLogic, so the SSO process was already going one step further; this is why I will just skip the first part of the logs (as well as the LDAP authentication & retrieval of groups) and only show what happens afterwards (so no Atn but only Atz):

<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" ]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous]) -> false>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to NotApplicable because of Condition>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Deny>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: false>

 

As you can see at the end of the log, the access is denied (‘isAccessAllowed returning adjudicated: false’), and if you check the previous lines, you can see that this is actually because the function ‘string-at-least-one-member-of’ expects the user to have the role ‘consumer’ but here the user only has the role ‘Anonymous’. There is a mismatch and therefore the access is denied, which caused the ‘403 – Forbidden’ message. So where are those roles assigned? It is actually not on the IdP Partner or LDAP side but on the WebLogic side directly, or rather on the Application side to be more precise…

When working with SSO solutions, there are some additional configurations that you will need to incorporate into your application war file, like for example what the transport mode is for all communications, which URLs (/context) are not to be authenticated through SSO and which ones are, which roles the users need, and so on. This is all done (for WebLogic) inside the web.xml and weblogic.xml.

In this case, the ‘consumer’ role shown above is defined in the web.xml file inside the ‘<auth-constraint>’ (authorization constraint), so it basically says that this is the only role authorized to perform constrained requests. This is an example of the configuration that you can put in your web.xml (this one is an extract of what can be done for Documentum D2 with SAML2 SSO):

[weblogic@weblogic_server_01 D2]$ tail -50 WEB-INF/web.xml
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Public</web-resource-name>
      <description>Non-authenticated resources</description>
      <url-pattern>/help/en/*</url-pattern>
      <url-pattern>/resources/*</url-pattern>
	  ...
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
  </security-constraint>

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Private</web-resource-name>
      <description>Authenticated resources</description>
      <url-pattern>/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>Authorized role</description>
      <role-name>consumer</role-name>
    </auth-constraint>
    <user-data-constraint>
      <description>User Data</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>

  <security-role>
    <role-name>consumer</role-name>
  </security-role>

  ...
[weblogic@weblogic_server_01 D2]$

 

So the above configuration exactly matches what WebLogic requires and what is shown in the logs: the user must have the role ‘consumer’ to be able to access the Application. The only question left is why the users aren't assigned this role if they have been authenticated via the SAML2 SSO and LDAP, and that's where the issue lies in this case. In the web.xml, you can define what security should apply to your application, but for the assignment of the security roles, you should rather take a look at the weblogic.xml file. You can assign users using their principals (coming from the WLS Security Realm), but it is usually better to use groups instead to avoid managing users individually (more information there). So you already understood it: the issue in this case was simply that the Application Team configured the security roles in the web.xml file but forgot the assignments in the weblogic.xml, and therefore the issue was solved by simply adding the following lines to this file:

[weblogic@weblogic_server_01 D2]$ tail -5 WEB-INF/weblogic.xml
  <security-role-assignment>
      <role-name>consumer</role-name>
      <principal-name>users</principal-name>
  </security-role-assignment>
</weblogic-web-app>
[weblogic@weblogic_server_01 D2]$

 

With these four simple lines, all existing users from the LDAP (the one configured in our WebLogic) are automatically granted the role ‘consumer’ and are therefore allowed to access the application.
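Of course, after editing the WEB-INF/weblogic.xml, the war file has to be rebuilt and redeployed for the change to take effect. A minimal sketch (the paths are hypothetical; weblogic.Deployer will prompt for the password):

# update the weblogic.xml inside the existing war (run from the exploded folder)
cd /path/to/exploded/D2
jar -uf ../D2.war WEB-INF/weblogic.xml

# redeploy the application through the Admin Server
java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -user weblogic \
     -redeploy -name D2 -source ../D2.war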

From the WebLogic Administration Console, you can check the assignment of a security role using these steps:

  1. Login to the Admin Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Deployments > YourApp (click on the name) > Security > URL Patterns > Roles > YourRole (click on the name)

The list of roles can be seen here:

SecurityRole1

Then after clicking on the name of the role, you can see the conditions for the assignments:

SecurityRole2

To compare the logs before/after, this is what is printed to the logs after the correction:

<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous, consumer>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>

 

Access is allowed. :)

 

 

The article WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue appeared first on Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue

Sat, 2017-12-02 02:00

In a previous blog, I explained how it was possible to enable the SSO/Atn/Atz (SSO/Authentication/Authorization) debug logs in order to troubleshoot an issue. In this blog, I will show the logs generated by an issue that I had to deal with last month at one of our customers. This issue will probably not occur very often but it is a pretty funny one so I wanted to share it!

So the issue I will talk about in this blog happened on an environment that is configured with a SAML2 SSO. With a fully working SAML2 SSO, the WebLogic hosting the application is supposed to redirect the end-user to the IdP Partner (with a SAML2 request), which processes it and then redirects the end-user back to the WebLogic (with the SAML2 response), which processes the response and finally grants access to the Application. With this issue, both redirections were apparently happening properly, but then, for an unknown reason, the WebLogic Server blocked the access to the application with a “403 – Forbidden” message.

Obviously, the first thing I did was to enable the debug logs and then replicate the issue. These are the logs that I could see in the Managed Server log file:

<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <get SAMLResponse from http request:PBvbnNSJ1cXMlFtZzaXM6bmHhtbG5zm46NhwOlJlOnNhbWxHbWxIb2Fc3wP6dGM6
U0FiB4bWxuczp4NTAwPSJ1cm46b2FzNTDoyLjA6YXNzZXJ0aW9uIaXM6bmFtZXM6
U0FdG9jb2wiIHhtbG5zOmRzaWc9Imh0dHA6Ly93NTDoyLjA6cHJvd3cudzMub3Jn
aHR0cDoa5vcmcvMjAwMS9W5zdGFuY2vL3d3dy53MyYTUxTY2hlbWEtUiIERlc3Rp
MWNxM2FjNzI1ZDjYmIVhNDM1Zlzc3VlSW5zdGFudD0ijhlNjc3OTkiIEMjAxNy0x
LzINpZyMiIHhtwMDAvMDkveG1sZHbG5DovL3d3dy53My5vczOmVuYz0iaHR0cmcv
MMS8wNC94bWxjAwlbmWxuczpzYW1sPMjIiB4bSJ1aXM6bmFtZXM6dcm46b2FzGM6
dGdXRlOM6U0FNZXM6YXR0cmliTDoyLjAAiIHhtbG5zOnhzaT6cHJvZmlslg1MD0i
bmF0aW9JodHRwczovL5ldS5uPSub3Zhc3BoY2hicy1DIyMinRpcyzdDIyM5uZXQ6
ODA4NSMvcG9zdCI9zYW1sMi9zcC3SUhwRHRuN3I1WH9hY3gSUQ9ImlkLUhhTTFha
Z3hBiIEmVzcWW5URXhybHJlG9uc2VUbz0RVFGbWt1VkRaNC0iXzB4ZluUGM1Mjk2
MS0xNw6SXNzdWjo0OTFZlcnNpVyIEZvo1MloiIlQxMb249IjIuMCI+PHNhbWcm1h
...
LTExIgTDEyPSIyMDE3LFjQ525PcLTE2VETExLTOk2VDOjUyWimdGVym90TEyOjU0
OjUyWiI+PHNh8c2FtbDpBdWRpZW5bWw6QXVkabj4jWVuY2VSZXN0cmljdGlvZT5T
c3NGVkVHJhb3b3JkUHJvdG1sOkF1dGhuQ29udGV4VjdnNwb3J0PC9zYWdENsYXNz
QXV0aG5gQXV0TdGF0LTExLTE2VDEaG5JZW1lbnQSIyMDE3bnN0YW50PyOjQ5OjUy
aEV25PcVucnhPSIyMDE3LTExEZmZ2IiBkFmdGVyLTEJWTTZXNzaW9uTm90T2VDEz
WivUWuZGV4PSJpZC13UlVMWGRYOXd6xWzc2lvbklRThFZDJwRDdIgU2VR210OUc0
dWJXYSUQ8L3NhRE1PQ19XlfREVWTQU1MMl9FbnbWw6QXVkaWVuY2U+RpdHlfPC9z
YWRpb24+P1sOkF1ZGllbHJpY3mPHNhNlUmVzdC9zYW1sOkNvbmRpdGlvbnM+bWw6
YXNzUzpjbYW1YXNpczpuIuMlmVmPnVybjpvDphYxzp0YzpTQU1MOjGFzc2VzOlBh
OjAXh0Pj0OjUyWiI+PHNsOkF1dGhuQhxzYW129udGV4bWw6QXV0aG5Db250ZdENs
UmVnRlepBdXRobkNvbzYWmPjwvc2FtbDF1dGhuU1sOkHQ+PC93RhdGVtZW50Pjwv
c2F9zY25zZtbDpBcW1scDpSZXNwb3NlcnRpb24+PCT4=
>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <BASE64 decoded saml message:<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-11-12T12:23:42Z" 
NotOnOrAfter="2017-11-12T12:28:42Z"><saml:AudienceRestriction><saml:Audience>SAML2_Entity_ID_01</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement></saml:Assertion></samlp:Response>>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <<samlp:Response> is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: start assert SAML2 token>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: SAML2IdentityAsserter: tokenType is 'SAML2.Assertion.DOM'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: The assertion is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion conditions>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity - IdentityAssertionException>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <[Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <exception info
javax.security.auth.login.LoginException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).
        at com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(IdentityAssertionServiceImpl.java:89)
        at sun.reflect.GeneratedMethodAccessor1410.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.bea.common.security.internal.utils.Delegator$ProxyInvocationHandler.invoke(Delegator.java:64)
		...
>

 

I cut some of the strings above (like all signatures, the SSL certificates, and so on) because they are really too big and not really important. What is important above is the Java exception. Indeed, the Identity Assertion failed because of the following: ‘Assertion is not yet valid (NotBefore condition)’. This message might seem a little bit mystical, but it actually points you right to the issue: the ‘NotBefore’ condition is causing the validation to fail.

So why is that? Well, when you have a SAML2 SSO like I said above, you first have a request and then a response. For security reasons, there are some conditions that apply to them and that need to be fulfilled for the SSO to work. To understand that a little bit better, I took the decoded SAML2 response from the logs above (the ‘BASE64 decoded saml message’ entry) and reformatted it into XML so it is more readable:

<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
	<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
	<dsig:Signature>
		<dsig:SignedInfo>
			<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
			<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
			<dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-">
				<dsig:Transforms>
					<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
					<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				</dsig:Transforms>
				<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
				<dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue>
			</dsig:Reference>
		</dsig:SignedInfo>
		<dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue>
		<dsig:KeyInfo>
			<dsig:X509Data>
				<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
			</dsig:X509Data>
		</dsig:KeyInfo>
	</dsig:Signature>
	<samlp:Status>
		<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
	</samlp:Status>
	<saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
		<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
		<dsig:Signature>
			<dsig:SignedInfo>
				<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
				<dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi">
					<dsig:Transforms>
						<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
						<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
					</dsig:Transforms>
					<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
					<dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue>
				</dsig:Reference>
			</dsig:SignedInfo>
			<dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue>
			<dsig:KeyInfo>
				<dsig:X509Data>
					<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
				</dsig:X509Data>
			</dsig:KeyInfo>
		</dsig:Signature>
		<saml:Subject>
			<saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID>
			<saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
				<saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/>
			</saml:SubjectConfirmation>
		</saml:Subject>
		<saml:Conditions NotBefore="2017-11-12T12:23:42Z" NotOnOrAfter="2017-11-12T12:28:42Z">
			<saml:AudienceRestriction>
				<saml:Audience>SAML2_Entity_ID_01</saml:Audience>
			</saml:AudienceRestriction>
		</saml:Conditions>
		<saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z">
			<saml:AuthnContext>
				<saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
			</saml:AuthnContext>
		</saml:AuthnStatement>
	</saml:Assertion>
</samlp:Response>

 

As you can see in the XML, there are two conditions that apply to the SAML2 response:

  • the response must not be used before the ‘NotBefore’ timestamp (the time at which the IdP issued the response)
  • the response must not be used on or after the ‘NotOnOrAfter’ timestamp (the issue time + 5 minutes)

In this case, the NotBefore is set to ‘2017-11-12T12:23:42Z’, which is the current time of the IdP Partner Server. However, you can see in the logs that the WebLogic Server hosting the application is actually one second behind this time (Nov 12, 2017 12:23:41 PM UTC); therefore the NotBefore restriction applies and the WebLogic Server has no other choice than to return a ‘403 – Forbidden’ message, because the SAML2 response is NOT valid yet.

In this case, the NTP daemon (Network Time Protocol) on the IdP Partner Linux server was restarted and the time on this server was resynchronized, which solved the issue. Having a server living in the future can cause some interesting behaviors :).
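A quick way to spot such a clock drift is to compare the UTC time of both servers directly (hypothetical hostname below; any time-sync tooling will do for the fix):

# compare the clocks of the application server and of the IdP partner
date -u
ssh idp_partner_01 date -u

# if they drift apart, re-sync the faulty server against its NTP source, e.g.:
systemctl restart ntpd       # classic ntpd setup
# or: chronyc makestep       # when chrony is used instead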

 

 

The article WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue appeared first on Blog dbi services.

Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots

Thu, 2017-11-30 04:58

If you store your databases on the cluster filesystem ACFS, you may use the Perl script gDBClone provided on OTN to clone databases or create snapshot databases. It is an interesting approach to create clones of the Production DB in minutes, regardless of the production DB size. What you do is create a standby DB from your production DB on a separate cluster and use that standby DB as a Gold/Master copy for ACFS snapshots.

In a Production environment with Data Guard Broker we wanted to use that technique, but were confronted with an issue:

The Production DB had already a physical standby DB with the Data Guard Broker running. The protection mode was MaxAvailability, which means transport of the redo in sync mode. The master/gold copy to get the snapshots from should receive the redo data in async mode. How to achieve that?

Two parameters that are actually not very commonly used in a Broker configuration are


ExternalDestination1
ExternalDestination2

With those parameters (which are available from 11.2.0.4 and 12.1.0.2 onwards) you can actually send your redo to a destination in async mode. The parameters are documented as follows:

The ExternalDestination1 configuration property is used to specify a redo transport destination that can receive redo data from the current primary database. To set up transport of redo data to the specified destination, the broker uses the values specified for this parameter to define a LOG_ARCHIVE_DEST_n initialization parameter on the primary database. The broker also monitors the health of the transport to the specified destination.

After a role change, the broker automatically sets up a LOG_ARCHIVE_DEST_n initialization parameter on the new primary database to ship redo data to the specified destination.

I.e. you can set the parameter the same way as a LOG_ARCHIVE_DEST_n, but the following options are not allowed:

ALTERNATE
DELAY
LOCATION
MANDATORY
MAX_FAILURE
NET_TIMEOUT
SYNC
VALID_FOR

So let’s assume I created my DB GOLDCOP as a standby DB using the rman duplicate command


RMAN> duplicate target database for standby from active database dorecover nofilenamecheck;

or alternatively using


# ./gDBClone clone -sdbname PRIM -sdbscan scoda7 -tdbname GOLDCOP -tdbhome OraDb11g_home1 -dataacfs /cloudfs -standby

In the broker configuration I added the DB GOLDCOP as follows:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS
 
DGMGRL> edit configuration set property ExternalDestination1 = 'service=goldcop db_unique_name=GOLDCOP noaffirm async';
Property "externaldestination1" updated
DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS

Let’s check if I really do NOAFFIRM ASYNC redo transport on PRIM:

SQL> select DEST_NAME, DB_UNIQUE_NAME, AFFIRM, TRANSMIT_MODE from v$archive_dest where dest_id in (2,3);
 
DEST_NAME DB_UNIQUE_NAME AFF TRANSMIT_MOD
-------------------------------- ------------------------------ --- ------------
LOG_ARCHIVE_DEST_2 STBY YES PARALLELSYNC
LOG_ARCHIVE_DEST_3 GOLDCOP NO ASYNCHRONOUS

The external destination is not a database in the configuration:

DGMGRL> show database "GOLDCOP";
Object "GOLDCOP" was not found

But the transport to this destination is monitored by the Broker. That is, when shutting down the DB GOLDCOP I get an error:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
Error: ORA-16778: redo transport error for one or more databases
 
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
ERROR
 
DGMGRL> show instance "PRIM";
 
Instance 'PRIM' of database 'PRIM'
 
Instance Error(s):
ORA-16737: the redo transport service for standby database "GOLDCOP" has an error
 
Instance Status:
ERROR

As the external destination 1 is not "a database" in the Broker configuration, it actually does not matter whether the Broker is started (dg_broker_start=TRUE) on the external destination GOLDCOP or not.
To start applying redo on the external destination, you have to start managed recovery as you would without a broker configuration:

alter database recover managed standby database using current logfile disconnect from session;

And real-time redo apply is happening on GOLDCOP:

SQL> select name,value
2 from v$dataguard_stats
3 where name in ('apply lag','transport lag');
 
NAME VALUE
-------------------- --------------------
transport lag +00 00:00:00
apply lag +00 00:00:00
 
SQL>
SQL> select inst_id,process,pid,status,thread#,sequence#, block#
2 from gv$managed_standby
3 where process like 'MRP%';
 
INST_ID PROCESS PID STATUS THREAD# SEQUENCE# BLOCK#
---------- --------- ---------- ------------ ---------- ---------- ----------
1 MRP0 5155 APPLYING_LOG 1 50 420

To make the external destination self-managing I set the archivelog deletion policy on GOLDCOP to

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

in RMAN, so that applied archived logs become reclaimable automatically in the fast recovery area. In addition I set

fal_server='PRIM'

on GOLDCOP to ensure that archive gaps can be resolved.

The pros of the above configuration are:
– the GOLDCOP DB does not cause much overhead on my production DB (async redo transport).
– Decoupling my GOLDCOP DB from the primary (temporarily) is fast and easy (re-attaching is sketched below):
edit configuration set property ExternalDestination1 = '';

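To re-attach GOLDCOP later, the property is simply set again. A minimal sketch, reusing the same service definition as above:

DGMGRL> edit configuration set property ExternalDestination1 = 'service=goldcop db_unique_name=GOLDCOP noaffirm async';

Thanks to fal_server being set on GOLDCOP, archive gaps that accumulated while the destination was decoupled can then be resolved automatically.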
REMARK: Of course I also get the other advantages of the gDBClone approach:
– A production copy on a separate cluster which serves as a gold-copy to take snapshots from for testing or development purposes.
– Creating a snapshot database takes minutes regardless of the DB size.

Cons:
– I have to take care of starting managed standby database recovery on my GOLDCOP DB myself, i.e. the same as when running Data Guard without the Broker.

To create a snapshot DB I just do something like:


# ./gDBClone snap -sdbname GOLDCOP -tdbname PRODCOP1

Et voilà, a production copy in 2 minutes.

If PRODCOP1 is no longer needed I can delete it:


# ./gDBClone deldb -tdbname PRODCOP1

Besides the configuration property ExternalDestination1 there are other possibilities in 12c to run a standby DB as a master copy for snapshots without affecting the production system (e.g. the support of cascaded standby DBs in the Broker), but I still think that the external destination feature offers a good way to run a master copy.

How to rename an existing Fusion Middleware WebLogic Domain

Tue, 2017-11-28 23:54

Sometimes it happens that we need to rename an existing Fusion Middleware WebLogic domain. I was asked to do this on a Fusion Middleware Reports & Forms environment.
I took some time to check how this can be done and did some testing to confirm it works as expected. The difficulty is not the WebLogic domain itself, as a WebLogic domain can be created quickly; it is the time-consuming part of redoing the complete configuration (SSL, logging settings, etc.) and, on top of that, the system components.

I used pack and unpack to rename the FMW WebLogic Domain.

Let's say I wanted to rename a Fusion Middleware Forms & Reports WebLogic domain named fr_domain to fr_domain_new.

First I used pack to create the domain archive:

cd $MW_HOME/oracle_common/common/bin
./pack.sh -domain /u01/config/domains/fr_domain -template $HOME/fr_domain.jar -template_name full_fr_domain

Then, using unpack, I changed the domain directory path and thus the domain name:

./unpack.sh -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain.jar -user_name weblogic -password Welcome1 -server_start_mode prod -app_dir /u01/config/applications/fr_domain_new -java_home $JAVA_HOME

Of course the JAVA_HOME environment variable needs to be set before.

This simply worked, but I had to recreate the security files (boot.properties) for the Administration Server and the Managed Servers if needed, as well as those for the system components; see the sketch below for the boot.properties part.
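The boot.properties files can be recreated manually. A minimal sketch for the Administration Server (paths and credentials are examples; the values are encrypted by WebLogic on the next server start):

# recreate the boot identity file for the Administration Server
mkdir -p /u01/config/domains/fr_domain_new/servers/AdminServer/security
cat > /u01/config/domains/fr_domain_new/servers/AdminServer/security/boot.properties <<EOF
username=weblogic
password=Welcome1
EOF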

To create the security files for the system components, the Node Manager needs to be started:

export WEBLOGIC_DOMAIN_HOME=/u01/config/domains/fr_domain_new/
nohup ${WEBLOGIC_DOMAIN_HOME}/bin/startNodeManager.sh >> ${WEBLOGIC_DOMAIN_HOME}/nodemanager/nohup-NodeManager.out 2>&1 &

And then start the system components once with the storeUserConfig option, for example:

cd /u01/config/domains/fr_domain_new/bin 
./startComponent.sh ohs1 storeUserConfig
./startComponent.sh vm01_reportsServer storeUserConfig

This was for a simple WebLogic domain on a single machine. For clustered WebLogic domains installed on several hosts, pack and unpack need to be used again to dispatch the WebLogic Managed Servers to the targeted machines.

As an example, to create the archive file for the Managed Servers to be installed on remote machines:

$MW_HOME/oracle_common/common/bin/pack.sh -managed=true -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain_new.jar -template_name fr_domain_new
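On each remote machine the managed-server template is then unpacked into the same domain path. A minimal sketch (the paths are examples):

$MW_HOME/oracle_common/common/bin/unpack.sh -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain_new.jar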


impdp logtime=all metrics=y and 12cR2 parallel metadata

Tue, 2017-11-28 14:35

A quick post to show why you should always use LOGTIME=ALL METRICS=Y with Data Pump. Just look at this example showing a timestamp in front of each line and a message about each task completed by the workers:

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 9 89 TABLE objects in 5 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 10 74 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 11 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 12 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 13 34 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 14 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 15 108 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 16 90 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 17 82 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 18 40 TABLE objects in 3 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 19 97 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 20 53 TABLE objects in 3 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Here I was running an import with PARALLEL=20 and I can see exactly how many tables were processed by each worker. Note that it says 'TABLE' and not 'TABLE_DATA', which is the proof that 12cR2 can import metadata in parallel.

I see no reason not to use LOGTIME=ALL METRICS=Y all the time, and you will be happy to have this level of detail if something goes wrong. A sample invocation is sketched below.
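A minimal sketch of such an import (connect string, directory, dump file name and parallel degree are examples):

impdp system@//localhost/PDB1 full=y parallel=20 \
  directory=DATA_PUMP_DIR dumpfile=full%U.dmp \
  logtime=all metrics=y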


Dataguard: QUIZ on Snapshot Standby

Tue, 2017-11-28 07:51

In the Oracle documentation we can find this about the snapshot standby: "A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection benefits that are similar to those of a physical standby database."

The concept of a snapshot standby is that we can temporarily convert a physical standby into an updatable database for different purposes and then convert it back to a physical standby. During the time that the database is a snapshot standby, it can be used as a normal read-write database. Afterwards, flashback technology combined with the archived logs is used to convert the snapshot back to a physical standby.
In this blog I have tested some common tasks on a snapshot standby and I describe the results below.

Here is our configuration, Oracle 12.2 is used:
ORCL_SITE: Primary
ORCL_SITE1: Physical Standby
ORCL_SITE2: Physical Standby
ORCL_SITE3: Logical Standby


DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
ORCL_SITE3 - Logical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 42 seconds ago)
DGMGRL>

The first question we can ask is which type of standby can be converted to a snapshot standby.

1- Can we convert a logical standby to a snapshot standby
Let’s convert our logical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE3' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE3" to a Snapshot Standby database, please wait...
Error: ORA-16831: operation not allowed on this member
Failed.
Failed to convert database "ORCL_SITE3"
DGMGRL>

Answer: No, we cannot convert a logical standby to a snapshot standby.

2- Can we convert a physical standby to a snapshot standby
Let’s convert our physical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE2" to a Snapshot Standby database, please wait...
Database "ORCL_SITE2" converted successfully
DGMGRL>

And we can verify the new status of the database 'ORCL_SITE2':

DGMGRL> show database 'ORCL_SITE2';
Database - ORCL_SITE2
Role: SNAPSHOT STANDBY
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 1 minute 33 seconds (computed 1 second ago)
Instance(s):
ORCL
Database Status:
SUCCESS
DGMGRL>

Answer: Yes, we can convert a physical standby to a snapshot standby.

Now that the physical standby is converted to a snapshot standby, let's continue our quiz.

3- Can we open a snapshot standby in read-only mode
Let's shut down our snapshot standby and open it in read-only mode:

SQL> startup open read only;
ORACLE instance started.
Total System Global Area 943718400 bytes
Fixed Size 8627440 bytes
Variable Size 348130064 bytes
Database Buffers 583008256 bytes
Redo Buffers 3952640 bytes
Database mounted.
Database opened.

The status is now READ ONLY:

SQL> select db_unique_name,database_role,open_mode from v$database;
DB_UNIQUE_NAME DATABASE_ROLE OPEN_MODE
------------------------------ ---------------- --------------------
ORCL_SITE2 SNAPSHOT STANDBY READ ONLY

Answer: Yes, a snapshot standby can be opened in READ ONLY mode.

4- Can we create a tablespace on a snapshot standby
Connected to the snapshot standby, let's create a tablespace:

SQL> create tablespace mytab_snap datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' size 2M autoextend on maxsize 10M;
Tablespace created.

We can verify in the view dba_tablespaces:

SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
MYTAB_SNAP
TAB_TEST

Answer: Yes, we can see that the tablespace MYTAB_SNAP was created.

5- Can we drop a tablespace on a snapshot standby
Let's drop a tablespace that was created on the primary:
SQL> drop tablespace TAB_TEST including contents and datafiles;
drop tablespace TAB_TEST including contents and datafiles
*
ERROR at line 1:
ORA-38881: Cannot drop tablespace TAB_TEST on primary database due to
guaranteed restore points.

Answer: No, due to the guaranteed restore point we cannot drop a tablespace on a snapshot standby.
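The guaranteed restore point that the conversion created can be checked in v$restore_point; a quick sketch (the restore point name is generated by Oracle):

SQL> select name, guarantee_flashback_database from v$restore_point;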

6- Can we extend a datafile on a snapshot standby
We are going to consider two types of datafiles.
• One from tablespace MYTAB_SNAP created on the snapshot standby
• Another from tablespace TAB_TEST created on the primary

FILE_NAME TABLESPACE_NAME
-------------------------------------------------- ------------------------------
/u01/app/oracle/oradata/ORCL/users01.dbf USERS
/u01/app/oracle/oradata/ORCL/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/ORCL/system01.dbf SYSTEM
/u01/app/oracle/oradata/ORCL/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/ORCL/stab_test.dbf TAB_TEST
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf MYTAB_SNAP

Let's first extend the datafile created on the snapshot standby:

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 20M;
Database altered.

And then let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 20M;
Database altered.

We can verify the new sizes

SQL> select FILE_NAME,sum(BYTES) from dba_data_files group by FILE_NAME;
FILE_NAME SUM(BYTES)
-------------------------------------------------- ----------
/u01/app/oracle/oradata/ORCL/users01.dbf 5242880
/u01/app/oracle/oradata/ORCL/undotbs01.dbf 73400320
/u01/app/oracle/oradata/ORCL/system01.dbf 870318080
/u01/app/oracle/oradata/ORCL/sysaux01.dbf 587202560
/u01/app/oracle/oradata/ORCL/stab_test.dbf 20971520
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf 20971520
6 rows selected.

Answer: Yes, we can extend datafiles on a snapshot standby.

Just note that when the snapshot standby is converted back to a physical standby, the datafile is shrunk back to its previous size.
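Converting back is a single Broker command; a sketch using the same member name as above (the instance is restarted during the conversion, as also seen in the failover output below):

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO PHYSICAL STANDBY;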

7- Can we reduce a datafile on a snapshot standby
Let's now reduce the size of the datafile created on the snapshot standby:

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 5M;
Database altered.

And let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M;
alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M
*
ERROR at line 1:
ORA-38883: Cannot shrink data file /u01/app/oracle/oradata/ORCL/stab_test.dbf
on primary database due to guaranteed restore points.

Answer: Only partially; we can reduce the size only of datafiles that were created on the snapshot standby itself.

8- Can we do a switchover to a snapshot standby
As a snapshot standby is a converted physical standby, one may ask whether a switchover to it is possible.

DGMGRL> switchover to 'ORCL_SITE2';
Performing switchover NOW, please wait...
Error: ORA-16831: operation not allowed on this member
Failed.
Unable to switchover, primary database is still "ORCL_SITE"
DGMGRL>

Answer: No, we cannot do a switchover to a snapshot standby.

9- Can we do a failover to a snapshot standby
The same question can be asked about failover.

DGMGRL> connect sys/root@ORCL_SITE2
Connected to "ORCL_SITE2"
Connected as SYSDBA.
DGMGRL> failover to 'ORCL_SITE2';
Converting database "ORCL_SITE2" to a Physical Standby database, please wait...
Operation requires shut down of instance "ORCL" on database "ORCL_SITE2"
Shutting down instance "ORCL"...
Connected to "ORCL_SITE2"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "ORCL" on database "ORCL_SITE2"
Starting instance "ORCL"...
ORACLE instance started.
Database mounted.
Connected to "ORCL_SITE2"
Connected to "ORCL_SITE2"
Continuing to convert database "ORCL_SITE2" ...
Database "ORCL_SITE2" converted successfully
Performing failover NOW, please wait...
Failover succeeded, new primary is "ORCL_SITE2"
DGMGRL>

Answer: Yes, we can do a failover to a snapshot standby, but the failover takes longer than a failover to a physical standby. Indeed, Oracle has to:

• convert the snapshot standby back to a physical standby
• apply the archived logs to the physical standby
• and then do the failover

Conclusion: In this blog we tried to explain some behaviors of the snapshot standby. We hope this article may help.


Are statistics immediately available after creating a table or an index in PostgreSQL?

Tue, 2017-11-28 06:32

While giving the last PostgreSQL DBA Essentials workshop this question came up: when we create a table or an index, are the statistics available automatically? To be more precise: when we create and load a table in one step and create an index on that table afterwards, do we have the statistics available by default, or do we need to wait for autovacuum to kick in or analyze manually? Let's see…

First of all, let's disable autovacuum so it does not kick off analyze in the background:

postgres=# \! ps -ef | grep autov | grep -v grep
postgres  1641  1635  0 07:08 ?        00:00:00 postgres: MY_CLUSTER: autovacuum launcher process   
postgres=# alter system set autovacuum=off;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# \! ps -ef | grep autov | grep -v grep

Create and populate the table:

postgres=# \! cat a.sql
drop table if exists t;
create table t
as select a.*, md5(a::varchar) from generate_series(1,5000000) a;
postgres=# \i a.sql
psql:a.sql:1: NOTICE:  table "t" does not exist, skipping
DROP TABLE
SELECT 5000000

Create an index:

postgres=# create index i1 on t(a);
CREATE INDEX
postgres=# \d+ t
                                     Table "public.t"
 Column |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+---------+-----------+----------+---------+----------+--------------+-------------
 a      | integer |           |          |         | plain    |              | 
 md5    | text    |           |          |         | extended |              | 
Indexes:
    "i1" btree (a)

Do we have statistics already? Let's check:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No, at least not for the table. What about the index?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No. Let's analyze:

postgres=# analyze t;
ANALYZE
postgres=# analyze i1;
WARNING:  skipping "i1" --- cannot analyze non-tables or special system tables
ANALYZE

Apparently we cannot analyze an index directly. What do we see now?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

We do see statistics for the table but not for the index. The reason is that "analyze" works on tables, but not on indexes. For regular indexes there will be nothing in pg_statistic because that information would be redundant with the underlying table columns. But there will be statistics for function-based (expression) indexes:

postgres=# create index i2 on t(lower(a::text));
CREATE INDEX
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

postgres=# analyze t;
ANALYZE
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |       10 |          -1
(1 row)

So, with autovacuum off we do not get statistics unless we kick off a manual analyze (which is not a surprise). What happens when autovacuum is on?

postgres=# alter system set autovacuum=on;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
postgres=# \i a.sql
DROP TABLE
SELECT 5000000
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

Nope, same picture here. But a few seconds later:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

… the statistics are there. Conclusion: when you require current statistics directly after loading a table, you'd better kick off a manual analyze right away. Otherwise autovacuum will take care of it, but not immediately.
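To see whether autovacuum has already analyzed a table, pg_stat_user_tables can be queried; a small sketch:

postgres=# select relname, last_analyze, last_autoanalyze
           from pg_stat_user_tables
           where relname = 't';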

