Yann Neuhaus

dbi services technical blog

Documentum – Silent Install – Things to know, binaries & JMS installation

Fri, 2018-08-03 13:55

Documentum introduced silent installations for its software quite some time ago. The way to use them has changed a little over the years, but it seems they finally found their way. This blog will be the first of a series presenting how to work with silent installations on Documentum, because it is true that they are not really well documented and most probably not much used at the moment.

We are using this wherever possible for our customers and it is true that it really helps to avoid human errors and to install components more quickly. Be aware that this isn’t perfect! There are some parameters with typos and some parameters that are really not self-explanatory, so you will need some time to understand everything but, in the end, it is still helpful.

Using the silent installation is a first step but you will still need a lot of manual interventions to execute it, as well as to actually make your environment work. It only replaces the GUI installers, so everything you were doing around them is still needed (preparation of files/folders/environment, custom start/stop scripts, service setup, Java Method Server (JMS) configuration, Security Baselines, SSL setup, and so on). That’s why we also developed scripts or playbooks internally (Ansible for example) to perform everything around it AND use the Documentum silent installations. In this blog, and more generally in this series, I will only talk about the silent installations coming from Documentum.

Let’s start with the basics:

  1. Things you need to know
  2. Documentum Content Server installation (binaries & JMS)

 

1. Things you need to know
  • Each and every component installation needs its own properties file; it is used by the installer to know what to install and how to do it, and that’s all you need to provide.
  • As I mentioned above, there are some typos in a few parameters coming from the properties files, like “CONGINUE” instead of “CONTINUE”. These aren’t errors in my blogs, the parameters really are like that. All the properties files I’m showing here have been tested and validated in a lot of environments, including PROD ones in High Availability.
  • To know more about the silent installation, you can check the installation documentation. There isn’t much to read about it, but it still contains some potentially interesting information.
  • The Documentum documentation does NOT contain any description of the parameters you can/should use, that’s why I will try in each blog to describe them as much as possible.
  • You can potentially do several things at once using a single silent properties file; the only restriction is that everything must use the same installer. Therefore, you could install a docbroker/connection broker, install a docbase/repository and configure/enable a licence using a single properties file, but you wouldn’t be able to do the silent installation of the binaries as well because it needs another installer. That’s definitely not what I’m doing because I find it messy; I really prefer to separate things, so I know I’m using only the parameters that I need for a specific component and nothing else.
  • There are examples provided when you install Documentum. You can look at the folder “$DM_HOME/install/silent/templates” and you will see some properties files. In these files, you will usually find most of the parameters that you can use but, from what I remember, a few are missing. Be aware that some files are for Windows and some are for Linux; it’s not always the same because some parameters are specific to a certain OS:
    • linux_ files are for Linux obviously
    • win_ files are for Windows obviously
    • cfs_ files are for a CFS/Remote Content Server installation (to provide High Availability to your docbases/repositories)
  • If you look at the folder “$DM_HOME/install/silent/silenttool”, you will see that there is a utility to generate silent files based on your current installation. You need to provide a silent installation file for a Content Server and it will generate for you a CFS/Remote CS silent installation file with most of the parameters that you need. Do not 100% rely on this file, there might still be some parameters missing but present ones should be the correct ones. I will write a blog on the CFS/Remote CS as well, to provide an example.
  • You can generate silent properties files by running the Documentum installers with the following command: “<installer_name>.<sh/bin> -r <path>/<file_name>.properties”. This will write the parameters you selected/enabled/configured into the <file_name>.properties file so you can re-use it later (see the sketch right after this list).
  • To install an additional JMS, you can use the jmsConfig.sh script or jmsStandaloneSetup.bin for an IJMS (Independent JMS – Documentum 16.4 only). It won’t be in the blogs because I’m only showing the default one created with the binaries.
  • The following components/features can be installed using the silent mode (it is possible that I’m missing some, these are the ones I know):
    • CS binaries + JMS
    • JMS/IJMS
    • Docbroker/connection broker
    • Licences
    • Docbase/repository (CS + CFS/RCS + DMS + RKM)
    • D2
    • Thumbnail
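
To illustrate the record/replay workflow mentioned in the list above, here is a minimal sketch (the installer name and paths are just examples, adapt them to your component and layout):

# Record the answers of an interactive run into a properties file
./serverSetup.bin -r /tmp/dctm_install/CS_Installation.properties
# Re-use the recorded file later for a silent installation (e.g. on the next server)
./serverSetup.bin -f /tmp/dctm_install/CS_Installation.properties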

 

2. Documentum Content Server installation (binaries & JMS)

Before starting, you need to have the Documentum environment variables defined ($DOCUMENTUM, $DM_HOME, $DOCUMENTUM_SHARED), that doesn’t change. Once that is done, you need to extract the installer package (below I used the package for a CS 7.3 on Linux with an Oracle DB):

[dmadmin@content_server_01 ~]$ cd /tmp/dctm_install/
[dmadmin@content_server_01 dctm_install]$ tar -xvf Content_Server_7.3_linux64_oracle.tar
[dmadmin@content_server_01 dctm_install]$
[dmadmin@content_server_01 dctm_install]$ chmod 750 serverSetup.bin
[dmadmin@content_server_01 dctm_install]$ rm Content_Server_7.3_linux64_oracle.tar
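
Before going further, a small sanity check of the environment variables mentioned above can save a failed run. This is just a minimal sketch in plain bash, nothing Documentum-specific:

for v in DOCUMENTUM DM_HOME DOCUMENTUM_SHARED; do
   [ -n "${!v}" ] && echo "$v=${!v}" || echo "WARNING: $v is not set"
done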

 

Then prepare the properties file:

[dmadmin@content_server_01 dctm_install]$ vi CS_Installation.properties
[dmadmin@content_server_01 dctm_install]$ cat CS_Installation.properties
### Silent installation response file for CS binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
APPSERVER.SERVER_HTTP_PORT=9080
APPSERVER.SECURE.PASSWORD=adm1nP4ssw0rdJMS

### Common parameters
COMMON.DO_NOT_RUN_DM_ROOT_TASK=true

[dmadmin@content_server_01 dctm_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • APPSERVER.SERVER_HTTP_PORT: The port to be used by the JMS that will be installed
  • APPSERVER.SECURE.PASSWORD: The password of the “admin” account of the JMS. Yes, you need to put all passwords in clear text in the silent installation properties files, so add them just before starting the installation and remove them right after (see the sketch after this list for one way to handle that)
  • COMMON.DO_NOT_RUN_DM_ROOT_TASK: Whether or not you want to run the dm_root_task in the silent installation. I usually set it to true, so it is NOT executed, because the Installation Owner I’m using does not have root access for security reasons
  • On Windows, you would also need to provide the Installation Owner’s password as well as the path you want to install Documentum on ($DOCUMENTUM). On Linux, the first one isn’t needed and the second one needs to be in the environment before starting.
  • You could also potentially add more properties to this file: SERVER.LOCKBOX_FILE_NAMEx and SERVER.LOCKBOX_PASSPHRASE.PASSWORDx (where x is a number starting at 1 and incrementing in case you have several lockboxes). These parameters would be used for existing lockbox files that you want to load. Honestly, they are of little use: you will need to provide the lockbox information again during the docbase/repository creation anyway, where you specify whether you want a new lockbox, an existing lockbox or no lockbox at all, so specifying it here is kind of redundant…
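
Regarding the clear text passwords mentioned above, one possible approach (a sketch only, not an official feature; the file names are arbitrary) is to keep a password-less template and inject the value just before the run:

# Copy the template, prompt for the password without echo and inject it into the copy
cp CS_Installation.properties CS_Installation.properties.run
read -s -p "JMS admin password: " JMS_PWD; echo
sed -i "s|^APPSERVER.SECURE.PASSWORD=.*|APPSERVER.SECURE.PASSWORD=${JMS_PWD}|" CS_Installation.properties.run
# Run the silent installation with the temporary copy, then remove it
./serverSetup.bin -f CS_Installation.properties.run
rm -f CS_Installation.properties.run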

 

Once the properties file is ready, you can install the Documentum binaries and the JMS in silent using the following command:

[dmadmin@content_server_01 dctm_install]$ ./serverSetup.bin -f CS_Installation.properties
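
Once the installer returns, a very basic check is to verify that the JMS answers on the HTTP port configured above. This is only a sketch (the exact landing page is not asserted here, only that the port responds):

curl -s -o /dev/null -w "JMS HTTP return code: %{http_code}\n" http://localhost:9080/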

 

This concludes the first blog of this series about Documentum silent installations. Stay tuned for more soon.

 


Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0

Fri, 2018-08-03 03:26

The following blog will provide the necessary steps to upgrade the Grid Infrastructure from 12.1 to 12.2, for a Standalone Server.
One of the new features of GI 12.2 is the usage of AFD (Oracle ASM Filter Driver).

Assumptions :

 You have installed Oracle GI 12.1 as grid user
 You have installed Oracle Database 12.1 as oracle user
 You have configured the groups asmadmin,asmoper,asmdba
 You installed oracle-rdbms-server-12cr2-preinstall rpm
 You patched your Oracle GI to PSU July 2017 (combo patch 25901062 to patch Oracle stack 12.1 , GI & RDBMS)
 [root]mkdir /u01/app/grid/product/12.2.0/grid/
 [root]chown -R grid:oinstall /u01/app/grid/product/12.2.0/grid/
 --stop all dbs that are using ASM
 [oracle]srvctl stop database -d ORCL

Installation : Tasks

[grid]cd /u01/app/grid/product/12.2.0/grid/
[grid]unzip /stage/linuxx64_12201_grid_home.zip
[grid]./gridSetup.sh
	Choose Upgrade Oracle Grid Infrastructure option.
	Confirm that all Oracle DBs using ASM are stopped.
	Check :
        Oracle base : /u01/app/grid/  
        Software Location : /u01/app/grid/product/12.2.0/grid/
		
	Uncheck "Automatically run configuration scripts". Is not recommanded by Oracle, but if you are doing like that 
is very possible that your upgrade process is dying without any output. 
	So at the right moment you will be asked to run rootUpgrade.sh maually.
	Click Next and validate that all the pre-requirements are confirmed.
	Monitor the progress and run the script rootUpgrade.sh when is prompted
	Once your action completed succesfully:
[grid@dbisrv04 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid

[grid@dbisrv04 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

Migrating ASM disks from ASMlib to AFD : Tasks

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

Step1:

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:
profile:

[grid@dbisrv04 ~]$ asmcmd dsset '/dev/xvda*','ORCL:*','AFD:*'

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:/dev/xvda*, ORCL:*, AFD:*
profile:/dev/xvda*,ORCL:*,AFD:*

Step2:

[root]export ORACLE_HOME=/u01/app/grid/product/12.2.0/grid/
[root]$ORACLE_HOME/bin/crsctl stop has -f

Step3:

[root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure

ASMCMD-9524: AFD configuration failed 'ERROR: ASMLib deconfiguration failed'
Cause: acfsload is running. To configure AFD, oracleasm and acfsload must be stopped
Solution: stop acfsload and rerun asmcmd afd_configure

[root@dbisrv04 ~]# oracleasm exit
[root@dbisrv04 ~]# $ORACLE_HOME/bin/acfsload stop

[root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbisrv04.localdomain'

Step5:

[root]$ORACLE_HOME/bin/crsctl stop has

Step6:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_refresh
[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step7:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/sd*'
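
As an optional verification (not part of the original procedure), the discovery string that was just set can be read back with afd_dsget:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsget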

Step8:

[root]$ORACLE_HOME/bin/crsctl stop has -f
[root]$ORACLE_HOME/bin/asmcmd afd_scan
[root]$ORACLE_HOME/bin/asmcmd afd_refresh

Step9:

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step10:

SQL> select name, label, path from v$asm_disk;

NAME       LABEL                PATH
---------- -------------------- --------------------
DISK04     DISK04               AFD:DISK04
DISK03     DISK03               AFD:DISK03
DISK02     DISK02               AFD:DISK02
DISK01     DISK01               AFD:DISK01
DISK07     DISK07               AFD:DISK07
DISK05     DISK05               AFD:DISK05
DISK06     DISK06               AFD:DISK06
DISK09     DISK09               AFD:DISK09
DISK08     DISK08               AFD:DISK08

Step11: Confirm your AFD is loaded

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE

--------------------------------------------------------------------------------

Step 11b: Introduce new disks with AFD

[root]. oraenv
[root]+ASM
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1 --init
ASMCMD-9521: AFD is already configured
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1
[root@dbisrv04 ~]# asmcmd afd_lslbl

Step 12: Erase Oracle ASMLib

[root] yum erase oracleasm-support.x86_64
[root] yum erase oracleasmlib.x86_64
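
As a final optional check (a sketch, not part of the original steps), you can confirm that only the AFD driver remains loaded once ASMLib has been removed:

[root] lsmod | grep -E "oracleasm|oracleafd"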
 


Hitachi Content Intelligence deployment

Thu, 2018-08-02 07:19

Hitachi Content Intelligence (HCI) is a search and data processing solution. It allows the extraction, classification, enrichment, and categorization of data, regardless of where the data lives or what format it’s in.

Content Intelligence provides tools at large scale across multiple repositories. These tools are useful for identifying, blending, normalizing, querying, and indexing data for search, discovery, and reporting purposes.

Architecture

HCI has components called data connections that it uses to access the places where your data is stored (these places are called data sources). A data connection contains all the authentication and access information that HCI needs to read the files in the data source.

HCI is extensible with published application programming interfaces (APIs) that support customized data connections, transformation stages, or building new applications.

[Figure: HCI-1]

HCI is composed of many services running on Docker.

[centos@hci ~]$ sudo docker ps -a
[sudo] password for centos:
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
0547ec8761cd        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           admin-app
1f22db4aec4b        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           sentinel-service
fa54650ec03a        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           haproxy-service
6b82daf15093        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           marathon-service
a12431829a56        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_master_service
812eda23e759        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_agent_service
f444ab8e66ee        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           zookeeper-service
c7422cdf3213        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           watchdog-service

Below is a representation of all the services of the HCI platform.

[Figure: HCI-2]

System Requirements

HCI has been qualified using these Linux distributions:

  • Fedora 24
  • Centos 7.2
  • Ubuntu 16.04 LTS
Docker requirements

HCI requires Docker to be installed on each server running HCI. Docker version > 1.3.0 must be installed on all instances.

Network requirements

Each HCI instance must have a static IP address, and multiple ports must be opened for HCI tools such as Zookeeper, Mesos, Cassandra, Kafka, etc.

To see the list of ports, refer to the official HCI documentation. For our testing environment, we will stop the firewall service.

System configuration & Installation

HCI can run on physical or virtual servers, or be hosted on public or private clouds. It is instantiated as a set of containers and provided to users as a self-service facility with support for detailed queries and ad hoc natural language searches. HCI can run in single-instance or cluster mode. For this blog, we will use a single instance.

Docker version:
[centos@hci ~]$ docker --version
Docker version 1.13.1, build 87f2fab/1.13.

If Docker is not installed, please follow the installation methods from the Docker official website.

Disable SELinux:
  • Backup current SELinux configuration:
[centos@hci ~]$ sudo cp /etc/selinux/config /etc/selinux/config.bak
  • Disable SELinux:
[centos@hci ~]$ sudo sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
 Create user and group:
[centos@hci ~]$ sudo groupadd hci -g 10001

[centos@hci ~]$ sudo useradd hci -u 10001 -g 10001
Disable firewall service:
[centos@hci ~]$ sudo service firewalld stop

Redirecting to /bin/systemctl stop firewalld.service
 Run Docker service:
[centos@hci ~]$ sudo systemctl status docker

* docker.service - Docker Application Container Engine

Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)

Active: active (running) since Thu 2018-08-02 10:08:38 CEST; 1s ago
Configure the Docker service to start automatically at boot:
[centos@hci ~]$ sudo systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service
Change the vm.max_map_count value:

Add 'vm.max_map_count = 262144' to /etc/sysctl.conf

[centos@hci ~]$ sudo vi /etc/sysctl.conf
[centos@hci ~]$ sudo sysctl -w vm.max_map_count=262144
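
A quick verification (just a sketch) that the value is both active now and persisted for the next reboot:

[centos@hci ~]$ sysctl vm.max_map_count
[centos@hci ~]$ grep vm.max_map_count /etc/sysctl.conf
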
Download HCI

Create first your Hitachi Vantara account, https://sso.hitachivantara.com/en_us/user-registration.html.

Then, from the Hitachi Vantara Community website https://community.hitachivantara.com/hci, click “Downloads”. You will have access to a 90-day trial license with the full feature set.

HCI Installation

Create a directory called hci in the location of your choice. We recommend using the largest disk partition:

[centos@hci ~]$ mkdir hci

Move the installation package to your hci directory:

[centos@hci ~]$ mv HCI-1.3.0.93.tgz hci/

Extract the installation package:

[centos@hci hci]$ sudo tar -xzf HCI-1.3.0.93.tgz

Run the installation script in the version-specific directory:

[centos@hci hci]$ sudo 1.3.0.93/bin/install

Run the hci_setup script:

[centos@hci50 bin]$ sudo ./setup -i <ip-address-instance>

Run the hci_run script, and ensure that the method you use keeps the hci_run script running and automatically restarts it in case of a server reboot:

We recommend running the script as a service using systemd:

In the installation package, a service file is provided and you can edit this file according to your configuration:

  1. Edit the HCI.service file:
[centos@hci bin]$ vi HCI.service
  2. Ensure the ExecStart parameter is properly set, with the right path:
ExecStart=/home/centos/hci/bin/run

If not, change it to your hci installation path.

  3. Copy the HCI.service file to the appropriate location for your OS:
[centos@hci bin]$ sudo cp /hci/bin/HCI.service /etc/systemd/system
  4. Enable and start the HCI service:
[centos@hci bin]$ sudo systemctl enable HCI.service

Created symlink from /etc/systemd/system/multi-user.target.wants/HCI.service to /etc/systemd/system/HCI.service.

[centos@hci bin]$ sudo systemctl start HCI.service

Check if the service has properly started:

[centos@dhcp-10-32-0-50 bin]$ sudo systemctl status HCI.service

* HCI.service - HCI

   Loaded: loaded (/etc/systemd/system/HCI.service; enabled; vendor preset: disabled)

   Active: active (running) since Thu 2018-08-02 11:15:09 CEST; 45s ago

 Main PID: 5849 (run)

    Tasks: 6

   Memory: 5.3M

   CGroup: /system.slice/HCI.service

           |-5849 /bin/bash /home/centos/hci/bin/run

           |-8578 /bin/bash /home/centos/hci/bin/run

           `-8580 /usr/bin/docker-current start -a watchdog-service

HCI Deployment

With your favorite web browser, connect to HCI administrative App:

https://<HCI-instance-ip-address>:8000
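
If the page does not come up, a quick check from the shell (a sketch; replace the placeholder with your instance IP address) confirms whether the admin App is listening:

[centos@hci ~]$ curl -k -s -o /dev/null -w "%{http_code}\n" https://<HCI-instance-ip-address>:8000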

On the Welcome page, set a password for the admin user:

[Figure: HCI-3]

Choose what you would like to deploy:


Click on Hitachi Content Search button and click on Continue button.

Click on Deploy Single Instance button:


Wait for the HCI deployment until it finishes.


 

 

 

 

 


A password() function for dmgawk

Tue, 2018-07-31 16:31

A few days ago, as I was preparing a dmawk script for a presentation, I stumbled upon another unexpected error.
The script was attempting to connect to a docbase by providing a docbase name, a user name and a password. But before that, it tested whether a password was provided as a command-line parameter (I know, this is not very secure but it was for demonstration purposes only); if not, it prompted for one using dmawk’s built-in password() function. The full command was:

echo "select count(*) from dm_user" | dmawk -v docbase=dmtest -v username=dmadmin -f select.awk

with select.awk narrowed down to:

cat select.awk
BEGIN {
   passwd = password("please, enter password: ")
   print "password was:", passwd
}

The problem was that when piping something into the script, it didn’t prompt anymore for a password. Without piping, it prompted as expected:

echo "select count(*) from dm_user" | dmawk73 -f ./getpasswd.dmawk
password was:
exiting ...

==> not prompted for password;

dmawk73 -f ./getpasswd.dmawk
please, enter password:
password was: Supercalifragilisticexpialidocious!
exiting ...

==> prompted for password;
Here, the alias dmawk73 points to the content server v7.3’s dmawk; v7.3 is my current version of the Documentum content server.
Note that the variant below did not work either:

cat query_file 
select
   count(*)
from
dm_user

dmawk73 -f ./getpasswd.dmawk < query_file
password was:
exiting ...

==> not prompted for password;
This proves that what screws up the dmawk’s password() function is the presence of characters in stdin, whether they come from a pipe or from a redirection.
Did they change (a politically correct way to say “break”) something in this version relative to a previous one ? To be sure, I tried the same tiny script with dmawk from an ancient 5.3 installation I keep around for those puzzling occasions, and guess what ? No special weirdness here, it worked as expected:

dmadmin@dmclient:~/getpasswd$ echo "select count(*) from dm_user" | dmawk53 -f ./getpasswd.dmawk
please, enter password:
password was: Supercalifragilisticexpialidocious
exiting ...

where the alias dmawk53 points to the content server v5.3’s dmawk.
A strace on dmawk53 shows that the device /dev/tty is read for input:

open("/dev/tty", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 4
...
write(4, "please, enter password: ", 24) = 24
read(4, "kdk\n", 4096) = 4
...
close(4) = 0
...
write(1, "password was: kdk\n", 18) = 18
write(1, "exiting ...\n", 12) = 12

For sure, the original behavior was changed somehow around reading from the tty, and the built-in password() function gets disrupted when something is first input into stdin.
So how to work around this new pesky issue ? Let’s see a few solutions. To be clear, I assume from the beginning that security is not a major concern here. Proposed solutions 4, 5 and 6, however, are on the same security level as dmawk’s password() since they restore this function.

1. Give up piping into dmawk

This means that it will not be possible to concatenate the awk script to the previous command. If this is acceptable, why not ? dmawk’s input will have to come from a file, e.g.:

cat query_file 
select
   count(*)
from
dm_user

cat getpasswd_no_pipe.dmawk 
BEGIN {
   while ((getline < query_file) > 0)
      query = query "\n" $0
   close(query_file)
   print "query string is:", query
   pp = password("please, enter password: ")
   print "password was:", pp

   exit

}
END {
  print "exiting ..."
}

Execution:
dmadmin@dmclient:~/getpasswd$ dmawk73 -f getpasswd_no_pipe.dmawk -v query_file=query_file
query is: 
select
   count(*)
from
dm_user

please, enter password: 
password was: Et tu, Brute?
exiting ...

If security matters and command concatenation is not needed, the above may be an acceptable work-around.

2. Using an environment variable

If security is not significant, the password could be passed in an environment variable, e.g.:

cat getpasswd_env.dmawk
BEGIN {
   cmd = "echo $password"
   cmd | getline pp
   close(cmd)
   print "password was:", pp
}
END {
  print "exiting ..."
}

Execution:

export password=Supercalifragilisticexpialidocious!
echo "select count(*) from dm_user" | dmawk73 -f ./getpasswd_env.dmawk
password was: Supercalifragilisticexpialidocious!
exiting ...

Here, it is mandatory to use the export statement because dmawk launches a sub-process to read the parent’s environment variable.
Unlike dmawk, gawk can map the process’ environment into the built-in associative array ENVIRON, which makes accessing $password more elegant and also faster as no sub-process gets spawned:

cat ./getpasswd_env.awk
BEGIN {
   print "password was:", ENVIRON["password"]
   exit
}
END {
   print "exiting..."
}

Execution:

echo "select count(*) from dm_user" | gawk -f ./getpasswd_env.awk
password was: Supercalifragilisticexpialidocious!
exiting...

A little digression here while on the subject of environment variables: it’s a little known fact that the tools iapi and idql support 3 handy but rarely used environment variables: DM_DOCBASE_NAME, DM_USER_NAME and DM_PASSWORD; if those are set, either as a whole or individually, the above utilities can be launched with the corresponding option -DM_CONNECT_DOCBASE, -DM_CONNECT_USER_NAME and -DM_CONNECT_PASSWORD and the corresponding parameter can be omitted. E.g.:

export DM_DOCBASE_NAME=dmtest
export DM_USER_NAME=kermit
export DM_PASSWORD=conehead
idql -ENV_CONNECT_DOCBASE_NAME -ENV_CONNECT_USER_NAME -ENV_CONNECT_PASSWORD <<EoQ
   select count(*) from dm_user
   go
   quit
EoQ
Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
1> 2> count(*)              
----------------------
                    61
(1 row affected)
1> Bye

However, there is no prompt for missing parameters or unset variables and, quite surprisingly, the command fails silently in such cases.
Nonetheless, the point here is that we could standardize on these variable names and use them with awk, e.g. (dm)awk would pull out those parameters from the environment as follows:

echo "select count(*) from dm_user" | dmawk73 'BEGIN {
   cmd = "echo $DM_DOCBASE_NAME $DM_USER_NAME $DM_PASSWORD"
   cmd | getline docbase_name dm_user_name passwd
   print docbase_name, dm_user_name, passwd ? passwd : "N/A"
   close(cmd)
}'
dmtest kermit conehead 

whereas gawk could chose to access those environment variables through the built-in ENVIRON associative array:

echo "select count(*) from dm_user" | gawk 'BEGIN { print ENVIRON["DM_DOCBASE_NAME"], ENVIRON["DM_USER_NAME"], ENVIRON["DM_PASSWORD"] ? ENVIRON["DM_PASSWORD"] : "N/A"}'
dmtest kermit conehead

which can be more readable in some cases since its indexes are explicitly named vs. positional.
See section 5 below to know what dmawk and gawk have in common regarding Documentum.

3. Reading the password from a file

Here too, let’s admit that security is not important so a cleartext password could be read from a text file as follows:

cat getpasswd_from_file.awk
# Usage:
#    dmawk -v password_file=... -f getpasswd_from_file.dmawk 
BEGIN {
   if (!password_file) {
      print "missing password_file parameter"
      exit
   }
   getline pp < password_file
   close(password_file)
   print "password was:", pp
}

Execution:

cat password_file
Supercalifragilisticexpialidocious!

echo "select count(*) from dm_user" | dmawk -f getpasswd_from_file.awk  -v password_file=password_file
password was: Supercalifragilisticexpialidocious!

No surprise here.

4. Access bash’s read -s command

The bash shell has the built-in command read, which takes the -s option in order to prevent echoing the entered characters on the screen. Unfortunately, while bash is most of the time the login shell, it is not always the subshell invoked when spawning a command, which awk does when executing things like “cmd | getline”. Actually, it is /bin/sh that is invoked as a subshell under Linux, and on the Ubuntu 16.04 and 18.04 I’m using here it is a symlink to /bin/dash, a much smaller shell than bash and supposedly faster (under Centos, /usr/bin/sh is symlinked to /usr/bin/bash instead). So, how to force bash as a subshell ?
I could not find any system setting to configure the choice of the subshell. Obviously, changing the /bin/sh symlink and making it point to /bin/bash works indeed, but it is a system-wide change and it is not recommended because of possible compatibility issues.
The solution is to explicitly tell the subshell to make bash execute the read. But that is not enough: we also need to explicitly tell read to get its input from /dev/tty, otherwise it gets messed up with any piped or redirected input. Here is a solution:

cat getpasswd_tty.dmawk
BEGIN {
   pp = getpassword("please, enter password: ")
   print "\npassword was:", pp
   exit
}
END {
  print "exiting ..."
}
function getpassword(prompt     , cmd, passwd) {
   cmd = "/bin/bash -c 'read -s -p \"" prompt "\" passwd < /dev/tty; echo $passwd'"
   cmd | getline passwd
   close(cmd)
   return passwd
}

Execution:
echo "select count(*) from dm_user" | dmawk -f  getpasswd_tty.dmawk 
please, enter password: password: 
password was: AreYo7Kidd8ngM3?
exiting ...

The cmd assignment in getpassword() invokes bash from whatever subshell is launched by dmawk, and asks it to execute the read built-in without echo, with the given prompt, and with its input coming directly from the device /dev/tty.
Also note the function getpassword’s formal parameters cmd and passwd; since the function is called without any effective value for those, they are treated as local variables; this is a common idiom in awk, where all variables are global and come into existence as soon as they are referenced.
Under Centos, where /usr/bin/bash is also invoked as a subshell, the cmd assignment can be slightly simplified:

   cmd = "'read -s -p \"" prompt "\" passwd < /dev/tty; echo $passwd'""

This work-around is the easiest and closest to the original built-in password() function.

5. Implement password() in dmgawk

Those who have read my blog here know that we have now a much more powerful implementation of awk in our toolbox, GNU gawk, which we can extend to suit our needs. The above blog describes how to extend gawk with a connectivity to Documentum docbases; I jokingly named the resulting awk dmgawk. As glibc includes the getpass() function just for this purpose, why not use the same approach and add to dmgawk a sensible password() function around C’s getpass() that works as before ? Let’s put our money where our mouth is and implement this function in dmgawk. In truth, it should be noted that getpass() is marked as being obsolete so this alternative should be considered as a temporary work-around.
I won’t copy here all the steps from the above blog though; here are only the distinctive ones.
The interface’s source:

cat ~/dmgawk/gawk-4.2.1/extension/password.c
/*
 * password.c - Builtin function that provide an interface to the getpass() function;
 * see dmapp.h for description of functions;
 *
 * C. Cervini
 * dbi-services.com
 * 7/2018
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <assert.h>
#include <string.h>   /* strlen() */
#include <unistd.h>   /* getpass() */

#include "gawkapi.h"

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

static const gawk_api_t *api;   /* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "password extension: version 1.0";
static awk_bool_t (*init_func)(void) = NULL;

int plugin_is_GPL_compatible;

/*  do_password */
static awk_value_t *
do_password(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t prompt;
   char *passwd;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, &prompt)) {
      passwd = getpass(prompt.str_value.str);
   }
   else passwd = getpass("");

   make_const_string(passwd == NULL ? "" : passwd, strlen(passwd), result);
   return result;
}

/*
these are the exported functions along with their min and max arities;
let's make the prompt parameter optional, as in dmawk;
*/
static awk_ext_func_t func_table[] = {
        { "password", do_password, 1, 0, awk_false, NULL },
};

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, password, "")

Compilation steps:

cd ~/dmgawk/gawk-4.2.1/extension
vi Makefile.am
append the new library to the pkgextension_LTLIBRARIES list:
pkgextension_LTLIBRARIES =      \
        filefuncs.la    \
        fnmatch.la      \
        fork.la         \
        inplace.la      \
        intdiv.la       \
        ordchr.la       \
        readdir.la      \
        readfile.la     \
        revoutput.la    \
        revtwoway.la    \
        rwarray.la      \
        time.la         \
        dctm.la         \
        password.la

later:
dctm_la_SOURCES       = dctm.c
dctm_la_LDFLAGS       = $(MY_MODULE_FLAGS)
dctm_la_LIBADD        = $(MY_LIBS)

password_la_SOURCES  = password.c
password_la_LDFLAGS  = $(MY_MODULE_FLAGS)
password_la_LIBADD   = $(MY_LIBS)

run the make command:
make

go one level up and run the make command again:
make

At this point, the new gawk is ready for use. Let’s test it:

cat getpasswd.awk
@load "password"

BEGIN {
   passwd = password("please, enter password: ")
   print "password was:", passwd
}

END {
   print "exiting..."
}

Execution:

echo "select count(*) from dm_user" | AWKLIBPATH=~/dmgawk/gawk-4.2.1/extension/.libs ~/dmgawk/gawk-4.2.1/gawk -f ./getpasswd.awk
please, enter password: 
password was: precipitevolissimevolmente
exiting...

If all is good, install the new extension system-wide as follows:

cd ~/dmgawk/gawk-4.2.1
sudo make install

make an alias to the new gawk:
alias dmgawk=/usr/local/bin/gawk
The usage is simplified now:
echo "select count(*) from dm_user" | dmgawk -f ./getpasswd.awk
please, enter password: 
password was: humptydumpty
exiting...

dmgawk looks more and more like a valuable substitute for dmawk. What gets broken in dmawk can be fixed by dmgawk.

6. And in python ?

Those who use python for their Documentum administration tasks, extended with the Documentum connectivity as proposed in my blog here, are even luckier because python has a library for just about everything but the kitchen sink, and an interface to C’s getpass(), appropriately named getpass(), already exists, see here. Therefore, there is no need to write one using e.g. ctypes. Here is how to call the python’s getpass():

cat getpasswd.py 
#!/usr/bin/python

import getpass

passwd = getpass.getpass(prompt = "Please, enter password: ")
print("The password is: " + passwd)

Execution:
echo "select count(*) from dm_user" | ./getpasswd.py
Please, enter password: 
The password is: Did the quick brown fox jump over the lazy dog ?

No muss, no fuss here.

Conclusion

It’s quite interesting to see how basic things that we take for granted get broken from one Documentum release to another. On the bright side though, those little frustrations give us the opportunity to look for work-arounds, and to write blogs about them ;-). I am eager to find the next dysfunction and pretty confident that Documentum will not disappoint me in this respect.

 


Patching ODA lite to 12.2.1.4.0

Mon, 2018-07-30 10:30

Here is how to apply the latest patch to your ODA. As usual, the Oracle documentation is not 100% accurate. I applied this patch on 2 ODA X7-2M appliances previously deployed in 12.2.1.2.0; no intermediate patch was needed.

1) Download the patch

The patch number is 28216794. This patch will update the following components:  dcs (odacli), operating system, bios/firmwares, ilom, GI, dbhomes and databases.
Download and copy the patch to a temporary folder on the server, for example /opt/patch. You’ll have to be root to apply the patch.

2) Check the actual versions and free space on disk

First check the current version:

odacli describe-component

System Version
---------------
12.2.1.2.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            up-to-date
GI                                        12.2.0.1.171017       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.171017       up-to-date
}
DCSAGENT                                  12.2.1.2.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.0.22.r120818
BIOS                                      41017600              41017100
OS                                        6.8                   up-to-date

The “available version” stands for “what’s available in the ODA repository?”. As this ODA has never been patched, it seems that some of the components are already in newer versions than those provided in the initial deployment package. Registering a new patch will refresh this repository.

Check that the /, /u01 and /opt filesystems have enough free space to proceed (>= 10GB).
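
A quick way to verify this (nothing ODA-specific, plain Linux):

df -h / /u01 /opt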

3) Prepare the patch files

The previous patch was slightly different from the others because Oracle simply forgot to double-zip files 2 and 3 of the patch. This patch is back to a more classic update: unzip the downloaded files and register the 3 extracted files (they are zip files, too).

cd /opt/patch
unzip p28216794_122140_Linux-x86-64_1of3.zip
unzip p28216794_122140_Linux-x86-64_2of3.zip
unzip p28216794_122140_Linux-x86-64_3of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip

Updating the repository, as other tasks through odacli, will generate a job-id you can look at to confirm that everything is running fine:

odacli describe-job -i "4087f0f4-2582-4621-b75c-59c9915a0cb5"

Job details
----------------------------------------------------------------
                     ID:  4087f0f4-2582-4621-b75c-59c9915a0cb5
            Description:  Repository Update
                 Status:  Success
                Created:  July 19, 2018 10:32:53 AM CEST
                Message:  oda-sm-12.2.1.4.0-180703-server1of3.zip

Note: you can update the repository with a single update-repository command including the 3 files. But it’s better to do the update separately in case of a corrupted file.
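
For reference, the single-command variant mentioned in the note above would look like the sketch below (check the odacli documentation for the exact syntax of passing several files); registering the files one by one nevertheless makes troubleshooting easier:

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip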

New feature with this patch, you can check if your system is ready for patching with this command:

odacli create-prepatchreport -s -v 12.2.1.4.0
odacli describe-prepatchreport -i f45f9750-ec9b-411f-ba53-43205cb17f87

4) Update the dcs-agent

First, you’ll have to update the dcs-agent:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 12.2.1.4.0

odacli describe-job -i "5cc9174e-bd7a-435d-aaff-0113e9ab01bc"

Job details
----------------------------------------------------------------
                     ID:  5cc9174e-bd7a-435d-aaff-0113e9ab01bc
            Description:  DcsAgent patching
                 Status:  Success
                Created:  July 19, 2018 10:36:18 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 10:36:18 AM CEST      July 19, 2018 10:36:19 AM CEST      Success
dcs-agent upgrade                        July 19, 2018 10:36:19 AM CEST      July 19, 2018 10:36:19 AM CEST      Success

This update takes only few seconds. Check again the version and you will see a new 18c component:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            12.2.1.4.0
GI                                        12.2.0.1.171017       12.2.0.1.180417
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.2.20.b.r123704
BIOS                                      41017600              41021300
OS                                        6.8                   6.9
FIRMWARECONTROLLER                        QDV1RE0F              qdv1re14

5) Update the server

Updating the server will update the OS, ILOM, BIOS, firmwares and the GI. The update process assumes that no additional packages have been installed. If you installed additional packages, please remove them as they can prevent the patch from applying correctly.

This part of the update is the longest one (about 1 hour), and will end with a reboot of the server.

odacli update-server -v 12.2.1.4.0

odacli describe-job -i "a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120"

Job details
----------------------------------------------------------------
                     ID:  a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120
            Description:  Server Patching
                 Status:  Success
                Created:  July 19, 2018 12:12:53 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 12:12:53 PM CEST      July 19, 2018 12:12:53 PM CEST      Success
dcs-controller upgrade                   July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Patch location validation                July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
dcs-cli upgrade                          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:55 PM CEST      Success
Applying HMP Patches                     July 19, 2018 12:12:55 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Patch location validation                July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
oda-hw-mgmt upgrade                      July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:35 PM CEST      Success
Applying OS Patches                      July 19, 2018 12:14:35 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
OSS Patching                             July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
Applying Firmware Disk Patches           July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:23 PM CEST      Success
Applying Firmware Expander Patches       July 19, 2018 12:16:23 PM CEST      July 19, 2018 12:16:26 PM CEST      Success
Applying Firmware Controller Patches     July 19, 2018 12:16:26 PM CEST      July 19, 2018 12:16:30 PM CEST      Success
Checking Ilom patch Version              July 19, 2018 12:16:31 PM CEST      July 19, 2018 12:16:33 PM CEST      Success
Patch location validation                July 19, 2018 12:16:33 PM CEST      July 19, 2018 12:16:34 PM CEST      Success
Apply Ilom patch                         July 19, 2018 12:16:35 PM CEST      July 19, 2018 12:16:36 PM CEST      Success
Copying Flash Bios to Temp location      July 19, 2018 12:16:37 PM CEST      July 19, 2018 12:16:37 PM CEST      Success
Starting the clusterware                 July 19, 2018 12:16:57 PM CEST      July 19, 2018 12:16:58 PM CEST      Success
clusterware patch verification           July 19, 2018 12:16:58 PM CEST      July 19, 2018 12:17:00 PM CEST      Success
Patch location validation                July 19, 2018 12:17:00 PM CEST      July 19, 2018 12:17:03 PM CEST      Success
Opatch updation                          July 19, 2018 12:17:33 PM CEST      July 19, 2018 12:17:35 PM CEST      Success
Patch conflict check                     July 19, 2018 12:17:35 PM CEST      July 19, 2018 12:18:16 PM CEST      Success
clusterware upgrade                      July 19, 2018 12:18:16 PM CEST      July 19, 2018 12:33:07 PM CEST      Success
Updating GiHome version                  July 19, 2018 12:33:07 PM CEST      July 19, 2018 12:33:11 PM CEST      Success
preRebootNode Actions                    July 19, 2018 12:33:30 PM CEST      July 19, 2018 12:34:11 PM CEST      Success
Reboot Ilom                              July 19, 2018 12:34:11 PM CEST      July 19, 2018 12:34:11 PM CEST      Success

If there is a problem during the patching process (you forgot to remove an additional rpm or one of the patch files is missing, for example), you can relaunch the patching and it will skip the already patched components.

Once this part of the patch is successfully applied, check the components again:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41017600              41021300
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

Note: at this time, the BIOS is not yet updated. I don’t know why, but check again later and it will be OK.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all, list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.171017 (26609929, 26392168)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.171017 (27020386, 26710464)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

Patch the first one. The target version is actually the version of the patch. Remember that updating the ODA through a patch will never upgrade your databases to a newer release: your 11gR2 databases will stay in 11.2.0.4 and your 12cR1 databases will stay in 12.1.0.2 (if you’re using one or both of these versions). Only the PSU number will change (the fifth number, which is actually a date).

odacli update-dbhome -i 90556d26-6756-4fed-9546-d44d55b6fc04 -v 12.2.1.4.0

odacli describe-job -i "c1abf083-d597-4673-b07b-d7cb79ec434a"

Job details
----------------------------------------------------------------
                     ID:  c1abf083-d597-4673-b07b-d7cb79ec434a
            Description:  DB Home Patching: Home Id is 90556d26-6756-4fed-9546-d44d55b6fc04
                 Status:  Success
                Created:  July 19, 2018 12:59:11 PM CEST
                Message:  WARNING::Failed to run the datapatch as db DB03_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB04_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB05_S2 is not r

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 12:59:12 PM CEST      July 19, 2018 12:59:15 PM CEST      Success
Patch location validation                July 19, 2018 12:59:15 PM CEST      July 19, 2018 12:59:19 PM CEST      Success
Opatch updation                          July 19, 2018 12:59:39 PM CEST      July 19, 2018 12:59:40 PM CEST      Success
Patch conflict check                     July 19, 2018 12:59:40 PM CEST      July 19, 2018 12:59:49 PM CEST      Success
db upgrade                               July 19, 2018 12:59:49 PM CEST      July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:25 PM CEST       Success

Despite the overall Success status, some databases could stay at the previous version for a couple of reasons:
– the database is a standby database: no update is possible on the dictionary (the dictionary will be updated when the primary server is patched)
– the database is not correctly registered in the Oracle Cluster Registry (if you created the database with odacli in instance-only mode: create-database -io – probably a bug)

After applying the patch, please check the version of your dictionary on each database with this statement :

select ACTION_TIME, ACTION, VERSION, ID, BUNDLE_SERIES, COMMENTS from  DBA_REGISTRY_HISTORY;
ACTION_TIME                    ACTION       VERSION                  ID BUNDLE_SERIES   COMMENTS
------------------------------ ------------ ---------------- ---------- --------------- ------------------------------
23/07/18 20:25:26,765974       APPLY        11.2.0.4             180417 PSU             PSU 11.2.0.4.180417

Then update the next dbhome:

odacli update-dbhome -i 18a9c067-3629-409d-9bae-60d27516c914 -v 12.2.1.4.0

odacli describe-job -i "ef53a8a6-bd74-40f6-a338-343489d41a1c"

Job details
----------------------------------------------------------------
                     ID:  ef53a8a6-bd74-40f6-a338-343489d41a1c
            Description:  DB Home Patching: Home Id is 18a9c067-3629-409d-9bae-60d27516c914
                 Status:  Success
                Created:  July 19, 2018 1:20:20 PM CEST
                Message:  WARNING::Failed to run datapatch on db DB12TST Failed to run Utlrp script##WARNING::Failed to run the datapatch as db DB12DEV: is not registered with clusterware

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 1:20:41 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch location validation                July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Opatch updation                          July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch conflict check                     July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
db upgrade                               July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:21:03 PM CEST       Success

As with the previously patched dbhome, this one also shows warnings. Please check that each database is OK.

Check the /opt/oracle/dcs/log/dcs-agent.log for extended warning messages if needed, and then check the DB Version after the update of all dbhomes:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

Jump to step 8 if your databases are OK.

7) Update the databases where datapatch failed

If for some reason datapatch was not applied on some of your databases and it should have been, you’ll have to do it manually:

For 12c :

su - oracle
. oraenv <<< DB12TST
cd $ORACLE_HOME/OPatch
./datapatch -verbose

For 11gR2 :

su - oracle
. oraenv <<< DB11TST
sqlplus / as sysdba
@?/rdbms/admin/catbundle.sql psu apply

8) Patch the storage

No update-storage is needed for this patch on this ODA.
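
Had it been required, the command would have been along these lines (to be confirmed against the README of your patch bundle before running it):

odacli update-storage -v 12.2.1.4.0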

9) Optional: deploy the latest db clone files

If you’ll never deploy a new dbhome, this step is not necessary. If you will, or simply if you don’t know whether you’ll have to do that later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level as the one already deployed, for example:

cd /opt/patch

unzip p27119402_122140_Linux-x86-64.zip
Archive:  p27119402_122140_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
  inflating: README.txt

update-image --image-files odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip

10) Control the final version of the components

The patching is now done. Around 2 hours were needed, provided everything went fine.

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180417       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180417       up-to-date
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41021300              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

11) Optional: patch the SFP firmware

If you’re using SFP network interfaces on your ODA X7 (fiber-connected network), you may encounter network troubles (loss of pings, failover not working correctly, and so on). SFPs are not part of the ODA patching process, but there is a firmware update available for this kind of SFP on the Oracle X7 server family (including ODA). Please review note 2373070.1 for more information:

Using the onboard SFP28 ports on an ODA X7-2 server node (Doc ID 2373070.1)

 

Cet article Patching ODA lite to 12.2.1.4.0 est apparu en premier sur Blog dbi services.

Logical standby: No view for data guard status on session level

Fri, 2018-07-27 09:26

On a logical standby you have to switch off the database guard, at least at session level, to make modifications to objects which are maintained by Data Guard.

This is done with the command:


alter session disable guard;

The guard can be re-enabled for this session by issuing:


alter session enable guard;

Executing these commands requires at least the “alter database” privilege, which IT security staff often do not like.

The command “alter session disable guard” is also used in logon triggers of logical standby databases for power users who have to modify objects which are maintained by Data Guard. The problem is that no view can be queried to get the guard status of the session.

A workaround is to write the result of the logon trigger into a log table which can be queried by the user.
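
A minimal sketch of such a logon trigger, assuming a hypothetical power user POWERUSER and a log table DBADMIN.GUARD_STATUS_LOG (names and error handling to be adapted):

create or replace trigger disable_guard_on_logon
after logon on database
declare
  pragma autonomous_transaction;
begin
  if sys_context('USERENV', 'SESSION_USER') = 'POWERUSER' then
    -- switch the guard off for this session only
    execute immediate 'alter session disable guard';
    -- record the result so the user can query it afterwards
    insert into dbadmin.guard_status_log (username, logon_time, guard_status)
    values (sys_context('USERENV', 'SESSION_USER'), sysdate, 'DISABLED');
    commit;
  end if;
end;
/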

 

Cet article Logical standby: No view for data guard status on session level est apparu en premier sur Blog dbi services.

ODA, network interface and configure-firstnet

Fri, 2018-07-27 09:13

Deploying new ODA X7-2S or 2M appliances, I have been curious how configure-firstnet would interact with the fiber and copper ethernet network interfaces. Reading documentation on the web, I could not clearly understand if it is mandatory to have the ODA connected to the LAN, in addition to the ILOM connection, when performing an ODA reimage and/or running configure-firstnet. After digging deeper and running a few tests, I wanted to share my experience in this blog.

configure-firstnet

Running configure-firstnet needs a little attention as it can only be run once. In the example below, I’m running the firstnet configuration with a VLAN.

configure first net

 network-script files

The network script files are stored in /etc/sysconfig/network-scripts.

Bonding is configured on the btbond1 interface as:

  1. em2 as the primary interface.
  2. active-backup mode: em3 is used as backup only and will take over if em2 is failing
    BONDING_OPTS="mode=active-backup miimon=100 primary=em2"

The only officially supported bonding option on the ODA is "active-backup". Manually updating the ifcfg-btbond1 file with BONDING_OPTS="mode=4 miimon=100 lacp_rate=1" in order to implement LACP (Link Aggregation Control Protocol) would work if your switch is configured accordingly, but it is not supported. The recommendation is to use the ODA with "active-backup" mode only.

This can be seen in the ifcfg-btbond1 configuration file.

10.80.6.17-002-btbond1
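
At runtime, the active bonding configuration and the currently active slave can also be checked through the kernel, independently of the configuration files:

cat /proc/net/bonding/btbond1 | grep -E "Bonding Mode|Currently Active Slave|Slave Interface|MII Status"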

After running configure-firstnet with a VLAN, a new file ifcfg-btbond1.<vlan_id> will be created. This file holds all the IP configuration (IP address, netmask, gateway). If no VLAN is used, the IP configuration will be added to the ifcfg-btbond1 configuration file.

10.80.6.17-004-vlan

If we look more closely at the network configuration files, we will see em2 and em3 being configured with btbond1 as master.

10.80.6.17-001-em2&3

As we can see, there is, so far, no reference in the network configuration files indicating whether you are using the fiber or the copper ethernet interfaces.

em2 and em3 interfaces: fiber or copper?

em2 and em3 interfaces are automatically bound either to fiber or to copper ethernet according to the physical connection. em2 and em3 would then either use the 2 fiber ports (SFP28) or the 2 ethernet ports (NET1 – NET2).
In fact, as soon as a GBIC converter is plugged into the fiber ports and a reboot is performed, em2 and em3 will automatically be linked to the fiber adapter. There is no need to have any fiber cabling.
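
One quick way to verify which media em2 and em3 are currently bound to is ethtool; the Port field shows either Twisted Pair or FIBRE:

ethtool em2 | grep -i "port"
ethtool em3 | grep -i "port"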

No GBIC converter installed on the ODA

IMG_1158 (Small)

em2 and em3 interfaces would be seen as Twisted Pair.

10.80.6.17-003-no gbic

GBIC converter

IMG_1160 (Small)

GBIC converter installed on the ODA

IMG_1161 (Small)

After a server reboot, em2 and em3 interfaces would be seen as Fiber.

10.80.6.17-004 (after plugging gbic and reboot)

Conclusion

Based on this short experience, we can see that fiber is detected as soon as a GBIC adapter is plugged into the SFP28 interfaces. The ifcfg network scripts are totally independent of this choice. Therefore, there is no need to have the ODA em2 and em3 network connections in place to reimage and run configure-firstnet. These connections will be mandatory for the next step, when we create the appliance.

 

Cet article ODA, network interface and configure-firstnet est apparu en premier sur Blog dbi services.

How to shrink tables with on commit materialized views

Fri, 2018-07-27 08:44

Usually it is not possible to shrink tables which are used by on commit materialized views.

The result is an ORA-10652 “Object has on-commit materialized views” error, for which nothing is suggested in the action section.

There is a workaround for this error: convert all materialized views which rely on the table to be shrunk from on-commit to on-demand views. The application must tolerate that the affected materialized views are not updated while the table space is being shrunk.
The affected materialized views can be identified by querying dba_mviews and checking whether the SQL query of each materialized view uses the table to be shrunk.
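
One possible way to list the candidates is to join the dependency information with the refresh mode (a sketch; MYSCHEMA and MY_TABLE are placeholders):

select d.owner, d.name as mview_name
from   dba_dependencies d
join   dba_mviews m on m.owner = d.owner and m.mview_name = d.name
where  d.type = 'MATERIALIZED VIEW'
and    d.referenced_owner = 'MYSCHEMA'
and    d.referenced_name  = 'MY_TABLE'
and    m.refresh_mode = 'COMMIT';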

This gives the following procedure:
alter table table_name enable row movement;
alter materialized view materialized_view_name refresh on demand;
alter table table_name shrink space;
exec dbms_mview.refresh('materialized_view_name');
alter materialized view materialized_view_name refresh on commit;
alter table table_name disable row movement;

Note: for alter table enable row movement and alter table shrink space the table must be accessible, otherwise locks on the table may cause delays or errors.
The statements alter materialized view on demand, exec dbms_mview.refresh and alter materialized view on commit must be executed for every materialized view which uses the table to be shrunk.

 

Cet article How to shrink tables with on commit materialized views est apparu en premier sur Blog dbi services.

Running SQL Server containers on K8s Docker for Windows CE stable channel

Fri, 2018-07-27 07:33

The release of Docker for Windows Stable version 18.06.0-ce-win70 comes with some great new features I had been waiting for a while, including K8s support! That’s pretty good news, because this support has existed on the Edge channel since the beginning of this year, but there was no chance I would install a beta version on my laptop.

So, we now get a good opportunity to test our SQL Server image locally with a K8s single-node architecture.

 

blog 141 - 1 -docker 18.06.0-ce-win72

 

One interesting point here is that I may switch the K8s context as shown below:

blog 141 - 2 -docker k8s switch context

 

The first one (dbi8scluster) corresponds to my K8s cluster on Azure, which I wrote about some time ago, and the second one is my single K8s node on my Windows 10 laptop. So, switching between my different environments is very easy, as shown below:

[dab@DBI:$]> kubectl get nodes
NAME                       STATUS     ROLES     AGE       VERSION
aks-nodepool1-78763348-0   NotReady   agent     57d       v1.9.6
aks-nodepool1-78763348-1   NotReady   agent     57d       v1.9.6
C:\Users\dab\Desktop
[dab@DBI:$]> kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    1d        v1.10.3
C:\Users\dab\Desktop
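
For reference, switching is done with kubectl config; the context names below are the ones from my setup:

kubectl config get-contexts
kubectl config use-context docker-for-desktop
kubectl config use-context dbi8scluster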

 

It is also interesting to get a picture of the containers installed that run the K8s infrastructure on Docker. Assuming you have already enabled the “show system containers” option in advanced mode, you may display all the K8s-related containers as follows:

[dab@DBI:$]> docker ps --format "table {{.ID}}\t {{.Names}}"
CONTAINER ID         NAMES
829a5941592e         k8s_compose_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
53666469f25d         k8s_compose_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
cd3772216e72         k8s_sidecar_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8ae73505dfb0         k8s_dnsmasq_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8066fbefc371         k8s_kubedns_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2591d102e6fb         k8s_kube-proxy_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
80f6d997a225         k8s_POD_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
23751c4fd2fc         k8s_POD_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
f96406dedefb         k8s_POD_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
9149e9b91fd3         k8s_POD_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2316ed63e2ee         k8s_kube-controller-manager_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4
52defb42bbaf         k8s_kube-apiserver_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
3366ebf8f058         k8s_kube-scheduler_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
bd903b9dce3f         k8s_etcd_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
7e650673b6d2         k8s_POD_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
24e4bfb59184         k8s_POD_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
3422edb44165         k8s_POD_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
aeca6879906b         k8s_POD_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4

 

We find all the K8s components, including the API server, the controller manager, the K8s scheduler, kube-proxy and the etcd cluster database. It is not my intention to go further on this topic here; I will probably get the opportunity to dig into it in upcoming blog posts.

My first test consisted of deploying our dbi services custom development image of SQL Server 2017 on Linux on my K8s node. I had already done it with our dbi services production image on my previous K8s infrastructure on Azure, and it was interesting to check if we have to operate in the same way. This is at least what I expected, and I was right. Just note that I didn’t perform the same test with Windows containers yet, but hopefully it will come soon.

Obviously, I had to change my storage classes to StorageClassName = hostpath to point to my host storage as follows:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /T/Docker/DMK/BACKUP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

 

I just want to draw your attention to the hostPath, because the initial path has to be slightly modified to be understood by K8s. On the Windows side my path is T:/Docker/DMK/BACKUP and it contains a backup of my custom AdventureWorks database for our tests.

Here is the command to deploy my persistent volume and the corresponding persistent volume claim that will be used by my SQL Server pod:

[dab@DBI:$]> kubectl create -f .\docker_k8s_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created

 

Then my deployment file for the development image didn’t change much, as expected:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pv-claim-data-sql
      containers:
      - name: mssql
        image: dbi/dbi_linux_sql2017:2017-CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          value: "Password1"
          # valueFrom:
          #   secretKeyRef:
          #     name: mssql
          #     key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: "/backup"
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

 

The command to deploy my SQL Server pod and the corresponding service is:

[dab@DBI:$]> kubectl create -f .\docker_k8s_sql.yaml
deployment.apps "mssql-deployment" created
service "mssql-deployment" created

 

So, let’s get a picture of my newly deployed environment:

[dab@DBI:$]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           2m

[dab@DBI:$]> kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP          1d
mssql-deployment   LoadBalancer   10.109.238.88   localhost     1433:30200/TCP   2m

[dab@DBI:$]> kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
mssql-deployment-6c69bb6f7c-pqb2d   1/1       Running   0          7m        10.1.0.39   docker-for-desktop

 

Everything seems to be deployed successfully. I use a LoadBalancer service here, but bear in mind I only have one node. Here is a description of my newly deployed pod:

[dab@DBI:$]> kubectl describe pod mssql-deployment-6c69bb6f7c-pqb2d
Name:           mssql-deployment-6c69bb6f7c-pqb2d
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Fri, 27 Jul 2018 12:45:50 +0200
Labels:         app=mssql
                pod-template-hash=2725662937
Annotations:    <none>
Status:         Running
IP:             10.1.0.39
Controlled By:  ReplicaSet/mssql-deployment-6c69bb6f7c
Containers:
  mssql:
    Container ID:   docker://a17039dcdcb22c1b4b80c73fb17e73df90efda19ed77e215ef92bf86c9bfc538
    Image:          dbi/dbi_linux_sql2017:2017-CU4
    Image ID:       docker://sha256:2a693c121c33c390f944df7093b8c902f91940fa966ae8e5190a7a4a5b0681d2
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 27 Jul 2018 12:45:51 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  Password1
      DMK:                Y
    Mounts:
      /backup from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qts2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-claim-data-sql
    ReadOnly:   false
  default-token-5qts2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qts2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              3m    default-scheduler            Successfully assigned mssql-deployment-6c69bb6f7c-pqb2d to docker-for-desktop
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pv-data-sql"
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-5qts2"
  Normal  Pulled                 3m    kubelet, docker-for-desktop  Container image "dbi/dbi_linux_sql2017:2017-CU4" already present on machine
  Normal  Created                3m    kubelet, docker-for-desktop  Created container
  Normal  Started                3m    kubelet, docker-for-desktop  Started container

 

And finally, let’s have a look at a sample of my SQL Server log:

========================== 2018-07-27 10:46:43 Restoring AdventureWorks database OK ==========================
========================== 2018-07-27 10:46:43 Installing TSQLt ==========================
2018-07-27 10:46:44.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
2018-07-27 10:46:47.39 spid51      Starting up database 'dbi_tools'.
2018-07-27 10:46:47.72 spid51      Parallel redo is started for database 'dbi_tools' with worker pool size [2].
2018-07-27 10:46:47.74 spid51      Parallel redo is shutdown for database 'dbi_tools' with worker pool size [2].
Installed at 2018-07-27 10:46:47.863
2018-07-27 10:46:48.54 spid51      AppDomain 3 (dbi_tools.dbo[runtime].2) created.

(1 rows affected)

+-----------------------------------------+
|                                         |
| Thank you for using tSQLt.              |
|                                         |
| tSQLt Version: 1.0.5873.27393           |
|                                         |
+-----------------------------------------+
0
========================== 2018-07-27 10:46:49 Installing TSQLt OK ==========================
======= 2018-07-27 10:46:49 MSSQL CONFIG COMPLETED =======
2018-07-27 10:51:03.35 spid55      Using 'dbghelp.dll' version '4.0.5'
2018-07-27 10:51:10.32 spid55      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-07-27 10:51:10.40 spid55      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.

 

My SQL Server pod is running with my AdventureWorks database restored from the backup mounted under the /backup path inside the container. We also added the tSQLt framework to the image for our unit tests.

Let’s connect to my SQL Server pod by using the mssql-cli tool:

C:\WINDOWS\system32>mssql-cli -S localhost -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master>

Time: 0.000s
master> select name from sys.databases;
+--------------------+
| name               |
|--------------------|
| master             |
| tempdb             |
| model              |
| msdb               |
| AdventureWorks_dbi |
| dbi_tools          |
+--------------------+
(6 rows affected)
Time: 0.406s

 

My first deployment is successful. There are plenty of topics to cover about K8s and SQL Server containers, and other write-ups will come soon for sure. Stay tuned!

 

 

 

 

Cet article Running SQL Server containers on K8s Docker for Windows CE stable channel est apparu en premier sur Blog dbi services.

How to install MariaDB 10.2.9 from binary tarball on CentOS 7 ?

Thu, 2018-07-26 03:22

 

In this blog, I will show you how to install MariaDB from the binary tarball. The installation will be done on a VirtualBox machine (2 GB RAM, 20 GB of disk).

There are several ways to install MariaDB on your Linux:

  • rpm
  • binary tarball
  • building it from source

Prerequisites:

First, ssh to your Linux server.

Update it:

 [root@deploy mariadb]$ yum update -y

Install wget:

[root@deploy mariadb]$ yum install wget
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch

Remove all existing MariaDB packages:

[root@deploy ~]$ rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64
[root@deploy ~]$ yum remove mariadb-libs-5.5.56-2.el7.x86_64
Loaded plugins: fastestmirror
Resolving Dependencies

Create the directory for the binaries

[root@deploy ~]$ mkdir /mariadbBinaries

Create the directory for the data:

[root@deploy ~]$ mkdir /mariadbData/mysqld1 -p

Create a directory where we are going to put the my.cnf configuration file for MariaDB:

[root@deploy ~]$ mkdir /mariadbConf
[root@deploy ~]$ touch /mariadbConf/my.cnf

Create the mysql group and user:

[root@deploy ~]$ groupadd mysql
[root@deploy ~]$ useradd -g mysql mysql
[root@deploy ~]$ passwd mysql
[root@deploy ~]$ echo "mysql ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Change the ownership of the directories to mysql:

[root@deploy ~]$ chown -R mysql:mysql /mariadbBinaries/ /mariadbConf/ /mariadbData/

Switch to the mysql user and download the tarball:

[root@deploy ~]$ su - mysql
[mysql@deploy ~]$ cd /mariadbBinaries/
[mysql@deploy mariadbBinaries]$ wget https://downloads.mariadb.org/interstitial/mariadb-10.2.9/bintar-linux-x86_64/mariadb-10.2.9-linux-x86_64.tar.gz

Check that the tarball is there:

[mysql@deploy mariadbBinaries]$ ls -l
total 442528
-rw-rw-r--. 1 mysql mysql 453146319 Sep 25 22:39 mariadb-10.2.9-linux-x86_64.tar.gz

Let’s extract the tarball:

[mysql@deploy mariadbBinaries]$ tar zxvf mariadb-10.2.9-linux-x86_64.tar.gz
(omitted output)

Create a symbolic link from /etc/my.cnf to /mariadbConf/my.cnf:

[mysql@deploy ~]$ sudo ln -s /mariadbConf/my.cnf /etc/my.cnf
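
At this point the my.cnf file is still empty. A minimal sketch of what it could contain, assuming the directories created above and the extracted 10.2.9 basedir (values to be adapted to your environment):

[mysqld]
basedir = /mariadbBinaries/mariadb-10.2.9-linux-x86_64
datadir = /mariadbData/mysqld1
socket  = /mariadbData/mysqld1/mysqld1.sock
port    = 3306
user    = mysql

[client]
socket  = /mariadbData/mysqld1/mysqld1.sock
port    = 3306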



 

 

Cet article How to install MariaDB 10.2.9 from binary tarball on CentOS 7 ? est apparu en premier sur Blog dbi services.

How to deploy an AWS infrastructure using CloudFormation?

Thu, 2018-07-26 03:13

What is CloudFormation?

As you know, one of the big challenges of cloud computing is to deploy infrastructure as fast as possible and in the easiest possible way.
Of course, this challenge is largely met, but when it comes to deploying a large infrastructure we need an additional tool, called an orchestration tool, whether it is native to the cloud provider or not. Below is an overview of some cloud providers and their native orchestration tools:
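
Focusing on AWS, the native orchestration tool is CloudFormation: infrastructure is described declaratively in a JSON or YAML template and deployed as a stack. A minimal sketch (the bucket name is just an example) could look like this, deployed with the aws CLI:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal CloudFormation template creating a single S3 bucket
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-demo-bucket-cloudformation

aws cloudformation create-stack --stack-name demo-stack --template-body file://demo.yaml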

 

 

 

Cet article How to deploy an AWS infrastructure using CloudFormation? est apparu en premier sur Blog dbi services.

SQL Server on Linux – I/O internal thoughts

Tue, 2018-07-24 07:55

Let’s start the story from the beginning, with some fun testing I performed with SQL Server on Linux a couple of months ago. At that time, I wanted to get a picture of the syscalls issued by SQL Server, as I had already done in the past on the Windows side with sysinternals tools such as procmon and stack traces. On Linux, strace is probably one of the best counterparts.

blog 140 - 0 - 0 -  banner

Firstly, please note this blog post is based on my own research only; it does not in any way constitute official Microsoft documentation and may well contain some inaccuracies. This kind of write-up is definitely not easy, especially when you’re not a developer of the product, and because things change quickly nowadays, your blog post may quickly become outdated :) Anyway, I was just curious to figure out how SQL Server deals with I/O on the Linux side, and the main point here is certainly to show how you may investigate it on Linux. So let’s start from the beginning with what we already know on the Windows operating system: the SQL Server engine goes through the Win32 API and functions like CreateFile(), ReadFile() and WriteFile() to deal with I/O, but let’s focus specifically on the CreateFile() function here. CreateFile() is used to create or to open an existing file or an I/O device with some specific flags. Some of them, such as FILE_FLAG_WRITE_THROUGH, are used to meet the Write-Ahead Logging (WAL) protocol by bypassing all system / disk caches (cf. Microsoft article).

On April 10, 2018 I did my first tests on the Linux side with SQL Server 2017 CU5, and here was my first strace output after creating the dbi_test database:

blog 140 - 0 - 2 - strace stack twitter

blog 140 - 0 - 1 - strace stack

It was an expected output for me, because on Linux SQL Server uses the low-level open() system call – the counterpart of CreateFile() on Windows – but the surprising thing was that the files were opened with the O_DIRECT flag only. I’m not a system developer, but from my position I can understand the benefits of using O_DIRECT with database systems, because it plays well with AIO (asynchronous I/O) and completely bypasses the kernel space buffers (used by default on Linux). I take the opportunity to thank @dbaffaleuf for our interesting discussions on this topic.

But referring to the documentation, we may also read the following excerpt about O_DIRECT:

File I/O is done directly to/from user-space buffers. The O_DIRECT flag on its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary metadata are transferred

From my understanding, using O_DIRECT alone implies that durable writes on block devices are not guaranteed, and from my trace I noticed the transaction log seemed to be opened with O_DIRECT only in this case …

2837  20:44:32 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 187
2837  10:13:09 fstat(187, {st_mode=S_IFREG|0660, st_size=0, ...}) = 0

 

… From a WAL protocol perspective, using such a flag might not meet the requirements, because we might experience data loss in case of a system / storage outage, unless either another low-level mechanism like fsync() is implemented for the transaction log and checkpoints, or the storage guarantees that writes are O_DIRECT safe. At this stage I expected to find more fsync()-related entries in my trace, but no luck, as shown below (I only include a sample, but in fact there were no other relevant syscalls in the entire trace that might indicate any forced synchronization):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 72.59   12.993418        2089      6220      2503 futex
 10.78    1.929811        7509       257           epoll_wait
  9.08    1.625657       47813        34        28 restart_syscall
  3.28    0.587733        8643        68           io_getevents
  1.97    0.351806       70361         5           nanosleep
  1.92    0.344254       34425        10        10 rt_sigtimedwait
  0.32    0.056943           3     22116           clock_gettime
  0.02    0.003530         101        35           munmap
  0.01    0.002149          32        67           io_submit
  0.01    0.001706           6       273           epoll_ctl
  0.01    0.000897           6       154           read
  0.00    0.000765          11        68           writev
  0.00    0.000605           4       136        68 readv
  0.00    0.000591           4       137           write
  0.00    0.000381          10        40        23 open
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.900633                 29827      2638 total
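
As a side note, a quick way to check whether any synchronization-related syscalls show up in a raw (non-summarized) strace output is a simple grep; the file name below is just the one I used for the raw trace:

grep -cE "fsync|fdatasync|sync_file_range" sql_strace_raw.txt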

 

A couple of weeks ago, when I updated my test environment to SQL Server 2017 CU8 (on Linux), I noticed the following messages (in the red rectangle):

blog 140 - 2 - forceflush

 

Hmm… that was pretty new and interesting, and in fact the aforementioned messages were related to this Microsoft article (thanks @Bobwardms for pointing it out to me). This new behavior has been available since SQL Server 2017 CU6, and the article describes how Microsoft has introduced a change with a new “forced flush” mechanism for SQL Server on Linux systems.

In a nutshell, for all scenarios a new flush mechanism guarantees that data is safely written to stable media for transaction logs and during checkpoints. Let’s dig further into both methods.

Let me first say that I applied the same pattern for all the tests that follow. The test protocol included one dbo.t1 table with only one column (id int). For each test I inserted a bunch of rows (67 rows to be precise, corresponding to 67 distinct implicit transactions) without any other concurrent user activity. There is still some internal activity from SQL Server, but I guess we may consider it negligible compared to my tests.

insert dbo.t1 values (1)
go 67

 

  • Default forced flush mechanism behavior

 

In this scenario, referring to my strace output, database files are still opened with O_DIRECT only, as shown below (my database’s name is toto this time):

4745  10:54:34 open("/u01/sqlserverdata/mssqlserver/toto.mdf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 188

 

I also used the following command to directly get a summary of the number of calls per syscall:

$ sudo strace -f -c -o sql_strace.txt $(pidof sqlservr |sed 's/\([0-9]*\)/\-p \1/g')

 

Here is the sample output I got:

$ cat sql_strace.txt
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 71.36   24.078231        2685      8969      3502 futex
 12.13    4.093680      120402        34        30 restart_syscall
 10.03    3.384817       12583       269           epoll_wait
  2.67    0.901541       13258        68           io_getevents
  1.77    0.598830       46064        13        13 rt_sigtimedwait
  1.66    0.560893       93482         6           nanosleep
  0.23    0.077873           2     31422           clock_gettime
  0.09    0.030924         462        67           fsync
  0.01    0.003212         321        10           madvise
  0.01    0.003026          22       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   33.742080                 42195      3651 total

 

This time I noticed additional calls to fsync() to guarantee writes to the blocks – 67 calls that seem to be related to my 67 transactions, right? I double-checked in my strace output and it seems to be the case, but I may be wrong on this point, so please feel free to comment. Another interesting point is that I also continued to notice asynchronous I/O through the io_getevents function that appeared in my trace. That makes sense to me: writes of data pages are asynchronous while those to the transaction log are synchronous by design. In this mode, fsync() is triggered during transaction commits and checkpoints.

 

  • Enabling trace flag 3979

Enabling trace flag 3979 disables the forced flush behavior, which is replaced by the writethrough and alternatewritethrough options. Referring to the Microsoft article, the former translates the well-known FILE_FLAG_WRITE_THROUGH flag requests into O_DSYNC opens, while some performance optimization is applied by using fdatasync() rather than fsync(). Indeed, fdatasync() is supposed to generate less I/O activity because it does not require synchronizing the file metadata (only the data portion of the file is concerned here).

Anyway, my strace sample output below confirmed that both the data file and the transaction log were opened with both O_DIRECT and O_DSYNC, meaning we are bypassing the kernel buffer space and also forcing synchronous I/O for both files. Does it matter? Well, from my understanding, writing dirty data pages is still an asynchronous process when checkpoints occur. How much better does it perform in this case? I don’t know, and it will likely require another round of thorough tests on different use cases.

5279  11:07:36 open("/u01/sqlserverdata/mssqlserver/dbi_test.mdf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185
…
5279  11:07:38 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185

 

As previously, I noticed 67 calls to fdatasync() (and no longer fsync() here), likely related to my 67 implicit transactions, and I still continued to notice asynchronous I/O driven by the io_getevents() and io_submit() syscalls.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 73.90   12.678046        2360      5371      1927 futex
 10.80    1.853571        7788       238           epoll_wait
  9.05    1.551962       45646        34        30 restart_syscall
  1.93    0.331275       47325         7         7 rt_sigtimedwait
  1.84    0.316524      105508         3           nanosleep
  1.61    0.276806        4131        67           io_getevents
  0.71    0.121815           7     18259           clock_gettime
  0.06    0.010184         152        67           fdatasync
  0.02    0.003851          26       146           read
  0.02    0.003217          12       272           epoll_ctl
  0.01    0.002139          32        67           io_submit
  0.01    0.002070          15       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.156630                 25170      2054 total

 

 

Finally, at the moment of this write-up, Microsoft recommends enabling trace flag 3979 as well as changing the default values of the writethrough and alternatewritethrough options to 0 to revert back to the old behavior from before CU6, illustrated in the first section of this blog post, but only in case your storage guarantees that your writes will be “O_DIRECT” safe. I think you may understand why now – provided my understanding of Linux I/O is correct :)

See you!

 

 

Cet article SQL Server on Linux – I/O internal thoughts est apparu en premier sur Blog dbi services.

Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries

Tue, 2018-07-24 05:57

Good news, the latest Patchset for Oracle 12cR2 (which is not named a patchset anymore but is actually called release 18c and numbered 18.0.0.0.0) is available for download on OTN. It is great because the OTN download does not require access to Support and Software Updates. It is available to anybody under the Free Developer License Terms (basically development, testing, prototyping, and demonstrating for an application that is not in production and for non-commercial use). We all complained about the ‘Cloud First’ strategy because we were eager to download the latest version. But the positive aspect of it is that we now have on OTN a release that has been stabilized after a few release updates. In the past, only the first version of the latest release was available there. Now we have one with many bugs fixed.

Of course, I didn’t wait and tested 18c as soon as it was available on the Oracle Cloud, thanks to the ACE Director program that provided me with some Cloud Credits. In this post, I’ll compare my Cloud database with a database running on the on-premises binaries. Because that’s the big strength of Oracle: we can run the same software, 100% compatible, on the Cloud and on our own servers. There are some limitations in the features available, but technically it is the same software.

Oracle Cloud First

Here is my Cloud version of Oracle 18c, installed in February (18.1) and updated in April (18.2) and July (18.3):

SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 
SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> select banner_full from v$version;
 
BANNER_FULL
----------------------------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> select banner_legacy from v$version;
 
BANNER_LEGACY
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

LINUX.X64_180000_db_home.zip

I have installed the on-premises 18c available on OTN. The good things with the new releases are:

  • No need to extract installer files. Just unzip the Oracle Home image and link the executables (see the sketch after this list)
  • This Oracle Home image already includes the latest Release Updates
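
For reference, a minimal sketch of such an image-based installation (paths are assumptions and have to be adapted):

mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
cd /u01/app/oracle/product/18.0.0/dbhome_1
unzip -q /path/to/LINUX.X64_180000_db_home.zip
./runInstaller    # add -silent -responseFile <file> for a silent installation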


SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 

We have 4 updates from July here for the following components:

  • The Database (28090523)
  • The Java in the Oracle Home, aka JDK (27908644)
  • The Java in the database, aka OJVM (27923415)
  • The clusterware component for the database to match the CRS, aka OCW (28090553)

So, now we have an image of the Oracle Home which already contains all the latest updates… except one:

$ cat $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/version.properties
 
COMPANY=Oracle
PRODUCT=SQL Developer
VERSION=17.30003410937f
VER=17.3.2
VER_FULL=17.3.2.341.0937
BUILD_LABEL=341.0937
BUILD_NUM=341.0937
EDITION=

Unfortunately, that’s an old version of SQL Developer here, and there is no SQLcl. So just download the latest one additionally and unzip it in the Oracle Home.

DataPatch

So, what happens when I open the database that I created on 18.1 and patched with the 18.2 and 18.3 RUs on the Oracle Cloud? There are two SQL updates for the database (DBRU and OJVM). The DBRU is already there, so datapatch only has to apply the OJVM one:

[oracle@VM183x dbhome_1]$ $ORACLE_HOME/OPatch/datapatch
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 10:57:55 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
 
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11104_2018_07_24_10_57_55/sqlpatch_invocation.log
 
Connecting to database...OK
Gathering database info...done
 
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
 
Bootstrapping registry and package to current versions...done
Determining current state...done
 
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Not installed
PDB PDB$SEED: Not installed
PDB PDB1: Not installed
 
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.20.321353 AM
PDB PDB$SEED:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.21.802495 AM
PDB PDB1:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.23.230513 AM
 
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDBSEED_2018Jul24_10_58_56.log (no errors)
Patch 27923415 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDB1_2018Jul24_10_58_56.log (no errors)
SQL Patching tool complete on Tue Jul 24 10:59:21 2018

Now here is the history of patches:

SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
Patch Id : 27923415
Action : APPLY
Action Time : 24-JUL-2018 10:59:19
Description : OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

This is all good. Despite the different release schedules, the level of software is exactly the same. And we can start on-premises on a release with low regression risk (18c like a patchset) but many fixes (several release updates). For the moment only the Linux port is there. The other platforms should come this summer.

 

Cet article Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries est apparu en premier sur Blog dbi services.

Syncing Active Directory users and groups to PostgreSQL

Mon, 2018-07-23 10:55

A lot of companies use Active Directory to manage their users and groups. What most of these companies also want to do is manage their database users and groups in Active Directory. PostgreSQL comes with LDAP/Kerberos authentication by default but does not provide anything that helps with managing users and groups in an external directory. And even for the authentication, the user already needs to exist in PostgreSQL. One tool you might want to have a look at that helps with this requirement is pg-ldap-sync.

As usual, I am using CentOS 7 for the scope of this post. To get pg-ldap-sync onto the system, PostgreSQL needs to be installed, as pg_config is expected to be there. Once you have that, several packages need to be installed (openldap-clients is not required, but it is handy to have just in case you want to test some ldapsearch commands against Active Directory):

[root@pgadsync ~]$ yum install -y ruby rubygem-rake rubygems ruby-devel openldap-clients git

pg-ldap-sync can either be installed directly with ruby commands or you can install it from Git:

[postgres@pgadsync ~]$ git clone https://github.com/larskanis/pg-ldap-sync.git
[postgres@pgadsync ~]$ cd pg-ldap-sync
[postgres@pgadsync pg-ldap-sync]$ gem install bundler
[postgres@pgadsync pg-ldap-sync]$ bundle install
[postgres@pgadsync pg-ldap-sync]$ bundle exec rake install
[postgres@pgadsync pg-ldap-sync]$ which pg_ldap_sync 
~/bin/pg_ldap_sync
[postgres@pgadsync pg-ldap-sync]$ cd ..
[postgres@pgadsync ~]$ bin/pg_ldap_sync --help
Usage: bin/pg_ldap_sync [options]
    -v, --[no-]verbose               Increase verbose level
    -c, --config FILE                Config file [/etc/pg_ldap_sync.yaml]
    -t, --[no-]test                  Don't do any change in the database

And then, of course, you need something in the Active Directory for synchronization. In my test Active Directory I create a new “Organizational Unit” called “PostgreSQL”:

Selection_001

Inside this “Organizational Unit” there is a user which is used for authenticating against Active Directory:
Selection_002

Then we have two other “Organizational Units”, one for the PostgreSQL DBAs and one for the groups we’d like to sync:
Selection_003

There are three people in the pgadmins unit:
Selection_004

There is one group in the groups unit:
Selection_005

… and the group has two members:
Selection_006

This is what we want to synchronize to PostgreSQL. The final requirement is that two roles need to exist in PostgreSQL (you’ll see later why that is important):

postgres@pgbox:/home/postgres/ [PG10] psql -X postgres
psql (10.3)
Type "help" for help.

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# create role ldap_users;
CREATE ROLE
postgres=# create role ldap_groups;
CREATE ROLE
postgres=# \du
                                    List of roles
  Role name  |                         Attributes                         | Member of 
-------------+------------------------------------------------------------+-----------
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# 

With pg-ldap-sync each instance you want to have synchronized needs a separate yaml file like this one:

# With this sample config the distinction between LDAP-synchronized
# groups/users from is done by the membership to ldap_user and
# ldap_group. These two roles has to be defined manally before
# pg_ldap_sync can run.

# Connection parameters to LDAP server
# see also: http://net-ldap.rubyforge.org/Net/LDAP.html#method-c-new
ldap_connection:
  host: 172.22.30.1
  port: 389
  auth:
    method: :simple
    username: CN=pgadsync,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
    password: xxxxx
#  encryption:
#    method: :simple_tls

# Search parameters for LDAP users which should be synchronized
ldap_users:
  base: OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  # LDAP filter (according to RFC 2254)
  # defines to users in LDAP to be synchronized
#  filter: (&(objectClass=person)(objectClass=organizationalPerson)(givenName=*)(sn=*)(sAMAccountName=*))
  filter: (sAMAccountName=*)
  # this attribute is used as PG role name
  name_attribute: sAMAccountName
  # lowercase name for use as PG role name
  lowercase_name: true

# Search parameters for LDAP groups which should be synchronized
ldap_groups:
  base: OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  filter: (cn=dbas)
  # this attribute is used as PG role name
  name_attribute: cn
  # lowercase name for use as PG role name
  lowercase_name: false
  # this attribute must reference to all member DN's of the given group
  member_attribute: member

# Connection parameters to PostgreSQL server
# see also: http://rubydoc.info/gems/pg/PG/Connection#initialize-instance_method
pg_connection:
  host: 192.168.22.99
  dbname: postgres
  user: postgres
  password: postgres

pg_users:
  # Filter for identifying LDAP generated users in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_users')
  # Options for CREATE RULE statements
  create_options: LOGIN IN ROLE ldap_users

pg_groups:
  # Filter for identifying LDAP generated groups in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_groups')
  # Options for CREATE RULE statements
  create_options: NOLOGIN IN ROLE ldap_groups
#grant_options:

When you have a look at the “pg_users” and “pg_groups” sections, you will notice why the two PostgreSQL roles created above are required. They are used to distinguish the users and groups coming from the directory from those created locally.

Ready to sync:

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:23:46.350588 #29270]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.360073 #29270]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.363133 #29270]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.474105 #29270]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.517468 #29270]  INFO -- : user stat: create: 3 drop: 0 keep: 0
I, [2018-07-23T14:23:46.517798 #29270]  INFO -- : group stat: create: 1 drop: 0 keep: 0
I, [2018-07-23T14:23:46.518047 #29270]  INFO -- : membership stat: grant: 2 revoke: 0 keep: 0
I, [2018-07-23T14:23:46.518201 #29270]  INFO -- : SQL: CREATE ROLE "dba1" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.522229 #29270]  INFO -- : SQL: CREATE ROLE "dba2" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.525156 #29270]  INFO -- : SQL: CREATE ROLE "dba3" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.528058 #29270]  INFO -- : SQL: CREATE ROLE "dbas" NOLOGIN IN ROLE ldap_groups
I, [2018-07-23T14:23:46.531065 #29270]  INFO -- : SQL: GRANT "dbas" TO "dba3","dba1" 

… and that’s it. Users and groups are now available in PostgreSQL:

postgres=# \du
                                        List of roles
  Role name  |                         Attributes                         |     Member of     
-------------+------------------------------------------------------------+-------------------
 dba1        |                                                            | {ldap_users,dbas}
 dba2        |                                                            | {ldap_users}
 dba3        |                                                            | {ldap_users,dbas}
 dbas        | Cannot login                                               | {ldap_groups}
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

When you add another user to the directory:

Selection_007

… and run the sync again, the existing users will of course not be touched, but the new one gets created (notice that I created dba4 as a copy of an existing user in the directory, which is why the user is a member of the dbas group):

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:27:26.314729 #29273]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.323719 #29273]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.326764 #29273]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.328800 #29273]  INFO -- : found user-dn: CN=dba4,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.394066 #29273]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.434236 #29273]  INFO -- : found pg-user: "dba1"
I, [2018-07-23T14:27:26.434443 #29273]  INFO -- : found pg-user: "dba2"
I, [2018-07-23T14:27:26.434531 #29273]  INFO -- : found pg-user: "dba3"
I, [2018-07-23T14:27:26.439065 #29273]  INFO -- : found pg-group: "dbas" with members: ["dba3", "dba1"]
I, [2018-07-23T14:27:26.439357 #29273]  INFO -- : user stat: create: 1 drop: 0 keep: 3
I, [2018-07-23T14:27:26.439468 #29273]  INFO -- : group stat: create: 0 drop: 0 keep: 1
I, [2018-07-23T14:27:26.439656 #29273]  INFO -- : membership stat: grant: 1 revoke: 0 keep: 2
I, [2018-07-23T14:27:26.439759 #29273]  INFO -- : SQL: CREATE ROLE "dba4" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:27:26.441692 #29273]  INFO -- : SQL: GRANT "dbas" TO "dba4" 

Two more tips: when you want the complete LDAP path of a user, you can get it like this:
Selection_008

It is also advisable to test the filters you have in the YAML file, for example:

[postgres@pgadsync ~]$ ldapsearch -x -h 172.22.30.1 -D "pgadsync@test.dbiservices.com" -W "(sAMAccountName=*)" -b "OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com"  | grep sAMAccountName
Enter LDAP Password: 
# filter: (sAMAccountName=*)
sAMAccountName: dba1
sAMAccountName: dba2
sAMAccountName: dba3
sAMAccountName: dba4

You might wonder how you can assign the permissions then. Just pre-create the role and give the permissions you want:

postgres=# drop role dbas;
DROP ROLE
postgres=# create role dbas in role ldap_groups;
CREATE ROLE
postgres=# grant CONNECT ON DATABASE postgres to dbas;
GRANT

The assignments to that group will come from the directory once you run the next synchronization.

Hope that helps …

 

The article Syncing Active Directory users and groups to PostgreSQL appeared first on Blog dbi services.

New features and known issues with RMAN tool on Oracle database 12.1.0.2

Fri, 2018-07-20 04:50

Oracle Database 12c has new enhancements and additions in Recovery Manager (RMAN).
The RMAN tool continues to enhance and extend the reliability, efficiency, and availability of Oracle Database Backup and Recovery.
Below, I will mention a couple of new features of the RMAN DUPLICATE command, but also how to avoid issues that can happen with the creation of the temporary files.

FEATURES:

<INFO>Using the BACKUPSET clause:

In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1 it is also possible to perform active duplicates using backup sets by including the USING BACKUPSET clause.
Compared to the image copy method, the unused block compression associated with a backup set reduces the amount of data pulled across the network.
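
For example, the command looks like this (a minimal sketch; the connection setup is omitted and the auxiliary name is a placeholder, the full command used in this post is shown further down):

RMAN> duplicate target database to DUP from active database using backupset;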

<INFO>Using the SECTION SIZE clause:

The SECTION SIZE clause splits large datafiles into sections so that several channels can restore them in parallel; it therefore works together with the configured parallel degree and the size of the datafiles.
In my case I have configured the parallel degree to 6:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

Starting restore at 19-JUL-2018 14:11:06
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
channel ORA_AUX_DISK_3: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_3: restoring section 2 of 7

------
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_2: restoring section 7 of 7

ISSUES :
<WARN>When duplicating on 12cR1, the creation of the temp files is not handled correctly.
Whether duplicating from active database or from backup with Oracle 12cR1, you can run into issues with the temporary files.

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

----------------------------------------
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 19-JUL-2018 14:26:09

<INFO>Querying v$tempfile will not reveal any error:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

<INFO>But when querying dba_temp_files, or when running transactions that need the temporary tablespace, you will get:

SQL> select * from dba_temp_files;
select * from dba_temp_files
              *
ERROR at line 1:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/u02/oradata/<AUXILIARY>/temp01.dbf'

Solution 1: drop and recreate your temporary tablespace(s) manually, which could be tedious if you have several of them, OR
Solution 2: drop the temp files of your <to_be_cloned_DB> on the OS side before launching the duplicate. For more details you can consult MOS note 2250889.1:

SQL> col TABLESPACE_NAME format a50;
SQL> col file_name format a50;
SQL> select file_name,TABLESPACE_NAME from dba_temp_files;

FILE_NAME                                          TABLESPACE_NAME
-------------------------------------------------- --------------------------------------------------
/u02/oradata/<AUXILIARY>/temp01.dbf                       TEMP

SQL>startup nomount;

rm -rf /u02/oradata/<AUXILIARY>/temp01.dbf

 

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

At the end of the duplicate, you should be able to use the database without any action performed against the temp files:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

Additionally, if your auxiliary DB is managed by Oracle Grid Infrastructure, you need to remove it from Grid during these actions and add it back once you have finished (a sketch of the re-registration is shown at the end of this section).

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;
alter system set db_unique_name='PROD_SITE2' scope=spfile
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

--remove from GRID
[grid@dbisrv02 ~]$ srvctl stop database -d PROD
[grid@dbisrv02 ~]$ srvctl remove database -d PROD
Remove the database PROD? (y/[n]) Y

SQL> startup
ORACLE instance started.

Total System Global Area  788529152 bytes
Fixed Size                  2929352 bytes
Variable Size             314576184 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;

System altered.
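
Once the duplicate is finished and db_unique_name is set, the database can be registered again in Grid Infrastructure. A minimal sketch (the Oracle home, db_name and db_unique_name are placeholders to adapt to your environment):

--add back to GRID
[grid@dbisrv02 ~]$ srvctl add database -d PROD_SITE2 -n PROD -o /u01/app/oracle/product/12.1.0/dbhome_1
[grid@dbisrv02 ~]$ srvctl start database -d PROD_SITE2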
 

The article New features and known issues with RMAN tool on Oracle database 12.1.0.2 appeared first on Blog dbi services.

Installing ZFS on OEL7 UEK4 for Docker storage

Thu, 2018-07-19 23:08

The Oracle Database is fully supported on Docker provided that Linux is Red Hat Enterprise Linux 7 or Oracle Enterprise Linux 7 with Unbreakable Enterprise Kernel 4. This is documented in MOS Note 2216342.1. Given the size of an Oracle database (gigabytes even when empty), the way it is installed at image build time with many file updates, and the per-block modifications of the datafiles, a block-level copy-on-write filesystem is a must, and deduplication and compression are appreciated. This makes ZFS a good option for the Docker storage driver, but also for the external volumes. By the way, the Docker documentation about the storage drivers mentions that zfs is a good choice for high-density workloads such as PaaS, and this of course includes Database as a Service.

I’ve run this example on OEL 7.2 created in the Oracle Cloud:
CaptureCreateInstance

kernel-uek-devel

We need to install the kernel headers. Of course, it is probably better to run a ‘yum update’ and reboot in order to run the latest kernel.
Here, I’m just installing the headers for the current kernel:

[root@localhost opc]# yum -y install kernel-uek-devel-$(uname -r)
...
Installed:
kernel-uek-devel.x86_64 0:4.1.12-112.14.13.el7uek
Dependency Installed:
cpp.x86_64 0:4.8.5-28.0.1.el7_5.1 gcc.x86_64 0:4.8.5-28.0.1.el7_5.1 glibc-devel.x86_64 0:2.17-222.el7
glibc-headers.x86_64 0:2.17-222.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 libdtrace-ctf.x86_64 0:0.8.0-1.el7
libmpc.x86_64 0:1.0.1-3.el7 mpfr.x86_64 0:3.1.1-4.el7
Dependency Updated:
glibc.x86_64 0:2.17-222.el7 glibc-common.x86_64 0:2.17-222.el7 libgcc.x86_64 0:4.8.5-28.0.1.el7_5.1
libgomp.x86_64 0:4.8.5-28.0.1.el7_5.1

DKMS

We need Dynamic Kernel Module Support to load ZFS modules. I had problems in the past with this so I install it step by step to verify that everything is ok. First, enable the EPEL repository:

[root@localhost opc]# yum install -y yum-utils
[root@localhost opc]# yum-config-manager --enable ol7_developer_EPEL

Then install DKMS:

[root@localhost opc]# yum -y install dkms
...
Installed:
dkms.noarch 0:2.4.0-1.20170926git959bd74.el7
Dependency Installed:
elfutils-default-yama-scope.noarch 0:0.170-4.el7 elfutils-libelf-devel.x86_64 0:0.170-4.el7
kernel-debug-devel.x86_64 0:3.10.0-862.9.1.el7 zlib-devel.x86_64 0:1.2.7-17.el7
Dependency Updated:
elfutils-libelf.x86_64 0:0.170-4.el7 elfutils-libs.x86_64 0:0.170-4.el7 zlib.x86_64 0:1.2.7-17.el7

Install ZFS repository

There is a zfs-release package that installs the /etc/yum.repos.d/zfs.repo:

[root@localhost opc]# sudo rpm -Uvh http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
Retrieving http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
warning: /var/tmp/rpm-tmp.yvRURo: Header V4 RSA/SHA256 Signature, key ID f14ab620: NOKEY
Preparing... ################################# [100%] Updating / installing...
1:zfs-release-1-5.el7_4 ################################# [100%]

Basically, all it contains is the following enabled section:

[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.4/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

Install ZFS

This is the important part, installing ZFS:

[root@localhost opc]# sudo yum install -y zfs
...
======================================================================================================================
Package Arch Version Repository Size
======================================================================================================================
Installing:
zfs x86_64 0.7.9-1.el7_4 zfs 413 k
Installing for dependencies:
kernel-devel x86_64 3.10.0-862.9.1.el7 ol7_latest 16 M
libnvpair1 x86_64 0.7.9-1.el7_4 zfs 30 k
libuutil1 x86_64 0.7.9-1.el7_4 zfs 35 k
libzfs2 x86_64 0.7.9-1.el7_4 zfs 130 k
libzpool2 x86_64 0.7.9-1.el7_4 zfs 591 k
lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 ol7_latest 41 k
spl x86_64 0.7.9-1.el7_4 zfs 29 k
spl-dkms noarch 0.7.9-1.el7_4 zfs 456 k
sysstat x86_64 10.1.5-13.el7 ol7_latest 310 k
zfs-dkms noarch 0.7.9-1.el7_4 zfs 4.9 M

The most important is to check that the zfs module is installed correctly:

zfs.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.1.12-112.14.13.el7uek.x86_64/extra/

I’ve seen cases where it was not and then the module cannot load. You can also check:

[root@localhost opc]# dkms status
spl, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed
zfs, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed

If you have a problem (such as “modprobe: FATAL: Module zfs not found” when loading the module), check the status and maybe re-install it with:

dkms remove zfs/0.7.9 --all
dkms --force install zfs/0.7.9

If everything is ok, you can load the module:

[root@localhost opc]# /sbin/modprobe zfs
[root@localhost opc]#

Create a ZFS filesystem

If the ZFS module was not loaded you have this error:

[root@localhost opc]# zpool list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

If it has been loaded correctly, you have no ZFS Storage Pool yet:

[root@localhost opc]# zpool list
no pools available

First I need to add a disk to my machine. Here I have only one disk created when I created the Compute Service:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]

I add a new disk in the Storage tab:
CaptureCreateStorage
And attach it to my Cloud Instance:
CaptureAttachStorage

Here is the new disk visible from the system:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]
xvdc 202:32 0 120G 0 disk
[root@localhost opc]# ls -l /dev/xvdc /dev/block/202:32
lrwxrwxrwx 1 root root 7 Jul 19 15:05 /dev/block/202:32 -> ../xvdc
brw-rw---- 1 root disk 202, 32 Jul 19 15:05 /dev/xvdc

Here is where I add a ZFS Storage Pool for Docker:

[root@localhost opc]# zpool create -f zpool-docker -m /var/lib/docker /dev/xvdc
 
[root@localhost opc]# zpool status
pool: zpool-docker
state: ONLINE
scan: none requested
config:
 
NAME STATE READ WRITE CKSUM
zpool-docker ONLINE 0 0 0
xvdc ONLINE 0 0 0
 
[root@localhost opc]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zpool-docker 119G 118K 119G - 0% 0% 1.00x ONLINE -

And while I’m there I set some attributes to enable compression and deduplication. And as Docker writes to layers with 32k I/O I set the recordsize accordingly:

zfs set compression=on zpool-docker
zfs set dedup=on zpool-docker
zfs set recordsize=32k zpool-docker
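
To double-check that these settings are active, you can query them back (a quick verification, not part of the original procedure):

[root@localhost opc]# zfs get compression,dedup,recordsize zpool-docker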

Just to test that everything is ok, I install Docker as I did in a previous post:

[root@localhost opc]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@localhost opc]# yum-config-manager --enable ol7_addons
[root@localhost opc]# yum -y install docker-ce
[root@localhost opc]# systemctl start docker
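
Since /var/lib/docker is a ZFS mountpoint, Docker should select the zfs storage driver automatically. If you prefer to pin it explicitly (an assumption on my side, not something the setup above requires), you can declare it in /etc/docker/daemon.json and verify it with 'docker info':

[root@localhost opc]# cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "zfs"
}
EOF
[root@localhost opc]# systemctl restart docker
[root@localhost opc]# docker info | grep -i "storage driver"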

Docker layers

I pull a small image and start a container on it:

[root@localhost opc]# docker run oraclelinux:7-slim

Here is the image and the ZFS dataset for its layer, mounted under /var/lib/docker/zfs:

[root@localhost opc]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
oraclelinux 7-slim b1af4ba0cf19 12 days ago 117MB
 
[root@localhost opc]# docker inspect oraclelinux:7-slim | jq -r .[0].GraphDriver
 
{
  "Data": {
    "Dataset": "zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc",
    "Mountpoint": "/var/lib/docker/zfs/graph/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc"
  },
  "Name": "zfs"
}

And here is the container layer:

[root@localhost opc]# docker container ls -a
 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9eb7610c1fc5 oraclelinux:7-slim "/bin/bash" 6 minutes ago Exited (0) 6 minutes ago inspiring_shannon
 
[root@localhost opc]# docker inspect inspiring_shannon | jq -r .[0].GraphDriver
 
{
  "Data": {
    "Dataset": "zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982",
    "Mountpoint": "/var/lib/docker/zfs/graph/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982"
  },
  "Name": "zfs"
}

If you don’t have jq just ‘yum install jq’. It is very convenient to filter and display the ‘inspect’ output.

We can see those datasets from ZFS list:

[root@localhost opc]# zfs list -o creation,space,snapshot_count,written -r | sort
 
CREATION NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD SSCOUNT WRITTEN
Thu Jul 19 15:13 2018 zpool-docker 115G 126M 0B 964K 0B 125M none 964K
Thu Jul 19 15:38 2018 zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc 115G 125M 0B 125M 0B 0B none 0
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982 115G 87K 0B 87K 0B 0B none 87K
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982-init 115G 46K 0B 46K 0B 0B none 0

Here, sorted by creation time, we see the datasets used by each layer. The initial files, before having any image, are less than 1MB. The image uses 125MB. The container creation has written 46KB in the init dataset and 87KB more in the container dataset once running.
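
If you are curious about how well deduplication works once more images and containers share blocks, you can also check the pool ratio (not part of the original test; the value will of course depend on your images):

[root@localhost opc]# zpool list -o name,size,allocated,dedupratio zpool-docker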

 

The article Installing ZFS on OEL7 UEK4 for Docker storage appeared first on Blog dbi services.

How to install Docker Enterprise Edition on CentOS 7 ?

Thu, 2018-07-19 07:54

In this blog we are going to see how to install the Docker EE trial edition on CentOS 7 hosts. As you may or may not know, Docker comes in two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). To make it simple, let’s say that Docker EE is designed for production environments. More info here.

 

This will be our architecture:

  • 1 manager node
    • hostname: docker-ee-manager1
  • 1 worker node + Docker Trust Registry (DTR) node
    • hostname: docker-ee-worker1

Both nodes should be in the same network range.

We will assume that CentOS 7 is already installed on all hosts:

[root@docker-ee-manager1 ~] cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

[root@docker-ee-worker1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

 Create docker user and group
[root@docker-ee-manager1 ~]$ groupadd docker
[root@docker-ee-manager1 ~]$ useradd -g docker docker
[root@docker-ee-manager1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-manager1 ~]$ su - docker

Do the same on worker

[root@docker-ee-worker1 ~]$ groupadd docker
[root@docker-ee-worker1 ~]$ useradd -g docker docker
[root@docker-ee-worker1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-worker1 ~]$ su - docker

 

 

Get the Docker URL for installing Docker EE

Then you need to go to this link; make sure that you already have a Docker account, it is free and you can create one very quickly.

 

 

Fill in the form and you will have access to this:

 

dockerstore

storebits

 

Copy the URL and save the license key in a safe location, you will need them later.

 

1. Configure Docker URL
[docker@docker-ee-manager1 ~]$ export DOCKERURL="<YOUR_LINK>"
[docker@docker-ee-manager1 ~]$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'

 

2. We install required packages
[docker@docker-ee-manager1 ~]$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

 

3. Add the Docker-EE repository
[docker@docker-ee-manager1 ~]$ sudo -E yum-config-manager \
--add-repo \
"$DOCKERURL/centos/docker-ee.repo"

 

4. Install docker-ee package
[docker@docker-ee-manager1 ~]$ sudo yum -y install docker-ee
[docker@docker-ee-manager1 ~]$ sudo systemctl enable docker.service
[docker@docker-ee-manager1 ~]$ sudo systemctl start docker.service

 

Repeat step 1 to 4 for worker1 node

 

Install UCP on manager

Simple command, just run this on your manager node:

 

[docker@docker-ee-manager1 ~]$ docker container run --rm -it --name ucp   -v /var/run/docker.sock:/var/run/docker.sock   docker/ucp:3.0.2 install   --host-address <YOUR_IP>   --interactive
INFO[0000] Your engine version 17.06.2-ee-15, build 64ddfa6 (3.10.0-514.el7.x86_64) is compatible with UCP 3.0.2 (736cf3c)
Admin Username: admin
Admin Password:
Confirm Admin Password:
WARN[0014] None of the hostnames we'll be using in the UCP certificates [docker-ee-manager1 127.0.0.1 172.17.0.1 <YOUR_IP>] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] Found existing UCP config com.docker.ucp.config-2
Do you want to proceed with the install with config com.docker.ucp.config-2? (y/n): y
y
INFO[0032] Installing UCP with host address 10.29.14.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0032] Deploying UCP Service... (waiting for all 2 nodes to complete)
INFO[0083] Installation completed on docker-ee-manager1 (node uvzvuefehznf22k4wa5zg9cy1)
INFO[0083] Installation completed on docker-ee-worker1 (node z7gq7z3336jnwcyojyqq1h3wa)
INFO[0083] UCP Instance ID: x0fg0phnkgzm5730thoncucn2
INFO[0083] UCP Server SSL: SHA-256 Fingerprint=E6:2F:38:69:5D:26:A8:06:D3:8B:11:69:D9:DC:3A:77:CE:16:EA:23:9C:D0:D8:8F:34:D6:97:9D:4B:D2:E2:D2
INFO[0083] Login to UCP at https://<YOUR_IP>:443
INFO[0083] Username: admin
INFO[0083] Password: (your admin password)

If the browser displays an insecure connection warning, accept the exception. We can then see the UCP admin interface. Enter your credentials and upload your license key:

ucplogin

ucpinterface

Adding a worker node

Click on Nodes:

clickonnodes

And click on Add Node:

Addnode

Then tell UCP that you want to deploy a new worker node and copy the command displayed:

chooseWorker

Connect to the worker and run this command to join it to the cluster:

[docker@docker-ee-worker1 ~]$ docker swarm join --token SWMTKN-1-4kt4gyk00n69tiywlzhst8dwsgo4oblylnsl1aww2048isi44u-7j9hmcrsn3lr048yu30xlnsv7 <IP_OF_MANAGER>:2377
This node joined a swarm as a worker.

 

Now, we have two nodes: one manager and one worker
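
You can verify this from the manager by listing the nodes (a quick check not shown in the original post, output omitted; both docker-ee-manager1 and docker-ee-worker1 should be reported as Ready):

[docker@docker-ee-manager1 ~]$ docker node ls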

 

Now2nodes

 

Install Docker Trusted Registry

 

Docker EE includes a DTR, which is a secure registry where you can store your Docker images. The DTR will be installed on the worker node; it is not recommended to install it on a manager node.

To install it, you just need to run this command:

[docker@docker-ee-worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 --ucp-url https://<IP_OF_MANAGER> --ucp-username admin --ucp-password <YOUR_PASSWORD> --ucp-ca "-----BEGIN CERTIFICATE-----
MIIBggIUJ+Y+MFXH1XcyJnCU4ACq26v5ZJswCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMB4XDTE4MDcxOTA4MjEwMFoX
DTIzMDcxODA4MjEwMFowHTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDJxHOIhHoV4NBZGnEQClFShjQfpoL5mQ
LH7E6x6GL4AexYtdWgGIcOlV2NXQpdadBK9cZG2z6r7+zwCj7EP/iqNFMEMwDgYD
VR0P7ojp1CIMAoGCCqGSM49BAMCA0gAMEUCIQDqbBiCqXgFdtIb6uP9
EdDTI1YGWn97AFPU+YJ9s1/CSAIgBsqIn1v7BVNjJ3AeUQfo1d8Kfc//ZwHYr4XW
uWIHmkM=
-----END CERTIFICATE-----"

You can find the certificate here: https://<IP_OF_MANAGER>/manage/settings/certs

findCertificate

Then go to the DTR URL, which is https://<IP_OF_WORKER>, and enter your credentials:

 

DTRLogin

 

 

 

Here we are:

 

DTRUI

Congratulations, you have just installed Docker EE. Hope this helps :-)

 

 

The article How to install Docker Enterprise Edition on CentOS 7 ? appeared first on Blog dbi services.

Google Cloud Spanner – inserting data

Thu, 2018-07-19 04:17

In a previous post I’ve created a Google Cloud Spanner database and inserted a few rows from the GUI. That is definitely not a solution for many rows, so here is a post about using the command line.

If I start the Google Shell from the icon on the Spanner page for my project, everything is set. But if I run it from elsewhere, using https://console.cloud.google.com/cloudshell as I did in A free persistent Google Cloud service with Oracle XE, I have to set the project:

franck_pachot@cloudshell:~$ gcloud config set project superb-avatar-210409
Updated property [core/project].
franck_pachot@superb-avatar-210409:~$

Instance

I create my Spanner instance with 3 nodes across the world:
franck_pachot@superb-avatar-210409:~$ time gcloud spanner instances create franck --config nam-eur-asia1 --nodes=3 --description Franck
Creating instance...done.
 
real 0m3.940s
user 0m0.344s
sys 0m0.092s

Database

and Spanner database – created in 6 seconds:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases create test --instance=franck
Creating database...done.
real 0m6.832s
user 0m0.320s
sys 0m0.128s

Table

The DDL for table creation can also be run from there:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1 ( ID1 int64, TEXT string(max) ) primary key (ID1)'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty

I’m now ready to insert one million rows. Here is my table:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl describe test --instance=franck
--- |-
CREATE TABLE DEMO1 (
ID1 INT64,
TEXT STRING(MAX),
) PRIMARY KEY(ID1)

Insert

The gcloud command line has only a limited insert capability:

franck_pachot@superb-avatar-210409:~$ time for i in $(seq 1 1000000) ; do gcloud beta spanner rows insert --table=DEMO1 --database=test --instance=franck --data=ID1=${i},TEXT=XXX${i} ; done
commitTimestamp: '2018-07-18T11:09:45.065684Z'
commitTimestamp: '2018-07-18T11:09:50.433133Z'
commitTimestamp: '2018-07-18T11:09:55.752857Z'
commitTimestamp: '2018-07-18T11:10:01.044531Z'
commitTimestamp: '2018-07-18T11:10:06.285764Z'
commitTimestamp: '2018-07-18T11:10:11.106936Z'
^C

Ok, let’s stop there: calling the service for each row, with a latency of about 5 seconds per call, is not efficient.

API

I’ll use the API from Python. Basically, a connection is a Spanner Client:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> spanner_client = spanner.Client()
>>> instance = spanner_client.instance('franck')
>>> database = instance.database('test')
>>>

Batch Insert

With this I can send batches of rows to insert. Here is the full Python script I used to insert one million rows, in batches of 1000:

from google.cloud import spanner
spanner_client = spanner.Client()
instance = spanner_client.instance('franck')
database = instance.database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        records.append((1 + j*1000 + i, u'XXX' + str(i)))
    with database.batch() as batch:
        batch.insert(table='DEMO1', columns=('ID1', 'TEXT'), values=records)

This takes a bit less than 3 minutes:

franck_pachot@superb-avatar-210409:~$ time python3 test.py
 
real 2m52.707s
user 0m21.776s
sys 0m0.668s
franck_pachot@superb-avatar-210409:~$

If you remember my list of blog posts on Variations on 1M rows insert, that's not so fast. But remember that the rows are distributed across 3 nodes on 3 continents, and that inserting constantly increasing values sends all batched rows to the same node. The PRIMARY KEY in Google Spanner is not only there to declare a constraint; it also determines the organization of the data.
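
One common way to avoid this hotspot, which I did not use in this test, is to avoid monotonically increasing keys, for example by bit-reversing the generated value before using it as the primary key. A minimal sketch under that assumption, reusing the DEMO1 table and the same batch API:

from google.cloud import spanner

def spread_key(n, bits=63):
    # reverse the bit pattern so that consecutive values land on different splits
    return int('{:0{w}b}'.format(n, w=bits)[::-1], 2)

database = spanner.Client().instance('franck').database('test')
records = [(spread_key(i), u'XXX' + str(i)) for i in range(1000)]
with database.batch() as batch:
    batch.insert(table='DEMO1', columns=('ID1', 'TEXT'), values=records)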

Query

The SELECT can also be run from there, in a read-only transaction called a ‘Snapshot’ because it does MVCC consistent reads:

frank_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> with spanner.Client().instance('franck').database('test').snapshot() as snapshot:
...     results = snapshot.execute_sql('SELECT COUNT(*) FROM DEMO1')
...     for row in results:
...         print(row)
...
[1000000]

The advantage of the read-only transaction is that it can do consistent reads without locking. The queries executed in a read-write transaction have to acquire some locks in order to guarantee consistency when reading across multiple nodes.

Interleave

So, you can look at the PRIMARY KEY as a partition by range, and we also have reference partitioning with INTERLEAVE IN PARENT. This reminds me of the Oracle CLUSTER segment, which is so rarely used because storing the tables separately is, in the end, the better compromise between performance and flexibility for a multi-purpose database.

Here is my creation of DEMO2 where ID1 is a foreign key referencing DEMO1

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO2 ( ID1 int64, ID2 int64, TEXT string(max) ) primary key (ID1,ID2), interleave in parent DEMO1 on delete cascade'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty
 
real 0m24.418s
user 0m0.356s
sys 0m0.088s

I’m now inserting 5 detail rows per parent row:

from google.cloud import spanner
database = spanner.Client().instance('franck').database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        for k in range(5):
            records.append((1 + j*1000 + i, k, u'XXX' + str(i) + ' ' + str(k)))
    with database.batch() as batch:
        batch.insert(table='DEMO2', columns=('ID1', 'ID2', 'TEXT'), values=records)

This ran in 6 minutes.

Join (Cross Apply)

Here is the execution plan for

SELECT * FROM DEMO1 join DEMO2 using(ID1) where DEMO2.TEXT=DEMO1.TEXT

where I join the two tables and apply a filter on the join:
CaptureSpannerCrossApply

Thanks to the INTERLEAVE the join is running locally. Each row from DEMO1 (the Input of the Cross Apply) is joined with DEMO2 (the Map of the Cross Apply) locally. Only the result is serialized. On this small number of rows we do not see the benefit of having the rows on multiple nodes. There are only 2 nodes with rows here (2 local executions) and probably one node contains most of the rows. The average time per node is 10.72 seconds and the elapsed time is 20.9 seconds, so I guess that one node ran in 20.9 seconds and the other in 1.35 only.

The same without the tables interleaved (here as DEMO3) is faster to insert, but the join is more complex because DEMO1 must be distributed to all nodes.
CaptureSpannerDistributedCrossApply
Without interleave, the input table of the local Cross Apply is a Batch Scan, which is actually like a temporary table distributed to all nodes (it seems to have 51 chunks here), created by the ‘Create Batch’. This is called a Distributed Cross Apply.

So what?

Google Spanner has only some aspects of SQL and relational databases. But it is still, like the NoSQL databases, a database where the data model is focused on one use case only, because the data model and the data organization have to be designed for a specific data access pattern.

 

The article Google Cloud Spanner – inserting data appeared first on Blog dbi services.

Control File issues on duplicating with non patched Oracle version.

Wed, 2018-07-18 02:34

Introduction :

RMAN has the ability to duplicate, or clone, a database from a backup or from an active database.
It is possible to create a duplicate database on a remote server with the same file structure,
on a remote server with a different file structure, or on the local server with a different file structure.
For some old and unpatched Oracle versions, such as those earlier than 11.2.0.4, the duplicate (from active database or from backup) can be a real
challenge even for DBAs with years of experience, due to the various bugs encountered.

The scenario below will focus on control file issues revealed when duplicating an Oracle 11.2.0.2 EE database from active database.

<INFO>Make sure to use the nohup command-line utility, which keeps a command, process or shell script running even after you disconnect from the session.

Demonstration :

Step1: Prepare your script:

vi script_duplicate.ksh

#!/bin/ksh
export ORACLE_HOME=$ORACLE_HOME
export PATH=$PATH:$ORACLE_HOME/bin
rman target sys/pwd@TNS_NAME_TARGET auxiliary sys/pwd@TNS_NAME_AUXILIARY log=duplicate.log cmdfile=/home/oracle/rman_bkup.cmd

vi rman_bkup.cmd
run
{
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate auxiliary channel dh1 device type disk;
allocate auxiliary channel dh2 device type disk;
allocate auxiliary channel dh3 device type disk;
duplicate target database to <AUXILIARY_NAME> from active database nofilenamecheck;
release channel ch3;
release channel ch2;
release channel ch1;
}

and launch like that : nohup ./script_duplicate.ksh &

Step2: Check instance parameters.
Depending on the PSU level of your instance, the duplicate can fail with this error even before really starting:

RMAN-00571: ===================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
RMAN-00571: ===================================================
RMAN-03002: failure of Duplicate Db command at 11/02/2011 06:05:48
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

According to Oracle Support note 1064264.1:

1. Edit the pfile, add parameter:
_compression_compatibility= "11.2.0"

2. Restart the instance using the pfile
SQL> startup pfile='<fullpath name of pfile>'

3. Create the SPFILE again
SQL> create spfile from pfile;

4. Restart the instance with the SPFILE
SQLl> shutdown immediate;
SQL> startup

and relaunch the previous command (Step 1).

Step 3: Control file issue when trying to open the database.
After transferring the datafiles, the duplicate will crash with these errors while trying to open the database.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 15/07/2018 17:39:30
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script



SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-19838: Cannot use this control file to open database

Basically this is because of a known bug (Bug 11063122 in 11gR2):
the control file created during the duplicate in 11gR2 stores the redo log file locations of the primary.
We need to recreate the control file, changing the locations of the redo log files and datafiles, and open the database with RESETLOGS.
In the control file recreation script, the database name is the source <db_name> and the directory names for the redo logs still point to the source database.

The workaround is :

1. Backup as trace your control file (cloned DB)

sql> alter database backup controlfile to trace ;

2. Open the file and extract the RESETLOGS section, then modify it like this:

CREATE CONTROLFILE REUSE DATABASE "<src_db_name>" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 11680
LOGFILE
  GROUP 9  '<path_of_the_cloned_DB>/redo09.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 10 '<path_of_the_cloned_DB>/redo10.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 11 '<path_of_the_cloned_DB>/redo11.log'  SIZE 150M BLOCKSIZE 512,

DATAFILE
  '<path_of_the_cloned_DB>/system01.dbf',
  '<path_of_the_cloned_DB>/undotbs01.dbf',
  '<path_of_the_cloned_DB>/sysaux01.dbf',
  '<path_of_the_cloned_DB>/users01.dbf',
-------------more datafiles
CHARACTER SET EE8ISO8859P2;

Save as trace_control.ctl

3. SQL> alter system set db_name=<new db_name> scope=spfile;
4. SQL> startup nomount
5. SQL>@trace_control.ctl
      -- control file created and multiplexed in all the destinations mentioned in your spfile
6. SQL> alter database open resetlogs

<INFO>If your source DB had activity during the duplicate process, you may have to apply some required archivelogs manually.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 15/07/2018 19:21:30
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/u01/oradata/DBName/system01.dbf'

Search the source database for the archivelogs with sequence# greater than or equal to 399747 and apply them manually on the target DB.

If somehow those are not available, you need to take an incremental backup to roll your cloned database forward (see the sketch below).
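
A minimal sketch of that roll forward (the SCN and the paths are placeholders; the idea is to take an SCN-based incremental backup on the source, transfer the pieces, catalog them on the clone and recover without redo):

On the source database:
RMAN> backup incremental from scn 47260162325 database format '/tmp/roll_fwd_%U.bck';

On the cloned database, once the backup pieces have been transferred:
RMAN> catalog start with '/tmp/roll_fwd';
RMAN> recover database noredo;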

7. SQL> recover database using backup controlfile;

ORA-00279: change 47260162325 generated at  15/07/2018 19:27:40 needed for thread 1
ORA-00289: suggestion : <path>o1_mf_1_399747_%u_.arc
ORA-00280: change 47260162325 for thread 1 is in sequence #399747

Once the required archivelog files have been applied, try again to open your database:

RMAN> alter database open resetlogs;

database opened

RMAN> exit

Conclusion:
If you’re the kind of Oracle administrator who has the power to approve or deny, you must know how dangerous it is to run your applications on unpatched Oracle databases.
Your data is better protected if you take advantage of the patches issued by Oracle and run your production databases against supported Oracle versions only.

 

The article Control File issues on duplicating with non patched Oracle version appeared first on Blog dbi services.

Restarting a failed transportable tablespace metadata import

Tue, 2018-07-17 09:39

I’m currently working on a project to migrate a Datawarehouse database from Solaris to Linux Intel (endian change). We use the cross-platform incremental backup method described in My Oracle Support Note 1389592.1 for that.
I.e. incremental backups are applied to the database files and, during the migration, the recovered datafiles are attached to the target database via the transportable tablespace method. When testing the transportable tablespace metadata import I got an error for one table:


Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"DWHT"."TDWHTAB" failed to create with error:
ORA-01843: not a valid month
Failing sql is:
CREATE TABLE "DWHT"."TDWHTAB" (...,"PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE,...

I.e. the metadata of all tables in the tablespace was successfully imported except for one. The reason for the error was a default definition of column “PROCESSING_TM” without a TO_DATE casting, i.e. it relied on the NLS_DATE_FORMAT setting. E.g.:


SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE)
  *
ERROR at line 1:
ORA-01843: not a valid month
 
SQL> alter session set nls_date_format='dd-mm-yyyy';
 
Session altered.
 
SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
 
Table created.

There are different reasons why a metadata import may fail for specific tables (see e.g. My Oracle Support Note 1082116.1 for another one). The question was: how to restart the import without going through the whole process of copying the datafiles from the source database again? As we imported the metadata through a network link, the tablespace on the source database and the tablespace on the target database were still set to READ ONLY.

So first of all I fixed the error in the table definition on the source database:


alter table "DWHT"."TDWHTAB" modify ("PROCESSING_TM" DATE DEFAULT TO_DATE('01-01-0001', 'DD-MM-YYYY'));

That was possible even with the associated tablespace of the table in READ ONLY mode, as it is just a dictionary metadata update and does not touch the tablespace.

On the target database I cleaned up the tablespace of the failed table metadata import:


drop tablespace DWH_DATA including contents;

It’s of course important to NOT specify “AND DATAFILES” when dropping the tablespace to avoid having to copy all files again.

In my case I actually transported 146 tablespaces and modified the impdp parameter file xttplugin.par to contain only the single tablespace that contained the failing table. Of course you have to drop and include all tablespaces that have tables with dependencies on each other. In my case that was not an issue, as my tables had no dependencies on tables in other tablespaces.

Afterwards I could just restart my metadata-import with the single tablespace:


$ cat xttplugin2.par
directory=DATA_PUMP_DIR
logfile=tts_imp2.log
network_link=ttslink
transport_full_check=no
transport_tablespaces=DWH_DATA
transport_datafiles='/u01/DWH/oradata/DWH_DATA_001.dbf','/u01/DWH/oradata/DWH_DATA_002.dbf'
$ impdp parfile=./xttplugin2.par
Password: / as sysdba

So to restart a failed transportable tablespace metadata import, just fix the root cause, drop the associated tablespace on the target database without dropping the datafiles, and run the import again. This becomes especially handy in cross-platform incremental backup migration scenarios.

 

The article Restarting a failed transportable tablespace metadata import appeared first on Blog dbi services.
