Yann Neuhaus

dbi services technical blog

ORACLE_HOME with symbolic link and postupgrade_fixups

Mon, 2018-08-13 07:26

Here is a quick post you may find through a Google search if you get the following error when running postupgrade_fixups.sql after an upgrade:

ERROR - Cannot open the preupgrade_messages.properties file from the directory object preupgrade_dir
DECLARE
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_PREUP", line 3300
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_PREUP", line 3260
ORA-06512: at "SYS.DBMS_PREUP", line 9739
ORA-06512: at line 11


Before upgrading a database with dbupgrade, you run, on the current version of your database, the preupgrade.jar from the new version (you should probably download the latest one from MOS). This generates a script to run before the upgrade and one to run after the upgrade. Those scripts are generated under $ORACLE_BASE/cfgtoollogs/<database>/preupgrade where you find something like this:

drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 ..
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 oracle
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 upgrade
-rw-r--r--. 1 oracle oinstall 14846 Aug 11 20:19 dbms_registry_extended.sql
-rw-r--r--. 1 oracle oinstall 7963 Aug 11 20:19 preupgrade_driver.sql
-rw-r--r--. 1 oracle oinstall 422048 Aug 11 20:19 preupgrade_package.sql
-rw-r--r--. 1 oracle oinstall 14383 Aug 11 20:19 parameters.properties
-rw-r--r--. 1 oracle oinstall 83854 Aug 11 20:19 preupgrade_messages.properties
-rw-r--r--. 1 oracle oinstall 50172 Aug 11 20:19 components.properties
-rw-r--r--. 1 oracle oinstall 2 Aug 11 20:19 checksBuffer.tmp
-rw-r--r--. 1 oracle oinstall 6492 Aug 11 20:20 preupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 7575 Aug 11 20:20 postupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 5587 Aug 11 20:20 preupgrade.log

Everything is straightforward.

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] java -jar /u00/app/oracle/product/18EE/rdbms/admin/preupgrade.jar
...
==================
PREUPGRADE SUMMARY
==================
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade.log
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Execute fixup scripts as indicated below:
 
Before upgrade log into the database and execute the preupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
 
After the upgrade:
 
Log into the database and execute the postupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Preupgrade complete: 2018-08-11T19:37:29
oracle@vmreforatun01:/u00/app/oracle/product/ [DB2]

For a database we have in a lab for our workshops, which I upgraded to 18c, I ran the postupgrade fixups script after the upgrade but got the error mentioned above about the UTL_FILE invalid file operation in preupgrade_dir. I looked at the script: postupgrade_fixups.sql creates a directory object on $ORACLE_HOME/rdbms/admin and calls preupgrade_package.sql, which reads preupgrade_messages.properties.

This is a bit confusing because the same file also exists in the cfgtoollogs preupgrade subdirectory, but my directory object looks good:

SQL> select directory_name,directory_path from dba_directories where directory_name='PREUPGRADE_DIR';
 
DIRECTORY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
PREUPGRADE_DIR
/u00/app/oracle/product/18SE/rdbms/admin

So, as “ORA-29283: invalid file operation” is not very detailed, I traced all the file-related system calls (strace -fye trace=file) when running sqlplus and got this:

[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 723389136}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/preupgrade_messages.properties", {st_mode=S_IFREG|0644, st_size=83854, ...}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] lstat("/u00", {st_mode=S_IFLNK|0777, st_size=11, ...}) = 0
[pid 29974] readlink("/u00", "/oracle/u00", 4095) = 11
[pid 29974] lstat("/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms/admin", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 724514469}) = 0

Then I realized that the ORACLE_HOME is under a symbolic link. For whatever reason, on this environment, ORACLE_BASE is physically /oracle/u00/app/oracle but there’s a /u00 link to /oracle/u00 and this short one was used to set the environment variables. Since 11g, and for security reasons, UTL_FILE does not accept directories whose path goes through a symbolic link. And we can see in the strace above that it was detected (the readlink call).

So, a quick workaround here is to change postupgrade_fixups.sql to set the physical path instead of the one read from ORACLE_HOME by dbms_system.get_env.
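
One way to apply it without editing the generated script is simply to recreate the directory object with the physical path resolved by the strace above (a quick sketch) and then re-run postupgrade_fixups.sql:

SQL> create or replace directory PREUPGRADE_DIR as '/oracle/u00/app/oracle/product/18SE/rdbms/admin';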

However, if you can restart the instance, then it is better to set the ORACLE_HOME to the physical path. Symbolic links in the ORACLE_HOME may be misleading. Remember that the ORACLE_HOME text string is part of the instance identification, combined with ORACLE_SID, so having different values, even when they resolve to the same path, will bring a lot of problems. Do not forget to change it everywhere (shell environment, listener.ora) so that you are sure that nobody will use a different one when starting the database.
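
A minimal sketch of that change in the shell environment (the instance and listener then need to be restarted with this environment so that they pick up the physical path):

export ORACLE_HOME=$(readlink -f $ORACLE_HOME)   # e.g. /u00/... resolves to /oracle/u00/...
export PATH=$ORACLE_HOME/bin:$PATH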

 

The post ORACLE_HOME with symbolic link and postupgrade_fixups appeared first on Blog dbi services.

Bringing up your customized PostgreSQL instance on Azure

Mon, 2018-08-13 06:53

The Azure cloud is becoming more and more popular, so I gave it a try and started simple. The goal was to provision a VM, compile and install PostgreSQL, and then connect to the instance. There is also a fully managed PostgreSQL service, but I wanted to do it on my own just to get a feeling for the command line tools. Here is how I’ve done it.

Obviously you need to log in first, which is just a matter of this:

dwe@dwe:~$ cd /var/tmp
dwe@dwe:/var/tmp$ az login

To do anything in Azure you’ll need to create a resource group, which is like a container holding your resources. As a resource group needs to be created in a specific location, the next step is to get the list of available locations:

dwe@dwe:/var/tmp$ az account list-locations
[
  {
    "displayName": "East Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/eastasia",
    "latitude": "22.267",
    "longitude": "114.188",
    "name": "eastasia",
    "subscriptionId": null
  },
  {
    "displayName": "Southeast Asia",
    "id": "/subscriptions/030698d5-42d6-41a1-8740-355649c409e7/locations/southeastasia",
    "latitude": "1.283",
    "longitude": "103.833",
    "name": "southeastasia",
    "subscriptionId": null
  },
...

Once you have selected a location the resource group can be created:

dwe@dwe:/var/tmp$ az group create --name PGTEST --location "westeurope"
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST",
  "location": "westeurope",
  "managedBy": null,
  "name": "PGTEST",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

All you need to do for creating a CentOS VM is this simple command:

dwe@dwe:/var/tmp$ az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys
{
  "fqdns": "",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg",
  "location": "westeurope",
  "macAddress": "xx-xx-xx-xx-xx-xx",
  "powerState": "VM running",
  "privateIpAddress": "x.x.x.x",
  "publicIpAddress": "x.x.x.x",
  "resourceGroup": "PGTEST",
  "zones": ""
}

While the VM is being created you can watch the resources appear in the portal.

As soon as the VM is ready, connecting via ssh is possible (the keys have been added automatically, no password required):

dwe@dwe:/var/tmp$ ssh x.x.x.x
The authenticity of host 'xx.xx.x.x (xx.xx.x.x)' can't be established.
ECDSA key fingerprint is SHA256:YzNOzg30JH0A3U1R+6WzuJEd3+7N4GmwpSVkznhuTuE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xx.xx.x.x' (ECDSA) to the list of known hosts.
[dwe@MyPg ~]$ ls -la /etc/yum.repos.d/
total 44
drwxr-xr-x.  2 root root  209 Sep 25  2017 .
drwxr-xr-x. 86 root root 8192 Aug  2 08:05 ..
-rw-r--r--.  1 root root 1706 Sep 25  2017 CentOS-Base.repo
-rw-r--r--.  1 root root 1309 Nov 29  2016 CentOS-CR.repo
-rw-r--r--.  1 root root  649 Nov 29  2016 CentOS-Debuginfo.repo
-rw-r--r--.  1 root root  314 Nov 29  2016 CentOS-fasttrack.repo
-rw-r--r--.  1 root root  630 Nov 29  2016 CentOS-Media.repo
-rw-r--r--.  1 root root 1331 Nov 29  2016 CentOS-Sources.repo
-rw-r--r--.  1 root root 2893 Nov 29  2016 CentOS-Vault.repo
-rw-r--r--.  1 root root  282 Sep 25  2017 OpenLogic.repo
[dwe@MyPg ~]$ sudo su -
[root@MyPg ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core) 
[root@MyPg ~]# 

Of course we want to update all the operating system packages to the latest release before moving on. Be careful here to really exclude the WALinuxAgent because otherwise the agent will be upgraded as well (and restarted) and the script execution will fail as you lose connectivity:

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum update -y --exclude=WALinuxAgent"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum update -y --exclude=WALinuxAgent"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

To compile PostgreSQL we need some packages (not all of them are required for compiling PostgreSQL, but this is what we usually install):

dwe@dwe:/var/tmp$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"}'

{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel openssh-clients bzip2 net-tools wget screen unzip sysstat xorg-x11-xauth systemd-devel bash-completion"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Preparation work for the user, group and directories:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app; chown postgres:postgres /u01/app; mkdir -p /u02/pgdata; chown postgres:postgres /u02/pgdata"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

For the next steps we will just copy over this script and then execute it:

dwe@dwe:~$ cat installPG.sh 
#!/bin/bash
cd /u01/app; wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
tar -axf postgresql-10.5.tar.bz2
rm -f postgresql-10.5.tar.bz2
cd postgresql-10.5
PGHOME=/u01/app/postgres/product/10/db_5/
SEGSIZE=2
BLOCKSIZE=8
WALSEGSIZE=16
./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
	    --with-systemd 
make -j 4 all
make install
cd contrib
make -j 4 install

dwe@dwe:~$ scp installPG.sh x.x.x.x:/var/tmp/
installPG.sh                                                                                                100% 1111     1.1KB/s   00:00    

Of course you could also add the yum commands to the same script, but I wanted to show both ways: using the CustomScript feature directly and copying over a script for execution. Let’s execute it:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"}'

  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "chmod +x /var/tmp/installPG.sh; sudo su - postgres -c /var/tmp/installPG.sh"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Binaries ready. Initialize the cluster:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/initdb -D /u02/pgdata/PG1\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

Startup:

dwe@dwe:~$ az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript --vm-name MyPg --resource-group PGTEST --settings '{"commandToExecute":"sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""}'
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx030698d5-42d6-41a1-8740-355649c409e7/resourceGroups/PGTEST/providers/Microsoft.Compute/virtualMachines/MyPg/extensions/CustomScript",
  "instanceView": null,
  "location": "westeurope",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "PGTEST",
  "settings": {
    "commandToExecute": "sudo su - postgres -c \"/u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1 start\""
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}

… and the instance is up and running:

dwe@dwe:~$ ssh x.x.x.x
Last login: Mon Aug 13 10:43:53 2018 from ip-37-201-6-36.hsi13.unitymediagroup.de
[dwe@MyPg ~]$ sudo su - postgres
Last login: Mon Aug 13 11:33:52 UTC 2018 on pts/0
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql -c 'select version()'
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)
[postgres@MyPg ~]$ 

When you want to access this instance from outside Azure you will need to open the port:

dwe@dwe:~$ az vm open-port --resource-group PGTEST --name MyPg --port 5432

Once you have configured PostgreSQL to accept connections:

[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/psql
psql (10.5)
Type "help" for help.

postgres=# alter system set listen_addresses = '*';
ALTER SYSTEM
postgres=# alter user postgres password 'secret';
ALTER ROLE
postgres=# show port ;
 port 
------
 5432
(1 row)

postgres=# \q
[postgres@MyPg ~]$ echo "host    all             all             37.201.6.36/32   md5" >> /u02/pgdata/PG1/pg_hba.conf 
[postgres@MyPg ~]$ /u01/app/postgres/product/10/db_5/bin/pg_ctl -D /u02/pgdata/PG1/ restart

… you can access the instance from outside Azure:

dwe@dwe:~$ psql -h 137.117.157.183 -U postgres
Password for user postgres: 
psql (9.5.13, server 10.5)
WARNING: psql major version 9.5, server major version 10.
         Some psql features might not work.
Type "help" for help.

postgres=# 

Put all that into a well-written script and you can have your customized PostgreSQL instance ready in Azure in a couple of minutes. Now that I have a feeling for how that works in general, I’ll look into the managed PostgreSQL service in another post.
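
For illustration, a minimal sketch of such a wrapper, reusing the commands from above (no error handling, and the waits between steps are left out):

#!/bin/bash
# provision the resource group and the VM
az group create --name PGTEST --location "westeurope"
az vm create -n MyPg -g PGTEST --image centos --data-disk-sizes-gb 10 --size Standard_DS2_v2 --generate-ssh-keys

# small helper around the CustomScript extension used throughout this post
run_on_vm () {
  az vm extension set --publisher Microsoft.Azure.Extensions --version 2.0 --name CustomScript \
    --vm-name MyPg --resource-group PGTEST \
    --settings "{\"commandToExecute\":\"$1\"}"
}

run_on_vm "yum update -y --exclude=WALinuxAgent"
run_on_vm "groupadd postgres; useradd -m -g postgres postgres; mkdir -p /u01/app /u02/pgdata; chown postgres:postgres /u01/app /u02/pgdata"
# ... then copy installPG.sh, compile, initdb and start as shown above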

 

The post Bringing up your customized PostgreSQL instance on Azure appeared first on Blog dbi services.

Documentum – Silent Install – D2

Sun, 2018-08-12 07:50

In previous blogs, we installed, in silent mode, the Documentum binaries, a docbroker (+licence(s) if needed) as well as several repositories. In this one, we will see how to install D2 for a predefined list of docbases/repositories (on the Content Server side), and you will see that the process here is quite different.

D2 has supported silent installation for quite some time now and it is pretty easy to do. At the end of the D2 GUI installer, there is a screen asking whether you want to generate a silent properties (response) file containing the information that has been set in the GUI installer. That is one way to get started with the silent installation, or you can just read this blog ;).

So, let’s start with the preparation of a template file. I will use a lot of placeholders in the template and replace the values with sed commands afterwards, as a quick look at how you can script a silent installation with a template configuration file and some properties prepared beforehand.

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/D2_template.xml
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/D2_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<AutomatedInstallation langpack="eng">
  <com.izforge.izpack.panels.HTMLHelloPanel id="welcome"/>
  <com.izforge.izpack.panels.UserInputPanel id="SelectInstallOrMergeConfig">
    <userInput>
      <entry key="InstallD2" value="true"/>
      <entry key="MergeConfigs" value="false"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.HTMLInfoPanel id="readme"/>
  <com.izforge.izpack.panels.PacksPanel id="UNKNOWN (com.izforge.izpack.panels.PacksPanel)">
    <pack index="0" name="Installer files" selected="true"/>
    <pack index="1" name="D2" selected="###WAR_REQUIRED###"/>
    <pack index="2" name="D2-Config" selected="###WAR_REQUIRED###"/>
    <pack index="3" name="D2-API for Content Server/JMS" selected="true"/>
    <pack index="4" name="D2-API for BPM" selected="###BPM_REQUIRED###"/>
    <pack index="5" name="DAR" selected="###DAR_REQUIRED###"/>
  </com.izforge.izpack.panels.PacksPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.0">
    <userInput>
      <entry key="jboss5XCompliant" value="false"/>
      <entry key="webappsDir" value="###DOCUMENTUM###/D2-Install/war"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.2">
    <userInput>
      <entry key="pluginInstaller" value="###PLUGIN_LIST###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.3">
    <userInput>
      <entry key="csDir" value="###DOCUMENTUM###/D2-Install/D2-API"/>
      <entry key="bpmDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/bpm.ear"/>
      <entry key="jmsDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/ServerApps.ear"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.4">
    <userInput>
      <entry key="installationDir" value="###DOCUMENTUM###/D2-Install/DAR"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.5">
    <userInput>
      <entry key="dfsDir" value="/tmp/###DFS_SDK_PACKAGE###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.7">
    <userInput>
      <entry key="COMMON.USER_ACCOUNT" value="###INSTALL_OWNER###"/>
      <entry key="install.owner.password" value="###INSTALL_OWNER_PASSWD###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.8">
    <userInput>
      <entry key="SERVER.REPOSITORIES.NAMES" value="###DOCBASE_LIST###"/>
      <entry key="setReturnRepeatingValue" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.9">
    <userInput>
      <entry key="securityRadioSelection" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPD2ConfigOrClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseUsetheSameDFC">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableDFCSessionPool">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDFCKeyStoreInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetD2ConfigLanguage">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableD2BOCS">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetHideDomainforConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTemporaryMaxFiles">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="10">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="11">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfoForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="12">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="13">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="14">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="15">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="16">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="17">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="18">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="19">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="20">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="21">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="22">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTransferMode">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="24">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="25">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableAuditing">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPchooseWebAppServer">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskWebappsDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskNewWarDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.InstallPanel id="UNKNOWN (com.izforge.izpack.panels.InstallPanel)"/>
  <com.izforge.izpack.panels.XInfoPanel id="UNKNOWN (com.izforge.izpack.panels.XInfoPanel)"/>
  <com.izforge.izpack.panels.FinishPanel id="UNKNOWN (com.izforge.izpack.panels.FinishPanel)"/>
</AutomatedInstallation>

[dmadmin@content_server_01 ~]$

 

As you probably understood by looking at the above file, I’m using “/tmp/” for the input elements needed by D2, like the DFS package, the D2 installer or the D2+Pack Plugins, and I’m using “$DOCUMENTUM/D2-Install” as the output folder where the installer puts everything it generates.
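
As a reminder of that preparation, here is a hedged sketch of what it could look like; the archive names and layout are hypothetical (they depend on the exact packages and patch level you downloaded), only the extracted paths referenced by the template matter:

cd /tmp
unzip -q D2_4.7.0_P18.zip -d /tmp/D2_4.7.0_P18                      # hypothetical archive name, contains D2-Installer-4.7.0.jar
unzip -q D2_pluspack_4.7.0.P18.zip -d /tmp/D2_pluspack_4.7.0.P18    # hypothetical archive name
# extract the plugin jars (C2, D2-Bin, O2) out of their zip files so that the ###PLUGIN_LIST### paths exist
unzip -q -o '/tmp/D2_pluspack_4.7.0.P18/Plugins/*.zip' -d /tmp/D2_pluspack_4.7.0.P18/Plugins
tar -xf /tmp/emc-dfs-sdk-7.3.tar.gz -C /tmp                         # hypothetical archive name, must give /tmp/emc-dfs-sdk-7.3
# output folders used by the installer (webappsDir, installationDir, csDir and the tmp dir used at runtime)
mkdir -p $DOCUMENTUM/D2-Install/{war,DAR,D2-API,tmp}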

Once you have the template ready, you can replace the placeholders as follows (this is just an example configuration based on the other silent blogs I wrote so far):

[dmadmin@content_server_01 ~]$ export d2_install_file=/tmp/dctm_install/D2.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###PLUGIN_LIST###,/tmp/D2_pluspack_4.7.0.P18/Plugins/C2-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/D2-Bin-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@content_server_01 ~]$

 

A short description of these properties as well as some notes on the values used above:

  • langpack: The language you are usually using for running the installers… English is fine if you use this template
  • entry key=”InstallD2″: Whether or not you want to install D2
  • entry key=”MergeConfigs”: Whether or not you want to merge the actual configuration/installation with the new one. I’m always restarting a D2 installation from scratch (removing the D2 hidden files for that) so I always set this to false
  • pack index=”0″ name=”Installer files”: Always set this to true to install D2 on a CS
  • pack index=”1″ name=”D2″: Whether or not you want to generate the D2 WAR file. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • pack index=”2″ name=”D2-Config”: Same as above but for the D2-Config WAR file
  • pack index=”3″ name=”D2-API for Content Server/JMS”: Whether or not you want the D2 Installer to put the D2 specific libraries into the JMS lib folder (path defined in: entry key=”jmsDir”). Even if you set this to true, you will still need to manually put a lot of D2 libs into the JMS lib folder because D2 only puts a few of them but many more are required to run D2 properly (see the documentation for the full list)
  • pack index=”4″ name=”D2-API for BPM”: Same as above but for the BPM this time (path defined in: entry key=”bpmDir”)
  • pack index=”5″ name=”DAR”: Whether or not you want to generate the DARs. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • entry key=”jboss5XCompliant”: I guess this is for the JBoss 5 support so if you are on Dctm 7.x, leave this as false
  • entry key=”webappsDir”: The path the D2 Installer will put the generated WAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/war” so this folder MUST exist before running the installer in silent
  • entry key=”pluginInstaller”: This one is a little bit trickier… It’s a semicolon-separated list of all the D2+Pack Plugins you would like to install in addition to D2. In the above, I’m using the C2, D2-Bin and O2 plugins. The D2+Pack package must obviously be extracted BEFORE running the installer in silent mode and all the paths MUST exist (you will need to extract the plugin jar from each plugin zip file). I opened a few bugs & enhancement requests for these so if you are facing an issue, let me know, I might be able to help you
  • entry key=”csDir”: The path the D2 Installer will put the generated libraries into. In this example, I set it to “$DOCUMENTUM/D2-Install/D2-API” so this folder MUST exist before running the installer in silent
  • entry key=”bpmDir”: The path the D2 Installer will put a few of the D2 libraries into for the BPM (it’s not all needed JARs and this parameter is obviously not needed if you set ###BPM_REQUIRED### to false)
  • entry key=”jmsDir”: Same as above but for the JMS this time
  • entry key=”installationDir”: The path the D2 Installer will put the generated DAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/DAR” so this folder MUST exist before running the installer in silent
  • entry key=”dfsDir”: The path where the DFS SDK can be found. The DFS SDK package MUST be extracted in this folder before running the installer in silent
  • entry key=”COMMON.USER_ACCOUNT”: The name of the Documentum Installation Owner
  • entry key=”install.owner.password”: The password of the Documentum Installation Owner. I used above a “read -s” command so it doesn’t appear on the command line, but it will be put in clear text in the xml file…
  • entry key=”SERVER.REPOSITORIES.NAMES”: A comma-separated list of all docbases/repositories (without spaces) that need to be configured for D2. The DARs will be installed automatically on these docbases/repositories and, if you want to do it properly, the list mustn’t contain the GR. You could potentially add the GR in this parameter but then all D2 DARs would be installed into the GR and this isn’t needed… Only the “D2-DAR.dar” and “Collaboration_Services.dar” need to be installed on the GR, so I only add normal docbases/repositories in this parameter and, once D2 is installed, I manually deploy these two DARs into the GR (I wrote a blog about deploying DARs easily to a docbase a few years ago if you are interested). So, here I have a value of “Docbase1” but if you had two, you could set it to “Docbase1,Docbase2”
  • entry key=”setReturnRepeatingValue”: Whether or not you want the repeating values. A value of true should set the “return_top_results_row_based=false” in the server.ini
  • entry key=”securityRadioSelection”: A value of true means that D2 has to apply Security Rules to content BEFORE applying AutoLink, and a value of false means that D2 can only do it AFTER
  • That’s the end of this file because I’m using D2 4.7 and in D2 4.7, there is no Lockbox anymore! If you are using previous D2 versions, you will need to put additional parameters for the D2 Lockbox generation, location, password, aso…

 

Once the properties file is ready, you can install D2 using the following command:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar ${d2_install_file}
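
Once it completes, a quick sanity check (not a full validation) is to verify that the output folders defined above have been populated:

ls -l $DOCUMENTUM/D2-Install/war $DOCUMENTUM/D2-Install/DAR $DOCUMENTUM/D2-Install/D2-API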

 

You now know how to install D2 on a Content Server using the silent installation provided by D2. As you saw above, it is quite different from the silent installation of the other Documentum components, but it works, so… Maybe at some point in the future, D2 will switch to the same kind of properties file as Documentum.

 

The post Documentum – Silent Install – D2 appeared first on Blog dbi services.

18c runInstaller -silent

Sun, 2018-08-12 02:34

You find two different ‘runInstaller’ scripts under an Oracle Home: the old one, the Oracle Universal Installer, in $ORACLE_HOME/oui/bin, and the new one, directly in $ORACLE_HOME. They have the same name but are completely different. The old one was used to install an Oracle Home from the installation media, but in 18c you don’t use it: it has been used by Oracle to build the Oracle Home image. You now download and unzip your Oracle Home directly; you only have to configure it and re-link the binaries, and this is done by the new runInstaller at the root of the Oracle Home. Actually, it is just a shell script that runs the Perl dbSetup.pl to set up the Oracle Database software. In my opinion, it would be better to call it dbSetup.sh rather than rename it to runInstaller, especially given that the same thing for Grid Infrastructure has been called GridSetup.sh since 12cR2. The Perl script finally runs the Java GUI. It can also be run in command line, aka silent mode, which is the goal of this post. The command line arguments are similar to, but not the same as, those of the old runInstaller.
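
A quick way to see that for yourself on an 18c home (just an illustration, the output will vary):

ls -l $ORACLE_HOME/runInstaller $ORACLE_HOME/oui/bin/runInstaller
file $ORACLE_HOME/runInstaller              # the new one is a plain shell script...
grep -i dbsetup $ORACLE_HOME/runInstaller   # ...which ends up calling the Perl dbSetup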

Prerequisites

You may want to run only the prerequisite checks, to verify that your system is ready for the installation. Here is how to do that on the command line:

$ $ORACLE_HOME/runInstaller -silent -executePrereqs -responseFile $ORACLE_HOME/inventory/response/db_install.rsp
 
Launching Oracle Database Setup Wizard...
 
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

From there we can check the log for the tests that failed, such as in the following example:

INFO: [Aug 11, 2018 6:08:21 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:IGNORABLE
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] Run Level: This is a prerequisite condition to test whether the system is running with proper run level.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39-400.211.1".
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED

Software Install

You can pass all parameters on the command line (‘runInstaller -silent -help’ shows all possibilities), but in all cases you need a response file, so I put everything I need in the response file. There’s no mention of the ORACLE_HOME because you already unzipped it in the right place. The most important parameter is the edition, which seems to accept [EE, SEONE, SE2, HP, XP, PE]. I didn’t try it, but Standard Edition One is for versions <= 12.1.0.1 by the way.

cd $ORACLE_HOME
 
cat > db18EE.rsp <<END
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u00/app/oraInventory
ORACLE_BASE=/u00/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=dba
END

There is no need for the oracle.install.db.config variables because I’ll install the software only without creating a database.

Here is how to run the dbSetup. You can use ‘-ignorePrereqFailure’ to ignore the prerequisites if you want to install on a host where some prerequisites fail:

./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp

 
The log of the installation goes into oraInventory/logs and, as usual, you have to run root.sh:

As a root user, execute the following script(s):
1. /u00/app/oracle/product/18SE/root.sh
 
Execute /u00/app/oracle/product/18SE/root.sh on the following nodes:
[vmreforatun01]

This new runInstaller can also apply one-off patches with -applyOneOffs, mentioning the patch locations. You can also build a customized Oracle Home image with -createGoldImage -destinationLocation, and even mention some files or paths to exclude with -exclFiles to make it smaller.
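
For example, something like this (a sketch only: the patch numbers, image destination and excluded paths are placeholders):

# software install with one-off patches applied in the same run
./runInstaller -silent -responseFile ./db18EE.rsp \
  -applyOneOffs /u00/app/oracle/patches/11111111,/u00/app/oracle/patches/22222222
 
# build a customized (and smaller) Oracle Home image
./runInstaller -silent -createGoldImage -destinationLocation /u00/app/oracle/images \
  -exclFiles $ORACLE_HOME/.patch_storage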

 

The post 18c runInstaller -silent appeared first on Blog dbi services.

Documentum – Silent Install – Docbases/Repositories

Sat, 2018-08-11 23:31

In previous blogs, we installed, in silent mode, the Documentum binaries as well as a docbroker (+licence(s) if needed). In this one, we will see how to install docbases/repositories, and by that I mean either a Global Registry (GR) repository or a normal repository.

As you all know, you will need one repository to act as the GR, and I would always recommend setting up a GR that isn’t used by the end-users (no real documents). That’s why I will split this blog into two parts: the installation of a GR and then the installation of a normal repository that will be used by end-users. So, let’s get to it.

 

1. Documentum Global Registry repository installation

The properties file for a GR installation is as follows (it’s a big one):

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_GR.properties
### Silent installation response file for a Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.create.new
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.DOCBASE_ID=1010101
SERVER.DOCBASE_DESCRIPTION=Global Registry repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_GR_DOCBASE_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=gr_docbase
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=gr_docbase
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=gr_docbase

SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION=USE_THIS_REPOSITORY
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$

 

In the above commands, I didn’t put the SERVER.DOCUMENTUM_DATA and SERVER.DOCUMENTUM_SHARE into the file directly but I used sed commands to update the file later because I didn’t want to direct you to use a certain path for your installation like /app or /opt or /var or whatever… This choice is yours, so I just used sub-folders of $DOCUMENTUM and used this environment variable to set both parameters so you can choose which path you want for the Data and Share folder (the above is the default but you can set what you want).

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbase/repository, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. That’s the purpose of this properties file so it will be true
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. Same as the licence, it will be false
  • SERVER.DOCBASE_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.use.existing.aek.lockbox: Whether to use an existing aek or create a new one. Possible values are “common.create.new” or “common.use.existing”. In this case, it is the first docbase/repository created so we are creating a new one. In case of migration/upgrade, you might want to use an existing one (after upgrading it) …
  • common.aek.passphrase.password: The password to be used for the AEK
  • common.aek.key.name: The name of the AEK key to be used. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm to be used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you want to use a Lockbox to protect the AEK key. If set to true, a lockbox will be created and the AEK key will be stored in it
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox to be used. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password to be used for the Lockbox
  • SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS: Whether or not the “SERVER.DOCUMENTUM_DATA” and “SERVER.DOCUMENTUM_SHARE” are using a SAN or NAS path
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository to be created (dm_docbase_config.object_name)
  • SERVER.DOCBASE_ID: The ID of the docbase/repository to be created
  • SERVER.DOCBASE_DESCRIPTION: The description of the docbase/repository to be created (dm_docbase_config.title)
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.TEST_DOCBROKER: Whether or not you want to test the docbroker/connection broker connection during the installation. I would recommend always setting this to true to be sure the docbase/repository is installed properly… If a docbroker/connection broker isn’t available, the installation will not be complete (DARs installation for example) but you will not see any error unless you manually check the installation log…
  • SERVER.CONNECT_MODE: The connection mode of the docbase/repository to be used (dm_server_config.secure_connect_mode), it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured:
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository
      • SERVER.KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.TRUST_STORE_FILE_NAME: The name of the p7b file that contains the SSL Certificate needed to trust the targets (from a docbase point of view)
      • SERVER.CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets (from a DFC/client point of view)
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • SERVER.USE_EXISTING_DATABASE_ACCOUNT: Whether or not you want to use an existing DB account or create a new one. I don’t like it when an installer requests full access to a DB, so I usually prepare the DB user upfront with only the bare minimum set of permissions required and then use this account for the application (the Documentum docbase/repository in this case); see the sketch after this list for what that preparation can look like
  • SERVER.INDEXSPACE_NAME: The name of the tablespace to be used to store the indexes (to be set if using existing DB User)
  • SERVER.DATABASE_CONNECTION: The name of the Database to connect to. This needs to be available on the tnsnames.ora if using Oracle, aso…
  • SERVER.DATABASE_ADMIN_NAME: The name of the Database admin account to be used. There is no reason to put anything else than the same as the schema owner’s account here… If you configured the correct permissions, you don’t need a DB admin account at all
  • SERVER.SECURE.DATABASE_ADMIN_PASSWORD: The password of the above-mentioned account
  • SERVER.DOCBASE_OWNER_NAME: The name of the schema owner’s account to be used for runtime
  • SERVER.SECURE.DOCBASE_OWNER_PASSWORD: The password of the schema owner’s account
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used. To be set only when using Oracle…
  • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION: If this docbase/repository should be a Global Registry, then set this to “USE_THIS_REPOSITORY”, otherwise do not set the parameter. If the GR is on a remote host, you need to set this to “SPECIFY_DIFFERENT_REPOSITORY” and then use a few additional parameters to specify the name of the GR repo and the host it is currently running on
  • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account to be created. This is usually something like “dm_bof_registry”
  • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password to be used for the BOF Registry account
  • SERVER.ENABLE_XHIVE: Whether or not you want to enable the XML Store Feature. As I mentioned in the blog with the licences, this is one of the thing you might want to enable the licence during the docbase/repository configuration. If you want to enable the XHIVE, you will need to specify a few additional parameters like the XDB user/password, host and port, aso…
  • SERVER.CONFIGURATOR.DISTRIBUTED_ENV: Whether or not you want to enable/configure the DMS. If you set this to true, you will need to add a few more parameters like the DMS Action to be performed, the webserver port, host, password, aso…
  • SERVER.ENABLE_RKM: Whether or not you want to enable/configure the RKM. If you set this to true, you will need to add a few more parameters like the host/port on which the keys will be stored, the certificates and password, aso…
  • START_METHOD_SERVER: Whether or not you want the JMS to be re-started again once the docbase/repository has been created. Since we usually create at least 2 docbases/repositories, we can leave it stopped there
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always let it as false… I know that the name of this parameter is strange but that’s the name that is coming from the templates… But if you look a little bit on the internet, you might be able to find “SERVER.CONTINUE.MORE.COMPONENT” instead… So which one is working, which one isn’t is still a mystery for me. I use the first one but since I always set it to false, that doesn’t have any impact for me and I never saw any errors coming from the log files.
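
Regarding the database account preparation mentioned in the SERVER.USE_EXISTING_DATABASE_ACCOUNT entry above, here is a minimal sketch of what it could look like on Oracle. The data tablespace name, the sizes and the grant list are assumptions (take the exact required privileges from the Documentum installation guide for your version):

-- assuming OMF (db_create_file_dest) is set, otherwise specify the datafile names
create tablespace DM_GR_DOCBASE_DATA  datafile size 1G autoextend on;
create tablespace DM_GR_DOCBASE_INDEX datafile size 1G autoextend on;
 
create user gr_docbase identified by "gr_d0cb4seP4ssw0rdDB"
  default tablespace DM_GR_DOCBASE_DATA
  quota unlimited on DM_GR_DOCBASE_DATA
  quota unlimited on DM_GR_DOCBASE_INDEX;
 
grant create session, create table, create view, create procedure, create sequence to gr_docbase;
grant select_catalog_role to gr_docbase;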

 

Once the properties file is ready, you can install the Global Registry repository using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_GR.properties

 

Contrary to the previous installations, this one will take some time (around 20 minutes) because it needs to create the docbase/repository, then there are DARs that need to be installed, aso… Unfortunately, there is no feedback on the progress, so you just need to wait, and in case something goes wrong you won’t even notice since no errors are shown… Therefore, check the logs to be sure!
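
A hedged sketch of such a check; the temporary files are kept under /tmp thanks to KEEP_TEMP_FILE=true, and the other paths below are the usual default locations (they may differ in your setup and version):

# installer working files kept under /tmp (exact names vary per version)
grep -ril "error" /tmp/dctm* /tmp/install* 2>/dev/null
# configuration program log and repository log, assuming default locations
grep -i "error" $DM_HOME/install/logs/install.log
tail -20 $DOCUMENTUM/dba/log/gr_docbase.log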

 

2. Other repository installation

Once you have a Global Registry repository installed, you can install the repository that will be used by the end-users (which therefore isn’t a GR). The properties file for an additional repository is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_Other.properties
### Silent installation response file for a Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.use.existing
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.DOCBASE_ID=1010102
SERVER.DOCBASE_DESCRIPTION=Docbase1 repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_DOCBASE1_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=docbase1
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=docbase1
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=docbase1

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=true
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$

 

I won’t list all these parameters again but just the ones that changed, except the docbase/repository name/id/description and DB accounts/tablespaces since these are pretty obvious:

  • Updated parameter’s value:
    • common.use.existing.aek.lockbox: As mentioned above, since the AEK key is now created (as part of the GR installation), this now needs to be set to “common.use.existing” instead
  • Removed parameters (all of these will be taken from the dfc.properties that has been updated as part of the GR installation):
    • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD

 

Once the properties file is ready, you can install the additional repository in the same way:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_Other.properties

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Primary” Content Server using the silent installation provided by Documentum. In a later blog, I will talk about specificities related to a “Remote” Content Server for a High Availability environment.

 

Cet article Documentum – Silent Install – Docbases/Repositories est apparu en premier sur Blog dbi services.

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT

Fri, 2018-08-10 09:47

When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don’t want to configure a virtual IP address (VIP) you can simply list all the addresses in the client connection string. But then you need to reduce the timeout. A short duration of 1 to 5 seconds will be ok most of the time, but in case of a network issue, you want to give a chance to retry with a longer timeout. This post is about the connection string parameters to define this. Of course, all of this is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and tests.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is that when 10.10.10.10 is down, 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that will always be up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.

Here is a tnsping with the above tnsnames.ora entry and when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes because there is a 1 minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, I just add TRANSPORT_CONNECT_TIMEOUT to the connection string description to reduce the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds – 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. There, RETRY_COUNT=2 will give 3 attempts ( 1 + 2 retries ) to the address list:
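Here is the RETRY_COUNT entry I use – it just adds RETRY_COUNT=2 to the previous description. I reconstructed it from the tnsping output below, so treat it as a sketch, and appending to tnsnames.ora like this assumes $TNS_ADMIN points to the right directory:

$ cat >> $TNS_ADMIN/tnsnames.ora <<'EOF'
RETRY_COUNT=
 (DESCRIPTION=
 (CONNECT_DATA=(SERVICE_NAME=pdb1))
 (TRANSPORT_CONNECT_TIMEOUT=4)
 (RETRY_COUNT=2)
 (ADDRESS_LIST=
 (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
 (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
 )
 )
EOF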

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried the address list 2 more times, which in total takes 3x2x4=24 seconds

DESCRIPTION_LIST

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT are used only at the DESCRIPTION level. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for bad network times. And then one final attempt to each address with the default timeout to be sure that the servers are down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just time the total attempts, I’ll strace each connection:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With ‘-T’ strace shows the duration of the poll() system call between brackets after the return code. You can see here the 1-second timeout attempts to each address, then 2 attempts to each with a 5-second timeout, and finally the default 60-second timeout.

Note that I have added (LOAD_BALANCE=OFF) here because the default is ON in a DESCRIPTION_LIST but here I want to take them in the order I specified them.
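Note also that similar timeouts can be set globally on the client side in sqlnet.ora instead of per alias – a quick sketch, assuming the default $TNS_ADMIN location (SQLNET.OUTBOUND_CONNECT_TIMEOUT covers the whole connection establishment, TCP.CONNECT_TIMEOUT only the TCP connect):

$ cat >> $TNS_ADMIN/sqlnet.ora <<'EOF'
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 4
TCP.CONNECT_TIMEOUT = 4
EOF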

 

Cet article TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT est apparu en premier sur Blog dbi services.

ATP vs ADW – the Autonomous Database lockdown profiles

Thu, 2018-08-09 14:49

The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this distinction is, here is how they were defined in the Oracle7 Tuning Book:

CaptureOLTPvsDSS

The definition has not changed a lot. But the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan, Result Cache we can even see that indexes, materialized views, star transformation, hints,.. are disabled in the Autonomous Datawarehouse cloud service.

The difference between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, has been described by Maria Colgan after Larry Ellison’s announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They are using the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and same instance. They are different PDBs and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: ATP lockdown profile is called ‘OLTP’ and ADW lockdown profile is called ‘DWCS’.

Options

The only difference is about partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

All disabled features are the same:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS, V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I’ve seen no allowed way to enable SQL Trace.
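For completeness, the feature list above comes from the same V$LOCKDOWN_RULES view as the partitioning rule. This is roughly how I query it – the connect string is just a placeholder for your own service and credentials:

$ sqlplus -s admin/"${MY_PASSWORD}"@my_service <<'SQL'
select (select value from v$parameter where name='pdb_lockdown') type,
 rule, status, users
from v$lockdown_rules
where rule_type='FEATURE'
order by status, users, rule;
SQL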

DDL Statements

In the ADW service, the local user cannot create indexes and materialized views. They are supposed to be autonomously created when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET for the common users but allow only a few parameters to be changed by local users.

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.

Basically, all the features for OLTP have been there for years in the Oracle Database. Under the ‘autonomous’ umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

Cet article ATP vs ADW – the Autonomous Database lockdown profiles est apparu en premier sur Blog dbi services.

Using Oratcptest

Thu, 2018-08-09 10:32

Recently I did some tests with Oratcptest. This is a command-line tool which can be used to measure network bandwidth and network latency between a client and a server.
In this blog I am going to show how to use this tool in a DataGuard environment.
I am using a DataGuard environment with the following configuration.


Configuration - PROD

  Protection Mode: MaxAvailability
  Members:
  PROD_SITE1 - Primary database
    PROD_SITE2 - Physical standby database
    PROD_SITE3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

DGMGRL> show database 'PROD_SITE1' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE2' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL> show database 'PROD_SITE3' LogXptMode;
  LogXptMode = 'SYNC'
DGMGRL>

The installation of Oratcptest is very simple. We just have to download the oratcptest.jar file from the Oracle support site on the servers. Note that JRE 6 or higher is required. In my case I have JRE 8 on all servers:

[oracle@primaserver oratcptest]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
[oracle@primaserver oratcptest]$

[oracle@standserver1 oratcptest]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver1 oratcptest]$

[oracle@standserver2 ~]$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
[oracle@standserver2 ~]$

We can invoke the help command to see all options available for the oratcptest

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar -help

Now we can assess the network bandwidth for our DataGuard configuration. Note that I am using simple virtual machines, but the steps will be the same on real production servers.
We first have to determine the highest redo rate in the database. The following Oracle query can be used:

select thread#,sequence#,blocks*block_size/1024/1024 MB,(next_time-first_time)*86400 sec, blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s" from v$archived_log where (next_time-first_time)*86400<>0 and first_time between  to_date('2018/08/09 08:00:00','YYYY/MM/DD HH24:MI:SS') and to_date('2018/08/09 11:00:00','YYYY/MM/DD HH24:MI:SS') and dest_id=2 order by first_time;

   THREAD#  SEQUENCE#         MB        SEC       MB/s
---------- ---------- ---------- ---------- ----------
         1        124 .003417969          9 .000379774
         1        125 .271972656        184 .001478112
         1        126 .001953125          5 .000390625
         1        127 11.3662109        915 .012422088
         1        128 10.8466797       6353 .001707332

We can see that the highest value is .012422088 MB/s. The goal is to see if our network bandwidth can support this rate.
As we are using SYNC mode, the primary database will wait for a confirmation from the standby databases that they have written the change to disk before informing the application of the commit success.
For SYNC transport we then have to collect the average redo write size, which is calculated using the following formula:

Average=redo size / redo writes

These metrics can be obtained using an AWR report. In our case the value is

Average=15924844/4015=3967
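If you don’t have an AWR report at hand, the two underlying statistics are also visible in V$SYSSTAT (cumulative since instance startup, so take two snapshots and use the delta) – a quick sketch:

$ sqlplus -s / as sysdba <<'SQL'
select name, value from v$sysstat where name in ('redo size','redo writes');
SQL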

Now we are going to simulate SYNC writes over the network using Oratcptest. Note that I need the location of my standby redo logs:

SQL> select member from v$logfile where type='STANDBY';

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/stredo01.log
/u01/app/oracle/oradata/PROD/stredo02.log
/u01/app/oracle/oradata/PROD/stredo03.log
/u01/app/oracle/oradata/PROD/stredo04.log

From the standby server I can run the following command:

[oracle@standserver1 oratcptest]$ java -jar oratcptest.jar -server -port=5555 -file=/u01/app/oracle/oradata/PROD/myoratcp.tmp
OraTcpTest server started.

From the primary server

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555  -write  -mode=sync -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = SYNC
        Disk write             = YES
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:26:16) The server is ready.
                    Throughput             Latency
(14:26:18)      1.124 Mbytes/s            3.375 ms   (disk-write 2.537 ms)
(14:26:20)      0.813 Mbytes/s            4.668 ms   (disk-write 3.775 ms)
(14:26:22)      1.094 Mbytes/s            3.467 ms   (disk-write 2.773 ms)
(14:26:24)      1.004 Mbytes/s            3.778 ms   (disk-write 2.991 ms)
(14:26:26)      0.560 Mbytes/s            6.779 ms   (disk-write 5.623 ms)
(14:26:26) Test finished.
               Socket send buffer = 11700 bytes
                  Avg. throughput = 0.920 Mbytes/s
                     Avg. latency = 4.126 ms (disk-write 3.280 ms)

[oracle@primaserver oratcptest]$

We can see that the average throughput is 0.920 MB/s, which is sufficient to handle our highest peak rate of .012422088 MB/s. We can also note the latency, which includes the time to send the message to the server host, the optional disk write at the server host, and the acknowledgment back to the client process.
If we are using ASYNC mode the test looks like this:

[oracle@primaserver oratcptest]$ java -jar oratcptest.jar standserver1 -port=5555    -mode=async -length=3967 -duration=10s -interval=2s
[Requesting a test]
        Message payload        = 3967 bytes
        Payload content type   = RANDOM
        Delay between messages = NO
        Number of connections  = 1
        Socket send buffer     = (system default)
        Transport mode         = ASYNC
        Disk write             = NO
        Statistics interval    = 2 seconds
        Test duration          = 10 seconds
        Test frequency         = NO
        Network Timeout        = NO
        (1 Mbyte = 1024x1024 bytes)

(14:58:03) The server is ready.
                    Throughput
(14:58:05)     13.897 Mbytes/s
(14:58:09)      5.193 Mbytes/s
(14:58:11)     40.007 Mbytes/s
(14:58:13)     21.475 Mbytes/s
(14:58:13) Test finished.
               Socket send buffer = 210600 bytes
                  Avg. throughput = 16.901 Mbytes/s

[oracle@primaserver oratcptest]$

Conclusion
In this blog we have talked about Oratcptest, a simple tool which can help to assess our network bandwidth. Oratcptest can help before performing RMAN backups or file copies, or before setting up a DataGuard environment.
Reference : Oracle Doc ID 2064368.1

 

Cet article Using Oratcptest est apparu en premier sur Blog dbi services.

MERGE JOIN CARTESIAN: a join method or a join type?

Wed, 2018-08-08 17:00

I’ll present about join methods at POUG and DOAG. I’ll show how the different join methods work in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian on the same query. I’ll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I’ll show the execution plan, with SQL Monitoring in text mode. And I’ll put some gdb breakpoints on the ‘qer’ (query execution rowsource) functions to run the plan operations step by step. Then I’ll do the same on a different query in order to show in detail the 12c adaptive plans.

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Join Cartesian really a join method? I mean, my query is not a cartesian join. I have all join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate. As if doing physically what the old Oracle join syntax is doing logically: not putting any predicates in the from clause and adding the join predicates in the where clause to filter over it.

If I look at the 12.2 documentation, it is a Join method
CaptureJoinMethods122

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to get only the first matching row. Anti join to get all rows which do not match. Outer join to get the non-matching rows in addition to those which match. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not ‘how’ but ‘what’.

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, where the SCOTT schema has employees in 10, 20 and 30? And only 3 employees here, repeated for each of the 4 department names? That’s a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 Wrong result with USE_MERGE_CARTESIAN hint finally fixed in 12c (12.2 and backported in 12.1 PSU)

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
CaptureJoinMethods183

Exactly the same paragraph, but now in join types (the ‘what’) rather than in join methods (the ‘how’).

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that’s a join type described by the CROSS JOIN in the ANSI join syntax, or the lack of related predicates in the old syntax. This is ‘what’. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product on small rowsources. This can be part of star transformation where fact rows are joined back to the cartesian product of filtered dimensions in order to project the dimension attributes. This is ‘how’ it will be executed. We also see it when the optimizer underestimates the cardinalities and is followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
CaptureJoinMethods183-when

  • No join condition exists: that’s when cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order
  • A Cartesian join is an efficient method: in that case it is used as a join method, even if it is documented under join types.

In conclusion, a cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea and can return wrong results.

So, for this one I’ll explicitly run a CROSS JOIN:
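Something like this, for example – a sketch: the SCOTT credentials and the leading hint are just there to reproduce the plan of the demo:

$ sqlplus -s scott/tiger <<'SQL'
select /*+ leading(DEPT) */ * from DEPT cross join EMP;
SQL
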
CaptureUSEMERGECARTESIAN

The query is on top. The SQL monitor is in the middle, showing that we are currently active reading rows from EMP. The bottom shows the backtrace of the ‘qer’ functions: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo), the BUFFER SORT (qerso), to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only on the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS (‘process underlying row source’ – see Frits Hoogland’s annotations) is run only once. This is the big difference with Nested Loop, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loop, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.
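If you want to reproduce the stepping yourself, this is roughly how I attach gdb to the dedicated server process of my session and break on the row source functions mentioned above – a sketch: the SPID has to be looked up first (for example from V$PROCESS joined to V$SESSION) and the exact qer* symbols may vary with the version:

$ SPID=12345    # spid of your session's dedicated server process (example value)
$ gdb -p "$SPID" \
    -ex 'rbreak ^qertbFetch$' \
    -ex 'rbreak ^qersoFetchSimple$' \
    -ex 'rbreak ^qersoProcessULS$' \
    -ex 'continue'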

One last comment: we don’t see any function which really sorts the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting, despite the name of the BUFFER SORT operation. More info on Jonathan Lewis’ blog.

 

Cet article MERGE JOIN CARTESIAN: a join method or a join type? est apparu en premier sur Blog dbi services.

Backing up and restoring EDB containers in MiniShift/OpenShift

Wed, 2018-08-08 09:38

The last blogs in the series are already some days old: Setting up MiniShift, Deploying EDB containers in MiniShift/OpenShift, Customizing PostgreSQL parameters in EDB containers in MiniShift/OpenShift, Scaling the EDB containers in MiniShift/OpenShift, EDB Failover Manager in EDB containers in Minishift/OpenShift and EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers. What is missing is how you can backup and restore instances running in this container deployment and that is the topic of this post.

What you usually use to back up and restore EDB Postgres is BART, and the container world is no exception to that. Let’s see how that works.

My current deployment looks like this:
Selection_013

Two pgpool containers are serving three database containers which you can also check on the command line:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-b8lvj   1/1       Running   0          3m        172.17.0.9   localhost   masterdb
edb-as10-0-1-gj76h   1/1       Running   0          1m        172.17.0.5   localhost   standbydb
edb-as10-0-1-sb5lt   1/1       Running   0          2m        172.17.0.4   localhost   standbydb
edb-pgpool-1-qzk5v   1/1       Running   0          3m        172.17.0.7   localhost   queryrouter
edb-pgpool-1-rvtl6   1/1       Running   0          3m        172.17.0.6   localhost   queryrouter

What we want to do is to back up the database instances, or at least one of them. What you need to prepare before deploying the BART container is shared storage between the database containers and the BART container. This is especially important for the restore case, as the restore procedure needs to access the backup which is hosted in the BART container. Notice that this storage configuration has the “Read-Write-Many” attribute:
Selection_030

When I initially deployed the database containers I provided exactly this storage claim and volume as parameters, so I have that available in the database containers:
Selection_031

This means in any of the database containers I will be able to see the backup volume:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/
total 12
drwxrwx---  3 root    root 4096 Aug  6 11:49 .
drwxr-xr-x 86 root    root 4096 Aug  8 14:03 ..
drwxrwxr-x  4 edbuser root 4096 Aug  6 11:49 edb-bart-1-89k7s
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/
total 16
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 .
drwxrwx--- 3 root    root 4096 Aug  6 11:49 ..
drwxrwxr-x 2 edbuser root 4096 Aug  6 11:49 bart_log
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 pgbackup
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/
total 12
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 .
drwxrwxr-x 4 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 edb
sh-4.2$ ls -la /edbbackup/edb-bart-1-89k7s/pgbackup/edb/
total 16
drwxr-xr-x 4 edbuser root 4096 Aug  6 11:52 .
drwxrwxr-x 3 edbuser root 4096 Aug  6 11:49 ..
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:52 1533556356576
drwxr-xr-x 2 edbuser root 4096 Aug  6 11:49 archived_wals

The same storage configuration then needs to be provided to the BART deployment. Here is the yaml file for the BART deployment:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.5
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10.3
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
              
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Once that is imported we can deploy the BART container:
Selection_032
Notice that this is actually the same storage configuration as it was used to setup the database containers.
Selection_033

What I didn’t tell you is that you need to do another step before that. As the BART container is supposed to back up all the instances in a project, we need to pass the BART configuration file to the container via a configMap. In this setup I only have one instance, so the configMap looks like this:
Selection_034
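If you prefer the command line over the console, creating such a configMap can also be done with oc – a sketch: the map name, key and file location are assumptions and have to match what your BART deployment expects:

dwe@dwe:~$ oc create configmap bart-config --from-file=bart.cfg=/tmp/bart.cfg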

Here you would add all the instances you need to back up in the project. Once the BART container is ready:

dwe@dwe:~$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
edb-as10-0-1-b8lvj   1/1       Running   0          17m
edb-as10-0-1-gj76h   1/1       Running   0          14m
edb-as10-0-1-sb5lt   1/1       Running   0          16m
edb-bart-1-7cgfv     1/1       Running   0          19s
edb-pgpool-1-qzk5v   1/1       Running   0          17m
edb-pgpool-1-rvtl6   1/1       Running   0          17m

… you can connect to it and perform a manual backup:

dwe@dwe:~$ oc rsh edb-bart-1-7cgfv
sh-4.2$ bart backup -s edb
INFO:  creating backup for server 'edb'
INFO:  backup identifier: '1533738106320'
65043/65043 kB (100%), 1/1 tablespace

INFO:  backup completed successfully
INFO:  backup checksum: 16fba63925ac3e77d474a36496c2a902 of base.tar
INFO:  
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1533738106320
BACKUP NAME: none
BACKUP PARENT: none
BACKUP LOCATION: /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320
BACKUP SIZE: 63.52 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: UTC
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
 ChkSum                             File      
 16fba63925ac3e77d474a36496c2a902   base.tar  

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-08-08 14:21:46 UTC
STOP TIME: 2018-08-08 14:21:47 UTC
TOTAL DURATION: 1 sec(s)

This backup is now available on the BART container but in addition it is accessible in the database container:

dwe@dwe:~$ oc rsh edb-as10-0-1-b8lvj
sh-4.2$ ls -la /edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/
total 65060
drwxr-xr-x 2 edbuser root     4096 Aug  8 14:21 .
drwxr-xr-x 4 edbuser root     4096 Aug  8 14:21 ..
-rwxr-xr-x 1 edbuser root      664 Aug  8 14:21 backupinfo
-rwxr-xr-x 1 edbuser root 66605568 Aug  8 14:21 base.tar

In case you need to restore that, you would deploy a new database configuration specifying this backup as the “Restore file”:
Selection_035
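The same can be scripted with oc process if you saved the template shown above to a file – a sketch: the template file name and the volume/claim names are assumptions, while the parameter names are the ones defined in the template:

dwe@dwe:~$ oc process -f edb-as10-custom.yaml \
             -p DATABASE_NAME=edbrest \
             -p PERSISTENT_VOLUME=edb-data-pv -p PERSISTENT_VOLUME_CLAIM=edb-data-pvc \
             -p BACKUP_PERSISTENT_VOLUME=edb-backup-pv -p BACKUP_PERSISTENT_VOLUME_CLAIM=edb-backup-pvc \
             -p RESTORE_FILE=/edbbackup/edb-bart-1-7cgfv/pgbackup/edb/1533738106320/base.tar \
           | oc create -f -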

One downside with the current versions of the containers: you cannot do point-in-time recovery. Only restores from full backups are supported for now. This will change in the next release, though.

Have fun with the containers …

 

Cet article Backing up and restoring EDB containers in MiniShift/OpenShift est apparu en premier sur Blog dbi services.

New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL

Wed, 2018-08-08 06:52

We installed Fusion Middleware Reports and Forms 12.2.1.2.0 on a Linux server.
Those are single node Reports and Forms WebLogic Servers.
After disabling the non-SSL listen port, we got the error below in the WebLogic Server log files while starting the Managed Server with only SSL ports enabled.

####<Aug 19, 2017 9:22:44 PM GMT+00:00> <Notice> <Security> <host01.example.com> <WLS_Reports> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320441764877> <BEA-090082> <Security initializing using security realm myrealm.>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Critical> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765180> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: Servers do not have a common channel to communicate over
java.lang.AssertionError: Servers do not have a common channel to communicate over
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.createConfiguration(ConfiguredServersMonitorImpl.java:124)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:55)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:28)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl$Factory.<clinit>(ConfiguredServersMonitorImpl.java:39)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.getInstance(ConfiguredServersMonitorImpl.java:44)
at weblogic.cluster.messaging.internal.server.UnicastFragmentSocket.<init>(UnicastFragmentSocket.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at weblogic.cluster.FragmentSocketWrapper.getUnicastFragmentSocket(FragmentSocketWrapper.java:76)
at weblogic.cluster.FragmentSocketWrapper.<init>(FragmentSocketWrapper.java:64)
at weblogic.cluster.FragmentSocketWrapper.getInstance(FragmentSocketWrapper.java:47)
at weblogic.cluster.MulticastManager.<init>(MulticastManager.java:158)
at weblogic.cluster.MulticastManager.initialize(MulticastManager.java:111)
at weblogic.cluster.ClusterService.startService(ClusterService.java:196)
at weblogic.server.ServiceActivator.start(ServiceActivator.java:96)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765413> <BEA-000365> <Server state changed to FAILED>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Error> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765414> <BEA-000383> <A critical service failed. The server will shut itself down>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765426> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Info> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765447> <BEA-000236> <Stopping execute threads.>

Even if no cluster is used, the default Reports and Forms domain configuration creates  WebLogic Clusters.

To solve the “Servers do not have a common channel to communicate over” error, perform the following steps for each cluster as administrator:

  1. Login into the Administration Console
  2. Expand the Environment tab and navigate to Clusters
  3. In production mode, you need to select “Lock and edit”
  4. Go to the ‘General’ tab and select the ‘Replication’ sub-tab
  5. Under Replication, find ‘Replication Channel’ and check the box beside “Secure Replication Enabled” (to set it to true)
  6. Activate the changes
  7. Restart the managed server(s).

Or in WLST:

Start wlst.sh (*) and connect to the WebLogic Domain:

connect('AdminUser','password','t3s://host01.example.com:7002')

startEdit()

cd('/Clusters/cluster_reports')
cmo.setClusterType('none')
cmo.setPersistSessionsOnShutdown(false)
cmo.setReplicationChannel('ReplicationChannel')
cmo.setSecureReplicationEnabled(true)

activate()

(*) wlst.sh is located in the Middleware software home $M_HOME/oracle_common/common/bin directory
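If you have several clusters or domains to fix, the same commands (with the connect() at the top and an exit() at the end) can be put into a .py file and run non-interactively – the file name below is just an example:

$M_HOME/oracle_common/common/bin/wlst.sh /tmp/enable_secure_replication.py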

 

Cet article New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL est apparu en premier sur Blog dbi services.

How much free space can be reclaimed from a segment?

Tue, 2018-08-07 14:06

You have the feeling that your table takes more blocks than it should? Here are the queries I use to quickly check the free space. The idea is to call DBMS_SPACE.SPACE_USAGE and infer the minimum free space from the percentages. For example, a block in FS3 (defined as having 50 to 75% free space) is supposed to have at least 50% of free space. Of course it can have more, but you don’t know.

Here is some PL/SQL to do so:

set serveroutput on
declare
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
for i in (select * from (select * from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc) where 10>=rownum)
loop
begin
dbms_space.space_usage(i.owner,i.segment_name,i.segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>i.partition_name);
dbms_output.put_line(to_char((unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024/1024,'999G999D999')||' GB free in '||i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'"');
exception
when others then dbms_output.put_line(i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'": '||sqlerrm);
end;
end loop;
end;
/

The output looks like:

.001 GB free in INDEX "DEMO"."ACCOUNT_PK" partition ""
.001 GB free in TABLE "APEX_040200"."WWV_FLOW_PAGE_PLUGS" partition ""
.009 GB free in TABLE "SCOTT"."DEMO" partition ""
.000 GB free in TABLE "APEX_040200"."WWV_FLOW_STEP_ITEMS" partition ""
.003 GB free in INDEX "SYS"."WRH$_SYSMETRIC_HISTORY_INDEX" partition ""
.000 GB free in TABLE "MDSYS"."SDO_CS_SRS" partition ""
.002 GB free in INDEX "SYS"."I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST" partition ""
.006 GB free in TABLE "SYS"."WRH$_SYSMETRIC_HISTORY" partition ""
.002 GB free in TABLE "SYS"."WRH$_SQL_PLAN" partition ""

If you are in 12c, an inline function in the query might come in handy:

with function freebytes(segment_owner varchar2, segment_name varchar2, segment_type varchar2,partition_name varchar2) return number as
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
dbms_space.space_usage(segment_owner,segment_name,segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>partition_name);
return unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75;
end;
select round(freebytes(owner,segment_name,segment_type,partition_name)/1024/1024/1024,3) free_GB,segment_type,owner,segment_name,partition_name
from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','INDEX','INDEX PARTITION','INDEX SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc fetch first 10 rows only
/

The result looks like:

FREE_GB SEGMENT_TYPE OWNER       SEGMENT_NAME                   PARTITION_NAME
------- ------------ ----------- ------------------------------ --------------
      0 TABLE        DEMO        ACCOUNTS
  0.001 INDEX        DEMO        ACCOUNT_PK
  0.001 TABLE        APEX_040200 WWV_FLOW_PAGE_PLUGS
  0.009 TABLE        SCOTT       DEMO
  0.003 INDEX        SYS         WRH$_SYSMETRIC_HISTORY_INDEX
      0 TABLE        APEX_040200 WWV_FLOW_STEP_ITEMS
  0.002 INDEX        SYS         I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST
      0 TABLE        MDSYS       SDO_CS_SRS
  0.006 TABLE        SYS         WRH$_SYSMETRIC_HISTORY
  0.002 TABLE        SYS         WRH$_SQL_PLAN

Future evolutions of these scripts will be published on GitHub:
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_plsql.sql
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_sql.sql

Note that having free space does not mean that you have to shrink or reorg. Try to understand what happened to your data before, and whether this space will be reused soon.
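
If, after that analysis, you conclude that the free space will really not be reused, a shrink is one way to give it back. Just as a reminder, here is a minimal sketch (DEMO.ACCOUNTS is only an example segment; row movement must be enabled first and the final phase of the shrink takes a short exclusive lock):

sqlplus -s / as sysdba <<'SQL'
-- row movement is a prerequisite for ALTER TABLE ... SHRINK SPACE
alter table DEMO.ACCOUNTS enable row movement;
-- compact the segment and lower the high water mark (CASCADE also shrinks the dependent indexes)
alter table DEMO.ACCOUNTS shrink space cascade;
SQL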

 

Cet article How much free space can be reclaimed from a segment? est apparu en premier sur Blog dbi services.

Catching interface state changes with Network Manager

Mon, 2018-08-06 10:08

Sometimes it is required that you can react to state changes of your network interfaces. Maybe you want to get notified by mail or you want to execute a script that does something in case an event is trapped. As usual I will be using CentOS 7 for this little demonstration but as most distributions use systemd and NetworkManager nowadays this should work more or less the same on Debian-based distributions.

As this will be a short post anyway: The way to do it is to use the dispatcher scripts coming with Network Manager. All you need to do to get notified of down or up events is to create a script in /etc/NetworkManager/dispatcher.d/:

root@:/etc/NetworkManager/dispatcher.d/ [] pwd
/etc/NetworkManager/dispatcher.d
root@:/etc/NetworkManager/dispatcher.d/ [] cat 30-network 
#!/bin/bash

INTERFACE=$1
ACTION=$2

echo $INTERFACE >> /var/log/tmp.log
echo $ACTION >> /var/log/tmp.log

The script will get two parameters passed in by default: The name of the interface and the action that just happened. What you do with these is totally up to you. The only point you need to take care of (which is also mentioned in the documentation): “Each script should be a regular executable file, owned by root. Furthermore, it must not be writable by group or other, and not setuid.”

root@:/etc/NetworkManager/dispatcher.d/ [] ls -la
total 16
drwxr-xr-x. 5 root root  131 Aug  6 17:02 .
drwxr-xr-x. 7 root root  134 Jul 20 10:18 ..
-rwxr-xr-x. 1 root root  175 Jan  2  2018 00-netreport
-rwxr-xr-x. 1 root root 1123 May 15 15:03 11-dhclient
-rwxr-xr-x. 1 root root  985 Sep 15  2017 20-chrony
-rwxr-xr-x. 1 root root  108 Aug  6 17:02 30-network
drwxr-xr-x. 2 root root    6 Jun 27 16:39 no-wait.d
drwxr-xr-x. 2 root root    6 Jun 27 16:39 pre-down.d
drwxr-xr-x. 2 root root    6 Jun 27 16:39 pre-up.d

As soon as you have your script in place it will get kicked off when the interface state changes:

root@:/etc/NetworkManager/dispatcher.d/ [] systemctl restart network
root@:/etc/NetworkManager/dispatcher.d/ [] cat /var/log/tmp.log

connectivity-change
enp0s3
down
enp0s8
down
enp0s3
up

connectivity-change
enp0s8
up

Hope that helps …

 

Cet article Catching interface state changes with Network Manager est apparu en premier sur Blog dbi services.

When does PostgreSQL create the table and index files on disk?

Sun, 2018-08-05 07:26

A question that pops up from time to time is: When we create a table or an index in PostgreSQL are the files on disk created immediately or is this something that happens when the first row is inserted? The question mostly comes from Oracle DBAs because in Oracle you can have deferred segment creation. In PostgreSQL there is no parameter for that so let's do a quick test.

We start with a simple table:

postgres=# create table t1 ( a int );
CREATE TABLE

To get the real file name we can either use the pg_relation_filepath function:

postgres=# select pg_relation_filepath('t1');
 pg_relation_filepath 
----------------------
 base/33845/33933
(1 row)

… or we can use the oid2name utility:

postgres@pgbox:/home/postgres/ [PG10] oid2name -d postgres -t t1
From database "postgres":
  Filenode  Table Name
----------------------
     33933          t1

Now we can easily check if that file is already existing:

postgres@pgbox:/home/postgres/ [PG10] ls -la $PGDATA/base/33845/33933
-rw-------. 1 postgres postgres 0 Jul 24 07:47 /u02/pgdata/10/PG103/base/33845/33933

It is already there but empty. The files for the visibility map and the free space map are not yet created:

postgres@pgbox:/home/postgres/ [PG10] ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres 0 Jul 24 07:47 /u02/pgdata/10/PG103/base/33845/33933

What happens when we create an index on that empty table?

postgres=# create index i1 on t1 (a);
CREATE INDEX
postgres=# select pg_relation_filepath('i1');
 pg_relation_filepath 
----------------------
 base/33845/33937
(1 row)
postgres=# \! ls -la $PGDATA/base/33845/33937
-rw-------. 1 postgres postgres 8192 Jul 24 08:06 /u02/pgdata/10/PG103/base/33845/33937

The file is created immediately as well but it is not empty. It is exactly one page (my blocksize is 8k). Using the pageinspect extension we can confirm that this page is just for metadata information:

postgres=# create extension pageinspect;
CREATE EXTENSION
postgres=# SELECT * FROM bt_metap('i1');
 magic  | version | root | level | fastroot | fastlevel 
--------+---------+------+-------+----------+-----------
 340322 |       2 |    0 |     0 |        0 |         0
(1 row)
postgres=# SELECT * FROM bt_page_stats('i1', 0);
ERROR:  block 0 is a meta page
postgres=# 

The remaining question is: When will the free space map and the visibility map be created? After or with the first insert?

postgres=# insert into t1 (a) values (1);
INSERT 0 1
postgres=# \! ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres 8192 Jul 24 08:19 /u02/pgdata/10/PG103/base/33845/33933

Definitely not. The answer is: vacuum:

postgres=# vacuum t1;
VACUUM
postgres=# \! ls -la $PGDATA/base/33845/33933*
-rw-------. 1 postgres postgres  8192 Jul 24 08:19 /u02/pgdata/10/PG103/base/33845/33933
-rw-------. 1 postgres postgres 24576 Jul 24 08:22 /u02/pgdata/10/PG103/base/33845/33933_fsm
-rw-------. 1 postgres postgres  8192 Jul 24 08:22 /u02/pgdata/10/PG103/base/33845/33933_vm
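
If you prefer to check this from within the database instead of looking at the file system, pg_relation_size() accepts the fork name as a second argument, so something like this should show the size of the three forks:

psql -d postgres -c "select pg_relation_size('t1','main') as main,
                            pg_relation_size('t1','fsm')  as fsm,
                            pg_relation_size('t1','vm')   as vm"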

Hope that helps …

 

Cet article When does PostgreSQL create the table and index files on disk? est apparu en premier sur Blog dbi services.

Documentum – Silent Install – Docbroker & Licences

Sat, 2018-08-04 14:20

In a previous blog, I quickly went through the different things to know about the silent installations as well as how to install the CS binaries. Once the CS binaries are installed, you can install/configure a few more components. On this second blog, I will continue with:

  • Documentum docbroker/connection broker installation
  • Configuration of a Documentum licence

 

1. Documentum docbroker/connection broker installation

As mentioned in the previous blog, the examples provided by Documentum contain almost all possible parameters but for this section, only a few of them are required. The properties file for a docbroker/connection broker installation is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbroker.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbroker.properties
### Silent installation response file for a Docbroker
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=true

### Docbroker parameters
SERVER.DOCBROKER_ACTION=CREATE
SERVER.DOCBROKER_PORT=1489
SERVER.DOCBROKER_NAME=Docbroker
SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.DOCBROKER_CONNECT_MODE=dual

### Common parameters
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbroker/connection broker, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. Same here, it will be false
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. That’s the purpose of this properties file so it will be true
  • SERVER.DOCBROKER_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • SERVER.DOCBROKER_PORT: The port the docbroker/connection broker will listen to (always the native port)
  • SERVER.DOCBROKER_NAME: The name of the docbroker/connection broker to create/upgrade/delete
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to use for the dfc.properties projection for this docbroker/connection broker
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to use for the dfc.properties projection related to this docbroker/connection broker. It should obviously be the same as “SERVER.DOCBROKER_PORT”, don’t ask me why there are two different parameters for that…
  • SERVER.DOCBROKER_CONNECT_MODE: The connection mode to use for the docbroker/connection broker, it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured (and you need to have the keystore and truststore already available):
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbroker/connection broker
      • SERVER.DOCBROKER_KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.DOCBROKER_CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • START_METHOD_SERVER: Whether or not you want the JMS to be re-started again once the docbroker/connection broker has been created. Since we usually create the docbroker/connection broker just before creating the docbases/repositories and since the docbases/repositories will anyway stop the JMS, we can leave it stopped there
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always let it as false… I know that the name of this parameter is strange but that’s the name that is coming from the templates… But if you look a little bit on the internet, you might be able to find “SERVER.CONTINUE.MORE.COMPONENT” as well… So which one is “correct”, which one isn’t is still a mystery for me. I’m using the first one but since I always set it to false, that doesn’t have any impact for me and I never saw any errors coming from the log files or anything.

 

Once the properties file is ready, you can install the docbroker/connection broker using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbroker.properties

 

That’s it, after a few seconds, the prompt will be returned and the docbroker/connection broker will be installed with the provided parameters.
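
To quickly verify that the docbroker/connection broker is up and responding, you can ping it with dmqdocbroker, which is shipped with the Content Server binaries (adapt the host and port to your values):

dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c ping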

 

2. Configuration of a Documentum licence

Once you have a docbroker/connection broker installed, you can configure/enable a certain number of licences (actually you could have done it before). For this example, I will only enable the TCS but you can do it for all others too. The properties file for a licence configuration is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Licence.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Licence.properties
### Silent installation response file for a Licence
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=true
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=false

### Licensing parameters
SERVER.TCS_LICENSE=DummyLicenceForTCS

### Common parameters
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$

 

A short description of these properties – compared to the above ones:

  • SERVER.CONFIGURATOR.LICENSING & SERVER.CONFIGURATOR.BROKER: This time, we will obviously set the broker to false and the licensing to true so we do not re-install another docbroker/connection broker
  • Licences:
    • SERVER.TCS_LICENSE: Licence string to enable the Trusted Content Services on this CS
    • SERVER.XHIVE_LICENSE: Licence string to enable the XML Store Feature
    • SERVER.AS_LICENSE: Licence string to enable the Archive Service
    • SERVER.CSSL_LICENSE: Licence string to enable the Content Storage Service Licence
    • aso… Some of these licences require more parameters to be added (XHIVE: “XHIVE.PAGE.SIZE”, “SERVER.ENABLE_XHIVE”, “SERVER.XHIVE_HOST”, aso…)

 

It might make sense to enable some licences during the installation of a specific docbase/repository, so that would be up to you to think about. In the above example, I only enabled the TCS so it is available to all docbases/repositories that will be installed on this Content Server. Therefore, it makes sense to do it separately, before the installation of the docbases/repositories.

You now know how to install and configure a docbroker/connection broker as well as how to enable licences using the silent installation provided by Documentum.

 

Cet article Documentum – Silent Install – Docbroker & Licences est apparu en premier sur Blog dbi services.

Oracle 18c preinstall RPM on RedHat RHEL

Fri, 2018-08-03 17:03

The Linux prerequisites for Oracle Database are all documented but using the pre-install rpm makes all things easier. Before 18c, this was easy on Oracle Enterprise Linux (OEL) but not so easy on RedHat (RHEL) where the .rpm had many dependencies on OEL and UEK.
Now that 18c is there to download, there's also the 18c preinstall rpm and the good news is that it can also be run on RHEL without modification.

This came to my attention on Twitter:

On the other hand, you may not have noticed that it no longer requires Oracle Linux specific RPMs. It can now be used on RHEL and all its derivatives.

— Avi Miller (@AviAtOracle) July 29, 2018

And of course this is fully documented:
https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/about-the-oracle-preinstallation-rpm.html#GUID-C15A642B-534D-4E4A-BDE8-6DC7772AA9C8

In order to test it I’ve quickly created a CentOS instance on the Oracle Cloud.

I’ve downloaded the RPM from the OEL7 repository:

[root@instance-20180803-1152 opc]# curl -o oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 18244 100 18244 0 0 63849 0 --:--:-- --:--:-- --:--:-- 63790

then ran the installation:

[root@instance-20180803-1152 opc]# yum -y localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm

 
It installs automatically all dependencies:
Installed:
oracle-database-preinstall-18c.x86_64 0:1.0-1.el7
 
Dependency Installed:
compat-libcap1.x86_64 0:1.10-7.el7 compat-libstdc++-33.x86_64 0:3.2.3-72.el7 glibc-devel.x86_64 0:2.17-222.el7 glibc-headers.x86_64 0:2.17-222.el7
gssproxy.x86_64 0:0.7.0-17.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 keyutils.x86_64 0:1.5.8-3.el7 ksh.x86_64 0:20120801-137.el7
libICE.x86_64 0:1.0.9-9.el7 libSM.x86_64 0:1.2.2-2.el7 libXext.x86_64 0:1.3.3-3.el7 libXi.x86_64 0:1.7.9-1.el7
libXinerama.x86_64 0:1.1.3-2.1.el7 libXmu.x86_64 0:1.1.2-2.el7 libXrandr.x86_64 0:1.5.1-2.el7 libXrender.x86_64 0:0.9.10-1.el7
libXt.x86_64 0:1.1.5-3.el7 libXtst.x86_64 0:1.2.3-1.el7 libXv.x86_64 0:1.0.11-1.el7 libXxf86dga.x86_64 0:1.1.4-2.1.el7
libXxf86misc.x86_64 0:1.0.3-7.1.el7 libXxf86vm.x86_64 0:1.1.4-1.el7 libaio-devel.x86_64 0:0.3.109-13.el7 libbasicobjects.x86_64 0:0.1.1-29.el7
libcollection.x86_64 0:0.7.0-29.el7 libdmx.x86_64 0:1.1.3-3.el7 libevent.x86_64 0:2.0.21-4.el7 libini_config.x86_64 0:1.3.1-29.el7
libnfsidmap.x86_64 0:0.25-19.el7 libpath_utils.x86_64 0:0.2.1-29.el7 libref_array.x86_64 0:0.1.5-29.el7 libstdc++-devel.x86_64 0:4.8.5-28.el7_5.1
libverto-libevent.x86_64 0:0.2.5-4.el7 nfs-utils.x86_64 1:1.3.0-0.54.el7 psmisc.x86_64 0:22.20-15.el7 xorg-x11-utils.x86_64 0:7.5-22.el7
xorg-x11-xauth.x86_64 1:1.0.9-1.el7

Note that the limits are stored in limits.d which has priority over limits.conf:

[root@instance-20180803-1152 opc]# cat /etc/security/limits.d/oracle-database-preinstall-18c.conf
 
# oracle-database-preinstall-18c setting for nofile soft limit is 1024
oracle soft nofile 1024
 
# oracle-database-preinstall-18c setting for nofile hard limit is 65536
oracle hard nofile 65536
 
# oracle-database-preinstall-18c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
 
# oracle-database-preinstall-18c setting for nproc hard limit is 16384
oracle hard nproc 16384
 
# oracle-database-preinstall-18c setting for stack soft limit is 10240KB
oracle soft stack 10240
 
# oracle-database-preinstall-18c setting for stack hard limit is 32768KB
oracle hard stack 32768
 
# oracle-database-preinstall-18c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle hard memlock 134217728
 
# oracle-database-preinstall-18c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle soft memlock 134217728

Note that memlock is set to 128GB here but can be higher on machines with huge RAM (up to 90% of RAM).

And for information, here is what is set in /etc/sysctl.conf:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Besides that, the preinstall rpm disables NUMA and transparent huge pages (as boot options in GRUB). It creates the oracle user (id 54321, belonging to the groups oinstall, dba, oper, backupdba, dgdba, kmdba, racdba).
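
To double-check that these settings are effective (especially after the reboot needed for the GRUB changes), a few quick commands are enough; this is just a sanity-check sketch:

# limits really applied to the oracle user
su - oracle -c 'ulimit -n -u -s'

# a few of the kernel parameters set by the rpm
sysctl fs.file-max fs.aio-max-nr kernel.shmall kernel.shmmax

# transparent huge pages should be disabled after the reboot
cat /sys/kernel/mm/transparent_hugepage/enabled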

 

Cet article Oracle 18c preinstall RPM on RedHat RHEL est apparu en premier sur Blog dbi services.

Documentum – Silent Install – Things to know, binaries & JMS installation

Fri, 2018-08-03 13:55

Documentum introduced the silent installations for its software quite some time ago already. The way to use them changed a little bit but it seems they finally found their way. This blog will be the first of a series presenting how to work with the silent installations on Documentum because it is true that they are not really well documented and most probably not much used at the moment.

We are using this where possible for our customers and it is true that it is really helpful to avoid human errors and install components more quickly. Be aware that this isn’t perfect! There are some parameters with typos, some parameters that are really not self-explanatory, so you will need some time to understand everything but, in the end, it is still helpful.

Using the silent installation is a first step but you will still need a lot of manual interventions to execute these as well as actually making your environment working. I mean it only replaces the GUI installers so everything you were doing around that is still needed (preparation of files/folders/environment, custom start/stop scripts, service setup, Java Method Server (JMS) configuration, Security Baselines, SSL setup, aso…). That’s why we also developed internally scripts or playbooks (Ansible for example) to perform everything around AND use the Documentum silent installations. In this blog and more generally in this series, I will only talk about the silent installations coming from Documentum.

Let’s start with the basics:

  1. Things you need to know
  2. Documentum Content Server installation (binaries & JMS)

 

1. Things you need to know
  • Each and every component installation needs its own properties file that is used by the installer to know what to install and how to do it, that’s all you need to do.
  • As I mentioned above, there are some typos in a few parameters coming from the properties files like “CONGINUE” instead of “CONTINUE”. These aren’t errors in my blogs, the parameters are really like that. All the properties files I’m showing here have been tested and validated in a lot of environments, including PROD ones in High Availability.
  • To know more about the silent installation, you can check the installation documentation. There isn’t much to read about it but still some potentially interesting information.
  • The Documentum documentation does NOT contain any description of the parameters you can/should use, that’s why I will try in each blogs to describe them as much as possible.
  • You can potentially do several things at once using a single silent properties file, the only restriction for that is that it needs to use the same installer. Therefore, you could install a docbroker/connection broker, a docbase/repository and configure/enable a licence using a single properties file but you wouldn’t be able to do the silent installation of the binaries as well because it needs another installer. That’s definitely not what I’m doing because I find it messy, I really prefer to separate things, so I know I’m using only the parameters that I need for a specific component and nothing else.
  • There are examples provided when you install Documentum. You can look at the folder “$DM_HOME/install/silent/templates” and you will see some properties file. In these files, you will usually find most of the parameters that you can use but from what I remember, there are a few missing. Be aware that some files are for Windows and some are for Linux, it’s not always the same because some parameters are specific to a certain OS:
    • linux_ files are for Linux obviously
    • win_ files are for Windows obviously
    • cfs_ files are for a CFS/Remote Content Server installation (to provide High Availability to your docbases/repositories)
  • If you look at the folder “$DM_HOME/install/silent/silenttool”, you will see that there is a utility to generate silent files based on your current installation. You need to provide a silent installation file for a Content Server and it will generate for you a CFS/Remote CS silent installation file with most of the parameters that you need. Do not 100% rely on this file, there might still be some parameters missing but present ones should be the correct ones. I will write a blog on the CFS/Remote CS as well, to provide an example.
  • You can generate silent properties files by running the Documentum installers with the following command: “<installer_name>.<sh/bin> -r <path>/<file_name>.properties”. This will write the parameters you selected/enabled/configured into the <file_name>.properties file so you can re-use it later.
  • To install an additional JMS, you can use the jmsConfig.sh script or jmsStandaloneSetup.bin for an IJMS (Independent JMS – Documentum 16.4 only). It won’t be in the blogs because I’m only showing the default one created with the binaries.
  • The following components/features can be installed using the silent mode (it is possible that I’m missing some, these are the ones I know):
    • CS binaries + JMS
    • JMS/IJMS
    • Docbroker/connection broker
    • Licences
    • Docbase/repository (CS + CFS/RCS + DMS + RKM)
    • D2
    • Thumbnail

 

2. Documentum Content Server installation (binaries & JMS)

Before starting, you need to have the Documentum environment variables defined ($DOCUMENTUM, $DM_HOME, $DOCUMENTUM_SHARED), that doesn’t change. Once that is done, you need to extract the installer package (below I used the package for a CS 7.3 on Linux with an Oracle DB):

[dmadmin@content_server_01 ~]$ cd /tmp/dctm_install/
[dmadmin@content_server_01 dctm_install]$ tar -xvf Content_Server_7.3_linux64_oracle.tar
[dmadmin@content_server_01 dctm_install]$
[dmadmin@content_server_01 dctm_install]$ chmod 750 serverSetup.bin
[dmadmin@content_server_01 dctm_install]$ rm Content_Server_7.3_linux64_oracle.tar

 

Then prepare the properties file:

[dmadmin@content_server_01 dctm_install]$ vi CS_Installation.properties
[dmadmin@content_server_01 dctm_install]$ cat CS_Installation.properties
### Silent installation response file for CS binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
APPSERVER.SERVER_HTTP_PORT=9080
APPSERVER.SECURE.PASSWORD=adm1nP4ssw0rdJMS

### Common parameters
COMMON.DO_NOT_RUN_DM_ROOT_TASK=true

[dmadmin@content_server_01 dctm_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • APPSERVER.SERVER_HTTP_PORT: The port to be used by the JMS that will be installed
  • APPSERVER.SECURE.PASSWORD: The password of the “admin” account of the JMS. Yes, you need to put all passwords in clear text in the silent installation properties files so add it just before starting the installation and remove them right after
  • COMMON.DO_NOT_RUN_DM_ROOT_TASK: Whether or not you want to run the dm_root_task in the silent installation. I usually set it to true, so it is NOT executed because the Installation Owner I’m using do not have root accesses for security reasons
  • On Windows, you would need to provide the Installation Owner’s password as well and the path you want to install Documentum on ($DOCUMENTUM). On linux, the first one isn’t needed and the second one needs to be in the environment before starting.
  • You could also potentially add more properties in this file: SERVER.LOCKBOX_FILE_NAMEx and SERVER.LOCKBOX_PASSPHRASE.PASSWORDx (where x is a number starting with 1 and incrementing in case you have several lockboxes). These parameters would be used for existing lockbox files that you would want to load. Honestly, these parameters are useless. You will anyway need to provide the lockbox information during the docbase/repository creation and you will need to specify if you want a new lockbox, an existing lockbox or no lockbox at all so specifying it here is kind of useless…

 

Once the properties file is ready, you can install the Documentum binaries and the JMS in silent using the following command:

[dmadmin@content_server_01 dctm_install]$ ./serverSetup.bin -f CS_Installation.properties
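
Once the installer returns, a few quick checks help to confirm that everything went fine; the log location below is the usual one but may differ slightly depending on your version, so take this only as a sketch:

# installer logs (usually written under $DM_HOME/install/logs)
ls -ltr $DM_HOME/install/logs/

# the JMS should be listening on the port defined in the properties file
netstat -an | grep 9080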

 

This concludes the first blog of this series about Documentum silent installations. Stay tuned for more soon.

 

Cet article Documentum – Silent Install – Things to know, binaries & JMS installation est apparu en premier sur Blog dbi services.

Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0

Fri, 2018-08-03 03:26

The following blog will provide the necessary steps to upgrade the Grid Infrastructure from 12.1 to 12.2, for a Standalone Server.
One of the new features of GI 12.2 is the use of AFD (Oracle ASM Filter Driver).

Assumptions :

  • You have installed Oracle GI 12.1 as the grid user
  • You have installed Oracle Database 12.1 as the oracle user
  • You have configured the groups asmadmin, asmoper, asmdba
  • You installed the oracle-rdbms-server-12cr2-preinstall rpm
  • You patched your Oracle GI to PSU July 2017 (combo patch 25901062 to patch the Oracle 12.1 stack, GI & RDBMS)

Preparation :

 [root]mkdir /u01/app/grid/product/12.2.0/grid/
 [root]chown -R grid:oinstall /u01/app/grid/product/12.2.0/grid/
 --stop all DBs that are using ASM
 [oracle]srvctl stop database -d ORCL

Installation : Tasks

[grid]cd /u01/app/grid/product/12.2.0/grid/
[grid]unzip /stage/linuxx64_12201_grid_home.zip
[grid]./gridSetup.sh
	Choose the "Upgrade Oracle Grid Infrastructure" option.
	Confirm that all Oracle DBs using ASM are stopped.
	Check :
        Oracle base : /u01/app/grid/
        Software Location : /u01/app/grid/product/12.2.0/grid/

	Uncheck "Automatically run configuration scripts". Letting the installer run them automatically is not recommended by Oracle;
if you do it that way, it is quite possible that your upgrade process dies without any output.
	So, at the right moment, you will be asked to run rootUpgrade.sh manually.
	Click Next and validate that all the prerequisites are confirmed.
	Monitor the progress and run the script rootUpgrade.sh when prompted.
	Once the upgrade has completed successfully:
[grid@dbisrv04 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid

[grid@dbisrv04 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

Migrating ASM disks from ASMlib to AFD : Tasks

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

Step1:

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:
profile:

[grid@dbisrv04 ~]$ asmcmd dsset '/dev/xvda*','ORCL:*','AFD:*'

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:/dev/xvda*, ORCL:*, AFD:*
profile:/dev/xvda*,ORCL:*,AFD:*

Step2:

[root]export ORACLE_HOME=/u01/app/grid/product/12.2.0/grid/
[root]$GRID_HOME/bin/crsctl stop has -f

Step3:

root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure

ASMCMD-9524: AFD configuration failed 'ERROR: ASMLib deconfiguration failed'
Cause: acfsload is running. To configure AFD, oracleasm and acfsload must be stopped
Solution: stop acfsload and rerun asmcmd afd_configure

[root@dbisrv04 ~]# oracleasm exit
[root@dbisrv04 ~]# $ORACLE_HOME/bin/acfsload stop

root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbisrv04.localdomain'

Step5:

[root]$ORACLE_HOME/bin/crsctl stop has

Step6:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_refresh
[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step7:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/sd*'

Step8:

[root]$ORACLE_HOME/bin/crsctl stop has -f
[root]$GRID_HOME/bin/asmcmd afd_scan
[root]$GRID_HOME/bin/asmcmd afd_refresh

Step9:

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step10:

SQL> select name,label,path from v$asm_disk;

NAME       LABEL                PATH
---------- -------------------- --------------------
DISK04     DISK04               AFD:DISK04
DISK03     DISK03               AFD:DISK03
DISK02     DISK02               AFD:DISK02
DISK01     DISK01               AFD:DISK01
DISK07     DISK07               AFD:DISK07
DISK05     DISK05               AFD:DISK05
DISK06     DISK06               AFD:DISK06
DISK09     DISK09               AFD:DISK09
DISK08     DISK08               AFD:DISK08

Step11: Confirm your AFD is loaded

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE

--------------------------------------------------------------------------------

Step 11b: Introduce new disks with AFD

[root]. oraenv
[root]+ASM
[root@dbisrv04 ~]# asmcmd afd_label DSIK10 /dev/sdo1 --init
ASMCMD-9521: AFD is already configured
[root@dbisrv04 ~]# asmcmd afd_label DSIK10 /dev/sdo1
[root@dbisrv04 ~]# asmcmd afd_lslbl

Step 12: Erase Oracle ASMLib

[root] yum erase oracleasm-support.x86_64
[root] yum erase oracleasmlib.x86_64
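
As a last sanity check, confirm that ASMLib is really gone while the AFD driver is still loaded (simple checks, adapt to your environment):

# should return nothing anymore
rpm -qa | grep -i oracleasm

# the AFD kernel module and its devices should still be present
lsmod | grep oracleafd
ls -l /dev/oracleafd/disks/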
 

Cet article Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0 est apparu en premier sur Blog dbi services.

Hitachi Content Intelligence deployment

Thu, 2018-08-02 07:19

Hitachi Content Intelligence (HCI) is a search and data processing solution. It allows the extraction, classification, enrichment, and categorization of data, regardless of where the data lives or what format it’s in.

Content Intelligence provides tools at large scale across multiple repositories. These tools are useful for identifying, blending, normalizing, querying, and indexing data for search, discovery, and reporting purposes.

Architecture

HCI has components called data connections that it uses to access the places where your data is stored (these places are called data sources). A data connection contains all the authentication and access information that HCI needs to read the files in the data source.

HCI is extensible with published application programming interfaces (APIs) that support customized data connections, transformation stages, or building new applications.


HCI is composed of many services running on Docker.

[centos@hci ~]$ sudo docker ps -a
[sudo] password for centos:
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
0547ec8761cd        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           admin-app
1f22db4aec4b        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           sentinel-service
fa54650ec03a        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           haproxy-service
6b82daf15093        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           marathon-service
a12431829a56        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_master_service
812eda23e759        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_agent_service
f444ab8e66ee        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           zookeeper-service
c7422cdf3213        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           watchdog-service

Below is a representation of all the services of the HCI platform.


System Requirements

HCI has been qualified using these Linux distributions:

  • Fedora 24
  • Centos 7.2
  • Ubuntu 16.04 LTS
Docker requirements

HCI requires Docker to be installed on each server running HCI. Docker version > 1.3.0 must be installed on all instances.

Network requirements

Each HCI instance must have a static IP address and multiple ports must be opened for HCI tools such as Zookeeper, Mesos, Cassandra, Kafka, etc…

To see the list of ports, refer to the HCI official documentation. For our testing environment, we will stop the firewall service.

System configuration & Installation

HCI can run on physical or virtual servers, or be hosted on public or private clouds. It is instantiated as a set of containers and provided to users as a self-service facility with support for detailed queries and ad hoc natural language searches. HCI can run in single-instance or in cluster mode. For this blog, we will use a single instance.

Docker version:
[centos@hci ~]$ docker --version
Docker version 1.13.1, build 87f2fab/1.13.

If Docker is not installed, please follow the installation methods from the Docker official website.

Disable SELinux:
  • Back up the current SELinux configuration:
[centos@hci ~]$ sudo cp /etc/selinux/config /etc/selinux/config.bak
  • Disable SELinux:
[centos@hci ~]$ sudo sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
 Create user and group:
[centos@hci ~]$ sudo groupadd hci -g 10001

[centos@hci ~]$ sudo useradd hci -u 10001 -g 10001
Disable firewall service:
[centos@hci ~]$ sudo service firewalld stop

Redirecting to /bin/systemctl stop firewalld.service
 Check that the Docker service is running:
[centos@hci ~]$ sudo systemctl status docker

* docker.service - Docker Application Container Engine

Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)

Active: active (running) since Thu 2018-08-02 10:08:38 CEST; 1s ago
Configure the Docker service to start automatically at boot:
[centos@hci ~]$ sudo systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service
Change the vm.max_map_count value:

Add 'vm.max_map_count = 262144' to /etc/sysctl.conf:

[centos@hci ~]$ sudo vi /etc/sysctl.conf
[centos@hci ~]$ sudo sysctl -w vm.max_map_count=262144
Download HCI

First, create your Hitachi Vantara account: https://sso.hitachivantara.com/en_us/user-registration.html.

Then, from the Hitachi Vantara Community website https://community.hitachivantara.com/hci, click "Downloads". You will have access to a 90-day trial license with the full feature set.

HCI Installation

Create a directory called hci in the location of your choice. We recommend using the largest disk partition:

[centos@hci ~]$ mkdir hci

Move the installation package to your hci directory:

[centos@hci ~]$ mv HCI-1.3.0.93.tgz hci/

Extract the installation package:

[centos@hci hci]$ sudo tar -xzf HCI-1.3.0.93.tgz

Run the installation script in the version-specific directory:

[centos@hci hci]$ sudo 1.3.0.93/bin/install

Run the hci_setup script:

[centos@hci50 bin]$ sudo ./setup -i <ip-address-instance>

Run the hci_run script and ensure that the method you use keeps it running and automatically restarts it in case of a server reboot:

We recommend running the script as a service using systemd:

In the installation package, a service file is provided and you can edit this file according to your configuration:

  1. Edit the HCI.service file:
[centos@hci bin]$ vi HCI.service
  2. Ensure the ExecStart parameter is properly set, with the right path:
ExecStart=/home/centos/hci/bin/run

If not, change it to your hci installation path.

  3. Copy the HCI.service file to the appropriate location for your OS:
[centos@hci bin]$ sudo cp /hci/bin/HCI.service /etc/systemd/system
  4. Enable and start the HCI service:
[centos@hci bin]$ sudo systemctl enable HCI.service

Created symlink from /etc/systemd/system/multi-user.target.wants/HCI.service to /etc/systemd/system/HCI.service.

[centos@hci bin]$ sudo systemctl start HCI.service

Check if the service has properly started:

[centos@dhcp-10-32-0-50 bin]$ sudo systemctl status HCI.service

* HCI.service - HCI

   Loaded: loaded (/etc/systemd/system/HCI.service; enabled; vendor preset: disabled)

   Active: active (running) since Thu 2018-08-02 11:15:09 CEST; 45s ago

 Main PID: 5849 (run)

    Tasks: 6

   Memory: 5.3M

   CGroup: /system.slice/HCI.service

           |-5849 /bin/bash /home/centos/hci/bin/run

           |-8578 /bin/bash /home/centos/hci/bin/run

           `-8580 /usr/bin/docker-current start -a watchdog-service
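
If the service does not come up, the container output is the first place to look. Since this is standard systemd and Docker tooling, something like the following should do (watchdog-service is one of the container names shown by docker ps above):

# follow the HCI service output
sudo journalctl -u HCI.service -f

# or look at a specific container
sudo docker logs --tail 50 watchdog-service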

HCI Deployment

With your favorite web browser, connect to the HCI administrative App:

https://<HCI-instance-ip-address>:8000

On the Welcome page, set a password for the admin user:


Choose what you would like to deploy:


Click on the Hitachi Content Search button and then click on the Continue button.

Click on Deploy Single Instance button:


Wait until the HCI deployment finishes.

 

Cet article Hitachi Content Intelligence deployment est apparu en premier sur Blog dbi services.

A password() function for dmgawk

Tue, 2018-07-31 16:31

A few days ago, as I was preparing a dmawk script for a presentation, I stumbled upon another unexpected error.
The script was attempting to connect to a docbase by providing a docbase name, a user name and a password. But before that, it tested whether a password was provided as a command-line parameter (I know, this is not very secure but it was for demonstration purposes only); if not, it prompted for one using dmawk’s built-in password() function. The full command was:

echo "select count(*) from dm_user" | dmawk -v docbase=dmtest -v username=dmadmin -f select.awk

with select.awk narrowed down to:

cat select.awk
BEGIN {
   passwd = password("please, enter password: ")
   print "password was:", passwd
}

The problem was that when piping something into the script, it no longer prompted for a password. Without piping, it prompted as expected:

echo "select count(*) from dm_user" | dmawk73 -f ./getpasswd.dmawk
password was:
exiting ...

==> not prompted for password;

dmawk73 -f ./getpasswd.dmawk
please, enter password:
password was: Supercalifragilisticexpialidocious!
exiting ...

==> prompted for password;
Here, the alias dmawk73 points to the content server v7.3’s dmawk, my current version of the Documentum content server.
Note that the variant below did not work either:

cat query_file 
select
   count(*)
from
dm_user

dmawk73 -f ./getpasswd.dmawk < query_file
password was:
exiting ...

==> not prompted for password;
This proves that what screws up the dmawk’s password() function is the presence of characters in stdin, whether they come from a pipe or from a redirection.
Did they change (a politically correct way to say “break”) something in this version relative to a previous one ? To be sure, I tried the same tiny script with dmawk from an ancient 5.3 installation I keep around for those puzzling occasions, and guess what ? No special weirdness here, it worked as expected:

dmadmin@dmclient:~/getpasswd$ echo "select count(*) from dm_user" | dmawk53 -f ./getpasswd.dmawk
please, enter password:
password was: Supercalifragilisticexpialidocious
exiting ...

where the alias dmawk53 points to the content server v5.3’s dmawk.
A strace on dmawk53 shows that the device /dev/tty is read for input:

open("/dev/tty", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 4
...
write(4, "please, enter password: ", 24) = 24
read(4, "kdk\n", 4096) = 4
...
close(4) = 0
...
write(1, "password was: kdk\n", 18) = 18
write(1, "exiting ...\n", 12) = 12

For sure, the original behavior was changed somehow around reading from tty and the built-in password() function gets disrupted when something is first input into stdin.
So how to work around this new pesky issue ? Let’s see a few solutions. To be clear, I assume from the beginning that security is not a major concern here. Proposed solutions 4, 5 and 6 however are on the same security level as dmawk’s password() since they restore this function.

1. Give up piping into dmawk

This means that it will not be possible to concatenate the awk script to the previous command. If this is acceptable, why not ? dmawk’s input will have to come from a file, e.g.:

cat query_file 
select
   count(*)
from
dm_user

cat getpasswd_no_pipe.dmawk 
BEGIN {
   while ((getline < query_file) > 0)
      query = query "\n" $0
   close(query_file)
   print "query string is:", query
   pp = password("please, enter password: ")
   print "password was:", pp

   exit

}
END {
  print "exiting ..."
}

Execution:
dmadmin@dmclient:~/getpasswd$ dmawk73 -f getpasswd_no_pipe.dmawk -v query_file=query_file
query is: 
select
   count(*)
from
dm_user

please, enter password: 
password was: Et tu, Brute?
exiting ...

If security matters and command concatenation is not needed, the above may be an acceptable work-around.

2. Using an environment variable

If security is not significant, the password could be passed in a environment variable, e.g.:

cat getpasswd_env.dmawk
BEGIN {
   cmd = "echo $password"
   cmd | getline pp
   close(cmd)
   print "password was:", pp
}
END {
  print "exiting ..."
}

Execution:

export password=Supercalifragilisticexpialidocious!
echo "select count(*) from dm_user" | dmawk73 -f ./getpasswd_env.dmawk
password was: Supercalifragilisticexpialidocious!
exiting ...

Here, it is mandatory to use the export statement because dmawk launches a sub-process to read the parent’s environment variable.
Unlike dmawk, gawk can map the process’ environment into the built-in associative array ENVIRON, which makes accessing $password more elegant and also faster as no sub-process gets spawned:

cat ./getpasswd_env.awk
BEGIN {
   print "password was:", ENVIRON["password"]
   exit
}
END {
   print "exiting..."
}

Execution:

echo "select count(*) from dm_user" | gawk -f ./getpasswd_env.awk
password was: Supercalifragilisticexpialidocious!
exiting...

A little digression here while on the subject of environment variables: it’s a little known fact that the tools iapi and idql support 3 handy but rarely used environment variables: DM_DOCBASE_NAME, DM_USER_NAME and DM_PASSWORD; if those are set, either as a whole or individually, the above utilities can be launched with the corresponding option -DM_CONNECT_DOCBASE, -DM_CONNECT_USER_NAME and -DM_CONNECT_PASSWORD and the corresponding parameter can be omitted. E.g.:

export DM_DOCBASE_NAME=dmtest
export DM_USER_NAME=kermit
export DM_PASSWORD=conehead
idql -ENV_CONNECT_DOCBASE_NAME -ENV_CONNECT_USER_NAME -ENV_CONNECT_PASSWORD <<EoQ
   select count(*) from dm_user
   go
   quit
EoQ
Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
1> 2> count(*)              
----------------------
                    61
(1 row affected)
1> Bye

However, there is no prompt for missing parameters or unset variables and, quite surprisingly, the command fails silently in such cases.
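
To avoid that silent failure, a small wrapper can check that the variables are set before invoking idql; a minimal sketch (check_dm_env.sh is just a name I made up):

#!/bin/bash
# check_dm_env.sh: fail early instead of letting idql fail silently
for v in DM_DOCBASE_NAME DM_USER_NAME DM_PASSWORD; do
   if [[ -z "${!v}" ]]; then
      echo "$v is not set, aborting" >&2
      exit 1
   fi
done
idql -ENV_CONNECT_DOCBASE_NAME -ENV_CONNECT_USER_NAME -ENV_CONNECT_PASSWORD "$@"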
Nonetheless, the point here is that we could standardize on these variable names and use them with awk, e.g. (dm)awk would pull out those parameters from the environment as follows:

echo "select count(*) from dm_user" | dmawk73 'BEGIN {
   cmd = "echo $DM_DOCBASE_NAME $DM_USER_NAME $DM_PASSWORD"
   cmd | getline docbase_name dm_user_name passwd
   print docbase_name, dm_user_name, passwd ? passwd : "N/A"
   close(cmd)
}'
dmtest kermit conehead 

whereas gawk could chose to access those environment variables through the built-in ENVIRON associative array:

echo "select count(*) from dm_user" | gawk 'BEGIN { print ENVIRON["DM_DOCBASE_NAME"], ENVIRON["DM_USER_NAME"], ENVIRON["DM_PASSWORD"] ? ENVIRON["DM_PASSWORD"] : "N/A"}'
dmtest kermit conehead

which can be more readable in some cases since its indexes are explicitly named vs. positional.
See section 5 below to know what dmawk and gawk have in common regarding Documentum.

3. Reading the password from a file

Here too, let’s admit that security is not important so a cleartext password could be read from a text file as follows:

cat getpasswd_from_file.awk
# Usage:
#    dmawk -v password_file=... -f getpasswd_from_file.dmawk 
BEGIN {
   if (!password_file) {
      print "missing password_file parameter"
      exit
   }
   getline pp < password_file
   close(password_file)
   print "password was:", pp
}

Execution:

cat password_file
Supercalifragilisticexpialidocious!

echo "select count(*) from dm_user" | dmawk -f getpasswd_from_file.awk  -v password_file=password_file
password was: Supercalifragilisticexpialidocious!

No surprise here.

4. Access bash’s read -s command

The bash shell has the built-in command read which takes the -s option in order to prevent echoing the entered characters on the screen. Unfortunately, while bash is most of the time a login shell, it is not always the subshell invoked when spawning a command, which awk does when executing things like “cmd | getline”. Actually, it is /bin/sh that is invoked as a subshell under Linux; on the Ubuntu 16.04 and 18.04 I’m using here it is a symlink to /bin/dash, a much smaller shell than bash and supposedly faster, whereas under Centos /usr/bin/sh is symlinked to /usr/bin/bash. So, how to force bash as a subshell ?
I could not find any system setting to configure the choice of the subshell. Obviously, changing the /bin/sh symlink and making it point to /bin/bash works indeed but it is a system-wide change and it is not recommended because of possible compatibility issues.
The solution is to explicitly tell the subshell to make bash execute the read. But it is not enough, we also need to explicitly tell read to get its input from /dev/tty otherwise it gets messed up with any piped or redirected input. Here is a solution:

cat getpasswd_tty.dmawk
BEGIN {
   pp = getpassword("please, enter password: ")
   print "\npassword was:", pp
   exit
}
END {
  print "exiting ..."
}
function getpassword(prompt     , cmd, passwd) {
   cmd = "/bin/bash -c 'read -s -p \"" prompt "\" passwd < /dev/tty; echo $passwd'"
   cmd | getline passwd
   close(cmd)
   return passwd
}

Execution:
echo "select count(*) from dm_user" | dmawk -f  getpasswd_tty.dmawk 
please, enter password: password: 
password was: AreYo7Kidd8ngM3?
exiting ...

Line 11 invokes bash from whatever subshell is launched by dmawk, and asks it to execute the read built-in without echo, with the given prompt, and with its input coming directly from the device /dev/tty.
On line 10, note the function getpassword’s formal parameters cmd and passwd; since the function is called without any effective value for those, they are considered as local variables; this is a common idiom in awk where all variables are global and come to existence as soon as they are referenced.
Under Centos, where /usr/bin/bash is also invoked as a subshell, line 11 can be slightly simplified:

   cmd = "'read -s -p \"" prompt "\" passwd < /dev/tty; echo $passwd'""

This work-around is the easiest and closest to the original built-in password() function.

5. Implement password() in dmgawk

Those who have read my blog here know that we have now a much more powerful implementation of awk in our toolbox, GNU gawk, which we can extend to suit our needs. The above blog describes how to extend gawk with a connectivity to Documentum docbases; I jokingly named the resulting awk dmgawk. As glibc includes the getpass() function just for this purpose, why not use the same approach and add to dmgawk a sensible password() function around C’s getpass() that works as before ? Let’s put our money where our mouth is and implement this function in dmgawk. In truth, it should be noted that getpass() is marked as being obsolete so this alternative should be considered as a temporary work-around.
I won’t copy here all the steps from the above blog though; here are only the distinctive ones.
The interface’s source:

cat ~/dmgawk/gawk-4.2.1/extension/password.c
/*
 * password.c - Builtin function that provide an interface to the getpass() function;
 * see dmapp.h for description of functions;
 *
 * C. Cervini
 * dbi-services.com
 * 7/2018
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>    /* getpass() */

#include "gawkapi.h"

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

static const gawk_api_t *api;   /* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "password extension: version 1.0";
static awk_bool_t (*init_func)(void) = NULL;

int plugin_is_GPL_compatible;

/*  do_password */
static awk_value_t *
do_password(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t prompt;
   char *passwd;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, &prompt)) {
      passwd = getpass(prompt.str_value.str);
   }
   else passwd = getpass("");

   make_const_string(passwd == NULL ? "" : passwd, passwd == NULL ? 0 : strlen(passwd), result);
   return result;
}

/*
these are the exported functions along with their min and max arities;
let's make the prompt parameter optional, as in dmawk;
*/
static awk_ext_func_t func_table[] = {
        { "password", do_password, 1, 0, awk_false, NULL },
};

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, password, "")

Compilation steps:

cd ~/dmgawk/gawk-4.2.1/extension
vi Makefile.am
append the new library to the pkgextension_LTLIBRARIES list:
pkgextension_LTLIBRARIES =      \
        filefuncs.la    \
        fnmatch.la      \
        fork.la         \
        inplace.la      \
        intdiv.la       \
        ordchr.la       \
        readdir.la      \
        readfile.la     \
        revoutput.la    \
        revtwoway.la    \
        rwarray.la      \
        time.la         \
        dctm.la         \
        password.la

later:
dctm_la_SOURCES       = dctm.c
dctm_la_LDFLAGS       = $(MY_MODULE_FLAGS)
dctm_la_LIBADD        = $(MY_LIBS)

password_la_SOURCES  = password.c
password_la_LDFLAGS  = $(MY_MODULE_FLAGS)
password_la_LIBADD   = $(MY_LIBS)

run the make command:
make

go one level up and run the make command again:
make

At this point, the new gawk is ready for use. Let’s test it:

cat getpasswd.awk
@load "password"

BEGIN {
   passwd = password("please, enter password: ")
   print "password was:", passwd
}

END {
   print "exiting..."
}

Execution:

echo "select count(*) from dm_user" | AWKLIBPATH=~/dmgawk/gawk-4.2.1/extension/.libs ~/dmgawk/gawk-4.2.1/gawk -f ./getpasswd.awk 
please, enter password: 
password was: precipitevolissimevolmente
exiting...

If all is good, install the new extension system-wide as follows:

cd ~/dmgawk/gawk-4.2.1
sudo make install

make an alias to the new gawk:
alias dmgawk=/usr/local/bin/gawk
The usage is simplified now:
echo "select count(*) from dm_user" | dmgawk -f ./getpasswd.awk
please, enter password: 
password was: humptydumpty
exiting...

dmgawk looks more and more like a valuable substitute for dmawk. What gets broken in dmawk can be fixed by dmgawk.

6. And in python ?

Those who use python for their Documentum administration tasks, extended with the Documentum connectivity as proposed in my blog here, are even luckier because python has a library for just about everything but the kitchen sink, and an interface to C’s getpass(), appropriately named getpass(), already exists, see here. Therefore, there is no need to write one using e.g. ctypes. Here is how to call the python’s getpass():

cat getpasswd.py 
#!/usr/bin/python

import getpass

passwd = getpass.getpass(prompt = "Please, enter password: ")
print("The password is: " + passwd)

Execution:
echo "select count(*) from dm_user" | ./getpasswd.py
Please, enter password: 
The password is: Did the quick brown fox jump over the lazy dog ?

No muss, no fuss here.

Conclusion

It’s quite interesting to see how basic things that we take for granted get broken from one Documentum release to another. On the bright side though, those little frustrations give us the opportunity to look for work-arounds, and write blogs about them ;-). I am eager to find the next dysfunction and pretty confident that Documentum will not disappoint me in this respect.

 

Cet article A password() function for dmgawk est apparu en premier sur Blog dbi services.
